Controversial AI theorist Eliezer Yudkowsky sits on the fringe of the industry’s most extreme circle of commentators, those who hold that human extinction is the inevitable result of developing advanced artificial intelligence.
“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” Yudkowsky said on this week’s episode of the Bloomberg Originals series AI IRL.
For the past two decades, Yudkowsky has consistently promoted his theory that hostile AI could spark a mass extinction event. While many in the AI industry shrugged or raised eyebrows at this assessment, he founded the Machine Intelligence Research Institute with funding from Peter Thiel, among others, and collaborated on written work with futurists such as Nick Bostrom.
To say that some of his visions for the end of the world are unpopular would be a gross understatement; they’re on par with the prophecy that the world would end in 2012. That prediction rested on a questionable interpretation of an ancient text and had little evidence to support it.
AI doomsday scenarios are gaining traction in Silicon Valley, which can deflect from actual harms like algorithmic bias and racism. https://t.co/Y6nJD8xkGl
— Bloomberg (@business) July 13, 2023