During a speech at the Massachusetts Institute of Technology on Friday, the founder of Tesla told an audience that the tech sector should be “very careful” about pioneering AI, The Post reported, calling it “our biggest existential threat.” On several occasions, Musk has called the technology a big risk that can’t be controlled.
At MIT, Musk carried the metaphor a bit further than he has in the past. “With artificial intelligence we are summoning the demon,” The Post quoted Musk as saying.
Musk’s comments highlighted a budding ethical debate in broader society about whether machines should be able to think for themselves. Autonomous technology is a hot topic in engineering circles and occupies a prominent place in popular culture.
For years, movies and television have breathed life into worst-case scenarios of technology eventually spinning out of control and coming to dominate the very population it was created to serve. Films like “The Terminator” franchise, “The Matrix” and the soon-to-be-released “Avengers: Age of Ultron” all depict machines developing sentience, often with apocalyptic results.
Proponents say AI is the next logical step for an increasingly tech-dependent society, but skeptics like Musk argue there could be unintended consequences.
Musk likened the quandary to a horror movie where protagonists call forth spirits that eventually wreak havoc.
“In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out,” The Post reported Musk as saying.
To a certain extent, machine-based intellect already underpins modern conveniences such as financial trading, video games and robotics, functions most people take for granted. That said, the rise of semi-autonomous technology has dislocated workers across key industries, even as it saves companies money and makes services more efficient.
In addition, some ethicists and technology practitioners are concerned about the potential for what Oxford University recently called “moral outsourcing.”
In a blog post last year, Oxford scholars cautioned that “when a machine is ‘wrong,’ it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could. Simple algorithms should be extremely predictable, but can make bizarre decisions in ‘unusual’ circumstances.”
After acquiring British technology firm DeepMind earlier this year, Google bowed to the growing controversy over AI by agreeing to establish an ethics board that would oversee its efforts to create conscious machines. The search giant has made steady advances to make its applications more convenient to users by making them increasingly autonomous.