In a Daily Mail article covering an interview by media star/scientist Neil deGrasse Tyson, Mr Musk said that humanity needs to be careful about what it programs superintelligent robots to do: if we ask for happy humans, the unhappy among us may be eliminated. A shorter-term outcome: once self-driving cars are perfected, we won’t be allowed to drive, because the cars will drive better than we do. (So get that Aston Martin, Morgan, Corvette or Maserati while you can.)
If you, like your Editor, are wondering why these articles are piling on: these gentlemen are part of an interestingly constituted organization called The Future of Life Institute, which wrestles with existential risks facing humanity. Right now its #1 With A Bullet is the “potential risks from the development of human-level artificial intelligence.” A merciful change from ‘climate change’, volcanoes and nukes. One senses a media blitz of Vitally Concerned Brainiacs. Mr Musk has backed FLI with $10 million, so he has put his money where his speech originates. Other worries include the millions of humans who could be put out of work by robotics (a Japanese bank is introducing robotic employees; see the Daily Mail sidebar).
However, the Woz does give us a bit of hope. Moore’s Law, the observation that computer processing power doubles roughly every two years, will top out by 2020, when transistors shrink to the size of an atom. Unless quantum (sub-atomic) computing is successful….
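For readers who want a feel for how fast that doubling compounds, here is a minimal sketch (the function name and parameters are our own illustration, not anything from the interview):

```python
def moores_law_factor(years, doubling_period=2):
    """Growth factor if capability doubles every `doubling_period` years.

    A hypothetical helper illustrating the 'doubles every two years'
    claim quoted above; real-world scaling is messier than this.
    """
    return 2 ** (years / doubling_period)

# Doubling every two years means 32x the power after a single decade.
print(moores_law_factor(10))  # → 32.0
```

Run the arithmetic out far enough and the exponential curve slams into physics, which is exactly Wozniak’s point about atomic-scale limits.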
Also: an FLI article on why well-intentioned AI could pose a greater threat to humanity than malevolent cyborgs (hint: it is too goal-oriented) and what the Machine Intelligence Research Institute (MIRI) wants to do about it; Entrepreneur/YahooTech