Your Friday superintelligent robot fix: the disturbing consequences of ultimate AI

Our own superintelligent humans–Elon Musk (Tesla), Steve Wozniak (Apple), Bill Gates (Microsoft) and Stephen Hawking–are converging on artificial intelligence: not just everyday, pedestrian robotics, but the kind of AI superintellect that could make pets out of people–if we are lucky. In his interview with the Australian Financial Review, the Woz (now an Australian resident) quipped: ‘Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?’

Mr Musk, in a Daily Mail article covering an interview by media star/scientist Neil deGrasse Tyson, said that humanity needs to be careful about what it programs superintelligent robots to do–that if we ask for happy humans, the unhappy among us may be eliminated. A shorter-term outcome: we won’t be allowed to drive cars once self-driving cars are perfected, because the cars will drive better than we do. (So get that Aston Martin, Morgan, Corvette or Maserati while you can.)

If you, like your Editor, are wondering why these articles are piling up, these gentlemen are part of an interestingly constituted organization called The Future of Life Institute (FLI), which wrestles with existential risks facing humanity. Right now their #1 With A Bullet is the “potential risks from the development of human-level artificial intelligence.” A merciful change from ‘climate change’, volcanoes and nukes. One senses a media blitz of Vitally Concerned Brainiacs. Mr Musk has backed FLI with $10 million, so he has put his money where his speech originates. Other problems include the millions of humans who may be put out of work by robotics (a Japanese bank is introducing robotic employees; see the Daily Mail sidebar).

However, the Woz does give us a bit of hope. Moore’s Law, which observes that the number of transistors on a chip (and with it computing power) doubles roughly every two years, is expected to top out around 2020, when transistor features shrink toward the size of an atom. Unless quantum (sub-atomic) computing is successful….
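For a rough sense of scale, here is a back-of-the-envelope sketch (our illustration, not a figure from the Woz’s interview): assuming transistor density doubles every two years and taking an illustrative 14 nm feature size as the mid-2010s starting point, it counts how many doublings remain before features reach the diameter of a single silicon atom. In practice, heat, leakage and cost are expected to bite well before that literal endpoint, which is why forecasts such as 2020 land much earlier.

```python
# Back-of-the-envelope sketch; all starting values are illustrative assumptions.
import math

start_year = 2015   # assumed starting point
feature_nm = 14.0   # assumed leading-edge feature size at that point (nm)
atom_nm = 0.2       # rough diameter of a silicon atom (nm)

# If transistor density doubles every two years, linear feature size shrinks
# by a factor of about sqrt(2) per two-year period.
periods = math.log(feature_nm / atom_nm) / math.log(math.sqrt(2))
print(f"~{periods:.1f} doublings left, i.e. around {start_year + 2 * periods:.0f}"
      " before features reach atomic scale (ignoring practical limits).")
```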

Also: an FLI article on why well-intentioned AI could pose a greater threat to humanity than malevolent cyborgs (hint: they’re too goal-oriented) and what the Machine Intelligence Research Institute (MIRI) wants to do about it; Entrepreneur/YahooTech
