Artificial Intelligence Experts Respond to Elon Musk’s Dire Warnings

Posted by jesuslewis on August 3rd, 2017

Though warning about the dangers of artificial intelligence may seem a quixotic stance for the head of multiple tech companies to take, Elon Musk’s proximity to the bleeding edge of technological development seems to have given him the heebie-jeebies when it comes to AI. He has shared his fears of AI running amok before, likening it to “summoning the demon,” and he doubled down on that stance at a July 2017 meeting of the National Governors Association, telling state leaders that AI poses an existential threat to humanity.

It’s far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we’ll actually reach that point is anyone’s guess, and we’re not at all close at the moment, as recent footage of a security robot wandering blindly into a fountain makes clear.

While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence — the broad reasoning skills that allow us to accomplish many variable tasks. This is why AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describe a chair.

To get some perspective on Musk’s comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about. One of those researchers offered the following response.

Elon Musk’s remarks are alarmist. I recently surveyed 300 leading AI researchers and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention.

And I’m not too worried about what happens when we get to super-intelligence, as there’s a healthy research community working on ensuring that these machines won’t pose an existential threat to humanity. I expect they’ll have worked out precisely what safeguards are needed by then.

But Elon is right about one thing: We do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop “killer robots”, where stupid AI will be given the ability to make life or death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling.

The tech companies realize they have a problem, and they have made some efforts to avoid government regulation by beginning to self-regulate. But there are serious questions to be asked about whether they can be left to do this themselves. We are witnessing an AI race between the big tech giants, who are investing billions of dollars in a winner-takes-all contest. Many other industries have seen government step in to prevent monopolies from behaving poorly. I’ve said this in a talk recently, but I’ll repeat it again: if some of the giants like Google and Facebook aren’t broken up in twenty years’ time, I’ll be immensely worried for the future of our society.
