Tuesday, February 26, 2019

Summon the Demon: How AI Will Go Out Of Control According To 52 Experts

From CB Insights:
'Summoning the demon.' 'The new tools of our oppression.' 'Children playing with a bomb.' These are just a few ways the world's top researchers and industry leaders have described the threat that artificial intelligence poses to mankind. Will AI enhance our lives or completely upend them?

There’s no way around it — artificial intelligence is changing human civilization, from how we work to how we travel to how we enforce laws.

As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. A Tesla owner in California died while using the car’s Autopilot feature. In Arizona, a self-driving Uber vehicle hit and killed a pedestrian, even though a safety driver was behind the wheel.

Other instances have been more insidious. For example, when IBM’s Watson was tasked with helping physicians diagnose cancer patients, it gave numerous “unsafe and incorrect treatment recommendations.”

Some of the world’s top researchers and industry leaders believe these issues are just the tip of the iceberg. What if AI advances to the point where its creators can no longer control it? How might that redefine humanity’s place in the world?

Below, 52 experts weigh in on the threat that AI poses to the future of humanity, and what we can do to ensure that AI is an aid to the human race rather than a destructive force.

Unpredictable behavior

1. Stephen Hawking
AI technology could be impossible to control

The late Stephen Hawking, world-renowned astrophysicist and author of A Brief History of Time, believed that artificial intelligence could prove impossible to control in the long term and could quickly surpass humanity if given the opportunity:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
2. Elon Musk
Regulation will be essential

Few technologists have been as outspoken about the perils of AI as Tesla and SpaceX chief executive Elon Musk.
“I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”
Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and are entrusted with mission-critical responsibilities:
“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”
Musk has also argued that AI poses a far greater risk than a nuclear-armed North Korea:
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
He has also pointed out that AI doesn’t necessarily have to be malevolent to threaten humanity’s future. To Musk, the cold, immutable efficiency of machine logic is as dangerous as any evil science-fiction construct:
“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
3. Tim Urban
We cannot regulate technology that we cannot predict

Tim Urban, blogger and creator of Wait But Why, believes the real danger of artificial superintelligence (ASI) lies in the fact that it is inherently unknowable. According to Urban, there is simply no way for us to predict its behavior:
“And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.”
4. Oren Etzioni
Deep learning programs lack common sense

Considerable problems of bias and neutrality aside, one of the most significant challenges facing AI researchers is how to give neural networks the kind of common-sense reasoning and decision-making skills that humans learn as children.

According to Dr. Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, common sense is even less common in AI systems than it is in most human beings — a drawback that could create additional difficulties with future AI networks:
“A huge problem on the horizon is endowing AI programs with common sense. Even little kids have it, but no deep learning program does.”...
...MUCH MORE