The Worst That Artificial Intelligence Could Do: War, Famine, Unemployment, and Campaign Finance
From Popular Science:
Fiction is full of evil robots, from the Cylons of “Battlestar Galactica” to the vengeful replicants of “Blade Runner” to the iconic, humanity-destroying Terminators. Yet these are all robots built with good intentions, whose horrific violence is an unintended consequence of their design rather than its explicit point.
What if, instead of being an accident of human folly, an artificial intelligence caused harm because a human explicitly designed it for malicious purposes? A new study, funded in part by Elon Musk, looks at the possibilities of deliberately evil machines.
Titled “Unethical Research: How to Create a Malevolent Artificial Intelligence,” by Roman V. Yampolskiy of the University of Louisville and futurist Federico Pistono, the short paper looks at just what harm someone could do with an actively evil program. Why? For much the same reason that DARPA asked people to invent new weapons in their backyards: better to find the threat now, through peaceful research, than to adapt to it later when it’s used in an aggressive attack.
What did Pistono and Yampolskiy find? Their list of groups that could make a vicious A.I. starts out familiar: the military (developing cyber-weapons and robot soldiers to achieve dominance); governments (attempting to use A.I. to establish hegemony, control people, or take down other governments); corporations (trying to achieve monopoly, destroying the competition through illegal means); and it continues with black hat hackers, villains, doomsday cults, and criminals, among others. And the A.I. could come from many places. According to the authors, code written without oversight and closed-source programming designed to be seen by as few eyes as possible are both ways to make a harmful artificial intelligence without warning the world first.
Okay, but what does the malicious A.I. actually do that causes problems? Undercut human labor, say Pistono and Yampolskiy:
By exploiting the natural tendency of companies to want to increase their productivity and profits, a [malicious AI] could lend its services to corporations, which would be more competitive than most, if not all, human workers, making it desirable for a company to employ the quasi-zero marginal cost A.I. in place of the previously employed expensive and inefficient human labor....
...