Tuesday, July 31, 2018

"Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security"

Via the Social Science Research Network:
59 Pages; Posted: 21 Jul 2018
Robert Chesney
University of Texas School of Law
Danielle Keats Citron
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society
Date Written: July 14, 2018
Abstract
Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors.

While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
SSRN download page

Previously:

AI: "Experts Bet on First Deepfakes Political Scandal"
"Talk down to Siri like she's a mere servant – your safety demands it"
"The US military is funding an effort to catch deepfakes and other AI trickery"
But, but...I saw it on the internet....