Researchers wager on a possible Deepfake video scandal during the 2018 U.S. midterm elections
A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They’re betting on whether someone will create a so-called Deepfake video about a political candidate that receives more than 2 million views before being debunked, all before the end of 2018.
The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the “yes” camp and tropical tiki drinks for the “no” camp. But the implications of the technology behind the bet’s premise could potentially reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened in real life.
“We talk about these technologies and we see the fact you can simulate Obama’s voice or simulate a Trump video, and it seems so obvious that there would be a lot of financial interest in seeing the technology used,” says Tim Hwang, director of the Ethics and Governance of AI Initiative at the Harvard Berkman-Klein Center and the MIT Media Lab. “But one thing in my mind is, why haven’t we seen it yet?”
The Deepfake technology in question first gained notoriety in December 2017, when a person going by the pseudonym “DeepFakes” showed how deep learning—a popular AI technique based on neural network computer architecture—could digitally stitch celebrities’ faces onto those of performers in pornographic videos. Since then, social network services such as Twitter and Reddit have attempted to clamp down on a slew of amateur-created Deepfake videos, most of them pornographic.
Such technology relies on a “generative adversarial network” (GAN) approach. One network, the generator, learns to identify the patterns in images or videos needed to recreate, say, a particular celebrity’s face as its output. The second network acts as the discriminating viewer, trying to figure out whether a given image or video frame is authentic or a synthetic fake. That second network’s feedback then reinforces and strengthens the believability of the first network’s output.
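The adversarial setup described above can be sketched in miniature. The toy below is not a face-swapping model—real Deepfake systems use deep convolutional networks—but a minimal, assumed-for-illustration GAN in one dimension: a two-parameter generator tries to mimic samples from a Gaussian, while a logistic discriminator tries to tell real samples from generated ones, and each learns from the other’s feedback.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=3000, batch=64, lr=0.02, seed=0):
    """Toy 1-D GAN (illustrative only): generator g(z) = w*z + b tries to
    mimic samples drawn from N(4, 0.5); discriminator D(x) = sigmoid(a*x + c)
    tries to distinguish real samples from generated ones."""
    rng = np.random.default_rng(seed)
    w, b = 1.0, 0.0   # generator parameters (starts far from the real mean)
    a, c = 0.0, 0.0   # discriminator parameters
    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = w * z + b

        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        d_real = sigmoid(a * real + c)
        d_fake = sigmoid(a * fake + c)
        a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
        c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

        # Generator: gradient ascent on the non-saturating objective
        # log D(fake), i.e. "fool the discriminator"
        d_fake = sigmoid(a * fake + c)
        dx = (1 - d_fake) * a          # d log D / dx at each fake sample
        w += lr * np.mean(dx * z)
        b += lr * np.mean(dx)
    return (w, b), (a, c)

(w, b), (a, c) = train_toy_gan()
samples = w * np.random.default_rng(1).normal(size=1000) + b
print("generated mean:", samples.mean())  # drifts toward the real mean of 4
```

The key dynamic is the same as in the video case: the generator never sees the real data directly; it improves only through the discriminator’s feedback, which is why a stronger discriminator tends to force more believable fakes.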
Experts have been investigating and refining the deep learning techniques behind such Deepfake videos. Beyond just face swapping, researchers have shown how to digitally mimic both the appearance and voice of individuals in order to create the equivalent of digital puppets. Stanford University researchers recently unveiled some of the most realistic-looking examples to date in their “Deep Video Portraits” paper, which will be presented at the SIGGRAPH 2018 annual conference on computer graphics in Vancouver, August 12 to 16.
...MUCH MORE
"The US military is funding an effort to catch deepfakes and other AI trickery"
But, but...I saw it on the internet....
"Talk down to Siri like she's a mere servant – your safety demands it"
The "mere" is troubling for some reason but it's CPI day so no time to reflect on why....