From The Conversation:
A new form of misinformation is poised to spread through online communities as the 2018 midterm election campaigns heat up. Called “deepfakes” after the pseudonymous online account that popularized the technique – which may have chosen its name because the process uses a technical method called “deep learning” – these fake videos look very realistic.
So far, people have used deepfake videos in pornography and satire to make it appear that famous people are doing things they wouldn’t normally. But it’s almost certain deepfakes will appear during the campaign season, purporting to depict candidates saying things or going places the real candidate wouldn’t.
Because these techniques are so new, people are having trouble telling the difference between real videos and the deepfake videos. My work, with my colleague Ming-Ching Chang and our Ph.D. student Yuezun Li, has found a way to reliably tell real videos from deepfake videos. It’s not a permanent solution, because technology will improve. But it’s a start, and offers hope that computers will be able to help people tell truth from fiction....MORE
What’s a ‘deepfake,’ anyway?
Making a deepfake video is a lot like translating between languages. Services like Google Translate use machine learning – computer analysis of tens of thousands of texts in multiple languages – to detect word-use patterns that they use to create the translation.
Deepfake algorithms work the same way: They use a type of machine learning system called a deep neural network to examine the facial movements of one person. Then they synthesize images of another person’s face making analogous movements. Doing so effectively creates a video of the target person appearing to do or say the things the source person did.
Before they can work properly, deep neural networks need a lot of source information, such as many photos of the people who are the source and target of the impersonation. The more images used to train a deepfake algorithm, the more realistic the digital impersonation will be.
Detecting blinking
There are still flaws in this new type of algorithm. One of them has to do with how the simulated faces blink – or don’t. Healthy adult humans blink somewhere between every 2 and 10 seconds, and a single blink takes between one-tenth and four-tenths of a second. That’s what would be normal to see in a video of a person talking. But it’s not what happens in many deepfake videos....
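The article doesn't show the researchers' actual detector, but the idea above can be sketched crudely: given blink timestamps already extracted from a video by some eye-state classifier (a hypothetical preprocessing step, not shown here), check whether the blink rate falls in the human-typical range of one blink every 2 to 10 seconds. This is an illustration of the reasoning only, not the published method.

```python
# Illustrative sketch only: a crude plausibility check on blink timing,
# NOT the detector described in the article. Assumes blink timestamps
# (in seconds) have already been extracted from a video by some
# hypothetical eye-state classifier.

def blink_stats(blink_times):
    """Return (mean interval between blinks in seconds, blink count)."""
    if len(blink_times) < 2:
        return None, len(blink_times)
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return sum(intervals) / len(intervals), len(blink_times)

def looks_human(blink_times, video_length_s):
    """Flag clips whose blink rate falls outside the typical human
    range of one blink roughly every 2-10 seconds (per the article)."""
    mean_interval, n = blink_stats(blink_times)
    if n == 0:
        # No blinks at all is only plausible in a very short clip.
        return video_length_s < 10
    if mean_interval is None:
        # A single blink: too little evidence to reject the clip.
        return True
    return 2.0 <= mean_interval <= 10.0

# A blink every ~5 s looks human; no blinks in a 60 s clip does not.
print(looks_human([3.1, 8.4, 14.0, 19.7], 25))  # True
print(looks_human([], 60))                      # False
```

A real detector would of course work from pixels, not timestamps, but the timing thresholds are the part the article actually states.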
See also:
AI: "Experts Bet on First Deepfakes Political Scandal"
"Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security"
"Talk down to Siri like she's a mere servant – your safety demands it"
"The US military is funding an effort to catch deepfakes and other AI trickery"
...Related (and because all news is local), from the Columbia Journalism Review:
Reporting in a Machine Reality: Deepfakes, misinformation, and what journalists can do about them
That's not local in the geographical sense but rather intellectual provincialism:
...In yesterday's "Questions America Wants Answered: How Will Brexit Affect The Art Market?" I amused myself with the provincialism of the headline question, somewhat akin to the old joke about the small Italian town that sent its most esteemed resident, a tailor by trade, to represent said villaggio at an audience with the Pope. Upon his return from Rome the citizens crowded around and asked "What kind of man is Il Papa?"
Their emissary replied, "About a 42 regular."
We all see the world through our own self-created lenses. And on a related provincialism point, easily the most terrifying news of the last couple years:
"Equity Analysts Join the Gig Economy"
and
"The automation of creativity: scary but inevitable":
First they came for the journalists and I did not speak out-
Because I was not a journalist.
Then they came for the ad agency creatives and I did not speak out-
Because I was not an ad agency creative. (see below)
Then they came for the financial analysts and I
said 'hang on one effin minute'....