Quick victory lap: We've been going on about adversarial images for the last half-decade and a quick glance at the posts reveals what amounts to a primer on this stuff. Our focus was on machine vision and how it is trained. Links after the jump.
From Inference Review:
Disinformed
There have been fakes as long as there have been frauds, and that is a very long time; but deepfakes are new fakes, and having initially loitered along the margins of general awareness, they are now occupied in haunting it. Tens of thousands of deepfakes have already been created. The technical means of fiddling with images is hardly new. Standing beside Joseph Stalin in one photograph taken along the newly completed White Sea Canal, Nikolai Yezhov disappeared from the very same photograph some months later, as he, in fact, had disappeared from life. The fakery is fine, but it is no better than that, the ensuing photograph visually unbalanced by a lot of gray canal water where Yezhov had once stood. It is thanks to a technology invented in 2014 that deepfakery is capable of taking verisimilitude to a new level.
Generative Adversarial Networks
The ability to produce ever more persuasive deepfakes has been made possible by a recent form of machine learning called generative adversarial networks—or GANs. A GAN operator pits a generator (G) against a discriminator (D) in a gamelike environment in which G tries to fool D into incorrectly discriminating between fake and real data. The technology works by means of a series of incremental but rapid adjustments that allow D to discriminate data while G tries to fool it.
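The G-versus-D loop described above can be sketched in miniature. What follows is a toy illustration, not how production deepfake systems are built: a one-parameter "generator" learns to mimic a 1-D Gaussian, with hand-rolled numpy gradient steps standing in for real networks and optimizers. All the names, the learning rate, and the target distribution are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to mimic: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: an affine map from noise z to a sample, G(z) = gw*z + gb,
# so G's output distribution is N(gb, gw^2).
gw, gb = rng.normal(), 0.0
# Discriminator: logistic regression, D(x) = sigmoid(dw*x + db).
dw, db = rng.normal(), 0.0

lr = 0.03
for step in range(4000):
    z = rng.normal(size=(32, 1))
    fake = gw * z + gb
    real = real_batch(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(dw * x + db)
        grad = p - label              # d(binary cross-entropy)/d(logit)
        dw -= lr * np.mean(grad * x)
        db -= lr * np.mean(grad)

    # Generator step: push D(G(z)) toward 1, i.e. fool the discriminator.
    fake = gw * z + gb
    p = sigmoid(dw * fake + db)
    g_grad = (p - 1.0) * dw           # chain rule through D into G's output
    gw -= lr * np.mean(g_grad * z)
    gb -= lr * np.mean(g_grad)

print("generated mean:", float(np.mean(gw * rng.normal(size=(1000, 1)) + gb)))
```

At equilibrium the discriminator outputs roughly 0.5 everywhere and the generator's samples are statistically hard to distinguish from the real batch; the alternating updates are the "incremental but rapid adjustments" the excerpt refers to, just with two scalars per player instead of millions of weights.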
How fast are these adjustments? Very fast. A computer can play 24 trillion games of Texas Hold’em every second. To beat human opponents, a computer does not need to assess their strategies. It relies on the patterns it picks out, and assumes only that human strategy is limited to a few flexible tactics. DeepMind’s AlphaStar ranked above 99.8% of human players at StarCraft II, a game subtler and more abstract than Texas Hold’em.
GAN technology is not particularly exotic; the software is available commercially, and anyone who can write code can figure out how to use it. If simply using it is open admission, what about using it to change the 2020 election? That, David Doermann argues, “would take a massive amount of computing power.” Rogue actors, he adds, are too small to do much. “A nation state is required.”1
What about an organized group scaled somewhere between a rogue actor and a rogue state?
It is too late to ban GANs. But it is possible to criminalize certain uses, and efforts are afoot to do so. Beyond the ambit of domestic law, legal remedies are less likely to be effective. GANs have any number of applications. Some are pure as the driven snow. GANs can reconstruct three-dimensional images from two-dimensional photographs. They can be used to visualize industrial design, improve astronomical images by filling in statistically what real cameras cannot capture, and generate showers of imaginary particles for high-energy physics experiments. GANs can also be used to visualize motion in static environments, which could help find people lost or hiding in forests or jungles. In 2016, GAN technology was used to generate new molecules for a variety of protein targets in cells implicated in fibrosis, inflammation, and cancer.2
So much for Dr. Jekyll. Mr. Hyde now follows. What makes GANs frightening is their power to produce photographic images of people who do not exist, or to generate video from voice recordings, or to doctor images of people who do exist to make them seem to be someone else, or to say things they never did or would say. GANs can be used to create pornography by using an image without the subject’s knowledge or consent. According to the company Sensity, formerly Deeptrace, of the 15,000 online deepfakes detected by September 2019, 96% were pornographic.3
GAN technology is intended to deceive.
And the technology is flexible. Those who mean mischief favor the adversarial neural network; those who do not, the discriminators. This allows authorities to better detect deepfake attacks; but it also makes them adept at offense if they themselves go rogue. Any formula that helps the defense can be used to improve an attack. In planting false positives, clever operators tag real videos as fakes. Ambiguity infects the entire informational domain.
Pornography aside, many other nefarious uses of deepfakery are obvious. In August 2019, the Wall Street Journal reported on the first big-money case of identity fraud.4 Scammers used voice-changing technology to impersonate a chief executive. The money is gone; they have not been caught. Business leaders or banking lenders can be made to say things that dupe investors and markets, the ensuing herd yielding millions for those in the know....
....MUCH MORE
Previously:
June 2019
Stephen Wolfram: "A Few Thoughts about Deep Fakes" (+ how to know what's real)
June 2018
AI: "Experts Bet on First Deepfakes Political Scandal"
May 2018
"The US military is funding an effort to catch deepfakes and other AI trickery"
But, but...I saw it on the internet....
February 2018
"Talk down to Siri like she's a mere servant – your safety demands it"
The "mere" is troubling for some reason but it's CPI day so no time to reflect on why....
Related (and because all news is local), from the Columbia Journalism Review:
Reporting in a Machine Reality: Deepfakes, misinformation, and what journalists can do about them
That's not local in the geographical sense but rather intellectual provincialism:
...In yesterday's "Questions America Wants Answered: How Will Brexit Affect The Art Market?" I amused myself with the provincialism of the headline question, somewhat akin to the old joke about the small Italian town that sent its most esteemed resident, a tailor by trade, to represent said villaggio at an audience with the Pope. Upon his return from Rome the citizens crowded around and asked "What kind of man is Il Papa?"
Their emissary replied, "About a 42 regular"....
We all see the world through our own self-created lenses. And on a related provincialism point, easily the most terrifying news of the last couple years:
"Equity Analysts Join the Gig Economy"
and
"The automation of creativity: scary but inevitable":
First they came for the journalists and I did not speak out-
Because I was not a journalist.
Then they came for the ad agency creatives and I did not speak out-
Because I was not an ad agency creative. (see below)
Then they came for the financial analysts and I
said 'hang on one effin minute'....
If interested see also:
Adversarial Images, Or How To Fool Machine Vision
"Boffins don bad 1980s fashion to avoid being detected by object-recognizing AI cameras"
"Magic AI: These are the Optical Illusions that Trick, Fool, and Flummox Computers"
Fooling The Machine: The Byzantine Science of Deceiving Artificial Intelligence
Another Way To Fool The Facial Recognition Algos
News You Can Use—"Invisible Mask: Practical Attacks on Face Recognition with Infrared"
Disrupting Surveillance Capitalism
And finally, the essential "Machine Learning and the Importance of 'Cat Face'".