Wednesday, May 23, 2018

"The US military is funding an effort to catch deepfakes and other AI trickery"

But, but...I saw it on the internet.
From MIT's Technology Review:

But DARPA’s technologists admit that it might be a losing battle.
Think that AI will help put a stop to fake news? The US military isn’t so sure.
The Department of Defense is funding a project that will try to determine whether the increasingly real-looking fake video and audio generated by artificial intelligence might soon be impossible to distinguish from the real thing—even for another AI system.
This summer, under a project funded by the Defense Advanced Research Projects Agency (DARPA), the world’s leading digital forensics experts will gather for an AI fakery contest. They will compete to generate the most convincing AI-generated fake video, imagery, and audio—and they will also try to develop tools that can catch these counterfeits automatically.
The contest will include so-called “deepfakes,” videos in which one person’s face is stitched onto another person’s body. Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos. But the method could also be used to create a clip of a politician saying or doing something outrageous.
DARPA’s technologists are especially concerned about a relatively new AI technique that could make AI fakery almost impossible to spot automatically. Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery.

“Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” says David Gunning, the DARPA program manager in charge of the project. “We don’t know if there’s a limit. It’s unclear.”

A GAN consists of two components. The first, known as the “actor,” tries to learn the statistical patterns in a data set, such as a set of images or videos, and then generate convincing synthetic pieces of data. The second, called the “critic,” tries to distinguish between real and fake examples. Feedback from the critic enables the actor to produce ever-more-realistic examples. And because GANs are designed to outwit an AI system already, it is unclear if any automated system could catch them....MUCH MORE
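The adversarial loop described above can be sketched in a few lines of code. This is a minimal toy, not anything like what DARPA's contestants would build: the "data" is just a one-dimensional Gaussian, the actor and critic are single affine/logistic units, and every number (the target mean of 4.0, the learning rate, the step count) is an illustrative choice of ours. The "actor"/"critic" names follow the excerpt's terminology; the more common terms are generator and discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian with mean 4.0. A toy stand-in for
# real images or video; the mean and all hyperparameters below are
# illustrative choices, not from the article.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

def sigmoid(x):
    # clip to avoid overflow in exp for large |x|
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

# Actor (generator): maps noise z ~ N(0,1) to a sample g = w*z + b
w, b = 1.0, 0.0
# Critic (discriminator): D(x) = sigmoid(a*x + c), its estimate of
# the probability that x came from the real data
a, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    x = real_batch(n)                  # real examples
    z = rng.normal(size=n)
    g = w * z + b                      # fake examples from the actor

    # Critic step: ascend the gradient of log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real from fake
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * g + c)
    a += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Actor step: ascend the gradient of log D(fake),
    # i.e. get better at fooling the critic
    d_fake = sigmoid(a * g + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

print(f"actor's samples now center near {b:.2f} (real data mean: 4.0)")
```

The critic's feedback (its gradient) is the only signal the actor ever sees, which is exactly why Gunning's worry bites: fold a detection technique into the critic and the actor is, by construction, trained to defeat it.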
Related (and because all news is local), from the Columbia Journalism Review:
Reporting in a Machine Reality: Deepfakes, misinformation, and what journalists can do about them
That's not local in the geographical sense; it's a matter of intellectual provincialism:
...In yesterday's "Questions America Wants Answered: How Will Brexit Affect The Art Market?" I amused myself with the provincialism of the headline question, somewhat akin to the old joke about the small Italian town that sent its most esteemed resident, a tailor by trade, to represent said villaggio at an audience with the Pope. Upon his return from Rome the citizens crowded around and asked "What kind of man is Il Papa?"

Their emissary replied, "About a 42 regular"....
We all see the world through our own self-created lenses. And on a related provincialism point, easily the most terrifying news of the last couple of years:

"Equity Analysts Join the Gig Economy"
"The automation of creativity: scary but inevitable"
First they came for the journalists and I did not speak out-
Because I was not a journalist.

Then they came for the ad agency creatives and I did not speak out-
Because I was not an ad agency creative. (see below)

Then they came for the financial analysts and I
said 'hang on one effin minute'....