Someone from the House Permanent Select Committee on Intelligence recently contacted me about a hearing they’re having on the subject of deep fakes. I can’t attend the hearing, but the conversation got me thinking about the subject of deep fakes, and I made a few quick notes….
What You See May Not Be What Happened
The idea of modifying images is as old as photography. At first, it had to be done by hand (sometimes with airbrushing). By the 1990s, it was routinely being done with image manipulation software such as Photoshop. But it’s something of an art to get a convincing result, say for a person inserted into a scene. And if, for example, the lighting or shadows don’t agree, it’s easy to tell that what one has isn’t real.

What about videos? If one does motion capture, and spends enough effort, it’s perfectly possible to get quite convincing results—say for animating aliens, or for putting dead actors into movies. The way this works, at least in a first approximation, is for example to painstakingly pick out the keypoints on one face, and map them onto another.

What’s new in the past couple of years is that this process can basically be automated using machine learning. And, for example, there are now neural nets that are simply trained to do “face swapping”.
In essence, what these neural nets do is to fit an internal model to one face, and then apply it to the other. The parameters of the model are in effect learned from looking at lots of real-world scenes, and seeing what’s needed to reproduce them. The current approaches typically use generative adversarial networks (GANs), in which there’s iteration between two networks: one trying to generate a result, and one trying to discriminate that result from a real one.

Today’s examples are far from perfect, and it’s not too hard for a human to tell that something isn’t right. But even just as a result of engineering tweaks and faster computers, there’s been progressive improvement, and there’s no reason to think that within a modest amount of time it won’t be possible to routinely produce human-indistinguishable results.
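To make the generator/discriminator iteration described above concrete, here is a minimal GAN training loop sketched in PyTorch. It uses a toy 2D “real” distribution rather than images, and the network sizes, learning rates, and number of steps are arbitrary illustrative choices, not anything from the actual face-swapping systems being discussed.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot tell apart from "real" ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def sample_real(n):
    # Stand-in for "real images": points from a shifted Gaussian cluster.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = sample_real(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator step: learn to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: make samples the discriminator labels as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # samples should drift toward the "real" cluster
```

The discriminator inside this loop is exactly the “fake or not” test that comes up in the next section: by construction, training continues until it can no longer reliably tell the generated samples from the real ones.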
Can Machine Learning Police Itself?
OK, so maybe a human won’t immediately be able to tell what’s real and what’s not. But why not have a machine do it? Surely there’s some signature of something being “machine generated”. Surely there’s something about a machine-generated image that’s statistically implausible for a real image.
Well, not naturally. Because, in fact, the whole way the machine images are generated is by having models that as faithfully as possible reproduce the “statistics” of real images. Indeed, inside a GAN there’s explicitly a “fake or not” discriminator. And the whole point of the GAN is to iterate until the discriminator can’t tell the difference between what’s being generated, and something real.
Could one find some other feature of an image that the GAN isn’t paying attention to—like whether a face is symmetric enough, or whether writing in the background is readable? Sure. But at this level it’s just an arms race: having identified a feature, one puts it into the model the neural net is using, and then one can’t use that feature to discriminate any more.

There are limitations to this, however. Because there’s a limit to what a typical neural net can learn. Generally, neural nets do well at tasks like image recognition that humans do without thinking. But it’s a different story if one tries to get neural nets to do math, and for example factor numbers.
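Going back to the symmetry feature mentioned a moment ago, here is a toy version of that kind of hand-crafted check: a score for how left-right symmetric a grayscale face crop is. The scoring formula and the random stand-in “face” are purely illustrative, and of course, per the arms-race point above, a generator can be trained to pass exactly this test.

```python
# Toy hand-crafted feature: how left-right symmetric is a face crop?
import numpy as np

def symmetry_score(gray_face):
    """Mean absolute difference between a grayscale crop and its mirror image,
    normalized to [0, 1]; lower means more left-right symmetric."""
    gray_face = np.asarray(gray_face, dtype=float)
    mirrored = gray_face[:, ::-1]
    return np.mean(np.abs(gray_face - mirrored)) / 255.0

rng = np.random.default_rng(0)
fake_crop = rng.integers(0, 256, size=(64, 64))  # stand-in for a synthesized face crop
print(symmetry_score(fake_crop))                 # pure noise scores as very asymmetric
```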
Imagine that in modifying a video one has to fill in a background that’s showing some elaborate computation—say a mathematical one. Well, then a standard neural net basically doesn’t stand a chance.

Will it be easy to tell that it’s getting it wrong? It could be. If one’s dealing with public-key cryptography, or digital signatures, one can certainly imagine setting things up so that it’s very hard to generate something that is correct, but easy to check whether it is....
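For the digital-signature case, the “hard to generate, easy to check” asymmetry can be sketched concretely. In the sketch below, a hash of some placeholder video bytes is signed with an Ed25519 key using the third-party `cryptography` package; the idea that a camera or publisher holds the private key is an assumption made purely for illustration.

```python
# Sketch: sign a hash of the video bytes; anyone with the public key can
# cheaply verify that the bytes haven't been altered.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live with the capture device or publisher.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw video data..."            # placeholder for a real file
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)             # hard to produce without the key

# Verification: easy for anyone holding the public key.
try:
    public_key.verify(signature, digest)
    print("signature valid: bytes match what was signed")
except InvalidSignature:
    print("signature invalid")

# A single changed byte (a swapped frame, say) makes verification fail.
tampered = hashlib.sha256(video_bytes + b"x").digest()
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered video detected")
```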
....MUCH MORE
Also at Stephen Wolfram|blog: