Monday, August 13, 2018

Potemkin AI: Many instances of 'artificial intelligence' are artificial displays of its power and potential

One of the problems with artificial intelligence, as it is presented to society, is the image of omnipresence that many of its detractors ascribe to what is currently a not-so-advanced technology.
That image is also presented by proponents as a means to cow citizens—see China's totalitarian internal propaganda—into a version of Martin Seligman's learned helplessness. Resistance is futile, might as well just curl up in a ball, etc.

I don't have a lot of time for Howard Zinn's approach to history, but one of his ideas seems to bear on this point:
“If those in charge of our society - politicians, corporate executives, and owners of press and television - can dominate our ideas, they will be secure in their power. They will not need soldiers patrolling the streets. We will control ourselves.”
If you think you are always being out-thought by folks with access to all-powerful A.I., you will comport yourself differently than if you don't think the stuff is omnipotent.

From Real Life Magazine, August 6:

Potemkin AI
Jathan Sadowski 
In 1770, the Hungarian inventor Wolfgang von Kempelen unveiled the Mechanical Turk, a chess-playing contraption that “consisted of a wooden cabinet behind which was seated a life-size figure of a man, made of carved wood, wearing an ermine-trimmed robe, loose trousers and a turban — the traditional costume of an Oriental sorcerer,” according to journalist Tom Standage. The chess-playing robot was toured around Europe and America, and exhibition matches were staged with such famous opponents as Napoleon Bonaparte. All the while, Kempelen maintained that the automaton operated of its own accord.

To prove there was no trickery, he opened the cabinet before every exhibition and showed spectators the dense tangle of gears, wheels, and levers. But Kempelen had actually created an elaborate illusion, not a robot. Inside was a human chess master who used magnets and levers to operate the Mechanical Turk and hid behind the fake machinery when Kempelen opened the cabinet. In other words, the complex mechanical system that Kempelen showed people was meant to distract their attention from how the automaton really worked: human labor. Kempelen sold the idea of an intelligent machine, but what people witnessed was just human effort disguised by clever engineering.

In the 1730s, a French inventor named Jacques de Vaucanson debuted a copper-plated cyborg called Le Canard Digérateur, or the Digesting Duck. It was the size of a living duck, walked like a duck, and quacked like a duck. But its real trick, which amazed and baffled audiences, was that it could shit like a duck. The automaton “ate food out of the exhibitor’s hand, swallowed it, digested it, and excreted it, all before an audience,” as journalist Gaby Wood described it in an article for the Guardian.
Vaucanson claimed that he had built a “chemical laboratory” in the duck’s stomach to decompose the food before expelling it from the mechanical butt. While Vaucanson was an expert engineer — the duck was an intricate piece of machinery — like a good magician, he did not reveal how the duck worked. After his death, the secret was uncovered: there was no innovative chemical technology inside the duck, just two containers, one for the food and one for preloaded excrement. (Strangely, the Digesting Duck and the Mechanical Turk were both destroyed by museum fires around the same time in the mid-19th century.)

Kempelen and Vaucanson would fit very well into Silicon Valley today. They made mysterious machines and wondrous claims to the public about what those machines could do. Vaucanson literally snuck shit into his technological system and called it innovation. And Kempelen’s Mechanical Turk was a forerunner of today’s systems of artificial intelligence, not because it managed to play a game well, as with IBM’s Deep Blue or Google’s AlphaGo, but because many AI systems are, in large part, also technical illusions designed to fool the public. Whether it’s content moderation for social media or image recognition for police surveillance, claims abound about the effectiveness of AI-powered analytics when, in reality, the cognitive labor comes from an office building full of (low-waged) workers.

We can call this way of building and presenting such systems — whether analog automatons or digital software — Potemkin AI. There is a long list of services that purport to be powered by sophisticated software but actually rely on humans acting like robots. Autonomous vehicles use remote driving and human drivers disguised as seats to hide their Potemkin AI. App developers for email-based services like personalized ads, price comparisons, and automated travel-itinerary planners use humans to read private emails. SpinVox, a service that converted voicemails into text, was accused of using humans rather than machines to transcribe audio. Facebook’s much-vaunted personal assistant, M, relied on humans — until, that is, Facebook shut down the service this year to focus on other AI projects. The list of Potemkin AI continues to grow with every cycle of VC investment.

The term Potemkin derives from the name of a Russian minister who built fake villages to impress Empress Catherine II and disguise the true state of things. Potemkin tech, then, constructs a façade that not only hides what’s going on but deceives potential customers and the general public alike. Rather than the Wizard of Oz telling us to pay no attention to the man behind the curtain, we have programmers telling us to pay no attention to the humans behind the platform.

When the inner workings of a technology are obscured, it’s often labeled a “black box,” a term derived from engineering diagrams where you can see the inputs and outputs but not what happens in between. An algorithm, for example, might effectively be black-boxed because the technical details are described using dense jargon decipherable by only a small group of experts. Accusations of willful obscurantism are often reserved for postmodernism, but as a recent paper on “troubling trends in machine learning scholarship” points out, research and applications in this field are rife with ambiguous details, shaky claims, and deceptive obfuscation. Being baffled by abstruse critical theory is one thing, but not being able to discern how an AI makes medical diagnoses is much more consequential.

Algorithms might also be black-boxed through the force of law by the tech companies that claim them as trade secrets. In The Black Box Society, Frank Pasquale details how many of the algorithms that govern information and finance — the circulation of data and dollars — are shrouded in opacity. Algorithms are often described as a type of recipe. Just as Coca-Cola keeps its formula a tightly guarded secret, so too do tech companies fiercely protect their “secret sauce.” Again, it’s one thing to enjoy a beverage we can’t reverse-engineer, but quite another to take on faith proprietary software that makes sentencing decisions in criminal cases.

Potemkin AI is related to black boxing, but it pushes obfuscation into deception. The Mechanical Turk, like many of the much-discussed AI systems today, was not just a black box that hid its inner workings from prying eyes. After all, Kempelen literally opened his automaton’s cabinet and purported to explain how what looked to be a complex machine worked. Except that he was lying. Similarly, marketing for AI systems deploys technical buzzwords as though they were a magician’s incantations: Smart! Intelligent! Automated! Cognitive computing! Deep learning! Abracadabra! Alakazam!
...MUCH MORE 

See also July's "The rise of 'pseudo-AI': how tech firms quietly use humans to do bots' work".

And previously from Mr. Sadowski and Professor Frank Pasquale, a mini-tour de force:
The Spectrum of Control: A Social Theory of The Smart City