Questions Americans Want Answered: "Does my algorithm have a mental-health problem?"
From Aeon:
Is my car hallucinating? Is the algorithm that runs the police surveillance system in my city paranoid? Marvin the android in Douglas Adams's The Hitchhiker's Guide to the Galaxy had a pain in all the diodes down his left-hand side. Is that how my toaster feels?
This all sounds ludicrous until we realise that our algorithms are increasingly being made in our own image. As we've learned more about our own brains, we've enlisted that knowledge to create algorithmic versions of ourselves. These algorithms control the speeds of driverless cars, identify targets for autonomous military drones, compute our susceptibility to commercial and political advertising, find our soulmates in online dating services, and evaluate our insurance and credit risks. Algorithms are becoming the near-sentient backdrop of our lives.
The most popular algorithms currently being put into the workforce are deep learning algorithms. These algorithms mirror the architecture of human brains by building complex representations of information. They learn to understand environments by experiencing them, identify what seems to matter, and figure out what predicts what. Being like our brains, these algorithms are increasingly at risk of mental-health problems.
Deep Blue, the algorithm that beat the world chess champion Garry Kasparov in 1997, did so through brute force, examining millions of positions a second, up to 20 moves in the future. Anyone could understand how it worked even if they couldn't do it themselves. AlphaGo, the deep learning algorithm that beat Lee Sedol at the game of Go in 2016, is fundamentally different. Using deep neural networks, it created its own understanding of the game, considered to be the most complex of board games. AlphaGo learned by watching others and by playing itself. Computer scientists and Go players alike are befuddled by AlphaGo's unorthodox play. Its strategy seems at first to be awkward. Only in retrospect do we understand what AlphaGo was thinking, and even then it's not all that clear.
To give you a better understanding of what I mean by thinking, consider this. Programs such as Deep Blue can have a bug in their programming. They can crash from memory overload. They can enter a state of paralysis due to a never-ending loop or simply spit out the wrong answer on a lookup table. But all of these problems are solvable by a programmer with access to the source code, the code in which the algorithm was written.
Algorithms such as AlphaGo are entirely different. Their problems are not apparent by looking at their source code. They are embedded in the way that they represent information. That representation is an ever-changing high-dimensional space, much like walking around in a dream. Solving problems there requires nothing less than a psychotherapist for algorithms.
Take the case of driverless cars. A driverless car that sees its first stop sign in the real world will have already seen millions of stop signs during training, when it built up its mental representation of what a stop sign is. Under various light conditions, in good weather and bad, with and without bullet holes, the stop signs it was exposed to contain a bewildering variety of information. Under most normal conditions, the driverless car will recognise a stop sign for what it is. But not all conditions are normal.
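The "bewildering variety" the author describes is usually manufactured deliberately at training time through data augmentation. The sketch below is a minimal illustration, assuming the torchvision library; the particular transforms are placeholders for the kind of variation a training pipeline might apply, not any vendor's actual setup.

```python
# A minimal sketch of training-time data augmentation: each pass over the
# training set sees a slightly different version of every stop-sign image
# (lighting, angle, framing), which is how a classifier builds a
# representation robust to "good weather and bad". Assumes torchvision;
# these transforms are illustrative placeholders.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5),  # varying light conditions
    transforms.RandomRotation(degrees=15),                 # viewing angle
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # distance and partial occlusion
    transforms.ToTensor(),                                  # hand off to the training loop
])

# Usage: applied to a PIL image of a stop sign each time it is sampled,
# so every training epoch sees a slightly different version of the sign.
# augmented = augment(stop_sign_image)
```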
Some recent demonstrations have shown that a few black stickers on a stop sign can fool the algorithm into thinking that the stop sign is a 60 mph sign. Subjected to something frighteningly similar to the high-contrast shade of a tree, the algorithm hallucinates.
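A reader curious how such attacks are built in the digital setting can get the flavour from a few lines of code. Below is a minimal sketch, assuming a pretrained torchvision classifier, of the fast gradient sign method (FGSM), one standard way to craft an adversarial perturbation; the model, the image and the class index are illustrative placeholders, and this is not the method used in the stop-sign demonstrations.

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss. Assumes a pretrained torchvision model;
# the image and label below are stand-ins, not real stop-sign data.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A small step along the sign of the gradient, barely visible to a person,
    # is often enough to change the classifier's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage with placeholder inputs: a 1x3x224x224 image scaled to [0, 1]
# and an arbitrary class index standing in for the correct label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([919])
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```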
How many different ways can the algorithm hallucinate? To find out, we would have to provide the algorithm with all possible combinations of input stimuli. This means that there are potentially infinite ways in which it can go wrong. Crackerjack programmers already know this, and take advantage of it by creating what are called adversarial examples. The AI research group LabSix at the Massachusetts Institute of Technology has shown that, by presenting images to Google's image-classifying algorithm and using the data it sends back, they can identify the algorithm's weak spots. They can then do things similar to fooling Google's image-recognition software into believing that an X-rated image is just a couple of puppies playing in the grass....MUCH MORE
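A note on the LabSix-style probing described above: because the attacker only sees what the remote classifier sends back, these are called black-box or query-based attacks. The sketch below is a minimal, self-contained illustration of the idea using a stand-in classifier and a naive random search; `query_model` and its weights are placeholders, not Google's interface or LabSix's published method.

```python
# A minimal sketch of a query-only ("black-box") attack: the attacker never
# inspects the model's internals, only the scores it returns for each query.
# `query_model` wraps a stand-in linear classifier so the sketch runs on its
# own; in practice it would be a call to the remote classifier's API.
import numpy as np

rng = np.random.default_rng(0)
_W = rng.normal(size=(10, 32 * 32 * 3))   # stand-in model weights (10 classes)

def query_model(image: np.ndarray) -> np.ndarray:
    """Return class probabilities for `image` (placeholder for a remote API)."""
    logits = _W @ image.ravel()
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def black_box_attack(image, true_class, steps=500, step_size=0.02):
    """Keep random pixel nudges that erode the model's confidence in the true class."""
    adv = image.copy()
    best = query_model(adv)[true_class]
    for _ in range(steps):
        candidate = np.clip(adv + step_size * rng.choice([-1.0, 1.0], size=adv.shape), 0.0, 1.0)
        score = query_model(candidate)[true_class]
        if score < best:                   # only the model's outputs guide the search
            adv, best = candidate, score
    return adv

# Usage: a 32x32 RGB image scaled to [0, 1], with class 3 as its "true" label.
x = rng.random((32, 32, 3))
x_adv = black_box_attack(x, true_class=3)
print(query_model(x)[3], query_model(x_adv)[3])   # confidence in the true class drops
```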