Big Brother’s Blind Spot: “Mining the failures of surveillance tech”
From The Baffler:
Netflix believes, algorithmically at least,
that I am the kind of person who likes to watch “Dark TV Shows
Featuring a Strong Female Lead.” This picksome genre is never one that I
seek out intentionally, and I’m not sure it even represents my
viewing habits. (Maybe I fell asleep watching The Killing one
night?) It is an image of me that Netflix compiled from personal data it
gathers, and, like a portrait taken slantwise and at a distance, much of
the finer detail is missing. As it happens, television sometimes puts me to
sleep; other times I stream a movie as I work on my laptop, and by the
time I’ve finished typing and look back, the credits are rolling. Either
way, the idea offered of me after my data has been mined is curiously
off-base.
More than a decade ago, Netflix ushered in a cultural conversation
about big data and algorithms with stunts like the Netflix Prize—an open
competition to improve user rating predictions—and its eventual use of
subscriber data to produce and cast the show House of Cards.
Now, with Cambridge Analytica and driverless cars in the headlines, the
artless future that some technology critics forecast back then—movies
cast by algorithms!—sounds quaint in comparison. For the time being, the
stakes are low (rifle through streaming titles to find something good
to watch), and the service declares the way it categorizes me—as a fan
of the “Strong Female Lead”—rather than clandestinely populating the
interface with lady detective shows. To be sure, there is plenty to
criticize about its micro-targeting practices, but now that
“surveillance capitalism” has eclipsed “big data” as the tech media
buzzphrase of choice, at least its subscriber-based business model
suggests the company has little incentive to partner with data brokers
like Acxiom and Experian to determine whether mine is a BoJack Horseman
household or more apt to stream 13 Reasons Why.
Netflix is an accessible example of the gap between an
algorithmically generated consumer profile and the untidy bundle of our
lived experiences and preferences. The reality of living a digital life
is that we’re routinely confronted with similarly less-than-spot-on
categories: Facebook ads for products you would never buy, iPhoto
tagging your house as a person’s face, false positives, false negatives,
and all the outliers that might be marked as red dots on prediction
models. Mix-ups like these might be laughable or bothersome; the octopus
of interlinked corporate and state surveillance apparatuses has
inevitable blind spots, after all. Still, I wonder if these blunders are
better than the alternative: perfect, all-knowing,
firing-on-all-cylinders systems of user tracking and categorization.
Perhaps these mistakes are default countermeasures: Can we, as users,
take shelter in the gaps of inefficacy and misclassification? Is a
failed category to the benefit of the user—is it privacy, by accident?
Surveillance is “Orwellian when accurate, Kafkaesque when
inaccurate,” Privacy International’s Frederike Kaltheuner told me. These
systems are probabilistic, and “by definition, get things wrong
sometimes,” Kaltheuner elaborated. “There is no 100 percent. Definitely
not when it comes to subjective things.” As a target of surveillance and
data collection, whether you are a Winston Smith or a Josef K. is a
matter of spectrum, and the two conditions can coexist: depending on the
tool, you tilt one way or the other, or both at once, not least because
even data recorded with precision can get gummed up in automated
clusters and categories.
In other words, even when the tech works, the data gathered can be
opaque and prone to misinterpretation.
Companies generally don’t flaunt their imperfection—especially those
with Orwellian services under contract—but nearly every internet user
has a story about being inaccurately tagged or categorized in an absurd
and irrelevant way. Kaltheuner told me she once received an
advertisement from the UK government “encouraging me not to join ISIS,”
after she watched hijab videos on YouTube. The ad was bigoted, and its
execution was bumbling; still, to focus on the wide net cast is to
sidestep the pressing issue: the UK government has no business judging a
user’s YouTube history. Ethical debates about artificial intelligence
tend to focus on the “micro level,” Kaltheuner said, when “sometimes the
broader question is, do we want to use this in the first place?”
Mask Off
This is precisely the question taken up by software developer Nabil Hassein in “Against Black Inclusion in Facial Recognition,”
an essay he wrote last year for the blog Decolonized Tech. Making a
case both strategic and political, Hassein argues that technology under
police control never benefits black communities, and that voluntary
participation in these systems will backfire. Facial recognition
commonly fails to detect black faces, an example of what Hassein
calls “technological bias.” Rather than working to resolve this bias,
Hassein writes, we should “demand instead that police be forbidden to
use such unreliable surveillance technologies.”
Hassein’s essay is in part a response to Joy Buolamwini’s influential work as founder of the Algorithmic Justice League.
Buolamwini, who is also a researcher at MIT Media Lab, is concerned
with the glaring racial bias expressed in computer vision training data.
The open-source facial recognition corpus largely comprises white
faces, so in practice the software interprets aspects of whiteness as
a “face.” In a TED Talk about her project, Buolamwini, a black woman,
demonstrates the consequences of this bias in real time. It is alarming
to watch as the digital triangles of facial recognition software begin
to scan and register her countenance on the screen only after
she puts on a white mask. For his part, Hassein empathized with
Buolamwini in his response, adding that “modern technology has rendered
literal Frantz Fanon’s metaphor of ‘Black Skin, White Masks.’” Still, he
disagrees with the broader political objective. “I have no reason to
support the development or deployment of technology which makes it
easier for the state to recognize and surveil members of my community.
Just the opposite: by refusing to don white masks, we may be able to
gain some temporary advantages by partially obscuring ourselves from the
eyes of the white supremacist state.”

...MUCH MORE