Following up on yesterday's "The Big Questions We'd Better Figure Out, Part 1: How Do We Humanize The Algorithms That Will Rule Us?" we take a look at Ms. Kaminska's "Algorithmic Discrimination."
There must be something in the air with the empathy meme, and we caught it ourselves a few weeks ago in "Short Selling Prerequisite: Empathy (and Charlie Munger quotes)," which riffed on the fact that your equity short book works better if you understand (have empathy for) what is driving the longs.
Then this week both the New Yorker and the New York Times had articles on empathy and tech: "Silicon Valley Has an Empathy Vacuum" and "Is ‘Empathy’ Really What the Nation Needs?" respectively, the latter being a little too touchy-feely for where we're going but which begins on this note from Facebook's Mark Zuckerberg:
...“There is a certain profound lack of empathy,” he said, “in asserting that the only reason why someone could have voted the way they did is because they saw some fake news. If you believe that, then I don’t think you have internalized the message that Trump supporters are trying to send in this election.” When asked to articulate that message, he dodged the question.

“Empathy” is one of Facebook’s all-time favorite buzzwords. For years, Zuckerberg has hopped from conference to conference in a selection of muted hoodies and T-shirts, delivering variations on the same pitch. “More people are using Facebook to share more stuff,” he said in 2010. “That means that if we want, there’s more out there that we can go look at and research and understand what’s going on with the people around us. And I just think that leads to broader empathy, understanding — just a lot of good, core, human things that make society function better.”...
Closer to where Ms. Kaminska is going, but still coming in on a tangent, is Om Malik's New Yorker piece. But here, you be the judge.
Picking up where we left off with the Alphaville post yesterday:
Algorithmic discrimination
...And yet, what is being lost in the midst of all this outrage, is that there’s an entity being born out there which stands to be far more discriminatory, bull-headed and fascist than any human being could ever be. That entity is “artificial intelligence”. Thus far, the entire quant/data-first movement which supports and feeds it, by actively celebrating the removal of feelings from the evaluation equation, seems entirely oblivious to that fact.

Yet, what we’re discovering through the AI big data phenomenon, isn’t just that algorithms discriminate on a continuous basis, it’s that they make terrible assumptions about the human condition when they do so. Without the ability to empathise or respect a human’s capacity to change, better himself or hold contradictory viewpoints at the same time, algorithms crunch data for patterns and correlations and interpret them to assume x, y or z is a defaulter or a criminal just because that’s what the data probabilities for his group-type or associations suggest.
All in all, this is why discrimination is the single biggest problem facing the artificial intelligence field. It’s all the more imposing, we’d add, in sectors where AI and big data is being used to assess or reduce risk, such as insurance. If you smoke, live in a dodgy area and have a penchant for chocolate or wine, you might become uninsurable. Nor does the big data solutionist mindset address the fact that the rich, who can afford to pay the premiums, can get away with the wrong behaviours for as long as they have the money to fund the privileges.

And since algos can’t judge exceptions, or be appealed to in exceptional circumstance without the intervention of a costly human go-between, the only work-around is for the discriminated to agree to an extended period of extreme surveillance and restricted freedom....MORE
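The pattern Kaminska is describing, an algorithm labeling an individual a likely defaulter purely because of the base rate for his group-type, can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual model; the postcodes, rates, and threshold below are all invented:

```python
# Hypothetical illustration of group-base-rate scoring: the model returns
# the historical default rate for the applicant's group and ignores the
# individual's own record entirely. All names and numbers are invented.

GROUP_DEFAULT_RATES = {
    "postcode_A": 0.02,  # invented rate for an affluent area
    "postcode_B": 0.35,  # invented rate for a "dodgy area"
}

THRESHOLD = 0.25  # invented cutoff above which the algo flags a "defaulter"


def naive_risk_score(applicant: dict) -> float:
    """Score by group membership alone; personal history never enters."""
    return GROUP_DEFAULT_RATES[applicant["postcode"]]


def is_flagged(applicant: dict) -> bool:
    return naive_risk_score(applicant) > THRESHOLD


# Two applicants with identical personal records, different postcodes:
alice = {"name": "Alice", "postcode": "postcode_A", "missed_payments": 0}
bob = {"name": "Bob", "postcode": "postcode_B", "missed_payments": 0}

print(is_flagged(alice))  # False
print(is_flagged(bob))    # True -- flagged on group membership alone
```

Bob's spotless record changes nothing, which is exactly the "no capacity to judge exceptions" problem: the only variable the model consults is the one he cannot change.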
And the New Yorker:
Silicon Valley Has an Empathy Vacuum
Silicon Valley seems to have lost a bit of its verve since the Presidential election. The streets of San Francisco—spiritually part of the Valley—feel less crowded. Coffee-shop conversations are hushed. Everything feels a little muted, an eerie quiet broken by chants of protesters. It even seems as if there are more parking spots. Technology leaders, their employees, and those who make up the entire technology ecosystem seem to have been shaken up and shocked by the election of Donald Trump.
One conversation has centered on a rather simplistic narrative of Trump as an enemy of Silicon Valley; this goes along with a self-flagellating regret that the technology industry didn’t do enough to get Hillary Clinton into the White House. Others have decided that the real villains are Silicon Valley giants, especially Twitter, Facebook, and Google, for spreading fake news stories that vilified Clinton and helped elect an unpopular President.
These charges don’t come as a surprise to me. Silicon Valley’s biggest failing is not poor marketing of its products, or follow-through on promises, but, rather, the distinct lack of empathy for those whose lives are disturbed by its technological wizardry. Two years ago, on my blog, I wrote, “It is important for us to talk about the societal impact of what Google is doing or what Facebook can do with all the data. If it can influence emotions (for increased engagements), can it compromise the political process?”
Perhaps it is time for those of us who populate the technology sphere to ask ourselves some really hard questions. Let’s start with this: Why did so many people vote for Donald Trump? Glenn Greenwald, the firebrand investigative journalist writing for The Intercept, and the documentary filmmaker Michael Moore have listed many reasons Clinton lost. Like Brexit, the election of Donald Trump has focussed attention on the sense that globalization has eroded the real prospects and hopes of the working class in this country. Globalization is a proxy for technology-powered capitalism, which tends to reward fewer and fewer members of society.
My hope is that we in the technology industry will look up from our smartphones and try to understand the impact of whiplashing change on a generation of our fellow-citizens who feel hopeless and left behind. Instead, I read the comments of Balaji Srinivasan, the C.E.O. of the San Francisco-based Bitcoin startup 21 Inc., telling the Wall Street Journal columnist Christopher Mims that he feels more connected to people in his “Stanford network” around the globe than to those in California’s Central Valley: “There will be a recognition that if we don’t have control of the nation state, we should reduce the nation state’s power over us.”
It’s hard to think about the human consequences of technology as a founder of a startup racing to prove itself or as a chief executive who is worried about achieving the incessant growth that keeps investors happy. Against the immediate numerical pressures of increasing users and sales, and the corporate pressures of hiring the right (but not too expensive) employees to execute your vision, the displacement of people you don’t know can get lost.
However, when you are a data-driven oligarchy like Facebook, Google, Amazon, or Uber, you can’t really wash your hands of the impact of your algorithms and your ability to shape popular sentiment in our society. We are not just talking about the ability to influence voters with fake news. If you are Amazon, you have to acknowledge that you are slowly corroding the retail sector, which employs many people in this country. If you are Airbnb, no matter how well-meaning your focus on delighting travellers, you are also going to affect hotel-industry employment.
Otto, a Bay Area startup that was recently acquired by Uber, wants to automate trucking—and recently wrapped up a hundred-and-twenty-mile driverless delivery of fifty thousand cans of beer between Fort Collins and Colorado Springs. From a technological standpoint it was a jaw-dropping achievement, accompanied by predictions of improved highway safety. From the point of view of a truck driver with a mortgage and a kid in college, it was a devastating “oh, shit” moment. ...MORE
That quote from the Andreessen Horowitz/21 Inc. fellow in the WSJ's "New Populism and Silicon Valley on a Collision Course" is representative of a lot of Silicon Valley thinking and is worth repeating in full:
...To many in Silicon Valley, this is just part of inexorable progress. Electing Mr. Trump won’t shield his supporters from the reality that they are now competing with every other worker on Earth, says Balaji Srinivasan, a board partner at venture-capital firm Andreessen Horowitz and CEO of bitcoin startup 21 Inc.

Mr. Srinivasan views the collision between tech culture and Mr. Trump’s populist movement as inevitable, and potentially so divisive that tech’s global elites should effectively secede from their respective countries, an idea he calls “the ultimate exit.”

Already, he says, elites in Silicon Valley are more connected to one another and to their counterparts around the globe than to non-techies in their midst or nearby. “My Stanford network connects to Harvard and Beijing more than [California’s] Central Valley,” says Mr. Srinivasan. Eventually, he argues, “there will be a recognition that if we don’t have control of the nation state, we should reduce the nation state’s power over us.”...
We mentioned Srinivasan in this weekend's "Why the FT's Izabella Kaminska Won't Be Invited to the Andreessen-Horowitz Christmas Party, Redux". She's pretty sure, and makes a pretty good case, that Mr. S. is running a bit of an Emperor's New Clothes operation.
Fortunately, Artificial Intelligence is still in its formative stages (see "Investing AI: 'Why Machines Still Can’t Learn So Good'" if interested), and there is time to program into it some of the ideas raised in "Algorithmic discrimination".
And come to think of it, the computer scientist profiled in yesterday's "The Big Questions We'd Better Figure Out, Part 1: How Do We Humanize The Algorithms That Will Rule Us?" is closer to Ms. Kaminska's point of view than any of the writers we've come across this week, despite their very different career paths.
Go figure.