Monday, May 14, 2018

“In 10 Years, the Surveillance Business Model Will Have Been Made Illegal” (AMZN; FB; GOOG)

From the University of Chicago's ProMarket blog:
The opening panel of the Stigler Center’s annual antitrust conference discussed the source of digital platforms’ power and what, if anything, can be done to address the numerous challenges that their ability to shape opinions and outcomes presents.

Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.

While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.

The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”

A joke regarding the flawed design of Google’s beer and burger emojis received roughly the same amount of time.

Google’s fellow FAANGs also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.

But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power make it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.

Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.

The Race to the Bottom of the Brainstem
“The way to win in Silicon Valley now is by figuring out how to capture human attention. How do you manipulate people’s deepest psychological instincts, so you can get them to come back?” said Tristan Harris, a former design ethicist at Google who has since become one of Silicon Valley’s most influential critics. Harris, who co-founded the Center for Humane Technology, an organization seeking to change the culture of the tech industry, described that industry as an “arms race for basically who’s good at getting attention and who’s better in the race to the bottom of the brainstem to hijack the human animal.”

The proliferation of AI, Harris said, creates an asymmetric relationship between platforms and users. “When someone uses a screen, they don’t really realize they’re walking into an environment where there’s 1,000 engineers on the other side of the screen who asymmetrically know way more about their mind [and] their psychology, have 10 years [of data] about what’s ever gotten them to click, and use AI prediction engines to play chess against that person’s mind. The reason you land on YouTube and wake up two hours later asking ‘What the hell just happened?’ is that Alphabet and Google are basically deploying the best supercomputers in the world—not at climate change, not at solving cancer, but at basically hijacking human animals and getting them to stay on screens.”

This asymmetric relationship, in which one party is able to massively exploit the other, is best exemplified by Facebook, which is akin to a “psychotherapist who knows every single detail in your life, including the details of your inner life, in the sense that it doesn’t just know who you click on at two in the morning and what you post and your TINs and your photos and your family and who you talk to the most and who your friends are. It also intermediates every single one of your communications. It knows what colors your brain lights up to if I give you a red button or a green button or a yellow button. It knows which words activate your psychology. It knows an unprecedented amount of information about what will manipulate you. If there’s ever been a precedent or a need for defining something as being an asymmetric or fiduciary relationship, it’s this one.”...MUCH MORE