Saturday, May 5, 2018

"Searching for a Future Beyond Facebook"

We're seeing a lot of "Beyond" stories.
Beyond the Cloud. Beyond Cobalt. Beyond Capitalism, etc.  
And that's just the "C's."
There seems to be a generalized disenchantment that is coming out sideways. We'll have more on the technological and sociological zeitgeist, and how to make a buck off it, later this year.

For today, here's Longreads:
If we want to liberate ourselves from the tech monopolies, we have to figure out what to do with our data.

For the better part of two decades, an important set of assumptions has underwritten our use of the internet. In exchange for being monitored — to what degree, many people still have no idea — we would receive free digital services. We would give up our privacy, but our data and our rights, unarticulated though they might be, would be respected. This is the simple bargain that drove the development of the social web and rewarded its pioneers — Facebook, Google, and the many apps and services they’ve swallowed up — with global user bases and multi-billion-dollar fortunes.


Now that bargain has been called into question by the scandal surrounding Facebook and the data-hungry political consultancy Cambridge Analytica. Or at least, it should have been. But rather than turning attention to the profound structural issues surrounding surveillance capitalism, mainstream media — along with the U.S. Congress — largely centered this affair on Facebook’s stewardship of user data. The presumption is that Facebook has a right to our information; it simply mishandled it in this case, handing it over to a nefarious actor. Facebook executives did a penitent tour through the halls of media and the Capitol, offering apologies and begging for the public’s forgiveness. And then, this week at Facebook’s developer conference, F8, they’ll close off some of their data, make some small concessions, then launch a new commercial analytics app.

The number of victims in this supposed Cambridge Analytica “breach” was first pegged at 50 million, but Facebook has since revised it upward to 87 million. In another announcement, Facebook said that nearly all of its 2.2 billion users had their public profiles scraped, meaning that some of their basic personal information was gathered by — well, we have no way to know by whom.

Both of these events are significant, but the latter actually speaks more acutely to the crisis surrounding Facebook, for the truth is that our personal information has long been for sale — through data brokers and other shadowy entities — to any commercial or governmental actor that might be interested. The shocking part of the Cambridge Analytica scandal is that it has torn the veil away from this arrangement. For the first time, many people not only have a sense of what data is being collected about them but also how it’s sold and what it can do — in this case, contribute to the election of a singularly disturbing character as president.

At least, that’s the presiding narrative if you believe that Cambridge Analytica’s psychographic targeting techniques are effective persuasive tools. Despite a raft of excellent reporting, it remains hard to know how a company like Cambridge Analytica works and what influence it has in the real world. The undercover videos filmed by a British news outlet of CA executives bragging about swaying elections all over the globe might be chalked up to salesmanship. And some respectable scholars and industry figures have questioned whether psychographic targeting does much at all. But to put it simply, advertising would not be a hugely profitable industry if it were total hokum, and Facebook made just shy of $40 billion last year in digital advertising largely on the strength of connecting advertisers directly with their audiences, all thanks to an increasingly granular set of microtargeting tools. While Facebook has since restricted some of the tools it offers to advertisers, the company still allows for extensive targeting options. Advertisers can upload what are known as “custom audiences,” so if a company like Cambridge Analytica has a large dataset of voter records that it wants to connect to Facebook profiles, it can upload it to Facebook and do just that. CA, or anyone else for that matter, could also use Facebook’s lookalike tool to then target people who resemble their original dataset, thus expanding their potential audience.
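
To make the mechanics above concrete, here is a deliberately small, self-contained sketch (not Facebook's actual system, whose internals are proprietary) of the two steps the paragraph describes: matching an uploaded "custom audience" of outside records against platform users via hashed identifiers, and then expanding that seed with a crude "lookalike" step based on profile-feature similarity. Every name, email address, and feature vector below is invented for illustration.

```python
# Toy illustration (not Facebook's real pipeline) of custom-audience matching
# followed by a crude "lookalike" expansion. All identifiers and numbers are
# made up for the example.
import hashlib
import math

def hashed(identifier: str) -> str:
    """Normalize and hash an identifier, as ad platforms commonly do before matching."""
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical platform users: id -> (email, small vector of interest scores).
platform_users = {
    "u1": ("alice@example.com", [0.90, 0.10, 0.40]),
    "u2": ("bob@example.com",   [0.80, 0.20, 0.50]),
    "u3": ("carol@example.com", [0.10, 0.90, 0.30]),
    "u4": ("dave@example.com",  [0.85, 0.15, 0.45]),
}

# Hypothetical advertiser file (e.g. voter records); only hashes get uploaded.
uploaded_hashes = {hashed(e) for e in ["Alice@Example.com ", "bob@example.com"]}

# Step 1: match the uploaded hashes against the platform's own hashed identifiers.
hash_to_user = {hashed(email): uid for uid, (email, _) in platform_users.items()}
custom_audience = {hash_to_user[h] for h in uploaded_hashes if h in hash_to_user}

# Step 2: "lookalike" expansion -- rank the remaining users by how similar their
# interest vectors are to the centroid of the matched seed audience.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

seed_vectors = [platform_users[uid][1] for uid in custom_audience]
centroid = [sum(column) / len(column) for column in zip(*seed_vectors)]

lookalikes = sorted(
    (uid for uid in platform_users if uid not in custom_audience),
    key=lambda uid: cosine(platform_users[uid][1], centroid),
    reverse=True,
)

print("matched custom audience:", custom_audience)  # e.g. {'u1', 'u2'}
print("closest lookalike first:", lookalikes)        # ['u4', 'u3']
```

Real platforms match on many more identifiers and build lookalikes from thousands of behavioral signals rather than a three-number vector, but the basic shape, hash-match a seed audience and then expand it by similarity, is the mechanism the paragraph is describing.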

Pessimism over the effectiveness of the Cambridge Analytica campaign is tied in part to an understandable reluctance to believe that we can be persuaded. Steeped in advertising all our lives, we are supposed to be cynically immune to its charms. But a similar blitheness once attended how we treated lies and fake news, which many thought had gone the way of the chain letter. Instead, Facebook proved all too fertile a platform for cultivating the most extreme, and frequently the most unbelievable, views. Cambridge Analytica was targeting people who readily consumed this sort of dubious far-right media, people who its model showed were also sympathetic to Trump or who might be persuaded not to vote at all. CA was doing this at a huge scale, bombarding millions of people with an equally vast array of ad variations. As even some Trump partisans have noted, you need only persuade a small percentage to move the needle in some of the closely contested states that Trump unexpectedly won.

Even if Cambridge Analytica didn’t have much impact in practice, it has certainly altered the discourse, becoming a kind of post-election emblem of all that can go wrong in the personal data economy. Whereas we once dismissed the ads that follow us around the internet as noisome stalkers (who were sometimes hawking a pair of shoes that we had already expressed interest in), they now seem capable of something far more pernicious. Rather than a prodding offer for a Caribbean vacation, internet ads might now be carefully engineered political messages paid for by some foreign oligarch. We simply don’t know.

In Europe, the situation, along with the regulatory system, is far more developed — perhaps because the big tech giants are seen as powerful foreign players who don’t need to be coddled by European governments. This month, the continent will implement the General Data Protection Regulation, or GDPR, an effort considered to be at the forefront of establishing personal data rights and legal provisions for users to, for instance, demand that companies delete the data they hold about them. The GDPR is undoubtedly a positive step forward, and it’s prompted Mark Zuckerberg and other Facebook executives to indicate that they’ll adopt at least the spirit of the law for the rest of Facebook’s user base.

But we should also be wary of short-term incrementalism. True, adopting the GDPR would provide a modicum of data privacy rights currently unavailable to American users. And there are additional steps that could be taken to grant users more agency in the data marketplace. Giving people control over their social graphs — the records of who they know, essentially — would allow them to easily transition to another service. That would be a nightmare for Facebook but a key competitive tool for the next company hoping to challenge it. Tamping down the industry’s most extreme excesses through prudent regulation, including the occasional sweeping fine, might encourage more ethical behavior. Yet these measures could also entrench the Facebook and Google duopoly, both of which boast bottomless fortunes and deep rosters of Beltway lobbyists.
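
As a rough illustration of what control over our social graphs could mean in practice, the sketch below exports a user's connections as a plain, machine-readable document that a competing service could import. The schema label, field names, and functions are invented for the example; a real portability scheme would also have to handle consent from the people listed in the graph.

```python
# A minimal sketch of "social graph portability": one user's connections exported
# as a plain, machine-readable document that another service could import.
# The schema label and field names are invented for illustration.
import json

def export_social_graph(user_id, connections):
    """Serialize one user's connections into a portable JSON document."""
    return json.dumps({
        "schema": "portable-social-graph/0.1",  # hypothetical schema identifier
        "user": user_id,
        "connections": sorted(connections),
    }, indent=2)

def import_social_graph(document):
    """Rebuild the user's edges on a different service from the exported document."""
    data = json.loads(document)
    return {(data["user"], friend) for friend in data["connections"]}

exported = export_social_graph("alice", {"bob", "carol", "dave"})
print(exported)                       # the document the user takes with them
print(import_social_graph(exported))  # {('alice', 'bob'), ('alice', 'carol'), ('alice', 'dave')}
```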

Ultimately, to challenge Facebook, Google, and the many unknown players of the data economy, we must devise new business models and structural incentives that aren’t rooted in manipulation and coercion; that don’t depend on the constant surveillance of users, on gathering information on everything they read and purchase, and on building that information into complex dossiers designed to elicit some action — a click, a purchase, a vote. We must move beyond surveillance capitalism and its built-in inequities. In the short term, that might be achieved by turning to basic subscription services, by paying for the things we use. But on a longer time horizon, we must consider whether we want to live in a world that converts all of our experiences into machine-readable data — data that doesn’t belong to us, that doesn’t serve us...MUCH MORE