The people who talk-and-tweet loudest about the horrors of mis- and disinformation are the same
ones who spent half a decade lying at every opportunity they could find. At first I tried to take the charitable view that this was psychological projection, that the people going on and on about misinformation on the internet simply hated in others that which they hated in themselves. But what we've seen on a daily basis for six-plus years goes much deeper than that. Read on.
From Harper's Magazine, August 2021:
In the beginning, there were ABC, NBC, and CBS, and they were good. Midcentury American man could come home after eight hours of work and turn on his television and know where he stood in relation to his wife, and his children, and his neighbors, and his town, and his country, and his world. And that was good. Or he could open the local paper in the morning in the ritual fashion, taking his civic communion with his coffee, and know that identical scenes were unfolding in households across the country.
Over frequencies our American never tuned in to, red-baiting, ultra-right-wing radio preachers hyperventilated to millions. In magazines and books he didn’t read, elites fretted at great length about the dislocating effects of television. And for people who didn’t look like him, the media had hardly anything to say at all. But our man lived in an Eden, not because it was unspoiled, but because he hadn’t considered any other state of affairs. For him, information was in its right—that is to say, unquestioned—place. And that was good, too.
Today, we are lapsed. We understand the media through a metaphor—“the information ecosystem”—which suggests to the American subject that she occupies a hopelessly denatured habitat. Every time she logs on to Facebook or YouTube or Twitter, she encounters the toxic byproducts of modernity as fast as her fingers can scroll. Here is hate speech, foreign interference, and trolling; there are lies about the sizes of inauguration crowds, the origins of pandemics, and the outcomes of elections.
She looks out at her fellow citizens and sees them as contaminated, like tufted coastal animals after an oil spill, with “disinformation” and “misinformation.” She can’t quite define these terms, but she feels that they define the world, online and, increasingly, off.
Everyone scrounges this wasteland for tainted morsels of content, and it’s impossible to know exactly what anyone else has found, in what condition, and in what order. Nevertheless, our American is sure that what her fellow citizens are reading and watching is bad. According to a 2019 Pew survey, half of Americans think that “made-up news/info” is “a very big problem in the country today,” about on par with the “U.S. political system,” the “gap between rich and poor,” and “violent crime.” But she is most worried about disinformation, because it seems so new, and because so new, so isolable, and because so isolable, so fixable. It has something to do, she knows, with the algorithm.
What is to be done with all the bad content? In March, the Aspen Institute announced that it would convene an exquisitely nonpartisan Commission on Information Disorder, co-chaired by Katie Couric, which would “deliver recommendations for how the country can respond to this modern-day crisis of faith in key institutions.” The fifteen commissioners include Yasmin Green, the director of research and development for Jigsaw, a technology incubator within Google that “explores threats to open societies”; Garry Kasparov, the chess champion and Kremlin critic; Alex Stamos, formerly Facebook’s chief security officer and now the director of the Stanford Internet Observatory; Kathryn Murdoch, Rupert Murdoch’s estranged daughter-in-law; and Prince Harry, Prince Charles’s estranged son. Among the commission’s goals is to determine “how government, private industry, and civil society can work together . . . to engage disaffected populations who have lost faith in evidence-based reality,” faith being a well-known prerequisite for evidence-based reality.
The Commission on Information Disorder is the latest (and most creepily named) addition to a new field of knowledge production that emerged during the Trump years at the juncture of media, academia, and policy research: Big Disinfo. A kind of EPA for content, it seeks to expose the spread of various sorts of “toxicity” on social-media platforms, the downstream effects of this spread, and the platforms’ clumsy, dishonest, and half-hearted attempts to halt it. As an environmental cleanup project, it presumes a harm model of content consumption. Just as, say, smoking causes cancer, consuming bad information must cause changes in belief or behavior that are bad, by some standard. Otherwise, why care what people read and watch?
Big Disinfo has found energetic support from the highest echelons of the American political center, which has been warning of an existential content crisis more or less constantly since the 2016 election. To take only the most recent example: in May, Hillary Clinton told the former Tory leader Lord Hague that “there must be a reckoning by the tech companies for the role that they play in undermining the information ecosystem that is absolutely essential for the functioning of any democracy.”
Somewhat surprisingly, Big Tech agrees. Compared with other, more literally toxic corporate giants, those in the tech industry have been rather quick to concede the role they played in corrupting the allegedly pure stream of American reality. Only five years ago, Mark Zuckerberg said it was a “pretty crazy idea” that bad content on his website had persuaded enough voters to swing the 2016 election to Donald Trump. “Voters make decisions based on their lived experience,” he said. “There is a profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news.” A year later, suddenly chastened, he apologized for being glib and pledged to do his part to thwart those who “spread misinformation.”
Denial was always untenable, for Zuckerberg in particular. The so-called techlash, a season of belatedly brutal media coverage and political pressure in the aftermath of Brexit and Trump’s win, made it difficult. But Facebook’s basic business pitch made denial impossible. Zuckerberg’s company profits by convincing advertisers that it can standardize its audience for commercial persuasion. How could it simultaneously claim that people aren’t persuaded by its content? Ironically, it turned out that the big social-media platforms shared a foundational premise with their strongest critics in the disinformation field: that platforms have a unique power to influence users, in profound and measurable ways. Over the past five years, these critics helped shatter Silicon Valley’s myth of civic benevolence, while burnishing its image as the ultra-rational overseer of a consumerist future.
Behold, the platforms and their most prominent critics both proclaim: hundreds of millions of Americans in an endless grid, ready for manipulation, ready for activation. Want to change an output—say, an insurrection, or a culture of vaccine skepticism? Change your input. Want to solve the “crisis of faith in key institutions” and the “loss of faith in evidence-based reality”? Adopt a better content-moderation policy. The fix, you see, has something to do with the algorithm.
In the run-up to the 1952 presidential election, a group of Republican donors were concerned about Dwight Eisenhower’s wooden public image. They turned to a Madison Avenue ad firm, Ted Bates, to create commercials for the exciting new device that was suddenly in millions of households. In Eisenhower Answers America, the first series of political spots in television history, a strenuously grinning Ike gave pithy answers to questions about the IRS, the Korean War, and the national debt. The ads marked the beginning of mass marketing in American politics. They also introduced ad-industry logic into the American political imagination: the idea that the right combination of images and words, presented in the right format, can predictably persuade people to act, or not act.
This mechanistic view of humanity was not without its skeptics. “The psychological premise of human manipulability,” Hannah Arendt wrote, “has become one of the chief wares that are sold on the market of common and learned opinion.” To her point, Eisenhower, who carried 442 electoral votes in 1952, would have likely won even if he hadn’t spent a dime on TV.
What was needed to quell doubts about the efficacy of advertising among people who buy ads was empirical proof, or at least the appearance thereof. Modern political persuasion, the sociologist Jacques Ellul wrote in his landmark 1962 study of propaganda, is defined by its aspirations to scientific rigor, “the increasing attempt to control its use, measure its results, define its effects.” Customers seek persuasion that audiences have been persuaded.
Luckily for the aspiring Cold War propagandist, the American ad industry had polished up a pitch. It had spent the first half of the century trying to substantiate its worth through association with the burgeoning fields of scientific management and laboratory psychology. Cultivating behavioral scientists and appropriating their jargon, writes the economist Zoe Sherman, allowed ad sellers to offer “a veneer of scientific certainty” to the art of persuasion:
They asserted that audiences, like the workers in a Taylorized workplace, need not be persuaded through reason, but could be trained through repetition to adopt the new consumption habits desired by the sellers.

The profitable relationship between the ad industry and the soft sciences took on a dark cast in 1957, when the journalist Vance Packard published The Hidden Persuaders, his exposé of “motivation research”—then the bleeding edge of collaboration between Madison Avenue and research psychology. The alarming public image Packard’s bestseller created—ad men wielding some unholy concoction of Pavlov and Freud to manipulate the American public into buying toothpaste—is still with us today. And the idea of the manipulability of the public is, as Arendt noted, an indispensable part of the product. Advertising is targeted at consumers, but sold to businesses.
Packard’s reporting was based on what motivation researchers told him. Among their own motivations, hardly hidden, was a desire to appear clairvoyant. In a late chapter, Packard admits as much:
Some of the researchers were sometimes prone to oversell themselves—or in a sense to exploit the exploiters. John Dollard, [a] Yale psychologist doing consulting work for industry, chided some of his colleagues by saying that those who promise advertisers “a mild form of omnipotence are well received.”

Today, an even greater aura of omnipotence surrounds the digital ad maker than did his print and broadcast forebears. According to Tim Hwang, a lawyer who formerly led public policy at Google, this image is maintained by two “pillars of faith”: that digital ads are both more measurable and more effective than other forms of commercial persuasion. The asset that structures digital advertising is attention. But, Hwang argues in his 2020 book Subprime Attention Crisis, attention is harder to standardize, and thus worth much less as a commodity, than the people buying it seem to think. An “illusion of greater transparency” offered to ad buyers hides a “deeply opaque” marketplace, automated and packaged in unseen ways and dominated by two grimly secretive companies, Facebook and Google, with every interest in making attention seem as uniform as possible. This is perhaps the deepest criticism one can make of these Silicon Valley giants: not that their gleaming industrial information process creates nasty runoff, but that nothing all that valuable is coming out of the factory in the first place.
Look closer and it’s clear that much of the attention for sale on the internet is haphazard, unmeasurable, or simply fraudulent. Hwang points out that despite being exposed to an enormous amount of online advertising, the public is largely apathetic toward it. More than that, online ads tend to produce clicks among people who are already loyal customers. This is, as Hwang puts it, “an expensive way of attracting users who would have purchased anyway.” Mistaking correlation for causation has given ad buyers a wildly exaggerated sense of their ability to persuade.
So too has the all-important consumer data on which targeted advertising is based, and which research has exposed as frequently shoddy or overstated. In recently unsealed court documents, Facebook managers disparaged the quality of their own ad targeting for just this reason. An internal Facebook email suggests that COO Sheryl Sandberg knew for years that the company was overstating the reach of its ads.
Why, then, do buyers love digital advertising so much? In many cases, Hwang concludes, it’s simply because it looks good at a meeting, blown up on an analytics dashboard: “It makes for great theater.” In other words, the digital-advertising industry relies on our perception of its ability to persuade as much as on any measurement of its ability to actually do so. This is a matter of public relations, of storytelling. And here, the disinformation frame has been a great asset....
....MUCH MORE, he's just getting started.