Of course the fear of misinformation is overblown. And it's not even a fear. It's a cudgel used by sociopathic totalitarians to beat people and ideas they disagree with. If the authoritarian spectrum, from the Karens at the local level to the Davos crowd of transnationals, keeps it up, they won't believe what is going to happen. People get tired of both the attitudes and the platitudes.
From Undark, October 24:
The influence of false and inflammatory online content is overblown, says researcher David Rothschild.
In June, the journal Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper’s authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterize as three common misperceptions: That the average person’s exposure to false and inflammatory content is high, that algorithms are driving this exposure, and that many broader problems in society are predominantly caused by social media.
“People who show up to YouTube to watch baking videos and end up at Nazi websites — this is very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher with the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that edge cases don’t matter, he and his colleagues wrote, but treating them as typical can contribute to misunderstandings — and divert attention away from more pressing issues.
Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.
Undark: What motivated you and your co-authors to write this perspective?
David Rothschild: The five co-authors on this paper had all been doing a lot of different research in this space for years, trying to understand what it is that is happening on social media: What’s good, what’s bad, and especially understanding how it differs from the stories that we’re hearing from the mainstream media and from other researchers.
Specifically, we were narrowing in on these questions about what the experience of a typical consumer is, a typical person versus a more extreme example. A lot of what we saw, or a lot of what we understood — it was referenced in a lot of research — really described a pretty extreme scenario.
The second part of that is a lot of emphasis around algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of harmful content is coming not from an algorithm pushing it on people. Actually, it’s the exact opposite. The algorithm kind of is pulling you towards the center.
And then there are these questions about causation and correlation. A lot of research, and especially mainstream media, conflate the proximate cause of something with the underlying cause of it.
There’s a lot of people saying: “Oh, these yellow vest riots are happening in France. They were organized on Facebook.” Well, there’s been riots in France for a couple hundred years. They find ways to organize even without the existence of social media.
The proximate cause — the proximate way in which people were organizing around [January 6] — was certainly largely online. But then the question comes, could these things have happened in an offline world? And these are tricky questions.
Writing a perspective here in Nature really allows us to then get to stakeholders outside of academia to really address the broader discussion because there’s real world consequences. Research gets allocated, funding gets allocated, platforms get pressure to solve the problem that people discuss.
UD: Can you talk about the example of the 2016 election: What you found about it and also the role that perhaps the media played in putting forth information that was not entirely accurate?
DR: The bottom line is that what the Russians did in 2016 is certainly interesting and newsworthy. They invested pretty heavily in creating sleeper Facebook organizations that posted viral content and then slipped in a bunch of non-true fake news towards the end. Certainly meaningful and certainly something that I understand why people were intrigued by. But ultimately, what we wanted to say is, “How much impact could that plausibly have?”
Impact is really hard [to measure], but at least we can put in perspective about people’s news diets and showcase that the amount of views of Russian direct misinformation is just a microscopic portion of people’s consumption of news on Facebook — let alone their consumption of Facebook, let alone their consumption of news in general, which Facebook is just a tiny portion of. Especially in 2016, the vast majority of people, even younger people, were still consuming way more news on television than they were on social media, let alone online.
While we agree that any fake news is probably not good, there is ample research to see that repeated interaction with content is really what drives underlying causal understanding of the world, narratives, however you want to describe it. Getting occasionally hit by some fake news, and at very low numbers for the typical consumer, is just not the driving force.
UD: My impression from reading your Nature paper is that you found that journalists are spreading misinformation about the effects of misinformation. Is that accurate? And why do you think this is happening if so?
DR: Ultimately, it’s a good story. And nuance is hard, very hard, and negative is popular....
....MUCH MORE
If interested see also:
January 2021
Always, Always Remember That Control Freaks Are Mentally Ill
And not nuts like the slightly ditzy grandmas of stage and screen were, but dangerously off kilter — they will try to hurt you if they get the chance — sometimes psychotically so.
You won't be disappointed if you approach this stuff thinking "People will do whatever they think they can get away with." China will invade Taiwan when they think they can get away with it. The corollary is also true: guys don't attack the world heavyweight boxing champion because they don't think they'd get away with it.
And if you find someone not pushing their advantage to dominate and hurt others, rejoice in tiny treasures and simple pleasures.
Rules for Radicals was originally conceived by Saul Alinsky as part of a larger war against politicians who got in his way. The tactics and techniques have since been formalized and disseminated in colleges and universities, and are now used against pretty much anyone one disagrees with.
From Rules for Radicals, seventh chapter: Tactics...