Monday, January 27, 2020

It's Time For Big Tech To Decide If They Are Platforms or Publishers (FB; GOOG; TWTR; AMZN)

Okay, maybe allowing big tech to make the call is too weak-willed. Maybe they should be told they are one or the other.
From the University of Chicago's ProMarket, January 17:

How to Change Section 230 and Make Digital Platforms More Accountable
Former Vice President and current Democratic presidential candidate Joe Biden has promised that, if elected, he would “revoke immediately” the 1996 provision that gives tech companies like Facebook protection from civil liability for harmful or misleading content published on their platforms. The Stigler Center Committee on Digital Platforms has a proposal to fix the problem.
In a surprising statement, former Vice President and current Democratic presidential candidate Joe Biden announced that, if elected president, he’d seek to repeal one of the most crucial pieces of legislation related to digital platforms, Section 230 of the Communications Decency Act. According to this 1996 provision, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Biden, who has personally clashed with Facebook to have a defamatory political ad removed, said he wants to make social networks more accountable for the content they host, the way newspapers already are.

“[The New York Times] can’t write something you know to be false and be exempt from being sued. But [Mark Zuckerberg] can. The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms,” Biden said in an interview with The New York Times.

“If there’s proven harm that Facebook has done, should someone like Mark Zuckerberg be submitted to criminal penalties, perhaps?” The Times’ editors asked Biden, who replied: “He should be submitted to civil liability and his company to civil liability, just like you would be here at The New York Times.”
Structural reform of Section 230 is also one of the policy proposals made by the recent Stigler Center Committee on Digital Platforms’ final report. The independent and non-partisan Committee, composed of more than 30 highly-respected academics, policymakers, and experts, spent over a year studying in-depth how digital platforms such as Google and Facebook impact the economy and antitrust laws, data protection, the political system, and the news media industry.

What follows is the Committee’s analysis of Section 230, its origins, and how digital platforms have used it far beyond its original scope, as well as the Committee’s proposals on how to amend it to improve the accountability of digital platforms, reduce the unfair subsidy they receive through the legal shield provided to them by Section 230, and improve the quality of the public debate without limiting freedom of speech.
The Origin of Section 230
Section 230 was enacted as part of the Telecommunications Act of 1996 to govern internet service providers (ISPs). In 1996, the ISP was to the ordinary publisher something like what the scooter is to the automobile today: a useful invention, but one hardly on the verge of dominance.

Tarleton Gillespie writes: “At the time Section 230 was crafted, few social media platforms existed. US lawmakers were regulating a web largely populated by ISPs and web ‘publishers’—amateurs posting personal pages, companies designing stand-alone websites, and online communities having discussions.”

Although sometimes viewed as a sweeping libertarian intervention, Section 230 actually began life as a smut-busting provision: an amendment for the “Protection for Private Blocking and Screening of Offensive Material.” Its purpose was to allow and encourage internet service providers to create safe spaces, free of pornography, for children.

The goals at the time of adoption were (1) to give new “interactive computer services” breathing room to develop without lawsuits “to promote the continued development of the Internet,” while (2) also encouraging them to filter out harmful content without fear of getting into trouble for under- or over-filtering. Thus, Section 230 is both a shield to protect speech and innovation and a sword to attack speech abuses on platforms.

The shield part is embodied in Section 230(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This is not blanket immunity for the distribution of content: platforms are still liable for their own content, and for federal crimes and copyright violations related to third-party content. The immunity is really limited to the speech-related harms that publishers ordinarily face, such as defamation and intentional infliction of emotional distress. In other words, a platform like Facebook remains liable for distributing child pornography, which is federal criminal content. It also remains liable for Facebook-authored defamatory content.

Facebook cannot, however, be held secondarily liable for defamatory content posted by its users.

The sword part of Section 230 is contained in Section 230(c)(2)(A): “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

This was designed to avoid the paradoxical situation in which an intermediary tries to moderate its platform to reduce harmful content, but then is subject to liability because it has exercised editorial control....

Also at ProMarket:
January 23
“We Were Naïve,” Says FCC Chair Who Oversaw the Creation of Section 230

The Various Tribes Have Chosen The Platform They Will Defend (FB; TWTR; GOOG; AMZN)