"Data Nationalization in the Shadow of Social Credit Systems"

Frank Pasquale at Law and Political Economy, June 18:

The political economy of digitization is a fraught topic. Scholars and policymakers have disputed the relative merits of centralization and decentralization. Do we want to encourage massive firms to become even bigger, so they can accelerate AI via increasingly comprehensive data collection, analysis, and use? Or do we want to trust-bust the digital economy, encouraging competitors to develop algorithms that can “learn” more from less data? I recently wrote on this tension, exploring the pros and cons of each approach.
However, there are some ways out of the dilemma. Imagine if we could require large firms to license data to potential competitors in both the public and private sectors. That may sound like a privacy nightmare. But anonymization could allay some of these concerns, as it has in the health care context. Moreover, the first areas opened up to such mandated sharing may not even be personal data. Sharing the world’s best mapping data beyond the Googleplex could unleash innovation in logistics, real estate, and transport. Some activists have pushed to characterize Google’s trove of digitized books as an essential facility, which it would be required to license at fair, reasonable, and non-discriminatory (FRAND) rates to other firms aspiring to categorize, sell, and learn from books. Fair use doctrine could provide another approach here, as Amanda Levendowski argues.
In a recent issue of Logic, Ben Tarnoff has gone beyond the essential facilities argument to make a case for nationalization. Tarnoff believes that nationalized data banks would allow companies (and nonprofits) to “continue to extract and refine data—under democratically determined rules—but with the crucial distinction that they are doing so on our behalf, and for our benefit.” He analogizes such data to natural resources, like minerals and oil. Just as the Norwegian sovereign wealth fund and Alaska Permanent Fund socialize the benefits of oil and gas, public ownership and provision of data could promote more equitable sharing of the plenitude that digitization ought to enable.
Many scholars have interrogated the data/oil comparison. They usually focus on the externalities of oil use, such as air and water pollution and climate change. But there are also downsides to data’s concentration and subsequent dissemination. Democratic control will not guarantee privacy protections. Even when directly personally identifiable information is removed from databases, anonymization can sometimes be reversed. Both governments and corporations will be tempted to engage in “modulation”—what Julie Cohen describes as a pervasive form of influence on the beliefs and behaviors of citizens. Such modulation is designed to “produce a particular kind of subject[:] tractable, predictable citizen-consumers whose preferred modes of self-determination play out along predictable and profit-generating trajectories.” Tarnoff acknowledges this dark possibility, and I’d like to dig a bit deeper to explore how it could be mitigated.
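To make the re-identification point concrete, here is a minimal sketch of a linkage attack, the technique behind the best-known de-anonymization results: an “anonymized” table that retains quasi-identifiers (ZIP code, birth date, sex) is joined against a public roster that carries names. All records, names, and field choices below are invented for illustration; no real dataset is implied.

```python
# Minimal linkage-attack sketch: re-identifying an "anonymized" table by
# joining it with a public record on shared quasi-identifiers.
# All records below are invented for illustration.

anonymized_health = [  # direct identifiers removed, quasi-identifiers kept
    {"zip": "02138", "dob": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1982-03-02", "sex": "M", "diagnosis": "asthma"},
]

public_roster = [  # e.g., a voter roll: names plus the same quasi-identifiers
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-21", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1982-03-02", "sex": "M"},
]

def reidentify(anon_rows, roster):
    """Join the tables on (zip, dob, sex); a unique match restores a name."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    names_by_key = {}
    for person in roster:
        names_by_key.setdefault(key(person), []).append(person["name"])
    for row in anon_rows:
        matches = names_by_key.get(key(row), [])
        if len(matches) == 1:  # quasi-identifier combination is unique
            yield matches[0], row["diagnosis"]

for name, diagnosis in reidentify(anonymized_health, public_roster):
    print(f"{name} -> {diagnosis}")
```

This is the mechanism behind Latanya Sweeney’s famous finding that ZIP code, birth date, and sex alone uniquely identify most of the U.S. population. Any mandated data-sharing regime would have to guard against exactly this kind of join.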
Reputational Economies of Social Credit and Debt
Modulation can play out in authoritarian, market, and paternalistic modes. In its mildest form, such modulation relies on nudges plausibly based on the nudged person’s own goals and aspirations—a “libertarian paternalism” aimed at making good choices easier. In market mode, the highest bidder for some set of persons’ attention enjoys the chance to influence them. Each of these modes is problematic, as I have noted in articles and a book. However, I think that authoritarian modulation is the biggest worry we face as we contemplate the centralization of data in repositories owned by (or accessible to) governments. China appears to be experimenting with such a system, and provides some excellent examples of what data centralizers should constitutionally prohibit as they develop the data gathering power of the state.
The Chinese social credit system (SCS) is one of the most ambitious systems of social control ever proposed. Jay Stanley, a senior policy analyst at the ACLU’s Speech, Privacy & Technology Project, has summarized a series of disturbing news stories on China’s “Planning Outline for the Construction of a Social Credit System.” As Stanley observes, “Among the things that will hurt a citizen’s score are posting political opinions without prior permission, or posting information that the regime does not like.” At least one potential version of the system would also be based on peer scoring. That is, if an activist criticized the government or otherwise deviated from prescribed behavior, not only would her score go down, but her family and friends’ scores would also decline. This algorithmic contagion bears an uncomfortable resemblance to theories of collective punishment.
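The “algorithmic contagion” worry can be stated precisely. Below is a hypothetical sketch, not a description of any actual SCS implementation: a penalty applied to one node of a social graph is propagated, attenuated, to that person’s contacts. The graph, the scores, and the decay factor are all invented assumptions for illustration.

```python
# Hypothetical sketch of peer-score "contagion": a penalty applied to one
# person also reduces the scores of their contacts, at a decaying rate.
# The graph, starting scores, decay factor, and depth are all invented.

social_graph = {
    "activist": ["sister", "friend"],
    "sister": ["activist"],
    "friend": ["activist", "coworker"],
    "coworker": ["friend"],
}
scores = {person: 100.0 for person in social_graph}

def apply_penalty(graph, scores, target, penalty, decay=0.5, depth=2):
    """Subtract `penalty` from `target`, then propagate a decayed penalty
    outward through the graph, breadth-first, up to `depth` hops."""
    frontier, seen = {target}, {target}
    for hop in range(depth + 1):
        hop_penalty = penalty * (decay ** hop)
        for person in frontier:
            scores[person] -= hop_penalty
        frontier = {n for p in frontier for n in graph[p]} - seen
        seen |= frontier

apply_penalty(social_graph, scores, "activist", penalty=20.0)
print(scores)
# The activist loses 20 points, her sister and friend lose 10 each, and the
# friend's coworker loses 5 -- punishment reaches people who did nothing.
```

The design choice doing the work here is the decay factor: however small it is made, anyone connected to a dissenter is penalized for the connection itself, which is precisely what theories of collective punishment condemn.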
Admittedly, at least one scholar has characterized the SCS as less fearsome: more “an ecosystem of initiatives broadly sharing a similar underlying logic, than a fully unified and integrated machine for social control.” However, the heavy-handed application of no-travel and no-hotel lists in China does not inspire much confidence. There is no appeal mechanism—a basic aspect of due process in any scored society.
The SCS’s stated aim is to enable the “trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” But the system is not even succeeding on its own terms in many contexts. Message boards indicate that some citizens are gaming the SCS’s data feeds. For example, a bank may send in false information to blackball its best customer, in order to keep that customer from seeking better terms at competing banks. To the extent the system is a black box, there is no way for the victim to find out about the defamation....

MUCH MORE