Friday, October 23, 2020

The Algo As Manipulator: "More Than a Feeling"

Professor Frank Pasquale at Real Life Magazine:

Emotion detection doesn’t work, but it will try to change your behavior anyway

So many authorities want to use computational power to uncover how you feel. School superintendents have deputized “aggression detectors” to record and analyze voices of children. Human resources departments are using AI to search workers’ and job applicants’ expressions and gestures for “nervousness, mood, and behavior patterns.” Corporations are investing in profiling to “decode” customers, separating the wheat from the chaff, the wooed from the waste. Richard Yonck’s 2017 book Heart of the Machine predicts that the “ability of a car to read and learn preferences via emotional monitoring of the driver will be a game changer.”

Affective computing — the computer-science field’s term for such attempts to read, simulate, predict, and stimulate human emotion with software — was pioneered at the MIT Media Lab by Rosalind Picard in the 1990s and has since become wildly popular as a computational and psychological research program. Volumes like The Oxford Handbook of Affective Computing describe teams that are programming robots, chatbots, and animations to appear to express sadness, empathy, curiosity, and much more. “Automated face analysis” is translating countless images of human expressions into standardized code that elicits certain responses from machines. As affective computing is slowly adopted in health care, education, and policing, it will increasingly judge us and try to manipulate us.
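To make “standardized code” concrete: automated face analysis commonly describes an expression as a set of Facial Action Coding System (FACS) action units and then maps unit combinations to emotion labels. The sketch below is a deliberately simplified, hypothetical version of that mapping step; the prototype table and scoring rule are invented, and the pixel-to-action-unit detection that real systems perform is omitted entirely.

```python
# Hypothetical sketch of the "image -> standardized code -> label" pipeline.
# Real systems infer FACS action units (AUs) from pixels with trained models;
# here detection is stubbed out and the AU-to-emotion table is invented.

# Prototypical AU combinations (illustrative only; the mapping is contested).
EMOTION_PROTOTYPES: dict[str, frozenset[int]] = {
    "happiness": frozenset({6, 12}),        # cheek raiser + lip corner puller
    "surprise":  frozenset({1, 2, 5, 26}),  # brow raisers, lid raiser, jaw drop
    "anger":     frozenset({4, 5, 7, 23}),  # brow lowerer, lid/lip tighteners
}

def label_expression(detected_aus: set[int]) -> str:
    """Return the emotion whose AU prototype best overlaps the detected units."""
    best_label, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(prototype & detected_aus) / len(prototype)
        if score > best_score:
            best_label, best_score = emotion, score
    return best_label

print(label_expression({6, 12}))     # "happiness": full prototype match
print(label_expression({4, 7, 23}))  # "anger": partial (3 of 4 units) match
```

Note that the lookup treats the action-unit-to-emotion mapping as fixed and universal, which is precisely the assumption the researchers quoted below dispute.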

Troubling aspects of human-decoding software are already emerging. Over 1,000 experts recently signed a letter condemning “crime-predictive” facial analysis. Their concern is well-founded. Psychology researchers have demonstrated that faces and expressions do not necessarily map neatly onto particular traits and emotions, let alone onto the broader mental states evoked in “aggression detection.” Since “instances of the same emotion category are neither reliably expressed through nor perceived from a common set of facial movements,” the researchers write, the communicative capacities of the face are limited. The dangers of misinterpretation are clear and present in all these scenarios.

Bias is endemic in U.S. law enforcement. Affective computing may exacerbate it. For example, as researcher Lauren Rhue has found, “Black men’s facial expressions are scored with emotions associated with threatening behaviors more often than white men, even when they are smiling.” Sampling problems are also likely to be rife. If a database of aggression is developed from observation of a particular subset of the population, the resulting AI may be far better at finding “suspect behavior” in that subset than in others. Those who were most exposed to surveillance systems in the past may then be far more likely to suffer computational judgments of their behavior as “threatening” or worse. The Robocops of the future are “machine learning” from data distorted by a discrimination-ridden past.
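The feedback loop in that last sentence is easy to demonstrate. Below is a minimal simulation, with all numbers invented: two groups have identical true incident rates, but one is watched nine times more closely, so mostly the watched group’s incidents enter the training database, and an ordinary classifier then “learns” that group membership predicts risk.

```python
# Toy simulation of the surveillance sampling-bias loop described above.
# Group labels, rates, and features are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
TRUE_RATE = 0.05  # identical real rate of "aggressive" incidents in both groups

def make_training_data(n):
    group = rng.integers(0, 2, n)                # 0 = heavily surveilled group
    aggressive = rng.random(n) < TRUE_RATE       # ground truth, same for both
    watch_prob = np.where(group == 0, 0.9, 0.1)  # unequal surveillance coverage
    observed = rng.random(n) < watch_prob
    label = (aggressive & observed).astype(int)  # incident logged only if watched
    noise = rng.normal(0.0, 1.0, n)              # an irrelevant "expression score"
    return np.column_stack([group, noise]), label

X_train, y_train = make_training_data(50_000)
model = LogisticRegression().fit(X_train, y_train)

# Score a fresh population whose true rates are identical by construction.
X_test = np.column_stack([np.repeat([0, 1], 10_000),
                          rng.normal(0.0, 1.0, 20_000)])
risk = model.predict_proba(X_test)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[X_test[:, 0] == g].mean():.4f}")
# The heavily surveilled group comes out several times "riskier" even though
# both groups behave identically: the distortion is entirely in the sample.
```

In other words, the model is not detecting aggression; it is reconstructing the surveillance pattern that generated its training data.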


To many of the problems detailed above, affective computing’s enthusiasts have a simple response: Help us fix it. Some of these appeals are classic Tom Sawyering, where researchers ask critics to work for free to de-bias their systems. Others appear more sincere, properly compensating experts in the ethical, legal, and social implications of AI to help better design sociotechnical systems (rather than just clean up after technologists). As minoritized groups are invited to participate in developing more fair and transparent emotion analyzers, some of the worst abuses of crime-predicting and hiring software may be preempted.

But should we really aim to “fix” affective computing? What does such a mechanical metaphor entail? One of Picard’s former MIT colleagues, the late Marvin Minsky, complained in his book The Emotion Machine that we “know very little about how our brains manage” common experiences: ...

....MUCH MORE

Although Pasquale teaches law at the University of Maryland, his interests range far beyond the legal.

Two of his pieces have ended up among the 100 most popular links in the history of the blog: 

Frank Pasquale: "Tech Platforms and the Knowledge Problem" 

The Spectrum of Control: A Social Theory of The Smart City