Friday, April 5, 2024

"Why AI bias is a systemic rather than a technological problem"

Following up on March 31's "The Purpose Of A System Is What It Does...":

“....There is after all,” Beer observed, “no point in claiming that the purpose of a system is to do what it constantly fails to do.”

In late February, when Google's Gemini generative AI caused a hubbub with its portraits of a black George Washington or a black Nazi, commentators missed the point of what the people at Google were doing. As Marc Andreessen (a guy who knows something about tech, having developed the first commercial browser, among other things) put it regarding these so-called 'mistakes':

Going back to that March 31 post, the penultimate bit before Voltaire took over:

Using this heuristic to look at systems like education or government helps focus attention on the fact that in a system, as opposed to a one-off event, the result is the reality that matters.

Reality is not the intentions of the system's designers and implementers, and reality is surely not the protestations or explanations, excuses or justifications that surround most human endeavors.

The end result of a system is what the system is meant to do. For the rest, it is hard to put it better than:...

And from The Conversation via Dublin's Silicon Republic, March 29:

Dr Antje Scharenberg and Dr Philip Di Salvo from the University of St Gallen discuss the ‘automation of inequality’ that underpins AI innovation.

In public administrations across Europe, artificial intelligence (AI) and automated decision making (ADM) systems are already being used extensively.

These systems, often built on opaque ‘black box’ algorithms, recognise our faces in public, organise unemployment programmes and even forecast exam grades. Their task is to predict human behaviour and to make decisions, even in sensitive areas such as welfare, health and social services.

As seen in the US, where algorithmic policing has been readily adopted, these decisions are inherently influenced by underlying biases and errors. This can have disastrous consequences: in Michigan in June 2020, a black man was arrested, interrogated and detained overnight for a crime he did not commit. He had been mistakenly identified by an AI system.

These systems are trained on pre-existing human-made data, which is flawed by its very nature. This means they can perpetuate existing forms of discrimination and bias, leading to what Virginia Eubanks has called the “automation of inequality”.
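To make that mechanism concrete, here is a minimal sketch using synthetic data. Everything in it is hypothetical (the "qualification" score, the protected-group flag, the approval labels are invented for illustration, not drawn from any real system): a model trained on historically biased decisions reproduces that bias for equally qualified people.

```python
# Minimal sketch, assuming synthetic data only: "historical" decisions that
# were biased against group 1 are used to train a model, which then
# inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: a qualification score and a protected-group flag.
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: past decision-makers approved group-1 applicants
# less often even at identical qualification levels.
logits = qualification - 1.5 * group
approved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, approved)

# Two equally qualified applicants who differ only in group membership:
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
# The group-1 applicant gets a markedly lower approval probability: the
# model has faithfully learned, and will now automate, the historical bias.
```

Nothing in the code "intends" to discriminate; the bias arrives entirely through the training labels. Which is why auditing what the system actually does, rather than what its builders say it is for, is the only reliable test.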

Holding AI responsible
The widespread adoption of these systems raises an urgent question: what would it take to hold an algorithm to account for its decisions?

This was tested recently in Canada, when courts ordered an airline to pay compensation to a customer who had acted on bad advice given by its AI-powered chatbot. The airline had tried to rebut the claim by stating that the chatbot was “responsible for its own actions”.

In Europe, there has been an institutional move to regulate the use of AI, in the form of the recently passed Artificial Intelligence Act.

This Act aims to regulate large and powerful AI systems, preventing them from posing systemic threats while also protecting citizens from their potential misuse. The Act’s passage was preceded and accompanied by a wide range of direct actions, initiatives and campaigns by civil society organisations across EU member states.

This growing resistance to problematic AI systems has gained momentum and visibility in recent years. It has also influenced regulators’ choices in crucial ways, putting pressure on them to introduce measures that safeguard fundamental rights.

The Human Error Project
As part of The Human Error Project, based at Universität St Gallen in Switzerland, we have studied the ways in which civil society actors are resisting the rise of automated discrimination in Europe. Our project focuses on AI errors, an umbrella term that encompasses the bias, discrimination and unaccountability of algorithms and AI....

....MUCH MORE

Again, it is not the AI or the algo; it is the creators, trainers, and promoters of these things who have to be held accountable.

The great embarrassment at Google was not that its chatbot made historical mistakes but that the agenda behind the GOOG's public-facing offerings was exposed for all to see.