Following on May 28's "Something Has Gone Very Wrong At Google (GOOG; EVIL)", we see this at Yahoo Finance, May 29:
It was a busy Memorial Day weekend for Google (GOOG, GOOGL)
as the company raced to contain the fallout from a number of wild
suggestions by the new AI Overview feature in its Search platform. In
case you were sunning yourself on a beach or downing hotdogs and beer
instead of scrolling through Instagram (META) and X, let me get you up to speed.
AI Overview is supposed to provide generative AI-based responses to search queries. Normally, it does that. But over the last week it’s also told users they can use nontoxic glue to keep cheese from sliding off their pizza, that they can eat one rock a day, and claimed Barack Obama was the first Muslim president.
Google responded by taking down the responses and saying it’s using the errors to improve its systems. But the incidents, coupled with Google’s disastrous Gemini image generator launch that allowed the app to generate historically inaccurate images, could seriously damage the search giant’s credibility.
“Google is supposed to be the premier
source of information on the internet,” explained Chinmay Hegde,
associate professor of computer science and engineering at NYU’s Tandon
School of Engineering. “And if that product is watered down, it will
slowly erode our trust in Google.”
Google’s AI flubs
Google’s AI Overview problems aren’t the first time the company has run into trouble since it began its generative AI drive. The company’s Bard chatbot, which Google rebranded as Gemini in February, famously showed an error in one of its responses in a promo video in February 2023, sending Google shares sliding. Then there was its Gemini image generator software, which generated photos of diverse groups of people in inaccurate settings, including as German soldiers in 1943.
AI has a history of bias, and Google tried to overcome that by
including a wider diversity of ethnicities when generating images of
people. But the company overcorrected, and the software ended up
rejecting some requests for images of people of specific backgrounds.
Google responded by temporarily taking the software offline and
apologizing for the episode.
The AI Overview issues, meanwhile, cropped up because, according to Google, users were asking uncommon questions. In the rock-eating example, a Google spokesperson said it “seems a website about geology was syndicating articles from other sources on that topic onto their site, and that happened to include an article that originally appeared on the Onion. AI Overviews linked out to that source.”
Those are fine explanations, but the fact that Google continues to release products with flaws that it then needs to explain away is getting tiring.
“At some point, you have to stand by the product that you
roll out,” said Derek Leben, associate teaching professor of business
ethics at Carnegie Mellon University’s Tepper School of Business.
“You can't just say … 'We are going to incorporate AI into all of our well-established products, and also it's in constant beta mode, and any kinds of mistakes or problems that it makes we can't be held responsible for and even blamed for,' in terms of just trust in the products themselves.”....
....MUCH MORE
If interested, a couple of springtime posts addressed some of what's going on in Mountain View, California. April 5, 2024:
"Why AI bias is a systemic rather than a technological problem"
Following up on March 31's "The Purpose Of A System Is What It Does...":
“....There is after all,” Beer observed, “no point in claiming that the purpose of a system is to do what it constantly fails to do.”
In late February, when Google's Gemini generative AI generated some hubbub with the portraits of a black George Washington or a black Nazi, commenters missed the point of what the people at Google were doing. As Marc Andreessen—a guy who knows something about tech, having developed the first commercial browser, among other things—put it regarding these so-called 'mistakes':
Going back to that March 31 post, the penultimate bit before Voltaire took over:
Using this heuristic to look at systems like education or government
helps focus on the fact that in a system, as opposed, possibly, to a
one-off event, the result is the reality to focus upon.
Reality is not the intentions of the system designers and the system implementers, and reality is surely not the protestations or explanations, excuses or justifications that surround most human endeavors.
The end result of a system is what the system is meant to do. For the rest, it is hard to put it better than:...
And from The Conversation via Dublin's Silicon Republic March 29:
Dr Antje Scharenberg and Dr Philip Di Salvo from the University of St Gallen discuss the ‘automation of inequality’ that underpins AI innovation.
In public administrations across Europe, artificial intelligence (AI)
and automated decision making (ADM) systems are already being used
extensively.
These systems, often built on opaque ‘black box’ algorithms, recognise our faces in public, organise unemployment programmes and even forecast exam grades.
Their task is to predict human behaviour and to make decisions, even in
sensitive areas such as welfare, health and social services.
As seen in the US,
where algorithmic policing has been readily adopted, these decisions
are inherently influenced by underlying biases and errors. This can have
disastrous consequences: in Michigan in June 2020 a black man was
arrested, interrogated and detained overnight for a crime he did not
commit. He had been mistakenly identified by an AI system.
These systems are trained on pre-existing human-made data, which is
flawed by its very nature. This means they can perpetuate existing forms
of discrimination and bias, leading to what Virginia Eubanks has called
the “automation of inequality”.
Holding AI responsible
The widespread adoption of these systems raises an urgent question: what would it take to hold an algorithm to account for its decisions?
This was tested recently in Canada, when courts ordered an airline to pay compensation to a customer who had acted on bad advice given by its AI-powered chatbot. The airline tried to rebut the claim by stating that the chatbot was “responsible for its own actions”.
In Europe, there has been an institutional move to regulate the use of AI, in the form of the recently passed Artificial Intelligence Act.
This Act aims to regulate large and powerful AI systems, preventing them from posing systemic threats while also protecting citizens from their potential misuse. The Act’s launch was preceded by a wide range of direct actions, initiatives and campaigns launched by civil society organisations across EU member states.
This growing resistance to problematic AI systems has
gained momentum and visibility in recent years. It has also influenced
regulators’ choices in crucial ways, putting pressure on them to introduce measures that safeguard fundamental rights.
The Human Error Project
As part of The Human Error Project,
based at Universität St Gallen in Switzerland, we have studied the ways
in which civil society actors are resisting the rise of automated
discrimination in Europe. Our project focuses on AI errors, an umbrella
term that encompasses bias, discrimination and unaccountability of
algorithms and AI....
....MUCH MORE
Again, it is not the AI or the algo; it is the creators, trainers, and promoters of these things who have to be held accountable.
The great embarrassment at Google was not that its chatbot made historical mistakes but that the agenda behind GOOG's public-facing offerings was exposed for all to see.