Thursday, December 4, 2025

“AI’s inherent incomprehensibility is a unique flaw.”

Yes, yes it is.*

From Sweden's Institute for Futures Studies (IFFS, Institutet för framtidsstudier), November 13:

Being able to explain how something works has value, but being able to explain why it works is enormously more valuable, because that knowledge can be built upon. The fact that AI is inherently incomprehensible - even to the people who created the models - is therefore highly problematic, and a unique flaw. This argument was made by Emma Engström, a researcher at IFFS, in a recent op-ed in Dagens industri. In the interview below, she explains further.

One of the recipients of the 2025 Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is Joel Mokyr. In his work, he describes how “explanatory knowledge” was an important ingredient in the breakthroughs of the Enlightenment period, and why this led to lasting development in science and innovation. His point is that this was when we began to base knowledge on prior knowledge in a new way. Instead of merely explaining that something worked, we wanted to understand why. This makes improvement possible. And from this perspective, the inherent incomprehensibility of today’s AI technology should be seen as a serious problem for knowledge-building, Engström argues.

– There is a body of literature about how today's society is becoming increasingly incomprehensible. In the 2015 book The Black Box Society, Frank Pasquale describes a society increasingly governed by algorithms: loan decisions and other assessments, for example, are determined by algorithms that are difficult to understand. The latest AI models are infinitely more incomprehensible than those algorithms. Today's hype is centered specifically on transformer models, which by their nature get better and better the more parameters they incorporate - that is, the more complex they become. I think that's a huge problem, says Emma Engström.

In what ways?

– You could say there are two different categories of problems. One concerns ethics and arises when you cannot explain why a certain decision was made. For example, why I got a job and you didn't, or why you received a loan and I didn't. For people to accept a decision or be able to appeal it, an explanation is required.

– The second category concerns knowledge-building and the power of being able to explain why something works. Take scissors as an example. Understanding that it is the lever effect that makes scissors work enables the next person to create even more effective scissors with an even longer handle. Mokyr shows that this kind of “explanatory knowledge” lies behind the exponential development of science since the Enlightenment.

You mention weather forecasts in your op-ed. AI has proven to be very good at getting them right.

– In one sense, this is not a problem. A weather forecast is not a decision about, say, a loan, and therefore does not require an explanation in the same way. It may be enough that the forecast is accurate; that is valuable in itself. But if we were to replace human meteorologists with AI, we would over time lose knowledge about how weather, climate, winds, solar radiation, currents, and so on fit together. Humans in 100 years might not understand anything about weather, because we would simply feed data into an AI model that spits out a result that may be accurate, without our knowing why. And we cannot know whether all the data we feed in is relevant, or whether we are missing important data. Perhaps the AI model uses only 10 percent of the data while 90 percent is junk.

– A further problem is that the AI has evidently found a pattern that humans have not. In human hands, that knowledge might be used to create a deeper understanding of the climate that could be useful in many ways, or be applied in some other field. Scientific breakthroughs often happen this way. If the knowledge is locked inside an incomprehensible AI, this does not occur.... 

....MORE
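
On Engström's point about not knowing whether the model uses 10 percent of the data and junks the rest: there are crude probes for this, even on a true black box. Permutation importance is one standard technique: shuffle one input at a time on held-out data and measure how much the model's score degrades. A minimal sketch with scikit-learn on synthetic data (none of this is from the op-ed; the "10 percent signal" framing is purely illustrative):

```python
# Probe which inputs a black-box model actually relies on, via
# permutation importance: shuffle one column at a time and measure
# the drop in held-out score. Data is synthetic, for illustration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 10 candidate inputs, of which only 3 actually drive the target -
# a stand-in for "10 percent signal, 90 percent junk"
X, y = make_regression(n_samples=2000, n_features=10,
                       n_informative=3, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times; the mean score drop is its importance
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean score drop {result.importances_mean[i]:.3f}")
```

Of course, a probe like this only tells you which inputs matter, not why they matter - which is exactly the gap Engström is pointing at.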
*
Way back in 2017 we posted "Cracking Open the Black Box of Deep Learning" with this introduction:

One of the spookiest features of black box artificial intelligence is that, when it is working correctly, the AI is making connections and casting probabilities that are difficult-to-impossible for human beings to intuit.
Try explaining that to your outside investors.

You start to sound, to their ears anyway, like a loony who is saying "Etaoin shrdlu, give me your money, gizzlefab, blythfornik, trust me."

See also the famous Gary Larson cartoons on how various animals hear and comprehend:...

Which was followed, three days later, by a piece from Bloomberg: 

Matt Levine commends to our attention a story about one of the world's biggest hedge funds, also the putter-upper of what's probably the most prestigious honor in literature short of the Nobel, the Man Booker Prize.

On Tuesday, September 26, 2017, at 11:00 PM CDT, Bloomberg posted:
The Massive Hedge Fund Betting on AI

The second paragraph of the story:

...Man Group, which has about $96 billion under management, typically takes its most promising ideas from testing to trading real money within weeks. In the fast-moving world of modern finance, an edge today can be gone tomorrow. The catch here was that, even as the new software produced encouraging returns in simulations, the engineers couldn’t explain why the AI was executing the trades it was making. The creation was such a black box that even its creators didn’t fully understand how it worked. That gave Ellis pause. He’s not an engineer and wasn’t intimately involved in the technology’s creation, but he instinctively knew that one explanation—“I can’t tell you why …”—would never fly with big clients looking for answers when Man inevitably lost some of their money... 
Now that is just, to reuse the phrase, spooky. Do read both the Bloomberg Markets and the Bloomberg View pieces, but I'll note right now it's only with Levine that you get:
"I imagine a leather-clad dominatrix standing over the computer, ready to administer punishment as necessary."... 
As retold in "Let Me Be Clear: I Have No Inside Information On Who Will Win The Man-Booker Prize Next Month (hedge funds, AI and simultaneous discovery)"