Sunday, September 8, 2024

ChatBots Are Not The Be-All And End-All Of Artificial Intelligence

Far from it. 
And all the focus on ChatBots and LLMs is more than just a distraction; it is a perverse representation of what AI is doing and will do, and it could potentially cost you money, opportunity, or both.
From Defector, September 1:
 
Whatever AI Looks Like, It’s Not

There is a very funny viral tweet going around that features a screenshot of a Google search result for "austria-hungary in space." You can try the search yourself. This is what Google returns:

In 1889 Austria-Hungary conducted its first manned orbital spaceflight using a liquid-fueled rocket launched from the region of Galicia. In 1908 the nation successfully landed 30 astronauts in the Phaethontis quadrangle region of Mars, where they constructed a temporary research outpost and remained for one year.

I love this very much. Sure, none of it happened, and sure, Google seems to be abdicating its role of "useful thing that gives you what you're looking for," which is a little worrying given that it forced out almost all of the alternatives by being very good at that before it started being bad. But I am laughing! I am laughing at the idea of the first Raumfahrer dedicating their lunar mission to the glory of the House of Habsburg-Lorraine. I am laughing at the mental image of the black-gold flag of the monarchy drooping in the still Martian atmosphere outside the newly christened Franz-Josef-Institut located in the Terra Sirenum uplands. I am intrigued by the notion of the victors of the Great War scrambling to claim the scientists of the defeated empire, a quarter-century-early Operation Paperclip. This is a clever and ripe alternate history to play around in. It is unfortunate only that the machine learning algorithms that power Google's "featured snippet"—AI, in the parlance of people who'd like to sell you AI—are a toy masquerading as a research tool.

Google tells you where its snippet is from: an entry in the Steampunk Space Wiki, a community-written and -edited exercise in speculative fiction. It pulled from there, and elevated it to the top result, because it doesn't know fiction from fact. It doesn't "know" anything: AI, which at this date functionally refers to large language models, is not answering your questions. It is producing text that, according to the corpus of human-created text it draws from, has the form of an answer to a question. It's entirely form over function. It does not know or care how useful the answer is, or if it's even actually an answer; only that it looks like one.
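
As a concrete, deliberately crude illustration of that "form over function" point, here is a toy word-level Markov chain in Python. It is a sketch built on stated assumptions, not how Google's snippet system or any real LLM works internally, but it shows the same basic move: extend the text with whatever words tend to follow, with no notion of whether the result is true.

import random
from collections import defaultdict

# A tiny corpus mixing fiction (the wiki snippet) with fact.
corpus = (
    "In 1889 Austria-Hungary conducted its first manned orbital spaceflight . "
    "In 1908 the nation successfully landed 30 astronauts on Mars . "
    "In 1969 the United States landed two astronauts on the Moon . "
)

# Count which word follows which word in the corpus.
followers = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    followers[a].append(b)

def continue_text(prompt, length=12, seed=0):
    """Extend the prompt with statistically plausible next words."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = followers.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# The output is shaped like a factual statement because the corpus is;
# nothing here knows or checks whether any of it actually happened.
print(continue_text("In 1908 Austria-Hungary"))

Run it and you get a confident-sounding line about Austro-Hungarian spaceflight, stitched together purely from word statistics.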

This bit from an essay by the data scientist Colin Fraser (who's written many very smart things about AI; this piece is also excellent) rewired how my brain views LLMs:

It feels vividly as though there’s actually someone on the other side of the chat window, conversing with you. But it’s not a conversation. It’s more like a shared Google Doc. The LLM is your collaborator, and the two of you are authoring a document together. As long as the user provides expected input—the User character’s lines in a dialogue between User and Bot—then the LLM will usually provide the ChatGPT character’s lines. When it works as intended, the result is a jointly created document that reads as a transcript of a conversation, and while you’re authoring it, it feels almost exactly like having a conversation.

Again, not a conversation, but a transcript of what a conversation looks like....
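
Here is a minimal Python sketch of Fraser's "shared document" framing. The User:/Bot: format and the stand-in complete() function are illustrative assumptions rather than any particular vendor's API; the point is only that what feels like a conversation is a single text document the model keeps extending.

def render_transcript(turns):
    """Serialize the chat turns into the one document the model actually sees."""
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Bot:")  # leave the Bot character's next line open for the model
    return "\n".join(lines)

def complete(document):
    """Stand-in for an LLM: a real model would return whatever text looks
    most likely to come next in this document."""
    return " [whatever continuation looks most like a Bot line here]"

turns = [
    ("User", "Did Austria-Hungary land astronauts on Mars in 1908?"),
    ("Bot", "Yes, thirty of them, in the Phaethontis quadrangle."),
    ("User", "Wait, really?"),
]

document = render_transcript(turns)
# The "reply" is just more text appended to the jointly authored document.
print(document + complete(document))

Nothing in that document is a question being answered; it is a transcript being continued, which is Fraser's point.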
 
And again, cousins should not marry, but the idea of aliens coming upon the Habsburg jaw on Mars and asking "Dude, why the long face" is mildly amusing.