Sunday, February 4, 2024

"Why This AI Moment May Be the Real Deal"

The author of this piece digs science.

From The New Atlantis, Summer 2023 edition: 

This time, believe the hype.

For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley. 

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal.

Men Behind Curtains
Artificial intelligence pioneer Joseph Weizenbaum originated the man-behind-the-curtain critique in his 1976 book Computer Power and Human Reason. Weizenbaum was the inventor of ELIZA, the world’s first chatbot. Imitating a psychotherapist who was just running through the motions to hit the one-hour mark, it worked by parroting people’s queries back at them: “I am sorry to hear you are depressed.” “Tell me more about your family.” But Weizenbaum was alarmed to find that users would ask to have privacy with the chatbot, and then spill their deepest secrets to it. They did this even when he told them that ELIZA did not understand them, that it was just a few hundred lines of dirt-stupid computer code. He spent the rest of his life warning of how susceptible the public was to believing that the lights were on and someone was home, even when no one was.
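The essay doesn't reproduce Weizenbaum's script, but the trick it describes, canned therapist prompts keyed to patterns in the user's words, fits in a few lines. A rough Python sketch, purely illustrative and not ELIZA's actual DOCTOR script:

    import re

    # A handful of pattern -> canned-response rules, in the spirit of ELIZA.
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "I am sorry to hear you are {0}."),
        (re.compile(r"\b(mother|father|family)\b", re.I), "Tell me more about your family."),
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    ]

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # fallback when nothing matches

    print(respond("I am depressed right now"))  # I am sorry to hear you are depressed right now.
    print(respond("My mother never calls"))     # Tell me more about your family.

That is the whole lights-on illusion: no model of the user, no memory, just string matching and a reflected phrase.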

I experienced this effect firsthand as a computer science student at the University of Texas at Austin in the 2000s, even though the field by this time was nominally much more advanced. Everything in our studies seemed to point us toward the semester where we would qualify for the Artificial Intelligence course. Sure, you knew that nothing like HAL 9000 existed yet. But the building blocks of intelligence, you understood, had been cracked — it was right there in the course title.

When Alan Turing and Claude Shannon and John von Neumann were shaping the building blocks of computing in the 1940s, the words “computer science” would have seemed aspirational too — just like “artificial intelligence,” nothing then was really worthy of that name. But in due time these blocks were arranged into a marvelous edifice. So there was a titter surrounding the course: Someone someday would do the same for AI, and maybe, just maybe, it would be you.

The reality was different. The state of the art at the time was neural nets, and it had been for twenty or thirty years. Neural nets were good at solving some basic pattern-matching problems. For an app I was building to let students plan out their course schedules, I used neural nets to match a list of textbook titles and author names to their corresponding entries on Amazon. This allowed my site to make a few bucks through referral fees, an outcome that would have been impossible for a college-student side hustle if not for AI research. So it worked — mostly, narrowly, sort of — but it was brittle: Adjust the neural net to resolve one set of false matches and you would create three more. It could be tuned, but it had no responsiveness, no real grasp. That’s it?, you had to think. There was no way, however many “neurons” you added to the net, however much computing power you gave it, that you could imagine arranging these building blocks into any grand edifice. And so the more impressed people sounded when you mentioned using this technology, the more cynicism you had to adopt about the entire enterprise.
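The essay doesn't say how that matcher was actually built, but the brittleness is easy to picture even in caricature: a tiny hand-tuned scorer over string-similarity features, where nudging the numbers to fix one false match silently flips others. A minimal Python sketch with made-up features, weights, and threshold (nothing here is from the essay, and the "net" is simplified to a single linear unit):

    from difflib import SequenceMatcher

    def features(listing: str, amazon_entry: str) -> list:
        """Two crude string-similarity features for a (listing, catalog entry) pair."""
        a, b = listing.lower().split(), amazon_entry.lower().split()
        word_overlap = len(set(a) & set(b)) / max(len(set(a)), 1)
        char_ratio = SequenceMatcher(None, listing.lower(), amazon_entry.lower()).ratio()
        return [word_overlap, char_ratio]

    # A single hand-tuned "neuron": weighted sum of the features plus a threshold.
    # Retuning these numbers to clear one false match tends to break other pairs,
    # the whack-a-mole the essay describes.
    WEIGHTS, BIAS = (1.2, 2.0), -1.6

    def is_match(listing: str, amazon_entry: str) -> bool:
        score = sum(w * f for w, f in zip(WEIGHTS, features(listing, amazon_entry))) + BIAS
        return score > 0

    print(is_match("Cormen Introduction to Algorithms",
                   "Introduction to Algorithms by Thomas H. Cormen"))

It can be tuned into working, mostly, for a narrow job; it cannot be argued with, and it never gets any closer to knowing what a textbook is.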

All of this is to say that skepticism about the new AI moment we are in rests on very solid ground.....

....MUCH MORE

Of that trinity of raw intellectual achievement, the brightest was probably Shannon. I say that while acknowledging Turing's brilliance and having posted stuff like "The Word Genius Is Often Overused But....John von Neumann and Pretty Much Everything".

Shannon made major contributions in at least three—and maybe as many as five—different fields of intellectual endeavor. If interested see:

The Bit Bomb: The True Nature of Information

The subject of this article, Claude Shannon, has a couple of interesting connections to finance/investing/trading beyond 'just' creating information theory (along with MIT's Norbert Wiener, who was coming in on a different angle of attack); more after the jump.
Both Aeon and Climateer are reposting: "The Bit Bomb" first appeared at Aeon on August 30, 2017, and graced our pages over the Labor Day weekend, September 3, 2017.

"Claude Shannon, the Las Vegas Shark"
"How Information Got Re-Invented"
"How Claude Shannon Helped Kick-start Machine Learning"
"How Claude Shannon Invented the Future"
In last week's link to Quanta Magazine's "Maxwell’s Demon And The Physics Of Information," I went off on a Claude Shannon linkfest tangent and completely forgot to link Quanta's own post on the guy.

For more on von Neumann (and students of Kobayashi Maru scenarios), we have "The Curse of Game Theory: Why It’s in Your Self-Interest to Exit the Rules of the Game".

Finally, Shannon's second wife, Betty, was Claude's collaborator and went deep into some very fancy math and science, right there with him. I should probably do a post on her.