Tuesday, March 15, 2022

Deep Learning Is Hitting a Wall

Man, time flies. It seems like only yesterday that we were posting "Deep Learning is VC Worthy," but it was August 2014. Here's the latest from Nautil.us, March 10:

What would it take for artificial intelligence to make real progress?

“Let me start by saying a few things that seem obvious,” Geoffrey Hinton, “Godfather” of deep learning, and one of the most celebrated scientists of our time, told a leading AI conference in Toronto in 2016. “If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down.” Deep learning is so well-suited to reading images from MRIs and CT scans, he reasoned, that people should “stop training radiologists now” and that it’s “just completely obvious within five years deep learning is going to do better.”

Fast forward to 2022, and not a single radiologist has been replaced. Rather, the consensus view nowadays is that machine learning for radiology is harder than it looks1; at least for now, humans and machines complement each other’s strengths.2

Deep learning is at its best when all we need are rough-ready results.

Few fields have been more filled with hype and bravado than artificial intelligence. It has flitted from fad to fad decade by decade, always promising the moon, and only occasionally delivering. One minute it was expert systems, next it was Bayesian networks, and then Support Vector Machines. In 2011, it was IBM’s Watson, once pitched as a revolution in medicine, more recently sold for parts.3 Nowadays, and in fact ever since 2012, the flavor of choice has been deep learning, the multibillion-dollar technique that drives so much of contemporary AI and which Hinton helped pioneer: He’s been cited an astonishing half-million times and won, with Yoshua Bengio and Yann LeCun, the 2018 Turing Award.

Like AI pioneers before him, Hinton frequently heralds the Great Revolution that is coming. Radiology is just part of it. In 2015, shortly after Hinton joined Google, The Guardian reported that the company was on the verge of “developing algorithms with the capacity for logic, natural conversation and even flirtation.” In November 2020, Hinton told MIT Technology Review that “deep learning is going to be able to do everything.”4

I seriously doubt it. In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.”5  Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.

Deep learning, which is fundamentally a technique for recognizing patterns, is at its best when all we need are rough-ready results, where stakes are low and perfect results optional. Take photo tagging. I asked my iPhone the other day to find a picture of a rabbit that I had taken a few years ago; the phone obliged instantly, even though I never labeled the picture. It worked because my rabbit photo was similar enough to other photos in some large database of other rabbit-labeled photos. But automatic, deep-learning-powered photo tagging is also prone to error; it may miss some rabbit photos (especially cluttered ones, or ones taken with weird light or unusual angles or with the rabbit partly obscured); it occasionally confuses baby photos of my two children. But the stakes are low—if the app makes an occasional error, I am not going to throw away my phone.
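[Aside from your humble blogger: the photo search Marcus describes is, at bottom, similarity matching between vectors. Below is a minimal, made-up sketch of the idea, assuming the photos have already been reduced to embedding vectors by some deep network; the library, labels, and cluster structure are synthetic stand-ins, not anyone's actual system.]

# A minimal sketch of similarity-based photo search. In a real system the
# embeddings come from a deep network run over each photo; here the vectors,
# labels, and clusters are synthetic, chosen only to make the idea concrete.
import numpy as np

rng = np.random.default_rng(0)

classes = ["rabbit", "dog", "baby", "beach"]
centers = {c: rng.normal(size=128) * 5 for c in classes}   # one cluster per label

# Pretend photo library: 1,000 photos, each reduced to a 128-dim embedding.
labels = rng.choice(classes, size=1000)
library = np.stack([centers[c] + rng.normal(size=128) for c in labels])

def cosine_similarity(query, matrix):
    """Cosine similarity between one query vector and every row of matrix."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

# An unlabeled rabbit photo: its embedding lands near the other rabbit photos,
# so the search surfaces it under that tag without anyone ever labeling it.
query = centers["rabbit"] + rng.normal(size=128)
top5 = np.argsort(cosine_similarity(query, library))[::-1][:5]
print("Top matches:", labels[top5])   # expected: five "rabbit" entries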

When the stakes are higher, though, as in radiology or driverless cars, we need to be much more cautious about adopting deep learning. When a single error can cost a life, it’s just not good enough. Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained. Not long ago, for example, a Tesla in so-called “Full Self Driving Mode” encountered a person holding up a stop sign in the middle of a road. The car failed to recognize the person (partly obscured by the stop sign) and the stop sign (out of its usual context on the side of a road); the human driver had to take over. The scene was far enough outside of the training database that the system had no idea what to do.....

....MUCH MORE
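For the nerds in the audience, the stop-sign story is a textbook out-of-distribution failure. Here is a minimal sketch of one common mitigation, flagging inputs that sit far from anything in the training set and handing control back to a human. To be clear, this is not Tesla's (or anyone's) actual pipeline; every number, threshold, and "scene" below is invented for illustration.

# A toy illustration of the "outlier" problem: flag inputs that are far from
# anything seen in training and defer to a human, rather than acting on a
# confident guess. All data, dimensions, and thresholds are made up.
import numpy as np

rng = np.random.default_rng(1)

# "Training set": embeddings of ordinary roadside scenes.
train = rng.normal(size=(500, 32))

def nearest_train_distance(x, train_set):
    """Distance from input x to its closest neighbor in the training set."""
    return np.min(np.linalg.norm(train_set - x, axis=1))

# Calibrate a threshold from the training data itself: the 99th percentile of
# each training point's distance to its nearest neighbor among the others.
loo = [nearest_train_distance(t, np.delete(train, i, axis=0))
       for i, t in enumerate(train)]
threshold = np.percentile(loo, 99)

familiar = rng.normal(size=32)          # a scene much like the training data
novel = rng.normal(loc=5.0, size=32)    # a person holding a stop sign in the road

for name, scene in [("familiar scene", familiar), ("novel scene", novel)]:
    d = nearest_train_distance(scene, train)
    verdict = "defer to human" if d > threshold else "proceed"
    print(f"{name}: distance {d:.1f} vs threshold {threshold:.1f} -> {verdict}")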

Ja, ja, so your puny Tesla intelligence couldn't handle an out-of-place stop sign. What about the real, real world:

"When Google was training its self-driving car on the streets of Mountain View, California, the car rounded a corner and  encountered a woman in a wheelchair, waving a broom, chasing a duck. The car hadn’t encountered this before so it stopped and waited."

Related, 2021:
"Deep Learning’s Diminishing Returns: The cost of improvement is becoming unsustainable."
A major piece from IEEE Spectrum, September 24....

And hundreds more, including one featuring the great Gary Larson's Far Side:

Cracking Open the Black Box of Deep Learning

One of the spookiest features of black box artificial intelligence is that, when it is working correctly, the AI is making connections and casting probabilities that are difficult-to-impossible for human beings to intuit.
Try explaining that to your outside investors.

You start to sound, to their ears anyway, like a loony who is saying "Etaoin shrdlu, give me your money, gizzlefab, blythfornik, trust me."

See also the famous Gary Larson cartoons on how various animals hear and comprehend:
https://consciouscompanion2012.files.wordpress.com/2015/03/blahblah_gary-larson-ginger-dog-what-dogs-hear.jpg
He has one for cats as well, but it's not as deep. Something about them not hearing anything.

Finally, one of those factoids that make you ask yourself 'what's going on?': 2013's "Why Is Machine Learning (CS 229) The Most Popular Course At Stanford?"