Sunday, December 3, 2023

The Godfather of Artificial Intelligence Has Some Concerns

From Toronto Life, November 16:

Rage Against the Machine
Geoffrey Hinton spent half a century developing artificial intelligence. Now, he worries that his life’s work could spell the end of humanity. Inside his mission to warn the world

In 2023, artificial intelligence finally caught up to the hype. Last November, American research lab OpenAI released the now-ubiquitous chatbot ChatGPT. It could summarize novels in seconds. It could write computer code. Its potential to generate scripts helped send Hollywood’s writers out on strike. Within two months, it had 100 million users, making it the fastest-growing app of all time, and Microsoft threw $10 billion at OpenAI to keep the party going. After decades of false starts, AI was finally off to the races.

There was, however, one guy who wasn’t popping champagne: Geoffrey Hinton, the University of Toronto computer science professor better known as the godfather of AI. On paper, ChatGPT should have thrilled Hinton—he’d spent his entire career trying to perfect neural networks, the architecture that undergirds GPT, and now they worked better than ever. When he fed the chatbot jokes, it could explain why they were funny. If he gave it brain teasers, the chatbot could solve them. “That’s way more reasoning than we thought these things could do a few years ago,” he says. It seemed to him that, for the first time, machines were passing the Turing test, the benchmark at which computers demonstrate intelligence indistinguishable from a human’s. It wouldn’t take long—maybe five to 20 years, he thought—for AI to become smarter than humans altogether.

This prediction comes with some terrifying implications. Humans have dominated the earth for millennia precisely because we are the most intelligent species. What would it mean for a superior form of intelligence to emerge? Yes, AI might cure diseases, mitigate climate change and improve life on earth in other ways we can’t yet envision—if we can control it. If we can’t? Hinton fears the worst: machines taking the reins from humanity. “I don’t think there’s any chance of us maintaining control if they want control,” says Hinton. “It will be hopeless.”

Hinton wondered what to do. Having decided that AI could very well be pushing humanity to the brink, he couldn’t just carry on with his work. So, on May 2, he appeared on the front page of the New York Times announcing that he was stepping down from his job at Google and warning the world about the existential threat of AI.

Hinton wasn’t the first person to prophesy an AI apocalypse. Elon Musk, for one, has spent years harping on about the impending singularity, the point at which humans irrevocably lose control of AI—but Musk says a lot of nutty stuff. Among AI experts, few took seriously the idea that machines would become extremely harmful any time soon.

Hinton changed that. After all, there is no greater authority on AI. A Brit by birth and a Canadian by choice, he has been directly—or, through the work of his students and colleagues, indirectly—involved in nearly every major deep-learning breakthrough, including the development of generative AI tools like DALL-E. When he spoke up, the world listened. Jeff Clune, an associate professor at the University of British Columbia and senior research adviser to Google’s AI research lab, DeepMind, told me that Hinton’s warning was a “thunderclap” that opened the eyes of scientists, regulators and the public. “There are people on both sides of this debate. What you rarely see is someone changing sides, so that causes people to take notice,” he says. “When that person is the most influential person in the field and, in many ways, the father of it, it is impossible to ignore.”....

....MUCH MORE

Previously on Professor Hinton:

2014: "As Machines Get Smarter, Evidence They Learn Like Us"
2015: "Inside Google’s Massive Effort in Deep Learning" (GOOG)
2017: Questions America Wants Answered: "Is AI Riding a One-Trick Pony?"
2018: Artificial Intelligence Chips: Past, Present and Future
2022: Deep Learning Is Hitting a Wall