From Observer, February 15:
"A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs."
Yann LeCun, Meta (META)’s chief A.I. scientist, known in his field as one of the “godfathers” of deep learning, believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence, or even cat-level intelligence. And while he is certain that A.I. will eventually reach the so-called artificial general intelligence (AGI) stage, that timeline may be a lot longer than most researchers think.
“We are really far from human-level intelligence. There were stories about the fact that you could use an LLM (large language model) to give you instructions on how to make a chemical weapon or bioweapon. That turns out to be false,” LeCun said during an onstage interview at the World Government Summit in Dubai this week.
“Those systems are trained on public data. They can’t really invent anything, at least today,” he went on to explain. “Sometime in the future those systems might be actually smart enough to give you useful information better than you can get with a search engine. But it’s just not true today.”
The French scientist, who won the 2018 Turing Award along with Geoffrey Hinton and Yoshua Bengio for his contributions to research on artificial neural networks, has famously said that even the most advanced A.I. systems today have less common sense than a house cat.
“The brain of a house cat has about 800 million neurons. You have to multiply that by 2,000 to get to the number of synapses, or the connections between neurons, which is the equivalent of the number of parameters in an LLM,” LeCun said, noting that the largest LLMs have about the same number of parameters as the number of synapses in a cat’s brain, roughly 1.6 trillion. For example, OpenAI’s GPT-3.5 model, which powers the free version of ChatGPT, has 175 billion parameters. The more advanced GPT-4 is said to run on eight language models, each with 220 billion parameters, or roughly 1.76 trillion in total....
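As a quick sanity check of that comparison, here is a back-of-the-envelope calculation using only the figures quoted above; note the GPT-4 number is the rumor the article cites, not a confirmed specification:

```python
# Back-of-the-envelope comparison of cat-brain synapses vs. LLM parameter counts,
# using the figures quoted in the article (the GPT-4 figure is rumored, not confirmed).
cat_neurons = 800e6          # ~800 million neurons in a house cat's brain
synapses_per_neuron = 2_000  # LeCun's multiplier: ~2,000 synapses per neuron
cat_synapses = cat_neurons * synapses_per_neuron  # ~1.6 trillion synapses

gpt35_params = 175e9         # GPT-3.5: 175 billion parameters
gpt4_params = 8 * 220e9      # rumored: eight models x 220B each = ~1.76 trillion

print(f"Cat synapses:    {cat_synapses:.2e}")  # 1.60e+12
print(f"GPT-3.5 params:  {gpt35_params:.2e}")  # 1.75e+11, about 1/9 of a cat
print(f"GPT-4 (rumored): {gpt4_params:.2e}")   # 1.76e+12, roughly cat-scale
```

On these numbers, the rumored GPT-4 total lands close to the ~1.6 trillion cat synapses, while GPT-3.5 falls short by roughly an order of magnitude, which is the scale gap LeCun is pointing at.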
....MUCH MORE