Tuesday, February 23, 2021

"Is Google’s AI research about to implode?"

Hmmm, this might be a problem. As noted in the intro to February 15's Chips: "Baidu in talks to raise money for a stand-alone A.I. chip company"

Baidu is a bit slow out of the gate; Google is basically betting its entire company on AI.
And as for machine learning silicon, the GOOG announced its Tensor Processing Unit back in 2016, which seems an eternity ago.

From Soccermatics at Medium, February 19:

What do Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department?

The high point for Google’s research into Artificial Intelligence may well turn out to be the 19th of October 2017. This was the date that David Silver and his co-workers at DeepMind published a paper in the journal Nature showing how their deep-learning algorithm AlphaGo Zero was a better Go player than not only the best human in the world, but also every other Go-playing computer.

What was most remarkable about AlphaGo Zero was that it worked without human assistance. The researchers set up a neural network, let it play lots of games of Go against itself, and a few days later it was the best Go player in the world. Then its successor, AlphaZero, was shown chess, and it took only four hours of self-play to become the best chess player in the world. Unlike previous game-playing programs, there was no rulebook or hand-crafted evaluation built into the system: just a machine playing game after game, improving from novice all the way to a level where nobody, computer or human, could beat it.
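The mechanics are easy to caricature. Below is a minimal, runnable sketch of the self-play idea, shrunk from Go to tic-tac-toe and from a deep network with tree search down to a plain lookup table of position values; all the names are illustrative and none of this is DeepMind’s code. What survives the shrinking is the core claim: the program improves using nothing but the outcomes of games against itself.

```python
# A minimal sketch of self-play learning, shrunk to tic-tac-toe.
# Illustrative only: a value table stands in for AlphaGo Zero's deep
# network, and there is no tree search. Nothing here is DeepMind code.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

V = {}  # value of a position, from the perspective of whoever just moved

def value(pos):
    return V.setdefault(pos, 0.0)

def choose(board, player, eps=0.1):
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < eps:
        return random.choice(moves)  # occasional random exploration
    # greedy: the move whose resulting position has the highest learned value
    return max(moves, key=lambda m: value(board[:m] + player + board[m+1:]))

def self_play_game(alpha=0.2):
    board, player, history = ' ' * 9, 'X', []
    while winner(board) is None and ' ' in board:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    for pos, p in history:  # nudge every visited position toward the outcome
        z = 0.0 if w is None else (1.0 if p == w else -1.0)
        V[pos] = value(pos) + alpha * (z - value(pos))

for _ in range(20000):  # no human games, no rulebook: just self-play
    self_play_game()
print(len(V), "positions valued through self-play alone")
```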

But there was a problem.

Maybe it wasn’t Silver and his colleagues’ problem, but it was a problem all the same. The DeepMind research program had shown what deep neural networks could do, but it had also revealed what they couldn’t. For example, although the team could train their system to win at the Atari games Space Invaders and Breakout, it couldn’t play games like Montezuma’s Revenge, where rewards can only be collected after completing a long series of actions (for example: climb down a ladder, get down a rope, get down another ladder, jump over a skull and climb up a third ladder). In these games the algorithm can’t learn, because making progress requires an understanding of concepts like ladders, ropes and keys: something we humans have built into our cognitive model of the world, but also something that can’t be learnt by the reinforcement-learning approach that DeepMind applied.
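A back-of-the-envelope simulation (a toy stand-in, not Montezuma’s Revenge itself) makes the sparse-reward problem concrete: if reward only arrives after one specific chain of correct actions, a blindly exploring agent essentially never sees any reward at all, so there is no signal for reinforcement learning to follow.

```python
# Toy illustration of the sparse-reward problem: reward arrives only
# after one specific 10-step action sequence (ladder, rope, ladder...),
# so uniform random exploration almost never earns it even once.
import random

ACTIONS = 4           # e.g. up / down / left / right
CHAIN_LENGTH = 10     # steps that must all be correct before any reward
EPISODES = 100_000

target = [random.randrange(ACTIONS) for _ in range(CHAIN_LENGTH)]

rewarded = 0
for _ in range(EPISODES):
    # no intermediate rewards means nothing to learn from, so the
    # policy never improves beyond uniform random action choices
    if all(random.randrange(ACTIONS) == t for t in target):
        rewarded += 1

print(f"reward seen in {rewarded} of {EPISODES:,} episodes "
      f"(chance per episode: {ACTIONS ** -CHAIN_LENGTH:.1e})")
```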

Another example of the limitations of the deep-learning approach can be found in language models. One approach to getting machines to understand language, pursued at Google Brain as well as OpenAI and other research institutes, is to train models to predict sequences of words and sentences in large corpora of text. This approach goes all the way back to 1913 and the work of Andrej Markov, who used it to predict the order of vowels and consonants in Pushkin’s novel in verse Eugene Onegin. There are well-defined patterns within a language, and by ‘learning’ those patterns an algorithm can speak that language.
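For concreteness, here is a minimal sketch of that pattern-learning approach: a bigram Markov chain over words. The tiny corpus is an illustrative stand-in (Markov worked on letters in Onegin, and modern systems replace the count table with billions of neural-network weights), but the principle of predicting the next token from observed patterns is the same.

```python
# Minimal Markov-chain language sketch: learn next-word counts from a
# corpus, then generate text by sampling from those counts. The corpus
# is an illustrative stand-in, not Markov's Eugene Onegin data.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the rat".split()

# bigram transition counts: word -> Counter of the words that follow it
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: this word never had a successor in training
        # sample the next word in proportion to how often it followed
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Swap the count table for a trained neural network and the toy corpus for a scrape of the internet, and you have, in spirit, the language models the paragraph above describes.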

The pattern-detecting approach to language is interesting in the sense that it can reproduce paragraphs that seem to make sense, at least superficially. A nice example of this was published in The Guardian in September 2020, where an AI mused on whether computers could bring world peace. But, as Emily Bender, Timnit Gebru and co-workers point out in their recent paper ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, these techniques do not understand what we are writing. They are simply storing language in a convenient form, and the outputs they produce are just parroting the data. And, as the example below shows, these outputs can be dangerously untrue. Primed with data about QAnon, the GPT-3 language model produces lies and conspiracy theories.

Examples of lies and conspiracy theories parroted by GPT-3 (work by Kris McGuffie and Alex Newhouse)...
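The parroting failure is visible even at toy scale. Retrain the same kind of bigram model on a corpus containing an obvious falsehood (the sentence below is a deliberately silly stand-in, not the extremist material McGuffie and Newhouse worked with) and it reproduces the falsehood exactly as fluently as it would a fact, because fluency is the only thing it models.

```python
# Parroting in miniature: a bigram model trained on a falsehood repeats
# the falsehood with perfect fluency. The corpus is deliberately absurd.
import random
from collections import Counter, defaultdict

corpus = "the moon is made of cheese and the cheese orbits the earth".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

word, out = "the", ["the"]
for _ in range(9):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choices(list(followers), weights=followers.values())[0]
    out.append(word)

print(" ".join(out))  # grammatical, confident, and untrue
```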

...MUCH MORE 

Bringing to mind Microsoft's Tay and the Yandex chatbot Alice:

Russian Chatbot Goes Off The Rails, Endorses Stalin, Says "Enemies of the people must be shot" etc