Sunday, May 26, 2024

Former Google CEO Schmidt On The Ever-Increasing Tempo Of AI

From Noema, May 21:

Mapping AI’s Rapid Advance
Former Google CEO Eric Schmidt weighs in on where AI is headed, when to “pull the plug” and how to cope with China.

Nathan Gardels: Generative AI is exponentially climbing the capability ladder. Where are we now? Where is it going? How fast is it going? When do you stop it, and how? 

Eric Schmidt: The key thing going on now is that we’re moving very quickly up the steps of the capability ladder. There are roughly three things happening that are going to profoundly change the world very quickly. And when I say very quickly, the cycle is roughly a new model every 12 to 18 months. So, let’s say in three or four years.

The first pertains to the question of the “context window.” For non-technical people, the context window is the prompt you give the model. That context window can hold a million words. And this year, people are inventing a context window that is infinitely long. This is very important because it means you can take the answer from the system, feed it back in and ask it another question.

Say I want a recipe to make a drug. I ask, “What’s the first step?” and it says, “Buy these materials.” So, then you say, “OK, I bought these materials. Now, what’s my next step?” And it says, “Buy a mixing pan.” Then you ask, “How long do I mix it for?”

That’s called chain of thought reasoning. And it generalizes really well. In five years, we should be able to produce 1,000-step recipes to solve really important problems in medicine, materials science or climate change.
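
To make the mechanics concrete, here is a minimal sketch of that feedback loop in Python. It assumes only some call_model function standing in for a real LLM API; the function name, prompt format and stopping condition are illustrative, not a reference to any particular library.

    from typing import Callable

    def chain_of_steps(call_model: Callable[[str], str],
                       goal: str,
                       max_steps: int = 1000) -> list[str]:
        # Start with the goal and ask for the first step.
        context = f"Goal: {goal}\nWhat is the first step?"
        steps: list[str] = []
        for _ in range(max_steps):
            answer = call_model(context)   # the model proposes the next step
            steps.append(answer)
            if "done" in answer.lower():   # the model signals completion
                break
            # Feed the answer back into the (effectively infinite) context
            # window and ask the next question.
            context += f"\nStep taken: {answer}\nWhat is the next step?"
        return steps

With an effectively infinite context window, nothing in this loop ever has to be truncated, which is what makes very long multi-step recipes feasible.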

The second thing going on presently is enhanced agency. An agent can be understood as a large language model that can learn something new. An example would be an agent that reads all of chemistry, learns something about it, forms a bunch of hypotheses, runs some tests in a lab and then adds what it learns to its store of knowledge.

These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there.  So, there will be lots and lots of agents running around and available to you. 
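
As a rough illustration of what such an agent loop might look like, here is a hedged sketch in the same spirit. The propose_hypothesis and run_experiment callables are hypothetical placeholders, standing in for an LLM call and a lab test or simulation respectively.

    from typing import Callable

    def agent_loop(propose_hypothesis: Callable[[list[str]], str],
                   run_experiment: Callable[[str], str],
                   initial_knowledge: list[str],
                   iterations: int = 5) -> list[str]:
        # Hypothesize, test, then fold each result back into what the agent knows.
        knowledge = list(initial_knowledge)
        for _ in range(iterations):
            hypothesis = propose_hypothesis(knowledge)  # e.g., an LLM call over prior knowledge
            result = run_experiment(hypothesis)         # e.g., a lab test or simulation
            # The new observation becomes part of the agent's knowledge
            # and informs the next hypothesis.
            knowledge.append(f"{hypothesis} -> {result}")
        return knowledge

The point of the structure is the last line of the loop: the agent is not just answering a question, it is accumulating what it learns.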

The third development already beginning to happen, which to me is the most profound, is called “text to action.” You might say to an AI, “Write me a piece of software to do X” and it does. You just say it and it happens. Can you imagine having programmers that actually do what you say you want? And they do it 24 hours a day? These systems are good at writing code in languages like Python.
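
A toy sketch of the “text to action” pattern, assuming some generate_code callable in place of a real model API; note that a production system would sandbox, test and review generated code rather than executing it directly as this illustration does.

    from typing import Callable

    def text_to_action(generate_code: Callable[[str], str], task: str) -> None:
        # Ask a model for Python source, then execute it. Illustration only:
        # running unreviewed model output is unsafe in practice.
        source = generate_code(task)  # hypothetical LLM call returning code
        exec(compile(source, "<generated>", "exec"))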

Put all that together, and you’ve got (a) an infinite context window, (b) chain of thought reasoning in agents and (c) the text-to-action capacity for programming.

What happens then poses a lot of issues. Here we get into the questions raised by science fiction. What I’ve described is what is happening already. But at some point, these systems will get powerful enough that the agents will start to work together.  So your agent, my agent, her agent and his agent will all combine to solve a new problem. 

Some believe that these agents will develop their own language to communicate with each other.  And that’s the point when we won’t understand what the models are doing. What should we do? Pull the plug? Literally unplug the computer? It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand. That’s the limit, in my view.

Gardels: How far off is that future? 

Schmidt: Clearly, agents with the capacity I’ve described will emerge in the next few years. There won’t be one day when we realize, “Oh, my God.” It is more about the cumulative evolution of capabilities every month, every six months and so forth. A reasonable expectation is that we will be in this new world within five years, not 10. And the reason is that there’s so much money being invested in this path. There are also so many ways in which people are trying to accomplish this.

You have the big guys, the large so-called frontier models at OpenAI, Microsoft, Google and Anthropic. But you also have a very large number of players who are building one level down, at much lower cost, all iterating very quickly.

Gardels: You say “pull the plug.” How and when do you pull the plug? But even before you pull the plug, you know you are already in chain of thought reasoning, and you know where that leads. Don’t you need to regulate at some point along the capability ladder before you get where you don’t want to go?

Schmidt: A group of us from the tech world have been working very closely with the governments in the West on just this set of questions. And we have started talking to the Chinese, which, of course, is complicated and takes time.

At the moment, governments have mostly been doing the right thing. They’ve set up trust and safety institutes to learn how to measure and continuously monitor and check ongoing developments, especially of frontier models as they move up the capability ladder. 

So as long as the companies are well-run Western companies, with shareholders and exposure to lawsuits, all that will be fine. There’s a great deal of concern in these Western companies about the liability of doing bad things. It is not as if they wake up in the morning saying let’s figure out how to hurt somebody or damage humanity. Now, of course, there’s the proliferation problem outside the realm of today’s largely responsible companies. But in terms of the core research, the researchers are trying to be honest.

Gardels: By specifying the Western companies, you’re implying that proliferation outside the West is where the danger is. The bad guys are out there somewhere.

Schmidt: Well, one of the things that we know, and it’s always useful to remind the techno-optimists in my world, is that there are evil people. And they will use your tools to hurt people. 

The example that epitomizes this is facial recognition. It was not invented to constrain the Uyghurs. You know, the creators didn’t say, “We’re going to invent facial recognition in order to constrain a minority population in China,” but it’s happening.

All technology is dual use. All of these inventions can be misused, and it’s important for the inventors to be honest about that. With open-source and open-weights models, the source code and the model weights [the numbers that determine the strength of different connections] are released to the public. Those immediately go throughout the world, and who do they go to? They go to China, of course; they go to Russia; they go to Iran. They go to Belarus and North Korea.

When I was most recently in China, essentially all of the work I saw started with open-source models from the West and was then amplified. 

So, it sure looks to me like these leading firms in the West I’ve been talking about, the ones that are putting hundreds of billions into AI, will eventually be tightly regulated as they move further up the capability ladder. I worry that the rest will not. 

Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. And the reason is that code-generated misinformation is essentially free. Any person, good or bad, has access to these tools. It doesn’t cost anything, and they can produce very, very good images. There are some ways regulation can be attempted, but the cat is out of the bag; the genie is out of the bottle.

That is why it is so important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation. And that problem is not yet solved.

Gardels: One thing that worries Fei-Fei Li of the Stanford Institute for Human-Centered AI is the asymmetry of research funding between the Microsofts and Googles of the world and even the top universities. As you point out, hundreds of billions are being invested in compute power to climb the capability ladder in the private sector, while there are scarce resources for safe development at research institutes, let alone in the public sector....

....Eventually, in both the U.S. and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors.  They will be housed in an army base, powered by some nuclear power source and surrounded by barbed wire and machine guns..... 

....MUCH MORE

Again, only those with massive amounts of cash will be able to maximize the benefits of AI.

See advantage flywheels and hyper-Pareto distribution of profits if interested.

Our most recent post on Professor Li: "‘Godmother of A.I.’ Fei-Fei Li On Why You Shouldn’t Trust Any A.I. Company"