Thursday, November 14, 2024

"OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI"

From Bloomberg, November 13:

Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.

OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of powerful AI that outperforms humans.

But the model, known internally as Orion, did not hit the company’s desired performance, according to two people familiar with the matter, who spoke on condition of anonymity to discuss company matters. As of late summer, for example, Orion fell short when trying to answer coding questions that it hadn’t been trained on, the people said. Overall, Orion is so far not considered to be as big a step up from OpenAI’s existing models as GPT-4 was from GPT-3.5, the system that originally powered the company’s flagship chatbot, the people said.

OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models. At Alphabet Inc.’s Google, an upcoming iteration of its Gemini software is not living up to internal expectations, according to three people with knowledge of the matter. Anthropic, meanwhile, has seen the timetable slip for the release of its long-awaited Claude model called 3.5 Opus.

The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems. Orion’s unsatisfactory coding performance was due in part to the lack of sufficient coding data to train on, two people said. At the same time, even modest improvements may not be enough to justify the tremendous costs associated with building and operating new models, or to live up to the expectations that come with branding a product as a major upgrade.

There is plenty of potential to make these models better. OpenAI has been putting Orion through a months-long process often referred to as post-training, according to one of the people. That procedure, which is routine before a company releases new AI software publicly, includes incorporating human feedback to improve responses and refining the tone for how the model should interact with users, among other things. But Orion is still not at the level OpenAI would want in order to release it to users, and the company is unlikely to roll out the system until early next year, one person said. The Information previously reported some details of OpenAI’s challenges developing its new model, including with coding tasks.

These issues challenge the gospel that has taken hold in Silicon Valley in recent years, particularly since OpenAI released ChatGPT two years ago. Much of the tech industry has bet on so-called scaling laws that say more computing power, data and larger models will inevitably pave the way for greater leaps forward in the power of AI.
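The scaling laws mentioned here are empirical power-law fits relating model scale to loss, in the style of Kaplan et al. (2020). As a purely illustrative sketch of why "diminishing returns" follow naturally from such a curve, the snippet below evaluates a hypothetical power-law loss; the constants are in the ballpark of published fits but are used here only for illustration, not as a description of any company's actual models:

```python
# Illustrative sketch of an empirical "scaling law": loss falls as a
# power law in model size N. Constants are hypothetical, chosen only
# to show the shape of the curve, not taken from any real system.

def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Hypothetical power-law loss curve: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Diminishing returns: each 10x increase in parameter count shaves
# a smaller absolute slice off the loss than the previous 10x did.
sizes = [1e9, 1e10, 1e11, 1e12]
losses = [scaling_loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

assert all(g > 0 for g in gains)       # loss keeps falling...
assert gains[0] > gains[1] > gains[2]  # ...but by less at each step
```

On a curve like this the loss never stops improving with scale, but each additional order of magnitude of compute buys a smaller absolute gain, which is consistent with the "modest improvements" at "tremendous costs" the article describes.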

The recent setbacks also raise doubts about the heavy investment in AI and the feasibility of reaching an overarching goal these companies are aggressively pursuing: artificial general intelligence. The term typically refers to hypothetical AI systems that would match or exceed humans on many intellectual tasks. The chief executives of OpenAI and Anthropic have previously said AGI may be only several years away.

“The AGI bubble is bursting a little bit,” said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face. It’s become clear, she said, that “different training approaches” may be needed to make AI models work really well on a variety of tasks — an idea a number of experts in artificial intelligence echoed to Bloomberg News.

In a statement, a Google DeepMind spokesperson said the company is “pleased with the progress we’re seeing on Gemini and we’ll share more when we’re ready.” OpenAI declined to comment. Anthropic declined to comment, but referred Bloomberg News to a five-hour podcast featuring Chief Executive Officer Dario Amodei that was released Monday.

“People call them scaling laws. That’s a misnomer,” he said on the podcast. “They’re not laws of the universe. They’re empirical regularities. I am going to bet in favor of them continuing, but I’m not certain of that.”

Amodei said there are “lots of things” that could “derail” the process of reaching more powerful AI in the next few years, including the possibility that “we could run out of data.” But Amodei said he’s optimistic AI companies will find a way to get over any hurdles.

Plateauing Performance
The technology that underpins ChatGPT and a wave of rival AI chatbots was built on a trove of social media posts, online comments, books and other data freely scraped from around the web. That was enough to create products that can spit out clever essays and poems, but building AI systems that are smarter than a Nobel laureate — as some companies hope to do — may require data sources other than Wikipedia posts and YouTube captions....

....MUCH MORE

If interested, see also:

November 12 - "OpenAI reportedly developing new strategies to deal with AI improvement slowdown"  

November 11 - "Andreessen Horowitz Founders Notice A.I. Models Are Hitting a Ceiling"