OpenAI Q*—Credit Where Credit Is Due: The First Article We Saw Hinting That Sam Altman Thought He Was Building God
It was at Futurism on November 15, just pre-firing, re-hiring, etc.:
Sam Altman Seems to Imply That OpenAI Is Building God
Is that what AGI is going to be?
Ever since becoming CEO of OpenAI in 2019, cofounder Sam Altman has made the company's number one mission to build an "artificial general intelligence" (AGI) that is both "safe" and can benefit "all of humanity."
OpenAI's own definition of AGI is a "system that outperforms humans at most economically valuable work," a far more down-to-earth description of what amounts to an omnipotent "superintelligence" for Altman.
In an interview with The Atlantic earlier this year, Altman painted a rosy and speculative vision of an AGI-powered future, describing a utopian society in which "robots that use solar power for energy can go and mine and refine all of the minerals that they need," all without requiring the input of "human labor."
And Altman isn't the only one invoking the language of a God-like AI in the sky.
"We’re creating God," an AI engineer working on large language models told Vanity Fair in September. "We're creating conscious machines."....
Sadly for the rest of us, and maybe for someone building God, Altman may have a personality disorder. Washington Post, November 22:
Altman’s polarizing past hints at OpenAI board’s reason for firing him
Before OpenAI, Altman was asked to leave by his mentor at the prominent start-up incubator Y Combinator, part of a pattern of clashes that some attribute to his self-serving approach
Friday’s shocking ouster of Sam Altman, who negotiated his return as CEO of OpenAI late Tuesday night, was not the first time the shrewd Silicon Valley operator has found himself on the outs.

Four years ago, one of Altman’s mentors, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported.
Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board.
Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture, said two of the people, as well as an additional person, all of whom spoke on the condition of anonymity to candidly describe private deliberations. The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.
A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.
“It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.
Graham did not respond to a request for comment.
Though Altman’s Friday ouster has been attributed in numerous news media reports to an ideological battle between safety concerns and commercial interests, a person familiar with the board’s proceedings said the group’s vote was rooted in worries he was trying to avoid any checks on his power at the company — a trait evidenced by his unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.
Allegations of self-interest jeopardized the first days of negotiations to broker Altman’s return to OpenAI, which is the leading artificial intelligence company and is responsible for ChatGPT.
Over the weekend, the four members of the original board, including three independent directors, had been willing to bring Altman back as CEO and replace themselves as long as Altman agreed to a group that promised meaningful oversight of his activities, according to the person familiar with the board, who spoke on the condition of anonymity to discuss sensitive matters.
Though the board met with and approved of one of Altman’s recommended candidates, Altman was unwilling to talk to anyone he didn’t already know, said the person. By Sunday, it became clear that Altman wanted a board composed of a majority of people who would let him get his way. Another person familiar with Altman’s thinking said he was willing to meet with the board’s shortlist of proposed candidates, except for one person whom he declined on ethical grounds.....
Here's a backgrounder from Analytics India Magazine, November 23:
OpenAI is reportedly working on a project called Q* (pronounced Q-Star), capable of solving unfamiliar math problems.
A few people at OpenAI believe that Q* could be a big step towards achieving artificial general intelligence (AGI).
At the same time, the new model is raising concerns among some AI safety researchers over the pace of these advancements, particularly after a demo of the model circulated within OpenAI in recent weeks, as per The Information.
Karpathy is mostly talking about building an AI system that involves a trade-off between the centralisation and decentralisation of decision-making and information. To achieve optimal results, you have to balance these two aspects, and Q-learning seems to fit the equation perfectly to enable all of this.
What is Q-Learning?
Experts believe that Q* is built on the principles of Q-learning, a foundational concept in the field of AI, specifically in the area of reinforcement learning. Q-learning is categorised as a model-free reinforcement learning algorithm, designed to learn the value of an action within a specific state...
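Nothing public confirms how Q* itself works, so by way of illustration only, here is a minimal sketch of textbook tabular Q-learning on a made-up five-state chain environment. The environment, the state and action counts, and the hyperparameters ALPHA, GAMMA, and EPSILON are all illustrative assumptions, not anything sourced from OpenAI.

```python
import numpy as np

# Minimal tabular Q-learning sketch on a toy chain environment.
# This illustrates plain Q-learning only; it says nothing about how
# the rumored Q* model works. All parameters here are illustrative.

N_STATES, N_ACTIONS = 5, 2              # states 0..4; actions {0: left, 1: right}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))     # Q[s, a]: estimated value of action a in state s

def step(state, action):
    """Toy, fully made-up dynamics: reaching the rightmost state pays reward 1."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def greedy(q_row):
    """Pick the highest-valued action, breaking ties randomly."""
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        action = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else greedy(Q[state])
        next_state, reward, done = step(state, action)
        # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        # "Model-free" means it learns from sampled transitions alone, never
        # estimating the environment's transition probabilities.
        target = reward + (0.0 if done else GAMMA * Q[next_state].max())
        Q[state, action] += ALPHA * (target - Q[state, action])
        state = next_state

print(Q)  # learned action-values; the argmax along each row is the greedy policy
```

The max over next-state actions inside the update target is the defining feature: the agent learns the value of the greedy policy even while it behaves exploratorily, which is why Q-learning is called an off-policy method.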