We noted this specific instance in passing in Monday's "Elon Musk's Grok AI Chatbot Turns On Its Creator (also Goes Woke)".
From Fast Company, December 14:
Researchers worry AI bots like Grok are already showing signs of larger-scale problems.
In the year since ChatGPT was released to the public, researchers and experts have warned that the ease with which content can be created using generative AI tools could poison the well, creating a vicious circle where those tools produce content that is then used to train other AI models.
That so-called “model collapse”—which would hollow out any “knowledge” accrued by the chatbots—appears to have come true.
Last week, X user Jax Winterbourne posted a screenshot showing that Grok, the large language model chatbot developed by Elon Musk’s xAI, had (presumably unintentionally) plagiarized a response from rival chatbot-maker OpenAI. When asked by Winterbourne to tinker with malware, Grok responded that it could not, “as it goes against OpenAI’s use case policy.”
“This is what happened when I tried to get it to modify some malware for a red team engagement,” Winterbourne explained in his post, suggesting that the response could be evidence that “Grok is literally just ripping OpenAI’s code base.”
That explanation was denied by Igor Babuschkin, a member of technical staff at xAI who has previously worked for both OpenAI and Google DeepMind. “Don’t worry, no OpenAI code was used to make Grok,” he replied on X.
Instead, it was model collapse—though Babuschkin didn’t use those exact words. “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data,” he wrote. “This was a huge surprise to us when we first noticed it.” Grok was notably set up to pull from livestreams of internet content, including X’s feed of posts, which was identified as a potential issue by experts who spoke to Fast Company a month ago.
“It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline,” says Catherine Flick, a professor of ethics and games technology at Staffordshire University.....
....MUCH MORE
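To make the "vicious circle" in the excerpt concrete, here is a minimal, purely illustrative sketch of model collapse. It has nothing to do with xAI's or OpenAI's actual training pipelines: a 1-D Gaussian stands in for a generative model, and the tail-truncation step is an assumption meant to mimic the way generated text under-represents rare content. Each generation is "trained" (fit) only on the previous generation's outputs.

```python
# Toy sketch of "model collapse" (illustrative only, not any real LLM pipeline).
# A 1-D Gaussian plays the role of a generative model; each new generation is
# fit to samples produced by the previous one. The truncation step is an
# assumption standing in for models under-sampling rare events.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(10):
    mu, sigma = data.mean(), data.std()           # "train" this generation's model
    print(f"gen {generation}: mean={mu:+.3f}  std={sigma:.3f}")

    samples = rng.normal(mu, sigma, size=10_000)  # model-generated "web text"
    # Drop everything beyond two standard deviations to mimic lost tails;
    # the next generation only ever sees this machine-generated, clipped data.
    data = samples[np.abs(samples - mu) < 2 * sigma]
```

Run it and the fitted standard deviation shrinks generation after generation, from 1.0 toward a fraction of its original value: a cartoon version of the "hollowing out" of knowledge the article describes when models keep learning from other models' outputs.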
Previously: Yeah, like self-referential doom loops.
Also:
- Artificial Data To Train Artificial Intelligence
- "ChatGPT Isn’t ‘Hallucinating.’ It’s Bullshitting."
- Embrace Your Hallucinating ChatBot: "Sundar Pichai Opens Up About Google Bard's Trippy Troubles: 'No One In The Field Has Yet Solved Hallucination Problems'"