Tuesday, April 16, 2024

"Is Google's AI Actually Discovering 'Millions of New Materials?'" (GOOG; EVIL)

 Magic 8-ball says "No" (also Betteridge)

From 404 Media April 11: 

"In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical."

In November, Google’s AI outfit DeepMind published a press release titled “Millions of new materials discovered with deep learning." But now, researchers who have analyzed a subset of what DeepMind discovered say "we have yet to find any strikingly novel compounds" in that subset.

“AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies,” Google wrote of the finding, adding that this was “equivalent to nearly 800 years’ worth of knowledge,” that many of the discoveries “escaped previous human chemical intuition,” and that it was “an order-of-magnitude expansion in stable materials known to humanity.” The paper was published in Nature and was picked up very widely in the press as an example of the incredible promise of AI in science. 

Another paper, published at the same time and done by researchers at Lawrence Berkeley National Laboratory “in partnership with Google DeepMind … shows how our AI predictions can be leveraged for autonomous material synthesis,” Google wrote. In this experiment, researchers created an “autonomous laboratory” (A-Lab) that used “computations, historical data from the literature, machine learning, and active learning to plan and interpret the outcomes of experiments performed using robotics.” Essentially, the researchers used AI and robots to remove humans from the laboratory, and came out the other end after 17 days having discovered and synthesized new materials, which the researchers wrote “demonstrates the effectiveness of artificial intelligence-driven platforms for autonomous materials discovery.” 

But in the last month, two external groups of researchers have analyzed the DeepMind and Berkeley papers and published their own analyses that, at the very least, suggest this specific research is being oversold. Everyone in the materials science world that I spoke to stressed that AI holds great promise for discovering new types of materials. But they say Google and its deep learning techniques have not suddenly made an incredible breakthrough in the materials science world. 

In a perspective paper published in Chemistry of Materials this week, Anthony Cheetham and Ram Seshadri of the University of California, Santa Barbara selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is “credible,” “useful,” and “novel.” They believe that what DeepMind found are “crystalline inorganic compounds and should be described as such, rather than using the more generic label ‘material,’” which they say is a term that should be reserved for things that “demonstrate some utility.” 

In the analysis, they write “we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound.” 

"most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known"

There seems to be a deep dishonesty coming out of Google and Google AI.

And that would be entirely on the people at Google:

"Why AI bias is a systemic rather than a technological problem"

Following up on March 31's "The Purpose Of A System Is What It Does...":

“....There is after all,” Beer observed, “no point in claiming that the purpose of a system is to do what it constantly fails to do.”

In late February, when Google's Gemini generative AI caused some hubbub with its portraits of a black George Washington or a black Nazi, commenters missed the point of what the people at Google were doing. As Marc Andreessen (a guy who knows something about tech, having developed the first commercial browser, among other things) put it regarding these so-called 'mistakes':

Possibly related at 404 Media, April 12:

OpenAI Training Bot Crawls 'World's Lamest Content Farm' 3 Million Times in One Day