Thursday, November 16, 2023

"Pentagon experiments find generative AI easy to exploit" (DARPA)

DARPA has had a profound interest in artificial intelligence for a very long time.*

From the Washington Times:

Powerful artificial intelligence models are easier to exploit than people realize, and generative tools are not ready for prime time in the military, according to Defense Department officials.

A Defense Advanced Research Projects Agency program blew past security constraints to probe complex algorithms called large language models and discovered that the resulting tech posed risks, according to program manager Alvaro Velasquez.

Such models are “a lot easier to attack than they are to defend,” he said in remarks shedding new light on Pentagon experiments with AI at a National Defense Industrial Association symposium on Halloween.

“I’ve actually funded some work under one of my programs at DARPA where we could completely bypass the safety guardrails of these LLMs, and we actually got ChatGPT to tell us how to make a bomb, and we got it to tell us all kinds of unsavory things that it shouldn’t be telling us, and we did it in a mathematically principled way,” he said.

Mr. Velasquez joined DARPA last year to research AI. He is managing programs scrutinizing AI models and tools, including one focused on machine learning techniques called Reverse Engineering of Deceptions, according to DARPA’s website.

Artificial intelligence is a field of science and engineering that uses advanced computing and statistical analysis to enable machines to complete tasks requiring complex reasoning.

The popularity of generative AI tools, which produce text as if it were written by a human, has grown rapidly in the past year as products such as ChatGPT solve problems and generate content on request.

The Pentagon’s experiments with generative AI predate the arrival of ChatGPT in the marketplace, according to Kathleen Hicks, deputy defense secretary.

Ms. Hicks told reporters on Thursday that some Pentagon components have built their own AI models, which are being tested under human supervision.

“Most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use,” she said. “But we have found over 180 instances where such generative AI tools could add value for us with oversight, like helping to debug and develop software faster, speeding analysis of battle damage assessments, and verifiably summarizing texts from both open source and classified data sets.”

The Defense Department unveiled a formal strategy for adopting AI on Thursday. The plan said America’s competitors will continue to grab at advanced AI tech as its potential use for warfighting expands....

....MUCH MORE
*
AI Forward
DARPA's initiative to explore future directions of trustworthy artificial intelligence for national security....
 
And some current programs: