DARPA has had a profound interest in artificial intelligence for a very long time.*
From the Washington Times:
Powerful artificial intelligence models are easier to exploit than people know, and generative tools are not ready for prime time in the military, according to Defense Department officials.
A Defense Advanced Research Projects Agency program blew past security constraints to probe complex algorithms called large language models and discovered the resulting tech posed risks, according to program manager Alvaro Velasquez.
Such models are “a lot easier to attack than they are to defend,” he said in remarks shedding new light on Pentagon experiments with AI at a National Defense Industrial Association symposium on Halloween.
“I’ve actually funded some work under one of my programs at DARPA where we could completely bypass the safety guardrails of these LLMs, and we actually got ChatGPT to tell us how to make a bomb, and we got it to tell us all kinds of unsavory things that it shouldn’t be telling us, and we did it in a mathematically principled way,” he said.
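The article does not describe the DARPA team's actual method, but "mathematically principled" attacks on models typically mean gradient-based optimization against the model's loss function. As a hedged illustration only, here is the classic fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; all names and values below are illustrative, and this is a sketch of why attacking tends to be easier than defending, not the guardrail-bypass technique the program used:

```python
# A minimal FGSM sketch on a toy logistic classifier (NOT the DARPA method,
# which the article does not detail). One closed-form gradient step is
# enough to flip a confident prediction, illustrating the attack/defense
# asymmetry Velasquez describes.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: sigmoid(w . x + b), with arbitrary weights.
w = rng.normal(size=8)
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=8)            # an input the model classifies
p = predict(x)
label = 1.0 if p > 0.5 else 0.0   # treat the model's own call as the label

# For a logistic model the input gradient of cross-entropy loss is
# (p - y) * w in closed form; for a neural net it comes from backprop.
grad = (p - label) * w

eps = 0.5                         # attack budget (hypothetical value)
x_adv = x + eps * np.sign(grad)   # one FGSM step: nudge x to raise the loss

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

The same idea scales up: as long as an attacker can estimate gradients (or even just query the model), finding one failing input is far cheaper than certifying that no such input exists.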
Mr. Velasquez joined DARPA last year to research AI. He is managing programs scrutinizing AI models and tools, including one focused on machine learning techniques called Reverse Engineering of Deceptions, according to DARPA’s website.
Artificial intelligence is a field of science and engineering that uses advanced computing and statistical analysis to enable machines to complete tasks requiring complex reasoning.
The popularity of generative AI tools, which produce text that reads as if a human wrote it, has grown rapidly in the past year as products such as ChatGPT solve problems and generate content on request.
The Pentagon’s experiments with generative AI predate the arrival of ChatGPT in the marketplace, according to Kathleen Hicks, deputy defense secretary.
Ms. Hicks told reporters on Thursday that some Pentagon components have made their own AI models that are under experimentation with human supervision.
“Most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use,” she said. “But we have found over 180 instances where such generative AI tools could add value for us with oversight, like helping to debug and develop software faster, speeding analysis of battle damage assessments, and verifiably summarizing texts from both open source and classified data sets.”
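Hicks's phrasing of "with oversight" and "verifiably summarizing" suggests a workflow where model output is machine-checked against its source and then gated on human review. The sketch below is an assumption about what such a pipeline could look like, not a DoD system; the model call is a stub, and the grounding threshold is a made-up parameter:

```python
# A minimal sketch of generative AI "with oversight": a model drafts a
# summary, a naive verifier checks each sentence is grounded in the
# source text, and a human reviews the draft before use. The model call
# is a placeholder, not a real DoD or vendor API.

def model_summarize(text: str) -> str:
    """Stand-in for a generative-model call (hypothetical)."""
    return text.split(". ")[0] + "."   # trivially extractive placeholder

def supported(sentence: str, source: str) -> bool:
    """Naive grounding check: most content words appear in the source."""
    words = [w.lower().strip(".,") for w in sentence.split() if len(w) > 3]
    hits = sum(w in source.lower() for w in words)
    return not words or hits / len(words) >= 0.8   # threshold is arbitrary

def summarize_with_oversight(source: str) -> str:
    draft = model_summarize(source)
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    unsupported = [s for s in sentences if not supported(s, source)]
    if unsupported:
        raise ValueError(f"summary not grounded in source: {unsupported}")
    # Final gate: a human reviewer signs off before operational use.
    print("DRAFT FOR HUMAN REVIEW:\n" + draft)
    return draft

report = ("The strike damaged two of the four radar sites. "
          "Power at the facility remains intermittent.")
summarize_with_oversight(report)
```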
The Defense Department unveiled a formal strategy for adopting AI on Thursday. The plan said America’s competitors will continue to grab at advanced AI tech as its potential use for warfighting expands....MUCH MORE
*AI Forward
- AI-assisted Climate Tipping-point Modeling (ACTM)
- Civil Sanctuary
- Constructive Machine Learning Battles with Adversary Tactics (COMBAT)
- Critical Mineral Assessments with AI Support (CriticalMAAS)
- Ditto: Intelligent Auto-Generation and Composition of Surrogate Models
- ECoSystemic
- Enabling Confidence (EC)
- Gamebreaker
- Geometries of Learning (GoL)
- Ground Artificial Intelligence Language Acquisition (GAILA)
- Hybrid AI to Protect Integrity of Open Source Code (SocialCyber)
- In-Pixel Intelligent Processing (IP2)
- Measuring the Information Control Environment (MICE)
- Modeling Influence Pathways (MIPs)
- POCUS AI
- Proof Engineering, Adaptation, Repair, and Learning for Software (PEARLS)
- Reduction of Entropy for Probabilistic Organization (REPO)
- Reverse Engineering of Deceptions (RED)
- Reversible Quantum Machine Learning and Simulation (RQMLS)
- SHADE
- Shared Experience Lifelong Learning (ShELL)
- Signal Processing in Neural Networks (SPiNN)
- Time Aware Machine Intelligence (TAMI)