
DOD CDAO Rethinks Adopting Department-Wide AI Acquisition Guidance

The AI office is considering whether to provide more guidance on AI acquisition.

U.S. Air Force Tech. Sgt. Alyssa Wier, weapons director assigned to the 176th Air Defense Squadron, Alaska Air National Guard, tests the new Battle Management Training NEXT system Aug. 26, 2021. Photo Credit: Maj. Kimberly Burke / DVIDS

As the Defense Department’s Chief Digital and Artificial Intelligence Office thinks through its acquisition best practices for artificial intelligence, officials said it might not develop department-wide AI acquisition guidance after all.

“So, judging from my response, we are not far along in providing department AI acquisition guidance, but I don’t know if we necessarily need to,” the office’s Deputy Chief Margaret Palmieri said at a RAND Corporation event Tuesday.

“There’s a testing evaluation piece around development that’s not entirely acquisition. It could be, and I think we’re trying to find that balance of … what degree is this a core set of tools that we ask people to use to what is it, I hate to say it, a checklist, but to what extent is it a set of criteria that developers inside of government or industry must meet in order to field AI, whether that’s on the development … or operational test and evaluation side?” she added.

Palmieri noted that the office already has its core acquisition vehicles, including Tradewind, a suite of services designed to accelerate the adoption of AI, machine learning and data analytics solutions across the department. Last year, the platform launched the Tradewind Solutions Marketplace, a digital repository of post-competition solutions designed to help the DOD solve its most pressing challenges around AI and machine learning technologies.

“Tradewind is the platform really focused on kind of three things: increasing speed, enabling a variety of industry partners to play, and just agility in how well the contract and the needs can meet both the industry and the user, but not really on the path for policy quite yet,” Palmieri said.

“What we’ve really been trying to wrestle with instead is not to over-centralize because the department is so diverse and distributed and so large that we want innovation to happen at the edge,” she added.

The office has also been exploring how it can apply large language models to defense use cases. It has been experimenting with different generative AI models through a series of experiments called the Global Information Dominance Experiments (GIDE) to test solutions around AI processes.

“Really, just to test out, you know, how do they work? Can we train them on DOD data, or tune them on DOD data? How do our users interact with them? And then what metrics do we want to come up with based off of what we were seeing to facilitate evaluation of these tools? Because there aren’t really great evaluation metrics for generative AI yet,” Palmieri said.

Palmieri emphasized that DOD needs to pay more attention to the possible negative consequences of the technology, particularly “hallucination,” in which the technology generates false information.

“There are going to be use cases that it’s really, really good for, and there are going to be use cases it’s not good for. What we found is there’s not enough attention being paid to the potential downsides of generative AI — specifically, hallucination,” Palmieri said.

“This is a huge problem for DOD and it really matters for us and how we apply that, and so we’re looking to work more closely with industry on these types of downsides and not just hand-wave them away,” she added.
