
DOD Is Standing Up a Pilot Center to Help Assess AI Trustworthiness

The Pentagon’s Center for Calibrated Trust Measurement and Evaluation will address challenges around assessing the trustworthiness of the department’s AI systems.

Under Secretary of Defense for Research and Engineering Heidi Shyu speaks at a press briefing on June 27, 2022. Photo Credit: Lisa Ferdinando / DVIDS

The Defense Department is in the midst of establishing a center that will help the U.S. armed forces evaluate the trustworthiness of military systems powered by artificial intelligence.

The Center for Calibrated Trust Measurement and Evaluation comes as the Pentagon is starting to accelerate the use of AI in military applications, according to Kim Sablon, principal director for trusted AI and autonomy at DOD’s Office of the Under Secretary of Defense for Research and Engineering.

“There’s a balance of roles and responsibilities between humans and machines, and there’s different levels of human autonomy interactions that we ought to be thinking about,” Sablon said Tuesday at the National Defense Industrial Association’s Emerging Technologies for Defense conference. “I just want to put it out there that at least we’re taking some critical steps to tackling those.”

The center is part of a broader push across the U.S. military to embrace AI, which offers vast opportunities but also presents a wide array of risks.

Earlier this month, the Air Force Research Laboratory said it successfully flew an XQ-58A Valkyrie drone entirely run by AI.

The lab’s Autonomous Air Combat Operations team developed the aircraft’s algorithms, which “matured during millions of hours in high fidelity simulation events, sorties on the X-62 VISTA, Hardware-in-the-Loop events with the XQ-58A and ground test operations.”

“The speed in which we’re achieving new things … it’s blowing my mind. I’ve literally been doing AI since 1973. I’m now on my 50th year of doing AI and, in that time, it’s never been as exciting as it is now,” said Steve Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory.

“Last December, we publicly acknowledged we flew in an Air Force F-16, with our AI bots controlling it, doing mission-related tasks. And just a couple of weeks ago, we publicly announced we flew a Kratos XQ-58 drone over the Gulf of Mexico, controlled by AI, doing mission-related tasks. Things have never happened this fast. It is an extremely exciting time,” Rogers added.

The Air Force requested $5.8 billion over the next five years in its budget for the Collaborative Combat Aircraft program. The funding will allow the service to build uncrewed, AI-operated weapon systems intended to augment crewed aircraft and help achieve air superiority.

The new Center for Calibrated Trust Measurement and Evaluation will be another tool in the DOD’s toolkit to address the core challenges of ensuring that AI military systems are reliable and accurate.

But experts say that while the technology is developing rapidly, the path to trustworthy AI will be long and complicated. Even defining what it means to have a trustworthy and reliable system could be a challenge.

“During the different stages of AI lifecycle through the design, development, deployment and regular monitoring of the systems, it’s really important to reach to a very broad sense of expertise … the tech community, but also … psychologists, sociologists, cognitive scientists to be able to help us understand the impact of the systems,” NIST Information Technology Laboratory Chief of Staff Elham Tabassi told GovCIO Media & Research in March.
