
Is NIST’s AI Risk Management Framework a Model For Agency AI Development?

A new bill could help industry and the public grapple with integrating the technology ethically.


Federal leaders are calling attention to NIST’s AI framework as a guide that can help agencies develop AI capabilities responsibly and efficiently.

“With the NIST AI risk management framework, you have something that’s already been put out there, vetted out and tested both by the private sector and public sector,” said Rep. Ted Lieu during an Axios event this week. “Let’s mandate the NIST AI Risk Management Framework for the federal government as well as for people who want to contract with the federal government.”

Agencies see great promise in AI, especially generative AI, across all sectors. Industry analysts this month advised CIOs to prioritize generative AI, while federal officials see it helping save time and money.

But ensuring AI is integrated and used with ethical principles in mind is a challenge, especially when agencies’ oversight extends only so far into the supply chains of their vendors and partners.

Lieu’s comments suggest the government can use its acquisition strategy to ensure that firms contracting with agencies commit to the NIST framework. Lieu cited the White House’s collaborative Blueprint for an AI Bill of Rights as a step in the right direction.

“[The White House has] convened stakeholders, they have gotten voluntary commitments from a number of AI companies,” Lieu said. “We’ve got something that looks like a pretty good framework. Let’s now apply that to the federal government and to people who want to contract with the federal government.”

It Starts With Education

A common concern about AI is that many people find it intimidating and mysterious. That’s part of the reason the Senate is holding a series of insight forums on AI use in the public and private sectors. Lieu recently introduced a bill in the House to educate the public on ways to use the emerging technology responsibly.

“I think we need to do more to educate both members [of Congress] as well as the American public,” Lieu said. “There’s so much going on in artificial intelligence right now that you really need a lot of voices that can come in and talk about all the different aspects of it.”

Lieu echoed concerns that many officials have about generative AI, including the potential for misinformation, bias and other inaccuracies that can come from bad input data. Learning to discern good AI output from bad is key.

“If you look at the large generative AI models, they are not designed to seek the truth, … they’re essentially popularity models,” Lieu said. “I don’t want people to think just because it came from AI that, therefore, it is absolutely true. Sometimes it’s exactly quite the opposite.”

The Human Factor

While federal leaders repeatedly say out-of-control AI turning on humanity is far-fetched, more realistic concerns center on inaccuracies, bad information and poor control. Sanja Basaric, former AI program lead at the Department of Health and Human Services, said the human element can’t be lost in the rush to adopt AI in government.

“We’re talking about human lives and safety and the rights of Americans. We cannot get that wrong. There has to be a human that makes the final decisions for those important, impactful AI situations,” Basaric said during the GovCIO Media & Research Health IT Summit in September.

Lieu echoed that humans need to oversee AI decisions and that rules for that oversight need to be established.

“There always has to be a human in the loop,” Lieu said. “It goes back to the fundamental principle with what we see with AI right now: that it is not designed to seek the truth.”

Where AI Is Headed

Agencies are standing up AI throughout government. The Department of Veterans Affairs is already using it to treat millions of veterans, the Homeland Security Department is adopting AI technology, and other agencies are at different points of exploration.

The Defense Department, for example, has been working with AI for decades. In August, the Air Force Research Laboratory said it successfully flew an XQ-58A Valkyrie drone run entirely by AI. The department is also standing up the Pentagon’s Center for Calibrated Trust Measurement and Evaluation, which will address challenges around assessing DOD AI systems.

“The speed in which we’re achieving new things … it’s blowing my mind. I’ve literally been doing AI since 1973. I’m now on my 50th year of doing AI and, in that time, it’s never been as exciting as it is now,” said Steve Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory.

Lieu pledged that his bill would help educate the public about AI, no matter their starting point.

“You have people who know a lot about AI, you have people that have never experienced it, and then you have people in between,” Lieu said. “That’s why we should have a national Blue Ribbon AI Commission that can establish some of that common ground.”
