
VA’s Road to Trustworthy AI with Transparency, Standards

The agency is working to guide its artificial intelligence capabilities through a foundational integration of AI ethics principles.


As federal agencies advance their artificial intelligence capabilities, they are also homing in on ethical considerations to ensure data and algorithms are trustworthy and equitable from the start. The Department of Veterans Affairs has made developing trustworthy AI a priority of its modernization program, following the trustworthy AI executive order signed in December 2020, NIST standards and internal directives maintained by the agency’s National Artificial Intelligence Institute (NAII).

Speaking at the GovLoop AI and Ethics seminar earlier this month, NAII Director Gil Alterovitz outlined VA’s comprehensive approach to ensuring AI development follows rigorous standards that enable best use and an ethical approach to leveraging new technology. This requires a framework that renders AI both ethically sound and technically reliable, Alterovitz noted, and that upholds the requirements of both.

“It is useful to note that trustworthy artificial intelligence is ethical AI, but not all ethical AI is always trustworthy AI. Trustworthy AI when it’s implemented is ethical, but also removes potential biases and protects privacy. Upholding this allows for increased adoption of artificial intelligence,” Alterovitz said.

VA is working to ensure that the dual requirements of privacy and consent are maintained when building AI models, and that biases that could lead to inaccuracies and inefficacies are avoided in the architecture of the models themselves.

“It is important to look at the underlying AI models themselves. They can be designed and programmed correctly, but if the training data is flawed then the models can still have biased outcomes. Everything can be transparent, but it’s still not producing the kind of output that you want. So there are a number of factors that we’re looking at when measuring success in terms of performance. We want to have increased model effectiveness and improved accuracy, but we want to make sure that cybersecurity and private health information are protected as well,” Alterovitz said.
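That distinction, a correctly built model undermined by skewed training data, is easy to demonstrate with a subgroup audit. The following sketch is a hypothetical illustration in Python with scikit-learn, not VA’s actual tooling or data: the model is trained correctly, yet an under-represented group with noisier labels scores measurably worse than the aggregate accuracy suggests.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical records: five features, a clean underlying outcome, and a
# group label. Group 1 is under-represented (~10% of records) and its
# labels are noisier, mimicking flawed data collection.
X = rng.normal(size=(n, 5))
y_clean = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
group = (rng.random(n) < 0.1).astype(int)
flip = rng.random(n) < np.where(group == 1, 0.30, 0.05)
y = np.where(flip, 1 - y_clean, y_clean)

# The model itself is designed and trained correctly...
model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# ...but only a per-group audit surfaces the biased outcome.
print(f"overall accuracy: {accuracy_score(y, preds):.3f}")
for g in (0, 1):
    m = group == g
    print(f"group {g}: accuracy {accuracy_score(y[m], preds[m]):.3f} (n={m.sum()})")

The point of the sketch is the measurement choice: a single aggregate metric would report success, while the per-group breakdown reveals the disparity that flawed training data produces.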

This has entailed transparency about how data is used in VA’s AI models, while encouraging collaboration and buy-in from veterans to help apply these models in health care settings.

“We’ve been looking at gathering input on what are the use cases where AI could make the biggest difference and have the least risks. We have a veteran engagement board that we’ve been engaging to get these important use cases. We recently put out guidance documents where we discuss how we are working on different barriers in different priority AI use cases so people can look those up in a public document. And we’re also gathering additional use cases, whether they be in the clinical area and looking at diagnosis and prognosis, or in processing text,” Alterovitz said.

This also includes an internal focus on understanding how foundational data inputs and models produce their results, while making sure the process is fully disclosed to the public so its technical implications are fully known.

“Accountability is very important. But one that I wanted to really emphasize is that AI should be understandable. If what we’re doing in the models is done in such a way that they’re understandable and we can communicate how they’re working, then that will help give people confidence in basic AI and machine learning, so that people can understand the insights and can see where they come from. As you trace that out, you know where it is using which kind of data, and then it can give you confidence that it’s not releasing any data that it should not,” Alterovitz said.
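One common way to make that kind of traceability concrete is permutation importance: shuffle each input in turn and measure how much the model’s performance drops, which shows which data the model actually relies on. The sketch below is a hypothetical illustration in Python with scikit-learn; the feature names are invented, and this is not a description of VA’s models.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "lab_result", "visit_count", "noise_a", "noise_b"]

# Hypothetical records: only the first three features drive the outcome.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle one feature at a time and record how much accuracy falls.
# A near-zero drop means the model effectively ignores that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")

A report like this lets reviewers trace which kinds of data a model is using, and confirm that fields it should not rely on contribute nothing to its output.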

As a core priority, applying AI to research, health care and agency services is an ongoing process, and ensuring AI is modeled and leveraged ethically from the start is essential to producing the best possible outcomes and maintaining public trust.

“AI is a journey. It’s not a destination. You don’t just build this AI and it’s done. It’s more like a path. To get toward adoption, trustworthy AI is one of the waystations along that journey … We’re at a very critical time where there are some areas where AI is becoming more efficient than humans are, but there’s many areas where humans are better. We’re at that inflection point, so decisions we make now will have a big influence on the future,” Alterovitz said.
