Artificial intelligence is poised to impact many aspects of society, and policymakers are keen to make sure federal agencies are integrating the technology safely and ethically. Technologists need to make sense of these new directives and compliance requirements to ethically harness the power of AI.
In health care, the possibilities for applying AI are significant. AI can assist in areas such as breast cancer detection and other health services. The Substance Abuse and Mental Health Services Administration (SAMHSA) is looking to implement a chatbot that acts as a virtual assistant, helping patients find answers during mental health and substance use crises.
Health leaders are also using AI to tackle fraud, waste and abuse.
“We use artificial intelligence and machine learning to find potential fraud that would not be apparent to the human eye. We try to use the latest technology to make potential fraud easier to detect more quickly,” a spokesperson from the Centers for Medicare & Medicaid Services told GovCIO Media & Research.
The Department of Veterans Affairs sees AI as the “next frontier” of health care.
“There are new possibilities [AI] is going to open for health IT, where AI may have its own ideas that come up, and we'll engage the people we're talking with,” VA AI Chief Gil Alterovitz told GovCIO Media & Research in an interview last year.
Putting AI Policies Into Practice
Various frameworks and directives guide the use of AI, including the White House's Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology (NIST)'s voluntary AI Risk Management Framework and the Defense Department's ethical AI principles.
Leidos prioritizes these directives in its work. To comply with prevailing regulations, laws and policies, the company relies on an internal framework that keeps its teams informed and up to date. This framework also guides the development of AI solutions that uphold both safety and efficacy standards.
“There is an internal framework that we use to make sure that it captures the current regulations and laws and policies. And it's a framework that gets enhanced as things change, but it's a framework that we all adhere to when we are developing our AI and machine-learning solutions,” Narasa Susarla, solution architect in Leidos’ Health Group, told GovCIO Media & Research.
To deliver the right solution, Susarla described combining technology delivered in a framework called FAIRS with the Leidos 4A methodology, which progresses from analysis through assistance and augmentation up to automation.
“This is basically a methodology for us to gradually introduce and increase the level of AI capability while building human trust and reducing error,” Ning Yu, chief NLP research scientist and technical fellow in Leidos’ AI/ML Accelerator, told GovCIO Media & Research. “We don't want to jump into automation directly, we want to be able to really understand the potential data bias, human bias as well as gradually build trust when working with humans.”
Guided by this framework, Leidos has successfully developed and deployed large language model (LLM) applications in the health domain and will continue to pursue initiatives for responsive adaptation to newer models. One initiative is to develop specific ethical assessments that help identify and mitigate risks throughout the lifecycle of generalized solutions, Yu said.
“When it comes to integrating generative AI, first and foremost we want to make sure we are still developing secure and responsible solutions with these new tools,” Yu said.
One generative AI project explores how the technology can improve the patient experience by helping patients complete claims forms more quickly.
“Generative AI can be used to assist medical providers filling out medical forms by pre-filling the forms based on hundreds or thousands of pages of patient’s medical record,” Yu added. “It can also help the providers diagnose, take notes, assist patient-doctor communication and also train staff.”
Partnerships, especially ones that keep humans involved, are essential to putting those advantages into action and creating well-rounded solutions.
“Most of it is really co-joined development activities,” Susarla said. “We're trying to look at all kinds of innovative solutions and some of these partners are helping us figure those out. Additionally, we are also focusing on enhancing image-processing capabilities and exploring various audio aspects related to communication.”
“We add the human into the workflow loop, especially in health because we are looking to develop AI that can support clinicians and lead to better care outcomes, improve productivity and efficiency of the care delivery,” Susarla added.