Ethical AI and the Role of Trust, Equity and Standards

Medical data experts outline the components behind ethical AI and data use.

As researchers and federal agencies adopt and apply data and artificial intelligence in their work, experts in the medical space called for trust, equity and, ultimately, standards to set ethical guardrails around the technology.

Trust and equity in AI are “inextricably linked,” said Stanford University Associate Professor Tina Hernandez-Boussard during GovernmentCIO Media & Research’s Data Insights event Thursday. When developing AI, researchers must be mindful of bias in the health data feeding their algorithms, which can produce biased models and diminish trust in them.

“When we’re developing these technologies, we need to be very sensitive to that information,” Hernandez-Boussard said. “We need to be very sensitive to the types of data that we use in these algorithms, who it represents and how we can do a better job being more open and transparent about the inconsistencies in the data, about the biases in the data.”
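In practice, that transparency can start with something as simple as auditing how well each demographic group is represented in a training dataset. The sketch below is a hypothetical illustration of that kind of check, not anything presented at the event: it assumes tabular data in a pandas DataFrame with a demographic column such as `sex`, and the `min_share` threshold is an arbitrary cutoff chosen for illustration.

```python
import pandas as pd

def representation_audit(df: pd.DataFrame, column: str, min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the dataset and flag under-represented groups.

    Illustrative only: `min_share` is an arbitrary cutoff, not a clinical
    or regulatory standard.
    """
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < min_share
    return shares

# Hypothetical usage on a tiny synthetic cohort:
cohort = pd.DataFrame({"sex": ["F", "F", "M", "F", "M", "F", "F", "F"]})
print(representation_audit(cohort, "sex", min_share=0.3))
```

An audit like this only surfaces who is missing from the data; deciding what to do about it, and disclosing it openly, is the transparency Hernandez-Boussard describes.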

National Institutes of Health All of Us Research Program Policy Director Katherine Blizinsky added that trust is better built when researchers set clear boundaries around acceptable uses of data, consulting with groups that could be disproportionately and negatively impacted by potential bias. Group harm, she said, is a dangerous consequence of not building trust and equity into AI development.

“How do we approach those communities and solicit feedback on making policies that make sense for them and make policies that prevent that group harm?” Blizinsky said. “We are in the process of learning how to do that, but it is something that we need to be committed to if we’re going to be successful.”

Hernandez-Boussard also stressed collaboration with stakeholder communities in developing AI. Diverse perspectives within a research team, as well as across affected communities, can help clinicians and policymakers think differently about how research questions are framed and addressed with algorithms.

Getting to ethical AI and data usage will also take a series of interlinked policies, Blizinsky said, such as the compliance protections within the Federal Information Security Management Act (FISMA) and data access standards. It’s important, both experts said, to balance protection and bias prevention with the drive for innovation, as all of these components are essential to research.

Even with the ad hoc policies that federal agencies and researchers may adopt, data and AI should be held to more universally enforced standards, especially in the medical field, Hernandez-Boussard said. Her team at Stanford developed MINimum Information for Medical AI Reporting (MINIMAR), a set of recommended reporting standards for AI applied in health care, as one area to consider.

“We need to know information on where the data is coming from, what’s the architecture or the technology being developed, what’s the output,” Hernandez-Boussard said. “We need to develop standards on what is the minimal amount of information that we need, and what are our standards regarding metrics, and what type of metrics should be applied.”
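To make those categories concrete, the sketch below shows what a minimal reporting record along those lines might look like. The field names are a loose, hypothetical paraphrase of the categories Hernandez-Boussard lists (data source, population, architecture, output, metrics), not the published MINIMAR schema, and every value is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class MinimalModelReport:
    """Hypothetical minimal reporting record, loosely inspired by the
    categories quoted above; not the published MINIMAR specification."""
    data_source: str   # where the training data comes from
    population: str    # who the data represents
    architecture: str  # the model or technology being developed
    output: str        # what the model produces
    metrics: dict = field(default_factory=dict)  # evaluation metrics and values

# Hypothetical example (all values are invented for illustration):
report = MinimalModelReport(
    data_source="EHR data, single academic medical center",
    population="Adult inpatients, 2015-2020 encounters",
    architecture="Gradient-boosted decision trees",
    output="30-day readmission risk score",
    metrics={"AUROC": 0.81, "calibration_slope": 0.95},
)
print(report)
```

The point of a minimum-information standard is that a record like this travels with the model, so reviewers can judge whom the data represents and how performance was measured before the model is deployed.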

Until national standards for the ethics and use of data and AI are in place, however, Blizinsky said it is important for researchers to understand their responsibility to weave ethical policies and guardrails into their work, building trust and equity as they innovate.

“We need to take a step to acknowledge our social responsibility, and it’s only the beginning really because dealing with the value implications of our work is part of the responsible conduct of research,” Blizinsky said. “We need to press the role, the political, the social and the policy issues that are at stake. And I think that a good start is to talk about it, to think about it, to engage with other scientists about it, to engage with other non-scientists about it and especially thinking about it in terms of engaging with, again, those populations that could be disproportionately affected by those downstream effects.”
