Equitable AI Requires Knowledge Sharing, Safeguards Against Bias

Federal AI adoption emphasizes collaboration and efforts to develop objective data models.

Technology leaders across government are looking to guide AI implementation through the adoption of proven best practices and a focus on building data models with minimal innate biases.

Speaking at the GovernmentCIO Media & Research Data Insights virtual summit, Anil Chaudhry, director of federal AI implementations at the General Services Administration's IT Modernization Centers of Excellence, described a growing public-sector approach of using knowledge sharing to facilitate AI applications that improve federal services.

Chaudhry noted that the most effective means of leveraging complex data analytics is through using proven methods already developed across the private sector.

“It's not about recreating the wheel. It's about taking what exists in the private sector, and applying it while making sure it's the right fit for an agency. So it's about leveraging commercial solutions and expertise from industry to deliver this enterprise-level transformation,” Chaudhry said.

Much of this has required a certain degree of calibration, particularly in being attentive to the modifications needed to apply a solution in support of an agency's distinct mission. Once these are accounted for, the time and resources needed to stand up an AI solution are markedly decreased.

“Regardless of the agency you get to, at the core of it their issues are parametrically similar. So it's really useful to suggest proven solutions to them. Instead of taking four years to get to where we need to, just by partnering with industry we can deliver results within six months, nine months,” Chaudhry said.

This represents a broader trend across government, where agencies are leaning on knowledge sharing to facilitate broader IT modernization that can persist irrespective of changes in presidential administrations.

“The fundamentals of good government don't change regardless of administration. The priorities may change in terms of which programs are more important. But the underlying issues with technology transformation and providing good service to citizens — those issues don't change,” Chaudhry said.

Chaudhry outlined that making effective use of AI solutions requires particular effort in data curation, a process that has required federal technologists to stay vigilant against biases or harmful limitations being built into foundational data models.

“AI at its core is taking patterns and expanding those patterns at scale. But those patterns and that algorithm are not developed magically somewhere. It's people developing those. So it's people bringing in their preexisting biases into any problem set,” Chaudhry said.

The solution, Chaudhry said, is to develop methods for preventing these biases from being built into the data sets and corrupting the potential efficacy of AI in providing critical insights.

“When I come into an agency, some of the leading questions I ask are: what does your program team that's going to implement the solution look like? Is there diversity on that team? Is there not just diversity in what we think of as civil rights diversity, but also diversity of thought? Are there people that can push back on the status quo, bring in challenge: what is that algorithm supposed to look like?” Chaudhry said.
