Congressman Pushes Faster AI Adoption in Government

Rep. Will Hurd, head of IT Subcommittee, reflects on challenges, opportunities and urgencies of federal AI adoption.

The Oversight and Government Reform IT Subcommittee has been on a mission to increase the presence of artificial intelligence in government.

After a series of congressional hearings on the matter, Rep. Will Hurd, R-Texas, has come to see how far we remain from robots taking over the world or replacing our colleagues. In those hearings, the subcommittee heard from members of government, industry, academia and nonprofits about the challenges and opportunities AI brings, including ethics, funding, capabilities, security, consumer expectations and more.

In a recent piece for Fortune, Hurd emphasizes that government adopting AI doesn’t mean government will be run by AI; rather, it will be “run by people with help from algorithms dramatically improving government services for all Americans,” he writes.

Hurd also believes AI can make it easier and faster for citizens to interact with government while improving how quickly government responds. And after reflecting on how industry uses tools like robotic process automation to replace lost ATM cards, Hurd asks, “why can’t we automate government services like renewing passports?”

Automation Saving Time and Money

Hurd stressed how many hours and dollars government could save by automating routine business processes, much as the private sector has. According to the Deloitte Center for Government Insights, such automation could save government 96.7 million federal hours and $3.3 billion each year.

And the government is already dabbling with bots. The General Services Administration’s Federal Acquisition Service has run successful RPA pilots, and its Emerging Technologies Program has built a governmentwide RPA community, alongside its AI, blockchain, and virtual and augmented reality communities, where agencies can share pilot programs and information. NASA’s Shared Services Center team has also successfully piloted bots in finance and human resources. The technology is gaining momentum; the challenge now is making bots a core part of IT modernization.

AI Can Tackle Waste, Fraud and Abuse

AI can consume large amounts of data and analyze it for patterns, anomalies and duplications. Hurd cited a 2016 Government Accountability Office report that found overpayments in the Medicare program totaled $60 billion; had AI been used to identify those overpayments faster, investigators could have focused on the costliest ones first and saved money.
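To make the idea of pattern and anomaly analysis concrete, here is a minimal sketch that flags unusually large payments with a robust statistical outlier test (a modified z-score based on the median absolute deviation). The sample data, column layout and 3.5 threshold are illustrative assumptions, not any agency's actual detection method.

```python
# A minimal sketch of flagging anomalous payments for human review.
# The claim amounts and 3.5 cutoff are made up for illustration.
import numpy as np

def flag_anomalous_payments(amounts, threshold=3.5):
    """Return indices of payments that sit far above the typical amount."""
    amounts = np.asarray(amounts, dtype=float)
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median))  # median absolute deviation
    if mad == 0:
        return np.array([], dtype=int)
    # Modified z-score: robust to a few extreme values in small samples.
    modified_z = 0.6745 * (amounts - median) / mad
    return np.flatnonzero(modified_z > threshold)

# Example: a handful of routine claim amounts plus one outsized payment.
claims = [120.0, 95.5, 130.0, 110.25, 98.0, 125.0, 10_500.0, 105.0]
print(flag_anomalous_payments(claims))  # -> [6], the suspicious payment
```

A real program would pair far richer models with investigator review, but the principle Hurd describes, surfacing the costliest anomalies first so people can act on them, is the same.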

In the first of the IT subcommittee's three-part AI hearing series, members of industry were asked to help demystify AI and discuss its potential place in government. Ian Buck, vice president and general manager of accelerated computing at NVIDIA, said one of the areas where AI could be most helpful to government was waste, fraud and abuse, along with cyber defense, health care, transportation and defense platform sustainment costs.

Buck said the credit and insurance industries already use AI to identify suspicious transactions, and PayPal uses AI to detect credit fraud, which has cut its fraud rate in half and saved billions of dollars.

So, why can’t the government pick up similar practices?

Addressing Security and Privacy

Hurd also finds it crucial that the government invest in AI to improve the security of citizens, especially as other countries, such as China, plan to invest heavily in AI and become world leaders in the technology. “It is in the interest of both our national and economic security that the United States not be left behind,” Hurd wrote.

Security also means protecting the nation from cyberwarfare as cybersecurity evolves, with good AI used to fight bad AI. Hurd cited Russia’s disinformation campaigns, which use AI to push fake news and false information or representations of people, as an example of bad AI. On the other side, the Pentagon’s Project Maven is working to automate the analysis of millions of hours of video collected by drones and sensors with AI-based algorithms, an example of good AI.

And there’s always the concern around biased AI and accountability. Hurd said this goes beyond auditable algorithms and data sets, to responsible data management and ensuring people’s privacy is protected and ethical design is used.

Privacy was discussed in the third and last AI hearing, where Ben Buchanan, a postdoctoral fellow at the Science, Technology, and Public Policy Program at Harvard Kennedy School’s Belfer Center for Science and International Affairs, discussed how to mitigate AI systems’ use of personal data to preserve citizen privacy.  

There are innovations like differential privacy, which adds “statistical noise” to a person’s data to obscure it while retaining a data set’s value, and on-device processing, which brings the AI system to the user rather than sending the user’s data to a central repository. These approaches require technical skill, but Congress can encourage AI companies to adopt stricter safeguards.
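As a rough illustration of the “statistical noise” idea, the sketch below applies the Laplace mechanism, a standard differential privacy technique, to a simple counting query. The dataset, the query and the epsilon value are assumptions made up for this example, not anything presented in the hearing.

```python
# A minimal sketch of differentially private counting via the Laplace mechanism.
# The citizen records and epsilon = 0.5 are illustrative assumptions.
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Count matching records, then add Laplace noise calibrated to epsilon.

    A counting query changes by at most 1 when any single person's record is
    added or removed (sensitivity = 1), so noise drawn from Laplace(1/epsilon)
    masks any individual's contribution while keeping the total roughly useful.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many people in a made-up dataset renewed a passport online?
citizens = [{"renewed_online": True}, {"renewed_online": False},
            {"renewed_online": True}, {"renewed_online": True}]
print(private_count(citizens, lambda r: r["renewed_online"]))
```

Smaller epsilon values add more noise, giving stronger privacy at the cost of accuracy; tuning that trade-off is part of the technical skill the hearing witnesses described.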

AI in the Federal Making

In the second AI hearing, government officials discussed how their agencies already use AI to work more efficiently or fund AI research and development. The committee learned that the Defense Advanced Research Projects Agency has funded more than 50 AI programs since the 1960s, but that more research is needed in areas such as common-sense reasoning and natural language processing. The National Science Foundation invests more than $100 million a year in programs spanning the AI technology stack, and the Homeland Security Department’s Science and Technology Cyber Security Division is exploring AI for predictive analytics on malware evolution and for detecting anomalous network traffic and behaviors to support defensive decision-making.

In other efforts, the Modernizing Government Technology Act was signed into law in 2017, so agencies have begun the process of modernizing outdated IT systems that have historically cost billions to maintain. Still, it’ll take time to implement new systems.

GSA’s Emerging Technology Office focuses on spreading knowledge of AI through government and supports the development of AI programs, but it doesn’t have a “concrete plan of action,” Hurd wrote. And though the IT Subcommittee is committed to the AI cause, Hurd said AI implementation needs to become a higher national priority, and the current challenges standing in the way need to be solved.