
Federal AI Test Cases Streamline Data Processing

AI leads focus on “low-hanging fruit” closely aligned with departmental needs and goals.


The most time-consuming part of any survey or study is turning the raw responses into categorized, actionable data. Some government leaders are looking to artificial intelligence to ease that burden.

Senior Economist Alex Measure and his team at the Bureau of Labor Statistics knew that burden well.

“We used to have to manually read through each of [hundreds of thousands of] descriptions and categorize them by hand,” Measure said, speaking at the NextGov Emerging Tech Summit Aug. 14. The Bureau of Labor Statistics’ injury and illness database is built upon text-based descriptions of work-related injuries and illnesses it receives from workplaces across the country each year.

“This year,” Measure said, “we did about 80% of that automatically using deep neural networks.”
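As a rough illustration of this kind of automated coding, a neural text classifier can learn standard injury codes from hand-labeled descriptions. The narratives, codes and pipeline below are illustrative assumptions, not BLS's actual system:

```python
# A minimal sketch of automated coding in the spirit described above:
# a small neural network learns injury codes from hand-labeled narratives.
# All data and model choices here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

narratives = [
    "worker slipped on wet floor and fractured wrist",
    "employee strained lower back lifting boxes",
    "technician burned hand on hot equipment",
    "laborer slipped on ice and broke an ankle",
    "clerk strained shoulder moving a file cabinet",
    "cook burned forearm on a hot stove",
]
codes = ["fracture", "strain", "burn", "fracture", "strain", "burn"]

# Character n-grams tolerate the spelling variation common in free-text
# reports; a small feed-forward network does the classification.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(narratives, codes)

print(model.predict(["driver fractured finger closing truck door"]))
```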

Measure’s problem — and the artificial intelligence solution that his team implemented at BLS — demonstrates the catalyst for AI initiatives across the government. Agencies that rely upon large quantities of data, including the Internal Revenue Service, Census Bureau and Defense Department, are piloting test cases and studying robotic process automation (RPA) to reduce operating costs and streamline the process from data collection to action.

The panelists at the NextGov event referred to AI initiatives in the short term as “low-hanging fruit,” but clarified that their agencies are not undertaking these initiatives without prior planning.

Greg Allen, chief of strategy and communications at the Defense Department’s Joint Artificial Intelligence Center, said there are three key components his team looks for when implementing AI.

“You need the data — and the right kind of data” as well as a user community willing to run a pilot program and mature algorithms that are relevant to the dataset, he said.

JAIC started its work about a year ago, and some of its minimum viable products will soon enter end-user testing, Allen said. The next step for his team is taking user feedback and using it to tie AI-enabled products more closely to end users’ needs.

IRS Deputy Chief Procurement Officer Harrison Smith concurred that an end-user focus is critical for any emerging tech implementation. He said everyone should ask three questions: “Who is the end user; what do they need help with; and are they actually going to use the solution?”

Smith is concerned that several of his counterparts want to implement a solution based on market research, but without having researched the solution’s capabilities or applications. “You’re absolutely sure you want a [sportscar] … to travel on a road you don’t have yet,” he said. “From my perspective, if you work to standardize a particular application or technology, and you’re not sure how it works with your team, within your architecture, my belief is that you’re headed in the wrong direction.” Agencies should first focus on “near-term awareness” to inform their approach to emerging technologies, he suggested.

BLS started with an approach centered on “a problem we need to address,” and only used AI “if we have the data,” Measure said. Rather than chasing after the most hyped solution, he said, BLS usually starts with supervised machine learning to ensure the data analytics algorithms are automating the task correctly.
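One hedged sketch of what “automating the task correctly” can look like in practice: measure the model against human-coded records and only auto-code predictions it is confident about, routing the rest to human reviewers. The `route` helper, the threshold and the `model` interface below are hypothetical, not a documented BLS workflow:

```python
# A hypothetical confidence gate, assuming a trained scikit-learn-style
# classifier exposing predict_proba and classes_. The 0.9 threshold is
# an illustrative assumption.
def route(records, model, threshold=0.9):
    """Auto-code confident predictions; send the rest to human coders."""
    probs = model.predict_proba(records)        # shape: (n_records, n_classes)
    confidence = probs.max(axis=1)
    labels = model.classes_[probs.argmax(axis=1)]
    auto_coded = [(rec, lab) for rec, lab, c in zip(records, labels, confidence)
                  if c >= threshold]
    needs_review = [rec for rec, c in zip(records, confidence) if c < threshold]
    return auto_coded, needs_review

# auto, manual = route(new_narratives, model)  # model as in the earlier sketch
```

A gate like this is one way an agency could end up coding a large share of records automatically while humans handle the ambiguous remainder.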

“We joke, ‘It’s not a Terminator, it’s a toaster oven,’” Smith said of near-term AI initiatives. “It takes a set task and does it for you.”

While this description sounds unexciting compared with the long-term hopes for AI, Smith is enthusiastic about the IRS’s approach to piloting AI and RPA. Rather than wait months or years to acquire fast-changing technologies, he said, his office is writing short, narrowly defined solicitations designed to “improve the data and manage human work.”

Short-term contracts allow the IRS to efficiently collect feedback on the programs’ effectiveness and transfer that learning to future RPA and AI projects. These solicitations do not propose a solution, Smith said, but instead say, “Here are my problems, and here are my goals,” encouraging industry to develop innovative solutions tied to those factors rather than the specifications of a solicitation.

Measure was similarly enthusiastic about bringing transfer learning to BLS. Transfer learning comprises “techniques allowing you to train models when you don’t have a lot of training data,” he said, and it will open up the range of tasks to which BLS can apply AI and RPA.
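A minimal sketch of what transfer learning often looks like in code, assuming a pretrained model from the Hugging Face transformers library. The model name, label count and freeze-the-encoder strategy are assumptions for illustration, not a description of BLS’s systems:

```python
# Transfer learning sketch: start from a model pretrained on generic text,
# then fine-tune only a small classification head on scarce labeled data.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3  # e.g., three hypothetical codes
)

# Freeze the pretrained language encoder so only the new classification
# head trains; this is what lets a few hundred labeled examples suffice.
for param in model.distilbert.parameters():
    param.requires_grad = False

# The model can now be fine-tuned on the small labeled dataset with a
# standard training loop or the transformers Trainer API.
```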

Measure also looks forward to applying differential privacy, a technique that allows teams to use data in machine learning and other applications while still protecting personal information and other sensitive aspects of the data.

“Data is the key ingredient to these systems,” he said. “These tools allow us to maintain usability and privacy.”
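The core idea behind differential privacy can be shown in a few lines: add calibrated random noise so aggregate statistics remain useful while any single person’s record is masked. The epsilon value, records and query below are illustrative assumptions:

```python
# A toy Laplace-mechanism sketch of differential privacy: noise scaled to
# 1/epsilon masks any individual's contribution to a count. (The
# sensitivity of a count query is 1, so Laplace(1/epsilon) noise suffices.)
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon=0.5):
    """Count matching records, plus Laplace noise calibrated to epsilon."""
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)

injuries = [{"nature": "fracture"}, {"nature": "burn"}, {"nature": "fracture"}]
print(private_count(injuries, lambda r: r["nature"] == "fracture"))
```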

As a senior acquisition official, Smith said he is most excited about working with the rest of the senior staff at IRS on explaining the value proposition of AI.

“For being an office that buys stuff, we’re in sales quite a bit,” he joked. He has found that selling AI acquisition in terms of increasing employee productivity has been effective so far. “We’re trying to get our teams to focus on things that are more valuable to themselves and are more valuable to the customer.”
