AI Game-Changers in Federal IT

Leaders and stakeholders across federal government agencies weigh the implications of artificial intelligence.

For something as, well, “artificial” as artificial intelligence, experts often use nature metaphors when they talk about it. DARPA officials regularly invoke the three waves of AI when they discuss the history of AI development and ongoing AI projects. MeriTalk’s upcoming 2019 Cloud Computing Brainstorm is aptly titled, “Modernization Tsunami: Ride the Cloud Wave.” And Dr. Timothy Persons, chief scientist and managing director of the Government Accountability Office, recently presented “The A.I. Wave is Coming. Will You Surf It or Risk Getting Swamped?” at an April FCW Workshop on preparing agencies for AI and automation.

“Are you going to ride the wave or be subsumed by the wave?” Persons asked during his presentation.

Without going too far down the rabbit hole, it might be worth examining why experts are framing the discussions around AI like this. Is there something about AI reminiscent of the power of the ocean? Are we worried it will be uncontrollable, calm one moment and raging the next? Even if we prepare for it, will there be moments when all of our preparation is for nothing, and we are subjected to forces beyond our control?

Perhaps the simple answer is the most accurate, and the water metaphor is an evocative, but ultimately harmless, description that lets nonexperts better understand an inherently technical subject.

Regardless of the implications of using water metaphors to frame the narrative around AI, what is clear is that AI and related emerging technologies are here to stay, from civilian agency uses like at the National Science Foundation (NSF) and the Department of Veterans Affairs to military applications like at the U.S. Naval Research Laboratory and across the Defense Department.

Federal CIO Suzette Kent highlighted the transformative nature of AI across the board, from national security to agriculture, medicine and transportation, as the reason the federal government is making its current AI push.

“The transformative capabilities we’re talking about now help us solve some of our most complex problems faster and in ways that we couldn’t even imagine many years ago,” Kent said at the AFCEA Washington, D.C. Artificial Intelligence and Machine Learning Tech Summit March 27.

Dr. Lynne Parker, assistant director for artificial intelligence at the White House Office of Science and Technology Policy, also emphasized the current administration’s focus on AI and the resources being made available to enhance the country’s AI capabilities.

“At the federal level … we’re doing our part to try to help move the nation along together,” Parker said. “This administration is committed to our continued leadership in AI, ensuring that AI benefits the American people and it reflects our American values.” Parker also said that AI is something the country will need to grapple with, signaling an awareness of both the benefits and the drawbacks that may arise.

Parker highlighted the February 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence as a guide for how the federal government can accelerate its AI leadership, using AI.gov, which launched in mid-March, as an example of how the government is beginning to implement AI across agencies. AI.gov is a portal to all of the AI activities happening in federal agencies. The website’s overview begins with the imposing statement: “The age of artificial intelligence (AI) has arrived.”

AI.gov breaks down the multitude of AI-related categories and real-world examples into five sections, starting with the AI executive order and continuing with “AI for American Innovation,” “AI for American Industry,” “AI for the American Worker” and “AI with American Values.” Each section has various subcategories and links to important documents, including the October 2016 National Artificial Intelligence Research and Development Strategic Plan, GAO’s AI report and the charter of the National Science and Technology Council (NSTC) Select Committee on AI.

With all of the AI interest and investments across the federal government, opening up channels of communication to allow for sharing best practices, successes and failures will be necessary to effectively leverage AI’s benefits. “Regardless of how much we invest in AI R&D, we need to make sure that we’re coordinating across the federal government,” Parker said.

Within the military, too, AI technologies have tremendous potential to increase lethality and effectiveness and to continue providing the U.S. military with acceptable overmatch. Robert Work, former deputy secretary of defense and currently senior counselor for defense and distinguished senior fellow for defense and national security at the Center for a New American Security (CNAS), discussed the future of the U.S. approach to implementing AI in the military at the AFCEA Washington, D.C. Artificial Intelligence and Machine Learning Tech Summit. Work, who was also recently appointed co-chair of the National Security Commission on Artificial Intelligence, spoke of a new kind of AI-enabled warfare that sounds like something out of a science fiction movie: algorithmic warfare.

“If we’re going to succeed against a competitor like China that is all-in in this competition … we’re going to have to grasp the inevitability of AI and adapt our own innovation culture and behavior so that AI has a chance to take hold,” Work said. “We don’t need to plan big plays; we need a lot of plays going on simultaneously.”

So far, AI seems to offer solutions to complicated real-world problems, from maximizing agricultural output to automating driving for safer and faster transportation to augmenting warfighters to increase lethality. Yet it also comes with a series of challenges: built-in bias, workforce displacement and ever-present cybersecurity issues. If AI were a medication, it would have a long list of side effects.

Nevertheless, it’s clear from the quotes above that leaders across and around government are preparing for a world in which AI becomes the new normal. In fact, it might already be part of everyday life.

AI and Accountability

In the summer of 2017, the U.S. Comptroller General assembled a forum on artificial intelligence to examine the implications of AI. That forum formed the basis of a Government Accountability Office (GAO) report to the House Committee on Science, Space, and Technology in March 2018 titled, “Artificial Intelligence: Emerging Opportunities, Challenges, and Implications.”

In March 2019, GAO published another report about the effects of advanced technologies like AI on the workforce, recommending that the Department of Labor implement methods to better track how emerging technologies are affecting the workforce.

Both GAO reports signify an increasing awareness of AI within the federal government, of both its benefits and its potential drawbacks. The first report focused on four topic areas with the potential to greatly impact everyday life: cybersecurity, automated vehicles, criminal justice and financial services. The second homed in on the challenges of implementing AI in the workforce, including the thorny issue of workforce displacement.

Dr. Timothy Persons, chief scientist and managing director at GAO, recently weighed in on GAO’s AI findings. Persons said that congressional interest in AI led GAO to conduct its strategic implications study on AI in 2018. Certainly, some members of Congress have been vocal about thinking more strategically about AI as a tool rather than a destination.

“I think this particular conversation demands a cross-sectoral conversation and a holistic view,” Persons said at an April FCW Workshop.

But Persons didn’t shy away from some of the AI challenges moving forward.

“We collectively — our nation, our world — have issues. We have major issues … It’s the fact that they’re all interconnected as well. To solve one node on the graph is necessary, but not sufficient. You have to solve all of these things,” Persons said. “Technology is always … a double-edged sword. What’s the blessing of the internet? It’s open. What’s the curse of the internet? It’s open.”

It doesn’t take much to see the parallel between the wonders of AI and the internet and the major problems of unlimited connectivity that society is still grappling with today.

“But the point here is that major tech innovations have had major impacts on productivity, so there’s hope, there’s promise to deal with the issues of our day,” Persons said.

Persons added that GAO’s AI study was a follow-on to an earlier study on data and analytics. He said that public data collection and the “datafication” of things are other issues that need to be addressed as we move into the fourth industrial revolution. Big data, AI, predictive analytics and the growing “internet of things,” among other emerging technologies, comprise this new industrial revolution, which Persons calls “the merging of the cyber and the physical world.”

Because GAO relies on established definitions to conduct and present its work publicly and to Congress, the lack of a concrete definition of AI is a real problem, according to Persons. What we once considered artificial intelligence may not qualify now. Persons gave the example of how virtual assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google Assistant were once considered revolutionary demonstrations of AI but are now standard for anyone with a smartphone or computer. In other words, our perception of what constitutes AI has already evolved past the virtual assistants we now take for granted, and it will continue to evolve.

“From a philosophical perspective, we really don’t even have a closed-form definition of intelligence itself, much less to say we’re going to make something artificial about it,” Persons said.

Persons said the key issue of the current state of AI is explainability. Being able to explain why an AI algorithm or system does what it does is crucial because machine systems are programmed to optimize toward something. If we don’t understand what is being optimized, then machine system actions may flummox, or even harm, us. A dramatic example involves a driverless vehicle optimized to mitigate harm to its occupants that crashes into a pedestrian while avoiding an oncoming truck, saving the passengers but harming or killing the pedestrian. A less dramatic, yet important, example of the importance of explainability involves bias in something like college admissions. A biased algorithm that discriminates against a subset of applicants must be explainable so it can be corrected.
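
To make that point concrete, here is a toy sketch, with a hypothetical action set and invented harm values, showing how the same “pick the least-bad action” machinery reaches different decisions depending on what its objective counts. This illustrates Persons’ argument, not any real vehicle’s logic:

```python
# Toy example: an optimizer's behavior is determined by its objective.
# All actions and harm values below are invented for illustration.
ACTIONS = {
    "swerve": {"occupant_harm": 0.1, "pedestrian_harm": 0.9},
    "brake": {"occupant_harm": 0.6, "pedestrian_harm": 0.0},
}

def best_action(occupant_weight, pedestrian_weight):
    """Return the action minimizing the weighted sum of harms."""
    def cost(action):
        harms = ACTIONS[action]
        return (occupant_weight * harms["occupant_harm"]
                + pedestrian_weight * harms["pedestrian_harm"])
    return min(ACTIONS, key=cost)

print(best_action(1.0, 0.0))  # occupants-only objective -> "swerve"
print(best_action(1.0, 1.0))  # objective that counts everyone -> "brake"
```

Unless we can inspect the objective, the swerving car’s behavior is inexplicable from the outside; explainability starts with knowing what is being optimized.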

“Of the next epoch, and really research, in AI is going to be in this area: how do you know? Let’s unpack. No black boxes,” Persons said.

Despite the problems that need to be addressed, the potential benefits of AI-enabled technologies are many. According to Persons, they include improving the criminal justice system, where scanning libraries of case law and precedent could better inform judges and help dispense justice more uniformly, and regulating financial services, where better tracking the flow of money could provide more oversight, better accountability and a stronger check on money laundering.

Persons also highlighted the Department of Transportation’s driverless car sandboxes as innovative starts to experimenting with AI-enabled technology like automated driving. But, he added, these AI-enabled technologies need billions of miles of driving and testing before they can be relied on, not the millions of miles logged so far. “We’re like at 1% of where we need to be on driverless cars,” Persons said. “We still have a my-data-is-my-data mentality, and we’re in an era where data are the new oil, so a big finding for us is that you really need to have regulatory sandboxes” to pilot AI technologies and de-risk them.

In terms of top-level changes that might streamline AI implementation, Persons advocated for shifting organizational mindsets to empower chief data officers (CDOs) to show leadership the power of utilizing data and to treat it as an asset rather than a burden. He identified cultural resistance as the primary challenge to overcome. In the battle for supremacy between the two, Persons said, agencies across the federal sector will experience, or already are experiencing, the power of culture over strategy, because cultural conflict or lack of buy-in can create tremendous friction that slows or halts modernization.

“The challenges, as I see them: there’s prodigious technical challenges, but the sociocultural ones are the largest,” Persons said.

That assertion can be heard across agencies.

“IT is the easiest part of the job — to build the technology and maybe to deploy it — that doesn’t mean anybody’s going to use it if you didn’t bring them along with you,” said Robyn Rees, NSF division of IT budget lead and IT governance and strategy advisor.

NSF AI Case Study

At an April FCW Workshop, Rees outlined a case study of a real NSF pilot program that used governance as a catalyst for AI.

To begin the AI pilot program, Rees and NSF asked, “Can governance be a catalyst for advanced technology insertion?”

Briefly, NSF exists “to promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense; and for other purposes,” according to its statutory mission.

Part of that mission is reviewing research proposals submitted by principal investigators, and an integral part of that process is finding expert reviewers who can properly evaluate proposals to determine whether an award should be made. This can be a monumental task, even for NSF scientists and program officers, because expert reviewers may be scattered all over the globe, and even knowing who the perfect reviewer for a potentially obscure proposal topic might be can be tricky.

To alleviate this problem, Rees and NSF made the decision to utilize AI to help suggest reviewers. Rees explained that by enabling AI to suggest — not select — reviewers, NSF program officers would be more amenable to working with AI.

“We said, ‘I want to test the ability of artificial intelligence to make a suggestion for reviewers that might be appropriate for this proposal or set of proposals, and I want to let the program officer at NSF with the federal responsibility make the final decision,’” Rees said. People may be much more amenable to suggestions from AI rather than decisions, at least in the early stages of AI implementation that many agencies find themselves in now.
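
As a rough sketch of what such a suggest-only workflow might look like, the Python below ranks candidate reviewers by the textual similarity of their publication abstracts to a proposal. The matching method (TF-IDF with cosine similarity) and every name in it are illustrative assumptions, not NSF’s actual system; the essential design point is that the function returns ranked suggestions and leaves the award decision to a program officer:

```python
# Hypothetical "suggest, don't select" reviewer matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_reviewers(proposal_abstract, reviewer_profiles, top_k=5):
    """Rank reviewers by similarity to the proposal; suggestions only."""
    names = list(reviewer_profiles)
    corpus = [reviewer_profiles[name] for name in names]
    matrix = TfidfVectorizer(stop_words="english").fit_transform(
        corpus + [proposal_abstract])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]  # a program officer, not the model, makes the call

# Invented reviewer profiles for illustration.
for name, score in suggest_reviewers(
        "Machine learning methods for coastal flood prediction",
        {"Reviewer A": "deep learning for hydrology and flood forecasting",
         "Reviewer B": "quantum error correction codes",
         "Reviewer C": "statistical models of sea-level rise"}):
    print(f"{name}: similarity {score:.2f}")
```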

But in potentially inertial environments that understandably must manage risk, the question of how to get an AI pilot off the ground is a good one.

“Start with a problem that everybody has,” Rees advised.

Rather than starting with big enterprise-level problems that may have a big payoff, Rees advised starting small, with a particular problem at that scale, to help people buy in to the adoption and implementation of emerging technologies like AI and RPA. If successful, that grassroots adoption of emerging technology can scale up more easily from the ground up, according to Rees. “Sometimes you just have to start and learn as you go,” Rees said.

Rees also advocated for the use of agile software development, which NSF has been doing for nearly seven years, as a flexible development process. “If you’re already doing agile development, then you’re already on your way … to governing with agility,” Rees said. “Making the decision closer to the point of execution is not a lack of governance.”

Regarding inaction on emerging technologies like artificial intelligence, Rees highlighted three risks that agencies should account for when deciding whether to adopt them.

“I think there are opportunity costs. How many of us … are experts on the explainability of artificial intelligence as it supports delivery of the mission to the American people?” Rees asked. “In this case, we had the opportunity to learn how to be experts by trying it.” Rees added that customer satisfaction is a second risk. If NSF doesn’t evolve to meet the continuing needs of its customers by leveraging innovative technologies, it risks becoming redundant in the minds of those it serves.

“No IT can be successfully implemented without understanding the customer, keeping them at the middle of your design and including them in rolling out the IT,” Rees said.

Third, NSF wanted to ensure that its AI implementation was easily understandable and ethical. Explainability, therefore, is a point of emphasis, especially if AI-enabled technologies like NSF’s review pilot program were to expand beyond the program’s relatively small scope.

Rees returned to the question that guided NSF’s AI pilot: “Can governance decisions flex to adopt emerging technologies?”

“Reflecting on what we’ve done at NSF over the past year, inserting artificial intelligence and deploying bots into the financial environment … I think yes. I think that with supporting mechanisms, governance can flex to allow you to make decisions that incorporate emerging technologies,” Rees added.

Flexible governance and a willingness to experiment can lead to small successes that prove an AI concept, like suggesting NSF proposal reviewers, which matters because other agencies can use that example as a jumping-off point to explore their own applications of AI. Overcoming that initial hesitation is key to getting things done, according to Rees.

“Don’t not act. Just start,” Rees said. “Sometimes it just takes each one of us thinking a little differently about what we’re already doing to realize that we’re not as far away from our intended outcome as we think we are.”

AI and the Workforce

With talk of seemingly all-powerful algorithms and unrealistic representations of AI capabilities in movies and on television, it’s easy to lose sight of the primary function of AI: to make human life and work better and easier.

One of the most important convergences of AI technologies and humanity will occur in the workplace, where the benefits of automation and machine intelligence counterbalance fears of massive workforce displacement.

Ranjeev Mittu, head of the Information Management and Decision Architectures Branch within the Information Technology Division at the U.S. Naval Research Laboratory, recently discussed the workforce implications of AI in the appropriately titled presentation, “Artificial Intelligence: It’s all about the PEOPLE!”

“I think what we do in the future as a workforce in the federal government is going to change with the adoption of AI,” Mittu said at an April FCW Workshop.

Three areas where Mittu thinks agencies can begin to look at AI and machine learning to augment human resources functions are recruiting, retraining and retaining talent.

Mittu said that AI and machine learning can be used to accurately identify job candidates to fill vacant positions within the federal government, acting as an active, autonomous recruiter that goes beyond scanning resumes for keywords. Mittu added that moving from natural language processing to natural language understanding will allow algorithms to gain context and produce better results for whatever task they are doing.

AI could also help mine insights from HR data to find candidates similar to successful current employees, according to Mittu. “I think there’s a lot of techniques — for example, in similarity matching — that AI can help with so you can attract the same kinds of people.”
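
A minimal sketch of that similarity-matching idea, assuming candidates and employees have already been reduced to numeric feature vectors (the features and numbers below are invented for illustration, not any agency’s HR model):

```python
# Hypothetical similarity matching: score applicants against the centroid
# of successful current employees using cosine similarity.
import numpy as np

def similarity_to_success(candidates, successful_employees):
    """Cosine similarity of each candidate row to the success centroid."""
    centroid = successful_employees.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    normed = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return normed @ centroid

# Assumed columns: years_experience, num_publications, skills_match_score
successful = np.array([[6.0, 3.0, 0.9], [8.0, 5.0, 0.8]])
applicants = np.array([[7.0, 4.0, 0.85], [1.0, 0.0, 0.2]])
print(similarity_to_success(applicants, successful))  # higher = more similar
```

As the next paragraphs note, matching for “the same kinds of people” is exactly where bias can creep in, so any such scorer needs auditing.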

Mittu also discussed the importance of examining bias in AI algorithms to ensure they aren’t accidentally replicating human biases baked into training data. Mittu gave an even more dramatic example of a potential cyberattack that purposefully corrupts the data an AI system relies on in order to skew the algorithm’s decisions.

“If you start to train the algorithms with a very biased dataset, if you’re feeding it resumes or what have you, and if that doesn’t represent a diverse base of people you want to go out and hire, you’re training the algorithms with the same bias,” Mittu said.

This problem is magnified by the fact that we might often have trouble determining what biases exist in datasets. Even if we can determine that, there may be no adequate replacement data available to swap out the biased data, such as with a large digital image gallery.
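
One simple, hedged way to screen training labels for the kind of bias Mittu warns about is to compare positive-outcome rates across groups before training, for example with the common four-fifths (80%) heuristic. This is a screening check, not a complete fairness analysis, and the sample records are invented:

```python
# Screen historical hiring labels for disparate outcome rates by group.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, hired_bool) pairs from training labels."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {group: hires[group] / totals[group] for group in totals}

def four_fifths_check(rates):
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> inspect the data
```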

Another challenge is leveraging AI with end users in mind. While this applies to most technologies, AI tools built without an emphasis on user experience could easily end up unwieldy and, if they lack explainability, difficult to fix retroactively. Therefore, agencies must be careful to automate tasks in ways that improve efficiency and serve the customer base, be it internal or external.

“Just because a technology provides a capability for you to have some efficiencies, doesn’t mean it’s a good user experience,” Mittu said.

In terms of retaining employees, Mittu discussed how AI might enable HR departments to predict the length of employment engagements by combining employee feedback and exit survey data to correlate employee progress and sentiment. This would allow HR departments to build indicators that predict when employees might leave. In turn, this may empower HR departments to preemptively work with employees to meet their needs and maintain an effective, engaged workforce. “I think there’s a rich opportunity not just to apply machine learning, but a variety of approaches under AI to solve this kind of problem,” Mittu said.
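
A hedged sketch of what such an attrition indicator might look like: fit a simple classifier on (entirely invented) sentiment and tenure features, then score a current employee. A real system would need far richer data, careful validation and the bias auditing discussed above:

```python
# Hypothetical attrition-risk indicator built from survey-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed columns: survey_sentiment (0-1), months_since_promotion, engagement (0-1)
X = np.array([[0.9, 6, 0.8], [0.2, 30, 0.3], [0.7, 12, 0.6], [0.1, 40, 0.2]])
y = np.array([0, 1, 0, 1])  # 1 = employee left

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[0.3, 24, 0.4]])[0, 1]
print(f"Estimated attrition risk: {risk:.0%}")  # high risk -> proactive outreach
```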

Echoing federal IT leaders like Kent and Parker, Mittu cautioned against moving slowly with AI.

“We’re going to get in a further vacuum if we don’t start to track the right people and do it fast, leverage from industry and their best practices and use that as a model and a benchmark, which we compare and start to improve our processes of hiring, retraining and retaining,” Mittu said.

AI and Other Emerging Tech

To leverage AI technologies at the enterprise scale of federal government agencies, other emerging technologies must also be utilized, according to VA and DOD leaders and Republican Texas Rep. Will Hurd.

While discussing the implementation of 5G wireless mobile communication technology, Hurd stressed that any serious future for AI would rely on successful 5G networks.

“We will not achieve true, ubiquitous use of artificial intelligence until we have that 5G infrastructure,” Hurd said at IBM’s Think Gov 2019 in Washington, D.C., March 14.

Likewise, David Catanoso, director of the enterprise cloud solutions office at VA, said cloud technology plays an important role in AI technologies. Catanoso said VA is leveraging the cloud in conjunction with DevOps, automation and AI to enhance VA’s ability to deliver veterans services. Vets.gov, for example, was “built entirely in the cloud,” Catanoso said at an April FCW Summit.

VA has also been experimenting with AI to streamline some internal processes.

“One recent pilot that we’re doing is to use artificial intelligence with our help desk to see if we can explore how we can use chatbots to streamline our help desk, our internal IT help desk, to respond to — to triage requests coming in,” Catanoso said.
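
At its simplest, the triage step Catanoso describes could start as keyword-based routing before graduating to a trained intent classifier. The queues and rules below are invented for illustration and are not VA’s actual pilot:

```python
# Toy help-desk triage: route a request to a queue by keyword matching.
TRIAGE_RULES = {
    "password": "account-services",
    "vpn": "network-team",
    "printer": "desktop-support",
}

def triage(ticket_text):
    """Return the queue for a ticket, falling back to a human dispatcher."""
    text = ticket_text.lower()
    for keyword, queue in TRIAGE_RULES.items():
        if keyword in text:
            return queue
    return "general-queue"

print(triage("I can't reset my password"))  # -> account-services
```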

In addition to civilian-agency AI efforts like at VA and NSF, DOD has been experimenting with AI and other emerging technologies. In fact, DOD’s AI capabilities would be a far cry from what they are today without the cloud capabilities that allow them to leverage both new and existing technologies, including AI and machine learning at the enterprise level.

“To do AI and machine learning at the scale that we’re proposing to do it … assumes in that strategy that we have a very large-scale cloud available that can handle supporting AI solutions all the way out to the tactical edge,” said DOD CIO Dana Deasy at the GDIT Emerge event in April.

Regardless of where in the federal government AI is being explored and used, it’s clear from the words of top officials like Kent, Parker, Persons and the president himself that AI is the future of federal IT.

Terry Gerton, president and CEO of the National Academy of Public Administration, succinctly summarized the state of AI, and the accompanying hope and uncertainty, at an April NAPA event on the impacts of AI on public administration.

“Artificial Intelligence is one of the biggest innovations — and challenges — that we as a society will face in the years to come. While AI and robotics are very exciting, they have the potential to impact the government and public administration in ways that we are still trying to understand,” Gerton said.
