AI Workforce Calls for 'Interdisciplinary' Players
Agencies are pulling people with diverse skill sets and areas of expertise to advance AI capabilities and applications.

For agencies to get the full value out of data and make the best use of emerging technologies like artificial intelligence, one best practice that federal leaders are using is incorporating interdisciplinary players in their strategies.

“One of the things that we have really learned in implementing analytics at [the Department of Agriculture] is the need for really interdisciplinary teams working to answer the best questions,” Agriculture Chief Data Officer Ted Kaouk said during the AFCEA Bethesda Tech Summit.

Through the Federal Data Strategy and the Federal Chief Data Officers Council, which Kaouk currently chairs, the federal government aims to establish best practices for the use, protection and dissemination of data to drive innovative solutions that create public value.

Using an interdisciplinary approach, the General Services Administration also will develop a data ethics framework that guides the responsible and ethical use of emerging technologies like AI.

Since data provides the fuel for AI processes, such a framework is critical to mitigating biases that may arise.

“The important thing — because [AI] involves all these interdisciplinary players — is to involve all those players,” said Department of Veterans Affairs' National Artificial Intelligence Institute Director Gil Alterovitz. “That's kind of the take-home point around interdisciplinary aspects of AI: that it is its own field, but it is really in some sense a synergy of these other fields."

The VA is currently conducting a second AI tech sprint to determine how AI could be used to prompt veterans who are not currently involved with the VA health system to enroll in care, Alterovitz said. The sprint will be open to companies, nonprofits and academic institutes to build AI tools for this purpose.

“We need to leverage some of those principles that we're talking about for AI — to not have a bias, have fairness, transparency and so forth as we develop this work,” he added.

Experts from the social sciences and humanities fields have the opportunity to work alongside computer science and technology experts to contribute to these framework principles, noted Virginia Tech’s Innovation Campus Master of Engineering Program Director Sara Hooshangi.

“Building interdisciplinary teams becomes really integral when it comes to policies and addressing ethical issues. I think this is something that we see a lot now in academia — we're trying to break silos down and bring people together so that they can actually work on problems together from different sides,” said Hooshangi, adding that diversity is also critical to the AI workforce. 

Asking the best questions also means bringing together people with technical knowledge and people who have strong creative-thinking skills and analytics literacy to further support agencies’ missions, said Department of Commerce Interim CDO Thomas Beach.

“There's skills to create and understand and manipulate data, and then there's certain data acumen, which I think is really critical," he explained. "Skills need to be a broadly interpreted construct of not only training, but understanding what to do with outputs and understanding how to ask the right questions because you want to understand how the output was created."

One main selling point in attracting an AI workforce is the fact that AI can be used to support the response to the ongoing COVID-19 pandemic.

“There's so much that can be done with AI in health care, and there's a lot of room for improvement to get into that notion of trying to use AI to come up with therapeutics, use it for drug discovery, for diagnostics, for better sensitivity to testing and imaging, and also, for early diagnosis and prevention," Hooshangi said. "There's so much going on in the health care industry that AI and automation could help."
