The Department of Homeland Security is prioritizing artificial intelligence, machine learning and data sharing to support data analytics for mission delivery — an effort that enabled the department to roll out COVID-19 vaccines rapidly to frontline workers earlier this year.
Mike Horton, the newly appointed chief data officer at DHS, said the department’s contribution to the Vaccinate Our Workforce (VOW) program was possible because of the department’s data analytics and interoperability initiatives.
“When vaccinations were a big priority for the administration, especially frontline workforce, we found ourselves in a situation where we could impact one portion of that: provide data on the people who got vaccinated,” Horton said during an ATARC webinar earlier this month. “We sorted all that data and found it came from a lot of different places. It helped us understand how disparate the data is in different components. ... That coalition of data and managing that data on a daily basis to update those lists and get our employees vaccinated as quickly and efficiently as we could, working with multiple agencies across government, really highlighted how good data, (when) shared, can serve the people.”
DHS’ immigration data domain is another success story for data analytics and interoperability.
“That immigration data domain for us at DHS is the most secure and productive of the domains we have,” Horton said. “Getting that story told, and helping people understand that a culture of sharing data and pushing data up to provide product and decision-making for leadership, helps the mission.”
Damian Kostiuk, chief of the data analytics division at U.S. Citizenship and Immigration Services, said while the agency is deploying AI and machine learning for data analytics, internal culture is the top priority for quality and governance of that data.
“Trying to get trust and culture change inside your organization really bolsters all of that,” he said during the ATARC webinar. “One of the key things when we get to automation, you’re really talking about going from a group of people who are producing widgets tens per hour, to thousands per hour (which create a bigger likelihood of making a mistake). In order to buffer that, you need to have extremely fast recognition of data-quality error. It goes to the public trust component of what you’re doing.”
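The fast, automated error recognition Kostiuk describes might take the form of per-record validation rules that run as data arrives, rather than after downstream processing. The sketch below is purely illustrative; the record fields, status values, and function names are assumptions for the example, not USCIS's actual schema or system.

```python
from dataclasses import dataclass

# Hypothetical record shape -- these field names are assumptions
# for illustration, not an actual USCIS schema.
@dataclass
class CaseRecord:
    case_id: str
    status: str
    received_date: str  # expected in ISO format, e.g. "2021-06-01"

VALID_STATUSES = {"received", "in_review", "approved", "denied"}

def validate(record: CaseRecord) -> list[str]:
    """Return a list of data-quality errors found in one record."""
    errors = []
    if not record.case_id:
        errors.append("missing case_id")
    if record.status not in VALID_STATUSES:
        errors.append(f"unknown status: {record.status!r}")
    if len(record.received_date) != 10 or record.received_date[4] != "-":
        errors.append(f"malformed date: {record.received_date!r}")
    return errors

def validate_batch(records: list[CaseRecord]) -> dict[str, list[str]]:
    """Check every record as it arrives, surfacing errors immediately
    instead of letting them propagate into downstream products."""
    return {r.case_id: errs for r in records if (errs := validate(r))}
```

Because each record is checked the moment it enters the pipeline, a bad batch is flagged at ingestion time, which is what makes the "thousands per hour" throughput safe to operate.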
Data sharing and strong interoperability standards are also critical for improved data quality and governance, he added.
“If you’ve got a thousand mistakes that can be made very quickly because your data quality goes really bad, if you have different siloed data sets all over the place, how hard is that going to be to correct the data quickly to go back to your operations? It’s really difficult,” he said. “You want to have a single space, and that’s part of that culture change.”
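Kostiuk's contrast between siloed data sets and a "single space" can be illustrated with a toy example: correcting one bad value when every system keeps its own copy versus when all systems reference one canonical record. The structures and values below are hypothetical.

```python
# Hypothetical illustration: the same correction applied to siloed
# copies versus a single canonical store.

# Siloed: three systems each hold their own copy of record "A2",
# so a correction must be pushed to every silo -- miss one and
# the data sets diverge.
silo_a = {"A2": {"status": "pending"}}
silo_b = {"A2": {"status": "pending"}}
silo_c = {"A2": {"status": "pending"}}

for silo in (silo_a, silo_b, silo_c):
    silo["A2"]["status"] = "in_review"

# Canonical: one authoritative record that every system references,
# so a single correction is visible everywhere at once.
canonical = {"A2": {"status": "in_review"}}
```

The "single space" design is what makes the fix-once property possible; with silos, the correction's cost grows with the number of copies.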