Artificial intelligence and machine learning have practical benefits in many mission sets, but government and industry players need to work together not only to develop the tech to its full potential, but also to be mindful of the ethical implications such technology might have on society at large, according to leading industry experts.
When it comes to national security, ramping up efforts with AI could have cost-saving, and potentially lifesaving, benefits that can be applied to other mission sets like humanitarian and disaster relief, explained Defense Innovation Unit Director Mike Madsen, who spoke on a panel at the CXO Tech Forum: AI and Big Data in Government in Arlington, Va., Dec. 12.
But these applications come with a warning from Madsen and other speakers on the National Security panel: the technology could take us down a slippery slope regarding human rights and privacy.
“We need to operate faster than our adversaries” by keeping up with AI and machine learning advancements, said Madsen, whose team works with stakeholders in the Pentagon and the Hill to bring innovative technologies from the commercial space to the government in support of national security.
But keeping up with adversaries can mean operating on or above their levels, which presents ethical challenges, Madsen and fellow panelists explained.
Big-data technologies were “born for surveillance,” said Mike Olson, chief strategy officer at software provider Cloudera. “We are a democracy committed to human rights. … We need to think of the ethical implications of the technologies we build.”
Olson cited recent events in China, whose government implemented a “social credit system,” a big-data-based system that standardizes assessments of Chinese citizens in order to determine where they can live, for example.
“This is deeply offensive to most Americans,” Olson cautioned. “We apply machine-learning tech differently than our adversaries. It’s an uneven playing field. Advances in AI are happening worldwide.”
Despite these concerns, AI applications have many potential benefits, some of which are already in development across government today.
One such application is predictive maintenance in the aviation industry.
Madsen recalled a time when, as a C-17 pilot on a humanitarian mission, a part failure grounded his jet for five days while it waited for a replacement. The downtime delayed supplies reaching the end user in support of that mission.
Madsen and his team worked on a predictive maintenance solution for the Air Force in which DIU ingested seven years of maintenance data, leading to roughly a 30 percent reduction in unscheduled maintenance for aircraft and increasing the fleet’s maintenance reliability rate. The Army just signed on for this tech for its Bradley Fighting Vehicles, and the Marine Corps has expressed interest as well, Madsen said.
Developing AI to its full potential across government means agencies need to work more closely with industry, first by making it easier to do business with the government, Madsen said.
Furthermore, taking ownership of the tech is key, said Todd Myers, automation lead at the National Geospatial-Intelligence Agency.
“We want to revitalize skills set and ownership from a government perspective to have employees who can code, develop and know their history,” said Myers. “Our efforts have been to harness the tech … to bring code in government space so government can own it and develop it.”
Myers’ approach is not to think about the technology in terms of system designs, which require a static definition of design state, but to treat it as things that live for a second and move on, he said.
“The challenge is the amount of activities that understand [code source and pipeline delivery] and comprehend what to do in their organizations to bring this in is foreign,” Myers explained. “We have to normalize these environments. You don’t buy this, you own it.”
In addition to making it easier to do business with the government, increasing science, technology, engineering and mathematics (STEM) education and strengthening the national security innovation base by investing in tech companies are also necessary, Madsen said.
And the bar is set extremely high.
“Industry players wanting to work in this space need to take into account their obligations,” said Olson. “There is an importance of reliable, robust, bullet-proof systems.”
“With what we’re doing there is no error ratio. It has to be on point,” said Olson. “Right now, we’re not close to that high bar. Industry and government have to get to that bar. The days of designing a system to sell to the government are gone.”