
Under Pentagon’s Plans, There’s An ‘AI’ in Team

If military researchers have their way, machines will be riding shotgun.

The question of whether intelligent machines will someday take control of everything is mostly a matter of futuristic speculation, but if military researchers have their way, the machines will be riding shotgun.

The idea of human-machine teaming is at the core of the Pentagon's long-range vision, reflected in the Third Offset Strategy for exploiting next-generation technologies and in documents like the Army's "Win in a Complex World" operating concept for 2020-2040. In practice, Defense Department research teams are showing how that vision could take shape. And in some ways, it's basically the buddy system.

The Air Force Research Laboratory, for example, is developing what researchers call a "synthetic partner" that would help airmen do their jobs and take care of them in the process, monitoring their physical, mental and emotional state and taking those factors into account before recommending a course of action. It would be like having an assistant, doctor and adviser in one package.


The Army Research Laboratory, meanwhile, is working to improve its battlefield robots by adding more autonomy so the robot behaves more like a teammate and less like an appliance.

Meeting of the Minds

AFRL is tapping exponential increases in computing power, along with miniaturization and advances in artificial intelligence, to find better ways to link human and machine intelligence, creating a partner that helps by doing what a human can't.

“Seventy years into the future, we’ll still be limited by the fact that we have a very limited short-term memory, we get bored easily, we’re not known to just sit there and stare at one place for a long period of time,” Morley Stone, AFRL’s chief technology officer, said in an Air Force report. AFRL’s Human Autonomy Lab is focusing on the medical monitoring aspect — incorporating wearable and other noninvasive sensors and putting the data into a format its human partner can easily read — while also supporting the task at hand.

A synthetic partner's advanced analytics could, for instance, help a pilot avoid the "helmet fire" of too much incoming sensor data by sorting through that data and putting the most critical points into a sensible format.

“We have sensors becoming very miniaturized and able to sense the human physiology without even being attached to the human,” said Mark Draper, a principal engineering research psychologist leading the lab’s work. “In a vision of the future, artificial intelligence can serve to continually monitor the human while the human is engaged in various tasks, and then dynamically adapt the interaction with the machinery, the interaction with the environment, and the offloading of tasks — all with the express purpose of better team performance.”
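The lab has not published its algorithms, but the loop Draper describes, with continuous physiological monitoring feeding a workload estimate that in turn drives task offloading, might look roughly like the sketch below. The sensor fields, thresholds and task names are illustrative assumptions, not AFRL's actual design.

```python
# Illustrative sketch of the monitor-and-adapt loop Draper describes.
# Sensor readings, thresholds and task names are invented for illustration;
# they are not AFRL's actual design.
from dataclasses import dataclass

@dataclass
class PhysioSample:
    heart_rate: float        # beats per minute, from a wearable sensor
    pupil_diameter: float    # millimeters, from an eye tracker
    respiration_rate: float  # breaths per minute

def estimate_workload(sample: PhysioSample) -> float:
    """Collapse raw physiology into a rough 0-1 workload score (toy heuristic)."""
    hr = min(max((sample.heart_rate - 60) / 60, 0), 1)
    pupil = min(max((sample.pupil_diameter - 3.0) / 3.0, 0), 1)
    resp = min(max((sample.respiration_rate - 12) / 12, 0), 1)
    return (hr + pupil + resp) / 3

def adapt_tasking(workload: float, tasks: list[str]) -> tuple[list[str], list[str]]:
    """Decide which tasks stay with the human and which are offloaded to automation."""
    if workload > 0.7:  # operator looks overloaded: shed routine tasks
        keep = [t for t in tasks if t in ("fly aircraft", "weapons decisions")]
        offload = [t for t in tasks if t not in keep]
    else:               # normal load: human keeps everything
        keep, offload = list(tasks), []
    return keep, offload

sample = PhysioSample(heart_rate=118, pupil_diameter=5.2, respiration_rate=22)
keep, offload = adapt_tasking(
    estimate_workload(sample),
    ["fly aircraft", "radio calls", "route replanning", "weapons decisions"],
)
print("human keeps:", keep)
print("offloaded to machine:", offload)
```

The point of the loop is the dynamic part: the same task list gets split differently from minute to minute as the operator's measured state changes.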

Trust the Processor

While the Air Force research looks further down the road, ARL's work on robot-soldier teaming is aimed at more immediate results. The military services already make extensive use of robotic vehicles for purposes like bomb disposal, firefighting and resupply missions, but those robots require a lot of attention from a human controller. ARL wants to increase the level of autonomy for ground and air vehicles so robots can work with soldiers reliably according to a plan, without showing so much "intelligence" that they go off on their own.

“We want to push the level of autonomy up just enough so that there’s a specific suite of behaviors the robot can execute very efficiently and reliably based on the commander’s intent, with as little guidance as possible,” said Joseph Conroy, an electronics engineer at ARL.

The Office of Naval Research, the Marine Corps and Aurora Flight Sciences recently demonstrated that concept with the successful autonomous flight of a UH-1 “Huey” helicopter. At the command of a minimally trained Marine with a tablet computer, the helicopter picked up supplies, selected the best route for delivery and then altered its route on the fly based on changing conditions, arriving at a new landing site.
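The flight software behind that demonstration is proprietary, but the underlying behavior of planning a route and then replanning when conditions change can be sketched with a toy waypoint graph. The waypoints, edge costs and the mid-flight change in conditions below are invented for illustration.

```python
# Toy sketch of plan-then-replan routing, in the spirit of the Huey demo.
# Waypoints, edge costs and the mid-flight "change in conditions" are invented.
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a dict-of-dicts waypoint graph; returns (cost, path)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "pickup":   {"ridge": 4, "valley": 2},
    "ridge":    {"lz_alpha": 3},
    "valley":   {"lz_alpha": 5, "lz_bravo": 6},
    "lz_alpha": {},
    "lz_bravo": {},
}

# Initial plan: cheapest route from the supply point to landing zone alpha.
print(shortest_route(graph, "pickup", "lz_alpha"))  # (7, ['pickup', 'ridge', 'lz_alpha'])

# Mid-flight, conditions change: lz_alpha becomes unusable, so replan to lz_bravo.
del graph["ridge"]["lz_alpha"]
del graph["valley"]["lz_alpha"]
print(shortest_route(graph, "pickup", "lz_bravo"))  # (8, ['pickup', 'valley', 'lz_bravo'])
```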

The reliability of these kinds of systems will be essential to building the level of trust necessary for humans and machines to work together. Under those circumstances, the buddy system can work.
