
No Lie: AI Can Compromise Better Than Humans

When it comes to straight talk and finding a resolution, the stubbornness of machines gets good results.

Negotiating with a machine has never been a winning proposition. They can be unyielding, obstinate, even downright pig-headed. Life with machines would be easier, we figure, if only they could think more like humans do, which is one goal of artificial intelligence. But in at least one case, machines are proving better able to accomplish a very human task by being a little less human.

A research team at Brigham Young University has found AI systems can often outperform humans at two of the most beneficial lubricants of human relations: cooperation and compromise. The big advantage for the machines? They’re more honest.

BYU computer science professors Jacob Crandall and Michael Goodrich, with help from colleagues at MIT and other universities, tested machine-machine, human-machine and human-human interactions in three different two-player games, each played in series of 100, 1,000 and 50,000 rounds, as they report in a paper published in Nature Communications. Most of the time, the machines, equipped with an AI algorithm called S# ("S sharp"), outperformed their human counterparts at finding compromises that benefited both parties.
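
To make the setup concrete, here is a minimal Python sketch of a repeated two-player game of the kind the experiments used. The prisoner's-dilemma payoff matrix and the tit-for-tat stand-in strategy are illustrative assumptions only; the study used several different games, and S# itself is considerably more sophisticated than anything shown here.

```python
# Illustrative sketch of a repeated two-player game (not the paper's code).
# The payoffs below are a classic prisoner's-dilemma matrix; the study used
# several different games, and the S# algorithm is far more involved.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, partner defects
    ("D", "C"): (5, 0),  # I defect, partner cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Stand-in strategy: cooperate first, then mirror the partner's last move."""
    return "C" if not history else history[-1][1]

def play_rounds(strategy_a, strategy_b, rounds):
    """Play the given number of rounds and return each player's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append((move_a, move_b))  # each side records (own, partner)
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play_rounds(tit_for_tat, tit_for_tat, 100))  # (300, 300): steady cooperation
```

Over 100 rounds, two such players settle into sustained mutual cooperation, which is the kind of outcome the researchers measured the machines on.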

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall said in a BYU release. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”


Talk is Cheap, But ‘Cheap Talk’ Has Value

Part of the challenge of getting machines and humans to talk turkey is that cooperation isn’t really a function of CPU power, which the machines have in spades. Instead, Crandall and Goodrich write, it comes down to “intuition, cultural norms, emotions and signals, and pre-evolved dispositions toward cooperation,” areas where machines are lacking. In addition to the mathematics of finding a mutually agreeable solution, two parties also need to develop a relationship with each other.

One thing that helps is what the researchers called "cheap talk": nonbinding interactions that don't seal any deals or even put suggestions on the table, but just help establish a rapport through friendly banter or, in some cases, a virtual smack upside the head.

The BYU team equipped the S# machines with positive interjections such as "Sweet. We are getting rich!" or "I accept your last proposal." But if a negotiating partner tried to back out of a deal or otherwise deceive a machine, it might be met with "Curse you!" or "You will pay for that!" Assuming the machine thwarted whatever dirty trick its negotiating partner was up to, it was even prepared to throw out an "In your face!"
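
As a rough illustration of how such signaling might be wired up, the sketch below maps game events to canned phrases. The event names here are hypothetical stand-ins; in the actual study, messages were generated from S#'s internal reasoning about proposals and betrayals.

```python
import random

# Hypothetical sketch of event-triggered "cheap talk" (not the paper's code).
# The event names are illustrative; the phrases come from the article above.

CHEAP_TALK = {
    "deal_going_well": ["Sweet. We are getting rich!"],
    "proposal_accepted": ["I accept your last proposal."],
    "partner_defected": ["Curse you!", "You will pay for that!"],
    "defection_thwarted": ["In your face!"],
}

def pick_message(event):
    """Return a nonbinding message for an observed game event, or None."""
    phrases = CHEAP_TALK.get(event)
    return random.choice(phrases) if phrases else None

print(pick_message("partner_defected"))  # e.g. "Curse you!"
```

The point of such messages is not to bind either player to anything, but to signal intent and reaction cheaply, which is exactly why the researchers call them cheap talk.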

Such utterances might seem a little buttoned-up in a reality-TV world, but they moved the ball forward: the researchers found that machines got twice as much cooperation when using cheap talk. Not only that, human players often could not tell whether they were dealing with a human or a machine (thus passing the elusive "imitation game" test put forth by Alan Turing in 1950).

Intelligence You Can Trust

The government's current plans for AI aim toward human-machine teaming in areas as diverse as aviation, cybersecurity, vehicle maintenance and medicine. Successful teaming, AI researchers say, depends on trust, and that trust has to run both ways. Machines need to get better at explaining, in understandable terms, how they interpreted data and reached a given conclusion, which would help their human partners trust them. But machines also need to be able to trust their human counterparts for the arrangement to work.

The BYU research could help show ways to foster that trust — while also teaching humans a thing or two about dealing with each other.

“In society, relationships break down all the time,” Crandall said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

