Why Do Some Companies Have Humans Pretending to be Bots?

Because artificial intelligence is selling like hotcakes.

Artificial intelligence has become such a catchy selling point for online services that some companies have turned to a bit of sleight of hand — and voice — to convince people they have smart machines working for them.

Several companies have employed humans to impersonate chatbots performing scheduling and other services. Facebook’s prototype AI assistant for Messenger, called M, had people behind the curtain calling M’s shots for two and a half years before the company took humans out of the equation in January. And Expensify reportedly used workers on Amazon’s Mechanical Turk to transcribe some expense and benefits documents that its touted SmartScan software was supposed to handle but couldn’t.

To anyone who has dreaded the labyrinthine house of mirrors that automated call systems have become, it might seem odd that companies would want to pretend to have a machine on the other end of the line rather than a human, but there are several factors at play. Aside from the miracles of advertising, there are the demands of training AI systems and the somewhat counterintuitive fact that, in some circumstances, machines can do good in ways humans can’t.

Cures What Ails You

For one thing, AI is hot. And it’s everywhere, from customer service apps and virtual assistants like Siri and Alexa to all manner of transportation apps and social media features. Government is no stranger to AI either, whether for surveillance, power and water systems, or predictive policing. The prevalence of AI puts pressure on companies without the capability, who, noses pressed against the glass, may feel they need to pretend they have AI just to stay in the game.

Misleading advertising that tries to glom onto trendy features is nothing new, of course, as evidenced in recent years by companies that have felt the need to “greenwash” their products, claiming an environmental friendliness they didn’t have.

Sometimes, they get caught: In 2016, Volkswagen came under fire from the Federal Trade Commission for allegedly cheating on emissions tests while promoting “clean diesel” vehicles. Health-related products have a long history of misleading claims, from the “health tonics” of the 19th century that were basically just alcohol to whole aisles full of sugary cereals, energy drinks and “power” foods. The Food and Drug Administration has a website devoted to health fraud scams.

People Behind the Curtain

For companies making questionable claims about AI, using humans posing as bots doesn’t say much about their transparency. But in some cases it is a step in the machines’ development, as humans are needed to teach the bots what to do before they can act on their own.

In both voice and text apps, often with crowdsourced help from services like Mechanical Turk, humans are being used to train the bots, because machine learning systems learn by example. Facebook said M, which was only ever available to a couple of thousand people in California, was a beta intended to help the company learn how AI bots could better interact with people, and that it would apply those lessons in future AI projects. (A little bit of M lives on in M suggestions within Messenger.)
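To make that learn-by-example loop concrete, here is a minimal sketch, assuming a hypothetical scheduling assistant rather than any company’s actual system: requests that human operators handle are logged along with the action the operator chose, and a simple supervised classifier is trained on those pairs. The example phrases, action labels and model choice below are all invented for illustration.

```python
# Hypothetical sketch of human-in-the-loop training: conversations handled by
# human operators become labeled examples for a bot that "learns by example."
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented log of (user request, action the human operator took) pairs.
human_handled = [
    ("Can you book me a table for two tonight?", "make_reservation"),
    ("Reserve a spot at the Italian place on Friday", "make_reservation"),
    ("Remind me to call my mom tomorrow at 9", "set_reminder"),
    ("Set an alarm for 6 a.m.", "set_reminder"),
    ("What's the weather like this weekend?", "weather_lookup"),
    ("Will it rain tomorrow?", "weather_lookup"),
]
texts, actions = zip(*human_handled)

# Bag-of-words features plus a linear classifier: enough to show the idea,
# not a production-grade assistant.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, actions)

# A new request the humans never saw; the model maps it to a learned action.
print(model.predict(["Book dinner for four on Saturday"])[0])
```

The point of the sketch is the loop, not the model: every exchange the people behind the curtain handle becomes another labeled example, and once the predictions are reliable enough, the humans can step back.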

In product development, this is known as the “Wizard of Oz” design technique: simulating what the product will eventually be while someone behind the curtain actually pulls the levers. It’s used in agile software development and other fields to test and improve how software operates, and the simulated result can also help draw investors.
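As a rough illustration of the technique (a hypothetical sketch, not any real product’s code), a Wizard-of-Oz chat backend can present bot-branded replies to the user while routing every message to a human “wizard” and logging the exchange for later training. The class name, log file and console prompt below are all made up.

```python
# Hypothetical "Wizard of Oz" chat backend: the user sees a bot, but a human
# supplies every reply, and each exchange is logged as future training data.
import json
from datetime import datetime, timezone


class WizardOfOzBot:
    def __init__(self, log_path="wizard_log.jsonl"):
        self.log_path = log_path

    def handle(self, user_message: str) -> str:
        # In a real prototype this would route to an operator console or queue;
        # here the "wizard" simply types the reply at the terminal.
        wizard_reply = input(f"[wizard console] user said: {user_message!r}\nreply> ")
        self._log(user_message, wizard_reply)
        # The user only ever sees a bot-branded response.
        return f"Assistant: {wizard_reply}"

    def _log(self, user_message: str, wizard_reply: str) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user_message,
            "reply": wizard_reply,
        }
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    bot = WizardOfOzBot()
    print(bot.handle("Can you schedule a meeting for Thursday at 3?"))
```

Because the interface looks the same whether a human or a model is answering, the wizard can later be swapped out for an automated system without changing what the user sees, which is what makes the approach useful for testing before the AI exists.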

Aside from the commercial advantages of AI systems, there is another side to AI interactions that has shown up in psychological research: in some circumstances, people are more willing to open up to a bot than to a human.

The University of Southern California’s Institute for Creative Technologies found U.S. veterans returning from Afghanistan were more willing to disclose symptoms of PTSD to an AI-powered avatar than they would on a military health checklist, even when that checklist was anonymized. Funded by the Defense Advanced Research Projects Agency, ICT researchers used Ellie, a software diagnostic tool presented on screen as the 3-D image of a woman with a soothing voice and demeanor, to establish a rapport with veterans during health screenings.

The Defense Department routinely checks on the health of returning personnel with the Post-Deployment Health Assessment, a form that includes assessments of PTSD symptoms. But PDHA responses are included in a soldier’s military record, which could affect future prospects. Dealing with a virtual interviewer could ease the stigma around mental health, the perceived weakness of asking people for help, the research suggests. Ellie was also seen as being outside the official chain of command, which allowed veterans to be more open.

AI chatbots are now turning up in a number of mental health-related applications, assessing users’ moods and screening for symptoms such as anxiety and depression. Any long-term benefits, or risks, of such apps remain to be seen, but they are accessible around the clock. As government agencies employ more automated, AI-based services, other advantages of putting humans in the guise of bots may yet crop up.