How Secure are Artificial Intelligence Chatbots?

It's a game-changing technology, but a locus for cyberattacks.

Matthew van Putten is a research analyst for GovernmentCIO Media and federal research manager for GovernmentCIO. A graduate of Johns Hopkins SAIS’ Strategic Studies and International Economics programs, he focuses on strategic affairs, Chinese financial markets, African politics and the impact of technology on international politics.

With rumblings about “disruption” and automation-generated job insecurity, companies understand they have to look for opportunities to leverage emerging technology to stay relevant and generate returns. Enter artificial intelligence.

Companies increasingly rely on AI: to bolster their cyberdefenses, offload customer-service interactions, predict the likelihood of opioid overprescription and manage growing piles of data. AI increases output and efficiency, for better or worse. That’s essentially it.

Let’s take a relatively simple case to start developing an approach: Which security challenges would something as seemingly simple and low risk as an AI chatbot pose, in a controlled, data-sensitive environment such as a bank, insurance company, or government agency?

Here’s the situation: A large insurance company, bank or government agency’s call center deals with thousands of calls and online messages a day, 24/7. The wait times are long, and customers don’t always like the quality of the service. To deal with the problem, the company uses a virtual assistant to tackle the most common questions and problems, both online and over the phone.

It’s conceptually simple, but execution is everything. Waiting on hold to reach a call center doesn’t compare with being able to call or chat online and, for example, cancel a credit card in under two minutes or file an insurance claim in 10. A well-managed system can boost savings and effectiveness while also enhancing service delivery.

Every technological development creates benefits as well as new difficulties. This case is no exception. If enterprise infrastructure, data management and other elements are poorly managed, introducing an AI will not solve these underlying problems — instead, it might simply draw attention to them.

AI Is Just Harder Than Everything Else, Plain and Simple

An implemented AI system is harder to secure than most other systems, networks or devices. An AI must interact with a wide range of systems across a company, which makes the AI a critical point of failure. A team must address problems as soon as they arise, because critical flaws can break the AI, reduce its performance or worsen the very problem the AI is meant to solve. If two parts of the company hold incompatible data and don’t talk to one another, the AI may not be able to coordinate between them. For instance, if the AI aims to reduce call volume, a malfunctioning AI could spike call volume and wait times beyond their original levels.

An AI assistant is most useful if it can make real-time changes to systems across the enterprise. For example, an AI adds a spouse to a veteran’s health and benefits records at the veteran’s request, entirely over the phone. To do this, the AI must verify the caller’s identity, access his or her records and change them, all in real time.
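To make the moving parts concrete, here is a minimal sketch of that workflow in Python. The class and field names (IdentityService, BenefitsRecords, the PIN check) are illustrative stand-ins, not any agency’s real systems or APIs; a production assistant would sit behind far stronger identity proofing and access controls.

```python
# Hypothetical sketch of the real-time workflow described above: verify the
# caller, read the record, write the change. All names are invented stand-ins.

from dataclasses import dataclass


@dataclass
class Caller:
    claimed_id: str
    pin: str  # stand-in for whatever verification factor is actually used


class IdentityService:
    """Toy identity check; a real one would use multi-factor verification."""

    def __init__(self, known_pins: dict):
        self._pins = known_pins

    def verify(self, caller: Caller) -> bool:
        return self._pins.get(caller.claimed_id) == caller.pin


class BenefitsRecords:
    """In-memory stand-in for the enterprise records system."""

    def __init__(self, records: dict):
        self._records = records

    def add_dependent(self, member_id: str, dependent: dict) -> None:
        self._records[member_id].setdefault("dependents", []).append(dependent)


def add_spouse(caller: Caller, spouse: dict,
               idsvc: IdentityService, records: BenefitsRecords) -> str:
    # 1. Verify identity before touching any personally identifiable data.
    if not idsvc.verify(caller):
        return "Sorry, I couldn't verify your identity."
    # 2. Apply the change to the authoritative record in real time.
    records.add_dependent(caller.claimed_id, spouse)
    return f"Done - {spouse['name']} has been added to your record."


if __name__ == "__main__":
    idsvc = IdentityService({"vet-123": "9876"})
    records = BenefitsRecords({"vet-123": {"name": "A. Veteran"}})
    print(add_spouse(Caller("vet-123", "9876"), {"name": "J. Veteran"},
                     idsvc, records))
```

Even this toy version shows why the assistant becomes a critical point of failure: every request passes through identity verification and writes directly to the authoritative records system.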

AI Has Access to Everything

AI systems live on data and require a significant amount of it to learn and improve. If an AI has to look at thousands of cat pictures to identify cats, how much more data does it need to interact with people in a natural manner? Pre-existing problems with component systems may cripple the machine learning process, producing bottlenecks as the AI mishandles customer interactions, gets confused or misdirects clients.

An AI must have access to data across the enterprise to verify users and provide services. For instance, if the Veterans Health Administration implemented a virtual assistant to manage the diverse, bespoke component systems and cut call center call volumes, the AI would need to identify the caller. It would need access to personally identifiable information to process a claim or appointment verification.

AI Is Like All Other Cyber Systems, Except More So

Because the AI must have access to a wide range of enterprise data to be useful, it will be a locus for cyberattacks. Though AI may be a great cyberdefense multiplier, it also helps malicious actors generate more sophisticated attacks against AI and non-AI targets. AI systems are unpredictable. Through repeated testing, a potential attacker can get to know your AI better than you do. If attackers find unnoticed weaknesses or loopholes, they could exploit them.

Attacks against AI systems are often similar to other cyberattacks. By exploiting hardware and software vulnerabilities, attackers have made computers overheat, shut down, continually reboot and more. Stuxnet is a prime example: the malware reportedly destroyed roughly one-fifth of Iran’s nuclear centrifuges by targeting weaknesses in the industrial control software that monitored them. The same can be done with AI assistants or chatbots.

AI-based attacks can also evade traditional security measures. IBM’s DeepLocker project shows how AI-powered malware could beat traditional security: DeepLocker evades notice until it finds its target, then executes. AI-based attacks can likewise exploit the weaknesses of the AI systems they seek to penetrate or corrupt.

Traditional cybersecurity challenges haven’t gone away, either: 80 percent of successful cyberincidents trace back to poor user practices, inadequate network and management practices, and poor implementation of network architecture. Pentagon and Cyber Weekly polls of cybersecurity professionals found that 84 percent of successful attacks happen in part because of human error.

Attribution and incident response are never easy, and they are even harder when the target is a virtual assistant. If a chatbot can be manipulated into divulging secure information, that activity might go unnoticed. A company might not realize its AI was vulnerable until after the fact. Without constant monitoring of the chatbot’s communications with internal systems and with customers, vulnerabilities could remain unnoticed and unaddressed.
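As one hedged illustration of what that monitoring could look like, the sketch below logs every call the chatbot makes to internal systems and flags sessions that touch an unusually large number of records. The threshold and names are invented for the example, not a recommended policy.

```python
# Illustrative sketch: audit a chatbot's calls to internal systems and flag
# sessions that fan out across too many records. Names and the threshold are
# assumptions made for the example.

import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("chatbot.audit")

RECORDS_PER_SESSION_LIMIT = 5  # assumption: tune to real traffic patterns


class ChatbotAuditor:
    def __init__(self):
        # session id -> set of (backend system, record id) pairs touched
        self._records_touched = defaultdict(set)

    def record_access(self, session_id: str, system: str, record_id: str) -> None:
        """Log every backend access and warn when a session probes too widely."""
        audit_log.info("session=%s system=%s record=%s", session_id, system, record_id)
        touched = self._records_touched[session_id]
        touched.add((system, record_id))
        if len(touched) > RECORDS_PER_SESSION_LIMIT:
            audit_log.warning(
                "session=%s touched %d records - possible probing, review needed",
                session_id, len(touched))


if __name__ == "__main__":
    auditor = ChatbotAuditor()
    for i in range(7):  # a single session walking through many records
        auditor.record_access("sess-42", "claims-db", f"claim-{i}")
```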

Not a Panacea

AI can be a game changer for organizations trying to deliver better services to customers and streamline internal processes. A seamless interaction with an AI system can cut call center volume, reduce HR burden and deliver more effective services. With a responsive and layered defense, good situational awareness, and constant review of the AI’s communications, an AI could perhaps change the game.