Face recognition is getting some dirty looks these days, essentially for two reasons: It’s too accurate, and it’s not accurate enough.
On one hand, privacy rights groups and even technology giants like Microsoft warn that face recognition programs, fueled by headlong advances in artificial intelligence and the proliferation of cameras everywhere, are becoming too powerful while at the same time being ripe for abuse. Brad Smith, president and chief legal officer of Microsoft, which has itself contributed to the technology’s development, went so far as to call for government regulation of face recognition in a recent blog post.
The other element of the opposition, which also feeds the Big Brother fears of surveillance run rampant, is that face recognition isn’t exactly perfect.
The ACLU made the point recently, reporting that for a cost of $12.33, it used Amazon’s Rekognition to scan 25,000 publicly available mugshots and then matched them against every member of the U.S. Senate and House. The software flagged 28 members of Congress as criminals. The inevitable jokes aside (only 28?), the exercise underscored the risks of trusting smart algorithms too much to make decisions that affect people’s lives.
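For the technically curious, an experiment along those lines boils down to a few API calls. Here is a minimal sketch using Amazon’s Python SDK, boto3; the collection name and file names are hypothetical stand-ins, not the ACLU’s actual code, and it assumes AWS credentials are already configured.

```python
# Minimal sketch of an ACLU-style test with Amazon Rekognition via boto3.
# The collection name and file paths are hypothetical; assumes AWS
# credentials are configured and the photos are local JPEG files.
import boto3

client = boto3.client("rekognition")

# Build a searchable collection from the mugshot photos.
client.create_collection(CollectionId="mugshots")
for path in ["mugshot_0001.jpg", "mugshot_0002.jpg"]:  # ...and so on
    with open(path, "rb") as f:
        client.index_faces(CollectionId="mugshots", Image={"Bytes": f.read()})

# Search the collection for one lawmaker's photo.
with open("member_photo.jpg", "rb") as f:
    resp = client.search_faces_by_image(
        CollectionId="mugshots",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,  # Rekognition's default similarity threshold
        MaxFaces=5,
    )

for match in resp["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```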
An ongoing problem with AI recognition is an undercurrent of bias, which the ACLU said was also reflected in its results. A number of research projects, such as one done by the Massachusetts Institute of Technology, have shown a pattern of “algorithmic bias,” likely rooted in the original training data, with error rates highest for women and people of color. Although AI, ironically, was originally seen as a way to remove bias from the hiring process, and companies such as IBM are trying to address face recognition’s bias by expanding their data sets, the problem persists.
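To make “algorithmic bias” concrete: researchers in studies like these typically compare error rates across demographic groups. A toy sketch of that computation (the groups and outcomes below are made up purely for illustration) might look like this:

```python
# Illustrative sketch, not from any cited study: measuring "algorithmic
# bias" as a gap in false-match rates across demographic groups.
from collections import defaultdict

# Each record: (group label, whether the system's match was a false positive).
# Fabricated data, purely to show the computation.
results = [
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

totals, false_matches = defaultdict(int), defaultdict(int)
for group, is_false_match in results:
    totals[group] += 1
    false_matches[group] += is_false_match

for group in sorted(totals):
    rate = false_matches[group] / totals[group]
    print(f"{group}: false-match rate {rate:.0%}")
# A persistent gap between groups is the pattern the research flags.
```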
Face Your Accuser?
Valid criticisms notwithstanding, AI-powered face recognition isn’t going away, because it does have a lot of upside. Many know it as an effective and handy (if not totally unhackable) tool for authenticating users on smart devices, an improvement over smudgy fingerprints or passwords that can be forgotten or stolen. It’s also used by governments and private organizations in authentication systems for accessing buildings and networks. And while it might not work with the magical omniscience you see on TV cop shows, police have found it useful for searching criminal databases.
Almost all states use some form of face recognition, and more are applying it to driver’s license renewals, where it can stop people from obtaining fake licenses, among other uses. The state of New York, for instance, made 4,000 arrests for ID theft or fraud in the first few months after installing face recognition, and that number is expected to climb substantially as its database grows.
U.S. Customs and Border Protection is demonstrating face recognition at eight major U.S. airports (a prospect drawing opposition as well) and is testing a system at a point along the Texas-Mexico border that will scan people’s faces through windshields, something that has long been a challenge for face recognition systems.
Left unchecked, however, organizations can get carried away with it.
In one example, police in Shenzhen, China (population 12.5 million), use surveillance cameras and AI face recognition to identify jaywalkers and plan to send them notices of fines via text. Interestingly, by the Orwellian ground rules of the future, the text-message idea is actually a nod in favor of privacy, as the current system posts jaywalkers’ pictures on a large screen at the intersection. In their defense, city officials say jaywalking has been a serious problem, and their public shaming project reduced jaywalking by 80 percent over 18 months at the seven intersections where they tried it.
Maybe that’s an extreme example, as China isn’t exactly known for being strong on civil liberties, but it shows where face recognition could go. The FBI has more than 400 million photos in its face recognition database, more than 80 percent of which are unrelated to any criminal activity (driver’s licenses, passports, etc.). And at a House hearing last year, lawmakers were told the agency’s face recognition technology got its matches wrong 15 percent of the time. So mistakes involving innocent people can be made.
Watching the Watchers
If AI face recognition is the problem, maybe AI face recognition is also the solution, at least in terms of the technology, which is always improving and in many ways is already pretty good.
In response to the ACLU’s Rekognition demonstration, Amazon argued that the ACLU didn’t use the system correctly. The test used Rekognition’s default match threshold of 80 percent, which is fine for some uses but not for law enforcement, where Amazon says it recommends setting the threshold to 99 percent. At that setting, the company said, it ran congressional photos against a database of 850,000 faces and produced zero false positives.
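In practice, the threshold Amazon is talking about is a single parameter on the search call. Continuing the hypothetical boto3 sketch from earlier (the collection and file names remain illustrative), the stricter setting looks like this:

```python
# Same hypothetical search as before, but at the 99 percent threshold
# Amazon says it recommends for law enforcement use.
import boto3

client = boto3.client("rekognition")
with open("member_photo.jpg", "rb") as f:
    resp = client.search_faces_by_image(
        CollectionId="mugshots",   # hypothetical collection from earlier
        Image={"Bytes": f.read()},
        FaceMatchThreshold=99,     # vs. the default of 80
    )
print(len(resp["FaceMatches"]), "matches at the 99 percent threshold")
```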
Meanwhile, further improvements to the technology are regularly on the way. The Intelligence Advanced Research Projects Activity, for one, is looking to improve face recognition’s accuracy both by fusing the outputs of multiple AI algorithms on a single image and by fusing multiple images to get a more complete picture of someone.
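IARPA’s exact approach isn’t detailed here, but score-level fusion is a common way to combine algorithms or images. A toy sketch, assuming each algorithm produces a similarity score between 0 and 1 for each image of a subject:

```python
# Toy sketch of score-level fusion (an assumption about the general
# technique, not IARPA's actual method): combine similarity scores from
# several algorithms and several images of the same person.
import numpy as np

# Rows: algorithms; columns: images of the same subject. Made-up scores.
scores = np.array([
    [0.62, 0.88, 0.71],  # algorithm A across three images
    [0.70, 0.91, 0.65],  # algorithm B across the same images
])

fused_per_image = scores.mean(axis=0)  # fuse algorithms for each image
fused_overall = fused_per_image.max()  # strongest evidence across images
print(fused_per_image, fused_overall)
```

Averaging across algorithms smooths out any one model’s quirks, while pooling across images compensates for a single bad angle or lighting condition, which is the intuition behind both kinds of fusion.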
As for policies governing how it’s applied, Microsoft’s Smith may have a point, as even perfect face recognition shouldn’t warrant unfettered use. Acknowledging that companies such as Microsoft need to exercise responsibility, Smith contends that regulation decisions, especially around such a pervasive technology, ultimately rest with the public and its representatives. He called on Congress to create a nonpartisan expert commission that would study the questions at hand and advise lawmakers on what steps to take.
Whether that approach yields practical protections against misuse is anybody’s guess, but considering government’s own widespread use of AI and face recognition, it might be the best option. As Smith wrote, "The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself."