Could AI and Predictive Analytics Stop Sex Offenders?

These tools have become more common in law enforcement over the last decade.

Predictive policing has come a long way in recent years. Police have always known when and where to step up patrols in certain city areas, for example, by applying simple predictive equations such as: payday + alcohol = trouble. You didn’t need to be a data scientist to anticipate a skirmish or three in bars on Friday night. But developments in data analytics and artificial intelligence are refining how law enforcement uses data and applies predictive analytics in a range of areas, focusing not just on overall trends but on individuals — and in the process renewing ethical concerns.

One area where predictive tools have caught on is with sex offenders. The Vanderburgh County Sheriff’s Office in Evansville, Indiana, for instance, this month joined a growing list of law enforcement agencies that have signed up with OffenderWatch, a sex offender registry network, for its Focus product, which the company says can help police better manage oversight of sex offenders.

OffenderWatch collects sex-offense data from more than 3,500 agencies, including information on about 60 percent of all sex offenders, along with other data from federal, state and local sources. Focus draws on that information and applies predictive analytics to more than 100 risk factors in an individual sex offender’s record, the company says. It then produces a score that is added to the offender’s record, so police searching the system can identify those at highest risk. The sheriff’s office has been able to better identify and monitor high-risk offenders since it began using the system last fall, according to OffenderWatch’s release.
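OffenderWatch hasn’t published the details of how Focus weighs those factors, but the general approach, rolling a set of normalized risk factors into a single weighted score, can be sketched in a few lines of code. The factor names and weights below are purely illustrative assumptions, not the company’s actual model:

```python
# Hypothetical sketch of a weighted risk score. OffenderWatch has not
# published Focus' model; the factor names and weights here are invented
# for illustration only.

# Each factor's value is assumed to be normalized to the range [0, 1].
RISK_WEIGHTS = {
    "prior_offenses": 0.30,          # hypothetical factor
    "victim_age_category": 0.25,     # hypothetical factor
    "registration_compliance": 0.20, # hypothetical factor
    "time_since_last_offense": 0.15, # hypothetical factor
    "residence_stability": 0.10,     # hypothetical factor
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine normalized risk factors into a single 0-100 score."""
    total = sum(RISK_WEIGHTS[name] * factors.get(name, 0.0)
                for name in RISK_WEIGHTS)
    return round(100 * total, 1)

offender = {
    "prior_offenses": 0.8,
    "victim_age_category": 0.6,
    "registration_compliance": 0.4,
    "time_since_last_offense": 0.7,
    "residence_stability": 0.5,
}
print(risk_score(offender))  # 62.5 -> ranks high when police sort by score
```

A score like this is only as good as the weights and the underlying records, which is exactly where the fairness questions discussed below come in.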

But while these systems often have produced positive results, they’ve also raised questions about privacy and fairness that law enforcement agencies, courts and the companies behind these services must address.

Predictable Concerns

Predictive analytics has become a more common tool in law enforcement over the last decade, used in everything from surveillance and crime prevention to criminal sentencing and parole recommendations.

The term “predictive policing” tends to conjure images of “Minority Report” Pre-Cogs seeing crimes before they happen so Tom Cruise can pre-emptively arrest the would-be offenders. But algorithms don’t see the future. Like any data system, they deal with probabilities: it was probable the New England Patriots would win the Super Bowl, probable Hillary Clinton would win the 2016 presidential election, and probable that no one would win the lottery twice in one day. Probabilities aren’t certainties. Predictive algorithms can also be led astray by incomplete or bad data, as evidenced by Google Flu Trends’ infamous misfire in predicting the spread of flu in 2013.

As predictive systems get enhanced and supercharged by the addition of AI algorithms, the question of bias in these systems becomes more urgent. Human rights groups have claimed predictive policing unfairly targets minorities by reinforcing existing attitudes. A Wisconsin man challenged the seven-year prison sentence he received for driving a car without the owner’s consent and eluding police, arguing that the sentence relied on a risk-assessment tool called COMPAS, which had determined he was at high risk of committing future crimes. Claiming the use of the tool was unfair, he tried to take the case to the Supreme Court, which declined to hear it last year.

Machines Get Smarter

Further enhancements could cloud the issue even more. In February, at the Artificial Intelligence, Ethics and Society conference in New Orleans, a research team presented an innovative AI tool designed to combat gang violence using partial information from a crime scene.

The team, from the University of Southern California, the University of Nebraska-Lincoln and UCLA, used a “partially generative neural network” to examine four factors in a crime (the weapon, the number of suspects, the neighborhood and the exact location), far less information than a full police report contains, and fill in the gaps from there. Drawing on data from 50,000 violent crimes in Los Angeles from 2014 to 2016, the tool determined whether a crime was likely to be gang-related, and it reduced errors by up to 30 percent compared with a similar system, according to the team’s research paper.
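The paper’s exact architecture isn’t spelled out here, but the core idea, a network that both classifies a crime and generates a stand-in for the missing report details, can be sketched roughly. The PyTorch sketch below, with made-up feature encodings and layer sizes, illustrates that idea; it is not the team’s actual model:

```python
import torch
import torch.nn as nn

# Simplified, assumed sketch of a "partially generative" classifier.
# Four observed inputs (weapon, suspect count, neighborhood, location)
# are encoded; one head infers a latent stand-in for the missing
# narrative details, and a second head predicts gang involvement.

class PartiallyGenerativeNet(nn.Module):
    def __init__(self, n_features=4, hidden=32, narrative_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Generative head: fills in the gaps left by the sparse report.
        self.narrative_head = nn.Linear(hidden, narrative_dim)
        # Classification head: uses encoding plus inferred narrative features.
        self.classifier = nn.Linear(hidden + narrative_dim, 1)

    def forward(self, x):
        h = self.encoder(x)
        narrative = self.narrative_head(h)       # inferred missing details
        logit = self.classifier(torch.cat([h, narrative], dim=-1))
        return torch.sigmoid(logit), narrative   # P(gang-related), latent

# Example input: weapon code, suspect count, neighborhood code, location code
# (all encodings here are invented for the demonstration).
model = PartiallyGenerativeNet()
crime = torch.tensor([[2.0, 3.0, 14.0, 87.0]])
prob, _ = model(crime)
print(float(prob))  # untrained, so roughly 0.5
```

The design choice worth noting is that the generative head lets the classifier lean on inferred details rather than only the four raw inputs, which is also why critics worry it could amplify patterns baked into the training data.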

The tool’s efficiency didn’t quell ethical concerns among others at the conference, as the web outlet Futurism reported. Some attendees said the partially generative system, while impressive from a technology perspective, could reinforce biases reflected in its underlying data and even result in an innocent person being charged with a crime.

Meanwhile, companies that supply predictive tools to police are trying to allay fears that the technology goes too far. One such company, PredPol, which publicly addresses the pros and cons of predictive policing, emphasizes that its services help prevent crime by predicting when and where it is likely to happen. It does not try to predict who will commit a crime, nor does it use “hot spot” tools based on past crimes. That may allay some concerns, but this is an issue that won’t be resolved anytime soon and that will certainly evolve over time.