Does Predictive Policing Work?
What if it were possible to predict where a crime will take place before it occurs – even determining the identity of the likely culprit in advance? Social scientists have long believed that historical crime trends influence future patterns. The revolution in advanced machine learning is putting these theories to the test. A new generation of forecasting tools is emerging that will dramatically change the nature of law enforcement – and our privacy – forever.
Predictive policing is one of the most widely known forms of crime forecasting. It is based on the expectation that crime is hyper-concentrated and contagious. Take the case of Chicago's West Side, where over 40 percent of all firearm-related murders are committed by a tiny network comprising less than four percent of the neighborhood's population. Some places and people on the West Side are also predisposed to “repeat victimization” – they are more likely to be a victim of crime than others.
The shift from describing patterns of crime to predicting them was triggered in part by the spread of powerful mapping software, data processing systems and social media. The underlying mathematical models for predictive policing can also be traced to an unlikely source: seismology. Very generally, crime behaves like earthquakes: an initial event raises the likelihood of “aftershocks” nearby and soon after, while built-in features of the environment set the background level of risk. For example, crimes associated with a particular nightclub, apartment block, or street corner can influence the intensity and spread of future criminal activity.
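To make the analogy concrete, models in this family are known as self-exciting point processes. The following is a minimal sketch of the idea in Python; the parameter values and the simple exponential decay are illustrative assumptions, not any vendor's actual formula.

```python
import math

# A minimal sketch of a self-exciting (Hawkes) point process, the
# earthquake-inspired model family behind several predictive policing tools.
# All parameter values are illustrative assumptions, not calibrated to data.

MU = 0.5     # background rate: crimes per day from fixed environmental features
ALPHA = 0.8  # excitation: how much each crime raises short-term risk
BETA = 1.2   # decay: how quickly the elevated "aftershock" risk fades

def intensity(t, past_event_times):
    """Expected crime rate at time t, given the history of earlier crimes."""
    aftershocks = sum(
        ALPHA * math.exp(-BETA * (t - ti))
        for ti in past_event_times
        if ti < t
    )
    return MU + aftershocks

# Two burglaries near a hypothetical street corner, on day 1.0 and day 1.5:
events = [1.0, 1.5]
print(intensity(2.0, events))   # elevated risk shortly after the cluster
print(intensity(10.0, events))  # risk decays back toward the background rate
```

In such a model, every recorded crime temporarily raises the predicted rate nearby, which then decays back to the background level set by the environment.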
While the stated goal of predictive policing is to reduce crime rates and police bias, there are fears it could do the opposite. Organizations like the American Civil Liberties Union are concerned that such tools exacerbate profiling and selective policing, since they perpetuate racial bias under a veneer of scientific credibility. Meanwhile, the Electronic Frontier Foundation fears that prediction platforms can result in self-fulfilling prophecies: when police expect violence, they are more inclined to respond with violence. Both institutions are concerned that such tools are already impinging on the privacy of citizens.
The potential of the platform
But does predictive policing even work? The short answer is that we still don't know. At least 60 police departments in the U.S. and Europe have already started rolling out crime forecasting systems – with some reported success. When trialed in California, for example, rates of certain types of burglary reportedly dropped. It is difficult to compare across cases, since the police departments, the predictive platforms, and the ways in which they are applied differ vastly. In some cases, the underlying algorithms draw on victimization data and social media profiles, while in others they combine criminal records, evidence of substance abuse, level of social isolation, and even financial status.
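One way such disparate inputs could be fused is to weight each feature and map the sum to a probability-like score. The sketch below assumes a simple logistic form with hypothetical features and weights; actual systems are proprietary and may work quite differently.

```python
import math

# A hypothetical illustration of fusing heterogeneous inputs into one risk
# score. Feature names, weights, and the logistic form are all assumptions;
# vendors do not disclose their actual formulas.

WEIGHTS = {
    "prior_arrests": 0.6,
    "recent_victimization": 0.9,
    "substance_abuse_flag": 0.3,
    "social_isolation_index": 0.2,
}
BIAS = -2.0

def risk_score(features):
    """Map a feature dict to a 0-1 score via a simple logistic model."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

print(risk_score({
    "prior_arrests": 2,
    "recent_victimization": 1,
    "substance_abuse_flag": 0,
    "social_isolation_index": 0.4,
}))
```

A design like this makes the bias risk plain: whatever historical skew is embedded in the inputs flows directly into the score.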
One of the greatest challenges of evaluating the impact of predictive policing is that the designers of the software are frequently unwilling to disclose the formula or data sources on which their tools are based. It is exceedingly difficult, perhaps impossible, to unpack the quality of the algorithms since they are for the most part proprietary. We don't know what's inside the black box.
And while potentially offering insights into the probability of crime occurring in a particular place and time, these tools are still unable to predict specific types of crime with a high degree of certainty. Nor can they determine precisely who will commit a crime. Instead, what predictive policing offers is an additional layer of information for an officer's assessment. But a prediction from the model does not necessarily mean crime will be prevented or deterred – much depends on how the systems are implemented.
The backlash against prediction
Though some analysts believe the impacts of predictive policing are at best incremental, criticism has done little to impede the rapid adoption of these tools by police agencies. Given the comparatively limited evidence of positive impact, there is growing criticism of the underlying mathematical formulas driving predictive platforms, with some researchers suggesting they are not as accurate as claimed. Others are concerned that the predictions will not just exacerbate police discrimination, but also contribute to under-policing in some areas.
A 2012 assessment of predictive policing, evaluating its effects on property crime in Shreveport, Louisiana, detected no statistically significant difference in crime reduction between the control and experimental areas. Even so, further evaluations of crime forecasting systems could generate new insights into their intended and unintended outcomes. For example, Carnegie Mellon University is currently testing systems with police in Pittsburgh through a program called CrimeScan, which, in addition to tracking crime incidents collected by police, incorporates other variables, including 911 calls.
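Evaluations of this kind typically compare crime counts in areas that received forecasts against matched control areas. Below is a minimal sketch of one such comparison using a permutation test; the monthly counts are fabricated for illustration and do not come from the Shreveport study.

```python
import random
import statistics

# Fabricated monthly property-crime counts for illustration only.
experimental = [42, 38, 45, 40, 37, 41]  # districts using forecasts
control      = [44, 39, 46, 43, 38, 42]  # comparison districts

observed_diff = statistics.mean(control) - statistics.mean(experimental)

# Permutation test: if the "treated"/"control" labels are interchangeable,
# how often does a random relabeling produce a difference at least this large?
pooled = experimental + control
n_treated = len(experimental)
TRIALS = 10_000
extreme = 0
for _ in range(TRIALS):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_treated:]) - statistics.mean(pooled[:n_treated])
    if diff >= observed_diff:
        extreme += 1

print(f"observed difference: {observed_diff:.2f}")
print(f"approximate p-value: {extreme / TRIALS:.3f}")  # large p => no detectable effect
```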
An especially controversial predictive policing platform is the Chicago police's “Custom Notification Program.” The “heat list,” as it is called, is an algorithmic tool designed to identify the people most likely to perpetrate, or become victims of, violent crime. Launched with support from the U.S. National Institute of Justice, it collates data on, among other things, arrests and social networks associated with known shooters and shooting victims.
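Public reporting suggests that social-network proximity to known shooters and victims plays a central role in the scoring. The sketch below illustrates that logic with a hypothetical co-arrest graph and a breadth-first search; the actual formula has not been disclosed.

```python
from collections import deque

# Hypothetical co-arrest network: people linked if arrested together.
co_arrest_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}
shooting_victims = {"D"}  # hypothetical set of known victims

def network_distance(person, graph, targets):
    """Shortest number of co-arrest links between a person and any target."""
    seen, queue = {person}, deque([(person, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in targets:
            return dist
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # not connected to any target

for person in co_arrest_graph:
    d = network_distance(person, co_arrest_graph, shooting_victims)
    # Closer network ties to victims => higher assumed risk score.
    print(person, "links to a victim:", d)
```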
A recent assessment found that since its launch in 2013, the program has not saved any lives. Instead, “at-risk individuals were not more or less likely to become victims of a homicide or shooting as a result [of the intervention], and this is further supported by city-level analysis finding no effect on the city homicide trend.” Making matters worse, the police data used by the tool is biased toward crimes committed by minority groups.
What next?
The backlash against predictive policing is growing. A coalition of civil rights groups released a statement in August 2016 denouncing the “systemic flaws, inherent bias, and lack of transparency endemic to predictive policing products and their vendors.” The release claims the systems “reinforce bias and sanitize injustice.”
However, predictive analytics are here to stay, and advances in machine learning and processing power mean that they will become more pervasive. They raise difficult ethical questions about the transparency of the technologies, the interests of vendors, and the implications for policing and civil liberties. We cannot simply wish them away, but we can call for more openness about how the algorithms are constructed. Communities can invest in open source platforms to allow for more scrutiny and improvement of these programs.
One example of the potential future of open source predictive policing is CrimeRadar. Unlike other crime forecasting tools designed for sale to police departments, CrimeRadar was created for free use by city residents. It assembles over 14 million crime events from police records to assess the probable risk of crime in specific locations at specific times – operating much like a weather forecast. What's more, the underlying data and methods are public. These and other apps designed to improve crime reporting are increasingly common. While they may not resolve all the ethical questions, they are likely to disrupt the hold of the proprietary systems currently dominating the market.
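At its simplest, a forecast of this kind can be produced by bucketing historical events by location and time and reporting relative frequencies. The sketch below illustrates the idea with a handful of fabricated events; a real system works over millions of records with more sophisticated modeling.

```python
from collections import Counter

# Fabricated crime events for illustration: (grid_cell, hour_of_day).
events = [
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 22),
    ("cell_07", 14), ("cell_12", 2),  ("cell_07", 22),
]

counts = Counter(events)
total = sum(counts.values())

def risk(cell, hour):
    """Share of historical events that fell in this cell at this hour."""
    return counts[(cell, hour)] / total  # Counter returns 0 for unseen keys

# Like a weather forecast: relative risk for a given place and time.
print(f"cell_12 at 22:00 -> {risk('cell_12', 22):.0%} of recorded incidents")
print(f"cell_07 at 02:00 -> {risk('cell_07', 2):.0%} of recorded incidents")
```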
In the end, the effectiveness of predictive policing depends not just on the quality of the algorithm, but also on the data – forecasting is meaningless without trustworthy information. In the future, independent audits of both the algorithms and the data should be introduced, complete with compliance requirements, whistle-blower channels, and protections.
By Robert Muggah
Opinion article published on December 4, 2016
The Cipher Brief