In some sense, police have long gone beyond reactive law enforcement, using information from various sources (e.g., anonymous tips and leads) as well as historical analyses to draw inferences from which to anticipate and address crime before it happens. But as policing budgets shrink and applications of predictive analytics (the catch-all phrase for a broad array of statistical analysis, machine learning, and myriad other algorithmic techniques) to the social sciences and commercial markets become better proven and more ubiquitous, local law enforcement agencies have begun to turn their attention to formal, quantitative research programs collectively dubbed "predictive policing." As paths to the systematic forecasting of criminal activity, these programs are intended to help agencies more efficiently allocate the increasingly scarce resources needed to fight crime.
Perhaps not surprisingly, however, predictive policing has also generated waves of often sensationalistic media coverage and raised serious concerns among privacy advocates. The prevailing focus on this sensationalism unfortunately obscures a more meaningful discussion of how predictive policing might, under the appropriate conditions and with applicable caveats, become a valuable and constitutionally viable component of the law enforcement arsenal. Indeed, a measured analysis of the privacy risks at stake is a useful reminder that police by and large are committed to, and have a vested interest in, not simply enforcing the law and stopping bad guys, but doing so in a manner that can be rigorously defended if challenged in the criminal justice system. As such, it is in the interest of law enforcement sponsors of predictive policing programs to carefully evaluate how their efforts uphold privacy and civil liberties standards.
A recent article (Emory Law Journal, Volume 62, forthcoming 2012) by University of the District of Columbia Assistant Professor of Law Andrew Guthrie Ferguson addresses the question of whether and to what extent these emerging approaches to predictive policing can affect the reasonable suspicion calculus (i.e., the set of considerations weighing individuals' Fourth Amendment interests against the countervailing governmental interests at stake in policing efforts such as pat-down searches when it is believed "that criminal activity may be afoot" (Terry v. Ohio, 392 U.S. 1 (1968))). In his discussion, Ferguson does an excellent job both of elucidating the landscape of predictive modeling regimes and of outlining their respective privacy and Fourth Amendment implications. The article thereby provides a good framework for considering the privacy implications of predictive policing in general.
As a starting point, Ferguson surveys the various techniques that have been employed under the heading of "predictive policing." This survey helps the reader appreciate that treating all predictive analytical approaches monolithically oversimplifies the landscape and does a disservice to those seeking a measured understanding of the field. The broad spectrum of analytical approaches entails a similarly broad range of privacy implications. At one extreme, algorithms that profile particular persons tend to evoke Minority Report anxieties among privacy advocates (i.e., concerns about wholly, almost mystically, opaque systems profiling citizens and accusing them of misdeeds they have yet to commit). Arguably more palatable approaches, such as event-based "near-repeat theory" models, focus on identifying behaviors (rather than people) that repeat known patterns under circumstances where specific environmental vulnerabilities are known to exist. These latter types are not only close kin to the well-established "high crime areas" policing paradigm, but also, as Ferguson suggests, offer a potentially privacy-protective refinement of the existing model.
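To make that contrast a bit more concrete, below is a minimal sketch of how an event-based near-repeat score might be computed. It is purely illustrative: the function name, parameters, and exponential decay form are assumptions for the example, not any agency's actual model. The idea is simply that risk around a location is assumed to rise after a nearby burglary and to fade with distance and elapsed time.

```python
import math
from datetime import datetime

def near_repeat_score(prior_events, location, when,
                      space_scale_m=200.0, time_scale_days=7.0):
    """Hypothetical near-repeat risk score for a location and time.

    Assumes the risk contributed by each prior burglary decays
    exponentially with distance (meters) and elapsed time (days).
    `prior_events` is a list of (x_m, y_m, datetime) tuples on a local
    planar grid; `location` is an (x_m, y_m) pair.
    """
    x, y = location
    score = 0.0
    for ex, ey, etime in prior_events:
        days = (when - etime).total_seconds() / 86400.0
        if days < 0:
            continue  # ignore events that occur after the time of interest
        dist = math.hypot(x - ex, y - ey)
        score += math.exp(-dist / space_scale_m) * math.exp(-days / time_scale_days)
    return score

# Toy usage: a burglary three days ago, about 150 m away, still elevates risk.
events = [(100.0, 50.0, datetime(2012, 2, 20, 22, 0))]
print(near_repeat_score(events, (220.0, 140.0), datetime(2012, 2, 23, 22, 0)))
```

The salient point is that a score like this attaches to a place and a time window, not to any individual who happens to be standing there.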
But even the more agreeable end of the predictive policing spectrum is not wholly exempt from privacy considerations when it comes to assertions of reasonable suspicion of criminal activity. The phrase "reasonable suspicion" points to a standard of certainty lower than the probable cause standard explicit in the Fourth Amendment (Terry v. Ohio, 392 U.S. 1) and, in that sense, can be thought of as nothing more than a lower probability of certitude. Ferguson argues that while a notion of probability inheres both in Fourth Amendment "probable cause" (and, by extension, "reasonable suspicion") considerations and in the "probability scoring" of many predictive policing models, the resemblance is superficial. The two are far from comparable notions of predictability, and an important distinction should be drawn between the "clinical" and "statistical" applications of prediction. The former focuses on the particularities or "specific and articulable facts" of a given investigation, "taken together with rational inferences from those facts" (Terry v. Ohio, 392 U.S. at 21), in order to derive specific assertions of criminal involvement. The latter applies a statistical modeling framework to infer possible criminal involvement based entirely on training data from previously observed circumstances that (more likely than not) have no direct relationship to the particular circumstances to which the model's predictions are applied. It is the difference between stopping and frisking an individual observed to engage in activity that, for example, has all the outward appearances of a drug deal, and stopping and frisking a person who happens to be standing in a particular place at a particular time at which a statistical model predicts an occurrence of a drug deal.
What distinguishes the clinical, human-driven predictions of traditional policing practice from the machine-driven, statistical variant can be further elaborated by analysis of the (albeit imperfect) analogy to anonymous tips, which law enforcement has long used to trigger or assist investigations. Ferguson identifies four principles that courts have historically relied upon to differentiate between the degree of certitude implied by an "inchoate and unparticularized suspicion or 'hunch'" (Terry v. Ohio, 392 U.S. 1, 27) provided by a tip and the reasonable suspicion threshold required to motivate police action: 1) predictive tips must be individualized, i.e., specific to persons and the ongoing criminal activity in which they are suspected of being implicated; 2) predictive tips must be further corroborated by police observations related to those specific persons and their suspected criminal activities; 3) the predictive value of those tips turns on the level of particularized information involved; and 4) predictions may remain viable only for a relatively short period of time absent new corroborating evidence or fresh analysis.
Put another way, it may be legitimate and defensible to employ predictive models as a preliminary factor in reasonable suspicion analysis, but, as with suspicious activity reports, tips, or leads, "the use of predictive policing forecasts, alone, will not constitute sufficient information to justify reasonable suspicion or probable cause for a Fourth Amendment stop" (Ferguson, 26). What is lacking from the statistical prediction is the particular detail linking abstract modeling outputs to the totality of circumstances constituting the actual observed particularities of a presumed crime. Moreover, even where a predictive model is intended to augment the evaluation of such particulars, timing is critical. Models do not have an indefinite shelf life, because the environmental factors underlying their predictive power are subject to change.
Even more importantly, the models themselves are only as reliable as experience (real, hard empirical evidence) dictates. A viable and defensible predictive algorithm (i.e., one that can be justified as a legitimate aid to law enforcement) must be backed by demonstrable proof of success under well-understood circumstances, not only in order to impede criminal activity but also to stand up to scrutiny if and when a well-resourced individual challenges it. The defensible application of predictive policing is therefore contingent upon a number of factors: the continued existence of environmental vulnerabilities (e.g., poor lighting making a storefront or home vulnerable to burglary), a causal logic that plausibly explains how those factors precipitate specific criminal activities, reliable and accurate data and reporting on which to base the model, and, finally, a sound experimental framework for evaluating the fidelity of those predictions. In other words, insofar as predictive policing aspires to science over clairvoyance (and it is by and large scientists, e.g., sociologists, applied mathematicians, and statisticians, who are leading predictive policing research on behalf of sponsoring law enforcement agencies), it must adhere to sound methodology to pass empirical muster.
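By way of illustration, one simple statistic such an evaluation framework might report is a hit rate: the fraction of later-observed crimes that fell inside the small set of areas the model flagged in advance, compared against a naive baseline. The sketch below is hypothetical, not a prescribed methodology; the cell identifiers, records, and baseline here are invented for the example.

```python
def hit_rate(predicted_cells, observed_crimes, crime_to_cell):
    """Fraction of later-observed crimes that fell inside the grid cells
    flagged in advance for a given shift.

    `predicted_cells` is a set of cell identifiers; `observed_crimes` is a
    list of crime records from that shift; `crime_to_cell` maps a record
    to its cell identifier.
    """
    if not observed_crimes:
        return 0.0
    hits = sum(1 for c in observed_crimes if crime_to_cell(c) in predicted_cells)
    return hits / len(observed_crimes)

# Invented records for illustration only.
observed = [{"id": 1, "cell": "C12"}, {"id": 2, "cell": "C40"}, {"id": 3, "cell": "C07"}]

# The model should be credited only with its improvement over a naive
# baseline (e.g., last month's highest-count cells), on data it never saw.
model_rate = hit_rate({"C12", "C07"}, observed, lambda c: c["cell"])
baseline_rate = hit_rate({"C03", "C12"}, observed, lambda c: c["cell"])
print(f"model {model_rate:.2f} vs. baseline {baseline_rate:.2f}")
```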
But even beyond methodological considerations, Ferguson hints at another set of concerns that come into play in Fourth Amendment analysis of the merits of predictive policing models: the explicability and defensibility of the algorithms in the courtroom. Much as criminal prosecutions today involve defense challenges to the methodological steps by which probable cause for arrest is constructed, future prosecutions in predictive policing cases will likely require a qualified witness to speak to the underlying predictive model's provenance, accuracy, timeliness, and reliability. Certainly few police officers would have the background to attest to these qualities, nor should they be expected to. Hence, qualified statisticians and program administrators will likely need to be able to explain, when challenged in court, how these algorithms work and how their predictions are translated into actionable intelligence for police. This may be less challenging for simpler algorithms, but it may become increasingly problematic as a model's complexity swells and its workings become increasingly opaque.
As successes accumulate, sponsoring law enforcement agencies are likely to be tempted to stretch predictive models to accommodate more general crime types, a move that will typically entail introducing increased complexity. Program administrators and researchers may be tempted to generalize models to go beyond the circumscribed categories for which they were initially developed and tested and to apply them to categories of criminal activity that may not have clean, causal, pattern-based explanations (e.g., crimes of passion), or that require adaptation to dynamic adversaries and/or changing environmental vulnerabilities. Machine learning algorithms, for example, may be employed in an attempt to address these onerous desiderata—the machine is expected to adapt in response to changing modeling parameters and/or feedback from historical applications of predictions in a way that would otherwise require laborious, iterative manual modeling effort.
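To illustrate what that kind of adaptation can look like in the simplest case, here is a toy sketch of an online-updating risk model. The class, features, and learning rate are invented for illustration, and real research systems are far more elaborate; the point is only that each new piece of feedback nudges the model's weights, so the final model is a product of the entire data stream it has seen and of the order in which it saw it.

```python
import math

class OnlineRiskModel:
    """Toy online logistic model: weights are nudged after every new
    observation, so the resulting model reflects the whole feedback
    history rather than a single, fixed specification."""

    def __init__(self, n_features, learning_rate=0.1):
        self.w = [0.0] * n_features
        self.lr = learning_rate

    def predict(self, x):
        # Probability-like score for a feature vector describing a cell/shift.
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, outcome):
        # `outcome` is 1 if a crime was later recorded for this cell/shift,
        # else 0; the weights drift with each report that comes in.
        err = outcome - self.predict(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]

# After many such updates, the weights encode the accumulated feedback,
# which is part of what makes later explanation difficult.
model = OnlineRiskModel(n_features=3)
model.update([1.0, 0.2, 0.0], outcome=1)
print(model.predict([1.0, 0.2, 0.0]))
```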
Beyond the aforementioned difficulties of providing a clear explanation of how a complex predictive algorithm works when challenged at trial, machine learning surfaces a novel and perhaps more daunting set of challenges for courtroom justification. Absent human supervision to ensure empirical soundness, adaptive machine learning models may develop in ways that are resistant or wholly opaque to introspection and later explanation. If the trajectory of predictive analytics in other disciplines is any kind of leading indicator, one can reasonably anticipate that machine learning techniques will increasingly become the focus of predictive policing research and that these seemingly speculative concerns will become all the more relevant.
As these algorithms become ever more inscrutable black boxes, the lines between reality and the mystical fiction of Minority Report "precognition" indeed begin to blur. So it is no wonder that the popularized fiction of predictive policing as a kind of clairvoyance seduces not only journalists reporting on the field but also some researchers. Still, Ferguson's analysis dictates a more sober and circumscribed role for predictive analytics in policing, wherein defensible algorithms share the following features:
- They are not over-reaching: they focus on a narrow set of crime types, all of which relate to environmental factors known to persist over time. Once those environmental factors change, the predictions lose their applicability. This also implies that most crimes of passion (including many violent crimes) do not lend themselves to predictive modeling.
- They make only near-term predictions; they do not forecast years in advance. As with the weather, there are far too many complex variables at play to be fully modeled, and, over time, the compounded influence of these exogenous factors will degrade even the most carefully calibrated predictions.
- They are evaluated and proven in controlled experimental circumstances designed, as much as possible, to account for the influence of exogenous factors. Even in the best of circumstances, however, it is not possible to control for everything, nor can it be expected that experimental conditions will hold in typical policing scenarios. Which is why…
- Outcomes demonstrating crime reduction improvements are likely to be modest and must be reported with careful attention to applicable caveats. The most credible predictive policing studies to date tend to suggest moderate improvements in reductions of certain classes of crime, along with pointing to the need for continued study (see, for example, Modest Gains in First Six Months of Santa Cruz's Predictive Police Program, Santa Cruz Sentinel (Feb. 26, 2012)).
All of this is not to suggest that every effort under the predictive policing umbrella is legally intractable or practically unjustifiable, but rather to call out necessary considerations of the scope and limitations of plausible predictive policing regimes. When thoughtfully approached, as Ferguson points out, certain methods may in fact enhance privacy protections. For example, near-repeat predictive models (of the type currently being tested by the Santa Cruz and Los Angeles Police Departments) may offer a legitimate opportunity to tighten notions of "high crime areas" (already in wide currency among many police forces) by refining the temporal and spatial dimensions of the high crime calculus. If, through such modeling, a police force can surgically apply pressure to specific areas at specific times, directed at specific criminal activities, the risk of privacy and civil liberties infringements can potentially be reduced. Moreover, scarce policing resources can be used more efficiently through such an approach.
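As a purely illustrative sketch of what such a refinement might look like (the grid cells, hours, and threshold below are invented), one could move from a blanket "high crime area" label toward flagging only the particular cell-and-hour combinations where historical incidents actually cluster:

```python
from collections import Counter

def narrow_hot_spots(crime_records, threshold=3):
    """Hypothetical refinement of a blanket "high crime area" designation:
    count historical incidents by (grid cell, hour of day) and flag only
    those cell-hour pairs at or above a threshold, rather than flagging an
    entire neighborhood around the clock.

    `crime_records` is an iterable of (cell_id, hour) pairs drawn from
    historical incident data; the threshold is illustrative.
    """
    counts = Counter(crime_records)
    return {pair for pair, n in counts.items() if n >= threshold}

# Toy usage: cell "C12" is flagged only for the 22:00 hour, not all day.
history = [("C12", 22)] * 4 + [("C12", 9)] + [("C40", 22)] * 2
print(narrow_hot_spots(history))  # {('C12', 22)}
```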
Now, granted, all of this hinges on an "if" that is ultimately an empirical matter and must be borne out through careful experimentation. It also assumes the persistence of specific environmental vulnerabilities, and even with those in place, it must be recognized that a prediction alone, no matter how reliable, is not enough to constitute reasonable suspicion of a person or persons who happen to be found in the temporal and spatial crosshairs of a particular predictive model.
As a company, Palantir is proud to support the work of the law enforcement community and to enable initiatives that allow police forces to do more with less. At the same time, we recognize that the development of predictive policing initiatives should be informed by careful consideration of the attendant privacy implications. In that vein, we approach potential predictive policing engagements with a commitment to doing so in a thoughtful, rigorous, and ethically and legally responsible manner.