Pre-Crime Prediction: Does It Have Value? Is It Inherently Racist?

David H. McElreath, Sherri DioGuardi, Daniel Adrian Doss
DOI: 10.4018/IJSSMET.298672

Abstract

This paper considered the emerging use of predictive analytics in the justice domain with respect to potential bias. It discussed predictive algorithms and methods from the perspectives of reported crime and community safety in the United States. Although predictive algorithms, techniques, and implementation contexts are emerging, imperfection exists with respect to their use. Despite any effectiveness or efficiency of using predictive algorithms, such use should neither deny human rights nor transgress societal laws. Regardless, the emergence of predictive policing fuels and enhances the classic debate of balancing liberty versus security within a civil society.

Introduction

It is unsurprising that data sets are used to anticipate crime. In the modern world, data continuously drives decisions, and prediction using algorithms is not new. For instance, the insurance industry has used predictors for decades to determine risk (Boodhun & Jayabalan, 2018). The banking industry has employed algorithms to determine loan eligibility (Shie, Chen, & Liu, 2012). Political parties have collected and assessed data to identify, target, and influence voters (van der Voort, Klievink, Arnaboldi, & Meijer, 2019). The marketing industry continually develops, refines, and shapes messages to potential consumers (Du, Rong, Michalska, Wang, & Zhang, 2019). Amazon, Facebook, and Google all use machine learning techniques to analyze data derived from their customers (Hewage, Halgamuge, Syed, & Ekici, 2018). Each of these industries and corporations developed data sets to identify and target their respective consumers; what has changed over time is the advancement and refinement of the technology used to analyze the data. Artificial intelligence (AI) spawned deep learning (DL), which uses artificial neural networks, modeled on the human brain, to apply sets of algorithms through which the 'machine' reaches a solution to a specific problem (Marr, 2016a). For instance, Facebook's DeepFace is a DL application that recognized faces with a 97% success rate, compared with the human success rate of 96% (Marr, 2016b). Yet even at a 97% success rate, that highly accurate application still errs 3% of the time.
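The consequences of a 3% error rate depend on scale, a point worth making concrete. The sketch below uses the 97% figure cited above; the deployment size of one million cases is a hypothetical assumption for illustration only.

```python
# Illustration: even a 97%-accurate classifier produces many errors at scale.
# The 97% accuracy comes from the DeepFace example; the one-million-case
# deployment is a hypothetical assumption.

def expected_errors(accuracy: float, num_cases: int) -> int:
    """Expected number of misclassifications at a given accuracy."""
    return round((1.0 - accuracy) * num_cases)

# A hypothetical deployment screening one million faces:
print(expected_errors(0.97, 1_000_000))  # 30000 expected misclassifications
```

At one million screenings, a 3% error rate translates into roughly thirty thousand mistaken results, which is the heart of the concern when such systems inform justice decisions.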

From a law enforcement and community safety perspective, the foundation for crime prediction is the concept that people behave predictably (to some degree) and that future behavior may be both anticipated and predicted (Hayes, 2015). If Hayes's assertion is true, can human behavior data be analyzed to determine whether behavior patterns may be anticipated? If so, could more efficient interventions to deter crime and maintain societal order be crafted without crossing the line into civil or human rights violations?

PREDICTIVE ALGORITHMS

Cambridge Dictionary (n.d.) defines an algorithm as "a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem." Predictive algorithms rely on artificial intelligence (AI) applied to machine learning (Marr, 2016a), and all three (algorithms, machine learning, and AI) are grounded in mathematical principles such as probability theory and inferential statistics. Rigano (2019, para. 4) indicated that, "Conceptually, AI is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention." While people tend to think of mathematics as dealing in absolute truths and as an objective science, O'Neil (2016, 2017), a mathematician and data scientist, insisted that algorithms are nothing more than opinions embedded in code. An algorithm is a computer-coded instruction, written by human programmers, that allows patterns to be discerned within massive amounts of historical data. By assuming the found patterns are fixed facts, predictions are then made about single locations and/or individual people (Ferguson, 2017a). In crime prediction, 'hot spots' are flagged; for recidivism risk, a single score is assigned that identifies an individual within the justice system as having a high risk to recidivate. Over the past half dozen years, these predictive algorithms have been used as supportive decision-making tools within all components of the criminal justice system (courts, corrections, and law enforcement). Within policing, predictive algorithms have become a "multi-million dollar business" (Ferguson, 2017a, p. 1132).
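The hot-spot pattern described above, in which historical counts are treated as fixed facts and projected forward, can be reduced to a very small sketch. The grid-cell identifiers and incident data below are entirely hypothetical, and real predictive-policing systems use far richer models; the sketch only illustrates the underlying logic, and why predictions inherit whatever biases the historical records contain.

```python
# Minimal sketch of hot-spot flagging with hypothetical data. The key
# assumption mirrors the text: past incident counts per location are
# treated as fixed facts and projected forward as predictions.

from collections import Counter

# Hypothetical historical incidents, each tagged with a grid-cell id.
historical_incidents = ["A1", "B2", "A1", "C3", "A1", "B2", "A1"]

def flag_hot_spots(incidents, threshold):
    """Flag grid cells whose historical incident count meets the threshold."""
    counts = Counter(incidents)
    return sorted(cell for cell, n in counts.items() if n >= threshold)

print(flag_hot_spots(historical_incidents, 3))  # ['A1']
```

Note that the model never observes crime itself, only recorded incidents; if cell A1 was simply patrolled more heavily in the past, it will be flagged more heavily in the future, which is precisely the feedback-loop concern the bias debate turns on.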
