
Bias by Design: AI-powered Predictive Policing and Racial Profiling

Nayanika Jha

Artificial intelligence (AI) has profoundly impacted society, ushering in the fourth industrial revolution and embedding technology into every aspect of human life. However, while AI presents unprecedented opportunities, it also poses significant challenges. At the heart of AI functionality are algorithms: sets of instructions enabling machines to operate without direct human intervention. Algorithms work by processing vast amounts of data, identifying patterns, and making predictions based on the learned information (DeAngelis, 2014). Yet the data used in AI algorithms can introduce biases, whether through unrepresentative sampling or through aggregation from multiple sources without accounting for variation. Prolonged use of biased algorithms then perpetuates those biases in decision-making processes.
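To make this concrete, here is a minimal, purely hypothetical sketch (in Python, with invented numbers) of how unrepresentative sampling alone can skew a dataset: two neighbourhoods are assumed to have identical underlying offence rates, but one is observed far more heavily, so the recorded data suggests a difference that does not exist.

```python
import random

random.seed(0)

TRUE_OFFENCE_RATE = 0.05                    # assumed identical in both neighbourhoods
OBSERVATION_RATE = {"A": 0.9, "B": 0.1}     # neighbourhood A is watched far more often
POPULATION = 10_000                         # hypothetical residents per neighbourhood

records = {"A": 0, "B": 0}
for hood in ("A", "B"):
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENCE_RATE
        observed = random.random() < OBSERVATION_RATE[hood]
        if offended and observed:           # only observed offences enter the dataset
            records[hood] += 1

for hood, count in records.items():
    # a model trained on these records would treat this as the "crime rate"
    print(f"Neighbourhood {hood}: recorded rate = {count / POPULATION:.3f}")
```

Running the sketch shows neighbourhood A with roughly nine times the recorded rate of B even though the true rates are identical; any model trained on such records inherits that distortion.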



Predictive Policing

Many law enforcement agencies have turned to AI technologies for predictive policing, which uses historical data to forecast potential criminal activity in specific areas (Meijer & Wessels, 2019). While predictive policing promises proactive crime prevention, concerns arise about biases embedded in these systems, particularly racial discrimination. Predictive policing operates at multiple levels, including area-based, event-based, and person-based policing. However, the data driving these predictions often reflects the historical over-policing of communities of colour, perpetuating racial biases. This racial profiling is compounded by systemic issues, such as the disproportionate arrest of Black individuals, leading to the further hyper-criminalization of minorities.


AI technologies, including machine learning (ML), can amplify biases in predictive policing through large datasets and continuously evolving algorithms. Biases originate mainly from developers' backgrounds and data sources, producing a “feedback loop” that reinforces existing social systems (Storbeck, 2022). The LAPD long used, and has since discarded, PredPol, a crime-prediction software that generates 500-square-foot hot spots indicating probable crime locations over the next twelve hours (Nguyen, 2019). Despite the stated goal of excluding race and socioeconomic status from its inputs, the algorithm is far from neutral. According to the US Department of Justice, a Black person aged 20-34 is more than twice as likely as a white person to be reported and subsequently arrested for a crime (Solomon, 2012). The racially skewed data thus reflects a presumed correlation between race and criminality that runs through the American criminal justice system.
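The feedback loop can be illustrated with a toy simulation; the sketch below is not PredPol's actual (proprietary) algorithm, and all numbers are invented. Patrols are sent to whichever area has the most recorded incidents, patrol presence converts incidents into records, and the updated records drive the next day's allocation.

```python
import random

random.seed(1)

TRUE_RATE = 0.05                    # assumed identical true incident rate in both areas
records = {"A": 60, "B": 40}        # slightly skewed historical records (assumption)

for day in range(100):
    # the "prediction": patrol whichever area already has more recorded incidents
    target = max(records, key=records.get)

    # patrol presence turns some incidents into records; the unpatrolled
    # area's incidents go largely unrecorded
    for _ in range(100):            # encounters observed by that day's patrols
        if random.random() < TRUE_RATE:
            records[target] += 1

print(records)                      # area A's count keeps growing; B's barely changes
```

Because the initial disparity decides where patrols go, and patrols generate the very records that justify the next deployment, the gap widens even though the underlying rates are identical.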


Intersectionality complicates bias in AI technologies, as individuals' overlapping identities shape their experiences. Buolamwini and Gebru's "Gender Shades" study found that the commercial gender-classification algorithms offered by Face++, IBM, and Microsoft misclassified roughly three out of ten Black female faces. Not only were the algorithms more accurate in classifying men than women (gender discrimination), and more accurate in classifying White people than Black people (racial discrimination), but they also performed worse on Black women than on Black men and White women (the intersection of gender and racial discrimination) (Buolamwini & Gebru, 2018). Those at the intersection of multiple marginalized identities are thus the most exposed to these inequities.
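The methodological core of "Gender Shades", reporting accuracy disaggregated by subgroup rather than in aggregate, is straightforward to reproduce. The sketch below uses invented evaluation records (not the paper's data) and simply computes accuracy for each gender, each race, and each gender-by-race intersection.

```python
from collections import defaultdict

# hypothetical evaluation records: (gender, race, prediction_was_correct)
results = [
    ("male", "white", True), ("male", "white", True), ("male", "white", True),
    ("male", "black", True), ("male", "black", True), ("male", "black", False),
    ("female", "white", True), ("female", "white", True), ("female", "white", False),
    ("female", "black", True), ("female", "black", False), ("female", "black", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for gender, race, ok in results:
    for key in [("overall",), (gender,), (race,), (gender, race)]:
        totals[key] += 1
        correct[key] += ok

for key in sorted(totals, key=len):
    print(f"{'/'.join(key):15s} accuracy = {correct[key] / totals[key]:.2f}")
```

Even on these made-up numbers, the overall accuracy looks tolerable while the female/black subgroup scores far lower, which is exactly the kind of disparity an aggregate metric conceals.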


Securitization and Predictive Policing

In Krause and Williams’ “Security Studies”, the referent object is the entity around which the security discourse revolves (Krause & Williams, 2018). Here, it is public safety: law enforcement agencies frame predictive policing as a necessary means of ensuring public order, safeguarding safety, and mitigating crime. The threat is criminal activity, which in many Western societies, and especially in the US, is disproportionately associated with people of colour. As seen in the case of PredPol, the results produced were racially skewed, perpetuating discrimination. In another case, COMPAS, a risk-assessment tool used in US courts, was found not only to be unreliable in predicting criminal behaviour but also to be biased against Black defendants: the software was roughly twice as likely to incorrectly flag Black defendants as likely reoffenders as it was other defendants (Storbeck, 2022).
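The COMPAS finding concerned error rates rather than overall accuracy: among defendants who did not reoffend, Black defendants were far more likely to have been flagged as high risk. Below is a minimal sketch of how such a false positive rate is computed per group, using invented records (not the COMPAS data or its scoring model).

```python
# hypothetical records: (group, flagged_high_risk, actually_reoffended)
cases = [
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", False, True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

for group in ("black", "white"):
    non_reoffenders = [c for c in cases if c[0] == group and not c[2]]
    false_positives = [c for c in non_reoffenders if c[1]]
    fpr = len(false_positives) / len(non_reoffenders)
    print(f"{group}: false positive rate among non-reoffenders = {fpr:.2f}")
```

In this toy data the rate for the first group is twice that of the second (0.67 versus 0.33), mirroring the kind of disparity reported for COMPAS.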


Here, through a speech act, surveillance, and predictive policing in particular, is securitized by presenting it as necessary for safeguarding public safety and preventing crime (Waever, 1995). The use of AI in surveillance is advocated as the countermeasure to these dangers. Language that evokes fear and urgency is employed to convey a sense of immediacy and the need for quick action. Finally, the speech act legitimizes the adoption of extraordinary measures, such as heightened surveillance and the use of AI technologies, to counter the identified threat. Despite concerns about bias and discrimination, surveillance is framed as a necessary response to the security challenge, which in turn justifies its expansion in the name of public safety.


Feminist Critique

Predictive policing reinforces institutionalized prejudices, especially those directed at communities of colour, and these prejudices disproportionately affect women of colour. Racial profiling in AI algorithms further marginalizes already vulnerable groups by increasing their exposure to harassment, discrimination, and surveillance. People's intersectional experiences, especially those of Black women, are undervalued, which feeds the cycle of discrimination on the grounds of both gender and race. Hansen's feminist critique exposes the gendered dimensions of AI-driven predictive policing, revealing how women, especially those from marginalized communities, bear the brunt of discriminatory practices. This critique challenges the notion of AI technologies as neutral and objective and highlights their role in perpetuating and exacerbating existing inequalities.


Biopolitics

Foucault's work on biopolitics sheds light on the impact of AI on society, particularly in predictive policing. Biopolitics emphasizes the governance and regulation of populations, focusing on techniques of power that manage life itself (Foucault, 1978). AI-driven predictive policing exemplifies biopolitical strategy: it relies on the collection and analysis of massive amounts of data in order to govern and control societal behaviour. Predictive policing uses algorithms to detect patterns in data and forecast criminal activity, effectively regulating populations by targeting specific areas or individuals for surveillance and intervention. In predictive policing, power acts on the population in an anticipatory manner, and the species body becomes the referent object.


The PARIS school claims that security and insecurity are part of the same continuum. Its approach to (in)securitization investigates how security narratives are constructed and how issues such as AI-driven predictive policing become securitized. It highlights the role of powerful actors in framing certain phenomena as security threats, thereby justifying the use of surveillance technologies and the expansion of state control. In the context of predictive policing, this approach underscores the need to critically evaluate the securitization of AI technologies, which has the potential to erode civil liberties and reinforce social inequalities. It also stresses the importance of balancing security measures with respect for individual rights and democratic principles (Bigo & McCluskey, 2018).




 

References

  1. Bigo, D., & McCluskey, E. (2018). What Is a PARIS Approach to (In)securitization? In A. Gheciu & W. C. Wohlforth (Eds.), The Oxford Handbook of International Security (pp. 1-21). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198777854.013.9

  2. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

  3. DeAngelis, S. F. (2014). Artificial Intelligence: How Algorithms Make Systems Smart. Wired. https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/  

  4. Foucault, M. (1978). The History of Sexuality, Volume 1: An Introduction. New York, NY: Pantheon Books.

  5. Krause, K., & Williams, M. (2018). Security and “Security Studies”: Conceptual Evolution and Historical Transformation. In A. Gheciu & W. C. Wohlforth (Eds.), The Oxford Handbook of International Security. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198777854.013.2

  6. Meijer, A., & Wessels, M. (2019). Predictive Policing: Review of Benefits and Drawbacks. International Journal of Public Administration, 42(12), 1031-1039. https://doi.org/10.1080/01900692.2019.1575664

  7. Nguyen, L. (2019). Predictive policing algorithm perpetuates racial profiling by LAPD. Daily Bruin. https://dailybruin.com/2019/05/02/predictive-policing-algorithm-perpetuates-racial-profiling-by-lapd

  8. Solomon, A. L. (2012). In Search of a Job: Criminal Records as Barriers to Employment. National Institute of Justice. https://nij.ojp.gov/topics/articles/search-job-criminal-records-barriers-employment#2-0

  9. Storbeck, M. (2022). Artificial intelligence and predictive policing: risks and challenges. EUCPN. https://eucpn.org/sites/default/files/document/files/PP%20%282%29.pdf

  10. Waever, O. (1995). Securitization and Desecuritization. In R. D. Lipschutz (Ed.), On Security (Chapter 3). http://www.ciaonet.org/book/lipschutz/lipschutz13.html

8 Comments


Naisha Srivastav
Apr 30, 2024

This is a really well-written article; it is interesting and informative. There is a great need to understand the risks associated with artificial intelligence and the degree of harm its negative effects can cause, and this article is very relevant to today's technological and security context. Thank you for all your insights!


Anukriti Singh
Apr 26, 2024

This was an exciting read. The critical examination of algorithmic biases and their effects on underrepresented communities highlights the necessity for responsible AI technology development and use to guarantee fair and just societal outcomes. To mitigate systemic biases and protect civil liberties in law enforcement procedures, we must move forward with incorporating various perspectives and implementing transparency in algorithmic decision-making.

Nayanika Jha
Apr 30, 2024
Replying to Anukriti Singh

Thank you for your feedback! Your emphasis on responsible AI development resonates strongly, emphasizing the importance of inclusivity and transparency in addressing systemic biases.


Suhani Sharma
Apr 26, 2024

Thank you for writing such an insightful article; it serves as a compelling call to action for policymakers, technologists, and activists to address the biases inherent in AI-driven predictive policing. By advocating for responsible AI practices and promoting transparency and accountability, we can work towards a more just and equitable society.


Ayushi Raghvendram
Apr 23, 2024

Hey Nayanika, your article was an interesting read. Can you please elaborate on how we can ensure that AI systems are designed and trained in a way that respects user privacy and prevents the perpetuation of the biases you mentioned in your blog?

Nayanika Jha
Apr 30, 2024
Replying to Ayushi Raghvendram

Diverse datasets, inclusive model-training practices, regular audits, and transparency can all help mitigate biases.


Siyona Shaju
Apr 17, 2024

Through your arguments, it can be understood that racial profiling has been greatly institutionalized over the years. Can we say that the state is one of the main actors responsible for this large-scale discrimination and torture via AI/technological policing, and that the state itself has to be (or come up with) the solution?

Nayanika Jha
Apr 30, 2024
Replying to Siyona Shaju

Yes, indeed! AI systems are increasingly being integrated into the state apparatus, as in the case of the overt use of AI for ethno-political repression in China, or the presence of algorithmic biases perpetuating racial and even gender discrimination in the US. Systemic changes, such as regulatory oversight and legal safeguards, are necessary to correct this paradox.
