Comment to Shape AI in Policing Policy
Department of Justice and Department of Homeland Security
The Department of Justice wants to know what you think about the use of artificial intelligence (AI) in the criminal legal system, specifically in terms of privacy, civil rights, and civil liberties.
AI tools like facial recognition, automated license plate readers (ALPRs), gunshot detection technology, social media monitoring, and predictive policing are being used for sentencing, police surveillance, crime forecasting, forensic analysis, prison management, and crime and risk assessments. The use of these tools means more decision making and analysis being farmed out to automated systems, and more biased policing practices as a result.
We must make it clear: there is no way for law enforcement to use this technology safely. It is biased, infringes on our rights, and should not be used.
The focus should be on stopping the use of these dangerous technologies.
It is important for the DOJ to hear from us. These comments might not stop them from using AI technology, but it is critical to engage in the process and make our opposition clear.
Add your name to the comment now and we’ll make sure it gets to them!
To:
Department of Justice and Department of Homeland Security
From:
[Your Name]
Law enforcement's use of AI tools such as facial recognition, automated license plate readers (ALPRs), gunshot detection, and predictive policing infringes on protected rights.
The following AI tools in particular pose a risk to our privacy, civil rights, and civil liberties, and their use should be stopped:
- Facial recognition: Facial recognition technology is unreliable, unjust, and a threat to basic rights and safety. Facial recognition surveillance programs identify the wrong person up to 98% of the time. These errors have real-world consequences, including harassment, wrongful imprisonment, and deportation. They also have a chilling effect on people’s ability to exercise their protected rights, including participation in protests. The technology frequently misidentifies people of color, women, and children, perpetuating bias and discrimination and putting these vulnerable groups at greater risk of systemic abuse. Rather than advancing justice, this technology enables automated and pervasive monitoring. It is a tool for control, oppression, and surveillance, not safety or justice.
- Automated license plate readers (ALPRs): While ALPRs help law enforcement track and trace the movements of anyone who drives a car, they do little to protect individuals, stop crimes, or keep communities safe. Instead, they are a tool for targeting oppressed and vulnerable communities. They infringe on the right to move freely and frequently make errors, leading to frightening police encounters and wrongful arrests. The vast majority of data recorded by ALPRs is of ordinary people rather than anyone involved in a crime. ALPR technology is often used to target drivers who visit sensitive places such as health centers, immigration clinics, protests, or centers of religious worship. ALPRs enable widespread surveillance that does more to invade individuals’ privacy and violate the rights of entire communities than to keep those communities safe.
- Person-based predictive policing: Person-based predictive policing algorithms similarly threaten our privacy and rights. Police departments have been known to deploy predictive policing technology in communities with little (if any) transparency or evaluation of the tools in use. Because these technologies rely on inputs of historical data, they inherit the long history of racist policing in the U.S. The result is a feedback loop in which past racist policing practices shape future policing, continuing to over-police communities of color. While some claim that predictive algorithms remove human bias from the process, their reliance on biased data means they merely replicate it.
- Shot detection: Shot detection technologies like ShotSpotter are marketed as a way to detect gunshots and send police to the location where shots were fired. But more often than not, they waste resources and increase police presence and altercations. Numerous independent analyses have found high rates of false alarms, including reports that up to 90% of "probable gunshots" could not be connected to a verifiable shooting incident. These alerts have already resulted in police killing a 13-year-old and in multiple cases of false imprisonment.
All of these tools are disproportionately used by law enforcement to target and harm marginalized and oppressed communities, including Black and Brown people, religious groups, and immigrants. The United States law enforcement system has a long history of racial bias that is replicated by the use of predictive policing and other AI tools. There is no question that the use of these technologies is unjustly harmful to people of color and to society as a whole.
Ultimately, the Department of Justice should closely scrutinize any technology that automates existing practices. The ability to speed up decision making and action through AI is often touted as a benefit, but because these technologies speed up biased practices and expand surveillance, they cause more harm than good.