Tell the Department of Justice: we don’t trust you to use facial recognition and other algorithmic tech!

Department of Justice and Department of Homeland Security

The Department of Justice and Department of Homeland Security want to know what you think about law enforcement use of facial recognition technology and other person-based biometric and predictive algorithms.

This is a key moment to make it clear: there is no way for law enforcement to use facial recognition and other biometric technology safely. It should be banned from use!

The purpose of this comment period is to address concerns around the use of this technology in relation to privacy, civil rights, and civil liberties. While we don’t assume these comments will stop these agencies from using facial recognition technology, we know it’s critical to engage in the process and make sure our opposition to its use is clear.

Submit your comment now - tell the DOJ there is no way to safely use facial recognition or other tools of biometric and algorithmic surveillance. They should be banned!



To: Department of Justice and Department of Homeland Security
From: [Your Name]

I am writing to express my opposition to law enforcement use of facial recognition and other biometric surveillance technologies, including fingerprint and iris biometric technologies; DNA biometric technologies (including familial searching), probabilistic genotyping software, and predictive phenotyping; and person-based predictive policing algorithms. These tools infringe on my right to privacy, put my most sensitive personal information at risk, and have a chilling effect on my right to free speech.

Biometric surveillance tools are disproportionately used to target and harm vulnerable communities, including Black and brown people, religious groups, and immigrants.

The United States justice system has a long history of racial bias that is augmented by the use of biometric surveillance systems. From deploying facial recognition surveillance in predominantly Black neighborhoods to searching mugshot databases that are disproportionately made up of Black and brown faces due to decades of racist policing practices, there is no question that the use of this technology is unjustly harmful to people of color. There are documented cases of Black people being misidentified by facial recognition because the systems are less accurate at identifying non-white faces, and these misidentifications have led to wrongful arrests. But even with completely accurate facial recognition systems, their use within a racist justice system would continue to put Black and brown people at increased risk.

Immigrant communities already experience increased biometric surveillance by DHS, ICE, CBP, and other agencies that use these tools to track, target, and detain immigrants. This is done without consent, without an opportunity to opt out, and without concern for the harms it can cause, including deportation and cyberattacks that leave people's most sensitive data exposed. Much like our broader criminal legal system, immigration enforcement systems have a deeply racist track record, and the use of biometric technologies to police immigrants supercharges that racism. There is no way to protect the rights of immigrants while placing them under these systems of constant surveillance.

Facial recognition has a chilling effect on people’s ability to exercise their protected rights, including to participate in protests. Law enforcement use of facial recognition allows for anyone who participates in protests or other constitutionally protected public actions to be identified and tracked. We have seen this threat realized in recent years when law enforcement agencies used facial recognition on Black Lives Matter protesters. The use of these tools essentially eradicates human privacy and the ability to speak up safely.

Person-based predictive policing algorithms similarly threaten our privacy and rights. Police departments have been known to deploy predictive policing technology in communities with little (if any) transparency or evaluation of the tools in use, and to use these systems to determine who is likely to commit a crime. Again, the long history of racist policing in the U.S. influences predictive technologies that rely on inputs of historic data. This creates an endless cycle in which past racist policing practices shape future policing practices that continue to over-police communities of color. While some claim that predictive algorithms remove human bias from the process, their reliance on biased data simply encodes that bias into the system.

These technologies allow law enforcement to track our movements and target people based on race, religion, political affiliation, and speech, which puts all of our rights in danger. There is nothing that the Department of Justice or Department of Homeland Security could change that would make me trust law enforcement agencies to use these technologies, and there is no way to meaningfully protect civil rights under their use. That is why I call on the Department of Justice and Department of Homeland Security to ban their use.