DOJ funding pipeline subsidizes questionable big data surveillance technologies

By The Conversation

Predictive policing has been shown to be an ineffective and biased policing tool. Yet, the Department of Justice has been funding the crime surveillance and analysis technology for years—and continues to do so despite criticism from researchers, privacy advocates, and members of Congress.

Senator Ron Wyden, D-Oregon, and U.S. Rep. Yvette Clarke, D-New York, joined by five Democratic senators, called on Attorney General Merrick Garland to halt funding for predictive-policing technologies in a letter issued Jan. 29, 2024. Predictive policing involves analyzing crime data in an attempt to identify where and when crimes are likely to occur and who is likely to commit them.

The request came months after the Department of Justice failed to answer basic questions about how predictive-policing funds were being used and who was being harmed by arguably racially discriminatory algorithms that have never been proven to work as intended. The Department of Justice could not say who was using the technology, how it was being evaluated, or which communities were affected.

While focused on predictive policing, the senators’ demand raises what I, a law professor who studies big data surveillance, see as a bigger issue: What is the Department of Justice’s role in funding new surveillance technologies? The answer is surprising and reveals an entire ecosystem of how technology companies, police departments, and academics benefit from the flow of federal dollars.

The money pipeline

The National Institute of Justice, the DOJ’s research, development, and evaluation arm, regularly provides seed money for grants and pilot projects to test out ideas like predictive policing. A National Institute of Justice grant funded the first predictive-policing conference in 2009, which launched the idea that past crime data could be run through an algorithm to predict future criminal risk. The institute has given $10 million to predictive-policing projects since 2009.

Because there was grant money available to test out new theories, academics and startup companies could afford to invest in new ideas. Predictive policing was just an academic theory until there was cash to start testing it in various police departments. Suddenly, companies launched with the financial security that federal grants could pay their early bills.

National Institute of Justice-funded research often turns into for-profit companies. Police departments also benefit from getting money to buy the new technology without having to dip into their local budgets. This dynamic is one of the hidden drivers of police technology.

Once a new technology gets big enough, another DOJ entity, the Bureau of Justice Assistance, funds projects with direct financial grants. The bureau funded police departments to test one of the biggest place-based predictive policing technologies—PredPol—in its early years. The bureau has also funded the purchase of other predictive technologies.

The Bureau of Justice Assistance funded one of the most infamous person-based predictive policing pilots in Los Angeles, Operation LASER, which targeted “chronic offenders.” Both experiments—PredPol and LASER—failed to work as intended. The Los Angeles Office of the Inspector General identified the negative impact of the programs on the community—and the fact that the predictive theories did not work to reduce crime in any significant way.

As these DOJ entities’ practices indicate, federal money not only seeds but feeds the growth of new policing technologies. Since 2005, the Bureau of Justice Assistance has given over $7.6 billion of federal money to state, local, and tribal law enforcement agencies for a host of projects. Some of that money has gone directly to new surveillance technologies. A quick skim through the public grants shows approximately $3 million directed to facial recognition, $8 million for ShotSpotter, and $13 million to build and grow real-time crime centers. ShotSpotter (now rebranded as SoundThinking) is the leading brand of gunshot-detection technology. Real-time crime centers combine security camera feeds and other data to provide surveillance for a city.

The questions not asked

None of this is necessarily nefarious. The Department of Justice is in the business of prosecution, so it is not surprising for it to fund prosecution tools. The National Institute of Justice exists as a research body inside the Office of Justice Programs, so its role in helping to promote data-driven policing strategies is not inherently problematic. The Bureau of Justice Assistance exists to assist local law enforcement through financial grants. The DOJ is feeding police surveillance power because it benefits law enforcement interests.

The problem, as indicated by Senator Wyden’s letter, is that in subsidizing experimental surveillance technologies, the Department of Justice did not do basic risk assessment or racial justice evaluations before investing money in a new technological solution. As someone who has studied predictive policing for over a decade, I can say that the questions asked by the senators were not asked in the pilot projects.

Basic questions of who would be affected, whether there could be a racially discriminatory impact, how it would change policing, and whether it worked were not raised in any serious way. Worse, the focus was on deploying something new, not double-checking whether it worked. If you are going to seed and feed a potentially dangerous technology, you also have an obligation to weed it out once it turns out to be harming people.


Only now, after activists have protested, after scholars have critiqued, and after the original predictive-policing companies have shut down or been bought by bigger companies, is the DOJ starting to ask the hard questions. In January 2024, the DOJ and the Department of Homeland Security asked for public comment to be included in a report on law enforcement agencies’ use of facial recognition technology, other biometric technologies, and predictive algorithms.

Under a mandate from Executive Order 14074, on advancing effective, accountable policing and criminal justice practices to enhance public trust and public safety, the DOJ Office of Legal Policy is going to evaluate how predictive policing affects civil rights and civil liberties. I believe that this is a good step—although a decade too late.

Lessons not learned?

The bigger problem is that the same process is happening again today with other technologies. As one example, real-time crime centers are being built across America. Thousands of security cameras stream to a single command center that is linked to automated license plate readers, gunshot-detection sensors, and 911 calls. The centers also use video analytics technology to identify and track people and objects across a city. And they tap into data about past crime.

Millions of federal dollars from the American Rescue Plan Act are going to cities specifically designated to address crime, and some of those dollars have been diverted to build real-time crime centers. The centers are also being funded by the Bureau of Justice Assistance.

Real-time crime centers can do predictive analytics akin to predictive policing simply as a byproduct of all the data they collect in the ordinary course of a day. The centers can also scan entire cities with powerful computer vision-enabled cameras and react in real time. The capabilities of these advanced technologies make the civil liberties and racial justice fears around predictive policing pale in comparison.

So while the American public waits for answers about a technology, predictive policing, which had its heyday 10 years ago, the DOJ is seeding and feeding a far more invasive surveillance system with few questions asked. Perhaps things will go differently this time. Maybe the DOJ/DHS report on predictive algorithms will look inward at the department’s own culpability in seeding the surveillance problems of tomorrow.


Andrew Guthrie Ferguson is a professor of law at American University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

