When AI meets law enforcement: The future of predictive policing

Police in East Lansdowne, PA, are getting connected. In October, the Philadelphia suburb's police department deployed artificial intelligence-powered Axon Body 4 cameras that offer two-way translation of more than 50 foreign languages. Earlier this month, the borough said that 41 AI-driven cameras have been installed along roadways throughout the town in an effort to make the streets safer by capturing criminal activity in real time.

Police Chief James Cadden said the AI-driven cameras have played a key role in two cases, including the arrest of a suspect who was following two children. He also stressed the department's commitment to protecting residents' privacy, adding that the cameras do not use facial recognition and cannot see inside homes.

While early predictive policing systems have been abandoned amid backlash, newer AI tools are increasingly embedded in routine police infrastructure. As AI systems for policing and surveillance expand, public anxieties about algorithmic decision-making are increasingly reflected in popular culture. The top movie in North America on Jan. 26 was Mercy, a science fiction film in which an AI judge tells a detective that he is on trial for the murder of his wife and gives him 90 minutes to prove his innocence or face execution.

Expert cites AI bias issues

We're not contending with robot judges just yet, but analysts have expressed concerns about predictive AI, which uses statistical analysis and machine learning to identify patterns, anticipate behaviors, and forecast upcoming events. Critics worry about racial and socioeconomic biases, mass-surveillance privacy violations, and a lack of transparency.
Observers often invoke another sci-fi film, Steven Spielberg's 2002 Minority Report, in which Tom Cruise's character is framed for a future murder. Andrew Lee, a partner with Jones Walker LLP, noted that "the comparison may no longer be hyperbole."

"Unlike Spielberg's film, in which the technology worked relatively well, real-world predictive policing has been documented to have bias, opacity, and constitutional issues that alarm organizations considering these tools," he wrote in The National Law Review.

Lee cited a 2018 study that found commercial facial recognition systems show error rates of just 0.8% for light-skinned men but 34.7% for darker-skinned women, a roughly 40-fold disparity. A 2019 study by the National Institute of Standards and Technology (NIST) tested 189 facial recognition algorithms from 99 developers and found that African American and Asian faces were 10 to 100 times more likely to be misidentified than white male faces.

There are also worries about "automation bias," the tendency of people to defer to computers on the assumption that AI-generated analysis is inherently superior to human reasoning.

"At least eight Americans have been wrongfully arrested after facial recognition misidentifications, with police in some cases treating software suggestions as definitive facts," Lee said.

Ryan Jenkins, a philosophy professor at California Polytechnic State University, said AI predictive systems can be developed responsibly to serve public safety, "but they also provide a powerful tool for surveillance and repression if they are weaponized against the populace."

"Systems that make specific predictions about individuals and their risk scores are more worrisome, more or less what is contemplated in Minority Report, while forecasts about places likely to experience crime seem less concerning," he said.

AI technology showing up everywhere

Jenkins added that there is tentative evidence these systems can reduce crime, "though randomized, controlled trials are scarce and the data are noisy."

Andrew Guthrie Ferguson, a law professor at the George Washington University Law School, said that earlier data-driven systems have largely been rolled back.

"We had almost a decade of experimentation with what I viewed as the first generation of predictive policing, [which] has largely been pushed back on, either because people saw it as racially biased, or a bad use of resources, or it didn't work," he said. "And yet at the same time, new AI technologies are showing up everywhere."

Ferguson, author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, which is scheduled for a March release, said that there is "some pretty amazing technology" that allows central command centers to connect to a police officer's body camera.

"In some ways I think predictive policing is being baked into the data-collection infrastructure that we're building in modern policing, and it's still a problem, but it doesn't seem quite as Minority Report scary," he said.

Axon, maker of the Taser, provides body-worn cameras, video storage solutions, and AI that helps police write incident reports. "Think about the thousands and thousands of officers who are going out on thousands and thousands of shifts," Ferguson said. "They all have video running, and someone has to store that and then charge the cities for storage."

Flock Safety, funded by Andreessen Horowitz, makes automated license plate readers for law enforcement, associations, and businesses, capturing license plates, makes, and models.
AI-powered platforms now allow 911 dispatchers to receive and analyze cellphone footage and live video.

"It is a new and untested form of the privatization of public safety, with a host of unanswered questions about democratic accountability, privacy, and police power," Ferguson said.
