
The Complex Promise and Perils of AI in Policing


The Associated Press recently reported that police departments in Oklahoma City and Fort Collins, Colorado, have started using generative AI to create incident reports. Working from bodycam audio, these systems can generate a narrative of events almost instantly, a task that would otherwise take an officer minutes or even hours. While officers appreciate the time savings and perceive the reports as accurate, this development raises significant concerns.

The critical issue is the inherent error rate of the AI technologies involved. Speech-to-text systems make mistakes. Language models make mistakes. And these errors compound. If a speech-to-text system is 90% accurate, and the language model that turns its transcript into a report is also 90% accurate, overall accuracy can fall to roughly 81% (0.9 × 0.9). Background noise in the recordings degrades transcription further and exacerbates the problem.
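To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The 90% and 80% figures are illustrative assumptions, not measured accuracies of any deployed system, and the assumption that the two stages fail independently is itself optimistic.

```python
# Illustrative only: rough compounding of error rates across pipeline stages.
# The accuracy figures are hypothetical, not measurements of any deployed system.

speech_to_text_accuracy = 0.90     # fraction of the audio transcribed correctly
report_generation_accuracy = 0.90  # fraction of the transcript rendered faithfully

# If the stages fail roughly independently, overall fidelity is the product.
overall = speech_to_text_accuracy * report_generation_accuracy
print(f"Overall fidelity: {overall:.0%}")  # -> 81%

# Noisier audio degrades transcription first, and the loss propagates downstream.
noisy_stt_accuracy = 0.80  # hypothetical accuracy on a noisy recording
print(f"With noisy audio: {noisy_stt_accuracy * report_generation_accuracy:.0%}")  # -> 72%
```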

The process is not as simple as transcribing bodycam audio and generating a report. We must ask ourselves: How were these systems trained? The data used for training directly affects how well these systems perform. Were they trained on real-world incident reports, linking audio to transcripts and then to incident reports? Or on just the reports themselves? In the former, there is a link between what was said and what was reported. In the latter, the structure of the report could dominate the output, with elements of the transcript being ignored. 
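To see where these questions bite, consider a minimal sketch of the naive two-stage pipeline. The function names and the prompt are placeholders for illustration, not the actual systems these departments use.

```python
# A minimal sketch of the naive two-stage pipeline. transcribe() and generate()
# are stand-ins for whatever speech-to-text and language models a department
# actually uses; they are assumptions for illustration, not real APIs.

def transcribe(bodycam_audio: bytes) -> str:
    """Speech-to-text stage: mistakes made here are invisible to everything downstream."""
    raise NotImplementedError("stand-in for a real speech-to-text system")

def generate(prompt: str) -> str:
    """Language-model stage: produces fluent text whether or not the transcript is right."""
    raise NotImplementedError("stand-in for a real language model")

def draft_incident_report(bodycam_audio: bytes) -> str:
    transcript = transcribe(bodycam_audio)
    prompt = (
        "Write a police incident report based only on the following bodycam transcript.\n\n"
        + transcript
    )
    # If the underlying model was trained mostly on finished reports rather than on
    # audio-transcript-report triples, the familiar shape of a report can dominate
    # and details of the transcript can be smoothed over or ignored.
    return generate(prompt)
```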

There is also the issue of bias. If past incident reports have been predominantly associated with African American or Hispanic suspects, an AI trained on them could reproduce that skew in the reports it generates. Such skew could widen existing disparities in policing and build a new kind of systemic bias into the technology we deploy.
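One way to make this concern testable is a simple disparity audit over generated reports. The sketch below uses invented toy records and a made-up "flagged" outcome purely for illustration; a real audit would need actual report text and a carefully defined, validated outcome measure.

```python
from collections import Counter

# Toy disparity audit over generated reports. The records and the "flagged"
# outcome are invented for illustration only.
reports = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
]

totals = Counter(r["group"] for r in reports)
flagged = Counter(r["group"] for r in reports if r["flagged"])

for group in sorted(totals):
    rate = flagged[group] / totals[group]
    print(f"group {group}: flagged in {rate:.0%} of generated reports")
```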

Beyond these technical considerations, there is another layer of risk: human integrity. We know that law enforcement officers are not infallible. If an officer, knowingly or unknowingly, provides misleading audio content to fit a narrative, AI could reinforce these inaccuracies, creating exceptionally persuasive but erroneous reports. This becomes a powerful tool for manipulation and could have dangerous implications for fair justice. 

A key problem is validation. Given that errors are inevitable, how do we verify AI-generated incident reports? Another concern is the role these reports play in legal settings. If they prove inadmissible in court, or if we must always return to the original audio or transcript for verification, what value do they actually provide? Experts suggest that every piece of data used to build these reports should be meticulously maintained for evidence validation, ensuring a robust chain of custody.
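What a robust chain of custody might look like in practice is an open design question. One minimal sketch, assuming a simple hash-and-timestamp record format rather than any existing evidentiary standard, is to fingerprint every artifact used to build the report so it can later be checked against the original audio and transcript.

```python
import hashlib
import json
from datetime import datetime, timezone

# A minimal sketch of a custody record for an AI-generated report: fingerprint
# every artifact used to produce it so the report can later be traced back to,
# and checked against, the original audio and transcript. The record format is
# an assumption for illustration, not an existing standard.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def custody_record(audio: bytes, transcript: str, report: str) -> str:
    record = {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "audio_sha256": fingerprint(audio),
        "transcript_sha256": fingerprint(transcript.encode("utf-8")),
        "report_sha256": fingerprint(report.encode("utf-8")),
    }
    return json.dumps(record, indent=2)

# Example with placeholder content:
print(custody_record(b"<bodycam audio bytes>", "<transcript text>", "<draft report text>"))
```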

Oklahoma City offers a measured approach, restricting the technology to reports in which no one is at risk of incarceration. This careful implementation is commendable and may serve as a model for other departments. Fort Collins, by contrast, uses it more broadly, which is riskier; officers there are already finding that AI-generated reports are less reliable when background noise is high.

The promise of AI in law enforcement is alluring. The ability to generate rapid and seemingly comprehensive incident reports is undoubtedly tempting for an evidence-based system. However, these systems are not without flaws, and we have a responsibility to ensure that their deployment does not lead to greater injustices. The technology might save time, but it could also propagate errors and bias at a speed and scale far beyond human capability. 

The lesson here is clear: AI can be a powerful tool, but we must approach it with caution. We need to maintain a critical eye and ensure all aspects of its implementation are meticulously validated and checked for bias. Responsibility and thorough oversight are not optional; they are necessary conditions for the successful and just use of AI in policing. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
