Designed alongside the UN and Interpol, the Responsible AI Toolkit has already been used to train thousands of officers and dozens of police chiefs across the world.

As more and more sectors experiment with artificial intelligence, law enforcement has been among the quickest to adopt the technology. That rapid adoption has brought problematic growing pains, from false arrests to concerns around facial recognition.
However, a new training tool is now being used by law enforcement agencies across the globe to ensure that officers understand this technology and use it more ethically.
Based largely on the work of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the Responsible AI Toolkit is one of the first comprehensive training programs for police focused exclusively on AI. At the core of the toolkit is a simple question, Canca says.
“The first thing that we start with is asking the organization, when they are thinking about building or deploying AI, do you need AI?” Canca says. “Because any time you add a new tool, you are adding a risk. In the case of policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and betterment, and AI has a significant promise in helping law enforcement, as long as the risks can be mitigated.”
Thousands of officers have already undergone training using the toolkit, and this year, Canca led a training session for 60 police chiefs in the U.S. The U.N. will soon be rolling out additional executive-level training in five European countries as well.
Uses of AI like facial recognition have attracted the most attention, but police are also using AI for simpler things like generating video-to-text transcriptions for body camera footage, deciphering license plate numbers in blurry videos and even determining patrol schedules.
All those uses, no matter how minor they might seem, come with inherent ethical risks if agencies don’t understand the limits of AI and where it’s best used, Canca says.

“The most important thing is making sure that every time we create an AI tool for law enforcement, we have as clear an understanding as possible of how likely this tool is to fail, where it might fail, and how we can make sure the police agencies know that it might fail in those particular ways,” Canca says.
Even if an agency says it needs or wants to use AI, the more important question is whether it is ready to deploy it. The toolkit is designed to get law enforcement agencies thinking about what best suits their situation. A department might be ready to develop its own AI tool, such as a real-time crime center, but most agencies that are ready to adopt the technology will instead procure it from a third-party vendor, Canca explains.
At the same time, it’s important for agencies to also recognize when they aren’t yet ready to use AI.
“If you’re not ready — if you cannot keep the data safe, if you cannot ensure adequate levels of privacy, if you cannot check for bias, basically if your agency is not able to assess and monitor technology for its risks and mitigate those risks — then you probably shouldn’t go super ambitious just yet and instead start building those ethics muscles as you slowly engage with AI systems,” Canca says.
Canca notes that the toolkit is not one-size-fits-all. Each sector, whether it's policing or education, has its own ethical framework and requires an approach tailored to the specific issues that sector raises.
“Policing is not detached from ethics” and has its own set of ethical questions and criticisms, Canca says, including “a really long lineage of historical bias.”
Understanding those biases is key when implementing tools that could re-create them, locking technology and police practice into a vicious cycle.
“There are districts that have been historically overpoliced, so if you just look at that data, you’re likely to overpolice those areas again,” Canca says. “Then the question becomes, ‘If we understand that’s the case, how can we mitigate the risk of discrimination, how can we supplement the data or ensure that the tool is used for the right purposes?’”
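To make that feedback loop concrete, here is a minimal, purely illustrative sketch. The district figures and the allocation rule are hypothetical and are not drawn from the toolkit; the point is only that a naive "follow the data" rule reproduces a historical recording skew year after year, even when the underlying crime rates are identical.

```python
# Illustrative toy simulation (not from the toolkit): two districts with
# identical true crime rates, but district 0 starts with more recorded
# arrests because it was historically patrolled more heavily.
true_crime_rate = [0.10, 0.10]   # identical underlying crime in both districts
arrests = [120, 80]              # historical records skewed toward district 0
total_patrols = 100

for year in range(1, 6):
    # Naive "data-driven" rule: allocate patrols in proportion to past arrests.
    share = arrests[0] / sum(arrests)
    patrols = [round(total_patrols * share), total_patrols - round(total_patrols * share)]

    # More patrols mean more recorded incidents, even though real crime is equal,
    # so the original skew is fed straight back into next year's data.
    new_arrests = [round(p * r * 10) for p, r in zip(patrols, true_crime_rate)]
    arrests = [old + new for old, new in zip(arrests, new_arrests)]

    print(f"Year {year}: patrols={patrols}, cumulative arrests={arrests}")
```

In this toy run the 60/40 patrol split repeats every year, which is the self-perpetuating pattern Canca describes; breaking it requires the kinds of mitigations she mentions, such as supplementing the data or constraining how a tool's output is used.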
The goal of the toolkit is to avoid those ethical pitfalls by making officers aware that humans are still a vital component of AI. An AI system might be able to analyze a city and suggest which areas might need more assistance based on crime data, but it’s up to humans to decide if a specific neighborhood might need more patrol officers or maybe social workers and mental health professionals.
“Police are not trained to ask the right questions around technology and ethics,” Canca says. “We need to be there to guide them and also push the technology providers to create better technologies.”