
Tracking AI Failures: Understanding the Past to Engineer a Better Future

Sean McGregor gives a presentation at CASMI's workshop on Jan. 19.

When something goes wrong in industry, there’s usually a team tracking the problem to prevent the recurrence of disasters. The aviation industry has the National Transportation Safety Board. Medical professionals rely on the Centers for Disease Control and Prevention. However, the artificial intelligence (AI) community does not have a formal institution reporting on potentially deadly mistakes. That’s why Sean McGregor started the AI Incident Database.

McGregor is the founder of the Responsible AI Collaborative, the nonprofit organization chartered to advance the database. The machine learning PhD visited Northwestern University on Jan. 19 to present at the last workshop held by the Center for Advancing Safety of Machine Intelligence (CASMI), “Toward a Safety Science of AI.” He explained how anyone can go to the AI Incident Database to report AI harms or near harms, which are then classified as AI incidents. The database currently holds more than 2,400 reports of AI harms, grouped into more than 400 incidents.

“Our definition of harm is fairly flexible,” McGregor said. “An AI incident needs to implicate an AI system and be something out in the real world that caused or nearly caused harm to people, property, or the environment.”

McGregor began work on the database in 2018 after noticing many people informally sharing news articles about AI failures. Applying lessons from other databases in computer security and aviation, McGregor assembled new and existing incident listings to build a user interface and formalize the practice of indexing AI incidents.

The database was a topic of discussion during CASMI’s last workshop. Participants agreed that it provides a foundation for incident reporting. Bill and Cathy Osborn Professor of Computer Science and CASMI Director Kristian Hammond looks forward to continuing to work with McGregor and the AI Incident Database.

“Part of the process of making the world safe is to understand where there are harms,” Hammond said. “The AI Incident Database is an excellent approach to gathering information about those harms so that organizations like CASMI can understand where the work needs to be done in order to anticipate and avoid them.”

“We are all rapidly becoming more familiar with the ways that AI can lead to very real harms to people and communities, often those who are already most vulnerable,” said David Danks, professor of data science and philosophy at the University of California San Diego and member of CASMI’s scientific advisory board. “Doing better requires insights into what went wrong in the past, and Sean McGregor’s work to create and grow the AI Incident Database is leading the way towards that understanding.”

 

How the AI Incident Database Works

The database classifies the severity of harms, the type of harm, and who was harmed. It tracks which entities are involved in the most incidents, and it allows users to subscribe to all new incidents or particular incidents. Companies can also see what incidents have happened among competitors, and they can respond to allegations of harm.
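In rough terms, an incident record carrying those classifications might be modeled like this (a hypothetical sketch for illustration only; the field names are assumptions, not the database's actual schema):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch, not the AI Incident Database's actual schema:
# one record capturing the classifications described above.
@dataclass
class IncidentRecord:
    incident_id: int
    title: str
    severity: str                                               # how severe the harm was
    harm_type: str                                              # what kind of harm occurred
    harmed_parties: List[str] = field(default_factory=list)     # who was harmed
    entities_involved: List[str] = field(default_factory=list)  # developers/deployers named in reports
    report_urls: List[str] = field(default_factory=list)        # supporting news reports
```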

There are two ways to submit an incident: you can quick-add a new report URL, or you can fill out a form to submit a full incident report. Once entered, the report goes into a queue, where it is reviewed and categorized as a new incident, an additional report of an existing incident, or an issue that has not yet caused harm but could in the future.
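That triage step might be sketched as follows (again a hypothetical illustration; the function and type names are invented and do not reflect the database's real review tooling):

```python
from enum import Enum, auto

# Hypothetical sketch of the triage flow described above,
# not the database's real review tooling.
class TriageOutcome(Enum):
    NEW_INCIDENT = auto()       # harm not previously recorded in the database
    EXISTING_INCIDENT = auto()  # another report of an already-known incident
    ISSUE = auto()              # no real-world harm yet, but one could arise

review_queue: list = []

def submit_report(url: str, description: str = "") -> None:
    """Quick-add a report URL (optionally with a fuller write-up) to the review queue."""
    review_queue.append({"url": url, "description": description})

def triage(matches_known_incident: bool, harm_occurred: bool) -> TriageOutcome:
    """A reviewer classifies a queued report into one of the three outcomes."""
    if not harm_occurred:
        return TriageOutcome.ISSUE
    return TriageOutcome.EXISTING_INCIDENT if matches_known_incident else TriageOutcome.NEW_INCIDENT
```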

The database reports on AI system failures regarding tools such as facial recognition, speech recognition, predictive text, algorithms, deepfakes, autonomous vehicles, large language models, image generators, and more.

Some incidents have surprised McGregor, such as the case in which police in Edmonton, Canada, used DNA to sketch the face of a suspected criminal. Police ultimately apologized and stopped using the controversial method, known as DNA phenotyping.

“Let’s not do this,” McGregor said. “Let’s not try and create portraits of people on the basis of DNA using these technologies. It can’t do that. You might be able to assess a racial background, just show a random portrait of a person in the background, but you’re not going to inform people to a greater extent with these portraits.”

Looking Ahead

There is optimism that the work McGregor is doing will produce policy changes. The AI Incident Database is mentioned in the National Institute of Standards and Technology’s AI Risk Management Framework. McGregor has also spoken with think tanks and people associated with government policy who are interested in developing an AI incident database for their countries.

“It’s critically important to move to a state of people indexing this stuff,” McGregor said. “AI safety is not currently 'a thing,' but we’re trying to make it so. I think CASMI is a great example of that. CASMI will be creating a lot of what the community of practice is, which is a great place to be.”

Improvements are coming to the AI Incident Database. The website will add a checklisting feature, which will allow users to concentrate on incidents related to their technology or application. McGregor is also working with the Organisation for Economic Co-operation and Development’s AI Policy Observatory to build dashboards on emerging risks.

While AI systems are unlikely to ever be perfectly safe, McGregor believes we can reach an acceptable level of reliability.

“It’s a big, complicated world,” he said. “We don’t stay home because there are dangers. We have to be able to go out into the world.”
