
Defining Safety in Artificial Intelligence: ‘We Need to Have a Community’

Two-day workshop highlights approaches toward developing a safety science of AI


Artificial intelligence (AI) systems are part of our everyday lives, from the technology on smartphones to the recommendations you get while shopping online. As AI changes the world, we need to develop a robust safety science in the field, researchers and practitioners told the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University.  

CASMI hosted a workshop on Jan. 19 and 20 entitled "Toward a Safety Science of AI." Interdisciplinary thought leaders attended to share ideas on how we can define, measure, and anticipate safety in AI. 

“The goal of this workshop is not to have one view on what safety is,” said Kristian Hammond, Bill and Cathy Osborn Professor and Director of CASMI, “but to consider the breadth and depth of possible harms that might scope across individuals, groups, and society. 

“In order to make this work, we need to have a cross-disciplinary community, not just individual researchers,” Hammond added.  

Julio M. Ottino, dean of the Robert R. McCormick School of Engineering and Applied Science at Northwestern University, also emphasized the importance of taking an interdisciplinary approach. 

“If you want a place to run well, you want all these modes of thinking to exist,” Ottino said. “You want to poke at problems from multiple angles.” 

The 40 guests who attended the workshop came from fields such as computer science, engineering, law, medicine, ethics, philosophy, sociology, journalism, and communications. The two-day event included presentations, group discussions, and breakout sessions. 

The workshop’s keynote speaker was Ben Shneiderman, distinguished professor emeritus of computer science at the University of Maryland. A pioneer in human-computer interaction, Shneiderman recently wrote a book on the impact of AI on individuals and society, Human-Centered AI. He acknowledged how his thinking has changed as he has learned more about algorithmic safety. It led him to conclude that total safety is not possible.  

“Safer is possible,” Shneiderman said. “Safer algorithmic systems are possible. That’s your job.”  

CASMI is collaboratively led with the Digital Safety Research Institute (DSRI) at Underwriters Laboratories Inc. Jill Crisman, vice president and executive director of DSRI at UL Research Institutes, spoke about the significance of focusing on safety in the digital space.  

“The diversity of individuals attending the workshop was phenomenal and has led to thought-provoking discussions,” Crisman said. “Hopefully, the workshop has forged some cross-disciplinary teams who will create revolutionary ideas for protecting individuals and societies from the dangers of AI-enabled systems.” 

What we've seen: The issues

Creating an AI safety culture is necessary in all stages of machine learning (ML) development, from data gathering to product deployment. The harms that may result from an ML model can happen at the individual, group, societal, and environmental levels. 

ML researchers Sean McGregor and Kristian Lum know about these issues firsthand. McGregor founded the AI Incident Database, which relies on journalists and citizen scientists to report AI-created harms or near harms. Lum, associate research professor at the University of Chicago Data Science Institute, previously worked at a social media company and has studied algorithmic systems that have created inequalities. 

“I have a hypothesis that when we have less equality, it can lead to echo chambers,” Lum said. “If you reduce inequality, it can lead to better business metrics.” She added this isn’t always the case.  

“If I could make one policy recommendation, it would be to have mandatory reporting of AI incidents,” McGregor said. “We can learn so much from that. We can’t just wait for bad things to happen. It can’t be purely reactive. An information architecture is required to make sense of our past.” 


Measuring safety: Learning from other fields 

Safety science is already well-developed in other fields, such as aeronautics. Computer scientist Alwyn Goodloe has decades of experience in aeronautical safety engineering. He said there are ways machine learning practitioners can apply the methods he uses. 

“Too little of AI research is grounded in a safety system or safety perspective,” Goodloe said. 

In aeronautics, safety is measured through a multistep process whose first priority is preventing deaths. The first step is safety analysis: guidelines that identify hazards before they occur. Another step is safety assurance, which serves as an oversight tool.   

Goodloe explained how functional hazard analysis classifies failures, from minor to catastrophic. He stressed that ultra-critical systems, like nuclear reactors, should never have a catastrophic failure. 

“ML systems fail, but ultra-critical systems shouldn’t,” Goodloe said. “Until we can build systems that don’t fail, can we use these systems? How can we use them? It’s going to be a hard question to answer.” 

What’s next  

Work is underway to develop a safety culture in machine learning. At the federal level, the National Institute of Standards and Technology recently released the AI Risk Management Framework, a voluntary framework that helps organizations using AI systems measure and manage their risks. 

CASMI workshop keynote speaker Ben Shneiderman created his own framework for human-centered AI. Its goal is to amplify, augment, empower, and enhance people. 


Shneiderman gave an example of human control using the digital camera on a smartphone. While the camera relies heavily on AI, the person ultimately takes the photo.  

“The technology gives me huge amounts of control in advance, during, and after,” he said. “It also does the human thing: support social connectedness. Many of the people in AI or computer science think about the machine, the user... but we think about the social connectedness as well.” 

McGregor expressed optimism that the CASMI workshop could effect real change.  

“We’ll look back on this and say this is such an important moment,” he said. “Hopefully we can say we did as best we could to produce safety.” 

CASMI will continue hosting workshops in support of its mission of developing safe machine intelligence. 
