
CASMI Hosts Panel Discussion on Defining 'Safety' in Artificial Intelligence

Artificial intelligence (AI) systems have helped improve our lives, but they have also exposed people to harm. The solution requires a multipronged, inclusionary approach that focuses on developing a culture of safety, panelists told The Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University.

CASMI hosted a virtual discussion Tuesday, Dec. 13 on defining safety in AI. The goal was to begin working out what it means to talk about safety in the digital world, said moderator Kristian Hammond, Bill and Cathy Osborn Professor and Director of CASMI.

“When talking about the physical world, we can get a crisp notion of what it means to look at safety and consider harm,” Hammond said. “But when we look at the digital world, and particularly AI, the term becomes a bit more difficult to put a finger on.”

The panelists said rapid development in AI requires those who use these systems to take a proactive approach to reducing exposure to harm. This includes establishing a communication line on social media to report harm, paying AI experts to be on oversight boards, rewarding whistleblowers who find vulnerabilities, and involving marginalized communities in conversations about what AI systems should look like.

Currently, there is tension in the tech industry because people are trying to push things out quickly, said machine learning ethics researcher Kristian Lum. She stressed that a safety culture needs to be in place to mitigate and evaluate potential risks before systems are released.

“Nothing is going to be 100 percent safe, ever,” Lum said. “At least understand harms that could occur and carefully balance risks to the system to make tools more safely.”

“I view this as a transitionary period with technology,” said Dan Hendrycks, director of the Center for AI Safety at UC Berkeley. “As the technology scales in impact, we need to move from a risk-seeking paradigm to a risk-averse paradigm. Since it’s being deployed in many more contexts, we need to shift and be more cautious about how we use this technology.”

Stevie Chancellor, assistant professor of computer science and engineering at the University of Minnesota, provided a striking example of how algorithms can have a negative impact on people’s lives. In September, parents filed a lawsuit against Amazon in California state court, accusing the company of selling suicide kits to teenagers. Researchers have also found using social media sites like Instagram and Snapchat can lead to an increased risk of anxiety and depression for children and teenagers, particularly among young girls.

“Problems, as they arise, are often called unintended consequences,” Hammond said. “But even if they are unintended, they might be predictable.”

“Over time, it may hurt people’s opinions of themselves,” Chancellor said. “If we say things are unintended, we take away the responsibility to act on them.”

“You can reduce vulnerabilities to events by making them safer,” said Sean McGregor, founder of the Responsible AI Collaborative. “Reduce exposure to risks. Even though we can’t predict everything, there’s still accountability we should have and the responsibility to address unintended consequences.”

Policymakers and governments are trying to tackle these issues. However, McGregor said those groups lack an understanding of current AI systems. Constant change is part of the challenge. In the last year, for example, artificial intelligence systems have gotten better at decision-making; they can now write code and find vulnerabilities in code.

McGregor said it’s important that people become aware of AI capabilities so that regulations are not outdated.

“Policymakers want to do something,” he said. “I think we need to get more people from AI into the policy space.” McGregor acknowledged this is difficult because of pay disparities.

While there is no single solution, the panelists agreed that having a checklist is a good starting point.

“Early research shows it spurs more ethical thinking,” Chancellor said. However, she added that checklists have drawbacks because they can fall short of a more comprehensive approach to safety.

Hosting discussions like this is part of CASMI’s overall mission to develop best practices for the evaluation, design, and development of machine intelligence that is safe, equitable, and beneficial. CASMI will continue to host workshops and support research projects to strengthen its mission.
