
Reducing Harm in Artificial Intelligence Requires Policies for Safer Algorithmic Systems

Ben Shneiderman gives a presentation at CASMI's workshop on Jan. 19.

There’s a difference between safe and safer algorithmic systems, one that Ben Shneiderman might not have noticed if you had asked him two years ago. Now, the distinguished professor emeritus of computer science at the University of Maryland understands that while totally safe artificial intelligence (AI) systems are not possible, safer systems are.

Shneiderman was the keynote speaker Jan. 19 at the Center for Advancing Safety of Machine Intelligence’s (CASMI) workshop, titled “Toward a Safety Science of AI.” On Jan. 26, the Association for Computing Machinery published Shneiderman’s TechBrief, “Safer Algorithmic Systems.”

“Let’s never say safe AI,” Shneiderman said. “Safety promotes a vision, which I came to realize, is not realizable. We need to be much more sober and realistic about it.” Building safer systems means assessing threats, Shneiderman continued, and if a system poses a serious risk to human life, it should be banned.

A pioneer in human-computer interaction, Shneiderman published the four-page TechBrief in the hope of convincing the government and policy communities to make safer algorithmic systems a priority. He advocates for an organizational safety culture in business, one that begins with management leadership and is maintained through safety-focused attention to hiring, training, and best practices.

Shneiderman developed the TechBrief with guidance and input from several experts in the field, including Kristian Hammond, Bill and Cathy Osborn professor of computer science and director of CASMI.

“We keep thinking safety is just a matter of algorithms, but it also includes all the data and information that was used to train it, how it’s going to interact with humans, how it’s deployed, where it’s used, and how we train people in using it every step of the way,” Hammond said. “We have to be vigilant about the impact of the technology because, if we aren’t, we can introduce issues that will cause harm.”

Shneiderman believes a safety culture requires both internal and external oversight. He argues that companies that track their own mistakes gain a competitive advantage. But it doesn’t stop there: he adds that governments and non-governmental groups should also track and investigate failures to determine what went wrong.

The U.S. government is making progress toward establishing a framework for AI safety. In October, the White House Office of Science and Technology Policy released the blueprint for an “AI Bill of Rights.” In January, the National Institute of Standards and Technology released the “AI Risk Management Framework.”

A Human-Centered Approach: Involving Application Users

Humans are at the center of building and monitoring safer algorithmic systems.

"Safety is a continuous concern,” Shneiderman said. “It requires resources, skilled people, and a sincere effort to make it happen.”

Building safer systems should also involve input from the people who use the systems, said Stevie Chancellor, assistant professor in computer science and engineering at the University of Minnesota.

Chancellor is working with CASMI on a project that will research safe and compassionate machine learning recommendations for people with mental illnesses.

“I think of harm as a slow burn of contagion in the community,” Chancellor said. “It has a negative impact on the community, but it may not be urgent. Safety is about keeping people safe from the negative impacts of AI systems.”

In previous research, Chancellor interviewed 16 people who use TikTok for mental health, focusing on the app’s “For You” page. While her research team found positive effects – users were exposed to new ideas and new communities – they also found that TikTok users often cannot escape triggering or hurtful content, a phenomenon the researchers characterized as a “runaway train.”

“A lot of platforms don’t recognize their decisions negatively impact others,” Chancellor said. “It’s not intentional. It would take a lot of research to solve these problems.”

Chancellor is studying how to build an intervention tool, which would allow TikTok users to avoid viewing harmful content.
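
As a purely hypothetical illustration of the shape such an intervention could take (this is not Chancellor's tool, and it does not use any real TikTok API), the sketch below filters a recommendation feed against topics a user has chosen to mute. A production tool would likely need a trained content classifier rather than keyword matching, plus careful evaluation with the affected community.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only -- not Chancellor's tool or TikTok's API.
# A user lists topics they want to avoid; recommended posts whose text
# matches a muted topic are removed before they reach the feed.

@dataclass
class UserSettings:
    muted_topics: set = field(default_factory=set)

def allowed(post_text, settings):
    """Return True if the post mentions none of the user's muted topics."""
    text = post_text.lower()
    return not any(topic in text for topic in settings.muted_topics)

def filter_feed(posts, settings):
    """Keep only the posts the user has not asked to avoid."""
    return [p for p in posts if allowed(p, settings)]

settings = UserSettings(muted_topics={"dieting", "self-harm"})
feed = [
    "New puppy compilation",
    "Extreme dieting tips that worked for me",
    "Study with me: 2-hour focus session",
]
print(filter_feed(feed, settings))
# ['New puppy compilation', 'Study with me: 2-hour focus session']
```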

Challenges with Building Safer Systems


CASMI researchers are investigating ways to operationalize AI safety. However, Shneiderman believes most of the AI research community is not paying sufficient attention to safety issues. He points to negative news coverage of ChatGPT, the chatbot built on OpenAI’s large language models. Stories circulated that the chatbot told some reporters it would cause them harm. Microsoft, OpenAI’s partner, has since added limits on how people can use the application, but the company has argued that the best way to test ChatGPT was to release it publicly.

“No, I don’t agree,” Shneiderman said. “There are well-developed methods of testing and reviews. You should test before going public. I would say while Microsoft, in general, is a responsible player, I think they need to be more careful in what they do.”

The challenge with large language models is that no one fully understands why they produce a particular text. Chatbots like ChatGPT are trained on vast amounts of data from the internet, and the underlying models simply predict which words are likely to come next, with no notion of whether the result is true. This can lead the chatbot to produce confidently incorrect information, otherwise known as hallucinations.
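
To make that mechanism concrete, here is a minimal, hypothetical Python sketch of next-word prediction using a toy bigram table. Real models learn billions of parameters, but the core step is the same: pick whichever word is statistically likely to follow, with no check on whether the resulting sentence is true.

```python
import random

# Toy "language model": for each word, the counts of words observed to
# follow it in some training text. The model's only job is to continue
# text plausibly.
bigram_counts = {
    "the": {"capital": 2, "cat": 3},
    "capital": {"of": 3},
    "of": {"australia": 1, "france": 2},
    "australia": {"is": 1},
    "france": {"is": 2},
    "is": {"sydney": 1, "paris": 2},  # "sydney" reads fluently here, but is false
    "cat": {"sat": 4},
}

def next_word(word):
    """Sample a likely next word; fluency, not truth, drives the choice."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_new_words=6):
    """Extend the prompt one predicted word at a time."""
    words = prompt.lower().split()
    for _ in range(max_new_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# Can print "the capital of australia is sydney": statistically plausible
# given the training counts, factually wrong -- a tiny "hallucination".
print(generate("the capital"))
```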

“The question is if we don’t understand how they’re working, should we still use them?” Shneiderman wondered. “Can we trust the data on a degree of safety? In the past with technological systems, if something was wrong, we went and fixed it. For these large language models, we may not know how to fix it.”

Envisioning a Brighter Future

Companies are trying to improve their applications. Pinterest added features that are designed to promote mental health. Meta, which owns Facebook and Instagram, recently announced a more inclusive dataset to measure algorithmic fairness.

“AI technologies have a massive potential to make our lives, our work, and our interaction with each other better,” Hammond said. “But to get there, we have to be steely-eyed focused on issues of where they already cause harm, where they might cause harm, and actually do the work to make sure we get all the benefits of these systems while looking out for all the possibilities of harm.”

AI safety is not yet a field of its own, but conversations are underway about creating a robust safety science for AI.

"The challenge is it takes time and persistence to promote new ideas, especially if you’re expecting people to change,” Shneiderman said. “I’m interested in people that see the problems AI is bringing, and they have ways of making it better. If your boots are muddy, you can clean them up. You can get it right. It’s our job to clean it up.”
