
Funding New Research to Operationalize Safety in Artificial Intelligence

Center for Advancing Safety of Machine Intelligence Awards $2.2 Million

The Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University, a collaboration with the UL Research Institutes' Digital Safety Research Institute (DSRI), is providing $2.2 million in funding for eight new projects across seven institutions.

The projects will help advance CASMI’s mission to operationalize a robust safety science in artificial intelligence (AI), in part by broadening its network of researchers, who will work to improve outcomes.

“This was a great next step for CASMI,” said Kristian Hammond, Bill and Cathy Osborn Professor and director of CASMI. “Our investment in these projects not only moves the work in digital safety forward. It establishes the foundation for a new community of researchers all focused on how to operationalize safety in the online world.”

The projects were awarded in November 2022, following an open call for research proposals. Each project was eligible for up to $275,000 in funding for two years.

The principal investigators represent the following institutions: University of Minnesota, Northwestern University, University of Amsterdam, Carnegie Mellon University, University of Wisconsin-Madison, Purdue University, and Northeastern University.

“It is great to see the variety of institutions and ideas that will be sponsored through the DSRI and CASMI collaboration,” said Jill Crisman, UL Research Institutes vice president and DSRI executive director. “I look forward to learning about the discoveries that will be made by these research efforts in AI safety.”

The principal investigators are researching various methods to quantify how safe, equitable, and beneficial AI systems are. Projects range from investigating safe and compassionate machine learning (ML) recommendations for people with mental illnesses to building tools that express human situations and contexts to machines.

This is the second group of projects CASMI has funded since its launch in April 2022. The initial group of projects has already produced promising results. Previous research has investigated the lack of reliability of algorithmic personality tests used in hiring. Last year, researchers also developed a framework to improve stress tests for autonomous vehicles. Another project identified data gaps in road safety by comparing rural and urban areas. CASMI researchers have also developed a Human Impact Scorecard to assess and to demonstrate an AI system’s impact on human well-being.

These new projects will address some of the critical research gaps and opportunities identified in the CASMI Research Roadmap. This includes studying data and ML algorithms to understand how the systems are designed to interact with people. Creating these building blocks is essential to establish a safety culture in AI.

Anticipating AI Impact in a Diverse Society: Developing a Scenario-Based, Diversity-Sensitive Method to Evaluate the Societal Impact of AI-Systems and Regulations

Anticipating the impacts of new AI technology is at the crux of efforts to understand, advance, and govern its safety, but doing so is fundamentally difficult.

Nicholas Diakopoulos is collaborating with co-investigator Natali Helberger to develop a method for anticipating the impacts of new AI technologies by engaging diverse sets of stakeholders in scenario-writing activities, or prospections, that envision the effects of new AI technology on society.

Diakopoulos is an associate professor of communication studies in Northwestern’s School of Communication and (by courtesy) associate professor of computer science in Northwestern Engineering. Helberger is a distinguished university professor of law and digital technology at the University of Amsterdam, director of the AI, Media & Democracy Lab, and member of the board of directors for the Institute for Information Law (IViR).

Prospections are effective as orienting devices in decision-making processes; their goal is not to predict the future, but to perceive potential futures in the present and to develop forward-looking evaluation frameworks. The team aims to develop a methodology for anticipating the impacts of new AI technologies and to present a nuanced picture of future AI safety issues through diverse perspectives.

Co-designing Patient-Facing Machine Learning for Prenatal Stress Reduction

The development of intelligent decision-support tools (DSTs) in the healthcare industry often excludes patients from the process. Maia Jacobs, Slivka Professor of Computer Science at Northwestern Engineering and assistant professor of preventive medicine at Northwestern’s Feinberg School of Medicine, seeks to address this research gap by working directly with a patient population to co-design and evaluate clinical DSTs employing ML models.

A cross-disciplinary collaboration among researchers in human-computer interaction, ML, and healthcare, the project aims to evaluate how algorithms can support pregnant people managing prenatal stress through three components of a next-day stress prediction model: the prediction, the explanation, and a recommendation to use a just-in-time stress management exercise.

Human-AI Tools for Expressing Human Situations and Contexts to Machines

Despite significant advances in machine sensing and machine learning technologies, it remains difficult for designers to create context-aware and responsive applications based on the concept of a human situation — such as identifying an appropriate place for a child to ride a bicycle.

To bridge this gap and address significant concerns around safety, privacy, and inequitable access to AI-supported experiences, Haoqi Zhang, an associate professor of computer science at Northwestern Engineering, aims to advance new programming environments and tools that support designers in the construction of machine representations using available context features.

Zhang directs the Design, Technology, and Research (DTR) program and is a codirector of the Delta Lab.

Safe and Compassionate Machine Learning Recommendations for People with Mental Illnesses

While content recommendation drives engagement, connection, and discovery of new information on modern social platforms, these recommendations can be a double-edged sword. Stevie Chancellor, an assistant professor of computer science and engineering at the University of Minnesota, aims to identify sociotechnical factors that make content recommendations helpful or harmful to people with psychosocial mental illnesses.

Chancellor’s team will design and evaluate a participant-centered ML intervention to alleviate algorithmic harms on social networks and build a system that makes safer and more compassionate recommendations for people in distress.

Chancellor is a former CS + X postdoctoral fellow in computer science at Northwestern Engineering, co-advised by Darren Gergle, John G. Searle Professor of Communication Studies in Northwestern’s School of Communication; and Sara Owsley Sood, Chookaszian Family Teaching Professor and associate chair for undergraduate education at the McCormick School of Engineering.

Dark Patterns in AI-Enabled Consumer Experiences

While AI-enabled devices and services can bring benefits to consumers and businesses, they also have the potential to incorporate harmful dark patterns, or interface designs that interfere with people’s decision-making processes and impair their autonomy. David Choffnes and Christo Wilson, both associate professors of computer sciences at Northeastern University, propose to address the gaps in knowledge around the unique potential of AI to worsen existing classes of dark patterns, as well as to facilitate entirely new classes specific to AI-enabled consumer experiences.

The team will investigate applications of AI in consumer electronics and third-party software to identify new classes of dark patterns and construct ground-truth datasets of dark pattern prevalence. Through user studies, Choffnes and Wilson also aim to better understand user perceptions about AI dark patterns and their potential to cause harm.

Supporting Effective AI-Augmented Decision-Making in Social Contexts

Kenneth Holstein, assistant professor in the Human-Computer Interaction Institute at Carnegie Mellon University, will study how to support effective AI-augmented decision-making in the context of social work. In this domain, predictions regarding human behavior are fundamentally uncertain, and ground truth labels upon which an AI system is trained — for example, whether an observed behavior is considered socially harmful — often represent imperfect proxies for the outcomes human decision-makers are interested in modeling.

Holstein is working with co-investigators Haiyi Zhu, Steven Wu, and Alex Chouldechova, and PhD students Luke Guerdan and Anna Kawakami. The team's goal is to understand how expert decision-makers work with AI-based decision support to inform social decisions in real-world contexts and to develop new methods that support effective decision-making in these settings.

Understanding and Reducing Safety Risks of Learning with Large Pre-Trained Models

Pre-trained AI models such as Google BERT and OpenAI’s GPT-3 enable practitioners to adapt the systems to a wide range of downstream tasks through transfer learning rather than learning from scratch.
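
The mechanics of this adaptation can be illustrated with a short sketch. The following is a minimal, hypothetical example of transfer learning, assuming the Hugging Face transformers and PyTorch libraries; the checkpoint name, the two-label task, and the toy data are placeholders for illustration, not part of the funded project.

```python
# Minimal transfer-learning sketch: adapt a pre-trained BERT checkpoint to a
# downstream two-label classification task instead of training from scratch.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # new classification head, randomly initialized
)

# Tiny illustrative dataset; a real task would use thousands of labeled examples.
texts = ["The product worked exactly as described.", "It broke after one day."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning steps; the pre-trained weights are only nudged
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```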

Sharon Yixuan Li, an assistant professor of computer sciences at the University of Wisconsin-Madison, aims to understand how pre-trained models can exacerbate safety concerns and to mitigate the safety risks of transfer learning with large pre-trained models.

Li proposes a novel evaluation framework to comprehensively understand how inequity and out-of-distribution risks are propagated through the transfer learning process. She will then apply this framework to build new learning algorithms that enhance safety and de-risk the potential negative impacts when transferring knowledge from pre-trained models.
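
As one concrete illustration of the out-of-distribution risk involved, a common baseline is to score each input by how much probability mass the fine-tuned model's logits assign to any known class. The sketch below shows such a generic energy-style score in PyTorch; the threshold and example values are hypothetical, and this is not the evaluation framework Li proposes.

```python
# Sketch of a simple out-of-distribution (OOD) score for a fine-tuned classifier.
# Inputs whose logits assign little mass to every known class receive a low score
# and can be flagged rather than silently classified. This is a generic baseline,
# not the framework proposed in this project.
import torch

def ood_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Higher score -> more in-distribution; lower score -> likely OOD."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Example: two confident in-distribution inputs and one ambiguous input.
logits = torch.tensor([[6.0, -2.0], [-1.5, 5.0], [0.1, 0.2]])
scores = ood_score(logits)
flagged = scores < 1.0  # threshold is illustrative; in practice it is calibrated
print(scores, flagged)
```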

Diagnosing, Understanding, and Fixing Data Biases for Trusted Data Science

Data preparation tasks tailored to downstream ML applications can serve as a basis for detecting and mitigating algorithmic bias. Romila Pradhan, assistant professor of computer and information technology at Purdue University, aims to demonstrate the importance of data quality in establishing public trust around data-driven decision making.

Pradhan will investigate how to diagnose bias in ML pipelines and evaluate and integrate the impact of data quality. By decoupling data-based applications from the mechanics of managing data quality, Pradhan aims to help practitioners more easily detect and mitigate biases stemming from data throughout their workflows.


The project investigators join CASMI’s existing research team, including Kristian Hammond, CASMI director and Bill and Cathy Osborn Professor of Computer Science at Northwestern Engineering; Michael Cafarella (MIT); Leilani H. Gilpin (University of California, Santa Cruz); Francisco Iacobelli (Northeastern Illinois University and Northwestern University); Ryan Jenkins (California Polytechnic State University); Julia Stoyanovich (New York University); and Jacob Thebault-Spieker (University of Wisconsin-Madison).
