Defining 'Safety' in Artificial Intelligence

December 13, 2022: 12:00pm - 1:00pm (CST), Virtual (Zoom) 

The first principle in the White House’s Blueprint for an AI Bill of Rights is the right to “Safe and Effective Systems.” But what does being safe mean with respect to AI technologies?

Do we have a common definition of what safety means in the digital world? How can we know whether the intelligent systems now in use across so many facets of life and society are providing safety or endangering it? What should a system have to demonstrate in order to be considered safe? And how do we determine the priority areas that require focus to define and maintain safety in AI?

Our panelists bring multiple distinct perspectives of what AI safety should mean and how it should be approached. Join us for a virtual panel discussion exploring how we understand what it means to be safe in a world that is increasingly incorporating AI technologies.


Stevie Chancellor
Assistant Professor, Computer Science & Engineering - University of Minnesota

Dan Hendrycks
Director, Center for AI Safety - UC Berkeley

Mecole Jordan-McBride
Advocacy Director, NYU Policing Project

Kristian Lum

Sean McGregor
Founder, Responsible AI Collaborative

Moderator: Kristian Hammond
Bill and Cathy Osborn Professor and Director, CASMI - Northwestern University




Call for Proposals 2023 Information Sessions

September 19 - 20, 2022

CASMI hosted virtual information sessions to discuss the current Call for Proposals and the CASMI research mission. Researchers who are considering developing a proposal can refer to the FAQ page or watch the info session recording below for more information on the proposal process.

CASMI Prime Seminar with Jim Guszcza: "Envisioning a field of human-machine hybrid intelligence architecture"

May 6, 2022

AI promises to improve human decision-making and fuel economic growth, but today's state of the art suffers from serious shortcomings. Reports abound of AI technologies that compromise human safety or wellbeing, treat people unfairly, amplify societal biases, undermine human autonomy through manipulation, and accelerate the spread of misinformation and polarizing content. On a purely practical level, Gartner has estimated that 85% of big data projects never make it to production.

Virtual Panel: "Ethics, Safety, and AI: Can we have it all?"

April 8, 2022

The 2022 AI Index report just published by the Stanford Institute for Human-Centered Artificial Intelligence says that "AI systems are starting to be deployed widely into the economy, but at the same time they are being deployed, the ethical issues associated with AI are becoming magnified." While ethical principles in AI use have been a focus for years, it is an open question whether we are making progress toward actualizing those principles or toward establishing safety in the use of intelligent systems.
