


The Center for Advancing Safety of Machine Intelligence (CASMI) leads a research vision and dynamic research network that is establishing best practices for the evaluation, design, and development of machine intelligence that is safe, equitable, and beneficial.

CASMI is a collaboration with the UL Research Institutes' Digital Safety Research Institute, building on UL’s long-standing mission to create a safer, more secure, and sustainable future.

Video: About the Center for Advancing Safety of Machine Intelligence (CASMI), from UL Research Institutes on Vimeo.

CASMI is a Northwestern Engineering project sponsored by UL Research Institutes.



Connecting a network of researchers

CASMI builds connections and collaboration among researchers and experts of different disciplines and backgrounds at Northwestern, UL Research Institutes, and partner organizations.



CASMI funds research that advances the state of the art in intelligent technologies and answers key open questions in the field. An initial set of projects began in spring 2022 across six universities.
View our projects


CASMI hosts semiannual thematic workshops that investigate the human impact of machine intelligence, establish new research connections, and identify further research opportunities.

Explore our workshops


CASMI projects deliver outcomes for the machine intelligence research and practitioner communities, including research papers and publications that inform future workshops, events, and research projects.

Review our outcomes


An Evaluation Framework

One of CASMI’s goals is to develop repeatable, operational processes for identifying and mitigating the negative impacts of machine learning applications, along with the causes of those impacts. To this end, we have developed an evolving evaluation framework to guide this work.

The evaluation framework provides a foundational structure for the design and evaluation of machine learning applications by decoupling fact-finding from evaluation. The framework divides the task of evaluating the human impacts of machine learning (ML) systems into two phases:

  1. Fact-finding related to three primary components:
    1. The data and how it was sourced and manipulated
    2. The central ML algorithms and how they were applied
    3. How the resulting system is designed to interact with human users
  2. Evaluation of how, given those facts, the system affects the goals and values associated with a particular domain or field of use.
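The two phases above can be illustrated with a minimal data-structure sketch. This is not CASMI's actual tooling; all class and function names here are hypothetical, and a real evaluation would rest on expert judgment rather than string bookkeeping.

```python
from dataclasses import dataclass

@dataclass
class Facts:
    """Phase 1: fact-finding over the three primary components."""
    data_provenance: str     # how the data was sourced and manipulated
    core_algorithms: str     # the central ML algorithms and how they were applied
    human_interaction: str   # how the system is designed to interact with users

@dataclass
class DomainValue:
    """A goal or value associated with the domain of use (e.g., patient safety)."""
    name: str
    description: str

def evaluate(facts: Facts, values: list[DomainValue]) -> dict[str, str]:
    """Phase 2: given the facts, record how the system bears on each domain value.

    This placeholder pairs each value with the facts an analyst would weigh;
    it illustrates the decoupling of fact-finding from evaluation.
    """
    return {
        v.name: (f"Assess '{v.description}' against the data "
                 f"({facts.data_provenance}), the algorithms "
                 f"({facts.core_algorithms}), and the interaction design "
                 f"({facts.human_interaction}).")
        for v in values
    }
```

The point of the sketch is the separation of concerns: `Facts` can be compiled once, then reused to evaluate the same system against the distinct values of different domains of use.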

The framework is also the starting point for CASMI’s research roadmap, a set of specific research problems necessary to further operationalize the design, development, and evaluation of AI systems from the perspective of human health and safety.
