
Outcomes

As CASMI research initiatives progress, we are committed to producing deliverables that researchers and practitioners can use. These outputs will include publications and articles, as well as datasets, open-source code, and other materials.

'Explanation' is Not a Technical Term: The Problem of Ambiguity in XAI

Leilani H. Gilpin, Andrew R. Paley, Mohammed A. Alam, Sarah Spurlock, and Kristian J. Hammond

arXiv (2022)

In this paper, the authors explore the features of explanations and how to use those features in evaluating explanation utility. The focus is on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them. Further, the authors discuss the risk that XAI enables trust in systems without establishing their trustworthiness, and they identify a critical next step for the field: establishing metrics to guide and ground the utility of system-generated explanations.


Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning

Ryan Jenkins, Kristian Hammond, Sarah Spurlock, and Leilani Gilpin

AI & Society (2022)

In this paper, the authors outline a new method for evaluating the human impact of machine learning (ML) applications. In partnership with Underwriters Laboratories Inc., the collaborators developed a framework for evaluating the impacts of a particular use of machine learning, grounded in the goals and values of the domain in which that application is deployed.


"Working from the Middle Out: A Domain-Level Approach to Evaluating the Human Impacts of Machine Learning"

Ryan Jenkins, Kristian Hammond, Sarah Spurlock, and Leilani Gilpin

Ryan Jenkins presented the paper as part of the AAAI 2022 Spring Symposia Series, at the symposium Approaches to Ethical Computing: Metrics for Measuring AI's Proficiency and Competency for Ethical Reasoning, a virtual meeting hosted by Stanford University, Palo Alto, California, March 21-23, 2022.


A Framework for the Design and Evaluation of Machine Learning Applications

Northwestern University Machine Learning Impact Initiative, September 2021

The framework document was compiled by Kristian J. Hammond, Ryan Jenkins, Leilani H. Gilpin, and Sarah Spurlock with assistance from Mohammed A. Alam, Alexander Einarsson, Andong L. Li Zhao, Andrew R. Paley, and Marko Sterbentz. The content reflects materials and meetings that were held as part of the Machine Learning Impact Initiative in 2020 and 2021, with the participation of a network of researchers and practitioners.
