Workshops
Upcoming Workshop
Operationalizing the Measure Function of the NIST AI Risk Management Framework
October 16-17, 2023
Washington, D.C.
A fundamental yet underspecified problem in AI is establishing that AI systems are effective and safe: that they do what they are intended to do without introducing or exacerbating risks. Ensuring this basic functionality is necessary to prevent and mitigate harms associated with AI, but it requires understanding the context in which a system was designed, which implicit and explicit schemas it uses, and the complex (but usually hidden) decisions underlying problem formulation and system evaluation.
Drawing on tools from the social sciences, we can examine how measurement describes the implicit processes in the design and function of AI systems that encode social assumptions into technical models. Turning to assessment and safety, practices of validation then offer a more holistic way of unpacking hidden assumptions, ensuring that systems are robust and do what they are intended to do beyond narrow technical assessments. We can also consider the incentive structures in which measurement and validation are used, and how those structures might best be aligned to motivate groups toward continual improvement.
This workshop will convene a range of scholars from academia, industry, and government to discuss how to meaningfully operationalize safe, functional AI systems by focusing on measurement and validity in the AI pipeline. Beyond generating ideas, the workshop aims to develop a repeatable, standardized methodology for testing real systems against a range of sociotechnical metrics within a large-scale human-subjects testbed.
The goal of this workshop is to support expansion of the Measure function of the NIST AI Risk Management Framework (AI RMF). The long-term aim is to develop measurements and methodologies to:
- Drive innovation by evaluating AI under controlled real-world conditions
- Develop societally robust AI methodologies, documentation processes, and sociotechnical measurements and standards
- Determine the validity and generalizability of AI risk and impact mitigation approaches
- Establish best practices for mapping qualitative methods onto scorable quantitative metrics
- Define experimental protocols for testing applications and models using sociotechnical methods
- Specify the development processes and activities associated with a sociotechnical testbed
- Create robust and reusable documentation standards
Past Workshops
Sociotechnical Approaches to Measurement and Validation for Safety in AI
July 18-19, 2023
CASMI convened a range of scholars from academia, industry, and government to discuss how to meaningfully operationalize safe, functional AI systems by focusing on measurement and validity in the AI pipeline. Read a recap.
Toward a Safety Science of AI
January 19-20, 2023
CASMI brought together researchers, practitioners, and thought leaders in a collaborative, two-day workshop to define and refine an approach Toward a Safety Science of AI. The goal of the workshop was a deliverable that articulates a definition of AI safety and the next steps for research necessary to establish a robust safety science for the field. Read a recap.
Best Practices in Data-Driven Policing
June 23-24, 2022
The two-day workshop functioned as a forum for facilitating interdisciplinary conversations focused on the concerns and benefits of data-driven policing, as well as best practices for developing and implementing data-driven policing technologies. Read a recap.