Researchers Develop DANGER Framework for Stress Testing

CASMI researchers developed a framework and algorithm that can generate dynamic stress tests for safety-critical systems

A key issue for autonomous systems is how they will perform in real-world scenarios, including in rare adversarial or dangerous situations. CASMI researchers have developed a framework and algorithm that can generate dynamic stress tests for safety-critical systems.

“Most autonomous vehicle datasets are perfectly curated, labeled, and do not show any type of dangerous maneuvers. We wanted to examine if we could generate these types of dangerous maneuvers from existing datasets,” said Leilani Gilpin, an Assistant Professor of Computer Science and Engineering at UC Santa Cruz.

Gilpin leads the research effort that created DANGER, a framework for generating edge-case images on top of existing autonomous vehicle datasets. These images depict dangerous driving scenarios, broadening the range of situations used to train systems and test their explanations. By training on datasets that include such scenarios, autonomous vehicle models can learn to respond to real-life dangers.
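As a loose illustration of what generating an edge-case image "on top of" an existing dataset image can look like, and not a description of DANGER's actual method, the sketch below composites a hypothetical hazard cut-out onto a driving-scene image. The file names and placement are made up for the example.

    # Illustrative sketch only; this is not the DANGER implementation.
    # It shows the general idea of layering a dangerous element (a
    # hypothetical obstacle cut-out with a transparent background) onto
    # an existing driving-scene image to produce an edge-case sample.
    from PIL import Image

    def make_edge_case(scene_path, hazard_path, position):
        """Composite a hazard cut-out onto a driving-scene image."""
        scene = Image.open(scene_path).convert("RGB")
        hazard = Image.open(hazard_path).convert("RGBA")  # alpha marks the cut-out
        scene.paste(hazard, position, mask=hazard)        # paste using the alpha mask
        return scene

    # Hypothetical file names and placement, for illustration only.
    edge_case = make_edge_case("scene_000123.png", "obstacle_cutout.png", (640, 420))
    edge_case.save("scene_000123_danger.png")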

The team chose autonomous vehicle models because, in that domain, adversarial examples represent real-world, tangible dangers yet are often excluded from datasets.

Master’s student Shengjie Jay Xu led the development of the DANGER Framework and spoke to the potential benefits it brings to the field of AI safety. “DANGER attempts to compensate for the lack of a sense of human ‘common sense reasoning’ in traditional computer vision models. Adding a dimension of dangerousness to ML brings AI closer to thinking and reasoning like humans.”

This work has wide-ranging implications for explainable AI (XAI) and AI safety. The DANGER Framework can support stress testing because it generates model-specific tests on which the algorithm can then explain its own behavior. If an AI algorithm fails one of these stress tests, the failure can be explained and contextualized in terms of the model, and the failing scenarios can be used to retrain the model.
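As a rough illustration of that workflow, and not the project's actual code, the sketch below shows a generic stress-test loop: generated edge cases are fed to a model, each failure is recorded together with its scenario context, and the failing cases are returned so they can be explained and used for retraining. The names model, generate_edge_cases, and expected_label are hypothetical placeholders.

    # Illustrative sketch only; this is not the DANGER implementation.
    # A generic stress-test loop over generated edge-case images.
    def stress_test(model, generate_edge_cases):
        failures = []
        for image, metadata, expected_label in generate_edge_cases():
            prediction = model(image)
            if prediction != expected_label:
                # Keep the failing input plus its scenario context so the
                # failure can be explained in terms of the model and the
                # case can be reused for retraining.
                failures.append({
                    "image": image,
                    "scenario": metadata,
                    "predicted": prediction,
                    "expected": expected_label,
                })
        return failures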

The DANGER framework is an outcome of the CASMI research project, “Adversarial Examples to Test Explanation Robustness.” This project is part of Principal Investigator Gilpin’s long-term research vision for true XAI: self-explaining, intelligent machines by design.

A crucial component of pre-deployment testing for intelligent systems is an explanation: a reason or justification, within the context of the model, for a system’s decisions. Currently, explanations cannot be compared to one another, and work on explaining errors or corner cases is limited. The Adversarial Examples project and its DANGER Framework are intended to address these agreement and edge-case gaps in XAI. Read more on all the ongoing CASMI research projects and their outcomes.
