Workshop to Explore Sociotechnical Standards to Better Manage AI Risks
Artificial intelligence (AI) systems are being developed and publicly released faster than policymakers can effectively regulate them. While these technologies can detect threats such as cancer and wildfires, they can also worsen biased and discriminatory practices that harm people. As the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) works to mitigate these harms, researchers will travel to Washington, D.C., to develop methods that promote AI safety.
CASMI is co-hosting a workshop on Oct. 16-17 in our nation’s capital to test and evaluate sociotechnical approaches for AI systems, focusing specifically on expanding the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF). The workshop, titled “Operationalizing the Measure Function of the NIST AI Risk Management Framework,” will be led collaboratively by CASMI; Abigail Jacobs, assistant professor of information and of complex systems at the University of Michigan; the NIST-National Science Foundation (NSF) Institute for Trustworthy AI in Law & Society (TRAILS); and the Federation of American Scientists (FAS).
The workshop will gather AI experts from academia, industry, and government to create a testbed, a controlled environment for assessing AI systems. The goal is to better understand the technologies’ performance and societal impact.
“This workshop is about evaluation, metrics, and measurement,” said Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science and director of CASMI. “How can we get to a real understanding of the impact of systems? If we are concerned about issues of harm, then we need to go beyond articulating harm to measuring harm, even if it is challenging to do so.”
The convening will build upon CASMI’s last workshop, “Sociotechnical Approaches to Measurement and Validation for Safety in AI.” Jacobs, who studies how the structure and governance of technical systems are fundamentally social, argues that a sociotechnical approach is necessary to assess the validity of AI systems.
“We can ask: does a system work as intended? According to whom? How do we know that it’s working as intended? Who’s responsible? Using frames like this can reveal better ways to develop, monitor, and govern AI systems. Paying attention to measurement reveals how we encode social and political decisions in technical systems,” Jacobs said.
Released in January, the NIST AI RMF is a voluntary framework for AI standards and metrics, designed to be updated every three to five years as new issues emerge. NIST Research Scientist Reva Schwartz, who gave a presentation at CASMI’s July workshop, said the agency is seeking feedback on how to frame sociotechnical evaluations.
Policy Community Focused on Measured Approach to Regulation
The workshop will also attempt to build frameworks for standards, which have played a critical role in policymaking, said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, a nonpartisan think tank. Kaushik studies the policy implications of emerging technologies and how to align them with national security interests.
“We have to have a way of thinking that does not stifle innovation,” he said. “When cars were first introduced, we used to have red flag laws. A person would have to walk in front of a car, holding a red flag, so that people would know there is a car coming. That’s the kind of thing we want to avoid with AI.
“We want to better understand what the risks are. We want to better understand what the opportunities are. The US’s regulatory approach has been to prevent really bad stuff from happening, and I think that’s the right approach,” Kaushik continued.
Members of Congress have proposed AI legislation and invited tech leaders to Capitol Hill to explain how AI systems work. While Kaushik believes lawmakers should hear from everyone who is impacted by AI systems, he also warns against endless educational deliberations about AI.
“I would remind Congress that members of Congress did not have to be visiting professors in schools of medicine to create the FDA,” he said. “They did not have to become pilots to have the FAA. Congress has to get educated to an extent. They need that knowledge, but they also don’t need to know how to code.”
“One of the more important aspects of policymaking and regulation in the technological space is understanding enough about the technology so that you can mindfully put together policy and regulate,” Hammond said. “You need the education to understand the impacts.”
Some of the debate about AI has focused on existential threats rather than the risks society faces today. AI systems have contributed to job loss, inequality, misinformation, privacy violations, and a lack of transparency. Kaushik is also worried about threats to cybersecurity and democracy.
“We’ve been having ransomware attacks at hospitals and pipelines,” Kaushik said. “Imagine ransomware attacks happening on an almost daily basis, causing loss of life. That is a big concern. Democracies survive when societies survive. Our societal fabric depends on cohesion, security of our elections, and protecting marginalized groups. When those things start getting attacked, that leads to erosion of democracy. We need to protect it.”
Building Trustworthy AI Systems
To build trustworthy AI systems, the NIST AI RMF says the following characteristics are necessary: validity, reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy, and fairness. The framework also calls for mitigating harms to people, organizations, and ecosystems.
At CASMI’s July workshop, participants said that a diverse research community can help gain public trust. Researchers at TRAILS also believe there is no trust or accountability in AI systems without the participation of diverse stakeholders.
“AI is increasingly widely used by all segments of society; however, the needs of several impacted populations may not be reflected in the design process,” said David Broniatowski, associate professor of engineering management and systems engineering at George Washington University. “As a result, people can be exposed to harms that could have been anticipated if the concerns of people from vulnerable communities were incorporated earlier on. People trust one another if they feel that their concerns are heard and addressed. This effort builds toward eliciting those concerns, and then translating them into meaningful metrics.”
The workshop in Washington, D.C., will further CASMI’s mission to develop best practices for safety in the design and development of machine intelligence. To learn more about CASMI’s research, visit our website.