How Scenario Writing Could Help Us Build a Safer AI Future
No one has a crystal ball, but we can all make assumptions about how artificial intelligence (AI) could impact life in the future, based on what we already know today. While Hollywood films such as The Terminator and 2001: A Space Odyssey have focused on apocalyptic scenarios, the reality is that AI systems are already contributing to job loss, inequality, misinformation, and privacy risks, to name a few issues.
Researchers with the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) are working to understand plausible risks through a unique, writing-based method. Over the course of the project, they are testing and refining the method so that other researchers can use it to anticipate AI impacts. The ultimate goal is to prevent negative AI impacts and to help inform governance, regulatory policies, and decision-making.
“This is basically a qualitative data gathering approach,” said Kimon Kieslich, a researcher at the University of Amsterdam’s Institute for Information Law who is working on the CASMI research project, “Anticipating AI Impact in a Diverse Society: Developing a Scenario-Based, Diversity-Sensitive Method to Evaluate the Societal Impact of AI-Systems and Regulations.”
“We give people instructions and let them write a 300-word story about their views on AI,” Kieslich continued. “They should invent a character. For example, ‘This is Laura. She is a news consumer.’ Develop a story around that and describe how they are confronted with the technology.”
The first study concentrated on generative AI in the journalism industry. Kieslich was the lead author of that paper, titled “Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment.” The research team plans to apply the method in other domains, such as the legal justice system.
Nick Diakopoulos, professor of communication studies in Northwestern’s School of Communication and (by courtesy) professor of computer science in Northwestern Engineering; and Natali Helberger, University of Amsterdam professor in law and digital technology, are the principal investigators of the CASMI research project. Diakopoulos gave a presentation about anticipating the impacts of AI in the context of policy at a workshop that CASMI co-hosted on Oct. 16-17 in Washington, D.C.
“Generative AI can create a number of harms. It's just that oftentimes, harms to society – including democratic risks and polarization – tend to be slower moving. They're harder to measure,” Diakopoulos said. “That's a real challenge as we think about governance based on risk. What risks are hard to measure and so won’t be measured? What risks unfold over time, so they won’t result in some kind of scandal that gets attention? And what can we do about that?”
A key objective of the research project is to hear from diverse voices. For their first study, researchers surveyed three groups in European Union (EU) member states: news consumers, content creators, and technology developers. As they continue their research, they plan to reach out to specific minority groups and non-governmental organizations in the US and EU.
So far, the researchers have found that their scenario-based writing method works well. Even when participants have limited knowledge about AI, the method offers a way to engage them and learn about their concerns and their ideas for mitigating risk. Those ideas included collective action (through protests or deliberations), legal action (through regulation), and restricting access to AI systems.
“The value here is, with the cognitive diversity, we really have perceptions of people who aren’t experts in this field, but they have specific ideas on how they will be impacted,” Kieslich said. Technology developers who participated in the study also had ideas about technological fixes for AI systems.
The project asks people to anticipate both positive and negative impacts of AI. Participants have expressed concerns about the spread of misinformation, job automation, inaccuracy, and a lack of oversight.
"We also see great potential value of the method in policymaking and governance of AI,” Helberger said. “Current regulatory proposals, such as the EU AI Act, are driven by a risk-based approach – developers must ensure that the AI systems they build do not have a negative impact on citizens and society. The way current impact assessment methods work, however, is expert driven, with little real representation of citizens and affected communities actively. One strength of our scenario method is that it is engaging and can help a diverse set of societal stakeholders to anticipate the impact particular AI applications could have on their very own environment and near future. In other words: impact assessments on the ground involving real citizens."
Governments around the world are taking steps to regulate AI. US President Joe Biden recently signed an executive order to ensure safe, secure, and trustworthy AI. The EU is working to finalize its proposed AI Act.
However, Helberger and Diakopoulos believe that the EU’s regulatory framework is limited because it places almost all responsibility for AI impacts on the designers and developers who provide the technology. In an essay published in February, they argued that people who use technologies like ChatGPT should also bear some responsibility if they misuse them.
“Given that generative AI models can be configured by anyone capable of natural language communication, it makes it much harder to account for risk or impact at design time because the intentionality and the use is fundamentally driven by that end user,” Diakopoulos said.
The CASMI research project is also testing specific regulatory proposals. After participants write their scenarios, they are asked to describe how they would prevent the harms they imagined. They are then informed about current regulatory efforts and asked to evaluate whether those efforts would prevent those harms.
"This way, we can learn what mitigation strategies participants themselves see working,” Helberger said. “Doing so can result in new insights and a better understanding of what measures are closer to the lived reality of citizens, and thus potentially more effective."
As a next step, researchers plan to conduct another study comparing people’s ideas about the future of AI in the EU and the US.