Engaging with Lawmakers, Business Leaders to Promote AI Safety

Kristian Hammond and Daniel Linna Jr. briefed state lawmakers about artificial intelligence on Feb. 14 at Northwestern.

The Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) is engaging with lawmakers and business leaders as it educates the public about the impact of artificial intelligence (AI) technologies. 

Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science and director of CASMI, led an AI briefing on March 18 for the Chicagoland Chamber of Commerce. Additionally, Hammond and Daniel W. Linna Jr., senior lecturer at Northwestern Engineering and the Pritzker School of Law, briefed members of the Illinois House Cybersecurity and Judiciary Civil committees on Feb. 14 at Northwestern. CASMI is also producing a YouTube series exploring the impact of AI, with a focus on safety and harm.

All of these efforts are part of CASMI’s mission to identify and remove harms caused by machine technologies. 

“We need to communicate with people who are making regulations and help them understand the reality of what they're regulating,” Hammond said. “We need to communicate with people in the business world who are making decisions about how to deploy these technologies so that they understand the issues of harm and safety. And we need to communicate with consumers so that they can make informed decisions and so that they can understand exactly where the harms are in the world so that they can avoid them.”

“Connections with policymakers, the business community, and Northwestern faculty like Kris Hammond and Dan Linna are essential,” said Laura Farr, director of state relations for the Northwestern Office of Government Relations. “Our faculty are helping business leaders imagine new possibilities for economic growth with AI, and lawmakers are considering Northwestern's insight in making balanced legislation to better protect against harms.”

History of Influencing Policy Decisions 

CASMI’s network of researchers has a history of influencing policymaking decisions. Julia Stoyanovich, associate professor of computer science & engineering and of data science at New York University (NYU) and director of its Center for Responsible AI, has been deeply involved in AI governance and regulation in New York City and New York State for seven years.  

Stoyanovich participated in US Sen. Chuck Schumer’s (D-NY) AI “Insight Forum” on Nov. 1 and spoke as a panelist on Feb. 7 at the United Nations (UN) Commission for Social Development in New York. She continues to have conversations with policymakers at the local, state, and federal levels. 

“We’re in an election year. The types of questions getting the most attention are about the use of political deepfakes in political advertising. Algorithmic hiring continues to be important,” said Stoyanovich, principal investigator of the CASMI-funded project, “Incorporating Stability Objectives into the Design of Data-Intensive Pipelines.”

Stoyanovich supported a New York City law that requires companies using AI systems in hiring to notify job seekers that they will be screened with automated tools. She first became involved in policymaking in October 2017, when she testified before the New York City Council at a public hearing on the legislation that created the Automated Decision Systems Task Force, a body charged with helping city agencies become more transparent in their use of algorithms and data. Stoyanovich was a member of the task force.

“That’s a good way to be noticed: follow the bills being proposed in your city and state,” she said. “There will always be an opportunity to express your opinion in public hearings. Anybody can submit testimony, and everybody will be heard.” 

Communicating Effectively with Different Audiences 

It can be daunting to learn about artificial intelligence. Pew Research Center polling shows that while most Americans have heard of AI, only 30% of US adults could correctly identify all six examples of AI in everyday life included in the survey.

CASMI recognizes the challenges of learning about AI technologies, but its researchers are skilled at communicating effectively with different audiences.

“You don’t want a technical understanding of these technologies. You want a functional understanding,” Hammond said. “You want an understanding of what these technologies do, what they can do for us, and what the risks are.” 

“You can’t talk at the abstract level. It’s much more effective to talk domain by domain when we talk about the use of AI-generated content,” Stoyanovich said. “That allows us to connect whatever we’re doing to regulate AI with the way we’ve been regulating decision-making before AI. It’s very important to connect any regulatory action to existing environments.” 

To learn more about AI safety and harm, subscribe to the CASMI YouTube channel. 
