AI Ethics Debate at Chicago Conference, Precursor to CASMI’s Next Workshop
CASMI Research Wins Best Paper Award at ACM FAccT
The public debate around artificial intelligence (AI) came to Chicago on June 12-15, when 828 scholars and practitioners from 35 countries participated in the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) at Hyatt Regency McCormick Place.
Several researchers with the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) attended the conference to discuss the ethics and policy surrounding AI and technology.

The conference’s mission to bring together researchers interested in fairness, accountability, and transparency in sociotechnical systems is in line with CASMI’s mission to operationalize machine intelligence that is safe, equitable, and beneficial. ACM FAccT’s focus also aligns with CASMI’s next workshop, “Sociotechnical Approaches to Measurement and Validation for Safety in AI.”
ACM FAccT employed a rigorous review process for the 150 papers it accepted, three of which were funded in part by CASMI.
Our Research, From the East Coast to Ukraine
One CASMI-funded paper, “Counterfactual Prediction Under Outcome Measurement Error,” received a best paper award at ACM FAccT. The research – by Luke Guerdan, graduate student at the Carnegie Mellon University (CMU) Human-Computer Interaction Institute; Amanda Coston, CMU graduate student in machine learning and public policy; Kenneth Holstein, assistant professor at the CMU Human-Computer Interaction Institute and principal investigator for the CASMI project, “Supporting Effective AI-Augmented Decision-Making in Social Contexts”; and Steven Wu, CMU assistant professor of computer science and societal systems – was one of six papers at the conference to receive the distinction.
The work studied multiple challenges that threaten the safety and reliability of AI-based decision systems in real-world settings. Researchers demonstrated that learning reliable predictive models requires carefully accounting for compounding uncertainty from label bias, counterfactual outcomes, and distribution shift.

Guerdan presented a second CASMI-funded research paper at ACM FAccT: “Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making.” This research presents a framework to help the research community bridge the gap between the complexities encountered in real-world AI deployments and current research practices in the FAccT, machine learning, and human-computer interaction communities.

The third CASMI-funded paper, “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice,” demonstrates how fairness in statistical models can be achieved for multiple groups by allowing a small margin of error between fairness metrics, and importantly, without sacrificing accuracy. Andrew Bell, a New York University (NYU) graduate student in computer science, presented the findings at ACM FAccT on June 12. Before his presentation, Bell gave special attention to two of his fellow authors: Nazarii Drushchak and Tetiana Herasymova, both of Ukrainian Catholic University. The university is located in Lviv, Ukraine, where many people are seeking refuge amid the war.
“They did amazing work in unimaginable circumstances,” Bell said.
Julia Stoyanovich, NYU associate professor of computer science and engineering and of data science, and director of the university’s Center for Responsible AI, started the research program in 2022 to help Ukrainians maintain a sense of normalcy. Stoyanovich serves as the principal investigator (PI) for the CASMI project, “Incorporating Stability Objectives into the Design of Data-Intensive Pipelines.”
Our Network of Researchers: Studying Social and Computational Sciences
CASMI-affiliated researchers are investigating ways in which technological systems are affecting people and how they can gain greater control.
Northwestern University’s doctoral fellowship supported the paper, “The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers,” whose authors include CASMI PI Stevie Chancellor, assistant professor in computer science and engineering at the University of Minnesota; Nicholas Vincent, postdoctoral scholar at University of California, Davis; and Brent Hecht, associate professor of computer science at Northwestern Engineering and of communication studies at Northwestern’s School of Communication and director of applied science at Microsoft.
Hanlin Li, postdoctoral scholar at the University of California, Berkeley, presented the paper at ACM FAccT.
“This is a particularly urgent task in the domain of creative industry, given the emergence of generative models,” Li said.
Chancellor, who is the principal investigator for the CASMI project, “Safe and Compassionate ML Recommendations for People with Mental Illnesses,” co-authored a second paper at ACM FAccT: “A Systematic Review of Ethics Disclosures in Predictive Mental Health Research.” Its authors analyzed ethical practices in social media and machine learning research and advocated for increased transparency.
Several papers presented at ACM FAccT examined research published at previous editions of the conference. Abigail Jacobs, an assistant professor of information at the University of Michigan and a Northwestern alumna who studies structure, governance, and inequality in sociotechnical systems, co-authored the paper, “An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature.” It analyzed 60 FAccT papers to understand how race is conceptualized and used in algorithms. The research found inconsistent definitions of race; in many cases, prior papers did not clearly explain or justify the choices they made about race in their algorithms.
A number of ACM FAccT papers focused on how diversity can lead to better outcomes. Jacob Thebault-Spieker, assistant professor at the University of Wisconsin-Madison School of Computer, Data & Information Sciences and principal investigator of the CASMI project “Towards Contextualized Road Safety Conditions,” co-authored the paper, “Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation.” Researchers conducted a study on how well people could label political content on social media. They found techniques that incorporate different political perspectives may help ensure fairer outcomes.
Discussion on How to Involve Users to Review AI Systems

Two CASMI-affiliated researchers at ACM FAccT participated in a “CRAFT” session, which stands for Critiquing and Rethinking Fairness, Accountability, and Transparency. The session was organized by four Carnegie Mellon University researchers: Kenneth Holstein, CoALA Lab director; Wesley Deng, PhD student; Motahhare Eslami, assistant professor of computer science; and Shivani Kapania, incoming PhD student. The discussion concentrated on methods like crowdsourcing and covered the limitations of recruiting people to test and review AI systems.
Nick Diakopoulos, associate professor of communication studies in Northwestern’s School of Communication and (by courtesy) associate professor of computer science in Northwestern Engineering, and Christo Wilson, associate professor at the Northeastern University Khoury College of Computer Sciences, were among the seven panelists who participated in the discussion. Diakopoulos is the principal investigator of the CASMI project, “Anticipating AI Impact in a Diverse Society: Developing a Scenario-Based, Diversity-Sensitive Method to Evaluate the Societal Impact of AI-Systems and Regulations.” Wilson is a co-PI for the CASMI project, “Dark Patterns in AI-Enabled Consumer Experiences.”

Wilson said paying people to participate in audits has helped his research.
Conversations like these will continue at the CASMI workshop, “Sociotechnical Approaches to Measurement and Validation for Safety in AI,” on July 18-19. For more information about the event, visit our website.