
AI Ethics Debate at Chicago Conference, Precursor to CASMI’s Next Workshop



Researchers with the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) share their thoughts after participating in the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) on June 12-15 in Chicago.

The public debate around artificial intelligence (AI) came to Chicago on June 12-15, when 828 scholars and practitioners from 35 countries participated in the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) at Hyatt Regency McCormick Place.  

Several researchers with the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) attended the conference to discuss the ethics and policy surrounding AI and technology.  

“It’s exciting to see the evolution of FAccT, which began as academics genuinely concerned about accountability and fairness with regard to technology and has grown into a concrete examination of where technologies are causing harm and how to make them safe,” said Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science and director of CASMI. “As a group, FAccT has become much more focused on how to operationalize safety and protect the world from harm.”

The conference’s mission to bring together researchers interested in fairness, accountability, and transparency in sociotechnical systems is in line with CASMI’s mission to operationalize machine intelligence that is safe, equitable, and beneficial. ACM FAccT’s focus also aligns with CASMI’s next workshop, “Sociotechnical Approaches to Measurement and Validation for Safety in AI.” 

Abigail Jacobs, an assistant professor of information at the University of Michigan who is working with CASMI to coordinate the workshop, noted that scholars at FAccT are coalescing around the understanding that the assumptions built into AI can lead to downstream harms. “Our upcoming workshop takes this concern seriously,” she said. “No technical system is free of assumptions, but these assumptions are often hidden – and hugely impactful. Operationalizing safe technologies requires interdisciplinary, sociotechnical perspectives, which center safety as preventing and mitigating the harms emerging from historical and present-day technologies, impacting real people, inequitably, right now.” The workshop will bring together scholars from a range of disciplines and from academia, industry, and government. 

ACM FAccT employed a rigorous review process for the 150 papers it accepted, three of which were funded in part by CASMI.  

Our Research, From the East Coast to Ukraine 

One CASMI-funded paper, “Counterfactual Prediction Under Outcome Measurement Error,” received a best paper award at ACM FAccT. The research – by Luke Guerdan, graduate student at the Carnegie Mellon University (CMU) Human-Computer Interaction Institute; Amanda Coston, CMU graduate student in machine learning and public policy; Kenneth Holstein, assistant professor at the CMU Human-Computer Interaction Institute and principal investigator for the CASMI project, “Supporting Effective AI-Augmented Decision-Making in Social Contexts”; and Steven Wu, CMU assistant professor of computer science and societal systems – was one of six papers at the conference to receive the distinction. 

The work studied multiple challenges that threaten the safety and reliability of AI-based decision systems in real-world settings. Researchers demonstrated that learning reliable predictive models requires carefully accounting for compounding uncertainty from label bias, counterfactual outcomes, and distribution shift. 

“Models that don’t correct for outcome measurement error and treatment effects in parallel perform quite unreliably,” Guerdan said. “This underscores the importance of carefully vetting these measurement assumptions in consultation with domain experts when we’re applying these assumptions for downstream parameter estimation and risk minimization.”  
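Guerdan’s point can be made concrete with a toy simulation. The Python sketch below is purely illustrative (it is not the paper’s code, data, or notation): it fabricates a true outcome, observes it through a biased proxy label, and shows that risk estimates computed naively from the proxy drift away from the true risk. The error rates alpha and beta are arbitrary assumptions.

```python
# Illustrative sketch of outcome measurement error (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True risk depends on a single feature x
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-x))            # P(Y = 1 | x)
y = rng.binomial(1, p_true)              # true outcome (often unobservable)

# Proxy label Z: true positives recorded with probability alpha, and
# false positives occur at rate beta (alpha, beta are made-up error rates)
alpha, beta = 0.7, 0.05
z = np.where(y == 1,
             rng.binomial(1, alpha, n),
             rng.binomial(1, beta, n))

# A naive "model": average the observed labels within feature quartiles
bins = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
for b in range(4):
    m = bins == b
    print(f"quartile {b}: naive risk {z[m].mean():.3f} "
          f"vs true risk {y[m].mean():.3f}")
```

Running the sketch shows the proxy-based estimates systematically understating risk in high-risk quartiles, which is the kind of gap the paper argues must be accounted for before deployment.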

Guerdan presented another CASMI-funded research paper at ACM FAccT: “Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making.” This research presents a framework to help the research community bridge the gap between the complexities encountered in real-world AI deployments and current research practices in the FAccT, machine learning, and human-computer interaction communities. 

New York University PhD Student Andrew Bell presents the findings of CASMI-funded research.

The third CASMI-funded paper, “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice,” demonstrates that fairness in statistical models can be achieved for multiple groups by allowing a small margin of error between fairness metrics, and, importantly, without sacrificing accuracy (a brief sketch of this relaxation appears below). Andrew Bell, a New York University (NYU) graduate student in computer science, presented the findings at ACM FAccT on June 12. Before his presentation, Bell gave special attention to two of his fellow authors: Nazarii Drushchak and Tetiana Herasymova, both from Ukrainian Catholic University. The university is located in Lviv, Ukraine, where many people are seeking refuge amid the war. 

“They did amazing work in unimaginable circumstances,” Bell said.  

Julia Stoyanovich, NYU associate professor of computer science and engineering and of data science and director of its Center for Responsible AI, started the research program in 2022 to help Ukrainians maintain a sense of normalcy. Stoyanovich serves as the principal investigator (PI) for the CASMI project, “Incorporating Stability Objectives into the Design of Data-Intensive Pipelines.” 
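The relaxation at the heart of “The Possibility of Fairness” can be sketched in a few lines: rather than demanding that fairness metrics be exactly equal across groups (the regime where the impossibility theorem applies), each between-group gap is only required to fall within a small tolerance. Everything in the sketch below (the two metrics, the fabricated data, and the tolerance epsilon) is an illustrative assumption, not the paper’s experiment.

```python
# Illustrative sketch: check fairness metrics within a tolerance epsilon
# rather than requiring exact equality across groups.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group gaps for two common fairness metrics."""
    g0, g1 = (group == 0), (group == 1)
    return {
        # demographic parity: difference in positive prediction rates
        "demographic_parity": abs(y_pred[g0].mean() - y_pred[g1].mean()),
        # equal opportunity: difference in true positive rates
        "equal_opportunity": abs(y_pred[g0 & (y_true == 1)].mean()
                                 - y_pred[g1 & (y_true == 1)].mean()),
    }

rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.4, 10_000)    # fabricated outcomes
group = rng.binomial(1, 0.5, 10_000)     # fabricated group membership
y_pred = rng.binomial(1, 0.4, 10_000)    # stand-in classifier decisions

epsilon = 0.02                           # illustrative tolerance
gaps = fairness_gaps(y_true, y_pred, group)
print({k: round(v, 4) for k, v in gaps.items()})
print("fair within epsilon" if max(gaps.values()) <= epsilon
      else "outside tolerance")
```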

Our Network of Researchers: Studying Social and Computational Sciences  

CASMI-affiliated researchers are investigating how technological systems affect people and how people can gain greater control over those systems. 

A Northwestern University doctoral fellowship supported the paper, “The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers,” whose authors include CASMI PI Stevie Chancellor, assistant professor of computer science and engineering at the University of Minnesota; Nicholas Vincent, postdoctoral scholar at the University of California, Davis; and Brent Hecht, associate professor of computer science at Northwestern Engineering and of communication studies at Northwestern’s School of Communication, and director of applied science at Microsoft. 

Hanlin Li, postdoctoral scholar at the University of California, Berkeley, gave a presentation about the paper at ACM FAccT on June 13. Li, who received a PhD in technology and social behavior from Northwestern University, said we’re all moonlighting as underpaid or unpaid data workers. The paper outlines opportunities for researchers, policymakers, and activists to empower data producers in their relationship with tech companies. 

“This is a particularly urgent task in the domain of creative industry, given the emergence of generative models,” Li said. 

Chancellor, who is the principal investigator for the CASMI project, “Safe and Compassionate ML Recommendations for People with Mental Illnesses,” co-authored a second paper at ACM FAccT: “A Systematic Review of Ethics Disclosures in Predictive Mental Health Research.” Its authors analyzed ethical practices in social media and machine learning research and advocated for increased transparency. 

Several papers presented at ACM FAccT examined research published at its previous conferences. Abigail Jacobs, a Northwestern alumna who studies structure, governance, and inequality in sociotechnical systems, co-authored the paper, “An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature.” It analyzed 60 FAccT papers to understand how race is conceptualized and used in algorithmic fairness research. The analysis found inconsistent definitions of race and, in many cases, prior work that did not clearly explain or justify its choices about racial categories. 

A number of ACM FAccT papers focused on how diversity can lead to better outcomes. Jacob Thebault-Spieker, assistant professor at the University of Wisconsin-Madison School of Computer, Data & Information Sciences and principal investigator of the CASMI project “Towards Contextualized Road Safety Conditions,” co-authored the paper, “Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation.” Researchers conducted a study on how well people could label political content on social media. They found that techniques incorporating diverse political perspectives may help ensure fairer outcomes.  
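A toy simulation gives a rough sense of why panel composition matters; this is an assumption-laden sketch, not the study’s actual method or data. Raters occasionally over-flag posts from the opposing political side, and a politically balanced panel’s majority vote ends up false-flagging far less often than a skewed panel’s.

```python
# Illustrative sketch: balanced vs. skewed moderation panels.
import numpy as np

rng = np.random.default_rng(2)
n_items = 5_000
truth = rng.binomial(1, 0.3, n_items)    # 1 = actually violates policy
side = rng.binomial(1, 0.5, n_items)     # which political side posted it

def rate(item_truth, item_side, rater_side, p_skew=0.25):
    """A rater labels correctly but over-flags the opposing side's posts."""
    if item_side != rater_side and rng.random() < p_skew:
        return 1                         # biased false positive
    return item_truth

def panel_vote(rater_sides):
    """Majority vote of a panel over all items."""
    votes = np.array([[rate(t, s, r) for r in rater_sides]
                      for t, s in zip(truth, side)])
    return (votes.mean(axis=1) > 0.5).astype(int)

balanced = panel_vote([0, 0, 0, 1, 1, 1])    # three raters per side
skewed = panel_vote([0, 0, 0, 0, 0, 1])      # five vs. one

for name, labels in [("balanced", balanced), ("skewed", skewed)]:
    fp = ((labels == 1) & (truth == 0)).mean()
    print(f"{name} panel false-flag rate: {fp:.3f}")
```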

Discussion on How to Involve Users in Reviewing AI Systems 

Panelists participate in a discussion entitled, "User Engagement in Algorithm Testing and Auditing: Exploring Opportunities and Tensions between Practitioners and End Users."

Two CASMI-affiliated researchers at ACM FAccT participated in a “CRAFT” session, which stands for Critiquing and Rethinking Accountability, Fairness, and Transparency. The following researchers from Carnegie Mellon University organized the CRAFT session: CoALA Lab Director Kenneth Holstein, PhD student Wesley Deng, Assistant Professor of Computer Science Motahhare Eslami, and incoming PhD student Shivani Kapania. The discussion concentrated on methods like crowdsourcing and covered the limitations of getting people to test and review AI systems. 

Nick Diakopoulos, associate professor of communication studies in Northwestern’s School of Communication and (by courtesy) associate professor of computer science in Northwestern Engineering, and Christo Wilson, associate professor at the Northeastern University Khoury College of Computer Sciences, were among the seven panelists who participated in the discussion. Diakopoulos is the principal investigator of the CASMI project, “Anticipating AI Impact in a Diverse Society: Developing a Scenario-Based, Diversity-Sensitive Method to Evaluate the Societal Impact of AI-Systems and Regulations.” Wilson is a co-PI for the CASMI project, “Dark Patterns in AI-Enabled Consumer Experiences.” 

Diakopoulos said people can push back against powerful entities by using their voices. “If you publicize ethical issues uncovered through end-user auditing and draw enough attention, maybe it brings providers to provide transparency or have an explanation,” he said.

Wilson said paying people to participate in audits has helped his own research, but ultimately, he believes people would prefer to have democratic governance of systems. “What do people really want? More control and transparency. They just want things to work a certain way or understand why decisions are being made,” he said. 

Conversations like these will continue at the CASMI workshop, “Sociotechnical Approaches to Measurement and Validation for Safety in AI,” on July 18-19. For more information about the event, visit our website. 
