
CASMI PRIME

CASMI PRIME brings together a group of collaborators for a short, focused effort on an open research question or theme, with the aim of delivering specific outcomes.

PRIME projects convene diverse groups of experts and researchers as a catalyst toward shared goals or research objectives. They link collaborators from multiple affiliations or home organizations who might not otherwise work together, or who have lacked a venue or opportunity to do so. While working together at Northwestern, participants will also be able to interact with other researchers at Northwestern and UL, fostering further connections.

If you have a concept or potential team that you think may be a candidate for CASMI PRIME, please reach out to us.




2022 Projects

The Impact of Human Decision Architecture on the Design of Intelligent Systems

Leads: Jim Guszcza, Research Affiliate, Stanford University Center for Advanced Study in the Behavioral Sciences; Kris Hammond, Bill and Cathy Osborn Professor of Computer Science, Northwestern University

“Intelligent” systems that incorporate AI algorithms typically do not achieve their desired results in isolation. Rather, they do so in partnership with human users or in the context of human social systems. Yet algorithmic technologies continue to be built without realistic consideration of how humans will interact with them or how these interactions might influence the ultimate outcomes.

The development of intelligent systems, even those intended to work in concert with human users, has typically assumed an overly simplified model of human decision making. In Artificial Intelligence and other areas of computer science, this has resulted in systems that jeopardize human safety because they depend on assumptions about human decision making that simply do not hold. Examples include self-driving cars that hand off control to human users who are assumed to be engaged but inevitably are not paying full attention, and decision support systems that are followed blindly because they provide “explanations” that have the right form but lack connection to the reasoning that they supposedly support.

Systems of human-machine intelligence must be designed in ways that reflect the realities of how humans process information and how they incorporate the outputs of algorithms in their behaviors and decisions. Ironically, while the core science of human decision architectures is being exploited online in the form of dark patterns that manipulate human activity, little work has been done to map the realities of human decision making behavior onto the design of algorithmic systems in ways that advance human goals, health, and safety.

The project’s goal is to take steps toward rectifying this by incorporating modern models of human decision architectures into design principles for intelligent systems that improve decision making and mitigate problems that undercut human safety. To address this goal, we plan to bring together a cross-disciplinary team of researchers in computer science, design, HCI, and behavioral economics to develop these design principles by exploring human decision architectures and how they shape interaction with intelligent systems.
