
The Impact of Human Decision Architecture on the Design of Intelligent Systems

PI: Jim Guszcza

Research Affiliate, Center for Advanced Study in the Behavioral Sciences
Stanford University


PI: Kris Hammond

Bill and Cathy Osborn Professor of Computer Science
Northwestern University


“Intelligent” systems that incorporate AI algorithms typically do not achieve their desired results in isolation. Rather, they do so in partnership with human users or in the context of human social systems. Yet algorithmic technologies continue to be built without realistic consideration of how humans will interact with them or how these interactions might influence the ultimate outcomes.

The development of intelligent systems, even those intended to work in concert with human users, has typically assumed an overly simplified model of human decision making. In Artificial Intelligence and other areas of computer science, this has produced systems that jeopardize human safety because they depend on assumptions about human decision making that simply do not hold. Examples include self-driving cars that hand off control to human users who are assumed to be engaged but inevitably are not paying full attention, and decision support systems whose recommendations are followed blindly because they offer “explanations” that have the right form but no real connection to the reasoning they supposedly support.

Systems of human-machine intelligence must be designed in ways that reflect the realities of how humans process information and how they incorporate the outputs of algorithms into their behaviors and decisions. Ironically, while the core science of human decision architectures is already being exploited online in the form of dark patterns that manipulate human behavior, little work has been done to map the realities of human decision making onto the design of algorithmic systems in ways that advance human goals, health, and safety.

The project’s goal is to take steps toward rectifying this by incorporating modern models of human decision architectures into design principles for intelligent systems that improve decision making and mitigate problems that undercut human safety. To address this goal, we plan to bring together a cross-disciplinary team of researchers in computer science, design, HCI, and behavioral economics to develop these design principles by exploring human decision architectures and how they shape interaction with intelligent systems.

