
Events

CASMI PRIME SEMINAR with James Guszcza: "Envisioning a field of human-machine hybrid intelligence architecture"

May 6, 12 p.m. CDT

Mudd Hall Room 3514 or via Zoom (Note: this event is intended for Northwestern University and UL Research Institutes; the Zoom meeting is set to authenticate attendees from these domains)

AI promises to improve human decision-making and fuel economic growth, but today's state of the art suffers from serious shortcomings. Reports abound of AI technologies that compromise human safety or wellbeing, treat people unfairly, amplify societal biases, undermine human autonomy through manipulation, and accelerate the spread of misinformation and polarizing content. On a purely practical level, Gartner has estimated that 85% of big data projects never make it to production.

This talk explores the premise that many of today's AI shortcomings arise from a basic mismatch. On the one hand, business leaders and end-users typically require effective systems of human-machine partnership. On the other hand, the teams that build AI technologies tend to be oriented towards optimizing algorithms in lab settings. Currently missing, and needed, is an applied field for developing human-machine hybrid intelligence systems. Such a field would be multidisciplinary, integrating concepts and methods from computer science and data science with those from the humanities and social sciences. For example, concepts such as confirmation bias, algorithm aversion, and choice architecture would be no less essential to the language of hybrid intelligence development than concepts such as label bias, data drift, and cross-validation.



Past Events

Virtual Panel - "Ethics, Safety, and AI: Can we have it all?"

April 8, 1:30 p.m. CDT

The 2022 AI Index report just published by the Stanford Institute for Human-Centered Artificial Intelligence says that "AI systems are starting to be deployed widely into the economy, but at the same time they are being deployed, the ethical issues associated with AI are becoming magnified." While ethical principles in AI use have been a focus for years, it is an open question whether we are making progress toward actualizing those principles or toward establishing safety in the use of intelligent systems.

In this panel, we'll discuss the ethics within and of safe AI, and how to make progress toward realizing them. As safety is an inherently value-centered notion, how do we identify the values that matter for safety and who articulates them? How do we implement, measure, and test those values in the context of safety? The panel will explore where progress is being made and how to take the next steps toward an operationalized ethical practice of AI that structures the design, development and deployment of intelligent systems to be intentionally safe for those they impact.

Panelists


Yejin Choi

Brett Helsel Professor, Paul G. Allen School of Computer Science & Engineering
University of Washington

Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2, overseeing the project Mosaic. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She is a co-recipient of the ACL Test of Time Award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, a NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her Ph.D. in Computer Science at Cornell University and her B.S. in Computer Science and Engineering at Seoul National University in Korea.


Brent Hecht

Associate Professor of Computer Science and Communication Studies, Northwestern University
Director of Applied Science, Microsoft

Dr. Brent Hecht is Director of Applied Science in Microsoft's Experiences and Devices organization, where he is working to increase the scale, pace, and responsibility of innovation within products like Teams, Edge, and Office. Dr. Hecht has an additional appointment as an Associate Professor at Northwestern University, where he leads the People, Space, and Algorithms (PSA) Research Group. Dr. Hecht has been doing human-centered artificial intelligence (AI) research for over 10 years, and his work has been particularly influential in the responsible AI domain. He is the recipient of a CAREER award from the U.S. National Science Foundation, and his work has received Best Paper recognition at top-tier publication venues in human-centered AI. He was on the founding executive committee of ACM FAccT, and he played a key role in catalyzing the growing movement for AI researchers (e.g., NeurIPS authors) to more deeply engage with the societal impacts of their work.

Cara LaPointe

Co-Director, Johns Hopkins Institute for Assured Autonomy
Johns Hopkins University

A futurist who focuses on the intersection of technology, policy, ethics, and leadership, Cara LaPointe is the co-director of the Johns Hopkins Institute for Assured Autonomy, which works to ensure that autonomous systems are safe, secure, and trustworthy as they are increasingly integrated into every aspect of our lives. During more than two decades in the United States Navy, LaPointe held numerous roles in areas including autonomous systems, acquisitions, ship design, naval force architecture, and unmanned vehicle technology integration. At Woods Hole Oceanographic Institution’s Deep Submergence Lab, she conducted research in underwater robotics, developing sensor fusion algorithms for deep-ocean autonomous underwater vehicle navigation. LaPointe was previously a senior fellow at Georgetown University’s Beeck Center for Social Impact + Innovation, where she created the “Blockchain Ethical Design Framework” as a tool to drive social impact and ethics into blockchain technology.

Moderator


David Danks

Professor of Data Science & Philosophy
University of California, San Diego

David Danks is Professor of Data Science & Philosophy at the University of California, San Diego, and affiliate faculty in UCSD's Department of Computer Science & Engineering. Previously, he was the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University. While at CMU, he served as the Chief Ethicist of CMU's Block Center for Technology & Society and co-director of CMU's Center for Informed Democracy and Social Cybersecurity (IDeaS). He has received a James S. McDonnell Foundation Scholar Award (2008) and an Andrew Carnegie Fellowship (2017). His research interests are at the intersection of philosophy, cognitive science, and machine learning, using ideas, methods, and frameworks from each to advance our understanding of complex, interdisciplinary problems. He has explored the ethical, psychological, and policy issues around AI and robotics in transportation, healthcare, privacy, and security.
