Computer Scientist Shares Enlightening Journey to See the ‘Value in Expression’
Four years ago, Haoqi Zhang started a philosophical journey that would reshape his way of thinking about human values and engagement. The Northwestern Associate Professor of Computer Science and Design began reading the work of Talbot Brewer, a philosopher and author of the book The Retrieval of Ethics. It took Zhang a full year to read the book, simply because that was how long he needed to comprehend its "foreign, yet familiar" concepts.
Zhang now wants other computer scientists, including researchers in artificial intelligence (AI) and human-computer interaction (HCI), to expand their thinking beyond a consequentialist point of view (which judges actions solely by their consequences, i.e. goal-reaching methods) to also incorporate a dialectical mindset (which emphasizes the inherent value of an activity, e.g. being a good friend). This dichotomy is explored in Zhang's position paper, "Searching for the Non-Consequential: Dialectical Activities in HCI and the Limits of Computers," which he presented on May 14 during the Association for Computing Machinery's Conference on Human Factors in Computing Systems (ACM CHI) in Honolulu, Hawaii.
“It's a theory and philosophy paper that is really trying to help us understand why, fundamentally, there are some things that computers can't do,” said Zhang, principal investigator of the Center for Advancing Safety of Machine Intelligence (CASMI) project titled, “Human-AI Tools for Expressing Human Situations and Contexts to Machines.” “The concept in the paper is very basic, yet it's actually very hard to articulate correctly and to understand fully and appreciate.”
By their nature, computers are consequentialist “thinkers,” so only humans can think dialectically. To do this, people must engage deeply in experiences to see the intrinsic good in what they’re doing.
If someone is creating art or music, they are likely thinking dialectically already. Generative AI tools can also produce art or music, but these technologies are designed as input-output machines, so they can't capture the full human expression that artists bring to the act of creating.
“There’s value in expression,” Zhang said. “One of the very practical things that all builders of human-AI systems really ought to think about is, when we build these tools, what is the value of artmaking and developing a personal creative voice? How does that come through at all with these technologies?”
Zhang said that the research community should wrestle with these questions to better understand the complexities of what it means to be human.
The paper indicates that consequentialism isn’t always a bad approach. In fact, many things that people do are simply to meet a desired end, such as survival or feeling joy. Examples include eating to satisfy hunger, learning a new skill to advance your career, or watching a movie for entertainment.
However, Zhang adds that this mentality can be limited. Consider parenting.
“A consequentialist might say, ‘The point of being a good parent is to get my kid into Harvard or to make sure my kid has a stable job,’” Zhang said. “It’s saying that the point of parenting is to get your kid into this really good outcome. But something about that sounds off, right?”
By contrast, a dialectical approach would focus on relating to your child in the present and acting in a way that aligns with your ideals.
“That doesn't mean that, even if you made your best effort, things would go according to plan,” Zhang said. “It also doesn't mean that you clearly see what that is. It's very, very hard to learn, and it takes decades.”
One parallel in computer science is dating apps. They are consequentialist by design: built to help someone find a match. However, intimately relating to another human being is more nuanced and is difficult for a computer to encode.
The paper advocates for an environment that supports both consequentialist and dialectical thinking. This "computational ecosystem" wouldn't lose the value of human engagement. The Agile Research Studios (ARS), a sociotechnical model that Zhang developed, can serve as an example of a computational ecosystem. While Zhang acknowledged that ARS teaches students the skills needed to produce research products (making them better consequentialists), he said the program also helps them learn to see the good in the activity of leading a research inquiry, including how to deal with failure and the value of not knowing something (making them better dialectical thinkers).
The computational ecosystems approach provides a guide for how computer science and HCI fields can think about these issues, in the hopes of advancing important human ways of being. Overall, Zhang wants readers to realize there is no quick, easy fix.
“We want people to think about the shape of human activities that we care about and the good in those activities — whether the activity is parenting, artmaking, ethical reasoning, historical thinking, or research,” Zhang said. “That's really what this work is about.”