
Researchers Working to Translate Human Experiences for AI Tools

Despite their impressive ability to process immense amounts of data, machines struggle to understand basic human situations. For example, while people naturally adapt to changing environments like the weather and can navigate social and cultural differences, computers often lack this context. That could lead to dangerous outcomes in a world increasingly reliant on emerging technologies like artificial intelligence (AI).

Haoqi Zhang, Northwestern associate professor of computer science, and Darren Gergle, Northwestern professor of communication studies and (by courtesy) computer science, are working to address this issue in their research with the Center for Advancing Safety of Machine Intelligence (CASMI). Their project, “Human-AI Tools for Expressing Human Situations and Contexts to Machines,” aims to help designers form rich, accurate descriptions of real-life situations so that they can create responsible, context-aware tools. Context-aware computing tools include any technologies that respond to people’s locations, activities, and situations, and to the meaning people attribute to their doings in the world.

“We increasingly turn to machines to understand our world, but machines don't understand our world,” Zhang said. “For example, if you’re looking for places to throw a frisbee, parks are the place to go; except you didn't realize that when you tell the computer ‘parks,’ that includes botanical gardens. Gardens are not good for throwing the frisbee around. Our way of thinking about this and our colloquial way of talking about it is explicit and implicit.”

Students in Zhang’s Design, Technology, and Research (DTR) class are working on the project in two groups. One group is developing human concepts, called “concept expressions,” for the machine. The other group is accounting for differences in contexts.

Alex Feng and Mame Coumba Ka, Northwestern seniors and computer science majors, were part of the team that created concept expressions. They enhanced an interface called Affinder, which helps designers of context-aware applications translate between low-level context features (like location data on a smartphone) and high-level human concepts (like good places for a student to relax between classes), expressing that translation as computer code. The goal was to develop an interface that would allow designers to create an infinite number of concept expressions.
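Zhang’s frisbee example hints at what such a translation might look like. The sketch below is purely illustrative: the data layout, field names, and rule are assumptions for this article, not Affinder’s actual interface. It shows how a high-level human concept can be expressed as a rule over low-level place features, including the refinement that "park" alone is too broad:

```python
# Illustrative "concept expression": a rule mapping low-level place
# features to the high-level concept "good place to throw a frisbee."
# The data layout and rule are assumptions, not Affinder's real API.

def is_good_frisbee_spot(place):
    # Low-level feature: category tags, as a places API might return.
    categories = place["categories"]
    # Refinement: "park" is too broad -- botanical gardens are tagged
    # as parks but are not good for throwing a frisbee around.
    return "park" in categories and "botanical_garden" not in categories

places = [
    {"name": "Lakefront Park", "categories": {"park"}},
    {"name": "City Botanical Garden", "categories": {"park", "botanical_garden"}},
]
print([p["name"] for p in places if is_good_frisbee_spot(p)])
# prints ['Lakefront Park']
```

The point of a tool like Affinder is that the designer, not an opaque model, writes and refines rules of this kind, which is what gives the approach the accountability Feng describes below.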

“The ultimate goal is to create applications that advance human values and support human needs,” Feng said. “In order to do this, there needs to be translation between the data that a machine has and the human interpretation of what it means to be in a certain situation or doing an activity. Affinder solves the translational problem.

“Machine learning is like a black box,” Feng continued. “You can't pick apart some level of cause and effect. There's also not as much accountability. But with this type of tool, there is a human accountability aspect to how these translational algorithms are being developed.”

The irony in developing concept expressions is that AI tools like ChatGPT can help create them because large language models (LLMs) are great tools for generating ideas. Designers can use a chatbot’s suggestions to refine concept expressions.

“We're working on having GPT-4 integration, and we're still conducting more tests to see the exact use of LLMs that we want for this tool,” Ka said. “We've been testing how we want to use GPT-4 to give suggestions and in what situations. We’re also building that at the same time.” Computer science major Nuremir Babanov is working with Ka on this.

Part of the challenge with this research is that the human experience is unique, so people don’t always share the same values, expertise, or opinions. Zhang said it isn’t easy for a computer to learn which areas are most suitable to hold a private conversation. It’s also difficult for a machine to understand how work-related stress differs depending on a person’s job.

“How you would support a doctor who's stressed at the workplace is very different than how you would support a computer scientist or someone who's a blue-collar worker,” Zhang said. “Really understanding those distinctions helps designers have ways to think more broadly about these human issues.”

The research is currently in a testing phase, but researchers have already made some findings. One discovery was that machine learning models are trained on how people talk about activities, not on what people actually do. For example, people who live in warmer climates like Florida don’t typically use the word winter when describing their activities during winter months. To account for this, Zhang plans to extract online reviews using alternative contextual markers, such as the date a review was submitted.

If computers can better understand human experiences, Zhang said, their suggestions will carry less risk. If nothing changes, however, people who continue to depend on technology could be harmed.

“The immediate risk ‒ the narrow risk ‒ is the computer is going to send us to the wrong place,” Zhang said. “We're going to take the computer's suggestion for a ‘safe place to do something with kids,’ and it’s actually not safe. If we're going to keep asking computers for these experiences that we can do in places, it should have a darn good understanding of what we're actually asking for.”

Zhang also believes it’s important to express human experiences articulately so that machines don’t just take our first expressions.

“Our first expression is not necessarily a good expression,” he said. “We need to look inward and to really go deeper and richer into our conceptions of our human experience.”
