Online Users Navigate Mental Health Information, Empowered to Help Make Changes

Access to mental health services is limited, so people are turning to the internet to find the information they need. New research from the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) finds that this information-gathering process is cyclical and involves both search engines and social media platforms. The findings are aimed at empowering online users to help shape design changes. 

The findings are detailed in the research paper, “Seeking in Cycles: How Users Leverage Personal Information Ecosystems to Find Mental Health Information,” which will be presented at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI 2024), May 11-16 in Honolulu, Hawaii. 

The work is part of the CASMI research project, “Safe and Compassionate Machine Learning (ML) Recommendations for People with Mental Illnesses.” The paper’s authors — Ashlee Milton, University of Minnesota PhD student in computer science; Juan F. Maestre, University of Minnesota professor of computer science & engineering; Abhishek Roy, Google staff user experience (UX) researcher in Trust & Safety Research; Rebecca Umbach, Google senior user experience researcher; and Stevie Chancellor, assistant professor of computer science & engineering at the University of Minnesota — interviewed 17 participants with a mental health diagnosis to learn about how they find and assess mental health information. 

“A lot of traditional information-seeking process models are very linear,” Milton said. “So, you start with a question and ask a search engine or social media platform this question to find the answer. But that's not what was happening with our research participants.” 

Instead, the researchers found that participants tended to use a search engine for fact-checking and general information about mental illness. Conversely, participants preferred using social media platforms for stories about personal experiences with mental illness and to better understand new terminology. In some cases, participants sought reliable information on social media from professionals, like therapists. If they questioned what they saw, they would use a search engine to verify the information. 

The information-seeking process doesn’t always start intentionally. Sometimes, social media algorithms recognize when someone is seeking mental health information and recommend new, related content. This implicit process then prompted research participants to learn more about the recommended content. 

This study was inspired by the team’s previous work investigating mental health content on TikTok. They knew that people utilized many platforms (not just one) to find information, and they wanted a deeper understanding of the information-seeking process. 

“How good is that information on social media? How are people validating and knowing that it's high quality?” Chancellor asked. “We wanted to do this study to understand how people conceive of the entire social media ecosystem to help us build more effective recommendations or information generation. One of our larger research goals is to make mental health content safer and better for people.” 

Research participants were recruited on social media and were paid a $25 gift card. To protect anonymity, they were not asked which mental health condition they had. Instead, researchers simply asked if participants had received any form of diagnosis, whether formal or self-diagnosed. Most of the research participants were young white females, and nearly half had at least a master’s degree. Researchers attributed the lack of diversity and small number of participants to the stigma surrounding mental health. 

Milton and Chancellor said the research participants had several promising ideas to improve user experiences on social media. These included controlling the information they saw, allowing multiple identities within a single account to protect privacy, and building a community-based crowdsourcing model for fact-checking. 

“The only big platform that already has a community-based crowdsourcing model for fact-checking is Twitter Birdwatch,” Milton said. “It’s an idea that the community itself would be the one to actually look at information and ask, ‘Is this accurate or not?’ It’s polling from trusted community members.” 

Chancellor said the participants, whom she described as intelligent and creative, gave the researchers some of the best ideas they have ever heard. 

“Our participants really wanted control and agency in the content that they saw,” Chancellor said. “One of our participants called it the ‘fire hose of information’ that they wanted to just be able to stop because they were overwhelmed by that information. Another participant calls some of these platforms a ‘dopamine slot machine.’ They know this isn't good for them. Our users also wanted there to be places where they could put controls, stops, and limits on the kind of content that they would see, at least temporarily.” 

Some research participants also wanted more disclaimers about mental health to expedite fact-checking. For example, they supported COVID-19 disclaimers (which included information about the vaccine) and thought mental health information should be held to the same standard. 

The research team is running workshops to hear more ideas from more participants. Milton said that in another study she is currently running, participants without a computer science background are more idealistic about potential changes. Overall, researchers are interested in hearing ideas before they assess whether they are practical. 

“One of the more creative ideas was from a participant who wanted the ability to create multiple feeds that they called tabs,” Milton said. “They could tailor different feeds depending on what they wanted to see where, depending on whether they were having a good or bad mental health day. This wouldn’t necessarily be content-specific feeds, such as animal content, but it would be more tailored to their wellbeing.” 

After the workshops, the researchers will conduct a larger study focusing on what’s feasible, with the ultimate goal of designing and testing a tool that could help people. 

“The big picture goal is to get the ideas rolling and show that we have some people that currently aren't being supported and bring that to the attention of bigger platforms,” Milton said. “It's interesting to see that people are not only looking for agency in how they want to interact with the recommender. They also want a say in how the algorithm itself works.” 
