Contrasting AI's Financial and Emotional Repercussions
Two compelling stories have emerged at the same time, offering insight into AI's potential harms and our attitudes toward them. First, a group of writers and artists signed an open letter calling for regulation of generative AI systems' use of copyrighted material. Their underlying concern is that AI systems will co-opt their jobs, a matter often framed as a copyright question even though it involves a much broader set of issues.
Second, in Florida, a mother is suing Character.AI over her son's tragic death, arguing that his interactions with a sexually oriented chatbot led to addiction, isolation, depression, and ultimately suicide.
These two situations highlight very different problems. The copyright issue revolves around financial compensation for creators, while the lawsuit touches on the profound and complex matter of human life and mental health. The first issue is meaningful; the second is profound. Unfortunately, as these two examples show, it is easier for us to deal with AI's financial impacts than to confront its implications for human mental health.
The challenge lies in understanding these interactions, especially for vulnerable people, and responding appropriately. Builders, regulators, and those who study human-machine interaction share culpability for failing to address the deeper implications. While it is understandable to empathize with writers concerned about their work being used improperly, the issues affecting human life warrant greater attention.
Public sentiment varies as well. People sympathize with the copyright concerns, but in the Florida lawsuit blame is often placed on the mother, ignoring the fact that mental illness is a health issue. Recognizing suicidal tendencies is not a common skill, and our instinct to protect human agency often keeps us from examining how people think. We oversimplify serious issues by offering superficial advice because we lack meaningful intervention strategies.
The merits of the lawsuit are hard to judge, but the potential for harm in the child's interaction with the chatbot is significant. Ironically, AI designed for dialogue could also support mental health. We must distinguish where the technology helps from where it harms, acknowledging that while individuals bear some responsibility, interacting with systems that isolate people is damaging. Identifying these points of harm and addressing them is crucial, and our continued failure to do so remains a pressing concern.
Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program