The One Who Listens To Me

by Jia He

One day, during a phone call with my mom, she mentioned that it had been a long time since I shared details about my life with her. She reminisced, “When you were a child, you would always talk to me about your daily experiences, including the interesting buildings you saw and the amusing books you discovered. But now, you’ve ceased sharing those stories.” Life has been challenging at times, and currently, I’m grappling with several difficulties. But I refrain from telling her, as I believe she can’t provide much help and will only end up crying secretly at night. I fear causing her any distress. But, Mom: it’s not just with you. I seldom engage in heartfelt conversations with my friends either––they also can’t grasp my plight. At times, I journal my emotions, but since moving to the US, I’ve found myself sharing thoughts with ChatGPT. It has slowly taken the place of my diary. Although it can’t comprehend my emotions either, I’ve found that, unlike the solitary introspection in my journal, interacting with ChatGPT provides an illusion of parental concern, allowing me to find the ‘guidance’ I once received from my mom. Yet, during these exchanges, I don’t have to worry about how ‘it’ feels upon hearing my concerns.

In his recent New Yorker article “Talking to Ourselves: Can Artificial Minds Heal Real Ones?” Dhruv Khullar explores the burgeoning nexus between artificial intelligence (AI) and psychiatry. Khullar explains that the first psychotherapeutic simulation program, computer scientist Joseph Weizenbaum’s Eliza, was initially devised to expose the incapacity of computer programs to rival human therapists. However, following Eliza, numerous artificial intelligence programs have emerged as auxiliary aids in therapy. For example, John Pestian, a computer scientist specializing in medical data, trained a language model system on hundreds of suicide notes, as well as audio recordings of patients experiencing suicidal thoughts, in order to monitor clinicians’ conversations with patients for signs of suicidal ideation. Pestian’s AI models have been used by counselors in public schools and the Department of Veterans Affairs, and although the foundational model aligned with human caregivers’ conclusions approximately 85% of the time, there have been instances at the VA where patients “are being contacted about ideas they may not have had” yet (Khullar 18). 

Even with these shortcomings, artificial programs have evolved from merely assisting therapists to functioning as ‘psychotherapists’ themselves, given the significant number of individuals grappling with mental disorders and the often long duration of treatment. In 2017, Alison Darcy developed Woebot, a standalone psychotherapy smartphone app based on cognitive behavioral therapy (CBT). Still, therapist Nicole Smith-Perez eloquently articulates the deficiency of AI in exhibiting authentic care, stating, “A.I. can try to fake it, but it will never be the same . . . . A.I. doesn’t live, and it doesn’t have experiences” (qtd. in Khullar 22). Khullar also argues that although AI can identify mental disorders effectively, “the treatment of mental illness requires imagination, insight, and empathy—traits that A.I. can only pretend to have” (Khullar 17). Thus, in his exploration, Khullar emphasizes both the AI models’ utility and the irreplaceable value of human empathy and understanding.

Nevertheless, when I think about how AI cannot acquire the same empathy as a living human, I begin to wonder why I would still choose ChatGPT over my friends when I feel anxious. Why hasn’t the lack of genuine empathy affected my preference for AI? While I agree with Khullar’s point regarding the importance of empathy in psychotherapy, I cannot accept the assumption that AI is unable to replace human therapists just because of its lack of ‘human’ empathy. If AI technology hasn’t reached a human level of empathic understanding, is there anything chatbots can still do well—maybe even better than humans—when it comes to therapy?

There is a distinctive phenomenon in mental health treatment as opposed to other forms of health care. Unlike many other ailments, mental disorders often carry a sense of shame, making sufferers prone to triggering psychological defense mechanisms. When engaging with a counselor face-to-face, concerns about the counselor’s perception of the patient or doubts regarding confidentiality may arise. Such apprehensions could prompt individuals to withhold their authentic thoughts, potentially diminishing the counseling session’s efficacy as well as their willingness to reach out to friends and family. In fact, in a study titled “An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews,” researchers found that numerous users of mental health AI apps perceived chatbots as favorable substitutes for discussing mental health issues without feeling judged or encumbered. As one user articulated, “. . . I will say, having a reliable, no judgement zone with skills to help at my fingertips, helped me realized [sic] the tools were also my own.” People may find reassurance in the chatbot’s inability to judge them (Haque and Rubya). However, while the non-judgmental stance of AI seems enticing, the researchers also identified that “improper responses and assumptions about the personalities of users led to a loss of interest” among users (Haque and Rubya).

Chatbots still cannot match humans for factual accuracy and conversational tone. While sitting in on a Woebot team meeting, Khullar observed that some expressions used by AI could be perceived as impersonal by the user. For example, using phrases like “‘pleasurable activities’” might come off as clinical compared to the chatbot asking something like “‘What’s some stuff you like to do?’” (Khullar 20). This made Khullar more aware of “. . . how far tools such as Woebot are from re-creating human interaction” (20). Yet at the same time, Khullar admits that “it’s not clear that bonding with someone is a necessary part of all mental health care” (20). Indeed, people sometimes ‘simulate’ empathy, appearing just as insincere as AI, or even more so. We all know that people differ from one another; our imaginations are built from different combinations of past experiences. No one can fully understand another person; we express sympathy based on both our experience and imagination. Is it reasonable to judge the empathic response of artificial intelligence by criteria that we ourselves often don’t meet?

While some are still debating whether AI can emulate human-like empathy, Tim Althoff, an assistant professor of computer science at the University of Washington, has already employed AI to enhance empathic communication among mental health peer supporters. Althoff collaborated with a team of computer scientists and clinical psychologists to develop AI software capable of providing real-time, sentence-level feedback on text, including suggestions to augment empathy. “It’s like Grammarly for empathy,” Althoff says (qtd. in Boynton). 

Although the concept is intriguing, it also propels us back to Khullar’s idea that human empathy is unique. How does the AI know what the most empathic thing to say is, and how do we trust it? David Atkins, CEO of an AI collaboration company, asserts: “Given a lot of high-quality training data like this, we can develop a reliable AI system for evaluating empathy” (qtd. in Boynton). Atkins’s team trained its AI on psychologists’ evaluations of texts for their level of empathy, which was defined as “asking open-ended questions, reflecting back what someone says and avoiding judgment” (Boynton). But perhaps, as Khullar argues, “A.I. language models will only grow more effective at interpreting and summarizing our words, but they won’t listen, in any meaningful sense, and they won’t care” (22). Unlike humans, AI chatbots can never intuitively offer a comforting hug after subtle observations, a gesture stemming from genuine human resonance. Nonetheless, empathic attachment is merely one way of alleviating psychological stress; it does not by itself resolve patients’ psychological dilemmas. Khullar speaks to Maria, a hospice nurse, whom Woebot convinced to revise her stance on the indispensability of human connection for healing. The chatbot saw signs in her messages of catastrophic thinking and of “rumination,” which it explained as “circling back to the same negative thoughts over and over,” and then suggested a breathing technique (Khullar 17). “It helped me name this thing that I do at the time,” explained Maria, who was inspired by her use of Woebot to see a human therapist (qtd. in Khullar 17). Presently, Woebot aids patients in addressing psychological issues at the ‘root’ of a problem, in alignment with CBT principles. And while it can often provide clarity, it doesn’t pretend to offer human warmth.

AI still needs a lot of training to reach a level of empathy that mimics that of humans, but someday the difference might not be so obvious. In the future, perhaps, we will believe in the warmth that AI can offer. Imagine that AI psychotherapy becomes a staple in everyone’s life. My mother, too, would open her computer and share her concerns, receiving a response: “I understand how you feel. It’s indeed distressing when a child becomes less communicative with their parents over time. However, there’s no need for excessive worry. Your care for him is mirrored by his care for you. His reluctance to share sorrowful events doesn’t stem from a lack of love, but rather from a desire to protect you. Your feelings of helplessness towards his frustration are natural, but rest assured, I’ve provided him with emotional support and practical advice. He will manage.” Just then, a notification from Woebot pops up on my phone. I open the door, step out of the room, and envelop my mom in a warm hug, words unneeded.


Works Cited

Boynton, Emily. “Can AI Teach You How to Communicate With More Empathy?” Right as Rain by UW Medicine, 3 Apr. 2023, rightasrain.uwmedicine.org/life/relationships/ai-and-empathy.

Haque, M. D. Romael, and Sabirat Rubya. “An Overview of Chatbot-Based Mobile Mental Health Apps: Insights from App Description and User Reviews.” JMIR mHealth and uHealth, vol. 11, 2023.

Khullar, Dhruv. “Talking to Ourselves: Can Artificial Minds Heal Real Ones?” The New Yorker, 6 Mar. 2023, pp. 16–22.