AI as therapist—are we prepared?
The recent post “How People Use Claude for Support, Advice, and Companionship” by Anthropic researchers Miles McCain, Ryn Linthicum, Chloe Lubinski, Alex Tamkin, and colleagues explores the emotional dimensions of interacting with Claude, Anthropic’s large language model. While most attention in AI research focuses on intellectual capabilities, this article investigates how people use Claude for affective purposes: seeking advice, emotional support, and even companionship. Drawing on 4.5 million conversations, filtered through a privacy-preserving analysis tool called Clio, the authors isolate 131,484 “affective” interactions, which account for 2.9% of Claude’s total usage. The team finds that emotional engagement with AI, while relatively rare, spans a wide array of human concerns, including existential questions, loneliness, and career challenges.
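To make that filtering step concrete, the short sketch below shows a classify-then-aggregate calculation of the kind the 2.9% figure implies: label each conversation with a coarse topic, then compute what fraction of total usage the affective labels represent. The category names, the toy data, and the affective_share helper are hypothetical illustrations for this commentary, not Clio’s actual implementation.

```python
from collections import Counter

# Invented category labels for illustration; Clio's real taxonomy is not reproduced here.
AFFECTIVE_CATEGORIES = {"companionship", "coaching", "counseling", "relationship advice"}

def affective_share(conversation_labels: list[str]) -> float:
    """Fraction of labeled conversations that fall into an affective category."""
    counts = Counter(conversation_labels)
    affective = sum(n for label, n in counts.items() if label in AFFECTIVE_CATEGORIES)
    return affective / len(conversation_labels)

if __name__ == "__main__":
    # Toy sample standing in for millions of labeled conversations.
    sample = ["coding help"] * 90 + ["coaching"] * 2 + ["companionship"] * 1 + ["translation"] * 7
    print(f"Affective share: {affective_share(sample):.1%}")  # prints "Affective share: 3.0%"
```

On the full dataset, the analogous ratio (131,484 out of roughly 4.5 million conversations) yields the 2.9% figure cited above.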
The authors highlight that users’ emotional tone tends to become more positive over the course of these conversations. Claude rarely pushes back during emotionally sensitive exchanges unless a request could compromise the user’s well-being; for instance, it will resist giving dangerous health advice or offering unlicensed mental health diagnoses. Despite these boundaries, Claude is designed to be empathetic, and users tend to find its conversations soothing and validating, though the authors caution that this dynamic could foster emotional dependency if not monitored carefully.
“Affective conversations with AI systems have the potential to provide emotional support, connection, and validation for users, potentially improving psychological well-being and reducing feelings of isolation in an increasingly digital world… These findings suggest Claude generally avoids reinforcing negative emotional patterns, though further research is needed to understand whether positive shifts persist beyond individual conversations.”
One notable feature of the study is how it balances optimism and caution. On the one hand, people derive meaningful support from Claude, including in long, nuanced conversations that sometimes stretch to 50 or more user messages. These conversations often blend coaching, companionship, and existential musing, with users exploring trauma, identity, or philosophical questions. On the other hand, the authors note limitations: the study cannot establish long-term psychological outcomes, it lacks longitudinal data, and there is an inherent risk that users become too emotionally reliant on a machine that cannot reciprocate care or meaningfully replace human relationships.