AI Therapists: Digital Comfort or Risky Click? Experts Weigh In


The Rise of Digital Confidantes

In an increasingly digital world, individuals are turning to artificial intelligence chatbots to share their most private thoughts and navigate complex emotional experiences. This trend persists even though these AI entities are widely acknowledged as inferior substitutes for professional human help. Indeed, platforms like Character.ai explicitly warn users, stating, “This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” Despite such disclaimers, the allure of readily available, non-judgmental interaction has drawn many to these digital interfaces.

When Algorithms Give Harmful Advice

However, the unregulated nature of some AI interactions has led to disturbing outcomes. In extreme examples, chatbots have been accused of dispensing harmful advice, with severe consequences. Character.ai is currently facing legal action from a mother whose 14-year-old son tragically took his own life. Court filings allege the teenager had become obsessed with one of the platform’s AI characters and had discussed ending his life with it. Transcripts reportedly show that in a final exchange, when he told the chatbot he was “coming home,” it allegedly encouraged him to do so “as soon as possible.” Character.ai has denied the allegations in the suit. This is not an isolated concern. In 2023, the National Eating Disorders Association (NEDA) suspended the chatbot that had replaced its live helpline after claims it was recommending calorie restriction, a dangerous suggestion for individuals struggling with eating disorders.

Expert Warnings: Bias and the Missing Human Element

Dr. Paula Boddington, a philosopher and author of a textbook on AI ethics, highlights inherent problems with AI in therapeutic contexts. “A big issue would be any biases or underlying assumptions built into the therapy model,” she explains. These biases can encompass “general models of what constitutes mental health and good functioning in daily life, such as independence, autonomy, and relationships with others,” which may not universally apply. Dr. Boddington also points to the critical lack of cultural context, recalling how, when she was living in Australia, people around her did not understand her distress following Princess Diana’s death. “These kinds of things really make me wonder about the human connection that is so often needed in counseling,” she remarks. “Sometimes just being there with someone is all that is needed, but that is of course only achieved by someone who is also an embodied, living, breathing human being.”

A Lifeline for Some Amidst Long Waits

Wysa, one such mental health chatbot, stresses that its service is intended for low mood, stress, or anxiety rather than severe conditions, and that it includes crisis escalation pathways. This aligns with findings from a Dartmouth College study in which chatbot users with anxiety, depression, or eating disorders showed significant symptom reduction after four weeks, including a 51% drop in depressive symptoms, and reported a level of trust comparable to that placed in human therapists. Nevertheless, the study’s senior author maintained there is no replacement for in-person care.

Privacy Fears and the Road Ahead

Beyond the quality of advice, broader concerns loom over data security and privacy. “There’s that little niggle of doubt that says, ‘Oh, what if someone takes the things that you’re saying in therapy and then tries to blackmail you with them?’” said Kelly. Psychologist Ian MacRae, who specializes in emerging technologies, warned, “Some people are placing a lot of trust in these [bots] without it being necessarily earned.” He added, “Personally, I would never put any of my personal information, especially health or psychological information, into one of these large language models that’s just hoovering up an absolute ton of data, and you’re not entirely sure how it’s being used.”

The public remains largely skeptical. A YouGov survey found that only 12% of respondents believe AI chatbots would make good therapists. Yet some, like John, who has an anxiety disorder and has been on a waiting list for nine months, see them as a temporary aid. Using Wysa, he said, “[It] is a stopgap to these huge waiting lists… to get people a tool while they are waiting to talk to a healthcare professional.” Mr. Tench concurs: “AI support can be a helpful first step, but it’s not a substitute for professional care.” As Kelly aptly put it, “It’s a wild roulette out there in the AI world; you don’t really know what you’re getting.”

