Study finds AI-assisted therapy viewed as less trustworthy than traditional approaches

Public perception of mental health therapists who use AI in their practice remains largely negative, with people viewing AI-assisted therapy as less trustworthy and less empathetic than traditional approaches. A recent study of physicians published in the Journal of the American Medical Association (JAMA) found that patients generally rated AI-using doctors as less competent and less trustworthy, suggesting similar challenges await therapists as they integrate artificial intelligence into mental health services.

The big picture: The mental health profession is gradually shifting from a traditional therapist-patient relationship to a therapist-AI-patient triad, but public acceptance lags behind the technology’s capabilities.

  • ChatGPT's more than 400 million weekly users include many who already turn to AI for mental health guidance, indicating growing familiarity with AI-assisted therapy concepts.
  • Most therapists currently using AI are still in the early adoption phase, making public perception studies particularly valuable for guiding implementation strategies.

Why this matters: As AI becomes standard practice in therapy, professionals who don’t adapt risk being left behind, while early adopters face the challenge of educating skeptical clients about the benefits.

  • The shift mirrors broader healthcare AI adoption, where initial resistance eventually gives way to widespread acceptance and expectation.
  • Understanding public perception helps therapists communicate more effectively about their AI integration strategies.

Types of AI usage in therapy: Research identifies three distinct categories that generate different levels of public concern.

  • Administrative AI usage typically faces the least resistance, as clients expect streamlined booking, billing, and scheduling processes.
  • Diagnostic and therapeutic AI applications raise more significant concerns about replacing human empathy and personalized care.
  • Clients worry that AI might overshadow the therapist’s attention or compromise the personalized service they’re paying for.

Key client concerns: Prospective therapy clients are asking critical questions about AI integration that therapists must address transparently.

  • Whether therapists have thoughtfully integrated AI or simply added it without careful consideration of therapeutic impact.
  • How AI usage affects data protection and information privacy, particularly given the sensitive nature of mental health conversations.
  • Whether AI represents genuine value-added capabilities or merely a fee-increasing marketing tactic.

The transition ahead: Public acceptance will evolve from initial skepticism to eventual expectation as AI becomes ubiquitous in mental health services.

  • Therapists will eventually compete based on the quality of their AI integration rather than justifying AI usage itself.
  • Those who delay adoption risk becoming fringe practitioners as AI-assisted therapy becomes the standard of care.

What experts recommend: Mental health professionals should begin preparing for AI integration now rather than waiting for broader acceptance.

  • Transparency about AI usage purposes and limitations helps build client trust and understanding.
  • As Forbes AI columnist Lance Eliot notes, citing Dwight D. Eisenhower: “Neither a wise person nor a brave person lies down on the tracks of history to wait for the train of the future to run over them.”
Source: “Public Perception Of Mental Health Therapists Who Make Use Of AI In Their Practice” (Forbes)
