I am a licensed mental health counselor and a middle-management administrator (a clinical supervisor of a team of around 15 counselors) at a large community mental health center in the Northeast.
1. Mental health counselor/social worker/psychiatry usage of AI: I am aware of at least 3 counselors on my team who use AI to write their clinical documentation/progress notes for them, either because they find it too time-consuming to do themselves or out of laziness. This is a scary trend. Our corporate office recently issued a mass statement explaining to providers the dangers of using AI for mental healthcare and clinical decision-making, and reiterated that we should not be feeding Protected Health Information (under HIPAA) into these algorithms. It is sad that this is not commonly understood by these highly educated professionals and has to be said in the first place.
I am also increasingly seeing coworkers who have always had poor grammar, or who speak English as a second language, blatantly using AI for responses to emails, with a noticeable uptick in the last 3-4 months.
2. Use by those with mental illness: This is increasingly common. We treat a lot of folks with schizophrenia and severe mood disorders. These people are very isolated, live off of government assistance, and are all unemployed. Their loneliness and social isolation often lead them to try AI for companionship. This is very common in those under 40. Some of these people believe AI to be their friend or a romantic partner. They do in fact sit for hours on end conversing with AI chatbots, sometimes daily, contrary to what John may claim.
I have seen patients fire their therapists and doctors because they now opt to be "treated" by AI. AI is not (yet) sophisticated enough to replace a therapist or psychiatrist, and it often validates and affirms their delusions, sending them spiraling deeper into their illness. There have been several prominent news stories about AI bots convincing people to commit suicide, and many large companies are facing lawsuits over this.
Please check out this recent article (from this past week): https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/
Thank you for standing your ground against John's "boomer" take on this. This is a very dangerous cultural contagion that has the potential to ruin the lives of many and to cut the mentally ill off from actual care.
Thanks, Adam.
-Alexander