Confidence is like happiness: a state that ebbs and flows. It takes different shapes and looks different throughout our lives, and even the most confident person will experience spikes of insecurity. As a clinical psychologist, I feel confident in my ability to connect with clients and help them through life transitions. I’ve also had my share of therapy breakups and ruptures, and of course I’ve missed the mark. But those instances are not the ones that spiked my insecurity. What stung my confidence with one client took me by surprise.
A client I hadn’t seen for some time told me that during our break from therapy, she had used AI for emotional support following a job loss.
“You did?” I asked, trying not to appear too surprised. She described the prompt she had used on a popular AI website and how it provided validating statements and resources, like websites, online support groups, and podcasts, that matched her challenges.
I nodded along, and before we knew it, we were both referring to this AI therapist as “she.”
My client explained how, after a few iterations and “visits,” the website seemed to know what she was looking for and would offer supportive statements and even questions to help her reframe the situation.
Like many, I’ve been amazed by the way AI has made its way into so many areas of my life, while also feeling that surely my profession is too complex to be replaced by computer code. Certainly the helping professions, those that rely on interpersonal connection and a deep understanding of the complex human experience, are immune to AI?
Not so fast.
Evidence already shows that people are open to AI-generated psychotherapy. It offers easily accessible, low-cost services that can be tailored to an individual’s needs and language (1). Detailed analysis of facial expressions and language patterns can lead to faster diagnoses and precise interventions. There is also evidence that people feel less judged by a virtual therapist, which overcomes a major barrier to seeking services.
Nevertheless, I did not expect a faceless AI therapist to “show up” in my office so quickly and in this manner. On the one hand, I was glad my client was resourceful enough to seek help, especially since she seemed pleased with the therapeutic services of a basic AI engine. On the other hand, I felt a tinge of… insecurity. In session, I chose the route of curiosity, seeking to understand how and why she sought this support and what she was able to take from it.
I later decided to attend my own AI-therapy session. I brought to a popular AI website an issue that has been bothering me lately, related to the boundaries of our family schedule. I tried various prompts, wording the issue differently and using different follow-up responses.
For instance, sometimes I worded prompts in a way that included my emotional reaction (“It’s been frustrating that…”), and other times I focused solely on my dilemma or the choices available (“I have to RSVP to another event”). Sometimes I included a clear question (“What should I tell them?”) or a possible outcome (“If I don’t go, I might not be invited again…”).
I tried 12 different prompts. Only on two occasions did the AI therapist (“she”?) offer me a supportive statement (“this is a very difficult situation” and “that can be a hard choice to make”). Each time, I was given reframing suggestions and a few concrete strategies, such as writing a pros-and-cons list or, surprisingly, seeking advice from a trusted person (presumably, a human). By looking at the references, I was able to find helpful resources like websites, though these were not included in the response unless I asked for them.
Overall, I found that the limits of this AI therapist became evident fairly quickly. The strategies were formulaic (e.g., pros/cons lists, breathing exercises), and I simply didn’t feel the empathy that a good friend would provide. I also wondered where my information was going and how it was stored. This is not a new concern about AI-generated advice; others have raised concerns about data security breaches, misuse of private health information, and a lack of accountability and regulation (2).
The issue I sought “her” support for was relatively simple, or at least common (“How do I choose between two social events?”). I’m not yet convinced that existing AI models can manage complex presentations such as complex PTSD or dissociative disorders, where symptoms overlap with those of more common disorders yet require specialized interventions. Although AI therapists can successfully identify users who would benefit from a particular therapeutic intervention, their capacity to accurately diagnose and implement those interventions is debatable.
For now, it seems that AI therapy is not a standalone solution, at least for chronic or complex mental illness. While it can certainly support the analytical aspects of the work, it has yet to fully replace moral judgement and empathy.
In my case, despite the little sting I felt when my client sought advice elsewhere, I remind myself that she kept her appointment with me afterwards. Certainly, this AI therapist couldn’t have been that good.
References
Ali, A. B., & Joseph, A. (2024). The integration of artificial intelligence (AI) into mental healthcare: Human–AI collaboration for sustainability. Frontiers in Psychology, 15, 1378904. https://doi.org/10.3389/fpsyg.2024.1378904
Elyoseph, N. (2024). The role of artificial intelligence in psychotherapy: Opportunities and limitations. Journal of Multidisciplinary Healthcare, 17, 4011–4027. https://doi.org/10.2147/JMDH.S471074
Jackson, J. (2023, July 1). Psychology embracing AI. Monitor on Psychology, 54(4). https://www.apa.org/monitor/2023/07/psychology-embracing-ai