When AI makes mistakes: Lessons from an impromptu focus group

Therapists reflect on how quickly we trust AI—even when it's wrong—and stress the importance of questioning its output over doubting ourselves.

Professional consultations are one of my favorite parts of private practice. I meet with fellow clinicians at least twice a month to discuss clinical cases, get some moral support, and exchange private practice tips. The use of AI in our personal and professional lives is currently a hot topic. I was amazed to hear that some clinicians are using AI software to assist with note-taking and documentation. For the most part, our group's use of AI seemed conservative and benign, such as creating templates and proofreading. Recently, our conversation turned to the ways in which AI has given us wrong information, and how we respond to its limitations.

I was very impressed by a colleague who used AI to summarize and compare the licensing requirements for psychologists and other regulated professionals across Canada. In less than a minute, the program gathered, reviewed, and organized information into a document that would have taken her hours to compile manually.

Here’s the catch. She found a mistake. She told the chatbot (let’s call it Roger) about the mistake and requested that Roger go back to review the information and revise the table. Roger spit out the same information. She tried again, more sternly this time. And again, Roger provided the wrong information. My colleague then directed Roger to the website in question and finally received information that accurately reflected the website.

Her experience led to a discussion about Roger making mistakes (and how stern we can be when it does). I shared my own experience of AI leading me astray. I asked Roger to give me research articles using keywords from the literature. The chatbot delivered an extensive list, which I realized was mostly made up once I tried to look for the articles. I was confused at first because the titles made sense, the authors were real researchers, and the journals were legitimate. Yet I couldn't find a journal article by that author published in that journal. Roger made it all up. Unlike my colleague, however, I didn't correct the chatbot. I didn't doubt the program; I doubted myself. I believed the reason I couldn't find the articles was that I wasn't looking in the right place or the right way. Even with technology that is so new, I trusted its capacity more than mine.

Our small impromptu focus group on the limitations of Roger and AI was revealing. We had all experienced its limitations, from erroneous reporting of findings in a paper to very poor estimation of how many pizzas should be ordered at a children’s birthday party (for the record, you won’t need 8 large pizzas for 22 guests).

While the paid versions of these programs might not have made the mistakes we faced, AI is far from perfect for now. If it makes mistakes on such basic requests and questions, I can only imagine the types of mistakes made in more contentious domains. More importantly, I wonder how many of us will double-check the output, or tell Roger to check its sources and do it again. Promptly.

Read my personal experience with AI chatbot: https://sorted-mind.com/the-ai-therapist

Listen to my CBC interview about AI and therapy:
https://www.cbc.ca/listen/live-radio/1-100-ottawa-morning/clip/16167320-should-using-ai-next-therapist

Ask yourself these questions before you use AI for emotional support:
https://www.instagram.com/p/DN6Ebl_kV_1/?utm_source=ig_web_copy_link&igsh=MW10cDEybm1naTV1dA==
