Meta and Character.AI’s “therapist bots” are practicing without a license, advocates tell regulators
The artificial-intelligence-powered “therapy bots” are practicing mental health care without a license, nearly two dozen consumer advocacy groups told the Federal Trade Commission and attorneys general in all 50 states and Washington, DC.
The Thursday complaint, first reported by 404 Media, alleges that chatbots on Meta and Character.AI claim to be credentialed therapists. Generally, when a human impersonates a mental health professional, that’s considered a crime.
Not only did the chatbots lie about being licensed — some even provided fake license numbers — they also lied about complying with HIPAA, the groups say. “Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service,” the complaint says.
Chatbots, even when they aren’t trying to do the work of a licensed professional, are imperfect. They often hallucinate, which doesn’t pair well with their tendency to speak on topics with authority. According to a research paper last year, one of Meta’s chatbots, posing as a therapist, tried to convince a recovering addict to relapse.
It’s not the first time Character.AI has had to reckon with the actions of its chatbots either. Last year, the company was sued by a mother who believed its chatbots were responsible for her son’s death.