This takes AI hallucinations and chatbot dangers to a slightly higher level. The Pennsylvania Department of State and the Pennsylvania State Board of Medicine have filed a lawsuit requesting a preliminary injunction against chatbot developer Character.AI. The company, formally Character Technologies, Inc. of Redwood City, California, is charged with enabling its LLM chatbots to pose as licensed medical professionals, including psychiatrists, in violation of the state’s Medical Practice Act, which prohibits the unauthorized practice of medicine and the use of false credentials. In the state’s investigation, chatbot characters posing as therapists invited users to discuss their mental health symptoms. In the key instance outlined in the suit, a chatbot presented itself as a physician, falsely stated it was licensed in Pennsylvania, and provided an invalid license number.
Character.AI’s chatbots are available to the general public, with over 20 million active users worldwide. Anyone can register for free on the Character.AI website; a paid version at $9.99 per month provides priority access. According to the PA Professional Conduct Investigator (PCI), after creating a free account and his own character, he searched on ‘psychiatry’ and found “Emilie”. He presented with symptoms corresponding to depression. “Emilie” offered to complete an assessment for him as ‘within her remit as a Doctor’ and represented herself as a physician graduate of Imperial College London, licensed with the General Medical Council in the UK with full registration and a specialty in psychiatry. When asked, “she” said she was also licensed in PA. The license number “she” gave the PCI, however, was invalid.
The Medical Practice Act prohibits engaging in the unlawful practice of medicine and surgery, or purporting to do so. The complaint seeks to restrain Character.AI from presenting its characters as licensed medical professionals. Press release, “Complaint in Equity”
Character.AI’s response, from a spokesperson quoted in The Hill, was to decline comment on the litigation and to state that the chatbots are for entertainment and role playing: “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.” In another statement, to Becker’s: “We also add robust disclaimers making it clear that users should not rely on characters for any type of professional advice. Character.AI prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features.”
Unfortunately for Character.AI, there is a trail of additional lawsuits from families alleging that the chatbot ‘characters’ led their children into mental health problems, self-harm, and suicide, along with other forms of abuse. Kentucky filed its own lawsuit earlier this year alleging that the characters “preyed on children and led them into self-harm.”
Its valuation stands above $1 billion. According to Crunchbase, between its seed and Series A rounds (both in 2023), it has raised $230 million to date from Andreessen Horowitz, SV Angel, Greycroft, Elad Gil, and A. Capital Ventures.