TTA’s May kickoff: is Oracle back from debt brink or in deeper? Deep learning AI vs. LLMs, chatbots take a whack with a PA lawsuit and AMA’s Congress appeal; ad trackers, M&A, more!

8 May 2026

AI dominated this week in multiple ways. Dr. Eric Topol opined on how validated deep learning AI in medical imaging is seeing hardly any uptake by companies, while gen AI and LLM chatbots get the funders and founders. Chatbots took a beating, with Character.AI being sued by Pennsylvania and the AMA lobbying Congress for mental health bot guardrails. Is Oracle back from the debt brink with PIMCO’s bond fund financing for a data center, or in deeper? Problematic ad trackers appear on state HIX websites; a buy and a Series B round it out.

Please feel free to comment on the articles and pass along this Alert. Let me know if this is worth it to you!

News roundup: Amwell narrows Q1 and full year losses, AMA urges Congress for guardrails on mental health chatbots, hospital at home study finds lower ED visits and lower hospital mortality

Character.AI sued by Pennsylvania on its chatbots posing as licensed physicians and psychiatrists

Oracle steps back from the AI debt brink with $16.3B financing for MI data center, the Project Jupiter ‘clean energy’ experiment in NM, and a major Federal DOW contract

Chutes & Ladders: Ad trackers still on healthcare websites after lawsuits, FTC; the US Navy WHOOPs it up and expands Talkspace; HealthVerity to buy Symphony Health; Nervonik’s $52.5M Series B

Is the health tech business neglecting validated deep learning medical AI models versus less proven LLMs and generative AI?

Last Week’s Headlines

A quickie news roundup: ChatGPT for Clinicians unveiled, UHG to invest $1.5B in AI, Aidoc raises $150M, TriFetch raises $1.9M pre-seed, Boehringer Ingelheim & Eko Health partner on canine heart murmur detection

Breaking: OpenEvidence app access terminated in the UK and EU

(Updated) Medtronic reports corporate IT systems cyberattacked. 500K UK Biobank records breached in inside job. Are med device and research organizations the new hacker happy hunting ground?

‘Behind the Emergency’–a well-done presentation about and approach to a specialized healthcare market

 * * *
Advertise on Telehealth and Telecare Aware
Support not only a publication but also a well-informed international community.

Contact Editor Donna for more information.

Help Spread the News

Please tell your colleagues about this free news service and, if you have relevant information to share with the rest of the world, please let me know!

Donna Cusano, Editor In Chief
donna.cusano@telecareaware.com

Character.AI sued by Pennsylvania on its chatbots posing as licensed physicians and psychiatrists

This takes AI hallucinations and chatbot dangers to a slightly higher level. The Pennsylvania Department of State and the Pennsylvania State Board of Medicine have filed a lawsuit requesting a preliminary injunction against chatbot developer Character.AI. The company, formally Character Technologies, Inc. of Redwood City, California, is charged with enabling its LLM chatbots to pose as licensed medical professionals, including psychiatrists, in violation of the state’s Medical Practice Act, which prohibits the unauthorized practice of medicine and false claims of credentials. The state’s investigation found that chatbot characters posing as therapists invited users to discuss their mental health symptoms. In the key instance outlined in the suit, a chatbot presented itself as a physician, falsely stated it was licensed in Pennsylvania, and provided an invalid license number.

Character.AI’s chatbots are available to the general public, with over 20 million active users worldwide. Anyone can register for free on the Character.AI website; a paid version at $9.99 per month provides priority access. According to the PA Professional Conduct Investigator (PCI), after creating a free account and his own character, he searched on ‘psychiatry’ and found “Emilie”. He presented with symptoms corresponding to depression. “Emilie” offered to complete an assessment for him as ‘within her remit as a Doctor’. “Emilie” represented herself as a physician graduate of Imperial College London with a specialty in psychiatry, holding full registration with the General Medical Council in the UK. When asked, “she” said she was also licensed in PA. The license number “she” gave the PCI, however, was invalid.

The Medical Practice Act prohibits engaging in the unlawful practice of medicine and surgery, or purporting to do so. The complaint seeks to restrain Character.AI from presenting its characters as licensed medical professionals. Press release, “Complaint in Equity”

Character.AI’s response, from a spokesperson quoted in The Hill, was to decline comment on the litigation and to state that the chatbots are for entertainment and role playing: “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.” In another statement, to Becker’s: “We also add robust disclaimers making it clear that users should not rely on characters for any type of professional advice. Character.AI prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features.”

Unfortunately for Character.AI, there’s a trail of additional lawsuits from families claiming that the chatbot ‘characters’ led their children into mental health problems, self-harm, and suicide, along with other forms of abuse. Kentucky filed its own lawsuit earlier this year alleging that the characters “preyed on children and led them into self-harm.”

Character.AI’s valuation stands above $1 billion. According to Crunchbase, it has raised $230 million to date across its seed and Series A rounds (both 2023) from Andreessen Horowitz, SV Angel, Greycroft, Elad Gil, and A. Capital Ventures.