TTA’s May kickoff: is Oracle back from the debt brink or in deeper? Deep learning AI vs. LLMs; chatbots take a whack with a PA lawsuit and the AMA’s appeal to Congress; ad trackers, M&A, and more!

8 May 2026

AI dominated this week in multiple ways. Dr. Eric Topol opined on how validated deep learning AI in medical imaging is seeing hardly any takeup by companies, while gen AI and LLM chatbots get the funders and founders. Chatbots took a beating, with Character.AI sued by Pennsylvania and the AMA lobbying Congress for mental health bot guardrails. Is Oracle back from the debt brink with PIMCO’s bond fund financing for a data center, or in deeper? Problematic ad trackers appear on state HIX websites; an acquisition and a Series B round it out.

Please feel free to comment on the articles and pass along this Alert. Let me know if this is worth it to you!

News roundup: Amwell narrows Q1 and full year losses, AMA urges Congress to set guardrails on mental health chatbots, hospital at home study finds lower ED visits and lower hospital mortality

Character.AI sued by Pennsylvania over its chatbots posing as licensed physicians and psychiatrists

Oracle steps back from the AI debt brink with $16.3B financing for MI data center, the Project Jupiter ‘clean energy’ experiment in NM, and a major Federal DOW contract

Chutes & Ladders: Ad trackers still on healthcare websites after lawsuits and FTC action; the US Navy WHOOPs it up and expands Talkspace; HealthVerity to buy Symphony Health; Nervonik’s $52.5M Series B

Is the health tech business neglecting validated deep learning medical AI models versus less proven LLMs and generative AI?

Last Week’s Headlines

A quickie news roundup: ChatGPT for Clinicians unveiled, UHG to invest $1.5B in AI, Aidoc raises $150M, TriFetch raises $1.9M pre-seed, Boehringer Ingelheim & Eko Health partner on canine heart murmur detection

Breaking: OpenEvidence app access terminated in the UK and EU

(Updated) Medtronic reports corporate IT systems cyberattacked. 500K UK Biobank records breached in inside job. Are med device and research organizations the new hacker happy hunting ground?

‘Behind the Emergency’–a well-done presentation about and approach to a specialized healthcare market

 * * *
Advertise on Telehealth and Telecare Aware
Support not only a publication but also a well-informed international community.

Contact Editor Donna for more information.

Help Spread the News

Please tell your colleagues about this free news service and, if you have relevant information to share with the rest of the world, please let me know!

Donna Cusano, Editor In Chief
donna.cusano@telecareaware.com

Is the health tech business neglecting validated deep learning medical AI models versus less proven LLMs and generative AI?

Eric Topol, MD answers his own startling question, contrasting medical imaging with decision support for both clinicians and patients. His recent Substack ‘Ground Truths’ article (link below, free access) will make you think harder about what is being sold as ‘medical AI’ versus what has actually been validated through multiple studies.

Imaging AI is the Undiscovered–but Mapped Out–Country. Deep learning (DL)-based AI models developed using medical imaging have had substantial validation over more than a decade, and the pace is accelerating. Multiple validated studies have used information from retinal scans as predictors of future medical conditions such as Parkinson’s, heart disease, stroke, and Alzheimer’s disease. The retina is apparently a diagnostic gateway to nearly every organ, and many studies have focused on it because retinal scans are fairly routine. Other AI-assisted models have used deep learning to detect multiple health conditions, including thymus and cardiovascular conditions, through mammography and colonoscopy and, importantly, pancreatic and other cancers from computed tomography (CT) images done for other reasons. “Opportunistic AI” alone is being used to detect a long list of health conditions. Dr. Topol’s point is that none of these new diagnostic methods has made it into standard practice, despite being used in other countries such as China (PANDA) and with at least four companies developing retinal AI to detect specific diseases.

Medical LLMs and generative AI, on the other hand, are building what may be Castles in the Air. Seemingly everyone is developing, funding, and selling LLM-based chatbots for diagnosis, care management, patient triage, and direct patient use. Unfortunately, they are being sold without real, continuing evidence from rigorous studies over time. What studies exist are generally simulations, small-scale studies, or individual case studies that need further real-world validation. The clinical trials, the infrastructure, and the monitoring for safety, effectiveness, and cost are simply not there yet, and it’s past time (Raj Manrai, quoted in Science). In addition, generative AI models keep changing, making it harder for studies to track results over time. Dr. Topol’s conclusion: “In summary, there is very little evidence for LLMs benefiting patients or doctors for health outcomes.”

As Dr. Topol notes, that is not to say that AI won’t grow in usefulness in areas such as medical research, chart summaries, discharge instructions, translations, and administrative work such as billing code documentation, clinical workflow, and insurance authorization. AI has already worked its way into revenue cycle management (RCM), where no respectable company is without an AI-enabled tool. The American Medical Association (AMA) study he cites indicates both current use and growing acceptance by physicians. (To this Editor, it resembles the telehealth usage graphs of a decade ago, and she expects the same progress.)

He calls it a paradox between imaging AI and LLMs. This Editor calls it a shame that healthcare technology and investment keep chasing what’s easy, ‘sexy’, and generates fast revenue/ROI, rather than what is more difficult but proven and could have a huge impact on health outcomes.

Dr. Topol’s closing is fitting:

Let’s fix this paradox of medical AI implementation. It’s a two-fold and major undertaking. Amping up the use of medical AI where it’s proven and performing the clinical trials required to justify wide-scale adoption where pivotal evidence is lacking.