Two gimlety views of a hot AI player, Hippocratic AI. 2025’s first $1.6 billion unicorn, following its January $141 million Series B venture round, is on a hiring streak. Its ‘safety-focused generative AI for healthcare’, built on large language models (LLMs), has produced AI-powered virtual “nurses” of varying ages, ethnicities, and languages that interact with patients on post-discharge follow-ups after medical and hospital visits, chronic care management, and medication reminders. The stated purpose of these agentic AI nurses is to close the gap created by healthcare staff shortages.
On its website, the recorded demos of the ‘nurses’ roleplaying with real nurses sounded responsive, warm, convincing, and uncannily human to this Editor–the opposite of the AI agents this Editor has dealt with, which are robotic, clearly confined to narrow scripts, and easily thrown off by off-script or critical questions, resulting in long silences and cutoffs. But none of the ‘patients’ threw curveballs, talked about multiple conditions, or deviated much in the Q&A. In other words, the bots got to shine.
Last month, it released its Polaris 3.0 product, which orchestrates 22 LLMs totaling 4.2 trillion parameters, trained on proprietary data, clinical documentation, regulatory documents, and other materials, then tested by clinicians. It claims a clinical accuracy rate of 99.38%, up from 98.75% for the previous Polaris 2.0, along with upgrades in audio processing, agent emotional quotient, and clinical documentation. At the end of the month, Hippocratic AI announced seven new hires at the C and VP levels, coming from major companies such as Amwell and Blue Shield of California, non-profits such as Population Services International, and smaller healthcare companies such as Sidekick Health and Notable. Quite a move for a company formed only in February 2023, 22 months ago. Release, Mobihealthnews
Hippocratic AI is on a roll, but as with any sudden rise and particularly with LLM AI, the technology and claims are being examined closely. Two articles on Substack take a critical view–this Editor strongly suggests getting a cuppa and taking time to read them carefully.
The shorter of the two (and open to all) is Thomas W. Dinsmore’s “No Bots, Please”. He examines Hippocratic AI’s tech from the perspective of someone highly experienced in machine learning tools and platforms from back when we called them algorithms. He raises a number of cautions and yellow flags:
- Testing was apparently not done on real patients.
- Their claims of Polaris’ accuracy are not benchmarked externally. “Independent benchmarks are impossible with a proprietary model like Polaris.”
- LLMs are ubiquitous, costs are coming down, and investing in a company with proprietary LLMs may not be the smartest move.
- Hippocratic AI has many healthcare ‘partners’ (this Editor counted 15), many glowing endorsements from healthcare leaders, and an impressive timeline. The rub? The partners are likely not paying anything, which is unusual for a Series B company, much less a unicorn. (I can personally confirm this last point.) Everyone will say nice things about you if you are free.
- He points out two types of value creation in healthcare. The first is doing a routine or even highly skilled process faster and better. The second is creating an entirely new capability.
- The touted healthcare shortage number is greatly overstated. The 15 million healthcare worker shortage cited by the WHO and Hippocratic is global. The need in developing nations is primarily for midwives, whose work can’t be done virtually. In the US, the number reported by the Bureau of Health Workforce (BHW) is closer to 2 million: about 1 million nurses, with the rest LPNs, adult psychiatrists, family physicians, internists, and other clinicians.
- The Hippocratic pitch is that its bots can replace nurses. The irony is that what the bots do best–patient education–isn’t being done by nurses, based on a McKinsey study of nursing workloads. Nurses’ biggest time sink is updating documentation, followed by hunting and gathering (everything from locating patients to finding equipment) and medication administration–none of which Hippocratic AI’s bots can do.
- Bots have very limited use with mentally ill patients — and negative for those in crisis.
Conclusion: the model is shaky, and the effort put into creating bots is wasted. Far more useful in healthcare would be a system that prioritizes, flags, and notifies staff when a deteriorating patient needs a call from a real human being.
Want more? The Really Long Read is Sergei Polevikov’s Hypocritical AI: $45/Hour Human Nurses Babysitting $9/Hour “AI Nurses” in AI Health Uncut. It may be only partially viewable by non-subscribers (and it’s worth the small amount). It is a very deep and critical dive into Hippocratic AI, whether what it is selling is real (red flag–its releases contain a lot of numbers that aren’t verifiable), and VC hype. The main sections of the article:
- What exactly Hippocratic AI is selling, including poor workflow and EHR integration
- What’s really under the hood of its “AI agents.”
- The reality behind “AI nurses”—and the human nurses tasked with babysitting them.
- Whether Hippocratic AI fine-tuned its models using confidential insurance or health data obtained from Health IQ.
- The toxic corporate culture, code of silence, and reliance on H-1B/O-1 visa “slavery.”
Plenty of references and comparisons. If you aren’t skeptical after the Dinsmore article, you will be after this one. A must-read.