Amwell sees a light at the end of the tunnel, even as losses continue. Their Q1 closed with a reduced net loss of $10.3 million, versus the prior year's $18.4 million and Q4 2025's $25.2 million. Revenue was also lower at $54.9 million, down 18%, but exceeded their prior guidance. Subscription revenue, Amwell's current focus, decreased about 23% from the prior year to $24.9 million. Adjusted EBITDA also moved in the right direction, to a loss of $3.1 million versus Q4 2025's $10.3 million. For Q2, Amwell projects revenue in the range of $48 to $52 million and an adjusted EBITDA loss in the range of $2 to $4 million. Full-year revenue guidance remains at $195 to $205 million, with an adjusted EBITDA loss between $12 and $16 million, a reduction from prior projections. Healthcare Dive, Amwell Q1 statement
The American Medical Association (AMA) asks for more guardrails on AI mental health chatbots. In three letters sent to the House and Senate Artificial Intelligence Caucuses and the Congressional Digital Health Caucus, the AMA raised concerns about emotional dependency on AI systems, the potential distortion of reality through prolonged interaction with chatbots, and the current lack of consistent safety protocols. They outlined several areas needing attention:
- Greater transparency, ensuring that users clearly understand when they are interacting with an AI system rather than a human being. Chatbots should not present themselves as a licensed clinician or a human being. [See our earlier article on Pennsylvania's suit against Character.AI]
- Clearer regulatory boundaries around how AI chatbots are used in mental healthcare, including oversight of their use in diagnosis and treatment.
- A risk-based framework, with lawmakers directing agencies to clarify when AI tools qualify as medical devices.
- Built-in safeguards from developers, such as crisis-detection capabilities that can identify potential self-harm risk, direct users to appropriate resources, and de-escalate harmful situations.
- Ongoing safety monitoring, mandatory reporting of adverse events, and stricter standards for tools used by children and adolescents.
- Limits on commercial influence, including restrictions or bans on advertising within mental health chatbots, and assurance that chatbot responses are not 'influenced' by financial incentives.
- Robust data protection standards, including: limits on the amount of data collected and stored, safeguards to prevent unauthorized access or sharing of sensitive information, and clear user consent for data use.
Stanford's recent research confirmed what is by now common knowledge: the LLMs behind these chatbots pose significant risks by providing inappropriate responses, introducing bias, and perpetuating stigma, which can result in dangerous consequences. AMA release, Mobihealthnews
A Medicare beneficiary study compares hospital at home outcomes with traditional inpatient stays, and finds some good results. The paper, published in JAMA Network Open, found that among over 15,000 patients (hospital at home, 4,174; inpatient, 11,697), treatment via "hospital at home was associated with significantly lower in-hospital mortality and emergency department (ED) use within 30 days of index admission discharge, with no significant difference in hospital readmissions within 30 days of index admission discharge compared with traditional inpatient care." The study concluded that hospital at home may maintain the same or better short-term outcomes for "appropriately selected patients" (not specified) and that "future studies should evaluate implementation and equity". The vast majority of patients in the hospital at home sample (nearly 97%) were urban. Healthcare Dive, JAMA Network Open