Two weekend ‘must reads’: the New Yorker’s Sam Altman/OpenAI exposé–and comments; a further deep dive into Carbon Health’s implosion

Too long to summarize or opine on this week–but a must for your weekend reading. Grab the cuppa for the talk of AI World–a New Yorker dissection of Sam Altman, the CEO of OpenAI (link below). To say it is an exposé worthy, at first glance, of the Old School (ain’t no school like the Old School–Ed.) on probably the most important company of AD 2026 is to undersell it. It’s a long article and you’ll need at least one break.

OpenAI, founded as a non-profit with integrity at its core to “prioritize the safety of humanity over the company’s success, or even its survival”, recapitalized last year as a for-profit corporation with 26% of the shares owned by the OpenAI Foundation. It is now a trillion-dollar company that had no trouble raising a paltry $122 billion last week [TTA 2 April], though arguments are made that at least some of that money is IOUs or contingent. ChatGPT has become almost generic for AI, as Kleenex has for tissues. Control and direction of the company now rest entirely with Sam Altman, about whom former colleagues are not shy in pointing out a difficulty with the truth and a pattern of deceit–for instance, toward his board, employees, and Microsoft. Yet everyone continues to do business with him. The FOMO Factor is very strong.

Mr. Altman makes extremely broad statements on the future of work (most traditional managerial, healthcare, and IT jobs will be taken over by AI, thus most of us will be unemployed), has easy access to President Donald Trump as well as other world leaders, and may, as the headline barks, control our future. Thus, he is a person of consequence.

My read so far is that within OpenAI, there is no one to counterbalance Mr. Altman’s immense ambition and his desire to dominate and win, not only in AI but over all business and everyday life. These are character issues that also show up in aspects of his personal life, detailed in the article. If past results are predictive, this flaw usually curdles into a desire to control countries and a complete disrespect for the rest of us just leading our lives.

Sam Altman May Control Our Future–Can He Be Trusted?

I will offer two LinkedIn comment posts on this article from an AI person I respect, the head of Curiouser.ai, Stephen Klein. Many of his posts on LinkedIn deal with what AI can and cannot do in business. He writes that he is “committed to designing technology that augments people, creates jobs, and elevates humanity. It’s time we all got back to thinking for ourselves.” 7 April, 8 April 

Our second Must Read is from Sergei Polevikov’s AI Health Uncut, a long analysis of the failure of Carbon Health and what it tells us about “this business we have chosen”. “What The Hell Went Wrong?” and its implications need answers–because it’s being repeated again and again. Today’s article (9 April), Part 1 of 2, sets the stage on the mistakes made (insiders talk) and, with full credit, springboards off Stuart Miller’s (Haverin Consulting) original analysis made at the time of the Chapter 11 reorg. What we called the ‘Ominous Parallels’ was a Must Read here on 12 February. TTA (as Telecare Aware, our original name) and this article are also mentioned twice (thanks!).

Those who have yet to subscribe to Mr. Polevikov’s analytic, erudite, and revealing (Emperor’s New Clothes!) POVs can read part of this article for free–but seriously, if you’re in this business, the subscription is worth your money. He also podcasts (links are on his Substack, link at lower right sidebar).

An early and scandalous publisher (before he utterly lost it), Matt Drudge, used to say that he ‘went where the stink is’. Mr. Polevikov does the same. The stink is of our broken primary care reimbursement system, the Covid steroids that pumped up the company, flailing management running through money like drugs, and good ideas for patient care buried under incompetence. 

A study in contrasts: OpenAI raises $122B, eMed’s $200M Series A. Then there’s Avo’s $10M Series A, Stedi’s $50M Series C. And Oracle expands Nashville campus!

Your Editor is feeling a little whipsawed this usually quiet pre-Easter and Passover week. We opened with 30,000 Oracle employees losing their jobs. Yet even as Oracle cuts, there’s plenty of money out there looking for an investment home. Some rounds are huge–if it’s AI or GLP-1, you can bet on BIG–but most fundings for startups and early-stage companies are modest in a pre-2019 way. The money that’s out there lines up for ‘sure things’.

OpenAI had no problem raising $122 billion as it moves to conquer the AI World (and maybe the Universe) via ChatGPT. Considering their claim that they are generating $2 billion in revenue per month, just replace the millions raised in the earlier digital age with billions. There’s a laundry list of investors including institutions, individual investors via banks, plus exchange-traded funds managed by ARK Invest. The anchor investors are strategic partners Amazon, NVIDIA, and SoftBank, with continued participation from Microsoft. SoftBank co-led the round alongside a16z, D. E. Shaw Ventures, MGX, TPG, and accounts advised by T. Rowe Price Associates. The release notes leadership in consumer AI and growth in enterprise AI; as noted here, in January OpenAI debuted ChatGPT for Healthcare (enterprise) and put into test ChatGPT for Health (consumer).

At a ‘virtual VC conference’ earlier this week, one investor panelist estimated that 14% of venture capital funding in 2025 went to exactly two companies, OpenAI and Anthropic (Claude). That disproportion rings alarm bells to this Editor, who well remembers the ludicrous dot-com boom/bust, and even earlier the insane financing that went into (mostly failed) airlines during deregulation–including the airline she worked for.

Another healthcare segment that hasn’t had much problem raising funds is e-prescribing of GLP-1 drugs. Miami-based eMed raised $200 million in its Series A, bringing its valuation to over $2 billion. Fronted by NFL quarterback legend Tom Brady, recently named founding chief wellness officer and also an investor, the round was led by earlier investor AON Consulting with the addition of a starry roster of individual investors noted in their brief release. eMed’s eRx is marketed both to individuals and employers; the fresh funding will support further development of its agentic AI platform plus a new capitated model “designed to help employers bend the healthcare cost curve”. This Editor notes that the lede in most articles about eMed is Brady and the $2 billion valuation; as our Readers know, the latter is a subjective and oft-inflated estimate of market value, especially at this early stage. TTA dug into eMed and some of the company’s interesting history, crossing over into Ali Parsa and Babylon Health, here. Reuters, FierceHealthcare, Mobihealthnews

Moving back into reality, Avo, a NYC-based clinical AI information platform, raised a $10 million Series A. Avo’s calling card is bringing together EHR, revenue cycle including payer, patient data, and knowledge bases to streamline use at the point of care. Funders were led by Noro-Moseley Partners, with participation from existing investors AlleyCorp, Las Olas Venture Capital, MedMountain Ventures, Epsilon Health, and new investor Scrub Capital. Avo has a solid roster of customers that include Geisinger, Mass General Brigham, and local providers such as Englewood (NJ) Health. They also have an intriguing feature: an ambient listening copilot that references patient data and generates documentation that improves revenue cycle. Release

Stedi’s Series C is typical of this hard-raise market in both level and number of investors, with a bit of a twist. The $50 million raised brings their total to $142 million and will be used to expand its product presence and scale infrastructure. Denver-based Stedi’s calling card is an API-first, cloud-native financial clearinghouse that, in revenue cycle management, sits between healthcare providers and payers (insurers) to process essential transactions like eligibility checks, claims, and electronic payments. The funding was led by Addition, with participation from Stripe, Ribbit Capital, USV, First Round, BoxGroup, and Bloomberg Beta. A group of angel investors also jumped in, including Tobi Lütke (CEO of Shopify), Guillermo Rauch (CEO of Vercel), and Karim Atiyeh (CTO of Ramp). Finsmes

Since we opened with Oracle, we’ll close with them. Five days before 30,000 employees globally were declared unnecessary, Oracle announced that they leased additional space in Nashville, specifically 116,000 square feet within The Neuhoff District at 1320 Adams Street. Oracle now has 2,000 “seats” across three Nashville locations. The release touts “teams focused on a wide variety of roles, including sales and marketing, cloud engineering, software development, and product management. The company is actively recruiting ambitious thinkers and leaders eager to shape the next generation of cloud infrastructure and AI innovation.” Perhaps some of those hundreds of folks in KC and other locations can be rehired in Nashville (sic).

The weekend read: why SPACs came, went, and failed in digital health–the Halle Tecco analysis/memorial service; why OpenAI is going to be a bad, bad business

Let us now hold the formal memorial service for the SPAC–the special purpose acquisition company, at least for digital health. Halle Tecco, whom many of us know as the founder and past CEO of Rock Health, plus angel investor, plus adjunct professor in digital health at Columbia, now has an opinion blog on Substack. As our Readers know, this Editor, who is none of the above, has been shoveling dirt on SPACs here on TTA since they became an Easy Way To Avoid the cumbersome, oh-so-tiresome preparation for a public IPO during the Digital Health Boom of 2020-22 (RIP). She has been covering their Trouble Every Day and demise ever since. Not having kept quantitative track of Cracked SPACs, only the news as they floated, declined, and failed, this Editor enjoyed Ms. Tecco’s quantitative analysis of the overall picture. She puts it into a readable business context.

Shockingly, SPACs are still going on across all IPOs. In 2023 and 2024, total SPACs as a percent of IPOs neared 40%; their high was reached in 2022 at 73%. The attractiveness of SPACs was obvious: an investor sets up a publicly traded shell company, goes through the hassle of an IPO, and raises money on public markets and from investors. It then hunts for a company to acquire. The target is landed and acquired, symbols change, and the deal is done, all in three to six months. The acquired company doesn’t have to go through the investor pitches, the due diligence, the incessant filing…less fuss and muss, but missing the rigor of a traditional IPO. For the SPACs, especially those focusing on digital health, 2020-22 became FOMO Fever–the fear of missing out.

For digital health companies, the boom became a race to the bottom. 

  • 30.4% went bankrupt, some spectacularly, others with a whimper as they’ve failed, one after the other: 23andMe, Cano Health, Babylon Health, Nuvo, Pear, others
  • 26.1% were acquired well below their SPAC entry price: Sharecare, SOC Telemed, Akili and others. The only exception: Augmedix, with a $40 million SPAC valuation, was bought for $139 million by Commure. (Commure is backed by General Catalyst and Andreessen Horowitz; Commure/Athelas itself is an interesting and complex story.)
  • 39.1% are still in business but trading below their SPAC entry price. A number flirted with the Devil of Demise and are recovering: Clover Health, Owlet (baby monitors), Butterfly (ultrasound POC), Talkspace. DocGo became a Covid play and then got into political trouble and is nearing $2/share from their late 2022 high of just below $11. And others.
  • There is exactly one success story: hims & hers (4.3%)

Enjoy this read on her blog. If you prefer a podcast, here’s Ms. Tecco on her ‘Heart of Healthcare’ with Mohamad Makhzoumi (link is to Spotify), co-CEO of New Enterprise Associates (NEA), a VC in healthcare and technology (33 minutes), discussing healthcare’s evolution, so to speak, from “the trailer park of venture investing” and the hilarious ‘healthcare hokey-pokey’. And here’s a Gimlety View of SPACs from 26 June 2024.

Another Big and Disastrous Fail in the making may be OpenAI, the creator of ChatGPT. It is converting from a non-profit to a for-profit company, losing its founder group, fundraising like crazy, and has generally ditched its Mission: “OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.” OpenAI has completed the largest venture-backed fundraise of all time, $6.6 billion, and is now valued at $157 billion. Why overvalued? A tell is that SoftBank has invested $500 million into this megillah–this Editor recalls that SoftBank invested in Theranos and WeWork. Another tell: the NY Times and The Information estimated that OpenAI lost $5 billion in 2024, it loses money on every copy of ChatGPT, and its revenue projections are near-absurd at $11.6 billion in 2025 and $100 billion by 2029. It totally ignores that every major player has an AI program, from Microsoft to Google. If you’re a fan of ChatGPT or need your eyes cleared around this type of AI, grab your cuppa and a bottle of your favorite pain reliever for Ed Zitron’s article, OpenAI Is A Bad Business. (Ed is an English tech writer, podcaster, and PR specialist.)

Two Must Reads: Is AI the next hype bubble replacing crypto–and capable of great harm?

Two articles that consider the current state of AI, to read and ponder. On one hand, AI is far less than what it’s hyped to be for business–especially healthcare–and on the other, more malevolent, with great potential for harm.

The first article, by Gintaras Radauskas in Cybernews, confirmed this Editor’s misgivings on exactly what artificial intelligence (AI) is and the unrealistic expectations around it. It seems that a lot of the thinking around AI is doubletalk–gibberish, as he put it–leading off by analyzing a recent interview with Sam Altman of Microsoft-backed OpenAI and its chatbot ChatGPT.

“To me, AI looks like a solution to a problem that’s not a problem – or, actually, a non-solution to the very real problems that are not going away.”

  • He draws parallels to cryptocurrency, which was widely hyped in the past few years as a secure alternative currency off the dollar and global bank grid. Even large banks, financial institutions, and big VCs like Sequoia Capital were sucked in. And real people did lose real money–from famous football quarterback Tom Brady to African and Indian students.

This Editor knew the bubble had reached its high and nonsensical point when, perhaps two years ago in her local Shoprite, after checkout, next to the NJ Lottery machine and containers of sidewalk deicer, there stood a machine that would convert her very real US greenbacks to crypto. The end of the bubble was the FTX bankruptcy in November 2022, then the arrest followed by last year’s trial and conviction of FTX’s Sam Bankman-Fried. Gaining little notice was that FTX was itself hacked and drained in late 2022, before its collapse, in a SIM-card swapping scheme that emptied the accounts of 50 people. Those three perpetrators were indicted earlier this month. CNBC

  • When crypto imploded, ChatGPT took its place in the TechWorld Hype Universe. Bank of America terms it a ‘defining moment–like the internet in the ’90s’. For those of us who were around then, there were bulletin boards (!), multiple platforms (AOL), something called search engines (AltaVista, Dogpile), and lots of websites that surfaced and then went under the waves. A lot of money changed hands and a lot of parties were thrown before the dot-com bust. Unlike the internet boom, AI is already dominated by tech giants like Microsoft (OpenAI) and Google (Bard, now Gemini), so it’s actually less of a risk for the large companies eager to use it.

But then why aren’t most businesses on board yet? “Only 3.8% of businesses reported using AI to produce goods and services, according to November’s Business Trends and Outlook Survey. It’s safe to say we’re very, very far away from mass adoption and use of AI.”

Perhaps it’s this. AI has already been parodied as a highly sophisticated long-form autocomplete tool. Your Editor has experimented with generative AI via Microsoft’s Bing. Example: an article on a non-healthcare topic, antique auto restoration. It was largely but not entirely accurate. But it was written at about a fifth-grade level in a style that was flat and uninteresting–the dumbing-down of the value of copy to inform and persuade continues. (Companies look at writers and marketers as an expense to be eliminated, not managed. As a marketer from the start of my career, and who worked for or with some of the best-known US agencies renowned for creativity, I would not recommend that career path to anyone today.) 

  • And finally, the ultimate use of AI is to get rid of people. That is what automation does. It can increase accuracy and speed, and take away drudgery in tasks like healthcare billing and coding. But healthcare is about people. AI can make healthcare appear more responsive, yet when the humans are gone, will only the chatbots be left, with coding that endlessly replicates itself–like the automated phone menus that leave you in the ether with your questions unanswered, except now it’s your diagnosis, or information your doctor is trying to obtain? What happens to the professionals trained to do these tasks, who already use automation tools in their work? What happens when AI picks up and propagates a wrong treatment or surgical technique? This is not quite the analogy of the blacksmith and horseshoes, or film versus video. We are ill-equipped to deal with the societal effects of training people for jobs that no longer exist and the concentration of technology into a very few companies.

And if we leave these tasks to AI without human intervention and supervision, what will happen?

The second article, linked to in the first, could be titled after the 1960s movie ‘Experiment in Terror’. Imagine asking AI about yourself, and it tells you you’ve died–with links to your obituary. Alexander Hanff, a founder of IT companies, computer scientist, and privacy technologist, did just that. ChatGPT repeatedly told him he was dead, complete with fake links to his obit in the Guardian and very convincing text. Now imagine you’re applying for a job, a loan, a mortgage, or a passport, and the AI tool tells the employer, the bank, and the Feds that you’re dead. Hanff had already been warned by a professional colleague who conducted the same exercise and received back a bio with false information. This deep fakery, origin unknown and undiscoverable, has huge potential for harm. Conclusion:

“Based on all the evidence we have seen over the past four months with regards to ChatGPT and how it can be manipulated or even how it will lie without manipulation, it is very clear ChatGPT is, or can be manipulated into being, malevolent. As such it should be destroyed.”

Hanff has company with Steve Wozniak of Apple on this [TTA 5 May 2023]. Read this one all the way through. And be scared. The Register