Too long to summarize or opine on this week, but a must for your weekend reading. Grab a cuppa for the talk of the AI world: a New Yorker dissection of Sam Altman, the CEO of OpenAI (link below). To call it an exposé worthy, at first glance, of the Old School (ain’t no school like the Old School–Ed.) on probably the most important company of AD 2026 is to undersell it. It’s a long article and you’ll need at least one break.
OpenAI, founded as a non-profit with integrity at its core to “prioritize the safety of humanity over the company’s success, or even its survival”, recapitalized last year as a for-profit corporation with 26% of the shares owned by the OpenAI Foundation. It is now a trillion-dollar company that had no trouble raising a paltry $122 billion last week [TTA 2 April], though there are arguments that at least some of that money is IOUs or contingent. ChatGPT has become almost generic for AI, as Kleenex has for tissues. Control and direction of the company now rest entirely with Sam Altman, about whom former colleagues are not shy in pointing out a difficulty with the truth and a pattern of deceit: toward his board, his employees, and Microsoft. Yet everyone continues to do business with him. The FOMO Factor is very strong.
Mr. Altman makes extremely broad statements about the future of work (most traditional managerial, healthcare, and IT jobs will be taken over by AI, thus most of us will be unemployed), has easy access to President Donald Trump as well as other world leaders, and may, as the headline barks, control our future. Thus, he is a person of consequence.
My read so far is that within OpenAI, there is no one to counterbalance Mr. Altman’s immense ambition, his desire to dominate and win, not only in AI but over all business and everyday life. These are character issues that also show up in aspects of his personal life, detailed in the article. If past results are predictive of the future, this flaw usually curdles into a desire to control countries and a complete disrespect for how the rest of us lead our lives.
Sam Altman May Control Our Future–Can He Be Trusted? (The New Yorker)
I will offer two LinkedIn posts commenting on this article from an AI person I respect: Stephen Klein, head of Curiouser.ai. Many of his LinkedIn posts deal with what AI can and cannot do in business. He writes that he is “committed to designing technology that augments people, creates jobs, and elevates humanity. It’s time we all got back to thinking for ourselves.” 7 April, 8 April
Our second Must Read is from Sergei Polevikov’s AI Health Uncut: a long analysis of the failure of Carbon Health and what it tells us about “this business we have chosen”. “What The Hell Went Wrong?” and its implications need answers, because it’s being repeated again and again. Today’s article (9 April), Part 1 of 2, sets the stage on the mistakes made (insiders talk) and, with full credit, springboards off Stuart Miller’s (Haverin Consulting) original analysis made at the time of the Chapter 11 reorg. What we called the ‘Ominous Parallels’ was a Must Read here on 12 February. TTA (as Telecare Aware, our original name) and this article are also mentioned twice (thanks!).
Those who have yet to subscribe to Mr. Polevikov’s analytic, erudite, and revealing (Emperor’s New Clothes!) POVs can read part of this article for free; but seriously, if you’re in this business, the subscription is worth your money. He also podcasts (links are on his Substack; see the link in our lower right sidebar).
Matt Drudge, an early and scandalous publisher (before he utterly lost it), used to say that he ‘went where the stink is’. Mr. Polevikov does the same. The stink here is of our broken primary care reimbursement system, the Covid steroids that pumped up the company, flailing management running through money like drugs, and good ideas for patient care buried under incompetence.