Two Must Reads: Is AI the next hype bubble replacing crypto–and capable of great harm?

Two articles on the current state of AI to read and ponder. On one hand, AI delivers far less to business, especially healthcare, than the hype promises; on the other, it is more malevolent, with great potential for harm.

The first article, by Gintaras Radauskas in Cybernews, confirmed this Editor's misgivings about exactly what artificial intelligence (AI) is and the unrealistic expectations around it. A lot of the thinking around AI is doubletalk (gibberish, as he puts it), and he leads off by analyzing a recent interview with Sam Altman of Microsoft-backed OpenAI, developer of the chatbot ChatGPT.

“To me, AI looks like a solution to a problem that’s not a problem – or, actually, a non-solution to the very real problems that are not going away.”

  • He draws parallels to cryptocurrency, which was widely hyped in the past few years as a secure alternative currency off the dollar and global bank grid. Even large banks, financial institutions, and big VCs like Sequoia Capital were sucked in. And real people lost real money, from famous football quarterback Tom Brady to African and Indian students.

This Editor knew the bubble had reached its high and nonsensical point when, perhaps two years ago in her local Shoprite, just past checkout, next to the NJ Lottery machine and containers of sidewalk deicer, she found a machine that would convert her very real US greenbacks to crypto. The end of the bubble was the FTX bankruptcy in November 2022, then the arrest followed by last year's trial and conviction of FTX's Sam Bankman-Fried. Gaining little notice was that FTX was itself hacked and drained in a SIM-card swapping scheme in late 2022, shortly before its collapse, emptying the accounts of 50 people. The three perpetrators were indicted earlier this month. CNBC

  • When crypto imploded, ChatGPT took its place in the TechWorld Hype Universe. Bank of America terms it a ‘defining moment–like the internet in the ’90s’. For those of us who were around then, there were bulletin boards (!), multiple platforms (AOL), something called search engines (AltaVista, Dogpile), and lots of websites that surfaced and then went under the waves. A lot of money changed hands and a lot of parties were thrown before the dot-com bust. Unlike the internet boom, AI is already dominated by tech giants like Microsoft (OpenAI) and Google (Bard, now Gemini), so it's actually less of a risk for the large companies eager to use it.

But then why are these large companies not on board yet? “Only 3.8% of businesses reported using AI to produce goods and services, according to November’s Business Trends and Outlook Survey. It’s safe to say we’re very, very far away from mass adoption and use of AI.”

Perhaps it's this. AI has already been parodied as a highly sophisticated long-form autocomplete tool. Your Editor has experimented with generative AI via Microsoft's Bing. Example: an article on a non-healthcare topic, antique auto restoration. It was largely, but not entirely, accurate. It was also written at about a fifth-grade level in a style that was flat and uninteresting; the dumbing-down of copy's value to inform and persuade continues. (Companies look at writers and marketers as an expense to be eliminated, not managed. As a marketer from the start of my career who worked for or with some of the best-known US agencies renowned for creativity, I would not recommend that career path to anyone today.)

  • And finally, the ultimate use of AI is to get rid of people; that is what automation does. While it can increase accuracy and speed and take the drudgery out of tasks like healthcare billing and coding, healthcare is about people. AI can make healthcare appear more responsive, but when the humans are gone, will only the chatbots be left, with code that endlessly replicates itself, like the automated phone menus that leave you in the ether with your questions unanswered, except now it's your diagnosis, or information your doctor is trying to obtain? What happens to the professionals trained to do these tasks, who already use automation tools in their work? What happens when AI picks up and propagates a wrong treatment or surgical technique? This is not quite the analogy of the blacksmith and horseshoes, or film versus video. We are ill-equipped to deal with the societal effects of training people for jobs that no longer exist and of concentrating technology in a very few companies.

And if we leave these tasks to AI without human intervention and supervision, what will happen?

The second article, linked to in the first, could be titled after the 1960s movie ‘Experiment in Terror’. Imagine asking an AI about yourself: it tells you you've died and gives links to your obituary. Alexander Hanff, a computer scientist, privacy technologist, and founder of IT companies, did just that. ChatGPT repeatedly told him he was dead, complete with fake links to his obituary in the Guardian and very convincing text. Now imagine you're applying for a job, a loan, a mortgage, or a passport, and the AI tool tells the employer, the bank, and the Feds that you're dead. Hanff had already been warned by a professional colleague who conducted the same exercise and received back a bio containing false information. This deep fakery, of unknown and undiscoverable origin, has huge potential for harm. His conclusion:

“Based on all the evidence we have seen over the past four months with regards to ChatGPT and how it can be manipulated or even how it will lie without manipulation, it is very clear ChatGPT is, or can be manipulated into being, malevolent. As such it should be destroyed.”

Hanff has company in Steve Wozniak of Apple on this [TTA 5 May 2023]. Read this one all the way through. And be scared. The Register

Ransomware roundup: TimisoaraHackerTeam (THT) attacks cancer centers; KillNet’s ‘Sudanese’ member; 101K ChatGPT accounts infostolen; LockBit attacker arrested on Federal charges

TimisoaraHackerTeam (THT) attacked an unnamed US cancer center with malware in June, demanding a ransom of 10 bitcoins ($300,176). The Central European, possibly Romanian-based group (named after the Romanian city of Timisoara) was uncovered in 2018 and was last tracked to an April 2021 attack on a French hospital. Its attacks abuse legitimate encryption software, Microsoft's BitLocker and Jetico's BestCrypt. Reports state that the group targeted Fortinet's FortiOS SSL-VPN to exploit CVE-2022-42475, a heap-based buffer overflow vulnerability that allows remote attackers to execute code or commands via specially crafted requests. THT may be linked to other malefactors such as DeepBlueMagic and China-based APT41, based on the software used and the style of its ransom notes. DeepBlueMagic disabled an Israeli medical center, Hillel Yaffe, in October 2021.
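
For readers unfamiliar with the jargon, a heap-based buffer overflow is simply a program copying more attacker-supplied data into a heap-allocated buffer than it reserved. The sketch below is this Editor's generic illustration of that pattern in C; it is not the FortiOS code or an exploit for CVE-2022-42475, and the function name, buffer size, and request handling are all invented.

```c
/* Generic illustration only -- NOT FortiOS code or a CVE-2022-42475 exploit.
 * Function name, buffer size, and request handling are invented. */
#include <stdlib.h>
#include <string.h>

void handle_request(const char *payload, size_t payload_len) {
    char *buf = malloc(64);              /* fixed-size buffer on the heap */
    if (buf == NULL) return;

    /* Bug: the attacker-controlled length is never checked against the
     * 64 bytes actually allocated, so an oversized, specially crafted
     * request overwrites adjacent heap data -- the classic path from a
     * heap-based buffer overflow to remote code execution. */
    memcpy(buf, payload, payload_len);

    /* ... parse and act on buf ... */
    free(buf);
}
```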

The cancer center and Heimdal Security were able to recover the hacked records using decryption software, as the records were only partially encrypted, avoiding the ransom payment. HHS' Office of Critical Infrastructure Protection has issued its notification with details on the attack here (PDF). SC Magazine, Healthcare Dive

KillNet, the Russia-based agglomeration of anti-Western hacktivist groups, has a possible new member in the interestingly named Anonymous Sudan. Its modus operandi is to launch distributed denial of service (DDoS) attacks against Western organizations (to date, 24 of them Australian) in response to what it deems anti-Islamic views or actions. The DDoS claims are smokescreens that not only tie up cyberdefense resources and generally spread panic and disinformation, but also gain publicity for the group. Cyber researchers CyberCX noted that the DDoS attacks have been intense, but unusual in that Sudan (the country) apparently has not instigated the attacks, nor have the attacks been monetized. SC Magazine

Surprise, surprise: infostealer malware is being used to harvest ChatGPT accounts. The malware infects browsers to collect saved credentials, bank card details, crypto wallet information, cookies, browsing history, and other information, ChatGPT logins among them. Most of the affected devices are in Asia-Pacific. The malware itself is for sale on the dark web. Of the 101,134 compromised accounts tallied by Group-IB, most (78,348) were breached by Raccoon/RecordBreaker, with the rest hit by Vidar (12,984) and RedLine (6,773). ChatGPT is often adopted individually and introduced into enterprise systems from personal devices without the usual IT security and vetting. LLMs for now are unsecured and, for hackers, it's ‘happy time’. SC Magazine

But sometimes the bad actors get caught and dragged into a New Jersey federal court. The FBI finally caught up with Russian national Ruslan Magomedovich Astamirov, who is accused of being part of the ransomware gang dubbed LockBit. The two counts filed in the federal District of New Jersey center on conspiracy to commit fraud and related activity in connection with computers, plus the ever-popular conspiracy to commit wire fraud, for the usual extortion of money and property between 2020 and 2023. The attacks were on businesses based in West Palm Beach, France, Tokyo, and Virginia, and the gang received about $90 million in ransom payments. Astamirov allegedly sent emails and controlled IP addresses and accounts, including Amazon and Microsoft accounts, used in the fraud. New Jersey was chosen as the venue since one LockBit victim is located in Essex County. SC Magazine, Criminal Complaint filed against Astamirov (PDF)

‘The Future of AI and Older Adults 2023’ now published

Laurie Orlov of Aging and Technology Watch tackles the latest iterations of AI and machine learning in her latest paper, tracing their roots back to the original smart speakers and voice assistants of 2014, technologies that enabled older adults to access services conveniently and at reasonable cost. What will be the impact of AI using tools such as large language models (LLMs) like ChatGPT to develop improved search, voice assistance, answers to health questions, and care plans written in understandable and empathetic language? Will care facilities and senior housing leverage AI with voice and sensor tech to improve safety monitoring for both residents and caregivers, plus pursue the dream of predictive health for residents or those living at home with limited assistance? Will chatbots get a lot smarter rather than obnoxious? Find out what both the short-term and long-term (5+ years) impact could be.

Ms. Orlov's somewhat gimlety view includes Gartner's infamous Hype Cycle chart on page 5. As of today, most AI technologies reside in the balmy Peak of Inflated Expectations, the place where whatever investment funding there is goes. There's lots of innovation and kitchen-table hackathoning. Looming about two years out is the inevitable Trough of Disillusionment, which has already been kicked off by Big Thinkers such as Steve Wozniak. As this Editor observed last month, AI is a double-edged sword, with the bad edge being its potential for data misuse, fraud, fakery, and malicious action. It has already created controversy that this Editor predicts will crest in the next year with demands for regulation. We're not there yet, however.

The PDF is available here as a free download.

Week-end roundup: Is ChatGPT *really* more empathetic than real doctors? Amwell’s $400M loss, Avaya emerges from Ch. 11, Centene sells Apixio, more on Bright Health’s MA sale, layoffs at Brightline, Cue Health, Healthy.io

A Gimlety Short Take (not generated by ChatGPT). This Editor has observed developments around AI tool ChatGPT with double vision: one view of it as an amazing tool with huge potential for healthcare support, the other as a tool with huge potential for fakery and fraud. (If “The Woz” Steve Wozniak can say that AI can misuse data and trick humans and that Tesla's AI-powered Autopilot can kill you, and an AI pioneer can quit Google over AI, it should give you pause.)

The latest healthcare ‘rave’ about ChatGPT is a study published 28 April in JAMA Network that pulled 195 questions and answers from Reddit's r/AskDocs, a social media forum where members ask medical questions and real healthcare professionals answer them. The study authors then submitted the same questions to ChatGPT and evaluated the answers on subjective measures such as “better”, “quality”, and “empathy”. Of course, the ChatGPT 3.5 answers were preferred 78% of the time over the answers from human healthcare professionals, who field these mostly ‘should I see a doctor?’ questions. HIStalk noted that forum volunteers might be a little short in their answers. Another point: “they did not assess ChatGPT's responses for accuracy. The ‘which response is better’ evaluation is subjective.” The prospective patients on the forum were also not asked how they felt about the AI-generated answers. HIStalk's analysis of the study's shortcomings is short and to the point. Another view, that compassion in communication depends on context and relationships, was debated in Kellogg Insight, the publication of the Kellogg School of Management at Northwestern University. Healthcare IT News

Amwell posted a disappointing and sizable $398.5 million net loss in Q1, more than five times the Q1 2022 loss of $70.3 million and Q4 2022's $61.6 million. The loss was due to a noncash goodwill impairment charge related to a sustained decline in the company's share price. Q1 revenue was flat year-over-year at $64 million, though $15 million lower than Q4 2022 due to a decline in professional services revenue. Visits totaled 1.7 million in Q1, with 36% through the new Converge platform. Guidance for the year remains at $275-$285 million in revenue with an adjusted EBITDA loss between $150 and $160 million. Mobihealthnews. This contrasts with rival Teladoc's more optimistic forecast released last week, though it too remains in the loss column [TTA 4 May].

Avaya emerged from Chapter 11 on Monday. According to the release, the company has financially restructured and now has $650 million in liquidity and a net leverage ratio of less than 1x. This was a lightning-fast bankruptcy and reorganization, usually referred to as ‘pre-packaged’, as it was announced in February with the company emerging from it in 60 to 90 days. Avaya provides virtual care and collaboration tools (and has contributed to our Perspectives series). 

Another restructuring continues at Centene. Its latest sale is Apixio, a healthcare analytics platform for value-based care. The buyer is private equity investor New Mountain Capital, which has $37 billion in assets under management. Centene acquired Apixio in December 2020, in the last full year of CEO Michael Neidorff's leadership. Since 2022, Centene has been selling off many of its more recent acquisitions, such as two specialty pharmacy divisions, its Spanish and Central European businesses, and Magellan Specialty Health. Transaction cost and management transitions were not disclosed. Based on the wording of the release, Centene will continue as an Apixio customer, as will other health plans. Given the profile and diversification of the 10 largest health plans, which include Centene, its divestments coupled with the involvement of activist investor Politan Capital Management have led to speculation about where the company is headed.

Another take on Bright Health's projected divestiture of its California Medicare Advantage health plans comes from analyst Ari Gottlieb on LinkedIn. If Bright sells the MA plans for what it paid for them ($500 million), Mr. Gottlieb calculates it can pay off its outstanding JP Morgan credit facility as well as remedy negative capital levels in many of the states where it had plans and is now defending lawsuits. That still leaves it $925 million in debt.

Unfortunately, we close with yet another round of layoffs.

  • Covid-19 test kit and home diagnostics company Cue Health will be surplusing about 26% of its current workforce, or 325 employees, most of them at its San Diego manufacturing plants. This is on top of 170 employees released last summer. The current value of the Nasdaq-traded company is estimated at $105 million, down from $3 billion at its 2021 IPO; the current share price is $0.68. HIStalk, San Diego Business Journal.
  • Another telemental health company, Brightline, is shrinking, reducing its current workforce by another 20%. This affects corporate staff and is in addition to the 20% let go last November. Brightline focuses on mental health for children and teens and has raised $212 million in investment to date. Becker’s 
  • Healthy.io, which offers in-home urinalysis and wound care, plus a new app for kidney care, laid off 70 staff while enjoying a fresh Series D raise of $50 million from Schusterman Family Investments.  Becker’s

Digital technology falling (even) short(er) in NHS nursing: QNI report (UK)

When health tech ‘magic’ isn't. Roy Lilley and his several-times-per-week newsletter (NHSManagers.net, subscribe here) are must-reads for our UK readers dealing with the foibles of the NHS and NHS Digital. Billions have been poured into digitizing records and equipping district (community) nurses with laptops and access to apps that connect them to patient information. All of which is, apparently, a flop for the money spent.

The Queen's Nursing Institute (QNI) has published a study, Nursing in the Digital Age 2023, via its data gathering and analytics arm, the International Community Nursing Observatory (ICNO). It obviously should be microscope-read by NHS Digital, but also by developers with clinical users in the US and other countries. (Oracle Cerner, Epic, and the hundreds of EHR and workflow app vendors: take notice.)

Mr. Lilley outlines the level of failure here; from his article:

  • 5 years ago, 32.7% reported problems with lack of compatibility between different computer systems… in 2022 the figure had risen to 43.1%.
  • 5 years ago, around 85% of respondents reported issues with mobile connectivity… in 2022 this figure was around 87%.
  • 5 years ago, 29.5% reported problems with device battery life… in 2022 the figure was almost 53%.

The overall take of the QNI study is that nurses are highly digitally literate and embrace technology at scale, but in practice, the apps and the hardware have become impediments as the workload increases. For non-UK readers, district nurses travel a lot, often working from home–akin to home care or rural nurses in the US. Points from their executive summary:

  • Hardware: battery life, laptop weight and age, and ergonomics, not only from weight but also when working in cars. Safety and confidentiality issues lead many nurses to take the work home, causing delays.
  • Software: connectivity, authentication, multiple platforms, little integration, and repeated data entry, with poor connectivity and software design leading to interrupted workflows.
  • Some scheduling tools cause workload issues, such as over-allocation of work, unmanageable workloads, and loss of personal autonomy.
  • Systems design: impersonal, acting as a barrier to interacting with patients.
  • Duplicative workload: dual entry on paper and into platforms because of poor connectivity and software design.
  • The use of electronic health records (EHRs) and similar platforms was mixed in terms of productivity gains and work capture. 

Another issue: “Moving technology-enabled care (remote monitoring) to the community appears to have shifted work from the hospital to the community”, meaning an increased workload on nurses doing work that specialists or non-nursing staff could do. 

Mr. Lilley summarizes what, as a service, both the hardware and the software should be accomplishing (a minimal data sketch follows the list):

Just ten simple things:

  1. Who is the patient?
  2. Where have they come from?
  3. See their record; have they been sick before and…
  4. what did we do?
  5. Anything in their history that's a red flag?
  6. What do we do to fix them up this time and…
  7. record how we did it.
  8. Figure out what worked,
  9. what did it cost, and…
  10. do we want to do it again?
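
To make the point concrete for the software designers Mr. Lilley is addressing, here is this Editor's minimal sketch, in C, of the record a community nursing system would need in order to answer those ten questions. It is an illustration only, not drawn from the QNI report or Mr. Lilley's newsletter; every field name and size is invented.

```c
/* Illustrative only -- not from the QNI report or Mr. Lilley's newsletter.
 * Field names and sizes are invented to map roughly onto the ten questions. */
#include <stdbool.h>

struct Episode {
    char patient_id[16];       /* 1. who is the patient */
    char referred_from[64];    /* 2. where have they come from */
    char prior_history[256];   /* 3-4. prior episodes and what was done */
    bool red_flag;             /* 5. anything in their history that's a red flag */
    char treatment[256];       /* 6-7. what we did this time, recorded as done */
    char outcome[128];         /* 8. what worked */
    int  cost_pence;           /* 9. what it cost */
    bool do_again;             /* 10. do we want to do it again */
};
```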

Both Mr. Lilley's newsletter and the study (PDF) are must-reads wherever you live, especially if you are a software designer.

No wonder nurses are staging single-day rolling strikes!

(He also has an interesting take on ChatGPT and AI for copywriting and reporting, which we will take up next week….) Hat tip to Editor Emeritus Steve.