Two articles that consider the current state of AI, to read and ponder. On one hand, AI is far less than what it’s hyped to be for business–especially healthcare–and on the other, more malevolent, with great potential for harm.
The first article, by Gintaras Radauskas in Cybernews, confirmed this Editor’s misgivings about exactly what artificial intelligence (AI) is and the unrealistic expectations around it. Much of the thinking around AI, he argues, is doubletalk–gibberish, as he put it. He leads off by analyzing a recent interview with Sam Altman of Microsoft-backed OpenAI, developer of the chatbot ChatGPT.
“To me, AI looks like a solution to a problem that’s not a problem – or, actually, a non-solution to the very real problems that are not going away.”
- He draws parallels to cryptocurrency, which was widely hyped in the past few years as a secure alternative currency off the dollar and global banking grid. Even large banks, financial institutions, and big VCs like Sequoia Capital were sucked in. And real people did lose real money, from famous football quarterback Tom Brady to African and Indian students.
This Editor knew the bubble had hit its high and nonsensical point perhaps two years ago in her local Shoprite when, just past checkout, next to the NJ Lottery machine and containers of sidewalk deicer, there stood a machine that would convert her very real US greenbacks to crypto. The end of the bubble was the FTX bankruptcy in November 2022, then the arrest, followed by last year’s trial and conviction, of FTX’s Sam Bankman-Fried. Gaining little notice was that FTX was itself hacked and drained in late 2022 before its collapse, in a SIM-card swapping scheme that emptied the accounts of 50 people. The three perpetrators were indicted earlier this month. CNBC
- When crypto imploded, ChatGPT took its place in the TechWorld Hype Universe. Bank of America terms it a ‘defining moment–like the internet in the ’90s’. For those of us who were around then, there were bulletin boards (!), multiple platforms (AOL), something called search engines (AltaVista, Dogpile), and lots of websites that surfaced and then went under the waves. A lot of money changed hands and a lot of parties were thrown before the dot-com bust. Unlike the internet boom, AI is already dominated by tech giants like Microsoft (OpenAI) and Google (Bard, now Gemini), so it’s actually less of a risk for the large companies eager to use it.
But then why aren’t businesses on board yet? “Only 3.8% of businesses reported using AI to produce goods and services, according to November’s Business Trends and Outlook Survey. It’s safe to say we’re very, very far away from mass adoption and use of AI.”
Perhaps it’s this: AI has already been parodied as a highly sophisticated long-form autocomplete tool. Your Editor has experimented with generative AI via Microsoft’s Bing, asking it for an article on a non-healthcare topic, antique auto restoration. The result was largely, but not entirely, accurate. It was also written at about a fifth-grade level in a style that was flat and uninteresting–the dumbing-down of the value of copy to inform and persuade continues. (Companies look at writers and marketers as an expense to be eliminated, not managed. As a marketer from the start of my career, who worked for or with some of the best-known US agencies renowned for creativity, I would not recommend that career path to anyone today.)
- And finally, the ultimate use of AI is to get rid of people. That is what automation does. It can increase accuracy and speed and take away drudgery in tasks like healthcare billing and coding, but healthcare is about people. AI can make healthcare appear more responsive, yet when the humans are gone, will only the chatbots be left, with code that endlessly replicates itself–like the automated phone menus that leave you in the ether with your questions unanswered, except now it’s your diagnosis or the information your doctor is trying to obtain? What happens to the professionals trained to do these tasks, who already use automation tools in their work? What happens when AI picks up and propagates a wrong treatment or surgical technique? This is not quite the analogy of the blacksmith and horseshoes, or film versus video. We are ill-equipped to deal with the societal effects of training people for jobs that no longer exist and of concentrating technology in a very few companies.
And if we leave these tasks to AI without human intervention and supervision, what will happen?
The second article, linked to in the first, could be titled after the 1960s movie ‘Experiment in Terror’. Imagine asking AI about yourself, and it tells you that you’ve died, with links to your obituary. Alexander Hanff, a computer scientist, privacy technologist, and founder of IT companies, did just that. ChatGPT repeatedly told him he was dead, complete with fake links to his obituary in the Guardian and very convincing text. Now imagine you’re applying for a job, a loan, a mortgage, or a passport–and the AI tool tells the employer, the bank, and the Feds that you’re dead. Hanff had already been warned by a professional colleague who conducted the same exercise and received back a bio full of false information. This deep fakery, origin unknown and undiscoverable, has huge potential for harm. His conclusion:
“Based on all the evidence we have seen over the past four months with regards to ChatGPT and how it can be manipulated or even how it will lie without manipulation, it is very clear ChatGPT is, or can be manipulated into being, malevolent. As such it should be destroyed.”