The confusion within TEC/telehealth between machine learning and AI-powered systems

Defining AI and machine learning terminology isn’t merely academic–it can influence your business. In reading a straightforward interview about the CarePredict wearable sensor for behavioral modeling and monitoring in an AI-titled publication, this Editor realized that AI–artificial intelligence–as a descriptor is creeping into all sorts of predictive systems which are actually based on machine learning. As TTA has written previously [TTA 21 Aug], there are many considerations around AI, including the quality of the data fed into the system, control over the systems, and the ability to judge the output. Using the AI term sounds so much more ‘techie’–but it’s not accurate.

Artificial intelligence is defined as the broader concept of machines being able to carry out tasks in a ‘smart’ way. Machine learning is tactical: an application of AI based on the idea that we give the machine access to data and let it ‘learn’ on its own, something neural networks in computer design have made possible. “Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty,” as Bernard Marr states in this Forbes article.
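Marr’s description can be made concrete with a toy sketch: a model ‘learns’ from labeled examples and returns a prediction with a degree of certainty, not an explanation. The data and the k-nearest-neighbours method below are illustrative assumptions only, not how CarePredict or any product mentioned here actually works.

```python
# A minimal sketch of "learning from data": a k-nearest-neighbours classifier
# that returns a label plus a probability (the share of neighbours agreeing).
from collections import Counter

def knn_predict(train, query, k=3):
    """Return (label, probability) for a 1-D query given labeled points."""
    # The k training points closest to the query get to vote.
    nearest = sorted(train, key=lambda pt: abs(pt[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k

# Invented "activity level" readings labeled normal/anomalous.
data = [(0.9, "normal"), (1.1, "normal"), (1.0, "normal"),
        (0.2, "anomalous"), (0.3, "anomalous")]

print(knn_predict(data, 1.05))  # ('normal', 1.0)
print(knn_predict(data, 0.25))  # ('anomalous', 0.666...)
```

Note that the model offers a degree of certainty but no reasons–exactly the distinction the article draws.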

CarePredict has been incorporating many aspects of machine learning, particularly in the interplay between its wrist-worn wearable and the sensors placed in a residence. It gathers more data over time than older systems like QuietCare (where this Editor was marketing head), and with more data CarePredict does more, progressing beyond the relatively simple algorithms that created baselines in QuietCare. The company now claims effective fall detection and recognition of patterns in grooming, feeding, and environment. (Disclosure: this Editor did freelance writing for the company in 2017.)

In wishing CEO Satish Movva much success, this Editor believes that AI should be applied cautiously as a descriptor for his system. It makes the system sound more complicated than it is to a primarily non-techie audience of senior community administrators and clinicians. Say what you do in plain language, and you won’t go wrong. AI for Healthcare: Interview with Satish Movva, Founder & CEO of CarePredict


Are AI’s unknown workings–fed by humans–creating intellectual debt we can’t pay off?

Financial debt shifts control—from borrower to lender, and from future to past. Mounting intellectual debt may shift control, too. A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when.

Debt theory and AI. This Editor never thought of learning exactly how something works as a kind of intellectual paydown of debt on what Donald Rumsfeld called ‘known unknowns’–we know it works, but not exactly how. It’s true of many drugs (aspirin), some medical treatments (deep brain stimulation for Parkinson’s–and the much-older electroconvulsive therapy for some psychiatric conditions), but rarely with engineering or the fuel pump on your car. 

Artificial intelligence (AI) and machine learning aren’t supposed to be that way. We’re supposed to be able to control the algorithms, make the rules, and understand how they work. Or so we’ve been told. Except, of course, that is not how machine learning and AI work. Crunching massive blocks of data yields statistical correlation, which is of course a valid method of analysis. But as this Editor learned in political science, statistics, sports, and high school physics, correlation is not causality, nor is it necessarily correct or predictive. What is missing is the ‘why’ behind the answers these systems provide–and both can be corrupted simply by feeding in bad data, whether without judgment or with intent to defraud.

Bad or flawed data tend to accumulate and feed on themselves, to the point where someone checking cannot distinguish where the logic fell off the rails, or validate it at all. We also ascribe to AI–and to machine learning, in its very name–actual learning and self-validation, neither of which is real. 
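The correlation-is-not-causality point is easy to demonstrate: two series that never influence each other can correlate perfectly when both are driven by a hidden third variable, and a model fed only those two series has no way to know. The numbers below are invented for illustration.

```python
# Two series with no causal link correlate perfectly because both are
# linear functions of a hidden common cause (temperature).
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temperature = [10, 15, 20, 25, 30]               # the hidden common cause
ice_cream   = [2 * t + 1 for t in temperature]   # rises with temperature
drownings   = [0.5 * t - 2 for t in temperature] # also rises with temperature

print(pearson(ice_cream, drownings))  # 1.0 -- yet neither causes the other
```

A purely correlational system would happily "learn" that ice cream sales predict drownings; only outside knowledge of the confounder reveals why the answer is wrong.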

There are other dangers, as in image recognition (and this Editor would add, in LIDAR used in self-driving vehicles):

Intellectual debt accrued through machine learning features risks beyond the ones created through old-style trial and error. Because most machine-learning models cannot offer reasons for their ongoing judgments, there is no way to tell when they’ve misfired if one doesn’t already have an independent judgment about the answers they provide.

and

As machines make discovery faster, people may come to see theoreticians as extraneous, superfluous, and hopelessly behind the times. Knowledge about a particular area will be less treasured than expertise in the creation of machine-learning models that produce answers on that subject.

How we fix the balance sheet is not answered here, but certainly outlined well. The Hidden Costs of Automated Thinking (New Yorker)

And how that AI system actually gets those answers might give you pause. Yes, there are thousands of humans, with no special expertise or medical knowledge, being trained to feed the AI Beast all over the world. The parlance is data labeling, data annotation, or ‘Ghost Work’ (from the book of the same name), and the material includes medical, pornographic, commercial, and grisly crime images. Besides the mind-numbing repetitiveness, there are instances of PTSD related to the images and real concerns about how the personal data are shared, stored, and used for medical diagnosis. A.I. Is Learning from Humans. Many Humans. (NY Times)

Comings & goings: The TeleDentists go DTC, gains Reis as CEO; University of Warwick spinoff Augmented Insights debuts (UK); a new CEO leads GrandCare Systems

The TeleDentists leap in with a new CEO. A year-old startup, The TeleDentists, has announced it will be going direct-to-consumer with teledentistry consults. This will permit anyone with a dental problem or emergency to consult with a dentist 24/7, schedule a local appointment in 24-48 hours, and even, if required, have a non-narcotic prescription sent to a local pharmacy. Cost for the DTC service is not yet disclosed. To date, the Kansas City-based company has provided its dental network services through several telehealth and telemedicine service providers, such as Call A Doctor Plus, as well as several brick-and-mortar clinic locations.

If dentistry sounds logical for telemedicine, consider that about 2 million people annually in the US use ERs for dental emergencies, and 39 percent didn’t visit a dentist last year. Yet teledentistry is just getting started and is unusually underdeveloped, aside from the retail tooth aligners. Several US groups are piloting it with community health and underserved groups, and Philips is reportedly considering a trial in Europe (mHealth Intelligence). This Editor notes that a co-founder of Teladoc is on their advisory board. Release

The TeleDentists’ co-founder, Maria Kunstadter, DDS, last week announced the arrival of a new company CEO, Howard Reis. Mr. Reis started with health tech back in the 1990s with Nynex Science and Technology piloting telemedicine clinical trials at four Boston hospitals, which qualifies him among the most Grizzled Pioneers. He also was business development VP for Teleradiology Specialists and founding partner of The Castleton Group, a LTC telehealth company, and has worked in professional services for Accenture, Telmarc and SAIC/Bellcore. Most recently, he started teleradiology/telehealth firm HealthePractices. Over the past few years, Mr. Reis has also been prominent in the NY metro digital health scene. Congratulations and much success!  

In the UK, the University of Warwick has unveiled a spinoff, Augmented Insights Ltd. It will concentrate on machine learning and AI services that analyze long-term health and care data, automating the extraction in real time of personalized, predictive, and preventative insights from ongoing patient data. It will be headed by Dr. James Amor, whom this Editor met last summer in NYC. Long-term plans center on marketing their analytics services to tech providers. Interested parties or potential users may contact Dr. Amor in Leamington Spa at James@augmentedinsights.co.uk. Congratulations to Dr. Amor and his team! 

And in more Grizzled Pioneer news, there’s a new CEO at GrandCare Systems who has been engaged with the company since nearly its start in 1993 and in its present form since 2005. Laura Mitchell takes the helm as CEO after various positions there, including Chief Marketing Officer, and several years leading her own healthcare and marketing consulting firm. Nick Mitchell rejoins as chief technology officer and lead software developer. Founders Charlie Hillman and Gaytha Traynor remain, as advisor and COO respectively. Their offices have also moved to the Kreilkamp Building, 215 N Main Street, Suite 130, in downtown West Bend, Wisconsin. GrandCare remains a ‘family affair’, as this profile notes. Congratulations–again!

AI and machine learning ‘will transform clinical imaging practice over the next decade’

The great challenges in radiology are accuracy of diagnosis and speed. Yet for radiology, machine learning and AI systems are still in early stages. Last August, a National Institutes of Health (NIH)-organized workshop with the Radiological Society of North America (RSNA), the American College of Radiology (ACR), and The Academy for Radiology and Biomedical Imaging Research (The Academy) kickstarted work towards AI. Their goal was to collaborate on machine learning/AI applications for diagnostic medical imaging, identify knowledge gaps, and roadmap research needs for academic research laboratories, funding agencies, professional societies, and industry.

The report of this roadmap was published in the past few days in Radiology, the RSNA journal. Research priorities in the report included:

  • new image reconstruction methods that efficiently produce images suitable for human interpretation from source data
  • automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting
  • new machine learning methods for clinical imaging data, such as tailored, pre-trained model architectures, and distributed machine learning methods
  • machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence)
  • validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets.
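The ‘explainable AI’ priority above can be illustrated with one of the simplest model-agnostic techniques, permutation importance: scramble one input feature and see how much the model’s error grows. The tiny stand-in ‘model’ and data below are invented for illustration; real imaging models and their explanation methods are far more elaborate.

```python
# Permutation importance sketch: a feature the model relies on hurts accuracy
# when scrambled; an ignored feature does not.

def model(x1, x2):
    # Stand-in for a trained model that, in fact, only uses x1.
    return 3 * x1

def mse(x1s, x2s, targets):
    """Mean squared error of the model over paired feature lists."""
    preds = [model(a, b) for a, b in zip(x1s, x2s)]
    return sum((p - y) ** 2 for p, y in zip(preds, targets)) / len(targets)

x1s     = [1, 2, 3, 4]
x2s     = [5, 9, 2, 7]
targets = [3, 6, 9, 12]

baseline    = mse(x1s, x2s, targets)                    # 0.0: fits exactly
x1_scrambled = mse(list(reversed(x1s)), x2s, targets)   # error jumps: x1 matters
x2_scrambled = mse(x1s, list(reversed(x2s)), targets)   # unchanged: x2 ignored

print(baseline, x1_scrambled, x2_scrambled)  # 0.0 45.0 0.0
```

The gap between the scrambled and baseline errors is a crude ‘reason’ the model can offer a human user: which inputs its advice actually depends on.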

Another aim is to reduce clinically important errors, estimated at 3 to 6 percent of image interpretations by radiologists. Diagnostic errors play a role in up to 10 percent of patient deaths, according to this report.

It is interesting that machine learning, more than AI, is mentioned in the RSNA materials, for instance in stating that “Machine learning algorithms will transform clinical imaging practice over the next decade. Yet, machine learning research is still in its early stages.” Radiology actually pioneered store-and-forward technology, to the point where radiology interpretation has been farmed out nationally and globally for many years. This countered a decline in US radiologists as a percentage of the physician workforce that started in the late 1990s and continues today, with some positive trends (Radiology 2015). Perhaps this distribution model postponed the development of machine learning technologies. Also Healthcare Dive, RSNA press release  

News roundup: Virginia includes RPM in telehealth, Chichester Careline changes, Sensyne AI allies with Oxford, Tunstall partners in Scotland, teledermatology in São Paulo

Virginia closes in on including remote patient monitoring in telehealth law. Two bills in the Virginia legislature, House Bill 1970 and Senate Bill 1221, include remote patient monitoring (RPM) within the present telehealth and telemedicine guidelines and payment in state commercial insurance and the commonwealth’s Medicaid program. Both are currently moving forward in House and Senate committees with amendments. RPM is defined as “the delivery of home health services using telecommunications technology to enhance the delivery of home health care, including monitoring of clinical patient data….” Both were filed on 9 January. Virginia was an early adopter of parity payment of telemedicine with in-person visits. The University of Virginia has been a pioneer in telehealth research and is the home of the Mid-Atlantic Telehealth Resource Center. mHealth Intelligence

Chichester Careline switches to PPP Taking Care. Chichester Careline is a 24/7 careline service currently provided by Chichester District Council. Starting 1 March, PPP Taking Care, part of AXA PPP Healthcare, will manage the service. According to the Chichester release, costs will remain the same, technology will be upgraded, and telecare services will be added. Over the past 35 years, Chichester Careline has assisted over 1 million people across Britain. 

Sensyne collaborates with University of Oxford’s Big Data Institute (BDI) on chronic disease. The three-year program will use Sensyne’s artificial intelligence for research on chronic kidney disease and cardiovascular disease. Sensyne analyzes large databases of anonymized patient data in collaboration with NHS Trusts. BDI’s expertise is in population health, clinical informatics, and machine learning. Their joint research will concentrate on two major elements within long-term chronic disease to derive new datasets: automating physician notes into a structure that can be analyzed by AI, and integrating that analysis into remote patient monitoring. Release.

Tunstall partners with Digital Health & Care Institute Scotland. The partnership is in the Next Generation Solutions for Healthy Ageing cluster. Digital Health & Care supports the Scottish Government’s TEC Programme and the Digital Telecare Workstream. The program’s goals are to help Scots live longer, healthier lives and also create jobs.  Building Better Healthcare UK

Teledermatology powered by machine learning helps to solve a specialist shortage in São Paulo. Brazil has nationalized healthcare with nowhere near enough specialists. São Paulo is a city of 20 million inhabitants, so large and spread out that when the aircraft crew announces the approach to the airport, it takes two hours to touch the runway. The dermatology waitlist was up to 60,000 patients, each waiting 18 months to see a doctor. The solution: call every patient and instruct them to go to a doctor or nurse to take a picture of the skin condition. The photo is then analyzed and prioritized by an algorithm, with a check by dermatologists, to determine the level of treatment. Thirty percent needed to see a dermatologist; only 3 percent needed a biopsy. The accuracy level is about 80 percent, and plans are in progress to scale it to the rest of Brazil. Mobihealthnews.

Comings and goings: CVS-Aetna finalizing, Anthem sued over merger, top changes at IBM Watson Health

What better way to introduce this new feature than with a picture of a Raymond Loewy-designed 1947 Studebaker Starlight Coupe, about which wags of the time joked that you couldn’t tell whether it was coming or going?

Is it the turkey or the stuffing? In any case, it will be the place you’ll be going for the Pepto. The CVS-Aetna merger, CVS says, will close by Thanksgiving. This is despite various objections floated by California’s insurance commissioner, New York’s financial services superintendent, and the advocacy group Consumers Union. CEO Larry Merlo is confident that all three can be dealt with rapidly; with thumbs up from 23 of the 28 states needed, CVS is close to getting the remaining five, including resolving California and NY. The Q3 earnings call was buoyant, with CVS exceeding its projected overall revenue at $47.3 billion, up 2.4% or $1.1 billion from the same quarter in 2017. The divestiture of Aetna’s Medicare Part D prescription drug plans to WellCare, helpful in speeding the approvals, will not take effect until 2020. Healthcare Dive speculates, as we did, that a merged CVS-Aetna will be expanding MinuteClinics to create urgent care facilities where it makes sense–it is not a big lift. And they will get into this far sooner than Amazon, which will split its ‘second headquarters’ among the warehouses and apartment buildings of Long Island City and the office towers of Crystal City VA.

Whatever happened to the Delaware Chancery Court battle between Anthem and Cigna? Surprisingly, no news from Wilmington, but that didn’t stop Anthem shareholder Henry Bittmann from suing both companies this week in Marion (Indiana) Superior Court. The basis of the suit is that Anthem willfully went ahead with the attempted merger despite knowing that its member plans under the Blue Cross Blue Shield Association meant the merger was doomed to fail, and that it intended all along for “Anthem to swallow, and then sideline, Cigna to eliminate a competitor, in violation of the antitrust laws.” On top of this, both companies hated each other. A match made in hell. Cigna has moved on with its money and bought Express Scripts.

IBM Watson Health division head Deborah DiSanzo departs, to no one’s surprise. Healthcare IT News received confirmation from IBM that Ms. DiSanzo will be joining IBM Cognitive Solutions’ strategy team, though no capacity or title was stated. She was hired from Philips to lead the division through some high-profile years, starting her tenure along with the splashy new Cambridge HQ in 2015, but setbacks mounted later as Watson’s massive data crunching and compilation was outflanked by machine learning, other AI methodologies, and blockchain. According to an article in STAT+ (subscription needed), the glitches in their patient record language processing software never got fixed in ‘Project Josephine’, and that was it for her. High-profile partner departures in the past year, such as MD Anderson Cancer Center, plus troubles and lack of growth at acquired companies, topped by the damning IEEE Spectrum and Der Spiegel articles, made it not if, but when. No announcement yet of a successor.

Coffee break reading: a ‘thumbs down’ on IBM Watson Health from IEEE Spectrum and ‘Der Spiegel’

In a few short years (2012 to now), IBM Watson Health has gone from 9,000 lb Harbinger of the Future to Flopping Flounder. First it was MD Anderson Cancer Center at the University of Texas kicking Watson to the curb last year [TTA 22 Feb 17] after spending $62 million; then all these machine learning, blockchain, and AI upstarts doing most of what Watson was going to do, but cheaper and faster, which this Editor observed early on [TTA 3 Feb 17]. At the end of May, IBM laid off hundreds of workers, primarily at three recently acquired data analytics companies. All came on board as market leaders with significant books of business: Phytel, Explorys, and Truven. Clients have evaporated; Phytel, ranked #1 by KLAS in analytics for its patient communication system before the acquisition, reportedly went from 150 to 80 clients. IBM denies the layoffs were anything but much-needed post-acquisition restructuring and refocusing on high-value segments of the IT market.

IEEE Spectrum rated the causes as corporate mismanagement (mashing Phytel and Explorys; IBM’s ‘bluewashing’ acquired companies; the inept ‘offering management’ product development process; the crushed innovation) plus inroads made by competition (those upstarts again!). What’s unusual is the sourcing from former engineers–IEEE is the trade group for tech and engineering professionals. The former IBM-ers were willing to talk in detail and depth, albeit anonymously. 

Der Spiegel takes the German and clinical perspective on what has gone wrong at IBM Watson Health, starting with the well-documented failures of Watson at hospitals in Marburg and Giessen. The CEO of Rhön-Klinikum AG, which owns the university hospital at Marburg, reviewed it in action in February. “The performance was unacceptable — the medical understanding at IBM just wasn’t there.” It stumbled over diagnoses that even a first-year resident would have considered. The test at Marburg ended before a single patient was treated.

The article also outlines several reasons why, including that Watson, after all this time, still has trouble crunching real doctor and physical data. It does not comprehend physician shorthand and negation language, which this Editor imagines is multiplied in languages other than American English. “Some are even questioning whether Watson is more of a marketing bluff by IBM than a crowning achievement in the world of artificial intelligence.” More scathingly, the Rhön-Klinikum AG CEO: “IBM acted as if it had reinvented medicine from scratch. In reality, they were all gloss and didn’t have a plan. Our experts had to take them by the hand.”

Hardly The Blue Wave of the Future. Perhaps the analogy is Dr. Watson as The Great Oz.

Digital health is not here. Or it is. Or it’s still “the future” and we’re waiting for the ship to come in.

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2016/06/long-windy-road.jpg” thumb_width=”150″ /]Another bit of convergence this week and last is the appearance of several articles, closely together, about digital health a/k/a health tech or ‘Dr. Robot’. It seems like that for every pundit, writer, and guru who believes “We’ve Arrived”, there’s some discouraging study or contra-news saying “We’re Nowhere Near The New Jerusalem”. This Editor’s been on the train since 2006 (making her a Pioneer but not as Grizzled as some), and wonders if we will ever Get There. 

Nearing Arrival is the POV of Naomi Fried’s article in Mobihealthnews giving her readers the keys to unlock digital health. “Digital health will be the dominant form of non-acute care.” It has value in chopping through the thicket of the low clinical impact technologies that dominate the current scene (Research2Guidance counted only 325,000 health apps and 3.6bn downloads in 2017). Where the value lies:

  1. Diagnosis and evaluation–devices that generate analyzable data
  2. Virtual patient care–telehealth and remote patient monitoring
  3. Digiceuticals–digital therapeutics delivered via apps
  4. Medication compliance–apps, sensors, games, ingestibles (e.g. Proteus) 

At the Arrival Platform and changing the timetable is machine learning. Algorithms have already grown into artificial neural networks that mimic animal learning behavior. Though the descriptions sound like trial and error, the cycles run fast on cheap, plentiful cloud computing. Machine learning can already accurately diagnose skin cancer, lung cancer, seizure risk, and in-hospital events like mortality [TTA 14 Feb]. How to regulate these systems is being debated, and according to Editor Charles Lowe it will be quite difficult [TTA 25 Oct 17]. Machine learning’s effect on diagnosis, prognosis, and prediction may be seismic. Grab a coffee for The Training Of Dr. Robot: Data Wave Hits Medical Care (Kaiser Health News). Hat tip to EIC Emeritus Steve Hards.

The (necessary?) bucket of Cold Water comes from KQED Science, which looked at two studies and more and deduced that the Future Isn’t Here. Yet:

  1. NPJ Digital Medicine’s 15 Jan meta-analysis of 16 remote patient monitoring (RPM) studies using biosensors (from an initial scan of 777 studies) found little evidence that RPM improves outcomes. The researchers found that many patients are not yet interested in or willing to share RPM data with their physicians. That only 16 randomized controlled trials (RCTs) made the cut is itself indicative of the lack of maturity (or research priority) of RPM. 
  2. In JMIR 18 Jan, a systematic review of 23 systematic reviews covering 371 studies found that the efficacy of mobile health interventions was limited overall, but there was moderate-quality evidence of improvement in asthma patients, attendance rates, and smoking abstinence rates. 

Even a cute tabletop socially assistive robot given to COPD patients, which increased inhaler medication adherence by 20 points, doesn’t seem to cut hospital readmissions. The iRobot Yujin Robot helps patients manage their condition through medication and exercise adherence, and lets patients report that they are feeling unwell so that a clinician can check on them through text or phone and, if needed, send them to their regular doctor. The University of Auckland researchers recommended improvements to the robot, integration with the healthcare system, and comparisons to other remote monitoring technology. JMIR (18 Feb), Mobihealthnews.

As Dr. Robert Wachter of UCSF put it to the KQED reporter, we’re somewhere on the Gartner Hype Cycle past the Peak of Inflated Expectations. But this uneven picture may actually be progress. Perhaps we are moving somewhere between the Slough (ok, Trough) of Disillusionment and the Slope of Enlightenment, which is why it’s so confusing?

Google ‘deep learning’ model more accurately predicts in-hospital mortality, readmissions, length of stay in seven-year study

A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed a better predictive model for in-hospital outcomes using ‘deep learning’, a/k/a machine learning or AI. Using a single data structure and the FHIR (Fast Healthcare Interoperability Resources) standard for each patient’s EHR record, the researchers used de-identified EHR-derived data from over 216,000 patients hospitalized for over 24 hours from 2009 to 2016 at UCSF and UCM. Over 47bn data points were utilized.

The researchers then developed predictive models in four areas: mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient’s problems). The models outperformed traditional predictive models in all cases and, because they used a single data structure, are projected to be highly scalable. For instance, the model’s accuracy for mortality was achieved 24-48 hours earlier than with traditional models (page 11). The second part of the study concerned a neural-network attribution system through which clinicians can gain transparency into the predictions. Available through Cornell University Library: abstract and PDF.
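The study’s central move, mapping every patient’s record into one generic timestamped event sequence rather than hand-curated variables, can be sketched minimally. The FHIR-like record and field names below are invented for illustration and are not the actual schema the authors used.

```python
# Flatten a FHIR-style bundle into a chronological (time, event-token) list,
# the kind of uniform sequence a deep learning model can consume directly.
import json

record = json.loads("""
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation", "code": "heart-rate",
                  "value": 88, "time": "2016-03-01T08:00:00"}},
    {"resource": {"resourceType": "MedicationAdministration",
                  "code": "aspirin", "time": "2016-03-01T09:30:00"}},
    {"resource": {"resourceType": "Observation", "code": "temperature",
                  "value": 38.2, "time": "2016-03-01T10:00:00"}}
  ]
}
""")

def to_event_sequence(bundle):
    """Turn every resource in the bundle into a (time, token) pair."""
    events = []
    for entry in bundle["entry"]:
        r = entry["resource"]
        token = f'{r["resourceType"]}:{r["code"]}'
        events.append((r["time"], token))
    return sorted(events)  # ISO-8601 strings sort chronologically

for when, token in to_event_sequence(record):
    print(when, token)
```

Because every data type becomes the same kind of event, the same pipeline works across hospitals and EHR vendors–the scalability the paragraph above describes.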

The MarketWatch article rhapsodizes about these models and neural networks’ potential for cutting healthcare costs but also illustrates the drawbacks of large-scale machine learning and AI: what’s in the EHR including those troublesome clinical notes (the study used three additional deep neural networks to discern which bits of the clinical data within the notes were relevant), lack of uniformity in the data sets, and most patient data not being static (e.g. temperature). 

And Google will make the chips which will get you there. Google’s Tensor Processing Units (TPUs), developed for its own services like Google Assistant and Translate, as well as powering identification systems for driverless cars, can now be accessed through their own cloud computing services. Kind of like Amazon Web Services, but even more powerful. New York Times

Themes and trends at Aging2.0 OPTIMIZE 2017

Aging2.0 OPTIMIZE, in San Francisco on Tuesday and Wednesday 14-15 November, annually attracts the top thinkers and doers in innovation and aging services. It brings together academia, designers, developers, investors, and senior care executives from all over the world to rethink the aging experience in both immediately practical and long-term visionary ways.

Looking at OPTIMIZE’s agenda, there are major themes that are on point for major industry trends.

Reinventing aging with an AI twist

What will aging be like during the next decades of the 21st Century? What must be done to support quality of life, active lives, and more independence? From nursing homes with more home-like environments (Green House Project) to Bill Thomas’ latest project–‘tiny houses’ that support independent living (Minkas)—there are many developments which will affect the perception and reality of aging.

Designers like Yves Béhar of fuseproject are rethinking home design as a continuum that supports all ages and abilities in what they want and need. Beyond physical design, these new homes are powered by artificial intelligence (AI) and machine learning technology that support wellness, engagement, and safety. Advances that are already here include voice-activated devices such as Amazon Alexa, virtual reality (VR), and IoT-enabled remote care (telehealth and telecare).

For attendees at Aging2.0, there will be substantial discussion of AI’s impact and implications, highlighted at Tuesday afternoon’s general session ‘AI-ging Into the Future’ and in Wednesday’s AI/IoT-related breakouts. AI is powering breakthroughs in social robotics and predictive health, the latter using sensor-based ADL and vital signs information for wellness, fall prevention, and dementia care. Some companies that are part of this conversation are CarePredict, EarlySense, SafelyYou, and Intuition Robotics.

Thriving, not surviving

Thriving in later age, not simply ‘aging in place’ or compensating for the loss of ability, must engage the community, the individual, and providers. There’s new interest in addressing interrelated social factors such as isolation, life purpose, food, healthcare quality, safety, and transportation. Business models and connected living technologies can combine to redesign post-acute care for better recovery, to prevent unnecessary readmissions, and provide more proactive care for chronic diseases as well as support wellness.

In this area, OPTIMIZE has many sessions on cities and localities reorganizing to support older adults in social determinants of health, transportation innovations, and wearables for passive communications between the older person and caregivers/providers. Some organizations and companies contributing to the conversation are grandPad, Village to Village Network, Lyft, and Milken Institute.

Technology and best practices positively affect the bottom line

How can senior housing and communities put innovation into action today? How can developers make it easier for them to adopt innovation? Innovations that ‘activate’ staff and caregivers create a multiplier for a positive effect on care. Successful rollouts create a positive impact on both the operations and financial health of senior living communities.

(more…)

AI good, AI bad (part 2): the Facebook bot dialect scare

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/08/ghosty.jpg” thumb_width=”175″ /]Eeek! Scary! Bots develop their own argot. Facebook AI Research (FAIR) tested two chatbots programmed to negotiate. In short order, they developed “their own creepy language”, in the words of the Telegraph, to trade their virtual balls, hats, and books. “Creepy” to FAIR was only a repetitive ‘divergence from English’, since the chatbots weren’t limited to standard English. The lack of restriction enabled them to develop their own argot to quickly negotiate those trades. “Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research. “This isn’t so different from the way communities of humans create shorthands”–like soldiers, stock traders, the slanguage of showbiz mag Variety, or teenagers. Because Facebook’s interest is in AI bot-to-human conversation, FAIR put in the requirement that the chatbots use standard English, which as it turns out is a handful for bots.

The danger in AI-to-AI divergence in language is that humans don’t have a translator for it yet, so we’d never quite understand what they are saying. Batra’s unsettling conclusion: “It’s perfectly possible for a special token to mean a very complicated thought. The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” So this shorthand can look like longhand? FastCompany/Co.Design’s Mark Wilson sees the upside–that software talking their own language to each other could eliminate complex APIs–application program interfaces, which enable different types of software to communicate–by letting the software figure it out. But for humans not being able to dig in and understand it readily? Something to think about as we use more and more AI in healthcare and predictive analytics.

AI good, AI bad. Perhaps a little of both?

Everyone’s getting hot ‘n’ bothered about AI this summer. There’s a clash of giants–Elon Musk, who makes expensive, Federally subsidized electric cars that don’t sell, and Mark Zuckerberg, a social media mogul who fancies himself a social policy guru–in a current snipe-fest about AI and the risk it presents. Musk, a founder of the big-name Future of Life Institute, which ponders AI safety and ethical alignment for beneficial ends, and Zuckerberg, who pooh-poohs any downside, are making their debate points and a few headlines. However, we like to get down to concretes, so here we turn to an analysis of a Forrester Research report on AI in the workforce. No, we are not about to lose our jobs–yet–but hold on for the top six in the view of Gil Press in Forbes:

  1. Customer self-service in customer-facing physical solutions such as kiosks, interactive digital signage, and self-checkout.
  2. AI-assisted robotic process automation which automates organizational workflows and processes using software bots.
  3. Industrial robots that execute tasks in verticals with heavy, industrial-scale workloads.
  4. Retail and warehouse robots.
  5. Virtual assistants like Alexa and Siri.
  6. Sensory AI that improves computers’ recognition of human sensory faculties and emotions via image and video analysis, facial recognition, speech analytics, and/or text analytics.
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/08/AI.jpg” thumb_width=”200″ /]For our area of healthcare technology, look at #5 and #6 first–virtual assistants entering the older adult market, like 3rings‘ interface with Amazon Echo [TTA 27 June], and sensory AI for recognition tools with broad applications in everything from telehealth to sleepytime music to video cheer-up calls. Both are on a ‘significant success’ track and in line to hit the growth phase in 1-3 years (illustration at left, click to expand).

Will AI destroy a net 7 percent of US jobs by 2027? Will AI affect only narrow areas or disrupt everything? And will we adapt fast enough? 6 Hot AI Automation Technologies Destroying And Creating Jobs (Forbes)

But we can de-stress ourselves with AI-selected music now to soothe our savage interior beasts. This Editor is testing out Sync Project’s Unwind, which promises to help me get to sleep (20 min) and take stress breaks (5 min). Clutching my phone (not my pearls) to my chest, the app (available on the unwind.ai website) detects my heart rate through machine learning (though it doesn’t give me a reading) and offers four options for exactly how stressed I am. It then plays music with the right beat pattern to calm me down. Other Sync Project applications, with custom music by Marconi Union and a Spotify interface, have worked to aid sleep and alleviate pain, stress, and Parkinson’s gait issues. Another approach is to apply music to memory issues around episodic memory and the encoding of new verbal material in normally aging adults. (Zzzzzzzz…..) Apply.sci, Sync Project blog
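The tempo-matching idea behind an app like Unwind can be sketched in a few lines: start the music near the listener’s current heart rate, then step the beat down toward a resting tempo. The resting rate, step count, and formula below are all invented for illustration–Sync Project has not published its actual algorithm.

```python
def target_tempos(heart_rate_bpm: int, resting_bpm: int = 60, steps: int = 4):
    """Return a descending sequence of music tempos (BPM) intended to
    entrain the listener from their current heart rate down to resting."""
    step = (heart_rate_bpm - resting_bpm) / steps
    return [round(heart_rate_bpm - step * i) for i in range(1, steps + 1)]

# A stressed listener at 92 bpm would hear progressively slower tracks.
print(target_tempos(92))  # [84, 76, 68, 60]
```

A real system would also account for the music’s other features (key, dynamics, instrumentation), but the descending-tempo ramp captures the core entrainment idea.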

Health and tech news that’s a snooze–or infuriating

The always acerbic Laurie Orlov has a great article on her Aging in Place Technology Watch that itemizes five news items covering the infuriating, the failing, or the downright puzzling as they affect health and older adults. In the last category, there’s the ongoing US Social Security Administration effort to eliminate paper statements and checks in favor of online-only statements and direct deposit–problematic for many of the oldest adults, the disabled, and those without reasonable, secure online access–or regular checking accounts. The infuriating is Gmail’s latest ‘upgrade’ to its mobile email, which adds three short ‘smart reply’ boxes to the end of nearly every email. Other than sheer laziness and enabling emailing while driving, it’s not needed–and to turn it off, you have to go into your email settings. And for the failing, there’s IBM. There’s the stealth layoff–forcing an estimated 40 percent of its employees who work remotely to relocate to brick-and-mortar offices or leave, even as IBM sells remote-working software. There’s a falloff in revenue, meaning that profits have to be squeezed from a rock. And finally, there’s the extraordinarily expensive investment in Watson and Watson Health. This Editor noted the growing misgivings back in February [TTA 3 and 14 Feb], observing that focused AI and simple machine learning are developing quickly and affordably for healthcare diagnostic applications. Watson Health and its massive, slow, and expensive data crunching for healthcare decision support are suitable only for complex diseases and equally massive healthcare organizations–and even they have been displeased, such as MD Anderson Cancer Center in Houston in February (Forbes). Older adults and technology – the latest news they cannot use

Want to attract Google Ventures to your health tech? Look to these seven areas.

The GV Hot 7, especially the finally-acknowledged physician burnout. Google Ventures’ (GV) Dr. Krishna Yeshwant, a GV general partner leading the Life Sciences team, is interested in seven areas, according to his interview in Business Insider (UK):

  • Physician burnout, which has become epidemic as doctors (and nurses) spend more and more time with their EHRs versus patients. This is Job #1 in this Editor’s opinion.

Dr. Yeshwant’s run-on question to be solved is: “Where are the places where we can intervene to continue getting the advantages of the electronic medical record while respecting the fact that there’s a human relationship that most people have gotten into this for that’s been eroded by the fact that there’s now a computer that’s a core part of the conversation.” (Your job–parse this sentence!–Ed.)

Let’s turn to Dr. Robert Wachter for a better statement of the problem. This Editor was present for his talk at the NYeC Digital Health Conference [TTA 19 Jan]; these are quoted from his slides: “Burnout is associated with computerized order entry use and perceived ‘clerical burden’ [of EHRs and other systems]”. He also cites the digital squeeze on physicians and the Productivity Paradox, summed up by economist Robert Solow as “You can see the computer age everywhere except in the productivity statistics.” In other words, EHRs are a major thief of time. What needs to happen? “Improvements in the technology and reimagining the work itself.” Citing Mr. Solow again, Dr. Wachter estimates the Productivity Paradox in healthcare will take 15-20 years to resolve. Dr. Wachter’s talk is here. (more…)

Babylon Health ‘chatbot’ triage AI app raises £50 million in funding (UK)

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/04/babylon_lifestyle2.jpg” thumb_width=”150″ /]Babylon Health, which has developed an AI-assisted chatbot to triage a potential patient in minutes, has raised a serious Series B of £50 million (US$60 million). Funders were Kinnevik AB, which had led the Series A, NNC Holdings, and Vostok New Ventures (Crunchbase). According to the FT (through TechCrunch), Babylon’s value is now north of $200 million. Revenues were not disclosed.

The current app uses texts to determine the level of further care and recommends a course of action, then connects the user if needed to a virtual doctor visit or, if acute, directs them to Accident & Emergency (US=emergency room or department). It also follows up with the user on their test results and health info. The funding will be used to enhance the current AI and extend it to diagnosis. Babylon is accumulating data daily on thousands of patients; machine learning on that data further refines the AI. Dr. Ali Parsa, founder and CEO of Babylon, said in a statement: “Babylon scientists predict that we will shortly be able to diagnose and foresee personal health issues better than doctors, but this is about machines and medics cooperating, not competing.” Like other forms of telemedicine and triage (Zipnosis in health systems), it is designed to put healthcare access and affordability, as they claim, “into the hands of every person on earth”. The NHS pilot in north London [TTA 18 Jan] via the 111 hotline is testing Babylon as a ‘reliever’, though it directs only to a doctor appointment, not a video consult. BBC News, Mobihealthnews
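The triage routing described above–free-text symptoms in, a coarse care recommendation out–can be sketched as a simple rule-based step. All keywords, categories, and routing labels below are invented for illustration; Babylon’s actual system is a trained model over structured clinical data, not a keyword lookup.

```python
# Hypothetical red-flag and routine-symptom keyword sets for illustration.
EMERGENCY_FLAGS = {"chest pain", "difficulty breathing", "severe bleeding"}
GP_FLAGS = {"fever", "persistent cough", "rash"}

def triage(symptom_text: str) -> str:
    """Return a coarse care recommendation from free-text symptoms."""
    text = symptom_text.lower()
    if any(flag in text for flag in EMERGENCY_FLAGS):
        return "A&E"          # acute: direct to Accident & Emergency
    if any(flag in text for flag in GP_FLAGS):
        return "virtual GP"   # connect to a virtual doctor visit
    return "self-care"        # advice only, with follow-up

print(triage("I have chest pain and feel dizzy"))  # A&E
```

Even this toy version shows why the NHS 111 pilot positions the chatbot as a ‘reliever’: it sorts the obvious cases quickly and reserves clinicians for the rest.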

AI as patient safety assistant that reduces, prevents adverse events

The 30-year-old SXSW conference and cultural event has been rising as a healthcare venue for the past few years. One talk this Editor would like to have attended this past weekend was presented by Eric Horvitz, Microsoft Research Laboratory Technical Fellow and managing director, who is both a Stanford PhD in computing and an MD. This combination makes him a unique warrior against medical errors, which annually kill over 250,000 patients. His point was that artificial intelligence is increasingly used in tools that are ‘safety nets’ for medical staff in situations such as failure to rescue–the inability to treat complications that rapidly escalate–readmissions, and analyzing medical images.

A readmissions clinical support tool he worked on eight years ago, RAM (Readmissions Management), now produced by Caradigm, predicts which patients have a high probability of readmission and will need additional care. Failure to rescue often results from a concatenation of complications happening quickly, with a lack of knowledge that resembles the prelude to an aircraft crash. “We’re considering [data from] thousands of patients, including many who died in the hospital after coming in for an elective procedure. So when a patient’s condition deteriorates, they might lose an organ system. It might be kidney failure, for example, so renal people come in. Then cardiac failure kicks in, so cardiologists come in and they don’t know what the story is. The actual idea is to understand the pipeline down to the event so doctors can intervene earlier,” and to understand the patterns that led up to it. Another aim is to address potential problems that may be outside the doctor’s direct knowledge or experience, including the Bayesian Theory of Surprise affecting the thought process. Dr. Horvitz also discussed how machine learning can assist medical imaging and interpretation. His points were that AI and machine learning, applied to thousands of patient cases and images, are there to assist physicians, not to replace them or the human touch. MedCityNews
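Readmission prediction of the kind RAM performs is, at its core, risk scoring over patient features. Here is a minimal sketch in that spirit: the features, weights, and threshold are entirely invented for illustration and are not taken from Caradigm’s product, which trains its model on real hospital records.

```python
import math

# Hypothetical coefficients for a logistic readmission-risk score.
WEIGHTS = {
    "prior_admissions": 0.45,   # count in the last 12 months
    "age_over_75": 0.30,        # 1 if patient is over 75, else 0
    "lives_alone": 0.25,        # 1 if no caregiver at home, else 0
}
BIAS = -2.0

def readmission_risk(patient: dict) -> float:
    """Logistic score in [0, 1]: a rough 30-day readmission probability."""
    z = BIAS + sum(w * patient.get(f, 0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def flag_for_followup(patient: dict, threshold: float = 0.5) -> bool:
    """Flag high-risk patients for additional post-discharge care."""
    return readmission_risk(patient) >= threshold

high = {"prior_admissions": 4, "age_over_75": 1, "lives_alone": 1}
low = {"prior_admissions": 0, "age_over_75": 0, "lives_alone": 0}
print(flag_for_followup(high), flag_for_followup(low))  # True False
```

The point of tools like RAM is what happens with the flag: the high-risk patient gets the additional care that prevents the readmission, keeping the clinician, not the model, in the loop.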