Coffee break reading: a ‘thumbs down’ on IBM Watson Health from IEEE Spectrum and ‘Der Spiegel’

In a few short years (2012 to now), IBM Watson Health has gone from being a 9,000 lb Harbinger of the Future to a Flopping Flounder. First it was MD Anderson Cancer Center at the University of Texas kicking Watson to the curb last year [TTA 22 Feb 17] after spending $62 million; then came all those machine learning, blockchain, and AI upstarts doing most of what Watson was going to do, but cheaper and faster, as this Editor observed early on [TTA 3 Feb 17]. At the end of May, IBM laid off hundreds of workers, primarily at three recently acquired data analytics companies: Phytel, Explorys, and Truven. All came on board as market leaders with significant books of business. Clients have evaporated; Phytel, ranked #1 by KLAS in analytics for its patient communication system before the acquisition, reportedly went from 150 to 80 clients. IBM denies the layoffs were anything but much-needed post-acquisition restructuring and a refocus on high-value segments of the IT market.

IEEE Spectrum attributed the causes to corporate mismanagement (mashing together Phytel and Explorys; IBM's 'bluewashing' of acquired companies; the inept 'offering management' product development process; crushed innovation) plus inroads made by competition (those upstarts again!). What's unusual is the sourcing from former engineers–IEEE is the trade group for tech and engineering professionals. The former IBM-ers were willing to talk in detail and depth, albeit anonymously.

Der Spiegel takes the German and clinical perspective on what has gone wrong at IBM Watson Health, starting with the well-documented failures of Watson at hospitals in Marburg and Giessen. The CEO of Rhön-Klinikum AG, which owns the university hospital at Marburg, reviewed it in action in February. "The performance was unacceptable — the medical understanding at IBM just wasn't there." It stumbled over diagnoses that even a first-year resident would have considered. The test at Marburg ended before a single patient was treated.

The article also outlines several reasons why, including that Watson, after all this time, still has trouble crunching real-world physician and clinical data. It does not comprehend physician shorthand or negation language, a problem this Editor imagines is multiplied in languages other than American English. "Some are even questioning whether Watson is more of a marketing bluff by IBM than a crowning achievement in the world of artificial intelligence." More scathingly, the Rhön-Klinikum AG CEO: "IBM acted as if it had reinvented medicine from scratch. In reality, they were all gloss and didn't have a plan. Our experts had to take them by the hand."
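Negation is a classic stumbling block in clinical NLP: "denies fever" must not be indexed as a finding of fever. A minimal, NegEx-style sketch illustrates the idea; the trigger list, boundary tokens, and scope window below are toy assumptions for illustration, not IBM's actual pipeline.

```python
# Toy NegEx-style negation scoping: a finding is treated as negated
# if it appears within a few tokens after a negation trigger,
# with punctuation and contrast words ending the scope.
# Trigger list, boundaries, and window size are illustrative assumptions.

NEGATION_TRIGGERS = {"no", "denies", "without", "negative"}
BOUNDARIES = {";", "but", "however"}  # tokens that end a negation scope
SCOPE = 5  # how many tokens after a trigger the negation can reach

def negated_findings(note: str, findings: set) -> set:
    """Return the subset of `findings` that fall inside a negation scope."""
    text = note.lower().replace(",", " ").replace(";", " ; ").replace(".", " ")
    tokens = text.split()
    negated = set()
    for i, tok in enumerate(tokens):
        if tok in NEGATION_TRIGGERS:
            for t in tokens[i + 1 : i + 1 + SCOPE]:
                if t in BOUNDARIES:
                    break
                if t in findings:
                    negated.add(t)
    return negated
```

Run on "pt denies fever, no cough; edema present", this flags fever and cough as negated while leaving edema as a positive finding. Real clinical notes are far messier (double negation, abbreviations, hedging), which is exactly where rule-based and statistical systems alike struggle.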

Hardly The Blue Wave of the Future. Perhaps the analogy is Dr. Watson as The Great Oz.

News roundup: Walmart and Microsoft AI, are derm apps endangering public with 88% skin cancer diagnosis?

Walmart and Microsoft partner to change the retail experience via AI. The five-year agreement will switch over applications to the cloud and will affect shipping and supply chain. Healthcare Dive projects that the impact will be felt in healthcare as well. Microsoft announced last month that it is forming a unit to advance AI and cloud-based healthcare tools. The landscape is under extreme pressure in both retail and healthcare delivery, and Walmart needs to be ready for future moves which will certainly happen. Walmart is rumored to be interested in acquiring Humana and is currently working with Emory Healthcare in Atlanta. Then there is CVS-Aetna, Cigna-Express Scripts, Google, and (looming above all) Amazon. (Though you can tuck all the years of Amazon's profits into one year of Walmart's.)

The ITV News headline grabs attention — but are dermatology apps really endangering the public when teledermatology can help diagnose 88 percent of people with skin cancer and 97 percent of those with benign lesions? A University of Birmingham-led research team did a meta-study of the literature and found three failings: "a lack of rigorous published trials to show they work and are safe, a lack of input during the app development from specialists to identify which lesions are suspicious and flaws in how the technology analyses photos", particularly for scaly or non-pigmented melanomas. But did access to these apps encourage early diagnosis, which can lead to up to 100 percent five-year survival? Review is of course required, as the study recommends, but this last factor was not really examined at the British Association of Dermatologists' annual meeting in Edinburgh. University of Birmingham release with study abstract.

Google’s ‘Medical Brain’ tests clinical speech recognition, patient outcome prediction, death risk

Google’s AI division is eager to break into healthcare, and with ‘Medical Brain’ they might be successful. First is harnessing the voice recognition used in their Home, Assistant, and Translate products. Last year they started to test a digital scribe with Stanford Medicine to help doctors automatically fill out EHRs from patient visits, which will conclude in August. Next up, and staffing up, is a “next gen clinical visit experience” which uses audio and touch technologies to improve the accuracy and availability of care.

The third is research Google published last month on using neural networks to predict how long people may stay in hospitals, their odds of readmission, and their chances of dying soon. The neural net gathers up the previously ungatherable–old charts, PDFs–and transforms them into useful information. They are currently working with the University of California, San Francisco, and the University of Chicago with 46 billion pieces of anonymized patient data.

A successful test of the approach involved a woman with late-stage breast cancer. Based on her vital signs–for instance, her lungs were filling with fluid–the hospital's own analytics indicated that there was a 9.3 percent chance she would die during her stay. Google's model, using over 175,000 data points about her, came up with a far higher risk: 19.9 percent. She died shortly after.

Using AI to crunch massive amounts of data is an approach that has been tried by IBM Watson in healthcare with limited success. Augmedix, Microsoft, and Amazon are also attempting AI-assisted systems for scribing and voice recognition in offices. CNBC, Bloomberg

What if you crossed Alexa with a robotic healthcare manager?

You might have a tabletop 'companion robot' that's called, interestingly, Pillo. It doesn't look like something on a bed, nor does it ambulate; it's more like a souped-up, pastel-colored Alexa with Eyes. Debuting at HIMSS 2018 this week, what is non-Alexa-like about it is that it is a voice-responsive, Wi-Fi/Bluetooth-connected healthcare manager, interacting with the user on Alexa-type requests but mainly managing (nudging?) their care plan, reminding them of medical appointments, delivering patient education, and dispensing their pre-loaded medications in a cup. Pillo claims to use AI algorithms to manage care, proactively engage with patients, and recognize users via voice and facial recognition. Orbita is supplying the platform for the voice assistant technology.

Pillo appears to be targeted at users with chronic conditions who need assistance with care management, with a mobile app connecting to family caregivers and clinicians. There's no mention of a tracking platform or connectivity with medical devices such as glucose meters or blood pressure cuffs. According to Forbes, it will ship in the fourth quarter; no pricing was mentioned. Pillo raised $1.5 million in a venture round last August from BioAdvance (Crunchbase), with additional funding from Stanley Ventures, Hikma Ventures (the venture arm of Hikma Pharmaceuticals), and the Thompson Family Foundation for a total of $4m (Forbes). It's hard to tell if this will appeal to or be subsidized by pharma, payers, or Medicare primary care providers such as ACOs, because the release is rather opaque on specifics.

Digital health is not here. Or it is. Or it’s still “the future” and we’re waiting for the ship to come in.

Another bit of convergence this week and last is the appearance of several articles, closely together, about digital health a/k/a health tech or 'Dr. Robot'. It seems that for every pundit, writer, and guru who believes "We've Arrived", there's some discouraging study or contra-news saying "We're Nowhere Near The New Jerusalem". This Editor's been on the train since 2006 (making her a Pioneer, but not as Grizzled as some), and wonders if we will ever Get There.

Nearing Arrival is the POV of Naomi Fried's article in Mobihealthnews, giving her readers the keys to unlock digital health. "Digital health will be the dominant form of non-acute care." It has value in chopping through the thicket of low-clinical-impact technologies that dominate the current scene (Research2Guidance counted 325,000 health apps and 3.6bn downloads in 2017). Where the value lies:

  1. Diagnosis and evaluation–devices that generate analyzable data
  2. Virtual patient care–telehealth and remote patient monitoring
  3. Digiceuticals–digital therapeutics delivered via apps
  4. Medication compliance–apps, sensors, games, ingestibles (e.g. Proteus) 

At the Arrival Platform and changing the timetable is machine learning. Algorithms have already grown into artificial neural networks that mimic animal learning behavior. Though the descriptions sound like trial and error, these networks cycle rapidly thanks to cheap, fast cloud computing. Machine learning can already accurately diagnose skin cancer, lung cancer, seizure risk, and in-hospital events like mortality [TTA 14 Feb]. How to regulate these systems is being debated, and according to Editor Charles Lowe it will be quite difficult [TTA 25 Oct 17]. Returning to machine learning, its effect on diagnosis, prognosis, and prediction may be seismic. Grab a coffee for The Training Of Dr. Robot: Data Wave Hits Medical Care (Kaiser Health News). Hat tip to EIC Emeritus Steve Hards.

The (necessary?) bucket of Cold Water comes from KQED Science, which looked at two studies and more, and deduced that the Future Wasn't Here. Yet:

  1. NPJ Digital Medicine's 15 Jan meta-analysis of 16 remote patient monitoring (RPM) studies using biosensors (from an initial scan of 777) found little evidence that RPM improves outcomes. The researchers found that many patients are not yet interested in or willing to share RPM data with their physicians. The fact that only 16 randomized controlled trials (RCTs) made the cut is indicative of the lack of maturity (or priority on research) for RPM.
  2. In JMIR 18 Jan, a systematic review of 23 systematic reviews of 371 studies found that efficacy of mobile health interventions was limited, but there was moderate quality evidence of improvement in asthma patients, attendance rates, and increased smoking abstinence rates. 

Even a cute tabletop socially assistive robot given to COPD patients, which increased inhaler medication adherence by 20 points, doesn't seem to cut hospital readmissions. The iRobi robot (from Yujin Robot) helps patients manage their condition through medication and exercise adherence, and lets patients report that they are feeling unwell so that a clinician can check on them by text or phone and, if needed, send them to their regular doctor. The University of Auckland researchers recommended improvements to the robot, integration with the healthcare system, and comparisons to other remote monitoring technologies. JMIR (18 Feb), Mobihealthnews.

As Dr. Robert Wachter of UCSF put it to the KQED reporter, we're somewhere on the Gartner Hype Cycle past the Peak of Inflated Expectations. But this uneven picture may actually be progress. Perhaps we are moving somewhere between the Slough (ok, Trough) of Disillusionment and the Slope of Enlightenment, which is why it's so confusing?

Google ‘deep learning’ model more accurately predicts in-hospital mortality, readmissions, length of stay in seven-year study

A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed a better predictive model for in-hospital outcomes using 'deep learning' a/k/a machine learning or AI. Using a single data structure and the FHIR (Fast Healthcare Interoperability Resources) standard for each patient's EHR record, they used de-identified EHR-derived data from over 216,000 patients hospitalized for over 24 hours from 2009 to 2016 at UCSF and UCM. Over 47bn data points were utilized.

The researchers then looked at four areas to develop predictive models: mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient's problems). The models outperformed traditional predictive models in all cases and, because they used a single data structure, are projected to be highly scalable. For instance, the same accuracy as the traditional model for mortality was achieved 24-48 hours earlier (page 11). The second part of the study concerned a neural-network attribution system through which clinicians can gain transparency into the predictions. Available through Cornell University Library. Abstract, PDF.
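The study's core idea–represent each patient's EHR as one time-ordered sequence of FHIR-derived events, then learn an outcome from it–can be illustrated with a deliberately tiny sketch. The event vocabulary, toy data, and single-layer logistic model below are illustrative assumptions; the study itself used far larger deep networks.

```python
import math

# Deliberately tiny sketch: each patient's record becomes a sequence of
# FHIR-ish event tokens, from which an outcome such as in-hospital
# mortality is learned. Vocabulary and model are illustrative assumptions.

VOCAB = ["admit", "lab:normal", "lab:creatinine_high", "rx:diuretic", "icu_transfer"]

def featurize(events):
    """Bag-of-events counts over a fixed vocabulary (order is discarded)."""
    return [float(events.count(v)) for v in VOCAB]

def train(patients, labels, epochs=500, lr=0.5):
    """Logistic regression fitted by plain stochastic gradient descent."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for events, y in zip(patients, labels):
            x = featurize(events)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, events):
    """Predicted probability of the outcome for one patient sequence."""
    z = sum(wi * xi for wi, xi in zip(w, featurize(events))) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The real models kept event order and timing (recurrent and attention-based networks), which a bag-of-events toy necessarily throws away; the shared FHIR-based representation is what let the same pipeline run unchanged across both health systems.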

The MarketWatch article rhapsodizes about these models and neural networks' potential for cutting healthcare costs, but also illustrates the drawbacks of large-scale machine learning and AI: what's in the EHR, including those troublesome clinical notes (the study used three additional deep neural networks to discern which bits of the clinical data within the notes were relevant); lack of uniformity in the data sets; and the fact that most patient data is not static (e.g. temperature).

And Google will make the chips to get you there. Google's Tensor Processing Units (TPUs), developed for its own services like Google Assistant and Translate as well as powering identification systems for driverless cars, can now be accessed through its cloud computing services. Kind of like Amazon Web Services, but even more powerful. New York Times

Advances in 2017 which may set the digital health stage for 2018

Our second Roundup takes us to the Lone Prairie, where we spot some promising young Health Tech Advances that may grow up to be Something Big in 2018 and beyond.

From Lancaster University, just published in Brain Research (academic/professional access), is a study of an experimental 'triple agonist' drug developed for type 2 diabetes that shows promise in reversing the memory loss of Alzheimer's disease. The treatment, in APP/PS1 mice with mutated human genes, used a combination of GLP-1, GIP, and glucagon that "enhanced levels of a brain growth factor which protects nerve cell functioning, reduced the amount of amyloid plaques in the brain linked with Alzheimer's, reduced both chronic inflammation and oxidative stress, and slowed down the rate of nerve cell loss." The approach builds on a known link between type 2 diabetes as a risk factor for Alzheimer's, including impaired insulin signaling and insulin desensitization, both linked to cerebral degenerative processes. Other type 2 diabetes drugs such as liraglutide have shown promising results, versus the long trail of failed 'amyloid busters'. For an estimated 5.5 million people in the US and 850,000 in the UK with Alzheimer's and other dementias, and for those whose lives have been touched by it, this research is the first sign of hope in a long time. AAAS EurekAlert, Lancaster University release, video

At University College London (UCL), a drug treatment for Huntington's Disease in its first human trial has, for the first time, safely lowered levels of toxic huntingtin protein in the brain. A group of 46 patients drawn from the UK, Canada, and Germany were given IONIS-HTTRx (from pharmaceutical company Ionis) or placebo, injected into spinal fluid in ascending doses so it could reach the brain, starting in 2015 after over a decade in pre-development. The research comes from a partnership between UCL and University College London Hospitals NHS Foundation Trust. UCL News release, UCL Huntington's Research page, BBC News

Meanwhile, the National Institutes of Health (NIH)'s All of Us program, part of the Federal Precision Medicine Initiative (PMI), seeks to track a million-plus Americans through their medical history, behavior, exercise, and blood and urine samples. It's all voluntary, of course, and recruitment has barely begun for a medical research resource that may dwarf anything else in the world. This is the NIH program that lured Eric Dishman from Intel. And of course, it's controversial–the concern being that gigantic quantities of biometric data, genomic and otherwise, on non-genetic diseases will simply have diminishing returns and divert money and attention from diseases with clear genomic causes–such as Huntington's. Oregon Public Broadcasting.

Let's not forget Google DeepMind Health's Streams app, in test at the Royal Free NHS Foundation Trust Hospital in north London, where alerts on patients at risk of developing acute kidney injury (AKI) are pushed to clinicians' mobile phones.

Tunstall partners with voice AI in EU, home health in Canada, update on Ripple alerter in US

Tunstall Healthcare seems to be a recent convert to the virtues of partnership rather than trying to do it all in-house. Here's a roundup of their recent activity with advanced technology developers in three countries.

Perhaps the most advanced is conversational computing, which, with Siri and Alexa, is the 2017-2018 'IT Girl', albeit prone to a few gaffes. The European Commission is incentivizing the development of the next generation of interactive conversational artificial intelligence to help older adults live independently in their homes. The largest award, of €4m, is going to Intelligent Voice, a speech recognition company based in London. The EMPATHIC project will develop a conversational 'Personalized Virtual Coach' with partners including Tunstall and the University of Bilbao, as well as several other companies and academic organizations across seven European nations. Digital Journal

On the other side of the Atlantic, Tunstall is partnering with TELUS Health in Toronto. TELUS will use Tunstall’s ICP Integrated Care Platform with remote patient monitoring and videoconference telehealth capabilities to monitor patients in their network. Apparently, this is the first use of the ICP in the Americas, as previous deployments have been in Europe, Australasia, and China. It is also additive to TELUS’ own capabilities. TELUS itself is a conglomerate of healthcare tech, with EHRs, analytics, consumer health, claims/benefits management, and pharmacy management. TELUS release.

This Editor also followed up with the CEO of Ripple, the smart-looking compact alerter targeted at a younger demographic, which dials 911 in emergency situations through a smartphone app or, for a subscription fee, connects to Tunstall's call center network. The Ripple partnership was Americas CEO Casey Pittock's last move of note back in February. In June, with his departure, a check of Kickstarter and social media indicated that Ripple had also disappeared. Last month, after reaching out to founder/CEO Tim O'Neil, it was good to hear that this was quite wrong. Ripple was featured on HSN on 23 September (release) and that month joined Michigan Governor Rick Snyder and First Lady Susan Snyder at the End Campus Sexual Assault Summit. On the new website, it's priced as an affordable safety device: $19 for one unit connecting to an app for push notifications, plus $10 monthly for 24/7 live monitoring through Tunstall. A discreet alert device that has a jewelry-type look, pares safety down to the essentials, and extends safety coverage to the young does have something on the ball.

 

CES Unveiled’s preview of health tech at CES 2018

CES Unveiled, Metropolitan Pavilion, NYC, Thursday 9 November

The Consumer Technology Association's (CTA) press preview of the gargantuan CES, 9-12 January 2018 in Las Vegas, was the first of several international preview 'road shows'. It's a benchmark of the ebb and flow of health tech and related trends on the grand scale. Gone are the flashy wearables which would change colors based on our sweat patterns and heart rate, or track the health and movement of pets. Now it's the Big Issues of 5G, AI, machine learning, AR/VR, and smart cities. Entertainment, especially sports, is being reinvented by all of these.

The developments this Editor gleaned from the mountain of information the CTA plies us keyboard tappers with, most relevant to healthcare, are:

  • Wireless 5G. As this Editor has written previously from Ericsson and Qualcomm, 5G and 5G New Radio will enable amazingly fast mobile speeds and hard-to-believe connectivity by 2019. It will enable IoT, self-driving cars, cars that communicate with each other, reconstruction of industrial plants, electric distribution, multimodal transport, and perhaps the largest of all, smart cities. The automation of everything is the new mantra. Accenture estimates the impact at 3 million new jobs (nothing about losses), a $500bn boost to annual GDP, and $275bn in investment from telecom operators.
  • AI. Society will be impacted by machine learning, neural networks, and narrow AI (e.g. calorie counting, diagnostics) versus general AI (simulation of human intelligence). This affects voice-activated assistants like Echo, Alexa, and Google Home (now owned by 12 percent of the population, per a CES survey) as well as robotics that 'read' us better. These conversations with context may evolve into relationships, not only with these assistants but with home robots such as Mayfield Robotics' Kuri (which this Editor attempted to interact with on the show floor, to little effect and disappointment). Oddly not mentioned were uses of AI in ADL and vital signs tracking interpreted for predictive health.
  • Biometrics. This will affect security first in items like padlocks (the new BIO-key TouchLock) using fingerprint recognition and smart wallets, then facial recognition usable in a wide variety of situations such as workplaces, buildings, and smartphones. Imagine their use in key safes, phones, home locks, and waypoints inside the home for activity monitoring.
  • AR and VR. Powerful presence now puts viewers in the middle of a story that is hard to distinguish from reality. Pricing for viewers is dropping to the $200-400 range with Oculus Go and Rift. At the Connected Health Conference, this Editor saw how VR experiences could ease anxiety and disconnectedness in older people with mobility difficulties or dementia (OneCaringTeam's Aloha VR) or reduce pain (Cedars-Sinai tests). On the AR side are Glass for hands-on workers [TTA 24 July] and heads-up displays in retail.

CES is also hosting the fourth Extreme Tech Challenge. Of the ten semi-finalists facing off on 11 January, three are in healthcare: Neurotrack, which assesses and aims to improve memory; Tissue Analytics, which uses smartphone cameras to assess wounds and healing; and (drum roll) the winner of TTA's Insanely Cute Factor competition, the Owlet smart sock for baby monitoring [TTA's backfile here]. One of the judges is Sir Richard Branson, who will host the finalists on 28 February on Necker Island (which hopefully will be rebuilt by then).

After the nearly two-hour briefing, the CTA hosted a mini-show on the ground floor of the Metropolitan.

Themes and trends at Aging2.0 OPTIMIZE 2017

Aging2.0 OPTIMIZE, in San Francisco on Tuesday and Wednesday 14-15 November, annually attracts the top thinkers and doers in innovation and aging services. It brings together academia, designers, developers, investors, and senior care executives from all over the world to rethink the aging experience in both immediately practical and long-term visionary ways.

Looking at OPTIMIZE's agenda, several themes are on point for major industry trends.

Reinventing aging with an AI twist

What will aging be like during the next decades of the 21st Century? What must be done to support quality of life, active lives, and more independence? From nursing homes with more home-like environments (Green House Project) to Bill Thomas' latest project–'tiny houses' that support independent living (Minkas)–there are many developments which will affect the perception and reality of aging.

Designers like Yves Béhar of fuseproject are rethinking home design as a continuum that supports all ages and abilities in what they want and need. Beyond physical design, these new homes are powered by artificial intelligence (AI) and machine learning technology that support wellness, engagement, and safety. Advances that are already here include voice-activated devices such as Amazon Alexa, virtual reality (VR), and IoT-enabled remote care (telehealth and telecare).

For attendees at Aging2.0, there will be substantial discussion of AI's impact and implications, highlighted at Tuesday afternoon's general session 'AI-ging Into the Future' and in Wednesday's AI/IoT-related breakouts. AI is powering breakthroughs in social robotics and predictive health, the latter using sensor-based ADL and vital signs information for wellness, fall prevention, and dementia care. Companies that are part of this conversation include CarePredict, EarlySense, SafelyYou, and Intuition Robotics.

Thriving, not surviving

Thriving in later age, not simply 'aging in place' or compensating for the loss of ability, must engage the community, the individual, and providers. There's new interest in addressing interrelated social factors such as isolation, life purpose, food, healthcare quality, safety, and transportation. Business models and connected living technologies can combine to redesign post-acute care for better recovery, prevent unnecessary readmissions, provide more proactive care for chronic diseases, and support wellness.

In this area, OPTIMIZE has many sessions on cities and localities reorganizing to support older adults in social determinants of health, transportation innovations, and wearables for passive communications between the older person and caregivers/providers. Some organizations and companies contributing to the conversation are grandPad, Village to Village Network, Lyft, and Milken Institute.

Technology and best practices positively affect the bottom line

How can senior housing and communities put innovation into action today? How can developers make it easier for them to adopt innovation? Innovations that ‘activate’ staff and caregivers create a multiplier for a positive effect on care. Successful rollouts create a positive impact on both the operations and financial health of senior living communities.


Weekend Big Read: will telemedicine do to retail healthcare what Amazon did to retail?

Updated. Our past contributor and TelehealthWorks' Bruce Judson (ATA 2017 coverage) has penned this weekend's Big Read in the HuffPost. His hypothesis is that telemedicine specifically will disrupt location-based care, followed by other digitally based care–and that executives at health systems and payers are in denial. More and more states are recognizing parity of treatment and (usually) payment. Telemedicine also addresses three major needs: care at home or on the go, with minimal wait; maldistribution of care, especially specialized care; and follow-up/post-acute care. His main points in the article:

  • Healthcare executives are being taken by surprise because present digital capabilities will not be future capabilities, and the shift to virtual will be a gradual process
  • Telemedicine will address doctor shortages and grow into coordinated care platforms embedding expertise (via connected diagnostics, analytics, machine learning, AI) and care teams
  • Telemedicine will eventually go up-market and directly compete with large providers in urban areas, displacing a significant amount of in-person care with virtual care
  • Telemedicine will start to incorporate continuous feedback loops to further optimize their services and move into virtual health coaching and chronic care management
  • Telemedicine platforms are also sub-specializing into stroke response, pediatrics, and neurology
  • Centers of expertise and expert platforms will become larger and fewer–centralizing into repositories of ‘the best’
  • Platforms will be successful if they are trusted through positive patient experiences. This is a consumer satisfaction model.

Mr. Judson draws an analogy of healthcare with internet services, an area where he has decades of expertise: “A general phenomenon associated with Internet services is that they break activities into their component parts, and then reconnect them in a digital chain.” Healthcare will undergo a similar deconstruction and reconstruction with a “new set of competitive dynamics.”

It's certainly a provocative POV that at least gives a rationale for the sheer messiness and stop-and-start that this Editor has observed in Big Health since the early 2000s. A caution: the internet, communications, and retail do not endure the sheer volume of regulatory force imposed on healthcare, which makes the retail analogy inexact. Governments monitor and regulate health outcomes, not search results or video downloads (except when it comes to net neutrality). It's hard to find an industry as regulated other than financial/banking and utilities. FierceHealthcare also found the premise intriguing, noting the VA's 'Anywhere' programs [TTA 9 Aug] and citing two studies indicating that 96 percent of large employers plan to make telemedicine available, including behavioral health services, and that 20 percent of employers are seeing over 8 percent employee utilization. (Under 10 percent utilization gave RAND the vapors earlier this year, with both this Editor and Mr. Judson stinging RAND's findings in separate analyses.)

AI good, AI bad (part 2): the Facebook bot dialect scare

Eeek! Scary! Bots develop their own argot. Facebook AI Research (FAIR) tested two chatbots programmed to negotiate. In short order, they developed "their own creepy language", in the words of the Telegraph, to trade their virtual balls, hats, and books. "Creepy" to FAIR was only a repetitive divergence from English, since the chatbots weren't limited to standard English. The lack of restriction enabled them to develop their own argot to quickly negotiate those trades. "Agents will drift off understandable language and invent codewords for themselves," said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research. "This isn't so different from the way communities of humans create shorthands"–like soldiers, stock traders, the slanguage of showbiz mag Variety, or teenagers. Because Facebook's interest is in AI bot-to-human conversation, FAIR put in the requirement that the chatbots use standard English, which as it turns out is a handful for bots.

The danger in AI-to-AI divergence in language is that humans don't yet have a translator for it, so we'd never quite understand what the machines are saying. Batra's unsettling conclusion: "It's perfectly possible for a special token to mean a very complicated thought. The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it's because we have a limit to cognition." So this shorthand can look like longhand? FastCompany/Co.Design's Mark Wilson sees the upside–software talking its own language to other software could eliminate complex APIs (application program interfaces, which enable different types of software to communicate) by letting the software figure it out. But for humans not being able to dig in and understand it readily? Something to think about as we use more and more AI in healthcare and predictive analytics.

Behave, Robot! DARPA researchers teaching them some manners.

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2014/01/Overrun-by-Robots1-183×108.jpg” thumb_width=”150″ /]Weekend Reading While AI is hotly debated and the Drudge Report features daily the eeriest pictures of humanoid robots, the hard work of determining social norms and programming them into robots continues. DARPA-funded researchers at Brown and Tufts Universities are working “to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” in the words of Reza Ghanadan, DARPA program manager. ‘Normal’ people detect ‘norm violations’ quickly (they must not live in NYC), so to prevent robots from crashing into walls or behaving unethically towards humans (see Isaac Asimov’s Three Laws of Robotics), more advanced robots will eventually have the capacity to learn, represent, activate, and apply a large number of norms to situational behavior. Armed with Science
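The learn/represent/activate/apply pipeline above is research-grade, but the “apply” step can be illustrated with a deliberately simple sketch. This is hypothetical and not the Brown/Tufts system: represent each norm as a named predicate over a candidate action, and veto any action that violates one.

```python
# Toy sketch only (not the Brown/Tufts system): norms as predicates
# over candidate robot actions. All names here are hypothetical.

# Each norm is (name, predicate); the predicate returns True when the
# candidate action is acceptable under that norm.
NORMS = [
    ("avoid collisions",  lambda a: not a.get("collides_with_human", False)),
    ("respect the queue", lambda a: not a.get("cuts_in_line", False)),
    ("respect privacy",   lambda a: not a.get("records_private_area", False)),
]

def violated_norms(action):
    """Return the names of all norms the candidate action would break."""
    return [name for name, ok in NORMS if not ok(action)]

def choose(candidates):
    """Pick the first candidate action that violates no norm, if any."""
    for action in candidates:
        if not violated_norms(action):
            return action
    return None  # no norm-compliant action available
```

A real system would learn such predicates from human behavior and weigh conflicting norms against each other, rather than applying a hard veto; the sketch only shows why explicit representation matters–a violated norm can be named, not just penalized.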

This directly relates to self-driving cars, which are supposed to solve all sorts of problems from road rage to traffic jams. It turns out that they cannot live up to the breathless hype of Elon Musk, Google, and their ilk, even over the longer term. Sequencing on roadways? We don’t yet have high-accuracy GPS like the Galileo system. Rerouting? Eminently hackable and spoofable, as Waze has been. Does the car see obstacles, traffic signals, and people clearly? Can it make split-second decisions? Can it anticipate the behavior of other drivers? Can it cope with mechanical failure? No more so, and often less, at present than humans. And self-drivers will be a bonanza for trial lawyers, as car companies and dealers will be added to the list of defendants, alongside insurers and owners. While self-driving will give mobility to the older, vision-impaired, and disabled, it could also be used to restrict freedom of movement. Why not simply incorporate many of these assistive features into cars, as some already have been? An intelligent analysis–and read the comments (click ‘comments’ at the bottom to open). Problems and Pitfalls in Self-Driving Cars (American Thinker)

Health and tech news that’s a snooze–or infuriating

The always acerbic Laurie Orlov has a great article on her Aging in Place Technology Watch that itemizes five news items covering the infuriating, the failing, or the downright puzzling as they affect health and older adults. In the last category, there’s the ongoing US Social Security Administration effort to eliminate paper statements and checks in favor of online statements and direct deposit only–problematic for many of the oldest adults, the disabled, and those without reasonable, secure online access–or regular checking accounts. The infuriating is Gmail’s latest ‘upgrade’ to its mobile email, which adds three short ‘smart reply’ boxes to the end of nearly every email. Other than sheer laziness and enabling emailing while driving, it’s not needed–and to turn it off, you have to go into your email settings. And for the failing, there’s IBM. There’s the stealth layoff–forcing its remote employees, an estimated 40 percent of its workforce, to relocate to brick-and-mortar offices or leave, while it sells remote-working software. There’s a falloff in revenue, meaning that profits must be squeezed from a rock. And finally there’s the extraordinarily expensive investment in Watson and Watson Health. This Editor back in February [TTA 3 and 14 Feb] noted the growing misgivings about it, observing that focused AI and simple machine learning are developing quickly and affordably for healthcare diagnostic applications. Watson Health and its massive, slow, and expensive data crunching for healthcare decision support are suitable only for complex diseases and equally massive healthcare organizations–and even they have been displeased, such as MD Anderson Cancer Center in Houston in February (Forbes). Older adults and technology – the latest news they cannot use

Want to attract Google Ventures to your health tech? Look to these seven areas.

The GV Hot 7, especially the finally-acknowledged physician burnout. Dr. Krishna Yeshwant, a Google Ventures (GV) general partner leading the Life Sciences team, is interested in seven areas, according to his interview in Business Insider (UK):

  • Physician burnout, which has become epidemic as doctors (and nurses) spend more and more time with their EHRs versus patients. This is Job #1 in this Editor’s opinion.

Dr. Yeshwant’s run-on question to be solved is: “Where are the places where we can intervene to continue getting the advantages of the electronic medical record while respecting the fact that there’s a human relationship that most people have gotten into this for that’s been eroded by the fact that there’s now a computer that’s a core part of the conversation.” (Your job–parse this sentence!–Ed.)

Let’s turn to Dr. Robert Wachter for a better statement of the problem. This Editor was present for his talk at the NYeC Digital Health Conference [TTA 19 Jan]; these are quoted from his slides: “Burnout is associated with computerized order entry use and perceived ‘clerical burden’ [of EHRs and other systems]”. He also cites the digital squeeze on physicians and the Productivity Paradox, noted by economist Robert Solow as “You can see the computer age everywhere except in the productivity statistics.” In other words, EHRs are a major thief of time. What needs to happen? “Improvements in the technology and reimagining the work itself.” Citing Mr. Solow again, he estimates the Productivity Paradox in healthcare will take 15-20 years to resolve. Dr. Wachter’s talk is here. (more…)

The stop-start of health tech in the NHS continues (UK)

Continuing The King’s Fund’s critique of the state of technology within the NHS [TTA 17 Feb], Harry Evans examines the current state of incipient ‘rigor mortis’ (his term). Due to the upcoming election, the Department of Health is delaying its response to Dame Fiona Caldicott, the National Data Guardian for Health and Care (NDG), on her review of data security, consent, and opt-outs (Gov.UK publications).

People have significant trust and privacy concerns about their data, which led to NHS England suspending care.data over three years ago. But with safeguards in place, public polling supports the sharing of health data for uses such as research and direct care. But…there’s more. Now there is ‘algorithmic accountability’, which may single out individuals and influence their care, much as algorithms dictate what online ads we’re served. What of the patient data being served to Google DeepMind, IBM Watson Health, and Vitalpac for AI development? Have people adjusted their concerns, and have systems evolved to better store, secure, and share data? And how can this be implemented at the local NHS level? The NHS and technology: turn it off and on again Hat tip to Susanne Woodman of BRE.

A reminder that The King’s Fund’s Digital Health and Care Congress is on 11-12 July. Click on the sidebar to go directly to information and to register. Preview video; the Digital Health Congress fact sheet includes information on sponsoring or exhibiting. To make the event more accessible, there are new reduced rates for groups and students, plus bursary spots available for patients and carers. TTA is again a media partner of the Digital Health and Care Congress 2017. Updates on Twitter @kfdigital17