Babylon Health ‘chatbot’ triage AI app raises £50 million in funding (UK)

Babylon Health, which has developed an AI-assisted chatbot to triage a potential patient in minutes, has raised a serious Series B of £50 million (US$60 million). Funders were Kinnevik AB, which had led the Series A, NNC Holdings, and Vostok New Ventures (Crunchbase). According to the FT (through TechCrunch), Babylon’s value is now north of $200 million. Revenues were not disclosed.

The current app uses text exchanges to determine the level of further care needed, recommends a course of action, then connects the user to a virtual doctor visit if needed or, if the condition is acute, directs them to Accident & Emergency (in the US, the emergency room or department). It also follows up with the user on their test results and health information. The funding will be used to extend the current AI from triage into diagnosis. The company accumulates daily data on thousands of patients, which machine learning uses to further refine the AI. Dr Ali Parsa, founder and CEO of Babylon, said in a statement: “Babylon scientists predict that we will shortly be able to diagnose and foresee personal health issues better than doctors, but this is about machines and medics cooperating, not competing.” Like other forms of telemedicine and triage (Zipnosis in health systems), it is designed to put healthcare access and affordability, as the company claims, “into the hands of every person on earth”. The NHS pilot in north London [TTA 18 Jan] via the 111 hotline is testing Babylon as a ‘reliever’, though it directs users only to a doctor appointment, not a video consult. BBC News, Mobihealthnews
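For readers wanting a concrete picture of what a symptom-to-disposition triage step looks like, here is a minimal sketch in Python. The symptom sets, thresholds and dispositions are illustrative assumptions only, not Babylon’s actual model, which relies on much richer AI and clinical content.

```python
# Hypothetical sketch of a rules-based triage step of the kind a chatbot
# performs: map reported symptoms to a coarse disposition. The symptom sets
# and dispositions below are illustrative, not Babylon's clinical logic.

RED_FLAGS = {"chest pain", "severe bleeding", "difficulty breathing"}
GP_LEVEL = {"persistent cough", "rash", "mild fever"}

def triage(symptoms: set) -> str:
    """Return a coarse recommendation for a set of reported symptoms."""
    if symptoms & RED_FLAGS:
        return "Go to Accident & Emergency"          # acute: escalate now
    if symptoms & GP_LEVEL:
        return "Book a virtual doctor consultation"  # non-urgent clinical need
    return "Self-care advice; follow up if symptoms worsen"

if __name__ == "__main__":
    print(triage({"persistent cough"}))    # -> virtual consultation
    print(triage({"chest pain", "rash"}))  # -> A&E (red flag wins)
```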

PwC: your job at risk from robots, AI by 2030?

PwC’s latest study on the effect of robotics and artificial intelligence on today’s and tomorrow’s workforce is the subject of this BBC Business article focusing on the UK. PwC estimates that 30 percent of existing jobs in the UK are potentially at high risk of automation by the 2030s, compared with 38 percent in the US, 35 percent in Germany and 21 percent in Japan. Most at risk are jobs in manufacturing and retail, but to quote PwC’s page on their multiple studies, robotics and AI may change how we work in a different way, through an “augmented and collaborative working model alongside people – what we call the ‘blended workforce’”. In other words, not less work, but different types of work. Some jobs, though, like truck (lorry) driving, would disappear or be vastly diminished.

The effect on healthcare? The categories are very broad, but the third most affected employment category is administrative and support services at 37 percent, followed by professional, scientific and technical at 26 percent, and human health and social work at 17 percent. Will it increase productivity and thus salaries, which have languished in the past decade? Will it speed innovation and care in our area? Will it help the older population to be healthy and productive? The societal effects will roll on, though perhaps not well for everyone. View this wonderful exchange between Jean Harlow and Marie Dressler that closes the 1933 film Dinner at Eight. Hat tip to Guy Dewsbury @dewsbury via Twitter

AI as patient safety assistant that reduces, prevents adverse events

The 30-year-old SXSW conference and cultural event has been rising as a healthcare venue for the past few years. One talk this Editor would like to have attended this past weekend was presented by Eric Horvitz, Microsoft Research Laboratory Technical Fellow and managing director, who is both a Stanford PhD in computing and an MD. This combination makes him a unique warrior against medical errors, which annually kill over 250,000 patients. His point was that artificial intelligence is increasingly used in tools that serve as ‘safety nets’ for medical staff in situations such as failure to rescue (the inability to treat complications that rapidly escalate), readmissions, and the analysis of medical images.

RAM (Readmissions Management), a readmissions clinical support tool he worked on eight years ago and now produced by Caradigm, predicts which patients have a high probability of readmission and which will need additional care. Failure to rescue often results from a concatenation of complications happening quickly, with a lack of knowledge that resembles the prelude to an aircraft crash. “We’re considering [data from] thousands of patients, including many who died in the hospital after coming in for an elective procedure. So when a patient’s condition deteriorates, they might lose an organ system. It might be kidney failure, for example, so renal people come in. Then cardiac failure kicks in so cardiologists come in and they don’t know what the story is. The actual idea is to understand the pipeline down to the event so doctors can intervene earlier” and to understand the patterns that led up to it. Another application is to flag potential problems that may be outside the doctor’s direct knowledge or experience, including how the Bayesian theory of surprise affects the thought process. Dr Horvitz also discussed how machine learning can assist medical imaging and interpretation. His points were that AI and machine learning, applied to thousands of patient cases and images, are there to assist physicians, not to replace them or the human touch. MedCityNews
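The article does not detail RAM’s model, but the general pattern of a readmission-risk predictor is familiar: train a classifier on past admissions and flag patients whose predicted probability of readmission crosses a threshold. Here is a minimal sketch on synthetic data; the features, labels and 0.6 threshold are illustrative assumptions, not Caradigm’s implementation.

```python
# Minimal readmission-risk sketch: logistic regression on synthetic admission
# data. Features, labels, and the flag threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features per admission: [age, prior admissions, length of stay, comorbidities]
X = rng.normal(loc=[65, 1, 4, 2], scale=[10, 1, 2, 1], size=(500, 4))
# Synthetic label: 1 = readmitted within 30 days
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 1, 500) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[72, 3, 6, 4]])
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.6:  # illustrative flag threshold
    print(f"High readmission risk ({risk:.0%}): schedule additional follow-up care")
else:
    print(f"Lower readmission risk ({risk:.0%})")
```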

#HIMSS17 roundup: machine learning, Proteus, Soon-Shiong/NantWorks’ cancer vax, Uniphy Health, more

HIMSS17 is over for another year, but there is plenty of related reading left for anyone who is not still recovering from sensory overload. No big news was made, other than former Speaker John Boehner trying to have it both ways about what the House needs to do about replacing the failing ACA a/k/a Obamacare. Here’s our serving:

  • If you are interested in the diffusion of workflow technologies into healthcare, including machine learning and AI, there’s a long-form three-part series in Healthcare IT News that this Editor noted has suddenly become a little difficult to find–but we did. The articles also helpfully identify vendors that cite certain areas of expertise in their exhibitor keywords.
  • Mobihealthnews produced a two-page wrap-up that links to various MHN articles where applicable. Of interest:
    • a wound measurement app that Intermountain Healthcare developed with Johns Hopkins spinoff Tissue Analytics
    • Children’s Health of Dallas, Texas is using the Proteus Digital Health ingestible med sensor with a group of teenaged organ transplant patients to improve med compliance
    • the Medisafe med management app has a new feature that alerts users to drug, food and alcohol interactions with their regimen, making it, to this writer’s knowledge, the first med app to do this
    • Info security spending is rising, according to the Thales Data Threat Report. This year, 81 percent of U.S. healthcare organizations and 76 percent of global healthcare organizations will increase information security spending.
  • Healthcare and sports mogul Patrick Soon-Shiong presented on NantHealth’s progress on a cancer vaccine that became a significant part of former VP Joe Biden’s initiative, Cancer Breakthroughs 2020. Dr Soon-Shiong stated that the FDA has given approval to advance the vaccine into later clinical trials. He also unveiled Nant AI, an augmented intelligence platform for high-speed processing of the genome activity of cancer tumors, and the Nant Cloud, a cloud server that can generate bioinformatic data at 26 seconds per patient. This is in addition to the NantHealth GPS Cancer diagnostic tool used to isolate new mutations in a given tumor. HealthcareITNews. MedCityNews takes a dimmer view, noting two recent cancer vaccine failures. Dimmer still is Stat’s takedown of Dr Soon-Shiong, which reportedly was the talk of HIMSS.
  • Leading up to HIMSS, Newark’s own Uniphy Health announced UH4, the latest generation of its enterprise-wide communications and clinical collaboration platform for hospitals and clinics to facilitate the ‘real-time health system’. Release

Not enough? DestinationHIMSS, produced by Healthcare IT News/HIMSS Media, has its usual potpourri of official reporting here.

AI as diagnostician in ophthalmology, dermatology. Faster adoption than IBM Watson?

Three recent articles from the IEEE (formally the Institute of Electrical and Electronics Engineers) Spectrum journal are significant in pointing to advances in artificial intelligence (AI) for specific medical conditions–advances which may go into use faster and more cheaply than the massive machine learning/decision support program represented by IBM Watson Health.

A Chinese team developed CC-Cruiser to diagnose congenital cataracts, which affect children and can cause irreversible blindness. The CC-Cruiser team, from Sun Yat-Sen and Xidian Universities, developed algorithms using a relatively narrow database of 410 images of congenital cataracts and 476 images of normal eyes to diagnose the presence of cataracts, predict the severity of the disease, and suggest treatment decisions. The program was subjected to five tests, performing at over 90 percent accuracy versus doctor consults on most of the critical ones. There, according to researcher and ophthalmologist Haotian Lin, is the ‘rub’: even with more information, he cannot project the system reaching 100 percent accuracy. The other factor is the human one–face-to-face interaction. He strongly suggests that the CC-Cruiser system is a tool to complement and confirm doctor judgment, and could be used in non-specialized medical centers to diagnose and refer patients. Ophthalmologists vs. AI: It’s a Tie (Hat tip to former TTA Ireland Editor Toni Bunting)
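The article does not publish CC-Cruiser’s architecture, but the basic shape of such a system is a binary image classifier trained on labeled eye photographs. Here is a hedged sketch, assuming a small Keras convolutional network and an image folder layout (eye_images/cataract, eye_images/normal) that are purely illustrative, not the published model.

```python
# Illustrative only: a small binary classifier separating cataract images from
# normal eyes. Directory layout, image size, and architecture are assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "eye_images/",          # assumed subfolders: cataract/, normal/
    image_size=(224, 224),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(cataract)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```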

In the diagnosis of skin cancers, a Stanford University team used GoogleNet Inception v3 to build a deep learning algorithm, drawing on a huge database of 130,000 lesion images representing more than 2,000 diseases. Inception succeeded in performing on par with 21 board-certified dermatologists in differentiating certain skin lesions, for instance, keratinocyte carcinomas from benign seborrheic keratoses. The major limitations here are the absence of the human doctor’s ability to touch and feel the skin, which is key to diagnosis, and the missing context of the patient’s history. Even with these limitations, Inception and similar systems could help triage patients to a doctor faster. Computer Diagnoses Skin Cancers
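The Stanford approach is a classic transfer-learning recipe: start from Inception v3 pretrained on general images and fine-tune a new classification head on dermatology photographs. Below is a minimal sketch of that recipe; the class count, folder path and training settings are assumptions for illustration, not the published configuration.

```python
# Hedged sketch of transfer learning with Inception v3 for skin-lesion
# classification. NUM_CLASSES, the data path, and epochs are illustrative.
import tensorflow as tf

NUM_CLASSES = 9  # assumed coarse lesion categories; the paper uses a larger taxonomy

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3)
)
base.trainable = False  # freeze the pretrained feature extractor first

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesion_images/", image_size=(299, 299), batch_size=32  # assumed layout
)
model.fit(train_ds, epochs=3)
```

Once the new head converges, the usual next step is to unfreeze part of the base network and continue training at a lower learning rate.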

Contrast this with IEEE’s writeup on the slow development of IBM Watson Health’s systems, each of which has to be individually developed and continually refined using massive datasets–best summarized in Dr Robert Wachter’s remark, “But in terms of a transformative technology that is changing the world, I don’t think anyone would say Watson is doing that today.” The ‘Watson May See You Someday’ article may be from mid-2015, but it is only this week that Watson for Oncology announced its first implementation, in a regional medical center based in Jupiter, Florida. Watson for Oncology collaborates with Memorial Sloan-Kettering (MSK) in NYC and was tested in other major academic centers. Currently it is limited to breast, lung, colorectal, cervical, ovarian and gastric cancers, with nine additional cancer types to be added this year. Mobihealthnews

What may change the world of medicine could be AI systems built on smaller, specific datasets, with Watson Health reserved for the big and complex diagnoses that need features like natural-language processing.

Robot-assisted ‘smart homes’ and AI: the boundary between supportive and intrusive?

Something that has been bothersome to Deep Thinkers (and Not Such Deep Thinkers like this Editor) is the almost-forced loss of control inherent in discussions of AI-powered technology. There is an elitist Wagging of Fingers that generally accompanies the Inevitable Questions and Qualms.

  • If you don’t think 100 percent self-driving cars are an Unalloyed Wonder, as Elon Musk and Google tell you, you’re a Luddite
  • If you have concerns about nanny tech or smart homes which can spy on you, you’re paranoid
  • If you are concerned that robots will take the ‘social’ out of ‘social care’, likely replace human carers, or cost your neighbor their job, you are not with the program

I have already led with the likely reason why: loss of control. Control does not motivate just Control Freaks. Think about the decisions you like versus the ones you don’t. Think about how helpless you felt as a child or teenager when big decisions were made without any of your input. It goes that deep.

In the smart home, robotic/AI world then, who has the control? Someone unknown, faceless, well-meaning but with their own rationale? (Yes, those metrics–quality, cost, savings.) Recall ‘Uninvited Guests’, the video which demonstrated that Dad Ain’t Gonna Take Nannying and is good at sabotage.

Let’s stop and consider: what are we doing? Where are we going? What fills the need for assistance and care, yet retains the person’s human autonomy and that old term…dignity? Might they even like it? For your consideration:

How a robot could be grandma’s new carer (plastic dogs to the contrary in The Guardian)

AI Is Not out to Get Us (Scientific American)

Hat tip on both to reader Malcolm Fisk, Senior Research Fellow (CCSR) at De Montfort University via LinkedIn

Artificial intelligence with IBM Watson, robotics pondered on 60 Minutes

This Sunday, the long-running TV magazine show 60 Minutes (CBS) had a long Charlie Rose-led segment on artificial intelligence. It concentrated mainly on the good, with a little bit of ugly thrown in. The longest part was on IBM Watson massively crunching and applying oncology and genomics to diagnosis. In a study of 1,000 cancer patients reviewed by the University of North Carolina at Chapel Hill’s molecular tumor board, 99 percent of the doctor diagnoses were confirmed by Watson as accurate, while Watson found ‘something new’ in 30 percent. As a tool, it is still considered to be in adolescence. Watson and data analytics technology have been a $15 billion investment for IBM, which can afford it, but by licensing the technology and through various partnerships, IBM has started to recoup it. The ‘children of Watson’ are also starting to grow. Over at Carnegie Mellon, robotics is king and Google Glass is reading visual data to give clues on speeding up reaction time. At Imperial College, Maja Pantic is taking the early steps into artificial emotional intelligence with a huge database of facial expressions and interpretations. In Hong Kong, Hanson Robotics is developing humanoid robots, which may be part of the ‘ugly’ along with fears that AI may outsmart humans in the not-so-distant future. 60 Minutes video and transcript

Speaking of recouping, IBM Watson Health’s latest partnership is with Siemens Healthineers to develop population health technology and services to help providers operate in value-based care. Neil Versel at MedCityNews looks at that as well as the 60 Minutes segment. Added bonus: a few chuckles about Siemens Healthcare’s Disney-lite rebranding.

A brief history of robotics, including Turing and Asimov (weekend reading)

TechWorld gives us a short narrative on robotics history, dating back to Asimov’s Three Laws of Robotics (1942), Turing’s Imitation Game (1950) and the pioneering work of several inventors in the late 1940s. There’s a brief tribute to Star Wars’ R2-D2 (Kenny Baker RIP) and C-3PO. It finishes up with AI-driven IBM Watson and DeepMind’s AlphaGo. Breezy but informative beach reading! Hat tip to Editor Emeritus and TTA founder Steve Hards; also read his acerbic comment on Dell and Intel’s involvement in Thailand’s Saensuk Smart City

Your weekly robot fix: ingestible robot fetches swallowed button batteries, more

A research team drawn from MIT, the University of Sheffield and the Tokyo Institute of Technology has developed an ‘origami’ robot to locate and retrieve swallowed button batteries and other foreign objects, a common and potentially fatal incident. The robot is swallowed in a capsule which dissolves. It then unfolds its dried pig intestine appendages and is directed by external magnetic fields towards the battery, attaches to it and moves it safely through the digestive system. Another potential use is to patch wounds or deliver medicine to a specific location. Unlike other robots, it is untethered and moves freely, propelling itself through a ‘stick-slip’ motion, and is resistant to acidic gastric fluids. Next steps for the team are to equip it with sensors and to perform animal and human in vivo testing. ZDNet

Nosocomial (hospital-acquired) infections may also get a good zapping from disinfecting robots. In an 18-month test at Lowell (Massachusetts) General Hospital, robots from Xenex Disinfection Services, which pulse high-dose xenon ultraviolet light, disinfected the hospital’s ORs nightly in addition to routine chemical disinfection. The study estimated that the robots prevented 23 infections, saving one life and $478,000. MedCityNews.

Robotics in healthcare will also be part of the five informatics-centered tracks available to attendees of HEALTHINFO 2016, August 21-25, 2016 at Rome’s H10 ROMA CITTA, organized by IARIA (International Academy, Research, and Industry Association). More information here.

And if you wonder whether humans will be able to find work when robots take over everything (maybe we just go to conferences and live on a guaranteed income?), take comfort (or not) in this interview with one of the two authors of Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, a new book by Thomas Hayes Davenport and Julia Kirby. They see two ways forward: “One is to work alongside smart machines, and complement their activity. The other is to dip into what smart machines are unlikely to be able to do any time soon.” The emphasis on STEM education may be misplaced, as many of these jobs will be replaced by AI. In healthcare, they predict that automation will displace specialists and empower GPs, leaving room for ultra-specialization in combinations not thought of today. Robots beware: Humans will still be bosses of machines (TechRepublic)

Your Friday superintelligent robot fix: the disturbing consequences of ultimate AI

Our own superintelligent humans–Elon Musk (Tesla), Steve Wozniak (Apple), Bill Gates (Microsoft) and Stephen Hawking–are converging on artificial intelligence, not just everyday, pedestrian robotics, but the kind of AI superintellect that could make pets out of people–if we are lucky. In his interview with Australian Financial Review, the Woz (now an Australian resident) quipped: ‘Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?’