Google’s ‘Medical Brain’ tests clinical speech recognition, patient outcome prediction, death risk

Google’s AI division is eager to break into healthcare, and with ‘Medical Brain’ they might be successful. First up is harnessing the voice recognition used in their Home, Assistant, and Translate products. Last year they started testing a digital scribe with Stanford Medicine that helps doctors automatically fill out EHRs from patient visits; the pilot concludes in August. Next up, and staffing up, is a “next gen clinical visit experience” that uses audio and touch technologies to improve the accuracy and availability of care. A minimal sketch of the transcription step is below.
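
Google has not published how the scribe itself works, but its publicly documented Cloud Speech-to-Text API shows what the transcription layer of such a system could look like. A minimal sketch, assuming a recording already sits in a storage bucket (the bucket path is hypothetical); a real deployment would also need patient consent and de-identification before any audio leaves the clinic.

```python
# Sketch only: transcribing a recorded clinic visit with the public
# Google Cloud Speech-to-Text API. This is not Google's scribe system;
# the bucket URI below is a hypothetical placeholder.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/visit-audio.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries the top transcription hypothesis for one utterance.
    print(result.alternatives[0].transcript)
```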

The third is research Google published last month on using neural networks to predict how long people may stay in hospitals, their odds of readmission, and their chances of dying soon. The neural net gathers up the previously ungatherable (old charts, PDFs) and transforms it into useful information. They are currently working with the University of California, San Francisco and the University of Chicago, drawing on 46 billion pieces of de-identified patient data.

A successful test of the approach involved a woman with late-stage breast cancer. Based on her vital signs (for instance, her lungs were filling with fluid), the hospital’s own analytics put her chance of dying during the stay at 9.3 percent. Google’s model, drawing on more than 175,000 data points about her, came up with a far higher risk: 19.9 percent. She died shortly afterward.

Using AI to crunch massive amounts of data is an approach IBM Watson has already tried in healthcare, with limited success. Augmedix, Microsoft, and Amazon are also building AI-assisted scribing and voice recognition systems for physician offices. CNBC, Bloomberg

Google ‘deep learning’ model more accurately predicts in-hospital mortality, readmissions, length of stay in seven-year study

A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed better predictive models for hospitalized patients using ‘deep learning’, a form of machine learning (a/k/a AI). Mapping each patient’s EHR record into a single data structure based on the FHIR standard (Fast Healthcare Interoperability Resources), the researchers used de-identified EHR-derived data from over 216,000 patients hospitalized for at least 24 hours from 2009 to 2016 at UCSF and UCM. The records comprised nearly 47 billion data points.
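
The single-data-structure idea is the study’s key engineering move: every EHR entry, whatever its source table, becomes one timestamped event in a per-patient sequence. A rough sketch of that idea, with invented field names and codes (this is not the paper’s actual schema):

```python
# Sketch of the "single data structure" concept: heterogeneous EHR entries
# flattened into one chronological, FHIR-style event sequence per patient,
# so one model family can consume all of it. Field names are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FhirEvent:
    patient_id: str      # de-identified patient key
    timestamp: datetime  # when the event was recorded
    resource_type: str   # FHIR resource name, e.g. "Observation"
    code: str            # coded concept (LOINC, RxNorm, ICD, ...)
    value: str           # raw value as text; notes stay free text

events = [
    FhirEvent("p001", datetime(2016, 3, 1, 8, 30), "Observation", "LOINC:8310-5", "38.2 C"),
    FhirEvent("p001", datetime(2016, 3, 1, 9, 0), "MedicationRequest", "RxNorm:1049502", "ordered"),
    FhirEvent("p001", datetime(2016, 3, 1, 9, 45), "Observation", "note", "pt reports chest pain"),
]

# Sort into the chronological sequence the models are trained on.
for e in sorted(events, key=lambda e: e.timestamp):
    print(e.timestamp.isoformat(), e.resource_type, e.code, e.value)
```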

The researchers then developed predictive models in four areas: mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient’s problems). The models outperformed traditional predictive models in all cases and, because they use a single data structure, are projected to be highly scalable. For instance, the mortality model matched traditional models’ accuracy 24-48 hours earlier (page 11). The second part of the study concerned a neural-network attribution system that gives clinicians transparency into the predictions by showing which data points drove a given result. Available through Cornell University Library: Abstract, PDF.
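
To make the predict-then-attribute pattern concrete: the paper used deep sequence models, but even a toy logistic model shows the two steps the study describes. A minimal sketch with invented weights and counts; nothing here is the paper’s architecture or data.

```python
# Toy stand-in for the study's pattern: (1) predict a risk from a patient's
# recorded events, (2) attribute the prediction back to those events so a
# clinician can see what is driving the number. All values are invented.
import numpy as np

features = ["abnormal_vitals", "icu_transfer", "opioid_order", "normal_labs"]
w = np.array([1.2, 2.0, 0.4, -1.5])   # invented weights (stand-in for training)
b = -2.0

x = np.array([3.0, 1.0, 2.0, 0.0])    # one patient's event counts (invented)

logit = w @ x + b
risk = 1.0 / (1.0 + np.exp(-logit))   # predicted in-hospital mortality risk
print(f"predicted risk: {risk:.1%}")

# Crude attribution: each feature's contribution to the logit. The study's
# attribution system is neural-network-based, but the clinical goal is the
# same: transparency into which inputs pushed the risk up or down.
for name, contrib in sorted(zip(features, w * x), key=lambda t: -t[1]):
    print(f"{name:>16}: {contrib:+.2f}")
```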

The MarketWatch article rhapsodizes about these models’ and neural networks’ potential for cutting healthcare costs, but it also illustrates the drawbacks of large-scale machine learning and AI: everything that lands in the EHR, including those troublesome free-text clinical notes (the study used three additional deep neural networks to discern which bits of the notes were relevant); the lack of uniformity across data sets; and the fact that most patient data are not static (e.g. temperature). A toy illustration of the notes problem follows.
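
A hedged illustration of why clinical notes need their own networks: free text has to be scored for relevance before it is useful to a predictive model. This toy softmax attention over tokens stands in for the study’s additional deep networks; the vocabulary and scores are invented.

```python
# Toy relevance scoring over clinical-note tokens. A softmax turns raw
# relevance logits into weights that sum to 1, so downstream models can
# focus on the informative bits. Scores here are invented, not learned.
import numpy as np

tokens = ["pt", "denies", "chest", "pain", "lungs", "filling", "with", "fluid"]
scores = np.array([0.1, 0.3, 0.8, 0.9, 1.5, 2.0, 0.2, 2.2])  # invented logits

weights = np.exp(scores) / np.exp(scores).sum()  # softmax normalization

for tok, wgt in sorted(zip(tokens, weights), key=lambda t: -t[1]):
    print(f"{tok:>8}: {wgt:.2f}")
```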

And Google will make the chips that get you there. Google’s Tensor Processing Units (TPUs), developed for its own services such as Google Assistant and Translate as well as for powering identification systems for driverless cars, can now be rented through Google’s own cloud computing service. Kind of like Amazon Web Services, but even more powerful. New York Times
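
For the curious, renting those TPUs looks like ordinary TensorFlow with a distribution strategy wrapped around it. A minimal sketch using the public tf.distribute API; the TPU node name is a hypothetical placeholder and the model is a throwaway example, not anything Google runs.

```python
# Sketch: connecting TensorFlow to a rented Cloud TPU. "my-tpu" is a
# hypothetical TPU node name you would have provisioned beforehand.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Anything built inside the strategy scope is replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
```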