A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed better predictive models for hospitalized patients using ‘deep learning’, a form of machine learning a/k/a AI. Representing each patient’s EHR record as a single data structure based on the FHIR standard (Fast Healthcare Interoperability Resources), the researchers drew on de-identified EHR-derived data from over 216,000 patients hospitalized for at least 24 hours between 2009 and 2016 at UCSF and UCM. Roughly 47 billion data points were utilized.
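For the technically inclined, here is a minimal, hypothetical sketch (in Python, not the study’s actual code) of that ‘single data structure’ idea: every FHIR-style resource in a patient’s EHR becomes one timestamped event in a single chronological sequence. The class names, field names, and sample codes are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical sketch: each FHIR resource (observation, medication order, note, etc.)
# becomes one timestamped event, and a patient's whole EHR is a single
# chronologically ordered sequence -- one data structure for every prediction task.

@dataclass
class FHIREvent:
    timestamp: datetime              # when the resource was recorded
    resource_type: str               # e.g. "Observation", "MedicationOrder", "Note"
    code: str                        # coded concept (LOINC, RxNorm, free-text token, ...)
    value: Optional[float] = None    # numeric value, if any (labs, vitals)

@dataclass
class PatientRecord:
    patient_id: str
    events: List[FHIREvent] = field(default_factory=list)

    def timeline(self) -> List[FHIREvent]:
        """Return every resource in temporal order, ready to feed a sequence model."""
        return sorted(self.events, key=lambda e: e.timestamp)

# Example: two raw EHR entries mapped into the common structure
record = PatientRecord("patient-001", [
    FHIREvent(datetime(2016, 3, 1, 8, 0), "Observation", "LOINC:8310-5", 38.4),  # body temperature
    FHIREvent(datetime(2016, 3, 1, 7, 30), "MedicationOrder", "RxNorm:1049221"),
])
for event in record.timeline():
    print(event.timestamp, event.resource_type, event.code, event.value)
```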
The researchers then developed predictive models in four areas: mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient’s problems). The models outperformed traditional predictive models in all cases and, because they use a single data structure, are projected to be highly scalable. For instance, the deep learning model matched the accuracy of traditional mortality models 24-48 hours earlier (page 11). The second part of the study describes a neural-network attribution system that gives clinicians transparency into how the predictions were made. Available through Cornell University Library. Abstract. PDF.
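As a rough illustration of how such an attribution system can work, the hedged sketch below computes attention-style weights over a patient’s timeline events with NumPy: higher weights point clinicians to the data points that most influenced a prediction. The function, dimensions, and random inputs are invented for illustration and are not the study’s actual architecture.

```python
import numpy as np

def attention_weights(event_embeddings: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Softmax over dot-product scores between each event embedding and a query vector."""
    scores = event_embeddings @ query        # one relevance score per timeline event
    scores -= scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights                           # sums to 1.0 across events

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 8))   # 5 timeline events, each an 8-dim embedding
query = rng.normal(size=8)             # query vector for the prediction task (e.g. mortality)

weights = attention_weights(embeddings, query)
for i, w in enumerate(weights):
    print(f"event {i}: attribution weight {w:.2f}")   # higher weight = more influence on the prediction
```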
The MarketWatch article rhapsodizes about these models and neural networks’ potential for cutting healthcare costs, but it also illustrates the drawbacks of large-scale machine learning and AI: the messiness of what’s in the EHR, including those troublesome clinical notes (the study used three additional deep neural networks to discern which parts of the clinical data within the notes were relevant); the lack of uniformity across data sets; and most patient data not being static (e.g. temperature).
And Google will make the chips that will get you there. Google’s Tensor Processing Units (TPUs), developed for its own services such as Google Assistant and Translate and for powering identification systems for driverless cars, can now be accessed through Google’s own cloud computing service. Kind of like Amazon Web Services, but even more powerful. New York Times
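For readers curious what that access looks like in practice, here is a minimal, hypothetical sketch (not Google’s own code) of pointing a TensorFlow 2.x Keras model at a Cloud TPU; the resolver setting, layer sizes, and dataset are placeholders and assume a TPU is reachable from the VM.

```python
import tensorflow as tf

# Hypothetical sketch: connect to a Cloud TPU and build a small Keras model on it.
# On a CPU/GPU-only machine the resolver step will fail -- this assumes a TPU VM
# or a TPU named via the environment (tpu="" lets TensorFlow auto-detect it).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables must be created inside the strategy scope so they live on the TPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),                       # placeholder feature width
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),    # e.g. a binary risk score
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])

# model.fit(train_dataset, epochs=3)   # train_dataset: a tf.data.Dataset of features/labels
```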