In-home behavioral tracking of older adults, a significant portion of telecare from about 2007 until a few years ago, may be getting a new lease on life. The technology this time around is the same as what guides self-driving vehicles: LiDAR (Light Detection and Ranging), which uses laser light pulses to map movement and surroundings.
In this model, IBM Research will use the LiDAR data and its machine learning to establish normal patterns and to observe behaviors that may indicate a potentially dangerous condition or situation. The LiDAR pilot will run in 10-15 households in the UK starting in June. IBM is partnering with early-stage UK home care company Cera Care on reporting and on alerting care staff to changes in behavior that may predict a more acute condition.
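IBM and Cera Care have not published how their model works, but "establishing normal patterns" and flagging deviations can be illustrated with a deliberately simple statistical sketch: learn a per-hour baseline of room-activity counts (as a LiDAR or motion sensor might report), then alert on hours that deviate sharply, such as no kitchen activity at breakfast time. The function names and the z-score approach below are illustrative assumptions, not the actual pipeline.

```python
# Illustrative sketch only -- the IBM/Cera Care pipeline is not public.
# Learns a per-hour baseline of activity counts, then flags hours whose
# activity deviates strongly from normal (e.g. no movement at breakfast).
from statistics import mean, stdev

def learn_baseline(history):
    """history: list of days, each a 24-element list of hourly activity counts.
    Returns a per-hour (mean, stdev) baseline."""
    return [(mean(day[h] for day in history), stdev(day[h] for day in history))
            for h in range(24)]

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Return the hours of `today` that deviate more than z_threshold
    standard deviations from the learned baseline."""
    alerts = []
    for hour, count in enumerate(today):
        mu, sigma = baseline[hour]
        if sigma > 0 and abs(count - mu) / sigma > z_threshold:
            alerts.append(hour)
    return alerts

# Usage: three days of normal activity peaking at 8am, then a day with none.
history = [[0] * 8 + [v] + [0] * 15 for v in (10, 12, 11)]
baseline = learn_baseline(history)
quiet_day = [0] * 24
print(flag_anomalies(baseline, quiet_day))  # hour 8 is flagged
```

A production system would of course model far richer signals (gait, room transitions, time in bed), but the core idea of comparing today's behavior against a learned personal baseline is the same.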
Many of the privacy issues that dogged predictive behavioral telemonitoring via networked infrared motion sensors, as well as in-home cameras, are present with LiDAR monitoring. Unlike 2007, five states now have ‘nanny cam’ laws that prohibit cameras within skilled nursing facilities without patient consent (Senior Housing News). Another issue: expense. LiDAR sensor setups cost up to $1,000 each, and at least one per room is needed. Far cheaper setups are available from the Editor’s long-ago former company, QuietCare, if one can still purchase them for the home from Care Innovations; Alarm.com, the UK’s Hive Link, and Google (with its Nest connected home tech) may also get into the act.
Senior housing may open up a new market for LiDAR, which is wilting in the autonomous vehicle (AV) area, where it has proven rather buggy on real roads with real drivers. Certainly the housing and care market is growing and destined to be huge, with the worldwide over-60 population growing from 900 million in 2015 to 2 billion in 2050, while the pool of for-hire caregivers shrinks by millions. Business Insider, Reuters
A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed better predictive models for in-hospital outcomes using ‘deep learning’, a/k/a machine learning or AI. Using a single data structure and the FHIR standard (Fast Healthcare Interoperability Resources) for each patient’s EHR record, the researchers used de-identified EHR-derived data from over 216,000 patients hospitalized for at least 24 hours from 2009 to 2016 at UCSF and UCM. Over 47bn data points were utilized.
The researchers then developed predictive models in four areas: mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient’s problems). The models outperformed traditional predictive models in all cases and, because they use a single data structure, are projected to be highly scalable. For instance, the model’s accuracy on mortality was achieved 24-48 hours earlier (page 11). The second part of the study concerned a neural-network attribution system through which clinicians can gain transparency into the predictions. Available through Cornell University Library. Abstract. PDF.
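The "single data structure" at the heart of the study is worth unpacking: rather than hand-engineering features per hospital, the researchers flattened each patient's FHIR resources into one chronological event sequence that deep learning models consume directly. The sketch below illustrates that flattening step in miniature; the field names are simplified FHIR and the string tokens are a toy stand-in for the paper's actual embedding pipeline.

```python
# Illustrative sketch of the study's core data idea: turn heterogeneous
# FHIR-style resources into a single time-ordered token sequence per patient.
# Field names are simplified FHIR; real Observations use valueQuantity etc.
from datetime import datetime

def to_event_sequence(resources):
    """resources: list of dicts loosely shaped like FHIR Observation/Condition.
    Returns (timestamp, token) pairs sorted chronologically."""
    events = []
    for r in resources:
        ts = datetime.fromisoformat(r["effectiveDateTime"])
        if r["resourceType"] == "Observation":
            token = f'obs:{r["code"]}={r["value"]}'
        elif r["resourceType"] == "Condition":
            token = f'cond:{r["code"]}'
        else:
            # Fall back to a generic token for other resource types.
            token = f'{r["resourceType"].lower()}:{r.get("code", "?")}'
        events.append((ts, token))
    return sorted(events)

# Usage: two out-of-order resources become one chronological sequence.
seq = to_event_sequence([
    {"resourceType": "Condition", "code": "J18.9",
     "effectiveDateTime": "2016-01-02T08:00:00"},
    {"resourceType": "Observation", "code": "temp", "value": 38.5,
     "effectiveDateTime": "2016-01-01T09:30:00"},
])
print([token for _, token in seq])  # ['obs:temp=38.5', 'cond:J18.9']
```

Because every hospital's data ends up in the same sequential form, one model architecture can be trained on records from different health systems, which is why the authors project the approach to be highly scalable.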
The MarketWatch article rhapsodizes about these models’ and neural networks’ potential for cutting healthcare costs, but it also illustrates the drawbacks of large-scale machine learning and AI: whatever is in the EHR, including those troublesome clinical notes (the study used three additional deep neural networks to discern which bits of the clinical data within the notes were relevant); lack of uniformity in the data sets; and the fact that most patient data is not static (e.g. temperature).
And Google will make the chips that will get you there. Google’s Tensor Processing Units (TPUs), developed for its own services like Google Assistant and Translate as well as for powering identification systems for driverless cars, can now be accessed through Google’s cloud computing services. Kind of like Amazon Web Services, but even more powerful. New York Times