Biotech/device company Verily has added a fresh $1 bn from Silver Lake Partners, with reported participation from the Ontario Teachers’ Pension Plan, to the $800 million stake Singapore’s Temasek took in 2016. Verily is majority-owned by Google parent Alphabet, which has added two new members to the Verily board: Alphabet CFO Ruth Porat and Egon Durban of Silver Lake.
CEO Andrew Conrad, who is still there despite a brace of bad press two years ago [TTA 6 Apr 16], stated that “We are taking external funding to increase flexibility and optionality as we expand on our core strategic focus areas. Adding a well-rounded group of seasoned investors, led by Silver Lake, will further prepare us to execute as healthcare continues the shift towards evidence generation and value-based reimbursement models.”
One is tempted to say, ‘whatever that means’. They have had multiple ventures, from contact lenses with Novartis’ subsidiary Alcon (reportedly discontinued, but dating back with Google to 2014) and diabetes with Sanofi, to sleep apnea with ResMed. VentureBeat reports they are cash-profitable and even venturing into areas such as small exploding needles that can extract blood through a wearable device–not precisely for the needle-phobic. There seem to be multiple projects in multiple directions that are primarily research. Certainly their funding at $1.8 bn is an outlier even at 2018’s big scale–but with Alphabet/Google as a parent and A-list partners, the risk is minimal. Mobihealthnews, Crunchbase
FDA clearance of Verily’s Study Watch. Late last week, Verily announced that its Study Watch had received FDA 510(k) clearance. It records, stores, transfers, and displays single-channel ECG. To date, there are no plans to use it beyond a handful of research studies, primarily on cardiac disease. Mobihealthnews. Meanwhile, Google, not Verily, paid Fossil $40 million for a still-under-development smartwatch technology to fit into Google’s Wear OS area. It’s not known whether it is health related, but Fossil’s CEO admitted that it was based on tech from the Misfit acquisition–and Misfit was focused on health tech. After the sale closes, it is predicted that some Fossil R&D staff will move over to Google. Back in 2015, Fossil paid $260 million for Misfit and its fitness tech, but has generally stayed in the conventional smartwatch area. The story broke in Wareable. Also Mobihealthnews.
DeepMind loses its Health to Google. DeepMind, the London-based AI developer acquired by Alphabet (Google) in 2014, no longer has a Health division. This group will be absorbed by Google Health, now headed by ex-Geisinger CEO David Feinberg. The former DeepMind health team will continue to be headed by former NHS surgeon Dr Dominic King, who will remain in London along with about 100 reported staffers, at least for now.
DeepMind’s major health initiative is Streams, an AI-powered mobile app that analyzes potential deterioration in patients and alerts nurses and doctors, saving time. It also monitors vital signs and integrates different types of data and test results from existing hospital IT systems. Streams is currently deployed at Royal Free NHS Foundation Trust Hospital in north London for acute kidney injury. Rollouts are expected at Imperial College Healthcare NHS Trust, Taunton and Somerset NHS Foundation Trust, and Yeovil District Hospital NHS Foundation Trust. Test partners are also expected to be found outside the UK.
DeepMind’s other health initiatives and research include fast eye disease detection, planning cancer radiotherapy treatment in seconds rather than hours, and detecting patient deterioration from electronic records.
Google Health is now expanding into digital health products and research, which was to be expected with Dr Feinberg on board. Currently, Google’s revenue stream consists of advertising and search.
The remainder of DeepMind not engaged with health will remain independent. CNBC, DeepMind blog
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/12/Lasso.jpg” thumb_width=”150″ /]Walmart and Microsoft partner to change the retail experience via AI.
The five-year agreement will switch Walmart’s applications over to the cloud and will affect shipping and supply chain. Healthcare Dive projects that the impact will be felt in healthcare as well. Microsoft announced last month that it is forming a unit to advance AI and cloud-based healthcare tools. The landscape is under extreme pressure in both retail and healthcare delivery, and Walmart needs to be ready for future moves, which will certainly happen. Walmart is rumored to be interested in acquiring Humana and is currently working with Emory Healthcare in Atlanta. Then there is CVS-Aetna, Cigna-Express Scripts, Google, and (looming above all) Amazon. (Though you can tuck all the years of Amazon’s profits into one year of Walmart’s.)
The ITV News headline grabs attention — but are dermatology apps really endangering the public when teledermatology can help diagnose 88 percent of people with skin cancer and 97 percent of those with benign lesions? A University of Birmingham-led research team did a metastudy of the literature and found three failings: “a lack of rigorous published trials to show they work and are safe, a lack of input during the app development from specialists to identify which lesions are suspicious and flaws in how the technology analyses photos”, particularly for scaly or non-pigmented melanomas. But did access to these apps encourage early diagnosis, which can lead to up to 100 percent five-year survival? Of course review is required, as recommended by the study, but this last factor was not really examined at the British Association of Dermatologists’ annual meeting in Edinburgh. University of Birmingham release with study abstract
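The 88 and 97 percent figures are, in effect, sensitivity and specificity. As a minimal illustration of how such figures are computed (the counts below are hypothetical, not the study’s data):

```python
# Illustrative only: hypothetical counts, not data from the Birmingham metastudy.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of people with skin cancer whom the app correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of people with benign lesions whom the app correctly clears."""
    return true_neg / (true_neg + false_pos)

# e.g. 88 of 100 cancers flagged, 97 of 100 benign lesions correctly cleared
print(sensitivity(88, 12))  # 0.88
print(specificity(97, 3))   # 0.97
```

Note that the failure mode the study flags — scaly or non-pigmented melanomas — shows up as false negatives, which lower sensitivity.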
Google’s AI division is eager to break into healthcare, and with ‘Medical Brain’ they might be successful. First is harnessing the voice recognition used in their Home, Assistant, and Translate products. Last year they started to test a digital scribe with Stanford Medicine to help doctors automatically fill out EHRs from patient visits, which will conclude in August. Next up, and staffing up, is a “next gen clinical visit experience” which uses audio and touch technologies to improve the accuracy and availability of care.
The third is research Google published last month on using neural networks to predict how long people may stay in hospitals, their odds of readmission, and the chances they will soon die. The neural net gathers up the previously ungatherable–old charts, PDFs–and transforms them into useful information. They are currently working with the University of California, San Francisco, and the University of Chicago with 46 billion pieces of anonymized patient data.
A successful test of the approach involved a woman with late-stage breast cancer. Based on her vital signs–for instance, her lungs were filling with fluid–the hospital’s own analytics indicated that there was a 9.3 percent chance she would die during her stay. Google used over 175,000 data points they saw about her and came up with a far higher risk: 19.9 percent. She died shortly after.
Using AI to crunch massive amounts of data is an approach that has been tried by IBM Watson in healthcare with limited success. Augmedix, Microsoft, and Amazon are also attempting AI-assisted systems for scribing and voice recognition in offices. CNBC, Bloomberg
This Editor has been covering contact lenses in health tech since at least 2013–contact lenses that detect glucose for diabetics (Google/Novartis/Alcon), eye pressure (Sensimed), and even detect multiple diseases (Oregon State University). None to date have made it into commercial release.
Here’s another try, this time from this year’s winner of the MIT Sloan Healthcare Innovation Prize competition. Theraoptix won the $25,000 grand prize, sponsored by Optum. The lenses are designed to deliver eye medication on a time-release basis using a thin polymer film formed into a tiny circular strip sandwiched into the lens material. They can be worn for up to two weeks to slowly but constantly deliver drugs in the treatment of diseases like glaucoma or after surgery. The lenses can also deliver drugs effectively to the back of the eye for treatment of macular degeneration, diabetic retinopathy, retinal vein occlusion, and similar diseases that today require in-office injections.
Theraoptix was developed by Lokendra Bengani Ph.D. of the Schepens Eye Research Institute of the Massachusetts Eye and Ear Infirmary. It was based on core technology by ophthalmologist Joseph B. Ciolino MD, who is Dr. Bengani’s mentor. We wrote about Dr. Ciolino’s research previously [TTA 7 Sept 16] including a look back at contact lens research. There were seven other finalists, of which the most interesting to this Editor was Kinematics shoe insole sensors for gait detection analysis (and fall prevention). MIT News.
Healthcare-related organizations have codes of conduct pertaining to suppliers. Does Uber meet compliance standards? As we reported a few days ago in our article on the burgeoning area of non-emergency medical transport (NEMT) [TTA 9 Mar], Uber Health’s debut with a reputed 100 healthcare organizations has led this Editor to a further examination of Uber, the organization. Uber has had a hard time staying out of the headlines–and the courts–in the past two years, in matters which might give healthcare partners pause.
- On 21 Nov, Uber reported that the personal data of 57 million users, including 600,000 US drivers, had been breached and stolen in October 2016–a full year prior. Not only was the breach announcement delayed by over a year, but in the interim Uber made it go away by paying off the hacker. Reuters on 6 December: “A 20-year-old Florida man was responsible for the large data breach at Uber Technologies Inc [UBER.UL] last year and was paid by Uber to destroy the data through a so-called “bug bounty” program normally used to identify small code vulnerabilities, three people familiar with the events have told Reuters.” The payment was an extraordinary $100,000. “The sources said then-CEO Travis Kalanick was aware of the breach and bug bounty payment in November of last year.” The Reuters article goes further into the mechanism of the hack. It eventually led to the resignation of their chief security officer, former Facebook/eBay/PayPal security head Joe Sullivan, who ‘investigated’ it using encrypted, disappearing messaging apps. Atlantic.
- CEO and co-founder Travis Kalanick was forced to resign last June after losing the confidence of the company’s investors, in contrails of financial mismanagement, sexual harassment, driver harassment, and ‘bro culture’. This included legal action over Uber’s 2016 acquisition of self-driving truck startup Otto, started by former Googlers who may or may not have lifted proprietary tech from Google before ankling. These are lavishly outlined in Bloomberg and in an over-the-top article in Engadget (with the usual slams at libertarianism). Mr. Kalanick remains on the board and is now a private investor.
- The plain fact is that Uber is still burning through funds (2017: $1bn) after raising $21.1bn and its valuation has suffered. The new CEO Dara Khosrowshahi, who earlier righted travel site Expedia, has a tough pull with investors such as SoftBank and Saudi Arabia’s Public Investment Fund. Also Mashable.
Healthcare and NEMT, as noted in our earlier article, are a strong source of potential steady revenue through reimbursement in Medicare Advantage and state Medicaid programs, which is why both Uber and Lyft are targeting it. The benefits for all sides–patients, practices, these companies, sub-contractors, and drivers–can be substantial and positive in this social determinant of health (SDOH).
Healthcare organizations, especially payers, have strict codes of compliance not only for employees and business practices but also for their suppliers’ practices. Payers in Medicare Advantage and Medicaid are Federal and state contractors. While Uber under its new CEO has shown contriteness in acknowledging an organization in need of righting its moral compass (CNBC), there remains the track record and the aftermath. Both deserve a closer look and review.
A Google/Stanford/University of California San Francisco/University of Chicago Medicine study has developed better predictive models for hospitalized patients using ‘deep learning’ a/k/a machine learning or AI. Using a single data structure and the FHIR standard (Fast Healthcare Interoperability Resources) for each patient’s EHR record, they used de-identified EHR-derived data from over 216,000 patients hospitalized for over 24 hours from 2009 to 2016 at UCSF and UCM. Over 47bn data points were utilized.
The researchers then looked at four areas to develop predictive models for mortality, unplanned readmissions (quality of care), length of stay (resource utilization), and diagnoses (understanding of a patient’s problems). The models outperformed traditional predictive models in all cases and, because they used a single data structure, are projected to be highly scalable. For instance, the accuracy of the model for mortality was achieved 24-48 hours earlier (page 11). The second part of the study concerned a neural-network attribution system through which clinicians can gain transparency into the predictions. Available through Cornell University Library. Abstract. PDF.
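The core idea — flattening each patient’s timestamped, FHIR-style event stream into a single structure and scoring an outcome from it — can be sketched as follows. This is a toy illustration with hand-picked features and weights, not the study’s actual deep learning pipeline; the feature names and coefficients are invented:

```python
import math
from datetime import datetime

# Toy sketch: flatten a FHIR-style event timeline into a feature vector,
# then score an in-hospital mortality risk with a hand-set logistic model.
# Feature choices and weights are illustrative only.

def timeline_to_features(events):
    """Aggregate timestamped (code, time, value) events into a fixed-size vector."""
    creatinine = [v for code, _, v in events if code == "creatinine"]
    heart_rate = [v for code, _, v in events if code == "heart_rate"]
    return [
        max(creatinine) if creatinine else 1.0,   # worst creatinine, mg/dL
        max(heart_rate) if heart_rate else 75.0,  # worst heart rate, bpm
        float(len(events)),                       # crude data-density proxy
    ]

def risk_score(features, weights, bias):
    """Logistic (sigmoid) combination of weighted features, in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked toy parameters; a real model would learn these from data
WEIGHTS = [0.8, 0.02, 0.01]
BIAS = -4.0

events = [
    ("creatinine", datetime(2016, 1, 2, 8), 2.4),
    ("heart_rate", datetime(2016, 1, 2, 9), 118.0),
    ("heart_rate", datetime(2016, 1, 2, 12), 124.0),
]
print(round(risk_score(timeline_to_features(events), WEIGHTS, BIAS), 3))
```

The study’s contribution is doing this at scale without hand-picked features: the single FHIR-based representation lets deep networks learn which of the billions of raw data points matter.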
The MarketWatch article rhapsodizes about these models and neural networks’ potential for cutting healthcare costs but also illustrates the drawbacks of large-scale machine learning and AI: what’s in the EHR including those troublesome clinical notes (the study used three additional deep neural networks to discern which bits of the clinical data within the notes were relevant), lack of uniformity in the data sets, and most patient data not being static (e.g. temperature).
And Google will make the chips which will get you there. Google’s Tensor Processing Units (TPUs), developed for its own services like Google Assistant and Translate, as well as powering identification systems for driverless cars, can now be accessed through their own cloud computing services. Kind of like Amazon Web Services, but even more powerful. New York Times
Verily‘s visit to last week’s Health 2.0 conference had an odd-but-fun tack, comparing the data received from human bodies to the billions of data points generated by an average late-model automobile in normal operations. We generate a lot less (ten orders of magnitude difference, according to Verily Chief Technology Officer Brian Otis), but Verily wants to maximize the output by wiring us to multiple sensors and to use the data in a predictive health model. Some of the Verily devices this Editor predicts will be non-starters (the sensor contact lens developed with Alcon) but others like the Dexcom partnership to develop a smaller, cheaper continuous blood glucose monitor and Liftware, the tremor-canceling silverware company Google acquired in 2014, appear promising. Key to predictive health is the Study Watch, which is a wearable that collects a lot of data but is easy to wear for a long time. Mobihealthnews
But what to do with All That Data? Where this differs from a car is that a car’s operational data goes into feedback loops that tune the engine’s performance and provide long-term monitoring of the electrical system, braking, and more. (When the sensors go south or the battery’s low, watch out!) It’s not clear from the talk where this overwhelming amount of generated healthcare data goes and how it becomes useful to a person or a doctor. This has its own feedback loop this Editor dubbed a few years ago the Five Big Questions (FBQs): who pays, how much, who’s looking at the data, who’s actioning it, and how the data is integrated into patient records. Those aren’t answered, but presumably these technologies will incorporate machine learning and AI to Crunch That Data into bite-sized parts.
Which leads us back to Verily’s parent, Alphabet a/k/a Google. All that data flowing into Verily devices could be monitored by Google and fed into other Google programs like its search engines and AdWords. Another privacy problem?
Perhaps health systems are arriving at the realization that they have to crunch the data, not avoid it. For the first time, this Editor has observed that the CMIO of a small health system in Illinois and Sanford Health’s executive director of analytics are actually welcoming patient data and research. Startups in this area such as PreventScripts labor on that “last mile” of clinical decision support, preventative medicine. EHRs are also in on the act. Epic launched Share Everywhere, through which patients can grant access to their data and clinicians can send updates into the patient portal (MyChart). What’s needed, CMIO Goel admits, is software that combines natural language processing and algorithms to track by disease and specialty–once again, machine learning. Healthcare IT News
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/07/Glass-EE.jpg” thumb_width=”200″ /]”Glass is a hands-free device, for hands-on workers.” What a marketing position! Google Glass finally arrives at where it should have started–not a techie toy or a social snooper banned from bars, but a tool for specific work needs that solves specific but important problems. This is not only ‘on trend’; the ‘professional case’ is also a powerful way to lend legitimacy to a new product (the classic is Tang ‘orange drink’ going into space in the early ’60s). The recent announcement of Glass Enterprise Edition (EE), marking its emergence from stealth mode, was a refreshingly low-key (for Google and parent Alphabet) surprise. Even the revamped look is sturdy and utilitarian, in full glass mode (left) or as a clip-on (which also serves as eye protection).
Their on-trend position for healthcare is to reduce the amount of time that doctors spend charting and documenting patients. Augmedix, a Glass partner, built the documentation automation platform for Sutter Health and for Dignity Health that captures the information from the interaction between patient and doctor via a ‘remote scribe’. Jay Kothari, the Glass project lead, quotes data from Dignity that it reduces clinician daily documentation time from 33 percent to less than 10 percent. The Sutter Health estimate is two hours saved per day. Out of the gate this is extremely valuable because it improves the clinician-patient face-to-face (and presumably virtual) visit through eye contact, reduces breaks for note-taking, and reduces the time pressure generated by post-visit review. Netherlands-based swyMed concentrates on facilitating virtual visits, and is testing a home visit pilot with Loyola University Health System practitioners in Maywood, Illinois. Others, like John Nosta, have been continuing to use Glass in business. Our Readers may want to check out these partners, as that is how Google is making Glass available, not directly. SF/Boston-based partner Brain Power wasn’t mentioned in Mr. Kothari’s blog, but their AI/VR applications address brain conditions such as autism and TBI, as well as other uses such as clinical trials and care for older adults. mHealthIntelligence interviewed Augmedix’s CEO Ian Shakil, who notes that Glass still needs improvements in battery life for the hard work of documenting patient visits.
Update: An interesting comment on this via Twitter. The paper is from 2015 but the regulatory and privacy questions around recording patients and information remain. Augmedix does state on its website that it is HIPAA compliant.
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2017/07/Glass-Twitter.jpg” thumb_width=”250″ /]
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2014/01/Overrun-by-Robots1-183×108.jpg” thumb_width=”150″ /]Weekend Reading
While AI is hotly debated and the Drudge Report features daily the eeriest pictures of humanoid robots, the hard work of determining social norms and programming them into robots continues. DARPA-funded researchers at Brown and Tufts Universities are, in their words, working “to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” said Reza Ghanadan, DARPA program manager. ‘Normal’ people determine ‘norm violations’ quickly (they must not live in NYC), so to prevent robots from crashing into walls or behaving towards humans in an unethical manner (see Isaac Asimov’s Three Laws of Robotics), the higher levels of robots will eventually have the capacity to learn, represent, activate, and apply a large number of norms to situational behavior. Armed with Science
This directly relates to self-driving cars, which are supposed to solve all sorts of problems from road rage to traffic jams. It turns out that they cannot live up to the breathless hype of Elon Musk, Google, and their ilk, even over the longer term. Sequencing on roadways? We don’t have high-accuracy GPS like the Galileo system yet. Rerouting? Eminently hackable and spoofable, as Waze has been. Does it see obstacles, traffic signals, and people clearly? Can it make split-second decisions? Can it anticipate the behavior of other drivers? Can it cope with mechanical failure? No more so, and often less, at present than humans. And self-drivers will be a bonanza for trial lawyers, as car companies and dealers will be added to the list of defendants, alongside insurers and owners. While it will give mobility to the older, vision impaired, and disabled, it could also be used to restrict freedom of movement. Why not simply incorporate many of these assistive features into cars, as some have been already? An intelligent analysis–and read the comments (click by comments at bottom to open). Problems and Pitfalls in Self-Driving Cars (American Thinker)
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2015/08/is-your-journey-neccessary_.jpg” thumb_width=”150″ /]Increasingly, not in the opinion of many.
We’ve covered earlier [TTA 21 Dec, 6 Feb] the wearables ‘bust’ and consumer disenchantment affecting fitness-oriented wearables. While projections are still $19 bn by 2018 (Juniper Research), Jawbone is nearly out of business with one last stab at the clinical segment, and Fitbit missed its 2016 earnings targets–and plans to target the same segment. So this Washington Post article on a glam presentation at SXSW of a Google/Levi’s smart jeans jacket for those who bicycle to work (‘bike’ and ‘bikers’ connote Leather ‘n’ Harleys) caught this Editor’s eye. It will enable wearers to take phone calls, get directions, and check the time by tapping and swiping their sleeves, with audio information delivered via headphone. As with every wearable blouse, muumuu, and toque she’s seen, this Editor’s skepticism is fueled by the fact that the cyclist depicted has to raise at least one hand to tap/swipe said sleeves and to wear headphones. He is also sans helmet on a street, not even a bike path or country lane. All are safety Bad Doo-Bees. Yes, the jacket is washable, as the two-day power source is removable. But while it’s supposed to hit the market by fall, the cost estimate is missing. A significant ‘who needs it?’ factor.
Remember the Quantified Selfer’s fascination with sleep tracking and all those sleep-specific devices that went away, taking their investors’ millions with them? Fitbit and many smartwatches work with apps to give the wearer feedback on their sleep hygiene, but the devices and apps themselves can deliver faulty information. This is according to a study published in the Journal of Clinical Sleep Medicine called “Orthosomnia: Are Some Patients Taking the Quantified Self Too Far?” (abstract) by Kelly Glazer Baron, MD with researchers from the Feinberg School of Medicine at Northwestern University. “The patients’ inferred correlation between sleep tracker data and daytime fatigue may become a perfectionistic quest for the ideal sleep in order to optimize daytime function. To the patients, sleep tracker data often feels more consistent with their experience of sleep than validated techniques, such as polysomnography or actigraphy.” (more…)
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2016/06/Robot-Belgique-1.png” thumb_width=”200″ /]Something that has been bothersome to Deep Thinkers (and Not Such Deep Thinkers like this Editor) is the almost-forced loss of control inherent in discussions of AI-powered technology. There is an elitist Wagging of Fingers that generally accompanies the Inevitable Questions and Qualms.
- If you don’t think 100 percent self-driving cars are an Unalloyed Wonder, as Elon Musk and Google tell you, you’re a Luddite
- If you have concerns about nanny tech or smart homes which can spy on you, you’re paranoid
- If you are concerned that robots will take the ‘social’ out of ‘social care’, likely replace human carers, or cost your neighbor their job, you are not with the program
I have likely led with the reason why: loss of control. Control does not motivate just Control Freaks. Think about the decisions you like versus the ones you don’t. Think about how helpless you felt as a child or teenager when big decisions were made without any of your input. It goes that deep.
In the smart home, robotic/AI world then, who has the control? Someone unknown, faceless, well meaning but with their own rationale? (Yes, those metrics–quality, cost, savings) Recall ‘Uninvited Guests’, the video which demonstrated that Dad Ain’t Gonna Take Nannying and is good at sabotage.
Let’s stop and consider: what are we doing? Where are we going? What fills the need for assistance and care, yet retains that person’s human autonomy and that old term…dignity? Maybe they might even like it? For your consideration:
How a robot could be grandma’s new carer (plastic dogs to the contrary in The Guardian)
AI Is Not out to Get Us (Scientific American)
Hat tip on both to reader Malcolm Fisk, Senior Research Fellow (CCSR) at De Montfort University via LinkedIn
Several articles of late have reported on the Google Alphabet life sciences company Verily. By fall last year, they had developed partnerships with Novartis-Alcon on development of a smart contact lens (for measuring glucose), plus Dexcom, Abbvie and Biogen. STAT, a health/medicine news website owned by Boston Globe Media which is still in beta, has a well-researched article that details, seemingly with a lot of inside scoop, its current turmoil. Twelve top engineering and science executives have taken a powder. Some of the execs date back to the Google X days; most have fled back to Mother Google, others to Amazon or to life sciences competitors. STAT: “No similar brain drain has occurred at Calico, another ambitious Google spinoff, which is focused on increasing the human lifespan.” The reasons are the apparently abrasive CEO Andrew Conrad, depicted as ambitious, fickle, and moody–and the constant shifting of support from approved projects to short-term initiatives ‘that show little promise’. Google’s bold bid to transform medicine hits turbulence.
Update: STAT published today information on a possible conflict of interest in Verily awarding a short-term research contract to a luxury health clinic, California Health & Longevity Institute, where Dr Conrad holds a majority ownership. According to the publication, it has no documented experience with this kind of work. The clinic will gather, in a 200-person ‘feasibility study’ for the larger Baseline study, genetic, molecular, clinical, and other data. According to Dr Conrad, it was done “Because I think it’s cool. Because it’s super efficient to have everything in one spot.” What may not be cool to the participants is that Baseline is already planning to sell the data to pharmaceutical companies–with patient consent, of course, in a document not yet public. Google’s biotech venture hit by ethical concerns
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2016/04/samsung-smart-contact-lens-1.png” thumb_width=”150″ /]A Samsung news tracking website, SamMobile, has tracked down publication of a Samsung patent filing for a smart contact lens. This concept would have a camera with a display that would project directly into the eye, a tiny antenna that transmits images to the smartphone, and motion sensors triggered by movement and blinking. This is different than the Google/Alcon lens in their new Verily Life Sciences division (TTA 17 July 14 and 1 Sept 15, pictured in the Mashable article), which is for measuring blood glucose. Samsung apparently filed the patent in 2014, and filed the ‘Gear Blink’ name for a trade mark in the US and South Korea. No clue on how comfortable a lens with a camera, antenna, and display would be on a normal eye. Hat tip to former TTA Ireland Editor Toni Bunting.
The Royal Society of Medicine has two unbeatable benefits to offer conference attendees: virtually every world expert is keen to present there and, because it is a medical education charity, charges are heavily subsidised. As a result you get the most bang for your buck of any independent digital health event, anywhere!
And just now the offer is even more attractive: if you book for all three in the next 14 days (ie by 12th February), the RSM will give you a 10% discount on all three!
On February 25th, the RSM is holding their first 2016 conference: Recent developments in digital health. This is the fourth time they have run this popular event, which aims to update attendees on particularly important new digital health advances. For me the highlight will be Chris Elliott of Leman Micro, who plans to demonstrate working smartphones that can measure all the key vital signs apart from weight without any peripheral – that includes systolic & diastolic blood pressure, as well as one-lead ECG, pulse, respiration rate and temperature. When these devices are widely available, they will dramatically affect health care delivery worldwide – particularly self-care. See it first at the RSM!
I’d also highlight speakers such as Beverley Bryant, Director of Digital Technology NHS England, Mustafa Suleyman, Head of Applied Artificial Intelligence at Google DeepMind (who’ll hopefully tell us a bit about introducing deep learning into Babylon), Prof Tony Young, National Clinical Director for Innovation, NHS England and Dr Ameet Bakhai, Royal Free London NHS Foundation Trust. It’s going to be a brilliant day!
On April 7th the RSM is holding Medical apps: mainstreaming innovation, also in its fourth year. Last year the election caused last minute cancellations by both NICE & the MHRA, who are making up for that with two high-level presentations. Among a panoply of other excellent speakers, I’m personally looking forward especially to (more…)
[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2016/01/RSM.jpg” thumb_width=”150″ /]Recent developments in digital health 2016
Thursday 25 February 2016
Royal Society of Medicine, 1 Wimpole Street, London, W1G 0AE
Presented by the Royal Society of Medicine’s Telemedicine and eHealth Section (presided over by our Editor Charles), this full day conference is open to the public and provides a global perspective from leaders within digital health. Keynoters are Mustafa Suleyman from Google’s Artificial Intelligence branch, DeepMind, and Dr Euan Ashley from Stanford University in California who leads Apple’s MyHeartCounts. Rates are reasonable: £50-115 for RSM members and £60-175 for non-members, plus 6 CPD credits. More information and registration on the RSM website here and download the flyer here.
Upcoming RSM Telemedicine events into early June:
Medical apps: Mainstreaming innovation–Thursday 7 April 2016
The future of medicine – the role of doctors in 2025–Thursday 19 May 2016
Big data 2016–Thursday 2 June 2016