CES Unveiled’s preview of health tech at CES 2018

CES Unveiled, Metropolitan Pavilion, NYC, Thursday 9 November

The Consumer Technology Association’s (CTA) press preview of the gargantuan CES 9-12 January 2018 Las Vegas event was the first of several international preview ‘road shows’. It’s a benchmark of the ebb and flow of health tech and related trends on the grand scale. Gone are the flashy wearables that changed colors based on our sweat patterns and heart rate, or tracked the health and movement of pets. Now it’s the Big Issues of 5G, AI, machine learning, AR/VR, and smart cities. Entertainment, especially sports, is now being reinvented by all of these.

The developments most relevant to healthcare that this Editor gleaned from the mountain of information the CTA plies us keyboard tappers with:

  • Wireless 5G. As this Editor has written previously from Ericsson and Qualcomm briefings, 5G and 5G New Radio will enable amazingly fast mobile speeds and near-instantaneous connectivity by 2019. It will enable IoT, self-driving cars, cars that communicate with each other, reconstruction of industrial plants, electrical distribution, multimodal transport, and perhaps the largest of all, smart cities. The automation of everything is the new mantra. Accenture estimates the impact at 3 million new jobs (nothing about losses), a $500bn increase in annual GDP, and a $275bn investment from telecom operators.
  • AI. Society will be impacted by machine learning, neural networks and narrow AI (e.g. calorie counting, diagnostics) versus general AI (simulation of human intelligence). This affects voice-activated assistants such as Amazon Echo and Google Home (now owned by 12 percent of the population, per a CES survey) as well as robotics that ‘read’ us better. These conversations with context may move to relationships not only with these assistants but with home robots such as Mayfield Robotics’ Kuri (which this Editor attempted to interact with on the show floor, to little effect and disappointment). Oddly not mentioned were uses of AI in ADL and vital signs tracking interpreted for predictive health.
  • Biometrics. This will affect security first in items like fingerprint-recognition padlocks (the new BIO-key TouchLock) and smart wallets, then in facial recognition usable in a wide variety of situations such as workplaces, buildings, and smartphones. Imagine their use in items like key safes, phones, home locks, and waypoints inside the home for activity monitoring.
  • AR and VR. ‘Power presence’ now puts viewers in the middle of a story that is hard to distinguish from reality. Viewer pricing is dropping to the $200-400 range with Oculus Go and Rift. At the Connected Health Conference, this Editor saw how VR experiences could ease anxiety and disconnectedness in older people with mobility difficulties or dementia (OneCaringTeam’s Aloha VR) or reduce pain (Cedars-Sinai tests). On the AR side, there’s Glass for hands-on workers [TTA 24 July] and heads-up displays in retail.

CES is also hosting the fourth Extreme Tech Challenge. Of the ten semi-finalists facing off on 11 January, three are in healthcare: Neurotrack, to assess and improve memory; Tissue Analytics, which uses smartphone cameras to assess wounds and healing; and (drum roll) the winner of TTA’s Insanely Cute Factor competition, the Owlet smart sock for baby monitoring [TTA’s backfile here]. One of the judges is Sir Richard Branson, who will host the finalists on 28 February on Necker Island (which hopefully will be rebuilt by then).

After the nearly two-hour briefing, the CTA hosted a mini-show on the ground floor of the Metropolitan. (more…)

Themes and trends at Aging2.0 OPTIMIZE 2017

Aging2.0 OPTIMIZE, in San Francisco on Tuesday and Wednesday 14-15 November, annually attracts the top thinkers and doers in innovation and aging services. It brings together academia, designers, developers, investors, and senior care executives from all over the world to rethink the aging experience in both immediately practical and long-term visionary ways.

Looking at OPTIMIZE’s agenda, several themes are on point for major industry trends.

Reinventing aging with an AI twist

What will aging be like during the next decades of the 21st Century? What must be done to support quality of life, active lives, and more independence? From nursing homes with more home-like environments (Green House Project) to Bill Thomas’ latest project, ‘tiny houses’ that support independent living (Minkas), there are many developments which will affect the perception and reality of aging.

Designers like Yves Béhar of fuseproject are rethinking home design as a continuum that supports all ages and abilities in what they want and need. Beyond physical design, these new homes are powered by artificial intelligence (AI) and machine learning technology that support wellness, engagement, and safety. Advances that are already here include voice-activated devices such as Amazon Alexa, virtual reality (VR), and IoT-enabled remote care (telehealth and telecare).

For attendees at Aging2.0, there will be substantial discussion of AI’s impact and implications, highlighted at Tuesday afternoon’s general session ‘AI-ging Into the Future’ and in Wednesday’s AI/IoT-related breakouts. AI is powering breakthroughs in social robotics and predictive health, the latter using sensor-based ADL and vital signs information for wellness, fall prevention, and dementia care. Among the companies in this conversation are CarePredict, EarlySense, SafelyYou, and Intuition Robotics.

Thriving, not surviving

Thriving in later age, not simply ‘aging in place’ or compensating for the loss of ability, must engage the community, the individual, and providers. There’s new interest in addressing interrelated social factors such as isolation, life purpose, food, healthcare quality, safety, and transportation. Business models and connected living technologies can combine to redesign post-acute care for better recovery, prevent unnecessary readmissions, provide more proactive care for chronic diseases, and support wellness.

In this area, OPTIMIZE has many sessions on cities and localities reorganizing to support older adults in social determinants of health, transportation innovations, and wearables for passive communications between the older person and caregivers/providers. Some organizations and companies contributing to the conversation are grandPad, Village to Village Network, Lyft, and Milken Institute.

Technology and best practices positively affect the bottom line

How can senior housing and communities put innovation into action today? How can developers make it easier for them to adopt innovation? Innovations that ‘activate’ staff and caregivers create a multiplier for a positive effect on care. Successful rollouts create a positive impact on both the operations and financial health of senior living communities.

(more…)

Weekend Big Read: will telemedicine do to retail healthcare what Amazon did to retail?

Updated. Our past contributor and TelehealthWorks’ Bruce Judson (ATA 2017 coverage) has penned this weekend’s Big Read in the HuffPost. His hypothesis is that telemedicine specifically will disrupt location-based care, followed by other digitally based care–and that executives at health systems and payers are in denial. More and more states are recognizing both parity of treatment and (usually) parity of payment. Telemedicine also addresses three major needs: care at home or on the go, with a minimal wait; the maldistribution of care, especially specialized care; and follow-up/post-acute care. His main points in the article:

  • Healthcare executives are being taken by surprise because present digital capabilities will not be future capabilities, and the shift to virtual will be a gradual process
  • Telemedicine will address doctor shortages and grow into coordinated care platforms embedding expertise (via connected diagnostics, analytics, machine learning, AI) and care teams
  • Telemedicine will eventually go up-market and directly compete with large providers in urban areas, displacing a significant amount of in-person care with virtual care
  • Telemedicine will start to incorporate continuous feedback loops to further optimize its services and move into virtual health coaching and chronic care management
  • Telemedicine platforms are also sub-specializing into stroke response, pediatrics, and neurology
  • Centers of expertise and expert platforms will become larger and fewer–centralizing into repositories of ‘the best’
  • Platforms will be successful if they are trusted through positive patient experiences. This is a consumer satisfaction model.

Mr. Judson draws an analogy of healthcare with internet services, an area where he has decades of expertise: “A general phenomenon associated with Internet services is that they break activities into their component parts, and then reconnect them in a digital chain.” Healthcare will undergo a similar deconstruction and reconstruction with a “new set of competitive dynamics.”

It’s certainly a provocative POV that at least gives a rationale for the sheer messiness and stop-n-start that this Editor has observed in Big Health since the early 2000s. A caution: the internet, communications, and retail do not endure the volume of regulatory force imposed on healthcare, which tends to make the retail analogy inexact. Governments monitor and regulate health outcomes, not search results or video downloads (except when it comes to net neutrality). It’s hard to find an industry so regulated other than financial/banking and utilities. FierceHealthcare also found the premise intriguing, noting the VA’s ‘Anywhere’ programs [TTA 9 Aug] and citing two studies indicating that 96 percent of large employers plan to make telemedicine available, including behavioral health services, and that 20 percent of employers are seeing over 8 percent employee utilization. (Under 10 percent utilization gave RAND the vapors earlier this year, with both this Editor and Mr. Judson stinging RAND’s findings in separate analyses.)

AI good, AI bad (part 2): the Facebook bot dialect scare

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2017/08/ghosty.jpg” thumb_width=”175″ /]Eeek! Scary! Bots develop their own argot. Facebook AI Research (FAIR) tested two chatbots programmed to negotiate. In short order, they developed “their own creepy language”, in the words of the Telegraph, to trade their virtual balls, hats, and books. “Creepy” to FAIR was only a repetitive ‘divergence from English’, since the chatbots weren’t limited to standard English. The lack of restriction enabled them to develop their own argot to quickly negotiate those trades. “Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research. “This isn’t so different from the way communities of humans create shorthands,” like soldiers, stock traders, the slanguage of showbiz mag Variety, or teenagers. Because Facebook’s interest is in AI bot-to-human conversation, FAIR added the requirement that the chatbots use standard English, which as it turns out is a handful for bots.
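What does it mean, mechanically, for two chatbots to ‘negotiate’? Below is a toy Python sketch of the task shape only: two agents with private item values trading offers over a fixed pool of balls, hats, and books. FAIR’s agents were trained neural models generating free-form dialogue; the rule-based logic, item pool, value ranges, and accept threshold here are this Editor’s illustrative assumptions, not Facebook’s code.

```python
import random

ITEMS = ["ball", "hat", "book"]
POOL = {"ball": 3, "hat": 2, "book": 1}  # the items on the table

def private_values():
    # Each agent values the items differently and sees only its own values.
    return {i: random.randint(0, 5) for i in ITEMS}

def propose(values):
    # Greedy opening offer: keep every item you value highly, concede the rest.
    return {i: (POOL[i] if values[i] >= 3 else 0) for i in ITEMS}

def concede(offer):
    # Concession: give up one randomly chosen item from your own share.
    kept = [i for i in ITEMS if offer[i] > 0]
    if kept:
        offer[random.choice(kept)] -= 1
    return offer

def negotiate(max_turns=10):
    a_vals, b_vals = private_values(), private_values()
    offer = propose(a_vals)  # A's share; B would get the remainder
    for _ in range(max_turns):
        b_share = {i: POOL[i] - offer[i] for i in ITEMS}
        if sum(b_vals[i] * b_share[i] for i in ITEMS) >= 5:  # B's accept threshold
            return {"A": offer, "B": b_share}
        offer = concede(offer)  # B rejects; A concedes an item and re-offers
    return None  # no deal within the turn limit

print(negotiate())
```

The emergent-language result came from letting agents like these generate unconstrained messages scored only on the final deal, rather than exchanging scripted offers as above.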

The danger in AI-to-AI divergence in language is that humans don’t have a translator for it yet, so we’d never quite understand what they are saying. Batra’s unsettling conclusion: “It’s perfectly possible for a special token to mean a very complicated thought. The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” So this shorthand can look like longhand? FastCompany/Co.Design’s Mark Wilson sees the upside–that software talking its own language to other software could eliminate complex APIs (application programming interfaces, which enable different types of software to communicate) by letting the software figure it out. But what about humans not being able to dig in and understand it readily? Something to think about as we use more and more AI in healthcare and predictive analytics.

Behave, Robot! DARPA researchers teaching them some manners.

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2014/01/Overrun-by-Robots1-183×108.jpg” thumb_width=”150″ /]Weekend Reading. While AI is hotly debated and the Drudge Report features daily the eeriest pictures of humanoid robots, the hard work on determining social norms and programming them into robots continues. DARPA-funded researchers at Brown and Tufts Universities are working “to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” in the words of Reza Ghanadan, DARPA program manager. ‘Normal’ people detect ‘norm violations’ quickly (they must not live in NYC), so to prevent robots from crashing into walls or behaving towards humans in an unethical manner (see Isaac Asimov’s Three Laws of Robotics), the higher levels of robots will eventually have the capacity to learn, represent, activate, and apply a large number of norms to situational behavior. Armed with Science

This directly relates to self-driving cars, which are supposed to solve all sorts of problems from road rage to traffic jams. It turns out that they cannot live up to the breathless hype of Elon Musk, Google, and their ilk, even taking the longer term. Sequencing on roadways? We don’t have high-accuracy GPS like the Galileo system yet. Rerouting? Eminently hackable and spoofable, as Waze has been. Does it see obstacles, traffic signals, and people clearly? Can it make split-second decisions? Can it anticipate the behavior of other drivers? Can it cope with mechanical failure? No more so, and often less, at present than humans. And self-drivers will be a bonanza for trial lawyers, as car companies and dealers will be added to the defendant list along with insurers and owners. While it will give mobility to the older, vision impaired, and disabled, it could also be used to restrict freedom of movement. Why not simply incorporate many of these assistive features into cars, as some have been already? An intelligent analysis–and read the comments (click on comments at bottom to open). Problems and Pitfalls in Self-Driving Cars (American Thinker)

Health and tech news that’s a snooze–or infuriating

The always acerbic Laurie Orlov has a great article on her Aging in Place Technology Watch that itemizes five news items covering the infuriating, the failing, and the downright puzzling as they affect health and older adults. In the last category, there’s the ongoing US Social Security Administration effort to eliminate paper statements and checks in favor of online-only statements and direct deposit–problematic for many of the oldest adults, the disabled, and those without reasonable, secure online access–or regular checking accounts. The infuriating is Gmail’s latest ‘upgrade’ to its mobile email that adds three short ‘smart reply’ boxes to the end of nearly every email. Other than sheer laziness and enabling emailing while driving, it’s not needed–and to turn it off, you have to go into your email settings. And for the failing, there’s IBM. There’s the stealth layoff–forcing an estimated 40 percent of employees who work remotely to relocate to brick-and-mortar offices or leave, even as IBM sells remote-working software. There’s a falloff in revenue, meaning that profits have to be squeezed from a rock. And finally there’s the extraordinarily expensive investment in Watson and Watson Health. This Editor back in February [TTA 3 and 14 Feb] noted the growing misgivings about it, observing that focused AI and simple machine learning are developing quickly and affordably for healthcare diagnostic applications. Watson Health and its massive, slow, and expensive data crunching for healthcare decision support are suitable only for complex diseases and equally massive healthcare organizations–and even they have been displeased, such as MD Anderson Cancer Center in Houston in February (Forbes). Older adults and technology – the latest news they cannot use

Want to attract Google Ventures to your health tech? Look to these seven areas.

The GV Hot 7, especially the finally-acknowledged physician burnout. Google Ventures’ Dr. Krishna Yeshwant, a GV general partner leading the Life Sciences team, is interested in seven areas, according to his interview in Business Insider (UK):

  • Physician burnout, which has become epidemic as doctors (and nurses) spend more and more time with their EHRs versus patients. This is Job #1 in this Editor’s opinion.

Dr. Yeshwant’s run-on question to be solved is: “Where are the places where we can intervene to continue getting the advantages of the electronic medical record while respecting the fact that there’s a human relationship that most people have gotten into this for that’s been eroded by the fact that there’s now a computer that’s a core part of the conversation.” (Your job–parse this sentence!–Ed.)

Let’s turn to Dr. Robert Wachter for a better statement of the problem. This Editor was present for his talk at the NYeC Digital Health Conference [TTA 19 Jan] and these are quoted from his slides: “Burnout is associated with computerized order entry use and perceived ‘clerical burden’ [of EHRs and other systems]”. He also cites the digital squeeze on physicians and the Productivity Paradox, noted by economist Robert Solow as “You can see the computer age everywhere except in the productivity statistics.” In other words, EHRs are a major thief of time. What needs to happen? “Improvements in the technology and reimagining the work itself.” Citing Mr. Solow again, he estimates the Productivity Paradox in healthcare will take 15-20 years to resolve. Dr. Wachter’s talk is here. (more…)

The stop-start of health tech in the NHS continues (UK)

Continuing The King’s Fund’s critique of the state of technology within the NHS [TTA 17 Feb], Harry Evans examines its current state of incipient ‘rigor mortis’ (his term). Due to the upcoming election, the Department of Health is delaying its response to Dame Fiona Caldicott, the National Data Guardian for Health and Care (NDG), on her review of data security, consent and opt-outs (Gov.UK publications).

People have significant trust and privacy concerns about their data, which led to NHS England suspending care.data over three years ago. But with safeguards in place, public polling supports the sharing of health data for uses such as research and direct care. But…there’s more. Now there is ‘algorithmic accountability’, which may single out individuals and influence their care, much as algorithms dictate what online ads we’re served. What of the patient data being served to Google DeepMind, IBM Watson Health, and Vitalpac for AI development? Have people adjusted their concerns, and have systems evolved to better store, secure, and share data? And how can this be implemented at the local NHS level? The NHS and technology: turn it off and on again Hat tip to Susanne Woodman of BRE.

A reminder that The King’s Fund’s Digital Health and Care Congress is on 11-12 July. Click on the sidebar to go directly to information and to register. Preview video; the Digital Health Congress fact sheet includes information on sponsoring or exhibiting. To make the event more accessible, there are new reduced rates for groups and students, plus bursary spots available for patients and carers. TTA is again a media partner of the Digital Health and Care Congress 2017. Updates on Twitter @kfdigital17

Babylon Health ‘chatbot’ triage AI app raises £50 million in funding (UK)

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2017/04/babylon_lifestyle2.jpg” thumb_width=”150″ /]Babylon Health, which has developed an AI-assisted chatbot to triage a potential patient in minutes, has raised a serious Series B of £50 million (US$60 million). Funders were Kinnevik AB, which had led the Series A, NNC Holdings, and Vostok New Ventures (Crunchbase). According to the FT (through TechCrunch), Babylon’s value is now north of $200 million. Revenues were not disclosed.

The current app uses texts to determine the level of further care, recommends a course of action, then connects the user if needed to a virtual doctor visit or, if acute, directs them to Accident & Emergency (US=emergency room or department). It also follows up with the user on their test results and health info. The funding will be used to enhance the current AI to extend to diagnosis. Babylon is accumulating daily data on thousands of patients, with machine learning further refining the AI. Dr. Ali Parsa, founder and CEO of Babylon, said in a statement: “Babylon scientists predict that we will shortly be able to diagnose and foresee personal health issues better than doctors, but this is about machines and medics cooperating, not competing.” Like other forms of telemedicine and triage (Zipnosis in health systems), it is designed to put healthcare access and affordability, as they claim, “into the hands of every person on earth”. The NHS pilot in north London [TTA 18 Jan] via the 111 hotline is testing Babylon as a ‘reliever’, though it directs only to a doctor appointment, not a video consult. BBC News, Mobihealthnews
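For the curious, here is roughly what ‘uses texts to determine the level of further care’ amounts to, as a minimal rule-based sketch in Python. Babylon’s triage model is proprietary and learned; the symptom names, rules, and dispositions below are invented purely for illustration.

```python
def triage(answers: dict) -> str:
    """Map yes/no symptom answers to a disposition (illustrative rules only)."""
    if answers.get("chest_pain") and answers.get("short_of_breath"):
        return "Go to A&E now"                       # acute: emergency care
    if answers.get("fever_over_3_days") or answers.get("symptoms_worsening"):
        return "Book a virtual doctor consultation"  # needs clinician review
    return "Self-care advice; check back if symptoms persist"

print(triage({"chest_pain": True, "short_of_breath": True}))  # Go to A&E now
print(triage({"fever_over_3_days": True}))   # Book a virtual doctor consultation
print(triage({}))                            # Self-care advice...
```

The hard engineering is in replacing hand-written rules like these with a model trained on outcomes data, which is where the £50 million is going.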

PwC: your job at risk by robots, AI by 2030?

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2016/06/robottoy-1.jpg” thumb_width=”150″ /]PwC‘s latest study on the effect of robotics and artificial intelligence on today’s and future workforce is the subject of this BBC Business article focusing on the UK workforce. Some 30 percent of existing jobs in the UK are potentially at high risk of automation by the 2030s, compared with 38 percent in the US, 35 percent in Germany and 21 percent in Japan. Most at risk are jobs in manufacturing and retail, but to quote PwC’s page on their multiple studies, robotics and AI may change how we work in a different way, an “augmented and collaborative working model alongside people – what we call the ‘blended workforce’”. Or not less work, but different types of work. But some jobs, like truck (lorry) driving, would go away or be vastly diminished.

The effect on healthcare? The categories are very broad, but the third most affected category of employment is administrative and support services at 37 percent, followed by professional, scientific and technical at 26 percent, and human health and social work at 17 percent. Will it increase productivity and thus salaries, which have languished in the past decade? Will it speed innovation and care in our area? Will it help the older population to be healthy and productive? The societal effects will roll on, but perhaps not for some. View this wonderful exchange between Jean Harlow and Marie Dressler that closes the 1933 film Dinner at Eight. Hat tip to Guy Dewsbury @dewsbury via Twitter

AI as patient safety assistant that reduces, prevents adverse events

The 30-year-old SXSW conference and cultural event has been rising as a healthcare venue for the past few years. One talk this Editor would like to have attended this past weekend was presented by Eric Horvitz, Microsoft Research Laboratory Technical Fellow and managing director, who is both a Stanford PhD in computing and an MD. This combination makes him a unique warrior against medical errors, which annually kill over 250,000 patients. His point was that artificial intelligence is increasingly used in tools that are ‘safety nets’ for medical staff in situations such as failure to rescue–the inability to treat complications that rapidly escalate–readmissions, and analyzing medical images.

A readmissions clinical support tool he worked on eight years ago, RAM (Readmissions Management), now produced by Caradigm, predicts which patients have a high probability of readmission and which will need additional care. Failure to rescue often results from a concatenation of complications happening quickly, with a lack of knowledge that resembles the prelude to an aircraft crash. “We’re considering [data from] thousands of patients, including many who died in the hospital after coming in for an elective procedure. So when a patient’s condition deteriorates, they might lose an organ system. It might be kidney failure, for example, so renal people come in. Then cardiac failure kicks in so cardiologists come in and they don’t know what the story is. The actual idea is to understand the pipeline down to the event so doctors can intervene earlier,” and to understand the patterns that led up to it. Another aim is to address potential problems that may be outside the doctor’s direct knowledge or experience, including the Bayesian ‘theory of surprise’ affecting the thought process. Dr Horvitz also discussed how machine learning can assist medical imaging and interpretation. His points were that AI and machine learning, applied to thousands of patient cases and images, are there to assist physicians, not replace them, and not to replace the human touch. MedCityNews
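To make the readmission-prediction idea concrete, here is a minimal sketch of the concept: a classifier scoring discharge records for readmission probability. The features, synthetic data, and choice of logistic regression (via scikit-learn) are this Editor’s illustrative assumptions; Caradigm’s RAM is far richer and its internals are not public at this level of detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic records: age, prior admissions (12 mo), length of stay, chronic conditions
X = rng.normal(loc=[70, 1, 5, 3], scale=[10, 1, 3, 2], size=(500, 4))
# Synthetic label loosely driven by prior admissions and chronic-disease load
y = ((X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=500)) > 2.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) discharge for 30-day readmission risk
new_patient = [[82, 2, 9, 5]]
print(f"Readmission risk: {model.predict_proba(new_patient)[0, 1]:.0%}")
```

Real tools layer in hundreds of EHR variables plus careful calibration, but the output is the same shape: a ranked list telling staff where to intervene first.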

#HIMSS17 roundup: machine learning, Proteus, Soon-Shiong/NantWorks’ cancer vax, Uniphy Health, more

HIMSS17 is over for another year, but there is plenty of related reading left for anyone who is not still recovering from sensory overload. There wasn’t big news made, other than former Speaker John Boehner trying to have it both ways about what the House needs to do about replacing the failing ACA a/k/a Obamacare. Here’s our serving:

  • If you are interested in the diffusion of workflow technologies into healthcare, including machine learning and AI, there’s a long-form three-part series in Healthcare IT News that this Editor noted has suddenly become a little difficult to find–but we did. The articles also helpfully identify vendors that cite certain areas of expertise in their exhibitor keywords.
  • Mobihealthnews produced a two-page wrap-up that links to various MHN articles where applicable. Of interest:
    • a wound measurement app that Intermountain Healthcare developed with Johns Hopkins spinoff Tissue Analytics
    • Children’s Health of Dallas, Texas is using the Proteus Digital Health ingestible med sensor with a group of teenaged organ post-transplant patients to improve med compliance
    • the Medisafe med management app has a new feature that alerts users to drug, food and alcohol interactions with their regimen, which is to this writer’s knowledge the first-ever med app to do this
    • Info security spending is rising, according to the Thales Data Threat Report. This year, 81 percent of U.S. healthcare organizations and 76 percent of global healthcare organizations will increase information security spending.
  • Healthcare and sports mogul Patrick Soon-Shiong presented on NantHealth‘s progress on a cancer vaccine that became a significant part of former VP Joe Biden’s initiative, Cancer Breakthroughs 2020. Dr Soon-Shiong stated that the FDA has given approval to advance the vaccine into later clinical trials, and also unveiled Nant AI, an augmented intelligence platform for high-speed processing of the genomic activity of cancer tumors, and the Nant Cloud, a cloud server which can generate bioinformatic data at 26 seconds per patient. This is in addition to the NantHealth GPS Cancer diagnostic tool used to isolate new mutations in a given tumor. HealthcareITNews. MedCityNews takes a dimmer view, noting two recent cancer vaccine failures. Dimmer still is Stat’s takedown of Dr Soon-Shiong, which reportedly was the talk of HIMSS.
  • Leading up to HIMSS, Newark’s own Uniphy Health announced UH4, the latest generation of its enterprise-wide communications and clinical collaboration platform for hospitals and clinics to facilitate the ‘real-time health system’. Release

Not enough? DestinationHIMSS, produced by Healthcare IT News/HIMSS Media, has its usual potpourri of official reporting here.

AI as diagnostician in ophthalmology, dermatology. Faster adoption than IBM Watson?

Three recent articles from the IEEE (formally the Institute of Electrical and Electronics Engineers) Spectrum journal point to advances in artificial intelligence (AI) for specific medical conditions–advances which may go into use faster and more cheaply than the massive machine learning/decision support program represented by IBM Watson Health.

A Chinese team developed CC-Cruiser to diagnose congenital cataracts, which affect children and can cause irreversible blindness. The CC-Cruiser team from Sun Yat-sen and Xidian Universities developed algorithms, trained on a relatively narrow database of 410 images of congenital cataracts and 476 images of normal eyes, to diagnose the existence of cataracts, predict the severity of the disease, and suggest treatment decisions. The program was subjected to five tests, scoring over 90 percent accuracy versus doctor consults on most of the critical ones. There, according to researcher and ophthalmologist Haotian Lin, is the ‘rub’–that even with more information, he cannot project the system reaching 100 percent accuracy. The other factor is the human one–face to face interaction. He strongly suggests that the CC-Cruiser system is a tool to complement and confirm doctor judgment, and could be used in non-specialized medical centers to diagnose and refer patients. Ophthalmologists vs. AI: It’s a Tie (Hat tip to former TTA Ireland Editor Toni Bunting)

In the diagnosis of skin cancers, a Stanford University team used GoogleNet Inception v3 to build a deep learning algorithm, trained on a huge database of 130,000 lesion images from more than 2,000 diseases. Inception performed on par with 21 board-certified dermatologists in differentiating certain skin lesions, for instance keratinocyte carcinomas from benign seborrheic keratoses. The major limitations here are the human doctor’s ability to touch and feel the skin, which is key to diagnosis, and to add the context of the patient’s history. Even so, Inception and similar systems could help to triage patients to a doctor faster. Computer Diagnoses Skin Cancers
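For the technically inclined, the recipe as described–an ImageNet-pretrained Inception v3 with a new classifier head fine-tuned on labeled lesion photos–looks roughly like this TensorFlow/Keras sketch. The directory path, two-class head, and training settings are placeholders for illustration, not the Stanford team’s actual configuration.

```python
import tensorflow as tf

# Pretrained Inception v3 as a frozen feature extractor (stage one of fine-tuning)
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception expects [-1, 1]
    base,
    tf.keras.layers.Dense(2, activation="softmax"),     # e.g. carcinoma vs. benign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder of labeled lesion images, one subfolder per class
train = tf.keras.utils.image_dataset_from_directory(
    "lesions/train", image_size=(299, 299), batch_size=32)
model.fit(train, epochs=5)
```

The point of transfer learning here: with visual features already learned from ImageNet, even a modest lesion dataset can train a competitive classifier.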

Contrast this with IEEE’s writeup on the slow development of IBM Watson Health’s systems, each having to be individually developed and continually refined using massive datasets–best summarized in Dr Robert Wachter’s remark, “But in terms of a transformative technology that is changing the world, I don’t think anyone would say Watson is doing that today.” The ‘Watson May See You Someday’ article may be from mid-2015, but it’s only this week that Watson for Oncology announced its first implementation, in a regional medical center based in Jupiter, Florida. Watson for Oncology collaborates with Memorial Sloan Kettering (MSK) in NYC (and was tested in other major academic centers). Currently it is limited to breast, lung, colorectal, cervical, ovarian and gastric cancers, with nine additional cancer types to be added this year. Mobihealthnews

What may change the world of medicine are AI systems using smaller, specific datasets, with Watson Health reserved for the big and complex diagnoses needing features like natural-language processing.

Robot-assisted ‘smart homes’ and AI: the boundary between supportive and intrusive?

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2016/06/Robot-Belgique-1.png” thumb_width=”200″ /]Something that has been bothersome to Deep Thinkers (and Not Such Deep Thinkers like this Editor) is the almost-forced loss of control inherent in discussion of AI-powered technology. There is an elitist Wagging of Fingers that generally accompanies the Inevitable Questions and Qualms.

  • If you don’t think 100 percent self-driving cars are an Unalloyed Wonder, like Elon Musk and Google tell you, you’re a Luddite
  • If you have concerns about nanny tech or smart homes which can spy on you, you’re paranoid
  • If you are concerned that robots will take the ‘social’ out of ‘social care’, likely replace human carers, or cost your neighbor their job, you are not with the program

I have likely led with the reason why: loss of control. Control does not motivate just Control Freaks. Think about the decisions you like versus the ones you don’t. Think about how helpless you felt as a child or teenager when big decisions were made without any of your input. It goes that deep.

In the smart home, robotic/AI world then, who has the control? Someone unknown, faceless, well meaning but with their own rationale? (Yes, those metrics–quality, cost, savings) Recall ‘Uninvited Guests’, the video which demonstrated that Dad Ain’t Gonna Take Nannying and is good at sabotage.

Let’s stop and consider: what are we doing? Where are we going? What fills the need for assistance and care, yet retains that person’s human autonomy and that old term…dignity? Maybe they might even like it? For your consideration:

How a robot could be grandma’s new carer (plastic dogs to the contrary in The Guardian)

AI Is Not out to Get Us (Scientific American)

Hat tip on both to reader Malcolm Fisk, Senior Research Fellow (CCSR) at De Montfort University via LinkedIn

Artificial intelligence with IBM Watson, robotics pondered on 60 Minutes

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2016/06/robottoy-1.jpg” thumb_width=”150″ /]This Sunday, the long-running TV magazine show 60 Minutes (CBS) had a long Charlie Rose-led segment on artificial intelligence. It concentrated mainly on the good with a little bit of ugly thrown in. The longest part was on IBM Watson massively crunching and applying oncology and genomics to diagnosis. In a study of 1,000 cancer patients reviewed by the University of North Carolina at Chapel Hill’s molecular tumor board, Watson confirmed 99 percent of the doctors’ diagnoses as accurate and found ‘something new’ in 30 percent of cases. As a tool, it is still considered to be in adolescence. Watson and data analytics technology have been a $15 billion investment for IBM, which can afford it, but by licensing it and through various partnerships, IBM has been starting to recoup it. The ‘children of Watson’ are also starting to grow. Over at Carnegie Mellon, robotics is king and Google Glass is reading visual data to give clues on speeding up reaction time. At Imperial College, Maja Pantic is taking the early steps into artificial emotional intelligence with a huge database of facial expressions and interpretations. In Hong Kong, Hanson Robotics is developing humanoid robots, and that may be part of the ‘ugly’, along with the fears that AI may outsmart humans in the not-so-distant future. 60 Minutes video and transcript

Speaking of recouping, IBM Watson Health‘s latest partnership is with Siemens Healthineers to develop population health technology and services to help providers operate in value-based care. Neil Versel at MedCityNews looks at that as well as 60 Minutes. Added bonus: a few chuckles about Siemens Healthcare’s Disney-lite rebranding as ‘Healthineers’.

A brief history of robotics, including Turing and Asimov (weekend reading)

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2016/06/robottoy-1.jpg” thumb_width=”150″ /]TechWorld gives us a short narrative on robotics history dating back to Asimov’s Three Laws of Robotics (1942), Turing’s Imitation Game (1950) and the pioneering work of several inventors in the late 1940s. There’s a brief tribute to Star Wars’ R2-D2 (Kenny Baker RIP) and C-3PO. It finishes up with AI-driven IBM Watson and DeepMind’s AlphaGo. Breezy but informative beach reading! Hat tip to Editor Emeritus and TTA founder Steve Hards; also read his acerbic comment on Dell and Intel’s involvement in Thailand’s Saensuk Smart City