Google’s ‘Project Nightingale’: a de facto breach of 10 million health records, or a bridge too far?

Breaking news: has this finally blown the lid off Google’s quest for data on everyone? This week’s revelations, whistleblowing, and general backlash over Google’s agreement with Ascension Health, the largest non-profit health system in the US and the largest Catholic health system in the world, first reported by the Wall Street Journal (paywalled), have put a bright light exactly where Google (and Apple, Facebook, and Amazon) do not want it.

Why do these giants want your health data? It’s all about where it can be used and sold. For instance, it can be used in research studies. It can be sold for use in EHR integration. But their services and predictive data are ‘where it’s at’. With enough accumulated data on both your health records and your personal life (e.g. not enough exercise, food consumption), their AI and machine-learning models can predict your health progression (or deterioration), along with probable diagnoses, outcomes, treatment options, and your cost curve. Advertising clicks and merchandising products (baby monitors, PERS, exercise equipment) are only the beginning; health systems and insurers are the main chance. In a worst-case misuse scenario, the data modeling can make you look like a liability to an employer or an insurer, making you both unemployable and either expensive to insure or uninsurable in a private insurance system.

In Google’s latest, its Project Nightingale business associate agreement (BAA) with Ascension Health, permissible under HIPAA, apparently allowed it to access in the initial phase at least 10 million identified health records, transmitted to Google without patient or physician consent or knowledge, including patient names, dates of birth, lab results, diagnoses, and hospital records. The transfer and the Google agreement were announced by Ascension on 11 November. Ultimately, 50 million records from Ascension facilities in 21 states are planned for transfer. According to a whistleblower on the project quoted in The Guardian, there are real concerns about the individuals handling identified data, the depth of the records, how they are being handled, and how Google will use the data. Ascension doesn’t seem to share that concern, stating that its goal is to “optimize the health and wellness of individuals and communities, and deliver a comprehensive portfolio of digital capabilities that enhance the experience of Ascension consumers, patients and clinical providers across the continuum of care”, which is a bit of word salad that leads right to Google’s Cloud and G Suite capabilities.

This was enough to kick off an inquiry by Health and Human Services (HHS). A spokesperson confirmed to Healthcare Dive that HHS’ Office for Civil Rights is opening an investigation into Project Nightingale. The agency “would like to learn more information about this mass collection of individuals’ medical records with respect to the implications for patient privacy under HIPAA,” OCR Director Roger Severino said in an emailed statement.

Project Nightingale cannot help but aggravate existing antitrust concerns in Congress and among state attorneys general about these companies and their safeguards on privacy. An example is the pushback around Google’s $2.1 bn acquisition of Fitbit, which one observer dubbed ‘extraordinary’ given Fitbit’s recent business challenges, and its acquisition of data analytics company Looker. DOJ’s antitrust division has been looking into how Google’s personalized advertising transactions work, and there are increasing calls from both ends of the US political spectrum to ‘break them up.’ Yahoo News

Google and Ascension Health may very well be the ‘bridge too far’ that curbs the relentless and largely hidden appetite for personal information of Google, Amazon, Apple, and Facebook, an appetite that is making their own consumers very, very nervous. Transparency, which seems to be a theme in many of these articles, isn’t a solution. Scrutiny, oversight with teeth, and restrictions are.

Also STAT News, The Verge on Google’s real ambitions in healthcare, and a tart take on Google’s recent lack of success with acquisitions in ZDNet, ‘Why everything Google touches turns to garbage’. Healthcare IT News tries to be reassuring, but the devil may be in Google’s tools not being compliant with HIPAA standards. Further down in the article, readers will see that HIPAA states that the agreement covers access to the PHI of the covered entity (Ascension) only so that it can carry out its healthcare functions, not for the business associate’s (Google’s) independent use or purposes.

About time: digital health grows a set of ethical guidelines

Is there a sense of embarrassment in the background? Fortune reports that the Stanford University Libraries are taking the lead in organizing an academic/industry group to establish ethical guidelines to govern digital health. These grew out of two meetings in July and November last year with the participation of over 30 representatives from health care, pharmaceutical, and nonprofit organizations. Proteus Digital Health, the developer of a formerly creepy sensor pill system, is prominently mentioned, but attending were representatives of Aetna CVS, Otsuka Pharmaceuticals (which works with Proteus), Kaiser Permanente, Intermountain Health, Tencent, and HSBC Holdings.

Here are the 10 Guiding Principles, which concentrate on data governance and sharing, as well as the use of the products themselves. They are expanded upon in this summary PDF:

  1. The products of digital health companies should always work in patients’ interests.
  2. Sharing digital health information should always be to improve a patient’s outcomes and those of others.
  3. “Do no harm” should apply to the use and sharing of all digital health information.
  4. Patients should never be forced to use digital health products against their wishes.
  5. Patients should be able to decide whether their information is shared, and to know how a digital health company uses information to generate revenues.
  6. Digital health information should be accurate.
  7. Digital health information should be protected with strong security tools.
  8. Security violations should be reported promptly along with what is being done to fix them.
  9. Digital health products should allow patients to be more connected to their care givers.
  10. Patients should be actively engaged in the community that is shaping digital health products.

We’ve already observed that best practices in design are putting some of these principles into action. Your Editors have long advocated, to the point of tiresomeness, that data security is not optional, from the smallest device to the largest health system. Our photo at left may be vintage, but if anything the threat has both grown and expanded. 2018’s ten largest breaches affected almost 7 million US patients and disrupted their organizations’ operations. Social media is also vulnerable. Parts of the US government, Congress and the FTC through a complaint filing, are coming down hard on Facebook for sharing personal health information with advertisers. This is PHI belonging to members of closed Facebook groups meant to support those with health and mental health conditions. (HIPAA Journal)

But here is where Stanford and the conference participants get all mushy. From their press release:

“We want this first set of ten statements to spur conversations in board rooms, classrooms and community centers around the country and ultimately be refined and adopted widely.” –Michael A. Keller, Stanford’s university librarian and vice provost for teaching and learning

So everyone gets to feel good and take home a trophy? Nowhere are there next steps, corporate statements of adoption, and so on.

Let’s keep in mind that Stanford University was the nexus of the Fraud That Was Theranos, which is discreetly not mentioned. If not a shadow hovering in the background, it should be. Perhaps there is some mea culpa, mea maxima culpa here, but this Editor will wait for more concrete signs of Action.

Weekend reading: the deadly consequences of unpredictable code

The Guardian’s end-of-August, post-bank holiday/pre-Labor Day essay is scary stuff, especially read in conjunction with the previous article about Click Here to Kill Everybody. It describes how algorithms are morphing beyond the familiar if/then/else model we learned in coding school, or in the IT engineers’ bullpen as we strained to understand how the device we sought to market actually worked. We may be concerned with badly protected IoT, cybersecurity, and the AI Monster, but this is actually much nearer to fruition, as it drives areas as diverse and as close to us as medicine, social media, and weapons systems.

The article explains in depth how code piled on code has created a data universe that no one really understands, that is allowed to run itself, and that can have disastrous consequences both socially and for our personal safety. “Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”).” Once an algorithm actually starts learning from its environment successfully, “we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us.”
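The distinction the essay draws can be sketched in a few lines of code. This is a toy illustration of our own (the heart-rate threshold, the data, and the function names are invented, not from the article): the first function is a classic if/then/else rule whose logic a human wrote and can audit; the second infers its decision boundary from data, so nobody ever wrote the resulting ‘rule’ down, and it can only be recovered by inspecting the learned parameters.

```python
def rule_based_alert(heart_rate):
    # Classic if/then/else: the threshold (100 bpm) is explicit, auditable,
    # and was chosen by a human. You can read the rule straight off the code.
    if heart_rate > 100:
        return "alert"
    return "ok"

def learn_threshold(samples, labels, epochs=200, lr=0.5):
    # A toy "learned" algorithm: a one-feature perceptron that fits a
    # decision boundary from labeled examples. No human wrote the resulting
    # threshold; it emerges from the data and the training procedure.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            err = y - pred
            w += lr * err * x
            b += lr * err
    return w, b

samples = [60, 72, 85, 110, 120, 140]   # heart rates (bpm)
labels  = [0, 0, 0, 1, 1, 1]            # 1 = should alert
w, b = learn_threshold([x / 100 for x in samples], labels)

def learned_alert(heart_rate):
    # The "rule" now lives in w and b, not in readable source code.
    return "alert" if w * (heart_rate / 100) + b > 0 else "ok"
```

Even in this six-sample toy, the learned boundary exists only as two floating-point numbers; scale that up to millions of parameters interacting with other systems and the essay’s point about no longer knowing an algorithm’s rules becomes concrete.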

What’s happening? Acceleration. What’s missing? Any kind of ethical standards or brakes on this careening car. A Must Read. Franken-algorithms: the deadly consequences of unpredictable code

The evolution of Facebook: implications for social health

The Telegraph’s recent retrospective on Facebook and its evolution from 2004’s ‘Thefacebook’ of Harvard University students to the Facebook that many of us use now, with Chat, timeline and a converged mobile and desktop design, led reader Mike Clark to drop Editor Charles a line about how healthcare isn’t maximizing social media and internet-based innovation. Recent studies have indicated that these social patient communities benefit their members. Agreed, but there are increasing qualifications–and qualms.

Back in 2014, Facebook made some noises about forming its own online health communities, a move that was widely derided as Facebook monetizing yet another slice of personal (health) data from users. Charles has made the excellent point that “almost all good health apps are essentially the tailored interface to an internet service that sits behind it, a fact often forgotten by commentators”. But Editor Donna, on her side of the Atlantic, has seen concerns mount about the privacy, security, and stealthy commercialization/monetization of many popular online patient support groups (OSGs), which Carolyn Thomas (‘The Heart Sister’) skewers here, excepting those with solid non-profit firewalling (academic, government, clinical). The example she gives: PatientsLikeMe, which markets health data gathered from members to companies developing products to sell to patients. How many members, with a disease or chronic condition on their minds, will browse through to this page, which says in part: “Except for the restricted personal information you entered when registering for the site, you should expect that every piece of information you submit (even if it is not currently displayed) may be shared with our partners and any member of PatientsLikeMe, including other patients.”

We’ve also noted that genomics data may not be de-identified thoroughly enough to prevent it from being matched back to individuals through inference [TTA 31 Oct 15], with the potential for sale. And of course Hackermania Running Wild continues (see here).
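The matching-through-inference risk is easy to illustrate with a linkage attack, the classic re-identification technique. The sketch below is entirely hypothetical (the names, records, and fields are invented for illustration): a ‘de-identified’ record retaining quasi-identifiers such as zip code, birth year, and sex can be joined against a public dataset, and a unique match links a name back to sensitive data.

```python
# Hypothetical "de-identified" records: names stripped, but quasi-identifiers
# (zip, birth year, sex) retained alongside sensitive genomic findings.
deidentified_genomic = [
    {"zip": "27514", "birth_year": 1961, "sex": "F", "variant": "BRCA1+"},
    {"zip": "27510", "birth_year": 1984, "sex": "M", "variant": "APOE4"},
]

# A hypothetical public dataset (e.g. a voter roll) with the same fields
# plus names.
public_voter_roll = [
    {"name": "Jane Doe", "zip": "27514", "birth_year": 1961, "sex": "F"},
    {"name": "John Roe", "zip": "27510", "birth_year": 1984, "sex": "M"},
    {"name": "Ann Poe",  "zip": "27514", "birth_year": 1990, "sex": "F"},
]

def reidentify(records, roll):
    # Join the two datasets on the quasi-identifiers. When exactly one
    # public record matches, the name is probably linked to the variant.
    hits = []
    for rec in records:
        matches = [p for p in roll
                   if (p["zip"], p["birth_year"], p["sex"]) ==
                      (rec["zip"], rec["birth_year"], rec["sex"])]
        if len(matches) == 1:  # unique match => probable re-identification
            hits.append((matches[0]["name"], rec["variant"]))
    return hits
```

In this toy example both genomic records re-identify uniquely; in practice the attack succeeds whenever the combination of quasi-identifiers is rare enough, which is why stripping names alone is not de-identification.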

For now general information sites like WebMD and personalized reference sites such as Medivisor feel more secure to users, as well as small non-commercialized OSGs and ‘closed’ telehealth/telemedicine systems.

Is digital health going to add to Digital Big Brother Watching You?

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2014/10/Doctor-Big-Brother.jpg” thumb_width=”150″ /]“They’re watching me on my phone. They’re watching me on Facebook. They’re even watching me when I want to hide. Machines are a form of intelligence, and they’re being built into everything.”–Dr Zeynep Tufekci

The world of digital health is largely based on tracking (via smartphones, wearables, watches) and on analytics that model All That Data we generate. Are we in compliance with our meds? Are we exercising enough? How’s our A1c trending? Drinking our water? All this monitoring, online and offline, is increasingly of concern to Deep Thinkers like Dr Tufekci, a reformed computer programmer, now a University of North Carolina assistant professor and self-proclaimed “techno-sociologist.” At IdeaFestival 2015, she took particular aim at Facebook (surprisingly, not at Google) for knowing a tremendous amount about us from our behavior, and of course using it to anticipate and sell us what we might want. The ethics of machine learning are still hazy, and machines are prone to error, different from human error, which we haven’t yet accounted for in our systems. Like the big health data that mistook a daughter for her mother and dropped critical health information from a patient’s EHR [TTA 29 Sep]. A thought-provoker to kick off your week. TechRepublic

Related: The Gimlet Eye took a squint at Big Brother Gathering and Monetizing Your Big Blinking Data (data mining, privacy, and employer wellness programs) back in 2013, which means the Eye and Dr Tufekci should get together for coffee, smartphones off, of course. While Glass is gone, the revolt against relentless monitoring is dramatized in the much-watched video, ‘Uninvited Guests’. And we can get equally scared about AI (artificial intelligence), like Steve Wozniak.

Pharma company ‘breaks the Internet’ with Kim K, gets FDA testy

But it may break them…well, give them a fracture. Or a good hard marketing lesson. Specialty pharma Duchesnay thought it had hit the jackpot by negotiating a promotional endorsement from pregnant celebrity Kim Kardashian for its morning sickness drug Diclegis. The Kardashian Marketing Machine cranked up. Kim (and mom Kris Jenner) took to Instagram, Facebook, and Twitter in late July with (scripted) singing of Diclegis’ praises to their tens of millions of followers. The Instagram posts linked to an ‘important safety page’, a/k/a The Disclaimers. That wasn’t nearly enough for the Food and Drug Administration (FDA), which governs the acceptable marketing of all drugs in the US. On August 7th a tartly worded letter arrived at Duchesnay’s Pennsylvania HQ, citing multiple violations of marketing regulations, notably around risk information, and telling Duchesnay to cease these communications immediately or withdraw the drug, the latter highly unlikely as it is successful. Duchesnay was also required to provide “corrective messages” for the “violative materials”.

Our takeaway:

* Duchesnay reaped a bounty of free media (see below), on top of the (undoubtedly expensive) Kardashian endorsement. Yes, it did pay the cost of an FDA nastygram and a legal response, and the warning will live on in its file. However, a lot of target-age women now know Diclegis, and others now know about the relatively obscure Duchesnay.

* This was a calculated marketing risk that tested the boundaries of social media and celebrity endorsement. (more…)

Facebooking health: good for communities, not for privacy?

In a Reuters exclusive, Facebook is reportedly considering creating online communities to support those with various medical conditions, as well as ‘preventative care’ applications for those minding their healthy lifestyles. According to Reuters’ sources, Facebook representatives have been meeting with medical industry experts and entrepreneurs, and are starting a research and development unit to test new health apps. It is not a far reach to assume that Facebook, which is always seeking to maximize its profitability, dependent on digital ad revenues (second only to Google’s), yet finding its younger audience in decline, is attempting to grapple with the concerns of its older-skewing audience, and also seeking a way to monetize another slice of data. Yet the 55+ audience is wary of Facebook given (more…)

The Internet.org initiative and the real meaning for health tech

Internet.org — Every one of us. Everywhere. Connected.

[grow_thumb image=”http://telecareaware.com/wp-content/uploads/2013/02/gimlet-eye.jpg” thumb_width=”150″ /]Much has been made of the Internet.org alliance (release). The mission is to bring internet access to the two-thirds of the world who supposedly have none. It is led, very clearly, by Mark Zuckerberg, founder and CEO of Facebook. Judging from both the website and the release, partners Ericsson, MediaTek, Nokia (handset sale to Microsoft, see below), Opera (browser), Qualcomm and Samsung, no minor players, clearly take a secondary role.  The reason given is that internet access is growing at only 9 percent/year. Immediately the D3H tea-leaf readers were all over one seemingly offhand remark made by Mr. Zuckerberg to CNN (Eye emphasis):

“Here, we use Facebook to share news and catch up with our friends but there they are going to use it to decide what kind of government they want, get access to healthcare for the first time ever, connect with family hundreds of miles away they haven’t seen for decades. Getting access to the internet is a really big deal. I think we are going to be able to do it”

Really? The Gimlet Eye thought that mobile phone connectivity and simple apps on inexpensive phones were already spreading healthcare, banking and simple communications to people all over the world. Gosh, was the Eye blind on this?

Looking inside the Gift Horse’s Mouth, and examining cui bono, what may be really behind this seemingly altruistic effort could be…only business. (more…)