Meta Pixel ad tracker implicated in another data breach, this one affecting 3 million at Advocate Aurora Health; Zuckerberg getting Senate scrutiny

The Pixel ad tracker continues to be a Big Problem for Meta and Facebook. Advocate Aurora Health, a large health system in Illinois and Wisconsin, this week informed 3 million patients of a potential data breach connected to the use of Meta Pixel. The Meta Pixel snippets of JavaScript code were used within their Epic MyChart and LiveWell websites and applications, as well as on some of their schedulers.

As we have previously noted (below), ad trackers like the Meta Pixel are used to target website visitors and to track ads placed on Facebook and Instagram. Developers routinely embed these snippets of code to measure ad performance and website traffic, but the problem here is that sensitive patient information (PHI) is being sent back to Facebook, where it violates patient privacy and can be misused.

Advocate Aurora stated that Meta Pixel may have collected “IP address; dates, times, and/or locations of scheduled appointments; your proximity to an Advocate Aurora Health location; information about your provider; type of appointment or procedure; communications between you and others through MyChart, which may have included your first and last name and your medical record number; information about whether you had insurance; and, if you had a proxy MyChart account, your first name and the first name of your proxy.” It did not collect Social Security numbers or financial account, credit card, or debit card information. At this point, there is no reported misuse of the information. Bleeping Computer, HealthcareITNews

That this is at all problematic is being vigorously denied by Facebook. But in an unusual move, Senator Mark Warner (D-VA) sent a letter yesterday to Meta CEO Mark Zuckerberg containing seven fairly rigorous questions, based on The Markup’s articles, to be answered by 3 November. This follows on Sen. Jon Ossoff’s request via the Senate Homeland Security Committee (below). (Editor’s opinion: the answers will be written by Meta’s lawyers, so don’t hold your breath for any rending of garments or mea culpas.) HealthcareITNews, The Markup

Our previous articles on The Markup’s research and Meta Pixel:

Breaking: Hospitals sending sensitive patient information to Facebook through website ‘Meta Pixel’ ad tracker–study

Facebook Meta Pixel update: Nemours Children’s Health using 25 ad trackers on appointment scheduling site

Let the lawsuits begin: Meta sued by health system patient for Meta Pixel info gathering

Novant Health notification 

Meta facing some Senate scrutiny on Meta Pixel’s health data collection–and how it’s used

Meta facing some Senate scrutiny on Meta Pixel’s health data collection–and how it’s used

A member of the Senate Homeland Security and Governmental Affairs Committee, Sen. Jon Ossoff (D-GA), has requested that Facebook’s parent, Meta, account for the healthcare information it has collected as a result of the Meta Pixel being used as an ad tracker on leading hospitals’ websites. During a hearing, Meta chief product officer Chris Cox was asked whether Meta held and used the data and responded, “Not to my knowledge.” According to this latest report in The Markup, Cox will follow up with a written response to the committee.

The June investigation by The Markup and STAT [TTA 17 June] examined how these snippets of code, routinely used by developers to track website performance, could be sending highly sensitive patient information to Facebook through online appointment schedulers and patient portals. As we noted then from the article, “None of the hospitals using the Pixel have patient consent forms permitting the transmission of individual patient information, nor business associate agreements (BAAs) that permit this data’s collection.” Facebook’s defense is that it does not use this information in any identifiable way.

Developments have moved quickly since then. According to The Markup, 28 of the 33 hospitals in the initial report have removed the Meta Pixel from their appointment schedulers or blocked it from sending patient information to Facebook. At least six of the seven health systems had also removed the pixels from their patient portals. In August, Novant Health notified patients of a code misconfiguration of their Meta Pixel tracker that may have led to unauthorized disclosure of their personal health information (PHI) [TTA 19 Aug]. North Carolina’s attorney general may investigate. Five class action lawsuits have been filed by patients, including against Novant and Medstar [TTA 23 June].

Meta may have a very hard time ’splainin’ to Sen. Ossoff how the data flows and is used for any given account, based upon its own internal engineers’ assessments in a leaked 2021 privacy memo. But given Meta’s and its founder’s pull in the Federal government, one wonders how far all of this will go. Your Editor is not optimistic. TTA’s articles on Meta Pixel

Facebook Meta Pixel update: Nemours Children’s Health using 25 ad trackers on appointment scheduling site

The Meta Pixel tracker study gets a little worse–this time, it’s information on appointments for children. The Markup’s investigation of healthcare use of ad trackers continues with an examination of Nemours Children’s Health, a Delaware-based multi-state health network with 97 locations in Delaware, Pennsylvania, New Jersey, and Florida serving about 500,000 families. Once again, Meta Pixel and other ad trackers were found to capture personal information and patient/family details entered by an adult on the appointment scheduling site and send them to Facebook–data that may constitute protected health information.

Meta Pixel was recorded as tracking:

  • IP addresses
  • Scheduled doctor and his or her specialty
  • In some cases, the first and last name of the child being scheduled

It is not this information alone, but its combination with other information that Facebook possesses, that can profile a user’s health conditions, link specific conditions to individuals and parents, and thus constitute a privacy violation. IP addresses are among the identifiers that HIPAA cites as creating protected health information when linked to other personal data.
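To make the linkage point concrete, here is a deliberately simplified, entirely hypothetical Python sketch (the names, addresses, and records are invented for illustration): joining a tracker log to an ad profile on nothing more than an IP address attaches a medical specialty to a named individual.

```python
# Hypothetical, simplified records for illustration only.
pixel_log = [
    {"ip": "203.0.113.7", "event": "schedule", "specialty": "oncology"},
]
ad_profiles = {
    "203.0.113.7": {"name": "Pat Example", "city": "Wilmington"},
}

def link_records(log, profiles):
    """Join tracker events to ad profiles on IP address: each record is
    innocuous alone, but the join yields name plus medical specialty."""
    linked = []
    for event in log:
        profile = profiles.get(event["ip"])
        if profile:
            linked.append({**profile, "specialty": event["specialty"]})
    return linked

print(link_records(pixel_log, ad_profiles))
# → [{'name': 'Pat Example', 'city': 'Wilmington', 'specialty': 'oncology'}]
```

Neither table is PHI by itself; the join is what creates it, which is exactly the HIPAA point about linked identifiers.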

The Markup used a tool called Blacklight to scan Nemours’ websites.

What was Nemours thinking in building their website? In addition to Meta Pixel, the scheduling site is riddled with 25 ad trackers and 38 third-party cookies. These come from Facebook, Amazon, Google, and The Latest Healthcare Transformer, Oracle. Oracle claims it has healthcare data on 80% of US internet users, and one can assume this is how it gets it. Ad platforms MediaMath and LiveRamp also captured data. The Markup’s team could detect the trackers, but could not determine what information they were capturing.

In addition to the trackers on the scheduling site, Blacklight picked up a session recorder from Mouseflow, code that can potentially track what people click on a page. Mouseflow states on its Legal Hub that a business associate agreement (BAA) must be in place before HIPAA-protected information is transmitted to a third party. Mouseflow did not confirm to The Markup that a BAA exists, but insisted in a statement that it does not permit the transmission of PII or PHI and masks that information.

Not all transmitted health data constitutes a HIPAA violation, but the capture of appointment scheduling information is right on the line, even if not 100% conclusively over it.

Elsewhere on the Nemours website, there were nine ad trackers and ten third-party cookies. 

Even after it was notified by The Markup, Nemours persisted in using Meta Pixel. While many of the trackers on the scheduling site were removed, trackers from Facebook, Google, and Salesforce remained. Facebook’s Meta Pixel was removed only after last week’s story.

This is certainly another gap between the suits in the suites and the IT/developers rowing in the galley.

Breaking: Hospitals sending sensitive patient information to Facebook through website ‘Meta Pixel’ ad tracker–study

Meta Pixel tracker sending appointment scheduling, patient portal information to Facebook–likely to become the Hot Story of next week. A study published jointly by The Markup and STAT examined the patient-facing areas of the websites of Newsweek’s 100 leading hospitals. It found that 33 of them permit the Meta Pixel ad tracker to send sensitive patient information back to Facebook. Ostensibly the reason is to better serve the patient with more tailored information, but what is not disclosed is what else Facebook is doing with the information. At a minimum, the information is the IP address–which HIPAA considers one of 18 identifiers that, when linked to other personal information, can constitute protected health information.

Ad trackers like the Meta Pixel are used to target website visitors and also to track ads placed on Facebook and Instagram. Developers routinely permit these snippets of code as trackers for better performance and website tracking.

  • For 33 hospitals, the Pixel tracker is picking up and sending back to Facebook information from users of the hospital’s online appointment scheduler: the user’s IP, the text of the button, the doctor’s name, and the search term. In testing the sites using a team approach facilitated by a plug-in called Mozilla Rally, the testers found that in several cases, even more identifiable patient information was being sent: first name, last name, email address, phone number, zip code, and city of residence entered into the booking form.
  • Seven hospitals have the Pixel deep in another highly sensitive area–the password-protected patient portal. These go by various names, but a popular one is Epic’s MyChart. One surveyor found that for Piedmont Healthcare, the Pixel picked up the patient’s name, the name of their doctor, and the time of their upcoming appointment. For Novant Health, the information was even more detailed: the name and dosage of a medication in the tester’s health record, notes entered about allergic reactions to the prescription, and the button clicked in response to a question about sexual orientation. (Novant has since removed the Pixel.)

None of the hospitals using the Pixel have patient consent forms permitting the transmission of individual patient information, nor business associate agreements (BAAs) that permit this data’s collection.
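How a pixel-style tracker ends up forwarding this material is mechanically simple. Below is a hypothetical Python simulation, not Meta’s actual code; the `ev`/`dl`/`cd[...]` parameter names mirror common pixel query-string conventions but are assumptions here. The point is that the snippet serializes whatever page context it is handed–button text, provider name, search term–without distinguishing PHI from ordinary analytics data.

```python
from urllib.parse import urlencode

def build_pixel_payload(event: str, page_url: str, context: dict) -> str:
    """Serialize page context into the query string a pixel-style tracker
    appends to its beacon request. Nothing here distinguishes PHI from
    ordinary analytics data: whatever the page exposes is sent verbatim."""
    params = {"ev": event, "dl": page_url}
    for key, value in context.items():
        params[f"cd[{key}]"] = value  # one custom-data parameter per item
    return urlencode(params)

payload = build_pixel_payload(
    "SubmitApplication",
    "https://hospital.example.org/schedule",
    {"button_text": "Schedule Appointment",
     "provider": "Dr. Jane Smith",
     "search_term": "cardiology"},
)
```

With a scheduler form, the “context” is exactly the button text, doctor name, and search term the study observed in transit–the tracker has no concept of sensitivity, only of key/value pairs.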

The reaction of most of these hospitals was interesting. Some immediately removed it without comment. Others maintained that no protected information was sent using Pixel or otherwise defended its use. Houston Methodist was almost alone in providing a detailed response on how they used it, but subsequently removed it.

Facebook maintains that it does not use this information in any identifiable way and that since 2020 it has had in place a sensitive health data filtering system and other safeguards. The New York Department of Financial Services, in a separate action monitoring Facebook in this area, questioned the accuracy of the filtering system. Even when the information is ‘encrypted’, it’s easy to break. Leaked internal Facebook documents indicate that engineers on the ad and business product team admitted as late as 2021 that they don’t have “an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose.’” (quoted from Vice)

The study could not determine whether Facebook used the data to target advertisements, train its recommendation algorithms, or profit in other ways, but the collection alone can be in violation of US regulations. 

On the face of it, it violates patient privacy. But is it a HIPAA violation of protected health information? No expert quoted was willing to say that was 100% true, but a University of Michigan law professor who studies big data and health care said that “I think this is creepy, problematic, and potentially illegal” from the hospitals’ point of view. Some of the hospitals in their comments say that they vetted it. One wonders at this tradeoff.

To this Editor, Meta Pixel’s use in this way walks right up to the line and puts a few toes over.

If this is true of 33 major hospitals, what about the rest of them–smaller and less important than Columbia Presbyterian, Duke, Novant, and UCLA? What all of us have suspected is quite true–social media is collecting data on us and invading our privacy at every turn, and except for exposés like this, 99% of people neither know nor care that their private information is being used.

The Markup is continuing their “Pixel Hunt” series with children’s hospitals. A previous article covered Pixels tracking information from crisis pregnancy centers, about as sensitive as you can get. Also HISTalk.

How Big Data failed public health during COVID

Once upon a time, say about 2012, Big Data and Massive Crunching was going to show us The Way. Better health, diagnosis, prevention, behavior, and a whole lotta other things. Doctors, nurses, engineers, and marketers feared that their jobs would be taken over by the handsome specimen to the left.

So at the start of the COVID pandemic, the hope was that Big Data was going to map the outbreaks and contact trace so that people could go into self-lockdown after a ride on a bus or subway, inform distancing measures, and identify hot spots for public health organizations, no matter how remote. Academic researchers and nonprofit partners mobilized into the non-profit Covid-19 Mobility Data Network that started by analyzing smartphone location data shared by tech companies. The intent was that public health officials could analyze it for insights based on hard data rather than 6′ guesstimates. It would then be expanded with additional data from Big Tech and grow, grow, grow.

Where it came a cropper was the ad tech companies’ incompatibilities in data gathering, their reluctance to share proprietary granular information, and privacy–an international battleground. Facebook turned out to be clueless at mapping mobility as a proxy or input for calculating contact rates, since it released only percent changes in movement or staying at home. The researchers also didn’t figure on proprietary, non-compatible systems and peculiarities stemming from business needs. Facebook, for instance, released data mapped only in eight-hour chunks in UTC, which didn’t, of course, take into account normal bedtimes. Google would report that trends in staying home were up, versus Facebook data indicating downward trends. Contact tracing, as Readers know, turned out to be a gigantic flop.

While the Covid-19 Mobility Data Network has evolved into a broader project called Crisis Ready, with the goal of creating data-sharing agreements that activate during a public health crisis, closing the data gaps for epidemiological research remains elusive, particularly between urban and rural areas and for specific demographics. STAT, PLOS Digital Health

Hearing voices: Cigna-Ellipsis AI-powered voice stress test; UCSF/Weill neuroprosthesis decodes attempted speech

The Next Voice You Hear? Two advances in voice analysis and restoring speech to those who’ve lost it.

The first is from Hong Kong-based Cigna International, whose test estimates your stress level from your speech and choice of words. Your Editor took the Cigna StressWaves test, which requires 90 seconds of answering a question based on one of four topics. To her utter shock–as she rushed to get out an article or two after a busy day at work and the loss of a good friend in the past week–she was told her stress level was low! The StressWaves test is followed up with an email containing your results and a questionnaire pitching Cigna’s health insurance. The test was developed for Cigna by machine-learning medical technology company Ellipsis Health. Other Ellipsis tools for clinicians can quantify anxiety and depression symptoms from 2-3 minutes of speech for initial screening and ongoing monitoring. Mobihealthnews.

The second is about restoring a measure of communication, by decoding cortical activity, to people who have lost the power to speak. The research by a team from UCSF’s Weill Institute for Neurosciences (among others) implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech. This was performed on a person with post-brain stem stroke anarthria (the loss of the ability to articulate speech) and spastic quadriparesis. The neuroprosthesis decoded cortical activity directly while the participant attempted to say individual words from a vocabulary set of 50 words. Using computational models plus a natural-language model of next-word probabilities in sentences, the researchers were able to decode full sentences from the cortical activity with a high degree of accuracy. The New England Journal of Medicine article is available in abstract but paywalled for the full study (limited free access with registration). The clinical trial was funded by Facebook and is on ClinicalTrials.gov here for the device and related neurological studies. Also Mobihealthnews.

Google’s ‘Project Nightingale’–a de facto breach of 10 million health records, off a bridge too far?

Breaking News. Has this finally blown the lid off Google’s quest for data on everyone? This week’s uncovering of, whistleblowing on, and general backlash against Google’s agreement with Ascension Health–the largest non-profit health system in the US and the largest Catholic health system on Planet Earth–revealed by the Wall Street Journal (paywalled), has put a bright light exactly where Google (and Apple, Facebook, and Amazon) do not want it.

Why do these giants want your health data? It’s all about where it can be used and sold. For instance, it can be used in research studies. It can be sold for use in EHR integration. But their services and predictive data are ‘where it’s at’. With enough accumulated data on both your health records and personal life (e.g. not enough exercise, food consumption), their AI and machine learning modeling can predict your health progression (or deterioration), along with probable diagnoses, outcomes, treatment options, and your cost curve. Advertising clicks and merchandising products (baby monitors, PERS, exercise equipment) are only the beginning–health systems and insurers are the main chance. In a worst-case and misuse scenario, the data modeling can make you look like a liability to an employer or an insurer, making you both unemployable and expensive (or impossible) to insure in a private insurance system.

In Google’s latest, their Project Nightingale business associate agreement (BAA) with Ascension Health, permissible under HIPAA, apparently allowed Google to access in the initial phase at least 10 million identified health records, transmitted without patient or physician consent or knowledge–including patient names, dates of birth, lab results, diagnoses, and hospital records. This transfer and the Google agreement were announced by Ascension on 11 November. Ultimately, 50 million records are planned to be transferred from Ascension facilities in 21 states. According to a whistleblower on the project quoted in The Guardian, there are real concerns about individuals handling identified data, the depth of the records, how the data is being handled, and how Google will be using it. Ascension doesn’t seem to share that concern, stating that its goal is to “optimize the health and wellness of individuals and communities, and deliver a comprehensive portfolio of digital capabilities that enhance the experience of Ascension consumers, patients and clinical providers across the continuum of care”–a bit of word salad that leads right to Google’s Cloud and G Suite capabilities.

This was enough to kick off an inquiry by Health and Human Services (HHS). A spokesperson confirmed to Healthcare Dive that HHS’ Office for Civil Rights is opening an investigation into Project Nightingale. The agency “would like to learn more information about this mass collection of individuals’ medical records with respect to the implications for patient privacy under HIPAA,” OCR Director Roger Severino said in an emailed statement.

Project Nightingale cannot help but aggravate existing antitrust concerns in Congress and among state attorneys general about these companies and their privacy safeguards. An example is the pushback around Google’s $2.1 bn acquisition of Fitbit–which one observer dubbed ‘extraordinary’ given Fitbit’s recent business challenges–and its purchase of data analytics company Looker. DOJ’s antitrust division has been looking into how Google’s personalized advertising transactions work, and increasingly there are calls from both ends of the US political spectrum to ‘break them up.’ Yahoo News

Google and Ascension Health may very well be the ‘bridge too far’ that curbs the relentless and largely hidden appetite for personal information by Google, Amazon, Apple, and Facebook that is making their own consumers very, very nervous. Transparency, which seems to be a theme in many of these articles, isn’t a solution. Scrutiny, oversight with teeth, and restrictions are.

Also STAT News, The Verge on Google’s real ambitions in healthcare, and a tart take on Google’s recent lack of success with acquisitions in ZDNet, ‘Why everything Google touches turns to garbage’. Healthcare IT News tries to be reassuring, but the devil may be in Google’s tools not being compliant with HIPAA standards. Further down in the article, Readers will see that HIPAA states that the agreement covers access to the PHI of the covered entity (Ascension) only to carry out its healthcare functions, not for the business associate’s (Google’s) independent use or purposes.

About time: digital health grows a set of ethical guidelines

Is there a sense of embarrassment in the background? Fortune reports that the Stanford University Libraries are taking the lead in organizing an academic/industry group to establish ethical guidelines to govern digital health. These grew out of two meetings in July and November last year with the participation of over 30 representatives from health care, pharmaceutical, and nonprofit organizations. Proteus Digital Health, the developer of a formerly creepy sensor pill system, is prominently mentioned, but attending were representatives of Aetna CVS, Otsuka Pharmaceuticals (which works with Proteus), Kaiser Permanente, Intermountain Health, Tencent, and HSBC Holdings.

Here are the 10 Guiding Principles, which concentrate on data governance and sharing, as well as the use of the products themselves. They are expanded upon in this summary PDF:

  1. The products of digital health companies should always work in patients’ interests.
  2. Sharing digital health information should always be to improve a patient’s outcomes and those of others.
  3. “Do no harm” should apply to the use and sharing of all digital health information.
  4. Patients should never be forced to use digital health products against their wishes.
  5. Patients should be able to decide whether their information is shared, and to know how a digital health company uses information to generate revenues.
  6. Digital health information should be accurate.
  7. Digital health information should be protected with strong security tools.
  8. Security violations should be reported promptly along with what is being done to fix them.
  9. Digital health products should allow patients to be more connected to their care givers.
  10. Patients should be actively engaged in the community that is shaping digital health products.

We’ve already observed that best practices in design are putting some of these principles into action. Your Editors have long advocated, to the point of tiresomeness, that data security is not notional from the smallest device to the largest health system. Our photo at left may be vintage, but if anything the threat has both grown and expanded. 2018’s ten largest breaches affected almost 7 million US patients and disrupted their organizations’ operations. Social media is also vulnerable. Parts of the US government–Congress and the FTC through a complaint filing–are also coming down hard on Facebook for sharing personal health information with advertisers. This is PHI belonging to members of closed Facebook groups meant to support those with health and mental health conditions. (HIPAA Journal)

But here is where Stanford and the conference participants get all mushy. From their press release:

“We want this first set of ten statements to spur conversations in board rooms, classrooms and community centers around the country and ultimately be refined and adopted widely.” –Michael A. Keller, Stanford’s university librarian and vice provost for teaching and learning

So everyone gets to feel good and take home a trophy? Nowhere are there next steps, corporate statements of adoption, and so on.

Let’s keep in mind that Stanford University was the nexus of the Fraud That Was Theranos, which is discreetly not mentioned. If not a shadow hovering in the background, it should be. Perhaps there is some mea culpa, mea maxima culpa here, but this Editor will wait for more concrete signs of Action.

Weekend reading: the deadly consequences of unpredictable code

The Guardian’s end-of-August, post-bank holiday/pre-Labor Day essay is scary stuff, especially read in conjunction with the previous article about Click Here to Kill Everybody. It covers how algorithms are morphing beyond the familiar if/then/else model we learned in coding school, or in the IT engineers’ bullpen as we strained to understand how the device we sought to market actually worked. We may be concerned with badly protected IoT, cybersecurity, and the AI Monster, but this is actually much nearer to fruition, as it drives areas as diverse and close to us as medicine, social media, and weapons systems.

The article explains in depth how code piled on code has created a data universe that no one really understands, that is allowed to run itself, and that can have disastrous consequences socially and for our personal safety. “Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”).” Once an algorithm actually starts learning from its environment successfully, “we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us.”

What’s happening? Acceleration. What’s missing? Any kind of ethical standards or brakes on this careening car. A Must Read. Franken-algorithms: the deadly consequences of unpredictable code

The evolution of Facebook: implications for social health

The Telegraph’s recent retrospective on Facebook and its evolution from 2004’s ‘Thefacebook’ of Harvard University students to the Facebook that many of us use now, with Chat, timeline and a converged mobile and desktop design, led reader Mike Clark to drop Editor Charles a line about how healthcare isn’t maximizing social media and internet-based innovation. Recent studies have indicated that these social patient communities benefit their members. Agreed, but there are increasing qualifications–and qualms.

Back in 2014, Facebook made some noises about forming its own online health communities, a move widely derided as Facebook monetizing yet another slice of personal (health) data from users. While Charles has made the excellent point that “almost all good health apps are essentially the tailored interface to an internet service that sits behind it, a fact often forgotten by commentators”, Editor Donna on her side of the Atlantic has seen concerns mount about privacy, security, and the stealthy commercialization/monetization of many popular online patient support groups (OSGs)–which Carolyn Thomas (‘The Heart Sister’) skewers here, excepting those with solid non-profit firewalling (academic, government, clinical). The example she gives is Patients Like Me, which markets health data gathered from members to companies developing products to sell to patients. How many members, with a disease or chronic condition on their mind, will browse through to the page that says in part: “Except for the restricted personal information you entered when registering for the site, you should expect that every piece of information you submit (even if it is not currently displayed) may be shared with our partners and any member of PatientsLikeMe, including other patients.”

We’ve also noted that genomics data may not be sufficiently de-identified so that it can’t be matched through inference [TTA 31 Oct 15], with the potential for sale. And of course Hackermania Running Wild continues (see here).

For now, general information sites like WebMD and personalized reference sites such as Medivizor feel more secure to users, as do small non-commercialized OSGs and ‘closed’ telehealth/telemedicine systems.

Is digital health going to add to Digital Big Brother Watching You?

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2014/10/Doctor-Big-Brother.jpg” thumb_width=”150″ /]“They’re watching me on my phone. They’re watching me on Facebook. They’re even watching me when I want to hide. Machines are a form of intelligence, and they’re being built into everything.”–Dr Zeynep Tufekci

The world of digital health is largely based on tracking–via smartphones, wearables, watches–and on analytics taking and modeling All That Data we generate. Are we in compliance with our meds? Are we exercising enough? How’s our A1c trending? Drinking our water? All this monitoring–online and offline–is increasingly of concern to Deep Thinkers like Dr Tufekci, a reformed computer programmer, now a University of North Carolina assistant professor and self-proclaimed “techno-sociologist.” At IdeaFestival 2015, she took particular aim at Facebook (surprisingly, not at Google) for knowing a tremendous amount about us through our behavior, and of course using it to anticipate and sell us what we might want. The ethics of machine learning are still hazy, and machines are prone to error–error different from human error–which we haven’t yet accounted for in our systems. Like the big health data system that mistakes a daughter for her mother and drops critical health information from a patient’s EHR [TTA 29 Sep]. A thought-provoker to kick off your week. TechRepublic

Related: The Gimlet Eye took a squint at Big Brother Gathering and Monetizing Your Big Blinking Data–data mining, privacy and employer wellness programs–back in 2013, which means the Eye and Dr Tufekci should get together for coffee, smartphones off of course. While Glass is gone, the revolt against relentless monitoring is well-dramatized in the well-watched video, ‘Uninvited Guests’. And we can get equally scared about AI–artificial intelligence–like Steve Wozniak. 

Pharma company ‘breaks the Internet’ with Kim K, gets FDA testy

But it may break them…well, give them a fracture. Or a good, hard marketing lesson. Specialty pharma Duchesnay thought it had hit the jackpot in negotiating a promotional endorsement from pregnant celebrity Kim Kardashian for its morning sickness drug Diclegis. The Kardashian Marketing Machine cranked up. Kim (and mom Kris Jenner) took to Instagram, Facebook, and Twitter in late July with (scripted) singing of Diclegis’ praises to their tens of millions of followers. The Instagram posts linked to an ‘important safety page’ a/k/a The Disclaimers. That wasn’t nearly enough for the Food and Drug Administration (FDA), which governs the acceptable marketing of all drugs in the US. On August 7th, a tartly worded letter arrived at Duchesnay’s Pennsylvania HQ citing multiple violations of marketing regulations, notably on risk information, and telling Duchesnay to cease these communications immediately or withdraw the drug–highly unlikely, as it is successful. Duchesnay was also required to provide “corrective messages” for the “violative materials”.

Our takeaway:

* Duchesnay reaped a bounty of free media (see below), on top of the (undoubtedly expensive) Kardashian endorsement. Yes, they paid the cost of an FDA nastygram and a legal response, and the warning will live on in their file. However, a lot of target-age women now know Diclegis, and others now know about the relatively obscure Duchesnay.

* This was a calculated marketing risk that tested the boundaries of social media and celebrity endorsement. (more…)

Facebooking health: good for communities, not for privacy?

In a Reuters exclusive, Facebook is reportedly considering creating online communities which will support those with various medical conditions, as well as ‘preventative care’ applications for those minding their healthy lifestyle. According to Reuters’ sources, Facebook representatives have been meeting with medical industry experts and entrepreneurs. They are also starting a research and development unit to test new health apps. It is not a far reach to assume that Facebook, which is always seeking to maximize its profitability dependent on digital ad revenues (second only to Google), yet finding its younger audience on the decline, is attempting to grapple with the concerns of its older-skewing audience–and also seeking a way to monetize another slice of data. Yet the 55+ audience is wary of Facebook given (more…)

The Internet.org initiative and the real meaning for health tech

Internet.org — Every one of us. Everywhere. Connected.

[grow_thumb image=”https://telecareaware.com/wp-content/uploads/2013/02/gimlet-eye.jpg” thumb_width=”150″ /]Much has been made of the Internet.org alliance (release). The mission is to bring internet access to the two-thirds of the world who supposedly have none. It is led, very clearly, by Mark Zuckerberg, founder and CEO of Facebook. Judging from both the website and the release, partners Ericsson, MediaTek, Nokia (handset sale to Microsoft, see below), Opera (browser), Qualcomm and Samsung, no minor players, clearly take a secondary role.  The reason given is that internet access is growing at only 9 percent/year. Immediately the D3H tea-leaf readers were all over one seemingly offhand remark made by Mr. Zuckerberg to CNN (Eye emphasis):

“Here, we use Facebook to share news and catch up with our friends but there they are going to use it to decide what kind of government they want, get access to healthcare for the first time ever, connect with family hundreds of miles away they haven’t seen for decades. Getting access to the internet is a really big deal. I think we are going to be able to do it”

Really? The Gimlet Eye thought that mobile phone connectivity and simple apps on inexpensive phones were already spreading healthcare, banking and simple communications to people all over the world. Gosh, was the Eye blind on this?

Looking inside the Gift Horse’s Mouth, and examining cui bono, what may be really behind this seemingly altruistic effort could be…only business. (more…)