About time: digital health grows a set of ethical guidelines

Is there a sense of embarrassment in the background? Fortune reports that the Stanford University Libraries are taking the lead in organizing an academic/industry group to establish ethical guidelines to govern digital health. The guidelines grew out of two meetings in July and November last year with the participation of over 30 representatives from health care, pharmaceutical, and nonprofit organizations. Proteus Digital Health, the developer of a formerly creepy sensor pill system, is prominently mentioned, but attending were representatives of Aetna (CVS), Otsuka Pharmaceuticals (which works with Proteus), Kaiser Permanente, Intermountain Health, Tencent, and HSBC Holdings.

Here are the 10 Guiding Principles, which concentrate on data governance and sharing, as well as the use of the products themselves. They are expanded upon in this summary PDF:

  1. The products of digital health companies should always work in patients’ interests.
  2. Sharing digital health information should always be to improve a patient’s outcomes and those of others.
  3. “Do no harm” should apply to the use and sharing of all digital health information.
  4. Patients should never be forced to use digital health products against their wishes.
  5. Patients should be able to decide whether their information is shared, and to know how a digital health company uses information to generate revenues.
  6. Digital health information should be accurate.
  7. Digital health information should be protected with strong security tools.
  8. Security violations should be reported promptly along with what is being done to fix them.
  9. Digital health products should allow patients to be more connected to their care givers.
  10. Patients should be actively engaged in the community that is shaping digital health products.

We’ve already observed that best practices in design are putting some of these principles into action. Your Editors have long advocated, to the point of tiresomeness, that data security is not a notional concern for anything from the smallest device to the largest health system. Our photo at left may be vintage, but if anything the threat has both grown and expanded. 2018’s ten largest breaches affected almost 7 million US patients and disrupted their organizations’ operations. Social media is also vulnerable. Parts of the US government, namely Congress and the FTC (through a complaint filing), are coming down hard on Facebook for sharing personal health information with advertisers. This is PHI belonging to members of closed Facebook groups meant to support those with health and mental health conditions (HIPAA Journal).

But here is where Stanford and the conference participants get all mushy. From their press release:

“We want this first set of ten statements to spur conversations in board rooms, classrooms and community centers around the country and ultimately be refined and adopted widely.” –Michael A. Keller, Stanford’s university librarian and vice provost for teaching and learning

So everyone gets to feel good and take home a trophy? Nowhere are there next steps, corporate statements of adoption, and so on.

Let’s keep in mind that Stanford University was the nexus of the Fraud That Was Theranos, which is discreetly not mentioned. If not a shadow hovering in the background, it should be. Perhaps there is some mea culpa, mea maxima culpa here, but this Editor will wait for more concrete signs of Action.

Behave, Robot! DARPA researchers teaching them some manners.

Weekend Reading: While AI is hotly debated and the Drudge Report features daily the eeriest pictures of humanoid robots, the hard work of determining social norms and programming them into robots continues. DARPA-funded researchers at Brown and Tufts Universities are working “to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” in the words of Reza Ghanadan, DARPA program manager. ‘Normal’ people detect ‘norm violations’ quickly (they must not live in NYC), so to keep robots from crashing into walls or behaving unethically toward humans (see Isaac Asimov’s Three Laws of Robotics), higher-level robots will eventually have the capacity to learn, represent, activate, and apply a large number of norms to situational behavior. (Armed with Science)

This directly relates to self-driving cars, which are supposed to solve all sorts of problems from road rage to traffic jams. It turns out that they cannot live up to the breathless hype of Elon Musk, Google, and their ilk, even over the longer term. Sequencing on roadways? We do not yet have high-accuracy GPS on the order of the Galileo system. Rerouting? Eminently hackable and spoofable, as Waze has been. Can a self-driving car see obstacles, traffic signals, and people clearly? Can it make split-second decisions? Can it anticipate the behavior of other drivers? Can it cope with mechanical failure? No better, and often worse, at present than humans. Self-drivers will also be a bonanza for trial lawyers, since car companies and dealers will be added to the list of defendants alongside insurers and owners. While autonomy will give mobility to older, vision-impaired, and disabled people, it could also be used to restrict freedom of movement. Why not simply incorporate many of these assistive features into conventional cars, as some already have been? An intelligent analysis; read the comments too (click on ‘comments’ at the bottom to open). Problems and Pitfalls in Self-Driving Cars (American Thinker)