When will we learn how to evaluate complex healthcare interventions?

This editor’s piece last week, entitled “Is this the last time the flat earth society will be celebrating?”, was very widely read – thank you, readers – and prompted both further thoughts and an especially thoughtful pointer from Mike Clark.

As readers of that post will be aware, the paper it discussed focused heavily on the high cost per QALY supposedly shown by the Whole System Demonstrator RCT. Mike drew my attention to a paper by Trine Bergmo, published both here and here, on the different ways in which a QALY is calculated for remote patient monitoring. The thrust of the paper is that different methods give significantly different results for interventions like telehealth. To this editor, though, there was another equally important message: the net effect of a successful drug intervention is typically a combination of greater length and greater quality of life, and so can be described relatively easily by a QALY calculation, whereas the benefits of remote patient monitoring can be spread much more widely, so a QALY often understates the overall benefit. As the author put it:

The benefits of telehealth might extend beyond health outcomes such as access, information, waiting time, time saved and avoidance of burdensome travels.
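To make the point concrete, here is a minimal sketch of the standard QALY arithmetic (utility weight × duration, summed over health states) and the resulting cost-per-QALY ratio. All figures are hypothetical and purely illustrative, not taken from the Bergmo paper or the WSD trial:

```python
def qalys(states):
    """Sum utility-weighted years: QALY = sum of (utility x years) over health states."""
    return sum(utility * years for utility, years in states)

# Hypothetical patient: 2 years at utility 0.80, then 3 years at 0.60
baseline = qalys([(0.80, 2), (0.60, 3)])          # 3.4 QALYs
# Same patient with (hypothetical) remote monitoring: slightly higher utilities
with_monitoring = qalys([(0.85, 2), (0.70, 3)])   # 3.8 QALYs

gain = with_monitoring - baseline                 # 0.4 QALYs gained
incremental_cost = 4000.0                         # hypothetical extra cost (GBP)
cost_per_qaly = incremental_cost / gain           # ~10,000 GBP per QALY
```

Note that none of the wider benefits the quote lists – access, time saved, avoided travel, or the carer effects in the Newham example below – appear anywhere in this calculation, which is precisely the editor’s point.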

The most extreme example of this (albeit for telecare) that I have used many times is a case in Newham where a simple falls-detector service enabled a well-paid executive to return to work in the City, rather than stay at home to care for their spouse, who had motor neurone disease. The extra tax they would pay would likely have covered a goodly portion of the whole telecare service we were offering in the Borough at the time. In addition, their extra spending in their corner of Stratford would likely have supported a few local traders nicely, on top of the extra VAT they would pay and the carer support payments avoided. Almost none of that benefit actually came back to the funder, the Council, though, and the QALY impact was likely confined to the effect of the family being more affluent.

The difficulty of evaluating complex interventions was also emphasised by the recent headlines about the Lancet paper on the results of the ESTEEM GP telephone triage trial, which showed a significant increase in the number of patient contacts when GPs (and, separately, nurses) triaged patients by phone (although there was no reported increase in cost, a fact that did not make the headlines). After a run-in period, this RCT compared telephone triage with usual care over a four-week period.

However, organisations that have been using telephone triage for some time, for whom it has become part of their standard way of operating, report substantial savings. GP Access, for example, quotes a saving of at least 20%, with some GPs claiming up to 35%. Harry Longman, CEO of GP Access and a seasoned campaigner for increased telephone triage, points out that the ESTEEM trial did not involve the all-important system change, as it was a temporary test of a single component. Also, whilst apparently statistically significant, it was a small trial compared with the wealth of evidence that GP Access has gathered. In summary he says:

The tragedy now is that selective headlines from a small scale study are seized upon by skeptics to deny that a system can work, when it hasn’t been tested. The paper quotes the need for system level implications, but doesn’t provide them.

There’s no doubt in this editor’s mind that if we are to succeed in delivering improved care at lower cost, we will need to use technology appropriately, as demonstrated, for example, by this PwC video. Successful use of technology almost always requires a significant change in processes and in the approach and skills of the people involved, and so is far more complex than simply dropping telehealth, telecare – or GP telephone triage – into an existing system for a short time. Understandably, funders and users want proof of the benefits before making major policy changes and financial commitments. To prove those benefits we will therefore have to find a better way of evaluating these interventions.


Comments

  1. Thanks Charles, and I profoundly agree with what you are saying. As it happens, I’ve just read the wonderful “Blackett’s War” by Stephen Budiansky. It tells how Patrick Blackett and others who became the Operational Research group during WW2 helped the armed forces be far more effective. They didn’t develop new weapons or technologies (radar was key, but already in the pipeline). They enabled ten times more effective use of existing technology by understanding scientifically how to use it. They achieved very little, apart from winning the battle against the U-boats and turning the tide of the war.
