David Barrett, Lecturer in Telehealth at the University of Hull, takes a hard look at how local telehealth trials are often evaluated, and looks forward to a time when more rigorous approaches will provide solid evidence of the benefits of telehealth.
Regular readers of TelecareAware cannot fail to notice the frequency with which new evaluations of telehealth services are published. In recent months, we’ve seen documents from, amongst others, Kent, SE Essex and Argyll & Bute. These evaluations are almost always positive, suggesting that further deployments of technology can supply huge savings for health and social care organisations, whilst proving immensely popular with users, carers and practitioners.
This growing evidence base in favour of telehealth services strengthens the argument that technology can deliver substantial benefits for individuals and organisations. However… if the evidence is as unequivocal as many evaluations suggest, then why do we continue to see a relatively slow rate of adoption? If the benefits are proven, then why do localities feel the need to repeat the types of evaluation carried out elsewhere? If – as local evaluations suggest – telehealth can substantially reduce the number of hospital admissions for patients with long-term conditions (LTCs), then why is it not advocated in national clinical guidelines?
My view – and it is only my view – is that this reflects the fact that, though these evaluation reports provide evidence good enough to convince local commissioners of services, that evidence is generally not good enough to convince others outside the locality, or the wider clinical community.
This is largely because local evaluations demonstrate a number of methodological weaknesses. I realise that as soon as a phrase like ‘methodological weaknesses’ rears its head, this whole article is open to accusations of academic snobbery. However, I think that if an evaluation report claims great results from an intervention such as telehealth, then readers need to be convinced that those findings come from a study that was rigorous, robust and reliable.
Sadly, this is not always the case. Many evaluations demonstrate problems that leave the scale of positive benefits open to question. A particular issue is that most evaluations use a ‘before-and-after’ approach to identifying benefits. This is a simple approach to use: for example, let’s assume that Mr B has a telehealth system installed because he has Chronic Obstructive Pulmonary Disease (COPD). We can record how many emergency hospital admissions he had in the six months before the equipment was installed, then look at how many times he was admitted in the six months whilst the telehealth service was deployed, and describe any changes. By calculating the cost of an admission, this method then allows cost savings to be identified, which can be extrapolated to a wider population. Here’s a made-up example of how that might look in a report with a few more patients:
“In the sample of 25 COPD patients, there were 47 emergency hospital admissions in the six-month period before the telehealth equipment was deployed. During the six-month period of deployment, there were only 31 emergency hospital admissions amongst the cohort of 25. This is a 34% reduction in hospital admissions. Assuming a mean cost per emergency admission of £3,000, this demonstrates cost savings of £48,000 in the pilot group over six months, or £3,840 per patient, per annum. Given that there are 1,568 COPD patients in the local area, this demonstrates that telehealth has the potential to save the local NHS over £6M per year.”
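For anyone who wants to check the sums, the arithmetic behind the quoted extract can be reproduced in a few lines. This is purely a restatement of the made-up figures above, not real data:

```python
# Reproduce the made-up figures from the quoted report extract.
admissions_before = 47      # emergency admissions, six months pre-telehealth
admissions_after = 31       # emergency admissions, six months with telehealth
patients = 25               # size of the pilot cohort
cost_per_admission = 3000   # assumed mean cost of an emergency admission (£)
local_copd_patients = 1568  # COPD patients in the local area

reduction = admissions_before - admissions_after               # 16 admissions
reduction_pct = 100 * reduction / admissions_before            # ~34%
savings_six_months = reduction * cost_per_admission            # £48,000
savings_per_patient_pa = savings_six_months / patients * 2     # £3,840
extrapolated_pa = savings_per_patient_pa * local_copd_patients # ~£6.02M

print(f"{reduction_pct:.0f}% reduction; "
      f"£{extrapolated_pa:,.0f} potential annual saving")
```

The point is that the arithmetic is internally consistent – the weaknesses lie in the assumptions feeding it, not the sums themselves.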
Sounds great! All the maths adds up, and the analysis seems to follow a logical progression. But… this type of analysis has a number of flaws, often seen in real evaluation reports. Firstly, there is the issue of seasonality. What if the ‘before’ period was October-March and the ‘after’ period was April-September? Emergency admissions are likely to be lower during the spring and summer anyway, so how much of the effect is due to telehealth? Seasonality can be ‘ironed out’ by comparing like-for-like periods of the year, but not every evaluation does this.
Even if we correct for seasonal changes, before-and-after studies still present problems. The condition of people with LTCs will change – for better or worse – over time, regardless of whether they have telehealth installed or not. Local evaluations have no way of ‘filtering out’ other variables that may have affected healthcare utilisation.
Another problem often encountered – and demonstrated in the example above – is the tendency to extrapolate from small samples. Even if we accept that there was a 34% reduction in hospital admissions, and that this was just because of the telehealth deployment (and not seasonal effects, or changes in other areas of care, or just chance), then we can’t assume that the same thing will necessarily happen with a wider population. Finally, the example above doesn’t take into account any of the costs of the telehealth service itself, thereby providing an over-estimate of economic benefits.
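To illustrate that last point, here is a sketch of how the headline figure shrinks once service costs are subtracted. The £1,500 annual per-patient service cost is a purely hypothetical figure chosen for illustration, not a number from any evaluation:

```python
# Adjust the made-up example's gross saving for the cost of the service itself.
gross_saving_per_patient_pa = 3840   # £, from the made-up example above
service_cost_per_patient_pa = 1500   # £, HYPOTHETICAL illustrative figure
local_copd_patients = 1568           # COPD patients in the local area

net_saving_per_patient_pa = gross_saving_per_patient_pa - service_cost_per_patient_pa
net_extrapolated_pa = net_saving_per_patient_pa * local_copd_patients

print(f"Net saving: £{net_saving_per_patient_pa:,} per patient, "
      f"£{net_extrapolated_pa:,} per year across the local area")
```

Even with this invented service cost, more than a third of the claimed £6M saving disappears – which is why economic claims that ignore service costs should be read with caution.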
The example above exaggerated the types of claims made in local reports, but it was indicative of some of the problems encountered. So, am I saying that local evaluations are of little value? Not at all: the better reports – which solve methodological problems when they can and acknowledge limitations when they can’t – make an important contribution to the telehealth evidence base with regards to the effect on local services. In other words, local evaluations are often ‘good enough’ when we consider telehealth to simply be a tool for service redesign.
But telehealth needs to be more than just a way of working differently. To see large-scale adoption of telehealth services in patients with LTCs there needs to be evidence of clinical benefits, and that is where local evaluations aren’t good enough. To demonstrate clinical benefits – such as reductions in bed days or increased life expectancy – evidence is required where the benefits are attributable to telehealth alone (and not other variables), and where we can say with confidence that improvements were not just down to chance.
Bluntly, this is where academic snobbery has a place. We need randomised controlled trials, qualitative studies of patient experience, complex economic evaluations, systematic reviews and meta-analyses to demonstrate the clinical, financial and quality of life benefits of telehealth in patients with LTCs. It is this type of high-quality evidence that will convince GPs, hospital consultants, community matrons and other healthcare professionals that this technology can enhance their care delivery and improve the lives of patients. It is this type of evidence that will persuade the writers of clinical guidelines and allow telehealth to become embedded in care pathways.
At the moment, the evidence is not quite there. Though a recent Cochrane review of telemonitoring for chronic heart failure (CHF) patients was extremely positive, the NICE guidelines for CHF say that more research is required. The academic evidence bases for telehealth in COPD, diabetes and hypertension are all under-developed. The bad news is that high-quality research evidence is expensive and slow to develop. The evidence base for telehealth is not going to appear overnight, though the results of the Whole System Demonstrator in 2011 will hopefully go a long way towards convincing others of the benefits.
What we therefore need is a mixed economy of telehealth evidence. Local evaluations – if carried out to a high quality – can give us an indication of the potential benefits of service redesign in terms of resource savings and user experience. However, to fully convince clinicians, we also need a robust evidence base that stems from large-scale research studies, providing confirmation of clinical and economic benefits.
I believe strongly that telehealth offers huge possibilities to individuals and organisations. It allows patients to play a greater role in their own care, it gives carers the reassurance that they are not solely responsible for their loved one, and it helps practitioners to better organise their workloads. In addition, I believe that technology – if properly used – can help to improve clinical outcomes whilst providing cost savings.
All we have to do now is prove it.
David Barrett
Lecturer in Telehealth
University of Hull