You wait ages for a cohort study to come along, then suddenly two come along within a week of each other… of the same intervention, albeit with apparently different conclusions. The second paper, entitled “Analysis of the Impact of the Birmingham OwnHealth Program on Secondary Care Utilization and Cost: A Retrospective Cohort Study”, is published online, ahead of print, in the journal Telemedicine and e-Health, with lead author Liv Solvår Nymark. (Our post on the previously reviewed paper, whose lead author was Adam Steventon, is here.)
The Steventon paper found that the OwnHealth intervention “did not lead to the expected reductions in hospital admissions or secondary care costs over 12 months, and could have led to increases”, whereas the Nymark paper reported “a 27% reduction in utilization and 22% reduction in cost of secondary care with the OwnHealth program.” From this its authors concluded that “Telehealth intervention can reduce the cost of secondary care of some patients with long-term conditions.”
So what’s the difference? An expert in statistical analysis would doubtless find much more to comment on (please do!). From a lay point of view, the principal difference I spotted was that although both papers had the same objective, of analysing the change in unplanned hospitalisations, the Nymark paper actually measured “number of secondary care spells”, ie including planned admissions. As a result, the Nymark paper reports total days in hospital per patient per year for the intervention group some three times higher than the Steventon paper does. The Nymark paper, being published in a US journal, shows total secondary care costs in US dollars. However, these are at similar levels to those shown in GB pounds in the Steventon paper; at an exchange rate of around $1.56 to £1, dollar figures for the same underlying costs should be roughly 56% higher than the pound figures, so there is clearly a difference in the way these costs are being measured too, which may well explain why the secondary care cost results in the two papers move in opposite directions as a result of the intervention.
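To make that sanity check concrete, here is a minimal sketch of the currency comparison being made; the cost figures are entirely hypothetical placeholders, not values taken from either paper:

```python
# Hypothetical sanity check on the currency point above. The cost figures
# below are illustrative placeholders only, not values from either paper.

GBP_COST_PER_PATIENT = 2000.0  # hypothetical annual secondary care cost (GBP)
USD_COST_PER_PATIENT = 2000.0  # hypothetical annual secondary care cost (USD)
USD_PER_GBP = 1.56             # approximate exchange rate cited above

# If both papers measured the same quantity, converting the GBP figure
# should land close to the USD figure.
expected_usd = GBP_COST_PER_PATIENT * USD_PER_GBP
ratio = USD_COST_PER_PATIENT / expected_usd

print(f"Expected USD cost if measures matched: ${expected_usd:,.0f}")
print(f"Reported-to-expected ratio: {ratio:.2f}")
# A ratio near 1.0 would suggest like-for-like measurement; here the ratio
# is ~0.64, hinting that the two papers measure different cost quantities.
```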
It is perhaps also just worth mentioning a difference in the choice of control sample (ie those against whom the effect of the intervention is being analysed) that may be having an effect. For the Steventon paper, a 1:1 matched group of 2698 people, matched “with respect to demographics, diagnoses of health conditions, previous hospital use, and a predictive risk score”, was chosen. For the Nymark paper, the 4200 people in the comparison group were sampled from patients “who met the entry criteria for the Birmingham OwnHealth program and who either declined to enter the program or could not be contacted by NHS Birmingham East and North for the purpose of enrolling in the program.” It’s tempting to wonder whether those unwilling to participate in the intervention might have less motivation to self-care and thus end up using more secondary care resources; on the other hand, for this paper the intervention and control samples were at least drawn from the same geographic area.
The Steventon paper ended with a comment suggesting a lack of coordination with other local health services, from which I drew the conclusion that “technology is not a simple intervention”. The Nymark paper looks to be suggesting better news.
Finally, it is perhaps just worth mentioning that, although doubtless wholly objective in all their work, three of the four authors of the Nymark paper, including Nymark, have past or existing connections with Pfizer Health Solutions, which, together with NHS Direct as subcontractor, provided the service; the fourth worked for Health Intelligence, which provided general practitioner practice data export, hosting and reporting services for the OwnHealth program. The paper was funded by Pfizer Health Solutions. Three of the four authors of the Steventon paper, including Steventon, work for the Nuffield Trust, and the fourth for Ernst & Young; none discloses any direct involvement in Birmingham OwnHealth. Their work was funded by the UK Department of Health.