
The need for independent evaluations of government-led health information technology initiatives
Aziz Sheikh (1,2,3), Rifat Atun (4), David W Bates (1,2,5)

  1. Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, UK
  2. Brigham and Women's Hospital, Boston, Massachusetts, USA
  3. Harvard Medical School, Boston, Massachusetts, USA
  4. Department of Global Health & Population, Harvard School of Public Health, Boston, Massachusetts, USA
  5. Harvard School of Public Health, Boston, Massachusetts, USA

Correspondence to Dr Aziz Sheikh, Centre for Population Health Sciences, The University of Edinburgh, Medical School, Doorway 3, Teviot Place, Edinburgh EH8 9AG, UK; aziz.sheikh@ed.ac.uk


England's National Programme for Information Technology (NPfIT) was, at the time of its launch in 2002, dubbed the most ambitious and expensive civilian health information technology (HIT) project in history.1 The then Prime Minister, Tony Blair, championed the project, with the aim of creating a digitised, interoperable health infrastructure that would transform healthcare delivery, achieve major improvements in health outcomes and, at the same time, substantially reduce government expenditure on healthcare. The study by Franklin et al2 represents the long-awaited independent academic evaluation of the Electronic Prescription Service (EPS), a core component of NPfIT that aimed to reduce the need for patients to manually transfer paper prescriptions from their general practitioners to dispensing pharmacies and, more importantly, to reduce medication errors and thereby improve patient outcomes. Franklin et al evaluated the impact of electronic transmission of prescriptions between prescribers and pharmacies, but found no benefit. In fact, the study found a higher prevalence of labelling errors in prescriptions transmitted electronically, though this was mostly accounted for by the practices of a single pharmacy. Notably, most prescriptions were already being generated electronically even before the introduction of EPS.2

As with earlier evaluations of NPfIT functionality, Franklin et al found major delays in the implementation and adoption of the technology, substantial usability challenges—reflecting both design limitations and inadequate attention to the redesign of clinical workflows—and unrealistic expectations about the speed and scale of the anticipated benefits.3 Moreover, this and the related body of work evaluating other NPfIT functionalities—namely the NHS Care Records Service,4 Summary Care Record5 and HealthSpace6—have clearly demonstrated the importance of independent evaluations in providing unbiased estimates of effectiveness, cost-effectiveness and impact. The lessons from these evaluations demonstrate why it is essential that countries embarking on major healthcare information initiatives build an objective body of evidence to inform policy and practice on how best to design and deliver similar potentially very important, but also inherently challenging and costly, national HIT programmes. Such evaluations are also essential to provide clear accountability for investments of scarce taxpayer resources. England's experience suggests that, unless there is strong academic pressure, independent evaluations are unlikely to be forthcoming, in part because policymakers find that the results often reveal inconvenient truths.

The evaluations of core aspects of NPfIT were born out of a high-profile public debate initiated by academic researchers who voiced disquiet about the programme's progress. Frustrated by the lack of openness and engagement with experts in response to concerns about privacy, security and the waste of public resources, 23 academics took the unusual step of collectively writing two open letters to the Health Select Committee asking for its assistance in initiating an independent assessment of NPfIT.7,8 The letters were initially ignored but, following widespread media attention, the government of the day reluctantly agreed to establish an independent body—the NHS Connecting for Health Evaluation Programme—to commission and oversee an independent programme of research on behalf of the Department of Health's Research & Development (R&D) directorate.9

As with any treatment or innovation in healthcare, major government-led health policies and initiatives, including those for HIT, should be independently evaluated to ascertain whether they achieve the desired outcomes. Indeed, a rich body of research suggests that all too often these interventions have unintended consequences, which may result in patients being harmed.10 The Institute of Medicine of the US National Academies has, for example, highlighted this concern in its recent report on HIT safety.11 Major policies and interventions therefore need to be formally evaluated by independent bodies to establish whether the assumptions underpinning a particular policy held true and what its actual consequences have been. Formal independent evaluation is, we believe, particularly important in the context of government-led initiatives because these, by their nature, typically affect much broader populations of patients and segments of society than the more narrowly focused healthcare interventions in selected populations (eg, a new treatment for those with a single disease) with which investigators tend to concern themselves. Yet, in reality, government policies and interventions—whether HIT-based or otherwise—are seldom evaluated and, when they are, the evaluations tend to be undertaken retrospectively by in-house government departments, which have inherent conflicts of interest. Moreover, such evaluations typically take place well after the politicians or policymakers who conceived or championed the policy have departed—leaving little room for meaningful accountability.

Given the well-demonstrated need for evidence-based policymaking, why has it been so difficult to institute independent evaluations of major health policies? We identify four possible explanations, among others: first, such evaluations may be seen as a luxury and a distraction from the main task at hand of actually delivering and implementing a policy or intervention; second, independent evaluations are inherently challenging to undertake; third, the length of research and the long lead times for publication mean that findings typically emerge too late to be useful to policymakers; and fourth, many politicians and policymakers do not like scrutiny and therefore resist or delay such evaluations.

It is, however, in the public interest to insist on such evaluations in order to assess whether interventions affecting broad sections of society are safe and represent value for money.

Given these challenges and the political realities surrounding decisions to commission such studies, how do we reconcile the tensions and move forward? The first and foremost challenge is to convince governments that major policies and interventions need to be evaluated, and that this need increases the more ‘ambitious’ and ‘transformative’ the policy.12 Second, such evaluations should, wherever possible, be truly independent, as there will otherwise always be lingering concerns about the credibility of the findings.13,14 Third, investigators must be cognisant of the time frames in which governments and politicians operate—hence, consideration needs to be given to undertaking both formative and summative evaluations.14,15 Fourth, it is important for academic researchers to engage in discussion and debate with policymakers, openly considering the interpretation and, where necessary, the implications of their findings. And fifth, given that findings from well-conducted evaluations can generate important transferable lessons for other government policies and international initiatives, evaluations should have an inbuilt dissemination and translation element to promote evidence-based policymaking.9 Indeed, if Mr Blair's government had had a broadly comparable body of work on which to draw—even one from other parts of the world—might NPfIT have experienced a very different fate?16

We should not underestimate the challenges of undertaking independent, rigorous evaluations of government-led HIT programmes. But England's experience makes clear that such evaluations can be undertaken and that they do deliver important results and insights to inform policy and practice.9 Given that most countries are, sooner or later, expected to embark on programmes to digitise the information in their health systems, we hope that those making this transition will learn from the NPfIT experience, commission early, independent evaluations and then share their experiences widely. The results and insights from such evaluations can help to improve policymaking, HIT and healthcare delivery and, most important of all, achieve improvements in health outcomes—globally.

Footnotes

  • Contributors AS conceived and led the drafting of the manuscript, which was then commented on by RA and DWB. All authors approved the final manuscript. AS is the guarantor.

  • Funding AS is supported by The Commonwealth Fund, a private independent foundation based in New York City.

  • Competing interests AS led a series of NHS CFHEP evaluations (001, 005 and 009) and was a co-investigator on 010. DWB chaired the Independent Project Steering Committee for NHS CFHEP 005 and is a member of the US HIT Policy Committee.

  • Provenance and peer review Commissioned; internally peer reviewed.
