Reporting on implementation trials with null findings: the need for concurrent process evaluation reporting
Anne Sales1,2

1 Sinclair School of Nursing and Department of Family and Community Medicine, University of Missouri, Columbia, Missouri, USA
2 Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, Michigan, USA

Correspondence to Dr Anne Sales, Sinclair School of Nursing, University of Missouri, Columbia, MO 65211, USA; asales@missouri.edu

The Prioritising Responses of Nurses to deteriorating patient Observations (PRONTO) trial, reported in this issue of BMJ Quality and Safety, produced results that, despite a few positive findings among the large number of planned comparisons performed, overall failed to reject the study null hypotheses.1 This is a disappointing result for the investigators, who put considerable time and energy into this study; for funders, who hoped to learn how to positively influence the quality and safety of nursing care for seriously ill adults; and for readers, who also hoped to learn how to influence and support high-quality care by ensuring that nurses activate support systems in response to patient deterioration in hospitals.

The PRONTO trial was designed to assess the effectiveness of a combined internal and external facilitation implementation intervention compared with usual guideline dissemination in hospital inpatient acute care wards. The trial was conducted in four hospitals in Victoria, Australia, with a total of 36 inpatient wards randomised to either the facilitation intervention or usual dissemination. The goal of the guideline being implemented was to ensure that nurses react quickly and appropriately to changes in vital signs indicating that a patient’s condition is deteriorating. The numerous measures used in the study focused on compliance with the complex clinical practice guideline for caring for patients with deteriorating condition, compliance that is required for hospital accreditation. The guideline implemented in these four hospitals mandated three escalating levels of care for patients with deteriorating condition, the highest being activation of the Cardiac Arrest Team, to be selected based on the nurse’s clinical assessment following changes in vital signs.

The expectation was that improving implementation of the guideline would increase the frequency with which nurses trigger escalation in the level of care provided to a patient on observing abnormal vital signs indicating deterioration. The intervention lasted 6 months, with chart audits at baseline, at 6 months (the end of the intervention period) and again 6 months later (12 months after baseline) to assess key process and outcome measures. Some of the many findings supported the expectations underlying the trial. There was a significant improvement from baseline to the end of the intervention in escalating care for patients in the intervention group, but no difference between intervention and control groups at the end of the intervention period, nor was the improvement in the intervention group sustained at the 12-month audit. There was also an improvement between baseline and 12 months in the proportion of audited charts in the intervention group with at least one vital sign measurement in a shift. The number of measures, with complex branching logic based on the level of care to which the nurse should have escalated, makes for a complex story, but one striking finding is that the control group, which improved on many measures as much as or more than the intervention group, showed higher proportions of appropriate care at baseline and throughout the study. Patients admitted through emergency made up a higher proportion of the intervention group than of the control group at each audit point. In terms of patient outcomes, the control group apparently improved more than the intervention group at the 12-month measurement point on inpatient mortality, while the intervention group improved more than the control group at that time point on inpatient length of stay. The pattern of change in the intervention group was not monotonic for many of the measures; overall, the control group performed better than the intervention group at baseline on all measures and, in general, sustained its baseline performance.

There are a number of strengths to this trial and its report in this issue. First, the trial was led by a very senior and experienced group of nurse scientists, who have worked in this area for many years and conducted previous trials.2 3 Second, the investigative team published a protocol paper detailing the plans for trial conduct and analysis,4 as well as registering the trial prospectively. Third, the trial report adheres to important elements of the Consolidated Standards of Reporting Trials (CONSORT) reporting guideline for cluster randomised controlled trials. Fourth, despite likely disappointment over the lack of support for expected findings, the investigators report the main trial results, which is often not the case when findings are inconsistent with hypotheses. Finally, the investigators conducted a process evaluation concurrently with the trial, even though the process evaluation results are not reported in this paper presenting the main trial findings.

This last point carries an important but mixed message. By publishing the largely null findings as main results without concurrent publication of the process evaluation, the authors are not fully enacting best practice: we are left to guess at the reasons for the main findings. Taking this point a little further, the process evaluation is given very little attention in the protocol paper, so it is not clear what data were collected and how. The brief description of planned data analysis suggests that most of the planned process evaluation rests on qualitative data (a mix of interviews, focus groups and field notes), even though several sources of more quantitative data are clearly available, including tracking the number of individuals completing facilitator training at the hospital and ward levels to assess fidelity, or monitoring how often bedside care nurses were given feedback about their processes of care related to detecting and acting on abnormal vital signs. It is quite likely that the data collected to assess the costs of the intervention could also shed light on how the intervention unfolded and on what may or may not have happened.

The most important use of process evaluation, whether concurrent with an intervention or retrospective, is to contextualise and understand the effect of the intervention.5 Process evaluations can focus on uptake of various components of the intervention, whether an implementation intervention, as in the case of PRONTO, or a clinical or system intervention. They can focus on adherence to expected methods of delivering the intervention (fidelity), on ways in which the intervention was changed (adaptation), or on a number of other issues. When an intervention works, process evaluations are sometimes ignored, or not even published. When an intervention does not work, process evaluations should provide essential clues to understanding what did not work, how, and most importantly, why. This paper makes another important point about process evaluation, however: it may be just as important to understand process in control groups as in intervention groups.

The authors describe several possible reasons for the mixed but overall null findings, such as the intervention being too short to routinise practice changes. The fact that the control wards started at higher baseline rates on many of the metrics, and also improved over the period, complicates interpretation of the findings. Close examination of the patterns in the reported findings suggests that measurement error, due to the cross-sectional nature of the audits needed for the outcome data in this study, may have been very problematic. In addition, patients in the intervention units may have been less stable and more likely to experience deterioration than those in the control units, given the proportion admitted through emergency. If future studies are planned, investigators would likely need to analyse all data for all patients rather than relying on audit data. This is possible in health systems with fully electronic health records that include all vital sign measurements. The need to audit the record system to assess patient status, vital signs and actions taken in response to vital sign changes limits the number of observation time points, which may have affected the data recorded and certainly limits power in a time series analysis. The characteristics of patients included in each group changed considerably from one audit point to the next, suggesting that the audits captured only snapshots of each group without necessarily reflecting the entire longitudinal trend. With all time-varying data, the more measurement points available, the better we can characterise the true trend. We cannot, in this study, understand the full extent of possible measurement problems arising from the need to do costly chart audits. Given the increasing use of electronic data, which would permit continuous measurement rather than infrequent, limited measurement, it may be that studies of this kind should be conducted where electronic data capture is possible.

Also important, however, is that there was limited opportunity to use costly and valuable research assistants to observe practice, because research assistant time in the study had to be used to conduct the audits. As the authors note in their discussion, understanding nurses’ decision making in complex environments and time-pressured situations is critical to understanding the results of this study. We might have learnt a great deal from observation rather than relying on audit data. Hopefully, the process evaluation data will help us understand these important contextual factors. For example, the marked decrease in both groups, seen in the 12-month audits, in repeating vital sign measurement 30 min after an abnormal reading suggests that external factors may have affected practice negatively across all the units. Essentially, relying on a limited number of audits decreases our ability to fully capture and understand secular trends that affect everyone and may diminish the effect of any intervention.

A final point for further consideration: facilitation, as designed in this study, may not adequately address differences among facilitators. Although some form of external facilitation was included in the intervention, reporting on it is minimal. More information is available about the two levels of internal facilitation, one at the hospital level and the other at the ward or unit level. However, there is no discussion of the people selected to be facilitators. This may be problematic, as there is some evidence that not all facilitators are equally able to carry out the work required, and variation in the ability to facilitate may be an important factor in whether the intervention succeeds.6 Ideally, the process evaluation will shed some light on this and other important aspects of facilitation. The literature on effective facilitation as an implementation strategy is complex, partly because of the breadth of the term ‘facilitation’, which can cover a large number of activities.7 As with many implementation strategies, facilitation has been shown to work at some times and in some contexts, and not in others.7 8 Its use in complex environments such as acute care, to implement complex interventions such as the clinical practice guideline in this case, is infrequently reported.

The investigators should be commended for their work. This report adds to our knowledge, in particular about facilitation as an intervention that has sometimes been shown to work but was not effective in addressing this problem in this context. Hopefully the process evaluation, when it is available, will accomplish its most important goal: shedding critical light on the reasons for these findings and on what we need to try next. In all likelihood, without process measurement in the control wards, we will learn only part of what we need to know.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Twitter @AnneSales4

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
