
Five golden rules for successful measurement of improvement
  Edward Etchells1,2,3, Patricia Trbovich3,4,5

  1 Women’s College Hospital, Toronto, ON, Canada
  2 Sunnybrook Health Sciences Centre, Toronto, ON, Canada
  3 Centre for Quality Improvement and Patient Safety, Department of Medicine, University of Toronto, Toronto, ON, Canada
  4 Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
  5 North York General Hospital, Toronto, ON, Canada

  Correspondence to Dr Edward Etchells, Department of Medicine, 76 Grenville St, Women's College Hospital, Toronto, ON M5S 1B2, Canada; edward.etchells{at}wchospital.ca


Too often, seemingly simple interventions are implemented without fully considering how the intervention might achieve the desired results, whether it can cause harm, or whether a different intervention should be considered.1–3 The tendency to favour rapid cycle implementation over analysis and measurement represents a common pitfall in quality and safety studies.4 Quality improvement and patient safety (QIPS) studies often omit the critical details underlying the success (or lack thereof) of the intervention, in part due to the perception that simple interventions do not require rigorous measurement.3 4 Consequently, reported measures often focus solely on outcomes rather than on the mechanisms and processes that led to those outcomes.

For instance, suppose you are doing rounds at your healthcare setting. You notice a blue flower next to your patient’s name on the electronic whiteboard, but you are unsure what it means. Your colleague tells you that the blue flower is part of a dementia care quality improvement programme. You wonder how exactly the blue flower is supposed to make dementia care better.

In this issue of BMJ Quality & Safety, Sutton and colleagues5 identify the mechanisms by which visual identifiers for patients with dementia can generate positive or negative consequences. The qualitative study consisted of in-depth case reviews and interviews with 21 dementia leads and healthcare professionals, 19 carers and 2 people with dementia (PwD) in four acute care hospitals. The authors identified four mechanisms through which visual identifiers can potentially enhance care for PwD: (1) acting as a quick reference cue for staff, (2) signalling eligibility for dementia-specific interventions, (3) informing prioritisation of resources on wards and (4) enabling coordination of care at the organisational level. The authors also identify factors that can undermine the effectiveness of the intervention or result in unintended effects such as stigma associated with a dementia diagnosis. The findings highlight the importance of knowing why and how an intervention might achieve the desired effects. One surprising observation is that this study was done after several national improvement projects had been undertaken, without apparent knowledge of how such an intervention might work, what other elements of the intervention might be needed and what unintended downsides might be incurred.6 A prior systematic review of dementia identifiers focused only on whether such identifiers were acceptable.7 A study like that of Sutton and colleagues5 would have been a very helpful first step prior to a costly widespread implementation of a potentially ineffective intervention. Without an understanding of why and how an intervention might be successful, even apparently simple interventions may prove difficult to implement and ineffective at improving patient care.

Compared with clinical trials, which use rigorous measurement with dedicated funding and clear protocols for data collection, quality and safety studies have fewer measurement standards and less funding.3 4 8 A recent Delphi panel8 suggested 61 measurement considerations to guide projects from initial conception through to scaled sustainable implementation. We propose five golden rules for measurement that should help improvement projects get started in the right direction.

Five golden rules for measurement

Rule 1: know why your change might achieve the desired results

Although most QIPS practitioners appreciate the importance of carefully measuring the downstream outcomes of their proposed intervention(s), they often neglect to articulate the expected intermediary process changes and measure whether these processes change as intended.9 10 To inform what intermediate processes need to be measured, you must have a theory to explain how your proposed change will lead to different processes and outcomes. This need not be some grand theory but rather an explicit description of the mechanisms through which the proposed interventions are expected to lead to the intended process changes and subsequent downstream outcomes.10 11

Sutton and colleagues’5 overall aim is to better identify the care needs of PwD. Their theory of change is that visual identifiers, such as a blue flower, will raise staff awareness that a patient may have additional needs, which will subsequently address the target problem of poor care of PwD in hospital. Specifically, they hypothesise that visual identifiers will help in the identification of PwD within the hospital. According to their theory, these visual identifiers will then prompt action by providers such as coordinating care pathways for the patient, prioritising resources and addressing specific patient needs.

The programme theory of change might include:

  • The visual symbol is correctly applied.

  • The visual symbol is detected at the necessary time.

  • The visual symbol is correctly interpreted.

  • The correct interpretation leads to actions such as:

    • Supporting and coordinating improvement at an organisational level.

    • Eligibility assessment for dementia-specific interventions.

    • Prioritisation of resources at the ward level.

    • Providing a quick reference cue for patient-specific needs.

  • All of the above leads to better care for PwD (eg, patient experience, health, length of stay (LOS), safety, effectiveness).

How will the proposed change prompt ‘action’? Your measurement plan will flow logically from your description of the change theory. Your change theory might also highlight additional elements of the change that will be essential for success, such as staff training and individualised dementia care plans. During initial development and testing of the intervention, you may find that the change did not lead to the desired effects, so you may need to come up with a new theory of change. If, for example, the correct interpretation of the visual symbol does not lead to action as initially hypothesised, you may need to move away from the ‘identification leads to action’ theory in favour of a ‘nudging’ theory that encourages action more directly. Nudges, or subtle environmental changes, are effective means of influencing human behaviour12 and may help bridge the gap between identification and action, for example through defaults that reduce the cognitive burden on providers of complying with recommended actions. Understanding and explicitly articulating the theory of change is therefore critical to the successful replication of interventions.9 10
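
To illustrate how a measurement plan can flow from such a theory, the sketch below (a minimal illustration in Python, using hypothetical measure names, example fidelity rates and thresholds rather than data from the Sutton et al study) represents the programme theory as an ordered chain of fidelity checks and flags the earliest link in the chain that is failing.

```python
from dataclasses import dataclass

@dataclass
class FidelityMeasure:
    """One link in the programme theory of change (hypothetical names and values)."""
    name: str
    observed_rate: float   # proportion of audited cases where the step occurred
    minimum_rate: float    # minimum acceptable fidelity for this step

# Ordered to mirror the theory: symbol applied -> detected -> interpreted -> acted on.
theory_of_change = [
    FidelityMeasure("visual symbol correctly applied", 0.92, 0.80),
    FidelityMeasure("symbol detected at the necessary time", 0.75, 0.80),
    FidelityMeasure("symbol correctly interpreted", 0.60, 0.70),
    FidelityMeasure("interpretation prompts a dementia-specific action", 0.40, 0.70),
]

def first_broken_link(chain):
    """Return the earliest step whose fidelity falls below its threshold, if any."""
    for step in chain:
        if step.observed_rate < step.minimum_rate:
            return step
    return None

weak_link = first_broken_link(theory_of_change)
if weak_link:
    print(f"Refine the intervention: '{weak_link.name}' is at "
          f"{weak_link.observed_rate:.0%}, below {weak_link.minimum_rate:.0%}.")
else:
    print("All change elements implemented with acceptable fidelity; "
          "proceed to outcome measurement.")
```

The point of the ordering is that a failure early in the chain (eg, the symbol is not being applied) makes measurement of downstream outcomes uninterpretable.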

Rule 2: identify fidelity/process measures—did the change take hold?

In randomised trials of medications, it is essential to measure adherence to medication therapy. If no one is taking the pill, then the clinical trial is likely to be negative. Fidelity, the quality improvement analogue to adherence, can be defined as the degree to which your intervention is working as intended.13 If your process changes are not implemented with high fidelity, then your improvement project is likely to be unsuccessful. Fidelity measures are simple process measures that flow logically from your improvement theory.

A useful question to guide the choice of these fidelity measures is: ‘What would be the first change you would expect to see if you have successful uptake of your intervention?’. For example, the first logical fidelity measurement for the dementia project would be: is the visual symbol correctly applied? If the answer is no, then the project is unlikely to be successful. Before fretting about downstream target outcomes, you need to make sure each of the elements of change has been implemented with fidelity. Such fidelity measurement allows for understanding precisely why interventions work in some cases but not in others. If your change elements are not occurring as intended, you are not ready to undertake broader implementation or evaluation.9 14 Rather, you should spend more time refining your intervention, or choose a different approach entirely. Minimum acceptable fidelity of implementation can be measured on small convenience samples, which allows for rapid iterative testing and refining of change elements.13

In the Sutton et al 5 study, if the visual identifier is being correctly applied with acceptable fidelity, then the next measure could be: is the visual identifier correctly recognised? If no one is recognising the symbolism of the identifier (ie, blue flower denotes PwD), then the project is unlikely to be successful. Furthermore, the visual identifiers may be effective at identifying PwD but that does not guarantee that providers will act on the visual cue. How helpful are visual identifiers if they do not encourage action? Subsequent fidelity measures could focus on actions that produce worthwhile improvements. It is not sufficient for the visual cue to only identify the PwD. The desired action must occur to improve care. This highlights the importance of (a) clearly describing the nature of your change(s), and (b) assessing which elements of change have taken hold with high fidelity.

Fidelity measurement is also essential for interpretation of the effectiveness of your intervention at achieving your intended outcomes. Most improvement projects have quasiexperimental designs, so it is very important to show that your change was implemented as intended.14 You cannot confidently conclude that your change caused (or did not cause) an improvement if you do not know the fidelity of implementation.13 15

Rule 3: how are you measuring change?

Even though interviews and case studies, as used in the Sutton et al 5 study, are effective first steps in defining why and how the intervention may work, understanding the mechanisms of effect requires more in-depth, real-world investigations to capture the system factors (eg, underlying cognitive, task, environmental, workflow, organisational or other system factors) that may influence adherence to the interventions.16

Selecting sensible measures is one thing, but selecting sensible data collection methods is another. It might be tempting to measure changes by sending surveys to providers and asking them to reflect on their subjective performance, but you will likely not end up with the data you need because of the various assumptions, explicit or implicit, that people have about how work is or should be done. Pragmatic observational methods may be required to allow for objective identification of implementation in practice. For example, an observer could prospectively ask a small convenience sample (n=15) of providers to describe the meaning of the blue flower symbol displayed on the patient whiteboard. If fewer than 70% of providers know what the symbol means (ie, fidelity below 70%), then implementation needs improvement and you need to fix it.13 Simulation can also be useful to help empirically assess what behavioural changes occur in response to iterative design changes in the interventions and to identify problems in advance of implementation.17 18
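
As a minimal sketch of this kind of spot check (assuming hypothetical responses from the n=15 convenience sample, not observed data), the proportion of providers who correctly interpret the symbol can be compared directly with the 70% threshold:

```python
# Responses from a hypothetical convenience sample of providers (n=15) asked,
# "What does the blue flower on the whiteboard mean?" True = correct interpretation.
responses = [True, True, False, True, True, False, True,
             True, False, True, True, True, False, True, True]

FIDELITY_THRESHOLD = 0.70  # minimum acceptable proportion, per the text

fidelity = sum(responses) / len(responses)
print(f"Fidelity: {fidelity:.0%} (n={len(responses)})")

if fidelity < FIDELITY_THRESHOLD:
    print("Below threshold: refine training and communication before wider rollout.")
else:
    print("Acceptable fidelity: move to the next measure in the theory of change.")
```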

Rule 4: be mindful of lag time—how long would it take before the change improves outcomes?

Suppose your change is implemented with a high degree of fidelity. When might you expect to see the fruits of your labour in terms of better processes and outcomes? Some changes have immediate benefits. Returning to the Sutton et al study,5 suppose you implement patient wristbands to identify patients as belonging to a specific group or category—that of PwD. If the wristbands are effective, then you can expect an immediate improvement in identification of PwD (no lag). However, it may take more time to successfully implement changes to the care of that patient once they are identified, and even more time to show that those changes improve patient satisfaction, LOS or other clinical measures. The description of the theory of change and fidelity/process measures for the different intervention elements (rules 1 and 2) will help you outline realistic lag times for each element of change and thereby a reasonable total lag period before your intervention will start showing its intended results.
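
To make the lag-time reasoning concrete, a small sketch (with hypothetical change elements and illustrative durations, not estimates from the Sutton et al study) assigns each element an expected additional lag and sums them into a minimum period before outcome measures would be interpretable:

```python
# Hypothetical additional lag (in weeks) contributed by each element of the change,
# ordered to follow the theory of change; values are illustrative only.
expected_additional_lag_weeks = {
    "wristbands/symbols applied and recognised": 0,   # identification: essentially immediate
    "dementia-specific care plans enacted": 4,
    "ward-level resource prioritisation embedded": 4,
    "improvements visible in patient experience / LOS": 8,
}

total_lag = sum(expected_additional_lag_weeks.values())
for element, weeks in expected_additional_lag_weeks.items():
    print(f"{element}: +{weeks} weeks")
print(f"Do not judge outcome measures before roughly {total_lag} weeks post-implementation.")
```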

Rule 5: anticipate unintended consequences—what can go wrong?

Sutton and colleagues5 highlight potential downsides to the dementia visual identifier, such as misclassification and stigma. You should always anticipate that your changes may have unintended downsides. These can be predicted based on the effects your change may have on resources (eg, the cost associated with visual identifiers), providers (eg, who may overcompensate by doing things for patients rather than promoting patient autonomy) and patients (eg, feelings of discrimination). Unintended consequences can be uncovered during early rapid cycle improvement (Plan-Do-Study-Act) cycles. Testing the interventions on a small scale will help uncover problems with the change. When the change is not successfully implemented, ask why. Ask those who are trying to adopt the change. Make failures informative.19

Conclusion

Sutton and colleagues5 sought to explain the mechanisms of effect of an existing intervention. Explicitly articulating these mechanisms of effect, and associated theories of change, early in the design of improvement projects will enable us to move beyond the perception that simple interventions do not require rigorous measurement. Such theories should be the cornerstone of improvement projects, upon which a sound measurement plan can be built. We believe these five golden rules can help. Application of the golden rules will ensure that fidelity to the multiple elements that make up the intervention has been measured, and that the intervention was deployed as intended. The next time you are on rounds and notice a blue flower next to your patient’s name on the electronic whiteboard, you will readily comprehend its value and its implications for how you care for the patient.

Footnotes

  • Contributors EE and PT contributed to the conception of the paper; they critically read and modified subsequent drafts and approved the final version.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests PT is an editor of BMJ Quality & Safety.

  • Provenance and peer review Commissioned; internally peer reviewed.
