The role of theory in research to develop and evaluate the implementation of patient safety practices
  1. Robbie Foy1,
  2. John Ovretveit2,
  3. Paul G Shekelle3,4,
  4. Peter J Pronovost5,
  5. Stephanie L Taylor3,4,
  6. Sydney Dy5,
  7. Susanne Hempel3,
  8. Kathryn M McDonald6,
  9. Lisa V Rubenstein3,4,
  10. Robert M Wachter7
  1. Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
  2. Medical Management Centre, The Karolinska Institute, Stockholm, Sweden
  3. RAND Corporation, Santa Monica, California, USA
  4. VA Greater Los Angeles, Los Angeles, California, USA
  5. The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  6. Stanford University, Stanford, California, USA
  7. University of California, San Francisco, San Francisco, California, USA

Correspondence to Professor Robbie Foy, Leeds Institute of Health Sciences, University of Leeds, 101 Clarendon Road, Leeds LS2 9LJ, UK; r.foy@leeds.ac.uk

Abstract

Theories provide a way of understanding and predicting the effects of patient safety practices (PSPs), interventions intended to prevent or mitigate harm caused by healthcare or risks of such harm. Yet most published evaluations make little or no explicit reference to theory, thereby hindering efforts to generalise findings from one context to another. Theories from a wide range of disciplines are potentially relevant to research on PSPs. Theory can be used in research to explain clinical and organisational behaviour, to guide the development and selection of PSPs, and in evaluating their implementation and mechanisms of action. One key recommendation from an expert consensus process is that researchers should describe the theoretical basis for chosen intervention components or provide an explicit logic model for ‘why this PSP should work.’ Future theory-driven evaluations would enhance generalisability and help build a cumulative understanding of the nature of change.

  • Patient safety
  • effectiveness
  • theory
  • evaluation


‘There is nothing so practical as a good theory’—Kurt Lewin (1952)

Handwashing by hospital staff is widely advocated as a means of reducing healthcare-acquired infections. Handwashing works because bacteria cause disease, and handwashing kills bacteria. A hospital aiming to increase the extent to which staff wash their hands between patients might provide alcohol-based antibacterial dispensers on every bedside wall and run an educational campaign about the harms and costs of healthcare-acquired infections. Such a strategy is based on the rationales that better availability of dispensers makes it easier for staff to wash their hands and that providing evidence on harms and costs will increase motivation to wash hands. Both handwashing and the efforts to increase handwashing are based on assumptions or, more explicitly, a theory.

A theory is an organised, heuristic, coherent and systematic articulation of a set of statements related to significant questions that are communicated in a meaningful whole for the purpose of providing a generalisable form of understanding.1 This article highlights the value and uses of theory in research aiming to develop and evaluate patient safety practices (PSPs). It draws upon work undertaken for the Agency for Healthcare Research and Quality (AHRQ) to improve the design, evaluation and reporting of research on PSPs.2 The project brought together a panel of international experts in specific PSPs; methodologists from fields including epidemiology and statistics, programme evaluation, organisational behaviour and human-factors engineering; senior health-system executives responsible for implementing PSPs; and leaders of national and international patient safety organisations. A team of patient safety experts and social scientists led the panel through structured consensual processes informed by targeted literature reviews.

Defining patient safety practices

We define PSPs as interventions intended to prevent or mitigate harm caused by healthcare or risks of such harm. They may include systems, organisational and behavioural interventions, singly or in combination. Our definition distinguishes the strategy or system for changing clinical care (eg, training) from a clinical intervention targeting a patient (eg, antibiotics before surgery) and defines a PSP as the strategy used. Box 1 shows the five PSPs studied within the project.

Box 1

Five patient safety practices studied

  • Checklists for catheter-related bloodstream-infection prevention

  • The Universal Protocol, for preventing wrong procedure, wrong site, wrong person surgery

  • Computerised order-entry/decision-support system

  • Medication reconciliation

  • Interventions to prevent in-facility falls

In addition to conclusions from the AHRQ project, this paper draws upon evidence and theories used in implementation research.3 Both safety and implementation research are concerned with understanding and changing behaviour at individual provider, organisational and wider system levels in order to promote evidence-based practices which reduce mortality and morbidity or improve the patient experience.

Need for theory

Changing provider and organisational behaviour is challenging. There is a wide range of interventions to change clinician behaviour. Systematic reviews of interventions to improve practice consistently indicate that most interventions, across different categories, are effective some of the time—but none all of the time—and that intervention effects range from none to large.4 5 This is analogous to clinical research in which drugs or other therapies work in some but not all patients. Many factors can influence the effects of PSPs. These include contextual features, such as the characteristics of targeted providers, clinical settings or clinical behaviours, as well as the characteristics of the intervention itself. In principle, it is possible to explore and explain variations in effectiveness across studies by examining such characteristics. In practice, it is difficult for two reasons.

First, the characteristics of context and interventions are seldom described in sufficient detail or in consistent ways to enable meaningful comparisons. However, additional description by itself does not necessarily inform decision-making about the selection of interventions. One could consider carrying out many studies of PSPs in many settings so that decision-makers could learn from studies in settings similar to their own. For example, audit and feedback are variably effective in changing provider behaviour and clinical outcomes.6 Their effectiveness may vary according to features such as content of feedback (eg, comparative or not, anonymous or not, perceived credibility of data?), intensity (eg, monthly, annually?), method of delivery (by peer or non-peer?), duration (6, 12 or 24 months?) and care setting (intensive care or nursing home?). Varying only five elements produces 288 combinations—without accounting for the need for replication or addition of co-interventions.7 This is not an efficient way to build knowledge of the effects of PSPs.
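To make the combinatorial problem concrete, the sketch below enumerates hypothetical audit-and-feedback variants. The specific option counts (eg, a three-level intensity and a three-level duration) are illustrative assumptions chosen to reproduce the figure of 288 cited above, not a definitive taxonomy.

```python
from itertools import product

# Hypothetical audit-and-feedback design choices; the option counts are
# illustrative assumptions, not an established classification.
features = {
    "comparative": ["yes", "no"],
    "anonymous": ["yes", "no"],
    "data_credibility": ["high", "low"],
    "delivery": ["peer", "non-peer"],
    "setting": ["intensive care", "nursing home"],
    "intensity": ["monthly", "quarterly", "annually"],
    "duration_months": [6, 12, 24],
}

# Every distinct combination is, in principle, a separate intervention
# variant that would need its own evaluation.
combinations = list(product(*features.values()))
print(len(combinations))  # 2**5 * 3**2 = 288 variants, before replication
```

Even under these modest assumptions, exhaustive head-to-head evaluation is clearly infeasible, which is the argument for generalising through theory rather than through replication.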

Second, many studies of interventions to promote safety currently categorise features of interventions, targeted practices and contexts on a superficial basis, such as clinical decision support systems (CDSSs), prescribing and urban hospitals respectively. Such classification systems are really descriptive typologies rather than theoretically meaningful groupings. They may be as unhelpful or misleading as classifying drugs into groups according to whether they are taken orally or intravenously, or by the colour and size of the pill.8 9 It is not surprising that systematic reviews based on such categories or typologies raise more questions than they answer and struggle to extract generalisable lessons about how interventions achieve their effects.10 For example, a CDSS can work in a number of ways, such as by increasing knowledge of safe practice, reinforcing motivation or prompting recall; and its effects may vary according to what types of clinical behaviour are targeted, whether it is used with co-interventions and so forth. The mechanisms by which more complex interventions work, such as those to reduce falls or rapid response teams, may be both more variable and more sensitive to contextual features. Theory can also help improve understanding of which elements of complex interventions are necessary and synergistic, hence informing spread and adaptation across different settings.

Theoretical models provide a basis or vocabulary with which to describe the key features of targeted behaviours, contexts and interventions. Such a vocabulary can enable the identification of the features that systematically influence the effectiveness of interventions and, hence, help build a cumulative understanding of what works and how.7 ‘Generalisation through theory’ potentially offers a more efficient and appropriate method of generalisation than study replication in many possible settings. Without the ability to generalise in some meaningful way, decision-makers lack information to make choices about what PSPs are likely to be effective in their own settings. For example, an evaluation of an electronic medical record at a hospital in Sweden found that implementation success was associated with factors in Rogers' work on diffusion of innovations, plus additional factors postulated by previous research.11 This strengthens our confidence in the usefulness of that theory and those factors to predict successful implementation in other settings.

Theory has not commonly been used in the fields of implementation and safety research.12 A review of 235 implementation studies found that only 53 used theory in any way and that only 14 were explicitly theory-based.13 Similarly, most reports of PSP evaluations do not report any theoretical model underpinning the intervention. Even for the five representative PSPs chosen for the AHRQ project, which are among the most commonly studied PSPs, our review of published evaluations found only two studies that even partially reported a theory for why the PSP should work. Pronovost et al examined the impact of a safety programme on teamwork climate but were unable to examine the relationship between team climate and the actual delivery of care in intensive care units.14 Lesselroth et al explicated the mechanisms by which a medication reconciliation intervention was intended to work, but there were limited data to judge whether this intervention had worked as predicted.15

Diversity and overlaps in theory

There is a diverse range of theories potentially relevant to PSPs, drawn from a wide variety of disciplines, including anthropology, psychology, sociology, behavioural economics and management sciences. Useful overviews of theory relevant to understanding and changing clinical behaviour have been published elsewhere.16 17

Theories are frequently based upon similar notions but are expressed differently according to their intra- or interdisciplinary origins. In the former case, Michie et al identified a total of 128 constructs that explained behaviour from 33 psychological theories and summarised them into 12 domains that could be used in implementation research.18 To illustrate the latter case, the following terms from different disciplines describe similar ways of modelling interventions and theorising about them: the ‘logic model,’19 ‘treatment theory,’20 ‘programme theory’21 and ‘theories of change.’22–25 These are all particularly useful approaches for understanding the effects of complex safety programmes.

A logic model describes how an intervention is understood or intended to produce particular results.19 The logic model proposes a chain of events over time in cause-effect patterns in which the dependent variable (event) at an earlier stage becomes the independent variable (causal event) for the next stage.26 A logic model is often based on explicit or implicit theories of behaviour change. ‘Treatment theory’ describes the process through which an intervention is expected to have its effects on a specified target population, in the case of PSPs, providers or organisations.20 This theory is not a protocol that requires very specific prescribed actions. Instead, it is a set of principles that together are hypothesised to bring about a change in the particular situation. These principles might be enacted in several different ways, but they would all achieve the same ‘functions’27 and intermediate objectives in a chain of events which leads ultimately to improved patient outcomes.
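The defining structure of a logic model described above, in which each stage's dependent variable becomes the next stage's independent variable, can be sketched in code. The hand-hygiene stages below are hypothetical and purely illustrative.

```python
# A hypothetical logic model for a hand-hygiene PSP, expressed as a chain of
# (cause, effect) stages in which each stage's effect is the next stage's cause.
logic_model = [
    ("install bedside alcohol dispensers", "handwashing is easier to perform"),
    ("handwashing is easier to perform", "handwashing rates increase"),
    ("handwashing rates increase", "bacterial transmission falls"),
    ("bacterial transmission falls", "healthcare-acquired infections decline"),
]

def is_coherent_chain(model):
    """Check that each stage's effect feeds the next stage as its cause."""
    return all(effect == next_cause
               for (_, effect), (next_cause, _) in zip(model, model[1:]))

print(is_coherent_chain(logic_model))  # True: the chain is unbroken
```

Making the chain explicit in this way exposes which links an evaluation should measure: each intermediate effect is a candidate process measure, and a break in the chain locates where an intervention failed.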

In the field of programme evaluation, programme theory is defined as the ‘conceptual basis’ of the programme: ‘Comprehensive evaluations address the theory by carefully defining the components of the programme and their relationships, and then examining the implementation of these components and how they mediate outcomes.’28 Experimental designs use ‘theory’ in the sense that the evaluation is designed as a prospective test of a hypothesis. In contrast, in theory-informed programme evaluation, the programme theory is either a prospective model of how the components lead to the intended results or a retrospective explanation of how or why the programme progressed as it did.21 29 30

A ‘theory of change’ is usually used to describe how those responsible for implementation understand an intervention to work.22–24 It may be explicit, or may exist as a theory in a sense of being unspoken assumptions or beliefs. Dixon-Woods et al25 describe a theory of change as identifying ‘plans for change and how and why those plans are likely to work, and indicates the assumptions and principles that allow outcomes to be attributed to particular activities.’ This is different from an explanation derived from empirical research about possible influences on outcomes.

These types of theories focus on the intervention and conceptualise it as a chain of events, often in a linear sequence, which leads through successive intermediate changes (including changes in provider and organisational behaviour) to final results (clinical or cost outcomes). Other variants, often relevant to combined, or multifaceted, safety interventions, view the implementation as a number of interacting components with a synergistic and system effect.

From this general account, it is apparent how these various approaches to theory overlap and emphasise similar ideas. Given the current state of knowledge in this field, there is considerable scope to explore and apply a wide range of theories to advance our current understanding of PSPs. There is also considerable overlap between the theories used in safety and implementation research respectively. For example, theories of human error propose that errors have different causal characteristics, which can include violations as well as slips and lapses.31 These are potentially relevant both to safety and to the more general implementation of evidence-based practice: a lapse can contribute to the erroneous co-prescribing of two drugs which may interact and harm a patient, or to the failure to consider prescribing an effective drug. Both fields are concerned with understanding and changing clinical behaviour, and both kinds of behaviour should preferably be evidence-based, so that following them reliably reduces the probability of morbidity or mortality.

Just as the most appropriate research design is informed by the research question, the selection of an appropriate theory is informed by objectives. For example, an organisational rather than individual-level theory may be most relevant when studying change at an organisational level. Furthermore, studies of PSPs that target multiple levels (eg, clinician and organisational) may draw upon multiple theories.

How can theory guide safety research?

Many PSPs are complex; they often have a number of interacting components, address more than one behaviour, target more than one organisational level or group, require flexibility in design and implementation, and can result in variable outcomes.32 The UK Medical Research Council highlights the importance of integrating theory within the linked stages of developing and evaluating complex interventions. Within this framework, theory can be applied at different stages to explain clinical and organisational behaviour, inform PSP selection and development, and understand PSP effects (box 2), thereby developing a generalisable body of knowledge.

Box 2

Potential roles of theory in patient safety research

  • Explaining clinical and organisational behaviour

  • The selection and development of patient safety practices

  • Evaluating implementation and mechanisms of change

Explaining clinical and organisational behaviour

Just as with clinical practice, it is important to diagnose the causes of adherence or non-adherence to recommended practice before intervening. Human error theories, recognising that slips and lapses may lead to the wrong execution of an intended sequence of action,31 have informed better equipment design—for example, alarms within anaesthetic machines.33

Selection and development of PSPs

Knowledge about which factors influence behaviour can inform the selection of ‘active ingredients’ to incorporate within interventions. McAteer et al34 developed an intervention to increase levels of providers' hand-hygiene behaviour using psychological theory for evaluation in a cluster randomised trial. This involved a review of effective behaviour change techniques to inform the theoretical approach taken, development of intervention components with clinicians and focus groups with the targeted provider groups.

Having a theoretical basis alone is an insufficient justification for choosing a PSP in routine healthcare practice; there are many examples from medicine where widely promulgated clinical interventions based upon theory and partial evidence turned out to be ineffective or harmful.35 Rigorous experimental or quasi-experimental methods are still required to draw conclusions about effectiveness.

Evaluating implementation and mechanisms of change

Theory can be used to help evaluate the process of implementation, whether the PSP worked as hypothesised or did so by an alternative means, and identify unintended consequences. For example, Byng et al36 conducted a qualitative interview study alongside a randomised controlled trial (RCT) of a multifaceted intervention to improve the care of people with long-term mental illness. Using a realistic evaluation approach, they constructed a theoretical model to help explain which intervention features had the greatest impact.37 Such evaluations can also be quantitative; Ramsay et al used psychological theory in an analysis to explore which provider beliefs and attitudes had changed following a randomised trial of an intervention to reduce unnecessary laboratory testing.38

RCTs should ideally be accompanied by parallel process evaluations that assess the intended and unintended changes in processes that may affect outcomes. A recent review highlighted much potential for improvement in using this approach.39

Illustrative example

A hypothetical example, again using handwashing, illustrates the value of theory in PSP development and evaluation. Whether individuals carry out a specified action is at least partly determined by their motivations or intentions to do so. According to the Theory of Planned Behaviour (TPB), the strength of a behavioural intention is predicted by: attitudes towards the behaviour; subjective norms based on the perceived views of other individuals or groups (ie, perceived social pressure); and perceived behavioural control, encompassing beliefs about self-efficacy (confidence that one can perform an action and that performing the action will have the desired consequence) and wider environmental factors that facilitate or inhibit performance.35 The TPB has been used to understand a wide range of other clinical behaviours and to inform or explain behaviour change.38 40–42

Figure 1 illustrates the application of the model to handwashing. Qualitative interviews with a given set of providers can initially identify the range of specific beliefs and attitudes related to handwashing behaviour.43 These might include their attitudes towards handwashing (eg, how much do they think it will protect patients from infection?), subjective norms (eg, do they feel under pressure from fellow providers or patients to wash their hands?), self-efficacy (eg, do they know how to wash their hands properly?) and environmental factors (eg, the availability of alcohol-based antibacterial dispensers). A quantitative questionnaire survey of providers can then assess the extent and magnitude of these beliefs and attitudes. Regression analysis then examines the relationships between the TPB predictor items and behavioural intention (motivation to wash hands) and, ideally, actual clinical behaviour (handwashing). In reality, collecting reliable data on actual individual behaviour is often difficult.

Figure 1

Theory of Planned Behaviour, illustrating its relevance to understanding handwashing.35 The broken arrow represents the potential direct effect of environmental factors on behaviour.

Some of these factors may turn out not to be important in this case—for example, beliefs about the value of handwashing or knowledge of how to wash hands may not explain variations in intentions or practice. Therefore, interventions aiming to change provider beliefs or knowledge about handwashing would be unlikely to be helpful in overcoming barriers to behaviour change. However, other factors might explain important variations in intention or practice. For example, providers may be especially sensitive to criticism from colleagues if they are seen not to be washing their hands, and they may identify the lack of bedside alcohol dispensers as an impediment. Therefore, an intervention to improve uptake of handwashing can incorporate an element of peer pressure combined with changing the environment to reduce perceived barriers to handwashing. If this intervention is tested in a trial and found to change behaviour, a process evaluation can illuminate whether the intervention worked as hypothesised (by changing subjective norms and perceived behavioural control) or in a different, unanticipated way (eg, by changing attitudes but not subjective norms). This in turn provides a platform for further work to improve the understanding and effectiveness of interventions to promote handwashing.

Recommendations for future research

The AHRQ project expert panel made a number of recommendations to improve evaluations of PSP effectiveness. Most immediate in relation to theory is the need for evaluations to describe the theoretical basis for chosen intervention components or provide an explicit logic model for ‘why this PSP should work.’2

However, it is also critical to delineate and, where possible, measure contextual factors which influence PSP effectiveness. The AHRQ project identified four groups of contextual factors thought to be important and in need of study: external factors (eg, regulatory requirements, the presence of public reporting or pay-for-performance); organisational characteristics (eg, size, complexity and financial status); teamwork, leadership and patient safety culture; and management and implementation (eg, training resources, internal organisational incentives). A long-term goal is to develop some form of shared, theory-informed taxonomy with which to describe the key elements of these contextual factors and PSPs.

We have only highlighted general issues relating to the value of theory and illustrated some applications in this paper. The wide and often bewildering range of theories—each with its own strengths and limitations—and the lack of one ‘theory of everything’ might deter safety researchers less familiar with theory from using it. We suggest four approaches to help.

First, we recommend greater collaboration with researchers from fields such as psychology, sociology and management sciences. Patient safety is not a field of enquiry for which insights and understanding can be generated by an isolated research team; it requires interdisciplinary collaborations that can bring in one or more theoretical perspectives.

Second, preliminary work can guide which theory to select in the evaluation of a PSP. Several review and consensus-driven papers have integrated factors from individual theories into broader conceptual frameworks.18 44 45 Such frameworks represent useful approaches to the initial exploration of an implementation problem and guide the selection of more specific theories to understand specific causes and potential solutions. For example, Dyson et al used an interview schedule based upon the 12 behavioural domains18 to elicit barriers to and enablers of handwashing.46 While they found that motivation was important, lending support to using the TPB for deeper exploration, they highlighted that other domains, such as the working environment or habits and routines, could be considered in developing a strategy to increase handwashing.

Third, there are a number of toolkits to guide the application of theories to understand implementation problems and guide the development of better targeted interventions, such as those for the TPB43 or a sociological model, the Normalisation Process Theory.47

Fourth, safety researchers could usefully draw on theories from implementation research and beyond; there is a great need and opportunity for cross-pollination, so as to learn from wider bodies of knowledge and experience.

Conclusion

Theoretical perspectives have, hitherto, seldom been incorporated into PSP evaluations. This lack of description and explication of the assumptions or logic behind many PSPs makes it more difficult for others to reproduce or adapt them. Theory-driven evaluations can enhance generalisability and help build a cumulative understanding of the nature of change.

Acknowledgments

The authors acknowledge the contributions of L Carr, B Johnsen, P Smith and A Motala to this work. The technical expert panel included AS Adams, P Angood, DW Bates, L Bickman, C Brown, P Carayon, L Donaldson, N Duan, DO Farley, T Greenhalgh, J Haughom, ET Lake, R Lilford, KN Lohr, GS Meyer, M Miller, D Neuhauser, G Ryan, S Saint, K Shojania, SM Shortell, DP Stevens and K Walshe.

References


Footnotes

  • Linked articles 047035.

  • Funding AHRQ.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.
