Two related papers1,2 in this issue of BMJ Quality & Safety provide interesting insights into the difficulties of evaluating improvement activities, and also illustrate why improvement is so hard. In a carefully crafted set of controlled, interrupted time series experiments, the authors examined the effectiveness in the operating theatre of two popular improvement interventions: standardised procedures and teamwork training. The primary outcomes in both were process measures: the theatre teams’ non-technical skills performance, and the count of ‘glitches’—omissions, interruptions or other untoward events that disrupted flow and had potential to affect safety or quality. In both experiments, the investigators took care to ensure the interventions were ‘owned’ by the frontline workers, and not imposed from without by managers disconnected from the realities of the workplace (although this also means that higher-level support important for sustainability may have been lacking).
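For readers less familiar with the design, the sketch below shows how a controlled interrupted time series is commonly analysed using segmented regression. It is a generic, minimal illustration in Python with entirely hypothetical data and variable names, not the authors’ actual model: level and slope changes at the interruption are estimated and allowed to differ between intervention and control groups.

```python
# A minimal sketch of a controlled interrupted time series analysed by
# segmented regression. Hypothetical data and variable names; a generic
# illustration, not the authors' actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # sessions per group; the intervention starts at session 20

def simulate(group, level_shift):
    t = np.arange(n)
    post = (t >= 20).astype(int)
    # glitches per hour: a gentle baseline trend plus (for the intervention
    # group) a level change after the interruption, plus noise
    glitches = 6 - 0.02 * t - level_shift * post + rng.normal(0, 0.8, n)
    return pd.DataFrame({
        "time": t,
        "post": post,
        "time_since": np.clip(t - 20, 0, None),  # slope-change term
        "group": group,
        "glitches": glitches,
    })

df = pd.concat([simulate("intervention", 1.5), simulate("control", 0.0)])

# Segmented regression: level (post) and slope (time_since) changes at the
# interruption, each allowed to differ between the two groups.
model = smf.ols(
    "glitches ~ time + post + time_since"
    " + C(group) + C(group):post + C(group):time_since",
    data=df,
).fit()
print(model.summary())
```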
The papers report insufficient evidence to support improved performance from introducing standard operating procedures alone, even when those procedures were developed and implemented by the frontline staff themselves.1 However, they also report a partial success: when standard operating procedures were combined with teamwork training, non-technical skills performance improved significantly.2 Curiously, in the combined experiment, technical performance as measured by ‘glitches’ per hour improved in both the experimental and control groups. Taken as a whole, the two papers suggest an interaction, or synergism, between the two interventions. Standardisation alone was not effective, but standardisation in conjunction with teamwork training was (although we cannot be certain whether teamwork alone might have been similarly effective).
These two papers make a valuable contribution to the safety and quality literature by showing that the same intervention (standardisation) can be ineffective in one context (without teamwork training) but effective in another (with teamwork). One wonders how many negative reports of quality interventions were negative only because an important effect modifier was missing from the analysis; or conversely, how many positive reports attributed success to the planned intervention, when it was actually facilitated by an unmeasured interaction variable. There is a significant risk here of drawing the wrong lessons from previous work. This is a possible explanation for the heterogeneity that bedevils the safety and quality literature—a confusing patchwork of claims and counterclaims, reports of interventions that worked or failed, or worked here but not there (sometimes even within the same organisation).3 Systematic reviews of these reports have not helped much; by treating context as a nuisance variable and averaging it out, they tend to cast everything in a dim grey light: across the board, most interventions appear neutral, or dully average at best, and further investigation is always required.
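The effect-modification point can be made concrete with a toy example. The sketch below uses entirely hypothetical data, not data from the papers: the outcome improves only when standardisation and teamwork training co-occur, a model omitting the interaction term dilutes the effect toward null, and a model including it recovers the true picture.

```python
# A toy demonstration (hypothetical data) of effect modification: the
# outcome improves only when standardisation and teamwork training are
# both present. Omitting the interaction term dilutes the apparent effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
std = rng.integers(0, 2, n)   # standardisation present (0/1)
team = rng.integers(0, 2, n)  # teamwork training present (0/1)
# true model: a benefit of 1.0 only when both interventions co-occur
outcome = 1.0 * std * team + rng.normal(0, 1, n)
df = pd.DataFrame({"std": std, "team": team, "outcome": outcome})

naive = smf.ols("outcome ~ std + team", data=df).fit()  # no interaction
full = smf.ols("outcome ~ std * team", data=df).fit()   # with interaction
print(naive.params)  # 'std' effect diluted to roughly half its true size
print(full.params)   # 'std:team' interaction estimated close to 1.0
```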
These papers fall into a well-established evaluation framework that has become an orthodoxy in healthcare: the technical, rational, deterministic and reductionist approach of positivist ‘normal science’. The success of this approach in much of science, and the parallel success in industry of its philosophical cousin, statistical process control, have led healthcare into mistaking the map for the territory. Because positivist science has been such a successful lens through which to view aspects of the world, these aspects have been mistaken for the world itself, and anything that does not fit or cannot be accommodated in a positivist paradigm is tacitly presumed to be unimportant or non-existent.
These methods were largely developed for static, engineered, inanimate systems; the paradigmatic model for statistical process control is the assembly line. They are approaches suited to machines, where there are seldom interactions among components: it is possible to change only one thing at a time, because a change in one part does not produce a consequent change in another.
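To make that paradigm concrete, the sketch below illustrates the classic Shewhart individuals chart, a workhorse of statistical process control, using hypothetical data from a simulated stable machine process. The control limits are meaningful only under exactly the assumptions just described: a fixed mean and independent observations.

```python
# A minimal sketch of a Shewhart individuals chart, the classic SPC tool.
# Hypothetical data from a simulated, stable machine process; the +/- 3
# sigma limits presume a fixed mean and independent observations.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.0, 1.0, 50)  # e.g. cycle times from a stable process

# Estimate sigma from the mean moving range (the standard individuals-chart
# method; 1.128 is the d2 constant for moving ranges of size 2).
sigma_hat = np.abs(np.diff(x)).mean() / 1.128
centre = x.mean()
ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat

signals = (x > ucl) | (x < lcl)  # points flagged as 'special cause'
print(f"centre={centre:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}  "
      f"signals={signals.sum()}")
```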
However, healthcare systems are not assembly lines. They are complex, intractable, sociotechnical systems,4–6 organic rather than engineered. Their basic ‘physics’ is poorly understood at best. They do not simply accept change (eg, interventions), but adapt and reconfigure themselves in response to it; those adaptations reverberate and ramify throughout the system via positive and negative feedback loops with varying delays. These interactions among components are more important than the components themselves; the behaviour of one component depends in part on the behaviour of others, and the evolving cycles of reciprocal action and reaction reshape the universe of possibilities.7,8 This makes systems path dependent; the past trajectory of changes, reactions and interactions influences future paths, opening some while closing others.9 Furthermore, sociotechnical systems are composed at least in part of sentient beings, so how the actors in the system understand and interpret interventions in context, and the strategies they develop to manage or integrate them within existing workflows, strongly influence the outcome.
These properties make it impossible to change only one thing,10,11 and difficult to predict the overall effect of changes by ‘summing’ across the individual effects.7 Thus, interventions in a complex sociotechnical system produce a chain of consequences that extend over time and cannot be fully anticipated. Such systems cannot be directly controlled in the Taylorist, rationalist way that managers or regulators would like; and evaluations of interventions in such systems can never be ‘one and done’, but must always be formative rather than summative.
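A toy simulation, entirely hypothetical and intended only as an illustration, shows why effects cannot simply be ‘summed’. Here two components are coupled through saturating feedback loops, and the combined effect of intervening on both is markedly smaller than the sum of the two separate effects.

```python
# A toy simulation (entirely hypothetical) of non-additivity: two
# components coupled by saturating feedback. The combined effect of two
# interventions is not the sum of their separate effects.
import numpy as np

def run(push_a=0.0, push_b=0.0, steps=500):
    a = b = 0.0
    for _ in range(steps):
        # each component responds to the other through a saturating
        # (nonlinear) feedback loop, then relaxes toward equilibrium
        a += 0.1 * (push_a + 0.5 * np.tanh(b) - 0.5 * a)
        b += 0.1 * (push_b + 0.5 * np.tanh(a) - 0.5 * b)
    return a + b  # overall 'performance'

baseline = run()
effect_a = run(push_a=1.0) - baseline
effect_b = run(push_b=1.0) - baseline
effect_both = run(push_a=1.0, push_b=1.0) - baseline
print(f"A alone: {effect_a:.2f}, B alone: {effect_b:.2f}")
print(f"sum of separate effects: {effect_a + effect_b:.2f}")
print(f"both together: {effect_both:.2f}")  # less than the sum
```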
The problem is exacerbated when the intervention itself is a complex social one.12 In the two papers discussed here, teamwork training is clearly a complex social intervention, but what about standard operating procedures? Standardisation is often viewed as a purely objective, technical exercise, but this is a misconception.13 However objective, rationalised, complete and internally consistent a set of standardised procedures might be, their development, interpretation and application are social processes, subject to the context, history, politics and goals of actors in the system.14 In addition, there are inevitably gaps between the imagined world of the procedures and the real world of work,15 and conflicts among competing goals; both must be recognised, negotiated and resolved in action by workers in a community of practice. Finally, the cycle of adaptations set in motion by the intervention can feed back onto the original intervention itself, so that it, too, changes with time, triggering yet another cycle of adaptations.
Although complex sociotechnical systems cannot be directly controlled, all is not lost, because they can be influenced.8 Interventions may not lead directly to the desired behaviours, but they can ‘set the stage’ to enhance and sustain the emergence of those behaviours.16 This realisation will require us to modify our approach to both improvement and its evaluation. It will require accepting a broader range of sciences and methodologies as admissible; abandoning many of the Taylorist principles that have informed improvement efforts;17 and fundamentally re-examining the Newtonian-Cartesian assumptions that underlie them.18
Similarly, we will have to expand our evaluation methods to move beyond a certain methodological fetishism19 aimed at answering the ‘horse race’ question “Does A work better than B?” and adopt more nuanced methods20–22 aimed at a more complex set of questions: “Which works, how, why, for whom, to what extent and in what context?” These questions are often best addressed by qualitative, ethnographic methods aimed at providing a ‘thick description’ in a case study of an improvement effort.23–25 The value of this type of approach has been shown by careful, theory-driven studies of how and why initiatives are successful26: for example, discovering that the theory of improvement motivating a project at its outset was not how the improvement eventually occurred in practice; or illuminating tensions and paradoxes in contrasting understandings of interventions.27
However, progress in this area is haunted by a difficult question: why have safety and quality in healthcare been so strongly wedded to rationalist, Taylorist, Cartesian-Newtonian thinking about the nature of clinical practice and how to improve it? Three factors supporting this marriage may be difficult to overcome. First, it offers the comforting modernist illusion that the muscular application of science can at last tame risk, uncertainty and disorder, leading to a better, safer, more controllable world.28 Second, it offers a satisfying means of drawing meaning from the inevitable failures that must still occur,29 while simultaneously not threatening those in power.30 And finally, it supports a long-standing secular trend increasing the power and influence of a technocratic elite18 of scientific-bureaucratic managers31 that accompanies the progressive industrialisation of healthcare.32,33 Ironically, the external pressures on healthcare to achieve the precision, safety and efficiencies of linear production systems are driving some very counterproductive behaviours and undermining our desired goals.
References
Footnotes
- Competing interests None.
- Provenance and peer review Not commissioned; internally peer reviewed.