Modern approaches to improvement in health care need modern approaches to measurement. Our traditional use of matrices of retrospective data has been described as like “trying to drive a car by looking through the rear view mirror”.1 Statistical process control tools appear to have much to offer despite the concerns expressed about using techniques taken from industry.2–5 In this issue of Quality in Health Care Boëlle et al lift the lid on the use of control charts in health care and give us a peek inside.6 This commentary attempts to lift the lid a little further to explain the importance of the principles underpinning their use and to provide some further insights into their value for taking the next step—that is, actively to improve care.
Whenever we measure such data as the average length of stay each quarter, monthly waiting times to admission, or frequency of adverse events in anaesthetic processes we can be sure of one thing—the results will change over time because of natural variation. However, our own natural inclination is to respond to individual data points as soon as we see them and many a management memo has been written exhorting staff to “do something about it”, even though “it” may be due to random causes. Control charts are graphs that can take the uncertainty out of decision making through the analysis of relationships between data points plotted over time. They offer a powerful tool for those trying to understand whether the variation displayed is random and what may be causing it. One of the key aims of quality improvement is to reduce unwanted variation in processes of care.
The control chart displayed in the paper by Boëlle et al provides a good illustration of how such charts can be very informative. The monthly frequencies of significant anaesthetic events show random variation over time, but none of them fall outside the “control limits” superimposed on the graph. Control limits indicate the range of variation that the process has displayed to date. The graph (control chart) in the paper by Boëlle et al suggests that, although the frequency of significant anaesthetic events will continue to vary due to a wide range of possible causes inherent within the process, it will not fall outside these limits. This type of variation is called “common cause variation”, the process is described as being “stable”, and it is possible to predict its future performance.
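The calculation behind such limits is simple. For monthly counts of events, one common choice is the c-chart, whose limits are the mean count plus or minus three standard deviations under a Poisson assumption (variance equal to the mean). A minimal sketch, using hypothetical monthly counts rather than the authors' data:

```python
import math

def c_chart_limits(counts):
    """Centre line and 3-sigma control limits for a c-chart of event counts.

    For Poisson-distributed counts the variance equals the mean, so the
    limits are c_bar +/- 3*sqrt(c_bar), with the lower limit floored at 0.
    """
    c_bar = sum(counts) / len(counts)
    sigma = math.sqrt(c_bar)
    ucl = c_bar + 3 * sigma
    lcl = max(0.0, c_bar - 3 * sigma)
    return lcl, c_bar, ucl

# Hypothetical monthly counts of significant anaesthetic events
monthly_events = [12, 9, 14, 11, 8, 13, 10, 12, 9, 11]
lcl, centre, ucl = c_chart_limits(monthly_events)
outside = [c for c in monthly_events if c < lcl or c > ucl]
print(f"LCL={lcl:.1f}, centre={centre:.1f}, UCL={ucl:.1f}")
print("Points outside limits:", outside)  # none here: common cause variation only
```

With every point inside the limits, as in the chart discussed above, the process would be judged stable and its future performance predictable within that range.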
However, the graphs also show a pattern of falling values emerging during 1998 that does not appear to be random and which may be the result of something specific acting on the process. When such a pattern is observed, and is confirmed by simple rules as being unnatural, it serves as a signal that there is a “special cause” at work that needs further investigation. Boëlle and his co-authors discovered that it appeared to reflect a reduction in the number of patients experiencing nausea due to the use of a different drug. Learning this allows the team to make decisions about future management. Data points that fall outside the control limits are also considered to result from special causes that warrant special attention. There are no such points in this case illustration.
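The "simple rules" for confirming such a pattern can be expressed directly in code. One widely used rule (from the Western Electric set) signals a special cause when eight or more consecutive points fall on the same side of the centre line; the sketch below implements only that rule, on hypothetical data:

```python
def run_signal(values, centre, run_length=8):
    """Return the index at which a run of `run_length` consecutive points
    on the same side of the centre line is completed, or None if no run."""
    run = 0
    side = 0  # +1 above centre, -1 below, 0 on the line
    for i, v in enumerate(values):
        s = (v > centre) - (v < centre)
        if s != 0 and s == side:
            run += 1
        else:
            side, run = s, (1 if s != 0 else 0)
        if run >= run_length:
            return i
    return None

# Hypothetical series: a sustained fall below a centre line of 10,
# analogous to the falling pattern during 1998
series = [11, 9, 12, 10.5, 9, 8, 7, 8, 7, 6, 7, 6, 5]
print(run_signal(series, centre=10))  # prints 11: the run completes at that point
```

A signal from such a rule is only a prompt for investigation; it was the follow-up by the team, not the chart alone, that traced the fall to the change of drug.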
Depending on the needs of the investigators, control charts can provide early warning of a problem or determine whether planned changes generate better outcomes. Because they use real time data, they provide speedy feedback and can therefore make an important contribution.
Control limits are usually calculated from the actual data gathered, using simple formulas, and then plotted on the graph. It is unclear from the paper by Boëlle et al whether they did this or whether they arbitrarily assigned values to them. This distinction is important if you are trying to discover what a process is capable of, rather than what you would like it to do. Control limits calculated from real data are crucial to discovering whether variation is due to common or special causes, since the two require very different approaches to intervention.
A further step that Boëlle and co-workers could take is to revisit their distinction between process events and outcome events. Their very careful categorisation and listing of such events lends itself to supporting a dynamic approach to improvement. It could be argued that all the events they describe are the outcomes of processes. Undertaking a Pareto analysis (suggesting that 80% of the variation is caused by 20% of the processes) would allow them to identify the significant few processes that, if improved, might reduce variation further. The interrelationships between these processes and outcomes and the impact of interventions could be studied using control charts to display the data.
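A Pareto analysis is equally simple to sketch: tally events per category, sort in descending order, and find the smallest set of categories that together account for (say) 80% of the total. The category names and counts below are hypothetical illustrations, not figures from the paper:

```python
def pareto_vital_few(counts, threshold=0.8):
    """Return the categories that together account for at least `threshold`
    of all events, in descending order of frequency (the 'vital few')."""
    total = sum(counts.values())
    vital, cumulative = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        vital.append(name)
        cumulative += n
        if cumulative / total >= threshold:
            break
    return vital

# Hypothetical tallies of significant anaesthetic events by category
events = {
    "nausea/vomiting": 55,
    "hypotension": 20,
    "prolonged recovery": 12,
    "dental trauma": 8,
    "other": 5,
}
print(pareto_vital_few(events))
# → ['nausea/vomiting', 'hypotension', 'prolonged recovery']
```

The "vital few" categories identified this way point to the processes whose improvement is most likely to reduce overall variation, which can then be tracked on further control charts.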
Finally, they could use the availability of real time control charts to reinforce interprofessional team working in their service. They have begun this with the involvement of nurses. Examination of the control charts should stimulate curiosity among the different team members who between them manage the process of care, and hence the potential causes of the variation displayed. Taking “blame” out of the equation by focusing on processes and emphasising team learning is critical to the successful use of control charts for the continuous improvement of their care.
The authors should be applauded for using these tools in their attempt to introduce rigorous measurement to the business of improvement, rather than making judgements and scapegoating. Their paper can serve both as a springboard for the team's own continuing improvement efforts and as a stimulus and encouragement for others to present similar papers for publication.
The techniques of statistical process control, which have proved to be invaluable in other settings, appear not to have realised their potential in health care. Thus, even in the paper published here they are not being used in the same way as they would in other industries—as ongoing and prospective components of a quality improvement process. Is this because they are, as yet, rarely used in this way in health care? Is it because they are unsuccessful when used in this way and thus not published (publication bias)? Or is it that they are being successfully used but not by people who have the inclination to share their experience in academic journals? Indeed, this has been a perceived problem in publishing quality improvement projects across health care, as discussed in a previous editorial.7 Neither journals nor writers are equipped to present such practical examples of good practice, despite the real demand for sharing the experience of generalisable methods.
So, Quality in Health Care would like to set a challenge to those of you who have experience of applying such techniques. Let us have examples of the effective application of tools such as run charts and control charts, process flowcharts, Pareto analysis, fishbone diagrams, etc. Our new rapid response mode will help with this (see page 158 of September issue) and, if we get enough, we could begin to publish collated examples. Alternatively, look at our guidance on quality improvement reports on our website (www.qualityhealthcare.com) and give us your projects using statistical process control in this format for publication. Meanwhile, we will be seeking to commission papers that provide guidance on the use of such tools.