The use of statistical process control (SPC) charts in healthcare is increasing. The general advice when plotting SPC charts is to begin by selecting the right chart. This advice, in the case of attribute data, may be limiting our insights into the underlying process and consequently be potentially misleading. Given the general lack of awareness that additional insights may be obtained by using more than one SPC chart, there is a need to review this issue and make some recommendations. Under purely common cause variation the control limits on the xmr-chart and traditional attribute charts (eg, p-chart, c-chart, u-chart) will be in close agreement, indicating that the observed variation (xmr-chart) is consistent with the underlying Binomial model (p-chart) or Poisson model (c-chart, u-chart). However, when there is a material difference between the limits from the xmr-chart and the attribute chart then this also constitutes a signal of an underlying systematic special cause of variation. We use one simulation and two case studies to demonstrate these ideas and show the utility of plotting the SPC chart for attribute data alongside an xmr-chart. We conclude that the combined use of attribute charts and xmr-charts, which requires little additional effort, is a useful strategy because it is less likely to mislead us and more likely to give us the insight to do the right thing.
- Statistical process control
- Quality measurement
- Control charts, run charts
Statistical process control (SPC) charts are increasingly being used in healthcare with the aim of understanding the voice of the process by distinguishing between common and special causes of variation and guiding action to deliver continual improvement.1–7 Common cause variation is intrinsic to any stable process and affects all outputs from the process. To decrease common cause variation, we need to act on the process. Special cause variation is the result of factors extrinsic to the process, and its reduction therefore requires identification of, and action on, the special causes. These notions are readily demonstrated. Imagine writing, under identical conditions, your signature with your usual hand say 10 times. Even though the underlying conditions are constant the signatures will show some variation—this variation has a common cause (the underlying process of writing) and further reduction in variation requires a change to your process of writing. However, if you produce another signature using your other hand (left or right) then it is very likely that this signature will stand out from the rest because it has a special cause. To reduce special cause variation we must first find the cause.
We can determine if a process is consistent with common or special cause variation with the aid of an SPC chart (see box for further details). When interpreting an SPC chart we can make two types of errors.8 A Type I error is to treat variation resulting from a common cause as if it were emanating from a special cause, and a Type II error is to treat variation from a special cause as if it were emanating from a common cause.
Statistical process control (SPC) charts
All SPC charts are characterised by showing the data from a process over time with the aid of three additional lines—a centre line (usually the mean) and upper and lower control limits typically set at ±3 standard deviations (SDs) from the mean. The control limits are sometimes known as three-sigma limits. When data points appear, without any unusual patterns (see below for further details), within the control limits, the process is said to be exhibiting common cause variation and is therefore considered to be in statistical control (or stable). The action to reduce common cause variation is to change the process. When points fall outside the three-sigma limits or there are unusual patterns in the data, the process is exhibiting special cause variation which requires detective work to find the cause and, if appropriate, to eliminate the cause. In our paper, we use three popular SPC charts which are briefly explained below.
P-chart: If we are monitoring, say, the proportion (or percentage) of patients who have a postoperative infection following minor surgery, then we can use a p-chart. The p represents proportion or percentage of a binary event which can only have two states, for example, infected/not infected. The control limits of the p-chart rely on the Binomial distribution which assumes that the events are independent and have a constant underlying probability of occurring.
C-chart: If we are monitoring, say, the number of emergency admissions every Monday, then we can use a c-chart. The c-chart is used when we can count the events (admissions) arising from an underlying population whose size is unknown but can be assumed to be relatively large and constant (ie, the hospital's catchment area). The three-sigma control limits for a c-chart are derived using the Poisson distribution. The assumptions of the c-chart are that the events occur one at a time and that the events are independent.
XMR-chart: If we are plotting the daily average blood pressure readings of a patient then this measurement can be plotted on an xmr-chart. Since this chart shows the data one day at a time (or one unit at a time), it is sometimes known as the individuals or i-chart. Although the control limits are derived from an underlying normal distribution (which also assumes independence), the xmr-chart is robust. A particular feature of the xmr-chart is that it uses the differences between successive values (ie, moving ranges) to determine the three-sigma limits and so the resulting limits are often described as empirical; that is, reflecting what is actually observed. The xmr-chart actually comprises two charts—one SPC chart showing the data (x) and the other showing the moving ranges—although it is not uncommon to show only the former. The empirical nature of the xmr-chart enables us to use it with attribute data. So, we can plot the proportion of infected patients or number of emergency admissions on an xmr-chart. The use of an xmr-chart alongside the c-chart or p-chart is the focus of this paper.
Rule 1: If any single data point falls outside the 3 SD limit from the centre line.
Rule 2: If two out of three consecutive points fall beyond the 2 SD limit.
Rule 3: If four out of five consecutive points fall beyond the 1 SD limit.
Rule 4: If eight consecutive points fall on the same side of the centre line.
The four rules will not pick up the issues we raise. Nevertheless, the handbook also suggests that ‘Fifteen consecutive points falling within ±1 SD’, or ‘hugging the central line’, is indicative of special cause variation.7 This rule has the potential to pick up some of the issues we raise but only if we have at least 15 data points and are relying primarily on the attribute SPC charts.
The first steps in producing an SPC chart require the user to select the right chart and this decision is determined by whether we are dealing with continuous/measurement data or discrete/attribute data. Continuous data involve measurement and include examples, such as weight, height, blood pressure, temperature and time from referral to surgery. Discrete attribute data involve counts (integers) and include examples such as the number of admissions, number of prescriptions, number of deaths and number of patients waiting. For individual measurements data (eg, a person's daily blood pressure readings), there is only one type of SPC chart that is applicable. It is known as the individuals-chart or i-chart or xmr-chart (see box).
For attribute data there is, in general, also one correct SPC chart, depending on the type of attribute data at hand (see box). For binary data, which are represented as proportions (eg, proportions of infected blood samples), the p-chart (p stands for proportion) is recommended. For Poisson-type counts (eg, number of emergency admissions per month) the c-chart (c stands for count) is recommended, and for Poisson rates (eg, number of infections per 1000 bed days) a u-chart is recommended. (Readers who are less familiar with selecting and calculating limits for these SPC charts are referred to a tutorial paper1 or textbooks in the references.5 ,7 ,9–13)
The use of traditional attribute SPC charts involves the use of an underlying statistical model (Binomial or Poisson) with its associated assumptions1 ,9–11 (see box). On the other hand, the control limits on an xmr-chart, although based on the normal distribution, are determined from the point-to-point variation in the data and so, to a much greater extent, can be described as ‘empirical’ limits reflecting the actual variation in the voice in the process.10 Although less widely acknowledged, this feature of the xmr-chart means that we can also use it to plot attribute data10 ,13 and this is what we wish to exploit in this paper. So, while most SPC texts5 ,9–12 emphasise the importance of selecting an attribute SPC chart for attribute data, our aim here is to demonstrate the value of producing an xmr-chart alongside an attribute SPC chart.
Our basic rationale stems from the fact that under purely common cause variation the three-sigma control limits on the xmr-chart and traditional attribute charts (eg, p-chart, c-chart, u-chart) will generally agree10 ,13 (assuming fairly constant subgroup sample sizes) indicating that the observed variation (xmr-chart) is consistent with the underlying Binomial model (p-chart) or Poisson model (c-chart, u-chart). However, in practice if the control limits show disagreement, then this contradiction is insightful because it points to a violation of the assumptions of the Binomial or Poisson model and the likely existence of an underlying systematic special cause of variation. In other words, the contradiction itself represents a signal of special cause variation that merits investigation. By using xmr-charts and attribute SPC charts alongside each other we can at once see if the control limits broadly agree or disagree. We cannot readily make this assessment using a single SPC chart; on the contrary, exclusive reliance on a single chart (attribute chart or xmr-chart) may be potentially misleading. We first illustrate these ideas with a simple simulated example and then proceed to two real case studies.
A simple illustrative example
To illustrate our ideas, we generated 50 random numbers from a Poisson distribution with mean equal to 50. The actual numbers are listed in the appendix. The correct chart for these data is the c-chart. The standard formula for the control limits of a c-chart is c̄ ± 3√c̄, where c̄ is the mean count (=48.9), giving a lower limit of 27.9 and an upper limit of 69.9. The resulting c-chart is shown in figure 1 (upper left panel) and shows no surprises—the inference is that the variation in counts is due to chance according to the Poisson law. Likewise, we are led to the same inference when we construct an xmr-chart (upper right panel). We also see that the control limits from the c-chart (lower limit 27.9, upper limit 69.9) and the xmr-chart (lower limit 27.3, upper limit 70.6) are almost identical. So under pure random variation, we may choose to construct either a c-chart or an xmr-chart and we are led to the same inference. This is not surprising because under pure random variation, the xmr-chart and the c-chart (or p-chart) will produce almost identical limits.10 ,13 So, producing both the c-chart and the xmr-chart provides a useful check as to whether the data are acting according to the Poisson law (or in the case of a p-chart, the Binomial law).
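The agreement between the two sets of limits is easy to check numerically. The sketch below (plain Python, no charting library; the counts are the appendix data) computes the model-based c-chart limits and the empirical xmr-chart limits:

```python
# c-chart vs xmr-chart limits for the 50 simulated Poisson(50) counts.
data = [39, 55, 53, 52, 50, 55, 53, 55, 47, 47,
        45, 55, 49, 43, 42, 38, 49, 44, 50, 56,
        46, 38, 53, 42, 52, 53, 65, 52, 44, 53,
        43, 52, 44, 57, 61, 52, 38, 58, 41, 46,
        49, 50, 40, 38, 40, 52, 64, 49, 35, 62]

c_bar = sum(data) / len(data)                 # mean count = 48.9

# c-chart: model-based three-sigma limits, c_bar +/- 3*sqrt(c_bar).
lcl_c = c_bar - 3 * c_bar ** 0.5
ucl_c = c_bar + 3 * c_bar ** 0.5

# xmr-chart: empirical limits, c_bar +/- 2.66 * (average moving range).
mr_bar = sum(abs(b - a) for a, b in zip(data, data[1:])) / (len(data) - 1)
lcl_x = c_bar - 2.66 * mr_bar
ucl_x = c_bar + 2.66 * mr_bar

print(f"c-chart:   {lcl_c:.1f} to {ucl_c:.1f}")   # 27.9 to 69.9
print(f"xmr-chart: {lcl_x:.1f} to {ucl_x:.1f}")   # 27.3 to 70.6
```

The near-identical limits are the check itself: the observed point-to-point variation matches what the Poisson model predicts.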
Now consider adding a fixed (ie, non-random, systematic) constant of 25 to each random count in our simulated data and let us see how this affects the behaviour of the c-chart and the xmr-chart (figure 1, middle row). We now see that there is disagreement across the control limits. The limits for the c-chart are 48.1 (lower limit) and 99.7 (upper limit) compared with 52.3 (lower limit) and 95.6 (upper limit) for the xmr-chart. The limits on the c-chart are about 20% wider than those of the xmr-chart (=((99.7−48.1)−(95.6−52.3))/(95.6−52.3)×100≈19%). Why? Because the limits for a c-chart are based on the square root of the mean count (c̄ ± 3√c̄), and since the mean count has increased (from 48.9 to 73.9) so too have the limits widened. However, the limits on the xmr-chart still show the same amount of variation as above (70.6−27.3=43.3 vs 95.6−52.3=43.3). Why? Because adding a constant 25 to each count does not affect the calculation of the control limits as the moving ranges are the same whether 25 is added to each count or not.
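The mechanics of the disagreement can also be verified directly: a constant shift leaves every moving range, and hence the xmr-chart width, unchanged, while the c-chart band (6√c̄ wide) grows with the mean. A minimal sketch, again using the appendix counts:

```python
# Effect of a fixed (systematic) +25 shift on c-chart vs xmr-chart widths.
data = [39, 55, 53, 52, 50, 55, 53, 55, 47, 47,
        45, 55, 49, 43, 42, 38, 49, 44, 50, 56,
        46, 38, 53, 42, 52, 53, 65, 52, 44, 53,
        43, 52, 44, 57, 61, 52, 38, 58, 41, 46,
        49, 50, 40, 38, 40, 52, 64, 49, 35, 62]
shifted = [x + 25 for x in data]

def c_width(counts):
    """Width of the three-sigma c-chart band: 6 * sqrt(mean count)."""
    return 6 * (sum(counts) / len(counts)) ** 0.5

def mr_bar(values):
    """Average moving range of length two (drives the xmr-chart width)."""
    return sum(abs(b - a) for a, b in zip(values, values[1:])) / (len(values) - 1)

print(mr_bar(shifted) == mr_bar(data))                      # True: xmr width unchanged
print(round(c_width(data), 1), round(c_width(shifted), 1))  # 42.0 vs 51.6: band widens
```

The shifted data thus look perfectly stable on each chart alone; only the widened c-chart band relative to the unchanged empirical band betrays the systematic shift.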
So while each chart alone provides no obvious signals of special cause variation, the combined side-by-side comparison clearly shows disagreements between the limits. This disagreement is due to an underlying systematic special cause of variation which added 25 to each count. If we rely exclusively on the c-chart or the xmr-chart, there is a risk of overlooking this type of underlying systematic special cause variation which is readily indicated by the lack of agreement between the c-chart limits and the xmr-chart limits.
For completeness we also consider the case of subtracting a fixed (ie, non-random, systematic) constant 25 from the uncontaminated counts. The resulting c-chart and xmr-chart are shown in the lower panel of figure 1. Again, we see disagreement in the control limits, but now the c-chart limits are much narrower than the xmr-chart. Indeed, the c-chart limits have narrowed to such an extent that two of the data points now appear above the control limits. Nonetheless, the disagreement between the limits serves as a signal of an underlying systematic special cause of variation (subtracting 25).
The non-random component of ±25 is an example of what we are calling an ‘underlying systematic special cause’ to make the point that such a cause is always present as opposed to the more usual ‘one-off’ special cause. Similar arguments apply to proportions data. If the proportions data are ‘pure’ Binomial then the subsequent p-chart will have limits that are in close agreement with those of the xmr-chart. This type of underlying systematic special cause leads to over- or under-reporting of the events and may occur in any setting. In healthcare, for example, the monthly count of the numbers of patients waiting for an elective hip operation may unexpectedly rise because one of the surgeons has injured his wrist. This would constitute a one-off special cause of variation and would cause a signal on an SPC chart until the surgeon was back operating again. However, if the numbers on the waiting list were being restricted to ensure that they do not exceed some, possibly embarrassing, threshold then such data manipulation would constitute an underlying systematic special cause of variation that affects all the months, not just some months. Below are two case-studies that demonstrate these ideas in practice.
Accidents case study data
Here we consider the monthly counts of accidents (personal communication, PW) at a train station (not train accidents, but incidents on the station involving passengers, say tripping or falling down). We place these data on a c-chart and then an xmr-chart (figure 2) and use the ideas from the previous demonstration to guide us. The raw data are given in the appendix.
To repeat, the formula for the control limits of a c-chart is c̄ ± 3√c̄, where c̄ is the mean number of accidents (=20.714), giving the lower control limit as 7.060 and the upper control limit as 34.368. The resulting c-chart is shown in figure 2 (left panel). The conclusion from the c-chart alone is that the process looks to be operating under common cause variation only. However, when we plot the data on an xmr-chart and place it alongside the c-chart we see that there is a marked difference in the control limits (or as one manager described it, ‘The limits on the c-chart look too wide to be real’). This prompts us to ask ‘why?’: for if the accidents are generated by chance causes only, we would not expect such a discrepancy. So, the disagreement in limits signals the presence of an underlying special cause. Subsequent investigation found the underlying cause—a loose paving stone that caused passengers to trip as they left the train—and so fixing the loose stone reduced the monthly occurrence of accidents to a level consistent with chance variation. In other words, the effect of the loose paving stone is analogous to ‘adding 25’ to each random point in the earlier demonstration. It is also worth noting that we were able to detect the existence of this underlying special cause from an apparently stable process without resorting to comparison with another train station with no loose paving stone.
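The discrepancy the manager noticed can be reproduced from the raw monthly counts in the appendix; the sketch below (plain Python) gives the c-chart limits quoted above alongside the much narrower empirical xmr-chart limits:

```python
# c-chart vs xmr-chart limits for the monthly accident counts.
accidents = [24, 22, 20, 24, 18, 18, 20, 22, 17, 19, 21, 21, 20, 24]

c_bar = sum(accidents) / len(accidents)        # 20.714 accidents per month

# Model-based c-chart limits: c_bar +/- 3*sqrt(c_bar).
lcl_c = c_bar - 3 * c_bar ** 0.5               # 7.06
ucl_c = c_bar + 3 * c_bar ** 0.5               # 34.37

# Empirical xmr-chart limits: c_bar +/- 2.66 * (average moving range).
mr_bar = sum(abs(b - a) for a, b in zip(accidents, accidents[1:])) / (len(accidents) - 1)
lcl_x = c_bar - 2.66 * mr_bar                  # about 14.2
ucl_x = c_bar + 2.66 * mr_bar                  # about 27.3

print(f"c-chart band width:   {ucl_c - lcl_c:.1f}")
print(f"xmr-chart band width: {ucl_x - lcl_x:.1f}")
```

The c-chart band comes out roughly twice as wide as the empirical band, which is exactly the ‘limits look too wide to be real’ signal described in the text.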
Proportions defective case study data
Deming8 (p. 265) and Wheeler and Chambers13 (p. 283) describe the following case study from industry. An inspector checked daily the quality of manufactured goods (shoes) based on a fixed sample size of 225 (pairs of shoes). The procedure specified that the shoes for inspection were to be chosen at random, the number found to be defective was to be counted and the proportion defective recorded. The data are reproduced in the appendix from Wheeler and Chambers.13
The correct chart for proportions data is the p-chart. The standard formula for the control limits of a p-chart is p̄ ± 3√(p̄(1−p̄)/n), where p̄ is the overall proportion defective (=0.0896) and n is the sample size (225 in our case), giving the lower control limit as 0.0325 and the upper control limit as 0.1468. Figure 3 (left panel) shows the resulting p-chart, which could lead us to conclude that the process is consistent with common cause variation. We now construct an xmr-chart of the data using the usual formula x̄ ± 2.66m̄R, where x̄ is the mean of the daily proportions defective xi, 2.66 is the relevant constant and m̄R is the average of the magnitudes of the moving ranges of length two (|xi − xi−1|). The resulting xmr-chart (moving range plot not shown) is shown in figure 3 (right panel).
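The p-chart limits quoted above follow directly from the Binomial formula; a minimal check in plain Python (the xmr-chart limits require the daily values themselves, so only the p-chart side is shown):

```python
# p-chart three-sigma limits: p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n).
p_bar = 0.0896   # overall proportion defective (from the text)
n = 225          # fixed daily sample size (pairs of shoes)

sigma_p = (p_bar * (1 - p_bar) / n) ** 0.5
lcl_p = p_bar - 3 * sigma_p
ucl_p = p_bar + 3 * sigma_p
print(f"p-chart: {lcl_p:.4f} to {ucl_p:.4f}")   # about 0.0325 to 0.1467
```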
In comparing the control limits of the p-chart and xmr-chart we notice a marked difference which signals an underlying special cause. If we rely only on one chart there is a risk of overlooking the existence of this underlying special cause and, as we shall soon see, of being seriously misled.
As Deming8 (p. 266) explains, the underlying special cause is that inspection figures were falsified, ‘The inspector was insecure, in fear. Rumour had it throughout the plant that the manager would close the plant down and sweep it out if the proportion defective on the final audit ever reached 10 per cent on any day. The inspector was protecting the jobs of 300 people’ and management was using these false figures to manage the business. This situation is analogous to ‘subtracting 25’ from each random point in the earlier demonstration. Deming8 went on to add that where there is fear, there will be wrong figures and management will base its decisions on false data. The problem disappeared when Deming8 pointed this out to top management. Once again, it is worth noting that we were able to detect the existence of this underlying special cause from an apparently stable process without resorting to comparison from another manufacturer with accurate inspection figures.
The example and case studies presented here demonstrate that, in respect of attribute data, the notion of one right chart is perhaps unnecessarily limiting and potentially misleading. Under pure random variation, the limits of the traditional attribute charts (eg, c-chart, p-chart) and the xmr-chart will agree. When there is an underlying systematic special cause of variation, the limits will disagree. The agreement/disagreement of the limits is a novel way to test for an underlying systematic special cause.
So, we conclude that the use of the xmr-chart alongside attribute charts, which requires little additional effort, is a useful strategy because it is more likely to guide us to do the right thing by reducing our propensity to misinterpret the variation in attribute data. Our advice regarding SPC charting with attribute data is to always consider plotting xmr-charts alongside attribute charts. Indeed, even if there is only space or time to show one SPC chart, we argue that this choice is better made after, not before, insights from plotting the xmr-chart alongside the attribute chart, and then selecting that chart which is less likely to mislead and more likely to inform.
Nonetheless, some issues remain with our approach. The use of the xmr-chart for attribute data is often criticised because the control limits do not reflect variation in subgroup sizes. This is a valid criticism but is readily addressed by Wheeler's suggestion of a simple multiplier involving the ratio of subgroup sample sizes and the average subgroup sample size.10 Furthermore, we note that the usual rules for detecting special cause variation (with the possible exception of the rule that ‘Fifteen consecutive points fall within ±1SD’) are not designed to detect the presence of the systematic underlying special cause of variation described here. To distinguish it from a one-off special cause (the more usual interpretation of special cause), we use the term ‘underlying special cause’ to denote a systematic, enduring, latent cause operating across the entire process. In addition, in our approach the assessment of agreement/disagreement between the attribute and xmr-charts is primarily a matter of judgement, although there may be scope for further work here.
In our examples, the underlying special cause produced either over- or under-reporting of counts. It is therefore tempting to ask if there is a way of deciding a priori, without resorting to further detective work, whether we are dealing with over- or under-reporting. The answer is yes for c-chart data and no for p-chart data. In the case of Poisson data plotted on a c-chart, the SD used in calculating the control limits is √c̄ and so over-reporting (where the mean is now increased) inflates the control limits, making them appear to be ‘too wide’; whereas under-reporting (where the mean is now lowered) leads to limits that appear to be ‘too narrow’. This was demonstrated earlier in the first example.
However, with Binomial data the issue is less clear-cut. The SD used in calculating the limits for a p-chart is √(p̄(1−p̄)/n), which is symmetric in p̄ and 1−p̄, so for a given sample size two different proportions (p̄ and 1−p̄) yield the same SD and hence control limits of the same width. Thus, a p-chart with control limits that appear to be too wide might be due to over- or under-reporting, and only a subsequent investigation can determine which. Likewise, unless we are dealing with very large sample sizes,14 a p-chart with control limits that appear too narrow also requires further investigation.
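The ambiguity stems from this symmetry of the Binomial SD; the proportions 0.25 and 0.75 in the sketch below are purely illustrative:

```python
# The Binomial SD sqrt(p*(1-p)/n) is symmetric in p and 1-p, so two very
# different reported proportions can produce control limits of the same width.
def binom_sd(p, n):
    return (p * (1 - p) / n) ** 0.5

n = 225  # illustrative sample size
print(binom_sd(0.25, n) == binom_sd(0.75, n))   # True
```

So a given band width is consistent with more than one underlying proportion, and the chart alone cannot say whether the reported figure has drifted up or down.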
We have used two non-healthcare case studies because it is not straightforward to come up with cases where the underlying systematic special cause is known and will not embarrass or violate the privacy of the institution involved. So, the use of non-healthcare examples should not be taken as evidence that this phenomenon is rare. Indeed, some time ago one of us (PW) analysed data relating to the number of patients waiting for admission to a particular hospital. Plotting the xmr-chart alongside the c-chart revealed that the limits on the xmr-chart looked ‘appropriate’ whereas those on the c-chart were too narrow. Further investigation revealed the underlying special cause: under-reporting due to one person consistently, month by month, under-recording the number of patients awaiting admission.
In summary, we have shown how the simple step of plotting an xmr-chart alongside the appropriate attribute data chart (a c-chart and a p-chart, in our examples) is useful in identifying an underlying systematic special cause of variation without resorting to external information. This is helpful because such an underlying systematic special cause may not be detectable by the conventional rules of SPC but nevertheless remains important to discover so that corrective action to improve the process is properly guided.
We would like to thank the editor and our anonymous reviewers for their comments, criticisms and suggestions which improved the manuscript. The table of data in the appendix is reprinted with permission from SPC Press, Understanding Statistical Process Control, 2nd Ed. by Donald J. Wheeler, PhD © Copyright 1992 by SPC Press, Knoxville, TN, USA. All Rights Reserved.
Appendix showing the raw data
Fifty random numbers from a Poisson distribution with a mean of 50 which were used to produce figure 1 (numbers are to be read from left to right).
39 55 53 52 50 55 53 55 47 47 45 55 49 43 42 38 49 44 50 56 46 38 53 42 52 53 65 52 44 53 43 52 44 57 61 52 38 58 41 46 49 50 40 38 40 52 64 49 35 62
Accidents data: monthly count of accidents plotted in figure 2 (numbers are to be read from left to right): 24 22 20 24 18 18 20 22 17 19 21 21 20 24
Contributors MAM produced the first draft of the paper based on preliminary discussions with PW. PW first introduced MAM to the idea of combined SPC plots for attribute data. Both authors contributed to and approved the final manuscript.
Competing interests Both authors have completed the unified competing interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; and no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data available.