Background Statistical process control charts (SPCs) distinguish signal from noise in quality and safety metrics and thus enable resources to be targeted towards the most suitable actions for improving processes and outcomes. Nevertheless, according to a recent study, SPCs are not widely used by hospital boards in England. To address this, an educational training initiative with training sessions lasting less than one and a half days was established to increase uptake of SPCs in board papers. This research evaluated the impact of the training sessions on the inclusion of SPCs in hospital board papers in England.
Methods We used a non-randomised controlled before and after design. Use of SPCs was examined in 40 publicly available board papers across 20 hospitals; 10 intervention hospitals and 10 control hospitals matched using hospital characteristics and time-period. Zero-inflated negative binomial regression models and t-tests compared changes in usage by means of a difference in difference approach.
Results Across the 40 board papers in our sample, we found 6287 charts. Control hospitals had 9/1585 (0.6%) SPCs before the intervention period and 23/1900 (1.2%) after the intervention period, whereas intervention hospitals increased from 89/1235 (7%) before to 328/1567 (21%) after the intervention period; a relative risk ratio of 9 (95% CI 3 to 32). The absolute difference in use of SPCs was 17% (95% CI 6% to 27%) in favour of the intervention group.
Conclusions The results suggest that a scalable educational training initiative to improve use of SPCs within organisations can be effective. Future research could aim to overcome the limitations of observational research with an experimental design or seek to better understand mechanisms, decision-making and patient outcomes.
- statistical process control
- quality improvement
- medical education
- team training
Data availability statement
Data are available on reasonable request.
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
What is already known on this topic
Statistical process control charts (SPCs) provide a basis for quality management and enable resources to be targeted effectively. Earlier research suggests that many hospital governing bodies, known as hospital boards in England, do not use SPCs.
What this study adds
An educational initiative with training sessions is ongoing to stimulate the demand for and supply of SPCs. This study reports positive findings of a controlled before and after study on the effectiveness of the intervention using naturally occurring observational data from board meeting papers.
How this study might affect research, practice and/or policy
Our results were not likely due to a ‘rising tide’ of greater use of SPCs, which suggests that focused interventions supporting uptake may still be required. Future research should consider mechanisms and use an experimental design.
Rationale for the use of statistical process control charts (SPCs)
The principles underlying statistical process control charts (SPCs) have been fundamental tenets of safety science since they were promoted by Deming and Shewhart in the 1930s.1 2 Originally developed to drive quality improvement in manufacturing, SPCs are now widely recommended for use in healthcare.3 A key feature of SPCs is ‘process’ or ‘control’ limits (henceforth used interchangeably) that visualise statistical variation from a mean. SPCs thus distinguish signal from noise or, in Deming and Shewhart’s original terminology, special cause from common cause variation. As a result, attention can be focused where it is needed. Presenting data in SPCs improves the ability of public advisors and hospital decision-makers to make good decisions given variation in the data, for instance, by not over-reacting to variation that is typical for a particular process of care.4 Examples of charts without and with process limits are shown in figures 1 and 2, respectively. Including process limits can limit the influence of cognitive biases that may otherwise guide decision-making. For example, in ‘anchoring bias’, human attention anchors on the most extreme and recent data points in a time-series chart, regardless of whether these data lie within common cause variation.4 5 A recent randomised trial showed that the use of SPCs was associated with fewer adverse surgical outcomes.6 Thus, omitting information about statistical variation could compromise decision-making about process variation, instigate unnecessary intervention, and, consequently, lead to the inefficient allocation of resources.
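The arithmetic behind process limits is straightforward. The sketch below is a minimal illustration only, using made-up monthly counts rather than the study data or the training materials: it computes limits for an individuals (XmR) chart as the mean plus or minus 2.66 times the average moving range, and flags points showing special cause variation.

```python
from statistics import mean

def xmr_limits(values):
    """Individuals (XmR) chart: centre line and lower/upper process limits.

    The limits are the mean +/- 2.66 * the average moving range,
    the standard constant for individuals charts."""
    centre = mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = mean(moving_ranges)
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

def special_causes(values):
    """Indices of points outside the process limits (signal, not noise)."""
    _, lcl, ucl = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

# hypothetical monthly incident counts: only the spike of 25 breaches a limit
counts = [10, 11, 10, 12, 11, 10, 11, 25, 10, 11]
```

A point outside the limits warrants investigation; points within them reflect common cause variation and, as the paper argues, should not by themselves trigger intervention.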
Lack of adoption of control chart methods
Hospital boards in the English National Health Service (NHS) are made up of executive and non-executive members who have a duty to assure the quality and safety of services. Board papers, therefore, include charts displaying quality and safety metrics. A previous study investigated the prevalence of SPCs in the documents used by hospital board members (board papers) in England, UK. The findings showed that SPCs are not widely included in hospital board papers in England: in 30 randomly selected English acute care hospitals’ quality and safety board papers, nearly half (14/30, 47%) of board papers did not contain any SPCs and only 12% (72 of 589) of the charts across papers were SPCs.7 Although the inclusion of SPCs in board papers does not necessarily indicate that these charts are being used effectively, it does suggest engagement with aspects of the approach.
An intervention to improve use of SPCs in board papers
The above findings underpinned the NHS Improvement/England (NHS I/E) (2019) initiative called ‘Making Data Count’ that encourages NHS institutions to adopt SPCs.8 NHS I/E is the organisation responsible for driving up the standard of care in the NHS. The initiative comprises educational resources and training sessions that take less than one and a half days to deliver, as described below in the “intervention” section.
The research aimed to assess the effect of the Making Data Count training sessions on the appearance of SPCs in publicly available board papers from NHS hospitals and to assess perceptions of the sessions among attendees. We conducted a systematic search for initiatives that aimed to improve use of SPCs for routine surveillance in healthcare. Our search strategy is laid out in figure 3 and discussed in the study protocol (online supplemental file 1). We looked for studies using SPCs in routine surveillance (rather than within an intervention to improve a given process)9 and found no papers replicating our approach.
A study protocol detailing the methods was published on the Open Science Foundation10 (online supplemental file 1). The SQUIRE reporting guideline checklist11 was completed (online supplemental file 2).
NHS Improvement delivered Making Data Count training sessions to NHS hospital board members and hospital analysts from November 2017. Hospital recruitment was performed by snowball sampling, where information on the training sessions was disseminated using social media, email and word of mouth.
The TIDieR checklist12 was completed (online supplemental file 3). The Making Data Count training sessions were delivered to two groups of hospital staff. One group comprised board members, who received sessions lasting around 90 min. The second comprised quality improvement staff, including analysts, clinicians and operational staff, whose training took place over one working day. The training sessions aimed to improve knowledge about SPCs and increase their uptake (see online supplemental files 4 and 5 for training PowerPoints). Content included background on SPCs, when and how to use them, how they can be generated and how they can inform decision-making about process variation. Topics included identifying trends, special versus common cause variation and using icons to summarise trends. The limitations of other charts were discussed, and, importantly, each training session was personalised using hospitals’ own data. No specific software platform was recommended for creating SPCs, but the training team provided tools in Excel and SQL software that could be adapted by the trainees. If trainees requested further tools, the training team provided details about other organisations that could provide information on other software tools such as Business Objects, Tableau and Qlik.
Study of the intervention
Our sample size was based on detecting a 30 percentage-point improvement in the proportion of SPCs from 10% preintervention to 40% postintervention. Given that the effectiveness of the training intervention on patient safety is contingent on changes in the uptake of SPCs in board papers, we believed that at least a ‘moderate’ effect size13 would be necessary to stimulate widespread adoption. Assuming 5% significance and 80% power, and assuming a correlation between preintervention and postintervention measures of 0.90 based on a t-test,14 a minimum of 16 hospitals in total with preintervention and postintervention measures was required (eight in each arm). We included 20 hospitals to err on the side of caution.
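For orientation, this type of calculation can be sketched with the generic normal approximation for a two-sample t-test. The function below is a simplified illustration with a hypothetical standardised effect size, not a reproduction of the study's power analysis, which used a paired design with a preintervention–postintervention correlation of 0.90.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample t-test via the
    normal approximation: n = 2 * ((z_alpha + z_beta) / d)^2,
    where d is the standardised effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

For example, a ‘large’ effect of d = 1.0 requires about 16 units per arm under this approximation; exploiting the high correlation between paired before and after measures shrinks the variance of the change scores and hence the required sample, which is how the study arrived at eight hospitals per arm.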
We selected 10 acute care hospitals that received the training after February 2018. To achieve temporal heterogeneity, we sampled one training intervention hospital per month. If more than one hospital received the training intervention in each month, we randomly selected one of the hospitals. We then selected matched control hospitals that had not received the training using the NHS Digital Peer Finder tool.15 Hospitals were matched on the number of patient attendances, degree of specialisation and deprivation level. Degree of specialisation was defined as the divergence of individual trust Healthcare Resource Group activity profile from the national profile.15 Deprivation level was obtained from the average 2010 Index of Multiple Deprivation score in Lower Super Output Areas (containing about 1500 people) where the hospitals’ patients live.16 Tiebreaker characteristics were number of full-time equivalent staff, urban location and whether the hospital had been classified as a ‘foundation hospital’ by NHS authorities.
Board paper selection
For the intervention hospitals, the preintervention board paper was the first paper published at least 1 month before the training intervention. The postintervention board paper was the first board paper published at least 6 months after the intervention. The papers from the control hospitals were selected at the closest month to their matched intervention hospitals. Figure 4 shows the study design with 20 observations for the intervention hospitals (10 preintervention and 10 postintervention) and 20 observations for the matched control hospitals (again 10 preintervention and 10 postintervention), giving a total sample of 40 board papers across 20 hospitals.
Quantitative measures: intervention versus control hospitals
In line with previous research on use of SPCs in board papers,17 our main outcome measure was the proportion of all charts in the board papers made up of SPCs. There were three other outcomes: first, the proportion of quality and safety charts made up of SPCs; second, the proportion of time series charts made up of SPCs; and third, the proportion of time series and between group charts made up of SPCs (between group charts include funnel charts that show data between hospitals).
Quantitative measures: examination of SPCs in intervention and control hospitals
We examined SPCs included in board papers of the intervention and control hospitals for specific features covered in the training for intervention hospitals (see PowerPoint slides in online supplemental file 4). One feature was icons (slide 47) that summarise statistical variation visually using colours and letters that indicate special or common cause variation or performance relative to a target. Another was whether the control limits were labelled (slides 32, 34). See online supplemental file 6 for the coding frame.
Four independent reviewers (R1, R2, R3, R4) conducted the quantitative coding. In step one, R1 and R2 independently identified charts and classified them according to whether they were quality and safety charts. In step two, R2 removed information regarding the hospital and the board meeting date. In step three, R3 and R4 identified the types of charts and specific elements of SPCs if identified. Any instances of deblinding were recorded.
The qualitative measures were four questions asked after the training sessions in feedback forms: ‘What went well today?’, ‘What could have been done differently?’, ‘What are your key takeaways?’ and ‘Any other comments about today?’. These forms were designed and administered by NHS-I/E and made available to the research team.
Hospital characteristics were summarised using means and SD. Inter-rater reliability was calculated using kappa statistics. Information regarding the type of charts and features of SPCs (online supplemental file 6) was summarised using counts and proportions. Next, we examined the effect of the training intervention on the main outcomes. For all hospitals, we first summarised the number of SPCs (outcome), the total number of charts and the proportion of SPCs out of all charts. The difference in the proportion of SPCs between preintervention and postintervention was computed for each hospital. This information was stratified by intervention and control hospitals, compared using a t-test and represented as a difference in difference with 95% CI.
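The difference in difference estimate itself is simple arithmetic: the mean within-hospital change in one arm minus the mean within-hospital change in the other. A minimal sketch, using hypothetical per-hospital proportions rather than the study data:

```python
def mean_change(pre, post):
    """Mean within-hospital change in the proportion of SPCs."""
    changes = [b - a for a, b in zip(pre, post)]
    return sum(changes) / len(changes)

# hypothetical per-hospital proportions of SPCs (not the study data)
ctrl_pre,  ctrl_post  = [0.00, 0.01, 0.00], [0.01, 0.01, 0.00]
treat_pre, treat_post = [0.05, 0.10, 0.06], [0.20, 0.35, 0.15]

# difference in difference: intervention change minus control change
did = mean_change(treat_pre, treat_post) - mean_change(ctrl_pre, ctrl_post)
```

The 95% CI around this estimate comes from the t-test on the per-hospital changes, which is left to statistical software.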
To determine the relative effect (risk ratio) of the intervention, we fitted a cluster-level zero-inflated negative binomial regression model (as the outcome data contained a high number of zero counts and there was overdispersion), with the number of SPCs in the postintervention period as the outcome, fixed categorical effects for intervention status and the preintervention proportion of SPCs, and the total number of charts in the postintervention period as the exposure.
In sensitivity analyses (see online supplemental file 7), we explored other models. The analysis presented as our primary analysis (zero-inflated negative binomial) differed from that planned (Poisson) because many hospitals had no SPCs (a high number of zero counts).
For the qualitative responses, a thematic analysis was conducted to identify barriers to and facilitators of using SPCs.18 We used an inductive, semantic and (critical) realist approach. One researcher coded each response into the main theme present in the data. These were reviewed by a second researcher who discussed the codes with the first researcher.
Information about the 20 hospitals from the NHS Digital Peer Finder Tool15 at baseline is summarised in table 1. On average, there were slightly more patient attendances per year in the intervention hospitals (1.7 million, SD=0.5 million) than in the matched control hospitals (1.3 million, SD=0.75 million). The degree of specialisation score was lower on average in the intervention group (83 739, SD=80 639) than in the matched control group (138 747, SD=135 068). The average 2010 Index of Multiple Deprivation was similar, at 24 (SD=7) in the intervention and 23 (SD=5) in the matched control sample.
Inter-rater reliability and blinding
Percentage agreement was 99.6% (Cohen’s k=0.97) for SPCs, 98.5% (Cohen’s k=0.94) for time series charts, 89.0% (Cohen’s k=0.61) for time series and between group charts, and 89.9% (Cohen’s k=0.80) for quality and safety charts. In no case was a rater ‘de-blinded’ such that they could discern whether a board paper arose before or after the salient intervention period. Twelve images were referred to the chief project investigator because it was unclear whether they were charts (eg, the resolution may have been too poor to tell) and agreement on the appropriate decision was reached in all cases.
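Cohen's kappa adjusts raw percentage agreement for the agreement expected by chance. A minimal sketch with made-up ratings (not the study data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(a) == len(b)
    n = len(a)
    # observed agreement: proportion of items both raters label the same
    po = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each rater's marginal label probabilities
    labels = set(a) | set(b)
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (po - pe) / (1 - pe)

# hypothetical binary codes (1 = SPC, 0 = not an SPC) from two raters
rater1 = [1, 1, 0, 0, 1, 0, 1, 0]
rater2 = [1, 1, 0, 0, 1, 0, 0, 0]
```

With highly imbalanced labels, as here where only a minority of charts are SPCs, percentage agreement can be high by chance alone, which is why kappa is reported alongside it.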
Chart characteristics for all charts in intervention and control hospitals
There were 6318 charts identified. However, 31 were either educational SPCs with example data, illustrative data not about the hospital, or they were icons without any data. These charts were removed from the analyses. After excluding these charts, 6287 charts were retained for analyses (see table 2). Nearly one-half of charts (3003/6287, 48%) were quality and safety charts. Time series charts were more common (4741/6287, 75%) than between group charts (640/6287, 10%) and 906/6287 (14%) charts were comprised of both time series and between group presentations (combined). Of all 6287 charts, 449 (7%) were SPCs. Of the 449 SPCs, 63/449 (14%) had a summary icon displayed on them, and the control limits were labelled for 342/449 (76%) of the SPCs. For most charts with labelled limits (191/342, 56%), the label was UCL (‘upper control limit’) or LCL (‘lower control limit’) rather than specifying where the limit was set (see online supplemental file 6 for further description of the SPCs).
Effects of training intervention (intervention versus control hospitals)
The raw numbers and proportions of SPCs used by group (control and intervention), hospital and time-period (preintervention and postintervention) for all charts are shown in table 3 and figure 5. On average in the control group, there was very little change in use of SPCs from before (9/1585, 0.6%) to after (23/1900, 1.2%) the intervention period (average difference 0%, 95% CI −2% to 2%). In the training intervention group, use of SPCs increased from 89/1235 (7%) to 328/1567 (21%), and the average difference was 22% (95% CI 2% to 42%). On average, the absolute difference in use of SPCs was 17% (95% CI 6% to 27%) higher in the intervention group compared with the control group. Use of SPCs in the postintervention period was nine times higher (95% CI 3 to 32) in the intervention group compared with the control group, adjusting for the preintervention (baseline) proportion of SPCs.
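For intuition, a crude (unadjusted) risk ratio with a log-scale 95% CI can be computed directly from the aggregate postintervention counts in table 3. This is a standard textbook formula, not the study's analysis: it differs from the model-based estimate of nine reported above, which adjusts for the baseline proportion of SPCs and the clustering of charts within hospitals.

```python
from math import exp, log, sqrt

def risk_ratio(a, n1, b, n2):
    """Crude risk ratio (a/n1) / (b/n2) with a log-scale 95% CI."""
    rr = (a / n1) / (b / n2)
    # standard error of log(RR) for two independent proportions
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
    return rr, lo, hi

# 328/1567 SPCs in intervention papers vs 23/1900 in control papers
rr, lo, hi = risk_ratio(328, 1567, 23, 1900)
```

The crude ratio is larger than the adjusted one partly because the intervention hospitals already used more SPCs at baseline.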
Subset of quality and safety charts only
As planned, we carried out an analysis restricted to quality and safety charts. The raw number and proportions of SPCs used by group (control, intervention), hospital, and time-period (preintervention versus postintervention) for quality and safety charts are shown in table 4. In the control group, there was very little change in use of SPCs from before (7/657, 1%) to after (12/741, 2%) the training intervention period (average difference 0%, 95% CI −3% to 4%). In the training intervention group, use of SPCs was 71/684 (10%) before and 213/921 (23%) after the training, and the average difference was 21% (95% CI 0% to 42%). On average, the difference in use of SPCs was 18% (95% CI 7% to 29%) higher in the intervention group compared with the control group. In model-based analyses, use of SPCs in the postintervention period was nine times higher (95% CI 2 to 41) in the intervention group compared with the control group.
Subset of time series charts
Further analyses regarding changing the exposures to time series charts and between group charts are reported in online supplemental file 7, Tables S7-2 and S7-3. For the model with the time series chart exposure, the results were broadly similar to the main analysis.
Subset of time series and between group charts
For the model with the time series and between group exposure, the average difference in use of SPCs was 10% (95% CI 0% to 20%) higher in the intervention group compared with the control group. The zero-inflated negative binomial model did not converge for these data, possibly due to the high number of zero cells in the outcome (37/40 observations).
Thematic analysis of qualitative data
Written responses from the feedback forms were available for 7 out of 10 hospitals in the training intervention sample, including two hospitals that increased the proportion of SPCs in board papers by less than 10%. Most comments consisted of a few words or one sentence. The main themes relating to responses to the question about what went well were the general format, content and delivery of the training (n=21/66), such as ‘Topic relevant and timely’; practical and personal examples that use own hospitals’ data (n=19/66), such as ‘trust (hospital) data brought it alive’; conversation, discussion and interaction (n=10/66), such as ‘interactive opportunity to discuss examples’; formatting, use and insights (n=10/66), such as ‘good explanation of SPC rules’ and other general comments (n=6/66).
The question about what could have been done differently during the training elicited fewer responses overall (n=32) than did the question about what went well (n=66); this was true across hospitals, including those that changed their use of SPCs both more and less than 10%. The main themes relating to what could have been done differently were the session format (n=15/32), such as ‘more time for discussion’ and ‘break out into groups’; no suggestions for doing anything differently (5/32); the training content (4/32), such as having a ‘technical supplement’ and ‘more on the calculation of control limits’ and requests for more examples using own hospital data (3/32), providing handouts (3/32) and other (2/32).
Most participants mentioned awareness of SPCs themselves as a key takeaway (n=29/70). Others commented on the general use of SPCs (n=23/70), such as trend lines, tools and templates, and understanding ‘how poor presentation can lead to poor decisions’. Several participants commented that the training changed how they interpret data (n=6/70), intend to report data (6/70) or generally think about data and reporting (4/70). The other comments (n=2/70) were about encouraging others and timelines for implementation.
Finally, when asked for any other comments, most participants made generally positive comments on the training (25/26). Only one (1/26) participant suggested that ‘next steps are important’, which may reference the need to consider implementation steps in training.
Summary of main results
This study investigated whether an educational training intervention increased the use of SPCs in NHS hospitals. We studied the board papers of 10 hospitals that received the training before and after the intervention, along with those from 10 control hospitals that did not receive training over the same time-period. The results showed that most hospitals increased the proportion of SPCs in their board papers after the training intervention, while there was almost no change in the proportion of SPCs among the controls. In model-based analyses, trained hospitals increased their uptake nine-fold relative to controls. The intervention consisted of a day of training for quality improvement staff and 90 min for board members. As this is not a highly intensive intervention, it should be scalable across most contexts.
Interpretation of main results
Interpretation with reference to prior literature
These results are important for several reasons. First, many hospitals do not depict statistical variation in the documents used to inform decision-making about process variation.7 Second, the use of SPCs enables management’s recommendations to align with statistical findings.4 A recent trial in France found that surgical departments using SPCs had better patient outcomes than controls. Notably, the French intervention appeared more intensive than the training intervention that we evaluated. It provided departments with SPCs from publicly available data, encouraged structured meetings and supplied logbooks for completion. These activities were all in addition to 3 days of training.6 Our results suggest that a simpler approach can effect change in the prevalence of charts in board papers, although it is a matter of opinion as to whether the change in the hospitals that improved was sufficient to influence improvements in processes and outcomes. Evidence on generalisable mechanisms linking the appearance of charts to quality improvement would more fully inform such opinions, such as perceptions of decisions taken based on the charts and hospital culture.
Interpretation of heterogeneity of the results
Improvement was not uniform across intervention hospitals. The qualitative data do not explain why some hospitals improved but not others, as nearly all respondents reported positive perceptions of the training—including in hospitals that did not change their use of SPCs in board papers. However, these positive responses may have been shaped by social desirability bias.19 Moreover, some respondents requested more information, including a technical supplement and more on calculating control limits, suggesting that not all training needs had been fulfilled and further sessions or re-engagement may be required.
Interpretation of proportional changes
There are several mechanisms by which the proportional changes in this study could be brought about. As intended, many charts that were previously not produced as SPCs could be transformed into SPCs. However, the total number of charts in the denominator could have decreased because of the intervention, thereby exaggerating the improvement in the proportion of charts that were SPCs (see detailed discussion in online supplemental file 8). Note that this mechanism is possible even in a randomised trial, as the intervention could have prompted changes in the number of charts presented to boards. On balance, we interpret our results as supporting the increased adoption of SPCs while acknowledging the alternative mechanisms. We also note that there is no agreed proportion of SPCs in board papers that would indicate sufficient usage after training, and the need for SPCs could vary by context as topics of concern may change over time.
Issues related to the presentation of SPCs in board papers
The presentation of SPCs could be further improved. Nearly half of SPCs did not state where the control limit had been set, either not mentioning the limit or simply recording ‘UCL’ and ‘LCL’ without specifying the limit (eg, three SD). Without labels on limits, the degree of uncertainty that they represent is unclear. We did not compare the labelling and limits of intervention and control hospitals due to the small number of identified SPCs.
Issues related to the implementation of SPCs in hospitals
The use of SPCs takes place within broader organisational contexts. It is possible that SPCs are not included in board papers but are used elsewhere—such as in quality and safety subcommittees. We believe this is unlikely given the explicit quality assurance function of hospital boards. Training alone may be insufficient to encourage adoption of SPCs if the organisational context is not supportive. Importantly, SPC usage is not a sufficient condition for improvement, just as checklists cannot, by themselves, effect safe practice.20 There must be a supportive implementation context: a team of analysts to create the charts, board members who view and interpret charts, managers who discuss and act on the information presented in the chart and staff at the front line. SPCs are but one element in a chain of events influencing the safety and quality of patient care.
Limitations of our study
Our research design does not fully permit a causal interpretation of the results. However, the use of contemporaneous controls showed that our results are not likely due to a ‘rising tide’ of greater use of SPCs among all NHS hospitals.21 Although control hospitals were selected to be as similar as possible to intervention hospitals, clear differences were observed at baseline, including in use of SPCs (Hospitals 15, 18, 20). We adjusted for observed differences between hospitals and the before and after design allows us to adjust for differences in baseline rates of the outcome variable (use of SPCs). However, especially given baseline differences, we must suspect unobserved confounders; for example, the intervention hospitals might have been more motivated to change in response to the training.
Limitations of research in the area
Future research should consider an investigation that randomly assigns hospitals to training interventions to balance these factors between groups. Other investigations might also research effects for other forms of hospitals, such as mental health or community care hospitals, to explore generalisability. Studies could explore which aspects of the training are effective, such as the personalisation element, trainers themselves and trainees’ understanding and confidence.20 Importantly, the causal chain linking the prevalence of charts in board papers to patient outcomes should be evidenced, including by qualitatively understanding decision-making related to patient care.
Limitations of qualitative research
A limitation of our qualitative data is that they came from feedback solicited only shortly after the intervention, which restricts investigation of longer-term mechanisms such as confidence.
Certainly, not all the charts within board papers could or should be SPCs. SPCs are not a panacea for understanding data related to all quality improvement issues. However, the high proportion of charts with time series information in the board papers (90%), combined with the lack of use of SPCs, suggests substantial scope to better visualise chance variation in the data presented to decision-makers. Our results suggest that educational training initiatives may bolster progress towards this aim.
Ethics approval
This research was approved by the University of Warwick Biomedical and Scientific Research Ethics Committee (BSREC 116/18-19). Data come from publicly available hospital board meeting paper documents. No data can identify individual patients.
Thank you to Alan Girling for providing feedback on the overall research design, Bernard Crump for providing feedback on the coding frame for the charts in the board papers, Beth Ann Brackett for her input, the public contributors for their feedback and Sam Riley and Karen Hayllar from NHS-Improvement/England for collaborating and sharing their training resources.
Contributors LK wrote initial draft of protocol and manuscript, revised subsequent drafts of protocol and manuscript and attended NHS-I/E SPC training. PB attended NHS-I/E SPC training and commented on drafts of the protocol and manuscript. KS contributed to protocol development, conducted the literature review and informed and revised the content of the protocol and manuscript. LQ and KH contributed to the design of the study, developed the statistical analysis plan, commented on drafts of the protocol, wrote the results section of the manuscript and revised manuscript drafts. RL initiated idea, commented on drafts of protocol and manuscript and provided conceptual guidance. All authors have read and approved the manuscript. LK acts as guarantor.
Funding The current project was supported by the National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) West Midlands, grant number NIHR200165.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.