
Make no mistake—errors can be controlled
C M Hinckley

Correspondence to: C M Hinckley, President, Assured Quality, 2016 South 370 West, Perry, UT 84302, USA; cmhinckley{at}msn.com

## Abstract

Traditional quality control methods identify “variation” as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care as well as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods.

• error management
• mistake proofing
• quality control


In most circumstances a defective video card in a computer is not likely to pose a serious health risk. By comparison, a defect in a blood sugar measurement device used by a diabetic patient or a medication error may threaten the health and safety of the patient. As these examples illustrate, quality and safety are more intimately linked in the healthcare profession than in any other industry. Because of the litigation risk, expense, and loss of consumer confidence resulting from a single quality lapse, every healthcare provider would prefer to supply error free products and services if these could be provided at a reasonable cost. While we know what we would like to do, in many cases the challenge is that we do not know how to do it.

Modern quality techniques used by healthcare professionals have generally been derived from industrial quality methods where many repetitions of the same process made it possible to first identify and characterize quality problems and attributes. World class quality leaders in the industrial sector are now manufacturing and assembling virtually defect free products, while eliminating traditional quality control methods and post-process inspection—achieving dramatic improvements in quality at a fraction of the traditional cost. Understanding the evolution of quality control concepts, what is being done to eliminate and control mistakes by world class quality leaders, and why these techniques are so effective can have a profound impact on the quality of healthcare service and patient safety.

The objectives of this paper are to:

• illuminate three distinctly different sources of non-conformities: variation, mistakes, and complexity;

• clarify the relative importance of mistakes in healthcare quality and safety;

• demonstrate why traditional statistical quality control methods are not effective in preventing or controlling mistakes; and

• introduce modern mistake proofing techniques that have been shown to be extremely effective and efficient.

Our primary goal is to persuade healthcare professionals that virtually every mistake can be prevented and controlled and that the tools are available today to achieve these results.

## QUALITY CONTROL EFFORT AND DEFECT RATE

US quality control performance generally lags behind world class standards, even though many companies have initiated aggressive quality control efforts. A benchmarking study by a major corporation showed that quality leaders in the US, including those aggressively pursuing Six Sigma goals (box 1), rarely achieved defect rates below 1000 parts per million (ppm).1 The scrap, rework, repair, warranty, and quality control costs among these US quality leaders consumed 6–24% of their production budget. In contrast, world class pacesetters such as Toyota maintain defect rates below 50 ppm while spending less than 3% of their production budget on the same quality losses (fig 1).

Figure 1

Relative quality control performance for quality leaders based on a benchmarking study by a major corporation. Toyota consistently maintains defect rates below 50 ppm while spending less on quality control than competitors.1

### Box 1 Six Sigma

Traditionally, the goal has been to control process variation so that specified performance limits are at least three standard deviations (a statistical measure of the process dispersion) from the mean, or average, value of the process. Six Sigma has the goal of defining limits and controlling process variation to assure that performance limits are at least six standard deviations from the mean. This level of control allows for a batch to batch shift in the process mean of 1.5 standard deviations, while predicting that non-conformance rates will still be less than 3.4 ppm.
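The non-conformance rates quoted in box 1 can be reproduced directly from the normal distribution. The sketch below (Python, standard library only) computes the predicted rate in ppm for a given sigma level, allowing the customary 1.5 standard deviation shift in the process mean:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def defect_ppm(sigma_level, mean_shift=1.5):
    """Predicted non-conformance rate in ppm when the specification
    limits sit sigma_level standard deviations from the nominal mean
    and the mean is allowed to drift by mean_shift standard deviations."""
    upper = normal_tail(sigma_level - mean_shift)   # tail beyond the near limit
    lower = normal_tail(sigma_level + mean_shift)   # tail beyond the far limit
    return (upper + lower) * 1e6

print(round(defect_ppm(3), 1))   # three sigma limits with drift
print(round(defect_ppm(6), 1))   # Six Sigma target: ~3.4 ppm
```

Note that these figures assume the process truly follows a normal distribution out to its extreme tails, which is exactly the assumption challenged in the sections that follow.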

While traditional quality control levels may seem adequate, the consequences of “pretty good” quality are staggering. The Institute of Medicine2 estimated in 1999 that between 44 000 and 98 000 people die in hospitals each year due to mistakes, making medical error the eighth leading cause of death in the nation. The same report estimated that hospital errors alone cost the nation $8.8 billion a year.

## VARIATION: A SOURCE OF DEFECTS

With the advent of mass production in the early 1900s, part to part variation was identified as the major source of assembly problems and product defects. Given the inherent inconsistency of production at the time, it is not surprising that variation was the first defect source identified. From this historical context it is natural that the most commonly accepted concept in quality control is that “variation is the enemy”. From this perspective, statistical quality control (SQC), together with more recent refinements such as Harry and Stewart’s Six Sigma,3 is viewed as an adequate means of controlling defects. This approach, however, has fundamental limitations that must be understood to achieve world class quality.

### The SQC paradigm

The SQC paradigm is based on the observation that the outcome of every repeated action follows a distribution like that shown in fig 2. In a histogram, frequencies are plotted in dimensional increments or bins represented by the vertical bars. Lower magnitude values of time, quantity, size, or distance are plotted on the left, and those of higher magnitude on the right. A mathematical model that matches the data is then selected to describe the observed distribution, as shown in fig 2. We also use commonly computed properties such as the mean (μ) and the standard deviation (s) which, respectively, describe the “average” value and the dispersion of a sample.

Figure 2

A mathematical model that “fits” the data plotted as a histogram is selected.
The height of the bars shows the frequency or probability of events falling between the high and low value in each bin plotted along the abscissa. Because sample sizes are generally small, we can only predict the fraction of the distribution falling in the tails beyond specified limits by extrapolating the selected mathematical model, as shown in fig 3. If the defect rate estimated by the fraction below the lower specification limit (LSL) or above the upper specification limit (USL) exceeds a desired value, we try to improve process control to reduce variation, or adjust the mean or limits, until acceptable conditions are achieved.

Figure 3

Extrapolation of the model to predict the fraction of events above or below defined limits, shown by the ratio of the dark shaded areas to the total area under the curve.

### Limitations of SQC as a guide for quality control

An accurate understanding of the cause of rare events is essential in establishing adequate quality control. Thus, the success of SQC in achieving world class quality depends on its ability to accurately predict and control the tails of a distribution. Unfortunately, traditional statistical methods are inadequate quality tools because they consistently underestimate the frequency and magnitude of events falling in the tails. This weakness of SQC can be traced to several factors:

• Sample sizes are generally small and are consequently inadequate to characterize the tails of a distribution accurately.

• Since only a portion of the product or process is sampled when applying SQC, many events that exceed desired limits (non-conformances) go undetected.

• Typically, two or three readings are averaged to provide a single data point for fitting the distribution, following guidelines from Juran4 and Ishikawa.5 This practice obscures the skewness, or asymmetry, of a distribution.

• Outliers, or oddball observations that do not appear to fit the population, are arbitrarily discarded.
Consequently, events that suggest that the tails are “heavier” than predicted are generally ignored.

• Traditional inspection methods are not perfectly reliable, as shown by Tavormina and Buckley.6 There is a significant probability that conditions outside the control limits will not be detected when inspected, so the tails always tend to exceed the fraction predicted by inspection. For example, screening a population for a specific disease may miss some affected patients because the symptoms are not yet pronounced, the patient may be recovering, or the symptoms may be incorrectly attributed to a different health problem.

• Defect rates are predicted by assuming that the population faithfully follows the selected model far beyond the range of the collected data. These models, particularly the normal distribution, are frequently assumed without evaluation.

### Evidence illustrating the limitations of SQC

An example drawn from the production environment clearly illustrates the limitations of statistical methods. The dashed line in fig 4 is the distribution of time required for operators to perform a “standard” operation. This distribution is derived from work rate studies of roughly 9000 employees over a 10 year period in which each employee was evaluated four times a year.7 The data therefore reflect over 360 000 observations, an exceptionally large sample. Note the long thin tail extending to the right. Although this example is drawn from an industrial setting, the time spent by doctors with each patient in office visits is likely to follow a similar distribution, where a few visits take a disproportionate amount of the doctor’s time.

Figure 4

Probability distribution functions (PDFs) of time per task for a large population (dashed line) and 10 000 averaged random draws from the distribution (solid line). Curves are not plotted to the same vertical scale.
Raw data from Barnes.7

Given this distribution, we can simulate random samples to assess the effectiveness of traditional sampling methods. The distribution shown as a solid line in fig 4 was obtained using 10 000 random draws from the larger sample. Outliers, defined as readings more than four standard deviations from the mean, were discarded. The remaining readings were averaged in sets of three to produce about 3300 observations. The “averaged” distribution is overlaid on the original population using different vertical scales for visual comparison. The standard deviation estimated after discarding outliers and averaging is roughly half the true population standard deviation (σ) in this example. Figure 4 also shows that the traditional sampling methods give poor approximations for the tails of the distribution.

### Impact of small sample sizes

A small sample size further degrades the accuracy of predicting the tails. A typical histogram based on 300 random draws from the time per task distribution, with outliers discarded and readings averaged, is plotted in fig 5. Note that the resulting histogram appears to follow a normal distribution, plotted as a solid line, but deviates significantly from the population, plotted as a dashed line. In seven out of 10 such samples the best statistical tests would conclude that the normal distribution is an appropriate model for these data. In other words, 70% of the time most researchers using traditional SQC methods would conclude that this population follows a normal distribution.

Figure 5

Probability distribution functions (PDFs) for a large population (dashed line) and 300 averaged random draws from the distribution (solid line). Curves are not plotted to the same vertical scale. In most cases a normal distribution appears to be a good model for such small samples.
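The sampling procedure described above can be sketched in a short simulation. The lognormal population below is an assumed stand-in for the skewed time-per-task data (the Barnes data are not reproduced here); the procedure mirrors the traditional practice of discarding readings more than four standard deviations from the mean and averaging the survivors in sets of three:

```python
import random
import statistics

random.seed(1)

# Assumed stand-in for the skewed time-per-task distribution:
# a lognormal with a long right tail.
population = [random.lognormvariate(0, 0.5) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

# Traditional sampling practice applied to 10 000 draws:
draws = random.sample(population, 10_000)
m, sd = statistics.mean(draws), statistics.stdev(draws)

# 1. Discard "outliers" beyond four standard deviations of the mean.
kept = [x for x in draws if abs(x - m) <= 4 * sd]

# 2. Average the remaining readings in sets of three.
triples = [kept[i:i + 3] for i in range(0, len(kept) - 2, 3)]
averaged = [sum(t) / 3 for t in triples]

est_sd = statistics.stdev(averaged)
print(f"population sd {pop_sd:.3f}, estimated sd {est_sd:.3f}")
```

Averaging in sets of three alone shrinks the apparent standard deviation by a factor of about the square root of three, and discarding outliers shrinks it further, reproducing the roughly halved dispersion estimate described in the text.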
In a study of 23 distributions of human performance traditionally modeled with the normal distribution, the frequency and magnitude of events more than three standard deviations from the mean were consistently underestimated.8

## MISTAKES—THE MAJOR SOURCE OF DEFECTS

Given the task of delivering medication to a patient in a hospital, nurses may occasionally forget to prepare the medication if they are busy with other tasks. Rook9 found that 1 in 10 000 to 1 in 100 000 manufacturing operations are omitted without detection. In other words, omission errors alone will result in defect rates as high as 100 ppm. In efficient manufacturing processes the goal is to have each work cycle take less than 1 minute, with each cycle repeated roughly 400 times a day by a worker. Omission errors will occur even more frequently in environments like nursing care, where each specific type of operation, such as giving an injection, may only be repeated a few times a day.

Omissions are only one type of mistake. Nurses could inadvertently select the wrong medication, misread the prescription, select the wrong dose, deliver the medication to the wrong patient, select the wrong number of capsules or pills, or drop a pill without detecting it. These are just a few of the many possible errors that can occur. Thus, while each type of mistake is individually a rare event, collectively errors tend to be the dominant source of quality lapses in medical care. In a visit to a hospital clinical chemistry laboratory, the author observed seven errors in 2 hours, including a blood sample with the wrong patient name, specimens loaded on the wrong processing equipment, and lost specimen labels. This wide variety of possible errors, combined with the mistake rates of typical processes, supports the conclusion that mistakes are the major source of healthcare problems.
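The cumulative effect of many individually rare mistakes can be estimated with simple probability. The per-task rates below are illustrative assumptions, not measured values, but they show how quickly rare mistake types combine to exceed world class defect rates:

```python
# Illustrative (assumed, not measured) per-task mistake probabilities
# for a single medication delivery:
mistake_rates = {
    "omit dose":      1e-4,
    "wrong drug":     5e-5,
    "wrong dose":     5e-5,
    "wrong patient":  2e-5,
    "misread script": 5e-5,
}

# Probability that at least one mistake occurs, assuming the
# mistake types are independent:
p_ok = 1.0
for p in mistake_rates.values():
    p_ok *= (1.0 - p)
p_any = 1.0 - p_ok

print(f"{p_any * 1e6:.0f} ppm")  # collectively far above 50 ppm
```

Even with every individual rate at or below 1 in 10 000, the combined rate here is several hundred ppm, well above the sub-50 ppm performance of world class quality leaders.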
The results of studies that have attempted to identify the source of quality problems have consistently reinforced this conclusion. The Harvard Medical Practice Study10 in 1991 found that 3.7% of hospital patients received disabling injuries due to medical treatment errors, and 58% of these injuries were caused by errors in management. The Quality in Australian Health Care Study11 reported in 1995 that 16.6% of hospital admissions were associated with adverse events. Of these, 50% were judged to be highly preventable. A follow up study concluded that human error was the prominent cause of the adverse events,12 with 57% of them stemming from cognitive errors. The cost of these errors to the healthcare system in Australia in 1999 was estimated to exceed $800 million per year.

In the UK adverse events occur in about 10% of hospital admissions according to a report from the Department of Health,13 costing £2 billion annually in additional hospital stays alone, without considering the wider human suffering or losses. This report acknowledged that “human error may sometimes be the factor that immediately precipitates a serious failure”.

Bates et al14 stated that 6.5% of patients entering hospitals experience adverse drug events due to prescription errors. Illustrating the seriousness of these errors, approximately 1% resulted in fatalities.

Mistakes are also a common cause of non-conformities in clinical laboratories. Lapworth and Teal15 cited two studies where clinical laboratory mistake rates were in the range of 0.3–2.3%. Although their own study identified an average mistake rate of only 0.05%, their mistake detection methods admittedly could not detect many types of error. Boone16 observed overall mistake rates of roughly 100 per 100 000 (0.1%) in a hospital clinical laboratory. Similarly, in a study of turn around times for urgent clinical tests, Pellar et al17 found that mistakes were a leading source of delays.

Significantly, none of these studies identified variation as an important factor in adverse events, clearly pointing to mistakes as the dominant problem.

### Consequences of mistakes

Although seemingly simple mistakes can result in staggering economic losses, the real cost of mistakes should be measured in terms of human suffering. An announcement in the Salt Lake Tribune on 10 June 2002 warned former residents of Roosevelt, Utah that 1500 individuals, mostly children, needed to be revaccinated after it was discovered that vaccines were stored below recommended temperatures.18 Vaccinations may have been compromised starting as early as 1 January 1998—4.5 years before the announcement. Although getting a second round of shots is not a pleasant experience for the children, consider the potential harm this simple error could have caused had an epidemic reached the area.

Sometimes the consequences are even more painful. On 7 March 1998 The Record published an article about Daniel Garza, a 10 year old boy who had just passed away.19 The paper reported that “the boy’s life turned into a chain of medical visits....that...led to one mistake after another”. After several healthy ribs were mistakenly removed, he had to undergo surgery again to remove the diseased ribs.

### Mistakes are not controlled by SQC

Just as a medication is either given to a patient or not, mistakes either occur or do not occur. Thus, mistakes are best described in terms of probability rather than variation. Distributions describing variation can be expressed in terms of probabilities, but the converse is not always true. Consequently, the only universal method for describing both variation and mistakes is probability.

Let us examine a case where a nurse gives a patient an injection. Because of variations in syringes, filling procedures, lighting, and a variety of other factors, the amount of medication in the syringe varies from one injection to the next. Differences in stroke, force, dwell time, technique, and the “stick” location in the patient also contribute to the variation in medication delivered. A joint or combined distribution of medication variation which addresses occasionally omitted injections is illustrated in fig 6 (the familiar bell shaped curve plots as a parabola when the vertical scale is logarithmic). Note that the injection quantity for an omitted injection operation falls completely outside the traditionally accepted model describing the distribution of dose. In other words, SQC methods based on the dose variation predict that omission errors will never occur.

Figure 6

The PDF for normally distributed injection dose that plots as a parabola on a logarithmic vertical scale. Omitted injections result in outcomes exceeding statistical predictions.

SQC is relatively ineffective in preventing mistakes. If one operation in 100 is sampled, 99% of the defects resulting from mistakes will never be detected or corrected. Even if SQC accurately predicted the frequency of mistakes, it cannot be used to predict when these rare events will occur. Thus, world class quality can only be achieved through control of virtually every mistake rather than by prediction of their frequency.
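A small simulation makes the point concrete. The mistake rate and sampling interval below are assumed for illustration: with 1-in-100 sampling, almost all defects caused by rare mistakes pass through uninspected, whereas 100% source inspection sees every one:

```python
import random

random.seed(7)

N = 1_000_000
mistake_rate = 1e-3   # assumed rare-mistake rate per operation
sample_every = 100    # SQC-style sampling: inspect 1 operation in 100

# Each operation either contains a mistake-caused defect or it does not.
defects = [random.random() < mistake_rate for _ in range(N)]

# Sampling inspection only sees every 100th operation.
caught_by_sampling = sum(d for i, d in enumerate(defects)
                         if i % sample_every == 0)

# 100% source inspection sees every operation.
caught_by_source = sum(defects)

print(caught_by_sampling, "of", caught_by_source, "defects caught by sampling")
```

Roughly 99% of the defects escape the sampling scheme, matching the text: no amount of statistical characterization of a rare event substitutes for inspecting every operation.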

### Pareto’s law and the limitations of statistical methods

The author has observed that Pareto charts are one of the best methods for describing the combined effects or frequency of mistakes and variation. Juran,4 one of the champions of SQC, defined the Pareto principle and described its application as “universal”. He stated: “The Pareto principle would lead us to believe that most distributions of quality characteristics would not quite be normal. Both Shewhart’s and the author’s experience confirm this.”

What is not commonly understood is that the technique of plotting data in the form of Pareto charts was originally developed by Lorenz20 to describe data that follow Pareto’s law.21 Pareto’s law is the basis for proving that the moments of such distributions are unbounded. Based on Pareto’s law, the typical distribution observed in Pareto charts is strong evidence that the variance of these data will not converge as a function of increasing sample size, a problem that is not readily discernible with small samples. In addition, distributions described by Pareto’s law can violate the conditions of the central limit theorem, showing that even the mean value of some distributions will not converge as sample size increases.
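The non-convergence of the sample variance is easy to demonstrate. The sketch below draws from a Pareto distribution with shape parameter 1.5 (below 2, so the theoretical variance is infinite) and shows that the sample variance keeps jumping rather than settling as the sample grows:

```python
import random
import statistics

random.seed(3)

alpha = 1.5  # Pareto shape parameter below 2: theoretical variance is infinite

variances = []
for n in (1_000, 30_000, 1_000_000):
    # random.paretovariate draws from a Pareto distribution with the
    # given shape parameter and minimum value 1.
    sample = [random.paretovariate(alpha) for _ in range(n)]
    variances.append(statistics.variance(sample))
    print(n, round(variances[-1], 1))
```

For a distribution with finite variance the printed values would stabilize; here they remain dominated by the largest observation in each sample, which is exactly the "erratic behavior of sample moments" that Mandelbrot described.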

There is a substantial body of data and strong theoretical basis for concluding that extrapolating statistical models to extreme limits is not generally justified. Mandelbrot22 observed that “From the erratic behavior of sample moments, it follows that a substantial portion of the usual methods of statistics should be expected to fail, except if extreme care is exerted. This failure has of course often been observed empirically, and has perhaps contributed to the disrepute in which many writers hold the law of Pareto; but it is clearly unfair to blame a formal expression for complications made inevitable by the data.”

This discussion is not intended to be critical of statistics. It is an extremely valuable and useful tool that produces wonderful insights and is a great aid in understanding every process and a wide variety of human conditions. We simply need to keep in mind that there are important limitations in using statistics, and that additional tools are needed to understand and control events in the extreme tails of a distribution.

### Controlling defects using mistake proofing

Although mistakes are inevitable, non-conformances and defects are not. To prevent defects caused by mistakes, our approach to quality control must include several new elements. Firstly, to avoid wasted effort, mistakes must be detected and corrected before they result in defects or harm to patients. To achieve this goal, inspection must be upstream of the process. Shingo,23 the leader who achieved a quality revolution at Toyota, perfected these upstream or source inspection methods. Secondly, since mistakes must be detected to be corrected, they can only be blocked using reliable 100% inspection. If the control measure for a mistake does not prevent the mistake, shut down the operation, or provide a warning, it is not mistake proofing. Because each type of mistake is typically a rare event, these mistake proofing methods must be inexpensive. Finally, inspections must be autonomous to prevent inadvertent omission of the inspection itself.
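As a sketch only, the elements above (source inspection upstream of the action, blocking rather than merely recording) might look as follows in code. The function and field names are hypothetical illustrations, not drawn from any real clinical system:

```python
class MistakeBlocked(Exception):
    """Raised when source inspection detects a mistake condition."""


def administer(order_patient_id: str, wristband_id: str,
               dose_mg: float, max_dose_mg: float) -> str:
    """Hypothetical mistake-proofed medication step.

    Source inspection: the conditions that would cause a defect are
    checked *before* the action, and the action is blocked (not merely
    logged) when a mistake condition is present.
    """
    if wristband_id != order_patient_id:
        raise MistakeBlocked("patient mismatch - administration stopped")
    if dose_mg > max_dose_mg:
        raise MistakeBlocked("dose exceeds maximum - administration stopped")
    return f"administered {dose_mg} mg to {order_patient_id}"


print(administer("P1", "P1", 5.0, 10.0))
```

Because the check runs on every operation and cannot be skipped independently of the action, it provides autonomous 100% inspection at negligible cost, which is the defining property of a poka-yoke device rather than a downstream audit.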

#### 100% inspections

The high cost of inspection, combined with Deming’s view that quality cannot be “inspected into” a product, has discouraged 100% inspections. However, Shingo23 showed that new techniques based on “poka-yoke” or mistake proofing enable 100% inspections at a fraction of the cost of SQC. More importantly, these inspections are fundamentally different from those that Deming characterized as ineffective. Unlike traditional inspection, which tries to detect defects, mistake proofing and source inspection generally detect the conditions that could cause defects and enable corrective action before the defect occurs.

### Evidence supporting the new instruments of quality control

Because mistakes are the dominant source of today’s defects and SQC is not an adequate tool for controlling mistakes, it follows that the SQC approach to quality control cannot by itself assure world class quality performance. Furthermore, understanding that mistakes are a problem is not the same as taking action to prevent mistakes. Experience reflected in the observations shown in box 2 supports this conclusion.

### Box 2 Evidence for the need for mistake proofing

• “We should recognize that people are, after all, only human and as such, they will, on rare occasions, inadvertently forget things. It is more effective to incorporate a checklist – i.e. a poka-yoke – into the operation so that if a worker forgets something, the device will signal that fact, thereby preventing defects from occurring. This, I think, is the quickest road leading to attainment of zero defects.”23

• “Motorola elected to enter this market (electronic ballast) and set a quality goal of 6 sigma for initial delivery. But it became evident early in the project that achieving a Cpk greater than 2 (a measure of variation control) would go only part of the way. Mistake proofing the design would also be required . . . Mistake proofing the design is an essential factor in achieving the TDU (Total Defect per Unit) goal. The design team is forced to investigate any opportunities for errors during manufacture and assembly, and to eliminate them.”24

• Hospital experience: hospitals regularly hold mortality and morbidity conferences to identify mistakes, with the intent of sharing them so that others may avoid repeating them. However, there is little evidence that this approach is producing a sustained reduction in mistake rates. Similar problems were noted in “An Organization with a Memory”,13 which observed that “information from complaints system and from the health care litigation in particular appear to be greatly unexploited as a learning resource”. In contrast, Gawande25 described the following changes in the control of anesthesiology. Mistakes in the administration of anesthesia were known for several decades to be one of the more common causes of death during operations. Between the 1960s and 1980s the death rate attributed to anesthesia errors stabilized at 1–2 per 10 000 operations. At that time manufacturers each set their own standards, and turning a knob clockwise increased the supply of anesthetic on one piece of equipment but decreased it on another. Although many recognized the frequency of anesthesia administration mistakes, dramatic reductions in these errors did not occur until a champion and leader at a national level, Ellison (Jeep) Pierce, insisted that changes must and could be made. His leadership defined consistent standards for equipment manufacturers and introduced mistake proofing elements into the administration of anesthesia. As a result of this work, deaths from the administration of anesthesia fell more than 20-fold in the space of a decade.

The key point is that recognizing the consequences of mistakes or characterizing the type and frequency of mistakes is different from acting to prevent them. Too often we recognize the scope of the quality problems but action is not being taken at the appropriate level to intervene or prevent the problems.

## COMPLEXITY: THE ROOT CAUSE OF VARIATION AND MISTAKES

Process complexity is the root cause of both the mistakes and the excessive variation that result in defects. For example, we can reduce both the probability of omitting a critical step and the uncontrolled variation in a medical procedure by changing the process so that fewer or simpler steps are needed. In other words, as the complexity of a process decreases, the probability that it produces a defect falls. Consequently, complexity is the root cause of defects arising from the more familiar and accepted sources.

### Link between complexity and defects

Many researchers have observed that mistakes and defects increase with the difficulty and duration of tasks. This link between complexity and defects is intuitively sound but had not previously been quantified. Our research has shown that defect rates are strongly related to a complexity measure defined as the time required to complete a task working at a “standard” work rate, minus a constant multiplied by the number of process steps executed (fig 7). This metric has proved to be the best measure of product and process complexity over a wide variety of conditions.

Figure 7

Defects per unit versus complexity for three different manufacturers plotted on a log-log scale. Complexity for a product is the total assembly time (TAT) minus a constant “c” multiplied by the total number of operations (TOP). The lines through the three data sets are the least squares fits to the data and all have virtually the same slope.

On the other hand, efforts to predict defect rates for complete products or complex processes by combining, or rolling up, the variation performance of each process step are so difficult that they are rarely attempted. Furthermore, this approach rarely produces predictions that match observed quality performance trends. The strong correlation between complexity and defect rates, in contrast to the poor correlation between variation based models and defect rates, again points to mistakes as the dominant source of defects in modern processes, including production, and indicates that simplifying processes will reduce defect rates.

Data from the healthcare profession strengthen this conclusion. Weingart et al26 observed: “The characteristics of individual patients may be less important than the duration of care in explaining injury”. Andrews et al27 reported that the likelihood of an adverse event increased by 6% for each day spent in hospital. The intensity of an action can also affect the risk of injury: among paediatric patients admitted to a British university hospital, drug errors were seven times more likely to occur in the intensive care unit than elsewhere.
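The log-log relationship in fig 7 amounts to fitting a straight line to log(defects per unit) against log(complexity), where complexity is TAT minus a constant c times TOP. The sketch below performs that least squares fit on illustrative values; the product figures and the constant c are invented for demonstration and are not taken from the benchmarking data:

```python
import math

# Hypothetical products: (total assembly time TAT in seconds,
# total number of operations TOP, observed defects per unit DPU).
# Values are illustrative only.
products = [(120, 10, 0.002), (300, 22, 0.008),
            (900, 60, 0.030), (2400, 150, 0.110)]
c = 3.0  # assumed per-operation time constant

# Complexity metric: TAT - c * TOP, fitted on log-log axes.
xs = [math.log(tat - c * top) for tat, top, _ in products]
ys = [math.log(dpu) for _, _, dpu in products]

# Ordinary least squares for log(DPU) = slope * log(C) + intercept.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

print(f"slope {slope:.2f}, intercept {intercept:.2f}")
```

A straight-line fit on these axes corresponds to a power law between complexity and defect rate; the observation in fig 7 is that different manufacturers differ in intercept but share virtually the same slope.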

Generally, the complexity or difficulty of a process or product is accepted as having some relatively fixed value. However, we have shown that simple changes can often cut product and process complexity in half.1 Simplification of the process must occur in the earliest stage of process development because it is extremely difficult to change processes after they have been institutionalized. Thus, simplifying processes with a focus on eliminating mistakes represents the ultimate method of controlling defects at their source.

## THE BEST QUALITY CONTROL METHODS CONTROL MISTAKES

Three fundamentally different sources of defects have been discussed in this paper—namely, variation, mistakes, and complexity.

Historically, key concepts for controlling these defect sources have evolved in the same order as that listed above.28 The first major quality developments focused on controlling variation. Shewhart29 developed control charts in 1924, a key step in the evolution of statistical process control. The earnest systematic study of mistakes in the US appears to have started during the 1950s in connection with the effort to avoid catastrophic nuclear power plant failures. Although many methods of characterizing and controlling mistakes have been defined and proposed, none of the approaches has proved more effective or efficient than the poka-yoke and source inspection concepts developed by Shingo in the 1960s.23 The relationship between complexity and quality was the last of the three major defect sources to be quantified in the 1990s.8

### Controlling mistakes is the central theme

The highest levels of quality control are only achieved when teams recognize that mistakes are the dominant source of defects and that the best methods of controlling complexity, mistakes, and variation are all forms of mistake proofing. When a product is simplified, opportunities for mistakes are eliminated. Mistake proofing intervenes to assure that mistakes do not become defects. When traditional process adjustments are converted to “settings”, most of the tails of the distribution attributed to variation are eliminated because most set-up and adjustment mistakes are eliminated.

### An orderly quality improvement process

Although quality control techniques were characterized first for variation, then mistakes, and finally complexity, the most efficient method of achieving world class quality is to approach process improvement in the reverse order. The first goal should be to eliminate mistakes by simplifying the process. Why should simplification be the first step? If mistake proofing is done before simplification, subsequent simplification eliminates the need for many mistake proofing devices and concepts. Spending time developing unnecessary mistake proofing concepts is wasted effort that can be avoided by focusing on simplification first.

Traditional control of variation requires data collection, analysis, decisions, and process control activities. These actions consume resources and contribute to process inefficiencies. Furthermore, because the inspection occurs downstream of the process, it cannot prevent defects. In contrast, mistake proofing can virtually eliminate the need for quality related documentation activities while achieving superior results at lower cost. Thus, whenever possible, we should first seek upstream process control using mistake proofing rather than downstream feedback through statistical quality control (SQC). For similar reasons, when mistake proofing is inadequate, we should next try to convert adjustments to settings, turning to SQC only as a last resort. This pattern of quality improvement can be applied to any process improvement.
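The contrast between downstream detection and upstream prevention can be sketched as follows. This is a simplified illustration under assumed names; neither function represents an implementation described in the article.

```python
# Hypothetical sketch contrasting downstream detection (SQC-style) with
# upstream prevention (source inspection / mistake proofing).

def sqc_check(measurements, target, tolerance):
    """Downstream: defects are identified only after they have already
    been produced, by inspecting completed output against a tolerance."""
    return [m for m in measurements if abs(m - target) > tolerance]

def mistake_proofed_step(item, validate, process):
    """Upstream: the step refuses to run until its input passes
    validation, so a mistake is blocked before it can become a defect."""
    if not validate(item):
        raise ValueError(f"blocked at source: {item!r} failed validation")
    return process(item)
```

With `sqc_check`, a nonconforming item already exists by the time it is found; with `mistake_proofed_step`, the validation gate sits ahead of the work, which is the essence of controlling defects at their source.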

## CONCLUSIONS

Fundamental changes in the way that we think about quality control must occur to achieve defect rates below 50 ppm. To achieve the most dramatic improvements in quality with the least effort, constant attention must be given to controlling variation, mistakes, and complexity. The best control methods for each of these defect sources are forms of mistake proofing. This represents a shift in our paradigms, or the theories and models that we use to think about quality. We must recognize that mistakes are the dominant source of defects in modern products and services, that the consequences of mistakes are generally more severe than excessive variation, that most mistakes are unintentional, that a wide variety of mistakes can occur, that each specific type of mistake is a rare event, and that virtually every mistake can be controlled. Such dramatic adjustments in quality concepts require cultural changes.

### Key messages

• Variation, mistakes, and excessive complexity are three distinctly different sources of quality and safety problems that must each be controlled separately to achieve the highest quality performance.

• Mistakes are the dominant source of quality and safety problems in virtually all services, including medical care.

• Traditional quality control methods based on statistics will not control mistakes.

• The poka-yoke (or mistake proofing) approach developed by Shigeo Shingo is the only method that has proved to be effective in controlling virtually every mistake.

• The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing.

Since virtually every mistake must be controlled to achieve mistake-free processes, excellent mistake proofing takes action to prevent every mistake. Because the frequency and consequence of each type of mistake vary, and because so many different types of mistake occur, prioritizing the order of mistake proofing efforts is an essential part of a sound quality strategy. The principles essential for controlling mistakes already exist, and every organization and industry that has consistently applied them has found the techniques to be remarkably effective and highly efficient.
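One simple way to prioritize mistake proofing effort is to rank mistake types by a combined risk score. The mistake names and scores below are illustrative assumptions, not data from this report, and the frequency-times-consequence score is only one possible ranking rule.

```python
# Hypothetical sketch of prioritizing mistake proofing by the estimated
# frequency and consequence of each mistake type. All entries are
# illustrative assumptions on 1-10 scales.

mistakes = [
    {"type": "wrong dose transcribed", "frequency": 4, "consequence": 9},
    {"type": "label misread",          "frequency": 7, "consequence": 5},
    {"type": "step omitted",           "frequency": 2, "consequence": 8},
]

def prioritize(mistake_list):
    """Rank mistake types by a simple risk score (frequency x consequence),
    highest risk first, so mistake proofing addresses them in that order."""
    return sorted(mistake_list,
                  key=lambda m: m["frequency"] * m["consequence"],
                  reverse=True)
```

Such a ranking does not replace the goal of preventing every mistake; it only sequences the work so that the most damaging and most frequent mistake types are proofed first.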

Lapworth and Teal15 stated in 1994 that “Despite passing interest during each of the three last decades in ‘blunders’ occurring in clinical laboratories, the situation remains as Northam described it in 1977, namely that ‘although much effort and expense is being devoted to the assessment of analytical variation, little attention has been directed to the detection of laboratory blunders’”. Rather than merely detecting mistakes, the time has come when we can and must begin to prevent mistakes by applying the concepts of Shigeo Shingo. Healthcare organizations that lead in this effort will have the opportunity to improve productivity, decrease waste, and increase customer loyalty while reducing exposure to potentially costly litigation.