
Lessons from the Australian Patient Safety Foundation: setting up a national patient safety surveillance system—is this the right model?
W B Runciman

Correspondence to: Professor W B Runciman, Department of Anaesthesia and Intensive Care, Royal Adelaide Hospital, North Terrace, Adelaide SA 5000, Australia; wrunciman{at}bigpond.com

Abstract

The evolution of the concepts and processes underpinning the Australian Patient Safety Foundation's systems over the last 15 years is traced. An ideal system should have the following attributes: an independent organisation to coordinate patient safety surveillance; agreed frameworks for patient safety and surveillance systems; common, agreed standards and terminology; a single, clinically useful classification for things that go wrong in health care; a national repository for information covering all of health care from all available sources; mechanisms for setting priorities at local, national and international levels; a just system which caters for the rights of patients, society, and healthcare practitioners and facilities; separate processes for accountability and “systems learnings”; the right to anonymity and legal privilege for reporters; systems for rapid feedback and evidence of action; and mechanisms for involving and informing all stakeholders. There are powerful reasons for establishing national systems, for aligning terminology, tools and classification systems internationally, and for rapid dissemination of successful strategies.

  • patient safety
  • incident reporting
  • quality improvement


“The value of history lies in the fact that we learn by it from the mistakes of others—learning from our own is a slow process” (W Stanley Sykes, 1894–1961)

By the late 1980s technological advances had made readily available the means for enormous improvements in patient safety. For example, for a small fraction of 1% of the overall cost of a surgical procedure, pulse oximetry could provide a beat-by-beat auditory representation of the heart rate and rhythm and of the adequacy of oxygenation of arterial blood, and breath-by-breath online analysis of expired air could confirm the adequacy of alveolar ventilation. However, clinicians responsible for the safety of patients in operating rooms, intensive care units, and emergency departments found that they were “at the end of the queue” when funds were allocated to purchase such devices. On reflection it was apparent that this disregard for patient safety permeated the healthcare system. With a few exceptions—such as the electrical safety of medical devices—patient safety was simply “not on the agenda”. For this reason, in May 1987 about 60 influential Australian clinicians were invited to a meeting on monitoring and patient safety at which it was decided to set up the Australian Patient Safety Foundation (APSF) as a “non-profit-making organisation with the aims of promoting, organising, funding, conducting research into, and establishing mechanisms for advancing patient safety”.1

One of the 10 original aims was to promote, coordinate, and organise incident reporting. As there is currently much debate about patient safety and reporting systems (box 1), it is worth giving a brief history of how the concepts and processes underpinning the Australian Incident Monitoring System (AIMS) evolved as different strategies were deliberately tried from time to time, some to be abandoned and others adopted as key components of the system.

Box 1 Current issues

  • Should reporting be voluntary or mandatory?

  • Should reports be anonymous, confidential or in the public domain?

  • Should free narrative or tick boxes be used?

  • Should forms be “generic” or developed locally?

  • What sources of information should be used?

  • How can doctors be encouraged to report?

  • How can accountability be reconciled with safeguarding reporters?

  • How should information be collated and classified?

  • How should priorities be set?

  • How should feedback and evidence of action be provided?

It also became clear that there is no single source of comprehensive information about “things that go wrong” in health care; some of the current sources are shown in box 2. Each has strengths and weaknesses, and the debate must be about how much of the available resources should be allocated to each, rather than which one should be used exclusively. Some of the strengths and weaknesses of these are outlined in this paper, and the lessons learnt by the APSF over the last 15 years—both from our own experiences and from interactions with others—are summarised.

Box 2 Sources of information about things that go wrong in health care

  • Incident monitoring

  • Medical record review

  • Routine data collections (ICD-10)

    • death certificates

    • hospital discharges

    • surveys of general practice

  • Existing registers and reporting systems for:

    • morbidity and mortality

    • adverse drug reactions

    • equipment failure and hazards

  • Complaints

  • Medicolegal investigations

  • Investigations by coroners

  • Results of enquiries and investigations

  • Literature searches

EVOLUTION OF THE CONCEPTS AND PROCESSES UNDERPINNING A PATIENT SAFETY SURVEILLANCE SYSTEM

1987: AIMS-Anaesthesia

This national, voluntary, anonymous reporting system, based largely on work by Cooper et al,2 was set up by the APSF and coordinated by a group of anaesthetists.3,4 Reports were reviewed locally and then sent to the APSF where keywords were generated and entered into a database. The forms had a free narrative section and over 200 “tick boxes”. It rapidly became apparent that the free narrative section provided most of the useful information.

As there was no national mechanism for protecting those who were reporting sensitive information, anonymity was a key feature. No identifying details were sought and any inadvertently supplied were deleted on receipt of the report.

By 1992, 2000 incidents had been reported. These were analysed and 30 manuscripts were published in the journal Anaesthesia and Intensive Care.5 This analysis showed that aggregating the experiences of practitioners in the field in this way could bring to light much new information of great practical importance which could not be obtained using conventional research methods. Many of the things which go wrong in health care occur quite rarely as isolated cases and only present a coherent picture when the information is aggregated. As a result of this analysis, many changes were made at both local and national levels. Much new information was obtained about the applications and limitations of monitors in clinical use, about the common causes of signs such as hypoxaemia and hypercapnia, about how rare problems (such as anaphylaxis and gas embolism) present and progress, and about the relative contributions of human, system, and equipment failure to clinical incidents. A typical example of how one event triggered an analysis of a collection of individually rare incidents, which in turn led to a national initiative, is presented in box 3. In nearly all such instances widely held misconceptions are shown to be simplistic or inaccurate when a substantial body of information from the field is collated and analysed.

Box 3 Awareness during a hip replacement

During a total hip replacement procedure under combined general and epidural anaesthesia in 1994, the anaesthetist decided to replace the vaporiser. When the new vaporiser was inserted into the system it failed to seat properly, allowing room air to be entrained into the anaesthetic breathing circuit. As a result the patient awoke fully but was paralysed. Fortunately, the epidural anaesthetic was working effectively and he felt no pain, but he was fully aware of what was going on around him. Upon recovery he not surprisingly complained and queried how this could have come about; the matter received some attention from the national media. The professional bodies representing anaesthetists called for all such vaporiser interfaces with anaesthetic machines to be recalled and modified. The manufacturers suggested that this was an operator problem. The APSF searched the AIMS-Anaesthesia database for vaporiser incidents and located an extraordinary array of problems within a few hours, although almost no useful information was found in the literature and, up to that time, there had been only seven reports of vaporiser problems to the Therapeutic Goods Administration. Over half of the 136 reports represented pure “human factor” problems—for example, a vaporiser left on the wrong setting, or a vaporiser left off when it was supposed to be on. Several fatal and potentially fatal problems had occurred, and it was apparent that standard practices with respect to the filling, care, and use of vaporisers were exposing patients to a wide variety of avoidable hazards. It became clear that the only practical solution was to monitor the concentration of volatile agent in the patient breathing circuit. As a result of the information produced by the APSF database, the College of Anaesthetists set up a “guideline” (in effect, a de facto national standard) requiring the use of online volatile agent monitoring during anaesthesia.

1991: The Professional Indemnity Review (PIR)

The Federal Minister for Health commissioned this review into compensation and professional indemnity in health care. It funded AIMS pilots in other specialties, the development of the APSF classification systems, and the medical record review described below. It also fostered the passage through Federal Parliament of Part VC of the Health Insurance Act, which provided legal privilege for eligible national quality improvement initiatives.4 Information brought into existence for both AIMS and the medical record review study was “declared” by the Minister as being protected under this Act.

1993: AIMS-Other Specialties

Funding was obtained from the PIR to extend AIMS to pilot studies in general practice, psychiatry, obstetrics and gynaecology, emergency medicine, and intensive care. Studies were subsequently carried out in areas such as general surgery, rehabilitation medicine, hyperbaric medicine, and helicopter based retrieval medicine. Each of these disciplines was advised by an APSF member but was deliberately encouraged to develop its own forms and reporting systems. The report forms ranged from a simple request for a free narrative through to a series of complex forms for a single discipline, each with multiple tick boxes. A meeting was held in 1994 to discuss the experiences of those involved. It was concluded that the right to anonymity was important, that much value lay in having simple forms with plenty of space for free narrative, and that valuable information could be obtained in each of the specialties.4 However, it was also concluded that specific funding would be needed to run a sustainable system. Despite substantial success in disciplines such as intensive care (in which over 100 units were recruited), reporting and data analysis have been suspended in nearly all these areas pending definitive funding.4

1993: AIMS-Generic

Funding was also obtained in 1993 to trial “generic” incident monitoring across six entire hospital systems. The initial trial involved running anonymous reporting in parallel with the existing nurse based reporting systems, information from which was in the public domain. There was strong feedback that a single form for all types of incidents would be highly desirable, as would qualified legal privilege. A system was then developed in several stages which culminated in a reporter having the option of anonymity or confidentiality (with qualified legal privilege). In parallel, a simple classification system was developed to allow relevant sets of reports—for example, for wards, divisions or the hospital executive—to be produced at hospital level. Hospitals could now compare their profiles and reporting rates with those of like (but de-identified) institutions. The principle of “whoever provided the information owns it” was vital to ensure ongoing reporting.

1994: The Generic Occurrence Classification (GOC)

Analysis of the first 2000 incidents reported to AIMS made it clear that there was no clinically useful, comprehensive classification for things that go wrong in health care, and that the use of keywords and text analysis was cumbersome and time consuming. It was therefore decided to develop a classification using “natural categories” and “natural mapping” to generate a multi-axial framework into which all iatrogenic events could be classified. This was designed to elicit their salient features, place them in context, and record their contributing factors—whether system or human based. The original GOC contained about 12 500 categories and was designed to classify incidents and events at a regional or national level from all the sources listed in box 2.6 New categories were developed when necessary and added to an expanded version of the classification.
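For readers who think in terms of data structures, the sketch below illustrates the idea of a multi-axial record in Python. It is not the APSF implementation: the class name, axis names, and category values are hypothetical, chosen only to show how a single occurrence can carry its type, context, contributing factors, and outcome together and be searched across axes.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical axes for a multi-axial occurrence record. The real GOC held
# about 12 500 categories; here each axis is simply a free-text category label.
@dataclass
class GOCRecord:
    occurrence_type: str                  # what went wrong (a "natural category")
    context: str                          # where and when it happened
    contributing_factors: List[str] = field(default_factory=list)  # human, system, equipment
    outcome: str = "no harm"              # consequence for the patient

    def matches(self, keyword: str) -> bool:
        """Crude cross-axis search: true if any axis mentions the keyword."""
        axes = [self.occurrence_type, self.context, self.outcome, *self.contributing_factors]
        return any(keyword.lower() in axis.lower() for axis in axes)

# Example: the vaporiser incident from box 3, classified along several axes.
incident = GOCRecord(
    occurrence_type="vaporiser not seated, room air entrained",
    context="operating room, combined general and epidural anaesthesia",
    contributing_factors=["equipment design", "human factor: check omitted"],
    outcome="awareness under anaesthesia",
)
print(incident.matches("vaporiser"))  # True
```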

1995: The Quality in Australian Health Care Study (QAHCS)

The PIR commissioned this study to determine the prevalence of “adverse events” in Australian acute care hospitals. The initial results of this retrospective review of 15 000 admissions from 1992 showed that 16.6% were associated with an adverse event.7 Because of the discrepancy between this rate and that in an ostensibly identical study in hospitals in Colorado and Utah (3.5%),8 the APSF was commissioned to classify all the adverse events in both studies into the GOC to try to determine the reasons for the difference. When methodological differences between the studies were minimised9 it was found that the nature and frequency of serious adverse events were virtually identical.10 In both studies 1.7% of admissions were associated with a major disability and 0.3% with iatrogenic death. The original discrepancy was largely accounted for by the fact that Australian reviewers included as adverse events a range of problems (many minor) which were not included by US reviewers.10

Possibly the most useful finding from this study was that, when adverse events were classified into categories according to how they could be prevented, only one in 10 of the adverse event types would be encountered more than once every 2 months in the average 250-bed hospital, and the remainder would occur less frequently and be represented by some 500 types.4,11 An important implication of this finding is that a large database, aggregating adverse events from many such hospitals, would be required to characterise these individually rare but collectively important low frequency events. There were three other useful findings from the study: (1) an indication of priorities could be obtained on the basis of resource consumption (60% of the resources were wasted by adverse events which resulted in only minor disabilities); (2) serious disability and death were distributed among many categories of things that go wrong, emphasising the importance of studying “near misses” and events resulting in minor disabilities as well as events with serious outcomes11; and (3) medical record review is not useful for determining how or why things went wrong as this information is simply not in the records.7
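The implication about database size can be made concrete with back-of-envelope arithmetic. The per-hospital rate and target case count below are illustrative assumptions rather than study figures; the point is simply that a rare event type which a single hospital sees about once a year becomes characterisable within months once reports are pooled nationally.

```python
# Illustrative only: how long does it take to accumulate enough cases of a
# rare adverse event type to characterise it? Rates are assumed, not measured.
cases_needed = 50                   # cases judged sufficient to characterise the type
rate_per_hospital_per_year = 1.0    # assumed occurrences per hospital per year
n_hospitals_in_pool = 150           # assumed size of a national pool

years_alone = cases_needed / rate_per_hospital_per_year
cases_per_year_pooled = rate_per_hospital_per_year * n_hospitals_in_pool
years_pooled = cases_needed / cases_per_year_pooled

print(f"Single hospital: ~{years_alone:.0f} years to accumulate {cases_needed} cases")
print(f"Pool of {n_hospitals_in_pool} hospitals: ~{years_pooled:.1f} years")
```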

2000: AIMS-2

It had become apparent that maintaining different forms, classification systems, and databases for the various specialties was expensive, and that it was difficult to conduct searches for specific problems across the various databases. A user group was therefore formed to develop a new system which could (1) be used across the entire spectrum of a national healthcare system by staff, patients and relatives, (2) be sufficiently flexible to meet the requirements of specialty based units and generic reporting, (3) allow web based reporting and rapid feedback and analysis, and (4) be suitable for both local use and a national data collection.

An explicit conceptual “reference model” based on the Reason model of complex system failure was developed,12 a mechanism was established for standardising definitions and terminology,13 and a coding classification process was developed whereby coders are simply asked sets of questions specific to each aspect of each type of incident. The software then stores the answers as unique terms describing the attributes of the relevant components of the reference model. A major advance is the ability to enter data electronically both for primary reporting and for secondary enhancement. For example, a root cause analyst can enter additional comprehensive information based on a series of “cues” relevant to each particular incident; these cues are based on the experience of coding over 50 000 incidents. The principle was to develop a secure, expansible, flexible, language independent system which would allow reliable coding at source and enable users to obtain a wide range of reports rapidly. It was designed to accommodate findings from root cause analyses, the requirements of various medical specialties, and information from the sources listed in box 2.14
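A minimal sketch of such question-driven coding is given below. It is an illustration under assumptions, not the AIMS-2 software: the question sets, attribute names, and the code_incident function are invented, but the flow follows the description above, in that the coder answers questions specific to the incident type and the answers are stored as terms attached to components of the reference model.

```python
from typing import Dict, List, Tuple

# Hypothetical question sets, keyed by incident type. In AIMS-2 the cues were
# derived from the experience of coding over 50 000 incidents; these examples
# are invented for illustration only.
QUESTION_SETS: Dict[str, List[Tuple[str, str]]] = {
    "medication": [
        ("agent", "Which drug was involved?"),
        ("stage", "At which stage did the problem occur (prescribing/dispensing/administration)?"),
        ("detection", "How was the problem detected?"),
    ],
    "equipment": [
        ("device", "Which device was involved?"),
        ("failure_mode", "How did it fail or how was it misused?"),
        ("detection", "How was the problem detected?"),
    ],
}

def code_incident(incident_type: str, answers: Dict[str, str]) -> Dict[str, Dict[str, str]]:
    """Store each answer as a term attached to an attribute of the reference model."""
    coded = {"incident_type": {"term": incident_type}}
    for attribute, question in QUESTION_SETS[incident_type]:
        coded[attribute] = {"question": question, "term": answers.get(attribute, "not answered")}
    return coded

# Example: a primary report coded at source; a root cause analyst could later
# enrich the same record with answers to further cue-driven questions.
record = code_incident("equipment", {
    "device": "anaesthetic vaporiser",
    "failure_mode": "not seated correctly, room air entrained",
    "detection": "patient movement and awareness",
})
print(record["failure_mode"]["term"])
```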

2000: Analysis of routine data collections for adverse events

The Australian Institute of Health and Welfare (AIHW), the body charged with collecting national health statistics, routinely collects information on causes of mortality and morbidity from death certificates, hospital discharges, and surveys of general practice. These are coded into the International Classification of Diseases, Version 10 (ICD-10). Although these collections are focused on the underlying causes of mortality and morbidity rather than on adverse events, some adverse events are captured by “external cause” codes and some disease/diagnosis codes specific for adverse events.

Adverse events were implicated in just under 3000 deaths in 1998, just under 5% of discharges in 1997–98, and just under 1% of general practice encounters in 1998–2000.15 A comparison with QAHCS data (which indicates that adverse events are associated with over 10 000 deaths each year and 14% of discharges10) suggests that only about one third of adverse events are captured by these routine collections. However, even if all adverse events were captured in this way, the classification used does not provide information that is of any use to clinicians who wish to reduce iatrogenic harm. Although there are plans to improve the capture of adverse events by improvements to these routine collections and ICD-10, it may be more effective to add a classification such as the GOC to the family of WHO classifications and to enhance “cross mapping” to ICD-10 and other relevant systems.
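The "about one third" capture estimate can be checked with simple arithmetic. The sketch below uses the rounded figures quoted in the text (just under 3000 deaths captured versus over 10 000 estimated, and just under 5% versus 14% of discharges), so the results are approximate and illustrative only.

```python
# Rough check of the "about one third" capture estimate, using the rounded
# figures quoted in the text (approximate, for illustration only).
deaths_captured_routinely = 3000    # ICD-10 coded deaths, 1998 (just under)
deaths_estimated_qahcs = 10000      # QAHCS-based annual estimate (over)

discharge_rate_routine = 0.05       # just under 5% of discharges, 1997-98
discharge_rate_qahcs = 0.14         # 14% of discharges (QAHCS)

print(f"Deaths captured: ~{deaths_captured_routinely / deaths_estimated_qahcs:.0%}")
print(f"Discharges captured: ~{discharge_rate_routine / discharge_rate_qahcs:.0%}")
# Both fractions come out at roughly 30-35%, i.e. about one third.
```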

2000: The Australian Council for Safety and Quality in Health Care

This body was set up in early 2000 by the Federal Minister for Health.16 It has obtained substantial funding and is about to start a programme of work based on a number of problem specific task forces. It is still in the process of determining the required attributes of a national reporting system but has committed to “developing nationally consistent functional specifications for incident monitoring which support aggregation of incident data for analysis and actions at a national level”.17

2000: The Australian Health System Safety Surveillance Unit

In recognition of the fact that there was no single source of comprehensive information about things that go wrong in health care and of the need for a national repository, it was agreed that the APSF should become a collaborating unit of the Australian Institute of Health and Welfare (AIHW) in order to collect information at a national level from all available sources. In this way the expertise and systems developed by each organisation could be used to maximum advantage, including the improvement of routine data collection by the AIHW and the ability to use the GOC to classify adverse events.

INTERNATIONAL COLLABORATIONS

The original AIMS-Anaesthesia system has been adapted for use in over 30 member countries of the World Federation of Societies of Anaesthetists. More recently, APSF systems have been adopted by the National Patient Safety Agency as a national reporting system for near misses and adverse events in the NHS,18 and collaboration is underway with two patient safety centres in the USA. Interest has been expressed by a number of countries in using the new classification and data repository systems of the APSF.

LESSONS LEARNT BY THE APSF OVER 15 YEARS

The need to put patient safety and reporting and surveillance systems in context

Patient safety is a somewhat nebulous concept to many, and there is a need for a simple conceptual framework so that patient safety may be placed in context with respect to other healthcare activities (fig 1).

Figure 1

Patient safety in context. Patient safety is an important component of risk management, clinical governance, and quality improvement. Risk management is an important and necessary component of both clinical and corporate governance, and the quality improvement systems which have been established in most organisations provide a basis for introducing the necessary changes, both for patient safety and quality initiatives and for determining that they have been effective.

Reporting and surveillance systems also need to be placed in context; the risk management standard AS/NZS 4360 provides a useful framework4,19,20 (fig 2) and has been adopted in both Australia and the UK. It is apparent that a patient safety surveillance system would be concerned with the identification, evaluation, and analysis of risks to patients so that priorities can be set and the problems characterised as the necessary first step in devising corrective strategies. A major advantage of addressing clinical risk using such a framework is that it is familiar to those who will have to provide the resources for the substantial investments that will be necessary to redesign healthcare processes.

Figure 2

A simplified representation of a risk management framework adapted from Standard AS/NZS 4360.20

The need for common tools and terminology

Patient safety is now on the healthcare agenda in most western countries. Several countries including Australia and the UK have made commitments to some form of national reporting and/or patient surveillance system. It is highly desirable to set national and, preferably, international standards for the basic attributes of reporting systems and to use a common classification system for patient safety. A commitment has already been made to this principle, both in Australia and in the UK. Comparisons can then be made of the profiles of problems in different countries and of the effects of interventions on these profiles.

The need to set priorities and to act at local, national, and international levels

At a local level the immediate assessment of the severity of a risk may be carried out based on the likelihood of recurrence of the problem and the potential outcome for patients or the system. Scoring each incident or event using a 5 × 5 matrix of severity of outcome and likelihood of recurrence, as adopted by the NHS,19,20 or a 4 × 4 matrix, as used by the Veterans Administration (VA) in the USA,21 provides a basis for deciding which problems need to be addressed urgently.
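A minimal sketch of such matrix scoring is shown below. The band thresholds and labels are illustrative assumptions rather than the NHS or VA definitions; the mechanism of cross-tabulating a 1 to 5 severity rating against a 1 to 5 likelihood rating and acting on the resulting band is what matters.

```python
def risk_band(severity: int, likelihood: int) -> str:
    """Score an incident on a 5 x 5 severity x likelihood matrix.

    severity: 1 (negligible) to 5 (catastrophic)
    likelihood: 1 (rare) to 5 (almost certain)
    The band cut-offs below are illustrative, not the NHS or VA thresholds.
    """
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must each be between 1 and 5")
    score = severity * likelihood
    if score >= 15:
        return "extreme: address urgently"
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

# Example: a catastrophic but unlikely event still scores in the high band,
# so it would be escalated rather than handled as routine.
print(risk_band(severity=5, likelihood=2))  # "high"
```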

However, it is clearly desirable that problems and solutions identified at this level should also be fed into a national repository and, ideally, disseminated internationally. Priorities for national initiatives may also be informed by ranking the frequency, severity, and resource consumption of adverse events identified by medical record review. While dangerous events may be well handled locally, it is also necessary to address the more mundane events which consume large sums of money.11 Interventions such as changing an international standard or establishing a convention that requires all monitor alarms to default to standard parameters on being turned on have the capacity to eliminate problems worldwide without users even being aware that they have been introduced.4

The need for large repositories to collate information from many sources

One of the most important findings to emerge from the re-analysis of the Australian and US medical record review studies was that at least three quarters of adverse events occurred sufficiently infrequently at a local hospital level to preclude meaningful characterisation. Such characterisation is necessary to allow healthcare practitioners to devise appropriate corrective strategies. Also, information from different sources reveals quite different profiles of what is going wrong. The APSF, the NPSA in the UK, and the VA in the USA are all advancing the benefits of national or large scale repositories.4,17,19,21

The need for a just system

It is imperative that the rights of individual patients, the interests of society at large, the needs and rights of healthcare practitioners, and the requirements of healthcare facilities are all recognised and accommodated. This has been succinctly summarised by Reason as the “need for a just system”. It is not possible to accomplish this with a single reporting or surveillance system. While it is important to ensure that there is full disclosure of material facts when patients have been harmed and that appropriate processes are instituted to ensure accountability, it is also important to remember that the current high rate of iatrogenic harm exists with these measures in place. There is a strong culture of blame in health care, exacerbated by “outcome bias” due to the severity of some of the unintended consequences of things that go wrong.22,23 It is vital to find out not only what is going wrong, but how and why these problems are occurring if appropriate preventive and corrective measures are to be devised.

The need for separate processes for accountability and for “systems learnings”

Both the VA in the USA21 and the APSF4 have developed quite separate systems for accountability and for “systems learnings”. It is vital that those involved with things that go wrong in health care can provide full details of how and why they went wrong without being blamed or ridiculed or having their careers placed in jeopardy. In Australia all information on AIMS forms is legally protected.4 Reporters also have the option of complete anonymity. Many doctors who fill in forms anonymously are happy to identify the incidents as theirs at audits with peers, thus allowing validation and follow up, but do not want their names to be recorded because of the possibility, no matter how remote, that this could disadvantage them at some stage in the future. The process for reporting to facilitate system improvement must be dissociated from that for ensuring accountability. With the VA system in the USA, as long as it has been established that the problem did not involve an intentionally unsafe or criminal act, abuse of patients, or the use of drugs or alcohol by those involved, the information is treated entirely confidentially and powerful safeguards are in place to protect reporters.21 It is important to note that this does not detract from the rights of patients, but adds a mechanism for acquiring important additional information which serves overall societal interests by facilitating a reduction in iatrogenic harm.24

The need for feedback and the evidence of action

These are both the most important and the most difficult components of the process. The most powerful and developed process is that in place in the VA system, in which objective criteria are used to determine which problems are to undergo root cause analysis and the CEO is then required to concur with the recommendations.21 Concurrence requires that actions be specified, funded, and implemented; reasons for non-concurrence must be explicit and disseminated both to investigators and over the whole system.

The APSF has provided newsletters, publications, and advice at a system wide level but has had to rely on each health facility to provide local feedback and evidence of action; this has been patchy, at best. Deficiencies in the ability of professional bodies (other than those for anaesthetists) to introduce change are also evident and will need to be rectified.4

The need to involve and inform healthcare professionals, consumers, and the public at large

Both the APSF and the Australian Council for Safety and Quality in Health Care recognise the need for reporting by consumers and the involvement of consumers as well as healthcare professionals in setting standards and in devising and implementing strategies to enhance patient safety.4,17 Sensationalist reporting of tragic outcomes, and a general lack of interest on the part of the media in “good news stories”, have had a corrosive effect on trust, have caused alarm in the community, and have alienated healthcare professionals. Widespread use of the word “error” to imply blunders and negligence, rather than a normal manifestation of cognitive function, has exacerbated this problem. There is much to be done to redress this situation, and a concerted effort will be needed to find responsible journalists and to involve and inform consumers.

CONCLUSION

Both Australia and the UK are committed to establishing mechanisms for collating, classifying, analysing, and acting on patient safety problems at a national level.4,17,21 Some of the desirable attributes of a national patient safety surveillance system are summarised in box 4. The VA is partnering with NASA to develop a de-identified voluntary patient safety reporting system to acquire information to which other systems do not have access, and to apply this to the broader healthcare system. There are powerful arguments for establishing national patient safety surveillance systems. An important further step would be to establish an international patient safety reference group to align terminology, tools, and classification systems and to promote the rapid dissemination of strategies that have proved successful.

Box 4 Some desirable attributes of a national patient safety surveillance system

  • Coordination by an independent body for patient safety surveillance

  • Agreed frameworks for:

    • patient safety

    • reporting systems

    • a surveillance system

  • Agreed standards for reporting

  • A single, clinically useful classification system for things that go wrong

  • A national repository for data from all available sources about these things

  • Data to be collected across the whole spectrum of health care

  • Mechanisms for setting priorities at local, national and international levels

  • A just system, accommodating the needs and rights of:

    • patients and their relatives, friends and carers

    • health professionals

    • health facilities

    • society at large

  • Separate processes for accountability and “systems learnings”

  • Explicit criteria for deciding whether the process should be an open one for accountability or be afforded protection and qualified privilege

  • A blame-free culture for reporting

  • The right to anonymity for reporters

  • Qualified legal privilege for quality and safety improvement (“systems learnings”) information

  • Ownership of “systems learnings” information by those who provide it

  • Systems for rapid feedback and evidence of action

  • Mechanisms for involving and informing all stakeholders

  • Mechanisms for disseminating successful strategies internationally

Appendix: Abbreviations used in text

AIHW, Australian Institute of Health and Welfare: a national body responsible for health statistics.

AIMS, Australian Incident Monitoring System: a system designed to collect, analyse, and disseminate information about things that go wrong across the entire spectrum of health care.

APSF, Australian Patient Safety Foundation: a not-for-profit organisation set up in 1988 to promote patient safety.

GOC, Generic Occurrence Classification: a multi-axial classification system for things that go wrong in health care—including contributing factors, salient features, and outcomes.

ICD-10, International Classification of Diseases Version 10: the official classification of the WHO for diseases and causes of death.

NASA, National Aeronautics and Space Administration: a leading force in scientific research and in stimulating public interest in aerospace exploration, as well as science and technology in general.

NHS, National Health Service: a universal healthcare system provided in the United Kingdom for its citizens.

NPSA, National Patient Safety Agency: an agency set up in the United Kingdom in 2001 to coordinate patient safety initiatives for the NHS.

VA, Veterans Administration: a network of 73 health facilities in the USA for people who have been in the American services.

WHO, World Health Organisation: formed in 1948 as a key agency of the United Nations Organisation.

REFERENCES