Abstract
Patients, clinicians and managers all want to be reassured that their healthcare organisation is safe. But there is no consensus about what we mean when we ask whether a healthcare organisation is safe, or about how this is achieved. In the UK, the measurement of harm, so important in the evolution of patient safety, has been neglected in favour of incident reporting. The use of softer intelligence for monitoring and anticipation of problems receives little mention in official policy. The Francis Inquiry report into patient treatment at the Mid Staffordshire NHS Foundation Trust set out 29 recommendations on measurement, more than on any other topic, and made the measurement of safety an absolute priority for healthcare organisations. The Berwick review found that most healthcare organisations at present have very little capacity to analyse, monitor or learn from safety and quality information. This paper summarises the findings of a more extensive report and proposes a framework which can guide clinical teams and healthcare organisations in the measurement and monitoring of safety and in reviewing progress against safety objectives. The framework has so far been used to promote self-reflection at both board and clinical team level, to stimulate an organisational check or gap analysis of safety information and to promote discussion of ‘what could we do differently?’.
- Patient safety
- Adverse events, epidemiology and detection
- Risk management
- Incident reporting
- Medical error, measurement/epidemiology
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/
Introduction
Patients, clinicians and managers all want to be reassured that their healthcare organisation is safe. The organisation in question might be a general practice, a ward or department, or an entire hospital or healthcare system. But what exactly do we mean when we ask whether a healthcare organisation is safe? Anyone who has ever listened to a board, a clinical meeting or a group of any kind discuss this question will know that many different views will be advanced and defended with passion, if not always with clarity. This paper summarises the findings of a more extensive report1 and proposes a framework which can guide boards, clinical teams and healthcare organisations in using a holistic approach in the measurement and monitoring of safety and in reviewing progress against safety objectives. While our case studies and our examples are primarily UK based, we believe that the broader framework should be applicable in other healthcare systems.
Safety is concerned with the myriad ways in which a system can fail to function, which are necessarily vastly more numerous than the acceptable modes of functioning. Some of these failures may be familiar, even predictable, but the system may also malfunction in unpredictable ways. Safety is partly achieved by being alert to these perturbations, responding rapidly to keep things on track. Doctors, nurses and managers do this all the time in healthcare, probably to a greater extent than in any other industry. But when they succeed, or the system compensates in other ways, these actions are in a sense invisible. This suggests that assessing safety will require looking beyond a set of metrics to considering how it might be possible to monitor the functioning of the wider healthcare system.
In the UK there has been increasing government focus on assessing both quality and safety over the past 10 years.2 A very large number of quality outcomes have been specified, but the approach to safety has been much narrower, leaving many aspects of safety unexplored.3 The measurement of harm, so important in the evolution of patient safety, has been almost completely neglected.4 The use of softer intelligence for monitoring and anticipation of problems receives little mention in official policy. The Francis Inquiry report into patient treatment at the Mid Staffordshire NHS Foundation Trust set out 29 recommendations on measurement, more than on any other topic, and made the measurement of safety an absolute priority for healthcare organisations.5 The Berwick review found that ‘most health care organisations at present have very little capacity to analyse, monitor, or learn from safety and quality information. This gap is costly and should be closed. Early warning signals can be valued and should be maintained and heeded’.6 In this paper we set out proposals for how this might be achieved in practice.
Methods
We began by conducting three scoping reviews. These covered safety measurement in a range of high risk industries; conceptual approaches and models of systems safety; and the measurement of safety in healthcare. Abridged versions of these reviews became chapters in the main report.1 We also conducted additional searches on the technical properties of metrics, safety indicators and the role of patients and families in monitoring safety. The reviews were based on author and keyword searches of PubMed and internet search engines, together with a review of bibliographic lists to identify relevant publications. The websites of key organisations were included where appropriate, enabling us to access technical reports and guidance documents, for example those issued by national and state regulators of different industries.
The scoping reviews on high risk industries and models of safety drew out the main practical implications for healthcare. We found that the measurement and monitoring of safety in other industries has evolved to encompass both lagging and leading indicators, to examine several different facets of safety and to use a variety of different methods of assessment and measurement. The specific tools, techniques and methods of other industries may not always transfer easily to healthcare. However, the understanding and principles behind safety measurement in other industries informed our approach to healthcare.
We conducted interviews with a range of senior staff in national organisations in the UK. For our case studies in healthcare organisations we developed a template to describe the information we required. We approached organisations in both the UK and internationally that we knew to be seriously engaged in the assessment and improvement of safety. These covered acute, community, mental health and primary care services, and specific services such as obstetrics and anaesthetics where measurement of safety is well developed. The case studies were conducted by interviews and visits to the organisations or via email where visits were impractical. To supplement the case studies we reviewed websites and board papers relating to patient safety from a range of other NHS trusts in England.
Findings: five fundamental questions
What exactly do we want to know when we ask whether a healthcare organisation is safe? We could look for a single defining index of safety; we might think of safety in terms of a set of core standards; we might seek it in the attitudes and behaviours of staff, perhaps in terms of safety culture. One reason these discussions are so difficult is that the underlying question has a number of different facets, which are not always clearly distinguished. A further problem is that safety is sometimes equated with compliance and assurance; in contrast we consider that safety must be approached in the spirit of active inquiry. In considering the evidence from the scoping reviews and the case studies we therefore decided that organisational safety should be approached by posing five fundamental questions:
Has patient care been safe in the past? We need to assess rates of past harm to patients, both physical and psychological.
Are our clinical systems and processes reliable? This refers to the reliability of safety critical processes and systems, and also to the capacity of staff to follow safety critical procedures.
Is care safe today? This is the information and capacity to monitor safety on an hourly or daily basis. We refer to this as ‘sensitivity to operations’.
Will care be safe in the future? This refers to the ability to anticipate, and be prepared for, problems and threats to safety.
Are we responding and improving? This is the capacity of an organisation to detect, analyse and integrate safety information, and to respond and improve as a result.
These five core questions lead directly to the five dimensions of our framework (figure 1): (i) past harm; (ii) reliability; (iii) sensitivity to operations; (iv) anticipation and preparedness; and (v) integration and learning. Table 1 shows some examples of methods and approaches pertinent to each of the five dimensions. We next summarise some of the key features of each dimension.
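Table 1 in the main report gives fuller examples of methods under each dimension. As a purely illustrative aid, the sketch below shows one way a clinical team might organise its own safety information sources by dimension and spot gaps; the example items are drawn from those mentioned in this paper, and the grouping is a hypothetical aid rather than a reproduction of table 1.

```python
# Illustrative only: one way a clinical team might organise its safety
# information sources under the five dimensions and identify gaps. The example
# items are drawn from those mentioned in this paper; the grouping is a
# hypothetical aid, not a reproduction of table 1.
SAFETY_FRAMEWORK = {
    "past harm": ["record review", "standardised mortality", "patient safety indicators"],
    "reliability": ["hand hygiene audit", "timely pre-operative antibiotics", "availability of medical records"],
    "sensitivity to operations": ["safety walk-rounds", "handovers and ward rounds", "briefings and debriefings", "patient interviews"],
    "anticipation and preparedness": ["safety culture survey", "staffing-level risk assessment", "safety cases"],
    "integration and learning": ["incident reports", "complaints and claims", "clinical audit findings"],
}

def report_gaps(sources_in_use):
    """Return, per dimension, the example sources a team is not yet using."""
    return {dimension: [s for s in examples if s not in sources_in_use]
            for dimension, examples in SAFETY_FRAMEWORK.items()}

if __name__ == "__main__":
    in_use = {"record review", "hand hygiene audit", "incident reports"}
    for dimension, missing in report_gaps(in_use).items():
        print(f"{dimension}: not yet used -> {missing if missing else 'none'}")
```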
Has care been safe in the past? The measurement of harm
Most patients are vulnerable, to some degree, to infections, adverse drug events, falls, and the complications of surgery and other treatments. Patients who are older, frailer or have several co-morbidities may be affected by over-treatment, polypharmacy and other problems such as delirium, dehydration or malnutrition.7 In mental health, suicide, violence and feeling safe on in-patient units are critical concerns. In any setting patients may also suffer harm from rare and perhaps unforeseeable events stemming from new treatments or new equipment (box 1). To assess harm from healthcare, we ideally have to consider all these kinds of events.
Box 1 A typology of patient harm
Treatment-specific harm
Harm that results from specific treatments or the management of a particular disease, with varying degrees of preventability. This would include adverse drug reactions, surgical complications, wrong site surgery and the adverse effects of chemotherapy
Harm due to over-treatment
For example, polypharmacy and the consequent drug interactions are a major hazard, in that the benefits received from multiple treatments can be outweighed by the risks and adverse consequences
General harm from healthcare
Hospital-acquired infections, falls, delirium and dehydration are examples of problems that can affect any patient with a serious illness; frailty or co-morbidities increase vulnerability to falls, infections and similar harms
Harm resulting from delayed or inadequate diagnosis
A cancer diagnosis may be delayed because the patient delayed contacting their doctor or because the physician failed to refer. In either case the outcome may be poorer. To the patient this is harm, although it is not generally considered an aspect of patient safety
Harm due to failure to provide appropriate treatment
Many patients fail to receive standard evidence-based care which may lead to harm; failure to provide rapid thrombolytic treatment for stroke provides an example. Such problems may be viewed as poor quality care, rather than safety, but for the patient may represent avoidable harm
Psychological harm and feeling unsafe
Patients may simply feel unsafe on psychiatric in-patient units and even on general wards. Awareness of unsafe care may have consequences for the wider population if it leads to a loss of trust. For instance, people may be unwilling to have vaccinations, give blood, donate organs or receive transfusions
Healthcare organisations have used a range of methods and data sources to assess harm. Some methods, such as record review, attempt to cover a very broad range of possible types of harm. In contrast, patient safety indicators derived from administrative data reflect highly specific events or processes. Each of these groups of measures has strengths and limitations, and none can claim to reflect all the kinds of harm discussed above.8 Many organisations place considerable emphasis on standardised mortality, an important warning sign but problematic as a measure of preventable harm.9 We need to devise more specific and more nuanced measures of harm that are relevant to each clinical setting and also to examine rates of harm within wards and clinical areas.
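As a worked illustration of why standardised mortality is a warning sign rather than a direct measure of preventable harm, the sketch below (hypothetical figures, assuming scipy is available) shows how a standardised mortality ratio is computed in outline: observed deaths divided by the deaths expected from a case-mix model, with an exact Poisson confidence interval. The case-mix model that produces the expected count is the contentious part and is not shown.

```python
# A minimal sketch, with hypothetical figures, of how a standardised mortality
# ratio (SMR) is computed in outline: observed deaths divided by the deaths
# expected from a case-mix adjustment model. The expected count is simply
# assumed here; in practice it comes from a risk model, which is exactly why
# the text cautions against reading the SMR as a measure of preventable harm.
from scipy.stats import chi2  # assumes scipy is available

def smr_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """Return the SMR and an exact Poisson confidence interval for it."""
    smr = observed / expected
    lower = 0.0 if observed == 0 else chi2.ppf(alpha / 2, 2 * observed) / 2 / expected
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return smr, (lower, upper)

# Hypothetical example: 120 observed deaths against 100 expected.
smr, (low, high) = smr_with_ci(observed=120, expected=100.0)
print(f"SMR {smr:.2f} (95% CI {low:.2f} to {high:.2f})")
# An interval excluding 1.0 is a signal worth investigating, not proof of unsafe care.
```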
Are our clinical systems, processes and behaviour reliable?
Reliability, defined as ‘failure-free operation over time’, has been a focus of safety critical industries such as aviation and nuclear power for many years. The concept of reliability can be applied most meaningfully to relatively standardised aspects of healthcare8 which include procedures that staff need to carry out reliably. This would include compliance with hand hygiene procedures, the timely administration of antibiotics before operations, the timely ordering of diagnostic tests and many other fundamental processes. It also covers clinical systems supporting the delivery of care, such as the availability of essential medical records. Many healthcare systems have very poor reliability. For instance, recent studies showed that for 15% of patients essential clinical information was missing at the point when decisions were being made, and that essential equipment was missing or faulty in 19% of operations.10 These levels of reliability could not be tolerated in other safety critical industries.
In the English NHS, reliability is typically assessed through a rolling programme of clinical audits. These audits are important but, at a local level, the focus can be haphazard and insufficiently proactive. The next step for many organisations is to identify all safety critical processes within each clinical area and specify the levels of reliability expected. This seemingly simple step would be a massive transformation in healthcare, representing a move from gradual improvement towards an engineering perspective in which systems are designed to operate to certain specifications under a range of conditions.11 Staff are unaccustomed to thinking in terms of the standardisation and reliability of processes in the way that comes naturally to engineers. Monitoring reliability across a system would be a major challenge but is necessary if healthcare is to take safety seriously.
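To make the idea of specifying and tracking reliability concrete, here is a minimal sketch, using invented audit figures, of how a team might report the reliability of one safety critical process as the proportion of failure-free opportunities, with a Wilson score interval so that small audit samples are not over-interpreted. Only the Python standard library is used.

```python
# A minimal sketch, with invented audit figures, of how the reliability of one
# safety critical process might be reported: the observed proportion of
# failure-free opportunities, with a Wilson score interval so that small audit
# samples are not over-interpreted. Standard library only.
from math import sqrt

def reliability(compliant: int, opportunities: int, z: float = 1.96):
    """Observed reliability and a Wilson score 95% confidence interval."""
    n = opportunities
    p = compliant / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical audit: essential equipment present and working in 81 of 100
# operations, loosely mirroring the 19% failure rate quoted in the text.
p, (low, high) = reliability(compliant=81, opportunities=100)
print(f"observed reliability {p:.2f} (95% CI {low:.2f} to {high:.2f})")
```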
Is care safe today? Sensitivity to operations
Problems and crises that potentially threaten safety occur on a daily or even hourly basis, such as a sudden influx of very sick patients, staff sickness or equipment breakdowns. We might have been safe yesterday but how can we know whether we are safe today?
When we drive a car, operate machinery or cross the road, we continuously monitor our own actions and attend to the environment, adapting to emerging hazards. This vision can be expanded to consider how to monitor the safe running of a healthcare organisation. ‘Sensitivity to operations’ (with operations referring to the workings of an organisation, rather than surgical procedures) describes workers’ acute awareness of the workings of the organisation and their sensitivity to subtle changes and disturbances.12, 13 Specific mechanisms that support sensitivity to operations in healthcare include safety walk-rounds, handovers and ward rounds, briefings and debriefings, and informal conversations. Such conversations are often thought of as ancillary to the real work of the organisation but are in fact critical to monitoring safety.
Patient interviews and conversations are a particularly vital form of safety monitoring14, 15 and have been the most potent warning of recent tragedies. Both the Berwick and Keogh reviews6, 9 have emphasised the need to seek out the patient voice as an essential and timely warning sign of deteriorating care. When patients ask ‘Am I safe?’ they draw to some extent on their knowledge of the organisation and on available public information, but their experience of safety probably depends much more on their moment-to-moment experience of care: safety may be conveyed by the manner of the staff, the care they take, their concern for checking details, and their empathy and compassion. It is also important to highlight practical difficulties and harms experienced by patients that might not be immediately obvious to staff, such as a staff assumption that a patient has understood the information provided at discharge.
Will we be safe in the future? Anticipation and preparedness
In clinical work, treating complex, fluctuating conditions requires thinking ahead and being prepared to adjust treatment as the patient's condition changes. Considering the safety of an organisation requires a similar but broader vision. Clinicians and managers need to anticipate and assess potential hazards and take action to reduce the risks over time.16, 17 Safety, from this broader perspective, requires anticipation, preparedness and the ability to intervene to reduce risks at the ward, department or systems level.
No single type of information is uniquely suited to reflecting on future hazards and potential problems. Rather, questioning needs to be encouraged, even when things are going well, creating opportunities for staff to envision scenarios. Formal approaches can, however, facilitate the creation of scenarios and proactive action on potential threats. These include human reliability analysis, safety cases and the use of indicators such as safety culture, together with the mapping of staffing levels to anticipate potential risks to safety due to staff shortages. We anticipate that high performing organisations will make increasing use of formal risk prediction systems in which staffing levels and other indicators are linked to assessments of the potential for harm and declining reliability. A notable finding from our case studies was that the organisations interviewed so far provided many fewer examples of ‘anticipation and preparedness’ than of the other four classes of safety information in our conceptual framework.
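As a purely hypothetical illustration of the kind of simple risk prediction rule envisaged here, the sketch below combines a few leading indicators (staffing fill rate, bed occupancy, recent incident reports) into a graded prompt for action. The indicators, thresholds and weights are invented for illustration and would need local development and validation before any real use.

```python
# A purely hypothetical sketch of a simple risk prediction rule of the kind
# anticipated in the text, linking staffing levels and other leading
# indicators to a graded prompt for action. The indicators, thresholds and
# weights are invented for illustration and would need local validation.
from dataclasses import dataclass

@dataclass
class WardStatus:
    staffing_fill_rate: float   # filled shifts / planned shifts over the past week
    bed_occupancy: float        # occupied beds / available beds
    incidents_last_week: int    # reported incidents of any severity

def anticipated_risk(status: WardStatus) -> str:
    """Translate a ward's leading indicators into a suggested response."""
    score = 0
    score += 2 if status.staffing_fill_rate < 0.85 else 0
    score += 1 if status.bed_occupancy > 0.95 else 0
    score += 1 if status.incidents_last_week >= 5 else 0
    return {0: "routine monitoring", 1: "review at the next safety huddle"}.get(
        score, "escalate for pre-emptive action")

print(anticipated_risk(WardStatus(staffing_fill_rate=0.80,
                                  bed_occupancy=0.97,
                                  incidents_last_week=6)))
```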
At an organisational level, anticipation and preparedness is comparatively undeveloped in healthcare and within the NHS. The different dimensions of safety and the associated analysis for anticipation need to be further explored, in both research and practice.
Are we responding and improving? Integration and learning from safety information
All healthcare organisations will, if they look, discover numerous incidents and deviations from best practice. Safe organisations actively seek out such information and attempt to harness the learning to influence future functioning. Instead of relying on recommendations from single incidents or metrics, they integrate and analyse safety information from across the unit or organisation and use it to support longer term organisational learning and sustainable improvements.18 Data sources could include: incidents reported, patient safety indicators derived from administrative data, complaints, health and safety incidents, inquests, claims, clinical audits, routine data, observations and informal conversations with patients, families and staff.
A safety information reporting system should really be seen as an ‘information, analysis, learning, feedback and action’ system.19 Only a very few healthcare organisations have achieved this. We found examples of high performing teams who regularly reviewed a variety of sources of safety information, combining quantitative measures of harm and reliability with the softer intelligence of observation and conversation. One major healthcare system has created an online reporting portal for quality and patient safety. The portal incorporates 80 patient safety metrics housed in a dimensional database that allows web-enabled reporting and has the capacity to produce statistical process control charts on demand.20
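To illustrate the statistical process control logic such a portal might apply to a single metric, the sketch below (with invented monthly counts) builds a simple c-chart for incident counts with conventional three-sigma limits and flags points suggesting special-cause variation. This is a generic illustration, not the portal's actual implementation.

```python
# A generic sketch of the statistical process control logic such a portal
# might apply to a single metric: a c-chart for monthly counts of one incident
# type, with conventional three-sigma control limits. The monthly counts are
# invented; this is not the portal's actual implementation.
from math import sqrt
from statistics import mean

monthly_falls = [12, 9, 14, 11, 10, 13, 8, 12, 24, 11, 9, 10]  # hypothetical counts

centre = mean(monthly_falls)                 # centre line (mean count per month)
ucl = centre + 3 * sqrt(centre)              # upper control limit
lcl = max(0.0, centre - 3 * sqrt(centre))    # lower control limit (floored at zero)

for month, count in enumerate(monthly_falls, start=1):
    signal = "special-cause signal" if count > ucl or count < lcl else "common-cause variation"
    print(f"month {month:2d}: {count:3d} falls -> {signal}")
```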
Putting the framework into practice
We recognise that the value of this framework and the associated report will not become fully apparent until it has been properly tested in practice. We derived the basic approach midway through our work and tested a preliminary version with a number of organisations. Since the completion of the main report the Health Foundation has commissioned further reports, commentary and conferences to assess the potential of our approach.
Initial testing of our approach in workshops at two acute trusts and one integrated care trust showed a positive response from board members, managers and frontline care-givers. All felt the framework was relevant to their roles and could see opportunities for its practical application within their own contexts.21 Board members felt that the framework provided both structure and clarity in reflecting on their current approach to measuring and monitoring safety (see online supplementary box S1). The five dimensions helped them to view patient safety activities and information through different ‘lenses’ and to widen their thinking about safety, particularly around sensitivity to operations and anticipation and preparedness.21 Rather than being led by available data, board members were able to approach the measurement and monitoring of safety in a more holistic way, which in turn enabled the identification of gaps in their knowledge both at board level and in specific clinical settings. The framework has therefore been used to promote self-reflection at both board and clinical team level, to stimulate an organisational check or gap analysis and to promote discussion of ‘what could we do differently to address the identified measurement and monitoring gaps?’.
Feedback from focus group workshops with frontline clinical staff showed that the framework supported reflective thinking and broadened participants’ understanding of patient safety measures. The five dimensions helped clarify why different data and activities were undertaken, and provided a forum for debating whether or not some measurement activities were useful. A particular benefit was in stimulating discussion on the purpose of different kinds of measures and activities, for instance separating measures of harm from information that identified vulnerabilities in the system and enabled learning.
Discussion and implications
While measures of quality and cost are relatively well established, the measurement and monitoring of safety continues to be problematic. We believe that conceptual clarity is an absolute prerequisite for efficient practical action and hope that our approach offers an effective way forward. We believe that this framework encompasses the principal facets of safety and will provide a guide for clinical teams and organisations. We recognise however that both the value and the limitations of our approach will only become apparent with further use and testing. Some important questions for future research are: ‘Are some of the five dimensions more important to maintain a safe healthcare organisation than others?’ ‘What is the impact of weakness on one dimension on overall organisational safety?’ ‘How do the five dimensions relate to each other?’ and ‘How can boards and clinical teams use the framework to improve their approach to measuring and monitoring safety?’
In addition to providing a mode of exploration, our report has some immediate practical implications. In the main report we set out 10 guiding principles (box 2) for safety assessment and monitoring which summarised some of the lessons we had learned during our reviews and case studies. Here we focus on the most immediate lessons.
Box 2 Ten guiding principles for safety measurement and monitoring
1. A single measure of safety is a fantasy. The search for simple metrics has sometimes led organisations to use a single specific measure, such as standardised mortality, as a generic indicator of safety performance. However, safety cannot be encapsulated in a single measure and such an approach gives false reassurance
2. Safety monitoring is critical and does not receive sufficient recognition. Leaders at all levels need time to walk, to talk, to monitor and to intervene when necessary. Patients and carers play an essential role in safety monitoring but are an underused resource
3. Anticipation and proactive approaches to safety. More evolved safety measurement systems combine both lagging (after the event) and leading (before the event) indicators. In healthcare, leading indicators are still very rare
4. Integration and learning: invest in technology and expertise in data analysis. Safety information is fragmented within NHS organisations and across the wider system. Probably the greatest challenge is to integrate it into a useable and comprehensible format
5. Mapping safety measurement and monitoring across the organisation. Safety measurement and monitoring must be examined within each clinical setting. In each clinical context, we need to consider what kinds of harm are prevalent, what features of care must be reliable, and how we monitor, anticipate and integrate safety information
6. A blend of externally required metrics and local development. Many measures and indices should be agreed nationally or even internationally, though they can be complemented by locally developed measures. But day-to-day monitoring, anticipation and preparedness are necessarily local activities, whether at the ward or board level
7. Clarity of purpose is needed when developing safety measures. Healthcare regulators, national agencies and commissioners need to consider the purpose of safety measures. They need to beware of excessively complex data collection and must test safety measures before implementation
8. Empowering and devolving responsibility for the development and monitoring of safety metrics is essential. Clinical units need the flexibility to develop measures that are relevant and adapted to their clinical context. Healthcare regulators need to move towards a goal-setting approach that allows organisations some flexibility in how they demonstrate that their care is safe
9. Collaboration between regulators and the regulated is critical. The fragmentation of key safety information across multiple national and local stakeholders, combined with the complex regulatory landscape, is a potential threat to safety. Considerable resource is devoted to meeting multiple external demands, to the detriment of critical activities such as monitoring, anticipation and improvement
10. Beware of perverse incentives. Some types of measurement introduce perverse incentives that can lead to box ticking or other unwanted behaviour. For example, imposing financial penalties may promote under-reporting or excessive focus on one type of harm. Instead, we need a more holistic approach to measurement and monitoring
First, it is necessary to abandon the search for a single measure of safety. Boards sometimes search for the elusive single measure of safety that, if in bounds, will enable them to sleep well. We believe that this is a fantasy—an understandable one but a fantasy nevertheless. In most organisations there are just too many different activities, too many different dimensions of safety and too many factors that influence safety.
Second, it is tempting, but not desirable, to examine the available metrics as a starting point. In contrast, we advise starting from the workplace. What kinds of harm are prevalent in this environment? What are the safety critical processes? What are the daily threats to safety? The framework provides a structure for this enquiry and some examples of information and processes to support safety.
Third, prioritise safety monitoring as an activity. Time to walk, talk and watch is critical to monitoring and maintaining safety as are handovers, debriefing and other methods of team reflection. Patients, carers and others play a particularly critical role in this regard both in monitoring their own safety and in the wider safety of the healthcare system. While regulators struggle with intermittent visits and a lack of timely data, patients have immediate experience of poor or dangerous care.
Fourth, review your capacity for analysis, reflection and learning at both unit and organisational level. Many healthcare organisations have very little capacity for analysing or learning from safety and quality information. This completely obvious, but little remarked, fact underlies the inability of many organisations to effectively monitor safety and quality. (Compare the number and salaries of those monitoring safety and quality in your organisation with the number and salaries of those monitoring finances.)
Fifth, the area of greatest weakness for most organisations appears to be the capacity to anticipate and prepare for threats to safety. Some methods, such as safety cases, are already available to support anticipation of hazards, and the analysis of known indicators, such as staffing levels, is likely to be particularly critical. There is certainly a need for more research into this particular organisational capacity, but systematic and deliberate reflection on potential threats is undoubtedly of considerable value.
We recognise that this framework and the approach it provides to safety need to be tested in practice and their value assessed by a wide range of clinicians, managers and others. Like other frameworks, its true worth and impact will only gradually be discerned.21 We hope however that it may play a part in a more general shift from simple reliance on regulatory compliance as the guarantor of safety to a more proactive approach to safety measurement and monitoring. We believe that the primary question posed by regulators should not be ‘Show us how you are complying with our standards’ but rather ‘Demonstrate your organisation's approach to safety measurement and monitoring’. When healthcare organisations can do this effectively we will see a new maturity in the overall approach to patient safety.
Acknowledgments
We thank the Health Foundation for commissioning and supporting this work. We would like to thank everyone who contributed to our work on the Measurement and Monitoring of Safety including colleagues at the Health Foundation, colleagues at the Imperial Centre for Patient Safety and Service Quality (CPSSQ) who worked as members of the project team, members of the project Advisory Board and the healthcare organisations who participated as case study sites. A full list of acknowledgements can be found in the Measurement and Monitoring of Safety report (http://www.health.org.uk/public/cms/75/76/313/4209/The%20measurement%20and%20monitoring%20of%20safety.pdf?realName=haK11Q.pdf).
Supplementary materials
Supplementary Data
- Data supplement 1: Online supplement (web only file supplied by the authors; not edited for content by the BMJ Publishing Group)
Footnotes
- Contributors CV and SB wrote the original research proposal. CV led and coordinated the project. SB and JC carried out the interviews and case studies. CV and JC wrote the initial draft of the present paper. All authors critically reviewed the manuscript, made additional contributions and gave final approval.
- Funding Health Foundation.
- Competing interests CV and SB carry out occasional patient safety consultancy and are directors of Burnett Vincent.
- Provenance and peer review Not commissioned; internally peer reviewed.