Beyond blame: cultural barriers to medical incident reporting

https://doi.org/10.1016/j.socscimed.2004.08.055

Abstract

The paper explores the attitudes of medical physicians towards adverse incident reporting in health care, with particular focus on the inhibiting factors or barriers to participation. It is recognised that there are major barriers to medical reporting, such as the ‘culture of blame’. There are, however, few detailed qualitative accounts of medical culture as it relates to incident reporting. Drawing on a 2-year qualitative case study in the UK, this paper presents data gathered from 28 semi-structured interviews with specialist physicians. The findings suggest that blame certainly inhibits medical reporting, but other cultural issues were also significant. It was commonly accepted by doctors that errors are an ‘inevitable’ and potentially unmanageable feature of medical work, and that incident reporting was therefore ‘pointless’. It was also found that reporting was discouraged by an anti-bureaucratic sentiment and a rejection of excessive administrative duties. Doctors were also apprehensive about the increased potential for managers and non-physicians to engage in the regulation of medical quality through the use of incident data. The paper argues that the promotion of incident reporting must engage with more than the ubiquitous ‘culture of blame’ and instead address the ‘culture of medicine’, especially as it relates to the collegial and professional control of quality.

Introduction

International research has shown that errors in the delivery of health care are a major threat to patient safety (Brennan & Leape, 1991; Wilson et al., 1995; Vincent, Neale, & Woloshynowych, 2001). In the National Health Service (NHS) of England and Wales it has been reported that mistakes or ‘adverse events’ in the delivery of health care occur in around 10% of inpatient admissions (Department of Health, 2000; Vincent et al., 2001). It has been calculated that the human cost of these mistakes could be more than 40,000 lives a year, with a financial cost to the service of over £2 billion in additional care (Department of Health, 2000).

A ‘patient safety’ agenda is now well established in countries such as Australia, the US and the UK (Department of Health, 2000; Institute of Medicine, 1999; Wolff & Bourke, 2000). In the NHS, health policies have adopted the principles and practices of error management that have been successfully utilised in other industries, such as aviation and nuclear energy (Department of Health, 2000, 2001; Reason & Hobbs, 2003). Here the theories of cognitive and social psychology, ergonomics and ‘human factors’ have combined to produce a new orthodoxy of error management (Reason, 1997). From this perspective, threats to safety are elaborated along two dimensions. The first recognises the individual component, where cognitive lapses or aberrations lead to active errors. The second emphasises the latent factors that enable or exacerbate human error within organisational systems (Reason, 1997). Human behaviour is regarded as inherently error-prone, but importantly these errors are facilitated or amplified by actions, decisions and plans made elsewhere, or ‘upstream’, within the system.

The management of errors requires an acceptance of error with consideration given to the relationship between individual human behaviour and the factors that influence this behaviour (Reason & Hobbs, 2003). In practice, error management requires that organisations learn from their threats to safety, identify the underlying causes, and seek out opportunities for change. This commonly involves the introduction of designated incident reporting systems that enable front-line staff to communicate their safety concerns and experiences of error to those responsible for safety and quality. These incident reports then furnish organisations with the necessary information and capacity to make proactive and remedial changes.

It is recognised, however, that there are considerable barriers to the successful implementation of error management and incident reporting systems (Barach & Small, 2000). In the safety management literature, significance is given to cultural barriers and the need to create a ‘safety culture’ (Helmreich & Merritt, 2001; Reason, 1997; Reason & Hobbs, 2003). Helmreich and Merritt (2001) have shown how safety management must navigate national, organisational and professional cultures, where issues as diverse as individual responsibility, gender divisions, teamwork, competence levels, transparency and punishment interact to shape cultural attitudes towards safety. Considerable significance is given to the fear of blame, or the ‘culture of blame’, that inhibits participation in incident reporting. It is argued that people are disinclined to be open and honest about their experiences of error because of the deep-seated assumption that they will be found at fault and held individually responsible or punished for the event. The fear of blame and retribution is therefore seen as a major cultural barrier to incident reporting. For Reason (1997) this culture of blame arises, in the widest sense, from the primacy accorded to individual autonomy in Western culture, such that responsibility or blame for mistakes is apportioned to individuals when ‘things go wrong’. There is thus an assumption that openness and transparency, including forms of incident reporting, make possible the allocation of individual responsibility and therefore serve to distribute blame and possibly secure some form of retribution. Reason has argued that for error management to make a meaningful contribution to safety it is necessary to break free from the “blame cycle” and promote a “reporting culture”. This, he argues, can be achieved through practical measures, such as the de-identification of reporters, protecting reporters and whistle-blowers from unwarranted reprisals, and providing meaningful feedback that highlights the purpose of error management.

More recently the notion of a ‘safety culture’ has been elaborated to suggest that safety is driven by a ‘learning culture’ that actively seeks out previous experiences of error in an effort to ensure they do not happen again. This is underpinned by a ‘reporting culture’ in which staff routinely document and communicate their experiences of error to enable this learning. Accordingly, it is suggested that high levels of reporting are secured through the creation of a ‘just culture’ that recognises human fallibility but establishes clear expectations of responsibility and does not unfairly or routinely blame or punish those who make mistakes.

The ‘patient safety’ agenda in the NHS has embraced the principles of human factors and the practices of error management (Department of Health, 2000, 2001). The National Reporting and Learning System (NRLS) is currently being implemented across the health service to enable front-line staff to record and report their experiences of error, and it is anticipated that through the collection of this information error-producing factors can be identified and managed. It is recognised, however, that there are considerable barriers to staff participation in incident reporting and significant levels of ‘under-reporting’, especially among medical physicians (Barach & Small, 2000; Coles, Pryce, & Shaw, 2001; Vincent, Stanhope, & Crowley-Murphy, 1999). Significant factors include individual uncertainties about the purpose of reporting, the practical design of incident forms, systems of organisational communication and feedback, and apprehension about the unjust consequences of reporting (Coles et al., 2001; Vincent et al., 1999).

Significant among the barriers to incident reporting in health care is the ‘culture of blame’ that inhibits reporting because of the expectation that those found at fault will be held individually accountable or responsible (Coles et al., 2001; Department of Health, 2000; Vincent et al., 1999). Although this is widely recognised in the error management literature, it is important to place it within the context of health care cultures, especially medical professional cultures. Helmreich and Merritt's (2001) analysis of work and safety cultures in aviation and medicine makes the point that professional groups are characterised by high levels of self-esteem, invulnerability and denial. As such, reporting is discouraged because of a fear that reports could reveal specific flaws in professional competence and individual ability, and provide a basis for professional sanctions or punishment. In addition, Lawton and Parker's (2002) study of incident reporting found that reporting is constrained by the specific occupational hierarchies of health care, where professionals are typically reluctant to report their experiences of error, rule violation or poor performance to senior colleagues because of the cultural taboos associated with whistle-blowing and the assumption that it could inhibit career development.

These studies of blame and incident reporting give an indication of other, more deep-seated and long-standing cultural dimensions of health care that have an important bearing on the implementation of incident reporting. Specifically, other studies have shown how the medical profession is characterised by a ‘closed culture’ that inhibits openness (Department of Health, 2000; Kennedy, 2001). Rosenthal's (1995, 1999) studies of ‘problem doctors’ found that physicians generally accepted mistakes as a necessary feature of their work. It was expected, however, that any issues of competence or wrong-doing should be addressed through ‘in-house’ and ‘collegial’ practices that served to maintain the exclusivity of medical knowledge whilst simultaneously limiting exposure to non-professional groups. Allsop and Mulcahy's (1998) study of patient complaints found that physicians regarded complaints as a challenge to their expertise and technical competence, and therefore as a threat to their professional identity. However, it was also found that shared feelings of vulnerability and loss of status serve to promote a collective understanding of, and attitude towards, complaints that maintains professional control and identity in the face of these external or non-professional challenges. These works highlight the significance of collegiality in medical culture, and accordingly illustrate the importance of internal or self-regulation to medical professionalism. It is well established that the regulatory character of medicine, including both formal and informal practices of occupational control, has served to ensure professional monopoly in the evaluation of medical work and to exclude non-professional groups from the management of technical performance (Allsop & Mulcahy, 1996; Freidson, 1970; Lupton, 1998; Rosenthal, 1995). This broader theoretical context of professional regulation and collegiality is therefore central to the issue of medical reporting.

Drawing from this theoretical background, and with specific focus on the medical profession, this paper aims to explore the cultural attitudes and barriers to incident reporting in the NHS. Importantly, the success of incident reporting is to a large extent premised on the creation of a ‘just culture’ that counters the fear of blame, encourages openness and underpins a ‘culture of reporting’ (Department of Health, 2001; NPSA, 2003). First, this work suggests that the notion and significance of ‘blame’ presented in policy is somewhat vague, whilst there is little current empirical data to show how the ‘culture of blame’ influences medical attitudes towards incident reporting. Second, although the fear of blame may indeed be a substantial barrier to reporting, there is little consideration of other cultural factors that could also influence participation in incident reporting. This paper therefore aims to provide an empirical account of medical attitudes towards incident reporting and, with consideration given to the broader socio-cultural theories of medical professionalism (e.g. Freidson, 1970; Rosenthal, 1995), to move beyond the ubiquitous concern with a ‘blame culture’ and engage with the other deep-seated cultural features of medical professionalism.

Methods

The results reported here were gathered between 2001 and 2003 from a larger qualitative study of clinical risk management and incident reporting. The setting for the study was a single medium-sized NHS District General Hospital in the English Midlands. The organisational site was selected because it was found to be typical of other acute hospitals in the NHS that were currently coming to terms with patient safety policies.

Interviews constituted the predominant method of data collection. …

The fear of blame and the fear of reporting

All doctors involved in the research made reference to the “blame thing” or a “blame culture” when expressing their apprehensions about incident reporting. It was evident from the way doctors discussed blame that it was perceived to involve the unfair or inappropriate allocation of responsibility for poor performance or outcomes, and possibly the unwarranted recourse to reprisals and punishment. It was also evident that there were different sources of blame that made doctors apprehensive. …

Discussion

The interviews with physicians revealed several significant themes that characterised the way in which they understood their work and their participation in incident reporting. These reveal interesting features of medical culture in general and identify important cultural barriers to incident reporting. The fear of blame from both peers and non-peers was certainly found to discourage medical reporting, on the basis that reporting could damage professional reputations or lead to unjustified …

References (32)

  • J. Allsop et al., Regulating medical work (1996)
  • J. Allsop et al., ‘Maintaining professional identities: doctors' responses to complaints’, Sociology of Health and Illness (1998)
  • P. Barach et al., ‘Reporting and preventing medical mishaps: lessons from non-medical near miss reporting’, British Medical Journal (2000)
  • C. Bosk, Forgive and remember (1979)
  • T. Brennan et al., ‘Incidence of adverse events and negligence in hospitalized patients’, New England Journal of Medicine (1991)
  • J. Coles et al., The reporting of adverse clinical incidents—achieving high quality reporting: the results of a short research study (2001)
  • Department of Health, An organisation with a memory (2000). London: ...
  • Department of Health, Building a safer NHS for patients (2001). London: ...
  • J. Evetts, ‘New directions in state and international professional occupations: discretionary decision-making and acquired regulation’, Work, Employment and Society (2002)
  • R. Fox, ‘Training for uncertainty’
  • E. Freidson, Profession of Medicine: a study in the sociology of applied knowledge (1970)
  • S. Harrison, ‘New Labour, Modernisation and the Medical Labour Process’, Journal of Social Policy (2002)
  • S. Harrison et al., Controlling Health Professionals (1995)
  • R. Helmreich & A. Merritt, Culture at work in aviation and medicine (2001). Aldershot: ...
  • Institute of Medicine, To Err is Human: Building a Safer Health System (1999). Washington, DC: National Academy ...
  • I. Kennedy (Chair), Bristol Royal Infirmary Final Report (2001)