
A very public failure: lessons for quality improvement in healthcare organisations from the Bristol Royal Infirmary
  1. K Walshe, senior research fellow1,
  2. N Offen, head of clinical quality and chairman, British Association of Medical Managers2
  1. 1Health Services Management Centre, University of Birmingham, Park House, 40 Edgbaston Park Road, Birmingham B15 2RT, UK
  2. 2NHS Executive, Eastern Regional Office, Victoria House, Capital Park, Fulbourn, Cambridge CB1 5XB, UK
  1. Correspondence to: Dr K Walshe k.m.j.walshe@bham.ac.uk


When a major failure happens in a healthcare organisation in the British NHS an inquiry of some kind usually ensues, tasked with finding out what happened, diagnosing the problems or causes of the failure, and making recommendations for changes in policy or practice which would prevent such a failure, or at least make it less likely, in the future.1 The inquiry is akin to an organisational post-mortem, intended to move beyond a simple description of the symptoms and effects of the failure and to provide a more insightful analysis of its pathology and aetiology. While the symptoms of failure are often clinical in nature—poor standards of care, avoidable mortality and morbidity, distressed patients and their families, and so on—the pathology of failure is usually organisational, concerned with things such as organisational leadership, management structures and systems, organisational culture, interprofessional relationships and teamwork. This paper presents an analysis of a recent and tragic example of failure at an acute hospital in the south west of England, and explores what lessons it offers for those involved in quality improvement and clinical governance in health care.

Events at the Bristol Royal Infirmary

The Bristol Royal Infirmary is a renowned hospital with a long and distinguished history. It has served the healthcare needs of people in Bristol and the south west of England for over 250 years and is a national and international centre for clinical research and innovation. Regrettably, its name is probably now best recognised in the UK and elsewhere for a tragic sequence of events in paediatric cardiac surgery in the late 1980s and early 1990s in which many young children lost their lives (box 1).

Box 1 Events in paediatric cardiac surgery at the Bristol Royal Infirmary

In the late 1980s some clinical staff at the Bristol Royal Infirmary, particularly a recently appointed consultant anaesthetist named Stephen Bolsin, began to raise concerns about the quality of paediatric cardiac surgery undertaken at the hospital by two surgeons who were responsible for both adult and paediatric cardiac surgery. In essence, it was suggested that the results of paediatric cardiac surgery were poorer than those at other comparable specialist units in the UK and, in particular, that mortality was substantially higher, especially for some types of operation. Between 1989 and 1994 there was continuing conflict at the hospital about the issue between surgeons, anaesthetists, cardiologists, and managers. The Royal College of Surgeons and the Department of Health both became involved in the increasingly acrimonious dispute, and the media became aware of the concerns. Agreement was eventually reached that a specialist paediatric cardiac surgeon should be appointed and that, in the meantime, a moratorium on certain procedures should be observed. In January 1995, before the new surgeon had taken up his post, a child called Joshua Loveday was scheduled for surgery against the advice of anaesthetists, some surgeons, and the Department of Health. He died, and his death led to the halting of further surgery, the commissioning of an external inquiry from experts at the Great Ormond Street Hospital for Children in London, and extensive local and national media attention. Parents of some of the children complained to the General Medical Council (GMC) which, in 1997, opened an investigation into events in Bristol and specifically examined the cases of 53 children, 29 of whom had died and four of whom had suffered severe brain damage. The GMC inquiry, which concluded in 1998, found three doctors guilty of serious professional misconduct—James Wisheart and Janardan Dhasmana, the two cardiac surgeons involved in the operations, and John Roylance, a radiologist who was the chief executive of the hospital at the time. Mr Wisheart and Dr Roylance were struck off the medical register. The Secretary of State for Health immediately established a full public inquiry, chaired by Professor Ian Kennedy, professor of health law, ethics and policy at University College London. The Inquiry, which cost about £14 million, began hearing evidence in October 1998 and finally published its report, with almost 200 recommendations for the NHS, in July 2001.

When the problems in Bristol came to light they met with intense and sustained political, media and public interest, both in the UK and internationally. It can be argued that the Bristol affair has caused a sea change2 in medical and wider British societal attitudes to professional self-regulation, clinical competence, and healthcare quality improvement, and has been an important lever or catalyst for current reforms in these areas in the UK.3–5 It has also been the subject of perhaps the longest running and most expensive public inquiry in the history of the British NHS.6

    Key messages

  • Major quality failures in healthcare organisations provide important insights which can be used to strengthen and improve systems for quality improvement and clinical governance.

  • An analysis of clinical failures in paediatric cardiac surgery at the Bristol Royal Infirmary in the late 1980s and early 1990s, which have been the subject of a major public inquiry, suggests that the hospital had poor and ineffective systems for quality improvement which made little contribution either to detecting the quality problems or to dealing with them.

  • The lessons from Bristol reinforce research findings which suggest that effective quality improvement needs strong and committed clinical leadership, clear organisational responsibility for quality, resource investment in quality improvement, and careful monitoring of progress and impact.

  • Quality improvement holds up a mirror to the organisation: quality programmes are formed in the organisation's image and reflect the function or dysfunction to be found there. For that reason, progress in quality improvement may be a useful marker of wider organisational function and health.

However, the purpose of this paper is not to review the events in Bristol or to make any comment on the wider technical, clinical, or professional issues that they raise.7 Rather, it focuses on the lessons from Bristol for those now engaged in quality improvement in healthcare organisations. It is clear that there was a major quality failure at the Bristol Royal Infirmary. This paper sets out to identify what we know about how systems for quality improvement in Bristol worked at the time, and what implications the Bristol experience holds for the current and future development of clinical governance in healthcare organisations.

Approach

The authors of this paper were commissioned in late 1999 by the Bristol Royal Infirmary Inquiry to provide an evaluative commentary on the systems for review and audit at the United Bristol Healthcare NHS trust between 1984 and 1995, with a brief to describe “the nature and merits of arrangements adopted at Bristol during this period and how they compared with contemporary policy and professional guidance, accepted standards of good practice, and the systems adopted by similar specialist centres or NHS trusts elsewhere in England”.8 In addition, KW was a member of the Inquiry's expert witness panel and gave oral evidence to the Inquiry. This paper draws on the evidence assembled and reviewed for that commentary between January and April 2000, all of which was taken from the very substantial volume of documents and written and oral evidence collated by the Inquiry itself. More information on our approach and these data sources is contained in our report to the Inquiry.8

The changing policy context

Over the period studied (1984–95) the place of quality improvement in the British NHS was transformed. At the start of that period there were few systems to assure the quality of care in British healthcare organisations. While some pioneering initiatives were underway in particular specialties or organisations,9 and although many clinicians took part in a range of informal and quasi-educational activities aimed at improving the quality of practice, there were few, if any, healthcare providers which could claim to have a systematic approach to measuring or improving quality.10 Moreover, many clinicians and professional organisations had a record of being uninterested, sceptical, or even actively hostile towards the idea that systematic or formal quality improvement activities had much to offer in health care.11,12

Ten years later in 1995 much had changed. A raft of national and local quality initiatives13 accompanied by a five-year £250 million programme of investment in quality improvement had generated a great deal of activity,14 virtually all healthcare organisations had established clinical audit or quality improvement systems and structures,15 and the culture had changed substantially or even been transformed. It had become more common to question clinical practices and to seek to improve them, activities which might have been difficult or even impossible a decade earlier.

The most significant catalyst for this process of change was the introduction of medical audit in 1989 as part of a wider set of NHS reforms. For the first time NHS doctors were required to take part in formal quality improvement activities, and resources were dedicated to developing the necessary infrastructure and support.16 Between 1990 and 1995 policy in this area evolved continuously. Audit arrangements were established for nurses and other clinical professionals, multiprofessional clinical audit was promoted, systems for risk management were put in place, a growing focus on clinical effectiveness and evidence-based health care was developed, and the ideas of quality improvement became increasingly embedded in healthcare organisations.17

Development of medical and clinical audit in Bristol

The events at the Bristol Royal Infirmary outlined in box 1 took place against the backdrop of national developments set out above. In the late 1980s, when the problems in paediatric cardiac surgery first became apparent, few UK hospitals had any organised systems for quality assurance or audit. By 1995, when the failures in Bristol came to a head, virtually all UK hospitals had developed an infrastructure for quality improvement and clinical audit, although their effectiveness varied widely.18 We offer below a brief summary of the development of such arrangements in Bristol.

Before 1990 there is little evidence that systems for review and audit were established at the Bristol Royal Infirmary in any systematic or organised form and, in this regard, it was not unusual. In December 1990 the Bristol district health authority established a district audit committee in response to the national policy guidance mentioned above. It was almost wholly composed of medical staff and was made responsible to the hospital medical committee. An early decision was taken to devolve responsibility for medical audit and the resources available for audit to directorates. The committee's formal remit, which remained unchanged in subsequent years until 1994, was primarily concerned with promoting audit, facilitating its development, advising on audit issues, and reporting on progress. It had few if any formal powers or sanctions at its disposal.

In 1991, once the United Bristol Healthcare NHS trust (UBHT, which incorporated the Bristol Royal Infirmary) had been established, it assumed responsibility for medical audit. In line with the culture of the organisation, which emphasised the maximum devolution of responsibility and clinical freedom,19 most of its resources for medical audit were distributed directly to clinical directorates and no central audit or quality function was set up. Much of the money was spent on a range of information technology investments such as clinical information systems. This pattern of spending continued up to 1995. While some clinical directorates reported on their progress to the audit committee, others did not, and so the picture of progress from contemporaneous documents is rather incomplete. There is no evidence that directorates which did not report were followed up in any way.

At that time, NHS trusts had to report on the development of medical and clinical audit to the then regional health authority which was responsible, among other things, for allocating funds for audit and monitoring progress. In fact, UBHT did not return the required data to the regional health authority and so was omitted from monitoring reports of the time.20,21 The regional health authority does not seem to have pursued the information although it did send a visiting team to the trust in 1994 which produced a generally critical report on its arrangements for clinical audit.22 The Bristol district health authority (which later became Avon health authority) showed some concern about the progress of audit at UBHT, but its efforts to become involved were rebuffed by the trust.

In 1994 the UBHT medical audit committee, responsible to the hospital medical committee, was replaced by a new clinical audit committee which reported to the trust board. The remit, leadership, and membership of the committee were changed but the devolved approach to the organisation of audit was continued, with the trust's allocation for clinical audit being largely devolved to clinical directorates while the balance was mostly used for investment in information technology. In 1996 arrangements for clinical audit at UBHT were substantially revised to bring audit resources including both finances and staff together; to provide greater central coordination, monitoring and control of clinical audit; and to strengthen the remit of the clinical audit committee.

Clinical audit and events in paediatric cardiac surgery

The first and most obvious question to ask is whether the developing systems for medical and clinical audit at UBHT played any role in the events in paediatric cardiac surgery outlined in box 1. It might be hoped that those systems for audit would help to identify or raise the problems, would act as a forum for those who had concerns about the quality of care to air those concerns and start discussion and debate, would provide a mechanism for investigating and analysing the situation, and would help to find ways to take action to resolve the problems.

In fact, audit was the “dog that did not bark”.23 Over this period the formal systems for medical and clinical audit in Bristol were never really used to tackle the concerns in paediatric cardiac surgery. One would not know from reading the documents of the time—such as the annual reports on clinical audit or the minutes of audit committee meetings—that any problem existed. In the field of paediatric cardiac surgery, what audit activity there was in 1990 and 1991 dwindled and more or less ceased during the early 1990s. As the atmosphere became more strained and the professional conflict more serious, audit was an early casualty of the problems, not part of their solution.

This is not to say that those responsible for medical and clinical audit in the trust were unaware of the problems in paediatric cardiac surgery. The evidence shows that Stephen Bolsin, the primary whistleblower in the case, made the chair of the medical audit committee aware of his concerns as early as 1990,24 and many of the main protagonists played a part in the arrangements for medical and clinical audit as members of the audit committee or as audit leads for their specialty. However, the arrangements for audit seem not to have been able to contribute to solving the problems within the trust. The systems for medical and clinical audit at UBHT failed in this regard, and the reasons for that failure deserve further exploration.

Our analysis leads us to conclude that the following five main factors reduced the effectiveness of clinical audit and quality improvement arrangements at UBHT and contributed to their failure to detect and address problems in paediatric cardiac surgery:

  • weak leadership and direction at a corporate or trust level;

  • the way in which resources and support for clinical audit were used;

  • the audit approaches, methods and techniques employed;

  • a tendency towards confidentiality and even secrecy about audit and quality issues;

  • the way in which arrangements for monitoring and reporting on progress in audit and quality improvement worked.

Each of these is described in more detail below with the aim of highlighting the potential lessons for the current and future development of clinical governance and quality improvement in healthcare organisations.

Leadership and direction

The UBHT audit committee and its chair adopted a passive, low-profile approach to leadership. The audit committee had rather limited powers and little apparent influence. It did not control the resources for medical or clinical audit, it had no audit staff working for it, those on the committee were not the people responsible for clinical audit in directorates, almost all managers were excluded from it, and it had no form of reporting relationship with the trust board until 1994. Given this position, the audit committee's formal remit was focused on facilitating, supporting, advising, and promoting medical and clinical audit. It had no powers or sanctions of its own, did little to develop or pursue a strategy for clinical audit, and seems to have offered little leadership in audit within the trust. It adopted a fairly traditional concept of the place of medical audit, considering it to be mainly or wholly a professional concern in which doctors would review what they did with other doctors, for which the results would be confidential to those concerned, and from which education and changes in practice would emerge naturally.

Research suggests that strong clinical leadership is perhaps the most important single determinant of the progress of clinical quality improvement in healthcare organisations.17,25 At UBHT audit lacked such leadership. There was limited vision or strategic direction and little attempt at planning or management. In some ways this reflected the wider culture of the organisation.19 Progress was certainly slowed and limited by this approach to leadership.

It should also be noted that the leadership of medical and clinical audit rested in part with one of the two surgeons at the centre of events in Bristol, which created a serious conflict of interest. James Wisheart was medical director of the trust from 1991 to 1995 and so was a member of the medical and clinical audit committee. At one time he chaired the hospital medical committee to which the audit committee reported. He even chaired the clinical audit committee itself for a short period during 1994 when events in paediatric cardiac surgery were coming to a head. The close involvement of Mr Wisheart in the management of clinical audit at UBHT, and the lack of any mechanism for resolving the resulting conflict of interest when his clinical performance was called into question, probably made it less likely that the clinical audit systems would be used to deal with the problems in paediatric cardiac surgery.

Resources and support for clinical audit

UBHT was given considerable resources to support the development of clinical audit between 1990 and 1995. Known funding for medical and clinical audit at the trust over that period amounted to over £1 million (table 1). UBHT was one of the largest acute trusts in the region with a large number of consultant medical staff. Since funding was distributed in part pro rata to numbers of consultant medical staff, UBHT received more funding for medical audit than any other trust in the region.

Table 1 Funding for medical/clinical audit at UBHT

However, it is not possible to tell from the available documents how most of the resources for medical and clinical audit were used. Most of the funding was distributed on a formula basis to clinical directorates, but there are few if any data available on how clinical directorates then used this funding, and how their use of it contributed to the development of clinical audit. It is difficult to see from the available papers how or, indeed, whether this substantial level of funding advanced the development of medical and clinical audit.

Some proportion of the audit funding was used to employ audit assistants or clerks in most directorates from around 1991 to 1995. Because of the devolved approach to the management of medical audit resources, it was left to each clinical directorate to specify the skills needed from their audit assistant and to recruit appropriately. In many cases the role of audit assistant was combined with secretarial or clerical duties, and it was largely seen in that context as a relatively unskilled position. The placing of audit staff in clinical directorates left them somewhat isolated from colleagues with similar roles, and made the sharing of skills, coordination of work, or development of specialisation difficult.

It is apparent that a substantial investment was made in information technology, with the intention that the computer systems purchased would support medical and clinical audit. The level of investment is difficult to quantify but it was probably the largest single area of expenditure from audit resources between 1990 and 1995. However, it appears that these information technology systems were not widely used to provide information for medical and clinical audit, and that their value was increasingly questioned. Problems were encountered with the functionality of software systems and their integration into clinical practice, most clinical audit activities made no use of the data these systems gathered, and the systems gradually fell into disuse.

In retrospect, resources for medical and clinical audit at UBHT were not well managed or well used. The level of funding available was substantial and could have supported the establishment of a strong central clinical audit function which would have had the skills and expertise needed to support clinical audit activities in directorates and specialties. Such a clinical audit department, as created at many other NHS organisations at the time,26,27 might have been able to play a significant role in dealing with the problems in paediatric cardiac surgery.

Clinical audit methods

In some parts of UBHT there were examples of good clinical audit practice, in departments or specialties which understood and applied the ideas of clinical audit and succeeded in producing important quality improvements. For example, the specialties of oncology, ophthalmology, anaesthetics, and general medicine all seem from the available documentation to have had active and worthwhile programmes of clinical audit. In other words, the trust contained pockets of good practice in clinical audit which could have been used to promote and encourage similar good practice elsewhere.

However, it appears that in many specialties rather less rigorous and effective approaches to medical and clinical audit predominated. For example, unstructured case presentations, discussions of deaths and complications, and reviews of quantitative data on throughput and workload were clearly seen as acceptable audit activities. Again, the culture of the organisation and the structures adopted meant that little was done to develop audit skills and expertise in directorates, to encourage the use of rigorous audit methods, to spread good audit practice from one directorate to another, or to support and promote changes in clinical practice where they were indicated.

Confidentiality and secrecy

The confidentiality of medical and clinical audit was often a concern for clinicians, especially in the early days of medical audit. Doctors were worried that the disclosure of data on clinical quality to anyone other than their peers (or just their immediate colleagues) would lead to hasty comparisons, inappropriate judgements, and further action. There were also concerns that the disclosure of medical audit data to plaintiffs' solicitors in cases of clinical negligence litigation would adversely affect such actions and make them more difficult to defend.28

UBHT adopted regional guidance on the confidentiality of medical audit data which was restrictive, even by the standards of the time. It essentially limited access to such data to those immediately involved in the clinical audit itself and prevented its wider dissemination. For example, although the chair of the audit committee was permitted to see the minutes of directorate audit meetings, he or she was not allowed then to use that information in any way that involved further disclosure, a provision which, it could be argued, severely limited his or her scope for action in raising issues of concern. It is evident that clinicians at UBHT raised worries about the confidentiality of audit, and that the medical audit committee responded to those concerns by being very cautious about providing information on any audit activities to anyone, even within the trust to the trust board and its chair.

This rather secretive approach to audit and its results made it harder for the systems for clinical audit to be used to address the problems in paediatric cardiac surgery, even once those problems had been raised. It would have been difficult to have an open discussion without breaching the confidentiality arrangements, and involving others such as the trust board would not have been possible without the consent of the clinicians involved. Confidentiality was a barrier to dealing with the problems rather than an aid.

Monitoring and reporting

Between 1990 and 1995 the medical and clinical audit committee at UBHT monitored the progress of audit in departments and specialties by asking them to provide periodic returns (quarterly for most of this period). These returns described the audit activities which they were undertaking, and they were collated to produce an annual audit report for the trust.

In theory the monitoring process was a good idea but, in practice, it did not work well for two reasons. Firstly, many clinical specialties and directorates failed to return the data requested and the clinical audit committee lacked the will and resources to pursue them. As a result the annual reports referred to above are incomplete and some specialties (including paediatric cardiac surgery) are never mentioned. Secondly, the information which was gathered does not seem to have been used to identify and spread good practice or to focus on areas where more help or support was needed. The monitoring and reporting process was summative, aimed at describing audit activity for those who might want to know about it, but was not formative in any way, aiming to influence or direct that activity.

Anyone reading the annual audit reports from the trust or the regional health authority would be likely to conclude that audit at UBHT was making little progress, but no one appears to have acted to deal with that problem. To their credit, if somewhat belatedly, the regional health authority's clinical audit team visited the trust in March 1994 and wrote a critical report which highlighted a number of concerns about the effectiveness of audit arrangements at UBHT. It pointed to the devolved responsibility for medical audit and the resultant lack of coordination and oversight; the confusion of responsibility for audit between specialty audit leads and clinical directors; the lack of power and influence of the audit committee; the tendency for important quality issues to be dealt with outside the audit arrangements; the anomalous reporting arrangements of the audit committee; the slow progress in moving from medical to clinical audit; the limited involvement of non-medical clinicians in audit; the isolated position, confusion of responsibilities, and lack of support of audit assistants in clinical directorates; and the need to question the value of the trust's substantial investment in information technology. The report's criticisms do not seem to have been taken on board by the clinical audit committee or board at UBHT at the time, although the later reorganisation of clinical audit at the trust in 1995/6 did address most of the concerns.

Lessons for quality improvement and clinical governance

Many of those who were involved in medical and clinical audit in NHS organisations in the early 1990s when formal quality improvement systems were first being developed will find our description of arrangements at UBHT all too familiar. They may even feel that, had their institution faced a similar inquiry, they too would have been criticised for the conduct of medical and clinical audit. Because of the rapidly changing policy context, the beguiling power of hindsight, and the potential bias that results from our knowledge of the tragic events in Bristol, it is difficult to make a fair judgement about how UBHT's clinical audit arrangements compared with those elsewhere in the NHS at the time. It should be noted that it was not uncommon for teaching hospitals like UBHT to be slow to respond to the development of medical and clinical audit for a number of reasons to do with the size, culture, and complexity of these organisations. Even so, we believe the evidence we have reviewed shows that UBHT had, by the standards of the time, poor and ineffective systems for clinical audit.18

The experiences of the Bristol Royal Infirmary reinforce many of the research findings from evaluations of clinical audit and quality improvement activities in healthcare organisations. But research reports often make rather dry reading and their findings are not always given much credence by practitioners. The tragic story of events in Bristol has an emotional and narrative power that research can rarely match, and may be more likely to bring about real change. We think that those now engaged in the development of clinical governance in the NHS in England, and those working in quality improvement in healthcare organisations elsewhere, can learn a great deal from the experiences of the Bristol Royal Infirmary and use them to strengthen and support their own local programmes and activities:

  • The importance of strong and effective clinical leadership in quality improvement is reiterated and emphasised by the Bristol experience. Clinicians who lead quality improvement need to be well regarded and respected by their clinical colleagues, be genuinely committed to the ideas of quality improvement, have the time to commit to a leadership role, and have the managerial and leadership skills the role demands. They have to be able to articulate and promote a shared corporate belief in the importance of quality, even when circumstances or events make this difficult or challenging.

  • While quality is often rightly regarded as “everybody's business”, organisations need a strong corporate focus for quality improvement, which is probably best served by a central quality or audit function working in support of operational management units such as divisions or directorates. Devolving responsibility for quality improvement, as UBHT did, in the mistaken belief that this will engender greater ownership and participation among clinicians, risks diluting and dissipating such endeavours and threatens their effectiveness.

  • Quality improvement needs resource investment, but those resources should be stewarded and used wisely. UBHT had plenty of resources for quality improvement but did not use them well. Resources are perhaps best used to provide support staff who can facilitate the quality improvement process, or to release clinical staff to focus on quality problems. Investing quality improvement resources in general organisational infrastructure such as information technology is unlikely to be productive and may divert them from more worthwhile areas.

  • Organisations need to monitor the progress of quality improvement carefully, in ways that will alert them to areas or departments where progress is slow or lacking. However, they then need to be willing and able to take action to deal with such problems, and to have the resources, incentives, or sanctions available to do so. UBHT had reasonable monitoring and reporting systems in place, but the information they produced was not acted upon.

Research suggests that, when healthcare organisations establish quality improvement or clinical audit programmes, those programmes reflect the organisational culture and context in which they are established.17,29 Healthcare organisations with a strong and shared vision and values, consistent and stable clinical and managerial leadership, good interprofessional relationships, and well established clinical/managerial arrangements seem to have been more successful at making quality and audit programmes work. In contrast, organisations facing major external threats or changes (such as mergers, financial problems, or reorganisation), with weak and ineffectual leadership, poor relations between managers, doctors, and other clinical professionals, and little clinical engagement in management, are much less able to establish an effective quality or audit programme.

In a sense, quality improvement or clinical governance holds up a mirror to the organisation30 because the quality programme is shaped in the organisation's image and reflects the function or dysfunction to be found there. This means that, paradoxically, those organisations most in need of quality improvement programmes may be least able to make them work. But it also means that the progress of quality improvement or audit activities may provide a useful marker of wider organisational function or health. This, it can be argued, holds true both for NHS trusts and for the suborganisational units like divisions, clinical directorates, or departments of which they are constituted.

Conclusions

The philosopher and novelist George Santayana observed a century ago that “those who cannot remember the past are condemned to repeat it”.31 It is chastening to compare the findings from the Bristol Royal Infirmary Inquiry with those from a famous inquiry in 1969 into failings in the care of long-term mental health patients and people with learning disabilities at the Ely Hospital in Cardiff.32 Both describe organisational failures of leadership, culture, and management which resulted in real harm to patients, and they make many similar recommendations. The crucial challenge for those who are responsible for leading healthcare organisations, and for all those who work within them, is to make some good come from the tragic events in Bristol by using them to bring about change and improvement.

Acknowledgments

At the time of writing Kieran Walshe was a Harkness Fellow in Health Policy at the University of California at Berkeley. He was supported by the Commonwealth Fund, a private independent foundation based in New York City. The views presented here are those of the authors and not necessarily those of the Commonwealth Fund, its directors, officers, or staff. This paper draws on research commissioned by the Bristol Royal Infirmary Inquiry and conducted by the authors, but the views presented here are those of the authors and not necessarily those of the Bristol Royal Infirmary Inquiry.

References
