
Clinical governance
Prescribing how NHS trusts “do” quality: a recipe for committees but little action?
P M Whitty

Correspondence to: Dr P M Whitty, Centre for Health Services Research, University of Newcastle, Newcastle upon Tyne NE2 4HH, UK; p.m.whitty@ncl.ac.uk


The jury is still out as to whether the current clinical governance model is the best way to improve quality

Clinical governance, together with a number of national bodies to support and monitor it, was established in the NHS in 1997.1 Seven years on, it seems a good time to reflect on how effective this national quality assurance/improvement strategy has been, particularly in the light of subtle changes in the UK government’s attitude to quality. The recent publication of a new standards framework,2 the replacement of the Commission for Health Improvement (CHI) with the Healthcare Commission from April 2004, and insistence that trusts will have greater local autonomy2 all suggest a change in the wind—although the direction of the change is not quite clear.

In the initial guidance on the duty of NHS trusts to implement clinical governance,3,4 the government set out the mandatory components of clinical governance (modified slightly in various ways since, but broadly comprising clinical risk management, clinical audit, patient/service user involvement, education and training, clinical effectiveness and research and development, staff focus, and use of information) and some of the structures that must be in place (notably, a senior clinician on the trust board and a board level clinical governance committee). Not surprisingly then, almost all trusts now have these mandatory structures (see www.chi.nhs.uk for archived CHI clinical governance review reports) and many other committees and systems besides, with structures often mirroring the “seven pillars” of clinical governance prescribed in the national guidance. But what evidence is there that such structures improve the processes and outcomes of care? And could a “one size fits all” approach to structures even militate against such improvements? This has been a massive—and, no doubt, costly—initiative, and other countries would do well to examine the UK experience in depth before embarking on any similar approaches.

Robust evaluations of the clinical governance initiative are very much needed. As Freeman and Walshe report in their paper in this issue of QSHC,5 evaluations of clinical governance to date have tended to focus on the implementation of structures. Their survey begins to address the issues of process and outcomes, and to distinguish between success in the quality assurance and quality improvement dimensions. While their survey is necessarily limited to perceptions, and does not include those of clinicians, it nevertheless begins to ask the questions that really matter. What they find is not overly encouraging: it supports earlier evidence that structures are now well embedded and the requirements of quality assurance fulfilled, but shows that quality improvement has received little priority and made little progress.

Of course, evaluation of a policy initiative as wide ranging as clinical governance is challenging. It may be difficult to determine what the appropriate process and outcome improvements should be; organisation-wide programmes such as these are very difficult to evaluate using traditional study designs;6 and it may be impossible to identify adverse consequences that have been averted, for example, through the application of good risk management systems. It could also be said that some components—for example, those related to reducing the consequences of litigation—are good “business” practice for any organisation, or that others, like patient/service user involvement, are a duty of public sector organisations. However, the core problem remains: setting overall priorities and policy is an appropriate task for government, while prescribing the detail of how these issues should be tackled at a local level is frequently counterproductive.7,8

Few would disagree that the assurance and continuous improvement of the quality of health care for people in the UK is an essential duty of all healthcare organisations. As to whether the current clinical governance model is the way to do it, the jury is still out. No doubt selected components within the model are more promising than others—for example, the safety agenda is receiving heavy research investment through the NHS R&D Patient Safety Research Programme, allowing the National Patient Safety Agency (www.npsa.nhs.uk) to investigate and apply an evidence base for this part of the initiative. By contrast, the UK still seems determined to promote an approach to clinical audit that has been shown to be limited at best.9,10 If the UK is really coming (back) to devolving more control to local organisations, isn’t it time to monitor their success in improving quality through their processes and outcomes rather than their structures? The new Healthcare Commission avows that this is its mission (www.healthcarecommission.org.uk): whether it can achieve it remains to be seen.

