Education and debate

External assessment of health care

BMJ 2001; 322 doi: https://doi.org/10.1136/bmj.322.7290.851 (Published 07 April 2001) Cite this as: BMJ 2001;322:851
  1. Charles Shaw (cshaw{at}kehf.org.uk), programme director
  1. CASPE Research, 11–13 Cavendish Square, London W1M 0AN
  • Accepted 1 December 2000

A rash of external inspection is affecting the delivery of health care around the world. Governments, consumers, professions, managers, and insurers are hurrying to set up new schemes to ensure public accountability, transparency, self regulation, quality improvement, or value for money. But what do we know of such schemes' evidence base, the validity of their standards, the reliability of their assessments, or their ability to bring improvements for patients, staff, or the general population?

Box 1: Characteristics of effective external assessment programmes

Give clear framework of values—To describe elements of quality, and their weighting, such as the enablers and results defined by the European Foundation for Quality Management

Publish validated standards—To provide an objective basis for assessment

Focus on patients—To reflect horizontal clinical pathways rather than vertical management units

Include clinical processes and results—To reflect perceptions of patients, staff, and public

Encourage self assessment—To give time and tools to internalise assessment and development

Train the assessors—To promote reliable assessments and reports

Measure systematically—To describe and weight compliance with standards objectively

Provide incentives—To give leverage for improvement and response to recommendations

Communicate with other programmes—To promote consistency and reciprocity and to reduce duplication and burden of inspection

Quantify improvement over time—To demonstrate effectiveness of programme

Give public access to standards, assessment processes, and results—To be transparent and publicly accountable

In short, not much. The standards, measurements, and results of management systems have not been, and largely cannot be, subjected to the same rigorous scrutiny and meta-analysis as clinical practice. No one has published a controlled trial, and there are too many confounding variables to prove that inspection causes better clinical outcomes, although there is evidence that organisations increase their compliance with standards if these are made explicit. But experience and consensus are gradually being codified into guidelines to make external quality systems as coherent, consistent, and effective as they could be (box 1). Much of this consensus is ignored by those who develop and operate new programmes.

Summary points

External assessment and inspection of health services are becoming more common worldwide, using a combination of models—ISO certification, business excellence, peer review, accreditation, and statutory inspection

There is common concern that voluntary and statutory programmes need to be integrated to ensure valid standards, consistent assessments, transparency, and public accountability

International consensus on the effective organisation and methods of external assessment is growing, but hard evidence of clinical benefit is lacking

The United Kingdom has many independent and statutory programmes but no effective mechanism for coordinating their activity, standards, and methods according to this consensus

The NHS must be willing to support a public-private coalition to bring realism, clarity, consistency, efficiency, and transparency to external assessment

In Britain there has been no consistent central strategy to support or coordinate existing external assessment programmes. The NHS has introduced new statutory bodies and triggered more formal programmes of visiting and assessment. Each brings a burden of inspection and requires resources for development, but responsibility for ensuring the integration, consistency, and value of such programmes has not been defined.

This article describes the growth of external assessment and the issues it raises around the world, particularly in Britain.

Common approaches

Many countries have voluntary and statutory mechanisms for periodic external assessment of healthcare organisations against defined standards, and some have been systematically compared.1-3 They are all meant to assure or improve some elements of quality, but they are usually run by different organisations without national coordination to make them consistent, mutually supportive, economical, and effective. Broadly, these mechanisms include variants on five approaches (box 2).

Box 2: Common models of external assessment in health care

International Organization for Standardization (www.iso.ch/)

Origin and focus—European manufacturing industry 1946; quality systems (often within individual department or function)

Standards—ISO 9000 series (quality systems); also specific for radiology and laboratory systems

Products—Certification

Malcolm Baldrige “excellence” model (www.asq.org/abtquality/awards/baldrige.html)

Origin and focus—US industry 1987; management systems and results

Standards—European and national variants published with criteria

Products—Self assessment, national awards

Peer review

Origin and focus—Health care; specialty based professional training, clinical practice, and organisation

Standards—Variable detail, limited access

Products—Accreditation (of specialty training)

Accreditation

Origin and focus—US health care 1919; service organisation, performance

Standards—Published with criteria such as acute care, long term care, primary care, networks

Products—Accreditation (of organisation or service)

Inspection

Origin and focus—National or regional statutes; competence, safety

Standards—Published regulations such as for fire safety, radiation exposure, hygiene

Products—Registration, licensing

The International Organization for Standardization provides standards against which organisations or functions may be certificated by accredited auditors. These have been applied in health care, specifically to radiology and laboratory systems, and more generally to quality systems in clinical departments.4

The Baldrige criteria have evolved into national and international assessment programmes such as the Australian Business Excellence Model (www.aqc.org.au/) and the European Foundation for Quality Management (www.efqm.org/).5

Peer review is based on collegiate, usually single discipline, programmes to assess and formally accredit training programmes; it has now been extended to clinical services.6

Accreditation relies on independent voluntary programmes developed from a focus on training into multidisciplinary assessments of healthcare functions, organisations, and networks. These have spread from Western countries into Latin America,7 Africa,8 and South East Asia9 10 during the 1990s. Mandatory programmes have recently been adopted in France,11 Italy,12 and Scotland.13

Registration and licensing are statutory programmes to ensure that staff or provider organisations achieve minimum standards of competence. There are also inspectorates for specific functions to ensure public health and safety.

National requirements

Several countries have recently received recommendations on their ability to ensure high standards in health care nationally. The general conclusions on the role of external agencies have been remarkably similar.

The US president's advisory commission on consumer protection and quality in health care recommended in 1998 that public and private programmes of external review should make their standards, survey protocols, decision criteria, and results available to the public at “little or no cost.”14 The organisations themselves should work towards a common set of standards, coordinate their activities to avoid conflict and duplication, and commit themselves to a national quality forum. This forum aims to devise a national strategy for measuring and reporting healthcare quality and in 1999 began to standardise performance measures for the nation's 5000 acute general hospitals.15

In 1999 the US inspector general of the Department of Health and Human Services reviewed the external quality oversight of hospitals that participate in Medicare.16 She concluded that voluntary “collegiate” accreditation by the Joint Commission on Accreditation of Healthcare Organisations and “regulatory” Medicare certification by state agencies had considerable strengths (box 3) but also major deficiencies. She recommended that both systems should harmonise their methods, disclose more details of hospital performance on the internet, and be held more fully accountable at federal level for their performance in reviewing hospitals.

Box 3: Features of collegiate and regulatory systems for assessing health care

Collegiate

  • Focus on education, self development, improved performance, and reducing risk

  • General review of internal systems

  • Based on optimum standards, professional accountability, and cooperative relationships

Regulatory

  • Timely response to complaints and adverse events

  • In depth probe of conditions and activities

  • Based on minimum standards, investigation, enforcement, and public accountability

An Australian taskforce recommended in 1996 that the government should formally acknowledge independent assessment programmes that met defined criteria and should enable them to disseminate information about their processes and findings to the public.17 Two years later an expert advisory group recommended “that accreditation or certification of healthcare organisations be strongly encouraged with incentives, or indeed made mandatory, but choice of accreditation/certification/award approaches be allowed.”18

In Scotland the Carter report on acute services recommended a single mandatory system of accreditation for hospitals and primary care.19 This should be patient centred, clinically focused, and complementary to internal quality improvement, and its explicit, measurable standards and reports should be in the public domain. This recommendation led to the Clinical Standards Board for Scotland.

International solutions

Countries have good reason to show that healthcare standards are not only consistent within their own territory but also comparable with those of their neighbours, suppliers, and competitors. Several recent European and international initiatives are making traditional assessment methods more accessible, convergent, and relevant to health care.

International Organization for Standardization—The ISO 9000 series of standards was designed for manufacturing industries and has been criticised for using language that is difficult to interpret in terms of health services. The 2000 version will be more readily applicable, and US and European initiatives are under way to develop ISO guidelines specific to health care.

European Foundation for Quality Management—The original “business excellence” model has given way to “excellence” in the 1999 version, shifting the emphasis from “enabling processes” to results of concern to patients, staff, and society.

Accreditation—The international arm of the US Joint Commission on Accreditation of Healthcare Organisations has developed a set of multinational accreditation standards.20 In addition, the International Society for Quality in Health Care has developed “ALPHA” standards and criteria (available from the society's website, http://www.isqua.org.au/) against which an accreditation programme may apply to have its standards and process assessed and internationally accredited.21 These also offer a template for standardisation and self assessment to any external assessment programme.

Programmes in Britain

The royal commission on the NHS recommended in 1979 that a special health authority be set up as a development agency and guardian of standards.22 In the early 1980s several monitoring agencies were suggested or piloted,23 but, despite a favourable response from national professional bodies to leaked proposals, no such national agency featured in the government's 1989 white paper Working for Patients.24

In the absence of any governmental lead, several small peer review programmes and some large accreditation programmes emerged as external voluntary mechanisms for organisational development. There are now over 35 such programmes, with a wealth of standards and trained assessors but little integration, consistency, or reciprocity between them. Their number could double if each royal college, faculty, and professional association were to establish an independent accreditation programme as a collegiate approach to clinical governance. NHS institutions also have their share of visits from clinical training programmes, from inspectors (for fire regulations, environmental health, and so on), and from other watchdogs that have begun to publish standards (such as the NHS Information Authority Information Management Centre for data quality and NHS Controls Assurance for risk management).

The Clinical Standards Board for Scotland, and the National Institute for Clinical Excellence (NICE) and the Commission for Health Improvement (CHI) for England and Wales, have been established to improve standards in the NHS. After years of policy vacuum, an early common task must be to tidy up: they must synthesise the experience of Britain and other countries25; provide public access to their own valid standards, reliable assessments, and fair judgments; and, above all, avoid duplication and inconsistency in defining and measuring standards. In short, they should be open to assessment against international criteria and should lead the way to consistency and reciprocity within and between systems for improving patient services, clinical training, and public accountability.

Britain could borrow from the US and Australian recommendations for partnership between state and independent programmes for external assessment and define the terms of collaboration. Independent and statutory programmes could be jointly assessed and harnessed according to general criteria drawn from UK policy and experience overseas and from the more specific ALPHA standards.

We need to catalogue, harmonise, and orchestrate organisational standards and their assessment, not only in the NHS but also in the independent and social care sectors. The National Institute for Clinical Excellence has a clear responsibility for defining clinical standards in England and Wales. The Commission for Health Improvement is concerned with the organisation and delivery of clinical governance and national service frameworks, but it has no mandate to define or orchestrate organisational standards (even for its own reviews), and it is specifically excluded from the independent sector. In Scotland the Clinical Standards Board integrates some key features of these two bodies, particularly the task of defining and measuring standards, both clinical and organisational. With yet broader vision, the Scottish Executive has adopted a charter that sets out principles for public and professional inspectorates whose role includes evaluation of cases in the public interest, including health, education, and social work services (www.scotland.gov.uk). This offers a starting point for coherence and learning within and between sectors, and an example for the rest of Britain.

The UK Accreditation Forum (http://www.caspe.co.uk/) was set up in 1998 to support accreditation and peer review programmes, and the Academy of Medical Royal Colleges (www.aomrc.org.uk/) is working towards more coherent procedures for hospital visiting for recognition of training. Neither body has the resources or the authority to standardise standards or to regulate the regulators across the country.

What we need is a formal means to pool current experience, to drive convergence, and to help new programmes to be efficient, complementary, and effective—a resource centre to do for organisational and management standards what NICE, the Cochrane Centre, and the Scottish Intercollegiate Guidelines Network are doing for clinical practice. Its task should be to ensure that organisational standards, assessments, and general results are in the public domain; that the legitimate interests of the public, professions, providers, and funding bodies are balanced and supported; that lessons from successes and failures are systematically embedded in common core standards for assessment; that assessment methods and reporting are consistent in time, place, and service; and that expenditure on the development and operation of external assessment programmes is demonstrably justified by improvements in patient care.

Conclusions

Schemes for inspection, registration, revalidation, and review are proliferating with little national coordination or regard for the evidence of what has and has not worked for health care in Britain or overseas. This leads to uncertainty among service providers about which standards to adopt, inefficiency in developing new inspection and development programmes, duplication and inconsistency of external assessments, and an excessive burden on the services under scrutiny. The collegiate and statutory mechanisms need a public-private partnership, perhaps similar to the National Quality Forum in the United States, to bring clarity, consistency, and transparency to external assessment in Britain.

Acknowledgments

I am grateful for advice on drafts of this paper from Barbara Donaldson of Quality Health New Zealand, from Elma Heidemann of the Canadian Council on Health Services Accreditation, and from Lee Tregloan of the International Society for Quality in Health Care.

Footnotes

  • Competing interests: I was formerly leader of the European “ExPeRT” research project (funded by the EC, 1996-99). As a former president of the International Society for Quality in Health Care, I seeded the international accreditation project and was a member of the American Joint Commission international standards task force. I am founder chairman of the UK Accreditation Forum and have been paid by the Health Quality Service and the Hospital Accreditation Programme. I currently have a contract with the International Society for Quality in Health Care to provide a research review of accreditation programmes around the world, but I otherwise receive no funding from any of the organisations mentioned.
