
Learning how to make routinely available data useful in guiding regulatory oversight of hospital care
Martin Bardsley

Correspondence to Dr Martin Bardsley, The Health Foundation, 90 Long Acre, London, WC2E 9RA, UK; martin.bardsley@health.org.uk


Though the past 20 years have seen a series of changes to the independent regulation of healthcare,1 there is surprisingly little empirical work that evaluates the effectiveness of different approaches. Even in the related area of accreditation, where there are more studies of impact, the literature is ‘limited’.2 There is no shortage of strongly held opinions on the best approach. Though most agree about the need for some form of regulation to offer an independent review of the quality of healthcare, there is less agreement about the best style and methods to be adopted. In England, the situation is further complicated in that national healthcare regulators have been subject to periodic regime changes and regularly review and revise their approaches.

Systems of government-funded but independent regulation of healthcare providers have been created in a number of countries in recognition of the limitations of both pure healthcare markets, on the one hand, and of a centralised controlling bureaucracy, on the other.3 The regulators work within a legislative framework and seek to minimise the chances of major quality lapses and to protect the markets for healthcare. In England, regulation goes beyond accreditation and public reporting of quality: it can carry sufficient weight to affect the survival of individual organisations and the jobs of individuals and boards within them. Yet it remains at arm's length from direct government control. In many countries, "governments have turned to 'regulation' as an appropriate balance between over-centralised governmental control on the one hand and an unbridled market on the other".4

The central task regulators perform at national level is daunting. For example, the national regulator in England, the Care Quality Commission (CQC), describes its role as "to monitor, inspect and regulate services to make sure they meet fundamental standards of quality and safety and we publish what we find, including performance ratings to help people choose care".5 Expectations can often be unrealistically high,6 and a national regulator cannot act as a guarantor of high-quality care at all times in all the places where care is delivered. Beyond the sheer breadth of services to be considered, quality of care is itself a complex and multifaceted concept. Moreover, assessing the quality of care often requires first-hand observation and experience, since it is rarely possible to rely solely on reported performance measures. Most regulatory systems therefore recognise the need for some form of inspection or on-site investigation. This incurs costs to both the regulator and the regulated, and the more inspection, the greater the potential burden on front-line services. For this reason, regulatory fashion talks of the need to be proportionate and targeted and to tailor inspection to the scale of potential problems: not doing the same thing everywhere, but focusing precious resources on the areas of highest risk.7 This is also seen as a way to minimise the burden and distraction of inspection for services that are already performing well and would probably not benefit.

These are worthwhile aims, but exactly how does one identify which services are at highest risk? The approach adopted by the CQC has been to make the most of existing data, using a series of indicators that are likely to be associated with poorer quality care in hospitals. The rationale is that by using existing data one can potentially reduce the burden on the service. The approach builds on earlier work in the Healthcare Commission8 9 and by others such as John Yates in Birmingham,10 11 who advocated the use of a series of process indicators to act as telltale signs of more important failures in service delivery. The scores produced by these indicators are not considered measures of performance in themselves; rather, they indicate a potential concern. The value of the risk scores will in part depend on how well they are able to identify genuine failures in care. But in practice devising such a system to identify risk is neither simple nor straightforward, and it is important that these systems are subject to continuous refinement through testing.
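To make the mechanics concrete, here is a minimal sketch of how a composite risk score of this kind can be assembled from breached-indicator flags. The indicator names, weights and example data are invented for illustration; they are not the CQC's actual IM methodology.

```python
# Illustrative sketch only: a composite risk score built from a handful of
# hypothetical indicators. Names, weights and thresholds are invented.

# Each indicator is flagged 1 if the trust breaches a predefined threshold
# (e.g. mortality above the expected range), 0 otherwise.
INDICATOR_WEIGHTS = {
    "mortality_alert": 2.0,        # outcome signals weighted more heavily
    "staff_survey_concern": 1.0,
    "waiting_time_breach": 1.0,
    "whistleblower_contact": 1.5,
}

def risk_score(flags: dict) -> float:
    """Weighted sum of breached-indicator flags for one trust."""
    return sum(weight * flags.get(name, 0)
               for name, weight in INDICATOR_WEIGHTS.items())

# Example: a trust breaching two of the four indicators.
trust_flags = {"mortality_alert": 1, "waiting_time_breach": 1}
print(risk_score(trust_flags))  # 3.0 -> higher scores prioritised for inspection
```

The substantive design questions (which indicators to include, how to weight them, where to set thresholds) are precisely where the continuous refinement through testing matters.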

Testing the targeting process

The paper by Griffiths et al12 published in this issue attempts just such a test. The authors examined the performance of the current prioritisation system adopted by the CQC. Known as the intelligent monitoring (IM) system,13 it generates a risk score aggregated from 150 indicators considered to be predictive of risk to quality of care. The study took the first tranche of 103 acute trusts to receive the new inspection regime and compared the summary rating awarded after inspection with the prior IM risk score. The test set by the paper, then, was whether the risk score would be related to the final rating. The authors found no significant relationship between the two scores and concluded that IM was failing to do its job.

We should note that IM was intended to prioritise inspection by identifying organisations at greatest risk of failings in quality; IM scores would therefore not necessarily be expected to track inspection ratings across their full range. There is some comfort for the CQC in that trusts rated 'inadequate' were more likely to have higher IM scores than other trusts, but the relationship was not statistically significant, and there were many false positives and false negatives.
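The shape of such a test is straightforward to sketch. The example below, on invented data, checks for a rank correlation between prior risk scores and ordinally coded inspection ratings, then counts the false positives and false negatives that a flagging threshold would produce; it is not the authors' actual analysis.

```python
# Sketch of the kind of test applied by Griffiths et al, on invented data:
# do prior IM risk scores predict subsequent inspection ratings?
from scipy.stats import spearmanr

# Hypothetical prior IM risk scores (higher = more concern) and inspection
# ratings coded 0=outstanding, 1=good, 2=requires improvement, 3=inadequate.
im_scores = [12, 3, 8, 15, 5, 9, 2, 11]
ratings   = [2, 1, 1, 3, 3, 2, 1, 1]

rho, p = spearmanr(im_scores, ratings)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")  # on the real data, no significant relationship

# Treating a score cut-off as a binary 'high risk' flag exposes the false
# positives and false negatives discussed above.
THRESHOLD = 10  # hypothetical cut-off
for score, rating in zip(im_scores, ratings):
    flagged, inadequate = score >= THRESHOLD, rating == 3
    if flagged and not inadequate:
        print(f"false positive: score {score}, rated {rating}")
    elif inadequate and not flagged:
        print(f"false negative: score {score}, rated {rating}")
```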

While it would have been nice to see a good match between prior IM risk scores and subsequent ratings, the absence of a strong relationship is not all that surprising. The paper compares two high-level aggregated summary scores, each constructed (with varying degrees of transparency) from many diverse elements that attempt to capture a complex construct such as quality of care (or risk of lapses in quality). Both the IM score and the inspection ratings have to encapsulate many different organisations offering hundreds of different services to millions of users. The findings of an inspection are summarised in a simple statement such as 'outstanding' or 'inadequate' to communicate them in a way that is simple and direct. But we know that each hospital is complex and multifaceted: even within an 'outstanding' organisation there will be elements of poor practice, just as in the 'inadequate' hospital there are examples of excellence.14

It is worth noting that the CQC does not rely solely on the IM risk score to prioritise trusts for inspection; it also draws on a wider palette of evidence.15 In fact, it seems that the information within the IM score is used more to shape the structure of an investigation, for example, to generate key lines of inquiry. The risk scores seem to be more important in providing context for inspectors. Indeed, the CQC is about to complete a round of inspections covering all hospitals in the country, meaning that over longer periods of time it has not been selective in the hospitals it inspects.

Creating an agenda for learning and development

So, do we throw away any attempt at targeting, resorting either to random selection or simple rotation when choosing which hospitals to inspect next? The idea of targeting and proportionality in regulation has been around for some time, and it has an appealing logic: to get the most from constrained regulatory resources, we need some relatively sophisticated ways to choose which services are inspected and how. Though it is a difficult task, I find it hard to believe that there are no prior indications of when quality of care may be at risk, and good regulators should seek to identify and use them. In fact, it would seem perverse if a regulator did not try to exploit existing information sources to shape its schedule of inspection. If you had to choose which organisation to inspect next, why would you look at the one where all the indications are positive in preference to one where the data suggest multiple problems?

Of course, as usual, the devil is in the details. There are many different ways of selecting and configuring the information, and it is not always clear what the best markers are. So, as Griffiths et al point out, it is important to test new approaches and adapt and evolve the information tools we use to identify risk. The information gathered from the first round of inspections should be used to recalibrate and refine the risk models. Perhaps most importantly, we need to recognise that the information used to detect risk will work better in some aspects of care than others. Some information or intelligence sources are better, more up to date and more focused on care delivery than others. So, for example, process measures related to access are relatively common and reasonably up to date, but for information about patient experience we may have to rely on general surveys that are a year or two old.
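One hedged illustration of what such recalibration could look like: fit a simple model predicting poor inspection outcomes from the pre-inspection indicator flags, and let the fitted weights guide which indicators deserve more reliance in the next round. The data and indicator names below are invented, and the real IM model is not claimed to work this way.

```python
# Sketch: refit indicator weights against actual inspection outcomes so the
# risk model learns from the first round of inspections. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: trusts; columns: hypothetical indicator breach flags observed
# before inspection (mortality, staffing, complaints).
X = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0],
              [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
# 1 = inspection found serious failings, 0 = it did not.
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)
# Indicators with larger coefficients earn more weight in the next round;
# those near zero are candidates for replacement with better data sources.
print(dict(zip(["mortality", "staffing", "complaints"],
               model.coef_[0].round(2))))
```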

Relying on existing information streams will inevitably tend to be backward looking and can easily create a picture that is out of date. So, it is important that targeting incorporates not just quantitative indicators but also qualitative intelligence specific to the organisation, including reports from local stakeholders and patient groups. Rather than relying solely on last month's waiting time figures or last year's hospital mortality rates, inspectors should also take notice of where patient groups, staff themselves or whistle-blowers report current problems in delivery. In theory, this type of intelligence can address gaps in the quantitative indicators and provide more focused and topical information. But exactly how can it be done in a way that separates important signals from general noise? It is to the CQC's credit that it is trying to work this out.16

There is therefore an important agenda to research and refine approaches to regulation, part of which is to improve targeting and surveillance. That agenda includes (1) identifying the information and intelligence that are better predictors of risks to quality and placing greater reliance on those, (2) exploring alternative data sources to assess quality domains where prediction is poor, (3) exploring the impacts of different thresholds and scoring systems and (4) finding ways to link quantitative information with qualitative observations. Above all else, regulators need to be able to assess the extent to which the complete package of assessment, publications and regulatory actions is leading to better healthcare services.17 While from an external perspective the notion of an organisation that learns and evolves is reasonable and desirable, the hotly contested political environment18 around healthcare regulation does not bode well for such an incremental approach. The ability of the regulator to adapt and evolve its approach to assessment is constrained by a number of factors, including legal requirements and a desire to treat all organisations in the same way. Often, there is demand for summary ratings that are comparable across organisations.19 Those being regulated understandably want prior warning of the nature of the regulation and the expectations placed upon them, but this can be difficult to achieve when regulatory systems purposely seek to change and adapt over time. If we want regulatory systems that make an impact, are affordable and minimise burden on front-line services, we will need to work out how to allow a regulator to experiment, adapt and evolve. Without this, we will be continually locked in the swings and roundabouts of regulatory style that have so dominated the past 20 years.
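As a small, hypothetical illustration of item (3) in that agenda, the sketch below sweeps candidate flagging thresholds over invented risk scores and shows how each threshold trades the share of genuine failures caught against the share of sound trusts flagged, which is the inspection burden a proportionate regulator wants to minimise.

```python
# Sketch of agenda item (3): explore the impact of different thresholds.
# Scores and outcomes are invented for illustration.
from sklearn.metrics import roc_curve

scores = [12, 3, 8, 15, 5, 9, 2, 11, 7, 14]   # prior risk scores
failed = [1, 0, 0, 1, 0, 1, 0, 0, 0, 1]       # 1 = inspection found failings

fpr, tpr, thresholds = roc_curve(failed, scores)
for f, t, th in zip(fpr, tpr, thresholds):
    # Each candidate threshold trades sensitivity against inspection burden.
    print(f"flag scores >= {th:.0f}: catches {t:.0%} of failures, "
          f"flags {f:.0%} of sound trusts")
```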



Footnotes

  • Competing interests The author acknowledges acting as an informal advisor to the CQC and former roles at the Commission for Healthcare Improvement and at the Healthcare Commission (the forerunners to the CQC). This included work developing models for surveillance to prioritise inspection and investigation of healthcare organisations, similar in some ways to the intelligent monitoring system examined in the companion study by Griffiths et al.12

  • Provenance and peer review Commissioned; internally peer reviewed.
