It is generally accepted that quality improvement efforts in health service delivery are best guided by measurement of performance and activity. Because of this, there are thousands of metrics and indicators produced worldwide attempting to capture the processes performed by, and outcomes achieved by, healthcare providers. However, for any of these indicators to be most useful, the variability in the indicator should reflect the performance or activity of the organisation being profiled. For example, it has been shown that much of the variation in doctor communication skills and in non-steroidal anti-inflammatory drug (NSAID) prescribing can be attributed to individual physicians rather than the organisations they work for.1 2 Those are examples of variation being attributed to a lower level than that at which it is often measured—that is, performance measures for inappropriate NSAID prescriptions often focus on practices, even though variation may occur predominantly at the individual prescriber level.3
Yet variation in performance may also reflect features one level up from the unit typically being measured. This is the question addressed in this issue of BMJ Quality & Safety by Burton and colleagues4 in their analysis of UK data on fast-track referrals for suspected cancer from general practices. Specifically, they present an analysis of how much variation in general practitioner (GP) referrals for suspected cancer is attributable to local health services rather than to practices or their populations.
Referral pathways for suspected cancer have been available to GPs in England since 2000. These pathways enable rapid access to a specialist opinion or diagnostic test within 2 weeks (ie, 2-week wait (2WW) referrals) for patients with specified symptoms, based on referral criteria defined by the National Institute for Health and Care Excellence guidelines.5 Since 2009/2010 Public Health England (PHE) has publicly reported data6 on the use of these pathways. These data include, for each general practice in England, the standardised referral ratio, the cancer detection rate (ie, the sensitivity of referral) and the conversion rate (ie, the positive predictive value (PPV) of referral). There is substantial variation in referral use between general practices, which has been associated with mortality of patients with cancer.7
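The two practice-level accuracy metrics reported by PHE can be illustrated with a short calculation. The counts below are entirely hypothetical, chosen only to show how sensitivity (detection rate) and PPV (conversion rate) are derived from a practice's referral and diagnosis figures:

```python
# Illustrative (hypothetical) one-year counts for a single general practice.
referrals_2ww = 55      # urgent 2WW referrals made by the practice
cancers_via_2ww = 6     # cancers diagnosed following a 2WW referral
cancers_total = 8       # all new cancers diagnosed among the practice's patients

# Conversion rate: of the urgent referrals made, what fraction found a cancer?
# (This is the positive predictive value, PPV, of the 2WW referral.)
conversion_rate = cancers_via_2ww / referrals_2ww

# Detection rate: of all cancers diagnosed, what fraction came via the 2WW route?
# (This is the sensitivity of the 2WW referral pathway.)
detection_rate = cancers_via_2ww / cancers_total

print(f"conversion rate (PPV): {conversion_rate:.1%}")
print(f"detection rate (sensitivity): {detection_rate:.1%}")
```

Note the trade-off implicit in these two numbers: lowering the threshold for referral tends to raise the detection rate while lowering the conversion rate, which is why the two metrics are best read together.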
2WW referrals have continued to increase by 10% year on year, and are now running at over 2 million referrals per year in England, with significant cost implications. The total number of cancers detected via urgent referral has increased, with a concurrent decrease in those diagnosed via an emergency presentation to health services8 with worse outcomes.
At a practice level, urgent referral metrics for a single year can be based on relatively small numbers of referrals and cancer cases. The average full-time GP typically makes 50–60 2WW referrals per year and has eight to nine patients newly diagnosed with cancer annually. These metrics exhibit year-on-year random variation,9 with previous work focusing on potential primary care drivers of this, including differences in case-mix,10 referral selection accuracy and thresholds,11 12 and practice and GP characteristics.13 14
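The scale of this chance variation is easy to underestimate. A minimal simulation (all figures hypothetical: a practice list of 8,000 patients, each with a 0.1% annual probability of a new cancer diagnosis, giving roughly eight cases per year on average) shows how much yearly counts can swing through chance alone:

```python
import random

random.seed(42)

def yearly_cancer_count(n_patients=8000, p_cancer=0.001):
    """Simulate one year's count of new cancer diagnoses in a practice:
    each patient independently has a small probability of diagnosis."""
    return sum(random.random() < p_cancer for _ in range(n_patients))

# Five simulated years for the same practice, with no change in performance.
counts = [yearly_cancer_count() for _ in range(5)]
print("simulated yearly cancer counts:", counts)
print("range:", min(counts), "to", max(counts))
```

With an expected count of eight, purely random year-on-year swings of several cases are routine, which is why single-year practice metrics built on such counts are unstable and why aggregating over several years (as discussed below) is attractive.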
The study by Burton et al 4 is the first to analyse these data at the general practice, primary care organisation (206 clinical commissioning groups, or CCGs) and secondary care provider (126 acute hospital trusts) levels. They found that primary care organisations were associated with substantial variation in both the volume of referrals and the effectiveness of those referrals in capturing cancer cases. Specifically, 21% of the variation between general practices in the standardised 2WW referral ratio, and 42% of the variation between general practices in the cancer detection rate, was attributed to primary care organisations (ie, to CCGs). Their analysis went further and attempted to disentangle whether the primary care organisations responsible for commissioning services were the source of this higher level variation, or whether it should instead be attributed to the hospitals to which patients are referred. They found that for both the 2WW referral ratio and the 2WW detection rate, hospitals accounted for around two-thirds of the variability attributed to CCGs in the simpler analyses.
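Attributing a share of between-practice variation to a higher level organisation amounts to estimating a variance partition coefficient in a multilevel structure. A toy simulation (all parameters hypothetical, not fitted to the study's data) sketches the idea: each practice's indicator value is the sum of a CCG-level effect and a practice-level effect, and the CCG share is the between-CCG variance as a fraction of the total.

```python
import random
import statistics

random.seed(1)

# Hypothetical two-level structure: 20 CCGs, 50 practices in each.
# Standard deviations chosen so CCGs explain 0.25 / (0.25 + 1.0) = 20% of variance.
ccg_sd, practice_sd = 0.5, 1.0

values_by_ccg = []
for _ in range(20):
    ccg_effect = random.gauss(0, ccg_sd)  # shared by all practices in the CCG
    values_by_ccg.append(
        [ccg_effect + random.gauss(0, practice_sd) for _ in range(50)]
    )

# Crude plug-in decomposition (a fitted multilevel model would be used in
# practice): variance of CCG means vs average within-CCG variance.
ccg_means = [statistics.mean(v) for v in values_by_ccg]
between = statistics.pvariance(ccg_means)
within = statistics.mean(statistics.pvariance(v) for v in values_by_ccg)

share = between / (between + within)
print(f"share of practice-level variation attributable to CCG: {share:.0%}")
```

The recovered share lands near the 20% built into the simulation. The study's further step, splitting the CCG-level variance between commissioners and the hospitals practices refer into, follows the same logic with an additional (cross-classified) level.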
So what are the implications for the general practice level metrics being investigated? First, it is worth noting that the majority of variation is still associated with the general practice. Furthermore, by using 5-year aggregate statistics, and restricting the analysis to practices with at least 50 cancer diagnoses over that time period, the play of chance and random variation in case-mix are minimised.9 Thus, the substantial and apparently real variability in referral behaviour by different GPs provides some validity for the use of these metrics. However, care is needed in their interpretation, as a naive approach may lead to misclassification of the source of any under-referral. One option might be to provide indicator values adjusted for CCG; alternatively, benchmarking against local practices could be provided. Indeed, the latter option is offered by the PHE fingertips website6 which presents the metrics.
While the majority of the observed variation in referral is attributable to general practices and their constituent GP referral decisions, there are clear implications that the whole of the healthcare system also needs to be taken into account. This includes both national and local funding and capacity. General practices and their GPs are not working in isolation, with the majority of 2WW referrals accessing diagnostic investigation in secondary care. Ninety per cent of National Health Service (NHS) contact occurs in primary care, while only an estimated 1 in 20 consultations leads to referral to secondary care.15 However, GPs in the UK have substantially less access to diagnostic testing than many other comparable countries.16 For example, GPs in Australia have much greater access to diagnostic testing such as CT, MRI and endoscopy,17 while achieving substantially better cancer outcomes.18 Variation in secondary care treatment is also important to consider, with a recent study in lung cancer showing substantial geographical variation in treatment significantly associated with worse patient outcomes.19
The interface between primary and secondary care is a key target for the NHS in England going forward. The NHS long-term plan has the aim of reducing outpatient visits by up to 30% of the current 100+ million hospital outpatient visits per year, alongside the development of larger scale primary care networks.20 With increasingly elderly, multimorbid populations, and stretched health service staff and resources, these aims face huge challenges. There is some evidence of attempts by primary care organisations to manage referrals to secondary care, including potentially for suspected cancer via referral management centres.21 While technologies such as e-advice and virtual clinics22 23 have the potential to reduce some of these outpatient referrals and visits, when cancer or serious pathology is suspected there should be the facilities and resources for rapid referral and diagnostic testing. Otherwise, there is a clear risk of worse outcomes.7 With planned future investment in the NHS and other healthcare systems, the core underpinning of effective primary care population coverage needs to be emphasised. This includes continued work bridging the primary and secondary care divide, with a whole systems approach to face the challenge of earlier cancer diagnosis.24
So what do these findings say about quality measurement and improvement as a whole? The analysis performed by Burton and colleagues points to various sources of variation in the performance indicator studied. By implicating secondary care as a key source of variation, it suggests that intervention in a small number of hospitals could have a substantial impact in reducing variation between general practices. Perhaps we should encourage more studies of this type, looking both to higher and lower levels of the healthcare system to shine a light on sources of variation,25 26 and question whether the level at which an indicator is published is the right one. That said, we need to be careful not to throw the baby out with the bathwater. Every indicator is different; some will be dominated by higher level organisations and some will not. Some will be strongly influenced by chance and small numbers and others will not.27 Not until all indicators are subject to proper scrutiny can we decide which are really useful and which less so.
References
Footnotes
Twitter @drtomround
Funding TR is funded by a National Institute for Health Research (NIHR) Doctoral Research Fellowship (DRF; DRF-2016-09-054). GA is a senior faculty of the multi-institutional CanTest Collaborative, which is funded by Cancer Research UK (C8640/A23385).
Disclaimer The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Commissioned; internally peer reviewed.