Variation and statistical reliability of publicly reported primary care diagnostic activity indicators for cancer: a cross-sectional ecological study of routine data
  1. Gary Abel (1)
  2. Catherine L Saunders (2)
  3. Silvia C Mendonca (2)
  4. Carolynn Gildea (3)
  5. Sean McPhail (4)
  6. Georgios Lyratzopoulos (2, 4, 5)

  1. Primary Care, University of Exeter, Exeter, UK
  2. Cambridge Centre for Health Services Research, University of Cambridge, Cambridge, UK
  3. Knowledge and Intelligence Team (East Midlands), Public Health England, Sheffield, UK
  4. National Cancer Registration and Analysis Service, Public Health England, London, UK
  5. Epidemiology of Cancer Healthcare and Outcomes (ECHO) Group, Department of Behavioural Science and Health, University College London, London, UK

  Correspondence to Dr Gary Abel, Medical School (Primary Care), University of Exeter, Exeter EX1 2LU, UK; g.a.abel@exeter.ac.uk

Abstract

Objectives Recent public reporting initiatives in England highlight general practice variation in indicators of diagnostic activity related to cancer. We aimed to quantify the size and sources of variation and the reliability of practice-level estimates of such indicators, to better inform how this information is interpreted and used for quality improvement purposes.

Design Ecological cross-sectional study.

Setting English primary care.

Participants All general practices in England with at least 1000 patients.

Main outcome measures Sixteen diagnostic activity indicators from the Cancer Services Public Health Profiles.

Results Mixed-effects logistic and Poisson regression showed that substantial proportions of the observed variance in practice scores reflected chance, variably so for different indicators (between 7% and 85%). However, after accounting for the role of chance, there remained substantial variation between practices (typically up to twofold variation between the 75th and 25th centiles of practice scores, and up to fourfold variation between the 90th and 10th centiles). The age and sex profile of practice populations explained some of this variation, by different amounts across indicators. Generally, the reliability of diagnostic process indicators relating to broader populations of patients, most of whom do not have cancer (eg, the rate of endoscopic investigations or of urgent referrals for suspected cancer, also known as ‘two week wait’ referrals), was high (≥0.80) or very high (≥0.90). In contrast, the reliability of diagnostic outcome indicators relating to incident cancer cases (eg, the percentage of all cancer cases detected after an emergency presentation) ranged from 0.24 to 0.54, well below the recommended threshold (≥0.70).

Conclusions Use of indicators of diagnostic activity in individual general practices should principally focus on process indicators, which have adequate or high reliability, and not on outcome indicators, which are unreliable at practice level.
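The practice-level reliability figures reported above follow the usual signal-to-noise definition: the proportion of observed between-practice variance in an indicator that reflects true differences between practices rather than chance. As a minimal illustrative sketch (not the authors' code), assuming a random-intercept logistic model on the latent logit scale, the reliability of a practice score with denominator n can be approximated as sigma2_between / (sigma2_between + (pi^2/3)/n); the variance component and denominators used below are hypothetical values chosen only to show the contrast between large-denominator process indicators and small-denominator outcome indicators.

  import numpy as np

  # Signal-to-noise reliability of a practice-level indicator estimate,
  # assuming a random-intercept logistic model on the latent (logit) scale.
  #   sigma2_between: between-practice variance of the random intercept
  #   n_patients:     indicator denominator for the practice (eligible patients
  #                   for process indicators, incident cancer cases for outcome
  #                   indicators)
  def practice_reliability(sigma2_between, n_patients, sigma2_within=np.pi**2 / 3):
      sampling_var = sigma2_within / np.asarray(n_patients, dtype=float)
      return sigma2_between / (sigma2_between + sampling_var)

  # Hypothetical values: a large-denominator process indicator comfortably
  # exceeds the 0.70 reliability threshold, while an outcome indicator based
  # on a handful of incident cancer cases does not.
  print(practice_reliability(0.05, n_patients=2000))  # ~0.97
  print(practice_reliability(0.05, n_patients=30))    # ~0.31

Under this definition, reliability depends on both the between-practice variance and the denominator, which is why indicators counting events across a whole practice population can be reliable while indicators restricted to each practice's incident cancer cases are not.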

  • primary care
  • health policy
  • performance measures
  • quality measurement

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/


Footnotes

  • Contributors GL and GA conceived the study. CLS, SCM and CG performed the analysis with guidance from GA. GL, GA, CLS and SM drafted the original manuscript. All authors contributed to the interpretation of findings and revisions to the manuscript.

  • Funding This work was funded by Cancer Research UK (NAEDI) Grant No. C18081/A17854. The funder had no role in the study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication. GL is supported by a Cancer Research UK Advanced Clinician Scientist Fellowship (C18081/A18180).

  • Competing interests All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support for the submitted work from Cancer Research UK; no financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.

  • Patient consent The study was conducted using routinely collected, anonymised aggregated data made available in the public domain; as such, consent from individual patients was not required.

  • Ethics approval This work was performed on data released by PHE under the Open Government Licence v2.0; as such, no ethical review was necessary.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Data are available from the Public Health England Fingertips website http://fingertips.phe.org.uk/.
