Abstract
A summary of a systematic review of clinical audits of cancer referrals in England and Wales
- cancer
- waiting times
- audit
This article is based on a recent report from the Centre for Reviews and Dissemination (CRD) which summarised the findings of a systematic review of clinical audits undertaken to assess the implementation and effectiveness of the National Health Service 2 week waiting time policy for cancer referrals in England and Wales.1
Providing patients with prompt access to specialist cancer services is a major concern of health policy in England and Wales. In its 1997 White Paper "The new NHS, modern, dependable" the government gave a commitment that, by the end of 2000, everyone with suspected cancer would be able to see a specialist within 2 weeks of a referral by his or her GP.2 Guidance setting out the necessary action to achieve the 2 week waiting time target for breast cancer, to be implemented in April 1999, was issued in December 1998.3 Details of how the 2 week waiting time standard was to be achieved for all other suspected cancers were outlined in September 1999,4 with a roll out period for delivery of the 2 week wait standard until December 2000.
To assist the implementation of the 2 week wait standard for these suspected cancers, the Department of Health developed and issued guidelines on the appropriate referral of patients with suspected cancers.5 The guidelines included pre-specified criteria to help GPs identify those patients requiring urgent assessment by a specialist, those requiring a routine referral to hospital, and those unlikely to have cancer who could be reassured and appropriately observed in primary care.
To ensure the delivery of the 2 week standard, the guidelines indicated that NHS trusts are expected to monitor and feed back to GPs, via primary care trusts (PCTs), information on the number, timeliness and appropriateness of referrals.5 All NHS trusts and strategic health authorities (SHAs) are expected to collate and submit national cancer waiting time datasets (which include data on how many patients are seen within the 2 week wait standard) to a national database. In addition to these arrangements for routine monitoring, all NHS trusts and SHAs have been encouraged to carry out clinical audits of suspected cancer referrals to generate further information.6
To inform the National Institute for Clinical Excellence (NICE) review of the cancer referral guidelines, the CRD was commissioned by the Department of Health to carry out a systematic review of clinical audits undertaken to assess the implementation and effectiveness of the 2 week waiting time policy for cancer referrals. The revised cancer referral guidelines are due to be issued in March 2005. It was hoped that the results of the review would provide valuable information on the implementation of the current referral guidelines and show whether they are having an impact on service delivery.
METHODS
Identification of clinical audits
Because many clinical audits are only documented internally, emphasis was placed on systematically contacting relevant people across the NHS to identify audits. All NHS trusts and SHAs were contacted via the CRD single contact point (SCP) network. The CRD has developed and maintains a network of some 650 key individuals within NHS trusts and SHAs. Most SCPs have roles and responsibilities connected with clinical audit, effectiveness, or governance. These SCPs use their local knowledge and experience to communicate the findings of CRD outputs within their organisation.
Because contacting a single representative in each NHS organisation was unlikely to identify all potential audits, we also contacted a number of key individuals and organisations across the NHS including PCT cancer leads, cancer service collaborative (CSC) national clinical leads, cancer network managers, cancer network service improvement leads, cancer registry contacts, cancer screening services, the Commission for Health Improvement, and the Welsh Assembly. We also searched the websites of key organisations, posted requests for unpublished audits on relevant email discussion lists, and conducted hand searches of conference proceedings and searches of electronic databases (including grey literature databases and those that record abstracts submitted to conferences) including Health Management Information Consortium (HMIC), SIGLE, ISI Proceedings: Science and Technology, Inside Conferences, MEDLINE, EMBASE, CANCERLIT, National Research Register, and REFER. Full details of the search strategy are available elsewhere.1
Assessment of retrieved audits for inclusion
The audit reports obtained were independently assessed for inclusion by two reviewers using pre-defined inclusion criteria. Any disagreements were discussed and, if no agreement was reached, a third reviewer was consulted. If an audit appeared to be relevant but we were unable to confirm this because information was missing, attempts were made to contact the authors for more information.
To be considered for inclusion, an audit had to report minimum details of its methodology: at least a description of the included participants or of the data source. Any type of evaluation that measured the effectiveness (including timeliness and appropriateness) of the 2 week wait policy was considered.
Audits undertaken before April 1999 (when the first 2 week wait policy was introduced for breast cancer) were excluded. Furthermore, for audits restricted to a specific cancer site, those performed before the relevant introduction dates were also excluded. Clinical audits started before but completed after guideline implementation were included if more than 50% of the participants were seen after the implementation of the guidelines. Summary reports of the cancer waiting times datasets routinely collected by all NHS trusts to inform a national database were excluded.
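For concreteness, this date-based inclusion rule can be expressed as a short decision procedure. The sketch below (Python) is purely illustrative: the review applied these criteria manually, and implementation dates other than the April 1999 breast cancer date varied by cancer site during the roll out to December 2000.

```python
from datetime import date

# Illustrative only: the review applied these criteria manually.
BREAST_IMPLEMENTATION = date(1999, 4, 1)  # 2 week wait for breast cancer

def audit_included(start: date, end: date, implementation: date,
                   proportion_seen_after: float) -> bool:
    """Apply the review's date-based inclusion rule to a single audit."""
    if end < implementation:
        return False   # undertaken wholly before the policy: exclude
    if start >= implementation:
        return True    # undertaken wholly after the policy: include
    # Started before but completed after implementation: include only if
    # more than 50% of participants were seen after the guidelines
    # came into force.
    return proportion_seen_after > 0.5

# A breast cancer audit running January-June 1999 in which 60% of
# patients were seen after April 1999 would be included:
assert audit_included(date(1999, 1, 1), date(1999, 6, 30),
                      BREAST_IMPLEMENTATION, 0.6)
```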
Peer review comments on the draft protocol for this review included the suggestion that we should expand our inclusion criteria to incorporate audits conducted before the Department of Health implementation dates for comparison. However, because of financial and time constraints of the review, we were unable to do this.
Quality assessment
In order to identify existing methods of assessing the quality of clinical audit, we carried out an initial broad search of the literature for published checklists,7,8,9,10 from which an initial comprehensive list of quality criteria was developed. We then developed a shorter list (by discussion and consensus) which included components covering both generic clinical audit issues and the specifics of measuring cancer waiting times. We took the definition of clinical audit endorsed by NICE, which relates to criterion based audit, as the ideal audit methodology.10 Some studies that their authors called audits do not necessarily meet this definition (for example, non-criterion based audits, reports that simply describe clinical practice, and before and after research studies). We therefore classified the included audits/studies as follows: clinical audit (that is, criterion based audit), non-criterion based clinical audit (where practice is not compared with predefined criteria), or research study.
Data extraction strategy
Relevant data from each study were extracted using a pre-defined data extraction tool developed in Microsoft Access. The tool was piloted on a sample of clinical audits that met our inclusion criteria and then modified accordingly. Subsequent data extraction was carried out by one reviewer and checked by a second, with discrepancies resolved by discussion or, if necessary, by a third reviewer.
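As an indication of the kind of structured record such a tool captures, the sketch below shows a hypothetical per-audit extraction record. The field names are invented for illustration; the actual CRD tool was built in Microsoft Access and its field list is not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a per-audit extraction record; field names are
# invented for illustration, not taken from the actual CRD tool.
@dataclass
class AuditRecord:
    audit_id: str
    cancer_site: str                     # e.g. "breast", "colorectal"
    classification: str                  # criterion based / non-criterion based / research study
    sample_size: Optional[int]           # number of referrals or patients
    data_source: Optional[str]           # how included patients were identified
    percent_seen_within_2_weeks: Optional[float]
    percent_referrals_appropriate: Optional[float]
    extracted_by: str                    # first reviewer
    checked_by: str                      # second reviewer
```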
Synthesis
When reporting findings for 2 week wait referrals, included audits made inconsistent use of the term "urgent". Some differentiated between urgent referrals and 2 week wait (or fast track) referrals, some categorised the terms together, and others used the term "urgent referral" without specifying which patients were included. Where audits differentiated between urgent referrals and 2 week wait referrals, we only used the data described as 2 week wait referrals. Where audits did not differentiate between urgent and 2 week wait referrals but appeared to be describing 2 week wait referrals, we assumed that the data describe patients referred under the 2 week wait standard.
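Expressed as a decision rule, the handling of the "urgent" label looked like this. The sketch is hypothetical (the label names are invented); in practice the judgement was made by reviewers reading each report.

```python
from typing import Optional

# Hypothetical sketch of the synthesis rule for the "urgent" label;
# in practice this judgement was made by reviewers reading each report.
def two_week_wait_data(audit: dict) -> Optional[list]:
    """Return only the referral data treated as 2 week wait referrals."""
    if "two_week_wait" in audit:
        # Audit distinguishes 2 week wait (fast track) from other urgent
        # referrals: use only the 2 week wait data.
        return audit["two_week_wait"]
    if "urgent" in audit and audit.get("urgent_appears_to_be_2ww", False):
        # Audit uses "urgent" without distinction but appears to describe
        # 2 week wait referrals: assume the data refer to those patients.
        return audit["urgent"]
    # Term used without specifying which patients are included: unusable.
    return None
```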
Because most included audits were categorised as criterion based, this initial classification proved ineffective for identifying those likely to provide the most reliable results. Alternative criteria were therefore employed to determine which studies would be used to inform the summary of the overall findings. Only those audits (and research studies) that provided information on how included patients were identified and/or their data source were summarised in any detail.
RESULTS
A total of 624 clinical audits were received via correspondence with various individuals, including 576 from the CRD SCP network. In summary, 202 of the 238 acute trusts contacted (85%) responded; 186 of the 321 PCTs contacted (58%) responded; and nine of the 28 SHAs written to (32%) responded. A total of 241 clinical audits met the inclusion criteria; the number of single and multiple site audits included is given in table 1. A summary of all the included audits can be found at http://www.york.ac.uk/inst/crd/waittime.htm.
Table 1 Number of included single and multiple site audits
Using our own classification for methodology, 193 clinical audits were classified as criterion based, 36 as non-criterion based, and 12 as research studies. Fifty seven audits included all referrals with a suspicion of cancer and 119 included patients referred under the 2 week wait rule/urgent referrals. In three audits the type of referral was either unclear or not stated. Thirty one audits examined patients diagnosed with cancer and 31 looked at both patients referred and those diagnosed with cancer.
Table 2 provides an overview of the quality assessment for all the included audits. Most of the included studies were poorly reported. Fewer than half (44%) provided sufficient detail on methodological aspects for the audit to be reproducible. Fewer than 20% provided an action plan outlining any recommended changes to service delivery or how any changes would be implemented.
Table 2 Quality of included audits
The findings of audits (and research studies) that reported details on how included patients were identified or gave their data source (n = 173) are included in the summaries relating to the main outcome measures across all cancer sites shown in tables 3 and 4. The following limitations should be borne in mind when interpreting the results presented in these tables. There is large variation between the included audits in terms of their timing, sample size, the type of population examined, the type of sampling method used, the type of outcomes or audit criteria being evaluated, and how adherence to the guidelines or appropriateness of referrals was assessed. In particular, the small sample size of some of the included audits means that the reported percentages may be very imprecise estimates of what is happening for all patients.
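To illustrate the sample size point, consider the width of a simple (Wald) 95% confidence interval around an audited proportion. The figures below are hypothetical and for illustration only; they are not drawn from the included audits.

```python
import math

def wald_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for an audited proportion."""
    p = events / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# A small audit in which 10 of 20 referrals (50%) met the guideline criteria:
print(wald_ci(10, 20))    # about (0.28, 0.72): a 44 percentage point interval
# The same observed rate in an audit of 400 referrals:
print(wald_ci(200, 400))  # about (0.45, 0.55)
```

With samples of the first size, an audited percentage says relatively little about practice across all patients, which is why the summary tables should be read with caution.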
Table 3 Summary results across all cancer sites for outcomes related to adherence to guidelines
Table 4 Summary results across all cancer sites for cancer related outcomes
The main findings for each individual cancer site are described in detail in the full report.1
DISCUSSION
We were always aware that conventional literature searches would not identify all potentially relevant clinical audits for our review, and we therefore tried to devise search methods that would. We received a very positive response from the NHS to our request for clinical audits made during the initial stage of our search strategy.
Although most audits were obtained through the CRD SCP network, 8% (n = 48) of the records identified were obtained through contact with other individuals and organisations. This statement is somewhat simplistic because, in many instances, numerous follow up contacts were necessary before we actually received any audits. Given the logistical difficulties of obtaining information from the NHS, we cannot be sure that we have identified all potentially relevant clinical audits. Many trusts do not appear to hold a centralised record of all clinical audits performed within the trust, including those that did not involve the clinical audit department.
Many of the potentially relevant audits that we received were only available in abbreviated form, such as printed slide presentations or a single page of summary statistics. However, some of the audit reports were accompanied by offers from trusts to help with any queries or to provide further information. Unfortunately, owing to time constraints and the sheer number of audits received, we were unable to follow this up in most cases.
Quality of included audits
Being able to evaluate the quality of a clinical audit is central to informed decision making. The majority of included audits were poorly reported, with fewer than half (44%) providing sufficient detail on methodological aspects for the audit to be reproducible.
Poor reporting seriously compromises the integrity of the audit process. Many trusts do not appear to write up their audits in full. This may be because clinical audits are often not intended for publication, and because the audit process is so familiar to those undertaking it that reporting methodological details is deemed unnecessary. Audit reports should be written up in sufficient detail for a reader who did not conduct the audit to be able to ascertain how it was conducted.
Most of the included audits, even those available as a full report, did not report whether data collection and the population source were checked for accuracy, or provide details of how compliance with the audit criteria was assessed. Making such information available would allow a better evaluation of the validity of the results.
Although the referral guidelines indicated that NHS trusts were encouraged to carry out clinical audits of suspected cancer referrals,5 no supporting guidance on how best to conduct and report such audits was issued. The guidelines themselves were vague and non-prescriptive on this point and, as a consequence, front line health professionals were left to conduct some form of audit as best they could. The National Clinical Audit Support Programme commissioned by the Healthcare Commission is currently developing a programme of national clinical audits for breast, colorectal, head and neck, and lung cancer treatments, all designed to measure performance against explicit, nationally agreed standards (www.nhsia.nhs.uk/ncasp/pages/default.asp). Given the nature and quality of the audits presented in this review, explicit supporting guidance on how best to conduct audits of suspected cancer referrals also appears to be required.
Key messages
- Most included clinical audits were poorly reported and their results showed a wide variation in compliance with guidelines.
- Poor reporting can seriously compromise the integrity of the audit process.
- Audit reports should be written up in sufficient detail to allow the reader to ascertain how the audit was conducted and to assess the validity of the results and how these will be used to improve existing practices and procedures.
- The methods by which clinical audits of site specific cancers are conducted and reported should be standardised across the NHS.
There was wide variation in the findings of the included audits for all outcome measures (see tables 3 and 4), which seriously limited the interpretation of the summarised overall results. The diversity of the findings is not surprising, however, given that the included audits varied considerably in terms of their timing, sample size, type of population examined, type of sampling method used, type of outcomes or audit criteria being evaluated, and how adherence to the guidelines or appropriateness of referrals was assessed.
Are trusts making appropriate use of clinical audit?
Where clinical audits indicate the need for changes to processes, procedures, or the delivery of services, appropriate use of audit means ensuring that such changes are implemented and that further monitoring is used to confirm improvement in healthcare delivery.
There was wide variation across included audits in the proportion of site specific cancer referrals seen within 2 weeks, the proportion of referrals found to be in accordance with the symptoms listed in the guidelines, and the proportion of 2 week wait referrals deemed by consultants to warrant an urgent appointment.
According to the guidelines, information should be fed back to individual GPs and PCTs on the appropriateness of their referrals. In this review, 70% of included audits provided no details on whether the results were or would be fed back to individual GPs and PCTs.
Fewer than 20% of included audits provided details of an action plan outlining any recommended changes to service delivery or how any changes would be implemented. In addition, fewer than 20% of included audits reported any plans to re-audit.
It is possible that, owing to poor reporting, documentary evidence of action plans exists elsewhere and that any necessary changes to processes and procedures are being acted upon. Making such information available would make it easier for those not directly involved in the audit to assess whether, and in what ways, the audit findings are being acted upon.
Acknowledgments
The authors thank all the NHS staff who took the time to respond and to help identify and obtain potential clinical audits, and members of the advisory panel for their useful advice and constructive comments on the draft protocol and report.