Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.1 Though it is generally accepted that properly conducted randomised trials (and subsequent meta-analyses) provide the best evidence to answer a clinical question, such trials are not always possible and have been performed less frequently in surgery. Where randomised trials exist and good evidence is therefore available, this should guide therapeutic and diagnostic choices, but clinical judgement is still required to translate the results of these trials to an individual patient in daily practice, for instance because some subgroups of patients may have been excluded from the trials.2
In this issue of BMJ Quality & Safety, an interesting and informative article by Reeves et al entitled ‘Implementation of research evidence in orthopaedics: a tale of three trials’3 examines the impact of three widely known orthopaedic trauma randomised controlled trials (RCTs), the subsequent Health Technology Assessment (HTA) reports and their uptake in clinical practice. The two key questions were whether, and when, surgeons followed the recommendations of the trials. In two cases, a change of practice in line with the trial conclusions actually preceded publication; in the third, there was no evidence of a change of practice, either during or after the trial.
Apart from clinical trials, there are many sources of information that surgeons may consider when trying to adopt best practice, including clinical quality registries, peer-reviewed cohort publications, scientific presentations and peer practice, along with their own experience and clinical judgement as to whether the evidence is applicable to their own patients. In contrast to trials of medications, surgical trials may involve a new technique or a new prosthesis that requires a particular skill set.4 Conversely, surgeons have already spent considerable time acquiring and improving surgical skills for procedures they already perform, so they may find disinvestment hard to accept if, as the three trials examined by Reeves and colleagues3 demonstrated, less invasive treatment performed better.
While these trials all compared two forms of treatment for specific orthopaedic trauma, there are parallels with the role of knee arthroscopy, which has received close attention as a potential example of ‘low value care’5 and is currently part of Choosing Wisely campaigns6 in many countries. In 2002, Moseley and colleagues conducted an RCT comparing arthroscopic debridement and lavage with placebo surgery in patients with osteoarthritis (OA) of the knee. They found no difference in self-reported pain and function over a 24-month follow-up period.7 A Cochrane review published in 20088 concluded that there was gold level evidence that arthroscopic debridement provided no benefit for patients with OA. In view of this level of evidence, Bohensky and colleagues9 investigated the use of elective knee arthroscopy for adults with a diagnosis of OA in the state of Victoria, Australia. Similar to the methods used by Reeves et al,3 episode discharge data with procedural codes were used to determine whether arthroscopy had decreased following publication of the Moseley paper7 and the associated commentary and press around its findings. The study showed that, while the overall rate of knee arthroscopy had decreased, there was no sustained reduction in arthroscopy for patients with a concomitant diagnosis of OA. Further studies since then have again questioned the role of knee arthroscopy.10 11
The reasons for this are complex and some of them are discussed in the article. Despite published evidence questioning the effectiveness of a given procedure, surgeons’ personal experience with it may encourage them to continue, and this has been shown to reduce implementation of Choosing Wisely recommendations against knee arthroscopy.12 Similarly, the experiences of others may influence patients and their preference for a particular treatment.12 Implementation of evidence into practice may thus be delayed by several years, and further studies may be needed to ‘convince’ surgeons, and sometimes patients. In a private practice setting there may also be conflicting issues of remuneration, though this was not a factor in the Reeves study.
Two of the three trials in the paper resulted in a change of practice and one did not. Of interest is that in two trials, DRAFFT and ProFHER, the change occurred before or at the start of the trial.
The reasons for this are not clear, but at the heart of this research is the question of why the trials were commenced in the first place. To perform an RCT, particularly in surgery, there needs to be equipoise. In both of these trials, the less invasive method had been in use for many years and was compared with newer, more surgically invasive procedures. It is likely that the trials were organised after some experience with the newer methods had been gained, and surgeons may not have been convinced of their outcomes. Possibly, as the trials commenced, some surgeons were already altering their management and reverting to previous methods. Publicity around the trials, even before publication of the results and the HTA reports, may have led still more surgeons to change practice.
With regard to the AIM trial, there was no change of practice in line with the findings of the trial; instead, there was an increase in operations. There may well be practical reasons for this finding. Although close contact casting is not an ‘operation’, it may involve several casts applied by experienced plaster technicians who may not be available at all hospitals. There may also be repeated trips to the fracture clinic or theatre which, when discussed with patients, may lead them to choose a surgical option and a shorter time in plaster, in the belief that this will be the better option for them. Individual patient preferences may be poorly understood by surgeons, and different patients may vary in their choice of treatment when confronted with options.13 In addition, an RCT controlling for a specific intervention may not be as applicable in the ‘real world’; a potential criticism of some clinical trials is that they may lack external validity.
Overuse of medical services is estimated to be a widespread problem14 and, as well as being ineffective, some procedures may also cause harm. To aid the clinical uptake of evidence-based medicine, policy initiatives may need to go hand in hand with the publication of trials and subsequent recommendations. For some medical services or procedures, new evidence will not be implemented by itself but requires a managed process, referred to as deimplementation.15 For instance, Cheng and colleagues16 demonstrated a 58% reduction in knee arthroscopy for patients aged 50 years or over in hospitals that implemented a simple clinical governance process, compared with control hospitals in the district. Another source of information, apart from the episode discharge data used to monitor variation in practice, is clinical quality registries, which could be used to determine whether the results of well-designed trials translate to everyday practice.
The study by Reeves et al highlights interesting observations on the implementation of best practice and, with the increasing number of clinical trials performed in surgery, provides opportunities for further research to understand what drives change in daily practice.
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Commissioned; internally peer reviewed.
Data availability statement No additional data are available.