
Point-of-care decision support for reducing inappropriate test use: easier said than done
Kevin Levitt1,2,3,4, Kaveh G Shojania4, R Sacha Bhatia2,3,4

1 Toronto East General Hospital, University of Toronto, Toronto, Ontario, Canada
2 Women's College Hospital, University of Toronto, Toronto, Ontario, Canada
3 Division of Cardiology, University of Toronto, Toronto, Ontario, Canada
4 Department of Medicine, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Kevin Levitt, Department of Cardiology, 650 Sammon Ave Suite 303, Toronto, ON, Canada M4C 5M5; klevi@tegh.on.ca


Healthcare expenditure on cardiovascular imaging, including echocardiography, has been growing rapidly; echocardiography growth rates have been in the range of 5–8% per year.1,2 In an effort both to understand the drivers of the growth of cardiovascular imaging and to help curb unnecessary use of these tests, the American College of Cardiology Foundation developed Appropriate Use Criteria (AUC),3 which now apply to all cardiac imaging modalities, including echocardiography. Despite the availability of such documents, uptake and utilisation of the AUC remain modest at best, and educational efforts alone have largely proven unsuccessful.4,5 Consequently, attempts have been made to implement the AUC using a variety of active methods, including decision support tools.

The study by Boggan et al6 sought to improve the appropriateness of transthoracic echocardiogram (TTE) ordering at a tertiary care Veterans Affairs hospital in the USA by incorporating a decision support tool into an electronic ordering system. The tool focused primarily on congestive heart failure and valvular heart disease and, interestingly, included the ordering of a brain natriuretic peptide (BNP) test. However, over the study period, which consisted of a 20-month baseline period and a 12-month post-intervention period, the overall number of TTEs ordered did not change significantly. An initial decrease in orders occurred in the first 6 months, but this effect did not persist. The authors did not specifically assess changes in the appropriateness of echo ordering during the study period, but rather the total volume of studies performed. It seems unlikely, however, that the approximately unchanged total volume masks a decrease in inappropriate TTEs offset by a compensatory increase in appropriate ones.

The null study results highlight the challenges of quality improvement research in general and the difficulty of effecting real, appreciable change in inappropriate ordering of diagnostic tests in particular. Initial enthusiasm for an intervention sometimes produces short-term improvement that disappears over time as clinicians revert to previous behaviours. Those promoting interventions that demonstrate early improvement must address sustainability to ensure that desired behaviours continue beyond the initial period of study. Maintaining any improvement is undoubtedly challenging: it requires reinforcement of the desired behaviours and review of the improvements achieved thus far. Moreover, local solutions that actively engage affected clinical groups and incorporate their suggestions into the design of the intervention are often the initiatives most likely to effect change. In particular, the intervention needs to be straightforward and should make doing the ‘right thing’, such as appropriate ordering, easier, while making the ‘wrong thing’ slightly harder.

The authors do not specifically mention whether they engaged key stakeholders (eg, clinician groups that frequently order TTEs) in developing the intervention. Iterative changes would certainly have been required, and engagement by those most intimately involved in and affected by the process would foster greater adoption and uptake of the decision support tool. Such interaction with the target group would allow greater appreciation of the reasons for change and may result in more meaningful uptake of the intervention.

A common difficulty in efforts to improve quality is that the target problem is not adequately characterised, and the degree to which the proposed intervention addresses the various factors contributing to the problem is not examined. One needs to consider the theory of the intervention.7 The use of decision support to curb inappropriate TTE orders assumes that the primary problem is a lack of knowledge and that the decision to request a TTE can be modified at the time of order entry.

In reality, however, the reasons for ordering TTEs are likely complex and may not primarily involve lack of knowledge of the AUC or be amenable to a point-of-care intervention. In a teaching hospital, investigations are usually ordered by house staff and trainees under the direction and supervision of attending physicians. If attending physicians (or senior fellows) instruct their trainees to order a TTE, an intervention aimed at the trainee entering the order is unlikely to have any effect on the appropriateness or volume of TTEs performed. Moreover, even if senior clinicians and trainees know the appropriate indications for TTE, they may have other reasons for ordering ostensibly inappropriate tests; for instance, the result may facilitate timely discharge from hospital or referral to a specialist. Thus, determining the common reasons for ordering inappropriate tests is crucial to developing an intervention to improve utilisation. Local data should be collected, as factors intrinsic to the institution may play key roles in ordering behaviour, and interventions should be tailored to the specific characteristics and dynamics of the institution. A ‘one-size-fits-all’ approach (eg, a generic decision support tool for overused tests) will likely produce disappointing results.

Nonetheless, ‘one-size-fits-all’ approaches have their appeal. In the particular case of decision support tools, once an institution has a well-functioning computerised order entry system (as does the Veterans Affairs system), why not deploy order screens that alert clinicians to appropriate or inappropriate indications? Once the computer ordering system exists, creating a new decision support tool incurs minimal additional cost and effort. Even if the theory for this approach is not optimal in a given situation, supplying a decision support tool may still be an efficient intervention.

In this regard, the results obtained by Boggan et al6 are fairly representative of similar attempts reported in the literature. Improvement interventions that employ a decision support tool (ranging from simple point-of-care computer reminders to more complex decision support) often produce only small improvements. One meta-analysis8 reported a median improvement of 4.2% (IQR 0.8–18.8%) from such tools; that is, across the 32 comparisons included in the review, only about 4% more patients received the desired process of care (or did not receive an undesired process, such as an inappropriate order). A minority of studies achieved larger effects, but no common characteristics explained these effects, and many came from a single institution with a home-grown computerised order entry system. Requiring the user to enter a response to the decision support tool (eg, the indication for the TTE) showed a trend towards greater improvement.8

Other recently published studies have employed decision support tools with some success. In a study we conducted, a decision support tool that replaced the traditional order form, combined with an educational intervention, produced a 39% reduction in inappropriate stress echocardiography and a corresponding 17% increase in appropriate studies; the engagement of key stakeholders was essential to the success of this intervention.9 Another study, by Lin and colleagues, incorporated a decision tool to improve the utilisation of cardiac testing in the evaluation of coronary artery disease. This point-of-order support tool reduced inappropriate testing from 22% to 6%.10

Audit and feedback is another tool that many may reach for when attempting to reduce inappropriate testing. A Cochrane review by Ivers et al11 demonstrated that this technique can produce modest but meaningful improvements in the desired behaviour (adjusted risk difference 4.3%, IQR 0.5–16%). We piloted two studies of physician performance feedback, one in the inpatient and one in the outpatient setting, each of which reduced inappropriate TTE ordering by over 50%.12,13 Again, the critical question for audit and feedback is the sustainability of the intervention: maintaining active audit and feedback interventions over time requires resources and institutional effort, and discontinuation has been shown to lead to a return to previous ordering patterns.

We believe the greatest chance of success in reducing inappropriate tests, such as TTE, is offered by a multifaceted intervention that includes some combination of clinician engagement and education, decision support tools, and audit and feedback. Engaging clinicians and applying this multifaceted approach over time, with interventions that reinforce the desired behaviour, is often required not only to effect change but also to provide the best opportunity for a sustained effect.14–16

While the task might seem daunting, the opportunity to reduce unnecessary tests, and the accompanying benefits, are potentially large. The problem of overuse of diagnostic testing is not unique to cardiology, and the interventions that have shown benefit in reducing overuse are generalisable to other modalities. Imaging for lower back pain and neuroimaging for syncope are also low-yield tests for which there is active interest in developing tools to improve appropriateness. The Choosing Wisely campaign was founded in 2012 by the ABIM Foundation to advance a national dialogue on avoiding wasteful or unnecessary medical tests, treatments and procedures.17,18

More than 70 specialist organisations in the USA have each released a top 5 list of ‘do not do’ laboratory tests or diagnostic imaging studies that add little value to the care of patients, and the campaign has spread internationally, with many countries launching similar efforts, attesting to the widespread interest in this topic.18 Despite its null results, the study by Boggan et al6 provides a helpful addition to the overuse literature. Tempting as it is to harness computerised order entry systems to curb inappropriate use with point-of-care decision support, as with other quality problems the intervention probably needs to be multifaceted, with careful attention to the local drivers of the problem and a theory for how the proposed intervention addresses them.

References


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
