Repurposing the Ordering of Routine Laboratory Tests in Hospitalised Medical Patients (RePORT): results of a cluster randomised stepped-wedge quality improvement study
  1. Anshula Ambasta1,
  2. Onyebuchi Omodon2,
  3. Alyssa Herring3,
  4. Leah Ferrie4,
  5. Surakshya Pokharel5,
  6. Ashi Mehta6,
  7. Liberty Liu3,
  8. Julia Hews-Girard3,
  9. Cheuk Tam7,
  10. Simon Taylor8,
  11. Kevin Lonergan9,
  12. Peter Faris10,
  13. Diane Duncan4,
  14. Douglas Woodhouse4
  1. Medicine, University of Calgary Cumming School of Medicine, Calgary, Canada
  2. Ward of the 21st Century, University of Calgary Cumming School of Medicine, Calgary, Canada
  3. Alberta Health Services, Calgary, Canada
  4. Physician Learning Program, University of Calgary, Calgary, Canada
  5. Ward of the 21st Century, University of Calgary, Calgary, Canada
  6. Health Quality Council of Alberta, Calgary, Canada
  7. Medicine, University of Calgary Faculty of Medicine, Calgary, Canada
  8. Medicine, University of Calgary, Calgary, Canada
  9. Analysis, Alberta Health Services, Calgary, Canada
  10. Measurement and Analysis; Research Excellence Support Team, Alberta Bone and Joint Health Institute; Alberta Health Services, Calgary, Canada
  Correspondence to Dr Anshula Ambasta, Medicine, University of Calgary Cumming School of Medicine, Calgary, Canada; aambasta@ucalgary.ca

Abstract

Background Low-value use of laboratory tests is a global challenge. Our objective was to evaluate an intervention bundle to reduce repetitive use of routine laboratory testing in hospitalised patients.

Methods We used a stepped-wedge design to implement an intervention bundle across eight medical units. Our intervention included educational tools and social comparison reports followed by peer-facilitated report discussion sessions. The study spanned October 2020–June 2021, divided into control, feasibility testing, intervention and follow-up periods. The primary outcomes were the number and costs of routine laboratory tests ordered per patient-day. We used generalised linear mixed models, and analyses were by intention to treat.

Results We included a total of 125 854 patient-days. Patient groups were similar in age, sex, Charlson Comorbidity Index and length of stay during the control, intervention and follow-up periods. From the control to the follow-up period, there was a 14% (incidence rate ratio (IRR)=0.86, 95% CI 0.79 to 0.92) overall reduction in ordering of routine tests with the intervention, along with a 14% (β coefficient=−0.14, 95% CI −0.21 to −0.07) reduction in costs of routine testing. This amounted to a total cost savings of $C1.15 per patient-day. There was also a 15% (IRR=0.85, 95% CI 0.79 to 0.92) reduction in ordering of all common tests with the intervention and a 20% (IRR=1.20, 95% CI 1.10 to 1.30) increase in routine test-free patient-days. No worsening was noted in patient safety endpoints with the intervention.

Conclusions A multifaceted intervention bundle using education and facilitated multilevel social comparison was associated with a safe and effective reduction in use of routine daily laboratory testing in hospitals. Further research is needed to understand how system-level interventions may increase this effect and which intervention elements are necessary to sustain results.

  • Audit and feedback
  • Continuous quality improvement
  • Continuing education, continuing professional development
  • Healthcare quality improvement
  • Hospital medicine

WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Low-value use of laboratory testing remains a global problem. There is a need to develop evidence-based interventions to improve the use of laboratory tests in healthcare settings.

WHAT THIS STUDY ADDS

  • Our multipronged intervention bundle was associated with an effective and safe reduction in routine laboratory testing through simple educational and facilitated social comparison tools.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • This study highlights the potential for targeted behavioural change interventions to reduce use of low-value laboratory testing in hospitals. Further understanding of the additional impact of system-level interventions is needed.

Introduction

Indiscriminate use of laboratory testing in healthcare has contributed to overuse, with 16%–56% of testing estimated to provide no clinical value.1 In hospitals, low-value laboratory testing often occurs in the form of daily repetitive use of a panel of routine tests.2 This is associated with hospital-acquired anaemia, which may lead to increased blood transfusions, prolonged length of stay and higher mortality for patients.3 4 Moreover, results obtained from routinely ordered tests have limited predictive value and may lead to further unnecessary tests and procedures related to false-positive results.5–7 There are currently no standard intervention strategies to reduce laboratory test overuse in hospitals, although a recent meta-analysis recommends multifaceted interventions.8 Most reported intervention studies are small,9–14 with variability in stakeholder engagement and test selection and poor sustainability.8 15 Moreover, only a few studies have examined safety outcomes.16 17

Our single integrated provincial health authority has identified reduction of laboratory test overuse in hospitalised patients as a quality improvement priority. Our multidisciplinary team, titled Repurposing the Ordering of Routine Laboratory Tests in Hospitalised Medical Patients, has previously published the pilot success of a multifaceted intervention bundle including education and multilevel social comparison in safely reducing the use of routine laboratory tests on a single medical unit.18 The aim of this current study was to evaluate whether the implementation of an enhanced and evidence-based intervention bundle, rooted in education and facilitated social comparison, can safely reduce the use of routine laboratory tests in hospitalised medical patients across eight medical units and four tertiary care hospitals.

Methods

Design, setting and study participants

Our single provincial health authority has organised the province into five geographical health zones, with each zone containing a network of healthcare facilities. This study was conducted in one of these health zones. This zone includes four adult tertiary care hospital sites, each with its own clinical teaching unit (CTU) and hospitalist unit. Medical patients are cared for either by internists on CTUs or by general practitioners (GPs) in hospitalist units. CTUs are teaching units that use a team-based structure to provide patient care.19 Learners (ie, medical students and resident physicians) work in 4-week blocks, supervised by an attending internist who rotates every 7–14 days. Laboratory tests are ordered by attending and resident physicians through the same electronic medical record (EMR) system (Sunrise Clinical Manager; Allscripts, Chicago, Illinois, USA). Orders entered by medical students require verification by a resident or an attending physician before processing. Hospitalist units are non-teaching units and medical care is generally provided by a single GP. Our study included eight units, with one CTU and one hospitalist unit from each of the four hospitals.

We implemented our intervention bundle across the eight units according to a randomised stepped-wedge design20 and followed the Consolidated Standards of Reporting Trials extension on stepped-wedge trials to report the study.21 As the study was undertaken as a health system quality improvement project, it was not pre-registered, nor was its protocol published. The study started in October 2020 and included a 4-week control period (no intervention), a 9-week feasibility testing period (November 2020–January 2021) where intervention elements were tested on all units, four 4-week intervention periods (11 January–2 May 2021) and one 8-week follow-up period (3 May–30 June 2021) after implementation on all units had been completed (figure 1). The order in which each site entered the intervention period was randomly determined using a random number generator. To minimise potential contamination between intervention and non-intervention units (through rotation of medical learners to different sites), all eight units received their intervention within 16 weeks. Study participants included attending physicians (internists and GPs), learners (medical students and resident physicians) on the eight medical units and admitted patients. All study activities ceased after 30 June 2021. The prior pilot had been conducted on one of the CTU units between March 2017 and March 2018.
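
As a rough illustration only, the snippet below shows one way such a random site order could be generated; the site labels and seed are placeholders, and the text does not specify the actual tool or procedure used.

    import random

    # Illustrative sketch only: generating a random order in which the four
    # sites enter the intervention. Site labels and the seed are placeholders,
    # not the study's actual randomisation procedure.
    sites = ["Site 1", "Site 2", "Site 3", "Site 4"]
    random.seed(2020)                      # fixed seed purely for reproducibility
    step_order = random.sample(sites, k=len(sites))
    print(step_order)                      # order of crossover to the intervention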

Figure 1

Study diagram for stepped-wedge implementation of the routine laboratory test optimisation INT bundle across eight units by site. CTU, clinical teaching unit; INT, intervention.

Intervention bundle: elements and delivery

Elements

We identified the ‘routine’ panel of tests for this study as the six most routinely used tests on these units: complete blood count, electrolytes (comprising sodium, potassium, chloride and CO2), creatinine, urea, international normalised ratio and partial thromboplastin time. Using previously published consensus-based recommendations guiding the use of these six laboratory tests,22 we developed a new online case-based educational module (https://cards.ucalgary.ca/deck/432) accredited by the Royal College of Physicians and Surgeons of Canada. The module included eight case studies followed by seven skill-testing questions. Posters with key messages were placed on participating units, and an electronic brochure (online supplemental appendix item 1) was distributed as a communication tool to participating physician and learner groups. An individualised report card was provided to participating attending physicians detailing their volume and costs of routine test use, as well as the number of patients per day who did not undergo any routine tests, relative to their peers (online supplemental appendix item 2). Feedback was also provided at a team level to CTU learners via email mid-block (week 2) and end-of-block (week 4).
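
For readers reproducing a similar audit, the routine panel defined above can be represented as a simple lookup used to classify EMR orders. The Python sketch below uses the six test names from the text; the function and field names are illustrative assumptions, not the study's actual data schema.

    # Illustrative sketch only: the six 'routine' tests named above, expressed
    # as a set used to flag routine orders in a laboratory data extract.
    ROUTINE_PANEL = {
        "complete blood count",
        "electrolytes",                     # sodium, potassium, chloride, CO2
        "creatinine",
        "urea",
        "international normalised ratio",
        "partial thromboplastin time",
    }

    def is_routine(test_name: str) -> bool:
        """Return True if an ordered test belongs to the routine panel."""
        return test_name.strip().lower() in ROUTINE_PANEL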

Delivery

On the day a site entered the intervention phase, a peer (internist or GP) sent an email to attending physicians introducing the study through the electronic brochure, providing a link to the online module and information on the reports, and inviting them to attend an online peer-facilitated session to discuss the reports. On the same day, attending physicians received a system-generated email in their secure health authority account with a copy of their report. Attending physicians received the reports every 2 months. Any laboratory test ordered on a patient under an attending physician’s care was attributed to that physician to help increase accountability. Facilitated sessions to discuss the reports were generally scheduled within the first 10 days of report delivery. These sessions were developed in partnership with the Physician Learning Program23 at the University of Calgary and were led by research team members AA (an internist) and DW (a GP). These 1-hour sessions were conducted according to the Calgary Audit and Feedback Framework.24 There were two distinct forms of session: an introductory session after the first report (a total of eight, one for each unit; eg, session 1 was only for site 1 hospitalists) and a combined follow-up session after the second report. The introductory sessions focused on relationship building, establishing the importance of the topic and ensuring that the data representation was adequate. The facilitators helped participants move through the phases of reacting to the data (seeking to understand, question, justify and contextualise) before moving to reflection and planning for change. The four follow-up sessions (one for each site; eg, session 1 for site 1 hospitalists and CTU physicians) opened with peer local champions sharing their own reports and then focused primarily on change planning, with physician participants identifying concrete strategies they would employ.

CTU learners were contacted via email by a peer study team member (an internal medicine resident physician) before or on the first day of the block to raise awareness of the initiative (through the electronic brochure) and the upcoming social comparison, and to provide a link to the online educational module. Learners received aggregate team-level reports on their test use and costs mid-block and end-of-block; both emails also served as reminders to complete the module. There were no peer-facilitated data review sessions for learners. The bulk of the intervention elements was delivered in the first few days of the block, with the physician-facilitated review sessions occurring within 10 days and the aggregate learner reports sent at weeks 2 and 4 of the block.

No intervention activities occurred during the control period. During the feasibility period, we sequentially trialled each intervention element (delivery of the educational module, attending physician reports, learner team aggregate reports and a practice feedback session with all unit champions) across all eight units and corrected issues such as incorrect physician lists, incomplete email addresses and incomplete data on reports. During the intervention period, intervention elements were delivered in a stepwise fashion across each site. During the follow-up period, intervention elements, including automated delivery of reports, links to the educational module and prescheduled facilitated sessions, were continued, but no additional emails or efforts to raise awareness of the project were made. No project-specific activities occurred after 30 June 2021.

Measures and data collection

We used a tracking tool (online supplemental appendix item 3) to ensure timely delivery of intervention elements. We also tracked completion of the online module, scores on the skill-testing quiz and attendance at the facilitated report review sessions. Our primary outcome was the number of the six routine laboratory tests ordered per patient-day. In addition to routine tests, we identified the ‘all common’ laboratory tests on these units (online supplemental appendix item 4), defined as the top 80 tests ordered by volume. Our secondary outcome measures included the cost of routine laboratory tests, the number of all common (ie, top 80 most commonly ordered) tests per patient-day, the number of patient-days that were free of routine and of all common laboratory tests, and safety endpoints (number of ‘stat’ tests ordered, number of blood cultures as a non-routine test, number of critically abnormal routine laboratory test results (online supplemental appendix item 5), patient length of stay, hours in the intensive care unit (ICU), 30-day readmission, and inpatient and 30-day mortality).
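
As a minimal sketch of how one secondary outcome could be derived, the Python snippet below counts ‘routine test-free’ patient-days by joining an order-level extract against a census of patient-days, so that days with no orders at all are still counted. The column names and example values are hypothetical, not the study's actual data structures.

    import pandas as pd

    # Illustrative sketch only: counting routine test-free patient-days.
    # Column names and values are hypothetical.
    orders = pd.DataFrame({
        "patient_id":   [1, 1, 2, 2, 3],
        "service_date": ["2021-01-11", "2021-01-12", "2021-01-11",
                         "2021-01-12", "2021-01-11"],
        "test_name":    ["complete blood count", "creatinine", "urea",
                         "blood culture", "troponin"],
    })
    routine_panel = {"complete blood count", "electrolytes", "creatinine", "urea",
                     "international normalised ratio", "partial thromboplastin time"}
    orders["is_routine"] = orders["test_name"].isin(routine_panel)

    # Census of patient-days under CTU/hospitalist groups (one row per patient-day),
    # so that days with no laboratory orders at all are included in the denominator.
    census = pd.DataFrame({
        "patient_id":   [1, 1, 2, 2, 3, 3],
        "service_date": ["2021-01-11", "2021-01-12"] * 3,
    })

    routine_counts = (orders[orders["is_routine"]]
                      .groupby(["patient_id", "service_date"]).size()
                      .rename("n_routine").reset_index())
    per_day = census.merge(routine_counts, on=["patient_id", "service_date"], how="left")
    per_day["n_routine"] = per_day["n_routine"].fillna(0)
    routine_test_free_days = int((per_day["n_routine"] == 0).sum())
    print(routine_test_free_days)   # 3 routine test-free patient-days in this toy data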

We used the hospital EMR system to collect data on the number and type of laboratory tests, and the attending physician and group, for each patient-day. Only patient-days under hospitalist or CTU attending groups were included in the cohort; for patients whose hospitalisation included a stay on other units (eg, surgery or ICU), only the days under CTU or hospitalist groups were included. We used previously published reference median costs of common laboratory tests to estimate the cost of each individual test.25 The reference median costs were derived from the price lists of all-inclusive indirect costs from six different clinical laboratories across Canada. We used the provincial health zone data repository system to obtain data on patient demographics and outcomes.
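
The per-test costing step can likewise be sketched as a simple lookup. In the Python sketch below, the unit costs are placeholders rather than the published reference median costs (reference 25), and the function name is an illustrative assumption.

    # Illustrative sketch only: applying reference median unit costs to ordered
    # test counts. The dollar values are placeholders, not the published costs.
    REFERENCE_MEDIAN_COST = {               # $C per test (hypothetical values)
        "complete blood count": 5.0,
        "electrolytes": 4.0,
        "creatinine": 2.0,
        "urea": 2.0,
        "international normalised ratio": 3.0,
        "partial thromboplastin time": 3.0,
    }

    def routine_cost_per_patient_day(test_counts: dict, patient_days: int) -> float:
        """Estimate routine-test cost per patient-day from aggregate test counts."""
        total = sum(REFERENCE_MEDIAN_COST.get(test, 0.0) * n
                    for test, n in test_counts.items())
        return total / patient_days

    print(routine_cost_per_patient_day({"complete blood count": 120, "creatinine": 90}, 250))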

Statistical data analyses

Based on historical data, we estimated a mean of three routine laboratory orders per patient-day and an average of 100 patients per site per day. With four participating hospital sites, 4-week intervention periods and an intracluster correlation of 0.1 based on our prior work, we would need 3500 patient-days per cluster to detect a 10% difference with greater than 90% power (two-sided alpha of 0.05; https://clusterrcts.shinyapps.io/rshinyapp/), a sample size we anticipated we would achieve.

Sample characteristics, such as patient age and sex, were described using means and proportions. The primary outcome was calculated by aggregating the total number of routine tests per unit per period. We used negative binomial regression to obtain point estimates and 95% CIs for the effect of the intervention on the count of routine laboratory test orders, with an offset of total patient-days included to model a rate. A random effect of unit was included to account for repeated measures of the units over the course of the study. Analyses of the intervention effect on secondary outcomes used the same methods as for the primary outcome. We used gamma regression to obtain point estimates and 95% CIs for the intervention effect on the total cost of routine laboratory test orders, and Duan’s smearing estimator26 was used for retransformation to obtain a dollar value for cost savings.
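
The analyses were performed in SAS; as a rough, non-authoritative illustration of the modelling approach described above, the Python (statsmodels) sketch below fits a negative binomial model of simulated unit-period test counts with a log(patient-days) offset, a gamma model of costs, and Duan’s smearing estimator in its classic log-OLS form. All numbers are simulated placeholders, and the random unit intercept used in the study is omitted here for brevity.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Simulated unit-period aggregates: 8 units over 6 periods, with pairs of
    # units crossing over to the intervention at successive steps. All values
    # are arbitrary placeholders for illustration only.
    units = np.repeat(np.arange(8), 6)
    periods = np.tile(np.arange(6), 8)
    switch = 1 + (units // 2)               # step at which each unit crosses over
    df = pd.DataFrame({
        "unit": units.astype(str),
        "period": periods.astype(str),
        "intervention": (periods >= switch).astype(int),
        "patient_days": rng.integers(800, 1200, size=48),
    })
    rate = 3.0 * np.where(df["intervention"] == 1, 0.86, 1.0)   # tests per patient-day
    df["routine_tests"] = rng.poisson(rate * df["patient_days"])
    df["routine_cost"] = df["routine_tests"] * rng.uniform(3.0, 4.0, size=48)

    # Negative binomial regression of counts with a log(patient-days) offset;
    # the exponentiated intervention coefficient approximates an IRR. (The
    # study's model also included a random unit intercept, omitted here.)
    nb = smf.glm("routine_tests ~ intervention + period", data=df,
                 family=sm.families.NegativeBinomial(),
                 offset=np.log(df["patient_days"])).fit()
    print("approximate IRR:", np.exp(nb.params["intervention"]))

    # Gamma regression (log link) of costs with the same offset.
    gamma = smf.glm("routine_cost ~ intervention + period", data=df,
                    family=sm.families.Gamma(link=sm.families.links.Log()),
                    offset=np.log(df["patient_days"])).fit()
    print("approximate cost ratio:", np.exp(gamma.params["intervention"]))

    # Duan's smearing estimator, shown in its classic log-OLS form: the smearing
    # factor rescales back-transformed log-scale predictions to the dollar scale.
    ols = smf.ols("np.log(routine_cost / patient_days) ~ intervention + period",
                  data=df).fit()
    smearing = np.mean(np.exp(ols.resid))
    cost_per_patient_day = np.exp(ols.fittedvalues) * smearing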

For all analyses, we used generalised linear mixed models to account for the different observation periods and repeated measurements. Sites were analysed according to their allocation sequence, which remained unchanged from the protocol. Period, based on calendar time, was included as a fixed effect for each step, with time modelled as a categorical variable; this accounted for the potential confounding effect of an underlying time trend in the outcome. Cluster was incorporated as a fixed effect because of the relatively small number of clusters (four) and their size (one site per cluster).27 28 Our analyses accounted for a possible interaction between the intervention effect and type of unit (CTU vs hospitalist). Data from the feasibility testing period were excluded from the analyses as ‘washout’.
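
In notation, the primary model described above can be written approximately as follows; this is a reconstruction from the description in the text (negative binomial count, patient-day offset, fixed period and cluster effects, random unit intercept), not the authors' published specification.

    \log \mathbb{E}[Y_{jt}] = \log(\mathrm{PD}_{jt}) + \beta_0 + \gamma_t + \eta_{c(j)} + \delta\,\mathrm{INT}_{jt} + b_j,
    \qquad b_j \sim \mathcal{N}\!\left(0, \sigma^2_{\mathrm{unit}}\right),
    \qquad \mathrm{IRR} = e^{\delta}

Here Y_{jt} is the count of routine tests on unit j in period t, PD_{jt} the corresponding patient-days, γ_t a fixed calendar-period effect, η_{c(j)} a fixed effect for the cluster (site) containing unit j, INT_{jt} an indicator of intervention exposure and b_j a random unit intercept; the reported interaction with unit type corresponds to allowing δ to differ between CTU and hospitalist units.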

Differences in safety endpoints between the control and intervention periods were determined by calculating differences in rates (per 100 patient-days for critically abnormal routine laboratory test results and stat tests; per 100 patients for 30-day mortality, inpatient mortality and 30-day readmissions) and means (length of stay and ICU hours) between the two periods. Ninety-five per cent CIs were calculated to determine whether differences between periods were significant. All analyses were conducted using SAS statistical software V.9.2.
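
As a minimal illustration of the safety-endpoint comparison, the Python sketch below computes a difference in event rates per 100 patient-days between two periods with a normal-approximation 95% CI, treating event counts as Poisson; the exact variance method used in the study is not stated, and the input counts are placeholders.

    import math

    def rate_diff_per_100(events_control, days_control, events_intervention, days_intervention):
        """Difference in event rates per 100 patient-days (intervention minus
        control) with a normal-approximation 95% CI, treating counts as Poisson."""
        rate_c = 100.0 * events_control / days_control
        rate_i = 100.0 * events_intervention / days_intervention
        diff = rate_i - rate_c
        se = math.sqrt(100.0 ** 2 * (events_control / days_control ** 2
                                     + events_intervention / days_intervention ** 2))
        return diff, diff - 1.96 * se, diff + 1.96 * se

    # Placeholder counts, for illustration only.
    print(rate_diff_per_100(450, 40000, 520, 42000))   # (difference, lower CI, upper CI)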

Results

Patient characteristics

A total of 125 854 patient-days (11 988 unique patient encounters) were included in our analysis with 31 619 CTU patient-days and 94 245 hospitalist patient-days. A total of 1240 unique patient encounters were COVID-19-related. All four sites had agreed to their involvement before they were randomised, and none dropped out after randomisation. All patient-days on CTU and hospitalist units across the four hospitals (sites) and eight units for the duration of the study were included in the analysis. Table 1 describes the baseline characteristics of patients included in the study period.

Table 1

Study sample characteristics

Fidelity

Sixty-four per cent of participants completed the educational module per block, and the mean score on the knowledge assessment test among those who completed the module was 79%. We conducted eight introductory facilitated report review sessions (one for each unit) and four follow-up sessions (one for each site). Of the 119 hospitalist physicians and 60 CTU physicians, 83 unique attending physicians attended the introductory session, the follow-up session or both.

Intervention effect

From the control period to the end of the follow-up period, 198 903 routine laboratory tests were ordered. Analyses showed an overall significant reduction of 14% (incidence rate ratio (IRR)=0.86, 95% CI 0.79 to 0.92) in the incidence of routine laboratory tests with the intervention, beyond any reduction noted during the control period. Because of a statistically significant interaction term (p<0.0001), a further subanalysis was done to obtain estimates and 95% CIs for the intervention effect in the hospitalist and CTU groups separately. The hospitalist group showed a significant reduction of 21% (IRR=0.79, 95% CI 0.73 to 0.86). The 9% reduction (IRR=0.91, 95% CI 0.83 to 1.01) seen in the CTU group did not reach statistical significance (table 2).

Table 2

Primary and secondary outcome measures at intervention sites compared with control sites during study period

An overall significant reduction of 14% in the total cost of routine laboratory tests was observed with the intervention relative to control, amounting to a total cost savings of $C1.15 per patient-day. A 15% (IRR=0.85, 95% CI 0.79 to 0.92) reduction in the incidence of all common laboratory tests (top 80 tests by volume) was noted with the intervention compared with the control, suggesting there was no increase in the use of other tests to compensate for the reduction in use of routine laboratory tests. Intervention sites showed a 20% increase in test-free patient-days compared with control sites for routine (IRR=1.20, 95% CI 1.10 to 1.30) and all-common laboratory tests (IRR=1.20, 95% CI 1.11 to 1.32).

Safety endpoints

Significantly fewer stat tests and blood culture tests per 100 patient-days were ordered during the control period than during the intervention period (ie, both increased with the intervention). A stat laboratory test in our hospital is ordered when the results are needed immediately for clinical decision making. However, length of stay was significantly shorter during the intervention period than during the control period. There were no significant differences between the control and intervention periods in the number of critically abnormal routine laboratory tests, ICU hours, inpatient mortality rate, 30-day mortality rate and 30-day readmission rate (table 3).

Table 3

Safety endpoints before and after intervention at the intervention site compared with control sites

Discussion

In this stepped-wedge implementation of an intervention bundle to reduce the routine use of laboratory tests, we noted an overall significant reduction of 14% in routine laboratory tests ordered per patient-day, associated with cost savings of $C1.15 per patient-day. We saw a similar reduction in all common tests, which suggests that the reduction in use of routine tests did not lead to increased use of other tests. We also saw significant increases in routine test-free and all common test-free patient-days with the intervention. Test-free days mean that patients were spared the sleep disruption and pain associated with phlebotomy, and that less laboratory personnel time was needed for blood draws. Alongside the increase in test-free days, we also saw an increase in the number of stat tests during the intervention period. Stat test orders are typically placed in our hospital when the results are required within the next couple of hours. We postulate that as providers refrained from pre-emptively ordering tests on certain patients, changes in clinical status evident during rounds (after the routine early morning phlebotomies) likely led to the addition of stat bloodwork to obtain timely results. In the future, delaying blood draws on our units to later in the morning (~11:00), so that providers have a chance to assess patients before the scheduled unit phlebotomies, may mitigate the increase in stat test orders. Blood culture testing is not routine and, as expected, our intervention bundle targeting routine tests was not associated with a reduction in blood culture testing.

More recent data examining trends in routine laboratory test use demonstrated sustained results in the hospitalist group. However, on the CTUs the changes were not sustained after the study period. Recognising the additional merits of system-focused interventions for effectiveness and sustainability,29 our team has worked with operational and medical team leaders in advance of the launch of a new provincial harmonised EMR system to design system-level changes, including modifications of order sets and entry processes.30 We are currently examining the effect of the provincial harmonised EMR deployment on the use of laboratory tests.

The differences noted between CTU and hospitalist units may be explained by differences in context. Learners on medical units are known to order more laboratory tests.31 In addition, CTUs have a rotating model with new learners in each block. Both of these factors likely affected the magnitude and sustainability of change on CTUs. As the delivery of education and feedback ceased after June 2021, new learners (particularly the new cohort of residents beginning July 2021) did not have the necessary skills to sustain the results. On the other hand, on the non-teaching hospitalist units, the effects of the education and facilitated group-based feedback appear to have been sustained. This suggests that teaching teams may require different strategies that account for both the rotating model of care and learners at different stages of training. Although the Charlson Comorbidity Index was comparable between the two groups across sites, it is possible that certain patient acuity factors not captured by the index were higher on CTUs. Moreover, we were unable to organise peer-facilitated review sessions for CTU learners, which may have limited the impact in this group. In the future, it would be useful to incorporate peer group-based sessions for more active engagement of learners.

The reduction in routine laboratory testing we observed is similar to the 11% reduction seen in our pilot and to that reported in other studies aiming to reduce low-value laboratory testing in hospitals.8 15 18 32 Consistent with the evidence favouring multicomponent initiatives,8 33 our intervention intentionally blended education, social comparison reports and facilitated review sessions to address barriers related to provider knowledge. We systematically tracked balancing measures and did not find any safety concerns with the intervention. The significant increase in stat tests ordered with the intervention did not translate into significant differences in the rates of critically abnormal test results or patient-relevant outcomes. Prior studies have also found no increase in adverse outcomes, including electrolyte abnormalities, readmission rates and mortality.16–18

Limitations

Our intervention was performed at hospitals in a single city, and our findings may not be generalisable to other cities. Creation of automated comparison feedback reports requires a robust electronic health record system, which may not be available everywhere. Although our intervention occurred during the COVID-19 pandemic, we had a relatively small number of COVID-19-related patient-days because each hospital also had cohorted COVID-19 units, which were not included. Hence, even though our intervention was effective in the context of the pandemic, its generalisability to patients with COVID-19 needs further study. Although our data set included information on the most commonly ordered tests, we did not have data on other specialised tests that may have been ordered. Despite working to create system-level design changes for the anticipated provincial EMR system, operational timelines and constraints limited our ability to combine person-focused and system-focused interventions. With the current roll-out of the provincial harmonised EMR system, we are studying the combined effect of both.

For some elements of the intervention bundle, fidelity of implementation could not be measured; for example, we do not know how many learners reviewed the aggregate laboratory test use reports or how many attending physicians reviewed their emailed reports. The module completion rate was relatively low at 64%, and attendance at the group data review sessions comprised 46% of all attending physicians. These numbers are likely underestimates since, to improve access, we deliberately waived mandatory registration requirements, leading to several instances of unidentified access to the module. For the facilitated review sessions, several attending physicians sometimes attended via the same login from a hospital conference room, and it was difficult to record and identify each participant. It is nonetheless encouraging that the intervention elements directly reached approximately half of our participants (through module completion and/or session attendance) and likely reached others indirectly, for example through reviewing their emailed reports, speaking with a colleague or completing the module without logging in. We were also unable to study cumulative patient-level outcomes, such as anaemia and transfusion requirements, in this study.

Finally, some elements of our intervention bundle (eg, the facilitated feedback review sessions) are resource-intensive. The study required hiring a half-time research assistant for coordination, in addition to time from research team members and in-kind support from our health system data analyst and the Physician Learning Program. We cannot determine which combination of the educational module, performance feedback, physician champions or the Hawthorne effect34 (whereby awareness of the study and data tracking itself changes behaviour) led to the observed outcome. Reassuringly, results were sustained in the hospitalist group despite cessation of all project activities in June 2021. From a resource standpoint, the reports are automated and easily (and freely) available on secure dashboards through health system logins for each individual, and they can be programmed for automated delivery as desired by each group. The facilitated feedback sessions were primarily organised to build stakeholder support for and acceptability of other intervention elements (including subsequently planned system-level interventions), such that the more resource-intensive, person-focused interventions could later be discontinued. With the CTU model of rotating learners, the module can easily be incorporated into the educational sessions already prearranged for learners during their CTU block, with discussion of the reports incorporated into rounds. Introducing this education early in medical training would help perpetuate and sustain a culture of appropriate ordering, reinforced by system-level changes.

Conclusions

A multipronged quality improvement initiative was associated with reductions in low-value inpatient laboratory testing among hospitalised patients on medical units with no concerning safety outcomes for patients.

Data availability statement

Data are available upon reasonable request. All outcome data relevant to the study are available from Alberta Health Services Data and Analytics team upon request in accordance with institutional policies and procedures. All process measure data are available from the corresponding author upon reasonable request.

Ethics statements

Patient consent for publication

Ethics approval

This study was approved by the Conjoint Health Research Ethics Board of our institution and university (REB17-1215). As this was a study of a health system quality improvement initiative, the review board waived patient participant informed consent, and participation of physicians in the facilitated review sessions and online module occurred through implied consent.

Acknowledgments

We acknowledge all participating physicians and learners for their engagement with the project, and the Physician Learning Program for their support with implementation.

References

Footnotes

  • Twitter @aambasta1, @JuGirard5

  • Contributors AA conceived and designed the study and wrote the first draft of the manuscript. DW oversaw study conduct and coordination. OO conducted all data analyses, guided by AA and PF. All coauthors contributed to the implementation and evaluation of the study and to critical review and redrafting of the manuscript into the final version. All authors consented to the publication of the final version of the manuscript. AA acts as the guarantor and accepts full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish.

  • Funding The funding body (Choosing Wisely Alberta) played no role in the design of the study; the collection, analysis and interpretation of the data; or the decision to approve publication of the finished article.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
