Background Unintentional discrepancies across care settings are a common form of medication error and can contribute to patient harm. Medication reconciliation can reduce discrepancies; however, effective implementation in real-world settings is challenging.
Methods We conducted a pragmatic quality improvement (QI) study at five US hospitals, two of which included concurrent controls. The intervention consisted of local implementation of medication reconciliation best practices, utilising an evidence-based toolkit with 11 intervention components. Trained QI mentors conducted monthly site phone calls and two site visits during the intervention, which lasted from December 2011 through June 2014. The primary outcome was number of potentially harmful unintentional medication discrepancies per patient; secondary outcome was total discrepancies regardless of potential for harm. Time series analysis used multivariable Poisson regression.
Results Across five sites, 1648 patients were sampled: 613 during baseline and 1035 during the implementation period. Overall, potentially harmful discrepancies did not decrease over time beyond baseline temporal trends, adjusted incidence rate ratio (IRR) 0.97 per month (95% CI 0.86 to 1.08), p=0.53. The intervention was associated with a reduction in total medication discrepancies, IRR 0.92 per month (95% CI 0.87 to 0.97), p=0.002. Of the four sites that implemented interventions, three had reductions in potentially harmful discrepancies. The fourth site, which implemented interventions and installed a new electronic health record (EHR), saw an increase in discrepancies, as did the fifth site, which did not implement any interventions but also installed a new EHR.
Conclusions Mentored implementation of a multifaceted medication reconciliation QI initiative was associated with a reduction in total, but not potentially harmful, medication discrepancies. The effect of EHR implementation on medication discrepancies warrants further study.
Trial registration number NCT01337063.
- medication reconciliation
- medication safety
- quality improvement
One of the most prevalent hazards facing hospitalised patients is unintentional medication discrepancies, that is, unexplained differences in documented medication regimens across sites of care.1 2 Unresolved medication discrepancies can contribute to medication errors and adverse drug events (ADE), resulting in patient harm.3 4 Nearly two-thirds of inpatients have at least one unexplained discrepancy in their admission medication history, and some studies found up to three medication discrepancies per patient.5–7 Medication discrepancies are caused either by history errors (ie, errors in determining a patient’s preadmission medications) or reconciliation errors (ie, errors in orders despite accurate medication histories, eg, failure to restart a medication at discharge that was held on admission).3 4
One way to minimise medication discrepancies is to perform high-quality medication reconciliation, defined as ‘the process of creating the most accurate list possible of all medications a patient is taking… and comparing that list against the patient’s admission, transfer, and discharge orders, with the goal of providing correct medications to the patient at all transition points within the hospital.’8 In research studies, hospital-based medication reconciliation interventions have consistently demonstrated efficacy in reducing medication discrepancies, though a positive impact on patient outcomes such as readmission has been less consistent and limited by study size.9 10
While medication reconciliation practices are required at all hospital care transitions, implementation has been challenging. For many hospitals, it involves change in work processes and additional tasks for clinicians. Furthermore, hospitals need clearer guidance on which interventions are more likely to be successful in their local environment.11 Lastly, while hospitals may document compliance with medication reconciliation processes to meet national regulatory requirements,12 13 it is unclear how much medication safety has actually improved. Indeed, a study at two large urban academic hospitals found that general medical inpatients averaged more than one potentially harmful discrepancy in medication orders despite documented completion of medication reconciliation.14 To identify and address implementation barriers, the Society of Hospital Medicine (SHM) in 2009 brought together 36 diverse stakeholders from 20 organisations for an Agency for Healthcare Research and Quality (AHRQ)-funded conference.15 Subsequently, SHM received research funding from AHRQ to conduct the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS).
The objectives of MARQUIS were to design an evidence-based toolkit of best practices in medication reconciliation, use a mentored approach to support implementation at six US hospitals, evaluate the effects of the intervention on unintentional medication discrepancies, identify the most important components of the intervention and determine barriers and facilitators of implementation. This manuscript reports the impact of the intervention on unintentional medication discrepancies, both total and those with potential for harm, the main quantitative findings of the study. We hypothesised that the intervention would reduce potentially harmful medication discrepancies over baseline temporal trends.
MARQUIS was a pragmatic quality improvement (QI) study with concurrent controls, using time series methodology to measure the incremental effect of the intervention over baseline temporal trends. Detailed descriptions of the intervention toolkit and study design have been published.16 17 Four sites’ institutional review boards (IRB) considered the study an exempt QI project, while two sites required informed consent of patients. The study was approved by the IRB of the central coordinating site. The study is registered at ClinicalTrials.gov (NCT01337063).
Six US sites originally chose to participate in this study: three academic medical centres, two community hospitals and one Veterans Affairs Medical Center. We purposely chose sites that varied in size, academic affiliation, geographic location and use of health information technology (HIT). However, all sites had several common features: (1) medication reconciliation was a priority; (2) hospital leadership was committed to making process improvements; (3) an active hospitalist group was engaged in QI; (4) a suitable hospitalist and/or pharmacist was identified as the site’s clinical champion; and (5) each site planned to use primarily its own resources for the study.
One site withdrew from the study after the period of baseline data collection due to changes in leadership and resources to complete the study, leaving five MARQUIS sites for evaluation. Each site chose one or more non-critical care medical or surgical inpatient units as intervention units. Any patients admitted to these units were eligible to participate in the study. Two sites were large enough to have concurrent control units, in which case intervention units were chosen based on receptivity to the intervention and/or need for improvement in their medication reconciliation processes.
Study outcomes were assessed from 6 months before implementation to a maximum of 25 months after implementation. At each site, trained pharmacist staff took a ‘gold standard’ medication history using a standard protocol on a random sample of approximately 22 hospitalised patients per month. A random number table for every day of data collection was used to assign the order in which admitted patients from the previous day were selected. The gold standard medication history process has been described in previous studies, where reliability has been shown to be moderate to high.14 18 This history was then compared with the primary team’s medication history and with admission and discharge orders. Discrepancies in admission or discharge orders due to errors in the primary team’s medication history were categorised as ‘history errors’ (eg, team did not realise patient was taking aspirin prior to admission and therefore did not order it). For discrepancies in orders not caused by history errors, the pharmacist reviewed the medical record for a clinical explanation, and if necessary, communicated with the primary team. This allowed pharmacists to distinguish unintentional medication discrepancies (ie, due to ‘reconciliation errors’, eg, team knew patient was taking aspirin prior to admission, documented it, held it on admission for clinical reasons, but forgot to restart it at discharge) from intentional medication changes. Pharmacists then categorised each unintentional discrepancy by timing (admission vs discharge orders), type (eg, omission, additional medication, discrepancy in dose) and reason (history error vs reconciliation error). Unintentional discrepancies in all these categories were then flagged for physician adjudication (see below). If pharmacists felt that a discrepancy had potential for severe harm, they contacted the primary team to address the error. Study pharmacists and patients could not be blinded to intervention status.
To ensure consistency in outcome assessment across pharmacists, the research team: (1) provided baseline training; (2) led monthly phone meetings to discuss a patient case and its medication discrepancies; (3) provided an updated frequently asked questions document for managing new situations; and (4) conducted site visits by the research team’s pharmacist (SL) to observe data collection processes and provide feedback. Sites also collected deidentified data on each study patient from computerised administrative sources. Based on the literature on risk factors for postdischarge medication discrepancies,14 we collected demographic, socioeconomic and clinical variables.
Physician adjudicators, blinded to the status of intervention implementation, reviewed the medical record and study pharmacist documentation, confirmed that discrepancies were unintentional, and confirmed pharmacist categorisation of timing, type and reason. They then adjudicated the potential for any harm using a six-point scale, dichotomised as in previous studies, and the potential severity (significant, serious, life threatening, fatal) of the unintentional discrepancies.18 19
Adjudicators received standardised training including a primer on medication safety, a guide on how to perform adjudication and standardised cases to review. To ensure consistent adjudications, the principal investigator (PI) conducted quarterly conference calls with the sites’ physician adjudicators to discuss cases from each site. Additionally, the PI and a coinvestigator reviewed six cases from each site quarterly and reviewed the results individually with each site’s adjudicators. With these steps, inter-rater reliability of discrepancies exceeded 80% across sites.
The evidence-based medication reconciliation toolkit20 was developed by the research team using expert recommendations,15 a systematic review10 and a conceptual framework of how medication reconciliation improves medication safety based on a modification of the Donabedian structure-process-outcome framework proposed by Brown and Lilford (online supplementary appendix figure 1).21 The model emphasises the need for interventions to address structure, management processes and clinical processes to be most effective and that impact on outcomes is dependent on environmental context, the number and types of interventions, and the fidelity with which they are implemented. The individual toolkit components included obtaining an accurate medication history from the patient and other sources (ie, a ‘best possible medication history’ (BPMH)),7 20 as part of a medication reconciliation bundle ideally delivered to all patients, empowering patients and caregivers to take ownership of their medication list (thus improving access to preadmission medication sources), training providers in BPMH and discharge medication counselling techniques, and risk stratification for deploying limited resources to high-risk patients. The toolkit also emphasised basic QI principles, assigning roles and responsibilities to clinical team members, and phased implementation. Additional toolkit components included effective HIT design and implementation and social marketing techniques. Each of the 11 toolkit components was framed as a standardised functional goal (eg, ‘Improve access to preadmission medication sources’), allowing sites to adapt specific manner of implementation to their local needs and circumstances. Sites were advised to, at a minimum, implement a core bundle that consisted of a BPMH, discharge medication reconciliation, patient discharge counselling and forwarding medication information to the next providers of care. 
Ultimately, based on their local needs and resources and with mentor input, sites prioritised which and how many components to implement and could phase-in intervention components at different time points. Hereafter, the ‘intervention’ is synonymous with implementation of the toolkit in this flexible manner.
MARQUIS used SHM’s mentored implementation approach,22 providing each site with one hospitalist mentor to facilitate toolkit implementation. Mentors with QI expertise were trained in SHM’s methods and performed distance mentoring through monthly calls with the study site’s mentee/local team leader.20 Each mentor had one or two sites (see online supplementary appendix table 1), and there was no turnover in mentorship. Topics for monthly calls included reviewing discrepancy data, evaluating progress along milestones, identifying challenges and successes, offering advice to overcome challenges and defining next steps and tasks with clear accountability. Each study site also received two mentor visits, approximately 5–10 and 16–19 months after starting interventions, depending on the site. These visits were important from a QI standpoint (eg, to maintain institutional support and better understand local practices) and from a research standpoint (eg, to assess barriers and facilitators of implementation). Additionally, SHM provided sites with support staff to assist with monitoring progress and collecting data.
At each study site the mentee led the local QI team and held regular meetings to oversee intervention implementation and data collection. Sites could access a central website with additional resources and a listserv. Monthly mentor calls and additional email communications promoted a consistent approach across sites.15
The primary outcome of the study was unintentional medication discrepancies in admission and discharge orders with potential for patient harm. Secondary outcomes included the total number of unintentional medication discrepancies per patient (regardless of potential for harm), discrepancies in admission orders versus discharge orders and discrepancies due to history errors versus reconciliation errors.
The number of potentially harmful discrepancies per patient was analysed using multivariable Poisson regression. All models used the number of medications in the gold standard medication list as a model offset given its close correlation with the number of discrepancies. To account for temporal trends and the varied introduction of interventions by site, we employed a longitudinal analysis on all patients across the five sites, evaluating outcomes monthly during the preimplementation and postimplementation periods.23 The outcome was assessed as both a change from site-specific baseline temporal trends (ie, change in slope) and sudden improvement with implementation of the intervention as a whole (ie, change in y-intercept). To adjust for concurrent controls, we also entered into the model any baseline differences in discrepancy rates and in temporal trends between intervention and control units, as well as sudden improvement in control units at the time when interventions started on other units (ie, to adjust for the effect of contamination). Additionally, we adjusted for patient demographic, socioeconomic and clinical variables, then manually eliminated non-significant collinear variables. We used generalised estimating equations to cluster by site. We repeated this process for the secondary outcome, total number of discrepancies per patient. We used multiple imputation for missing administrative data (which varied by site and characteristic: approximately 26% for marital status; 17%–19% for age, sex, prior admissions, insurance, length of stay and discharge destination; less than 2% for all other demographic variables). Due to restrictions on sharing patient-level billing data from sites, Elixhauser score and diagnosis-related group weight were missing in 60% and 54% of patients, respectively, but we received aggregated data by site for these variables to improve our imputation calculations.
In summary, our modelling approach allowed us to reduce confounding by comparing each unit to itself over time, adjusting for temporal trends and adjusting for patient case mix.
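To make the model structure above concrete, the sketch below illustrates (with assumed, not fitted, coefficients) how a Poisson log-linear model with the medication count as an offset yields an incidence rate ratio (IRR) per month, and how a reported IRR compounds over the study period:

```python
import math

# Illustrative sketch only (the coefficient b0 below is an assumption, not a
# value fitted in the study). The analysis modelled the count of discrepancies
# per patient as Poisson with the number of medications as an offset, so
#   E[discrepancies] = exp(b0 + b1 * month) * n_meds
# and exp(b1) is the incidence rate ratio (IRR) per month.

def expected_discrepancies(b0: float, irr_per_month: float,
                           month: int, n_meds: int) -> float:
    """Expected discrepancy count under a Poisson log-linear model
    with log(n_meds) as an offset (coefficient fixed at 1)."""
    b1 = math.log(irr_per_month)
    return math.exp(b0 + b1 * month) * n_meds

# The reported adjusted IRR of 0.92 per month for total discrepancies
# compounds multiplicatively: after 12 months the discrepancy rate per
# medication falls to 0.92**12, roughly 37% of its starting level.
relative_rate_after_12_months = 0.92 ** 12
```

Because the offset coefficient is fixed at 1, the model describes discrepancies per medication, which is why the run charts divide discrepancy counts by the number of medications per patient.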
We also present our results as ‘run charts’, illustrating the number of potentially harmful discrepancies per patient over time for each site. Because the statistical models use the total number of medications as a model offset, these discrepancy rates are divided by the total number of medications per patient. Superimposed on these run charts are the major interventions implemented by each site, based on the start date of implementation. Lastly, we present summary statistics of potentially harmful discrepancies for each site in the preimplementation and postimplementation periods on control and intervention units.
Power and sample size
For a stable estimate of temporal trends, each site’s data collection goal was 22 patients per month, beginning 6 months before implementation through a minimum of 21 months after implementation. Due to our study design, it was impossible to know a priori the nature of our postintervention data or the effect of any specific intervention. However, we assumed the number of medication discrepancies per patient would follow a Poisson distribution and that, at baseline, each hospitalised patient would have an average of 1.5 potentially harmful medication discrepancies per patient in admission and discharge orders combined.18 We also conservatively assumed that an intervention would be implemented at only one site with 12, not 21, months of follow-up due to delays in planning and phasing in the intervention widely. This would yield data from 133 patients before intervention and 266 patients after intervention. With these estimates and alpha=0.05, we would have 90% power to detect a reduction in the mean number of potentially harmful medication discrepancies from 1.5 to 1.1 per patient.18 Two-sided p values less than 0.05 were considered significant. SAS V.9.2 was used for all quantitative analyses.
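The power calculation above can be reproduced with a standard normal approximation for comparing two Poisson means; the sketch below uses only the design assumptions stated in the text (1.5 vs 1.1 discrepancies per patient, 133 vs 266 patients, alpha=0.05) and recovers approximately 90% power:

```python
import math
from statistics import NormalDist

def poisson_two_sample_power(rate0: float, rate1: float,
                             n0: int, n1: int,
                             alpha: float = 0.05) -> float:
    """Approximate power to detect a difference between two Poisson means
    (rate0 with n0 subjects vs rate1 with n1 subjects), using a two-sided
    normal approximation to the difference in estimated rates."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # two-sided critical value
    se = math.sqrt(rate0 / n0 + rate1 / n1)      # SE of the rate difference
    return nd.cdf(abs(rate0 - rate1) / se - z_crit)

# Design assumptions from the text: 1.5 -> 1.1 discrepancies per patient,
# 133 preintervention and 266 postintervention patients, alpha = 0.05.
power = poisson_two_sample_power(1.5, 1.1, 133, 266)  # approximately 0.90
```

This is a sketch of the general approach, not necessarily the exact SAS procedure the investigators used.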
Across the five participating sites, 1648 patients were enrolled from September 2011 to July 2014, including 613 patients during the preimplementation period and 1035 patients during the postimplementation period, of whom 791 were on intervention units and 244 on control units (see flow diagram, figure 1). The characteristics of the sites are shown in online supplementary appendix table 1 and patient characteristics are shown in table 1. Characteristics differed between usual care and intervention arms and across time due to the non-random selection of intervention units by site and the relatively small sample size compared with all admitted patients to these hospitals.
In the time series analysis, we found a significant reduction in potentially harmful medication discrepancies over time (ie, a temporal trend) during the preimplementation period in the control units. When adjusted for this trend, as well as any differences between control and intervention units at baseline or in the preimplementation temporal trend, we found that implementation of the intervention was associated with a reduction in the number of potentially harmful discrepancies over time: incidence rate ratio (IRR) 0.89 per month (95% CI 0.80 to 0.98), p=0.02 (online supplementary appendix table 2). This effect was attenuated after adjustment for patient factors and clustering by site (IRR 0.97 per month (95% CI 0.86 to 1.08), p=0.53). A more robust effect was seen for the effect of the intervention on total discrepancies, adjusted IRR 0.92 per month (95% CI 0.87 to 0.97, p=0.002; online supplementary appendix table 3). In this model, while discrepancy rates were fairly constant in control units throughout the study period, in the intervention units discrepancy rates started higher and were increasing over time prior to implementation, a trend that was reversed after the start of implementation in those units.
Before implementation, rates of potentially harmful discrepancies ranged from 0.17 to 1.00 per patient (table 2).
One participating site did not implement any toolkit intervention components during the postimplementation period, despite monthly mentor calls and site visits (site 1, including 164 patients in ‘intervention units’ who should have received interventions but did not). Of the four other sites that implemented anywhere from four to six different intervention components during the study period (see online supplementary appendix table 4), three sites (sites 2, 3 and 5) saw reductions in their potentially harmful discrepancy rate in the intervention units, ranging from 0.09 to 0.12 fewer potentially harmful discrepancies per patient, depending on the site.
Figure 2A–E demonstrates ‘run charts’ for each of the participating sites, including selected interventions during the study period and potentially harmful medication discrepancies per medication per patient over time. Site 4 (figure 2D) had a sudden, large, sustained increase in their potentially harmful discrepancy rate after implementation of a new electronic health record (EHR). Prior to EHR implementation, they experienced improvements in discrepancy rates with the intervention and in fact had the lowest discrepancy rate of all sites. Moreover, site 1 (figure 2A), which did not implement any toolkit interventions, also implemented an EHR during the study period and had an increase in their potentially harmful medication discrepancy rate. In contrast, the other sites showed variable improvements over time, depending on the number and types of interventions implemented.
With exclusion of site 4 in a post hoc sensitivity analysis, the remaining sites combined had a net reduction in total discrepancies in the intervention units. Also seen were decreases in discrepancies in admission orders and reconciliation errors, smaller decreases in potentially harmful and discharge discrepancies and an increase in history errors (online supplementary appendix table 5). In a post hoc complete case analysis of the two sites with control and intervention units in the preimplementation and postimplementation periods, there were non-significant trends towards sudden improvement in potentially harmful discrepancies (IRR 0.45, 95% CI 0.13 to 1.52, p=0.20) and total discrepancies (IRR 0.78, 95% CI 0.52 to 1.14, p=0.20) with implementation of the intervention (results not shown).
While adoption of a multifaceted medication reconciliation QI initiative using a mentored implementation model was associated with a reduction in total medication discrepancies, it did not reduce potentially harmful discrepancies per patient across all study sites in fully adjusted models, the primary outcome measure of the study. We observed heterogeneity across the five participating sites in terms of how many and which intervention components they chose to implement, confounding effects of EHR implementation and differences in site-level results. Encouragingly, of the four sites that implemented intervention components, three reduced potentially harmful medication discrepancies. This suggests that mentored implementation of medication reconciliation best practices improves medication use accuracy and likely improves medication safety, with site contextual factors influencing results.
We believe the potentially beneficial effects of our intervention at sites with successful implementation were due to a combination of the evidence-based components of the toolkit and our mentored implementation approach, which has been a successful means of spreading other QI interventions in the hospital setting such as prevention of venous thromboembolism, glycaemic control and the discharge process.24–26 To our knowledge, this is one of the largest multicentre medication reconciliation improvement studies conducted in the USA to date. Other studies have shown the benefits of interventions to improve medication reconciliation, usually at single sites, and often using only one or two intervention components.27 28 Also, because this was conducted as a ‘real world’ study (for example, sites were not provided with resources or personnel other than a small stipend for data collection), it provides a realistic assessment of the magnitude of likely benefit were this effort to be implemented more widely.
While the effects on improvement in total discrepancy rates were robust, the effects on potentially harmful discrepancy rates were less than expected (study powered on a 27% relative reduction), of borderline statistical significance in unadjusted analyses, and not significant in adjusted and clustered analyses. One likely reason is that physician adjudicators only found approximately 14% of discrepancies to have potential for harm, and thus the statistical power to examine this outcome was reduced (previous studies have shown more than 25% with potential for harm).13 Power was also affected by one site dropping out at the beginning of the study and another site failing to implement mentored interventions. There was a delay of more than a year between site selection (done at the time of grant submission) and beginning of the study, during which time leadership and institutional priorities and support changed. We were also surprised that the overall rate of discrepancies due to history errors did not improve. This is currently being explored in mixed methods analysis; preliminary findings suggest limitations in training existing providers in taking medication histories, without hiring additional staff, and without certifying competency in this skill. Finally, the effectiveness of the intervention was almost certainly affected by the exact nature of the interventions chosen (see online supplementary appendix table 4), the number of providers impacted and the fidelity of implementation. This study illustrates the challenges of implementing complex QI interventions in the real world, in particular medication reconciliation, which is resource intensive and involves complex multidisciplinary workflows.29
We were surprised by the large increase in medication discrepancies that occurred after EHR implementation at site 4 and the slightly smaller increase in discrepancies at site 1. These sites implemented two different vendor EHRs widely used by US hospitals. While the lack of intervention data in site 1 and control units in site 4 limits our ability to draw firm conclusions, the magnitude and duration of the negative effects are concerning and warrant further examination. Previous studies of dedicated medication reconciliation HIT, often using proprietary systems, did not show an initial negative effect on error rates and in fact showed overall benefit.30 31 In those studies, medication reconciliation was the major (if not sole) HIT focus of the institution for that year. That is very different from wholesale adoption of a vendor EHR, where the medication reconciliation component may not be attentively designed or locally customised. Additionally, attention may be divided among many other priorities, leading to inadequate attention to medication reconciliation processes and optimal use of the technology by individuals and teams. Wholesale implementation of a vendor EHR is fundamentally different from implementing stand-alone medication reconciliation software or making improvements to existing medication reconciliation HIT, which were considered components of the MARQUIS intervention. Deficiencies in the design of the medication reconciliation components of vendor EHRs are currently being explored.32
The work to conduct medication reconciliation does not come without a cost. In particular, taking a BPMH is time consuming, approximately 21 min per patient based on this study and on prior work.17 It could be argued that this work has never been adequately resourced, and to do so would require hiring additional staff (as some sites did in this study) or reallocating staff (potentially taking them away from other tasks). However, these costs may be more than offset if the results lead to fewer inpatient ADEs and/or fewer readmissions (depending on the hospital’s financial incentives). Such arguments have been effective in persuading some hospitals to make these investments in personnel.
Another medication reconciliation toolkit in common use is Medications at Transitions and Clinical Handoffs (MATCH).27 The MARQUIS and MATCH toolkits have much in common, including clearly defining roles and responsibilities of clinical personnel, flowcharting the design and redesign of medication reconciliation processes and educating patients and families. Both toolkits also discuss the importance of institutional support, having the right members on the project team, measuring outcomes and iterative refinement of interventions. MARQUIS provides more specific guidance about particular intervention components (eg, improving access to preadmission medication sources, hiring and training personnel to conduct specific tasks, improving HIT) and is more directive about measuring medication discrepancies as a standard way to truly understand the quality of the medication reconciliation process. MATCH places more of an emphasis on keys to successful implementation (eg, make the right thing to do the easiest thing to do within the patterns of normal practice).
There are several limitations to our study. The disadvantage of a real-world study is that we could not measure the potential impact of the intervention under ideal conditions (eg, by providing resources to hire new medication reconciliation personnel). Also, we did not measure intervention fidelity (eg, the number of patients who received discharge medication reconciliation and counselling). Study pharmacists may have artificially reduced measured discrepancies in control and intervention units by intervening in the event of a perceived major medication order discrepancy (eg, before discharge orders were written), biasing towards the null hypothesis. The choice of intervention and control units was not random, raising the possibility of confounding, or even moving attention and resources away from control towards intervention units; however, we minimised this effect by comparing each unit to itself over time, adjusting for temporal trends and robustly adjusting for patient case mix. The fact that the intervention units started with a higher total discrepancy rate that was increasing over time prior to the intervention, compared with control units, may indicate that these units were specifically chosen because they were in need of improvement; however, other explanations are possible. Also, not every site was large enough to have concurrent control units, but we accounted for this in our analyses. As noted above, in a post hoc analysis of the two sites with control and intervention units in the preimplementation and postimplementation periods, there were quantitatively large reductions in total discrepancies (and even larger for potentially harmful discrepancies) after implementation of the intervention, but these were not statistically significant, likely from the reduced sample size.
MARQUIS demonstrates the potential of an evidence-based toolkit and mentored implementation to improve medication reconciliation processes across a wide variety of hospitals. With several improvements planned to the toolkit, lessons learnt regarding implementation and the collective experience with MARQUIS, we hope future efforts will be even more successful. Specifically, we are now conducting a second round of mentored implementation with 18 additional sites to continue learning how best to implement medication reconciliation in diverse real-world settings.
The authors would like to acknowledge the following members of the participating sites’ medication reconciliation quality improvement teams, without whom this study would not have been possible: Randy Peto, Aaron Michelucci, Kyle Danis, Adam Pesaturo, Suzi Wallace, Alice Ehresman, Mihaela Stefan, Carlos Ronca, Monica Fitzgerald, Ryan Atwood, Hasan Shabbir, Beth Delrossi, Kathleen Herman, Jose Diaz, Mohammed Moussa, Rebekah Lafoe, Kimberly Miles, Erin Mowbra, Anita Rich, Christy Lee, Lori Hinton, Brad Foresythe, John Gardella, Becky Largen, Meghan Galloway, Jennifer Kerns, Hingne Priyanka, Kelly Hundley, Justin Metzger, Amy Aylor, Robyn Cruz, David Maddox, Chaitanya Are, Andrew Auerbach, Kirby Lee, Judy Maselli, Stephanie Rennke, Kanizeh Visram and Ryan Beechinor.
Contributors JLS had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: JLS, JS, TBW, PJK, SL, SK. Acquisition, analysis or interpretation of data: JLS, AM, JS, TBW, PJK, SM, JAM, EB, EJO, JG, NVN, SK. Drafting of the manuscript: JLS, SK. Critical revision of the manuscript for important intellectual content: JLS, AM, JS, TBW, PJK, SM, JAM, EB, EJO, JG, NVN, SK. Statistical analysis: EJO, EB. Administrative, technical or material support: JAM, JG, NVN. Study supervision: JLS.
Funding This study was supported by the Agency for Healthcare Research and Quality (grant number: R18 HS019598).
Disclaimer The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The funding agency was not involved in the design and conduct of the study; collection, management, analysis and interpretation of the data; and preparation, review or approval of the manuscript. The contents do not represent the views of the US Department of Veterans Affairs or the US Government.
Competing interests JLS has received funding from Mallinckrodt Pharmaceuticals for an investigator-initiated study of opioid-related adverse drug events in postsurgical patients. AM was funded by a VA HSR&D Career Development Award (12-168). SK has served as a consultant to Verustat.
Patient consent Not required.
Ethics approval Partners Institutional Review Board.
Provenance and peer review Not commissioned; externally peer reviewed.