National cross-sectional cohort study of the relationship between quality of mental healthcare and death by suicide
  1. Brian Shiner1,2,
  2. Daniel J Gottlieb3,
  3. Maxwell Levis1,2,
  4. Talya Peltzman3,
  5. Natalie B Riblet1,2,
  6. Sarah L Cornelius3,
  7. Carey J Russ1,2,
  8. Bradley V Watts2,4
  1. Mental Health and Behavioral Science Service, White River Junction VA Medical Center, White River Junction, Vermont, USA
  2. Department of Psychiatry, Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire, USA
  3. Research Service, White River Junction VA Medical Center, White River Junction, Vermont, USA
  4. Office of Systems Redesign and Improvement, United States Department of Veterans Affairs, Washington, District of Columbia, USA

  Correspondence to Dr Brian Shiner, Mental Health and Behavioral Science Service, White River Junction VA Medical Center, White River Junction, VT 05009, USA; brian.shiner{at}va.gov

Abstract

Background Patient safety-based interventions aimed at lethal means restriction are effective at reducing death by suicide in inpatient mental health settings but are more challenging in the outpatient arena. As an alternative approach, we examined the association between quality of mental healthcare and suicide in a national healthcare system.

Methods We calculated regional suicide rates for Department of Veterans Affairs (VA) Healthcare users from 2013 to 2017. To control for underlying variation in suicide risk in each of our 115 mental health referral regions (MHRRs), we calculated standardised rate ratios (SRRs) for VA users compared with the general population. We calculated quality metrics for outpatient mental healthcare in each MHRR using individual metrics as well as an Overall Quality Index. We assessed the correlation between quality metrics and suicide rates.

Results Among the 115 VA MHRRs, the age-adjusted, sex-adjusted and race-adjusted annual suicide rates varied from 6.8 to 92.9 per 100 000 VA users, and the SRRs varied between 0.7 and 5.7. Mean regional-level adherence to each of our quality metrics ranged from a low of 7.7% for subspecialty care access to a high of 58.9% for care transitions. While there was substantial regional variation in quality, there was no correlation between an overall index of mental healthcare quality and SRR.

Conclusion There was no correlation between overall quality of outpatient mental healthcare and rates of suicide in a national healthcare system. Although it is possible that quality was not high enough anywhere to prevent suicide at the population level or that we were unable to adequately measure quality, this examination of core mental health services in a well-resourced system raises doubts that a quality-based approach alone can lower population-level suicide rates.

  • mental health
  • quality measurement
  • qualitative research

Data availability statement

Data on United States Department of Veterans Affairs (VA) users were obtained from the VA corporate data warehouse (CDW), which requires VA approvals and credentials to access. Data on suicide among VA users were obtained from the VA-Department of Defense Mortality Data Repository (MDR). Our data use agreements do not allow us to share CDW or MDR data. Deidentified suicide risk for the US general population, calculated using county-level estimates for the years 2013–2017, was obtained from the Centers for Disease Control and Prevention (CDC)'s Wide-Reaching online Data for Epidemiologic Research (WONDER) database, which is publicly available online (https://wonder.cdc.gov/). WONDER data are provided for the purpose of statistical reporting and analysis; the CDC prohibits the use of WONDER data for the purpose of identifying individuals.

Introduction

Healthcare systems’ suicide prevention efforts range from patient safety approaches, focused on suicide as an avoidable harm resulting from a clinical error, to quality improvement approaches, focused on suicide as a clinical outcome that can be avoided by providing high quality care.1 2 Patient safety approaches such as lethal means restriction have garnered attention because of their success in preventing suicide in high-suicide risk contexts.3 For example, the US Department of Veterans Affairs (VA) Mental Health Environment of Care Checklist (MHEOCC) is a lethal means restriction intervention that aims to identify and eliminate environmental hazards for suicidal behaviour on inpatient mental health units.2 Although implementing MHEOCC contributed to the near elimination of inpatient suicide across over 100 VA mental health units,4 it required significant capital investment for the sole purpose of preventing an exceptionally rare event.5 Even with the success of MHEOCC in inpatient settings, the large majority of suicides occur outside of inpatient mental health units, where healthcare systems have limited ability to intervene to eliminate environmental hazards. While there have been notable efforts at lethal means restriction to decrease outpatient suicide deaths,6–10 most of these interventions target broader societal policy issues and as such are beyond healthcare system influence. While the healthcare system can promote public health through interventions such as providing counselling and resources for firearms safety,11 clinicians face notable implementation barriers to lethal means restriction that may be beyond their control within the US sociopolitical context.12 Thus, patient safety approaches to suicide prevention may require augmentation with other approaches outside of highly controlled settings.

In order to address suicide from a quality improvement perspective, it is necessary to identify evidence-based clinical interventions that either directly address suicide risk or target related risk factors. However, there are relatively few randomised clinical trials of clinical interventions for suicide prevention, and the only interventions that are effective at decreasing death by suicide target especially high-risk populations such as those being discharged from an emergency room following a suicide attempt.13 Similarly, it has been challenging to demonstrate that there is an association between treating risk factors for suicide such as depression and subsequent reductions in death by suicide using randomised clinical trials.14 15 However, one notable exception is Detroit’s Henry Ford Health System, which saw a 75% decrease in suicide from 89 per 100 000 people in 2000 to 22 per 100 000 people from 2002 to 2005 among members of a regional healthcare system following the implementation of a system of ‘perfect depression care’.16 The treatment model was based on the principles of healthcare quality outlined in the National Academy of Medicine’s Crossing the Quality Chasm Report1 and included a commitment to consistent delivery of patient-centred, evidence-based care for depression, as well as a robust model for the continual evaluation and improvement of care processes.16 One of the many health benefits that resulted from excellent depression care was a sustained, lower suicide rate for a decade compared with the control period.17 18 To the best of our knowledge, there are no similar reports in the literature of healthcare systems achieving substantial reductions in suicide through broad implementation of mental health quality improvement.

If it is true that high-quality mental healthcare systems can reduce death by suicide and that these systems exist and can be identified, then the relationship between higher quality and lower suicide rates should be broadly observable. The VA, the USA’s largest health system, with an extensive focus on quality of care,19 access to mental health services20 and suicide prevention,21 presents a unique forum to evaluate this relationship. This study’s goal was to determine the relationship between the quality of mental healthcare and suicide rates among patients who access VA care (VA users). We have three specific objectives: (1) to calculate regional mental health quality metrics representing key mental health services and conditions, (2) to calculate adjusted regional suicide rates and (3) to determine the relationship between quality of care and suicide at the regional level. Demonstrating whether variation in the quality of mental healthcare is related to variation in suicide rate could lead to the identification of practices in areas with lower than expected rates that may aid in decreasing suicide in regions that encounter higher than expected rates.

Methods

Data sources and cohort selection

We identified all VA users from 2013 through 2017 using the VA corporate data warehouse,22 which is a repository of electronic medical records data. Patients were included in each year that they used VA services and were assigned to the zip code associated with their most common home address during that year. Covariates obtained from the corporate data warehouse included age, sex and race. VA users who reside outside of the USA or with missing zip code, race, age or sex were excluded from all analyses. We identified mortality status and cause of death, where applicable, using the mortality data repository,23 which is a cross-linkage of VA and Centers for Disease Control and Prevention National Death Index data maintained by the VA and Department of Defense.

We identified information on county-level suicide risk for the US general population from the Wide-Reaching online Data for Epidemiologic Research (WONDER) database, limiting to the combined years 2013–2017.24 Both VA users and the general US population were limited to age 18 years or older and were aggregated, by county, to 115 VA mental health referral regions (MHRRs), which are geographically contiguous groups of counties where the plurality of VA users obtain mental healthcare at the same locations.25 All 50 states and the District of Columbia are contained within the 115 MHRRs. Both VA and general population rates were adjusted for demographic variation using direct adjustment with the 2017 US population acting as the reference. We chose to aggregate 5 years of data to provide stable estimates of suicide rates within each MHRR.

Quality metrics

VA quality metrics are generally assessed based on where patients receive care.26 27 Because we assessed our outcome at the MHRR level rather than the facility level (see the Adjusted regional suicide rates section), we adapted VA metrics to calculate our predictor of quality for VA users living in these geographical regions. As shown in table 1, we chose quality metrics representing four core mental health service functions, including screening, effective medication treatment, subspecialty access and care transitions. We calculated these metrics for patients living in each MHRR with diagnoses and treatment scenarios that are especially common or high risk in the VA, including depression, substance abuse, post-traumatic stress disorder and inpatient mental health discharge,28–30 as well as a summary index. Additional details regarding our individual quality metrics are provided in the online supplemental material.


Table 1

Health system-level mental health quality metrics

We created our Overall Quality Index based on the assumption that areas with high performance on multiple metrics were likely to have high performance across unmeasured domains. The index was built using rank statistics to blunt the effect of outliers: for every metric, each of the 115 MHRRs was ranked from 1 to 115. The overall index is the mean of these ranks across all metrics, although we down-weighted the opioid use disorder (OUD) and alcohol use disorder (AUD) metrics to one-half so that, when combined, they have the same weight as the other domains. We also developed alternative indices using quartiles and a modified quartile scheme with three levels: bottom 25%, middle 50% and top 25%. The results were not substantively different for these alternative indices, so we report only the mean of ranks.
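As an illustrative sketch, the rank-based index construction described above can be written as a weighted mean of per-metric ranks. The metric names and adherence values below are hypothetical stand-ins; the actual metrics are defined in table 1 and the online supplement.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical per-region metric adherence (%) for 115 MHRRs; higher is better.
metrics = pd.DataFrame({
    "depression_screening": rng.uniform(20, 60, 115),
    "subspecialty_access": rng.uniform(2, 15, 115),
    "care_transitions": rng.uniform(45, 70, 115),
    "oud_medication": rng.uniform(10, 50, 115),
    "aud_medication": rng.uniform(5, 30, 115),
})

# Rank each metric from 1 (worst) to 115 (best) to blunt the effect of outliers.
ranks = metrics.rank(method="average")

# Down-weight the two substance use disorder metrics to one-half each,
# so that combined they carry the same weight as the other domains.
weights = pd.Series(1.0, index=metrics.columns)
weights[["oud_medication", "aud_medication"]] = 0.5

# Overall index: weighted mean of ranks, bounded between 1 and 115.
index = (ranks * weights).sum(axis=1) / weights.sum()
```

Because the index is a weighted mean of ranks, a region scoring worst (1) or best (115) on every metric would receive exactly 1 or 115, matching the bounds reported in the Results.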

Analysis

Quality of mental healthcare

We summarised each quality metric across MHRRs using the mean and SD. Because the mean performance differed substantially among metrics, we also calculated a relative SD (RSD) to describe a consistent measure of relative regional variation. We calculated RSD by dividing the SD by the mean.
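The RSD is simply the coefficient of variation. A minimal sketch follows; whether the authors used the sample or population SD is not stated, so the sample SD (`ddof=1`) here is an assumption.

```python
import numpy as np

def rsd(values):
    """Relative standard deviation: SD divided by the mean.

    Uses the sample SD (ddof=1); this is an assumption, as the
    paper does not specify which SD was used.
    """
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()
```

For example, `rsd([1, 2, 3])` is 0.5 (SD of 1 over a mean of 2), and a constant series has an RSD of 0.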

Adjusted regional suicide rates

Our primary outcome was suicide, as identified in the mortality data repository. We identified suicide deaths when the underlying cause was indicated by an International Classification of Diseases, 10th Revision, code in the following range: X60-X84, Y87.0 and U03. For the years 2013–2017, we calculated annual VA user suicide rates for each MHRR using the total number of annual users in a given MHRR as the denominator and the total number of suicide deaths which occurred among those users in the calendar year as the numerator. Numerators and denominators were summed across 5 years, divided by 5 and then expressed per 100 000 to give a 5-year annualised suicide rate for each MHRR. We selected 5 years as the period of analysis to ensure that all 115 MHRRs were reportable (based on >10 deaths) and stable (based on >20 deaths). To account for demographic differences of VA patients across MHRRs, we used direct standardisation to adjust crude annualised VA user suicide rates to the age (18–64 and 65+ years), sex and race distribution of the 2017 US population (our standard population). Race strata were defined using mutually exclusive categories (non-Hispanic (NH) white, NH black, NH Asian, NH American Indian and Hispanic). Using WONDER data extracts (as described earlier), we generated a comparison rate in the general adult population of each MHRR during the same time period using the same standard population and adjustment techniques. We then divided the VA user rate by the general population standardised rate to generate a standardised rate ratio (SRR).
Because suicide rates vary both significantly and consistently across the USA,31–33 we used the SRRs to account for variation in the plethora of contextual and collective factors that could affect population-level suicide risk across MHRRs such as poverty, social capital, climate and geography.34–37 An SRR of 1 indicates the standardised suicide rate was the same among VA users and the general population within an MHRR.
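The direct standardisation and SRR arithmetic can be sketched as follows. The stratum counts are invented for illustration (two strata stand in for the full age by sex by race cross-classification), and the real analysis used annualised 5-year counts as described above.

```python
import pandas as pd

# Hypothetical annualised counts for one MHRR; two illustrative strata
# stand in for the age (18-64, 65+) x sex x race cross-classification.
strata = pd.DataFrame({
    "va_deaths":  [30.0, 12.0],            # suicide deaths among VA users
    "va_pop":     [50_000.0, 30_000.0],    # annual VA users in the stratum
    "gen_deaths": [900.0, 400.0],          # deaths in the general population
    "gen_pop":    [4_000_000.0, 2_500_000.0],
    "std_pop":    [200_000_000.0, 50_000_000.0],  # 2017 US standard population
})

def directly_standardised_rate(deaths, pop, std_pop):
    """Weight stratum-specific rates by the standard population, per 100 000."""
    return float((deaths / pop * std_pop).sum() / std_pop.sum() * 100_000)

va_rate = directly_standardised_rate(strata.va_deaths, strata.va_pop, strata.std_pop)
gen_rate = directly_standardised_rate(strata.gen_deaths, strata.gen_pop, strata.std_pop)
srr = va_rate / gen_rate  # SRR of 1 means VA users match the general population
```

With these invented counts, the VA rate works out to 56.0 per 100 000, the general population rate to 21.2 per 100 000 and the SRR to about 2.6; because both rates are weighted to the same standard population, demographic composition cancels out of the ratio.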

Relationship between quality of care and suicide at the regional level

We used the same years to evaluate our quality of care predictors and our suicide outcomes (2013–2017) when possible. However, the psychotherapy templates used in our screening and specialty access metrics were not required by the VA or commonly used until 2015.38 39 Therefore, we used 2015–2018 data for these metrics. We calculated Pearson’s correlation coefficients between the overall quality summary index and suicide rates and SRRs at the MHRR level. Unlike SRRs, comparisons of quality across MHRR were not age-adjusted, sex-adjusted and race-adjusted.
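The correlation step reduces to Pearson's r between two vectors of 115 regional values. A minimal sketch with simulated stand-ins (the real analysis used the observed Overall Quality Index values and SRRs):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated stand-ins for the 115 regional values described in the text.
quality_index = rng.normal(58.0, 17.0, 115)  # Overall Quality Index (mean of ranks)
srr = rng.normal(1.9, 0.9, 115)              # standardised rate ratios

# Pearson's correlation coefficient between quality and the suicide outcome.
r = np.corrcoef(quality_index, srr)[0, 1]
```

A value of r near 0, as reported in the Results, indicates no linear association between regional quality and the suicide outcome.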

Sensitivity analysis for the effect of temporal trends in quality on suicide

Because we pooled 5 years of data to ensure stable estimates within MHRRs, we performed a sensitivity analysis to check for temporal trends. We used an individual-level logistic regression to predict suicide, adjusting for age, sex, race and ethnicity, annual marital status and our quality index (calculated annually). We entered the quality index as the ratio for a given year relative to the same MHRR in 2013, such that every MHRR had a value of 1 in 2013. We included fixed effects for year and MHRR to account for national trends and regional variation. All analyses were performed using SAS V.9.4.
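A toy version of the individual-level logistic regression can be sketched in plain numpy using Newton-Raphson. The data are simulated, and the real model also included marital status and fixed effects for year and MHRR, which are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Quality index ratio relative to the region's 2013 value, centred at 0
# (i.e. ratio - 1), so the intercept is well conditioned.
quality_ratio_centred = rng.normal(0.0, 0.05, n)
age_z = rng.normal(0.0, 1.0, n)  # standardised age as a stand-in covariate
X = np.column_stack([np.ones(n), quality_ratio_centred, age_z])
# Simulate a rare outcome unrelated to quality, mirroring the null finding.
y = (rng.random(n) < 0.01).astype(float)

beta = np.zeros(X.shape[1])
for _ in range(25):  # Newton-Raphson on the logistic log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))  # OR per unit change in the quality ratio
```

Entering the quality index as a within-region ratio against 2013, as the authors did, means the odds ratio captures the effect of a region improving (or worsening) relative to its own baseline rather than cross-regional differences in level.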

Results

The total population across all years was 8 667 459 unique individuals, and the mean annual population of VA users in each of our 115 MHRRs was 10 909 (SD=6307). There was substantial regional variation in demographics, individual quality metrics and suicide outcomes (table 2). MHRRs differed most dramatically in terms of per cent of patients reporting Asian race (RSD=205%). Regions were demographically most similar in terms of mean age (RSD=5.6%). In terms of individual quality metrics, we found the greatest degree of variation in the subspecialty care access metric (RSD=60.3%) and the least degree of variation in the care transition metric (RSD=10.6%). However, mean regional performance was generally low on all individual metrics, ranging from 7.7% (SD=4.7) on the subspecialty care access metric to 58.9% (SD=6.2) on the care transition metric. In terms of suicide outcomes, there were 9274 deaths by suicide among VA users across the 5 years of observation. The age-adjusted, sex-adjusted and race-adjusted annual suicide rates varied from 6.8 to 92.9 per 100 000 VA users between MHRRs. The age-adjusted, sex-adjusted and race-adjusted suicide rate varied from 6.7 to 29.9 per 100 000 population among the wider population living in these same regions, for an SRR that varied between 0.7 and 5.7, with a mean of 1.9 (SD=0.9). We note that the adjusted rates were nearly identical to the crude rates, and there were no substantive changes to our results when using adjusted rates in our analyses.

Table 2

Characteristics of VA MHRRs (N=115)

The correlation between individual quality metrics and suicide outcomes was negligible,40 at less than 0.3 in all cases (table 3). Correlations were similarly negligible whether we used an age-adjusted, sex-adjusted and race-adjusted suicide rate for VA users or an SRR to account for underlying geographical differences in suicide risk. However, the findings were statistically significant in two cases. First, the depression screening metric had a weak positive correlation with our suicide outcomes. Second, the effective medication for the AUD metric had a weak negative correlation with our suicide outcomes. The Overall Quality Index, which was bounded between 1 and 115 (in the case that a single region had the worst (1) or best (115) performance on all individual metrics), was normally distributed around a mean of 58.0 (SD=17.0), a range of 11.1–95.6 and an RSD of 29.2%. We found no association between either adjusted suicide rates and the Overall Quality Index (figure 1) or SRR and the Overall Quality Index (figure 2).

Figure 1

Scatterplot of Overall Quality Index versus age, sex and race-adjusted suicide rates per 100 000 at the mental health referral region level (n=115). VA, US Department of Veterans Affairs.

Figure 2

Scatterplot of Overall Quality Index versus SRR (comparing the US Department of Veterans Affairs population to the general population living in the same region) for suicide at the mental health referral region level (n=115). SRR, standardised rate ratio.

Table 3

Correlation of individual quality metrics and suicide outcomes among 115 Department of VA MHRRs

Our individual-level sensitivity analysis found no association between quality and risk of suicide (OR=1.01, 95% CI 0.96 to 1.01), indicating that within-region changes in quality were not associated with changes in suicide rates. The year terms were not significant, indicating our results were not affected by temporal trends.

Discussion

Using the 115 regions that define VA Mental Healthcare delivery within the USA, we found no overall association between quality of mental healthcare and demographically adjusted suicide rates. We used a set of quality metrics that both represented core mental healthcare processes and were aligned with metrics being used by the VA to judge hospital and clinic performance. We accounted for underlying geographical and demographic variations in suicide across the USA and still observed no relationship. Our findings indicate that there is no clear relationship between the quality of mental health services and suicide rates. While there were statistically significant correlations between two individual quality metrics and our suicide outcomes, the effects were small in magnitude and inconsistent in direction. Thus, our results do not support a convincing relationship between quality of mental healthcare and population-level suicide risk.

We did not observe the relationship between higher-quality mental healthcare and lower suicide rates previously reported by the Henry Ford Health System.1 16 17 It is possible we would have seen a stronger correlation had we focused on a single condition that has a strong association with suicide in our population such as depression29 41 and constructed comprehensive metrics across the continuum of care for that condition. While our quality metrics were generally focused on timely and effective treatment, the Henry Ford Perfect Depression Care Programme included goals across the National Academy of Medicine domains of quality including safety, patient centeredness, efficiency and equity.16 It is possible that VA users living in regions ranked most highly in our study could have received timely and effective care that was not patient centred, efficient or equitable. Thus, it is conceivable that the quality of VA mental healthcare was not sufficiently high to decrease population-level suicide rates in any region. Our difference in findings could also reflect that we examined suicide rates in the overall VA user population, whereas the Henry Ford analyses focused on patients under the care of the department of psychiatry.1 While we might have found a stronger association by focusing on high-risk groups such as mental health patients, previous VA initiatives that exclusively targeted users with the highest calculated suicide risk missed the overwhelming majority of deaths by suicide.41–43 Thus, we believe our population-level framework may be necessary to address the question of how a health system might decrease overall suicide rates.

There are several additional limitations to our study. First, our individual quality metrics may lack refinement. Even if we ignore the aforementioned issue that quality is a broad concept and we have limited ourselves to the easily measurable domains of timeliness and effectiveness, it is possible that our metrics do not accurately reflect the process of care. For example, our effective medication for the OUD metric only reflects whether patients received any supply of appropriate medications. Data indicating a mortality benefit for these medications suggest that sustained medication continuation and its associated clinical contacts may be required to achieve an effect.44 45 Second, there is very little research evidence about the care processes that actually prevent suicide.13 Therefore, this study attempted to measure overall mental health quality rather than quality of care specifically for suicide prevention. If the evidence base for suicide prevention were stronger, we would have been able to create stronger metrics for accountability, and driving improvement in those metrics could be expected to improve the outcome of suicide.46 Certainly, this has been the case for inpatient suicide prevention, where adherence to a safety-based intervention has resulted in sustained improvement in suicide rates.4 5 Given the current weakness of the evidence base for outpatient suicide prevention, such metrics would be unlikely to drive improvements in suicide rates and risk unintended adverse consequences such as shifting clinicians' attention away from evidence-based care processes that have positive impacts on other important outcomes.

In conclusion, our study found no association between the overall quality of mental healthcare and population-level suicide rates. This is not a reason to stop focusing on quality, as mental healthcare has many benefits other than reducing risk of suicide. Continued efforts to build the evidence base for suicide prevention are critical. This should include both the development of new treatments and the improvement of methods to understand whether existing practices reduce suicide risk.


Ethics statements

Patient consent for publication

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors All contributors meet the ICMJE-recommended criteria to be listed as authors on this manuscript. Each author made substantial contributions to conception and design (BS, DG, ML, TP and BVW), acquisition of data (BS, DG and TP), or analysis and interpretation of data (BS, DG, ML, TP, NR, SLC, CJR and BVW). Each author was involved in drafting the manuscript (BS and ML) or revising it critically for important intellectual content (BS, DG, ML, TP, NR, SLC, CJR and BVW). All authors gave the final approval of the version to be published, participated sufficiently in the work to take public responsibility for appropriate portions of the content and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

  • Funding This work was funded by the VA National Center of Patient Safety Center of Inquiry Program (PSCI-WRJ-SHINER) as well as the VA Office of Rural Health (ORH15533). The opinions expressed herein are those of the authors and not necessarily those of the funders.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
