
Interventions to improve hospital patient satisfaction with healthcare providers and systems: a systematic review
  1. Karina W Davidson1,2,
  2. Jonathan Shaffer3,
  3. Siqin Ye1,
  4. Louise Falzon1,
  5. Iheanacho O Emeruwa1,
  6. Kevin Sundquist1,
  7. Ifeoma A Inneh2,
  8. Susan L Mascitelli4,
  9. Wilhelmina M Manzano4,
  10. David K Vawdrey2,
  11. Henry H Ting2
  1. Department of Medicine, Center for Behavioral Cardiovascular Health, Columbia University Medical Center, New York, New York, USA
  2. Value Institute, New York-Presbyterian Hospital, New York, New York, USA
  3. Department of Psychology, University of Colorado Denver, Denver, Colorado, USA
  4. New York-Presbyterian Hospital, New York, New York, USA

  Correspondence to Professor Karina W Davidson, Department of Medicine, Center for Behavioral Cardiovascular Health, Columbia University Medical Center, New York, New York 10032, USA; kd2124@columbia.edu

Abstract

Background Many hospital systems seek to improve patient satisfaction as assessed by the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. A systematic review of the current experimental evidence could inform these efforts, but none yet exists.

Methods We conducted a systematic review of the literature by searching electronic databases, including MEDLINE and EMBASE, the six databases of the Cochrane Library and grey literature databases. We included studies involving hospital patients with interventions targeting at least 1 of the 11 HCAHPS domains, and that met our quality filter score on the 27-item Downs and Black coding scale. We calculated post hoc power when appropriate.

Results A total of 59 studies met the inclusion criteria; of these, 44 did not meet the 50% quality filter (average quality rating 27.8%±10.9%). Of the 15 studies that met the quality filter (average quality rating 67.3%±10.7%), 8 targeted the Communication with Doctors HCAHPS domain, 6 targeted Overall Hospital Rating, 5 targeted Communication with Nurses, 5 targeted Pain Management, 5 targeted Communication about Medicines, 5 targeted Recommend the Hospital, 3 targeted Quietness of the Hospital Environment, 3 targeted Cleanliness of the Hospital Environment and 3 targeted Discharge Information. Significant HCAHPS improvements were reported by eight interventions, but their generalisability may be limited by narrowly focused patient populations, heterogeneity of approach and other methodological concerns.

Conclusions Although a few studies show some improvement in HCAHPS scores through various interventions, we conclude that more rigorous research is needed to identify effective and generalisable interventions to improve patient satisfaction.

  • Patient satisfaction
  • Healthcare quality improvement
  • Health services research
  • Patient-centred care
  • Quality improvement


Introduction

The importance of patient satisfaction has long been recognised,1 and it is increasingly emphasised by health systems, including those of the UK2 and the USA.3 In the USA, beginning in 2007, the Centers for Medicare and Medicaid Services (CMS) launched an ambitious programme requiring hospitals to report patient satisfaction through the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey in order to be eligible for annual Inpatient Prospective Payment System updates.4 HCAHPS results across 11 domains are also publicly reported through the Hospital Compare website (http://www.medicare.gov/hospitalcompare). Starting in 2012, the CMS programme for hospital value-based purchasing also incorporated HCAHPS survey scores to determine global bonuses or penalties for Medicare severity diagnosis-related group payments.4 ,5

The public reporting of HCAHPS scores and their inclusion in value-based purchasing have impelled hospitals and clinicians to closely monitor and improve their patient satisfaction and HCAHPS survey scores. Scientifically, much remains unknown regarding the impact of various interventions for improving patient satisfaction, the magnitude of improvement achievable and the contexts in which improvement efforts succeed. Given the scope of the CMS HCAHPS programme, a better assessment of which interventions are effective is vital for improving patient satisfaction in diverse healthcare settings.

We conducted a systematic review of all studies that employed experimental designs to improve hospital patient satisfaction as measured by the HCAHPS survey. As this is a large domain of possible interventions and practices, we focused specifically on hospital inpatients receiving interventions to improve patient satisfaction, compared with preintervention periods or control group(s), with the goal of improving HCAHPS scores.

Materials and methods

We conducted a systematic review of the literature using formal methods of literature identification, selection of relevant articles, data abstraction and quality assessment. We then assessed the scope and nature of the available research literature.

Searches

The search strategy was developed by one of the authors (LF), an information scientist. We searched electronic databases, including MEDLINE, EMBASE and the six databases of the Cochrane Library (inception to date of manuscript submission). The MEDLINE search strategy, which formed the basis for the search strategies for the other electronic databases, is shown in online supplementary appendix A. We also searched the following grey literature: Open Grey and NY Academy of Medicine Grey Literature Report.

Study inclusion and exclusion criteria

We included studies of inpatients with interventions targeting at least one of the 21 HCAHPS survey items. Only studies that reported one or more HCAHPS measure as an outcome were included. We excluded articles written in languages other than English. We restricted eligible studies to those of sufficient quality to allow data extraction and interpretation, as described below.

At least two reviewers (J.S. and S.Y.) independently screened the titles and abstracts of all of the citations retrieved by the search strategy to identify articles potentially meeting the inclusion criteria. When reviewers agreed that an article was eligible or a decision regarding eligibility could not be made because of insufficient information, the article was retrieved for full-text review. When reviewers disagreed on eligibility, the remaining team members were consulted and disagreements were resolved by consensus.

Data extraction strategy

We developed a data extraction form to: (1) confirm eligibility for full article review, (2) record study characteristics and (3) abstract relevant data regarding the intervention. Specifically, we abstracted the HCAHPS domain or domains that were targeted by each intervention, the intervention type and description, and the study results. HCAHPS scores are typically presented as percentages of patients who respond using the most positive category (see footnote i), that is, ‘top-box’ scores: ‘always’ for 5 HCAHPS domains, ‘yes’ for Discharge Information, ‘9’ or ‘10’ for Hospital Rating and ‘definitely’ for Recommend the Hospital. For example, if a study reports that a cohort of patients received a score of 75% on the item “During this hospital stay how often did nurses treat you with courtesy and respect,” this finding indicates that 75% of patients responded ‘always’ to this item. Percentage ‘top-box’ scores for each of the three nursing communication items are then averaged to yield the ‘top-box’ percentage for the HCAHPS Nurse Communication domain. Where possible, we present the improvement in ‘top-box’ scores.
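To make the ‘top-box’ arithmetic concrete, the sketch below shows how item-level responses roll up into a domain score. This is an illustrative example only, not code from any of the reviewed studies; the item names, responses and the assumption of a simple unweighted average across items are ours.

```python
# Illustrative only: how item-level HCAHPS responses roll up into a domain
# 'top-box' score. Item names and responses are hypothetical.

def top_box_percentage(responses, top_box="always"):
    """Share of respondents (in %) choosing the most positive category for one item."""
    return 100.0 * sum(r == top_box for r in responses) / len(responses)

# Hypothetical responses to the three Communication with Nurses items
nurse_items = {
    "courtesy_and_respect": ["always", "always", "usually", "always"],
    "listened_carefully":   ["always", "usually", "always", "always"],
    "explained_clearly":    ["always", "always", "always", "usually"],
}

item_scores = {item: top_box_percentage(r) for item, r in nurse_items.items()}
domain_score = sum(item_scores.values()) / len(item_scores)  # unweighted average across items

print(item_scores)
print(f"Nurse Communication domain top-box score: {domain_score:.1f}%")
```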

Study quality assessment and quality filter

We used the Downs and Black rating scale to assess the quality of the studies.6 This 27-item checklist assesses studies' reporting of objectives, outcomes, interventions and findings; external validity; internal validity; and confounding. Given the pre-post nature of most of the studies and the fact that different cohorts of participants were assessed during the pre-phase and post-phase, items pertaining to follow-up of the same patients were deemed not eligible for inclusion in the quality rating. In addition, as most of the retrieved citations were in abstract form, we could not assess quality for certain items across all studies. As such, we offer a prorated score percentage. For example, if we could only assess 20 of the 27 items on the checklist for a given study and that study received 10 points, it was assigned a quality rating of 50%. We defined our quality filter as a prorated quality rating of 50% or higher, and restricted our final sample to studies that met this criterion. As few studies presented data that could be submitted to a meta-analytic approach, we performed only a qualitative review of the evidence.
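A minimal sketch of the prorating described above, assuming one point per assessable checklist item (as in the worked example); this is our illustration, not the authors' scoring code.

```python
# Illustrative only: prorating a Downs and Black quality score when not every
# checklist item can be assessed (eg, abstract-only reports). Assumes one point
# per assessable item, as in the worked example in the text.

def prorated_quality_rating(points_awarded, items_assessable):
    """Percentage of assessable checklist points that were earned."""
    if items_assessable == 0:
        raise ValueError("no checklist items could be assessed")
    return 100.0 * points_awarded / items_assessable

rating = prorated_quality_rating(points_awarded=10, items_assessable=20)
meets_quality_filter = rating >= 50.0  # the review's inclusion threshold

print(f"Prorated quality rating: {rating:.0f}% (meets filter: {meets_quality_filter})")
```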

Results

Literature search and review process

We identified 548 unique studies in our initial search results. Of these 548, 98 were retained after title and abstract review for full-text assessment, and 59 were determined to be eligible for formal quality rating, as described above. A total of 15 studies were selected for final inclusion because they met our criteria for being of sufficient quality for data extraction and interpretation (figure 1).

Figure 1

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 flow diagram.

Description of studies

Eligible studies were published between 2013 and 2016. The sample size of the 15 eligible studies ranged from 72 to 3021 patients; however, especially for studies published in 2016, the sample sizes for the HCAHPS scores were often not reported, as these were often secondary outcomes. For evaluating the impact of interventions on HCAHPS scores, 10 studies featured pre-post designs, 4 were randomised controlled trials and 1 was a prospective, observational study.

Methodological quality

For the 15 eligible studies, the average prorated score was 67.3% (±10.7%). An additional 18 studies had quality ratings between 0% and 24%, and 26 had quality ratings between 25% and 50%; the average quality rating of these 44 studies was 27.8% (±10.9%). Few of the eligible studies provided enough information to rate whether adverse clinical events occurred, whether study participants were representative of the entire population from which they were drawn or the degree of compliance with the interventions. In addition, most studies provided limited information regarding whether attempts were made to mask participants or observers to intervention status. Few studies reported characteristics of the study participants, and even fewer reported whether confounding variables were considered in statistical analyses.

Intervention methods

As seen in table 1, eight studies targeted the Communication with Doctors HCAHPS domain, six targeted Overall Hospital Rating, five targeted Communication with Nurses, five targeted Pain Management, five targeted Communication about Medicines, five targeted Recommend the Hospital, three targeted Quietness of the Hospital Environment, three targeted Cleanliness of the Hospital Environment and three targeted Discharge Information.

Table 1

Description of high-quality interventions to improve Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) domains

Efficacy of interventions

Eligible interventions are presented with their quality rating and main results in table 2.

Table 2

Results of high-quality interventions on Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) domain scores*

Eight studies reported statistically significant results. One of these was a small randomised controlled trial, which found that the use of therapy dogs prior to physical therapy sessions for orthopaedic patients improved Pain Management, Communication with Nurses and Overall Hospital Rating.14 Two studies with pre-post assessment found, respectively, that constructing a new hospital building improved Cleanliness of Hospital Environment but did not impact other domains,16 and that physician education and real-time feedback of patient satisfaction via an information technology intervention improved the Communication with Doctors and Recommend the Hospital domains.12 Another pre-post assessment of a pharmacy team intervention found significant improvement in the Communication about Medicines domain,19 while an observational study assessing an intervention consisting of communication training for attending physicians found improvement in a single item of Communication with Doctors.17 A more complicated study assessed two sequential interventions: use of a ‘surgical flight plan’, followed by provision of a large menu of patient education videos via ‘SmartRoom’ technology.8 Although this latter study reported some statistically significant improvements in individual communication questions from different domains, these were based on multiple comparisons without correction, and domain scores were not reported. An additional study reported the results of advertising about the use and cleanliness of a portable ultraviolet disinfection device.10 Although the authors reported improvement in the Cleanliness of Hospital Environment domain, the sample size was not reported, and there was already a strong trend towards improvement for many HCAHPS domains even prior to the intervention. Similarly, a final study on the development and implementation of a standardised analgesia protocol for neurosurgery patients demonstrated improvement in Pain Management, but the authors state that persistent trends in improvement after the intervention argue for the presence of other system causes for the observed improvement.20

Seven additional studies did not report significant findings, either because statistical significance was not assessed, the study had inadequate power, or the interventions were implemented inappropriately or were truly ineffective. Two randomised controlled trials assessed interventions targeting physician communication, one by providing patients with physician face cards11 and the other by providing physicians with training and real-time patient satisfaction feedback.15 Although both demonstrated positive trends, the sample size for which HCAHPS scores were assessed was small, which may have limited their ability to detect statistical significance. Another pre-post assessment of a communication skills training programme for hospitalists also did not improve Communication with Doctors or Overall Hospital Rating.7 A randomised controlled trial of a nurse-led, language-concordant, hospital-based care transition programme did not improve any of the Communication or Discharge Information domains;13 similarly, a pre-post assessment of changing care management from a unit-based model to a service-based one did not affect the HCAHPS score for Recommend the Hospital.9 Finally, two studies did not report p values. One involved the development and deployment of a Pain Management education module for nurses on an orthopaedic unit, showing potential improvement in Pain Management,18 while the other was a personalised pharmacist intervention for transition of care, with potential improvement in Communication about Medicines.21 Both studies used HCAHPS scores for pre-post assessment but did not report sample sizes or statistical testing for HCAHPS comparisons.

Discussion

In this systematic review of interventions to improve HCAHPS scores, we found that most of the published studies were of low quality. For those of satisfactory quality, the most frequently targeted HCAHPS domains were Communication with Doctors, Communication with Nurses, Communication about Medicines, Pain Management, Recommend the Hospital and Overall Hospital Rating. These studies differed widely in approach, methodology and targeted patient population, and even the studies that reported statistically significant results often had caveats that would limit recommendations for adopting them at other healthcare institutions.

Our results also highlight the dilemma faced by healthcare institutions that seek to improve HCAHPS scores: it is unclear whether comprehensive approaches such as global physician education or new facilities are more effective, or whether it is better to target specific units or HCAHPS domains. Our review identified remarkably few high-quality designs and/or evaluations, with most demonstrating impact that was narrow in scope and small in magnitude. Across the heterogeneous domains assessed through the HCAHPS survey, we found little evidence of either specific or globally efficacious interventions for the HCAHPS domains. Nearly all of the studies located were of poor methodological quality, only a few employed a rigorous intervention design, and it is often unclear whether the effect on HCAHPS scores is the direct result of the intervention or is due to spillover effects. Thus, quantitative synthesis to estimate effect sizes was not possible. Among the studies that met our quality filter, a slight majority did report significant findings. However, caution is warranted in interpreting even these results, as the reported HCAHPS scores are often secondary outcomes collected through the mandated surveys and, as several authors acknowledge, could be influenced by other ongoing quality initiatives.

The lack of appropriate design, reporting and statistics among the additional 44 located but quality-ineligible studies is problematic for the improvement of patient satisfaction with hospital and provider care for many reasons. First, important and useful hospital/provider improvements tested among these possible interventions may go unrecognised because the studies did not have sufficient sample sizes or robust designs to assess their usefulness. Second, hospital and clinician initiatives, such as interdisciplinary rounding and commercial customer service training, are currently being implemented and disseminated by hospitals at great expense, but there is little published evidence suggesting these will result in improvements in patient satisfaction, particularly across diverse geographic and practice contexts. The absence of high-quality evidence about ways to improve the hospital experience for patients leaves healthcare leaders with little more than anecdotes to guide their strategic decision-making. For example, one healthcare leader conducted daily Chief Executive Officer (CEO) rounds,22 but it is not clear how beneficial this type of practice might be because anecdotal/single case studies are the only available evidence. In the absence of rigorous, actionable evidence on which to judge the appropriateness of interventions aimed at improving patient satisfaction, we cannot expect hospitals or clinicians to adopt best evidence-based practices.23

To help address these issues, it would be useful for future studies to adopt more rigorous approaches. These would include formal power calculations that take into account reasonable assumptions for effect size and local survey response rate. The latter is particularly important, as in our experience it is often no longer feasible to directly administer surveys using HCAHPS items as part of study protocols, owing to concern for contamination with CMS-required surveys. This likely explains our observation that more recent studies have tended to use HCAHPS scores obtained through the mandated surveys as secondary outcomes. An example of such a power calculation might be as follows: if a hospital had a survey response rate of 35% and wanted to improve one of its HCAHPS domain scores from the current 75% to 80%, approximately 2262 survey responses would be needed to effectively test the proposed intervention, and 6463 patients would need to be exposed to the intervention to yield that many surveys. More thoughtful sample size planning in this fashion might alleviate the issue of being unable to assess whether a targeted intervention that met its primary research outcomes also meaningfully impacted patient satisfaction as measured by the HCAHPS score.
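A minimal sketch of the kind of power calculation described above, using a standard two-proportion normal approximation (two-sided α=0.05, 80% power) and the hypothetical figures from the text (top-box score rising from 75% to 80%, 35% response rate). The totals quoted in the text may have been derived with a slightly different method or correction, so the numbers produced here are indicative only.

```python
import math

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Completed surveys needed per arm for a two-sided two-proportion z-test."""
    z_a, z_b = 1.959964, 0.841621  # normal quantiles for alpha/2 = 0.025 and power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

per_arm = n_per_group(0.75, 0.80)                    # completed surveys per arm
total_surveys = 2 * per_arm                          # total completed surveys needed
patients_exposed = math.ceil(total_surveys / 0.35)   # at a 35% survey response rate

print(per_arm, total_surveys, patients_exposed)
```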

One reason for the excitement and interest in improving patient satisfaction with hospital care is that other studies have noted that these scores are observationally associated with improved clinical outcomes.24–27 A recent systematic review concluded that higher patient satisfaction was observationally associated with better patient safety, clinical effectiveness, health outcomes and adherence, and with lower resource utilisation.28 However, many other studies examining quality process measures, such as those reported by the Hospital Compare website, have found low concordance between excellence in care and HCAHPS scores (κ<0.20).29

Yet other studies have found no association between patient satisfaction and the technical quality of care.30 A national study of 51 946 adult respondents reported that higher patient satisfaction was associated with higher risk of inpatient admission, greater expenditures, greater prescription drug expenditures and higher mortality;31 and a study of 31 hospitals in 10 states reported that patient satisfaction was independent of hospital compliance with surgical processes of quality care.32 Nonetheless, despite some inconsistencies, patient satisfaction is likely to remain a key quality metric, especially given its essential importance to the relationship between patients and the healthcare system.33 It is therefore imperative to identify effective patient satisfaction interventions, and to directly investigate if improving patient satisfaction can also directly improve other important clinical outcomes.

This systematic review has implications for policy and for public reporting. At present, there is a lack of evidence-based interventions for improving HCAHPS scores, yet hospitals are being driven, through value-based purchasing and public reporting, to improve a metric that may not be easily modifiable. The majority of hospitals that currently have high HCAHPS scores are small (<200 beds) and based in a community setting. If receiving care at an urban hospital necessarily results in lower patient satisfaction (perhaps because of factors such as crowded facilities, clinical or sociodemographic case mix and payer mix), penalising those hospitals serving patients with the greatest needs seems counterproductive to the ultimate goals of the CMS and Affordable Care Act programmes. Further, adjustment for sociodemographic variables at the hospital level may improve comparisons of patient satisfaction between hospitals and reduce the unintended consequences of value-based purchasing penalties. To effectively improve patient satisfaction, we need to discover modifiable causes of patient dissatisfaction that are empirically tested with appropriate designs and sufficient statistical power in similar types of hospitals. Only then can we test whether doing so improves, or harms, the quality of care received by patients.

What, then, can be done to move this field forward? There seem to be few interventions that are designed to improve one patient satisfaction domain across all hospitalised patients and that are rigorously tested for usefulness. These might be the next generation of interventions, which, if married with more rigorous designs and power analyses, appropriate correction for multiple comparisons and use of the correct unit of analysis (eg, site, physician, patient, service line), would help build an evidence base. Published interventions most commonly used a pre-post design, which does not guard against secular trends, contamination by other co-occurring interventions and the other validity threats that arise when randomisation is absent. An example of a useful future intervention might be randomising all physicians to either receive or not receive real-time feedback on their own Communication with Doctors domain scores, to determine whether this improves that one domain across the hospital and across all patient groups. Alternatively, one could test one of the many behavioural economic approaches that have been used to change physician behaviour, such as randomising physicians to a peer-commitment letter about their Communication with Doctors score goal versus no such commitment.34 Another example might be implementing sleep hygiene environment practices for all patients on a floor,35 in which noise meters, red-spectrum lighting and white noise machines are introduced, and alerts, overhead paging systems and elective phlebotomy are minimised or eliminated. Units could be randomised in a stepped wedge design to test the rollout of such environmental changes and determine whether the Cleanliness of Hospital Environment and Quietness of Hospital Environment domains are improved. Guarding against multiple comparisons and conducting the analyses mindful of the correct unit of analysis (surveys nested within physician or within unit) would be important. Successful studies along these lines would also need to recognise resource constraints and the operational priorities of healthcare systems. Thus, these types of innovative interventions will require close collaboration of hospital leadership with frontline staff and patients, to address the need for improvement in satisfaction with healthcare service while rigorously testing the implications of the intervention for the quality of that care.
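As an illustration of the stepped wedge rollout suggested above, the sketch below randomises the order in which hospital units cross from control to intervention. The unit names and number of periods are hypothetical, and the schedule is a design aid only; the accompanying analysis would still need to account for surveys nested within units and for secular time trends.

```python
import random

def stepped_wedge_schedule(units, seed=2024):
    """Randomise the period in which each unit crosses from control to intervention."""
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible and auditable
    order = list(units)
    rng.shuffle(order)
    # Period 0 is all-control; one additional unit starts the intervention in each later period.
    return {unit: period for period, unit in enumerate(order, start=1)}

units = ["5-North", "5-South", "6-North", "6-South"]  # hypothetical ward names
schedule = stepped_wedge_schedule(units)

for unit, start in sorted(schedule.items(), key=lambda item: item[1]):
    print(f"{unit}: intervention begins in crossover period {start} of {len(units)}")
```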

Limitations

The systematic review reported here is limited by a number of factors. First, because the HCAHPS survey contains many domains, a broad range of search terms was required, which contributed to the heterogeneity of the studies captured. Relatedly, this ‘scoping review’ differed from an in-depth systematic review in that: (1) hand searching was not conducted, (2) there was no contact with the study authors and (3) there was no attempt to combine results in a meta-analysis.36

Conclusion

In conclusion, we identified few high-quality studies that tested the efficacy of interventions to improve patient satisfaction scores as assessed by the HCAHPS survey. Despite the visibility of public reporting and the accountability of value-based purchasing tied to HCAHPS survey scores, there is minimal evidence to inform hospitals, clinicians, payers and healthcare policy/management experts about which interventions can improve patient satisfaction and in what context. Given the importance of patient satisfaction, as well as patient outcomes, safety and cost, in high-value healthcare, there is an urgent need for properly designed studies to evaluate novel and sustainable interventions that improve patient satisfaction, have a demonstrable impact on important clinical outcomes and can be spread across different regions and hospital contexts.


Footnotes

  • Contributors JS, SY, KS, IAI and IOE conducted the title, abstract and full-text review for this study, performed data extraction, evaluated study quality and drafted major parts of the manuscript. LF developed the search strategy. DKV, HHT, REK, SLM, WMM and KWD conceived the idea for this study and drafted major parts of the manuscript. All authors read and approved the final manuscript.

  • Funding National Institutes of Health K23 career development awards, K23 HL112850, K23 HL121144.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • i HCAHPS items are scaled in a number of different ways. Fourteen items feature a 4-point response scale ranging from ‘never’ to ‘always.’ Three items use a 4-point response scale ranging from ‘strongly disagree’ to ‘strongly agree.’ Two discharge-related items offer a yes/no response option. Overall rating of care uses an 11-point Likert scale, and the item ‘likelihood to recommend’ features a 4-point response scale ranging from ‘definitely no’ to ‘definitely yes.’