Introduction Data were used from inpatient, outpatient and accident and emergency surveys in acute trusts in England to examine consistency in patient-reported experience across services, and factors associated with systematic variations in performance.
Methods Standardised mean scores for six domains of patient experience were constructed for each survey for 145 non-specialist acute trusts. Hierarchical cluster analysis was used to investigate whether and how trust performance clusters. Multilevel regression analysis was used to determine trust characteristics associated with performance.
Results Cluster analysis identified three groups: trusts that performed consistently above (30 trusts) or below (six trusts) average, and those with mixed performance. All the poor-performing trusts were in London; none were foundation trusts or teaching hospitals, and they had the highest mean deprivation score, the lowest proportion of white inpatients, and the lowest response rates. Foundation and teaching status, and the proportion of white inpatients, were positively associated with performance; deprivation and response rates showed less consistent positive associations. No regional effects were apparent after adjusting for the independent variables.
Conclusion The results have significant implications for quality improvement in the NHS. The finding that some NHS providers consistently perform better than others suggests that there are system-wide determinants of patient experience and the potential for learning from innovators. However, there is room for improvement overall. Given the large samples of these surveys, the messages could also have relevance for healthcare systems elsewhere.
- Healthcare quality improvement
- quality improvement
- patient satisfaction
- patient-centred care
- quality measurement
- general practice
- health policy
- medical error
- mortality (standardised mortality ratios)
The healthcare quality agenda globally is characterised by a move from the conventional ‘biomedical’ model to a patient-centred approach to healthcare provision. Leading on from the Darzi Review,1 the coalition government's NHS Outcomes Framework includes patient experience among the five domains for assessing NHS performance.2
Introduced in 2002, the NHS National Patient Survey Programme is among the largest of such programmes internationally, covering all NHS providers of acute, mental health and ambulance services in England, with large, representative samples designed to support robust comparisons of provider performance.3 The survey results are used by government, commissioners and regulators to assess providers' performance, are used by providers for quality improvement purposes, and are publicly available for use by patients and the public. Separate surveys of the inpatient, outpatient and accident and emergency (A&E) services provided by acute trusts give a detailed view of user experiences in these areas. However, there has been no analysis to date of patients' experiences across the range of services provided by an organisation.
We used data from recent inpatient, outpatient and A&E surveys in England collectively to examine whether there is consistency in the way patients experience care across these service areas, and identify organisational and population factors associated with systematic variations across trusts in the experience of care reported by their patients. This research is the first to examine how acute providers perform overall in terms of delivering a consistently good experience for patients across a range of services. The findings are highly relevant for measuring and improving quality, and can inform policy development (nationally and at trust level) relating to the measurement of patient feedback. For example, evidence of consistency in organisational performance across surveys could inform trust policies for system-level improvements and facilitate learning from high-performing organisations. Examining patient feedback to identify organisations with the greatest need for improvement relative to peers is equally important.
Sample and data
We used NHS trust-level data for England from the 2009 inpatient and outpatient surveys, and the 2008 A&E survey, as reported by the Care Quality Commission. The Care Quality Commission scores individual responses to questions on a linear scale of 0 (most negative) to 100 (most positive). Mean trust-level scores for each question are standardised by age, sex and, for the inpatient survey, method of admission, to account for differences in patient characteristics between trusts.4
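As an illustration, the 0–100 scoring described above can be sketched as follows. The response options, their weightings and the responses are hypothetical, and the Care Quality Commission's standardisation by age, sex and admission method is not reproduced here.

```python
import numpy as np

# Map ordered response options to an evenly spaced 0-100 scale (hypothetical
# weightings for an illustrative three-option question).
option_scores = {"no": 0.0, "to some extent": 50.0, "yes": 100.0}

# Hypothetical responses from one trust's sampled patients
responses = ["yes", "to some extent", "yes", "no", "yes"]
scored = np.array([option_scores[r] for r in responses])

# Unadjusted trust-level mean score for this question
trust_mean = scored.mean()
print(round(trust_mean, 1))  # 70.0
```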
We excluded specialist trusts from the main analysis because of the specialised nature of their services and absence of A&E departments, both of which make their patient populations inherently different from those of general acute trusts. However, we provide a descriptive account of the performance of specialist trusts relative to other trusts across the inpatient and outpatient surveys. Two trusts formed from recent mergers were excluded as they had no data for the 2008 A&E survey. The final sample comprised 145 non-specialist acute trusts.
Measuring patient experience
Patient experience was measured as mean scores for ‘domains’ of patient experience. The aim was to base our assessment of trust performance on a parsimonious set of measures that nevertheless represented the broad range of patient experience encompassed by the large number of individual questions in the different surveys. These domains were constructed as follows. The questions in all three surveys were reviewed by the authors and allocated to putative domains of patient experience based on the item content, that is, the dimension of patient experience that the question related to. The domains were chosen so that their meaning was as similar and consistent as possible across the different surveys, supported by the presence of questions common to more than one survey. These domains could then be compared across surveys. The seven domains identified were: cleanliness, dignity and respect, consistency of communication, involvement in decisions, information provision, confidence in staff, and waiting. The domains were chosen pragmatically, with a view to maximising our ability to compare results between surveys. They differ somewhat from those reported nationally for the surveys, but are similar to those identified in previous research.5
The items allocated to each domain for each survey were subject to statistical analysis to determine whether the selection was sufficiently unidimensional for use as a composite measure, and whether this had sufficient internal consistency for reliable discrimination between trusts' performance. Problematic items were removed and the item set retested until it performed satisfactorily (further details are given in the online appendix).
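Internal consistency of this kind is commonly assessed with Cronbach's alpha; the sketch below uses synthetic trust-by-item scores (the paper's exact reliability statistics and thresholds are given in its online appendix, not here).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_trusts, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic data: 145 trusts, four items sharing a trust-level signal
rng = np.random.default_rng(0)
base = rng.normal(70, 5, size=(145, 1))           # shared trust-level signal
items = base + rng.normal(0, 2, size=(145, 4))    # four correlated items

alpha = cronbach_alpha(items)  # high alpha indicates a coherent composite
```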
‘Waiting’ was dropped from the analysis because it gave poor reliability for the outpatient survey and was not considered to be measuring the same construct in the different instruments. Trust-level scores for the remaining six domains were then calculated as the means of the standardised scores for the questions comprising each domain. See online appendix table 1 for the questions comprising these domains.
The following analyses were undertaken:
Hierarchical cluster analysis was conducted to assess whether there is consistency in the way patients experience care across the three service areas. Cluster analysis is a statistical technique for finding patterns within the data without imposing an a priori structure. It was therefore used to investigate whether and how trusts cluster according to distinct patterns of performance across the three surveys. This analysis was based on domains' standardised z-scores (mean 0, SD 1), so that the potential differences in the distribution of scores across surveys did not influence the clustering. The decision on the number of clusters used (three) was based on our assessment of the results of the cluster analysis and the consistency of the patterns depicted.
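A minimal sketch of this step, assuming Ward linkage on Euclidean distances (the linkage and distance choices are assumptions, as are the synthetic trust-by-survey scores):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Hypothetical trust-by-survey scores for one domain: 145 trusts x 3 surveys
rng = np.random.default_rng(1)
scores = rng.normal(75, 5, size=(145, 3))
scores[:30] += 6    # a consistently above-average group
scores[-6:] -= 8    # a consistently below-average group

z = zscore(scores, axis=0)        # standardise each survey's scores (mean 0, SD 1)
tree = linkage(z, method="ward")  # agglomerative clustering, Ward linkage
labels = fcluster(tree, t=3, criterion="maxclust")  # cut dendrogram into 3 clusters
```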
Multilevel regression analysis6 7 was conducted to determine the correlation between different surveys at trust level, thus providing further evidence about consistency in the way patients experience care across the three service areas; and determine which patient-level and trust-level characteristics were associated with performance. Models were structured so that the response (dependent) variable for each model was a domain z-score, with random effects specified at strategic health authority (SHA), trust and survey levels. The independent variables used were: foundation status, teaching status, trust size, average Index of Multiple Deprivation for the trust inpatient population, ethnic composition of the trust inpatient population and survey respondents, and survey response rates (see online appendix tables 2 and 3 for details). Trust size and respondent ethnicity were dropped from the analysis as modelling showed they were poor predictors.
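The full three-level model (SHA, trust, survey) cannot be reproduced in a few lines; as a deliberately simplified sketch, ordinary least squares on simulated trust-level z-scores illustrates how binary status indicators and a percentage predictor enter such a model. All data and effect sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 145  # number of trusts

# Hypothetical trust-level predictors
foundation = rng.integers(0, 2, n)   # foundation status (0/1)
teaching = rng.integers(0, 2, n)     # teaching status (0/1)
white_pct = rng.uniform(60, 99, n)   # % white inpatients

# Simulated domain z-score with assumed (not the paper's) effect sizes
z = 0.4 * foundation + 0.25 * teaching + 0.03 * (white_pct - 80) \
    + rng.normal(0, 0.5, n)

# OLS fit: intercept, foundation, teaching, % white
X = np.column_stack([np.ones(n), foundation, teaching, white_pct])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
# beta[1] estimates the foundation-status difference in domain z-scores
```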
See online appendix for further details of the methodology.
Overall, mean trust domain scores across surveys were 70 or higher, with the exception of the information domain (table 1) (individual questions on information about drug side effects and danger signals after discharge had especially low scores). Results were generally more positive for outpatients compared with other services, and variations in inter-trust scores were consistently larger for the A&E survey.
Mean domain scores for specialist trusts on the inpatient and outpatient surveys were consistently higher than those for general acute trusts.
Consistency of patient experience across services
Hierarchical cluster analysis was used to assess whether there is consistency at trust level in patients' experience of care across the three service areas. For all domains, cluster analysis identified three clusters with typical patterns of patient experience across the three surveys (above average, average/mixed, below average). Table 2 shows mean patient experience z-scores and the number of trusts for each cluster by survey and domain. Because standardised scores (mean 0 and SD 1) are used, scores above and below zero indicate above and below average performance respectively, and therefore provide a convenient way of describing and comparing the clusters. For all domains and surveys, the analysis identified trust clusters that performed consistently above or consistently below average. Z-score differences from the mean were generally greater for the below average clusters than for the above average clusters. Below average clusters also showed more variation in z-scores between surveys, with scores generally worse for the outpatient and A&E surveys than for the inpatient survey. A third cluster presented a less clear pattern of performance, with variations across domains. For some domains, this group performed near average (confidence in staff) or just below average (cleanliness and involvement in decisions) on all three surveys. For the dignity and respect domain, performance was consistently negative, but lowest for the inpatient survey. For the information provision and consistency of communication domains, performance was more mixed.
Overall, most trusts fell in either the above or below average clusters, with the exception of the involvement in decisions domain, where most trusts fell in the mixed cluster. Domains with the largest number of trusts in the above average cluster were dignity and respect (85) and consistency of communication (80), and those with the fewest were confidence in staff (43) and involvement in decisions (50). Domains with the fewest trusts in the below average cluster were cleanliness (17) and involvement in decisions (8); those with the most were provision of information (52) and confidence in staff (40). The distribution of trusts across the three clusters was most balanced for the confidence in staff domain.
Most trusts belonged to different clusters for different domains. However, 21% of trusts (n=30) had above average performance across all surveys on all domains (table 3). Notably, none of these trusts were in London, and the majority had foundation status. In contrast, all six trusts that were consistently in the below average cluster across domains and surveys were in London, but none of these had foundation or teaching status. These trusts also had the highest mean deprivation scores, the lowest mean percentage of white inpatients, and (with the exception of the A&E survey) the lowest mean response rates.
Trust-level intra-class correlation
Variance component estimates from the unconditional (null) models enabled computation of the intraclass correlation for trust-level scores (table 4), which measures the overall variance accounted for by differences between trusts. These confirmed that there is a substantial relationship between the different survey results for each domain. This relationship was strongest for cleanliness, dignity and respect and involvement in decisions, and weakest for consistency of communication.
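The intraclass correlation is simply the between-trust variance as a share of the total variance; a sketch with hypothetical variance-component values:

```python
def intraclass_correlation(var_trust: float, var_within: float,
                           var_sha: float = 0.0) -> float:
    """Share of total variance attributable to differences between trusts."""
    total = var_sha + var_trust + var_within
    return var_trust / total

# Hypothetical variance components from a null (unconditional) model
icc = intraclass_correlation(var_trust=0.45, var_within=0.55)
print(round(icc, 2))  # 0.45
```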
Association of patient experience with trust-level characteristics
Multilevel regression analysis was used to identify the organisational and population factors associated with systematic variations in patient experience across trusts. Of the seven variables initially modelled, trust size and percentage of white respondents were significant predictors in only one model each, and were therefore removed and all models re-estimated. Models were estimated both with and without response rates, because their interpretation as a predictor is arguably unclear (ie, whether response rates are a cause or a result of differences in reported patient experience). On the whole, results were qualitatively similar with and without response rates; the results in table 5 refer to models that include response rates.
Foundation and teaching status showed consistently significant positive associations with domain scores (corresponding to differences of +0.3 to +0.6 and +0.2 to +0.3 in trust z-scores, respectively, across domains).i The proportion of the inpatient population that was white also showed a positive association across domains, with a 1 percentage point increase being associated with a +0.02 to +0.04 increase in z-scores. Mean inpatient Index of Multiple Deprivation scores showed a less consistent pattern; statistically significant positive associations were found only in the cleanliness and confidence in staff domains, with a 1 SD increase in deprivation associated with a +0.17 and +0.25 increase in z-scores, respectively. Similarly, survey response rates showed a positive association in the information provision and confidence in staff domains only, with a 1 percentage point increase in response rate being associated with a 0.02 and 0.04 increase in z-scores, respectively. London SHA did not appear significantly different from the overall mean in any survey domain, but South West SHA performed better than average for some domains.
Limitations of study
There are a number of limitations to this study. First, an analysis based on one year's surveys might limit the generalisability of the findings, as patterns may differ across repeat surveys. However, while results for individual organisations may improve or worsen between surveys, the overall pattern of performance across trusts typically does not show marked year-on-year changes.8 Second, it was possible to use only a limited number of trust-level characteristics; other organisational-level factors could also be associated with variations in patient experience. Third, it could be argued that differences in the number of questions in each domain have an impact on the pattern of variations observed across domains. However, the results provide no indication of differential patterns between domains according to the number of constituent questions. Additionally, we retained domains (such as involvement in decisions) with fewer relevant questions because they represent important aspects of care and are defined by questions that are the same or similar across surveys. Finally, the experiences reported by patients, and the characteristics of non-respondents, could be influenced by many factors, related or unrelated to the quality of care. Examples include response bias associated with differential response rates (the direction of response bias is not known, but it could systematically affect the results); language differences between trusts' patient populations that could, for example, make involvement in decisions more difficult; trust location; urban–rural residence; and the case mix of patients. The direction and magnitude of the impact of such factors (and potentially others) is not known, and they could influence the survey results and therefore the results of our analysis. These examples illustrate the complexities of, and challenges in, measuring patients' experience of care.
Nonetheless, we consider it important to make maximum use of the available data.
Strengths of study
Patient experience is internationally recognised as a key dimension of healthcare quality, and a priority for the NHS identified by successive governments in England. The surveys capture patients' feedback on their experiences of NHS services, and are the leading source of information on this key dimension of NHS performance. However, the use of these rich data for quality improvement purposes has not been fully exploited because of the compartmentalised approach taken thus far in the analysis and use of these datasets.
Our paper adds value by taking a systems perspective to examine whether there is consistency in the way patients experience care across services within an organisation, and if so, what might characterise trusts that demonstrate levels of excellence. Our findings about the scale and patterns of variations in performance across trusts have significant implications for quality improvement initiatives in the NHS. They also have significance for commissioners, regulators and policymakers in terms of their system-wide roles. Given that the unprecedentedly large samples of the NHS patient surveys provide unique opportunities for research, the general messages could also have relevance for healthcare systems elsewhere.
Interpreting the findings
The hypothesis that there should be some correlation between performance measures within organisations is grounded in systems theory, which implies that the culture of the system permeates all areas of operation.9 Thus organisations driven by a culture of excellence would be expected to have positive patient experience across all areas, while organisations that have not established such a culture would be expected to have consistently poor performance.
Our analyses overall suggest that there is a degree of consistency in trusts' performance in terms of the experiences of their service users. We found statistically significant correlations between performance on the different surveys, the correlations being strongest for the cleanliness and dignity and respect domains—aspects of quality where a pan-organisational approach may be more readily driven from board level.
Overall, the cluster analysis revealed patterns of consistent performance across surveys and domains, with some trusts appearing consistently in above or below average performance clusters on all domains and service areas (21% and 4% of trusts, respectively). The domains with the largest number of trusts (almost 60%) in the best performing cluster were dignity and respect, and consistency of communication; in contrast, for provision of information, almost one-third of trusts were consistently in the poor performing cluster. These findings suggest that there are system-wide determinants of patient experience, with some organisations being consistently high performing—either overall across surveys, or in certain dimensions such as cleanliness and dignity and respect—and others that lack a culture and mechanisms for delivering a good experience for their patients.
Overall, the distribution of trusts is skewed towards better performance. However, the survey results overall show considerable room for improvement (mean domain scores being well below the maximum of 100). About 65% and 70% of trusts, respectively, were in the below-average or mixed performance clusters for the involvement in decisions and confidence in staff domains, and up to half of trusts for the remaining domains.
We found that teaching and foundation trust status showed an almost consistently positive association with performance across domains. This finding is consistent with other recent research10 and suggests that such organisations may have implemented patient-focused strategies more effectively than others. A review of the performance of foundation trusts suggests that they outperform non-foundation trusts in areas such as staff satisfaction, financial management and quality of care, but that the differences are longstanding rather than the effect of the foundation trust policy per se.11 (Performance on the patient surveys could, of course, have contributed to achievement of foundation status.) Foundation trusts are also generally reported by their regulator, Monitor, to have better clinical quality and service performance than non-foundation trusts.12 These patterns are consistent with our findings.
The proportion of the inpatient population that was white was also positively associated with patient experience, consistent with other research showing that people from black and ethnic minority groups often report a less favourable experience of NHS services than white respondents.10 13–15 This could reflect aspects of service delivery that need improvement in trusts with higher proportions of patients from ethnic minority groups. For example, language and communication problems can impact negatively on patient experience.16 Until the change of government in 2010, the Department of Health's Public Service Agreement target on patient experience, for which it was accountable to the Treasury, extended to improving patient experience for ethnic minority groups. NHS policies and equality legislation continue to require providers to meet the needs of all groups of patients. Trusts therefore should identify ways of meeting the particular needs of patients from black and ethnic minority groups, including through greater engagement with patient groups representing these communities.
The weak but positive association of deprivation with patient experience could reflect lower expectations of the NHS among patients from poorer socio-economic groups: alternatively, this may reflect differences in the patterns of interactions between people of different social classes with healthcare services and professionals. Other research found a negative association between social class (measured as respondent age at completing full-time education) and reported patient experience, which is consistent with our findings in relation to deprivation.14 15
Although the high-performing cluster included no London trusts, and although the poor-performing cluster consisted entirely of London trusts, we found no regional effects after adjusting for the independent variables. Ethnic ‘fractionalisation’ (heterogeneity) and low response rates are often cited as explanations for the comparatively poor survey results for London trusts,17 and inclusion of these variables in the regression analysis could explain the lack of a residual regional effect. However, as noted above, trusts should be addressing the needs of ethnic minority patients: population differences should never be seen as an ‘excuse’ or ‘justification’ for poor patient experience, even if these differences do mean that some trusts face different challenges in understanding and meeting the needs of their populations. Another possibility, given the association between response rates and survey scores, is that lower response rates in London—attributable to particular local characteristics such as a more transient population—might militate against higher survey scores. However, as the scale and direction of any potential response rate bias is unclear, and given the lack of evidence about causality in regional effects, we do not consider that these factors should be adjusted for in the analysis and interpretation of data from patient surveys.
Implications for improving and measuring patient-centred care
Our study has important implications for quality improvement and measurement. Overall, it is encouraging that trusts are clustered towards better performance. However, this should not lead to complacency, as the findings also show considerable room for improvement. Strategies are needed for raising poor performers to the levels of the best, and for moving the whole performance distribution upwards. The finding that some NHS providers consistently perform better than others in delivering a good experience for patients suggests that there is potential in learning from these innovators.
Based on the organisational factors that distinguish hospitals able to deliver patient-centred care, Shaller identifies seven factors contributing to patient-centred care18:
Engagement of the top leadership.
A strategic vision clearly and constantly communicated to all staff.
Involvement of patients and families at multiple levels.
A supportive work environment for all employees.
Systematic measurement and feedback.
The quality of the built environment.
Supportive information technology.
Shaller also identifies the key strategies for leveraging change as those designed to strengthen the capacity to achieve patient-centred care at the organisation level, and those aimed at changing external incentives in the healthcare system, to influence and reward organisations striving to deliver patient-centred care. The NHS reforms provide opportunities for taking forward such strategies through the commissioning outcomes framework and quality premiums, with the new clinical commissioning groups using contracts and pay for performance schemes such as Commissioning for Quality and Innovation (CQUIN) as additional levers.
Luxford et al's investigation into organisational facilitators and barriers to patient-centred care among US healthcare providers renowned for improving patient experience found that organisations that succeed in fostering patient-centred care adopt a strategic organisational approach to patient focus.19 The facilitators they identified were largely consistent with those identified by Shaller. The barriers were changing staff mindsets from a provider focus to a patient focus, and culture change as a journey rather than a ‘quick fix’. They highlight the need for an organisation-wide approach and culture that goes beyond mainstream frameworks for quality improvement, such as performance measurement, audit, risk management systems, incident reporting and clinical governance, to becoming a learning organisation with a culture that values people, stimulates ideas, develops teamwork and adopts staff recognition systems. The association between the self-reported experience of NHS staff and patients20 is further evidence of the need to engage the workforce in a culture that fosters patient-centred care.
Goodrich and Cornwell's review of the literature on patient experience notes that evidence about organisations reputed for providing excellent patient care shows that it means transforming hospital cultures and working practices, a complex task requiring investment at both strategic and operational levels.21 They note that patients' experience is shaped by organisational and human factors interacting in dynamic, complex ways at four levels: individual patient–staff interaction, the team and clinical micro-system, the institution and the wider health system.
This framework serves to highlight a key constraint of the national survey programme. Goodrich and Cornwell highlight the crucial role of local ward and team leadership in modelling patient-centred care behaviours and defining the expected values and behaviours of team members. While the surveys provide valuable organisation-level information for accountability and transparency purposes, and for senior management to act on, they are not designed to provide intra-organisational data. Their practical utility as an improvement tool for frontline staff in particular is therefore constrained, because the sample sizes do not allow disaggregation of the data to a granular level, such as wards or clinical teams. The national survey programme therefore needs to be supplemented locally by other mechanisms for collecting and using such data. While highlighting the general paucity of evidence about effective interventions for improving patients' experience, Goodrich and Cornwell do identify some interventions designed to achieve system-level change.
Finally, the key is the use of the patient survey data for improving patients' experience of care. Reeves and Seccombe found that the surveys are widely used by trusts for these purposes, but key constraints included the lack of clinical engagement because the surveys do not extend to specialty or department level, and inadequate resources, statistical expertise and knowledge of effective interventions.22 Their findings about the role of leadership, organisational culture, incentives, benchmarking, public reporting, and internal drivers are consistent with the other research cited. Coulter et al point to a lack of systems for coordinating the collection of patient feedback and acting on the results. They suggest a strategy for improving patient experience that cuts across organisational levels (from the Board to clinical departments) and stress the role of strong leadership.23
Our study also shows that appropriately constructed composite measures have utility in summarising performance across quite lengthy questionnaires about patient experience. Some composite measures are already in use in the NHS for pay-for-performance schemes such as CQUIN.24 However, composite measures are more complex for staff, patients and the public to understand. We stress that, if used, composite measures should be based on rigorous statistical analysis to assure the robustness of the constructs. While composites provide convenient summary measures, scrutiny of responses to individual questions is critical for targeting improvement actions. In addition to analysing the various patient surveys separately, there is merit in benchmarking performance in the round across all surveys at an organisational level. Finally, local data collections (such as ward-level patient feedback) are a timely and valuable supplement to survey data for understanding and improving patient experience at the service level.
We hope that our findings will encourage trusts to examine their performance across the different patient surveys to see how they perform overall. This could help drive improvements by learning from the better performing parts of their organisation, and cross-trust collaboration to promote learning from exemplars. Our finding about the need to improve performance levels overall, coupled with the evidence about organisational factors that facilitate the development and implementation of patient-focused care, suggest that improvement strategies need to be undertaken by providers and the healthcare system as a whole.
We are grateful to the Care Quality Commission for assistance with the data. We are grateful to the following at The King's Fund: Jocelyn Cornwell and Catherine Foot for suggestions relating to the paper, and James Thompson for help with compiling the data.
Competing interests Two of the authors are employees of Picker Institute Europe, which is contracted to the Care Quality Commission to develop national patient experience surveys, and was involved in developing and coordinating the surveys that provided the data for this study.
Patient consent This study entails secondary analysis of patient survey data. Ethics clearance for each survey was obtained by the Picker Institute on behalf of the Care Quality Commission prior to the surveys commencing, with consent provided via patient responses.
Provenance and peer review Not commissioned; externally peer reviewed.
↵i The coefficients indicate the magnitude of domain score difference associated with a one-unit change in the predictor. For binary variables, such as foundation trust status, a one-unit change means foundation trusts compared with others. For percentage white, it means the difference in score associated with a 1 percentage point increase in the white population. As domain scores are standardised z-scores, a score difference of one equates to a 1 SD difference. SHA-level effects were evaluated by means of the level 3 residuals. These are interpreted in a similar way to the model coefficients for other predictor variables, except that the comparison in each case is to the overall mean.