
Staff perceptions of quality of care: an observational study of the NHS Staff Survey in hospitals in England
Richard J Pinder,1 Felix E Greaves,1 Paul P Aylin,2 Brian Jarman,2 Alex Bottle2

1Department of Primary Care and Public Health, Imperial College London, London, UK
2Dr Foster Unit at Imperial College London, Imperial College London, London, UK

Correspondence to Dr Richard J Pinder, Department of Primary Care and Public Health, Imperial College London, Reynolds Building, St Dunstan's Road, London W6 8RP, UK; richard.pinder{at}doctors.org.uk

Abstract

Background There is some evidence to suggest that higher job satisfaction among healthcare staff in specific settings may be linked to improved patient outcomes. This study aimed to assess the potential of staff satisfaction to be used as an indicator of institutional performance across all acute National Health Service (NHS) hospitals in England.

Methods Staff responses from the NHS Staff Survey 2009 were correlated with hospital standardised mortality ratios (HSMR) at institutional level, with further analyses of staff subgroups.

Results Over 60 000 respondents from 147 NHS trusts were included in the analysis. There was a weak negative correlation with HSMR where staff agreed that patient care was their trust's top priority (Kendall τ = −0.22, p<0.001), and where they would be happy with the care for a friend or relative (Kendall τ = −0.30, p<0.001). These correlations were identified across clinical and non-clinical groups, with nursing staff demonstrating the most robust correlation. There was no correlation between satisfaction with the quality of care delivered by oneself and institutional HSMR.

Conclusions In the context of the continued debate about the relationship of HSMR to hospital performance, these findings of a weak correlation between staff satisfaction and HSMR are intriguing and warrant further investigation. Such measures in the future have the advantage of being intuitive for lay and specialist audiences alike, and may be useful in facilitating patient choice. Whether higher staff satisfaction drives quality or merely reflects it remains unclear.

  • Attitudes
  • Performance measures
  • Quality measurement
  • Health services research
  • Surveys


Introduction

While the pursuit of high-quality healthcare has become the norm,1–3 there remains considerable argument over how to evaluate performance.4 A broad range of performance indicators has been developed,5 many of which have provoked considerable criticism, with particular concern that they may mislead the public, among other audiences. The ideal performance indicator has been described as being meaningful, scientifically sound and interpretable.6 While many performance indicators provide useful feedback on specific aspects of complex healthcare systems, producing a summary indicator that encompasses multiple processes and relates to a meaningful outcome has proven a challenge. The Hospital Standardised Mortality Ratio (HSMR) is one such approach to providing a high-level summary.7,8 Critics suggest that measuring mortality is too blunt an approach, given the difficulty of adjusting adequately for case mix and the relatively small proportion of deaths that are preventable,9 although the performance of hospital-wide mortality measures as a screening tool depends on the definition of ‘preventable’.10

Despite this, in 2011, the English Department of Health (DH) committed to another overall mortality measure, the Summary Hospital-level Mortality Indicator, which summarises mortality during hospital admission and within 30 days of discharge.11 As such, hospital mortality is likely to continue to be considered an important indicator for some time.

Over the last decade, alongside repeated and substantial structural reorganisations of the UK's National Health Service (NHS), questions have arisen over what improvement, if any, such changes have delivered. Over the same period, there has been increasing recognition that quality improvement requires changes that go beyond structures, and interest has grown in what has been termed ‘organisational culture’ and how it might relate to healthcare performance.12 Yet there is little robust evidence for what might be anticipated to be a straightforward association, due in part to the challenge of quantifying organisational culture as a phenomenon.13 However, in their final report, Mannion and colleagues conclude that organisational culture does appear to be linked to performance, though they expressed reservations about inferring causality.12

These caveats aside, one approach to evaluating organisational culture is to examine its effects, most obviously on morale or on proxies for it such as staff satisfaction. However, relatively little has been published on these more subjective metrics. Although it has been suggested that staff satisfaction may have a role in evaluating overall organisational performance,14 its relationship with clinical care and outcomes has received less attention. There is, however, evidence that more satisfied doctors have safer prescribing practices15 and achieve higher levels of concordance among their patients.16 Higher satisfaction among nurses has been linked with better safety,17 shorter length of stay18 and higher patient satisfaction.19–21 Difficulty establishing the direction of causality is a theme that runs through this literature, reflecting the methodological challenges of demonstrating such a connection.

In 2010, a public inquiry was announced in response to concerns about care provided at a UK hospital trust. Although the Mid Staffordshire NHS Foundation Trust Public Inquiry will not report until early 2013, it has already heard from the Medical Director of the NHS in England, who has put on record his belief that existing processes, including the monitoring of HSMR and the Staff Survey, might have flagged problems at this hospital earlier than they were in fact identified.22,23

Using data taken from the NHS Staff Survey 2009, in this study we aimed to evaluate the potential use of three candidate indicators by comparing them with HSMR for acute NHS trusts in England, including the indicator highlighted by the NHS Medical Director in his evidence. We hypothesised that higher levels of staff satisfaction would be associated with lower HSMR. More specifically, we hypothesised that satisfaction among clinical staff, especially medical staff, would be more closely correlated with HSMR than satisfaction among non-clinical staff.

Methods

Data sources

Trust-level clinical dataset

Data on hospital admissions from all NHS hospital trusts in England were extracted from the Hospital Episode Statistics (HES) system for the year 2009–2010. This administrative dataset comprises demographic and clinical data. HSMRs were calculated using the methodology employed by Dr Foster Intelligence,24 adjusting mortality rates for the available demographic and case-mix information.
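To illustrate the final step of this calculation, the following is a minimal sketch, not the Dr Foster implementation: it assumes a hypothetical admission-level table containing a death flag and a modelled probability of death for each admission (the case-mix adjustment producing that probability is taken as given).

```python
import pandas as pd

def hsmr_by_trust(admissions: pd.DataFrame) -> pd.Series:
    """Sketch: HSMR = 100 * observed deaths / expected deaths, per trust.

    Assumed (hypothetical) columns in `admissions`:
      trust    - trust identifier
      died     - 1 if the admission ended in death, else 0
      p_death  - predicted probability of death from a case-mix model
    """
    grouped = admissions.groupby("trust")
    observed = grouped["died"].sum()      # observed deaths per trust
    expected = grouped["p_death"].sum()   # expected deaths = sum of predicted risks
    return 100 * observed / expected      # 100 = mortality as expected; >100 = more deaths than expected
```

On this scale, a value of 100 indicates mortality in line with expectation given the case mix, with higher values indicating more deaths than expected.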

Staff dataset

Since 2003, the NHS has conducted an annual survey of staff to gather feedback on their experience of working within the service. The results are intended to support local performance management as well as to provide information for regulators and the DH. In April 2009, the health service regulator, the Care Quality Commission, took over the running of this annual survey. A quality-assured copy of the dataset was obtained through the UK Data Archive,25 which is designated a ‘Place of Deposit’ by the UK National Archives. For the NHS Staff Survey 2009, 288 000 members of staff were invited by their local trust to take part,26 representing approximately 24.5% of all staff.27 The sample size from each hospital was determined by the number of employees, ranging from a census for institutions with fewer than 600 staff to a sample of 850 for institutions with more than 3000 staff.26 Between September and December 2009, and following two reminders, 156 951 paper questionnaires (54.5% of those invited) were returned.

From the 2009 survey, we selected a priori three questions that, we hypothesised, might reflect staff satisfaction and organisational culture. Staff were presented with the following statements: (1) ‘Care of patients is my trust's top priority’; (2) ‘If a friend or relative needed treatment, I would be happy with the standard of care provided by this trust’; (3) ‘I am satisfied with the quality of care I give to patients’. For each statement, they were asked to select one of five responses: ‘strongly disagree’, ‘disagree’, ‘neither agree nor disagree’, ‘agree’ or ‘strongly agree’. Each response was scored from 1 to 5, with 1 representing ‘strongly disagree’ and 5 representing ‘strongly agree’. The responses to each question were aggregated at trust level, and a mean score ranging from 1 to 5 was attributed to each trust.

A further variable was created for each hospital, giving the proportion of respondents that agreed (aggregating those who ‘strongly agree’ and ‘agree’).
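As an illustration of this aggregation, the following is a minimal sketch assuming a hypothetical respondent-level table; the column names are illustrative and not those of the archived dataset.

```python
import pandas as pd

# Response options mapped onto the 1-5 scale described above
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def trust_level_scores(responses: pd.DataFrame, question: str) -> pd.DataFrame:
    """Aggregate one question to trust level.

    Assumes `responses` has a `trust` column and a column named by `question`
    holding the raw text answer. Returns, per trust, the mean 1-5 score and
    the proportion agreeing ('agree' or 'strongly agree'), i.e. the two
    trust-level variables described above.
    """
    scored = responses[question].str.lower().map(LIKERT)
    agreed = scored >= 4
    return pd.DataFrame({
        "mean_score": scored.groupby(responses["trust"]).mean(),
        "prop_agree": agreed.groupby(responses["trust"]).mean(),
    })
```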

Within the questionnaire, participants were asked to select their staff group. A further variable was generated that recategorised participants into clinical and non-clinical groups, with the latter including administrative and ancillary staff. Where trusts or providers had changed or been reconfigured, responses were collated at institutional level to bring them in line with current NHS service configuration.

Statistical analysis

The statistical analysis was conducted using Stata 12.0 for Mac.28 Because the data were not normally distributed, rank correlation analysis using Kendall's τ with 95% CIs was conducted pairwise.29 As the number of respondents per institution was relatively similar (mean 409, IQR 360–448), trusts were given equal weight in the correlation analysis.

To test whether correlations differed between subgroups, Z-tests were performed on the Kendall τ coefficients, from which p values were calculated.
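The following is a minimal sketch of these two steps, shown in Python rather than Stata for illustration; the comparison of two Kendall τ coefficients uses a Fisher-type transformation with an approximate variance of 0.437/(n − 4), which is an assumption on our part rather than the exact procedure used in the analysis.

```python
import numpy as np
from scipy.stats import kendalltau, norm

def correlate_with_hsmr(trust_scores, hsmr):
    """Kendall rank correlation between trust-level mean scores and HSMRs."""
    tau, p_value = kendalltau(trust_scores, hsmr)
    return tau, p_value

def compare_taus(tau1, n1, tau2, n2):
    """Approximate Z-test for the difference between two Kendall tau values.

    Applies an arctanh (Fisher-type) transformation; var(arctanh(tau)) is
    taken as roughly 0.437 / (n - 4), an approximation assumed here for
    illustration only.
    """
    z1, z2 = np.arctanh(tau1), np.arctanh(tau2)
    se = np.sqrt(0.437 / (n1 - 4) + 0.437 / (n2 - 4))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))   # two-sided p value
```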

Results

In total, 77 730 respondents from the 147 NHS trusts for which HSMRs had been calculated were included in the analysis; HSMRs are calculated for acute general hospitals and do not include specialist units or mental health facilities. The HSMR for the 147 acute hospitals ranged from 71.9 to 117.9, with a median of 99.5 and an IQR of 93.0 to 105.8.

The mean number of respondents per trust was 409. Within this sample of the wider dataset, it was not possible to determine the response rate for individual trusts.

In total, 34 817 respondents (58.0%) agreed that care was their trust's top priority, compared with 9954 (16.5%) who disagreed (see table 1). For the second statement, 37 282 (62.0%) agreed that they would be happy with the standard of care provided were a friend or relative in need of treatment, compared with 7327 (12.2%) who stated they would not be. Albeit with fewer respondents answering this question, 45 028 (86.4%) agreed with the statement that they were satisfied with the quality of care they provided to patients; conversely, 3201 (5.3%) said they were not satisfied with the quality of care they gave. When these data were analysed at organisational level, they showed marked variation, with agreement ranging from approximately 70% up to 95% (see table 2).

Table 1

Overall staff responses to questions posed by National Health Service (NHS) Staff Survey 2009 questionnaire at all 147 acute general NHS hospitals in England

Table 2

Distribution of staff responses by National Health Service trust (n=147)

Pairwise correlation analysis (see table 3) revealed a negative correlation between the ‘care as a priority’ statement and HSMR, suggesting that staff at hospitals with higher than expected death rates were less likely to agree that their trust considered care of patients its top priority (Kendall τ=−0.217, p<0.001). The feedback of non-clinical staff correlated as strongly as that of clinical staff.

Table 3

Pairwise Kendall-τ correlation analyses with 95% CI of staff ratings (scale of 1–5) with hospital standardised mortality ratios at all 147 acute general National Health Service hospitals in England

Agreement with the statement that staff would be happy with the standard of care if a friend or relative needed treatment was also negatively correlated with HSMR (Kendall τ=−0.198, p<0.001), with similar results in the clinical and non-clinical subgroups; there was no statistically significant difference between these two subgroups (p=0.50).

For the statement regarding satisfaction with the care that staff themselves provided, agreement was only very weakly, and not statistically significantly, negatively correlated with HSMR (Kendall τ=−0.062, p=0.27).

Further analysis comparing the proportion of staff responding that they agree or strongly agree against all others showed broadly similar but weaker associations (see table 4).

Table 4

Pairwise Kendall-τ correlation analyses of staff agreement with hospital standardised mortality ratios at all 147 acute general National Health Service hospitals in England

Further analysis showed variation across HSMR quartiles by staff group (see table 5). Again, medical and dental staff demonstrated the lowest agreement with the ‘care as a priority’ statement. Overall, these data showed a correlation between HSMR and staff feedback for the ‘care as a priority’ and ‘if a friend or relative needed treatment’ questions, but no such correlation for the ‘satisfaction with care provided’ statement; these results are reflected in the pairwise correlation analyses (see table 6), which demonstrate that nursing staff agreement appeared most strongly correlated with performance as assessed by HSMRs.

Table 5

Distribution of staff responses by clinical professional group and hospital standardised mortality ratio (HSMR) quartile

Table 6

Pairwise Kendall-τ correlation analyses with 95% CI of staff ratings (scale of 1–5) with hospital standardised mortality ratios by staff professional group at all 147 acute general National Health Service hospitals in England

Discussion

Statement of principal findings

The data presented suggest that the majority of staff approved of their institutions, albeit not strongly, across the three domains of inquiry. Agreement with the ‘care as a priority’ and ‘if a friend or relative needed treatment’ statements correlated weakly with HSMR, lending weight to the principal hypothesis. These results suggest that staff feedback may be useful in assessing organisational performance, and they raise interesting questions over the potential value of negative feedback of this nature. That there is a correlation between HSMR and staff feedback in the expected direction lends some support to the use of adjusted hospital-wide mortality metrics, despite their relatively modest sensitivity to quality of care.

However, these data do not support the hypothesis that clinical staff satisfaction is more closely correlated with institutional mortality than that of non-clinical staff. Furthermore, satisfaction among nurses appeared more closely correlated with HSMR than did that of medical and dental staff, though this difference did not achieve statistical significance.

Main analysis

First, these indicators have an advantage over summary mortality data, length of stay and other metrics in their intuitive nature, which may provide more intelligible information to facilitate patient choice. Second, the inter-relationship of organisational culture and performance highlights the importance of involving staff in organisational change, and suggests that staff feedback may be a useful metric for identifying suboptimal health services as well as for evaluating their improvement. The variation in responses to the three statements suggests a degree of question-specific interpretation and discrimination, rather than individuals simply responding negatively or positively across all questionnaire domains.

The statements regarding whether staff agree that care is their trust's top priority, and whether they would be happy with the care provided were a friend or relative to need it, are subtly different, though both perhaps assess overall performance. Trusts may prioritise care; however, even in a failing trust that does so, the quality of care delivered may still fall short. This difference is reflected in the differing proportions, with 14.9% of respondents strongly agreeing that care is their trust's top priority but only 10.8% strongly agreeing that they would be happy with the care of a friend or relative.

That HSMR did not appear to correlate with participants’ satisfaction with the care that they personally delivered is of particular note, and may suggest a degree of cognitive dissonance. While healthcare professionals seek to deliver the highest quality of care, they can at least recognise failures in the broader hospital environment or among other members of staff. Whether these responses reflect individuals perceiving that they deliver care of satisfactory quality despite broader organisational constraints, or whether they are simply unwilling to admit deficiencies in their own practice, cannot be determined. It may reflect the phenomenon of illusory superiority (termed by some the Lake Wobegon effect30). Collectively, however, these responses highlight a potential weakness inherent in this sort of feedback when used to measure healthcare quality.

Yet these Kendall τ correlations are weak by conventional standards, and various explanations are possible. A comparison of two measures that are both surrogates for overall quality, and that assess different aspects of performance, is unlikely to exhibit a perfect correlation given the imprecision associated with each metric. That the correlation is as weak as it is, however, suggests that there are further contributing factors and potential confounders, several of which are outlined in the limitations that follow. Despite these limitations, this correlation, though weak, remains intriguing.

Staff subgroup analysis

That the correlation of feedback from non-clinical staff was broadly similar to that of their clinical counterparts highlights and reinforces the concept of organisational culture or morale. It is uncertain, however, whether perceptions among non-clinical staff arise from engaging with patients informally or for administrative reasons, from engaging with clinical staff for professional reasons, or from a more general organisational culture. In particular, the interactions that this group has with patients may be more ‘honest’, as patients need not ‘fear reprisal’ in the way they might when raising concerns with nursing or medical staff. It is likely that all these factors have some degree of impact. The importance of these non-clinical staff groups should not be forgotten or sidelined, as they are likely to have a substantial impact on organisational effectiveness.

More focused analysis of the professional groups making up the clinical group demonstrates the importance of nursing staff throughout the healthcare process. Nursing staff feedback was more closely correlated with HSMR than that of medical or allied health professionals, although the difference between nurses and medical staff did not achieve statistical significance. Beyond the fact that nursing respondents were more numerous, nurses may be more attuned to the quality of healthcare delivered because they provide the majority of healthcare interactions; when nursing feedback is compared with medical and dental feedback, a similar phenomenon may again be at work, with patients being more honest with nurses than they are with their doctors.

Strengths and weaknesses of the study

The major strength of this study is its coverage of all acute general inpatient NHS services in England, encompassing many thousands of staff respondents. To the authors’ knowledge, this is the first time that the NHS Staff Survey has been correlated with outcomes, although previous work has linked positive staff perception of hospital cleanliness (from a previous staff survey) with improved patient experience metrics.31

However, caveats should be noted. Primarily, the use of adjusted overall mortality measures remains contentious and has clear weaknesses, as outlined previously in this paper and elsewhere. HSMR is, however, one of the few measures that attempts to capture the performance of a whole hospital without relying on self-reporting.

A limitation of the NHS Staff Survey dataset was that, while the overall response rate was 54.5%, we were unable to determine the response rate for acute trusts overall, nor whether response rates varied between trusts. Given that data collection was delegated to individual trusts, disentangling variation in response rate due to logistical reasons from other potential causes would be challenging. Likewise, while the selection of respondents was intended to be random, it is conceivable that trusts may have selected participants in ways that introduced bias. However, trusts would not have been aware that the responses collected would be used in the way employed by this study, and it is doubtful that trusts would have the means or intention to systematically ‘enhance’ staff feedback.

While the survey results are from the last few months of 2009, the HSMRs reflect performance across the 2009–2010 financial year. Given the size of the institutions in the study, it is unlikely that substantial change would have occurred in the quarter following the survey. Furthermore, while the results in question are from 2009 and may not be representative of the organisations in 2012, as a proof of concept linking staff feedback to performance the observed correlation is likely to remain valid even if it does not exactly represent the NHS at present.

Mechanisms, unanswered questions and future research

There are a number of possible mechanisms that may explain the association between staff morale and HSMR. The first, in line with our hypothesis, is that staff can discern quality in healthcare and reflect this through evaluation and survey. Alternatively, more satisfied staff may provide a higher quality of care and, as such, the HSMR improves; conversely, hospitals with lower HSMRs may employ more positive or optimistic staff. A further explanation may involve a positive feedback loop in which staff are buoyed by good results in metrics such as HSMR, and vice versa. However, the degree to which staff are aware of hospital performance beyond major scandals publicised in the media is uncertain. From these results, it is not possible to determine the underlying mechanism or mechanisms and, as with previous research, causality should be inferred with caution given the complexity of healthcare organisations and the staff within them.

While the majority of respondents were positive about their institutions, nearly a fifth were negative. It is possible that this minority of respondents may provide a sentinel marker of poor institutional performance. Further analysis within this group is ongoing. Of particular interest are the effects of organisational culture across professional groups, both clinical and non-clinical. Understanding the role that these different groups play in improving healthcare is much sought after.

Should staff satisfaction be considered for future use, the possibility of gaming must be considered. Organisations might attempt to improve their staff feedback; provided that this was achieved through genuine improvements in staff morale, rather than by selective choice of respondents, it could be an asset in the longer term.

Establishing causality among these variables is challenging, and inherently beyond the scope of a cross-sectional analysis. Tracking these data longitudinally presents an opportunity, not least because the NHS Staff Survey is a routine, annual process; examining changes in staff satisfaction and their temporal relationship with outcomes may well contribute to this discourse on causality. Evaluating these staff-side metrics alongside the growing literature on patient experience (whether by survey or by unsolicited online review32) also presents opportunities for future research.

Conclusions

These results are intriguing and, while requiring further investigation, support the case for staff feedback metrics to be considered as indicators for both professional and lay audiences. The primary advantage of these metrics over other established indicators is that they are intuitive to all stakeholders in healthcare. They may also facilitate better communication between healthcare professionals, decision makers (including politicians) and the public at large. It may hold in healthcare, as it does in many other areas of life, that it is the insiders who can signpost the path to higher quality care.

References


Footnotes

  • Contributors RJP conceived the study, and with AB developed the study design. All authors interpreted the data and critically reviewed drafts of the manuscript. RJP collected and analysed the data and prepared the manuscript. AB is guarantor.

  • Funding RJP is funded by an Academic Clinical Fellowship from the NIHR Integrated Academic Training Programme. The Department of Primary Care and Public Health at Imperial College is grateful for support from the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for North West London (a partnership between Chelsea and Westminster NHS Foundation Trust and Imperial College London), the NIHR Imperial Biomedical Research Centre (a partnership between Imperial College Healthcare NHS Trust and Imperial College London), and the NIHR Imperial Centre for Patient Safety and Service Quality. The authors would like to thank Hilary Watt for her advice on the statistical analysis.

  • Competing interests All authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author); BJ, PA and AB are part of the Dr Foster Unit at Imperial, which is principally funded via a research grant by Dr Foster Intelligence, an independent healthcare information company and joint venture with the Information Centre of the NHS. The authors’ work was independent of Dr Foster Intelligence, which had no role in the analysis, interpretation or decision to submit this paper. The Unit is affiliated with the NIHR Imperial Centre for Patient Safety and Service Quality at Imperial College Healthcare NHS Trust.

  • Ethics approval We have permission from the National Information Governance Board (NIGB) for Health and Social Care under Section 251 of the NHS Act 2006 (formerly Section 60 approval from the Patient Information Advisory Group) to hold confidential HES data. We have ethical approval to use them for research and measuring quality of delivery of healthcare, from the South East Ethics Research Committee.

  • Provenance and peer review Not commissioned; externally peer reviewed.