Article Text


Hearing the patient’s voice? Factors affecting the use of patient survey data in quality improvement
E Davies,1 P D Cleary2

1 King’s College London School of Medicine, Thames Cancer Registry, London SE1 3QD, UK
2 Department of Health Care Policy, Harvard Medical School, 180 Longwood Avenue, Boston, MA 02115, USA

Correspondence to: Dr E Davies, Senior Lecturer in Cancer Registration, King’s College London School of Medicine, Thames Cancer Registry, London SE1 3QD, UK; Elizabeth.Davies{at}kcl.ac.uk

Abstract

Objective: To develop a framework for understanding factors affecting the use of patient survey data in quality improvement.

Design: Qualitative interviews with senior health professionals and managers and a review of the literature.

Setting: A quality improvement collaborative in Minnesota, USA involving teams from eight medical groups, focusing on how to use patient survey data to improve patient centred care.

Participants: Eight team leaders (medical, clinical improvement or service quality directors) and six team members (clinical improvement coordinators and managers).

Results: Respondents reported three types of barriers before the collaborative: organisational, professional and data related. Organisational barriers included lack of supporting values for patient centred care, competing priorities, and lack of an effective quality improvement infrastructure. Professional barriers included clinicians and staff not being used to focusing on patient interaction as a quality issue, individuals not necessarily having been selected, trained or supported to provide patient centred care, and scepticism, defensiveness or resistance to change following feedback. Data related barriers included lack of expertise with survey data, lack of timely and specific results, uncertainty over the effective interventions or time frames for improvement, and consequent risk of perceived low cost effectiveness of data collection. Factors that appeared to have promoted data use included board led strategies to change culture and create quality improvement forums, leadership from senior physicians and managers, and the persistence of quality improvement staff over several years in demonstrating change in other areas.

Conclusion: Using patient survey data may require a more concerted effort than using other clinical data. Organisations may need to develop cultures that support patient centred care, build quality improvement capacity, and align professional receptiveness and leadership with technical expertise in using the data.

  • patient centred care
  • patient-provider relations
  • patient survey data
  • quality improvement


Health services in England and Wales and the United States now seek to develop “patient centred care”.1,2 Over the last decade, research on patients’ perspectives on care has moved from asking about “overall satisfaction”—a broad concept producing data that are difficult to interpret—to asking about specific patient experiences.3 Issues emerging as critical components of high quality care include information and education; respect for preferences; coordination and continuity; and transitions in care.4 Surveys conducted by the Centers for Medicare & Medicaid Services (CMS) in the US5,6 and the NHS in England and Wales now assess these issues.7,8 In both countries, data showing variation in experience across geographical areas, hospitals, and health plans are now routinely published.5–9

Despite these developments, little is known about why variations in patient experiences persist and whether reporting survey data improves care. Younger patients and those with low income, poor perceived health, or non-black ethnicity tend to report worse experiences, but such factors explain only a small proportion of the variation observed.10–14 In general, the range of scores suggests that institutional characteristics and management are important regardless of the population served.4,10,15

Observations from Wisconsin,16 California,17 and Massachusetts3,18 in the US suggest that public reporting of survey and clinical data may focus attention on improvement efforts. There is no published evidence that such feedback leads to sustained improvement, although the Veterans Health Administration, which surveys patients quarterly, has reported a 15% improvement in overall scores between 1995 and 1999.19 Rogut and Hudson described the response of 15 New York City hospitals to a patient survey in 1994.11,20 They found that, although hospitals generally thought the survey identified problems, only a few actually launched patient centred interventions.20 One randomised controlled trial that gave survey results to 55 general practitioners in the Netherlands found this had no effect on patients’ evaluations of their care 1 year later.21

This study aimed to develop a framework for understanding the factors which affect the use of patient survey data in quality improvement. We interviewed senior health professionals and managers in teams from eight medical groups in Minnesota, USA who had joined a quality improvement collaborative designed to teach them how to use survey results to improve patient centred care.

METHODS

Setting

The collaborative was organised by the Institute for Clinical Systems Improvement (ICSI) in Minnesota and the Consumer Assessments of Healthcare Providers and Systems (CAHPS) team at Harvard Medical School, Boston. ICSI is a state wide consortium of health plans, medical groups, and hospitals that have worked to develop clinical guidelines and quality improvement collaboratives since 1993. Eight self-selected medical groups providing primary and/or secondary care services to urban and rural populations were included.

Study design and interview development

Semi-structured interviews22 with key informants in each group were conducted to identify difficulties or successes they had experienced when trying to use patient feedback or survey data. Detailed descriptions and interpretations of experiences and processes were sought and multiple perspectives solicited.22 Our interview guide was informed by the New York study20 and suggestions from senior physicians, managers, and experts in patient survey work in England and the US. We discussed and refined questions in the CAHPS team and tested them in a pilot interview (box 1). The Institutional Review Board at Harvard Medical School approved the study.

Box 1 Interview questions covering previous experience of trying to improve patient centred care

Background and institutional support

  • I’d like you to talk me through the programme of work you’re undertaking and the key insights you’ve had so far, but first I’d like to know a bit about the background to the initiative.

  • What was the motivation for improving patient-centred care in your medical group?

  • Who was leading this and how?

  • What is the timing of this project in relation to other quality improvement initiatives?

  • Is patient centred care a new topic for your medical group to tackle?

  • Has there been a history of similar projects or initiatives? What else is happening now?

  • Have you or people in your group done something like this before?

  • To what extent do you feel your institution is actively supporting you to promote the success of this project? In what way?

  • To what extent do you feel the onus to demonstrate success to your institution?

How survey data have been received and understood

  • How similar are the survey results received as part of this collaborative to those you’ve received in the past using other survey methods?

  • Were these different from those you or others expected?

  • Have they been more or less useful?

  • How do staff view the validity of the data?

  • Can you give me some examples of their views?

Recruitment of respondents

Team leaders were contacted in January 2004 while the collaborative was developing aims and measures. All eight leaders agreed to be interviewed and one of the authors (ED) visited 2 weeks later to conduct the interviews. Three of the leaders were medical directors, three were directors of clinical improvement or service quality, one was a group manager, and one was a quality improvement coordinator. They had worked in their medical groups for 2–20 years. Four of the leaders invited other colleagues who they thought had relevant experience to participate in the interview—one director of customer relations, two clinical improvement coordinators, and three other managers—making a total of 14 respondents. Interviews lasted 60–90 minutes and were conducted in the work setting in all but one case. A transcript was returned to each team leader to review.

Analysis of data

One of the authors (ED) identified 58 comments about working with patient feedback or survey data from the transcripts and sorted them into 25 types of barriers. The barriers were then grouped into three broader categories,22 with sub-themes defined within each. ED then reapplied this framework to the 16 initiatives that teams had reported to see whether the absence of these barriers was related to better use of patient survey data, and re-reviewed the transcripts for contradictory examples. The draft paper was shared with team leaders and discussed with them.
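To make the shape of this sorting step concrete, the sketch below shows one way coded comments could be tallied against such a framework. It is purely illustrative: the analysis described above was carried out by hand, and the comment texts, sub-theme labels, and counts in the code are hypothetical stand-ins rather than the study’s data.

```python
# Illustrative sketch only: the study's coding was done manually, and
# every comment, sub-theme, and category label here is a hypothetical
# stand-in for the real coded material.
from collections import Counter

# Map barrier sub-themes to the three broader categories.
CATEGORY = {
    "competing priorities": "organisational",
    "lack of supporting values": "organisational",
    "clinical scepticism": "professional",
    "defensiveness": "professional",
    "lack of timely feedback": "data related",
    "lack of specificity": "data related",
}

# Each coded comment is paired with the sub-theme it was sorted into.
coded_comments = [
    ("We don't get paid for listening", "competing priorities"),
    ("Is that statistically significant?", "clinical scepticism"),
    ("The data were a year old", "lack of timely feedback"),
    ("My patients are sicker", "defensiveness"),
]

# Tally comments per broader category, mirroring the manual grouping.
tally = Counter(CATEGORY[sub_theme] for _, sub_theme in coded_comments)
for category, n in tally.most_common():
    print(f"{category}: {n} comment(s)")
```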

RESULTS

All medical groups reported initiatives for improving patient centred care over 1–6 years (box 2).

Box 2 Examples of initiatives for improving patient centred care

Data collection

  • Commissioning patient surveys from outside companies

  • Developing in-house survey methods

  • Interviewing patients

  • Collecting feedback from patients using websites and telephone messages

Feeding back data

  • Reviewing survey results at the board

  • Using patient complaints to identify areas for improvement

  • Feeding survey data back to individual clinicians

Improving access

  • Scheduling appointments with patients’ preferred doctor

  • Decreasing waiting times for appointments

Improving education

  • Developing education materials for patients

  • Improving pain control

  • Training front of house staff in customer relations

Each team identified more examples of barriers than of successes, and these were grouped into those that were (1) organisational, (2) professional, and (3) data related.

Organisational barriers

Lack of supporting values

Lack of an emphasis on patients’ needs in decision making at all levels of the organisation made it difficult to create a tension for change. Three respondents identified a traditional hierarchical management structure, the influence of personalities, and the greater importance given to staff needs as influences on group culture. For example:

“So that the first remark may be extremely un-patient friendly and you literally have to say: ‘That has no patient focus. Let’s think about why we’re doing this.’ But I think we have great support at the top in the management team in theory. The question is, do they remember it on a day-to-day basis?” (Team leader, Group 5)

Two medical groups described strategies to change their organisational culture towards patient centredness. One leader described realising that their culture was not supporting the generalisation of success from individual projects. Their response had been to “decide which battles to fight”, to ensure they did not overcommit, to bring clinicians into projects one by one, and to provide adequate administrative support for these.

Competing priorities

Competing priorities that detracted from an organisational focus on patient centred care were financial goals (three respondents), the number of patients to be seen (two respondents), and major restructuring (two respondents). For example:

“The reimbursement for spending time with people is dramatically less than that given for procedures. This results in a feeling that we don’t get paid for listening or supporting people … Providers are often poorly informed about the reason for the visit and what information has previously been gathered. The approach of encouraging people to ‘come in and then we will figure out what to do’ can leave the impression that the system is disorganised.” (Team leader, Group 6)

Lack of a quality improvement infrastructure

Effective response to patient feedback or survey data appeared to require the prior development of quality improvement structures, capacity, and skills. Without leadership committed to quality improvement it was difficult to integrate activity throughout the organisation to tackle complex areas such as patient centred care. For example:

“We had an executive committee that met every week but everything came to them. And they rarely finished anything because they didn’t know how to fix it and they had no other place to put it. So they’d table it and they’d say, ‘Isn’t there somebody that can work on that?’”

“Smaller medical groups don’t have access to things like statisticians and database creators. In fact there’s a huge lack of that resource in health care. And I think that’s why we can’t excel at this as fast as manufacturing companies. So I work harder to develop new ways to get data out of old systems.” (Team leader, Group 1)

Seven groups said that ICSI had provided a focus for initiatives, maintaining momentum, gaining professional “buy in”, and teaching quality improvement skills. As this activity gained prominence, four groups reported having developed more coherent internal forums to make collective decisions, assign responsibility, develop champions, and use data to monitor progress.

Professional barriers

Many respondents reported that using survey data was a new concept which challenged traditional ways of working and thinking about measuring quality of care.

Clinical scepticism

All groups had experienced the sceptical responses of staff to survey results. Five reported that one concern—expressed primarily by clinicians—was the degree to which data should be taken seriously. For example:

“They’ve learned the term ‘statistically significant’. They’ve somehow picked that out and they’ll always ask if they are statistically significant. And we do spend a fair amount of time talking about, you know, it’s a good sample size, let’s talk about what we are using the data for, it is not a research study.” (Team leader, Group 3)

When findings seemed critical of, or inconsistent with, clinical experience, clinicians tended to want larger samples. One leader reported needing to wear “that bullet proof vest” while presenting data, and noted that qualitative case examples could paradoxically sometimes be taken more seriously than survey data.

Defensiveness and resistance to change

Five respondents noted patient survey data were potentially threatening. For example:

“There are often questions—‘My patients are sicker’, ‘My patients are different’, ‘My patients are this or that’. You can come up with any different variation, but I think we’ve probably heard them all! So it’s—you know—I think the standard push back that you would get.” (Manager, Group 2)

Three described the difficulty of changing doctors’ independent behaviour. For example: “Generally speaking, people agree with the data; they just don’t think it applies to them!” (Team leader, Group 2)

Projects attempting to influence clinicians were seen as requiring senior clinical “enthusiasts” to bring along “the majority”.

Lack of staff selection for skills

Two respondents identified the need for clinical staff to be able to elicit another’s point of view. For example:

“A lot of it has to do with … their attitude toward giving information and making sure they explain things. And those are habits and personality issues. Some do it extremely well, naturally, and others don’t do it as well. And it’s the ones who don’t do as well, I think it’s a real struggle because we’re asking them to change lifelong habits and that’s going to be hard.” (Team leader, Group 4)

Staff were noted to be selected for technical rather than “people” skills. For example, one group had found that, given the choice, most reception and administrative employees had opted to move away from patient contact to dealing with paperwork and finance. Managers responded by selecting employees for work that matched their personalities and by supporting teams to focus on patient needs. Two other groups had found that training reception staff in isolation was ineffective.

Three leaders reported knowing which staff members struggled with communication, but a key issue was to avoid presenting them with an overly negative message that might label them as a “bad person”. In one group a senior physician had fed back comparative patient survey data. Results were not made public, but low and high performing clinicians were paired for mentoring and further training was offered if necessary. Three respondents felt that clinician specific data were essential to target interventions, although another was sceptical of using such data for formal accountability.

Data related barriers

Lack of expertise

All but one respondent thought that special expertise was necessary to work with survey data. Two teams reported not having had time to synthesise or disseminate reports.

Lack of timely feedback

Three respondents mentioned the long delay from data collection to analysis and feedback as a major limitation. For example:

“It was old data … and it seems like by the time you get that type of data and by the time you look at reacting to it, it’s very easy for people—staff, physicians, whoever it might be—to say: ‘Well, you know that was a long time ago. We’ve already fixed that or that was when we had that receptionist’.” (Manager, Group 7)

Lack of specificity and discrimination

A significant issue for four teams was the need for data specific to a single clinic or patient group. Data suggesting a general problem or dissatisfaction rather than a specific care process that could be changed were seen as difficult to interpret and unlikely to lead to any action. For example:

“We didn’t really have much success with being able to put our hands around anything and really improve anything based on the information that we were getting.” (Manager, Group 7)

Three teams mentioned the problem of determining whether high scores were due to “halo effects”—where patients might be so grateful for treatment they were unwilling to criticise care. Two others mentioned “ceiling effects” at the top end of the scale that made it difficult to know what to focus on.
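The arithmetic behind a ceiling effect is simple to demonstrate. The sketch below, using invented numbers rather than any group’s actual results, shows how a score distribution clustered at the top of a 1–5 scale leaves almost no headroom for a mean score to register improvement.

```python
# Hypothetical illustration of a ceiling effect on a 1-5 rating scale;
# the ratings below are invented, not survey results from the study.
from statistics import mean, stdev

ratings = [5] * 80 + [4] * 15 + [3] * 5  # 100 hypothetical responses

at_ceiling = ratings.count(5) / len(ratings)
print(f"Share already at ceiling: {at_ceiling:.0%}")           # 80%
print(f"Mean: {mean(ratings):.2f}, SD: {stdev(ratings):.2f}")  # 4.75, ~0.54

# Even if every remaining patient's experience improved to the maximum,
# the mean could rise by at most this much:
print(f"Maximum possible gain in mean score: {5.0 - mean(ratings):.2f}")
```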

Uncertainty over effective interventions and rate of change

Three teams reported sustained high results or gradual improvement over several years, and one reported no change despite several interventions. The time from data collection to feedback, intervention, and further measurement made it difficult to infer what had caused what. Three groups reported success in using complaint data. One had found that the most common complaints related to difficulty in obtaining appointments. After a 2 year system redesign, such complaints became the least frequent category. The group used this success to justify turning attention to the next most common complaint: that staff were disrespectful and uncaring.

Lack of cost effectiveness

Three teams mentioned the high cost of data collection. For example:

“The numbers weren’t relevant unless they did a huge survey. But when they did a huge survey it cost them too much money. They never did anything with it. The data were old and it was viewed as lost money. Where’s the incentive?” (Team leader, Group 1)

DISCUSSION

Limitations of study and summary of findings

This is a small study of experienced enthusiasts for patient centred care from eight medical groups in a unique quality improvement organisation. Their experience may not be representative of all organisations receiving and responding to survey data. Our interviews may have elicited idealised accounts. For example, we have no way of verifying that the interventions reported were actually successful, of knowing whether the respondent was themselves a barrier, or of identifying all the barriers in each setting. However, the accounts were detailed and the respondents seemed very forthcoming and honest about the difficulties they had faced. The results might therefore provide some insight into why health professionals and managers find it so difficult to use survey results effectively. We identified three main types of factors—organisational, professional and data related—that had previously affected the use of patient survey data in quality improvement (see box 3 for a summary of the overall framework).

Box 3 Framework for factors affecting the use of patient survey data to develop patient centred care

Organisational

Barriers

  • Competing priorities

  • Lack of supporting values for patient centred care

  • Lack of quality improvement infrastructure

Promoters

  • Developing a culture of patient centredness

  • Developing quality improvement structures and skills

  • Persistence of quality improvement staff over many years

Professional

Barriers

  • Clinical scepticism

  • Defensiveness and resistance to change

  • Lack of staff selection, training and support

Promoters

  • Clinical leadership

  • Selection of staff for their “people skills”

  • Structured feedback of results to teams or individuals

Data related

Barriers

  • Felt lack of expertise with survey methods

  • Lack of timely feedback of results

  • Lack of specificity and discrimination

  • Uncertainty over effective interventions or rate of change

  • Lack of cost effectiveness of data collection

Comparison with other findings

Research on the effectiveness of using patient survey data in quality improvement is limited.23 However, our findings are consistent with Rogut and Hudson’s conclusion that a structured process for addressing problems and obtaining resources was critical in marshalling energy to tackle the issues raised by surveys, as well as a strong motivating force to produce changes in staff behaviour.20 They also found that some staff were frustrated by relatively small sample sizes, the long time it took to carry out the survey, and the fact that additional information had to be collected and considered before solutions could be targeted and action taken.20 A case study of a single US hospital also identified very similar barriers: organisational (size, structures and strategy), characteristics of individuals (fear, scepticism, awareness, training and physician interest), and data problems (not being user centred or linked directly to care processes).24 One Netherlands study found that, despite strong motivation, general practitioners found it difficult to use patients’ evaluations of care to change their behaviour and became sceptical of their value.25

The barriers we identified are similar to those found in other studies of quality improvement. For example, Kaluzny and McLaughlin26 describe the steps in making improvements as awareness of a problem, identification of a solution, decision to implement it, institutionalisation (that is, the extent to which total quality management is integrated into ongoing activities of the organisation), and impact. They point out that “institutionalization is unlikely to take place without some observed positive impact”. Shortell and colleagues27 proposed that the extent to which quality management is institutionalised is a function of the organisation’s structure, culture, and implementation approach.

Implications for practice

Survey data often provide information of which busy health professionals and healthcare systems were previously unaware, so findings may be surprising or uncomfortable. It makes little sense for healthcare systems to seek patients’ views and then to discount their concerns as either unrealistic or inevitable. Similarly, seeing clinicians as “the problem” seems neither helpful nor consistent with quality improvement approaches that seek to move away from individual blame to identifying and fixing system failures. Our results suggest that healthcare organisations need to develop cultures that support patient centred care, quality improvement capacity, professional receptiveness and leadership, and technical expertise with survey data. They also emphasise that surveys themselves do not indicate what needs to be done to improve any situation. Further commitment and ingenuity are needed to understand shortcomings in an organisation and to develop solutions.

Implications for future research and policy

More studies about how to use patient survey data effectively are needed. The characteristics of organisations that perform highly on patient centred care15 are likely to be different from the factors needed to transform a low performing organisation into a high performing one. Retrospective case studies of organisations that have successfully improved their patient experience scores may help to identify successful strategies.28 The use of patient surveys in the US and UK reveals differing strengths and weaknesses. In the US, many survey tools have been developed and used widely by health insurance plans. Recently the Centers for Medicare & Medicaid Services published data from national surveys of Medicare beneficiaries.5,6,9 In England and Wales the NHS has also collected national data using comparable methods. Large databases covering patients with cancer, heart disease, and mental health problems, and those attending hospital outpatient and emergency departments and primary care trusts, are now available.7,8 Recent analyses of these data show improvement in some areas linked to national priorities,29 but little change elsewhere and little evidence that improvement is due to feedback. Current drawbacks include lack of local ownership of national data and collection that is too infrequent for effective monitoring or for keeping up the momentum for change. A further weakness for both countries may be the belief that improvements in practice will somehow follow naturally from the fact that results are publicly reported. Evidence from this and other studies suggests that many barriers need to be removed before results will start to improve. In both countries policy makers need to seek ways of providing direction and support, information on effective approaches, and forums for organisations to share knowledge as they develop the kind of care that patients need.

Acknowledgments

The authors thank ICSI staff Beth Green, Gary Oftedahl and John Sakowski for their help and team leaders and members for their time and insights; Angela Coulter, Irene Higginson, Mike Richards, Lynn Rogut, and Stephen Schoenbaum for advice on the approach and issues this study should consider; and Susan Edgman-Levitan, Tim Ferris, Dana Safran, Dale Shaller, Soshanna Sofaer, and Joan Teno for help in developing the interview.


Footnotes

  • ED was supported by a Harkness Fellowship from The Commonwealth Fund, a New York City based private independent foundation. The views presented here are those of the author and not necessarily those of The Commonwealth Fund, its director, officers, or staff.

  • Competing interests: PDC has been an unpaid advisor to the Picker Institute and holds grants from the Agency for Healthcare Research and Quality to develop the Consumer Assessment of Health Plans Survey (CAHPS) method for nationwide use in the US.

  • ED designed the study, collected and analysed the data and wrote the paper. PC helped design the study, analyse the data, and write the paper. ED is the guarantor.
