
A core questionnaire for the assessment of patient satisfaction in academic hospitals in The Netherlands: development and first results in a nationwide study
  1. S M Kleefstra1,
  2. R B Kool1,
  3. C M A Veldkamp2,
  4. A C M Winters-van der Meer1,
  5. M A P Mens3,
  6. G H Blijham4,
  7. J C J M de Haes5
  1. 1Prismant, Utrecht, The Netherlands
  2. 2Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
  3. 3Netherlands Federation of University Medical Centres (NFU), Utrecht, The Netherlands
  4. 4University Medical Centre, University of Utrecht, Utrecht, The Netherlands
  5. 5Department of Medical Psychology, Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands
  1. Correspondence to Sophia Martine Kleefstra, Prismant, Postbus 85200, 3508 AE Utrecht, The Netherlands; sorien.kleefstra{at}prismant.nl

Abstract

Background Patient satisfaction is one of the relevant indicators of quality of care; however, measuring patient satisfaction has been criticised. A major criticism is that many instruments are not reliable and/or valid. Instruments should also have enough discriminative power to allow benchmarking of the results.

Objective To develop a “core questionnaire for the assessment of patient satisfaction in academic hospitals” (COPS) that is reliable and appropriate for benchmarking patient satisfaction results.

Research design This study describes the development of COPS, the testing of its psychometric quality and its use in eight Dutch academic hospitals in three national comparative studies in 2003, 2005 and 2007. Results were reported only if they were both significant (p<0.05) and relevant (Cohen d>0.2).

Results The questionnaire was returned by 40 678 patients in 2003 (77 450 sent, 53%), by 40 248 patients in 2005 (75 423 sent, 53%) and by 45 834 patients in 2007 (87 137 sent, 53%). The six dimensions had good Cronbach α values, varying from 0.79 to 0.88. The results of every item were reported to the individual hospitals. A benchmark overview showed the overall comparison of all specialties of the eight hospitals for the clinical and outpatient departments. The 2007 measurement showed relevant differences in satisfaction on two dimensions in the clinical setting.

Conclusions COPS is shown to be a feasible and reliable instrument for measuring the satisfaction of patients in Dutch academic hospitals. It allows comparison of hospitals and provides benchmark information at hospital and specialty level, comparisons with previous measurements and identification of best practices.

  • Patient satisfaction
  • instrument for benchmarking
  • development of instruments
  • improving quality of care
  • academic hospitals


Patient satisfaction is an important indicator of quality of care.1–5 39 Indeed, hospitals worldwide measure patient satisfaction to improve the quality of their care.6–13 More specifically, patient satisfaction feedback helps healthcare providers identify potential areas for improvement, which in turn can increase the effectiveness of healthcare systems.16 18 Satisfied patients are important for hospitals as they are more likely to return, to comply with medical treatment and to recommend the hospital to others.3 14–16

Measuring patient satisfaction has also been criticised.17–19 One major criticism is that many instruments for measuring patient satisfaction are not reliable and/or valid.20 21 Also, a recent study showed no significant association between patient satisfaction and quality of care.34 Patient satisfaction surveys are often not followed by changes in medical provider behaviour or hospital care.9 Measuring patient satisfaction has furthermore been criticised for not discriminating between hospitals.5 7 22 Surveys should enable hospitals to put their satisfaction ratings into perspective rather than to view them in isolation.6 The instruments should therefore have enough discriminative power to benchmark results between hospitals.

While realising the advantages and limitations of patient satisfaction research, eight academic hospitals in The Netherlands decided in 2002 to develop a reliable and valid instrument to compare the satisfaction of their patients throughout the country. They wanted a short, core instrument to screen patient satisfaction based on the needs of patients in academic hospitals. Such an instrument would allow them to be open about their patients' judgements. Several questionnaires had been developed earlier, but most were not suitable for these goals.

In this article, we describe the development of the “core questionnaire for the assessment of patient satisfaction in academic hospitals” (COPS) and its preliminary psychometric testing. We also describe the experience with its use in three national comparative studies among eight academic hospitals in 2003, 2005 and 2007.

Methods

Development

First, relevant content areas were selected by comparing the existing surveys on patient satisfaction in academic hospitals. To provide a mutual framework, the content was analysed against a study performed by the Dutch National Patient Platform, which had defined preliminary criteria for the quality of patient care in hospitals.23 Seven elements had been formulated: (1) accommodation, (2) organisation, (3) professional skills, (4) information, (5) communication, (6) support and (7) autonomy. A list of 72 care elements covering these areas was created. This list was given to 44 representatives of different patient organisations relevant for academic hospitals. They were informed about the study aim by letter and were invited to indicate the importance of the different items on a questionnaire during their regular board meeting. Representatives who were absent were invited to complete the questionnaire at home. The level of importance could be scored on an 11-point response format, varying from “not important” to “the most important”.

Second, we used existing data from a survey instrument used in the Academic Medical Centre in Amsterdam as a first basis for creating the item wordings.26 Patients (n=784) from different departments were included in the sample. They had received a questionnaire at home up to 3 months after discharge. The questionnaire covered 12 elements of hospital care represented in 54 questions. Earlier findings indicated that responses should preferably be formulated in terms of satisfaction.26 Therefore, 5-point Likert-type scales were used, with the answering categories unsatisfied (=1), somewhat satisfied (=2), rather satisfied (=3), quite satisfied (=4) and very satisfied (=5). An intentionally skewed wording of the answering categories was chosen, as in the SF-36,38 because patients are more likely to choose positively framed responses than negative ones.
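To make the scoring concrete, the following is a minimal sketch (in Python, with hypothetical data) of how such 5-point responses could be coded and averaged into a dimension score. The mapping follows the answering categories above; the function name and example answers are illustrative, not part of the study.

```python
# Sketch: coding 5-point satisfaction responses into a dimension score.
# The response-to-code mapping follows the questionnaire's categories;
# the example data are hypothetical.
RESPONSE_CODES = {
    "unsatisfied": 1,
    "somewhat satisfied": 2,
    "rather satisfied": 3,
    "quite satisfied": 4,
    "very satisfied": 5,
}

def dimension_score(answers):
    """Mean of the coded answers of one patient on one dimension."""
    coded = [RESPONSE_CODES[a] for a in answers]
    return sum(coded) / len(coded)

# One patient's answers on a hypothetical two-item dimension.
print(dimension_score(["quite satisfied", "very satisfied"]))  # 4.5
```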

Additionally, patients had been asked to indicate how satisfied they were with the care overall and whether they would recommend the hospital to others.

Third, the psychometric properties of COPS were tested in the Leiden University Medical Centre. COPS was sent to 4693 patients from 25 inpatient wards of the Leiden University Medical Centre within 3 months of discharge (about 190 questionnaires per department). Similarly, questionnaires were sent to 5326 patients who had visited the hospital's outpatient department.

Experience

COPS was used in three large-scale nationwide comparative studies in all eight academic hospitals in The Netherlands in 2003, 2005 and 2007. The study sample was stratified according to 17 of the 27 main medical specialties in The Netherlands. Two hundred consecutive patients were approached from every department, given an expected non-response of 50% and the wish to obtain completed questionnaires from 100 patients per department for analysis. A coordinator was appointed in each academic centre and instructed to ensure a comparable approach across the eight centres. The central study office provided a manual for the data collection procedures and organised instruction meetings for the coordinators. Because we worked with a core questionnaire, hospitals were allowed to add questions if desired.

In 2003 and 2005, COPS was sent to patients within 2 months after admission or an outpatient visit, accompanied by a letter from the patient's hospital. This letter included specific information in English, French, German, Turkish, Moroccan and Spanish, inviting patients to ask the help of others if they were unable to read Dutch. Questionnaires could be returned to the independent research organisation Prismant in a prestamped return envelope. A reminder was sent after 2 weeks. A telephone and email helpdesk was available for patients needing support. In 2007, patients were also offered the possibility to complete the questionnaire online; a personal code was given in the letter, and it remained possible to return the questionnaire by mail. Patients were invited to comment on the questionnaire, if desired.

Analysis

Development

To select content, the judgements of the different respondents were analysed and ordered in a so-called norm analysis.25 The items were scored on a psychological scale; the item with the highest score was considered the most important to the patient representatives.

To select items, 10 of the original 54 questionnaire items were omitted because of local specificity, a relatively high number of missing values or a skewed distribution. A factor analysis was done to establish the structure of the questionnaire and the loading of the individual items on possible factors. We decided to select only those factors that were reliable after rotation (α>0.72). To select items in the relevant domains, regression analyses were done for each element, with the items entered as predictors of the patients' overall satisfaction and of their intention to recommend the hospital to others.
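As an illustration of these two selection steps, the sketch below runs an exploratory factor analysis with varimax rotation and then regresses overall satisfaction on the standardised items, retaining the items with the largest coefficients. The data are synthetic, and the use of scikit-learn and statsmodels is an assumption for illustration; the article does not state which software or rotation method was used.

```python
# Sketch of the item-selection steps: factor analysis, then a regression
# picking the items most predictive of overall satisfaction.
# All data and column names below are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(784, 10)),
                     columns=[f"item_{i}" for i in range(10)])
overall = items.mean(axis=1) + rng.normal(0, 0.3, 784)  # stand-in outcome

# Exploratory factor analysis with varimax rotation (two factors).
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor_1", "factor_2"])

# Regression: standardised items as predictors of overall satisfaction;
# the items with the largest absolute coefficients would be retained.
X = sm.add_constant((items - items.mean()) / items.std())
fit = sm.OLS(overall, X).fit()
print(fit.params.drop("const").abs().sort_values(ascending=False).head(2))
```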

Experience

In the nationwide studies, we compared two groups (for example, patients treated by one specialty in a hospital with those treated by the other specialties in the same hospital) using t tests. We only reported results if they were significant (p<0.05 after correction for the number of tests) and relevant (Cohen d>0.2). A hospital was reported as a best practice if its results were the best and the difference with the others was more than 1 SD.
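The reporting rule can be made concrete with a small sketch: a difference is reported only if it is significant after correction for the number of tests and relevant by Cohen's d. The data below are synthetic, and a Bonferroni-style correction is assumed because the article does not state which correction was applied.

```python
# Sketch of the reporting rule: significant after multiple-test
# correction AND relevant (Cohen d > 0.2). Data are synthetic.
import numpy as np
from scipy import stats

def cohen_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

rng = np.random.default_rng(1)
specialty = rng.normal(4.2, 0.6, 120)  # one specialty's scale scores
others = rng.normal(4.0, 0.6, 900)     # remaining specialties combined

n_tests = 17                               # e.g. one test per specialty
t, p = stats.ttest_ind(specialty, others)
p_corrected = min(p * n_tests, 1.0)        # Bonferroni-style correction
d = cohen_d(specialty, others)

if p_corrected < 0.05 and d > 0.2:         # significant AND relevant
    print(f"report difference: p={p_corrected:.3f}, d={d:.2f}")
```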

Results

Development

The “item content relevance” questionnaire was completed by 36 representatives (82%). Representatives displayed a high level of agreement regarding the (un)importance of most items. We selected the 25 items considered “most important”. These related to information (10 items), organisation (five items), communication (five items), professional skills (three items) and autonomy (two items). Items regarding accommodation and perceived support were considered less important. The results were discussed in a meeting with the patient representatives, who confirmed their relevance.

To “select the items”, we included data from the 784 responding patients in the analysis. The factor analysis exploring the structure of the original questionnaire yielded two factors explaining 26% and 16% of the variance, respectively. The first factor (α=0.85) referred to disease-related and treatment-related elements of care. It covered hospital admittance, nursing care, medical care, information, autonomy and discharge. The second factor (α=0.75) referred to other elements of care such as hotel facilities and accessibility. Given the concordance of these results with the results of the study determining the relevance of content areas, we decided to proceed with the elements involved in the first factor only.

The two most relevant items for every element, as selected in the regression analyses, are given in table 1. If the order of items was different in the two regression analyses, three items are represented in the table.

Table 1

Regression analysis to select the two most relevant items for every satisfaction domain (element)

As comparable content areas were found to be relevant in both studies and the factor analysis gave similar results, we decided to combine the results of the two studies.

First, as information giving was found to be the most important aspect when distinguishing relevant content areas, it was given extra attention in the final questionnaire. Second, communication was brought under the headings of medical and nursing care. Similarly, “expertise” or “professional skills” were assessed under these headings in two questions. Thus, scales covering medical and nursing care were proposed. Third, organisation was covered under two headings: admission and discharge. As the questionnaire was intended to be feasible in clinical and outpatient departments, the questions regarding admittance were transformed into questions covering the reception in the outpatient clinic. Finally, the element autonomy was kept as proposed in the original questionnaire. However, patient representatives were found to highly value confidentiality and privacy; these issues were not covered in the original questionnaire and were therefore added to the autonomy scale.

The overall response rate for “testing the psychometric properties” in the Leiden University Medical Centre clinical departments was 55% (n=2 581). Women and patients older than 45 years were overrepresented in the group of responders. The response rate in the outpatient clinic was 53% (n=2 823). In this group, patients older than 45 years were overrepresented as well, but men and women did not differ in response rates. The reliabilities of the scales are given in table 2. All scales had a good reliability (α=0.79–0.88).
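For reference, Cronbach's α for one scale can be computed directly from a patients-by-items score matrix, as in the minimal sketch below; the data are synthetic stand-ins, not the study's.

```python
# Sketch: Cronbach's alpha for one satisfaction scale.
# alpha = k/(k-1) * (1 - sum of item variances / variance of sum scores)
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = patients, columns = items of one scale."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic 4-item scale for 200 patients with correlated answers.
rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(200, 1))
scale = np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 5)
print(round(cronbach_alpha(scale), 2))
```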

Table 2

Reliability of satisfaction scales (Cronbach α)

Experience

For the nationwide studies, the questionnaire was returned by 40 678 patients in 2003 (77 450 sent, 53%), by 40 248 patients in 2005 (75 423 sent, 53%) and by 45 834 patients in 2007 (87 137 sent, 53%). Fewer than 5% of the patients, in both the inpatient and outpatient settings, completed the questionnaire on the internet in 2007. The response rates and patient characteristics for inpatients and outpatients are given in table 3. After excluding questionnaires that were damaged, unreadable or lacking essential data, response rates differed across the hospitals, varying from 40% to 61% in 2003, from 44% to 66% in 2005 and from 41% to 64% in 2007.

Table 3

Patient characteristics of the three nationwide studies (percentages)

Reliabilities of the scales are given in table 4. In the overall benchmarks too, all scales had a good reliability (α=0.70–0.91). The highest internal consistency was seen in the scale covering medical care, and the lowest in the scale covering patient autonomy.

Table 4

Reliability of scales in the large-scale comparative study (Cronbach α)

The satisfaction scores of the nationwide studies are shown in table 5. In the inpatient setting, patients' satisfaction with medical care and with discharge and after care increased over the years; it remained stable for the other dimensions. In the outpatient setting, the mean of all six dimensions increased over the years. These increases were significant (p<0.05) but not relevant (Cohen d<0.2). In neither the clinical nor the outpatient setting did patients' satisfaction decrease over time in any dimension.

Table 5

Scale scores (mean) of the three nationwide studies

All study results were presented to the hospitals and the public in reports and through the internet. For every item, individual hospitals were shown whether their score fell inside or outside the benchmark CI, as shown in figure 1. For a benchmark overview, we presented the overall comparison of all the specialties of the eight hospitals for the clinical and outpatient departments (table 6 gives an example of the inpatient benchmark of 2003). Several specialties could be assigned as best practices. In the first two measurements (2003 and 2005), no relevant differences in satisfaction were found at the hospital level. The third measurement (2007) showed relevant differences in satisfaction on two dimensions in the clinical setting for one hospital, namely on information and on patient autonomy (p<0.05 and Cohen d>0.2).
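The display logic of figure 1 can be sketched as follows: a hospital's mean on a dimension is flagged when it lies outside the 95% CI around the benchmark mean. A normal-approximation CI is assumed here, since the article does not specify how the interval was computed; all names and numbers are hypothetical.

```python
# Sketch of the benchmark display: flag a hospital mean that falls
# outside the 95% CI of the benchmark mean. A normal-approximation CI
# is assumed; the data are hypothetical.
import numpy as np
from scipy import stats

def flag(hospital_mean, benchmark_scores, level=0.95):
    m = np.mean(benchmark_scores)
    half_width = stats.sem(benchmark_scores) * stats.norm.ppf((1 + level) / 2)
    if hospital_mean > m + half_width:
        return "green"  # above the CI: positive outlier
    if hospital_mean < m - half_width:
        return "red"    # below the CI: negative outlier
    return "within CI"

rng = np.random.default_rng(3)
benchmark = rng.normal(4.0, 0.5, 5000)  # pooled scores of all hospitals
print(flag(4.1, benchmark))
```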

Figure 1

Patient satisfaction on the dimension medical care. The mean score (black stripe in the middle) on the dimension medical care with the 95% CI. Green and red triangles mark means outside the CI, positively and negatively, respectively.

Table 6

Overview of best practices in the inpatient clinic in 2003 for all 17 specialties on every element

Discussion

Although the items in COPS seem comparable with items used in several existing questionnaires, most of those questionnaires were not suitable for the goals of the eight academic hospitals: a short, reliable and valid core instrument to screen patient satisfaction, based on the needs of patients in Dutch academic hospitals. This study shows that COPS is a feasible and reliable instrument to measure patient satisfaction. It proved useful in comparing hospitals; it provides hospitals with benchmark information at hospital as well as specialty level; and it can distinguish best practices. The response rates are average compared with international response rates, which range from 46% in the USA to 74% in Germany.36

In practice, some hospitals used the information obtained for interventions to improve patient satisfaction. For example, based on the results of the 2003 measurement, the Radboud University Nijmegen Medical Centre developed a checklist to improve the communication between doctors and nurses in the department of gastroenterology.35

Use by general hospitals

Since 2004, several Dutch general hospitals have also used COPS as an instrument for measuring patient satisfaction. Because the dimensions and items of COPS were originally compared against a framework for patient satisfaction in general hospitals, it was reasonable to assume that COPS is also a feasible and valid instrument for general hospitals. The Federation of Dutch Hospitals therefore accepted COPS as a standard instrument in hospitals, adding only one item, which assesses overall satisfaction on a 10-point rating scale. Nowadays, most hospitals in The Netherlands use COPS.

Limitations of the study and future research

A number of study limitations can be mentioned. First, we could not investigate the characteristics of the non-responders. Thus, extremely (dis)satisfied patients, in particular, may have returned the questionnaire. However, previous research has shown that the impact of non-response bias on comparisons between hospitals is small.27

Second, it can be argued that our satisfaction scores were high, as in most satisfaction research, and may be too high to function as a basis for improvement. Moreover, it has been suggested that such high scores are not always an indication that patients have good experiences.7 12 19 28 Our results indeed show high scores on some dimensions, but there is still room for improvement, especially on the dimensions information and after care and discharge, and in the departments with a significantly lower patient satisfaction than the benchmark. One should also consider that it is difficult to measure dissatisfaction in a healthcare system where most consumers are very satisfied.29

Third, all eight hospitals used the information from the patient satisfaction benchmark to make improvements.24 35 37 The measurements indeed showed increased satisfaction in some cases. An effect study to assess whether there was a direct relationship between the improvements hospitals made and the increased satisfaction could not be performed. There is hardly any evidence for such a relation in the literature, which makes it an important topic for future research. Nevertheless, the academic hospitals are satisfied that their idea for a short, reliable, valid and discriminating questionnaire for measuring patient satisfaction proved feasible.

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.