BMJ Qual Saf doi:10.1136/bmjqs-2011-000332
  • Original research

Promoting patient-centred care through trainee feedback: Assessing Residents' C-I-CARE (ARC) Program

  1. Nasim Afsar-manesh3,4
  1. Assessing Residents' C-I-CARE (ARC) Program, University of California, Los Angeles (UCLA), Los Angeles, California, USA
  2. Department of Patient Affairs, University of California, Los Angeles (UCLA) Ronald Reagan UCLA Medical Center, Los Angeles, California, USA
  3. Department of Medicine, University of California, Los Angeles (UCLA) Ronald Reagan UCLA Medical Center, Los Angeles, California, USA
  4. Department of Neurosurgery, University of California, Los Angeles (UCLA) Ronald Reagan UCLA Medical Center, Los Angeles, California, USA
  1. Correspondence to Dr Nasim Afsar-manesh, UCLA Med-GIM & HSR, BOX 957417, RRUMC #7501A, Los Angeles, CA 90095-7417, USA; nafsarmanesh{at}mednet.ucla.edu
  1. Contributors TW and BH collected data, performed statistical analyses, drafted and revised the manuscript. VM helped design the study, oversaw the programme and revised the manuscript. NA provided faculty support and revised the manuscript.

  • Accepted 2 December 2011
  • Published Online First 2 January 2012

Abstract

Aims In recent years, patient satisfaction has been integrated into residency training practices through core competency requirements as set forth by the Accreditation Council for Graduate Medical Education (ACGME). In 2006, the UCLA Health System established the Assessing Residents' C-I-CARE (ARC) Program to obtain patient feedback and assess the communication abilities of resident physicians with a standard tool.

Methods The program utilized a 17-item questionnaire, completed via a facilitator-administered interview, which employed polar, Likert-scale and open-ended comment questions to assess physician trainees' interpersonal and communication skills.

Results From 2006 to 2010, the ARC Program provided patient feedback data to more than six clinical departments while collecting 5,634 surveys for 323 trainees. Scores for resident recognition and performance increased from the first to second year of activity by an average of 22.5%, while attending recognition scores decreased 19% over the four years. Additionally, residents and attendings in surgical specialties received higher recognition rates than those in non-surgical specialties.

Conclusions The ARC Program provided a standard tool for attaining patient feedback through a facilitator-administered survey that assisted in the accreditation process of training programs. Furthermore, hospitals, health organizations and medical schools may find the ARC Program valuable in collecting information for quality control as well as providing an opportunity for students to become involved in the healthcare field.

Introduction

The physician–patient interaction is a vital component of patient-centred care, and there have been extensive national discussions on approaches that produce the most effective and highest quality of care.1 This interaction is critical in giving patients an opportunity to learn about and engage in the management of their condition, treatment and follow-up.2 To improve patient satisfaction, medical centres and hospitals have long sought patients' feedback on the care provided, through national surveys such as the Hospital Consumer Assessment of Healthcare Providers and Systems and other private efforts.3–7 These mechanisms enable hospitals to pinpoint areas that require improvement, including the care provided by physicians.7–10 In fact, many studies have demonstrated that patient feedback is important in improving physician performance and bedside manner.11–13

Evaluation of physicians has increasingly focused on trainees as frontline providers interacting with patients. In 1999, the Accreditation Council for Graduate Medical Education (ACGME) established six required competencies (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice) for the evaluation of training programmes.14 While currently there is no formal standard for competency integration into training, the ACGME has suggested the use of patient-based surveys as a method of obtaining evaluations of the trainees' performance.15 16

The University of California, Los Angeles (UCLA) Health System has used surveys and feedback as a mechanism of quality assurance. One of its primary customer service tools is the C-I-CARE Program. C-I-CARE is a protocol that calls for medical staff and providers to Connect with their patients, Introduce themselves, Communicate their purpose, Ask or anticipate patients' needs, Respond to questions with immediacy and Exit courteously, and it applies to any encounter with patients or patients' families. Under the leadership of the Department of Patient Affairs of the UCLA Health System, employees observe and assess providers' and staff members' dedication to service and professionalism for quality assurance. This serves as an evaluation of staff interaction and patient satisfaction alongside other commonly used methods such as the Press-Ganey surveys and the Hospital Consumer Assessment of Healthcare Providers and Systems.7 17

In 2006, to provide patient-centred care and address the interpersonal and communication skills competencies of the ACGME, the Department of Patient Affairs at the Ronald Reagan UCLA Medical Center, in conjunction with the David Geffen School of Medicine at UCLA, launched the Assessing Residents' C-I-CARE (ARC) Program. The goals of this programme were to monitor resident performance and patient satisfaction while improving trainee education through real-time feedback.

Materials and methods

Study site

This programme served the inpatient facilities of the UCLA Medical Center from 2006 to 2008 and those of the Ronald Reagan UCLA Medical Center from 2008 to 2010. It also took place at the Santa Monica UCLA Medical Center and Orthopaedic Hospital from 2009 to 2010. The ARC Program continues to operate at both locations.

Populations sampled

The sample populations consisted of patients of all demographics, barring exclusion criteria, seen by residents in their respective training programmes in general surgery, internal medicine, orthopaedic surgery, neurology, obstetrics and gynaecology, and neurosurgery. The specific exclusion criteria included patients who refused to participate, had language barriers, were in contact or respiratory isolation, resided in the intensive care units, or were otherwise unable to complete the survey (eg, cognitive impairment). All surveys were facilitator administered via interviews conducted by volunteers on an anonymous and confidential basis. Patients were interviewed during their hospital stay with the aim of collecting at least seven surveys for each resident in training.

The ARC Program infrastructure

The pool of facilitators (used synonymously with surveyors) consisted entirely of undergraduate students from UCLA. Upon selection, each facilitator received a minimum of 12 h of training in the Health Insurance Portability and Accountability Act of 1996 and in the standardised interview and data collection protocols established by the ARC Program and the UCLA Department of Patient Affairs.

A department in the ARC Program consisted of five to ten surveyors led by two senior surveyors, known as Department Interns. Each department was responsible for providing surveys and patient feedback for an ACGME accredited residency programme. A Program Lead Intern supervised all departments, while a manager within the Department of Patient Affairs, the parent department of ARC, oversaw all activities of the ARC Program.

Audit tool

The audit tool used for this study was a 17-item survey. The first 15 questions were multiple choice, with answer choices presented on either a polar or Likert scale (figure 1). The final two questions were open-ended, allowing patients to share any comments about the resident and/or their hospital experiences. Survey questions targeted specific areas of the resident's interaction with patients, as outlined by the C-I-CARE Program at UCLA.

Figure 1

Assessing Residents' C-I-CARE Program audit tool.

Protocol

Upon entering the patient's room, the facilitator was required to introduce himself/herself, describe the purpose of the ARC Program and explain that participation was completely voluntary and confidential. The surveyor proceeded to begin the interview only after obtaining verbal consent from the patient.

The first question pertained to recognition of the attending physician, to observe whether patients were capable of identifying their attendings. Following the first question, the facilitator showed the patient a pictorial roster used to identify residents. The identification step was important because the patient had to have interacted with, and be able to recognise, the resident in question in order to answer questions about the trainee accurately. The surveyor ended the interview if the patient either did not recognise or did not have enough interaction with the resident (eg, only saw the resident once or twice briefly). Surveyors were also trained to detect uncertainty in patients' responses and to end interviews if patients seemed unsure about the identity of the resident or appeared to guess on questions. As part of the protocol, patients were not surveyed until 3 days after their date of admission (which was available to the surveyor via the patient census database) to allow sufficient time for patients to interact with the residents.

Following the questions regarding the resident, the facilitator asked the patient to share any comments in the final two open-ended questions of the survey. The introduction, questions and possible answer choices were all read from the audit tool, and all comments were recorded verbatim. Interviews lasted 2–3 min, with each questionnaire reflecting a single resident.

Statistical analysis

We conducted data analysis on surveys collected from the ARC Program's establishment in August 2006 to June 2010 for residents in six departments. Due to the variation in the number of surveys collected for each department, we separated and analysed departments in two categories: surgical (general surgery, neurosurgery and orthopaedic surgery) and non-surgical (internal medicine, obstetrics and gynaecology, and neurology).

We grouped questions into five categories: attending recognition, resident recognition, professionalism, communication quality, and diagnostic purposes. Questions 1 and 2 pertained to the recognition of attendings and residents, respectively. Questions 3, 4, 6, 7 and 8 were polar questions assessing the residents' professionalism. Questions 9–14 were Likert-scale-based items evaluating the communication quality. Lastly, we categorised questions 5 and 15 as relating to diagnostics.

We calculated success score percentages for the professionalism, communication and diagnostic categories as the number of ‘yes’ responses (polar questions) or ‘yes, always’ responses (Likert-scale questions) divided by the total number of responses.
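For illustration, a minimal sketch of this calculation in Python/pandas, assuming a hypothetical long-format table of responses; the column names, sample values and category mapping variable are invented for the example, not the ARC Program's actual data schema:

```python
import pandas as pd

# Hypothetical long-format response data: one row per answered question.
responses = pd.DataFrame({
    "resident_id": ["R1", "R1", "R1", "R2", "R2", "R2"],
    "question":    [3, 9, 5, 3, 9, 5],
    "response":    ["yes", "yes, always", "no", "yes", "sometimes", "yes"],
})

# Question-to-category mapping as described in the text.
category = {**{q: "professionalism" for q in (3, 4, 6, 7, 8)},
            **{q: "communication" for q in range(9, 15)},
            **{q: "diagnostics" for q in (5, 15)}}
responses["category"] = responses["question"].map(category)

# A response counts as a success if it is 'yes' (polar) or 'yes, always' (Likert).
responses["success"] = responses["response"].isin(["yes", "yes, always"])

# Success score = successes / total responses, per resident and category, in per cent.
scores = (responses.groupby(["resident_id", "category"])["success"]
          .mean().mul(100).rename("success_score"))
print(scores)
```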

For professionalism, communication quality, and diagnostic categories, we ran tobit regression models with resident physician as a grouping factor and a random intercept for each resident. We fit three separate models using the success scores for professionalism, communication quality, and diagnostics as outcome variables. For all models, the predictors were academic year, specialty (surgical or non-surgical) and year in residency.
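To make the censored-outcome idea concrete, the sketch below fits a plain tobit model (success scores censored at 0 and 100) by maximum likelihood on simulated stand-in data. It deliberately omits the per-resident random intercept used in the actual analysis, and all variable names, coefficients and data are illustrative assumptions, not the study's dataset or code.

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, lower=0.0, upper=100.0):
    """Negative log-likelihood of a tobit model censored at both limits."""
    *beta, log_sigma = params
    sigma = np.exp(log_sigma)                                        # keep sigma positive
    mu = X @ np.asarray(beta)
    ll = np.where(y <= lower,
                  stats.norm.logcdf((lower - mu) / sigma),           # left-censored at 0
                  np.where(y >= upper,
                           stats.norm.logsf((upper - mu) / sigma),   # right-censored at 100
                           stats.norm.logpdf(y, loc=mu, scale=sigma)))  # uncensored
    return -ll.sum()

# Simulated stand-in data: intercept, academic year, surgical indicator, residency year.
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n),
                     rng.integers(0, 4, n),       # academic year (0-3)
                     rng.integers(0, 2, n),       # surgical specialty (0/1)
                     rng.integers(1, 6, n)])      # year in residency (1-5)
y = np.clip(X @ np.array([60.0, 8.0, 2.0, 1.0]) + rng.normal(0, 15, n), 0.0, 100.0)

# Start from ordinary least squares estimates, then maximise the tobit likelihood.
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
start = np.append(beta0, np.log(y.std()))
fit = optimize.minimize(tobit_negloglik, start, args=(X, y), method="BFGS")
print("coefficients:", fit.x[:-1], "sigma:", np.exp(fit.x[-1]))
```

In practice a mixed-effects tobit with a per-resident intercept would be fit with specialised routines; this sketch only shows the censoring logic applied to bounded success scores.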

For resident recognition, we ran a logistic regression model (with a binomial error structure and logit link function) with resident as a grouping factor and a random intercept for each resident. The model had resident recognition as a binary outcome variable and academic year, specialty and year in residency as predictors.
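A hedged sketch of a mixed logistic model along these lines, using statsmodels' Bayesian mixed GLM as one possible implementation of a logit-link model with a per-resident random intercept; the data are simulated and the column names are hypothetical, not the ARC Program's actual dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated stand-in data: one row per survey, 0/1 resident recognition outcome.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "surgical":       rng.integers(0, 2, n),                        # specialty indicator
    "residency_year": rng.integers(1, 6, n),                        # year in residency
    "resident_id":    rng.choice([f"R{i}" for i in range(40)], n),  # grouping factor
})
logit = -0.5 + 0.8 * df["surgical"]                                 # illustrative effect
df["recognised"] = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())

# Binomial (logit-link) model with a random intercept for each resident,
# specified as a variance component via vc_formulas.
model = BinomialBayesMixedGLM.from_formula(
    "recognised ~ surgical + residency_year",
    {"resident": "0 + C(resident_id)"},
    df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())
```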

Because attending physician recognition data were not specific to each survey, we used a χ2 test to examine the frequency of surveys with identified attending physicians across academic years and between specialties.
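As a minimal illustration of this test, the sketch below applies scipy's chi-square test of independence to a year-by-recognition contingency table; the counts are invented placeholders, since the actual frequencies are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 4 contingency table: rows = attending recognised (yes/no),
# columns = academic years 2006-07 ... 2009-10.  Counts are placeholders only.
table = np.array([
    [350, 420, 510, 540],   # attending recognised
    [140, 200, 440, 500],   # attending not recognised
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.3g}")
```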

Results

The ARC Program obtained 5,634 surveys for 323 resident physicians during the academic years of 2006–2010 (see table 1 for breakdown). Patient response rate and demographic information were not collected.

Table 1

Breakdown of all surveys obtained

Scores for professionalism, communication, diagnostics and resident recognition differed across academic years (table 2, figure 2A). All categories experienced a significant increase in scores from 2006–2007 to 2007–2008: professionalism +23%; communication +22%; diagnostics +21%; and resident recognition +24%. Scores remained at this elevated level for the following two academic years (±4%). Conversely, attending recognition decreased by 19% over the four academic years (2006–2007: 71.2%; 2007–2008: 68.1%; 2008–2009: 53.5%; 2009–2010: 51.9%; χ2=113.07, df=3, p<0.001, figure 2B).

Table 2

Results from all models examining success scores for professionalism, communication, diagnostics and resident recognition

Figure 2

Changes in success scores (±SE) for (A) resident recognition, communication, professionalism, diagnostics and (B) attending recognition over academic year. While attending recognition decreased over the four academic years, the remaining categories experienced a significant increase from the first to the second year and a stabilisation of scores in the subsequent years.

Scores for resident recognition also differed across residency years (p=0.003). Specifically, patients recognised second-year residents more often than first-, third- and fourth-year residents, but at the same rate as fifth-year residents. Residency year did not have an overall effect on the success scores for the other categories (table 2).

Surgical programmes received higher scores than non-surgical specialties in both the resident (coefficient=0.481, SE=0.146, p=0.001) and attending (χ2=371.02, df=1, p<0.001) recognition categories (figure 3). Residents in surgical specialties also had higher scores in professionalism, but this difference was minimal (<0.5%). There was no significant difference between surgical and non-surgical specialties for scores in communication or diagnostics (table 2).

Figure 3

Differences in attending and resident recognition success scores (±SE) between surgical and non-surgical specialties. Attendings and residents of surgical specialties received higher recognition scores than those in non-surgical specialties.

Discussion

Resident physicians experienced a general increase in scores for all categories from the first to the second year of the ARC Program's implementation (figure 2). The minimal fluctuations of scores in the subsequent academic years may be due to a ceiling effect since the initial scores were already relatively high.18 Conversely, attendings received lower recognition rates, which actually decreased over time. These findings are consistent with other studies that have reported that patient knowledge of attending physicians is low.19 20 These low rates can also be common in busy teaching hospitals, where attending physicians spend more time teaching trainees than focusing on their own performances.21

Success scores for the attending and resident physician recognition categories differed between surgical and non-surgical specialties. In a large academic medical centre, patients are often seen in the outpatient clinic by their surgeon and therefore have an established relationship with them, whereas non-surgical physicians often see the patient for the first time in the hospital. Patients would therefore be expected to recognise their surgeons more easily than their non-surgical physicians.22

In general, the scores for the remaining categories were relatively high, with minimal differences between specialties. Though not statistically significant, these variations may be due to the training protocol and the types of patients encountered.

Program highlights

Measuring patient satisfaction and obtaining feedback is not a novel concept, but rather a growing practice in healthcare.3 4 While prior studies have only evaluated trainees of a single department, the ARC Program is one of the first to establish an infrastructure to conduct evaluations on a system-wide scale.23–26 The survey developed by the UCLA Department of Patient Affairs is a valuable tool that assesses the communication abilities of resident physicians. Coupled with the use of facilitators, the ARC Program provides an innovative method for retrieving real-time data across a broad range of inpatient services within the UCLA Health System.

The ARC Program's use of facilitator-based interviews allows for greater participation and faster, more efficient data collection than mail-in surveys.27 28 Face-to-face surveys ensure that respondents do not engage in mail-in survey behaviours, such as completing the survey over multiple sittings or submitting illegible handwriting.28 In addition, the protocol of displaying the pictorial roster to patients improves the validity of the data collection process, as surveys can only be completed if the patient correctly recognises the resident.

Because this programme stemmed from the Department of Patient Affairs, the surveys administered by the ARC Program also served as a patient experience quality meter. For example, patients could express their feedback to the facilitators who would then forward these issues to the Department of Patient Affairs. Due to the facilitator-based surveying technique, surveyors could assess more accurately if an issue required the attention of the Department of Patient Affairs.29

Furthermore, similar to other survey studies, the ARC Program's results serve foremost as an educational resource for residents to improve their communication skills throughout their training.24 26 The ARC Program centralises survey collection for all residents and can provide clinical departments with either raw or aggregate data to evaluate their trainee cohorts. The ARC Program also provides real-time data on a weekly basis to residency programme directors. While most departments typically discuss the data and patient comments with the residents at their biannual meetings, some departments, such as Neurosurgery, have instituted weekly reporting of the ARC feedback to the residents via email. However, further study is warranted to determine the magnitude of the effect of receiving real-time data on resident communication performance. As with the Communication Assessment Tool, the data collected could offer profound learning experiences for the trainees and provide performance overviews over an extended period.24

Many studies have used patient surveys over a short period of time, but few have established a programme to measure patient feedback over an extended time scale.6 18 23 25 30 Because of this long-term span, the ARC Program allows the opportunity to track residents' progress throughout their training, which is a goal of survey-based studies analysing the quality of care.30 At the end of every academic year, the data are compiled into an aggregate report called the Resident Individual Progress (RIP) report. The RIP report presents all of the comments and data for an individual resident per year, using a numerical and graphical breakdown for each question. As residents progress through their residency, each year's data are added to the RIP to illustrate the resident's progress over the entire training period.
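As a rough illustration only, the sketch below aggregates survey-level results into a per-resident, per-question yearly summary in the spirit of the RIP report; the column names and values are hypothetical, and the actual report also includes verbatim comments and graphical breakdowns.

```python
import pandas as pd

# Hypothetical survey-level data: one row per answered question per survey.
surveys = pd.DataFrame({
    "resident_id":   ["R1", "R1", "R1", "R1", "R2", "R2"],
    "academic_year": ["2008-09", "2008-09", "2009-10", "2009-10", "2009-10", "2009-10"],
    "question":      [3, 9, 3, 9, 3, 9],
    "success":       [1, 1, 1, 0, 0, 1],
})

# Per-resident, per-year, per-question success rates (in per cent),
# laid out with one column per question as a simple RIP-style table.
rip = (surveys
       .groupby(["resident_id", "academic_year", "question"])["success"]
       .mean().mul(100)
       .unstack("question"))
print(rip)
```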

One of the primary constraints of a face-to-face survey model is the cost of maintaining an organisation of facilitators.28 To counter this, the ARC Program uses volunteers from the undergraduate population at UCLA, who provide labour at no cost to the institution. Besides conducting interviews, the volunteers are also in charge of training, advertising and preparing data. Furthermore, supervision of the volunteers and of the programme's development was integrated into the programme director's original job duties without any additional budget or full-time employees.

The benefits of this programme are not limited to the David Geffen School of Medicine's Graduate Medical Education division, but also extend to the volunteer facilitators. Many of the facilitators are premedical UCLA undergraduates using the ARC Program for unique healthcare experiences to help make informed career decisions and form realistic perceptions of the medical profession.31 In addition, they have the opportunity to hear from practicing physicians and understand the day-to-day realities of a trainee.32

As with other studies, there are distinct limitations to this study. First, because patients did not have to identify their attending using a pictorial roster, potential misidentifications may have led to inaccurate attending recognition counts. Additionally, the exclusion of certain patient populations, such as non-English-speaking individuals, may have introduced selection bias. Similarly, the considerable skew in the number of surveys obtained for surgical residents may have biased the final results in favour of these residents. Furthermore, as the programme's main emphasis was on collecting data, there was no examination of whether there was a positive or negative effect on the bedside manner of individual physicians.33 Likewise, there was no recording of patient demographics or of the number of people who declined to participate. Because patient demographics were not documented and the survey was non-validated, we were unable to make any substantive correlations with Press-Ganey or other patient satisfaction surveys.

The ARC Program is an innovative initiative that combines patient surveys and facilitators to provide real-time feedback to residents with the objective of enhancing performance. This programme can provide vital data not only for ACGME accreditation but also for quality improvement initiatives by clinical departments and administration. Ultimately, this programme has the potential to serve as a benchmark for achieving the ACGME's 360° evaluation and also allows both medical and administrative professionals to determine the effects of their quality improvement initiatives.

Acknowledgments

We would like to acknowledge the Ronald Reagan UCLA Medical Center Department of Patient Affairs and the UCLA Health System for conceiving and implementing the ARC Program in conjunction with the UCLA David Geffen School of Medicine. Furthermore, we would like to thank the student volunteers and interns of the ARC Program for their tireless commitment in obtaining valuable patient feedback, which has greatly improved the quality of communication at UCLA. Additionally, we would also like to thank the programme directors of the UCLA David Geffen School of Medicine's residency training programmes. Finally, we would like to acknowledge the participation of Andrea Vo, Kristin Toy, Frank Chen, Amanda Varanasi, Andrew Chiu, Nousha Hefzi, Miranda Liou, Justin Zaghi and Joyce Chen for assisting with the initial drafting and thank Lorna Kwan for her assistance with the statistical analysis.

Footnotes

  • TW and BH contributed equally to this manuscript.

  • Competing interests None.

  • Ethics approval Ethics approval was provided by the UCLA IRB.

  • Provenance and peer review Not commissioned; externally peer reviewed.

References