Identifying patient safety problems during team rounds: an ethnographic study
Reema Lamba,1 Kelly Linn,2,3 Kathlyn E Fletcher2,4

  1. Department of Internal Medicine, Baylor University Medical Center, Dallas, Texas, USA
  2. Clement J. Zablocki VAMC, Milwaukee, Wisconsin, USA
  3. Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
  4. Department of Internal Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA

Correspondence to Kathlyn E Fletcher, 5000 W. National Ave., Milwaukee, WI 53295, USA; kfletche{at}


Triggered by the 1999 Institute of Medicine report, To Err is Human,1 patient safety has become a prominent part of the culture of medicine. Hospitals and residency programmes have increased patient safety awareness and education.2–4 In fact, the Accreditation Council for Graduate Medical Education has made explicit its high expectations around quality and safety in the curriculum and the practice environment.5

Many methods exist to identify patient safety issues for quality improvement purposes.6 One study examined incident reporting, malpractice claims, executive walk rounds, patient complaints and risk management reports to identify areas of overlap7 and found that each had its own strengths and weaknesses. Medical ward rounds may offer an additional opportunity to collect safety data beyond incident reporting. In fact, direct observation of rounds has been used to identify adverse events.8 Therefore, we set out to assess how often patient safety issues occur during daily rounds on medicine wards, what types of issues they are, and how often they are addressed.



Setting

An academic Veterans Affairs medical centre.

Study design and subjects

This was an observational study conducted between April and June 2012. Inpatient medicine and cardiology team members participated. These teams were chosen because both had inpatient services with similar call schedules. Teams included interns, residents, attending physicians, medical students, pharmacists and pharmacy students. One observer (ARL) conducted all of the observations. In order to optimise reliability, four of the initial observations were done by two observers, and their findings were compared. This allowed subsequent data collection by the single observer to be more open to the insights provided by the secondary reviewers. The primary (ARL) and one secondary observer (KL) had special training through the VA's Chief Resident in Quality and Safety programme.9 This training included a 5-day ‘boot camp’ with sessions that taught advanced quality improvement and patient safety concepts. The other secondary observer (KEF) is the faculty mentor for the Chief Resident in Quality and Safety (CRQS) programme.

Using a structured data collection form (see online supplementary file), the observer recorded the patient safety issues that were brought up or occurred during rounds. The individual team member who brought up the issue was not recorded. The events were categorised (table 1) based on categories developed from the London protocol.10 Each patient safety issue could be assigned to more than one category; there was no limit to the number of categories an event could receive or to the number of events assigned to each category.

Table 1

Frequency of patient safety issue by category

After recording the patient safety issue, the observer recorded whether there was a consequence that occurred as a result. If so, the observer determined if the issue was actionable and whether some type of action was discussed. ‘Actionable’ meant that something could be done at that moment to address the specific patient safety issue. For example, if a radiological exam had not been completed as ordered, an ‘action’ may be calling the department to inquire about the test and ensure that it gets completed as soon as possible. Following rounds, a debriefing session was held with the team to discuss concerns that may have been noted by the observer as potential sources of harm but were not addressed during rounds. This discussion was not recorded or included in the data.

Recruitment of subjects

Inpatient team members were given information about the aims of the study and invited to participate. A waiver of written informed consent was granted. The study was approved by the Institutional Review Board at the Zablocki VA Medical Center.

Data analysis

Descriptive statistics were performed. Overall agreement was calculated as the number of events agreed upon in description divided by the total number of events (only one hand hygiene event per team was counted because these were so common). Agreement on categorisation was calculated as the number of times the raters agreed on at least one category for an event divided by the number of events agreed upon overall (ie, the numerator of the overall agreement calculation).
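The two agreement measures defined above can be sketched in code. The event records, field names and example data below are hypothetical illustrations, not the study's actual observations; the arithmetic mirrors the definitions in the text.

```python
# Each record represents one observed event: whether the two observers
# agreed that the event occurred (agreed_description), and the set of
# categories each rater assigned. Example data is purely illustrative.
events = [
    {"agreed_description": True,  "cats_a": {"medication"},    "cats_b": {"medication", "communication"}},
    {"agreed_description": True,  "cats_a": {"environment"},   "cats_b": {"treatment error"}},
    {"agreed_description": False, "cats_a": {"communication"}, "cats_b": set()},
]

def overall_agreement(events):
    """Events agreed upon in description / total number of events."""
    agreed = [e for e in events if e["agreed_description"]]
    return len(agreed) / len(events)

def categorisation_agreement(events):
    """Among events agreed upon in description, the fraction for which
    the raters shared at least one category (the second measure)."""
    agreed = [e for e in events if e["agreed_description"]]
    shared = [e for e in agreed if e["cats_a"] & e["cats_b"]]
    return len(shared) / len(agreed)

print(overall_agreement(events))         # 2 of 3 events agreed in description
print(categorisation_agreement(events))  # 1 of 2 agreed events shared a category
```

Note that the denominator of the second measure is deliberately restricted to events both observers noted, matching the paper's description.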


Results

We observed rounds with 11 different teams for a total of 1032 min, 52% of which were at the bedside. Agreement on the actual events was modest at 56%, with each observer noticing some events that the other did not. When both observers did note an event, the agreement on categorisation was 77%. On the observation days, the median number of patients per team was 7 (range 1–14). Rounds lasted for a median of 84 min (range 5–178), corresponding to a median of 10.4 min per patient. Eleven of the 13 teams (85%) rounded at the bedside on at least some patients.

A total of 88 patient safety issues were noted during the study. The mean number of patient safety issues per team was 8 (SD 3.9) with a range of 1–15. The mean number of total issues per patient was 1. Sixty-six per cent (58/88) of the issues were discussed. Eighty-five of the issues were thought to be actionable; for 50% of these (43/85), an action was discussed during rounds. Thirteen adverse events (15%) were identified on rounds. More events were noted on bedside rounds than during sit-down rounds (47 vs 34). For the teams that conducted bedside and sit-down rounds, more issues were discussed during bedside rounds: 2.8 vs 2.2 issues per hour, but this was not a statistically significant difference.

The most common category of patient safety issues was ‘medication’, accounting for 20% (22/107) of the category assignments (the 88 issues yielded 107 assignments because an issue could fall into more than one category). Other common categories included environment (18%), communication (11%) and treatment error (10%). Examples in the ‘other’ category, accounting for 16% of the observed issues, included poor hand hygiene, inappropriate Foley catheter use and patients refusing treatments. Of note, hand hygiene was counted once per team rather than as a repeated issue for each individual patient.


Discussion

We observed inpatient ward teams and recorded patient safety issues discussed or identified during these rounds. The methodology used in our study—direct observation—provides a unique way to look at patient safety. Like executive walk rounds,7 we found many issues that were environmental in nature. However, we also identified medication errors in a similar proportion to incident reporting.7 In addition, we probably identified some issues that would not make it into these other reporting systems, such as potassium supplementation inadvertently continued in a patient without resulting in harm. In our study, only 15% of the observed issues had a related adverse event. This suggests that relying on systems that capture only adverse events can miss a large number of events with safety implications that do not cause harm, consistent with other studies.11 In prior work, Andrews et al conducted a rigorous study of adverse events in surgical patients, using trained ethnographers to identify events that were discussed during rounds, meetings, shift changes and other conferences.8 Our study corroborates Andrews’ finding that rounds are a rich forum for identifying patient safety events. Another small study described the errors detected by faculty on their own teaching services.12 That study found fewer events than ours (47 errors among 528 patients), but, as in our study, many were near misses.

Direct observation of rounds could be a powerful tool for educating trainees about patient safety issues. In our study, we used chief residents who had been trained in patient safety to do the observations. Other programmes could consider having their chief residents occasionally round with teams to highlight how common patient safety issues are in their daily work. This direct observation, coupled with specific feedback to the teams about the identified issues, would make patient safety education more experiential than is often the case with the typical retrospective morbidity and mortality conferences.

Faculty who are leading rounds could also make an explicit effort to incorporate discussion of real-time patient safety issues into the usual discussion of diagnosis and treatment plans. In our study, only 50% of the ‘actionable’ events had an actual action plan discussed or action taken, suggesting that there are missed chances for incorporating patient safety into daily assessments and plans. Our data also suggest that bedside rounds may be somewhat more fertile ground for identifying and discussing such issues, adding to the reasons for moving rounds to the bedside.13,14 Faculty could also use this opportunity to review event reporting mechanisms, so that these issues can be evaluated more systematically.15,16

Our study has limitations. First, the observations were mostly done by one observer, although that observer had special training in patient safety and quality improvement. Four of the observations were done in pairs with modest agreement, suggesting that having two observers would pick up more events. Second, the identification of patient safety issues was based on what was discussed and observed during rounds; there were likely other issues that were not brought up. Third, the presence of an observer could have changed the discussion around patient safety issues. Fourth, this was a single-institution study and the sample size was small. Fifth, nurses were not present on rounds in this study and would likely have contributed incidents not raised by the other team members. Finally, rounds were not conducted in a uniform manner, and in two of the observations, the team did not go to the bedside at all.

This study suggests that rounds, a common occurrence in hospital settings, present a rich opportunity for identifying and teaching about patient safety issues in real time. As residency programmes increase their emphasis on patient safety and quality improvement, direct observation could be an important tool for real-time identification of issues and teaching.


Supplementary materials

  • Supplementary Data


  • Contributors ARL: concept, data collection, drafting of manuscript. KEF and KL: concept, data collection, critical revision of manuscript.

  • Funding Although this research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors, it was done with support from the Clement J. Zablocki VA Medical Center.

  • Competing interests ARL and KL were VA Chief Residents in Quality and Safety during this work, and KEF was a staff physician at the Clement J Zablocki VAMC.

  • Ethics approval VA IRB.

  • Provenance and peer review Not commissioned; externally peer reviewed.