Background OpenNotes, a national movement inviting patients to read their clinicians' notes online, may enhance safety by enabling patients to report documentation errors.
Objective To test an OpenNotes patient reporting tool focused on safety concerns.
Methods We invited 6225 patients through a patient portal to provide note feedback in a quality improvement pilot between August 2014 and 2015. A link at the end of the note led to a 9-question survey. Patient Relations personnel vetted responses, shared safety concerns with providers and documented whether changes were made.
Methods In a quality improvement pilot between August 2014 and August 2015, we invited 6225 patients through a patient portal to provide feedback on their notes. A link at the end of each note led to a 9-question survey. Patient Relations personnel vetted responses, shared safety concerns with providers and documented whether changes were made.
Results 2736/6225 (44%) of patients read notes; among these, 1 in 12 patients used the tool, submitting 260 reports. Nearly all (96%) respondents reported understanding the note. Patients and care partners documented potential safety concerns in 23% of reports; 2% did not understand the care plan and 21% reported possible mistakes, including medications, existing health problems, something important missing from the note or current symptoms. Among these, 64% were definite or possible safety concerns on clinician review, and 57% of cases confirmed with patients resulted in a change to the record or care. The feedback tool exceeded the reporting rate of our ambulatory online clinician adverse event reporting system several-fold. After a year, 99% of patients and care partners found the tool valuable, 97% wanted it to continue, 98% reported unchanged or improved relationships with their clinician, and none of the providers in the small pilot reported worsening workflow or relationships with patients.
Conclusions Patients and care partners reported potential safety concerns in about one-quarter of reports, often resulting in a change to the record or care. Early data from an OpenNotes patient reporting tool may help engage patients as safety partners without apparent negative consequences for clinician workflow or patient-clinician relationships.
- Patient safety
- Quality improvement
- Patient-centred care
Patients are increasingly accessing their health data, as movement towards fully transparent records gains momentum.1,2 Patient engagement has been linked to improved outcomes, lowered costs and better experiences of care,3,4 but little is known about how patients engage with their health data, and whether transparency alone drives patient activation. In addition, while growing evidence suggests that patients can identify mistakes in their care,5–9 few ways exist for patients to comment routinely on their medical records, limiting the opportunity to learn more about the nature of documentation errors and improve quality systematically.
The electronic health record (EHR) is populated with notes written by busy providers without systematic accuracy checks.10 Studies have found that 26% of primary care physicians believe patients would find non-trivial errors in their notes,11 that up to 60% of EHRs contain at least one error,12 that 43% of medications in the EHR may be inaccurate13 and that the mismatch rate between the medications patients take and those listed in the medical record may be as high as 95%.14
OpenNotes, an innovation that invites patients to review their clinicians' visit notes online,15 may improve safety through new opportunities for patients to report possible documentation errors. During the initial OpenNotes study and the broad scale implementation that followed at our organisation, we received several anecdotes involving patient safety.16 Some patients found medication errors in their notes; others read notes and were reminded of overlooked tests or appointments, such as a skin biopsy or follow-up for a pulmonary nodule. A total of 7% of patients reported contacting their doctor's office because of their notes; 22% of these did so because of a perceived error.11 Today, over 11 million US patients have online access to their notes.17 Enlisting note feedback from patients and care partners—family or friends who care for vulnerable or chronically ill patients—may help reduce medication errors, readmissions or other costly problems stemming from miscommunication.
Experts cite the powerful effects of patient-reported errors, and call for routine implementation of patient reporting systems.9,18 Patients and families are the consistent thread in the space between many different providers, uniquely positioned to observe mistakes in transitions of care or information transfer, unanticipated symptom changes in the diagnostic process or documentation errors. Yet while there is general support for engaging patients and families in safety,19–26 the literature remains equivocal on best practices.27–31 Critics worry that involving patients in safety may place undue burden on vulnerable and ill patients, introduce new worries or concerns about medical care, negatively impact the clinician-patient relationship6,32–34 or distract from safety efforts by focusing on service issues.
One year after broad implementation of OpenNotes at our organisation, we introduced a quality improvement (QI) pilot programme that solicited patient and care partner feedback on visit notes through an online OpenNotes patient feedback tool, with particular emphasis on perceived errors. We hypothesised that patients and their care partners would find mistakes in notes and would report them. In this paper, we share the results of this programme, focusing on: (1) patient-perceived understanding of notes and their accuracy; (2) characterisation of patient-reported inaccuracies; (3) the volume of feedback reports in comparison to existing reporting mechanisms and (4) effect on clinician workflow and the patient-clinician relationship.
Beth Israel Deaconess Medical Center (BIDMC) in Boston implemented OpenNotes broadly in 2013.35,36 Shortly thereafter, we convened a team of stakeholders to develop and pilot an online patient OpenNotes Feedback Tool (‘Tool’) that could collect and act on feedback from patients about their notes. The stakeholders included physicians and a nurse manager from our hospital-based primary care practice, OpenNotes researchers and representatives from multiple departments: Health Care Quality/Patient Safety, Patient Relations, Information Systems, Health Information Management and the Patient and Family Advisory Council (PFAC). This team met every 2 weeks for 9 months to: (1) establish consensus about what kind of feedback would inform organisational improvement; (2) develop a Tool to collect that feedback; (3) design a QI workflow to respond in a timely way to patient feedback and (4) develop patient communications materials.
The stakeholders decided to solicit feedback on potential documentation mistakes and patients' understanding of their care plans, and to offer an opportunity for other general feedback. We developed nine questions designed to solicit this information (figure 1) and a Frequently Asked Questions (FAQs) document for patients (see online supplementary appendix 1). Specific words and terms used in the Tool were selected after review of the literature.37,38 The Tool and the FAQs were reviewed by ‘plain language’ specialists, the hospital's legal team and its physician insurer. Questions were also evaluated for face validity by several physicians and nurses; in addition, three PFAC members gave detailed item-by-item feedback. The Information Systems department programmed the Tool and linked it to eligible notes.
We developed a review process harmonised with our organisation's existing QI workflow and policies. Our goal was to facilitate patients' reporting of possible errors while offloading work from clinicians. Both to shield clinicians from reports that were not clinically relevant, and to give patients a protected space to voice concerns, we first vetted responses through the Patient Relations Department, who contacted clinicians only upon identifying a potential safety concern. The Tool's introductory language explained to patients that responses would be confidential and would not go to the clinicians unless they revealed a possible safety concern.
All reports were automatically routed to Patient Relations and reviewed within 72 hours, and each potential safety concern triggered a conversation with the patient, care partner and/or clinician, as indicated. We identified a clinician with safety training as a resource for Patient Relations staff regarding any clinical questions. As a QI intervention, we aimed to cast a broad net for possible patient harms. We therefore defined a ‘potential safety concern’ as a ‘No’ response to the question, “Did you understand what your provider wants you to do next (the care plan)?” or a ‘Yes’ response to the question, “Did you notice anything you thought may be an inaccuracy in your note (not counting misspellings and typographical errors)?”, further explained by patients in a free text box (figure 1). Because we anticipated that not all cases of misunderstood care plans or documentation inaccuracies constituted a safety concern, a clinician team then reviewed each response to determine whether the patient report represented an actual safety concern. Two team members (the project principal investigator and the Patient Relations representative) met weekly to review cases and address any immediate safety concerns. All potential safety concerns were also reviewed monthly by the Senior Director of Patient Safety, the Patient Relations representative and the principal investigator. This group categorised reports as actual safety concerns, possible safety concerns (not enough information) or not safety concerns, based on discussion until consensus was reached. We categorised the resolution of each concern according to a coding scheme; the step-by-step algorithm and decision points are shown in figure 2. If a documentation disagreement could not be resolved through these steps, the patient was referred to our formal amendment process.
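The screening rule described above amounts to a simple predicate over two survey answers. The sketch below is illustrative only; the field names and data structure are hypothetical, not the actual Tool's schema:

```python
# Illustrative sketch of the report-screening rule described in the text.
# Field names are hypothetical, not the actual Tool's data model.

def is_potential_safety_concern(report: dict) -> bool:
    """Flag a report for clinician review when the patient answered 'No'
    to the care-plan question or 'Yes' to the inaccuracy question."""
    did_not_understand_plan = report.get("understood_care_plan") == "No"
    noticed_inaccuracy = report.get("noticed_inaccuracy") == "Yes"
    return did_not_understand_plan or noticed_inaccuracy

report = {
    "understood_care_plan": "Yes",
    "noticed_inaccuracy": "Yes",
    "free_text": "My medication dose is listed incorrectly.",
}
print(is_potential_safety_concern(report))  # True: routed for clinician review
```

Reports failing this screen would populate the aggregate descriptive database; those passing it would enter the Patient Relations review workflow shown in figure 2.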
We compared the number of patient reports with the number of ambulatory events reported on our hospital's online clinician adverse event reporting system in the same practice during the same year. Reports that did not contain a potential safety concern populated an aggregate database from which we summarise descriptive statistics.
Two of 10 teams in our primary care practice agreed to pilot the Tool as part of their QI practice. We presented the project at a faculty meeting and invited input. Team leaders encouraged participation. The two teams included 41 clinicians: attending physicians, resident physicians, nurse practitioners and nurses. Each clinician had the option to not participate.
Patients with a visit note by a participating clinician that was posted to the portal during the pilot period (6 August 2014–5 August 2015) were invited to use the Tool. As part of the OpenNotes system, clinicians can choose to hide notes from patients, but <0.25% of notes are ‘hidden’ (personal communication, Lawrence Markson, MD, Vice President, Clinical Information Systems, BIDMC). All patients registered on our portal receive an automatically generated email notifying them when a note becomes available. We modified this email by adding language in the body and the header inviting patient feedback, a link to the Tool FAQ document and an email address for any technical problems. Patients accessed the Tool by clicking on a ‘MyFeedback’ link at the end of the note, which was accompanied by a brief description of the project. Though our patient portal does not offer separate access by family members or other proxies, we envisioned that some care partners would also use the Tool.
User experience survey
Drawing on prior published OpenNotes patient-reported outcome measures, we developed two parallel online surveys to assess the experiences of patients/care partners and clinicians after 1 year.36,39 The 15–21 item surveys (available on request) assessed patient engagement, clinician workflow and user experiences with the Tool. Although we expected few responses from patients who did not use the Tool, we also included questions for this population about why they chose not to use it.
The project was reviewed by our Institutional Review Board and determined to be a QI initiative.
All 41 clinicians on the two teams began the intervention; 29 were still practising at our hospital at the end of the intervention, and 12 (41%) submitted user experience surveys (figure 3). Twelve clinicians graduated from their programmes after 11 months of the intervention or left the practice before the survey was conducted.
During the year, 6225 patients had at least one visit note posted by participating clinicians to their portal accounts and were invited to submit feedback. Among these patients, 413 lost access to the portal during the year; 5812 completed the intervention (figure 3).
Characteristics and outcomes of reports received
A total of 2736 patients (44%) opened at least one note during the pilot period. We received 260 reports from 217 users; 211 were patients and 6 were family members or other informal care partners. Patients who submitted feedback were older, had more visit notes, were more likely to be white and were more likely to have public rather than private insurance than those who did not (table 1). Nearly all patients/care partners (≥96% of reports) reported understanding the note and the plan of care, and 93% thought the note accurately reflected the visit. They documented potential safety concerns in 59 reports (23%), most commonly citing possible mistakes (21%) regarding medications, existing health problems, something important missing from the note, current symptoms or ‘other’ (table 2). For patients who did not understand the care plan (2%), we called to clarify next steps. These cases included confusion about medication tapers, concerns about medications during travel or lack of a care plan documented in the note. Patients/care partners also volunteered positive experiences with clinicians or OpenNotes in 77% of all reports (and 72% of reports with potential inaccuracies). Overall, 99% reported that reading and providing feedback on notes was valuable.
Among 59 reports with potential safety concerns, clinician reviewers determined no action was needed in 8 (14%), and Patient Relations attempted to call the remaining 51 (86%) patients. Among these 51 cases, patients confirmed their concerns and clinicians agreed to change the record and/or make a change in care in 29 (57%); patients confirmed concern and clinicians declined to make changes in 4 (8%) (eg, updating medications or problem lists entered by a different clinician); concerns were resolved in conversation in 3 (6%); patients decided not to pursue further action (no pressing safety issue) in 2 (4%); the case was referred to Patient Relations for issues beyond the Tool capacity in 2 (4%) (among these, the patient wanted to file a formal amendment request (which was never filed) in 1 (2%)) and finally, patients did not respond to phone calls in 11 (22%). Of the 59 reports, the QI team classified 26 (44%) as definite safety concerns, 12 (20%) as possible safety concerns and concluded that 21 (36%) were not safety concerns, such as the name or spelling of other clinicians involved in care, or age of children.
The two primary care clinical teams conducted a total of 10 039 visits among patients with portal access during the 1-year pilot, for an overall reporting rate of 2.6% of visits (260/10 039) and a safety concern reporting rate of 0.59% of visits (59/10 039). In comparison, during the same year, all 10 teams in the clinic received 123 reports from providers via the online ambulatory adverse event reporting system, a reporting rate of 0.13% of all hospital-based primary care visits (123/96 934; October 2014–September 2015, data courtesy of Pat Folcarelli, Director of Patient Safety).
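The rate comparison above can be reproduced directly from the counts reported in the text; the "several-fold" difference works out to roughly 4.6-fold:

```python
# Reporting rates recomputed from the counts given in the text.
patient_reports, patient_visits = 260, 10_039   # pilot teams, 1-year period
safety_reports = 59                              # reports with potential safety concerns
clinician_reports, all_visits = 123, 96_934      # clinician system, all 10 teams

patient_rate = patient_reports / patient_visits    # ~2.6% of visits
safety_rate = safety_reports / patient_visits      # ~0.59% of visits
clinician_rate = clinician_reports / all_visits    # ~0.13% of visits

print(f"{patient_rate:.1%}, {safety_rate:.2%}, {clinician_rate:.2%}")
print(f"Safety-concern reporting exceeded clinician reporting "
      f"{safety_rate / clinician_rate:.1f}-fold")
```

Note the denominators differ (two pilot teams versus all ten teams), so the comparison is of rates per visit, not raw report counts.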
Postintervention user experience survey
A total of 119 of 217 Tool users (55%) responded to the experience survey. Nearly all (>96%) participants reported the Tool was a good idea, that it was easy to use and that it should continue. In addition, 83% reported they were very or somewhat comfortable using the Tool. Asked what would make them more comfortable, patients commented on ensuring privacy, knowing who reads reports or technical issues such as single-click access to notes or shorter scroll-down pages. A quarter (25%) of patients reported their relationship with their clinician improved as a result of the feedback tool; the majority (73%) felt that it did not change. About half (47%) who reported a possible inaccuracy felt their concern was somewhat or very serious. Of these, 87% reported that their concern was taken somewhat or very seriously and 73% were somewhat or very satisfied with the resolution.
A total of 504/5595 (9%) non-users submitted surveys. The most common responses to “Why didn't you use the feedback tool?” were “Did not know about it” (54%) and “Everything was fine, so I didn't need to provide feedback” (26%).
Twelve of 29 clinicians (41%) who completed the intervention submitted user experience surveys. Overall, 83% (10/12) thought the Tool was a good idea, that it should continue and that it could be a useful educational tool for learners. Two-thirds of clinicians (8/12) reported being contacted about a potential safety concern and rated the meaningfulness of issues that reached them at a mean of 5.4 (scale 0–9). None reported worsening workflow (11 clinicians reported no change and 1 thought it improved). Similarly, 83% (10/12) of clinicians thought their relationships with patients remained the same; 2 thought they improved as a result of the Tool. In addition, 67% (8/12) of clinicians reported receiving positive feedback, with one commenting that it was an ‘antiburnout experience’.
Our pilot study of the first online patient feedback reporting tool linked to OpenNotes suggests that such a tool can enable patients to identify documentation errors and QI opportunities without overburdening providers. In a 12-month period, 8% of primary care patients and their care partners who read notes used the Tool, and 23% of their reports identified potential safety concerns, predominantly comprising documentation errors and a small number of misunderstood care plans. The majority of cases in which patients confirmed a safety concern led to a change in the record or practice. Overall, virtually all respondents found the reporting tool valuable, and in a postintervention survey, the vast majority of both patients and provider respondents wanted the reporting tool to continue.
These findings add to growing evidence that patients can identify mistakes.5,6,26 In a study of parents asked about safety incidents during their children's hospitalisations, over 80% of reports were found on physician review to contain medical errors or quality issues.7 At Geisinger Health System, inviting patient feedback on electronic medication lists improved record accuracy in nearly two-thirds of cases,40 and other sites have reported similar results.41
Patient feedback on notes can help create a ‘learning EHR’ that closes the loop on individual documentation errors. It can put a safety net under patients who do not understand the plan of care. It may also surface other topics useful to clinicians and managers at the organisational level. For example, in the course of conversation with reporting patients and their clinicians, we heard uncertainty about who ‘owns’ the medication or problem list. Patients also helped us to identify faulty vaccine notification programming, and physical exam template ‘copy/paste’ activity, even when the complete exam was not performed.
Will inviting patients to identify mistakes increase liability? Claims emerging from EHR mistakes are rare, although they may be increasing.42 Outpatient medication safety may be a particular beneficiary of patient review; medication errors are the most frequent basis of ambulatory EHR-related claims,42 and experts believe medication errors are mostly unrecognised.10 In this study, they were the most common source of potential safety concerns reported by patients, perhaps preventing some harms. In general, reading notes may increase trust between patients and doctors,11 and transparent communication about mistakes nurtures better relationships, a key factor in lawsuits.43–45 However, more data are needed.
A universal limiting factor to patient and clinician reporting tools is getting people to use them.9,46,47 Patients were judicious with Tool use. However, we were struck that 1 in 12 patients who read a note provided feedback, and that reporting of potential safety concerns exceeded several-fold those of our hospital's established online clinician ambulatory reporting system. Among patients who did not use the Tool, more than half did not know about it, suggesting that marketing may increase reporting. Although we expected that younger, highly educated patients would be more likely to use the Tool, we were surprised that older and non-Caucasian patients with a range of self-reported educational levels and their care partners also submitted reports. Still, there may be some patients who simply do not wish to participate or need more support to do so.48
Equally important as what happened after inviting patients to provide feedback on their notes is what did not happen. Although providers worried that identification of errors in notes might adversely affect the patient-provider relationship, not a single provider in our small pilot reported such an event. While providers may forecast being overwhelmed by patient complaints that are not clinically relevant, about two-thirds of reports submitted through the feedback tool represented possible or actual safety concerns on clinician review, and our vetting system did not appear to harm clinician workflow, but larger studies are needed. Finally, although providers in our study initially worried that an invitation to find mistakes in notes would foster negative feedback and culture, <1% of all patients misused the Tool or made a formal amendment request. Notably, Patient Relations already knew these patients from prior complaints, and the Tool allowed for early outreach and communication. By contrast, the majority of patients submitted positive feedback, supporting positive culture change.
As a QI intervention, the patient reporting tool has been ongoing for over 2 years. Up front resources for implementation include programming the Tool by Information Systems, dissemination of educational materials to patients and clinicians and algorithm training and support for Patient Relations Staff. However, if the QI algorithm is harmonised with existing workflows, the overall benefits can outweigh operational costs, as the Tool becomes part of transparent culture. This may be in part because a small proportion of patients use the Tool, consistent with our findings from the OpenNotes study, where relatively few patients contacted the doctor's office about a note concern.11 It may also be because the benefits extend beyond potential safety catches.
Rigorous economic analyses of value gained from corrected medication errors, updated family histories and more timely or accurate diagnostic and therapeutic assessments are needed. However, the relational benefits of OpenNotes and the reporting tool, such as enhanced trust and engagement, are harder to measure. Taken as a whole, these likely represent a net benefit to healthcare organisations and to patients. In the future, the Tool might be modified by inviting patients to annotate their own records and display these annotations for clinicians, a feature advocated by PFACs and one that would further reduce Patient Relations staffing requirements. However, such a modification would need careful consideration regarding potential impact on clinician workflow.
Looking to the future, the reporting tool may help address other safety and education priorities.16 ,49 An estimated 12 million adults are affected by diagnostic errors in ambulatory settings each year,50 and few (if any) systems bring patients into the diagnostic process in a readily scalable way.25 Coupled with EHR trigger mechanisms that can identify ‘high-risk’ patients, targeted use of the Tool for those vulnerable populations and their care partners, where misdiagnosis rates have been shown to be as high as 20%, may prove particularly helpful.51 In addition, a comparison of patient versus clinician severity rating of patient-reported safety concerns may help better define the relationship between documentation errors and patient harm. Such knowledge could also then inform educational initiatives to bridge the gap between patient and clinician perspectives. Finally, educators will quickly recognise this as a tool for patient safety, and for communication skills, professionalism and patient-centred medical education.52 ,53 An incentive structure applied to the Tool may motivate both patients and providers to improve safety outcomes, particularly in various fields susceptible to communication breakdowns.54–56
There are still large gaps to be addressed: non-English-speaking patients and those with low literacy levels, cultural barriers to patients providing feedback and the large denominator of patients (and providers) who do not use reporting tools. A better understanding of barriers (and facilitators) to reporting, and of interventions to support more robust safety behaviours, is needed. The 20% non-response rate among patients we contacted may simply reflect a sense that the issue was not important, but it may also represent other barriers.
Evidence from OpenNotes to date suggests that record transparency increases patient engagement, at least for some patients.36 But ‘engagement’ is a broad term that is likely influenced by health literacy, cultural differences, demographic factors, cognitive issues and provider-related factors, and further studies may shed light on the relationship between transparency and different forms of engagement, including interest in the reporting tool.57 Similarly, while a relationship between identification of documentation errors (such as wrong medication lists or doses, inaccurate family histories, missing information pertaining to the diagnostic process and inaccurate description of symptoms) and potential patient harm is plausible, more research is needed.58–60
Frontline clinicians struggle with EHRs.10,61–64 If we aspire to an EHR that will tell patients' stories in a human way, patients themselves may help. With feedback from patients, perhaps clinicians will make even small changes in their notes so that patients can see themselves in their records. And the unexpected power of positive feedback from patients for providers, some of whom called this ‘the best thing’ about the project, may enhance joy and meaning in healthcare. The potential effects on burnout reduction through such appreciative inquiry merit further study.65–68
Limitations of this pilot initiative include a non-randomised design at a single institution, with voluntary participation, and a likely response bias from activated patients. The note reading rate in the small pilot group was a little lower but overall comparable to the read rate at our organisation.69 In prior OpenNotes surveys, reasons for not reading notes most commonly included not knowing about OpenNotes or having difficulty with accessing notes, both potentially modifiable factors, and also reasons that may bias results. Patient use of the Tool was comparable to or higher than other reporting systems. The algorithm we developed is institution-specific to harmonise with QI workflow, limiting generalisability and requiring adaptations to local culture and practices elsewhere. Our patient population was predominantly Caucasian and English-speaking, limiting in-depth assessment of reporting tool use among other patient populations.
Patients and care partners can identify confusion about care plans and documentation mistakes in their notes, and did so in about one-quarter of reports, resulting in a change in the record or practice in the majority of closed loop cases. Two-thirds of such reports represented actual or possible safety concerns on clinician review, and our early findings in a small pilot intervention suggest that the reporting tool does not appear to harm workflow. An online patient reporting tool linked to OpenNotes can help engage patients as safety partners, support providers with positive feedback and inform organisations of opportunities to improve care.
The authors thank Melissa Anselmo, Mary Barry, Hannah Chimowitz, Rossana Fazzina, Beth French, Amy B. Goldman, Howard Hillman (in memoriam), Heidi Jay, Margaret Jeddry, Jing Ji, Susan Johnson, Gila Kriegel, Suzanne Leveille, Julia Lindenberg, Lawrence Markson, Roanne Mejilla, Karla Pollick, Elana Premack Sandler, George Silva, Qiang Wang, Gail Wood, Guoping Xu and the BIDMC PFAC for their valuable contributions to the project, and CRICO/Risk Management Foundation of the Harvard Medical Institutions for their generous support of the work.
Portions of this manuscript were previously presented at the Society of General Internal Medicine national meeting 2015, Toronto, Ontario; and the National Patient Safety Foundation Congress 2016, Scottsdale, Arizona.
Funding CRICO/Risk Management Foundation of the Harvard Medical Institutions.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.