
Can patients report patient safety incidents in a hospital setting? A systematic review
Jane K Ward,1 Gerry Armitage2

1 Yorkshire Quality and Safety Research Group, Bradford Institute for Health Research, Bradford, UK
2 School of Health, University of Bradford, Bradford, UK

Correspondence to Dr Jane Ward, Bradford Institute for Health Research, Temple Bank House, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK; jane.ward@bradfordhospitals.nhs.uk

Abstract

Introduction Patients are increasingly being thought of as central to patient safety. A small but growing body of work suggests that patients may have a role in reporting patient safety problems within a hospital setting. This review considers this disparate body of work, aiming to establish a collective view on hospital-based patient reporting.

Study objectives This review asks: (a) What can patients report? (b) In what settings can they report? (c) At what times have patients been asked to report? (d) How have patients been asked to report?

Method Five databases (MEDLINE, EMBASE, CINAHL, HMIC (King's Fund) and PsycINFO) were searched for published literature on patient reporting of patient safety ‘problems’ (a number of search terms were utilised) within a hospital setting. In addition, the reference lists of all included papers were checked for relevant literature.

Results Thirteen papers were included in this review. All included papers were quality assessed using a framework suitable for comparing both qualitative and quantitative designs, and reviewed in line with the study objectives.

Discussion Patients are clearly in a position to report on patient safety, but the included papers varied considerably in focus, design and analysis, and all lacked a theoretical underpinning. In all papers, reports were actively solicited from patients, with no evidence currently supporting spontaneous reporting. The impact of timing upon the accuracy of information has yet to be established, and many vulnerable patients are not currently being included in patient reporting studies, potentially introducing bias and underestimating the scale of patient reporting. The future of patient reporting may well be as part of an ‘error detection jigsaw’, used alongside other methods within a quality improvement toolkit.

  • Patient involvement
  • patient safety
  • adverse events
  • patient safety incidents
  • incident reporting
  • patient reporting
  • epidemiology and detection
  • medical error
  • measurement/epidemiology
  • near miss


Introduction

The patient ‘voice’ is emerging as a key part of the research, development and management of patient safety both internationally and within the UK. The main driver for this shift in focus was the political move towards ‘patient choice’ as part of creating a more dynamic and responsive health service.1 This change of policy was aimed at empowering patients to act as partners in their healthcare, and in terms of patient safety has been translated into practice within the UK through the establishment of national initiatives such as the Patient Safety Champions Network2 and, at a local level, the Patient Advice and Liaison Service.3 More recently, this has been brought sharply into focus by the UK coalition government's white paper, outlining the legal duty of those with health service commissioning responsibility; the principal aim is to facilitate active participation from patients and the public.4 The UK government's current health minister encapsulated his vision of the patient perspective in the words ‘No decision about me, without me’.5

In addition to establishing patient choice in shaping healthcare services, patients have also more recently been viewed as key stakeholders in the management of patient safety. The National Patient Safety Agency has recognised this by incorporating patient reports into its National Reporting and Learning System, alongside clinician reports. However, the current position within the National Health Service and healthcare services internationally is still very much dominated by clinician-led reporting of patient safety incidents, a position which has also been apparent in the most recent data published by the UK Patient Safety Observatory.6

It is intuitive that patients would be a useful source of information on patient safety. Patients are often the only common link between the different treatments and consultations that make up their healthcare experience, and as such are uniquely motivated and positioned to contribute to the quality and safety of their own care.7,8 However, despite this, it has often been commented that one of the main issues for the patient safety movement has been the lack of a patient perspective.9–13 Indeed, a central question for patient safety research must be: how relevant and effective is our research and management of patient safety, if one of the protagonists in the patient safety experience is effectively excluded?

In order to address such questions, and in line with policy development as already discussed, researchers have more recently started to focus efforts upon understanding more about how best to engage patients in patient safety initiatives. One emerging area is how best to involve patients in the reporting of patient safety incidents or issues. A recent systematic review of patient reporting across a variety of settings14 concluded that despite a relative paucity of studies in this area, patient reporting has been shown to be reliable following corroboration of reports.15–17 However, this timely review also revealed that what little evidence is available comes from a disparate body of work, and further research is required to identify the optimal method for capturing patient reports, cognisant of different clinical settings and the duration of stay.

In line with such a recommendation, we undertook this systematic review as part of a wider research project aimed at developing and evaluating a range of patient-led patient safety incident reporting tools. Crucially, this research will address the need for a system which can be used across a hospital, with its diversity of clinical settings, and allow patients to ‘hot report’, that is, to report patient safety incidents while receiving treatment in hospital, thus reducing the retrospective recall bias known to be an issue in incident reporting systems across high-risk domains.18,19 Given the developmental nature of this work, it was clear that a detailed examination of the current evidence was required, informed by a human factors perspective, in order to ensure that any reporting tool developed would both build on the existing knowledge base and contribute to effective clinical governance. This review builds on previous reviews14,20 by widening the search strategy (through an increased number of databases searched), by examining the quality of the included papers and, most importantly, by focusing only on studies exploring patient reporting of patient safety incidents experienced within a hospital setting. This focus has allowed us to consider in greater depth how patient reporting has been examined within the wider context of the systems that exist in hospitals around the measurement of patient safety, the management of this information and the clinical governance or quality improvement agenda.

In terms of specific objectives for this review, we aim to explore:

  • the types of patient safety incidents identified by patients, how these differ from those identified by other reporting methods and where patient reporting fits in relation to other methods of measuring patient safety;

  • the settings in which patients have been asked to report on patient safety incidents;

  • the timing of patient reports of patient safety incidents in relation to the experience of the patient safety ‘event’;

  • how patients have been asked to report upon patient safety incidents, and what has been done with this information.

Method

Search strategy

Five databases were searched for this review: MEDLINE, EMBASE, CINAHL, HMIC (King's Fund) and PsycINFO. These databases were selected to cover both the medical and psychological literatures. The search strategy was developed iteratively, with reference to the study aims and an assessment and ongoing revision of the keywords of target articles. In addition, in conjunction with the specialist librarian at Bradford Teaching Hospitals Foundation Trust, subject headings that mapped onto the keywords were identified to ensure that papers within the subject area but not using the keywords were also picked up by the search. For each of the databases, the subject headings were identified separately to ensure optimum coverage. The final list of search terms is detailed in online appendix 1. Final searches across all five databases were run on 9 August 2010.

Given that patient reporting is a relatively new phenomenon in both research and practice, that the related terminology lacks standardisation14 and that pilot searches identified some inaccuracy of indexing in electronic databases, we opted for a high-sensitivity, low-specificity search strategy. This also necessitated hand searching the literature.
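To illustrate the high-sensitivity, low-specificity approach described above, the minimal sketch below shows one way such a boolean query could be assembled: synonyms for each concept are OR-combined to maximise sensitivity, and only the broad concept groups are AND-combined. The concept groups and terms shown are hypothetical placeholders for illustration only; the actual search terms used in this review are listed in online appendix 1.

```python
# Illustrative sketch only: the concept groups and terms below are hypothetical
# placeholders, not the strategy used in this review (see online appendix 1).

# Each concept is represented by a list of synonyms/truncated keywords.
concepts = {
    "reporter": ["patient report*", "patient involvement", "consumer report*"],
    "safety_event": ["patient safety incident*", "adverse event*",
                     "medical error*", "near miss*"],
    "setting": ["hospital*", "inpatient*"],
}

def or_block(terms):
    """OR-combine synonyms within a concept (this is what drives up sensitivity)."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND-combine only across the broad concept groups, keeping the query sensitive
# rather than specific.
query = " AND ".join(or_block(terms) for terms in concepts.values())
print(query)
```

A query built this way deliberately retrieves many irrelevant records, which is why the screening and hand searching described below are still required.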

Study selection

The inclusion and exclusion criteria were decided upon with respect to the aims of the review, and defined in terms of the population, intervention/comparators, outcome measures and study design, as advised by the Centre for Reviews & Dissemination.21 Studies were included if they satisfied one or more criteria under each of the following headings:

Participants

  • adult patients in a hospital setting/recently hospitalised adults.

Interventions/comparators

  • intervention studies where patients are involved in reporting patient safety incidents, or

  • surveys of, or interviews documenting, patient-reported safety events or incidents, or

  • comparison with staff incident reporting or case note review.

Outcomes

  • reported error rates, or

  • adverse event/adverse drug event rates, or

  • incidence of complaints, or

  • patient and/or staff satisfaction.

Study design

  • experimental (RCT, cluster randomised), or

  • quasi-experimental (non-randomised, pre- and post), or

  • cross-sectional (during or posthospitalisation) surveys or interviews, or

  • cohort studies.

Studies were excluded if they were published in a language other than English; were unpublished; were set in a healthcare setting other than a hospital; were case studies, discussion, review or editorial articles; or related specifically to adverse drug reactions or pharmacovigilance.

Data extraction

Results from the searches were merged using reference management software and duplicates removed. The titles and abstracts of retrieved citations were reviewed by a researcher (JW) using the inclusion and exclusion criteria, following which the full text of 51 studies was retrieved for further assessment. Studies were excluded at this stage primarily because they did not relate to the subject area or were non-empirical pieces such as letters, editorials or position papers. The retrieved full-text articles were then scrutinised against the inclusion criteria, and their reference sections examined, which identified a further 17 articles. At this stage, concordance in the decision for inclusion or exclusion was achieved by two reviewers (JW and GA). Following this, 12 articles were selected for inclusion in this review. Typically, studies excluded at this stage discussed patient involvement in general patient safety initiatives (rather than incident reporting) or concerned patient reporting in a non-hospital setting (eg, primary care, outpatient/ambulatory care or community-based surveys). A further paper was included during the manuscript preparation process, resulting in a final total of 13 papers selected for inclusion in the review.

Quality assessment of selected articles

The heterogeneous nature of the study designs, aims and key findings precluded a full meta-analysis of the data from this review. Given the range of methods used within the identified studies, a validated quality assessment tool (an adapted form of EPPI narrative empirical analysis22) was utilised, which allows comparative analysis of quantitative, qualitative and mixed methods studies. This tool facilitates assessment of the robustness of study design and methods; reference to theory; sample size and representativeness; any validation of measures; the extent of user involvement; and evidence of critical discussion and limitations. Studies were rated against 16 criteria (14 applying to qualitative papers only, 14 to quantitative papers only and 16 to mixed methods papers), with scores ranging from 0 to 3 for each criterion (0: no evidence; 3: ‘complete’). Total scores were converted to percentages, allowing comparison across studies using different methods. Included studies were judged against the scoring criteria by a researcher (JW), with a random sample of three studies further scrutinised by another researcher (GA). Differences in scores were resolved by discussion.
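To make the percentage conversion concrete, the minimal sketch below shows how per-criterion ratings of 0 to 3 can be converted into a percentage of the maximum achievable score, so that papers rated against 14 criteria can be compared with those rated against 16. The criterion scores shown are hypothetical and are not taken from any included paper.

```python
# Minimal sketch of the percentage conversion described above.
# The example scores are hypothetical, not taken from the review.

def quality_percentage(scores, max_per_criterion=3):
    """Convert per-criterion ratings (0-3) into a percentage of the maximum
    achievable score, so papers rated on 14 or 16 criteria can be compared."""
    max_total = max_per_criterion * len(scores)
    return 100 * sum(scores) / max_total

# A hypothetical quantitative paper rated on 14 criteria:
quantitative_paper = [2, 1, 3, 2, 0, 2, 1, 2, 3, 1, 2, 2, 1, 2]
# A hypothetical mixed methods paper rated on 16 criteria:
mixed_methods_paper = [2, 2, 1, 3, 2, 1, 2, 2, 0, 1, 2, 3, 2, 1, 2, 2]

print(f"{quality_percentage(quantitative_paper):.0f}%")   # 57%
print(f"{quality_percentage(mixed_methods_paper):.0f}%")  # 58%
```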

Results

A total of 3304 citations were retrieved, with a further 18 papers identified from hand searching (one paper was identified during the review process). Following the screening process (see figure 1), 13 papers were included in this review. Tables 1 and 2 represent a synthesis of the data extracted from the identified papers. Most papers reported studies carried out in North America (69%) and employed a cross-sectional design (69%). The mean quality rating for identified papers was 53% (range 38%–64%; please contact the lead author for full scores). A significant criticism of this body of work centres on the lack of a theoretical framework in any of the included papers. Similarly, little attempt was made to identify appropriate sample sizes or to assess data collection tools for reliability or validity. However, most papers reported studies which appeared to apply appropriate experimental designs and analytical techniques to meet the research aims, with most samples reasonably or very representative of the target population.

Figure 1

Flow chart of study selection process. *One paper was included during the manuscript preparation process.

Table 1

Summary of included papers: study context and design

Table 2

Summary of included papers: frequency of patient reporting and classification

Terminology

All papers concluded that patients were able to report on patient safety incidents in a hospital setting. However, the terminology used to describe such incidents varied considerably across papers (see table 3). Four papers were concerned only with issues related to medication or treatment.25,28,30,32 Where a broader perspective was taken, papers were split between those concerned only with adverse events as categorised by physician review,23,26,31,33 and those that widened this categorisation to also include near misses/close calls and medical error with minimal or no risk of harm.17,24,29 Two of these latter papers took a more analytic approach, differentiating between patient safety incidents (adverse events, near misses/close calls and medical error with minimal risk of harm) and service quality incidents or process of care problems, respectively.17,29 The final category comprised papers in which patients were asked to what extent they had experienced any ‘undesirable events’ from a prespecified list.15,27

Table 3

Terminology used for patient-reported PSIs

Nature of patient reports

Types of safety issues identified in patient reports

The type of safety issues identified by patients is influenced by the type of questions asked by researchers. Ten of the included papers restricted their questions to a predefined category or set of categories.15,23,25–28,30–33 Clearly, where patients were asked to report on predefined categories of patient safety incidents, this limited their responses. Only three papers reported asking open-ended questions in which the patient was not restricted to certain categories of patient safety incident (PSI).17,24,29 Table 4 summarises the types of PSIs reported by patients in these three papers. The taxonomy of patient reports is limited to these three papers, as including the others (where reports were restricted to certain categories) would risk inflating certain PSI types and therefore misrepresenting the available data. It is clear from the summary that patient reports span the full clinical spectrum: from diagnosis and testing through to problems with treatment, medication and care procedures. However, patients do seem to report more medication-related PSIs than any other category. In addition to the type of clinically focused PSIs that staff might be likely to report, patients also reported other issues, particularly service quality events. It should, however, be borne in mind that ‘service quality’ reports were over-represented in one study.29 This study sampled day-case oncology patients, who, it could be argued, may face a smaller spectrum of possible incidents and have different priorities regarding reporting, when compared with inpatients from a range of hospital specialties. This is likely to have artificially inflated the total percentage of ‘service quality’ reports, which may therefore not be indicative of reports from the wider hospital inpatient population.

Table 4

Nature of patient reports from studies asking open-ended questions

Parties involved in patient reports

Only two papers made reference to the parties involved in patient-reported PSIs.17,29 Nurses were identified more than any other professional group (26%), closely followed by physicians (22%), with other health professionals and visitors also identified within patient reports (15% and 0.5%, respectively). Interestingly, in a large percentage of reports, the party involved could not be identified or was ill-defined (54%). Multiple parties were often identified in patient reports.

Classification of patient reports

In total, 10 papers reported using some form of review and classification of patient-reported PSIs.17,23–27,29,31–33 One of these papers reported researcher confirmation of patient-reported PSIs only,27 and in one further paper the nature of the personnel undertaking the classification was unclear.32

Of the remaining eight papers, five used physicians only to undertake the classification of PSIs,17,23,24,26,33 two used physicians and nurses,29,31 and one reported classification by both physicians and pharmacists.25 Three of these papers did not report the number of patient-reported PSIs, but only those that had been categorised as PSIs after review. In the five papers that did report on both,17,24,25,29,33 there was wide variation in the degree to which patient reports were judged to constitute classified PSIs (17%–100%; mean 51%).

As part of the classification process, a judgement is usually made about two key risk indices: the degree of preventability and the severity of any given report. Of the eight papers that undertook classification of patient reports, seven reported enough information from which to draw definitive data about preventability and severity.17,23,24,26,29,31,33 Three of the included papers are based on the same dataset,23,26,33 and therefore, for the purposes of summarising these data, results from only one of these papers are reported.23 Table 5 details the available information from the five eligible papers on the preventability and severity of patient-reported adverse events, as classified by physicians (and other health professionals).17,23,24,29,31

Table 5

Preventability and severity of physician-classified patient-reported adverse events (AEs)

Although patients across the studies included in table 5 clearly report PSIs across the full range of physician-classified severity, patient-reported PSIs do appear to lie towards the insignificant to significant end of the severity spectrum, with fewer patients reporting serious or life-threatening PSIs. In terms of preventability, however, patients do seem to be in a position to report PSIs judged by physicians as preventable.

Concordance with other error detection methods

Only 5 of the 13 identified papers sought to examine the degree of concordance between patient reporting and other methods of error or incident detection.17,24,25,31,32 Medical record review was the method found to have the most concordance with patient reporting (50%,25 77%31 and 40%17), although one paper reported no concordance between these methods.24 Staff incident reporting was less likely to overlap with patient reports. Physician and nurse reports were found to have 8% concordance each with patient reports in a paper concerning medication misadventures,25 and 1% and 2%, respectively, in a paper examining adverse drug events.32 One further paper found general staff incident reporting to have no concordance with patient reports of adverse events and near misses.17 It is important to note, however, that for two of the above papers,31,32 only patient reports that had been classified (as adverse events and adverse drug events, respectively) were included in the final sample. This may have inflated the concordance above what would have been found if all patient reports (and not just those ‘confirmed’ as adverse events/adverse drug events) had been taken into consideration.
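As a minimal illustration of the denominator issue noted above, the sketch below computes concordance as the proportion of patient reports also detected by another method, first across all patient reports and then across only the ‘confirmed’ subset. The incident identifiers are hypothetical and do not come from any included study.

```python
# Minimal sketch of the concordance calculation discussed above.
# Incident identifiers are hypothetical; they do not come from any included study.

patient_reports = {"A", "B", "C", "D", "E", "F", "G", "H"}   # all patient reports
confirmed_reports = {"A", "B", "C", "D"}                     # subset classified as adverse events
record_review = {"A", "B", "C", "X", "Y"}                    # incidents found by case note review

def concordance(reports, other_method):
    """Proportion of patient reports also detected by the other method."""
    return 100 * len(reports & other_method) / len(reports)

print(f"All patient reports: {concordance(patient_reports, record_review):.0f}%")   # 38%
print(f"Confirmed reports:   {concordance(confirmed_reports, record_review):.0f}%")  # 75%

# Restricting the denominator to confirmed reports inflates apparent concordance,
# as noted for the two papers that included only classified reports.
```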

Healthcare setting

Although all papers included in this review concerned PSIs experienced within hospitals (as either inpatients or day patients), there was variety in the type of setting in which patients were asked to report PSIs. Three papers were based in general medical units,17,25,30 with five papers sampling patients from both general medical and surgical units.23,26,27,31,33 One further paper reported a sample based within medical and paediatric units.32 Two papers were based in oncology units,28,29 with one paper sampling patients within an emergency department.24 Only one paper reported a sample from across the full hospital population.15

Timing of reports

Papers varied in the ‘recall period’ of a patient-reported PSI, that is, the length of time between a patient experiencing a PSI and reporting it. Two of the 13 papers reported surveying patients at discharge or postdischarge,25,30 and for both of these papers the recall period was unclear. Five papers reported surveying patients postdischarge only; the recall period for these papers varied from under 7 days24 to between 6 and 12 months.23,26,31,33 Two papers using inpatient interviews specified shorter recall periods of <24 h32 and <7 days,29 with one further paper using both inpatient interviews (recall period <3 days) and a postdischarge survey (<10 days after discharge) to identify PSIs experienced between the inpatient stay and discharge.17 Irrespective of the method of data collection for patient incident reports, in five of the papers the recall period was unclear or not reported.15,25,27,28,30

Method of eliciting patient reports

Two methods of collecting incident reports from patients dominated the included papers. Interviews (often using a structured, quantitative survey format) were the norm, with nine papers reporting use of this method.17,23–26,29,31–33 The other approach was to administer a survey or questionnaire for patients to complete alone, reported in three papers.15,28,30 One further paper reported using both methods: a questionnaire was first supplied to inpatients, followed by an interview for those reporting an adverse event.27

Relation to clinical governance/quality improvement

With regard to what is done with the information from patient reports, none of the papers in this review mentioned how safety information from patients could be used as part of the wider clinical governance/quality improvement agenda. In addition, no paper mentioned feedback to study participants or to the staff groups hosting the research.

Discussion

This paper set out to review the extant literature examining the nature of patient reporting of PSIs within a hospital setting. The literature suggests that in academic terms, patient reporting is in its infancy, with included papers varying considerably in terms of their focus, design and quality. Indeed, some of the papers seemed only to include patient reporting as a minor part of the research aims. This notwithstanding, we feel confident that this literature allows a number of conclusions to be drawn, which have implications for both research and practice.

Can patients report PSIs in a hospital setting?

It is clear, when one considers the results in their totality, that patients are in a position to report on safety-related issues experienced in a hospital setting. Furthermore, these studies do suggest that patients are able to identify PSIs from across a range of incident ‘types’, referencing a variety of different parties, and across the full range of preventability and severity. On this last point, although patients generally reported PSIs which were not life-threatening, they did report a large number of PSIs rated as ‘significant’ by physicians, suggesting that the patient's role in error detection is unlikely to be limited to information deemed to be clinically insignificant. Furthermore, in those studies undertaking physician classification, on average, nearly half of all PSIs reported by patients were judged to be ‘definitely’ or ‘probably’ preventable. This clearly demonstrates that, if patients are asked the right questions about the incident context, patient reporting may offer healthcare providers a valuable source of information about how to manage safety proactively.

Implications for patient reporting: research

None of the reviewed papers used any theoretical underpinning to inform either their design or their analysis of patient reports. A number of models may be of value in investigating patient reporting, for example, social cognition models such as the Theory of Planned Behaviour.34 However, we believe that a human factors perspective is perhaps the most appropriate foundation for research in this area, due to its focus on the multi-level, multi-factorial nature of PSI causation, as well as its increasing adoption by service providers in safety improvement. This perspective also attributes a high value to near-miss events as well as harm events, thereby widening the opportunity for learning from PSIs. Developing a method for capturing patient reports without recognising human factors may lead to a superficial interpretation of PSIs, and one which may inappropriately focus on the role of individuals in causation. This could be a particular issue for nurses, who as a professional group are frequently mentioned in patient reports, largely perhaps because of their ongoing visibility through the ‘patient lens’ and their numerous encounters as the last point of direct professional contact during a process of care. It has been suggested that patients do not have knowledge of the reasons for, or consequences of, adverse events.23 We would contend that this has yet to be fully established empirically, and would likely vary across different patients and their level of contact with health services. Furthermore, research from staff incident reporting suggests that such schemes fail to routinely capture the context and causes of PSIs.35,36 As for the value of patient reporting, we can infer from such research that even when those reporting do understand the clinical reasons for preventable events, reporting schemes may not facilitate the capture of such information, leading to the erroneous conclusion that reporters are unaware of any causal antecedents.

The length of the recall period between experiencing and reporting a PSI remains unexplored within the literature. Some authors have commented on the impact of lengthy recall periods introducing ‘recall bias’ into patient reporting.23,26 The authors of one study did report that PSI rates did not decrease as a function of time,23 but this relates only to rates of PSI reports, which is different from any impact on the accuracy of information. The literature currently lacks a sound understanding of, first, the key biasing influences on patient reporting of PSIs and, second, the optimal period of recall. Further research is needed to clarify the optimum recall period based on the experience of real patients, with the associated issues of acuity, length of stay, severity of illness, the emotional impact of a PSI and the potentially disorienting hospital environment.

A related issue is the method used for patients to report PSIs within these studies. All of the included studies actively ‘solicited’ reports from patients, via either an interview or a written survey. None of the study designs allowed patients to spontaneously report a PSI. This is significant: the methods used to collect reports may themselves inflate the extent to which patients appear willing to report PSIs. Some authors have reflected on this point, highlighting the related issue of how the role of the researcher and the nature of the questions may preferentially elicit certain responses.29 Perhaps the key research question should no longer be ‘can patients report?’, but rather ‘can patients report in a system designed to collect this information routinely in a clinical setting?’. Consequently, there is currently no evidence as to whether patient reporting is feasible outside of a research study or whether it could be an integral and complementary element of a service provider's safety intelligence network. In order to assess the latter and examine the validity of patient reporting, we argue that future studies should routinely compare the type and quality of patient reports with conventional methods of incident detection such as case note review.

Irrespective of the specific study design or the nature of capturing patient safety reports, there are ongoing issues of which researchers, practitioners and managers need to be cognisant when designing studies, or indeed systems, to capture PSI reports from patients. A significant issue is the somewhat paradoxical situation that those who are least able to report PSIs may also be at most risk of experiencing one. A number of authors have commented previously on this paradox, with reference to the inherent bias arising from asking only those who are discharged from hospital about PSIs, when those who did not survive to discharge may have been at a higher risk of experiencing a PSI.23,31 It has been demonstrated empirically that older people do experience more PSIs,37 and there is also emerging evidence to suggest that other factors (eg, not speaking the native language) influence the likelihood of experiencing a PSI38; these factors may also lead to under-representation in the studies conducted so far. Overall, current estimates of patient reporting may be particularly inaccurate because some of the most vulnerable groups are under-represented in patient safety research. Further research should focus on the best ways to engage with these patient groups in order to gain a fuller understanding of patient-reported PSIs.

Implications for patient reporting: practice

If patient reporting is to become a valid tool for measuring ‘performance’ in patient safety terms, consideration must be given to how it fits with existing error detection methods. Some authors have discussed the problem of a higher false-positive rate for patient reporting of medication errors than for those detected through physician and nurse reporting.25 Perhaps this finding highlights a weakness in the proposition that patient reporting can be a valid error detection tool. However, others have presented the counterargument that, as false-positive reports can be ‘validated’ by clinical review, the bigger issue is that patient reports might suffer from higher false-negative rates than clinician reports, meaning that many potential PSIs may go undetected.27 Thus, the evidence seems to suggest that patient reporting may risk both overestimating and underestimating the PSI rate, owing to misunderstanding of what is normal within the clinical context. There is evidence from the wider incident reporting literature that, when triangulated, different error detection methods may lack a high degree of overlap in the PSIs identified.25,39–42 Taking this into consideration, patient reporting may suffer from some of the perennial problems inherent in staff reporting,43 but as part of an error detection jigsaw it may also prove a valuable, and as yet untapped, resource. Mindful of the continuing policy emphasis on patient involvement and its relationship with quality improvement, it would seem entirely appropriate to integrate patient reporting as a viable means and formal component of clinical governance.

Limitations

Due to the focus of the overarching project (on the basis of which this review was conducted), the search was limited to studies within a hospital setting. This clearly excludes other healthcare settings, for example, primary/ambulatory care. However, very little has been published about incident reporting (by staff or patients) in primary care, and including such studies here could have skewed the findings from hospital-based studies.

Recommendations

As discussed above, future research is clearly needed to demonstrate that patient reporting can move beyond the research domain and become an established part of clinical governance. Some authors make suggestions for implementing patient reporting in practice: one has discussed the possibility of distributing notepads to patients to write down concerns, events or questions to share with healthcare staff,25 and others discuss the possibility of ‘hot reporting’, with systems designed to allow patients to use a dedicated phone line to the hospital pharmacy to report medication errors.32 Both suggestions take patient reporting into the realm of workable systems, with the caveat that they should be combined with other error detection methods to form part of an overall safety strategy. Furthermore, to be successful, there should be a ‘collective responsibility’ for the development of any patient reporting system, with ‘coordinated improvement efforts involving all members of the healthcare team (including patients)’ (Coulter and Ellins, p 172).44 Indeed, for a given system to be workable for patients, methods of reporting should be designed, tested and evaluated in consultation with patients. The work of Bate and Robert45 would be useful here; it suggests that whatever patients design should be part of a carefully managed emancipatory process that incorporates staff as stakeholders, increasing a sense of co-ownership but also ultimately demonstrating a fit with the pragmatics of clinical governance.

As with all patient involvement, before patient reporting tools are established within clinical settings, consideration needs to be given to the issue of patient burden. This concern has been raised previously,46 with the worry that blanket patient involvement interventions may risk shifting the responsibility for safety onto patients at a time when they are arguably at their most vulnerable. Furthermore, we know that not all patients will be willing or able to engage with such interventions, and we must therefore ensure that these patients are not negatively affected as a result of their lack of engagement. Going forward, both research and practice must ensure that any approach is flexible enough to accommodate such differing levels of engagement.

Conclusions

Patient involvement is a policy imperative. It would appear that hospitalised patients have the potential to report safety concerns. However, the evidence base is currently equivocal and dominated by studies which have focused upon active solicitation to the neglect of hot reporting. Future study designs should be underpinned by a human factors approach, developed in collaboration with patients, take account of memory recall and other cognitive biases, and use terminology that is understandable to patients but which also reflects the predominant language of patient safety. Samples should be representative of the entire hospital population, and the tool or tools developed must complement existing organisational governance and improvement strategies.

Acknowledgments

We acknowledge the support of the Yorkshire Quality and Safety Research Group and the unique contribution of the patient panel at both the Bradford Institute for Health Research and Newcastle University. We also thank our two anonymous reviewers for their comments and suggestions.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Funding This review was undertaken as part of a wider programme of research kindly commissioned by the National Institute for Health Research (NIHR) under the Health Services Research programme. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement We would be happy to share the quality appraisal of included papers with interested readers.