Article Text


Assessing the patient safety competencies of healthcare professionals: a systematic review
Ayako Okuyama,1,2 Kartinie Martowirono,1 Bart Bijnen1,3

  1 Foreest Medical School, Medical Center Alkmaar, Alkmaar, The Netherlands
  2 Graduate School of Medicine, Osaka University, Osaka, Japan
  3 VU University Medical Center, Institute for Education and Training, Amsterdam, The Netherlands

Correspondence to Ayako Okuyama, 1-7 Yamadaoka, Suita-shi, Osaka 565-0871, Japan; aokuyama-tky@umin.ac.jp

Abstract

Background Patient safety training of healthcare professionals is a new area of education. Assessment of the pertinent competencies should be a part of this education. This review aims to identify the available assessment tools for different patient safety domains and evaluate them according to Miller's four competency levels.

Methods The authors searched PubMed, MEDLINE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, PsycINFO and the Education Resource Information Center (ERIC) from the start of each database to December 2010 for English-language articles that evaluated or described tools for the assessment of the safety competencies of individual medical and/or nursing professionals. Reports on the assessment of technical, clinical, medication and disclosure skills were excluded.

Results Thirty-four assessment tools in 48 studies were identified: 20 tools for medical professionals, nine tools for nursing professionals, and five tools for both medical and nursing professionals. Twenty of these tools assessed the two highest Miller levels (‘shows how’ and ‘does’) and four tools were directed at multiple levels. Most of the tools that aimed at the higher levels assessed the skills of working in teams (17 tools), risk management (15 tools), and communication (11 tools). Internal structure (reliability; 22 tools) and content validity (14 tools), when described, were found to be moderate. Only a small number of tools addressed the relationship between the tool itself and (1) other assessments (concurrent, predictive validity; eight tools), and (2) educational outcomes (seven tools).

Conclusions There are many tools designed to assess the safety competencies of healthcare professionals. However, a reliable and valid toolbox for summative testing that covers all patient safety domains at Miller's four competency levels cannot yet be constructed. Many tools, however, are useful for formative feedback.

  • Educational measurement
  • Safety management
  • Professional competence
  • Health professions education
  • Medical education
  • Patient safety
  • Quality measurement
  • Crew resource management
  • Educational outreach
  • Academic detailing
  • Surgery

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non-commercial, and it is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Introduction

Patient safety is now increasingly recognised as a key dimension of quality care, and is thus progressively being integrated into the education of healthcare professionals.1–4 The Institute of Medicine, renowned for its 2000 publication To Err is Human,5 stated in a 2003 report that ‘all health professionals should be educated to deliver patient-centred care as members of an interdisciplinary team, emphasising evidence-based practice, quality improvement approaches, and informatics’.6 Professional bodies and the WHO have since endorsed a patient safety competencies framework for healthcare professionals to enhance local patient safety training programmes.6–9

Assessment is an essential component of competency-based education,10–12 and should be used for giving feedback as well as for summative examination. While a significant number of studies have been carried out to assess the competency of healthcare professionals, none of the previous systematic reviews on professional competence have focused on the domains relevant to patient safety.13 14 Competence is context dependent because it interrelates the ability of the healthcare professional, the task at hand, the ecology of the working environment, and the clinical contexts in which the task is performed.15 Accordingly, competence can only be assessed in the workplace, but trainees must know what is required in order to provide safe care, and know how to use the knowledge they have accumulated. Thus, ideally, there would be a toolbox available that covers all of the elements of the defined patient safety competencies throughout the curriculum, assessing each of these at the Miller level that fits the stage of training reached.16 This review therefore aims to identify and evaluate the available tools for the assessment of the safety competencies of physicians, residents, medical students, nurses, and nursing students in a hospital setting. Four review questions are discussed:

  1. What types of tool, focused on knowledge, skills, or behaviour, are available for the assessment of the individual safety competencies?

  2. What types of content are measured by these assessment tools?

  3. To what extent are these assessment tools reliable and valid?

  4. Which safety domains7 are being covered?

Methods

Data sources

Relevant English-language articles published from the start of each database to December 2010 were sourced using PubMed, MEDLINE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, PsycINFO, and the Education Resource Information Center (ERIC). Combinations of search terms were used, relating to assessment (educational measurement, teaching, assessment, curriculum, education professional), competence (professional competence, competence, competencies, educational status, ability, skills), and patient safety (medical error, risk management, safety, patient safety, error, errors).

An additional keyword search was also conducted using PubMed. Again, combinations of search terms were employed, relating to assessment (OSCE, peer assessment, oral examination, essay, portfolio, CEX, mini-CEX, non-technical skill) and patient safety (safety, patient safety, error, errors). The Medical Subject Headings (MeSH) were used when available.
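To illustrate how such term combinations can be executed programmatically, the following minimal sketch queries PubMed through the NCBI E-utilities esearch endpoint. The Boolean string is a hypothetical reconstruction of one assessment/competence/patient-safety combination with the review's language and date limits, not the exact strategy used here.

```python
# Minimal sketch: one search-term combination run against PubMed via the
# NCBI E-utilities esearch endpoint. The query string is an illustrative
# reconstruction, not the exact search strategy used in this review.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = (
    '("educational measurement"[MeSH Terms] OR assessment[tiab]) '
    'AND ("professional competence"[MeSH Terms] OR competencies[tiab]) '
    'AND ("patient safety"[tiab] OR "medical error"[tiab]) '
    'AND english[lang] AND ("1900/01/01"[PDAT] : "2010/12/31"[PDAT])'
)
resp = requests.get(ESEARCH, params={"db": "pubmed", "term": query,
                                     "retmax": 100, "retmode": "json"})
ids = resp.json()["esearchresult"]["idlist"]  # PubMed IDs of matching citations
print(len(ids), "citations retrieved for this term combination")
```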

Hand searches were also conducted of relevant journals on medical/nursing education and patient safety (Medical Education, Academic Medicine, Medical Teacher, Journal of Nursing Education, Journal of Patient Safety, The Joint Commission Journal on Quality and Patient Safety, BMJ Quality & Safety). Furthermore, the referenced articles listed in each of the selected publications were examined, the abstracts of relevant congresses were screened, and several patient safety and educational experts were consulted.

Selection of articles

An article was selected only if it fulfilled all of the following criteria: the subjects of the study were physicians, residents, medical students, nurses, or nursing students; the article described the tools for the assessment of the individual safety competencies, focusing on knowledge, skills or behaviour; and the content of the assessment covered the domains of the Canadian Safety Competencies Framework.7 The Canadian Patient Safety Institute finalised this inter-professional safety competencies framework in 2008.7 The framework consists of six core competency domains:

  1. Contributing to a culture of patient safety.

  2. Working in teams for patient safety.

  3. Communicating effectively for patient safety.

  4. Managing safety risks.

  5. Optimising human and environmental factors.

  6. Recognising, responding to, and disclosing adverse events.

An article was excluded if it focused on the assessment of procedural or technical skills (eg, surgical skills, time and motion instrument handling), skills of medication safety (eg, drug prescribing skills, drug administration skills), general clinical skills, diagnostic skills, or open disclosure skills.

Data extraction

Two authors (AO, KM) reviewed the titles and abstracts of citations generated by the search to assess their eligibility for further review based on the selection criteria, and chose relevant articles for possible inclusion. They, supported by the third author (BB), then reviewed all of the selected articles and decided which to include in this study. The standard Best Evidence in Medical Education coding sheet17 was modified to focus on relevant parameters (country, single/multi-institution, speciality, trainee level, use for formative/summative evaluation) and tool characteristics (assessment methods, assessed competencies, reliability, validity, outcomes). One author (AO) abstracted data using this modified coding sheet, and then all of the authors reviewed the abstractions to ensure completeness and accuracy. Differences in data abstraction were resolved by consensus.

Information on the reliability, validity and outcomes of the assessment tools described in the selected articles was also extracted. Messick's unitary concept, ‘construct validity’, was used: the degree to which a score can be interpreted as representing the intended underlying construct.14 18–20 Validity evidence was evaluated for five areas:

  1. Content (face) validity—the relationship between a tool's content and the construct it intends to measure.

  2. The response process—the relationship between the intended construct and the thought processes of subjects or observers. This provides evidence that raters have been properly trained (faculty development).

  3. Internal structure (reliability)—looks at internal consistency, test–retest reliability, agreement (inter-rater reliability), and generalisability.

  4. Relationship with other variables (concurrent, predictive validity)—the correlation of scores with those from other assessments or outcomes, and the differences between scores of learner subgroups.

  5. Outcomes (educational)—the consequences of assessment.

A modified version of Kirkpatrick's hierarchy was used to evaluate the outcomes of tool implementation.14 21 Outcome levels which were abstracted included: 1) participation, in other words the learners' and observers' views on the tool and its implementation; 2) self-assessed modification of learner and observer knowledge, skills and behaviour; 3) transfer of learning, in other words an objectively measured change in learner or observer knowledge or skills; and 4) results in terms of a change in organisational delivery or quality of patient care. Information regarding the cost of tool development and implementation was also extracted.

Data synthesis

Following identification of the patient safety domains covered by the assessment tools, the tools were categorised according to Miller's four competency levels: ‘knows’, ‘knows how’, ‘shows how’, and ‘does’.16 At these four levels, respectively, the trainee should demonstrate the ability to imitate or replicate a protocol, apply principles in a familiar situation, adapt principles to new situations, and associate new knowledge with previously learned principles.16 22

All data included in this report were previously published and publicly available. Hence, our study did not require submission to the local institutional review board for ethical approval.

Results

Search results and article overview

The initial literature search identified 4773 citations (figure 1). Of these, 209 articles were retrieved for detailed review to determine whether they met the inclusion criteria. Agreement between the two authors (KM, AO) on the title and abstract review was high (Cohen's κ = 0.91). Forty-six articles met our inclusion criteria, one article was retrieved from the references of a selected article, and one article was retrieved from a hand search. In total, 48 articles were selected, and 34 unique assessment tools were identified (table 1). More detailed information on the selected studies is available in the online appendix.23–70
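Cohen's κ corrects raw inter-rater agreement for the agreement expected by chance. The following minimal sketch shows the computation; the include/exclude decisions below are invented for illustration and are not the actual screening data from this review.

```python
# Minimal sketch: chance-corrected agreement between two screeners.
# The include/exclude decisions are synthetic, for illustration only.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # 1 = include, 0 = exclude
reviewer_b = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```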

Figure 1 Literature search and study selection process.

Table 1 Characteristics of 48 studies describing 34 tools for patient safety competencies.

Forty-five of our selected articles (94%) were published after 2005. The years 2009 (16 articles, 33%) and 2010 (10 articles, 21%) yielded the highest numbers of publications relevant to this review. Almost half of the selected articles (22 articles, 46%) came from the USA; of the remaining 26, 14 (29%) came from Europe and six (13%) from Canada. Thirty-four of the selected studies (71%) were conducted within single institutions. Just one of the selected articles (2.1%) referred to the implementation cost of competency assessment.31

Of our 34 identified assessment tools, 12 (35%) failed to adequately explain the purpose of competency assessment. Seventeen tools (50%) in 20 studies were used for the summative evaluation of training programmes.23–25 27–29 31–33 50 52 53 59 61 62 64 66–69 Three tools (8.8%) were employed for the formative evaluation of trainees.30 34 70 Two tools (5.9%) were utilised for both formative and summative evaluation purposes.35–41 45–49

Description of tools

Of the 34 assessment tools identified in our 48 selected studies, seven tools (21%) measured the trainees' knowledge of patient safety (the ‘knows’ level) (table 2). Three tools (8.8%) assessed the trainees' applied knowledge using case management (the ‘knows how’ level). Nineteen tools (56%) were used in simulations, with or without standardised patients, to evaluate the trainees' performance (the ‘shows how’ level). Four tools (12%) assessed the trainees' competencies at multiple levels (the ‘knows’, ‘knows how’, and ‘shows how’ levels); three of these tools were ‘Objective Structured Clinical Examinations’ (OSCEs). Two tools (5.9%) implemented the direct observation of trainees with actual patients (the ‘does’ level): one evaluated the non-technical skills of anaesthesia residents in the operating theatre,41 and the other assessed nurses' safe patient care at the bedside of actual inpatients.70

Table 2 Description of 34 tools in 48 studies for individual trainee safety competency assessment.

Twenty of our 34 identified tools (59%) assessed the competencies of medical professionals. Of the remaining 14 tools, nine (26%) assessed the competencies of nursing professionals, and five (15%) assessed both medical and nursing professionals.27 48 49 56 60 61

Assessed content of the Canadian safety competencies

Referring back to the six aforementioned domains of the Canadian Safety Competencies Framework, 25 of our tools (74%) assessed the ‘managing safety risks’ competencies, and 24 tools (71%) measured the ‘working in teams for patient safety’ competencies. Approximately half of the tools (16 tools, 47%) assessed the ‘communicating effectively for patient safety’ competencies, and 12 tools (35%) evaluated the ‘contributing to a culture of patient safety’ competencies. Six tools (18%) assessed the ‘recognising, responding to, and disclosing adverse events’ competencies, while only four tools (12%) measured the ‘optimising human and environmental factors’ competencies. The three competency domains ‘managing safety risks’ (medical 15 tools, nursing eight tools), ‘working in teams for patient safety’ (medical 15 tools, nursing five tools), and ‘communicating effectively for patient safety’ (medical 11 tools, nursing four tools) were assessed by a number of tools for both physicians and nurses (figure 2). The ‘contributing to a culture of patient safety’ domain was evaluated for nurses by five of a total of nine tools (56%), and for physicians by only six of 20 tools (30%).

Figure 2 Number of tools available for assessing each of the Canadian Safety Competency domains.

Most of the tools at the lower levels (‘knows’ and ‘knows how’; seven of 10 tools, 70%) assessed the ‘contributing to a culture of patient safety’ domain. In contrast, most of the tools at the higher levels (‘shows how’ and ‘does’) assessed the ‘working in teams for patient safety’ (17 of 20 tools, 85%), ‘managing safety risks’ (15 of 20 tools, 75%), and ‘communicating effectively for patient safety’ (11 of 20 tools, 55%) domains. The four tools used at multiple levels assessed all of the domains except ‘optimising human and environmental factors’.

Validity evidence

Content validity was addressed in the descriptions of 14 tools (41%). Regarding the response process, rater-training was described for nine tools (26%), usually only once and briefly. Internal structure was reported for 22 tools (65%), and most of these showed acceptable reliability. Correlations between scores and other assessment variables were described for eight tools (24%).
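Internal consistency, one element of internal structure, is commonly summarised with Cronbach's α. A minimal sketch of the computation follows; the respondents-by-items score matrix is synthetic and purely illustrative, not data from any of the reviewed studies.

```python
# Minimal sketch: Cronbach's alpha for internal consistency.
# scores is a (respondents x items) matrix of ratings; the values
# below are synthetic, for illustration only.
import numpy as np

scores = np.array([[4, 5, 4, 3],
                   [3, 4, 3, 3],
                   [5, 5, 4, 4],
                   [2, 3, 2, 2],
                   [4, 4, 5, 4]], dtype=float)

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # per-item variance
total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # >= ~0.7 is often read as acceptable
```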

The safety competency results obtained using the assessment tools were compared with technical skills,35 45 55 56 workplace performance,43 written examination scores,28 31 65 and pass/fail results.28 Safety competency results showed a high (r=0.65–0.80)35 45 55 or modest (r=0.34)56 correlation with technical skills, but a low (r=0.13)32 68 or modest (r=0.25)32 35 68 correlation with final written examination scores. Performance scores were also compared across training levels and trainee characteristics for 13 of the assessment tools (38%); 11 tools in 17 studies produced scores that increased with training level,23–26 33 37–39 45 48 49 52 57–59 61 67 while for three tools in four studies this trend was not observed.36 41 53 62 Of the 19 tools used for summative assessment, six evaluated both content validity and internal structure.25 38 39 43 46 59 62 Validity evidence for another five tools was not reported.27 50 64 66 69
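For readers unfamiliar with the statistic behind these r values, the following minimal sketch computes a Pearson correlation between two sets of scores; both score lists are synthetic and purely illustrative.

```python
# Minimal sketch: Pearson correlation between two sets of scores.
# Both score lists are synthetic, for illustration only.
from scipy.stats import pearsonr

safety_scores = [62, 71, 55, 80, 68, 74, 59, 77]     # e.g. non-technical skill ratings
technical_scores = [58, 75, 50, 82, 70, 71, 61, 79]  # e.g. technical skill ratings

r, p = pearsonr(safety_scores, technical_scores)
print(f"r = {r:.2f}, p = {p:.3f}")  # r in the 0.65-0.80 range would count as 'high' above
```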

Trainees and observers reported their experiences as being generally positive (Kirkpatrick level 1) for seven tools in 10 studies (21%).25 31 34 45 47–50 70 71

Discussion

Developing reliable and valid tools for the assessment of safety competencies is a challenging task.22 In the past few years the number of publications on this subject has increased, and several assessment tools are now available.

When assessing safety competencies, instructors should understand the limitations of different approaches and use a range of methods that fit the specific learning situations of Miller's four competency levels.72 Of the 34 tools identified in this review, 10 can be used to assess knowledge of key patient safety concepts, such as adverse events, just culture, and systems thinking, at Miller's two lower levels. Four25 26 62 63 of these 10 tools have demonstrated content validity and internal reliability and are valuable for evaluating the cognitive knowledge and reasoning of both medical and nursing students, particularly after a course in basic patient safety science; the remaining six have limited content validity and reliability. Written examinations such as multiple choice questions, however, tend to measure the potential to perform rather than actual work performance, and more advanced trainees should be assessed at the higher competency levels through direct observation in simulation scenarios or real practice.73 We found 20 tools for the examination of the ‘shows how’ and ‘does’ levels in the domains of ‘managing safety risks’, ‘working in teams for patient safety’, and/or ‘communicating effectively for patient safety’. Of these 20 tools for the assessment of the competencies of medical and nursing professionals, three56–59 have demonstrated appropriate content validity and internal reliability and are suitable for use in emergency situations. Two of the 20 tools44–49 are well suited for use in surgery, and a further two are suitable for the assessment of anaesthesiologists in training.35–41 43

Each tool of course has its strengths and limitations, and complementary methods must be sought in order to overcome these weaknesses.22 72 For instance, no tool covered all the patient safety competency domains, and therefore tools should be combined or alternated.31 33 34 Unfortunately, the various tools cover the domains unevenly, and more work has to be done to investigate tools in the important domains of ‘contributing to a culture of patient safety’ and ‘optimising human and environmental factors’, and in the subdomain of ‘recognising, responding to, and disclosing adverse events’.

It should further be noted that the described tools were mainly applied in specific situations, such as in the operating theatre, during the provision of anaesthesia, and for crisis management, and only a few tools have been developed for the assessment of competencies in daily practice in other workplaces. For example, the Anaesthetists' Non-Technical Skills System (ANTS) and the Non-Technical Skills System (NOTECHS) are available for the assessment of safety competencies in their respective clinical situations (the higher competency levels). The importance of context in measuring the higher competency levels means that such tools cannot simply be copied and used in other working environments. Each discipline or speciality must adapt the tools to its own requirements.

Another issue for discussion is the limited proven reliability and validity of the described tools. Internal structure was evaluated for the majority of the tools, but content validity for fewer than half. A modest inter-rater reliability was reported for some tools, such as the ANTS and the Non-Technical Skills for Surgeons. Sufficient rater-training is a requirement for summative assessment,44 74 75 and yet in the majority of the studies rater-training was either minimally described or not described at all.

The correlation of assessed competency scores with other types of assessment and educational outcomes was evaluated in only a few studies. Four tools showed a strong correlation with technical skills.35 45 55 56 However, for one of these tools, the ANTS, an imperfect distinction between non-technical skills and pure medical knowledge was noted in some elements of the scale.36 One review has stated that “improving patient outcomes as a result of patient safety education represents a particularly daunting task, given that intensive, large-scale quality improvement efforts often fail to demonstrate improvements in health outcomes”.76 This illustrates why evaluating educational outcomes is so difficult.

Finally, assessment tools should also have an impact on future learning and practice, be acceptable to learners and faculty, and costs must be considered.77 Although data are scant, these subjects are important for the compliance of the involved staff and trainees and the feasibility of implementing the assessment tools.

The assessment of safety competencies is a new field of education, and it is clearly difficult to develop reliable and valid assessment tools. We would like to express our appreciation for the pioneering work of the researchers in this field, as several useful tools could be identified. This review, however, has its own limitations. First, our search strategy for article selection focused on competency rather than knowledge; we searched for articles using the terms ‘competence’, ‘ability’, and ‘skill’, and articles that did not use these specific terms may have been omitted. Hence our search methodology may have excluded articles which described or evaluated patient safety knowledge. We tried to compensate for this by searching the reference lists of the quoted articles, hand searching relevant journals on medical/nursing education and patient safety, consulting experts, and screening abstract books of patient safety conferences. Second, we did not focus on technical skills and medical knowledge. Tools used principally to assess technical skills, for example the Objective Structured Assessment of Technical Skill,78 were thus excluded, even if they implicitly included some elements of non-technical skills. Making the non-technical skills more explicit in such studies would, in our opinion, enhance patient safety awareness. Finally, studies that assessed team competencies, such as those using the Oxford NOTECHS scale, were also excluded, as our focus was the individual learner.79 Our results may therefore not give the complete picture of the tools available for the assessment of safety competencies. Team training is undoubtedly important to patient safety, as is the evaluation of the functioning of a team.

Conclusion

This review has identified many tools for the assessment of the safety competencies of healthcare professionals at each of Miller's competency levels. The reliability and validity of these tools, where reported, are, however, modest at best. The tools do not cover the whole spectrum of patient safety competencies at all of Miller's competency levels, and they often suit only specific working situations. New research should focus more on the domains of ‘contributing to a culture of patient safety’ and ‘optimising human and environmental factors’. Appropriate use of such tools can motivate trainees' learning and at least enhance patient safety awareness.

Acknowledgments

We would like to thank the medical librarians, Mr Otten at the VU University Amsterdam Medical Library in the Netherlands and Mr Swa at the Osaka University Life Sciences Library in Japan, for assistance with the literature search. We would also like to thank Dr van Luijk, associate professor of Medical Education, VU University Medical Center and Institute for Education and Training, Amsterdam, the Netherlands, for his critical reading of the manuscript. These individuals did not receive compensation for their efforts beyond their usual salary.

References

Footnotes

  • Funding Supported in part by the Stichting Kwaliteitsgelden Medisch Specialisten in the Netherlands (SKMS, Quality foundation of the Dutch Medical Specialists). Martowirono is supported by a scholarship for an advanced researcher from the SKMS.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.