Starting off on the right foot: providing timely feedback to learners in quality improvement education
Amanda L Mayo1,2, Brian M Wong2,3

1 Division of Physical Medicine and Rehabilitation, Department of Medicine, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
2 Centre for Quality Improvement and Patient Safety (CQuIPS), Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
3 Division of General Internal Medicine, Department of Medicine, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Brian M Wong, Medicine, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON M4N 3M5, Canada; BrianM.Wong@sunnybrook.ca

Quality improvement (QI) education, when optimised, can improve both educational and clinical outcomes, often through learner engagement in QI projects.1 2 However, just as in QI work more broadly, it is hard to improve a QI curriculum when learning outcomes are not measured. With the proliferation of QI education programmes, a variety of tools now exist to assess QI knowledge and skill acquisition and thereby enable evaluation of QI education effectiveness. For example, the Beliefs, Attitudes, Skills, and Confidence in Quality Improvement (BASiC-QI) Scale is a self-assessment tool that measures learner beliefs, skills and confidence towards QI, and can be used to monitor the impact of QI education on learners.3 The Quality Improvement Knowledge Application Tool Revised (QIKAT-R) uses scenarios to assess a learner’s ability to propose an aim, a measure and one focused change (three key elements of the Model for Improvement) that address the system-level issue raised in each scenario.4 The Mayo Evaluation of Reflection on Improvement (MERIT) Tool reviews learner reflections on QI problems of merit and opportunities for improvement, with consideration of personal and system-level factors.5 Tools specifically aimed at assessing learner competencies in QI project proposal design and completion have also been developed; these include the Quality Improvement Proposal Assessment Tool (QIPAT-7),6 the Quality Improvement Project Evaluation Report (QIPER)7 and the Multi-Domain Assessment Tool for Quality Improvement projects (MAQIP).8

The tools above vary in their intended purpose and use across QI educational contexts. None was specifically developed for Lean Six Sigma education, despite frequent use of this approach in many healthcare sectors.9 In this issue of BMJ Quality & Safety, Myers and colleagues report on the development and validation of a novel tool to assess learners’ skills in using the A3 problem-solving framework common to Lean methodology.10 The A3 problem-solving framework gets its name because its various elements fit on a single A3-size piece of paper. It typically includes sections on the problem background, current conditions or baseline data, target goals for a project, analysis of root causes of the problem, proposed countermeasures or QI interventions, an implementation plan and follow-up steps to ensure iterative QI work. Adapting a tool that originated at Toyota, the authors developed an A3 proposal template to capture the structured approach proposed to analyse and address a QI problem, a tool for proposal assessment and a self-instructional content guide to support the training of faculty rating these A3 project proposals. The intent of the tool is to objectively assess the quality of A3 proposals prepared by learners, and it was designed with online digital supplemental content to support those teaching or supervising Lean QI work in rating the A3 proposals reliably. Using the assessment tool and self-instruction package, 12 raters with differing levels of QI expertise rated 23 items across six example A3s with high inter-rater reliability. The example A3s were strategically chosen to represent weak, satisfactory and exemplary proposals.
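For readers less familiar with the format, the sketch below is purely illustrative: the section names follow the generic A3 structure described above, not the authors’ actual template, and the proposal content is invented. It shows one way the sections of an A3 proposal might be represented programmatically, for example to drive a simple completeness check before a proposal is submitted for rating.

```python
from dataclasses import dataclass, field

# Generic A3 sections as described above; a given programme's template
# may name or subdivide these differently.
A3_SECTIONS = [
    "background",
    "current_conditions",   # baseline data
    "target_goals",
    "root_cause_analysis",
    "countermeasures",      # proposed QI interventions
    "implementation_plan",
    "follow_up",            # steps to ensure iterative QI work
]

@dataclass
class A3Proposal:
    """A learner's A3 proposal, with free-text content keyed by section."""
    title: str
    sections: dict = field(default_factory=dict)

    def missing_sections(self) -> list:
        """Return any expected sections the learner has left blank."""
        return [s for s in A3_SECTIONS if not self.sections.get(s, "").strip()]

# Hypothetical usage: flag incomplete proposals before they reach a rater.
proposal = A3Proposal(
    title="Reducing delayed discharges on the rehabilitation unit",
    sections={"background": "Discharges are frequently delayed past 11:00..."},
)
print(proposal.missing_sections())
```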

The assessment package was also designed and iteratively refined using QI. This occurred over five development cycles and involved raters who included non-physician Lean experts, physicians with QI teaching experience and QI-naïve physicians, all of whom provided feedback on different versions of the tool. The first cycle included a review of the literature and creation of an initial template, content guide and assessment tool, which two raters piloted by assessing one sample A3. In the second cycle, feedback from cycle 1 informed revision of the materials, with instructions added to allow for self-instruction. For cycles 3–5, A3 training examples ranging from weak to strong proposals were used to test inter-rater agreement. The research team also used a structured feedback form and debriefing phone calls to solicit opinions on the use of the self-instruction package as well as the feasibility of performing assessments. The culmination of the development cycles was a final A3 assessment tool with 23 items assessing the A3 proposal document and 10 items requiring knowledge of the local project context. This study illustrates the value of bringing an improvement mindset to the development of an educational assessment tool, reflecting a wider call for the more deliberate use of QI methods to improve health professions education.11
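The specific reliability statistic used by the authors is not reproduced here, but for readers curious about how agreement among multiple raters scoring the same proposals is typically quantified, the sketch below computes a two-way intraclass correlation coefficient, ICC(2,1), from a proposals-by-raters score matrix. The function name and the synthetic ratings are our own; a real analysis would substitute the item-level scores from a rating exercise like the one described above.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` has shape (n_targets, k_raters): each row is one rated
    proposal (or item), each column is one rater.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)  # targets
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters
    ss_total = np.sum((scores - grand) ** 2)
    ms_err = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))

    # Shrout & Fleiss formulation of ICC(2,1).
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Synthetic example: six proposals scored on a 1-5 scale by four raters
# whose scores cluster around each proposal's "true" quality.
rng = np.random.default_rng(0)
true_quality = np.array([1, 2, 3, 3, 4, 5], dtype=float)
ratings = np.clip(true_quality[:, None] + rng.normal(0, 0.5, (6, 4)), 1, 5)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")  # values near 1 indicate strong agreement
```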

Evaluating the quality of QI project proposals has the potential to improve several aspects of QI education. For learners, project feedback tools such as the A3 assessment tool offer an objective measure of their ability to translate QI concepts and tools into practice. These assessments allow faculty to provide formative feedback at an early stage of the programme to inform ongoing QI learning activities, ultimately improving QI learning outcomes.12 Project proposal assessment tools such as QIPER, QIPAT-7, MAQIP or the A3 assessment tool have several distinct advantages over knowledge assessment tools such as the QIKAT-R. First, they cover a larger set of QI knowledge and skill domains than the QIKAT-R, which focuses more narrowly on the ability to set an aim, identify a measure and propose a change. The A3 assessment tool, in addition to these fundamental QI skills, addresses broader areas such as problem selection, current state analysis, root cause analysis and rapid cycle change. Second, project assessment tools are contextually relevant to an actual QI problem that the learners are attempting to address, which allows feedback to reflect the application of QI skills in real-world clinical settings rather than in scenarios that may not be relevant to certain specialties.

For faculty, the A3 assessment tool, along with its accompanying self-instruction package, serves as an opportunity for ongoing faculty development in the area of QI practice.13 Rater training plays a critical role in ensuring authentic, reliable and meaningful learner assessment.14 Reliability of QIKAT-R ratings, for instance, varies between those with and without QI experience.15 Myers and colleagues recognised the importance of rater training, devoting considerable effort to the creation and iterative refinement of a self-instruction package that faculty would review prior to evaluating QI project proposals. Not only did this contribute to more reliable assessments of QI project proposals, but faculty members also reported that its use ‘sharpened understanding and ability to evaluate topics where [they] don’t know clinical content as well.’10 One additional advantage of the A3 assessment tool is the self-instructional nature of the rater training, which allows faculty to hone their QI skills on their own (taking 1.5 hours on average), making this a more feasible and scalable solution that avoids overburdening the often small number of advanced experts who are frequently called on to support and develop faculty QI capacity. Thus, the routine incorporation of such project assessment also has the potential to address a long-standing challenge in QI education: a lack of faculty capacity in QI to support educational programmes.16

At a programmatic level, the routine use of QI project proposal assessments can identify opportunities to refine and improve QI curricular content and delivery. For example, as part of its design process, the QIPER tool was previously used to rate 24 project presentations. The research team pooled their ratings and identified certain QI domains, most notably the reporting and interpretation of results, that consistently received lower scores.7 These data could be used to inform programme improvements such as adding more didactic instruction and practice opportunities in data analysis, interpretation and reporting. Similarly, a detailed assessment of the fidelity of 421 Plan–Do–Study–Act (PDSA) cycles across 39 projects identified common areas of deficiency, including a lack of intentional use of PDSA, knowledge gaps with respect to using PDSA principles properly and difficulty applying PDSA cycles in practice.17 By characterising the key contributing factors that negatively influenced PDSA fidelity, the programme implemented a number of changes including an updated project selection process, redesign of training, increased hands-on support and investment in training QI support staff. A similar approach could be taken by using the A3 assessment tool to identify those structured A3 sections that receive consistently lower scores by raters, thereby informing the need to improve education on those topics.
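As a concrete illustration of this kind of programme-level analysis, the short sketch below pools item ratings by QI domain and flags domains whose mean score falls below a threshold. The domain names, scores and threshold are invented for illustration; they are not taken from the QIPER, PDSA or A3 studies cited above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pooled ratings: (project, QI domain, score on a 1-5 scale).
ratings = [
    ("proj1", "aim_statement", 4), ("proj1", "data_interpretation", 2),
    ("proj2", "aim_statement", 5), ("proj2", "data_interpretation", 3),
    ("proj3", "aim_statement", 4), ("proj3", "data_interpretation", 2),
    ("proj3", "root_cause_analysis", 3),
]

# Group scores by domain across all projects.
by_domain = defaultdict(list)
for _, domain, score in ratings:
    by_domain[domain].append(score)

THRESHOLD = 3.0  # flag domains averaging below this for curricular attention
for domain, scores in sorted(by_domain.items()):
    avg = mean(scores)
    flag = "  <-- consider added instruction" if avg < THRESHOLD else ""
    print(f"{domain}: mean {avg:.2f} (n={len(scores)}){flag}")
```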

The most exciting prospect of all is the role that QI project proposal feedback might play in enhancing the likelihood of current and future project success and the improvement of clinical outcomes. Selecting the ‘right’ target is important not only for achieving educational goals and objectives but also, if carefully considered, for ensuring that the effort applied to QI project work contributes to system-level change. The A3 assessment tool places particular emphasis on selecting a problem of interest that is relevant both from a severity standpoint (ie, quality problems that have important clinical consequences) and a frequency standpoint (ie, quality problems that affect a large proportion of patients). Other frameworks list additional considerations, such as the degree to which the problem falls within the project team’s locus of control, the availability of effective measures to address root causes, implementation issues including the degree to which significant reorganisation is required, resource limitations (including human resources, financial costs and availability of data), and the degree to which the problem aligns with institutional areas of strategic interest and priority.18

A clear strength of the A3 assessment tool is the early attention paid, at the proposal stage, to other critical factors such as current state analysis via process mapping, goal definition to ensure feasibility and appropriate scope, and clear linkage of the proposed changes to the underlying root causes. These basic elements establish the foundation for QI project success and, ultimately, sustainability of improved quality of care19; as such, providing feedback towards the start of the project increases the likelihood that projects will ‘start off on the right foot’. Even though the A3 assessment tool covers a broad set of QI skills, it does not fully address other important QI skills such as change management, leadership, project reflection or reassessment, to name a few.20 Yet these ‘soft’ skills are often needed to effectively carry out the steps outlined in the A3 assessment tool. Thus, more work is needed to identify effective ways to provide learners with feedback on this broader set of QI competencies.

Like any other change in health professions education, introducing a routine process of project proposal review and feedback requires careful attention to implementation issues. The way that educators choose to use the tool depends on the aspect of QI education they are seeking to address. For example, if the goal is to use project feedback to guide learners on their application of QI methods, programmes need to dedicate time in the curriculum for learners to prepare a proposal and ensure that faculty not only know how to assess the proposals but also develop the necessary skills to provide meaningful and actionable feedback associated with these ratings. On the other hand, if the goal is to leverage the tool to develop faculty skills in QI, then programmes need to consider how to incentivise faculty participation and recognise their involvement in supporting QI education.21 Ultimately, provision of timely project feedback has the potential to both train learners and build faculty capacity in QI, and to contribute to achieving QI education’s dual aim of improved learning and clinical outcomes.

Footnotes

  • Twitter @Brian_M_Wong

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
