Background The identification and articulation of programme theory can support effective design, execution and evaluation of quality improvement (QI) initiatives. Programme theory includes an agreed aim, potential interventions to achieve this aim, anticipated cause/effect relationships between the interventions and the aim, and measures to monitor improvement. This paper outlines the approach used in a research and improvement programme to support QI initiatives in identifying and articulating programme theory: the action effect method.
Background to method development Building on a previously used QI method, the driver diagram, the action effect method was developed using co-design and iteration over four annual rounds of improvement initiatives. This resulted in a specification of the elements required to fully articulate the programme theory of a QI initiative.
The action effect method The action effect method is a systematic and structured process to identify and articulate a QI initiative's programme theory. The method connects potential interventions and implementation activities with an overall improvement aim through a diagrammatic representation of hypothesised and evidenced cause/effect relationships. Measure concepts, in terms of service delivery and patient and system outcomes, are identified to support evaluation.
Discussion and conclusions The action effect method provides a framework to guide the execution and evaluation of a QI initiative, a focal point for other QI methods and a communication tool to engage stakeholders. A clear definition of what constitutes a well-articulated programme theory is provided to guide the use of the method and assessment of the fidelity of its application.
- Evaluation methodology
- Quality improvement methodologies
- Quality measurement
- Implementation science
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Need for theory in quality improvement
Quality improvement (QI) initiatives are undertaken in increasing numbers in an attempt to improve quality of care and reduce unwarranted variation. It is essential to understand the effectiveness of these initiatives; however, they commonly lack an underlying theory linking a change to its intended outcome, which inhibits the ability to demonstrate causality and hinders widespread uptake.1–3
Programme theory is used to describe an intervention and its anticipated effects and is critical to support both high-quality evaluation and the development of interventions and implementation plans.3–6 Development of programme theory can provide a means to tackle common social challenges of QI such as creating a shared strategic aim and increasing acceptance of interventions.7
While QI methods for the identification and articulation of theory and causal relationships exist,8 ,9 there has been little study of their application in practice in healthcare settings. This paper describes the approach developed by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Northwest London (NWL) to identify and articulate programme theory, the action effect method (AEM).
NIHR CLAHRC for NWL
The NIHR in the UK commissioned regional research programmes, the CLAHRCs, to support systematic and effective translation of evidence into practice, aiming to improve quality of care and outcomes for patients.10 CLAHRC NWL adopted an approach using a suite of QI methods to support a range of research and improvement initiatives across the region, driven by an overarching research agenda to investigate the application and impact of QI methods in healthcare.11–15 In delivering this programme, CLAHRC NWL developed extensive first-hand experience of supporting initiatives to develop programme theory in diagrammatic form.
Background to method development
The AEM was developed using co-design through four iterations that took place over four annual rounds of improvement initiatives, building on driver diagrams.9 In total, 43 unique programme theory diagrams (driver diagrams and action effect diagrams (AEDs)) were produced over this time, each for a specific improvement initiative and setting.
During the early phase of CLAHRC NWL, teams used the driver diagram approach (22 diagrams produced in years 1 and 2 collectively). Driver diagrams are intended to offer an easy-to-read and digestible theory of improvement for an initiative. It was observed, however, that teams struggled to work collaboratively to produce useful theory and perceived diagram construction as low value. Best-practice examples of driver diagrams were provided by expert facilitators,16 but it became evident that there was limited published guidance on what constituted a ‘good’ diagram, the purpose of such diagrams and how individual components were defined. The driver diagrams produced in practice were of limited value as programme theory, which resulted in time delays and reworks to identify suitable metrics and develop evaluation plans (personal communication; Huckvale K, Woodcock T, Poots A, 2014).
To overcome these challenges, a more systematic approach was developed, describing the components of the diagram and how it can be used consistently to articulate programme theory clearly. The term ‘driver’ was often found confusing by improvement initiative team members, who associated it with strategic influences such as financial and political motivations rather than with the actions the initiative could undertake. In the new approach, the programme theory diagram was therefore named the action effect diagram to signify its purpose more accurately. This approach retains the benefits of the driver diagram's visual layout while clearly specifying the necessary programme theory features and their diagrammatic representation.
The new approach, the action effect method, has been applied in the later phase of CLAHRC NWL QI initiatives (21 diagrams produced in years 3 and 4 collectively), as well as in national and regional QI initiatives in England, Scotland and Australia.
What is the AEM?
The AEM is a facilitated approach to developing an AED, a visual representation of the programme theory for a QI initiative. Programme theory, in the context of improvement initiatives, is defined as the articulation of an overall aim, potential intervention(s) that will be tested in an attempt to achieve this aim, hypothesised cause/effect relationships linking intervention(s) to the aim and measure concepts that link to the cause/effect chains to support evaluation. The AEM initially engages members of a QI initiative in a group session to share their individual perspectives and aspirations for the project to develop a shared aim and identify factors contributing to that aim. The diagram produced in this initial session is developed into a full programme theory diagram as details of the initiative are discussed and agreed. This programme theory may be contributed to further as the initiative develops over time.
Through this articulation of programme theory, the AEM acts as a multipurpose methodology for practitioners and researchers; it supports the design and execution of QI initiatives as part of a suite of QI methods, highlights existing evidence where available and informs related evaluation activities.
The diagram consists of an aim, factors, cause/effect arrows, measure concepts and evidence annotations. Factors are categorised into contributing factors, interventions and implementation activities. The aim sits to the left of the diagram, and intervention(s) and implementation activities to the right (definitions are provided below and in table 1). Cause/effect chains, made up of factors linked by arrows, indicate how actions could result in the intended effect and, ultimately, improvement(s). Because the aim is on the left-hand side of the diagram, the arrows always point from right to left to indicate cause and effect (figure 1). An example of an AED is presented in figures 2 and 3 based on a CLAHRC NWL chronic obstructive pulmonary disease (COPD) QI initiative. Figure 2 represents the initial diagram produced by the QI team, and figure 3 represents the revised diagram prepared retrospectively at the end of the initiative, incorporating learning gained throughout the initiative.17 Selected examples are given in box 1 to provide further explanation of the diagram's features and share our practical experience regarding how to construct a high-quality programme theory diagram.18 Further examples are given in online supplementary appendix 1.
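For readers who wish to work with these structures programmatically, the components described above can be sketched as a small directed graph: factors are typed nodes, cause/effect arrows are edges pointing towards the aim, and measure concepts annotate individual factors. The sketch below is purely illustrative and not part of the method itself; the class and function names are ours, and the factor names are abbreviated from the COPD example.

```python
# Illustrative sketch of an action effect diagram (AED) as a directed graph.
# Nodes are factors (or the aim); edges are cause/effect arrows, read
# right to left on the diagram (cause -> effect, converging on the aim).
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str
    kind: str  # "aim", "contributing factor", "intervention" or "implementation activity"
    measure_concepts: list = field(default_factory=list)

aim = Factor("Improve health, quality of life and experience post-exacerbation", "aim")
self_mgmt = Factor("Patient self-management of lifestyle", "contributing factor")
smokes = Factor("Whether the patient smokes", "contributing factor")
cessation = Factor("Attendance at/engagement with smoking cessation", "contributing factor",
                   ["% of referred patients completing the programme"])
referral = Factor("Referral to smoking cessation service", "intervention")

# One cause/effect chain from the COPD example: each pair is (cause, effect).
edges = [(referral, cessation), (cessation, smokes), (smokes, self_mgmt), (self_mgmt, aim)]

def chain_to_aim(start, edges):
    """Follow cause/effect arrows from a factor until the aim is reached."""
    lookup = dict((cause.name, effect) for cause, effect in edges)
    chain, node = [start], start
    while node.name in lookup:
        node = lookup[node.name]
        chain.append(node)
    return [factor.name for factor in chain]

print(chain_to_aim(referral, edges))
```

Encoding the diagram this way makes the right-to-left reading explicit: starting from an intervention, following the edges recovers the full cause/effect chain that links it to the aim.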
Box 1 Selected examples of action effect diagram (AED) features related to the chronic obstructive pulmonary disease (COPD) AED in figures 2 and 3 (additional examples can be found in online supplementary appendix 1)
Reading the diagram from left to right answers the question, ‘What changes can we make that will result in an improvement?’
▸ Following a single cause/effect chain, one factor influencing the aim is a patient's self-management of their lifestyle including whether the patient smokes. One factor influencing patient smoking behaviour is attendance at and engagement with a smoking cessation service (and so on).
Reading the diagram from right to left answers the question, ‘What are we trying to accomplish?’
The aim should be high-level and patient-focused but specific enough to guide the improvement initiative and subsequent evaluation.
▸ A general aim, improving quality of care for patients with COPD, could be specified as: To improve the health, quality of life and experience of care for patients from hospital X who are discharged following an acute exacerbation of COPD.
The major contributing factors in column 1 should be of a similar type and form a logical group.
▸ In the COPD example, we use the stages of patient care: appropriate care in-hospital, self-management post-exacerbation and quality of additional clinical care post-exacerbation. The three factors are all of a similar type and form a logical group, in this case, contexts of care.
Interventions and implementation activities
Interventions are intended to become part of routine service delivery; implementation activities are carried out by the quality improvement team.
▸ Interventions can aim to improve consistency of an existing service, modify a current service or introduce a new one. Taking ‘consistency of existing service’ as an example, there may be variation in staff competency and confidence in inhaler technique training, requiring improvement to increase consistency and equity of services. An associated implementation activity is ‘Specialised staff education session on inhaler techniques’.
Factors connected by an arrow must be clearly related with no illogical leaps.
▸ The causal link between referral to smoking cessation and whether the patient smokes makes an illogical leap. It is not just the referral itself that influences whether a patient smokes in the future; it also matters what happens following a referral, including whether a patient is motivated to attend or able to complete a smoking cessation programme.
▸ Adding attendance at and engagement with smoking cessation programme helps unpack this connection.
Evidence, predictions, assumptions and measures
Cause/effect relationships may be supported by existing evidence.
▸ An evidence-based cause/effect relationship exists between attendance at and engagement with smoking cessation and whether the patient smokes.19
Measure concepts must be clearly associated with the relevant factor or aim.
▸ The success of staff training to influence patient education for inhaler technique can be measured by the proportion of staff designated as inhaler technique providers who have attended the specialised staff training sessions (indicating extent to which intervention took place) and the impact this training had on percentage of patients who received inhaler education.
Reading from either side, the diagram reflects answers to the questions posed in the model for improvement.9 From left to right, the diagram answers the question, ‘What changes can we make that will result in an improvement?’. From right to left, the diagram answers, ‘What are we trying to accomplish?’. Both questions are key elements of constructing and interpreting the AED.
As a consequence of the cause/effect direction, factors represented on an AED will move from those under direct control on the right of a diagram (e.g. an intervention or implementation activity that is directly actionable by a member of the improvement initiative such as training staff), through those that are under direct influence (e.g. by training staff you have direct influence over patients being taught correct inhaler technique), to those that are only under indirect influence (e.g. patients’ self-management overall or quality of care for patients will be influenced by many factors, only some of which are under direct influence of the QI initiative).
The resulting diagram should clearly represent the rationale and intention of a QI initiative and be sufficiently comprehensive and detailed for a non-expert stakeholder to interpret readily. Additionally, the AED has explicit connectivity to other QI methods to support their systematic use as a suite of methods rather than individual activities (table 2).9 ,19–21
Aim
At the far left of the diagram, the aim describes the overall objective of the improvement effort and provides the central focus for cause/effect chains to converge. The aim should be of sufficient detail and specificity to guide the improvement initiative and subsequent evaluation efforts, should be patient-centred22 and should represent the most specific aspiration that all members of the initiative can agree on. Using this method, interventions and measure concepts should not be included in the aim.
Placement on the left is deliberate, using the natural tendency in readers of Western languages to read from left to right to encourage an initial focus on the aim rather than potential solution(s).23 ,24
Contributing factors
Contributing factors are boxes representing the logical steps required to connect the interventions and the aim, that is, they are caused by the intervention(s) and the achievement of the aim is caused by them. They indicate how the intervention(s) are intended to collectively cause the aim to be achieved. For clarity and objectivity, factors should not include measure concepts or verbs indicating aspiration (e.g. to reduce, to improve).
Factors in the first column, those directly influencing the overall aim, are referred to as major contributing factors. Major here refers to the first (major) division of the aim into things (factors) that contribute to its achievement rather than to their relative importance. They represent a hypothesis: improvements across all major contributing factors are sufficient to achieve the overall aim. Major contributing factors should be of a similar type to one another, or form a logical group, to support review of the programme theory and assessment of missing factors.
Interventions and implementation activities
Interventions and implementation activities specify changes to care delivery and associated activities. The AEM defines an intervention as a change to service delivery that is intended to become routine. This may represent new practice, modifications to current service delivery or efforts to improve the consistency of current service provision.
Distinctions between contributing factors, interventions and implementation activities may be subjective, with the specific needs of an initiative shaping the perceived focus. The definition of an intervention can therefore be problematic, particularly as initiatives spread to different settings and local adaptations are made. The articulation of programme theory is an important step to support intervention definition and clarification of interactions with other factors that influence the overall aim. This also supports reproducibility and transferability of interventions between healthcare settings.25 To facilitate communication, ‘highlighting’ boxes may be added to an AED, surrounding a group of factors that all influence another factor or to signify a collection of factors that make up an intervention (figure 3).
Importantly, given the complex nature of improvement in healthcare and the difficulty of predicting in advance what will work, interventions and implementation activities are not final solutions but ideas to be tested and iteratively modified as feedback is received on their effectiveness.17 This connects the AED to plan-do-study-act cycle methodology (table 2).9 ,19 The impact of this iterative development can be seen in the difference between the AEDs produced at the outset and at the end of a QI initiative (figures 2 and 3). The interventions and implementation activities referred to on the AED, and their development over time, require detailed description in supporting documents in order to be reproducible and support transfer of knowledge.
Interventions are connected (with arrows) through the contributing factors to the overall aim, forming cause/effect chains. There may be any number of factors in a cause/effect chain. The position of a factor on the diagram is determined by its place in the cause/effect chain(s) that link it to the aim, not by an assessment of its importance. The AED does not draw any conclusions about the relative importance of each factor but can be used to structure information that supports evaluation of the relative importance and impact of different factors.
We have developed an aid to support vertical alignment of factors in the diagram once cause/effect chains have been established (see online supplementary appendix 1 example 12). This guidance is only one of potentially several ways to aid the organisation of factors.
The AED is intended to be a live document, with iterations or additions made throughout an improvement initiative. Missing contributing factors, or substantial new evidence (e.g. publication of guidelines, systematic reviews) identified during an initiative, should be added to or amended on the AED. When making changes, users need to consider the influence on evaluation; this may involve decisions at the planning stage about which factors will be held unchanged to support rigorous evaluation.
Evidence, predictions, assumptions and measures
In essence, the cause/effect chains represent the notion that improving against a particular factor will cause improvement against another factor or achievement of the overall aim. The justification for each connection in a cause/effect chain may be based on existing evidence, predictions or assumptions. Predictions may be based on local ideas or on explicit theories of change.26–29 Predicted cause/effect relationships in the diagram can be identified as those in which measure concepts are annotated on each box. Assumed cause/effect relationships are defined as pairs of factors where at least one of the pair is not measured (thereby limiting the ability to assess the cause/effect relationship). Evidenced cause/effect relationships can be annotated to distinguish them from assumed or predicted relationships; for example, in figure 3 we use dotted lines for assumed/predicted relationships and solid lines for evidenced relationships, with an additional annotation to denote the level of evidence associated with the relationship. Measure concepts can also be annotated on evidenced relationships.
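The distinction drawn in this paragraph can be expressed as a simple rule. The helper below is an illustrative sketch only (the function name and encoding are ours, not part of the method): a relationship carrying an evidence annotation is evidenced; otherwise it is predicted when both factors carry measure concepts, and assumed when at least one does not.

```python
# Illustrative classification of a cause/effect relationship on an AED,
# following the definitions in the text: evidenced relationships carry an
# evidence annotation; predicted relationships have measure concepts on
# both factors; assumed relationships leave at least one factor unmeasured.

def classify_relationship(cause_measures, effect_measures, has_evidence):
    """cause_measures/effect_measures: lists of measure concepts for each factor."""
    if has_evidence:
        return "evidenced"  # solid line, annotated with the level of evidence
    if cause_measures and effect_measures:
        return "predicted"  # dotted line; measures allow the prediction to be tested
    return "assumed"        # dotted line; cannot be assessed without measures

print(classify_relationship(["% staff trained"], ["% patients educated"], False))
print(classify_relationship(["% referred"], [], False))
```

A rule of this kind makes explicit why assumed relationships limit evaluation: without a measure on each side of the arrow, the hypothesised cause/effect link cannot be tested against data.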
Measure concepts associated with cause/effect chains are developed into well-defined measures used to test individual predictions and evaluate impact of the intervention(s) (personal communication, Woodcock T, Poots A, Huckvale K, et al, 2014). Evaluation of a programme theory is more comprehensive if there is a distribution of measures across the diagram, from process to outcome measures.30 ,31
Discussion
The AEM provides a thorough specification of a method for articulating the programme theory of a QI initiative through a clear visual representation of cause/effect relationships between an improvement aim and potential interventions, with annotation of related evidence and measure concepts. As well as clearly identifying the components of programme theory expressed in an AED, links to other QI methods are also demonstrated.
The AEM is designed to act as a prospective guide for improvement teams and to support evaluation of the impact, spread and sustainability of QI initiatives by moving away from individually held tacit knowledge.32 Getting a ‘correct’ theory prior to initiation is not the goal of the AEM; iterations of the diagram will occur throughout use of this method. As QI initiatives develop over time, the strengths or weaknesses of the assumed cause/effect chains become apparent, as do other factors that were not considered in the original programme theory.17 This iterative process of theorisation and evaluation helps explain the results of both positive and negative trials and can support both prospective process evaluation30 and post hoc analysis.33 The AEM provides a platform for further research to explore what ‘good’ programme theory is and how it might enable the transfer of learning from one project (e.g. figure 3) to another.
The need for programme theories and logic models is well articulated.6 ,17 ,34–39 However, there is little practical guidance available on how to construct good quality diagrams. The AEM adds to this through explicit articulation of the components of programme theory and their relationship to one another in diagrammatic form, something that other models often lack. Articulating complex concepts in a single diagram plays an important cognitive role in supporting readers to more readily access large amounts of information to support problem solving and inference-making.40 The AED differs from the ‘cause and effect diagram’ or the Ishikawa diagram outlined in QI literature8 as the cause/effect chains do not represent only potential problems inhibiting an improvement attempt, but more generally the hypothesised relationships between actions and the improvement aim, including those necessary for measurement and evaluation. The AEM builds on the key principles of driver diagrams, but with additional clarity regarding functions and purpose of different components of the diagram providing a more scientifically rigorous approach to the development and articulation of programme theory.41 This methodological specification will support further research to evaluate the benefits of using the AEM in practice.19
Our experience of developing programme theory diagrams reveals that the process of construction is as important as the resulting diagram in supporting the planning and delivery of improvement. When the construction process is well facilitated, it enables patients, academics and healthcare professionals to share and make sense of multiple sources of knowledge (including tacit knowledge) and evidence in a manner that minimises conflict, with the AED acting as a boundary object to aid communication between these groups.42 The agreement of a shared aim promotes greater engagement with a wide range of stakeholders and can promote patient-centred conversations. The diagram itself is a powerful communication tool, connecting the strategic and political drivers of senior management with the actions and motivations of frontline staff.43 The AEM does not distinguish or limit scope to certain levels of change but encourages teams to be aware of and consider all relevant factors that can influence the outcome, including those outside the direct control of the QI team.
The engagement of diverse stakeholders in the articulation of programme theory is still a significant challenge and can be best addressed by expert and neutral facilitation throughout the process. Further research is necessary to assess the range of the social functions of the AEM, both through the use of the diagram as a boundary object and through the process of facilitating its creation.
Conclusions
The AEM gives structure to the identification and articulation of programme theory, an important step of QI initiative development. It provides a framework to guide execution and evaluation of an initiative, a focal point for other QI methods and a communication tool to engage stakeholders. A clear definition of what constitutes a well-articulated programme theory is provided to guide the use of the method and assessment of the fidelity of its application.
Contributors JER, TW and DB identified the need for method development and led the intellectual and practical development of the AEM. CM contributed to the development of the AEM. JER and CM prepared the first draft for publication. TW and LI contributed significantly to clarify and articulate the concepts presented in the text and to develop the example COPD diagram and text. All authors contributed to the development and final version of the text.
Funding This article presents independent research commissioned by the National Institute for Health Research (NIHR) under the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) programme for North West London. JR and TW are supported by Improvement Science Fellowships with the Health Foundation. The views expressed in this publication are those of the author(s) and not necessarily those of the Health Foundation, the NHS, the NIHR or the Department of Health.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.