- Evaluation methodology
- Implementation science
- Quality improvement methodologies
- Health services research
Within healthcare services worldwide, there is a continual emphasis on innovation: the development, evaluation and improvement of new and existing healthcare interventions and services to improve patient outcomes. In addition to evaluating efficacy, it is important to evaluate how innovations are used in ‘real-world’ settings. A key part of this is process evaluation: understanding how interventions and services are implemented and engaged with. For example, recent Medical Research Council guidance on researching the effectiveness of complex interventions highlights the importance of measuring implementation and context, including the measurement of ‘fidelity’.1
‘Fidelity’ has been proposed to comprise five related domains: fidelity of design, training, delivery (whether intervention components, as outlined in the intervention protocol, are delivered as planned), receipt (whether participants understand and are able to perform required skills) and enactment (whether participants use skills in daily life).2 Receipt and enactment have both been defined as constructs of ‘engagement’, as they focus on the behaviours of intervention participants rather than those of intervention developers or providers.3 It is nonetheless important to distinguish between them when measuring ‘engagement’. Measuring receipt can help researchers to determine whether participants have received and understood intervention content, while measuring enactment can help researchers to understand whether receiving that content leads to changed behaviour, that is, whether participants enact intervention components in daily life. As enactment is likely to be an intermediate variable on the causal pathway,4 participants’ enactment (or lack of enactment) of intervention skills would be expected to affect intervention outcomes. Understanding both receipt and enactment is therefore crucial in helping researchers and/or service developers to understand whether intervention effects (or lack thereof) may be attributed to levels of engagement.2
Measuring enactment of quality improvement interventions
Measuring enactment may be challenging, as it requires assessing how participants perform intervention skills and behaviours within complex healthcare interventions and services. Moreover, there is no consensus on how best to measure participant enactment, and the research community needs to consider this question collectively.
In this issue of BMJ Quality & Safety, Ginsburg et al 5 describe the development and validation of a novel measure of enactment called the ‘Overall Fidelity Enactment Scale for Complex Interventions’ (OFES-CI). The measure was developed in the context of a quality improvement intervention delivered by healthcare aides in nursing homes to improve resident care, and is intended to be applicable to the full range of quality improvement interventions and other complex interventions. Future research will hopefully establish the validity of the new measure for other intervention contexts and participants.5
The new measure uses ‘expert’ rater scores to quantify enactment. The validity of these scores was established by comparison with coded secondary qualitative data that had been collected during the wider process evaluation. These data comprised diary entries from quality advisors, open-ended survey questions completed by research participants and observations conducted by trained members of the research team.5 The authors demonstrated that the OFES-CI tool was reliable, had face validity and was feasible to implement.5
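To illustrate what establishing the reliability of expert rater scores can involve in practice, the sketch below computes a weighted Cohen’s kappa for two hypothetical raters scoring the same observed sessions. This is not the authors’ actual analysis: the 5-point ordinal scale, the scores and the number of sessions are all invented assumptions for illustration.

```python
# Hypothetical sketch: quantifying inter-rater agreement for expert
# enactment ratings. This is NOT the analysis reported by Ginsburg et al;
# the 5-point ordinal scale and the scores below are invented for
# illustration only.
from sklearn.metrics import cohen_kappa_score

# Enactment ratings (1 = no enactment ... 5 = full enactment) given by
# two independent expert raters to the same ten observed sessions.
rater_a = [4, 3, 5, 2, 4, 4, 3, 5, 1, 4]
rater_b = [4, 3, 4, 2, 5, 4, 3, 5, 2, 4]

# Quadratic weighting penalises large disagreements more heavily than
# near-misses, which suits ordinal rating scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```

By convention, kappa values above roughly 0.8 are read as strong agreement; lower values would prompt further rater training or refinement of the scoring guidance.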
Ginsburg et al have filled an important gap in the literature: a 2017 systematic review found that few previous studies had focused on enactment.3 This may reflect a lack of consensus about the role of enactment in fidelity assessments. For example, some researchers have suggested that enactment is difficult to measure because enactment behaviours are easily confused with intervention outcomes.6 Others may regard enactment as a measure of intervention effectiveness rather than of fidelity.7 8 Given the scarcity of research on enactment to date, the development of an enactment measure is welcome.
A variety of methods have previously been used to measure engagement (including receipt and enactment), such as participant and provider self-report, review of attendance and other intervention records, direct observation and review of how many intervention components participants used.3 8–10 Within healthcare services research, registry data (eg, Stephens et al 11) and self-report methods (eg, van Schie et al 12) have been used to explore the use of implementation strategies within hospitals following quality improvement interventions. Self-report methods have also been used to determine whether patients enacted COVID-19 remote home monitoring activities (eg, Walton et al 13). However, there is currently no consensus on how best to measure enactment, or engagement more broadly,3 and there have been calls for high-quality measures of engagement (including enactment) that are acceptable, feasible, reliable and valid.3 The study by Ginsburg et al 5 therefore extends previous research by demonstrating one way in which researchers may objectively evaluate enactment with high reliability. This is comparable with the gold-standard measure of fidelity of delivery, in which multiple researchers reliably rate transcripts of audio/video-recorded intervention sessions.2 14 The authors’ approach to measuring enactment is also novel in that it builds on the approach used in objective structured clinical examinations.5
Implications for other complex interventions
The methods outlined by Ginsburg et al 5 may support objective evaluation of enactment in some settings and situations, yet it is important to consider whether and how they can be adapted to evaluate other interventions or services. For example, appropriate methods for measuring enactment may differ depending on the type of intervention and its complexity (see 1 15 16 for a discussion of intervention complexity). Measuring enactment for ‘simple’ interventions, such as medication trials, may differ from measuring enactment of ‘complex’ interventions delivered within randomised controlled trials, or of ‘complex’ interventions already embedded within healthcare services. Ginsburg et al 5 discuss how their study provided an opportunity to evaluate enactment within controlled settings (comparable with clinical examinations), but note that further research is needed to explore how these methods can be used to observe enactment within real-world settings.5 While the methods may be useful for identifying training gaps that need to be addressed, it is not yet known whether they can be used to evaluate real-world enactment of intervention skills or to inform quality improvement initiatives in real-world settings.
Enactment concerns whether intervention participants use intervention skills in practice; enactment, and engagement more generally, have therefore often been explored within healthcare interventions from the perspectives of patients and carers (eg, 9 13 17). The study by Ginsburg et al 5 has a quality improvement focus and evaluates the implementation of an intervention for healthcare providers: the healthcare aides are the recipients of the intervention, which is delivered by the intervention team. The measure of enactment therefore explores whether these healthcare providers enact the intervention activities/skills in practice.5 This contrasts with other studies of enactment in which healthcare professionals deliver the intervention to patients and/or family members as recipients (eg, 9 17). However, within quality improvement, interventions may target healthcare providers and/or patients or family members as recipients (eg, Kamity et al 18). It is therefore important that researchers select measures of enactment appropriately tailored to the recipients of their quality improvement intervention. Previous research has suggested that researchers should develop high-quality measures of fidelity, including enactment, that are reliable, valid, acceptable and practical to use.3 10 However, the appropriateness and feasibility of enactment measures may depend on who the intervention recipients are. For example, because the method outlined by Ginsburg et al 5 builds on clinical examination approaches, it may be best adapted for other quality improvement interventions targeting healthcare professional behaviours. By contrast, measuring enactment of quality improvement intervention behaviours by patients and/or family members may require other methods, such as self-report, video-recording or ethnography.
Some complex interventions and quality improvement interventions have clearly identified roles in terms of who provides and who receives the intervention: for example, interventions in which healthcare providers are given an intervention manual and trained to deliver the intervention to patients and/or family members as recipients (eg, 9 17 19), or interventions in which intervention developers deliver an intervention to healthcare providers as recipients (eg, Ginsburg et al 5). This clarity identifies the targets of measurement for fidelity of delivery (the intervention providers) and for receipt and enactment (the intervention recipients). However, quality improvement interventions can be multifaceted and multilevelled, and healthcare providers could be both intervention providers and recipients within the same complex quality improvement package. For instance, healthcare providers could receive parts of the intervention to change their own behaviour, while also being trained to deliver parts of the intervention to patients and/or family members. In such scenarios, researchers would need to measure fidelity of delivery of the intervention components at both levels (those delivered to healthcare providers and those delivered to patients and carers), and equally to measure receipt and enactment of intervention skills/activities at both levels (by healthcare providers and by patients and/or family members). It is therefore important that intervention developers and researchers construct a logic model in which the different levels of the intervention, and the ‘intervention providers’ and ‘intervention recipients’ at each level, are clearly specified. This will support researchers to develop comprehensive fidelity evaluations that include targeted measures of fidelity of delivery, receipt and enactment, as sketched below.
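One hypothetical way to make such a multilevel specification concrete is to encode the logic model as a simple data structure that records, for each intervention component, who delivers it and who receives it; the required fidelity measures then follow mechanically. The component names and roles below are illustrative assumptions, not drawn from any specific study.

```python
# Hypothetical sketch of a multilevel intervention logic model.
# Each component names its provider and recipient; fidelity of delivery
# is measured against the provider, while receipt and enactment are
# measured against the recipient. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    provider: str   # who delivers this component
    recipient: str  # who receives (and should enact) it

components = [
    # Level 1: the intervention team trains healthcare providers.
    Component("training workshop", provider="intervention team",
              recipient="healthcare providers"),
    # Level 2: healthcare providers deliver skills to patients/families.
    Component("bedside coaching", provider="healthcare providers",
              recipient="patients and family members"),
]

for c in components:
    print(f"{c.name}:")
    print(f"  fidelity of delivery -> measure for {c.provider}")
    print(f"  receipt and enactment -> measure for {c.recipient}")
```

Laying the model out this way makes explicit that ‘healthcare providers’ appear as recipients at one level and providers at the next, so both delivery and enactment must be measured for them.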
One limitation and area for future research highlighted by Ginsburg et al 5 is the need to explore the factors that influence enactment, so that strategies to improve enactment can be developed where needed. Factors influencing fidelity, including enactment, have been explored in other studies (eg, 11 13 17 19 20) and offer insight into the steps required to improve the fidelity of interventions and services in future. This emphasises the need to triangulate qualitative and quantitative methods when planning process evaluations.
It is well known that process evaluations should be conducted alongside trials of complex interventions; evaluating the fidelity of complex quality improvement interventions may be less well considered, yet it is equally important. Researchers should therefore consider and measure the fidelity of quality improvement interventions,4 and attention should be given to measuring enactment as part of these evaluations. As discussed, there are many options for measuring enactment, both objective and subjective. While researchers should aim to use high-quality measures of enactment (that is, reliable, valid, practical and acceptable), the type of measure they choose may depend on various factors, including who the recipients of the intervention are, the setting in which enactment is being measured, the type of enactment skills/activities to be measured, the complexity of the intervention and the resources available.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
References
Footnotes
Twitter @HollyWalton15
Contributors HW developed the idea for this editorial and drafted it.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.