
Ingredients for change: revisiting a conceptual framework
J Rycroft-Malone1, A Kitson1, G Harvey1, B McCormack2, K Seers1, A Titchen1, C Estabrooks3

1 RCN Institute, Radcliffe Infirmary, Woodstock Road, Oxford OX2 6HE, UK
2 University of Ulster, Royal Hospitals Trust, Belfast, UK
3 University of Alberta, Canada

Correspondence to: Ms J Rycroft-Malone, Research & Development Fellow, RCN Institute, Radcliffe Infirmary, Woodstock Road, Oxford OX2 6HE, UK; joanne.rycroft-malone@rcn.org.uk

Abstract

Finding ways to deliver care based on the best possible evidence remains an ongoing challenge. Further theoretical developments are presented of a conceptual framework that represents the factors influencing the uptake of evidence into practice. A concept analysis has been conducted on its key elements—evidence, context, and facilitation—leading to a refinement of the framework. While these three essential elements remain key to the process of implementation, changes have been made to their constituent sub-elements, enabling the detail of the framework to be revised. The concept analysis has shown that the relationships between the elements and sub-elements, and their relative importance, need to be better understood when implementing evidence based practice. Increased understanding of these relationships would help staff to plan more effective change strategies. Anecdotal reports suggest that the framework has a good level of validity. It is planned to develop it into a practical tool to aid those involved in planning, implementing, and evaluating the impact of changes in health care.

  • PARIHS framework
  • evidence based practice
  • quality improvement
  • change management
  • learning organisation


“Despite growing acknowledgement within the research community that the implementation of research into practice is a complex and messy task, conceptual models describing the process still tend to be unidimensional, suggesting some linearity and logic.”1

Strategies for improving the delivery of health care at a national and international level include evidence based practice, clinical effectiveness, evidence based clinical guidelines, and audit, and considerable investment is being made in a new infrastructure to support these initiatives. Finding ways to deliver care based on the best possible evidence remains an ongoing challenge. For a healthcare professional entering practice, it would not be unreasonable (although possibly naïve) to expect that, given the political enthusiasm behind such evidence based tools as guidelines, the natural course of action would be for practitioners automatically to use them in their everyday practice. Indeed, some early conceptual models of the implementation of evidence into practice advocated a linear and logical process where the emphasis was on informing and monitoring with a view to changing practice.2,3 More recent experience via projects such as the Promoting Action on Clinical Effectiveness (PACE) programme4 and the South Thames Evidence-based Practice Project (STEP)5 indicates that the reality is messy and challenging and not easily represented by rational models.

BACKGROUND

In 1998 we presented a conceptual framework (the Promoting Action on Research Implementation in Health Services (PARIHS) framework) which we proposed represented the interplay and interdependence of the many factors influencing the uptake of evidence into practice.1 This multidimensional framework was developed in an attempt to represent the complexity of the change process involved in implementing evidence-based practice.

The framework—developed from collective experience gained from research, practice development, and quality improvement projects—suggested that successful implementation can be explained as a function of the relationship between three elements: evidence, context, and facilitation. The framework considers these elements to have a dynamic, simultaneous relationship, and each is positioned on a “high” to “low” continuum. The hypothesis offered is that, for implementation of evidence to be successful, there needs to be clarity about the nature of the evidence being used, the quality of the context, and the type of facilitation needed to ensure a successful change process. Theoretical and retrospective analysis of four studies undertaken by the Royal College of Nursing (RCN) Institute led to a proposal that the most successful implementation occurs when (1) the evidence is scientifically robust and matches professional consensus and patient needs (“high” evidence); (2) the context is receptive to change, with sympathetic cultures, strong leadership, and appropriate monitoring and feedback systems (“high” context); and (3) there is appropriate facilitation of change, with input from skilled external and internal facilitators (“high” facilitation).
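Expressed schematically, as a shorthand for this hypothesis rather than a quantitative model, the relationship can be written as SI = f(E, C, F), where SI denotes successful implementation and E, C, and F denote evidence, context, and facilitation, each located on its “low” to “high” continuum.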

Since the framework was proposed and opened up to scrutiny it has received much attention and interest and appears to have face validity to those in the field implementing evidence-based practice and quality improvement. It now seems timely to report on the theoretical developments that the framework has undergone since its conception.

CLARIFYING THE ELEMENTS OF THE FRAMEWORK

Although the implementation framework presented in 1998 resonates with people's real and practical experiences of applying new knowledge to practice, the elements outlined had not been subjected to a systematic analysis drawing on the relevant literature. Morse6 argues that clarity of the concepts used in practice is needed as relatively little time has been spent on examining the theoretical foundations that underpin the delivery of health care. To provide some theoretical rigour and conceptual clarity to the constituent elements, a concept analysis of each of the three dimensions was conducted.6,7 One of the outcomes of this process has been a refinement of the framework (shown in Appendix 1).

Appendix 1: Contents of framework 2001

Evidence

In 1998 we identified three different strands of “evidence” that can be used in clinical decision making—research, clinical experience, and patient preferences. Research evidence was located on a continuum from high to low, with “high” represented by systematic reviews and randomised controlled trials (RCTs) and “low” by anecdotal and descriptive information. Similarly, patient preferences were located on a high to low continuum where “high” was indicated by a partnership approach to decision making and “low” by a lack of involvement. It was suggested that, to maximise the uptake of evidence into practice, evidence on all three continua needs to be located towards “high”.

The concept analysis work undertaken still identifies research evidence as only one part of the picture in clinical decision making, but more thought has been given to the factors that might influence its uptake in practice. Importantly, while evidence from research, clinical experience, and the users of health care are recognised as important sources, it is argued here that, whatever source of knowledge is drawn upon, it needs to have been subjected to scrutiny and found to be credible.8 This acknowledges the importance of conducting critical appraisal before considering implementation.

Different sources of evidence will be valued in different ways by different groups of people—for example, research evidence can run counter to patient preferences. One such example concerns the use of beta-interferon in the treatment of multiple sclerosis, where the Appraisal Committee of the National Institute for Clinical Excellence considers that “the modest clinical benefit (of beta-interferon) appears to be outweighed by its very high cost”.9 Upshur10 considers that “the production, interpretation, dissemination and implementation of evidence is a social process subject to the forces and vagaries of social life”. He draws our attention to the “unarticulated or unacknowledged extra-evidential considerations”, such as values, that underpin evidence, and concludes that “the evidence we seek is partly constituted by what we value and what we need to know” (box 1). In recognising the social aspects of evidence, we propose in the framework that individuals and teams need to reach a consensus about the results of the appraisal so that the knowledge becomes valued (or not) as a valid source of evidence.

Box 1 Evidence as a social construct

Ferlie et al11 report a case study of the uptake of low molecular weight heparin (described as a novel drug) as thromboprophylaxis after elective orthopaedic surgery for hips and knees. Its use in orthopaedic surgery is controversial because the research evidence base is variable. In this study the use of the drug was found to be influenced by the beliefs of a core group of orthopaedic surgeons, the views of practitioners about “formal science” versus a different model of knowledge based on tacit or experiential knowledge, and other factors such as whether a critical mass of colleagues adopted the new prescribing practices. There was no consensus among the orthopaedic surgeons about the evidence base of the practice and, as a result, uptake of the new drug was “patchy”. It is possible that the chances of successful implementation may have been increased by articulating the differences in opinion and perhaps by seeking to reach a consensus.

Additional refinements as a result of the concept analysis include revisions to the indicators attached to the three strands of evidence. In 1998 “high” evidence was considered to be evidence derived from systematic reviews and RCTs. While evidence from high quality RCTs can answer clinical questions about effectiveness, there are many types of clinical problems and issues which are not about effectiveness. In such cases research evidence drawn from other designs and paradigms is appropriate. For example, it would be more appropriate to conduct an exploratory interview study to investigate patients' experience of having a leg ulcer than to carry out an RCT. The framework therefore now acknowledges that different types of research evidence are needed to answer different clinical questions. What is critical to implementation is that well conceived, designed, and conducted research is drawn upon, whether quantitative or qualitative.

While research evidence aids decision making, it does not dictate the process; clinical experience or professional craft knowledge also make a contribution. Titchen12 defined professional craft knowledge or professional “know how” as the often tacit and sometimes intuitive knowledge that is embedded in practice, and argued that it can be made more widely available if it is “articulated, critically reviewed, generated and validated by individual practitioners and their peers, through critical reflection on practice”. As a result, it is possible for professional craft knowledge to be transformed to propositional knowledge and verified consensually through critical reflection, critique, and debate of clinical experience. Thus, when knowledge from clinical experience is used as part of decision making, we argue that it should be made explicit and verified through these processes. Similarly, Upshur13 argues that clinical common sense needs to be evaluated to the same extent as trial evidence, otherwise no honing of clinical reasoning is possible. This can be achieved by testing against the professional craft knowledge, research knowledge, and theoretical knowledge of others.12

Ferlie et al11 also acknowledge the need for reflective practitioners to examine their own practice to identify local patterns. They argue that there are grounds for some scepticism about whether interventions based on the principles of evidence-based medicine will be successfully implemented, on three counts:

  • Much of the science is seen in practice as inconclusive or contested.

  • Groups of professionals retain substantial autonomy over their work practices and tend to resist external interventions from research and development functions.

  • Much of the clinical knowledge is tacit and experiential, so the findings of evidence-based medicine are not accepted as valid for practice.

They suggest a “good practice model” which has a strong emphasis on continuing professional development and individual learning rather than formal evidence-based principles. The idea of embracing “reflection on practice” has also been highlighted by Berwick,14 who shows how a major improvement in patient care (in outcome measures for cardiovascular disease) would not have been recognised from a conventional perspective such as an RCT design. In this example, surgeons were encouraged through a process of reflection and evaluation to use their “hard” data to interpret more effectively what was working and what required improvement. These data did not meet the RCT standard but, despite this, Berwick argues that the surgeons' openness and willingness to debate how they could improve their clinical outcomes added incalculable value to the data.

The third strand of evidence in the framework—that from users of the health service—has also been reconsidered. While there is a great deal of rhetoric about patient or user involvement in decision making and care, the issue is complex and poorly understood. Recognising that patient preferences should be part of the decision making process, we suggest that patient narratives and experiences should also be seen as a valid source of evidence (box 2). While it is still unclear how best to combine these sources in human based rather than data based decision making,15 participatory interactions between practitioners and patients appear to be important.

Box 2 Patient experience: a valid source of evidence

The RCN publication “Ouch! Sort it out. Children's experiences of pain”16 provides an example of how patients' stories (in this case, children's) can be incorporated into the development of an evidence linked guideline.17 Through techniques such as a drama workshop, video workshop, graffiti wall, sentence completion, play, and interviews, children were given the opportunity to share their experiences of treatment and care and what they would like to happen when they are in pain. This patient experience was then used as one of the evidence sources which fed into the development of a guideline that also incorporated research evidence and expert or practitioner opinion evidence.

Even though the concept analysis has resulted in some of the elements and indicators being refined, we still suggest that it is appropriate to consider evidence on a continuum of “high” to “low”. The same logic therefore applies: successful implementation is more likely to occur when research evidence, clinical experience, and patient experience are all located towards “high”. The challenge remains, however, to understand better how these are combined in clinical decision making and how more effective care can be delivered by melding this broader evidence base.

Context

The context in which healthcare practice occurs is almost infinitely varied, as care takes place in a variety of settings, communities, and cultures that are all influenced by, for example, economic, social, political, fiscal, historical, and psychosocial factors. In the PARIHS framework the term “context” is used to refer to the environment or setting in which people receive healthcare services or, in the context of getting research evidence into practice, “the environment or setting in which the proposed change is to be implemented”.1 In its simplest form, the term here means the physical environment in which practice takes place. Such an environment has boundaries and structures that together shape the environment for practice. The concept analysis suggested that the dominant environment in which healthcare practice currently exists is that of Chin's multiple clusters and multiple systems environment—that is, a turbulent environment characterised by competing “force fields” that are never static.18 However, the concept analysis also identified key characteristics of an environment conducive to research utilisation—namely, clearly defined boundaries; clarity about decision making processes; clarity about patterns of power and authority; resources, information, and feedback systems; active management of the competing “force fields”; and systems in place that enable dynamic processes of change and continuous development.

Bate19 suggests that the way organisational culture is understood in the context of practice is essential to understanding how best to bring about cultural change. Crucially, many diverse and conflicting cultures can operate within the organisational context. We are therefore reiterating the need to have an understanding of the prevailing values and beliefs as a prerequisite to introducing and sustaining change (box 3).

Box 3 Contextual analysis

Dopson et al4 in their evaluation of the Promoting Action on Clinical Effectiveness (PACE) programme highlight the importance of understanding the political and cultural context for achieving change. A contextual analysis identified the receptiveness of the context(s) to change. For the PACE projects this information was not only important in identifying potential barriers to change (individuals and structures), but was also useful when planning strategies to overcome obstacles or to engage support.

The concept of a learning organisation20 continues to be key to a context that facilitates change. Organisations that value the contributions of individuals, are open, have decentralised decision making, a shared vision, and quality organisational systems tend to build innovative, facilitative cultures.20–24 The idea that leadership summarises the nature of human relationships, with effective leadership giving rise to clear roles, effective teamwork, and effective organisational structures,1,25 also remains a key facet of the framework for getting evidence into practice. It is argued that “transformational leaders”, as opposed to those who “command and control”, have the ability to transform cultures to create a context more conducive to the integration of evidence into practice.24 Further research is required to understand the cause and effect relationship between leadership and culture (box 4).

Box 4 The impact of culture in implementation

Ward and McCormack26 in a two year action research study of ward leaders' development of cultural change found that the dominant organisational culture had a significant impact on the ability of ward leaders to bring about changes in practice. While it was possible to track progress towards the development of a learning culture by systematic evaluation, the power of the overall organisational culture impacted significantly on the practice contexts. The culture here was characterised by an aggressive application of disciplinary procedures, top down and imposed audit, and a traditional approach to learning. In addition, repeated questions were asked about the value and time scale of the project in the current climate of increasing efficiency and drive for immediate answers. While the ward leaders became more empowered to bring about changes in their own practice contexts, the development programme itself did not impact on the overall organisational culture, resulting in the failure of changes to be fully implemented.

Reconsideration has been given to the sub-element of context labelled in 1998 as “measurement”. Measurement is both part of the research process that generates evidence on which to base practice and part of the evaluation or feedback process that demonstrates whether or not changes in practice are appropriate, effective, or efficient. As such it is an essential aspect of an environment wishing to implement evidence-based practice. However, recent healthcare reforms such as clinical governance indicate that a reliance on what could be termed “hard” outcome measures alone may not capture the complexities of today's organisations. Moreover, since we argue for the use of three strands of “evidence” in clinical decision making, this stance requires broader evaluative techniques. We therefore suggest that it is more appropriate to consider monitoring and feedback under the umbrella term “evaluation”, and acknowledge that multiple methods and sources of feedback should be incorporated into an organisation's evaluative frameworks.

The “high” to “low” continuum prevails for the concept of context. Thus, the chances of successful implementation are enhanced in contexts characterised by, for example, clarity of roles, decentralised decision making, valued staff, and a reliance on multiple sources of information about performance.

Facilitation

Our experience has been that facilitators have a key role to play in helping individuals and teams to understand what they need to change, and how, in order to apply evidence to practice.27 Facilitation is “a technique by which one person makes things easier for others”.1 In quality improvement and evidence-based practice more generally, there are other strategies thought to promote individual and organisational change—for example, educational outreach (sometimes referred to as academic detailing), audit and feedback, and computer based reminders28,29—and the research evidence suggests that the most effective implementation strategies are those that adopt a multifaceted approach, combining techniques.

The concept analysis has resulted in a reconceptualisation of facilitation as presented in the framework in 1998. Although the body of literature about the role of change agents is large, there are few explicit or rigorous evaluations of the concept of facilitation. The factors that emerged from the concept analysis which distinguish the facilitator from other roles are:

  • It is an appointed role as opposed to that of, for example, an opinion leader who acts as a change agent through his/her own personal reputation and influence.

  • The role may be internal or external (or encompass a combined internal/external approach) to the organisation in which the change is being implemented.

  • The role is about helping and enabling rather than telling or persuading.

  • The focus of facilitation can encompass a broad spectrum of purposes, ranging from the provision of help to achieve a specific task to using methods which enable individuals and teams to review their attitudes, habits, skills, ways of thinking, and working.

  • Given the broad focus of the facilitation concept, a wide range of facilitator roles is possible with corresponding skills and attributes needed to fulfil the role effectively.

The sub-elements of facilitation in the refined framework are purpose, role, and skills and attributes. “High” facilitation relates to the presence of appropriate facilitation and “low” to its absence or to inappropriate facilitation. The term “appropriate” may encompass a range of roles and interventions depending on the needs of the situation.

The purpose of facilitation is represented as a continuum from “Task” to “Holistic”. The literature shows that facilitation can vary from a focused process of providing help and support to achieve a specific task (“Task”)—for example, the “Oxford Model”30—to a more complex, holistic process of enabling teams and individuals to analyse, reflect on, and change their own attitudes, behaviours, and ways of working (“Holistic”)—for example, “Critical Companionship”12 (box 5). As the approach moves towards the holistic end, facilitation is increasingly concerned with addressing the whole situation and the whole person(s). The key to “appropriate” facilitation is matching the purpose, role, and skills (each of which can exist as a series of continua) to the needs of the situation.

Box 5 Types of facilitation roles

The “Oxford Model”30 of health promotion activity provides an example of what we have termed “Task” based facilitation. It was established in the early 1980s to introduce more systematic approaches to coronary heart disease prevention in primary health care. Facilitation in this case was applied as a practical technique to support the establishment of systems such as health checks and screening for high risk patients.

Titchen's model of facilitation described as “Critical Companionship”12 is an example of “Holistic” facilitation. The emphasis is on facilitating learning from practice and on the co-creation of new knowledge through critical reflection and dialogue between the practitioner (or learner) and an experienced facilitator (critical companion). The role of the companion is to help individuals and groups of practitioners to use the new theoretical insights to transform self and social systems that hinder improvements in practice.

It is still unclear, however, how effective these different models of facilitation are relative to one another. Evidence suggests that, in some situations, a practical task-orientated approach is effective31; on the other hand, there is evidence to suggest that practitioners do not apply research findings deductively but need support to particularise them.4,12,32

The role and the skills and attributes of facilitators have been considered against the purposes of Task based and Holistic facilitation. In a task orientated approach, for example, the role is likely to be practical and to focus on administering, supporting, and taking on specific tasks where necessary. In contrast, an “enabling” facilitator role is more likely to be developmental, seeking to explore and release the inherent potential of individuals. When it comes to the skills and attributes required of a facilitator, a wide repertoire of skills, processes, and strategies is needed, on which the facilitator can draw depending on the particular context and purpose. The expertise therefore lies in having the flexibility to recognise the requirements of any given situation and to adapt accordingly.

The question remains as to whether, and how, facilitation is conceptually distinct from the change agent strategies described as educational outreach and local opinion leaders. Elements of the educational outreach visit approach certainly appear to be evident in some of the facilitation models studied in the concept analysis—for example, those described by Fullard et al33 and Cockburn et al.34 Bero et al28 in their review comment specifically on the lack of a common approach across different studies in terms of how particular interventions are categorised, which makes the process of reviewing the effectiveness of roles across a number of studies highly complex. One distinction between the different roles may be whether the change agent is working internally or externally to the environment in which the change is being implemented. For example, facilitators can be external or internal to the organisation, whereas opinion leaders are often internal and educational outreach workers (or academic detailers) tend to be external. There are also other aspects peculiar to a role—for example, academic detailers tend to use marketing principles, techniques, and materials to reinforce their message, an approach not explicitly acknowledged as part of the role of a facilitator. In addition, some facilitators explicitly focus on the need to address and develop organisational systems and culture, whereas this would not be a primary concern of the role of an educational outreach worker, academic detailer, or opinion leader. Another possible distinction might be that the role and methods employed in the educational outreach model do not cover as broad a spectrum of interventions as those described within the concept of facilitation. Overall, however, it would appear that the distinction between the facilitator role and that of other change agents, particularly educational outreach workers, is far from clear.

In terms of the framework as a whole, the analysis suggests that the facilitator has a key role to play, not only in affecting the context in which change is taking place, but also in working with practitioners to make sense of the “evidence” being implemented. The interaction between facilitation, context, and evidence is still not fully understood.

Summary

The concept analysis has allowed the project team to scrutinise the sub-elements and indicators and consequently to refine further the framework presented in 1998. However, it would be premature to suggest that this represents a final version. This process has highlighted the timeliness of conducting further work on the framework and the concepts contained within it.

FUTURE PLANS

Research work is currently underway to validate and refine the framework further. This phase will provide a data set with which to check that the framework is comprehensive, and will also give us an opportunity to ensure that the language of the framework is comprehensible and relevant to practitioners and those implementing evidence-based practice.

We are aware that the framework has been used by others to structure change and develop practice. In these projects the main elements of the framework have been used as an aide memoire for thinking through the areas that require targeting. So, for example, with regard to evidence, practitioners would be encouraged to seek out research evidence about the topic identified for change, see how it matches their own clinical experience and that of their colleagues, and ascertain how congruent it is with the experience of patients. The framework has also been used to evaluate projects, serving as a post hoc checklist.

Given its apparent usefulness, the framework has the potential to be developed into a practical tool to aid those involved in planning, implementing, and evaluating the impact of changes in health care. It is envisaged that a “toolkit” will be developed which will include a self-assessment tool (based on the elements of the framework) to assess readiness for change; this would yield a set of scores indicating the sort of intervention(s) and work required to facilitate implementation. The toolkit will also outline the methods by which change and progress throughout an implementation project can be tracked. The piloting and testing of such a toolkit will form part of a bigger implementation project to be led by the RCN Institute in collaboration with project partners.
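As a purely illustrative sketch of how such a self-assessment might be summarised, the element and sub-element names below follow the framework, but the 1–5 rating scale, the averaging, and the “high”/“low” threshold are hypothetical choices for demonstration only and are not part of the published framework or the planned toolkit.

from statistics import mean

# Hypothetical self-assessment ratings (1 = low, 5 = high) for one proposed change.
# Element and sub-element labels follow the PARIHS framework; the numbers are invented.
assessment = {
    "evidence": {"research": 4, "clinical experience": 3, "patient experience": 2},
    "context": {"culture": 2, "leadership": 3, "evaluation": 2},
    "facilitation": {"purpose": 3, "role": 2, "skills and attributes": 3},
}

def summarise(sub_scores, threshold=3.0):
    """Average the sub-element ratings and label the element 'high' or 'low'."""
    score = mean(sub_scores.values())
    return score, ("high" if score >= threshold else "low")

for element, sub_scores in assessment.items():
    score, rating = summarise(sub_scores)
    # The weakest sub-element points to where implementation work might be targeted.
    weakest = min(sub_scores, key=sub_scores.get)
    print(f"{element}: {score:.1f} ({rating}); weakest sub-element: {weakest}")

Run on the hypothetical ratings above, this would flag context and facilitation as “low”, suggesting that work on culture, evaluation systems, and the facilitator role would be needed before implementation.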

CONCLUSIONS

The implicit assumption of this framework remains that the implementation of good quality research is likely to lead to improved outcomes for patients and is therefore important for quality patient care. While health professionals are still seeking ways of achieving this, the framework presents our conceptualisation of the key ingredients. The concept analysis of the three elements that constitute the framework has been important in verifying and challenging the content as it was originally presented in 1998. The essential elements of the framework are the same in that we believe evidence, context, and facilitation remain key to the process of implementation. The changes made represent the results of a process of critical thinking about what constitutes the sub-elements of evidence, context, and facilitation and how these relate to successful implementation of evidence-based practice. It is therefore the detail of the framework that has been revised. The sub-elements now reflect a critical review of the literature and are more distinct than those first proposed in 1998. Although some of the content of the framework has been refined, its basic mechanics remain the same. We still propose that successful implementation is more likely to occur when evidence and context are located towards “high” and appropriate facilitation has been instigated. However, the concept analysis process has also highlighted that more needs to be understood about the relationship(s) between evidence, context, and facilitation and their relative importance when implementing evidence-based practice. If we can increase our understanding of these relationships, we will be better placed to help staff plan and implement more effective change strategies. It is hoped that the planned future work will begin to uncover the answers to some of these complex issues.

Key messages

  • Getting evidence into practice is not realistically represented by models that propose that implementation is a linear and logical process.

  • The PARIHS framework attempts to represent the complexity of the processes involved in implementation and a refinement of a model first published in 1998 is presented.

  • The nature of the evidence, the quality of the context, and the type of facilitation all impact simultaneously on whether implementation is successful.

  • Implementation is more likely to be successful when:

    • Evidence (research, clinical experience, and patient experience) is well conceived, designed, and executed and there is consensus about it.

    • The context in which the evidence is being implemented is characterised by clarity of roles, decentralised decision making, transformational leadership, and a reliance on multiple sources of information on performance.

    • Facilitation mechanisms appropriate to the needs of the situation have been instigated.

  • One of the intended outcomes of this project is to provide practitioners with a tool and resource that they can use to plan, implement, and track their own strategies for change.

Acknowledgments

The authors thank Alison Loftus-Hills for her contribution to the evidence base of facilitation.
