Context The apparent inconsistency between the widespread use of quality improvement collaboratives and the available evidence heightens the importance of thoroughly understanding the relative strength of the approach. More insight into factors influencing outcome would mean future collaboratives could be tailored in ways designed to increase their chances of success. This review describes potential determinants of team success and how they relate to effectiveness.
Method We searched Medline, CINAHL, Embase, Cochrane, and PsycINFO databases from January 1995 to June 2006. The 1995–2006 search was updated in June 2009. Reference lists of included papers were reviewed to identify additional papers. We included papers that were written in English, contained data about the effectiveness of collaboratives, had a healthcare setting, met our definition for collaborative, and quantitatively assessed a relationship between any determinant and any effect parameter.
Findings Of 1367 abstracts identified, 23 papers (reporting on 26 collaboratives) provided information on potential determinants and their relationship with effectiveness. We categorised potential determinants of success using the definition for collaboratives as a template. Numerous potential determinants were tested, but only a few related to empirical effectiveness. Some aspects of teamwork and participation in specific collaborative activities enhanced short-term success. If teams remained intact and continued to gather data, chances of long-term success were higher. There is no empirical evidence of positive effects of leadership support, time and resources.
Conclusions These outcomes provide guidance to organisers, participants and researchers of collaboratives. To advance knowledge in this area we propose a more systematic exploration of potential determinants by applying theory and practice-based knowledge and by performing methodologically sound studies that clearly set out to test such determinants.
Quality improvement collaboratives are a widespread improvement approach. Multidisciplinary teams participate in a structured process to identify best practice and change strategies, apply improvement methods, report results and share information about ways of achieving improvement.1 Estimates of the total investment in collaboratives are unavailable,2 but they represent substantial investments of time, effort and funding from the healthcare system.3
Despite the enormous popularity of collaboratives, evidence supporting their effectiveness remains limited.1 Uncontrolled studies report dramatic improvements in patient care and organisational performance, but almost all have design limitations (eg, no baseline data, no accounting for secular trends, selected samples from self-selected sites). Most controlled studies also have important flaws, including possible differences in baseline measurement, limited data about characteristics of control sites and possible contamination.
The apparent inconsistency between the widespread use of collaboratives and the evidence supporting their effectiveness heightens the importance of thoroughly understanding the collaborative approach. Collaboratives are characterised by participation of teams from multiple sites, and some are more successful than others in achieving their goals. This review describes the potential determinants of team success that have been tested in collaboratives and how they relate to effectiveness. Given the flawed methodologies in most evaluations of collaboratives,1 we take a pragmatic approach in this review, describing the potential determinants that have been examined in the literature and counting how often a significant relationship was found between any determinant and any effect parameter. This enables the best use to be made of potentially valuable information from previous research while acknowledging its limitations.
Data sources and searches
We searched Medline, CINAHL, Embase, Cochrane and PsycINFO databases from January 1995 to June 2006 inclusive1 for free text terms, combining the keywords (non-medical subject heading (MeSH)): ‘quality and improvement and collaborative’ or ‘(series or project) and breakthrough’. We reviewed reference lists of included papers to identify any additional papers.
The 1995–2006 systematic search was updated in June 2009. We limited our search to Medline, as an efficacy evaluation of our search method showed that a combination of Medline with a review of reference lists was as effective as searching all databases. Reference lists of included papers were scanned for additional papers.
Two authors (MH and LS) assessed each potentially eligible paper. We included papers that were written in English, contained data about effectiveness of care processes or outcomes, had a healthcare setting, and met all five features in the following definition derived from a rigorous analysis of the theoretical literature.4–8
A quality improvement collaborative is an organised, multifaceted approach to quality improvement that involves five essential features:1
There is a specified topic, that is, a subject with large variations in care or gaps between best and current practice.
Clinical experts and quality-improvement experts provide ideas and support for improvement. They identify, consolidate, clarify and share scientific knowledge, best practice and improvement knowledge.
A critical mass of multiprofessional teams from multiple sites are willing to improve care, share and participate.
A model for improvement focuses on setting clear, measurable targets, collecting data and testing changes quickly on a small scale to advance reinvention and learning by doing.
The collaborative process involves a series of structured activities (meetings, listserv, visiting facilitators) in a given timeframe to advance improvement, exchange ideas and share the participating teams' experiences.
All papers meeting our inclusion criteria were reviewed anew, and MH and LS independently selected papers that quantitatively assessed a relationship between any determinant and any effect parameter.
Disagreements were resolved by consensus.
Two authors (MH and LS or HB) independently extracted study characteristics, types of determinants and their relationships with effectiveness. Disagreements were resolved by consensus. All relationships between any potential determinant and any effect parameter were included.
Data synthesis and analysis
We could not use formal meta-analytical techniques for pooling results because of the many different effect measures.
From each paper we included every comparison in which a relationship between any potential determinant and any effect parameter was tested, and categorised it using the essential features of collaboratives. We grouped similar determinants tested in various papers and merely counted how often a potential determinant was tested and how often a statistically significant relationship was found. As most of the included papers used fundamentally flawed methodology (see Results) it was impossible to weigh the evidence for a determinant of success.
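The tallying described above — grouping similar determinants across papers and counting how often each was tested and how often a significant relationship was reported — can be sketched as follows. The category and determinant names and the rows of data here are purely illustrative, not taken from the review's extraction tables.

```python
from collections import defaultdict

# Each extracted comparison: (feature category, determinant tested,
# was a statistically significant relationship found?). Illustrative rows only.
comparisons = [
    ("teams", "team climate", True),
    ("teams", "team climate", False),
    ("teams", "leadership support", False),
    ("process", "collaborative activities", True),
]

def tally(rows):
    """Count, per determinant, how often it was tested and how often a
    statistically significant relationship was reported (vote counting)."""
    counts = defaultdict(lambda: {"tested": 0, "significant": 0})
    for category, determinant, significant in rows:
        key = (category, determinant)
        counts[key]["tested"] += 1
        counts[key]["significant"] += int(significant)
    return dict(counts)

result = tally(comparisons)
```

Note that this vote-counting approach deliberately does not weight comparisons by study quality or sample size, which is why (as stated above) the evidence for any single determinant cannot be weighed.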
We report separately about the research on determinants of short-term success (changes following participation) and long-term success (maintenance and spread).
Table 1 shows the flow of papers through the review. Of the 121 eligible papers, we excluded 98 because they did not quantitatively test potential determinants of success; 60% of these papers hypothesised about determinants in their discussion sections. We selected 23 papers (describing 26 collaboratives) that assessed a relationship between any potential determinant and any effect parameter (table 2 and online appendix).9–31
All papers were published after 2000; 17 (74%) were published between 2004 and 2007. The various authors related potential determinants of success to a range of effect parameters. Twenty-two papers describe the determinants of short-term success by exploring the changes that the collaborative introduced. Some papers give information about determinants of maintenance22,31 and spread of changes.22,23,31 One paper used an effect parameter combining short-term changes (following participation) and long-term changes (both maintenance and spread).24
Thirteen (57%) papers describe effect parameters related to the collaborative topic; for example, Chin et al16 used ‘the percentage of patients with a glycated haemoglobin test in the past 12 months’ as an effect parameter in their Diabetes Collaborative. This diversity of topics produced wide variation in effect parameters. Nine papers related data from patient chart reviews to determinants, two used patient reports of care received as an effect parameter, and two related team self-reported data about progress to determinants. The remaining 10 studies (44%) used more generic effect parameters such as ‘success’ (various definitions), ‘number of changes’ or ‘depth of changes’ (ie, expected impact), and related team self-reported progress data to potential determinants of success. Overall, half of the papers (12 papers, 52%) relied on self-reported effectiveness data from participating teams to assess effectiveness.
The authors of 13 papers (57%) set out to empirically test determinants of success; eight of these related theory-based determinants to effect parameters. The remaining 10 papers made observations about determinants following completion of the project in subgroup analyses using the potential determinant as a grouping variable.
Determinants of short-term success: changes after participation in a collaborative (220 comparisons) (table 3)
The specified topic
Two papers (three comparisons) tested whether the topic influenced success.9,27 Bartlett et al9 (one comparison) concluded that clinical projects were more successful than operational improvement projects. They based their conclusions on the proportion of successfully completed projects (32 of 33 (97%) clinical vs 24 of 30 (80%) operational). However, using the number of projects started as the denominator (as most other studies do) shows little difference between the two types of projects (32 of 47 (68%) clinical vs 24 of 39 (62%) operational).
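The sensitivity of this conclusion to the choice of denominator can be checked directly; this sketch simply recomputes the proportions from the counts reported above.

```python
# Success proportions from Bartlett et al, under two denominator choices.

# Denominator = completed projects (as the authors reported):
clinical_completed = 32 / 33       # large apparent gap between project types
operational_completed = 24 / 30

# Denominator = projects started (as most other studies use):
clinical_started = 32 / 47         # the gap largely disappears
operational_started = 24 / 39
```

The denominator choice alone accounts for the apparent superiority of clinical projects, which is why the review recounts the figures with projects started as the denominator.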
For four chronic illness collaboratives, Shortell et al27 showed that, in one of two comparisons tested, a focus on asthma versus diabetes, congestive heart failure or depression significantly influenced the ‘depth of changes’. In asthma care it appeared more difficult to successfully implement changes with a large expected impact.
Experts provide ideas and support for improvement
Nembhard20 (one comparison) showed that teams from organisations making significant improvements rated the collaborative faculty (ie, the experts) significantly higher on helpfulness than did teams from organisations that made more modest improvements (p<0.05).
Four papers tested whether the perceived value of the ‘ideas and support provided’ was related to success (nine comparisons).20–22,31 In general, this was not the case. One comparison showed a positive and significant relationship: good performance was significantly correlated with ‘team reported learning new ideas in the first learning session’ (r=0.4877, p<0.014).22
A critical mass of multiprofessional teams from multiple sites
Many potential influences of the sites hosting the teams were tested. Landon et al and McInnes et al, studying the same HIV collaborative, concluded that there were no significant differences in organisational effect parameters26 or in any of the professional performance scores25 between clinics whose participation was mandatory and those that participated voluntarily (14 comparisons).
Four papers examined ‘organisational readiness and commitment’ and how this influenced success (eight comparisons), with mixed results.21–23,27 Shortell et al27 concluded that an organisation's culture seems important (four comparisons). Maintaining a balance among the cultural values of participation, achievement, openness to innovation, and adherence to rules and accountability tended toward a significant positive association with the ‘number of changes’ (p<0.10), but not with ‘depth of changes’.27 They described a negative association between a focus on patient satisfaction and both the number of changes (p<0.01) and the depth of changes (p<0.05).27 Whether ‘participation has added value for the organisation as a whole’ (one comparison) or whether ‘the facility was ready to test changes’ (one comparison) did not influence the success rate.22 Two papers21,23 described different results for the importance of alignment of goals; this positively influenced success in one,21 but not in the other23 (two comparisons).
Six papers explicitly studied the influence of ‘leadership support’ (10 comparisons). Five papers found no relationship with success (six comparisons).18,19,21–23,31 Meredith et al31 (four comparisons) showed that ‘lack of leadership support’ was negatively associated with the number of changes (r=−0.52, p=0.03) and ‘leadership support’ positively associated (r=0.58, p<0.05); leadership support was not significantly related to the number of perceived big successes (two comparisons).
Four papers explored the importance of ‘resources’ and ‘time’ (seven and five comparisons, respectively).21–23,31 Overall, no relationship with success appeared. Meredith et al31 found mixed results (two comparisons): successfully obtaining resources was negatively associated with the number of changes (r=−0.47, p=0.06), while no relationship with the number of perceived ‘big successes’ appeared.
The influence of baseline performance remains unclear (15 comparisons). Two papers concluded that there was more success with lower baseline performance (five of six comparisons; one comparison showed no effect).17,29 In contrast, Landon et al25 (eight comparisons) and Lannon et al18 (one comparison) concluded that the collaborative was not more effective for clinics with lower baseline performance.
Four papers explored the potential positive effect of ‘involving staff’ in the organisation (seven comparisons).20,21,23,31 Most (six) comparisons showed no additional effect. Meredith et al31 showed that extra effort in working on changes with the physician staff was positively related to the number of changes (r=0.63, p<0.01) and tended toward a significant association with the number of perceived big successes (r=0.48, p=?).
Two papers (11 comparisons) assessed the mediating effect of ‘engagement of nurses’.13,30 The papers showed a varying influence: four comparisons showed positive effects and seven comparisons showed no relationship. Nurse engagement was associated with higher levels of education, knowledge and self-management behaviours of patients with chronic heart failure,30 and with better outcomes for reconciling medications.13
Four papers studied multiple aspects of ‘team climate’ (17 comparisons).19,21,23,27 Ten comparisons in two papers showed no effect of a shared vision regarding how and what to improve, nor did they show positive effects of the perceived safety regarding the chosen methods of investigation and conflict solving.21,23 Seven comparisons in three papers showed that teams that interacted well were more successful: teams whose members understood one another's strengths and weaknesses, had mutual respect, showed high perceived team effectiveness and high team functioning scores, and had worked as a team previously.19,21,27
Six papers explored the influence of ‘team composition’ on success (14 comparisons).13,21–23,27,31 Shortell et al27 (one comparison) showed that team size exhibited a curvilinear effect with regard to the depth of changes: larger teams had a positive effect on the depth of the changes up to a certain size (3.90 (1.42), p<0.10). Then, as the teams became even larger, a negative association appeared (−0.19 (0.09), p<0.05). Four papers (five comparisons) assessing the mediating effect of a physician on the team found no relationship with effect.21–23,27 Positive effects of an administrator on the team depended on the effect parameter taken; one of two comparisons was significant.13 Strong team leadership (two comparisons) improved success in one study,21 but had no additional effect in another.23 Having a team champion (two comparisons) produced no positive effect,27 nor did the composite score ‘team characteristics’ in Meredith et al.31
Three papers (four comparisons) assessing the importance of ‘previous quality-improvement experience’ for successful improvement found no relation overall.18,19,21 Schouten et al19 (two comparisons) found a negative relationship between ‘previous knowledge of and experience with improvement’ and indicators reflecting well organised services (−3.34; 95% CI −5.31 to −1.38), and found no relationship with length of hospital stay.
The model for improvement
Young et al14 tested the influence of ‘goal setting’ with inconsistent results—it led to improvement (five comparisons) for some services, but not for others (five comparisons).
Seven papers explored the influence of various ‘measurement aspects’ (17 comparisons), with no clear conclusion about positive influence.13,15,20–23,31 Amarasingham et al15 (one comparison) showed that better ‘automation and usability of the information system’ was associated (p=0.02) with better patient outcomes. The results from three other papers did not confirm this (four comparisons).21,23,31 No positive effect of ‘whether or not the teams gathered data from patients’ was found (two comparisons),21,23 nor did ‘preparedness for measurement’ result in more success (three comparisons).21,22 The influence of using Plan–Do–Study–Act cycles was tested in seven comparisons;13,20,21,23 four showed positive influences of frequency (p<0.05 and p<0.001),13 perceived helpfulness (p<0.10)20 and the ability to quickly complete the first test of change (p=0.005).21,23
The collaborative process
Five papers explored the influence of ‘intensity of intervention’ on effect (36 comparisons).10–12,16,21 Carlhed et al11 compared a traditional, time- and resource-consuming collaborative approach with one involving fewer meetings (two instead of four) and web-based education and communication. No significant difference in improvement of effect parameters (five comparisons) appeared. Similarly, Chin et al16 measured whether standard versus high-intensity activities following participation in a collaborative had more effect; they showed mixed results (22 comparisons), with some significant improvements (three comparisons) and decrements (three comparisons) in diabetes care. Three papers (nine comparisons) explored the influence of degree of engagement, describing no effect of greater engagement overall.10,12,21
Three papers looked at the influence of specific collaborative activities (eight comparisons).20,22,23 Three comparisons showed the positive influence of ‘being on preconference calls’ (two comparisons) and the ‘timeliness of submission of reports’ (one comparison).22,23 Similarly, teams from organisations that made significant improvement consistently (four of five comparisons) rated learning session interaction, listserv discussions, monthly report exchange and monthly conference calls as significantly helpful (p<0.05).20
The importance of ‘exchange and sharing’ remains unclear (22 comparisons).21,22,28 Marsteller et al28 found a positive relationship in six of 18 comparisons, while Mills and Weeks21 and Weeks et al22 (four comparisons) found no relationship between exchanging and sharing information and successful improvement.
Determinants of long-term success: sustained changes (26 comparisons) and spread of changes (76 comparisons)
Two papers explored determinants of teams successfully sustaining changes at 6 and 18 months after completion of the collaborative.22,31 Three papers explored determinants of spreading changes to other teams or organisations after 18, 12 and 6 months, respectively (see table 3).22,23,31
Experts provide ideas and support for improvement
Two papers explored the influence of the experts' ‘ideas and support for improvement’ on long-term success (eight comparisons).22,31 Teams that reported getting new ideas during the learning sessions were more likely to maintain gains (one comparison, p=0.018), but not more likely to spread gains (two comparisons), than teams that did not get new ideas.22 Teams that learned methods to test changes (two comparisons) were more likely to apply them in different physical locations (p=0.002) or to different topics (p=0.002), but not more likely to maintain gains (one comparison), than teams that did not learn these methods.22 Simplifying the tools provided (two comparisons) did not help maintenance or spread.31
Critical mass of multiprofessional teams from multiple sites
‘Organisational readiness and commitment’ (eight comparisons) did not seem to influence long-term success,22,23 nor did ‘frontline staff support’ (two comparisons)23 nor ‘leadership support’ (nine comparisons).22,23,31
The availability of ‘resources’ did not (in eight of nine comparisons) promote maintenance or spread,22,31 with one exception: Mills et al23 concluded that teams reporting ‘sufficient resources to meet their aims’ in the first face-to-face meeting were more likely to spread information to other hospitals (Spearman's r=0.50, p=0.043). Similarly, ‘sufficient time’ did not (in six of seven comparisons) promote maintenance or spread,22,23,31 again with one exception: ‘perceived time constraints’ (one comparison, r=0.64, p=0.006) were positively associated with spread.31
Whether the ‘team was intact’ at the 6-month follow-up (three comparisons) was related to sustained changes and spread.22
The model for improvement
In 17 comparisons, various aspects of measurement were explored. Weeks et al22 (three comparisons) showed that teams that continued to collect data were more likely than teams that did not to maintain gains (p≤0.05) and to expand their efforts to different locations and topics (p≤0.05). Similarly, Mills et al23 (two comparisons) concluded that teams that gathered data from patients at the end of the project were more likely than others to spread information to other units within their hospital (Spearman's r=0.485, p=0.041), but not to other hospitals. Meredith et al31 (two comparisons) found that perceived information technology problems (r=0.61, p=0.009) were positively related to spread but not to maintenance.
The collaborative process
‘Exchange and sharing’ (six comparisons) produced mixed results; using information provided by other teams (two of three comparisons) helped maintenance and spread, but sharing information with other teams did not (three comparisons).22
This paper represents the first systematic review of the determinants of success of quality improvement collaboratives. It shows that while many determinants were tested, only a few related to empirical success. Twenty-two papers provided data about determinants of short-term success. Overall, 59 of the 220 tests of potential determinants of short-term success were significant. Some aspects of teamwork enhanced short-term success, as did participation in specific collaborative activities. The influence of baseline performance, engagement of nurses, and exchange and sharing remains—thus far—unclear. Only two papers studied whether the changes introduced by the collaborative were sustained; three others examined spread. Of the 102 comparisons tested, 15 were significant: 4 of 26 for maintenance and 11 of 76 for spread. The limited number of studies makes it difficult to conclude how to obtain sustained effects or spread. If teams remain intact and continue to gather data, chances of maintenance and spread may be better.
We excluded 98 papers (table 1) that did not quantitatively test potential determinants of success. Roughly 60%, however, hypothesised about determinants of success in their discussion sections. Based on popular management and quality theories, they suggested that successful collaboratives require a broad range of actions and supportive contextual factors, including leadership, sufficient resources, measurement and teamwork. Overall, the included papers provided little empirical support for these hypothesised determinants of success.
This review aimed to assemble information on determinants of success by summarising the outcomes of subgroup analyses. The review has limitations. Overall, the majority of studies used uncontrolled designs, measured self-selected samples from self-selected sites using self-reported effectiveness data, and reported subgroup analyses without prespecifying their hypotheses. Most research therefore does not meet quality criteria as set, for example, by the Cochrane Collaboration,32 nor does it meet the criteria for subgroup analyses recently described by Sun et al.33 In addition, subgroup analyses assess the relationship between a potential determinant of success and the improvement achieved by relating the variation in effect to the variation in the determinant score. However, the papers provided little to no information on variation among participating sites in either the effect parameter or the potential determinant of success. A lack of variation in the effect parameter or the potential determinant of success—for example, because of selection bias—might be responsible for not finding a relationship between a determinant and the collaborative's success. As in any review, we may have missed relevant studies. However, we checked our 1995–2006 systematic search with free text words against a search strategy that included MeSH terms based on key words in the relevant studies. This did not reveal any new studies. An evaluation of the efficacy of this search method led us to limit our update to the Medline database. Our search was limited to English-language journals. This might introduce publication bias if the determinants described in these studies differ systematically from those appearing in other languages. The way in which collaboratives were reported within the papers sometimes made it difficult to determine selection criteria and acceptance rates for the intervention and control groups in the evaluations.
Occasionally multiple papers reported on the same collaboratives; however, most used different effect parameters or different determinants of success. When determinants and effect parameters partly overlapped, this may have resulted in an overestimation of the influence of the determinants tested.21,23 In our opinion, excluding the Mills et al23 paper—which provided unique data on spread—from table 3 would not affect our results in any substantive way.
The conclusions must be seen as a necessary preliminary to building an evidence base for determinants of success of quality improvement collaboratives. They provide a basis for generating hypotheses that need to be tested in future research using rigorous designs, valid data and valid subgroup analyses. The outcomes of this review provide guidance for future organisers, participants and researchers about which determinants might increase a collaborative's chances of success. To advance knowledge in this area we propose a more systematic exploration of potential determinants of success using methodologically strong study designs in which theory-based and practice-based knowledge are applied to the essential features of collaboratives. This review shows that the authors of eight papers (35%) clearly set out to empirically explore theory-based determinants. These studies focus on interaction and sharing based on social network theory; on the influence of social psychological processes and contextual factors as described in the theory of diffusion of innovations; and on team effectiveness, drawing on research into team performance, organisational learning and microsystems. Even the theory-based determinants did not consistently relate to success. This might be because there was little variation between participating sites in either the effect parameter or the potential determinant of success; because the theoretical concepts were not optimally operationalised; or because other important features of the theory were not tested. Perhaps other theories that might apply to the collaborative model have been overlooked. It is therefore worthwhile to systematically extract and operationalise determinants from the various theories that apply to collaborative improvement. The multi-dimensionality of ‘success’ makes it important to be very specific as to the exact hypothesis. Determinants may be effective for one aspect of success and ineffective for another.
As well as applying this theory-based knowledge to the collaborative approach, we should consider experts' opinions of success factors. These relate to the focus of the collaborative, the participants and their host organisations, and the style and method of implementing the collaborative. Much experience has been gained from running collaboratives, which could guide future research towards determinants of success or failure.4–8
Contributors All authors conceived and designed the original study. MEJLH, LMTS and HB collected and analysed the data; RPTMG interpreted the data. All authors drafted and revised the manuscript, and approved the final version. MEJLH acts as the guarantor.
Funding No funding was obtained for this study.
Competing interests All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author). MEJLH and RPTMG have no competing interests. LMTS works at an institution that also organises Quality Improvement Collaboratives. HB has been, in previous work, responsible for funding and organising Quality Improvement Collaboratives.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement Additional data are included in the appendix.