What gets published: the characteristics of quality improvement research articles from low- and middle-income countries
- 1 Institute for Healthcare Improvement, Cambridge, Massachusetts, USA
- 2 Department of Pediatrics, University of North Carolina School of Medicine, Chapel Hill, North Carolina, USA
- 3 Department of Medicine, Weill Cornell Medical Center, New York, New York, USA
- Correspondence to Zoe K Sifrim, Low and Middle Income Countries, Institute for Healthcare Improvement, 20 University Road, 7th Floor, Cambridge, MA 02138, USA
Contributors All authors contributed to the overall conceptualisation and study design of the systematic review. ZKS designed and executed the search, analysed the data, and drafted the initial manuscript. KSM designed the search and analysis strategy and edited and revised the manuscript. PMB edited and revised the manuscript. All authors have agreed on the final version of the manuscript.
- Accepted 20 February 2012
- Published Online First 23 March 2012
Objectives Reports of quality improvement (QI) research from low- and middle-income countries (LMICs) remain sparse in the scientific literature. The authors reviewed the published literature to describe the characteristics of such reports.
Methods The authors conducted a systematic search for QI research articles from LMICs catalogued in the PubMed databases prior to December 2011, complemented by recommendations from experts in the field. Articles were categorised based on bibliometric and research characteristics. Twenty papers were randomly selected for narrative analysis regarding strategies used to present the methods and results of interventions.
Results Seventy-six articles met the inclusion criteria. Publication rate accelerated over time, particularly among observational studies. Most studies did not use a concurrent control group; pre-/post-study designs were most common overall. Four papers were published in top-tier journals, 17 in journals at the top of their specialty and 20 in quality-specific journals. Among the papers selected for narrative analysis, four distinct components were observed in most: a description of the problem state, a description of the improvement processes and tools, a separate description of the interventions tested and a description of the evaluation methods.
Discussion The small number of articles identified by this review suggests that publication of QI research from LMICs remains challenging. However, recent increases in publication rates, especially among observational studies, may attest to greater interest in the topic among scientific audiences. Though the authors are not able to compare this sample with unpublished papers, the four components observed in the narrative analysis seem to strengthen QI research reports.
- Quality improvement
- less-developed countries
- healthcare systems
- health services research
- quality improvement methodologies
- breakthrough groups
- comparative effectiveness research
- implementation science
Health system strengthening has been widely cited in recent years by the WHO, and other leading global health agencies, as an essential strategy for improving population-level health outcomes in resource-constrained settings.1 2 Effective interventions for the prevention and treatment of major causes of morbidity and mortality exist worldwide,3 4 yet major gaps persist between healthcare needs and health system performance.1 These gaps threaten the ability of low- and middle-income countries (LMICs) to achieve important national and international goals, including the Millennium Development Goals (MDGs).5 These frustrating realities have forced many public health advocates to issue calls for a new direction in science towards the discovery of approaches that will compel health systems to ‘deliver’ or ‘implement’ what we already know.2 6
Quality improvement (QI) is an established strategy for strengthening health systems in a variety of contexts,6 and it has increasingly been applied to improve health system performance in LMICs.7–9 Since 1990, the US Agency for International Development (USAID) has used QI methods to improve health system performance in multiple countries,10 and numerous organisations now use this method globally.11–14 Despite the application of QI methods in LMICs, reports describing this work remain sparse in the scientific literature, a scarcity that may impede further diffusion of QI interventions.
The SQUIRE guidelines,15 developed to guide authors writing about QI research, recognised that iterative, context-specific QI research may not lend itself to the traditional reporting format geared towards experimental protocols in tightly controlled research settings.16 The challenge of describing QI results in a way that is easily understood by a traditional scientific audience has created a bottleneck for dissemination of QI reports, which has slowed the spread of QI as a health systems strengthening strategy in LMICs.
Here, we seek to understand how QI efforts from LMICs are currently reported in the scientific literature. We report the results of a systematic review of published QI research from LMICs describing the evolution of publishing in this space over the past 20 years. We catalogue the study designs, the clinical and geographic target areas, the journals most frequently publishing QI work and authors frequently appearing in the literature. From a sample of the articles, we reflect on the strategies QI researchers may use to report QI interventions in a manner convincing to scientific audiences.
Using the global search terms ‘quality improvement’ or ‘system(s) strengthening’ combined with one or more of a string of terms used to denote the setting as an LMIC (web appendix 1), we searched the PubMed databases for papers appearing there prior to December 2011. Included articles present original research describing the results of a QI intervention in an LMIC, as defined by the International Monetary Fund.17 We excluded all articles that were not results-oriented, as the purpose of this review was to focus on how reports of QI interventions in LMICs are presented in the scientific literature. In addition to the search, we solicited recommendations of articles from experts in the field.
Papers meeting the inclusion criteria were categorised based on the year in which they were published, the journal, the institutions or organisations involved in the work, the country and region in which the intervention took place, the clinical target area, and the study design. To determine the authoring institutions, we included all organisations mentioned in the text as an implementer or funder, and all non-hospital, non-university and non-in-country government agencies listed in the authors’ affiliations.
We categorised journals as ‘major-general,’ ‘major-specific,’ ‘quality-specific,’ or ‘other.’ Major-general journals were defined as those with a 1-year impact factor greater than 10, not geared towards any specific medical/health discipline.18 Major-specific journals were those listed by the ISI Web of Knowledge as being among the top ten journals in their category.18 Quality-specific journals were those that included the word ‘quality’ in their title. All journals not falling in those categories were considered ‘other.’ The geographic regions were those defined by the World Bank.19 Studies covering interventions in multiple countries or regions were counted in all relevant countries/regions. In considering study design, we used the definitions found in box 1 to categorise the papers.
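The journal categorisation rules above can be summarised as a simple decision function. The sketch below is purely illustrative: the function and parameter names are ours, and the review itself relied on ISI Web of Knowledge listings rather than any automated classification.

```python
def categorise_journal(impact_factor, discipline_specific, top10_in_specialty, title):
    """Apply the review's journal categories, in order of precedence.

    Illustrative sketch only; all names are hypothetical, not from the paper.
    """
    # Major-general: 1-year impact factor > 10, no specific discipline
    if impact_factor > 10 and not discipline_specific:
        return "major-general"
    # Major-specific: top ten in its ISI Web of Knowledge category
    if top10_in_specialty:
        return "major-specific"
    # Quality-specific: 'quality' appears in the journal title
    if "quality" in title.lower():
        return "quality-specific"
    # Everything else
    return "other"
```

For example, a general journal with an impact factor of 30 would fall into ‘major-general,’ while a low-impact specialty journal without ‘quality’ in its title would fall into ‘other.’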
Study design definitions
Observational: Makes a one-time observation, with no comparison time period or group.
Pre/post: Compares a pre-intervention state to a post-intervention state, with no concurrent comparison group.
Time series: Uses time-series analysis to evaluate the project, or compares the situation at more than two periods in time, with no concurrent comparison group.
Quasi-experimental: Compares the change over time in an intervention group with the change over time in a non-randomised comparison or control group.
Randomised: Compares the change over time in an intervention group with the change over time in a randomly assigned control group.
Meta-analysis: Evaluates multiple studies in aggregate.
We analysed papers for trends across categories, using two-proportion z tests to determine associations among the various attributes of papers, where appropriate. Calculations were carried out in Minitab 16 (Minitab, 2011).
A random sample of 20 papers was selected for narrative analysis of the strategies used to present and describe the methods and results of quality interventions. A coding scheme to describe key elements was developed through an initial brainstorm and was then tested and iteratively refined through application to the random sample.
The PubMed search yielded 291 articles, of which 52 met the inclusion criteria. An additional four articles were referenced in a review article that surfaced in the PubMed search,20 and two relevant studies were referenced in included studies from the search. These 58 articles were supplemented by 11 articles found through solicitation of recommendations from experts in the field. Finally, seven articles known previously by the authors that were not identified through other means were included. Thus, a total of 76 articles were included in this analysis (full list available in web appendix 2).
QI research publication in LMICs has accelerated in recent years, from zero to three articles per year in the 1990s, to one to six annually in the first decade of the 2000s, and to 16 articles in 2010 alone. Eleven articles from 2011 had appeared in PubMed by the end of November of that year.
The 76 articles uncovered were published in 48 distinct journals. Thirty-one of these journals published only one qualifying article. Four articles (5.3%) were published in major-general journals, 17 (22.4%) in major-specific journals, 20 (26.3%) in quality-specific journals and 35 (46.1%) in all other journals. These minor, non-quality journals contributed the most of any category to the spike in publications seen in 2010: 13 of the 16 articles published that year came from this category (figure 1). Major-specific journals and quality journals have each published at a consistent rate (roughly zero to two articles each year) throughout the time period of observed publications, whereas articles have only begun appearing in major-general journals since 2004.
The most frequently observed journals were The International Journal for Quality in Health Care (11 articles), The Joint Commission Journal on Quality and Patient Safety (formerly The Joint Commission Journal on Quality Improvement, five articles) and Health Policy and Planning (five articles). These journals did not experience the same 2010 spike in publication seen across all journals (only 1 of the 16 articles published in 2010 was found in these three journals). The majority of the papers published since 2010 appeared in journals that had never before published a qualifying paper (n=22 of 27).
The single most common study design among the articles was a pre-/post-design. Twenty-nine articles (38.2%) used a pre-/post-analysis to evaluate the impact of a specific QI intervention. Sixteen articles (21.1%) were purely observational with no comparison to a previous state or control group, and 15 (19.7%) used a time-series design. Thus, 60 studies out of the 76 (78.9%) did not use a concurrent comparison or control group. Eight articles (10.5%) used a non-randomised, quasi-experimental design; four (5.3%) used a randomised controlled design; two (2.6%) were meta-analyses; and two studies (2.6%) did not describe their methods in sufficient detail to determine the design. The meta-analyses did not include any previously published studies that overlapped with our inclusion criteria.
The distribution of study designs stayed roughly constant across each type of journal. However, two of the four randomised studies were published in the major-general journals. The study designs demonstrated significant change over time, with increasing numbers of observational studies in recent years (8.2%, n=4, of all papers prior to 2010 vs 44.4%, n=12, since then, p=0.0002) and decreasing numbers of pre-/post-studies (46.9%, n=23 vs 22.2%, n=6, p=0.03) and quasi-experimental studies (16.3%, n=8 vs 0%, n=0, p=0.03) (figure 2).
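As an illustration of the comparisons above, the change in observational studies (4 of 49 papers prior to 2010 vs 12 of 27 since) can be checked with a two-proportion z-test. The authors report running their calculations in Minitab 16; the helper below is our own stdlib-only sketch of the standard pooled-variance form of that test, not the paper's code.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate.

    Minimal illustrative sketch; the review's authors used Minitab 16.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, 2 * (1 - phi)

# Observational studies: 4 of 49 papers before 2010 vs 12 of 27 since
z, p = two_proportion_z(4, 49, 12, 27)  # z ≈ 3.71, p ≈ 0.0002
```

The resulting p-value of roughly 0.0002 matches the figure reported for the observational-study comparison.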
A wide range of institutions has participated in QI research in LMICs. Forty-three different organisations were involved in the 76 studies. The institutions most frequently observed were the Clinton Foundation (six articles), University Research Co., LLC (five), USAID (five), the WHO (four) and FHI (three). The Clinton Foundation is a recent entrant into this field; all of its articles have been published since 2008, including three in 2010.
The majority of the 76 reports of QI activities were from work in sub-Saharan Africa (n=42 or 55.3%), followed by Latin America and the Caribbean (n=15 or 19.7%). South Africa hosted the most interventions of any single country (n=7 or 9.2%). Among the 42 studies in sub-Saharan Africa, more than half (n=23) took place in Eastern Africa, 28.6% (n=12) in Western Africa and 16.7% (n=7) in Southern Africa. Only one (2.4%) was set in a Central African country, the Democratic Republic of Congo.21
East and South Asian interventions have appeared significantly more frequently in the literature in recent years; 12 papers from those regions have been published since 2007 (26.7% of the total since 2007), compared to no papers prior to 2007 (p=0.0009). Over the same time period, Latin American studies appeared significantly less frequently. Thirteen papers were published from that region prior to 2007 (37.1% of the total), versus two since 2007 (4.9% of the total) (p=0.0002). The distribution of settings was relatively constant across the three types of journals, although all four of the articles in major-general journals were set in sub-Saharan Africa. No study set in the Middle East and North Africa was published in a major-specific journal.
The most frequently targeted systems for improvement were those related to maternal and/or child health (n=23 or 30.3%), general facility quality (ie, QI applied on a system-wide basis or across multiple clinical objectives; n=12 or 15.8%), HIV/AIDS (n=7 or 9.2%) and tuberculosis (n=7 or 9.2%). The pace of publication in these areas was similar to the overall pace of publication, with accelerated publication seen within the past five years. All of the HIV-specific articles were published in 2009 and 2010.
Twenty articles were randomly selected for further full-text analysis to better understand how QI authors describe their methods and results for scientific audiences. These articles were representative of the full list of papers in terms of both journal type and study design.
We observed four distinct topic areas that QI authors generally present to explain their research: the assessment of the initial problem state; the improvement principles, processes and tools used; the actual interventions; and the methods for evaluating the effects of the intervention. Table 1 demonstrates the heterogeneous strategies used to describe each component, as well as the type of results that each paper reported.
We set out to describe the evolution and characteristics of results-focused papers on quality improvement in LMICs published in the scientific literature over the past two decades. Our analysis reveals that the pace and nature of QI publication from low-resource settings have changed considerably. QI methods have diffused throughout the scholarly landscape, with new entrants to the field authoring papers and a range of journals recently declaring an interest in publishing in this space. Likewise, the study designs have shifted somewhat; observational studies of interventions were more common in recent years, perhaps reflecting a growing interest in QI methods. However, even before this shift, the majority of QI research from LMICs over the past two decades did not use a concurrent control group.
The accelerating pace and range of publication uncovered in this review indicate considerable opportunities for QI practitioners from LMICs to disseminate their findings in the peer-reviewed literature. Nonetheless, the 76 articles likely represent a very small portion of the QI efforts ever undertaken in LMICs. The small number of articles that we were able to find may be due in part to inconsistent descriptors for QI and to the limitations of traditional search strategies for identifying such articles.20 42 However, we supplemented the systematic search with a generous inclusion of articles recommended by senior experts in the field. We are thus confident that we have uncovered a significant portion of the English-language QI reports from LMICs. We conclude that the small number of QI reports we encountered is an accurate reflection of the low number of results-focused QI reports that meet the requirements of peer-reviewers and editors.
Perhaps our most important finding is that studies lacking a concurrent control group make up the majority of QI reports from LMIC settings. QI study designs that use historical data as the counterfactual (so-called ‘historical controls’ or pre-/post-studies) are not at odds with the publication requirements of most journals. In the absence of a concurrent control, however, QI authors must use other methods to establish the plausibility of a causal link between QI interventions and observed results.43 We observed some strategies that may be effective for doing so through our narrative analysis.
Without a comparison set of unpublished papers, we cannot make authoritative claims about our observations. Four distinct components of the research were often described in papers in our sample: the problem state, the improvement process, the interventions tested and the study design or evaluation methods used. While we cannot know if these components are also present in most unpublished papers, their presence throughout our sample deserves comment. These components may be important because together they seem to strengthen the linkage between QI interventions undertaken and results observed.
The description of the problem state is a key element of the report, because it frames the intervention as a remedy to specific shortcomings in care. In our sample, the most frequently cited gaps were health outcomes failures, knowledge or capacity gaps and process failures. Outcomes failures alone may in some cases be sufficient to establish the need for an improvement intervention (as was the case for three papers in our sample of 20); however, we would argue that in most cases a process failure or knowledge gap provides a better foundation on which to build the plausibility of an effective improvement intervention. Indeed, 15 of the papers used at least one of these two strategies to describe the problem state. Hermida et al33 and Doherty et al31 each offer particularly good examples of these approaches.
An effective description of improvement activities may be important because it can illustrate how specific QI processes or instruments brought about change ideas for the health system. For example, most papers in our sample cited the use of workshops or meetings (n=9), performance feedback mechanisms (n=8) and audit or record review (n=7). One model to follow might be Weinberg et al,41 who were able to describe the usefulness of QI tools without coming across as promotional. They referenced Nolan and colleagues' Model for Improvement,44 using a paragraph to explain the model in the abstract, prior to describing how it was specifically brought to bear in a brainstorming session. The authors also published and explained the process maps representing pre-improvement, mid-improvement and final processes in the two intervention hospitals.
The credibility of research findings hinges in part on clarity around the intervention. We observed that the improvement process was almost never framed as the primary ‘intervention,’ but rather as the method used to arrive at the specific healthcare intervention. Clarity on the intervention requires that it be described in sufficient detail to allow the change, or components of it, to be replicated in other settings. For instance, most papers described changes to systems or processes in care delivery (n=13) and staffing or human resource changes, including educational interventions (n=12). Many papers also reported the clinical changes introduced (n=8). The more detail provided, the better the chances that readers can test similar interventions in their own settings, and the more plausible it becomes to sceptics that the changes introduced account for the improved results observed.
Although detailed descriptions of the problem state, the improvement process and the interventions implemented go a long way towards establishing the plausibility of scientific claims, none of these topics can supplant a thorough description of the study design and evaluation methods. Indeed, virtually all (97.4%) of the 76 papers described their study design. The study design will affect the types of results that can be reported. As in any scientific paper, a detailed description of the evaluation methods, including a justification for the study design, sampling methods, data collection and analysis, with all assumptions explained, is expected and should be described in enough detail to be repeated.
In the papers we observed, these four elements—descriptions of the problem state, improvement activities, changes implemented and evaluation methods—seemed to lend credibility to implied causation of improved results, and allow for repetition of methods in new settings. We noted a variety of formats that were effective; for example, Doherty et al31, Hermida et al33 and Amoroso et al22 offer helpful models for structure and the use of figures and tables. By covering each of the four elements distinctly and clearly, we felt that these authors helped readers to understand the sequence of steps for conceptualising, developing, implementing and evaluating improvements in care processes.
While the evidence base for QI is growing, results-focused QI reports from LMICs remain poorly represented in peer-reviewed journals. This stems, in part, from the difficulty of matching the expectations of journal editors and reviewers with the realities of conducting results-focused QI research. However, this review indicates that QI reports have been welcomed in recent years in the peer-reviewed literature, and the surge of observational studies may indicate editorial interest in QI methods. While the results of our narrative analysis are neither exhaustive nor predictive, we distilled our observations to a short list of questions that prospective authors may consider as they write up QI reports for publication (box 2). The examples we have uncovered offer useful guidance for QI researchers hoping to publish their work from LMICs and support their efforts to bring QI lessons and knowledge to a wider scholarly audience.
Recommendations for authors: sections to cover
The problem state
Have you included a clear description of the problem state, or the need to be addressed in the setting in which the improvements took place? Ideally, this should summarise process failures and/or knowledge and capacity gaps.
The improvement process and tools
Have you described improvement processes in neutral terms? Can the reader envision how QI tools or principles led to the tested change ideas?
The interventions tested
Have you described the key interventions that led to improvements? This section is complete if the reader can plausibly accept the relationship between the intervention and the outcomes.
The evaluation methods
Have you explained and justified all methodological decisions in the evaluation?
The authors would like to acknowledge Jane Roessner and Frank Davidoff for their assistance in the development and refinement of this article.
Funding No external funding was obtained for the execution of this project. The salaries of the authors were provided by the Institute for Healthcare Improvement.
Competing interests None.
Provenance and peer review Not commissioned; internally peer reviewed.
Data sharing statement All data have been published in the manuscript or the web appendix.