A case report of evaluating a large-scale health systems improvement project in an uncontrolled setting: a quality improvement initiative in KwaZulu-Natal, South Africa
  1. Kedar S Mate1,2,
  2. Wilbroda Hlolisile Ngidi3,
  3. Jennifer Reddy3,
  4. Wendy Mphatswe3,
  5. Nigel Rollins3,4,
  6. Pierre Barker1,5
  1. Institute for Healthcare Improvement, Cambridge, Massachusetts, USA
  2. Department of Medicine, Weill Cornell Medical Center, New York, New York, USA
  3. 20,000+ Partnership, Department of Pediatrics, University of KwaZulu-Natal, Durban, South Africa
  4. Department of Maternal, Newborn, Child and Adolescent Health and Development, World Health Organization, Geneva, Switzerland
  5. Department of Pediatrics, School of Medicine, University of North Carolina, Chapel Hill, North Carolina, USA

  Correspondence to Dr Kedar S Mate, Department of Medicine, Weill Cornell Medical College, 525 E. 68th Street, New York, NY 10065, USA; kmate{at}ihi.org

Abstract

Objective New approaches are needed to evaluate quality improvement (QI) within large-scale public health efforts. This case report details challenges to large-scale QI evaluation, and proposes solutions relying on adaptive study design.

Study design We used two sequential evaluative methods to study a QI effort to improve delivery of HIV preventive care in public health facilities in three districts in KwaZulu-Natal, South Africa, over a 3-year period. We initially used a cluster randomised controlled trial (RCT) design.

Principal findings During the RCT study period, tensions arose between intervention implementation and the evaluation design: the randomisation unit lost integrity over time, pressure mounted to implement changes across randomisation unit boundaries, and randomisation had been based on administrative rather than functional structures. In response to this loss of design integrity, we switched to a more flexible intervention design and a mixed-methods quasi-experimental evaluation combining qualitative analysis with an interrupted time series quantitative analysis.

Conclusions Cluster RCT designs may not be optimal for evaluating complex interventions to improve implementation in uncontrolled ‘real world’ settings. More flexible, context-sensitive evaluation designs better balance the need to adapt the intervention to implementation challenges during the evaluation against the need for data rigorous enough to assess effectiveness. Our case study involved HIV care in a resource-limited setting, but these issues likely apply to complex improvement interventions in other settings.

  • Continuous quality improvement
  • Health services research
  • Randomised controlled trial
  • Evaluation methodology
