Debates over the degree to which standards of evidence and methods from traditional clinical research can or should apply to quality improvement (QI) have recurred over the past 10 years.1–4 When, if ever, do we need a randomised controlled trial (RCT) demonstrating benefit to decide that an intervention has worked? Can we recommend QI interventions for widespread adoption even without supportive RCTs? On one side of the debate, some have argued that QI and the RCT are like oil and water—never the twain shall mix. Certainly, many have argued, we should not presume that RCTs represent the gold standard for evidence in QI.
On the face of it, the report by Mate et al5 supports this oil and water view of RCTs and QI interventions. The authors report their struggles conducting a pragmatic, multisite RCT of a complex intervention to reduce perinatal transmission of HIV in KwaZulu-Natal Province, South Africa. The intervention included socioadaptive strategies,6,7 such as engaging local health system leaders, securing a commitment to the aims of the project, and providing participating health centres with the tools to perform data-driven improvement cycles. It also promoted specific best practices for key steps in the prevention of perinatal transmission of HIV (eg, increasing the proportion of women receiving early antenatal care that includes HIV counselling and testing, increasing the proportion of mothers with low CD4 counts who receive treatment, and so on). The authors initially planned to evaluate this complex intervention using an equally complex study design—a step-wedge, cluster RCT involving 48 clusters of clinics (for a total of 222 individual clinics) in three waves of intervention and control sites; hence, the ‘step-wedge’ label.
It will come as no surprise to most readers that this double dose of complexity—from the intervention itself and the trial design—overwhelmed …