Background Guideline developers addressing quality of evidence commonly confront studies with missing data.
Objectives To develop a framework for assessing risk of bias resulting from missing participant data for continuous outcomes in systematic reviews.
Methods We developed a range of progressively more stringent imputation strategies to challenge the robustness of the pooled estimates. We applied our approach to two systematic reviews.
Results We used five sources of data for imputing means for participants with missing data: [A] the best mean score among the intervention arms of included trials, [B] the best mean score among the control arms of included trials, [C] the mean score from the control arm of the same trial, [D] the worst mean score among the intervention arms of included trials, and [E] the worst mean score among the control arms of included trials. Using these sources of data, we developed four progressively more stringent imputation strategies. In the first example review, effect estimates diminished and lost statistical significance as the strategies became more stringent, suggesting the need to rate down confidence in the effect estimates for risk of bias. In the second review, effect estimates remained statistically significant even under the most stringent strategy, suggesting that missing data do not undermine confidence in the results.
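The logic of such a sensitivity analysis can be sketched as follows. This is a minimal illustration, not the paper's actual method: the trial data are hypothetical, pooling is simplified to a fixed-effect, sample-size-weighted mean difference, and only two of the strategies are shown (imputing each arm's missing participants with its own observed mean, versus the worst case against the intervention using sources [E] and [A]).

```python
def impute_arm_mean(obs_mean, n_obs, n_miss, imputed_mean):
    """Weighted average of observed participants and imputed missing ones."""
    return (obs_mean * n_obs + imputed_mean * n_miss) / (n_obs + n_miss)

def pooled_mean_difference(trials, int_impute, ctl_impute):
    """Pool per-trial mean differences, weighting by total sample size.

    int_impute / ctl_impute: functions mapping (trial, all_trials) to the
    mean imputed for that arm's missing participants.
    """
    diffs, weights = [], []
    for t in trials:
        mi = impute_arm_mean(t["int_mean"], t["int_obs"], t["int_miss"],
                             int_impute(t, trials))
        mc = impute_arm_mean(t["ctl_mean"], t["ctl_obs"], t["ctl_miss"],
                             ctl_impute(t, trials))
        n = t["int_obs"] + t["int_miss"] + t["ctl_obs"] + t["ctl_miss"]
        diffs.append(mi - mc)
        weights.append(n)
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

# Hypothetical trials; higher scores indicate a better outcome.
trials = [
    {"int_mean": 10.0, "int_obs": 40, "int_miss": 10,
     "ctl_mean": 6.0, "ctl_obs": 42, "ctl_miss": 8},
    {"int_mean": 9.0, "int_obs": 45, "int_miss": 5,
     "ctl_mean": 5.0, "ctl_obs": 44, "ctl_miss": 6},
]

# Least stringent: impute each arm's missing participants with its own mean,
# leaving the observed effect unchanged.
baseline = pooled_mean_difference(
    trials,
    lambda t, _: t["int_mean"],
    lambda t, _: t["ctl_mean"])

# Most stringent (worst case against the intervention): missing intervention
# participants get the worst control-arm mean (source [E]); missing control
# participants get the best intervention-arm mean (source [A]).
stringent = pooled_mean_difference(
    trials,
    lambda t, ts: min(x["ctl_mean"] for x in ts),
    lambda t, ts: max(x["int_mean"] for x in ts))

print(baseline, stringent)  # the pooled effect shrinks under the worst case
```

If the pooled estimate remains significant even under the stringent strategy, missing data are unlikely to threaten the conclusion; if it diminishes or loses significance, confidence in the estimate should be rated down.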
Discussion Our approach provides rigorous yet relatively simple quantitative guidance that guideline developers can use to judge the impact of risk of bias resulting from missing participant data in systematic reviews of continuous outcomes.