The Empirical Evidence of Bias in Trials Measuring Treatment Differences

Research Report Sep 29, 2014

Structured Abstract

Background

Our objective was to comprehensively and systematically review and compare empirical evaluations of the effects of specific types of bias on treatment effect estimates in randomized controlled trials (RCTs) reported in systematic reviews.

Methods

We searched MEDLINE®, the Cochrane Library, and the Evidence-based Practice Center methods library maintained at the Scientific Resource Center. We identified additional studies from reference lists and through technical experts. We included meta-epidemiological studies (studies drawing on multiple meta-analyses), meta-analyses, and simulation studies (for reporting bias only) whose primary aim was to examine the influence of bias on treatment effects in RCTs.

The review considered the following approaches to minimizing potential biases: randomization (sequence generation and allocation concealment) to prevent selection bias; control of confounding through design or analysis; fidelity to the protocol, avoidance of unintended interventions, patient or caregiver blinding, and clinician or provider blinding to prevent performance bias; outcome assessor blinding, data analyst blinding, and appropriate statistical methods to prevent detection bias; double blinding to prevent detection and performance bias; intention-to-treat analysis or other approaches to accounting for dropouts to prevent attrition bias; and complete reporting of all prespecified outcomes to prevent reporting bias. Two reviewers independently selected studies, extracted data, and rated study quality. Because the included studies were heterogeneous, we did not pool their results quantitatively.
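As background for readers, meta-epidemiological studies of this kind typically quantify the influence of a trial-level flaw as a ratio of odds ratios (ROR) comparing trials with and without the flaw. The Python sketch below illustrates that general approach with invented numbers; the data and the unweighted pooling are hypothetical simplifications (a real analysis would use inverse-variance weights and a random-effects model), and this is not the method of any particular included study.

import math

# Hypothetical per-trial odds ratios from a single meta-analysis, split by
# whether allocation concealment was judged adequate. Values are invented.
adequate = [0.82, 0.91, 0.76, 0.88]    # ORs from adequately concealed trials
inadequate = [0.64, 0.70, 0.58]        # ORs from inadequately concealed trials

def pooled_log_or(odds_ratios):
    # Unweighted mean of log odds ratios; a real analysis would weight by
    # inverse variance and allow for between-trial heterogeneity.
    logs = [math.log(r) for r in odds_ratios]
    return sum(logs) / len(logs)

# Ratio of odds ratios: pooled OR in flawed trials divided by pooled OR in
# adequately concealed trials. With benefit expressed as OR < 1, an ROR
# below 1 means the flawed trials report exaggerated treatment effects.
ror = math.exp(pooled_log_or(inadequate) - pooled_log_or(adequate))
print(f"Ratio of odds ratios: {ror:.2f}")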

Results

From our review of 4,844 abstracts, a total of 38 studies of trials (reported in 48 publications) met our inclusion criteria; 35 of these provided usable evidence. Some studies examined the effect of more than one type of bias on effect estimates. We reviewed 23 studies on allocation concealment, 14 on sequence generation, 2 on unspecified bias in randomization, 2 on confounding, 2 on fidelity to the protocol and unintended interventions, 4 on patient and/or provider blinding, 8 on assessor blinding, 2 on appropriate statistical methods, 18 on double blinding, 15 on attrition bias, and 9 on selective outcome reporting.

Although a trend toward exaggeration of treatment effects was seen across bodies of evidence for most biases, the magnitude and precision of the effect varied widely across studies. We generally found evidence that was precise and consistent in direction of effect for assessor and double blinding, specifically in relation to subjective outcomes, and for selective outcome reporting. Evidence was generally consistent in direction of effect but with variable precision across studies for allocation concealment, sequence generation, and assessor blinding of objective or mixed outcomes. In contrast, evidence was generally inconsistent and imprecise in relation to confounding, adequate statistical methods, fidelity to the protocol, patient/provider blinding, and attrition bias.
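For orientation, the exaggeration described above is conventionally expressed through the ratio of odds ratios; the formula below is the standard meta-epidemiological definition rather than one quoted from this report. With treatment benefit corresponding to an odds ratio below 1,

\[
\mathrm{ROR} \;=\; \frac{\widehat{OR}_{\text{trials with the flaw}}}{\widehat{OR}_{\text{trials without the flaw}}},
\]

so an ROR below 1 indicates that flawed trials yield larger apparent benefits, and the width of its confidence interval reflects the precision discussed above.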

Studies differed markedly along a number of dimensions, including the measures or scales used to assess biases, the thoroughness of reporting of trial conduct required, approaches to statistical modeling and adjustment for potential confounding, types of outcomes, and stratification by treatment or condition. Within many meta-epidemiological studies, the included meta-analyses or trials varied along these dimensions as well.

Conclusions

Theory suggests that bias in the conduct of studies influences treatment effects. Our review found some evidence of this in relation to certain aspects of RCT conduct. When a bias was present, the treatment effect was commonly increased, but the estimates in individual studies were rarely precise. Because this evidence is limited and the magnitude of the impact is uncertain, however, it does not follow that systematic reviewers can dispense with assessment of risk of bias. Given the complexity of evaluating precision in meta-epidemiological studies built from potentially heterogeneous meta-analyses or trials, we cannot be sure that the studies were sufficiently powered. We suggest that systematic reviewers consider subgroup analyses with and without studies that have flaws related to the specific biases of importance for their review questions. Future studies evaluating the impact of biases on treatment effects should follow the lead of the BRANDO study and use modeling approaches that include careful construction of large datasets of trials (and eventually observational studies) designed to examine the effects of specific aspects of study conduct and the interrelationships among bias concerns.
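To make the suggested subgroup analysis concrete, the following Python sketch pools hypothetical trial results with and without the trials flagged for a flaw of interest, using a simple inverse-variance fixed-effect model. All trial values, and the choice of a fixed-effect rather than random-effects model, are illustrative assumptions, not part of the report.

import math

# Hypothetical trials: (log odds ratio, standard error, has_flaw). The flag
# marks trials with the bias of interest, e.g., inadequate blinding.
trials = [
    (math.log(0.70), 0.20, True),
    (math.log(0.85), 0.15, False),
    (math.log(0.60), 0.25, True),
    (math.log(0.90), 0.18, False),
    (math.log(0.80), 0.22, False),
]

def fixed_effect_pool(data):
    # Inverse-variance fixed-effect pooled odds ratio with a 95% CI.
    weights = [1 / se ** 2 for _, se, _ in data]
    pooled = sum(w * lor for w, (lor, _, _) in zip(weights, data)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), math.exp(lo), math.exp(hi)

or_all, lo_all, hi_all = fixed_effect_pool(trials)
or_ok, lo_ok, hi_ok = fixed_effect_pool([t for t in trials if not t[2]])

print(f"All trials:       OR {or_all:.2f} (95% CI {lo_all:.2f} to {hi_all:.2f})")
print(f"Excluding flawed: OR {or_ok:.2f} (95% CI {lo_ok:.2f} to {hi_ok:.2f})")

Comparing the two pooled estimates shows how much the flawed trials pull the overall result; a material difference would argue for presenting both analyses in the review.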

Project Timeline

The Empirical Evidence of Bias in Trials Measuring Treatment Differences

Oct 31, 2012
Topic Initiated
Sep 29, 2014
Research Report

Internet Citation: Research Report: The Empirical Evidence of Bias in Trials Measuring Treatment Differences. Content last reviewed November 2017. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/treatment-effects-bias/research
