Effective Health Care Program

Selective Outcome Reporting as a Source of Bias in Reviews of Comparative Effectiveness

Research Report


Structured Abstract


Objectives

The objectives of this exploratory study were to: (1) describe the frequency of selective outcome reporting (SOR) and selective analysis reporting (SAR) in randomized controlled trials (RCTs) included in reviews of comparative effectiveness for outcomes of benefit; (2) explore potential predictors of SOR and SAR; and (3) assess the reliability and validity of the Outcome Reporting Bias in Trials (ORBIT) classification system for missing or incomplete outcome reporting.


Methods

We selected three comparative effectiveness reviews (CERs) funded by the Agency for Healthcare Research and Quality that included drug–drug comparisons. Within each CER, we specified one outcome that fulfilled explicit criteria (the “index outcome”) and examined the RCTs in the CER that reported that outcome. We then searched trial registries for study registration information and results for each RCT. Using available registry information to complement information in the methods section of each publication, we determined the frequency of SOR and SAR and examined prespecified predictors of SOR and SAR. Lastly, we attempted to examine the inter-rater reliability of the ORBIT classification of SOR and its validity, comparing assessments of SOR based on information contained within the publication alone with assessments that incorporated the additional information obtained from trial registries.


Results

RCTs published in 2005 or later and reporting the index outcome were not consistently listed in trial registries: 29 percent, 67 percent, and 75 percent of trials were registered for the three CERs, respectively. In addition, publications did not consistently report trial registration. Results were infrequently listed in trial registries, even after 2008, when results reporting became mandatory for certain types of trials. Trial registration frequently occurred after the study was completed (in 25 percent, 50 percent, and 42 percent of trials in the three CERs, respectively). The specification of the index outcome changed in the registry in 42 percent and 17 percent of trials in two CERs (the index outcome in the third CER was never mentioned in the registry). We did not find the ORBIT classification tool particularly useful: it was difficult to implement, and the nine classes were difficult to distinguish reliably. In addition, the ORBIT classes did not describe a type of SOR and SAR that we frequently encountered: the addition to published results of outcome measures, subgroups, and other analyses that were not prespecified in the publication’s methods section or listed in the registry. Finally, trial registries were of little use in identifying SOR unless trial results were listed in the registry, and of no use in identifying SAR.


Conclusions

We identified numerous challenges in identifying and characterizing SOR and SAR in this pilot study of three CERs. Existing tools were suboptimal: ORBIT does not encompass the type of SOR and SAR in which published results were not prespecified in the methods section or in the registry. The design of our study (focusing on RCTs with results in the CER) precluded identifying certain types of SOR in which outcomes were not reported at all in the publication. The presentation and content of trial registries could be improved to better assist the systematic reviewer in identifying potential SOR and SAR. Further research is needed to develop efficient, tailored approaches to identifying and characterizing SOR and SAR in trials.