Department of Health and Human Services

How Are Comparative Effectiveness Reviews Conducted?

Comparative effectiveness reviews follow an explicit set of principles for systematic reviews:

  1. The questions that are most important to patients and health care decisionmakers are carefully chosen. The extent to which current scientific literature can answer these questions is then examined. Studies that measure health outcomes are given more weight than studies of intermediate outcomes, such as a change in a laboratory measure. Studies that measure benefits and harms over extended periods of time are usually more relevant than studies that examine outcomes over short periods.
  2. The types of research studies that provide useful evidence for a particular treatment are defined, collected, and assessed. To assess the efficacy of an intervention, such as a drug, reviews may focus on the results of randomized controlled trials. For other questions, or to compare results of trials with those from everyday practice, observational studies may play a key role. The hallmark of the systematic review process is the careful assessment of the quality of the collected evidence, with greater weight given to studies following methods that have been shown to reduce the likelihood of biased results. Although well-done randomized trials generally provide the highest quality evidence, well-done observational studies may provide better evidence when trials are too short, include too few participants, or have important methodological flaws.
  3. An assessment is made as to whether efficacy studies are applicable to the patients, clinicians, and settings for whom the review is intended. A number of factors may limit how applicable the results from efficacy studies are to certain patient populations. Patients are often carefully selected, excluding patients who are sicker or older and those who have trouble adhering to treatment plans. Racial and ethnic minorities may also be underrepresented. Efficacy studies also often use regimens and follow-up protocols that maximize the benefits and limit the harms of certain treatments. As a result, the findings of such studies may not accurately reflect the real-world outcomes of these treatments.

    Effectiveness studies are intended to provide results that are more applicable to “average” patients. However, they remain much less common than efficacy studies.

    A comparative effectiveness review examines the efficacy data thoroughly, so that decisionmakers can assess the scope, quality, and relevance of the available data, and it points out areas of clinical uncertainty. Clinicians can judge the relevance of the study results to their practice and should note where there are gaps in the available scientific information. Identified gaps in the available scientific evidence can provide important insight to organizations that fund research.
  4. The benefits and harms for different treatments and tests are presented in a consistent way so that decisionmakers can fairly assess the important tradeoffs involved for different treatments or diagnostic strategies. Expressing benefits in absolute terms (for example, a treatment prevents one event for every 100 treated patients) is more meaningful than presenting results in relative terms (for example, a treatment reduces events by 50 percent). These reviews also highlight areas in which evidence indicates that benefits, harms, and tradeoffs are different for distinct patient groups. Reviews do not attempt to set a standard for how results of research studies should be applied to patients or settings that were not represented in the studies. With or without a comparative effectiveness review, these are decisions that must be left to a clinician’s best judgment.
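The distinction in point 4 between absolute and relative effect measures can be made concrete with a little arithmetic. The sketch below, using hypothetical event counts (not drawn from any study in this review), shows how the same 50 percent relative risk reduction corresponds to very different absolute benefits depending on how common the outcome is:

```python
def risk_measures(events_control, n_control, events_treated, n_treated):
    """Return absolute risk reduction (ARR), relative risk reduction (RRR),
    and number needed to treat (NNT) for a two-group comparison."""
    risk_control = events_control / n_control
    risk_treated = events_treated / n_treated
    arr = risk_control - risk_treated   # absolute terms: events prevented per patient
    rrr = arr / risk_control            # relative terms: fraction of events prevented
    nnt = 1 / arr                       # patients treated to prevent one event
    return arr, rrr, nnt

# Common outcome: 20 vs. 10 events per 100 patients.
# ARR = 0.10 (10 per 100), RRR = 0.50, NNT = 10.
print(risk_measures(20, 100, 10, 100))

# Rare outcome: 2 vs. 1 events per 100 patients.
# ARR = 0.01 (1 per 100), RRR = 0.50, NNT = 100.
print(risk_measures(2, 100, 1, 100))
```

Both scenarios can honestly be described as "a 50 percent reduction," but a clinician would need to treat ten times as many patients to prevent one event in the second scenario, which is why the review presents benefits in absolute terms.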