
Updating Comparative Effectiveness Reviews: Current Efforts in AHRQ’s Effective Health Care Program

Methods Guide – Chapter Jul 27, 2011

This is a chapter from "Methods Guide for Effectiveness and Comparative Effectiveness Reviews."

This report has also been published in edited form: Tsertsvadze A, Maglione M, Chou R, et al. Updating comparative effectiveness reviews: current efforts in AHRQ's Effective Health Care Program. J Clin Epidemiol 2011 Nov;64(11):1208-15. PMID: 21684114.

Comparative Effectiveness Reviews are systematic reviews of existing research on the effectiveness, comparative effectiveness, and harms of different health care interventions. They provide syntheses of relevant evidence to inform real-world health care decisions for patients, providers, and policymakers. Strong methodologic approaches to systematic review improve the transparency, consistency, and scientific rigor of these reports. Through a collaborative effort of the Effective Health Care (EHC) Program, the Agency for Healthcare Research and Quality (AHRQ), the EHC Program Scientific Resource Center, and the AHRQ Evidence-based Practice Centers have developed a Methods Guide for Comparative Effectiveness Reviews. This Guide presents issues key to the development of Comparative Effectiveness Reviews and describes recommended approaches for addressing difficult, frequently encountered methodological issues.

The Methods Guide for Comparative Effectiveness Reviews is a living document, and will be updated as further empiric evidence develops and our understanding of better methods improves. Comments and suggestions on the Methods Guide for Comparative Effectiveness Reviews and the Effective Health Care Program can be made at https://effectivehealthcare.ahrq.gov.

This research was funded through contract number 290-02-0021 (EPC2) from the Agency for Healthcare Research and Quality to the following Evidence-based Practice Centers: University of Ottawa, RAND Corporation, Oregon, University of Connecticut, RTI-University of North Carolina, and Johns Hopkins.

None of the authors has a financial interest in any of the products discussed in this document.

Suggested citation: Tsertsvadze A, Maglione M, Chou R, Garritty C, Coleman C, Lux L, Bass E, Balshem H, Moher D. Updating Comparative Effectiveness Reviews: Current Efforts in AHRQ’s Effective Health Care Program. Methods Guide for Comparative Effectiveness Reviews. (Prepared by the University of Ottawa EPC, RAND Corporation–Southern California EPC, Oregon EPC, University of Connecticut EPC, RTI–University of North Carolina EPC, Johns Hopkins Bloomberg School of Public Health EPC under Contract No. 290-02-0021 EPC2). AHRQ Publication No. 11-EHC057-EF. Rockville, MD: Agency for Healthcare Research and Quality. July 2011. Available at: https://effectivehealthcare.ahrq.gov.

Authors

Alexander Tsertsvadze, M.D., M.Sc.a

Margaret Maglione, M.P.P.b

Roger Chou, M.D.c

Chantelle Garritty, M.Sc.a

Craig Coleman, Pharm.D.d

Linda Lux, M.P.A.e

Eric Bass, M.D., M.P.H.f

Howard Balshem, M.S.c

David Moher Ph.D.a

aUniversity of Ottawa Evidence-based Practice Center, Ottawa, Ontario, Canada

bRAND Corporation–Southern California Evidence-based Practice Center, Santa Monica, CA

cOregon Evidence-based Practice Center, Portland, OR

dUniversity of Connecticut Evidence-based Practice Center, Hartford, CT

eRTI–University of North Carolina Evidence-based Practice Center, Research Triangle Park, NC

fJohns Hopkins Bloomberg School of Public Health Evidence-based Practice Center, Baltimore, MD

The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the view of AHRQ or the Veterans Health Administration. Therefore, no statement in this report should be construed as an official position of these entities, the U.S. Department of Health and Human Services, or the U.S. Department of Veterans Affairs.

Key Points

  • Comparative Effectiveness Reviews (CERs) need to be regularly updated as new evidence is produced. Lack of attention to updating may lead to outdated and sometimes misleading conclusions that compromise health care and policy decisions.
  • The objective of this project was to review the current knowledge and efforts on updating systematic reviews (SRs) as applied to CERs.
  • There is little information about what proportion of SRs needs updating. Similarly, there is no consensus on when to initiate updating and how best to carry it out.
  • This paper outlines considerations for updating CERs by providing the following:
    • A definition of the updating process
    • When to update CERs
    • How to update CERs
    • How to present, report, and interpret results from updated CERs
    • Current and future research efforts

Background

To maintain relevance, systematic reviews (SRs) need to be regularly updated as new evidence is produced.1,2 Lack of attention to updating may leave evidence-based conclusions outdated and sometimes misleading, thus compromising health care and policy decisions. The result could be wasted resources, provision of redundant or ineffective health care, failure to implement more effective health care, and possibly harm. Disseminating updated reviews increases awareness of new findings among relevant stakeholders and the likelihood that new evidence is incorporated into clinical practice. There is little information about what proportion of SRs are in need of updating at any given time, when to initiate updating, or how best to carry it out. Although the Cochrane Collaboration has invested substantial effort in preparing updates and keeping SRs up to date, other groups have published very few updates. One methodological survey,3 based on 300 SRs indexed in MEDLINE during November 2004, reported that 37.6 percent of the 125 Cochrane SRs and 2.3 percent of the 88 non-Cochrane reviews were updates.

In the absence of a standard method to determine when or how to update any given SR, some organizations have made recommendations about the frequency with which the evidence base needs to be updated. The Cochrane Collaboration has an established policy that reviews be assessed and updated every 2 years, or that a commentary be added to explain why this is done less frequently.4 Updating all SRs based on an arbitrarily defined time interval could result in inefficient use of resources, as SRs from diverse clinical areas will vary in how frequently they need to be updated depending on the pace of developments occurring in a given clinical area.

The U.S. Preventive Services Task Force (USPSTF) has addressed the issue of updating its clinical guideline recommendations.5 Because of resource limitations, the USPSTF sets priorities and the order in which updates are conducted. This process involves a review of clinical evidence, often drawn from SRs. A committee determines updating priorities based on the public health importance of the topic (the burden of suffering and the expected effectiveness of preventive services to reduce that burden), the potential for a USPSTF recommendation to affect clinical practice (based on existing controversy or the belief that a gap exists between evidence and practice), and the availability of new evidence with the potential to change prior recommendations.

The Drug Effectiveness Review Project, a collaboration between the Oregon Evidence-based Practice Center (EPC) and the Center for Evidence-based Policy established in 2003 (https://www.ohsu.edu/), has conducted SRs of the comparative effectiveness and safety of drugs within the same class. Its updating process has included an annual scan of the literature using the same search strategy as the previous report, but limited to MEDLINE. After the identified abstracts are reviewed, a decision is made about whether to update the report. If the report is to be updated, the key questions are assessed for potential modifications to accommodate new evidence (e.g., new drugs, safety alerts, and new indications). Newly identified evidence is incorporated using the same methodology as the original review report.

The U.S. Agency for Healthcare Research and Quality (AHRQ) faces a similar dilemma in keeping its evidence synthesis research up to date. An important cornerstone of AHRQ's research is the Effective Health Care (EHC) Program, one of whose mandates is to produce Comparative Effectiveness Reviews (CERs). A CER is a type of SR that synthesizes the available scientific evidence on a specific topic, going beyond the effectiveness of a single intervention by comparing the relative benefits and harms of a range of available treatments or interventions for a given condition.6 Like other SRs, CERs are susceptible to becoming out of date.

This paper reviews current knowledge and efforts on updating SRs as applied to CERs.

Why Update CERs?

Whether a CER needs to be updated depends on many factors, as several reasons may exist for undertaking an update. The most common reason is to include newly published studies or studies that have been updated with information not previously presented. Newly identified studies may report on newly emerged interventions, devices, technologies, diagnostic tests, procedures, harms, and efficacy outcomes. Updating may be conducted to include delayed publications to minimize the impact of time lag bias or to add missing or unpublished data obtained from authors of primary studies.7 In some cases, the passage of time may bring about new understanding of disease mechanisms that may change the scope of key questions originally asked.

Updates also present a good opportunity to correct errors in the original CER report or to incorporate relevant older evidence: studies may have been missed because the initial searches were inadequate or the inclusion/exclusion criteria were applied incorrectly. In addition, subsequent publications of previously published studies may provide relevant evidence not presented previously.

Definition of Update

The term “to update” means “to extend up to the present time” or “to include the latest information.”8 Moher and Tsertsvadze proposed a formal definition of update for SRs to mean a discrete event aiming to search for and identify “new evidence” to incorporate into a previously completed SR.9 Central to updating is the effort to identify such “new evidence,” irrespective of date of publication. We take this view to mean any relevant evidence not included in the previously completed review, not just new studies published since the last review. We believe this definition is appropriate given the purpose of CERs, and it is in keeping with the Cochrane Collaboration’s definition.4,10 The authors explain that a feature of an updated review distinguishing it from a new review is that during updating constituent elements of the originally formulated protocol (e.g., search strategy, eligibility criteria, and key questions) may be retained and sometimes extended/modified to accommodate newly identified evidence (e.g., new intervention, new outcome, or new subpopulation).9

When To Update CERs

The optimal timing of an update for a CER depends on many factors: the rapidity of scientific developments in a given clinical area, the nature of the health condition in question, and public health importance. No standard methodology exists for assessing the need to update a review at a given point in time.11 Periodic literature surveillance12 and expert opinion13,14 are helpful approaches for efficiently identifying relevant new evidence and determining when to update.

Surveillance searching is one common technique for monitoring the emergence of new evidence for the purpose of updating. Although surveillance search strategies are typically not comprehensive, for reasons of efficiency, they are useful in flagging CERs in need of updating. Sampson and colleagues12 tested and compared the feasibility and performance of five surveillance search techniques, alone or in combination, for identifying relevant new evidence needed to update SRs. The surveillance searches (i.e., related articles, clinical queries, CENTRAL, core clinical journals, citing articles) were carried out for a cohort of 77 SRs. For each surveillance technique, the authors calculated recall (the proportion of relevant studies identified) and screening burden (the number of records to be reviewed to identify the relevant evidence). The combination of the PubMed related-articles search and subject searching with clinical queries was the most effective approach, yielding 71 new records per review (interquartile range, 42 to 161).

Identifying new evidence on harms warrants at least the same rigor in surveillance searching as identifying evidence on benefits, and it should be an integral part of the updating process. To use time and resources efficiently, databases of peer-reviewed literature should be searched periodically for new studies reporting adverse events and for SRs, meta-analyses, and health technology assessment reports focusing on harms. Drug warnings, often based on adverse event data (e.g., case reports and case series) reported by consumers or medical providers, can be found in nationally licensed databases (e.g., those of the U.S. Food and Drug Administration). Because such case reports and case series are often not submitted for journal publication, we recommend searching these databases to supplement searches of the peer-reviewed literature.15
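
For illustration, the two performance measures described above can be computed as follows. This is a minimal sketch in Python, not code from the Sampson study; the example counts of relevant studies are hypothetical, while the 71 retrieved records echo the per-review yield reported above.

```python
# Minimal sketch (our own illustration, not code from Sampson et al.) of the two
# performance measures described above for a surveillance search technique.

def recall(relevant_retrieved: int, relevant_total: int) -> float:
    """Proportion of all relevant new studies that the surveillance search retrieved."""
    return relevant_retrieved / relevant_total if relevant_total else 0.0

def screening_burden(records_retrieved: int) -> int:
    """Number of records a reviewer must screen when using this technique."""
    return records_retrieved

# Hypothetical example: the combined technique retrieves 71 records for one review,
# 6 of which are among the 8 relevant new studies known to exist.
print(f"recall = {recall(6, 8):.2f}, screening burden = {screening_burden(71)} records")
```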

Experts in the field are often aware of new developments before they become public, including new controversies, drugs or devices in development, ongoing trials and observational studies, papers in submission or in press, and reports of adverse events (e.g., case reports). Expert opinion has been used in updating clinical practice guidelines.16,17 Reviewers updating a CER may find expert opinion useful as a supplemental source for identifying new evidence.13 Experts may be asked whether the conclusion of a given review is still valid and whether they are aware of any new evidence that might change it.14

The body of empirical evidence indicating how frequently or when any given SR needs to be updated is small and inconsistent.7 For example, findings reported in studies by French18 and Shojania19 convey conflicting messages regarding how frequently SRs need to be updated.

French and colleagues18 surveyed and followed up 362 SRs in the Cochrane Database of SRs from their original publication in 1998 (Issue 2) to 2002 (Issue 2). The authors reported that 70 percent (254/362) of these reviews had been updated during the 4-year period. Of the updated SRs, only 9 percent (23/254) had changes in their conclusions.

Shojania and colleagues19 proposed several quantitative and qualitative signals indicating when any given SR needs updating. They defined a quantitative signal as a change in statistical significance for an effect estimate using a conventional threshold of α=0.05 or a relative change of ≥ 50% in the magnitude of an effect. The authors defined a qualitative signal as a qualitatively different characterization of effectiveness that affects clinical decisionmaking (e.g., a new harm, a new alternative therapy, expansion of treatment to a new patient subgroup). The median time to a qualitative or quantitative signal for updating of 100 SRs was 5.5 years (95% CI: 4.6-7.6). Twenty-three percent of SRs had signals indicating the need for updating within 2 years, 15 percent within 1 year, and 7 percent at the time of publication. The odds of signals for updating were significantly higher for cardiovascular topics than for other topics. This work suggests the presence of several indicators that likely coexist to varying degrees, and it highlights the potential of signal detection in the updating process. The identification of a qualitative signal requires far fewer resources than determination of a quantitative signal.
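
For illustration, the sketch below screens for a quantitative signal of the kind described above: a change in statistical significance at α=0.05 or a relative change of at least 50 percent in effect magnitude. The function names, the use of the log scale (e.g., log relative risks), and the example numbers are our assumptions, not part of the published implementation of the Shojania method.

```python
# Illustrative screen for a quantitative updating signal as characterized above:
# a change in statistical significance (alpha = 0.05) or a relative change of >= 50%
# in effect magnitude. Effects are assumed to be on the log scale (e.g., log relative
# risk) with standard errors; this is a sketch, not the authors' actual procedure.
import math

def two_sided_p(log_effect: float, se: float) -> float:
    """Two-sided p-value from a normal (Wald) approximation."""
    z = abs(log_effect) / se
    return math.erfc(z / math.sqrt(2.0))

def quantitative_signal(old_effect: float, old_se: float,
                        new_effect: float, new_se: float,
                        alpha: float = 0.05) -> bool:
    significance_flip = (two_sided_p(old_effect, old_se) < alpha) != \
                        (two_sided_p(new_effect, new_se) < alpha)
    relative_change = (abs(new_effect - old_effect) / abs(old_effect)
                       if old_effect else float("inf"))
    return significance_flip or relative_change >= 0.5

# Hypothetical example: original pooled log(RR) 0.18 (SE 0.12) vs. updated 0.40 (SE 0.10).
print(quantitative_signal(0.18, 0.12, 0.40, 0.10))  # True (both criteria are met here)
```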

In 2008, AHRQ asked the Southern California Evidence-based Practice Center (SCEPC) to determine whether 11 AHRQ-funded CERs representing different clinical areas and published since 2005 needed updating.14 To assess the need for updating, SCEPC applied a modification of a method proposed by Shekelle and colleagues,16 which combines an abbreviated literature review of several preselected, high-impact generalist and specialty peer-reviewed journals for each clinical area, expert opinion, and a review of the U.S. Food and Drug Administration (FDA) Web site. For each CER, the recommendation for updating (needs updating now, may need updating in the future, or no need for updating now) was based on changes in four indicators: (a) evidence on the benefits and harms of existing interventions, (b) available interventions, (c) outcomes considered important, and (d) evidence that current practice is optimal. Of the 11 CERs published in 2005 or later, 4 were recommended for current updating and 4 for future updating; the remaining 3 were deemed not in need of updating for some time.

How To Update CERs

If new studies are published, new harms emerge, a new and more effective intervention is introduced, or existing (or new) interventions are extended to new patient groups, the question of updating for an individual EPC moves from “when to update,” which may be based on priorities and available resources, to “how to update.”

The updating process for any given CER can be viewed as a continuum stretching over a wide range of activities from a single update search to a comprehensive expanded search including old and new searches and incorporating new evidence across all sections of a CER. Moreover, the updating process may be different for CERs with and without meta-analysis in terms of updating scope, methodology, and amount of needed resources.

Therefore, the rational choice of scope for an update search will depend largely on where a given investigator stands along the continuum of the updating process and on the resources allocated to updating.20

Assessment of Key Questions and Constituent Elements for an Update

Because medical disciplines are constantly evolving through the emergence of new evidence, it is recommended that reviewers assess the key question(s) of the original CER at the initial stage of updating. Specifically, they should determine the extent to which the constituent elements of the key research question(s), denoting Population, Intervention, Comparator, and Outcome (PICO), may have changed. If an update search does not identify any relevant evidence, the key question(s) and CER section(s) of the original report will not be modified; however, the status of the CER will be registered as ‘updated’ by including information on the search dates and the time periods covered by the search.

When newly identified evidence does not entail the modification of any PICO elements of a key question (e.g., no new subpopulation, no new intervention, or no new outcome was identified), the update process will consist of only incorporating this evidence into relevant sections of the report (e.g., Results and Conclusion). However, if newly identified evidence includes a new PICO element (e.g., new harm and/or new subpopulation was identified), the inclusion/exclusion criteria will need to be extended and the key question(s) modified with respect to the given PICO element in order to accommodate this evidence in relevant sections of the updated CER (e.g., Methods, Results, and Conclusion). The identification of evidence on the same intervention, comparator, and outcome as specified in a key question of the original CER, but for people with a newly identified health condition, would not be an update of the previous CER, since it entails the exploration of a new key question.

The assessment process of the updating scope and corresponding modifications are depicted in Table 1.

Table 1. Scope of updating and corresponding actions using original or modified search strategy

| Scope of Newly Identified Evidence Warranting an Action to Update | Action for a Key Question | Changes After Updating (Updated vs. Original CER) |
| --- | --- | --- |
| Search performed but no evidence | None | No change in the CER or KQ; KQ status = updated |
| Evidence from new studies (without identification of a new PICO element) | Update Results and Conclusion sections; no change in KQ | Updated Results and Conclusions sections |
| New evidence from already included studies (without identification of a new PICO element) | Update Results and Conclusion sections; no change in KQ | Updated Results and Conclusions sections |
| Identification of a new PICO element (new subpopulation(s) only, new intervention(s) only, new comparator(s) only, or new outcome(s) only) | Update Methods, Results, and Conclusion sections; extend the inclusion/exclusion criteria for the population, intervention, comparator, or outcome; modify KQ with respect to the new PICO element (population, intervention, comparator, or outcome) | Updated Methods, Results, and Conclusions sections |

CER = comparative effectiveness review; PICO = Population/Intervention/Comparator/Outcome; KQ = key question
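
For illustration, a minimal sketch of the decision logic summarized in Table 1 follows; the function name and return fields are illustrative only and are not part of any EHC Program tool.

```python
# Sketch of the Table 1 decision logic (illustrative names and fields only).

def updating_actions(new_evidence_found: bool, new_pico_elements: list) -> dict:
    """Map the scope of newly identified evidence to actions and changed sections."""
    if not new_evidence_found:
        return {"key_question": "unchanged",
                "sections_updated": [],
                "status": "updated (search dates and period recorded; no new evidence)"}
    if not new_pico_elements:
        return {"key_question": "unchanged",
                "sections_updated": ["Results", "Conclusions"],
                "status": "updated"}
    return {"key_question": "modified for new " + ", ".join(new_pico_elements),
            "sections_updated": ["Methods", "Results", "Conclusions"],
            "status": "updated (inclusion/exclusion criteria extended)"}

# Hypothetical example: the update search finds studies reporting a new harm.
print(updating_actions(True, ["outcome (a new harm)"]))
```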

General Search Strategies for Updating CERs

Once a decision has been made to update a CER, it is important to perform comprehensive searches that adhere to the general principles for conducting a systematic search recommended in the AHRQ methods guide.15 This includes searching multiple literature sources (e.g., SRs, bibliographic databases, Web sites, allied health professional databases, pharmacoepidemiologic databases, governmental regulatory sites, scientific information packets, and miscellaneous resources). The guide recommends searching several major bibliographic databases, such as MEDLINE, EMBASE, CINAHL, Cochrane CENTRAL, and PsycINFO.15 Some authors suggest searching other supplemental sources, such as the reference lists of key citations.13

Moreover, some specific approaches to searching are particularly relevant to the process of updating. During any given update, the original search strategy can frequently be carried over to the update. Investigators should also use the opportunity to review the search strategy, modify the search terms, databases, and other sources searched if necessary, and have the strategy peer reviewed if this was not previously done.21 For example, use of governmental and nongovernmental clinical trial registries has expanded; their inclusion could provide useful information on in-progress or unpublished trials as well as unpublished outcomes.22,23 Investigators should also revisit previous decisions regarding the inclusion or exclusion of grey literature, non-English-language literature, and other sources of evidence.24,25 Additional information worth considering in an update may be requested by contacting manufacturers of pharmaceutical or biotechnology products.

To limit the number of citations to review, one strategy is to limit the start date for update searches. However, delays between publication in journals and indexing in MEDLINE and other electronic databases occur and are variable in duration.26 Therefore, we recommend that reviewers use a start date at least 1 year before the end date of the original search. Searches could be based on the “entry date” (date the publication was added to MEDLINE) rather than the publication year.27 This search technique results in more complete retrieval of relevant records, including those that have become available since the date of the last search, thereby minimizing publication bias.
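
For illustration, the sketch below constructs a date-limited update search window following the rule suggested above (a start date at least 1 year before the end date of the original search, restricted by database entry date rather than publication year). The topic term and query string are hypothetical, and the [EDAT] (entry date) field-tag syntax should be verified against current PubMed documentation.

```python
# Illustrative construction of a date-limited update search window (not an official
# EHC Program tool). The topic term is hypothetical; verify the field-tag syntax
# against current PubMed documentation before relying on it.
from datetime import date

def update_search_window(original_search_end: date, update_date: date):
    """Start the update search at least 1 year before the original search's end date."""
    start = original_search_end.replace(year=original_search_end.year - 1)
    return start, update_date

start, end = update_search_window(date(2009, 6, 30), date(2011, 7, 27))
# Restrict by entry date (when the record was added to the database), not publication year.
query = f'("statins"[MeSH Terms]) AND ("{start:%Y/%m/%d}"[EDAT] : "{end:%Y/%m/%d}"[EDAT])'
print(query)
```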

When evidence newly identified through an update includes a new PICO element (e.g., a new harm or new subpopulation), resulting in corresponding modifications to the key question(s), we recommend repeating the search back to the start date of the original CER’s search to ensure that no studies reporting the new PICO element were missed.

Statistical Methods Relevant to Updating Meta-Analyses

Updating or assessing the need for updating a meta-analysis as a part of any given CER will necessitate the use of statistical method(s). A recent SR surveyed and appraised various methods and/or strategies describing the process of updating SRs.7 This review identified two statistical methods (cumulative meta-analysis and identifying null meta-analyses ripe for updating).28-31

Cumulative meta-analysis (CMA) is a statistical procedure in which the combined effect estimate is sequentially updated by incorporating results from each newly available study.29-31 This technique documents trends in a treatment effect over time and provides up-to-date information. When done prospectively, it may be useful in identifying the earliest time at which there is sufficient statistical evidence that an intervention is effective or harmful.30 However, CMA can be costly and time consuming, and repeated hypothesis testing may inflate the type I error rate.32 Moreover, the procedure is limited to instances in which all PICO elements of the key question remain constant over time. In one extension of CMA proposed by Mullen and colleagues,33 a least-squares regression line is fitted to the points corresponding to the effect size after each successive, cumulatively added study. The slope of this line helps reviewers gauge the stability of the effect size (including no effect) more objectively than visual inspection does. The cumulative slope can thus help determine when the updating process should stop, avoiding wasted resources once the presence or absence of an effect for a given health intervention has been established.
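
For illustration, a simplified fixed-effect (inverse-variance) cumulative meta-analysis and a least-squares slope over the cumulative estimates, in the spirit of the Mullen extension described above, can be sketched as follows; the study data are invented and effects are assumed to be on the log scale.

```python
# Simplified fixed-effect cumulative meta-analysis plus the cumulative slope,
# in the spirit of (not identical to) the Mullen et al. extension. Data are invented.

def cumulative_meta_analysis(effects, variances):
    """Pooled inverse-variance estimate after each successive study is added."""
    pooled, sum_w, sum_wy = [], 0.0, 0.0
    for y, v in zip(effects, variances):
        w = 1.0 / v
        sum_w += w
        sum_wy += w * y
        pooled.append(sum_wy / sum_w)
    return pooled

def cumulative_slope(pooled):
    """Least-squares slope of the cumulative estimates against study order (1, 2, ...)."""
    n = len(pooled)
    x_bar, y_bar = (n + 1) / 2.0, sum(pooled) / n
    num = sum((i - x_bar) * (y - y_bar) for i, y in enumerate(pooled, start=1))
    den = sum((i - x_bar) ** 2 for i in range(1, n + 1))
    return num / den

log_rr = [0.45, 0.30, 0.38, 0.35, 0.34]   # hypothetical log relative risks, in publication order
var    = [0.10, 0.08, 0.05, 0.04, 0.03]   # hypothetical within-study variances
trajectory = cumulative_meta_analysis(log_rr, var)
print([round(e, 3) for e in trajectory])  # a slope near zero suggests a stable effect size
print(round(cumulative_slope(trajectory), 4))
```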

Barrowman and colleagues28 proposed a method to assess whether the amount of newly accrued evidence is sufficient to turn a statistically nonsignificant meta-analytic result into a significant one, thereby rendering the meta-analysis “ripe for updating.” This approach helps to identify meta-analyses with negative results (i.e., a nonsignificant pooled estimate) that need updating. It requires searching, screening, and only partial data extraction (i.e., the number of newly identified additional participants) rather than a complete update in which each new study is added. Depending on the simulation configuration, the approach correctly classified whether a statistically nonsignificant meta-analytic result was outdated with a sensitivity of 49 to 62 percent and a specificity of 80 to 90 percent.
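
The following sketch is a crude heuristic in the same spirit, not the Barrowman method itself: it asks whether the standard error of a nonsignificant pooled log effect could shrink enough, assuming newly accrued participants carry roughly the same per-participant information as the original ones and the point estimate is unchanged, for the result to reach statistical significance. All numbers are hypothetical.

```python
# Crude heuristic only, inspired by but NOT equivalent to the Barrowman et al. method.
# It projects the pooled standard error after adding n_new participants, assuming the
# same per-participant information as the original meta-analysis and an unchanged
# point estimate, and asks whether the result could then become statistically significant.
import math

def possibly_ripe_for_updating(log_effect: float, se: float,
                               n_original: int, n_new: int,
                               z_crit: float = 1.96) -> bool:
    if abs(log_effect) / se >= z_crit:
        return False                      # already significant: not a "null" meta-analysis
    projected_se = se * math.sqrt(n_original / (n_original + n_new))
    return abs(log_effect) / projected_se >= z_crit

# Hypothetical example: pooled log(RR) 0.20 (SE 0.12) from 1,200 participants,
# with 900 participants accrued in newly published eligible trials.
print(possibly_ripe_for_updating(0.20, 0.12, 1200, 900))  # True
```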

Evolution of Methods When Conducting an Update

Methods used to conduct CERs (e.g., methods for pooling, assessing the risk of bias, and grading the strength of evidence) continue to evolve. If methods have changed between the original CER and the planned update, we recommend that investigators compare the methods used in the original CER with the newly developed methods. If the new methodology is a clear improvement over the older one, the CER team should ideally rereview (e.g., appraise, grade) all previously and newly included studies using the new methodology, for the sake of consistency between the assessments and conclusions of the original and updated reviews.

Moreover, critical feedback on the original review can usefully inform the choice of analyses the reviewers might conduct in an updated CER. For example, if a CER was criticized for using a fixed-effect rather than a random-effects model to pool the results of individual studies, conducting sensitivity analyses with both pooling methods (or only the random-effects model, if deemed appropriate) in the update might be reasonable.

Incorporating New Evidence and Reporting an Update

After reviewers identify new evidence, they must incorporate it into the update. The amount of resources, complexity of methods, and logistic efforts needed for incorporation of an update in a CER will depend on the amount of newly identified evidence (e.g., number of new studies) and the degree of consistency of evidence-based findings in the original versus the updated CER.

One commonly used approach is to incorporate the new evidence into the previous review by updating the results (i.e., search yield, number of studies, quality assessments, effect estimates, and conclusions) and other sections of the review as appropriate. The reviewers can summarize the updated evidence in a distinct section at the end of the review (i.e., “summary of update results and discussion” sections).

To make updates most useful to readers, reviewers need to describe clearly the purpose of the update, the methods used to conduct it, and the results. Reviewers should explicitly note any changes in the scope, methods, and understanding of the mechanism of an intervention’s action on a disease for the key question in the updated versus the original review. The rationale for introducing any new methodology or different conceptual framework in the updated report compared with the original also needs to be described. Important elements to focus on include the search strategy (including sources, search terms, and the start and end dates covered by the searches), the yield of the searches, important characteristics of the new evidence (number, type, size, and quality of studies; study participants; outcomes), and the main results, including how the conclusions of the update differ from those of the original review. Evidence that has the most impact on the conclusions of the update should be emphasized and described in detail. If reviewers have not identified new evidence for part of the review, they should still update the report by including the details of the last search (see above), the results of the search yield (e.g., no new studies), and the currency of the conclusions (i.e., no change and still judged to be accurate). When incorporating evidence on a new intervention, outcome, or subpopulation group, we suggest adding a new section to the Results chapter of the CER report.

For more efficient presentation of update results, we suggest including a summary table (Table 2, given as an example) and the PRISMA study flow diagram34 in the CER report. Currently, the SCEPC is developing the recommended format of the summary table.

The updating process will have optimal credibility if it is conducted and reported transparently. To ensure continued transparency, the EHC Program should publish the titles of CERs selected for updating, and updated CERs should include a description of how they were updated. There should be adequate opportunity for public comment on both the CERs chosen for updating and the subsequent updated draft reports. Posting a list of key questions for CERs that will be updated will ensure that a broad range of stakeholders (e.g., biopharmaceutical and device manufacturers, governmental agencies, academic institutions) have the opportunity to provide relevant new evidence that the project team might consider informative to the decisionmaking process.

Table 2. Example of a summary table for an update of key questions within a comparative effectiveness review

| Comparison (Design) | Outcome (binary) and population | 2001 Report: N studies | 2001 Report: Summary result | 2009 Update: N new studies | 2009 Update: Summary result | New PICO element(s) | Conclusion | Did the conclusion for KQ change? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ‘A’ vs. ‘No Tx’ (RCTs) | Outcome-1 (e.g., efficacy) in sub-population-1 (e.g., males) | 5 | **1.5 (1.1, 1.7)**£ (N=5) | 2 | **1.4 (1.2, 1.6)** (N=7) | None | ‘A’ more effective than ‘No Tx’ in males | No |
| ‘A’ vs. ‘No Tx’ (RCTs) | | | | 1 | 1.6 (1.2, 2.0) | Outcome-2 (e.g., new harm) in subpopulation-1 (e.g., males) | ‘A’ more harmful than ‘No Tx’ in males | KQ may need modification to accommodate new results |
| ‘A’ vs. ‘No Tx’ (RCTs) | | | | 2 | **1.7 (1.1, 2.3)** (N=2) | Outcome-1 (e.g., efficacy) in subpopulation-2 (e.g., females) | ‘A’ more effective than ‘No Tx’ in females | |
| ‘A’ vs. ‘No Tx’ (RCTs) | | | | 1 | 1.1 (0.7, 1.3) | Outcome-2 (e.g., new harm) in subpopulation-2 (e.g., females) | No evidence that ‘A’ is more harmful than ‘No Tx’ in females | |
| ‘A’ vs. ‘PL’ (RCTs) | Outcome-1 (e.g., efficacy) in sub-population-1 (e.g., males) | 3 | **0.9 (0.8, 1.4)** (N=3) | 0 | **0.9 (0.8, 1.4)** (N=3) | None | No evidence of difference in efficacy between ‘A’ and ‘PL’ in males | No |
| ‘A’ vs. ‘B’ (Non-RCTs)µ | Outcome-1 (e.g., efficacy) in sub-population-1 (e.g., males) | 2 | 2.3 (1.5, 3.4); 1.2 (0.7, 1.9) | 2 | 1.6 (1.1, 3.0); 2.0 (1.2, 3.3); 2.3 (1.5, 3.4); 1.2 (0.7, 1.9) | None | Some evidence that ‘A’ more effective than ‘B’ in males | Yes |
| ‘A’ vs. ‘C’ (RCTs) | | | | 3 | **1.1 (0.9, 2.2)** (N=3) | New treatment ‘C’ for outcome-1 (e.g., efficacy) in subpopulation-1 (e.g., males) | No evidence of difference in efficacy between ‘A’ and ‘C’ in males | KQ may need modification to accommodate new results |

N=number; PL=placebo; Tx=treatment; RCT=randomized controlled trial; KQ=key question
µ Trials could not be pooled due to heterogeneity in the methodology of their conduct
£ Bold and non-bolded values denote pooled and individual study point estimates of relative risk (95 percent confidence interval), respectively

Issues of Authorship and Challenges of Updating CERs

Ideally, the authors of the original CER should be asked to conduct the update, but this approach may be problematic for many reasons. Over time, authors may be working on new topics, may have changed institutions or affiliations, or may not be interested in updating an already published CER. Garritty and colleagues found that, of the health care agencies and organizations involved in conducting SRs that they surveyed, only 54 percent (56/103) were able to draw on the authors of the original review for updating.11 This poses significant problems for the cost, time, and practicality of an update. New reviewers naturally require additional time to become familiar with a CER; knowledge of project history may be diminished or lost, and issues of replication and transparency can arise if the original CER was not well reported. These factors combined add to costs and jeopardize the feasibility of updating.

If an update involves new authors, it is important to discuss author issues as early in the updating process as possible. One objective would be to ascertain the level of involvement and authorship of the original CER team in the update. These discussions can be informed by examining current international policies and guidance on authorship suggested by the International Committee of Medical Journal Editors (http://www.icmje.org) and contributions of authors.35

Current and Future Research Efforts

A standardized guideline for updating CERs, applicable across EPCs and across the range of health care interventions and treatment modalities (e.g., devices, pharmaceutical products, surgery, diagnostic tests, and other procedures), is needed in the near future. Such a guideline could incorporate stepwise use of selected updating strategies and methods that have been empirically shown to be valid, reliable, and resource efficient. Ideally, it would include specific recommendations on three important dimensions: (1) setting updating priorities based on factors such as public health burden, severity of the health condition, and the number of outdated key questions in a given CER; (2) clarifying responsibilities and authorship for updating CERs (especially when authors of the original report change their institutional affiliations or are difficult to locate); and (3) implementing the updating process (e.g., triggers for updating, timing, and sources for evidence surveillance).

To date, there has been insufficient research to show which strategy or method for updating is most reliable, applicable, and cost effective.7 Future research should compare different approaches to updating evidence to identify the most robust and efficient strategies and methods. Furthermore, methods developed in other fields (e.g., health economics, bibliometrics) should be considered to inform when and how to update CERs. For example, value-of-information analysis may show a benefit of deciding to update a CER in terms of reduced uncertainty, even if the conclusions of the original CER are unchanged.36

As an ongoing effort, the EPCs at Tufts Medical Center, Southern California, and the University of Ottawa have jointly piloted and elaborated the process of assessing the need for updating selected CERs by comparing two methods, one developed at the Southern California EPC at the RAND Corporation (the RAND method)14 and one at the University of Ottawa (the Ottawa method).19 The RAND method combines external domain expert opinion, an abbreviated search, and a determination of whether the conclusions of the original CER remain valid. The Ottawa method relies on identifying qualitative and quantitative signals through a literature search based on the strategy used in the original report but limited to five major general-interest medical journals, supplemented with a small number of specialty journals. A quantitative signal is considered only if the original report includes a meta-analysis.

Building on this previous work,14,19 the EPCs of Southern California (RAND), the University of Ottawa, and the Emergency Care Research Institute initiated a joint collaboration to develop and implement a system of ongoing literature surveillance to identify triggers (or signals) for updating systematic reviews within AHRQ’s EPC program. The project is being coordinated across the three participating centers to ensure consistent application of methods.

This joint collaboration underscores the importance and usefulness of international harmonization of the updating process for maintaining, modifying, and disseminating the updated findings of CERs in the future.

References

  1. Chalmers I, Enkin M, Keirse MJ. Preparing and updating systematic reviews of randomized controlled trials of health care. Milbank Q 1993;71(3):411-437.
  2. Chalmers I, Haynes B. Reporting, updating, and correcting systematic reviews of the effects of health care. BMJ 1994;309(6958):862-865.
  3. Moher D, Tetzlaff J, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews. PLoS Med 2007;4(3):e78.
  4. Higgins JPT, Green S, Scholten RJPM. Chapter 3. Maintaining reviews: updates, amendments and feedback. In: Higgins JPT, Green S, editors. Cochrane Handbook For Systematic Reviews of Interventions Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available at: https://us.cochrane.org. Accessed May 15, 2011.
  5. Guirguis-Blake J, Calonge N, Miller T, et al. Current processes of the U.S. Preventive Services Task Force: refining evidence-based recommendation development. Ann Intern Med 2007;147(2):117-122.
  6. Agency for Healthcare Research and Quality. Effective Health Care Program. 2010. Available at: https://effectivehealthcare.ahrq.gov. Accessed February 23, 2011.
  7. Moher D, Tsertsvadze A, Tricco AC, et al. A systematic review identified few methods and strategies describing when and how to update systematic reviews. J Clin Epidemiol 2007;60(11):1095-1104.
  8. Merriam-Webster’s Collegiate Dictionary. 10th ed. Springfield, Massachusetts: Merriam-Webster. 1996.
  9. Moher D, Tsertsvadze A. Systematic reviews: when is an update an update? Lancet 2006;367(9514):881-883.
  10. Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009. Available at: https://us.cochrane.org.
  11. Garritty C, Tsertsvadze A, Tricco AC, et al. Updating systematic reviews: an international survey. PLoS One 2010;5(4):e9914.
  12. Sampson M, Shojania KG, McGowan J, et al. Surveillance search techniques identified the need to update systematic reviews. J Clin Epidemiol 2008;61(8):755-762.
  13. Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ 2005;331(7524):1064-1065.
  14. Shekelle P, Newberry S, Maglione M, et al. Assessment of the Need to Update Comparative Effectiveness Reviews: Report of an Initial Rapid Program Assessment (2005-2009). Rockville, MD: Agency for Healthcare Research and Quality. 2009.
  15. Relevo R, Balshem H. Finding Evidence for Comparing Medical Interventions. Methods Guide for Comparative Effectiveness Reviews. AHRQ Publication No. 11-EHC021-EF. Rockville, MD: Agency for Healthcare Research and Quality. January 2011. Available at: https://effectivehealthcare.ahrq.gov/products/methods-guidance-finding-evidence/methods/. Accessed May 15, 2011.
  16. Shekelle P, Eccles MP, Grimshaw JM, et al. When should clinical guidelines be updated? BMJ 2001;323(7305):155-157.
  17. Gartlehner G, West SL, Lohr KN, et al. Assessing the need to update prevention guidelines: a comparison of two methods. Int J Qual Health Care 2004;16(5):399-406.
  18. French SD, McDonald S, McKenzie JE, et al. Investing in updating: how do conclusions change when Cochrane systematic reviews are updated? BMC Med Res Methodol 2005;5:33.
  19. Shojania KG, Sampson M, Ansari MT, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med 2007;147(4):224-233.
  20. Garritty C, Tricco A, Sampson M, et al. Updating Systematic Reviews: the Policies and Practices of Health Care Organizations Involved in Evidence Synthesis. [MSc thesis]. University of Toronto; 2009.
  21. Sampson M, McGowan J, Cogo E, et al. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol 2009;62(9):944-952.
  22. DeAngelis CD, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. JAMA 2004;292(11):1363-1364.
  23. Manheimer E, Anderson D. Survey of public information about ongoing clinical trials funded by industry: evaluation of completeness and accessibility. BMJ 2002;325(7363):528-531.
  24. Bennett DA, Jull A. FDA: untapped source of unpublished trials. Lancet 2003;361(9367):1402-1403.
  25. Moher D, Pham B, Klassen TP, et al. What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol 2000;53(9):964-972.
  26. McAuley L, Pham B, Tugwell P, et al. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet 2000;356(9237):1228-1231.
  27. Bergerhoff K, Ebrahim S, Paletta G. Do we need to consider ‘in process citations’ for search strategies? 12th Cochrane Colloquium. October 26, 2004; Ottawa, Ontario, Canada.
  28. Barrowman NJ, Fang M, Sampson M, et al. Identifying null meta-analyses that are ripe for updating. BMC Med Res Methodol 2003;3(1):13.
  29. Lau J, Antman EM, Jimenez-Silva J, et al. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 1992;327(4):248-254.
  30. Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol 1995;48(1):45-57.
  31. Baum ML, Anish DS, Chalmers TC, et al. A survey of clinical trials of antibiotic prophylaxis in colon surgery: evidence against further use of no-treatment controls. N Engl J Med 1981;305(14):795-799.
  32. Chalmers T. Problems induced by meta-analyses. Stat Med 1991;10(6):971-979.
  33. Mullen B, Muellerleile P, Bryant B. Cumulative meta-analysis: a consideration of indicators of sufficiency and stability. Pers Soc Psychol Bull 2001;27:1450-1462.
  34. Moher D, Liberati A, Tetzlaff J, et al. The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 2009;6(7):e1000097.
  35. Rennie D, Flanagin A, Yank V. The contributions of authors. JAMA 2000;284(1):89-91.
  36. Claxton K, Ginnelly L, Sculpher M, et al. A pilot study on the use of decision theory and value of information analysis as part of the NHS Health Technology Assessment programme. Health Technol Assess 2004;8(31):1-103, iii.

