


Identifying Signals for Updating Systematic Reviews: A Comparison of Two Methods

Research Report Jun 15, 2011

Structured Abstract


Background

Methods for assessing whether a systematic review needs to be updated have been published, but the agreement among them is unclear.


Objectives

To compare two methods for assessing the need to update an evidence review, using three evidence reports on the effects of omega-3 fatty acids on cancer, cognition and aging, and cardiovascular diseases (with separate analyses for fish oil and alpha-linolenic acid). The RAND method combines a targeted literature search with assessments by content experts. The Ottawa method relies on quantitative and qualitative assessment of study results from a similar targeted search.

Data Sources

A MEDLINE search of a limited set of journals (five pivotal general medical journals and a small number of specialty journals) was conducted, covering the period from 1 year before the release of the original reports and using their original search strategies.


Review Methods

The search results were screened using the original eligibility criteria. Study-level data and findings from existing systematic reviews, randomized controlled trials, and large observational studies addressing the original key questions were abstracted. Using the RAND method, we contacted experts (including members of the original technical expert panels and the original peer reviewers) and sought their opinions on the status of the original reports and any new references. The literature review results and expert opinions were combined to determine the need for updating against predetermined criteria. Using a modification of the Ottawa method, new trial data were meta-analyzed with the original meta-analysis results, and a quantitative signal for updating was based on statistical differences from the original meta-analyses. For outcomes without existing meta-analyses, qualitative signals were sought, such as differences in characterizations of effectiveness, new information about harms, and caveats about previously reported findings. Agreement between the RAND and Ottawa methods was assessed for each report with the kappa statistic.
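The kappa computation used to quantify agreement between the two methods can be sketched as below. This is an illustrative implementation of Cohen's kappa for two raters over binary update signals; the signal lists are invented for illustration and are not data from the report.

```python
# Sketch: Cohen's kappa for agreement between two "update needed" methods.
# The signal lists below are hypothetical, not results from the report.

def cohens_kappa(a, b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    categories = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical per-Key-Question signals (1 = update needed, 0 = not).
rand_signals   = [1, 1, 0, 0, 1, 0, 1, 0]
ottawa_signals = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(rand_signals, ottawa_signals), 2))  # 0.5
```

A kappa near 0 indicates agreement no better than chance; a kappa of 1.0 indicates perfect agreement, matching the "almost perfect" label used for cognitive function in the Results.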


Results

Overall agreement between the two methods ranged from “nonexistent” (kappa = 0.19, for fish oil and cardiovascular disease) to “almost perfect” (kappa = 1.0, for cognitive function). Many of the disagreements arose when a Key Question in the original review had no evidence and the update identified some: in these situations, the RAND method produced a positive signal for updating while the Ottawa method produced a negative signal. A sensitivity analysis that reclassified these situations as agreement between the two methods yielded much better estimates: for three of the four conditions, agreement was “substantial” to “almost perfect,” and overall agreement was “substantial.”
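The reclassification step in the sensitivity analysis can be sketched as follows. Both the per-Key-Question signals and the kappa helper are invented for illustration; the numbers do not correspond to the report's actual kappa values.

```python
# Hypothetical sketch of the sensitivity analysis: Key Questions where the
# original review had no evidence but the update found some (RAND signals
# "update", Ottawa does not) are reclassified as agreement.

def cohens_kappa(a, b):
    """Cohen's kappa for two raters over the same items."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# (rand, ottawa, original_review_had_no_evidence) -- invented values.
items = [(1, 1, False), (1, 0, True), (0, 0, False), (1, 0, True),
         (1, 1, False), (0, 0, False), (1, 1, False), (0, 1, False)]

rand   = [r for r, o, ne in items]
ottawa = [o for r, o, ne in items]
print(round(cohens_kappa(rand, ottawa), 2))  # 0.25, before reclassification

# Reclassify "no original evidence" items as agreement by aligning
# Ottawa's signal with RAND's for those Key Questions.
ottawa_adj = [r if ne else o for r, o, ne in items]
print(round(cohens_kappa(rand, ottawa_adj), 2))  # 0.71, after
```

The point of the sketch is only the mechanism: treating the no-evidence disagreements as agreement raises the observed-agreement term, which raises kappa.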


Conclusions

The RAND method and the modified Ottawa method agree reasonably well in their assessments of the need to update reviews. Either method alone, or the two in combination, may be considered an appropriate tool. Future research should confirm these conclusions in a larger cohort of reviews and assess the predictive validity of the methods against actual updates.

Project Timeline

Identifying Signals for Updating Systematic Reviews: A Comparison of Two Methods

Dec 6, 2010
Topic Initiated
Jun 15, 2011
Research Report
Page last reviewed November 2017
Page originally created November 2017

Internet Citation: Research Report: Identifying Signals for Updating Systematic Reviews: A Comparison of Two Methods. Content last reviewed November 2017. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
