Powered by the Evidence-based Practice Centers


Surveillance and Identification of Signals for Updating Systematic Reviews: Implementation and Early Experience

Research Report Jun 7, 2013



Structured Abstract

Background

The question of how to determine when a systematic review needs to be updated is of considerable importance. Changes in the evidence can have significant implications for clinical practice guidelines and for clinical and consumer decision-making, both of which depend on up-to-date systematic reviews as their foundation. Methods have been developed for assessing signals of the need for updating, but these methods have been applied only in studies designed to demonstrate and refine them, not as an operational component of a systematic review program.

The Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program commissioned RAND's Southern California Evidence-based Practice Center (SCEPC) and the University of Ottawa Evidence-based Practice Center (UOEPC), with assistance from the ECRI EPC, to develop and implement a surveillance process for quickly identifying Comparative Effectiveness Reviews (CERs) in need of updating.

Approach

We established a surveillance program that implemented and refined a process to assess the need for updating CERs. The process combined methods developed by the SCEPC and the UOEPC for prior projects on identifying signals for updating: an abbreviated literature search, abstraction of the study conditions and findings for each new included study, solicitation of expert judgments on the currency of the original conclusions, and an assessment of whether the new findings provided a signal according to the Ottawa Method and/or the RAND Method, on a conclusion-by-conclusion basis. Lastly, an overall summary assessment was made that classified each CER as being of high, medium, or low priority for updating. If a CER was deemed to be a low or medium priority for updating, the process would be repeated 6 months later; if the priority for updating was deemed high, the CER would be withdrawn from subsequent 6-month assessments.
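The triage logic described above — a conclusion-by-conclusion signal assessment rolled up into a CER-level priority, with high-priority CERs leaving the 6-month cycle — can be sketched in code. This is an illustrative sketch only: the priority labels and the re-assessment rule come from the text, while the rollup rule (any high-signal conclusion makes the CER high priority) and all function and variable names are assumptions, not part of the report's method.

```python
# Illustrative sketch of the surveillance triage described above.
# ASSUMPTIONS: the rollup rule and all names below are hypothetical;
# only the priority labels and the 6-month cycle come from the report.

HIGH, MEDIUM, LOW = "high", "medium", "low"
_RANK = {LOW: 0, MEDIUM: 1, HIGH: 2}


def summarize_priority(conclusion_signals):
    """Roll per-conclusion signal assessments up to one CER-level priority.

    conclusion_signals: list of "high"/"medium"/"low" labels, one per
    conclusion in the original CER. Assumed rule: take the strongest signal.
    """
    if not conclusion_signals:
        return LOW
    return max(conclusion_signals, key=_RANK.__getitem__)


def next_action(priority):
    """High-priority CERs are withdrawn from the 6-month cycle for updating;
    low- and medium-priority CERs are re-assessed 6 months later."""
    return "withdraw_for_update" if priority == HIGH else "reassess_in_6_months"
```

For example, a CER whose conclusions yielded signals of `["low", "medium", "high"]` would be classified high priority and withdrawn from subsequent 6-month assessments, while one with only low and medium signals would re-enter the cycle.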

Results

Between June 2011 and June 2012, we established a surveillance process and completed the evaluation of 14 CERs. Of the 14 CERs, 2 were classified as high priority, 3 as medium priority, and 9 as low priority. Of the 6 CERs released prior to 2010 (that is, more than 18 months before the start of the program), 2 were judged high priority, 2 medium priority, and 2 low priority for updating. We have shown that it is both useful and feasible to conduct such surveillance, in real time, across a program that produces a large number of systematic reviews on diverse topics.

Project Timeline

Surveillance and Identification of Signals for Updating Systematic Reviews: Implementation and Early Experience

Aug 7, 2012
Topic Initiated
Jun 7, 2013
Research Report
Page last reviewed November 2017
Page originally created November 2017

Internet Citation: Research Report: Surveillance and Identification of Signals for Updating Systematic Reviews: Implementation and Early Experience. Content last reviewed November 2017. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/systematic-review-updates/research
