Article Alert

The free Article Alert service delivers a weekly email to your inbox containing the most recently published articles on all aspects of systematic review and comparative effectiveness review methodologies.

  • Covers methodology research literature in medicine, psychology, education, and other fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
  • Averages 20 pertinent citations per week, screened from more than 1,500 candidate citations
  • Saves you time AND keeps you up to date on the latest research


Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file for importing all citations into EndNote, RefWorks, Zotero, or other citation management software

To sign up for free email updates of Article Alert, contact the Scientific Resource Center Library at methods@epc-src.org.

 

The Article Alert for the week of June 22, 2015 (sample articles)

Rücker G, Schwarzer G. Automated drawing of network plots in network meta-analysis. Res Synth Methods. Epub 2015 Jun 9. PMID: 26060934.

In systematic reviews based on network meta-analysis, the network structure should be visualized. Network plots often have been drawn by hand using generic graphical software. A typical way of drawing networks, also implemented in statistical software for network meta-analysis, is a circular representation, often with many crossing lines. We use methods from graph theory in order to generate network plots in an automated way. We give a number of requirements for graph drawing and present an algorithm that fits prespecified ideal distances between the nodes representing the treatments. The method was implemented in the function netgraph of the R package netmeta and applied to a number of networks from the literature. We show that graph representations with a small number of crossing lines are often preferable to circular representations.
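
For readers who want to try this kind of plot, here is a minimal R sketch using the netgraph function of the netmeta package described in the abstract. The call uses the Senn2013 example dataset bundled with netmeta purely for illustration; column names and defaults reflect recent package versions and may differ in yours.

    # Load netmeta and one of its bundled example datasets
    library(netmeta)
    data(Senn2013)

    # Fit a network meta-analysis: TE/seTE are the per-comparison effect
    # estimates and standard errors; treat1/treat2 name the two arms
    net1 <- netmeta(TE, seTE, treat1, treat2, studlab,
                    data = Senn2013, sm = "MD")

    # Draw the network plot; node positions are determined automatically
    netgraph(net1)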

 

Langan D, Higgins JP, Simmonds M. An empirical comparison of heterogeneity variance estimators in 12,894 meta-analyses. Res Synth Methods. Epub 2015 Jun 6.

Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and Hartung and Makambi. We compared the impact of these five methods on the results of 12,894 meta-analyses extracted from the Cochrane Database of Systematic Reviews. We compared the methods in terms of the following: (1) the extent of heterogeneity, expressed as an I2 statistic; (2) the overall effect estimate; (3) the precision of the overall effect estimate; and (4) p-values from tests of the null hypothesis of no effect. Results suggest that, in some meta-analyses, I2 estimates differ by more than 50% when different heterogeneity estimators are used. Conclusions naively based on statistical significance (at a 5% level) were discordant for at least one pair of estimators in 7.5% of meta-analyses, indicating that the choice of heterogeneity estimator could affect the conclusions of a meta-analysis. These findings imply that using a single estimate of heterogeneity may lead to non-robust results in some meta-analyses, and researchers should consider using alternatives to the DerSimonian and Laird method.
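
To see how much the choice of estimator can matter for a single meta-analysis, here is a minimal R sketch; it is an illustration, not the authors' code. It uses the metafor package, which implements the DerSimonian and Laird ("DL"), restricted maximum likelihood ("REML"), Paule and Mandel ("PM"), and Sidik and Jonkman ("SJ") estimators; the Hartung and Makambi estimator is omitted because metafor does not offer it under that name.

    # Compare heterogeneity estimators on metafor's bundled BCG vaccine data
    library(metafor)

    # Compute log risk ratios and their sampling variances
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                  ci = cpos, di = cneg, data = dat.bcg)

    # Refit the same random-effects model under four tau^2 estimators
    for (m in c("DL", "REML", "PM", "SJ")) {
      fit <- rma(yi, vi, data = dat, method = m)
      cat(sprintf("%-4s tau^2 = %.4f  I^2 = %5.1f%%  estimate = %.3f  p = %.4f\n",
                  m, fit$tau2, fit$I2, as.numeric(coef(fit)), fit$pval))
    }

Printing tau^2, I^2, the pooled estimate, and its p-value side by side mirrors the four quantities the paper compares across estimators.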

 

Schmidt FL. History and development of the Schmidt-Hunter meta-analysis methods. Res Synth Methods. Epub 2015 Jun 10.

In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas.