
Evaluating Practices and Developing Tools for Comparative Effectiveness Reviews of Diagnostic Test Accuracy: Methods for the Joint Meta-Analysis of Multiple Tests

Research Report Jan 16, 2013
Download PDF files for this report here.


People using assistive technology may not be able to fully access information in these files. For additional assistance, please contact us.

Note: This is one of four related projects designed to document the current standards and methods used in the meta-analysis of diagnostic tests, to validate newly proposed methods, to develop new statistical methods for meta-analyses of diagnostic tests, and to incorporate these insights into computer software that will be available to all EPCs and others conducting reviews of diagnostic tests. The other projects are available from the Effective Health Care Program website.

Structured Abstract

Background

Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test rather than comparing two or more tests that have been applied to the same patients in paired designs.

Objectives

We develop novel methods for the joint meta-analysis of studies of diagnostic accuracy that compare two or more tests on the same participants.

Methods

We extend the bivariate meta-analysis method proposed by Reitsma et al. (J Clin Epidemiol. 2005; 58[10]:982–90) and modified by others to simultaneously meta-analyze M ≥ 2 index tests. We derive and present formulas for calculating the within-study correlations between the true-positive rates (TPR, sensitivity) and between the false-positive rates (FPR, one minus specificity) of each test under study using data reported in the studies themselves. The proposed methods respect the natural grouping of data by studies, account for the within-study correlation between the TPR and the FPR of the tests (induced because tests are applied to the same participants), allow for between-study correlations between TPRs and FPRs (such as those induced by threshold effects), and calculate asymptotically correct confidence intervals for summary estimates and for differences between summary estimates. We develop algorithms in the frequentist and Bayesian settings, using approximate and discrete likelihoods to model testing data.
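For intuition, the within-study correlation between the observed TPRs of two tests comes from the cross-classification of the tests' results in the same diseased participants (the FPR analogue uses the non-diseased participants). A minimal sketch of that calculation, in Python, with illustrative counts and a hypothetical function name (not code from the report):

```python
import math

def tpr_correlation(n11, n10, n01, n00):
    """Within-study correlation between the observed TPRs of two tests
    applied to the same diseased participants.

    n11: positive on both tests;  n10: positive on test 1 only;
    n01: positive on test 2 only; n00: negative on both.
    """
    n = n11 + n10 + n01 + n00
    p1 = (n11 + n10) / n            # observed TPR of test 1
    p2 = (n11 + n01) / n            # observed TPR of test 2
    p12 = n11 / n                   # proportion positive on both tests
    cov = (p12 - p1 * p2) / n       # covariance of the two paired proportions
    var1 = p1 * (1 - p1) / n        # binomial variance of the TPR of test 1
    var2 = p2 * (1 - p2) / n        # binomial variance of the TPR of test 2
    return cov / math.sqrt(var1 * var2)

# Hypothetical counts for one study's diseased participants:
print(tpr_correlation(n11=30, n10=5, n01=8, n00=7))  # about 0.35
```

Because the two tests are read on the same participants, this correlation is typically positive; ignoring it (as separate meta-analyses do) misstates the precision of comparative estimates.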

Application

We applied the methods to a published meta-analysis of 11 studies on the screening accuracy of two markers for detecting trisomy 21 (Down syndrome) in liveborn infants: shortened humerus (arm bone) and shortened femur (thigh bone). Secondary analyses included an additional 19 studies that evaluated shortened femur only.

Findings

In the application, separate and joint meta-analyses yielded very similar summary estimates. For example, in models using the discrete likelihood, the summary TPR for a shortened humerus was 35.3 percent (95% credible interval [CrI]: 26.9 to 41.8%) with the novel joint method and 37.9 percent (95% CrI: 27.7 to 50.3%) when shortened humerus was analyzed on its own. The corresponding summary FPRs were 4.8 percent (95% CrI: 2.8 to 7.5%) and 4.8 percent (95% CrI: 3.0 to 7.4%).

However, when estimating comparative accuracy, joint meta-analysis yielded narrower interval estimates than separate meta-analyses of each test. In analyses using the discrete likelihood, the difference in the summary TPRs was 0 percent (95% CrI: −8.9 to 9.5%; positive values indicate a higher TPR for shortened humerus) with the novel method, versus 2.6 percent (95% CrI: −14.7 to 19.8%) with separate meta-analyses. The standard deviation of the posterior distribution of the TPR difference under the joint meta-analysis was half that under separate meta-analyses.
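The precision gain follows from the variance of a difference: Var(TPR1 − TPR2) = Var(TPR1) + Var(TPR2) − 2 Cov(TPR1, TPR2), so positively correlated summary estimates (which the joint model captures and separate analyses ignore) produce a narrower interval for the difference. A toy numeric check in Python, using illustrative values rather than the report's posteriors:

```python
import math

sd1 = sd2 = 0.08   # illustrative posterior SDs of the two summary TPRs
rho = 0.75         # illustrative correlation recovered by the joint model

# Separate meta-analyses implicitly treat the two estimates as independent:
sd_separate = math.sqrt(sd1**2 + sd2**2)                      # ~0.113

# The joint model subtracts the covariance term:
sd_joint = math.sqrt(sd1**2 + sd2**2 - 2 * rho * sd1 * sd2)   # ~0.057

print(sd_joint / sd_separate)  # 0.5: with equal SDs and rho = 0.75,
                               # the SD of the difference is halved
```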

Conclusions

The joint meta-analysis of multiple tests is feasible and may be preferable to separate analyses for estimating measures of comparative accuracy of diagnostic tests. Simulation and empirical analyses are needed to better define the role of the proposed methodology.

Journal Publications

Trikalinos TA, Hoaglin DC, Small KM, Terrin N, Schmid CH. Methods for the joint meta-analysis of multiple tests. Res Synth Methods. 2014. doi:10.1002/jrsm.1115

Project Timeline

Evaluating Practices and Developing Tools for Comparative Effectiveness Reviews of Diagnostic Test Accuracy: Methods for the Joint Meta-Analysis of Multiple Tests

Feb 25, 2011
Topic Initiated
Jan 16, 2013
Research Report
Page last reviewed November 2017
Page originally created November 2017

Internet Citation: Research Report: Evaluating Practices and Developing Tools for Comparative Effectiveness Reviews of Diagnostic Test Accuracy: Methods for the Joint Meta-Analysis of Multiple Tests. Content last reviewed November 2017. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/diagnostic-tests-accuracy/research
