Department of Health and Human Services

Article Alert

The free Article Alert service delivers a weekly email digest of the most recently published articles on all aspects of systematic review and comparative effectiveness review methodology.

  • Covers methodology research literature across medicine, psychology, education, and other fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
  • Averages 20 citations per week, screened for pertinence from more than 1,500 citations weekly
  • Saves you time AND keeps you up to date on the latest research

Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file to upload all citations to EndNote, RefWorks, Zotero, or other citation software
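For readers unfamiliar with the RIS format mentioned above: an RIS file is a plain-text list of tagged records that citation managers such as EndNote, RefWorks, and Zotero can import directly. A minimal record looks like the following (the field values here are placeholders, not an actual Article Alert entry):

```
TY  - JOUR
AU  - Lastname, Firstname
TI  - Title of the article
JO  - Journal Name
PY  - 2014
VL  - 1
IS  - 1
SP  - 1
EP  - 10
ER  - 
```

Each record opens with a `TY` (reference type) tag and closes with `ER`; the weekly file simply concatenates one such record per citation, so the entire alert can be imported in a single step.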

To sign up for free email updates of Article Alert, contact the Scientific Center Resource Library at


The Article Alert for the week of September 29, 2014 (sample articles)

Madigan D, Stang PE, Berlin JA, Schuemie M, Overhage JM, Suchard MA, DuMouchel W, Hartzema AG, Ryan PB. A Systematic Statistical Approach to Evaluating Evidence from Observational Studies. Annual Review of Statistics and Its Application. 2014 Jan;1(1):11-39.

Threats to the validity of observational studies on the effects of interventions raise questions about the appropriate role of such studies in decision making. Nonetheless, scholarly journals in fields such as medicine, education, and the social sciences feature many such studies, often with limited exploration of these threats, and the lay press is rife with news stories based on these studies. Consumers of these studies rely on the expertise of the study authors to conduct appropriate analyses, and on the thoroughness of the scientific peer-review process to check the validity, but the introspective and ad hoc nature of the design of these analyses appears to elude any meaningful objective assessment of their performance. Here, we review some of the challenges encountered in observational studies and review an alternative, data-driven approach to observational study design, execution, and analysis. Although much work remains, we believe this research direction shows promise.


Meader N, King K, Llewellyn A, Norman G, Brown J, Rodgers M, Moe-Byrne T, Higgins J, Sowden A, Stewart G. A checklist designed to aid consistency and reproducibility of GRADE assessments: development and pilot validation. Syst Rev. 2014 Jul 24;3(1):82. PMID: 25056145.

Background: The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is widely implemented in health technology assessment and guideline development organisations throughout the world. GRADE provides a transparent approach to reaching judgements about the quality of evidence on the effects of a health care intervention, but is complex and therefore challenging to apply in a consistent manner.
Methods: We developed a checklist to guide the researcher to extract the data required to make a GRADE assessment. We applied the checklist to 29 meta-analyses of randomised controlled trials on the effectiveness of health care interventions. Two reviewers used the checklist for each paper and used these data to rate the quality of evidence for a particular outcome.
Results: For most (70%) checklist items, there was good agreement between reviewers. The main problems were for items relating to indirectness, where considerable judgement is required.
Conclusions: There was consistent agreement between reviewers on most items in the checklist. The use of this checklist may be an aid to improving the consistency and reproducibility of GRADE assessments, particularly for inexperienced users or in rapid reviews without the resources to conduct assessments by two researchers independently.


Thomas J, O'Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014 Jun 20;3(1):67. PMID: 24950727.

Background: Systematic reviews that address policy and practice questions in relation to complex interventions frequently need not only to assess the efficacy of a given intervention but to identify which intervention - and which intervention components - might be most effective in particular situations. Here, intervention replication is rare, and commonly used synthesis methods are less useful when the focus of analysis is the identification of those components of an intervention that are critical to its success.
Methods: Having identified initial theories of change in a previous analysis, we explore the potential of qualitative comparative analysis (QCA) to assist with complex syntheses through a worked example. Developed originally in the area of political science and historical sociology, a QCA aims to identify those configurations of participant, intervention and contextual characteristics that may be associated with a given outcome. Analysing studies in these terms facilitates the identification of necessary and sufficient conditions for the outcome to be obtained. Since QCA is predicated on the assumption that multiple pathways might lead to the same outcome and does not assume a linear additive model in terms of changes to a particular condition (that is, it can cope with 'tipping points' in complex interventions), it appears not to suffer from some of the limitations of the statistical methods often used in meta-analysis.
Results: The worked example shows how the QCA reveals that our initial theories of change were unable to distinguish between 'effective' and 'highly effective' interventions. Through the iterative QCA process, other intervention characteristics are identified that better explain the observed results.
Conclusions: QCA is a promising alternative (or adjunct), particularly to the standard fall-back of a 'narrative synthesis' when a quantitative synthesis is impossible, and should be considered when reviews are broad and heterogeneity is significant. There are very few examples of its use with systematic review data at present, and further methodological work is needed to establish optimal conditions for its use and to document process, practice, and reporting standards.
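The core logic of crisp-set QCA described in the Methods above, identifying conditions that are necessary or sufficient for an outcome across cases, can be sketched in a few lines. The sketch below is purely illustrative and is not drawn from the article: the cases, condition names (`tailored`, `peer_led`, `incentive`), and outcomes are invented for demonstration.

```python
# Illustrative crisp-set QCA sketch. Each hypothetical "case" (e.g. one study
# in a review) records binary intervention/context conditions and whether the
# intervention was effective (outcome = 1).
cases = [
    ({"tailored": 1, "peer_led": 1, "incentive": 0}, 1),
    ({"tailored": 1, "peer_led": 0, "incentive": 1}, 1),
    ({"tailored": 0, "peer_led": 1, "incentive": 1}, 0),
    ({"tailored": 1, "peer_led": 1, "incentive": 1}, 1),
    ({"tailored": 0, "peer_led": 0, "incentive": 0}, 0),
]

def necessary(cond):
    # A condition is necessary if every case showing the outcome has it.
    return all(conds[cond] == 1 for conds, outcome in cases if outcome == 1)

def sufficient(cond):
    # A condition is sufficient if every case having it shows the outcome.
    return all(outcome == 1 for conds, outcome in cases if conds[cond] == 1)

for c in ["tailored", "peer_led", "incentive"]:
    print(f"{c}: necessary={necessary(c)}, sufficient={sufficient(c)}")
```

In this toy data set only `tailored` is both necessary and sufficient for effectiveness, while `peer_led` and `incentive` are neither; real QCA applications extend this idea to configurations of multiple conditions (and, in fuzzy-set variants, to degrees of membership) rather than single conditions.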