The free Article Alert service delivers a weekly email to your inbox containing the most recently published articles on all aspects of systematic review and comparative effectiveness review methodologies.
- Covers methodology research literature across medicine, psychology, education, and other fields
- Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
- Averages 20 citations per week (pertinent citations screened from more than 1,500 candidate citations weekly)
- Saves you time AND keeps you up to date on the latest research
Article Alert records include:
- Citation information/abstract
- Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
- Free Full Text: PubMed Central or publisher link (when available)
- RIS file for importing all citations into EndNote, RefWorks, Zotero, or other citation software
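For readers unfamiliar with the RIS files mentioned above: RIS is a plain-text tagged citation format in which each line carries a two-letter tag (`TY`, `TI`, `AU`, ...) followed by `  - ` and a value, with `ER` closing each record. As a minimal sketch (the record below is an invented example, not an actual Article Alert citation), a few lines of Python can read such a record:

```python
# Minimal sketch of parsing one RIS record (tag  - value lines, ending in ER).
# The citation values below are invented for illustration only.
ris_record = """TY  - JOUR
TI  - An example article title
AU  - Doe, J.
PY  - 2015
ER  - """

def parse_ris(text):
    """Collect RIS tag/value pairs into a dict of lists (tags such as AU can repeat)."""
    fields = {}
    for line in text.splitlines():
        if len(line) >= 6 and line[2:6] == "  - ":
            tag, value = line[:2], line[6:].strip()
            fields.setdefault(tag, []).append(value)
    return fields

fields = parse_ris(ris_record)
```

Citation managers such as EndNote, RefWorks, and Zotero perform essentially this mapping when an RIS file is imported.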
To sign up for free email updates of Article Alert, contact the Scientific Resource Center at firstname.lastname@example.org.
The Article Alert for the week of October 12, 2015 (sample articles)
Stegert M, Kasenda B, von Elm E, You JJ, Blümle A, Tomonaga Y, Saccilotto R, Amstutz A, Bengough T, Briel M, et al. An analysis of protocols and publications suggested that most discontinuations of clinical trials were not based on preplanned interim analyses or stopping rules. J Clin Epidemiol. Epub 2015 Jun 4. PMID: 26361993.
Objectives: To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications.
Study Design and Setting: We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada.
Results: Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) prespecified stopping rules, and 257 (28.7%) prespecified DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for reasons such as poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued due to early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%).
Conclusion: Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Copyright © 2015 Elsevier Inc. All rights reserved.
- DOI: http://dx.doi.org/10.1016/j.jclinepi.2015.05.023
- PubMed: http://www.ncbi.nlm.nih.gov/pubmed/26361993
Driessen E, Hollon SD, Bockting CL, Cuijpers P, Turner EH. Does Publication Bias Inflate the Apparent Efficacy of Psychological Treatment for Major Depressive Disorder? A Systematic Review and Meta-Analysis of U.S. National Institutes of Health-Funded Trials. PLoS One. 2015 Sep 30;10(9):e0137864. PMID: 26422604.
Background: The efficacy of antidepressant medication has been shown empirically to be overestimated due to publication bias, but this has only been inferred statistically with regard to psychological treatment for depression. We assessed directly the extent of study publication bias in trials examining the efficacy of psychological treatment for depression.
Methods and Findings: We identified U.S. National Institutes of Health grants awarded to fund randomized clinical trials comparing psychological treatment to control conditions or other treatments in patients diagnosed with major depressive disorder for the period 1972-2008, and we determined whether those grants led to publications. For studies that were not published, data were requested from investigators and included in the meta-analyses. Thirteen (23.6%) of the 55 funded grants that began trials did not result in publications, and two others never started. Among comparisons to control conditions, adding unpublished studies (Hedges' g = 0.20; 95% CI, -0.11 to 0.51; k = 6) to published studies (g = 0.52; 95% CI, 0.37 to 0.68; k = 20) reduced the psychotherapy effect size point estimate (g = 0.39; 95% CI, 0.08 to 0.70) by 25%. Moreover, these findings may overestimate the "true" effect of psychological treatment for depression as outcome reporting bias could not be examined quantitatively.
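The reported 25% reduction follows directly from the point estimates quoted in the abstract: the drop from g = 0.52 (published studies only) to g = 0.39 (published plus unpublished) is exactly a quarter of the original estimate. A quick check, using only figures from the abstract:

```python
# Effect-size point estimates quoted in the abstract (Hedges' g).
g_published = 0.52  # published comparisons to control conditions (k = 20)
g_combined = 0.39   # published plus unpublished studies combined

# Relative reduction in the point estimate after adding unpublished studies.
reduction = (g_published - g_combined) / g_published
print(round(reduction * 100))  # → 25
```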
Conclusion: The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy. Both are efficacious but not to the extent that the published literature would suggest. Funding agencies and journals should archive both original protocols and raw data from treatment trials to allow the detection and correction of outcome reporting bias. Clinicians, guidelines developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression.
- FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4589340/pdf/pone.0137864.pdf
- DOI: http://dx.doi.org/10.1371/journal.pone.0137864
- PubMed: http://www.ncbi.nlm.nih.gov/pubmed/26422604
- PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4589340
Kahan BC, Rehal S, Cro S. Risk of selection bias in randomised trials. Trials. 2015 Sep 10;16(1):405. PMID: 26357929.
Background: Selection bias occurs when recruiters selectively enrol patients into the trial based on what the next treatment allocation is likely to be. This can occur, even when appropriate allocation concealment is used, if recruiters can guess the next treatment assignment with some degree of accuracy. It typically arises in unblinded trials when restricted randomisation is implemented to force the number of patients in each arm or within each centre to be the same. Several methods to reduce the risk of selection bias have been suggested; however, it is unclear how often these techniques are used in practice.
Methods: We performed a review of published trials which were not blinded to assess whether they utilised methods for reducing the risk of selection bias. We assessed the following techniques: (a) blinding of recruiters; (b) use of simple randomisation; (c) avoidance of stratification by site when restricted randomisation is used; (d) avoidance of permuted blocks if stratification by site is used; and (e) incorporation of prognostic covariates into the randomisation procedure when restricted randomisation is used. We included parallel group, individually randomised phase III trials published in four general medical journals (BMJ, Journal of the American Medical Association, The Lancet, and New England Journal of Medicine) in 2010.
Results: We identified 152 eligible trials. Most trials (98%) provided no information on whether recruiters were blind to previous treatment allocations. Only 3% of trials used simple randomisation; 63% used some form of restricted randomisation, and 35% did not state the method of randomisation. Overall, 44% of trials were stratified by site of recruitment; 27% were not, and 29% did not report this information. Most trials that did stratify by site of recruitment used permuted blocks (58%), and only 15% reported using random block sizes. Many trials that used restricted randomisation also included prognostic covariates in the randomisation procedure (56%).
Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials that did provide details on the randomisation procedure were at risk of selection bias due to poorly chosen randomisation methods. Techniques to reduce the risk of selection bias should be more widely implemented.