
Transparency of Reporting Requirements: Effectiveness of Treatment Options for the Prevention of Complications and Treatment of Symptoms of Diabetic Peripheral Neuropathy

Research Protocol Sep 6, 2016

Background

The underlying principle of systematic reviews is the consideration of all relevant available evidence. As standards have developed on how to conduct and report systematic reviews, an Achilles heel has remained: are we really considering all available evidence? Missing relevant information in systematic reviews, because of reporting biases such as publication bias and outcome reporting bias, may lead to biased or outright incorrect conclusions. Mandating that information about trials be reported through registries, such as ClinicalTrials.gov, has been proposed as a way to assess and possibly ameliorate the effects of reporting bias.

ClinicalTrials.gov is administered by the National Library of Medicine. In 2007, the legal requirements were expanded to ensure registration of all trials and to enable public searching of the database. As of 2008, basic summaries of trial results must be submitted for certain applicable trials, including phase 2-4 drug, biologic, or device trials. ClinicalTrials.gov captures several data elements, including the number of enrolled and completed trial participants, participant characteristics, summary results for pre-specified primary and secondary outcome measures, and adverse events by organ system.

Objectives

Study Purpose: To address questions about how to access and integrate information from ClinicalTrials.gov into systematic reviews, as well as the impact of such inclusion on the conclusions of the reviews.

We will conduct a pilot study and prepare a report that addresses the following questions:

  1. Which studies were in the EPC report alone, in ClinicalTrials.gov alone, or in both?
  2. For the completed studies that were in both:
    1. What were the differences, if any, in pre-specified outcome measures, statistical plan and size of the study reported in the peer reviewed literature vs. ClinicalTrials.gov?
    2. Were results reported in ClinicalTrials.gov for any of the studies? If they were, what were the differences, if any, in the results reported in the peer reviewed literature vs. ClinicalTrials.gov?
  3. For studies in ClinicalTrials.gov that were discontinued or not yet completed:
    1. For the discontinued studies, were there reasons given for discontinuation? If so, what were they?
    2. For studies that are ongoing but not completed, what was the date of initiation of the studies? Are the studies proceeding according to the original schedule or is there information in ClinicalTrials.gov indicating a delay in completion? If there is a delay in completion, what is the reason given?
  4. What is the impact on the conclusions of the EPC report with and without the information from ClinicalTrials.gov? What would be the impact on the strength of evidence (including impact of knowledge of outcomes measured in studies but not reported in the peer reviewed literature)?

We will conduct this study within our review "Effectiveness of Treatment Options for the Prevention of Complications and Treatment of Symptoms of Diabetic Peripheral Neuropathy" (DPN), which began at the end of September 2015. The DPN review seeks to address two key questions with sub-questions. For this project we will focus on the following sub-question:

Key Question 2a: What is the safety and effectiveness of pharmacologic treatment options (antidepressants, antiepileptics, and topical and subcutaneous treatments) to improve the symptoms of diabetic peripheral neuropathy and health-related quality of life among adults age 18 or older with type 1 or type 2 diabetes mellitus?

Methods

Data sources and search methods

Data sources will include ClinicalTrials.gov and the other standard electronic databases, including Medline, that we will use for the DPN review. Searching of ClinicalTrials.gov is not straightforward.1 We will search broadly and apply the same eligibility criteria as in the DPN project, with two independent screeners reviewing the results.

For our preliminary search for this proposal, we first used the advanced search function with the condition term diabetic peripheral neuropathy [DISEASE], which returned 330 studies. To estimate the number completed, we then added criteria to limit by study design and status and to crudely exclude pain studies: "Interventional" [STUDY-TYPES] AND NOT ("not yet recruiting" OR "terminated" OR "withdrawn") [OVERALL-STATUS] AND diabetic peripheral neuropathy [DISEASE] AND NOT pain [DISEASE]. This yielded 97 studies (64 completed and 33 ongoing), 50 of which were applicable to KQ2a (36 completed and 14 ongoing). Prior work has suggested that about half of the trials in ClinicalTrials.gov will also be in the peer reviewed literature.2 [Note that we excluded pain studies from this preliminary search because, for the DPN project, we are proposing to use existing systematic reviews. Given that the integration of systematic reviews includes consideration of the full body of evidence,3,4 we may consider running the pilot with the trials included for this subquestion as well.]

We plan to further develop our preliminary search when the search for the DPN review is finalized.
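
To make the preliminary screen reproducible, the Boolean logic above can also be applied programmatically to records exported from ClinicalTrials.gov. The sketch below is illustrative only; the field names (study_type, overall_status, conditions) are placeholders that will depend on the export format actually used, not the registry's official schema.

    # Illustrative screen mirroring the preliminary ClinicalTrials.gov query.
    # Field names are placeholders and depend on the export format used.
    EXCLUDED_STATUSES = {"not yet recruiting", "terminated", "withdrawn"}

    def passes_preliminary_screen(record: dict) -> bool:
        """Interventional studies for diabetic peripheral neuropathy,
        excluding the statuses above and (crudely) pain studies."""
        conditions = " ".join(record.get("conditions", [])).lower()
        return (
            record.get("study_type", "").lower() == "interventional"
            and record.get("overall_status", "").lower() not in EXCLUDED_STATUSES
            and "diabetic peripheral neuropathy" in conditions
            and "pain" not in conditions
        )

    # Hypothetical example records
    example = [
        {"study_type": "Interventional", "overall_status": "Completed",
         "conditions": ["Diabetic Peripheral Neuropathy"]},
        {"study_type": "Interventional", "overall_status": "Withdrawn",
         "conditions": ["Diabetic Peripheral Neuropathy"]},
    ]
    print([passes_preliminary_screen(r) for r in example])  # [True, False]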

Study selection and matching with peer-reviewed publications

Studies will be eligible if identified through standard searching or registered in ClinicalTrials.gov. We will limit to phase 3 and 4 trials with completion dates from September 2008 to the present. Trials will be excluded if they do not meet the eligibility criteria for the DPN systematic review.

We will match studies identified in ClinicalTrials.gov using their embedded PubMed citations and the National Library of Medicine's National Clinical Trial (NCT) identifier listed in published articles. Notably, in a study by Zarin and colleagues in 2011, of the 2324 ClinicalTrials.gov results entries, only 14% were linked to a PubMed citation through the NCT number.4,5 Where we do not identify a match using the NCT identifier, we will manually search Medline using terms for the interventions, principal investigator, and date of trial completion as search criteria.6 Based on methods developed by Hartung and colleagues, we will consider a PubMed publication to match a ClinicalTrials.gov-registered trial if the intervention was the same AND one or more groups in the trial had an identical number of study participants.6 We will use all publications that match each trial.
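
A minimal sketch of the Hartung matching rule, assuming trial and publication records have already been reduced to an intervention name and a list of per-arm sample sizes (both field names are illustrative):

    # Hartung et al. heuristic: same intervention AND at least one group with
    # an identical number of participants. Record structure (intervention,
    # arm_sizes) is an assumption for illustration.
    def is_candidate_match(registry_trial: dict, publication: dict) -> bool:
        same_intervention = (
            registry_trial["intervention"].strip().lower()
            == publication["intervention"].strip().lower()
        )
        shared_arm_size = bool(
            set(registry_trial["arm_sizes"]) & set(publication["arm_sizes"])
        )
        return same_intervention and shared_arm_size

    # Hypothetical example
    trial = {"intervention": "Pregabalin", "arm_sizes": [82, 85]}
    paper = {"intervention": "pregabalin", "arm_sizes": [85, 80]}
    print(is_candidate_match(trial, paper))  # True (shared arm of 85)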

We will create codes to distinguish trials matched to a Medline publication via the NCT identifier, trials matched through manual searches, and trials without matches.

Data extraction

Two team members will extract data from ClinicalTrials.gov and the matched publications. We will extract the elements listed in Table 1 into pre-designed data extraction forms on SRDR. Project staff will complete two sets of evidence tables: the first set will include only data from ClinicalTrials.gov, and the second set will also include data from the matched publications, when available. (A sketch of a corresponding structured record follows Table 1.)

Table 1. Preliminary data extraction elements

Item | Data Extraction Elements
Trial design | Design (randomized?); number of groups; trial start date; trial end date
Trial discontinuation | Early discontinuation? Reason for discontinuation
Ongoing trial | Any delays? Reasons for delays (if any)
Population | Total enrollment; sample size in each arm; drop-outs; participants included in analysis for each outcome
Intervention and comparator | Description of the intervention and comparator
Outcomes | Description of pre-specified primary outcomes; number of primary outcomes; description of secondary outcomes
Analysis | Description of the pre-specified statistical analysis plan
Results of primary and secondary outcomes | Results, direction and magnitude, if any were reported
Adverse outcomes |
Funding | Funding source and role
History of changes | Summary of changes and rationale for them
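
To keep the two sets of evidence tables aligned, the extraction elements in Table 1 could be captured in a simple structured record, one per source (registry entry or matched publication). The field names below are illustrative placeholders and are not the SRDR form itself.

    # Illustrative extraction record mirroring Table 1; field names are
    # placeholders, not the SRDR form.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ExtractionRecord:
        source: str                      # "ClinicalTrials.gov" or "publication"
        randomized: Optional[bool] = None
        number_of_groups: Optional[int] = None
        start_date: Optional[str] = None
        end_date: Optional[str] = None
        discontinued_early: Optional[bool] = None
        discontinuation_reason: Optional[str] = None
        total_enrollment: Optional[int] = None
        arm_sizes: list = field(default_factory=list)
        intervention: Optional[str] = None
        comparator: Optional[str] = None
        primary_outcomes: list = field(default_factory=list)
        secondary_outcomes: list = field(default_factory=list)
        statistical_plan: Optional[str] = None
        results_summary: Optional[str] = None
        adverse_events: Optional[str] = None
        funding_source: Optional[str] = None
        history_of_changes: Optional[str] = None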

We will discuss the comparison in terms of the information available from ClinicalTrials.gov; the information could be the same as in published reports or could include additional studies, additional or different outcomes, and/or different or additional results. We will assess qualitative discordance (preliminarily defined as a change in direction of conclusion and/or level of grade). If a meta-analysis is possible, we will complete sensitivity analyses, running the analysis with and without the information from ClinicalTrials.gov. A second point of comparison will be available because other DPN team members will be synthesizing and grading based on the typical evidence tables for the DPN project, providing an independent comparison of conclusions with and without information from ClinicalTrials.gov. Finally, the full team will consider whether the final conclusions are influenced by any indication of reporting bias based on what was reported in ClinicalTrials.gov versus in the peer-reviewed literature.
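
As a concrete illustration of the preliminary definition of qualitative discordance above, a comparison could be flagged whenever either the direction of the conclusion or the strength-of-evidence grade changes once ClinicalTrials.gov data are added; the record structure below is hypothetical.

    # Hypothetical check for qualitative discordance as preliminarily defined:
    # a change in direction of conclusion and/or in strength-of-evidence grade.
    def qualitatively_discordant(without_ctgov: dict, with_ctgov: dict) -> bool:
        direction_changed = without_ctgov["direction"] != with_ctgov["direction"]
        grade_changed = without_ctgov["grade"] != with_ctgov["grade"]
        return direction_changed or grade_changed

    # Example: adding registry data changes the grade but not the direction
    print(qualitatively_discordant(
        {"direction": "favors drug", "grade": "moderate"},
        {"direction": "favors drug", "grade": "low"},
    ))  # True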

Assessment of risk of bias

We will complete risk-of-bias assessments for any studies identified only from ClinicalTrials.gov, using the same tool as for the published studies in our DPN project (i.e., the Cochrane Risk of Bias tool).

Data synthesis

Description of the identified studies (question 1)

For the first question we will describe all studies identified in ClinicalTrials.gov. We will report "Which studies were in the EPC report alone, in ClinicalTrials.gov alone, or in both?" We will describe which studies are ongoing and which have been completed, along with trial completion dates (since it may take 1 year or longer for trial results to appear in the peer reviewed literature).
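
One simple way to produce this cross-classification is by comparing the sets of trial identifiers (e.g., NCT numbers) found by each route; the identifiers below are placeholders, not real trials.

    # Cross-classify trials by NCT number: EPC report only, ClinicalTrials.gov
    # only, or both. The identifiers are placeholders for illustration.
    epc_report_ids = {"NCT00000001", "NCT00000002", "NCT00000003"}
    ctgov_ids = {"NCT00000002", "NCT00000003", "NCT00000004"}

    epc_only = epc_report_ids - ctgov_ids
    ctgov_only = ctgov_ids - epc_report_ids
    in_both = epc_report_ids & ctgov_ids

    print(sorted(epc_only), sorted(ctgov_only), sorted(in_both))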

Description of incomplete or discontinued trials (question 3)

We will create separate tables for the studies that are incomplete or discontinued to address Question 3: "For studies in ClinicalTrials.gov that were discontinued or not yet completed:

  1. For the discontinued studies, were there reasons given for discontinuation? If so, what were they?
  2. For studies that are ongoing but not completed, what was the date of initiation of the studies? Are the studies proceeding according to the original schedule or is there information in ClinicalTrials.gov indicating a delay in completion? If there is a delay in completion, what is the reason given?"

These data will be extracted as above to address this question.

Comparison of data elements and results from ClinicalTrials.gov and matched publications

Next we will address the second question, "For the completed studies that were in both:

  1. What were the differences, if any, in pre-specified outcome measures, statistical plan and size of the study reported in the peer reviewed literature vs. ClinicalTrials.gov?
  2. Were results reported in ClinicalTrials.gov for any of the studies? If they were, what were the differences, if any, in the results reported in the peer reviewed literature vs. ClinicalTrials.gov?"

Two investigators will receive the data extraction tables, masked to the source of the information (ClinicalTrials.gov or the peer reviewed literature). The investigators will independently assess for discrepancies and then discuss these comparisons. Where discrepancies exist, we will also review the summary of changes to describe a rationale for the different results or plans. We will classify discrepancies between the elements extracted from ClinicalTrials.gov and the matched publications. When available, we will use existing frameworks and tools to assess differences.

  • Identification of the primary outcome. Notably, according to an analysis by Zarin and colleagues, out of 2178 clinical trials with posted results in ClinicalTrials.gov, 20% had more than two reported primary outcome measures and 5% had more than five. For assessing consistency of the pre-specified primary outcome(s), we will use a framework developed by Zarin and colleagues.4 Under this framework, the primary outcome could differ in the following ways: description of the outcome (i.e., a different "primary outcome" reported), different domain used, different measurement or diagnostic test used, different reporting of the same measure (e.g., change in pain scale score vs. percentage change from baseline), or different results of the same reported measure. For trials with multiple publications and outcomes, we will assess each outcome separately but will designate one as the "main" primary outcome (see the coding sketch after this list).
  • Adverse events and deaths. ClinicalTrials.gov began to mandate reporting of adverse events in September 2009, as serious adverse events and non-serious adverse events. We will compare the total adverse events reported in ClinicalTrials.gov with the total reported in the matched publications.
  • Comparison of prespecified statistical plan
  • Sample sizes, total and per arm
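
A coding sketch for the primary-outcome comparison using the framework categories listed above; the outcome record fields (description, domain, measure, reporting, result) are assumptions for illustration, not an established schema.

    # Classify primary-outcome discrepancies between a registry entry and a
    # matched publication into the framework categories described above.
    def outcome_discrepancies(registry: dict, publication: dict) -> list:
        categories = {
            "description": "different primary outcome reported",
            "domain": "different domain used",
            "measure": "different measurement or diagnostic test used",
            "reporting": "different reporting of the same measure",
            "result": "different results of the same reported measure",
        }
        return [label for key, label in categories.items()
                if registry.get(key) != publication.get(key)]

    # Hypothetical example: same outcome and measure, different reporting
    print(outcome_discrepancies(
        {"description": "pain", "domain": "pain", "measure": "VAS",
         "reporting": "change from baseline", "result": "-1.2"},
        {"description": "pain", "domain": "pain", "measure": "VAS",
         "reporting": "percent change from baseline", "result": "-15%"},
    ))  # flags different reporting and different results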

Incorporating the ClinicalTrials.gov findings into the review (Question 4)

What is the impact on the conclusions of the EPC report with and without the information from ClinicalTrials.gov? What would be the impact on the strength of evidence (including impact of knowledge of outcomes measured in studies but not reported in the peer reviewed literature)?

For each outcome and comparator, we will synthesize the information obtained with and without ClinicalTrials.gov, using summary evidence tables in which the results from ClinicalTrials.gov are indicated with grey color-coded rows. We will highlight discrepancies in outcomes and results between the published and unpublished sources, based on the review described above.

We will conduct the following for each outcome by drug comparison:

  • We will grade the level of evidence with and without the ClinicalTrials.gov results.
  • We will qualitatively describe discordance (within an outcome and drug comparison) between results from ClinicalTrials.gov and the published literature, in terms of direction of conclusions.
  • Where ClinicalTrials.gov provides additional results and we are able to conduct meta-analyses, we will conduct sensitivity analyses with and without the additional data from ClinicalTrials.gov (see the sketch after this list).
  • Finally, the full team will consider if the final conclusions are influenced by any indication of reporting bias based on what was reported in ClinicalTrials.gov versus in the peer-reviewed literature.
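
A sketch of the sensitivity analyses described above, assuming effect estimates and variances have already been extracted on a poolable scale; the numbers are hypothetical, and a standard meta-analysis package could be used instead of the hand-rolled DerSimonian-Laird pooling shown here.

    # Sensitivity analysis sketch: pool published results alone, then together
    # with results reported only in ClinicalTrials.gov. Effects and variances
    # are hypothetical and assumed suitable for inverse-variance pooling
    # (e.g., mean differences).
    import numpy as np

    def pool_dersimonian_laird(effects, variances):
        """Random-effects pooled estimate and standard error."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                           # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)        # Cochran's Q
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c) if df > 0 else 0.0
        w_re = 1.0 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        return pooled, np.sqrt(1.0 / np.sum(w_re))

    published = [(-0.8, 0.04), (-0.5, 0.09)]   # hypothetical (effect, variance)
    registry_only = [(-0.1, 0.06)]             # hypothetical registry-only result

    for label, data in [("published only", published),
                        ("published + ClinicalTrials.gov", published + registry_only)]:
        est, se = pool_dersimonian_laird([e for e, _ in data], [v for _, v in data])
        print(f"{label}: {est:.2f} (SE {se:.2f})")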

Our prior methods work will inform this step, including work we have participated in on the predictive validity of grades and the lack of reliability in grading.7 It may be difficult to parse out whether differences in grading are due to the different type or amount of information available (with or without information from ClinicalTrials.gov) or to the subjective nature of grading. Having personnel very experienced in synthesis and grading helps; having multiple points of comparison may also help with this parsing.

Throughout the process we will log challenges and issues, as well as track the time and effort to complete this work.

References

  1. Riveros C, Dechartres A, Perrodeau E, et al. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013 Dec;10(12):e1001566. PMID: 24311990.
  2. Robinson KA, Chou R, Berkman ND, et al. Integrating Bodies of Evidence: Existing Systematic Reviews and Primary Studies. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville, MD; 2008.
  3. Robinson KA, Chou R, Berkman ND, et al. Twelve recommendations for integrating existing systematic reviews into new reviews: EPC guidance. J Clin Epidemiol. 2015 Aug 7. PMID: 26261004.
  4. Zarin DA, Tse T, Williams RJ, et al. The ClinicalTrials.gov results database—update and key issues. N Engl J Med. 2011 Mar 3;364(9):852-60. PMID: 21366476.
  5. Xu X, Yang Y, Wang R, et al. Perinatal exposure to di-(2-ethylhexyl) phthalate affects anxiety- and depression-like behaviors in mice. Chemosphere. 2015;124:22-31.
  6. Hartung DM, Zarin DA, Guise JM, et al. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med. 2014 Apr 1;160(7):477-83. PMID: 24687070.
  7. Gartlehner G, Dobrescu A, Evans TS, et al. The predictive validity of quality of evidence grades for the stability of effect estimates was low: a meta-epidemiological study. J Clin Epidemiol. 2015 Sep 3. PMID: 26342443.

Project Timeline

Supplemental Project To Assess the Transparency of Reporting Requirements for Studies Evaluating the Effectiveness of Treatment Options for Symptoms of Diabetic Peripheral Neuropathy

Sep 1, 2016: Topic Initiated
Sep 6, 2016: Research Protocol
Jul 28, 2017
Page last reviewed June 2021
Page originally created November 2017

Internet Citation: Research Protocol: Transparency of Reporting Requirements: Effectiveness of Treatment Options for the Prevention of Complications and Treatment of Symptoms of Diabetic Peripheral Neuropathy. Content last reviewed June 2021. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/transparency-neuropathy/research-protocol
