Communication and Dissemination Strategies To Facilitate the Use of Health-Related Evidence

Research Protocol ARCHIVED Jul 31, 2012

Background and Objectives for the Systematic Review

The Agency for Healthcare Research and Quality (AHRQ) Effective Health Care (EHC) Program funds individual researchers, research centers, and academic organizations to work with AHRQ to produce effectiveness and comparative effectiveness research for clinicians and consumers.1 Comparative effectiveness research (CER) compares the benefits, harms, and effectiveness of health interventions for the prevention, diagnosis, treatment, and management of clinical conditions and the improvement of health care delivery. The purpose of CER is to assist patients and consumers, clinicians and other providers, and purchasers and payers to make informed decisions that will improve health care at both the individual and population levels.1

One EHC goal is to make CER accessible to these decisionmakers. The Institute of Medicine’s list of 100 priority topics for CER highlights the importance of translating and disseminating this research.2 The specific topic (“compare the effectiveness of dissemination and translation techniques to facilitate the use of CER by patients, clinicians, payers, and others”) was listed among the first quartile of topics recommended for initial focus. Many hope that better communication and dissemination of CER will result in more widespread use of such information.

In addition to these mandates, the ad hoc Uncertainty Committee of the EHC Stakeholder Group is interested in promoting effective ways to communicate uncertainty about health and health care evidence to end-users. The committee would like to know what approaches to conveying uncertainty increase the likelihood that audiences receiving such information will understand it and be able to factor it into their decisionmaking.

This systematic review has three related components; all focus on promoting informed health and health care decisions among patients and providers. First, it addresses the comparative effectiveness of communicating evidence in various content areas and formats to increase the likelihood that the target audience will understand and use it. Second, it examines the comparative effectiveness of a variety of approaches for disseminating the evidence from those who develop it to its potential users. Third, it examines the comparative effectiveness of various ways of communicating the uncertainty associated with health and health care evidence to different target audiences.

Terminology and Definitions

Transforming scientific evidence for its use in practice, commonly known as research translation, involves many processes and strategies. High-quality studies must be conducted, and the body of evidence must then be synthesized and summarized, often in the form of systematic reviews. Research evidence written in complex, technical language must be recast in plainer language that potential end-users can more easily understand; it must then be disseminated to those audiences; and, finally, providers and others must incorporate it into existing health care processes and systems to improve health.

The terminology for each of these steps overlaps considerably. We list three key definitions to help readers understand the scope of our review, which focuses on the communication and dissemination of health and health care evidence and effective ways to present associated uncertainty (see Table 1). We deliberately avoid the term “translation” in our review because it is broadly and diversely defined. Implementation processes to improve health outcomes are beyond the scope of this review.

Table 1. Definitions of concepts relevant for this review
Concept or Construct Definition As It Relates to Health and Health Care
Health communication The study and use of communication strategies to inform and influence individual and community decisions that affect health.3 It links the fields of communication and health and is increasingly recognized as a necessary element of efforts to improve personal and public health.
Dissemination The targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to spread knowledge and the associated evidence-based interventions.4, 5
Implementation The use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings.6

In the sections below, we present background information for the three areas of the review—communication techniques, dissemination strategies, and communicating uncertainty.

Communication Techniques

Government agencies and institutions, advocacy groups, media organizations, researchers, and other interested stakeholders can all carry out communication activities. They use various techniques to communicate evidence so that target audiences can understand it better. For purposes of our review, communication techniques fall into the broad area of “health communication” and focus on making evidence interpretable, persuasive, and actionable. The John M. Eisenberg Center for Clinical Decisions and Communications Science translates AHRQ’s comparative effectiveness review information to create a variety of materials ranging from evidence summaries to decision aids and other products.

To our knowledge, there is no overarching framework of communication strategies to guide our review. Multiple systematic reviews, however, have explicated key communication techniques of interest to the field, such as:

  • Tailoring the message—Communication designed for an individual based on information from the individual.
  • Targeting the message to audience segments—Communication designed for subgroups based on group membership or characteristics such as age, gender or sex, race, cultural background, language, and other “psychographic” characteristics such as a person’s attitudes about particular subject matter.
  • Using narratives—Communication delivered in the form of a story, testimonial, or entertainment education.
  • Framing the message—Communication that conveys the same messages in alternate ways (e.g., what is gained or lost by taking an action or making a choice).

Several other communication techniques exist such as applying plain language principles, varying the source of the evidence, and using theoretically driven messages. These communication strategies are widely used and can be considered best practices; however, they are not included in this review given our focus on comparative effectiveness of different techniques.

Table 2 summarizes the evidence for the effectiveness of the four communication techniques examined in this review: tailoring the message, targeting the message to audience segments, using narratives, and framing the message. These systematic reviews focus on the effectiveness of the communication techniques relative to not using any technique, that is, relative to “usual care.” Thus, these reviews establish the contribution of each technique when compared with not using any communication technique.

Table 2. Systematic, meta-analytic, or theoretical reviews supporting focus on various communication strategies

Tailored communication
Supporting reviews: Noar et al., 20077 (N = 58,454; searches through 2005); Lustria et al., in press8 (N = 20,180; searches 1999–2009)
Main conclusions supporting inclusion: Tailored communication delivered via print or the Internet is more effective than nontailored communication in increasing knowledge and changing behavior. Effect sizes can vary based on length of followup, variables tailored, type of behavior, population studied (general vs. chronic illness), and number of intervention contacts.

Targeted communication to audience segments
Supporting reviews: Slater, 19959 (nonsystematic review); Noar et al., 200910 (N = 94,896; searches 1998–2007)
Main conclusions supporting inclusion: Communication targeted to audience segments is a strategy used to make information more relevant based on group membership characteristics, which can be determined by role, demographic, or social psychological variables. Although we have not found a systematic review on this approach, meta-analysis shows that its use is common in large-scale communication efforts because of its potential effectiveness.

Narratives
Supporting reviews: Hinyard and Kreuter, 200711 (theoretical review; N not reported); Winterbottom et al., 200812 (N = 3,986)
Main conclusions supporting inclusion: Narrative forms of communication increase information processing and the persuasiveness of messages; people become transported into a situation, which can enhance emotions, attitudes, and behaviors.

Message framing
Supporting reviews: O'Keefe and Jensen, 200613 (N = 50,780; searches through 2006); Latimer et al., 201014 (N = 6,679; searches through July 2008)
Main conclusions supporting inclusion: Messages framed to emphasize the benefits of preventive action (gain-framed) perform significantly better than loss-framed messages, although the difference is small.

Dissemination Strategies

Dissemination is the targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to spread knowledge and the associated evidence-based interventions.4,5 Dissemination occurs through a variety of channels, social contexts, and settings. Evidence dissemination has several very broad goals: (1) to increase the reach of evidence; (2) to increase people’s motivation to use and apply evidence; and (3) to increase people’s ability to use and apply evidence.

Dissemination strategies aim to spread knowledge and the associated evidence-based interventions on a wide scale within or across geographic locations, practice settings, or social or other networks of end-users such as patients and health care providers. In examining influences that help spread innovations along the continuum between passive diffusion of information and active dissemination, Greenhalgh et al.15 created an inventory of strategies aimed at influencing individual, social, and other networks of adopters.

Existing systematic reviews and dissemination research show that passive dissemination strategies are not as effective as active strategies. For example, in a synthesis of 41 systematic reviews, Grimshaw and colleagues16 reported that active, multifaceted approaches were most effective. Additional research also supports this conclusion. Interventions that rely solely on passive information transfer are relatively ineffective, but active knowledge-translation strategies are usually effective (although the effects are modest). Educational outreach and academic detailing are the most consistently effective interventions reported. Interventions that incorporate two or more distinct strategies (i.e., that are multifaceted) are consistently more likely to work than single interventions.17

We distinguish dissemination strategies from implementation strategies, with the latter focusing on actually undertaking the process to institutionalize the new evidence in clinical practice.

Communicating Uncertainty

Uncertainty is inherent in health and health care evidence and can limit its use. Uncertainty creates multiple challenges, including difficulties: (1) determining whether preventive services and treatments should be implemented in clinical practice, (2) determining for whom and in what settings preventive services and treatments should be implemented, and (3) communicating evidence so that consumers can make informed decisions. By optimizing the presentation of uncertainty, evidence creators, synthesizers, and disseminators can enable people to make the best possible decisions.

To date, most work on presenting uncertainty has focused on stochastic uncertainty: the chance or probability of an event occurring. This work has generally focused on alternate presentations of disease risk, side effects, treatment benefits, and treatment harms. Multiple systematic evidence reviews and randomized trials18-22 have demonstrated that:

  • Qualitative or non-numeric presentations of probability (e.g., “likely,” “certain,” “rare”) are open to individual interpretation.19,22
  • Percentage and “x/1,000” presentations are more understandable than “1 in x” presentations of probability;22-24 and “x/1,000” presentations are better than percentage presentations for representing conditional probabilities.
  • Using the same denominator in “x/1,000” presentations22,24,25 facilitates understanding.
  • Absolute risk reduction and relative risk reduction presentations are more understandable than number needed to treat presentations.18-22
  • Absolute risk reduction tends to be less persuasive than relative risk reduction.18-22
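To make these presentation formats concrete, the following minimal Python sketch renders a single underlying probability in the formats discussed above. It is illustrative only; the function name, labels, and the "rare" threshold are our own assumptions, not drawn from the protocol or the cited reviews.

def present_probability(p: float) -> dict:
    """Return several textual presentations of a probability p (between 0 and 1)."""
    per_1000 = p * 1000
    one_in_x = 1 / p if p > 0 else float("inf")
    return {
        "percentage": f"{p * 100:.1f}%",                    # e.g., "0.5%"
        "x_per_1000": f"{per_1000:.0f} in 1,000",           # same denominator aids comparison
        "one_in_x": f"1 in {one_in_x:,.0f}",                # harder to compare across risks
        "qualitative": "rare" if p < 0.01 else "possible",  # open to individual interpretation
    }

if __name__ == "__main__":
    for risk in (0.005, 0.02):
        print(present_probability(risk))

For a 0.5% risk, this prints "0.5%," "5 in 1,000," "1 in 200," and "rare," illustrating why qualitative labels and "1 in x" formats are harder to interpret and compare than percentage or same-denominator presentations.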

Little work has focused on other types of uncertainty, although some conceptual pieces have offered a framework for study. For instance, Han et al.26 identified several relevant domains of uncertainty that influence health care. These include uncertainty about the strength of evidence (also called ambiguity), uncertainty about the significance of particular risks (including their timing or severity), uncertainty about the complexity of information (e.g., the multiplicity or stability of risks), and uncertainty resulting from ignorance about risks.

For our review, we define the concept of uncertainty relative to the schemes for grading the strength of evidence for AHRQ’s Evidence-based Practice Center (EPC) Program. The overall strength of evidence grade is made up of judgments about four required domains. As taken from Owens et al.,27 these domains are as follows:

  1. Risk of bias—“the degree to which the included studies have a likelihood of adequate protection against bias (i.e., good internal validity).”
  2. Consistency—“the degree to which reported effect sizes from the included studies appear to have the same direction or magnitude of effect.”
  3. Directness—“whether the evidence links the interventions directly to the health outcomes.”
  4. Precision—“the degree of certainty surrounding an effect estimate with respect to a given outcome.”

Each domain may individually contribute to the uncertainty about the evidence. When the overall strength of evidence is high, the uncertainty is low. Conversely, when the overall strength of evidence is insufficient or low, uncertainty is high.28
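The following toy Python sketch illustrates only this inverse relationship; it is not the EPC grading algorithm, and the field names and values are hypothetical. It records the four required domains from Owens et al.27 and maps the overall grade to a qualitative level of uncertainty as described above.

from dataclasses import dataclass

@dataclass
class StrengthOfEvidence:
    risk_of_bias: str   # e.g., "low", "medium", "high"
    consistency: str    # e.g., "consistent", "inconsistent", "unknown"
    directness: str     # e.g., "direct", "indirect"
    precision: str      # e.g., "precise", "imprecise"
    overall_grade: str  # judgment across the four domains: "high", "moderate", "low", "insufficient"

def uncertainty_level(soe: StrengthOfEvidence) -> str:
    """Reflect the text above: high strength of evidence implies low uncertainty, and vice versa."""
    return {"high": "low", "moderate": "moderate",
            "low": "high", "insufficient": "high"}[soe.overall_grade]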

End-users need to understand the overall balance of benefits and harms (i.e., the “net benefit”) of preventive services and treatments. Determining net benefit requires synthesizing the evidence across multiple studies and judging the magnitude of the overall benefit relative to harm (e.g., net benefit, marginal or uncertain benefit, and net harm). What constitutes a “sufficient” margin of benefit for evidence to provide “net benefit” is open to interpretation and constitutes another important source of uncertainty.

End-users also need to grasp whether the evidence is applicable for their own unique populations and settings. Assessing applicability requires considering whether the preventive service or treatment tested would be expected to have the same biologic effect in the population and setting in which it might be applied. In contrast to AHRQ’s EPCs, the U.S. Preventive Services Task Force (USPSTF) makes specific determinations of net benefit and also includes applicability in their judgments about evidence grade.

Once those who are synthesizing evidence determine strength of evidence, net benefit, and applicability, various groups must communicate the information to consumers. Explaining such findings and their implications can be challenging. Politi et al.29 suggest using subjective descriptions, various depictions of numbers, or visual aids to represent uncertainty and its degree.

Rationale and Relevance for Conducting the Review

AHRQ sponsors research to improve the quality, effectiveness, and safety of health care in the United States. Evidence reports and technology assessments generated through AHRQ’s EHC Program provide science-based information about common, relevant health conditions and technologies to serve the needs of patients, clinicians, insurance payers, and other end-users. Findings from clinical, health services, and comparative effectiveness studies—especially as assembled for systematic reviews and similar documents—need to be communicated and disseminated effectively to influence optimal and timely practice and health policies.30

Because systematic reviews evaluate multiple studies, they are inherently complex. Nuanced descriptions of benefits, harms, strengths of evidence, and uncertainties make evidence reports difficult to understand for many people. Evidence reports are typically targeted at scientific researchers in related fields, rather than the patients or clinicians who ultimately make health care decisions. Ensuring that research evidence is delivered to these audiences in easy-to-understand formats is critical to the success of evidence-based research. Common communication and dissemination barriers, including not seeing or being exposed to the information, can impede its use in decisionmaking.31-33

Given AHRQ’s mission, a critical goal is to evaluate the effectiveness of strategies to make evidence report findings widely available and techniques to ensure that such findings are correctly understood. By evaluating the comparative effectiveness of communication techniques and dissemination strategies, this review will inform efforts to make research easily accessible for patients and clinicians.

The Key Questions

To recap, our review has three Key Questions (KQs), listed below.

KQ 1

  1. What is the comparative effectiveness of communication techniques to promote the use of health and health care evidence by patients and clinicians?
  2. How does the comparative effectiveness of communication techniques vary by patients and clinicians?

KQ 2

  1. What is the comparative effectiveness of dissemination strategies to promote the use of health and health care evidence for patients and clinicians?
  2. How does the comparative effectiveness of dissemination strategies vary by patients and clinicians?

KQ 3

What is the comparative effectiveness of different ways of explaining uncertain health and health care evidence to patients and clinicians?

Below we describe the population, interventions, comparators, outcomes, timing, and settings (PICOTS) for our review (see Table 3).

Table 3. Population, interventions, comparators, outcomes, timing, and settings (PICOTS)
Domain Description
Population Recipients of health and health care evidence, also called “target audiences,” which include:
  • Adult patients and the adult public at large
  • Clinicians, including physicians, nurses, midlevel providers, and/or pharmacists
Interventions Specific interventions are described below.

Techniques to communicate evidence:
  • Tailoring the message
  • Targeting the message to audience segments
  • Using narratives
  • Framing the message
  • Using a multipronged approach with any of the communication techniques described above (e.g., tailoring and targeting)
Strategies to disseminate evidence that will:
  • Increase reach of the evidence (e.g., telephone; postal mail/e-mail; electronic/digital media, social media, mass media; interpersonal outreach)
  • Increase people’s motivation to use and apply the evidence (e.g., opinion leaders, champions, social networks)
  • Increase people’s ability to use and apply the evidence (e.g., additional resources, skills building)
  • Use a multipronged approach with any of the dissemination strategies described above (e.g., social marketing, academic detailing)
Techniques to explain uncertain evidence using:
  • Different presentation formats (e.g., graphical, numeric, non-numeric)
  • Any communication technique, including the ones above and hypothetical situations
Comparators Alternate presentations of the specified interventions
Outcomes Specific outcomes are described below

Intermediate outcomes for all target audiences
  • Awareness of the evidence
  • Knowledge about the evidence
  • Discussions about the evidence
  • Self-efficacy to use the evidence
  • Behavioral intentions to use or apply the evidence
Ultimate outcomes for patients:
  • Health-related decisions or behaviors
  • Clinical outcomes
Ultimate outcomes for clinicians:
  • Behavior
Timing Any length of followup will be permissible
Settings Clinical or community settings in the United States, such as:
  • Inpatient and outpatient settings and clinics of all types
  • Academic health care institutions
  • Churches, fraternal organizations, professional or social clubs, pharmacies, and homes

Analytic Framework

We present our analytic framework in Figure 1. As noted in the box to the far left, we plan to examine studies that use research-based evidence as the source of information for their communication techniques (KQ 1) and dissemination strategies (KQ 2). We will define research-based evidence as evidence that has been assembled, reviewed, and presented by evidence developers and has been used to make recommendations. Examples of the sources of evidence that we will consider acceptable are:

  • National Guidelines Clearinghouse
  • U.S. Preventive Services Task Force
  • Community Guide to Preventive Services
  • The Cochrane Collaboration
  • National Institute for Health and Clinical Excellence
  • Specific Institutes of the National Institutes of Health (e.g., National Heart, Lung and Blood Institute; National Institute of Diabetes and Digestive and Kidney Diseases; National Cancer Institute);
  • Scottish Intercollegiate Guidelines Network
  • AHRQ-funded Evidence-based Practice Centers
  • The 6th Joint National Committee

We will only include research-based evidence that is health related. For communication and dissemination (KQs 1 and 2), we will only include health-related evidence that seeks to promote informed decisions about individual-level human health, reflecting our general interest in prevention, diagnosis, and treatment.

Strategies and techniques discussed in this review could be beneficial for several audiences. These include (1) patients and the general public and (2) clinical service providers, including physicians, nurses, mid-level providers, and/or pharmacists who deliver health care. For KQs 1 and 2, we plan to examine how the effectiveness of communication techniques and dissemination strategies varies for different target audiences, including patients and clinicians. Techniques and strategies that work well for one audience may not work as well for another audience. For KQ 3, we will focus on studies that explore communication techniques to explain uncertain evidence.

We will include studies that examine intermediate outcomes. These can be awareness of the evidence; knowledge of the evidence; discussions about the evidence; self-efficacy about the evidence; and behavioral intentions to use or apply the evidence. We will also include studies that measure ultimate outcomes. These can be the following: for patients—health-related decisions or behavior and clinical outcomes; and for clinicians—behavior. We expect that most studies will focus on intermediate outcomes because they occur sooner and, thus, are more practical to study.

Figure 1. Analytic framework

The figure depicts research-based evidence (far left) flowing through communication techniques (KQ 1), dissemination strategies (KQ 2), and ways of explaining uncertain evidence (KQ 3) to the target audiences described above (patients and the general public; clinicians, including physicians, dentists, nurses, and other professionals who deliver health care), leading first to intermediate outcomes (awareness of, knowledge about, discussions about, self-efficacy to use, and behavioral intentions to use or apply the evidence) and then to ultimate outcomes (health-related decisions or behaviors and clinical outcomes for patients; behavior for clinicians).

Methods

A. Criteria for Inclusion/Exclusion of Studies in the Review

Criteria for inclusion and exclusion of studies address both the PICOTS model outlined in Section II (see Table 3) and other important study design and publication issues. The criteria also address intervention content that is specific to each KQ. In Table 4, we present the inclusion/exclusion criteria common to all three KQs. We then present the inclusion/exclusion criteria that are specific to each KQ (Tables 5, 6, and 7 respectively).

Table 4. Study-specific inclusion/exclusion criteria

Notes: (a) We will update searches when the draft report is out for peer review. (b) We will complete a hand-search of systematic reviews and only use systematic reviews for background information.

Language
Include: English
Exclude: All non-English publications

Dates of publication
Include: 01/01/2000 to present for communication and dissemination; 01/01/1966 to present for uncertainty

Study design
Include: Individual randomized controlled trials; clustered randomized controlled trials; quasi-experimental trials (KQ 3 only); nonrandomized trials (KQ 3 only)
Exclude: All nonexperimental studies; qualitative research

Study duration
Include: No limits

Publications
Include: Complete articles
Exclude: Systematic reviews; meta-analyses; protocols; studies published only as abstracts; studies with no original data (i.e., no experimental data); narrative reviews; editorials, letters to editors, and similar publications

Populations
Include: Adults (≥19 years), including the general public/patients and clinicians
Exclude: Children (<19 years); incarcerated populations; Federal and State policymakers

Comparators
Include: Alternate presentations of specified interventions
Exclude: Comparisons with usual practice (except for KQ 3 when the evidence is sparse)

Settings
Include: Inpatient and outpatient settings and clinics of all types; academic health care institutions; community-based settings such as churches, fraternal organizations, professional or social clubs, pharmacies, and homes
Exclude: Primary and secondary schools; prisons and jails

Geographic setting
Include: France, Germany, Italy, The Netherlands, the United Kingdom, the United States, Austria, Belgium-Luxembourg, Brazil, Denmark, Finland, Greece, Ireland, Israel, Norway, Poland, Portugal, Spain, Sweden, Switzerland, Turkey, Australia, Canada, Japan, and South Africa
Exclude: Any other country not specified for inclusion

Sample sizes
Include: N ≥ 100 total individuals in the study; no limits on size of clusters
Exclude: N < 100 total individuals in the study

Other
Include: Access to entire article
Exclude: Inability to retrieve full article

We will focus the content on original research articles that are available in full-text form, are published in English, and involve randomized trials with at least 100 total individuals in the study (e.g., 50 individuals per arm in a study with two arms). Existing systematic reviews suggest that a sufficient amount of high-quality evidence is available in the form of randomized controlled trials with sample sizes greater than 100.

For communication and dissemination, we will include studies from January 2000 to the present. Multiple systematic reviews on communication and dissemination have been published since 2000; these reviews document the supporting evidence, although none examines the comparative effectiveness of different techniques and strategies (see background section). For uncertainty, we will include studies from January 1966 to the present because no previous reviews have addressed uncertainty.

We will focus on studies examining the adult population 19 years of age and older, including the general public or patients and clinicians of all races and ethnicities and all levels of income, insurance coverage, and literacy. We will define clinicians to include physicians, nurses, midlevel providers, and/or pharmacists. We will exclude Federal and State policymakers because they have less direct impact on clinical decisionmaking when compared with patients and providers.

Admissible settings include inpatient and outpatient settings and clinics of all types; academic health care institutions; and community-based settings such as churches, fraternal organizations, professional or social clubs, pharmacies, and homes. The settings must be countries located in Blocks 1 or 2 based on a recent world-system analysis by Kick et al. (2011).34 Blocks 1 and 2 contain countries in the core of the world system. Block 1 countries include: France, Germany, Italy, The Netherlands, the United Kingdom, and the United States. Block 2 countries include: Austria, Belgium-Luxembourg, Brazil, Denmark, Finland, Greece, Ireland, Israel, Norway, Poland, Portugal, Spain, Sweden, Switzerland, and Turkey. If the settings are not located in Blocks 1 or 2, they must be located in the following “core” countries in Block 3 that have conducted considerable comparative effectiveness research: Australia, Canada, Japan, or South Africa.

We plan to include studies that have at least one outcome of interest. Intermediate outcomes are applicable to all target audiences. They may include awareness about the evidence; knowledge about the evidence; discussions about the evidence; behavioral intentions to use or apply the evidence; and self-efficacy. For the general public or patients, ultimate outcomes may include health-related decisions or behaviors and clinical outcomes. For clinicians, ultimate outcomes may include behavior.

Initially, the health and health care evidence may relate to prevention or diagnosis/treatment of any health condition. We will sort the studies into prevention and diagnosis/treatment categories after the abstract and/or full-text review process and confer with AHRQ about the possibility of including both the prevention and diagnosis/treatment categories based on the number of studies in each category and available resources.

Communication (KQ 1): For KQ 1, we will include studies that compare two or more of the included communication techniques head to head. Techniques of interest include tailored communication, communication targeted at audience segments, use of narratives, and message framing (see Table 5). These techniques are designed to make information clearer, easier to understand, and more relevant to end-users.

Multicomponent techniques seek to increase the overall impact of evidence across geographic and practice settings and across target audiences. Initially, we will include all studies that use a multicomponent approach with a combination of two or more communication techniques (e.g., tailoring and targeting) compared to single techniques. After the abstract and/or full-text review process, we will review the combinations of communication techniques, identify the most frequent combinations, and likely focus our efforts on synthesizing and analyzing the most frequent combinations. We will confer with AHRQ about limiting the combinations of communication techniques included in this systematic review based on the number of studies addressing each combination and available resources.

Table 5. Included communication techniques (KQ 1)
Communication Potential Approaches
Tailoring the message:
Tailoring is a multistep and multidimensional process that involves assessing individual characteristics, creating individualized messages (using conceptually or empirically based algorithms that are usually computer driven), and then delivering these messages using a variety of appropriate strategies and channels. The three main tailoring strategies (content matching, personalization, and feedback) are often used in combination and can occur within a single message.
  • Computerized database of messages that can be combined in response to answers to preprogrammed questions asked of an individual
  • Electronic algorithm to design messages based on individual input regarding a limited number of questions
  • Attempts to direct messages to individuals’ status on key theoretical determinants (knowledge, outcome expectations, normative beliefs, efficacy, and/or skills) of the behavior of interest
  • Incorporating recognizable aspects of participants to convey (implicitly or explicitly) that the messages are specifically designed for them. This is more than a personalized letter (e.g., “Dear Jane”)
  • Providing messages to participants about their psychological or behavioral states. Individualized feedback may have then been provided synchronously (e.g., via chat, telephone, or face to face) or asynchronously (e.g., via e-mail or a discussion board or by postal mail)
Targeting the message:
Targeting (also referred to as audience segmentation) involves the development of an intervention approach for a defined population subgroup that takes into account characteristics that are shared by the group (e.g., age, sex, race, ethnicity, spoken language). Once a group is segmented the messages should be designed in a way to be maximally effective for that target group.
  • Targeting can be accomplished by manipulating language, visuals, music, or choice of behavior topic that make the message more interesting, relevant, or appealing to specific subgroups.
Using narratives:
“Story-like prose pieces that focus on elaborating one example of an event, and they provide appealing detail, characters, and some plot, presented in either the first or third person” (Winterbottom et al., 2008, p. 2080). The characters and the situations in stories serve as a model for emulation and learning.
  • Personal stories, case studies, anecdotes, testimonials, and experiential sharing (e.g., a personal account of an individual’s experience in donating an organ to a sibling)
  • Entertainment education (e.g., talking about an issue in a soap opera storyline) and photo novellas
Framing the message:
Presenting the same evidence/information in different ways.
  • Messages that emphasize the positive consequences of compliance are referred to as a positive (gain) frame, whereas a version that stresses the negative consequences of noncompliance is called a negative (loss) frame. Studies should explicitly state that the stimuli differed in terms of a gain or loss frame. For example:
    • Gain frame vs. loss frame (physical activity): "Get active! Enhance your health!" vs. "A lack of activity increases your risk for diabetes."
    • Loss frame vs. gain frame (drug outcome): "With drug X, you have a 5% chance of dying" vs. "With drug X, you have a 95% chance of surviving."
One or more of the above goals/strategies:
Combining multiple communication techniques may be more effective than single strategies.
  • A multicomponent approach uses several communication techniques in concurrent combination or in sequence to increase the comprehension and understanding of evidence.
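As a concrete illustration of the computer-driven tailoring described in the first row of Table 5, the following Python sketch assembles a message from a small library keyed to an individual's assessment answers. The questions, messages, and field names are hypothetical examples, not content from the protocol.

# Hypothetical message library keyed to answers to preprogrammed questions.
MESSAGE_LIBRARY = {
    ("smoker", "low_self_efficacy"): "Many people quit after several tries; small steps count.",
    ("smoker", "high_self_efficacy"): "You are ready: set a quit date this week.",
    ("nonsmoker", None): "Staying smoke-free protects your heart and lungs.",
}

def tailor_message(answers: dict) -> str:
    """Assemble a tailored message from an individual's assessment answers."""
    status = answers.get("smoking_status", "nonsmoker")
    efficacy = answers.get("self_efficacy") if status == "smoker" else None
    greeting = f"Dear {answers.get('first_name', 'participant')}, "  # personalization
    body = MESSAGE_LIBRARY.get((status, efficacy), "Thank you for completing the assessment.")
    return greeting + body

print(tailor_message({"first_name": "Jane", "smoking_status": "smoker",
                      "self_efficacy": "low_self_efficacy"}))

A production tailoring system would draw on a far larger, theory-based message library and tailor on determinants such as knowledge, outcome expectations, normative beliefs, efficacy, and skills.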

We plan to exclude studies that compare one of these communication techniques with "usual care" (i.e., standard practice that does not represent any of the included techniques or strategies), because several prior reviews have examined such comparisons. For the same reason, we will also exclude studies that compare permutations of a single included technique (i.e., comparisons within a row of Table 5 rather than across rows). For example, a study that compares targeting information to different types of audience segments (e.g., by race or ethnicity, sex or gender, and/or age groups) will be excluded. We will exclude studies that examine interpersonal communication techniques because these are more costly to implement and less practical for reaching large-scale audiences. We will also exclude decision aids given the volume of other research (e.g., by the Cochrane Collaboration) focusing on them.

Dissemination (KQ 2):

Active dissemination strategies involve active efforts to spread evidence-based information via specific strategies and channels.

Usual care/practice for dissemination is passive dissemination, that is, the passive, uncontrolled spread of evidence or no spread of information at all. Examples include posting information to an evidence developer's Web site and posting scientific publications in a searchable database.

As described above, evidence dissemination has several broad goals. We plan to include active dissemination strategies that are designed to do one or more of the following (see Table 6): (1) increase the reach of information (e.g., postal and electronic mail; electronic/digital, social, and mass media); (2) increase people’s motivation to use and apply evidence (e.g., using champions, opinion/thought leaders, peer and social networks); and (3) increase people’s ability to use and apply evidence (e.g., by also providing additional resources or information; skills-building efforts). We will include comparisons of two or more of the included dissemination strategies head to head or, in other words, comparisons between rows. We also will include comparisons within the rows below given the current state of the literature and the lack of comparative information within these groupings.

Multicomponent strategies seek to increase the overall impact of evidence across geographic and practice settings and across target audiences. Initially, we will include all studies that use a multicomponent approach with a combination of two or more dissemination strategies (e.g., social marketing, academic detailing) compared to single strategies. After the abstract and/or full-text review process, we will review the combinations of dissemination strategies, identify the most frequent combinations, and likely focus our efforts on synthesizing and analyzing the most frequent combinations. We will confer with AHRQ about limiting the combinations of dissemination strategies included in this systematic review based on the number of studies addressing each combination and available resources.

We plan to exclude studies that compare the above strategies to "usual care" (i.e., passive, uncontrolled spread of evidence or no spread at all, such as posting information to an evidence developer's Web site or posting scientific publications in a searchable database), because passive dissemination strategies are generally not effective.16 We plan to exclude studies in which the primary purpose of the intervention is implementation (see the definition in Section I), even when the intervention also raises awareness and educates patients or clinicians (e.g., reminders and audit-and-feedback). An example of implementation is when a clinical practice adopts or tries out a new treatment approach that is based on newly available health or health care evidence.

Table 6. Included dissemination strategies (KQ 2)
Dissemination Potential Approaches
Improve reach of evidence:
Distributing evidence widely to many audiences and across many settings increases the reach of information
  • Any information delivered via postal mail, whether through a government-run postal service or a for-profit delivery service such as FedEx or United Parcel Service
  • Any information delivered via phone and/or Web-based e-mail, text messages, or electronic programs such as PDA (personal digital assistant) resources or phone apps
  • Any information delivered via Internet-based social networking sites such as Facebook, Twitter, YouTube, MySpace, Foursquare™, LinkedIn, et cetera. Sometimes there are problem- or group-specific social networks for professional organizations and patient subgroups; these would also fall into social media as long as they have a "social" network component as described above.
  • Any information delivered via TV, radio, print newspapers, print magazines, or billboards.
  • Information delivery via phone, Webinar, or in-person visits, including purposeful delivery of brochures/pamphlets; can include pharmacists, nurses, doctors, counselors, but does not include a motivational component.
Motivate recipients to use and apply evidence:
Increasing interest in the evidence
  • Champions (aka a cheerleader), such as someone who takes ownership of the evidence and visibly promotes it within his or her own organization or across other settings. Champions help overcome social and political pressures imposed by an organization, role model personal commitment to the program, and involve others in its use.
    • For example, an evidence developer might train/enlist the help of a local champion to promote evidence within his or her organization
  • Opinion/thought leaders (frequently has an endorsing or persuasive element), such as a recognized expert in his or her field who lends his or her name to dissemination efforts to establish credibility. They may or may not actually participate in the work and do not necessarily have any relationship with the organization to which evidence is to be disseminated. They could endorse the intervention, have a role in its development, or advise on strategies. The idea here is that an opinion leader is endorsing the idea being disseminated.
    • For example, an opinion leader might be the chief executive officer of a company or the head of a department, or an external expert in a particular field applicable to the evidence, or a well-recognized figure like the Surgeon General of the United States
  • Social networks, such as a network of individuals who are friends, colleagues, or otherwise know each other. The relationships can be informal (friends, peers, or family) or formal (e.g., patient/provider/nurse relationships with defined role obligations).
Enhance the ability to use and apply evidence (regardless of delivery mode):
Providing additional resources about the evidence, such as how it can be incorporated into current practice or specific suggestions for change, enhances a traditional dissemination strategy
  • Provision of supporting “how-to” materials, including physical materials that might be used by a practice to put evidence into use. This might include tracking sheets to be given to patients and risk calculators to be used by clinicians. It might also include tailored toolkits that explain implementation of evidence in specific settings.
  • Supporting materials do not include brochures, counseling resources, or resources that originate from the practice. They must originate from the evidence developer and be given to the end-user.
  • Skill training, capacity building, and problem solving including training in any skill that would allow appropriate use of evidence (to overcome barriers); might include training in recognizing the quality of evidence or the circumstances under which it can be reasonably used; and also includes training in various counseling techniques that would facilitate evidence implementation and interactive seminars.
One or more of the above goals/strategies:
Combining multiple dissemination strategies—including ways to increase reach, motivation, or ability—may be more effective than single strategies
  • A multicomponent approach uses several dissemination strategies in concurrent combination or in sequence to increase the reach of evidence and to enhance end-users' motivation or ability to use, apply, or adopt it.

Uncertainty (KQ 3): Health and health care evidence inherently involves some degree of uncertainty. We focus this review on uncertainty in a body of evidence and how to effectively communicate this uncertainty to target audiences in ways that allow informed decisions. We will specifically examine studies that compare ways to explain the following components of uncertainty: overall grade for strength of evidence, risk of bias, consistency, precision, and directness (see Table 7). We will also consider studies that compare ways to explain net benefit (of prevention or therapeutic services). Finally, we will look at the issue of applicability (i.e., generalizability or what is sometimes termed external validity). In this context, we will look at studies that attempted to explain that although research evidence may exist on a particular topic, it may not be generalizable for one or more reasons. Strategies to explain the different types of uncertainty in evidence may use numeric, non-numeric, or visual presentation formats. We will also examine relevant communication techniques described, including the ones for KQ 1 and hypothetical situations, if the technique is used to communicate uncertainty.

Table 7. Included components of uncertainty in an entire body of evidence and study-specific uncertainty (KQ 3)
Component Description
Sources: Owens et al. (2010)41 and AHRQ (2011)38.
Overall strength of evidence The strength of the evidence represents the degree of confidence that the estimates of effects underlying evidence are correct and is used to provide a comprehensive evaluation of the evidence and an assessment of whether additional evidence might change conclusions.

Strength of evidence requires a value judgment based on the risk of bias, consistency, precision, and directness of evidence (see definitions below).
Risk of bias The risk of bias is the degree to which individual studies are protected from systematic errors or bias. Biases may result from study design, study conduct, or confounding by other external variables.

Risk of bias is analogous to the quality of the evidence: good/fair/poor.
Consistency The consistency of a body of evidence reflects the degree to which studies present similar findings—in both direction and magnitude of effect. Evidence lacking consistency includes studies with greatly differing or conflicting effect estimates.

Lack of consistency occurs when studies suggest effects in different directions (i.e., with different signs), completely different (conflicting) effects, or effects whose magnitudes differ appreciably.
Precision Precision reflects the degree of random error surrounding an effect estimate with respect to a given outcome; it is typically expressed as dispersion around a point estimate (e.g., a confidence interval), which indicates the reproducibility of the estimate.
Directness Directness is the degree to which the evidence links the interventions directly to the question of interest.

For instance, evidence on the benefits of screening is often not directly available (i.e., there are no studies that enroll subjects and assign them to appropriate treatment or not). Therefore, recommendations about screening are derived indirectly from evidence that a preclinical disease can be detected and that there is benefit in treating that same disease once symptomatic.
Net benefit Net benefit describes the balance or trade-offs in benefits and harms for prevention or treatment services.

Net benefit is based on a judgment call by policymakers. Overall there may be net benefit, clinical equipoise (benefit that is too close to call at the population level), or net harm.
Applicability Applicability reflects whether an intervention is expected to have the same effect in the population in which it will be used as compared with the effect in the population in which it was studied.

Other forms of uncertainty also affect decisionmaking but are beyond the scope of this review. We will not examine interventions designed to help individuals cope with uncertainty. We will also exclude studies that compare alternate presentations of point estimates, as these studies have been well summarized in previous reviews on risk communication.26,35-37

In addition, we will exclude studies that address uncertainty arising from any of the following circumstances: multiple causes of illness, changes in risks over time, lack of knowledge about evidence that is available, unclear patient values, trade-offs between benefits and harms in limited-resource settings, concerns about clinicians’ competence, concerns about how a medical illness will affect family and friends, imperfect diagnostic testing, or uncertain prognosis. We will not include cost-effectiveness studies.

B. Searching for the Evidence: Literature Search Strategies for Identifying Relevant Studies To Answer the KQs

We will systematically search, review, and synthesize the scientific evidence for each KQ. The steps that we will take to accomplish the literature review are described below. To identify articles relevant to each KQ, the EPC librarian will begin with three focused PubMed-MEDLINE searches on the comparative effectiveness of: (1) communication techniques to promote the use of health and health care evidence, (2) dissemination strategies to promote the use of health and health care evidence, and (3) different methods used to explain uncertain evidence. We will search using a variety of medical subject headings (MeSH terms) and major headings, as well as free-text and title and abstract text-word searches. Relevant terms are listed in Table 8. Search results will be limited to studies on humans published from 01/01/2000 onward for communication and dissemination given the previous systematic reviews and from 01/01/1966 onward for uncertainty given the lack of previous reviews on the latter. We will include only randomized controlled trials for KQs 1 and 2 given the amount of available literature. For KQ 3, we will also include the following experimental study types in MEDLINE: comparative studies, controlled clinical trials, or cross-over studies. We will limit the searches to studies published in English given the scope of this review.

Using analogous search terms, the librarian will also search the Cochrane Library and the Cochrane Central Register of Controlled Trials for trials on these topics. Further, she will search Web of Science to trace citations of known uncertainty frameworks and capture articles on uncertainty, and PsycINFO for communication and uncertainty articles, given the high likelihood of relevant publications in the psychological literature. We will conduct quality checks to ensure that our main searches identify "known studies." To limit the KQ 1 and KQ 2 searches to relevant comparative effectiveness literature, we will further restrict them to studies that have any of the following keywords anywhere in their citation in EndNote (Thomson Reuters, Philadelphia, PA): comparative effectiveness, evidence based, evidence-based, and recommendation or recommendations. This is analogous to a text-word search in MEDLINE. We will not further refine KQ 3 results given our broader approach to this literature.
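The citation-level keyword restriction described above could work roughly as follows. This Python sketch is illustrative only (the record text and function name are hypothetical), since the actual filtering will be performed within EndNote.

KEYWORDS = ("comparative effectiveness", "evidence based", "evidence-based",
            "recommendation", "recommendations")

def is_cer_relevant(citation_text: str) -> bool:
    """True if any screening keyword appears anywhere in the citation text."""
    text = citation_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

records = [
    "A randomized trial of tailored print materials: an evidence-based approach ...",
    "Case report of a rare dermatologic condition ...",
]
print([is_cer_relevant(r) for r in records])  # [True, False]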

We expect some overlap in results among the three searches (for the three KQs). We will remove duplications in our EndNote database and track the yield from each search.

Table 8. Initial literature search terms for each of the targeted KQ searches in PubMed
Interventions Search Terms
KQ 1: Communication techniques to promote the use of health and health care evidence "Information Dissemination/methods"[Majr] OR "Decision Making"[Majr] OR "Patient Education as Topic"[Mesh] OR "Narration"[Majr] OR "Persuasive Communication"[Majr] OR "Health Education/methods"[Majr]
KQ 2: Dissemination strategies to promote the use of health and health care evidence "Diffusion of Innovation"[Mesh] OR "Information Dissemination"[Mesh] OR "Evidence-Based Medicine/education"[Mesh] OR "Evidence-Based Medicine/methods"[Mesh] OR "Information Services/utilization"[Mesh] OR "Practice Guidelines as Topic/standards"[Mesh] OR "Guideline Adherence/statistics and numerical data"[Mesh] OR "Physician's Practice Patterns/standards"[Mesh] OR "Physician's Practice Patterns/statistics and numerical data"[Mesh] OR "Physician's Practice Patterns/trends"[Mesh] OR "Social Marketing"[Mesh] OR "social marketing"[tiab] OR "academic detailing"[tiab] OR "dissemination strategy"[tiab] OR "dissemination strategies"[tiab] OR (disseminat*[ti] AND guideline*[ti])
KQ 3: Methods of explaining uncertain health and health care evidence ("Uncertainty"[Mesh] OR uncertainty OR "low evidence" OR "conflicting evidence" OR "missing evidence" OR "strength of evidence" OR "Research Design/statistics and numerical data"[Mesh] OR "Therapeutic Equipoise"[Mesh] OR ambigu* OR complexity OR vagueness OR precision OR "risk of bias" OR "Bias (Epidemiology)"[Mesh] OR "net benefit") AND ("Communication"[Mesh])

We will hand search bibliographies of included articles. In addition, in an effort to avoid retrieval bias, we will manually search the reference lists of landmark studies and background articles on this topic to look for any relevant citations that electronic searches might have missed.

We will conduct an updated literature search (of the same databases searched initially) concurrent with the peer review process. Any literature suggested by Peer Reviewers or public comment respondents will be investigated and, if appropriate, incorporated into the final review. Appropriateness will be determined by the same methods listed above.

Determining Article Inclusion

Two trained members of the research team will independently review all titles and abstracts identified through searches for eligibility against our inclusion/exclusion criteria. Studies marked for possible inclusion by either reviewer will undergo a full-text review. For studies without adequate information to determine inclusion or exclusion, we will retrieve the full text and then make the determination. All results will be tracked in an EndNote database.

We will retrieve and review the full text of all articles included during the title/abstract review phase. Two trained members of the research team will independently review each full-text article for inclusion or exclusion on the basis of the eligibility criteria described earlier. If both reviewers agree that a study does not meet the eligibility criteria, the study will be excluded. If the reviewers disagree, conflicts will be resolved by discussion and consensus or by consulting a third, senior member of the review team. The main reason(s) for exclusion will be tracked and reported in a report appendix. The disposition of all items, starting with the initial yields of the searches through to the articles finally retained for synthesis, will be reported in a flow diagram conforming to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards. We will account for studies reported in multiple articles.
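The screening decision rules described above can be summarized in a short illustrative sketch (the function names are hypothetical; actual decisions will be tracked in the EndNote database): a record advances to full-text review if either reviewer marks it for possible inclusion, and it is excluded at full text only when both reviewers agree.

def advance_to_full_text(reviewer1_include: bool, reviewer2_include: bool) -> bool:
    """Title/abstract stage: either reviewer's 'include' sends the record to full-text review."""
    return reviewer1_include or reviewer2_include

def full_text_decision(reviewer1: str, reviewer2: str) -> str:
    """Full-text stage: exclude only on agreement; disagreements go to consensus or a third reviewer."""
    if reviewer1 == reviewer2:
        return reviewer1  # "include" or "exclude"
    return "resolve by discussion or third senior reviewer"

print(advance_to_full_text(False, True))         # True
print(full_text_decision("include", "exclude"))  # resolve by discussion or third senior reviewer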

C. Data Abstraction and Data Management

For studies that meet inclusion criteria, we will abstract relevant information into evidence tables. We will design data abstraction forms to gather pertinent information from each article, including characteristics of study populations, settings, interventions, comparators, study designs, methods, and results. Table 9 displays data items that will be extracted. Trained reviewers will extract the relevant data from each included article into the evidence tables. All data abstractions will be reviewed for completeness and accuracy by a second member of the team.

Table 9. Data items to extract
Data To Extract Examples of Data Items
Study characteristics and methods
  • Study design
  • Study objectives
  • Intervention and comparators
  • Setting(s)
  • Duration
  • Outcomes measured
  • Sample size
  • Eligibility criteria
  • Sampling strategy
  • Units and methods of randomization
  • Sample retention
  • Statistical analysis, including adjustment for multiple comparisons, clustering, and use of intention-to-treat analysis
  • Covariates used in the analysis
Participant characteristics
  • Age group
  • Gender (or sex)
  • Education
  • Race and/or ethnicity
  • Income
  • Health literacy/numeracy
Outcome characteristics
  • Definition of outcomes
  • Measures used
  • Source of outcome data
  • Results in intervention and control groups

D. Assessment of Methodological Quality of Individual Studies

To assess the risk of bias of studies, we will use criteria described in the AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews.38 We will use questions adapted from the RTI Item Bank,39 the Cochrane Risk of Bias tool, and previous work by the USPSTF.40 We will assess the potential for selection bias (including attrition bias), measurement bias (such as performance bias and detection bias), confounding, and power. We will also assess potential biases in reporting. We will qualitatively synthesize the results and assign a rating of low, medium, or high risk of bias. In general, a study with a low risk of bias has a strong design, measures outcomes appropriately, uses appropriate statistical and analytical methods, reports low attrition and little or no differential attrition, and reports methods and outcomes completely. Studies with a medium risk of bias do not meet all criteria required for a low risk of bias but have no flaw likely to invalidate their results. These studies may have some flaws in design or execution (e.g., imbalanced recruitment, high attrition), but they provide enough information (say, through sensitivity analyses) for the reader to determine that those flaws are not likely to cause major bias. Missing information often leads to ratings of medium rather than low risk of bias. Studies with a high risk of bias have at least one major flaw that is likely to cause significant bias and thus might invalidate the results. Major flaws preclude the ability to draw causal inferences between the intervention and the outcome.

Two independent reviewers will assess the risk of bias for each study. Disagreements between the two reviewers will be resolved by discussion and consensus or by consulting a third, senior member of the team.

E. Data Synthesis

Data synthesis and analysis are core steps in developing a systematic review. Given the diversity of our three KQs, the wide range of interventions, and the many outcomes under consideration, we anticipate that we will synthesize most of our data qualitatively. In addition, we expect a fair amount of heterogeneity across studies. Therefore, we will integrate the information qualitatively into understandable text and summary tables.

We will determine whether quantitative synthesis using meta-analysis is appropriate. The decision will be based on the total number of studies and, assuming a sufficient number of studies are potential candidates for such analyses, on an assessment of both the clinical and the statistical heterogeneity of the data. We will assess clinical heterogeneity by comparing studies on their PICOTS characteristics. If studies are similar and we proceed with quantitative analyses, we will assess statistical heterogeneity by calculating the chi-squared statistic and the I² statistic (the proportion of variation in study estimates that is due to heterogeneity). We will conduct any meta-analyses using random-effects models, given that this is the more conservative approach.
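For readers unfamiliar with these statistics, the sketch below shows how Cochran's Q (the chi-squared heterogeneity statistic) and I² could be computed from study effect estimates and their variances. The inputs are hypothetical, and the inverse-variance weighting is an illustrative assumption rather than a statement of the analytic software we will use.

```python
# Minimal sketch: Cochran's Q and I-squared from hypothetical study-level
# effect estimates and variances, using inverse-variance weights.
def heterogeneity(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

q, i2 = heterogeneity(effects=[0.20, 0.35, 0.10, 0.50],
                      variances=[0.01, 0.02, 0.015, 0.03])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")  # I^2: % of variation due to heterogeneity
```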

We expect to organize our report into three separate results chapters—one for each KQ (communication, dissemination, uncertainty). Within each chapter, we will organize our results first by outcome and subsequently by the types of interventions compared. For each outcome and, within each outcome, each comparison type, we will examine the consistency and precision of effect.

We will pay particular attention to moderators of study effects as a way to explain seemingly disparate effects. Possible moderators of interest for all key questions include: risk of bias, study size, and target audience. Other moderators will vary by KQ (communication, dissemination, uncertainty) and may include the following:

For our review of communication techniques:

  • Literacy/numeracy level of audience
  • Intervention intensity and/or complexity
  • Message delivery setting
  • Message source

For our review of dissemination techniques:

  • Care delivery setting
  • Organizational readiness and supports
  • Type of media, mode, or channel

For our review of techniques for communicating uncertainty:

  • Literacy/numeracy of audience
  • Format of presentation (graphical, numeric, non-numeric, combination)
  • Participant optimism/anxiety
  • Amount/degree of uncertainty
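As an illustration of how one such moderator might be inspected, the sketch below groups hypothetical study-level effects by risk-of-bias rating and counts how many studies in each stratum favor the intervention. The study records and values are invented for the example; they are not findings of this review.

```python
# Illustrative sketch: stratifying hypothetical study results by a moderator
# (risk of bias) to examine consistency of effect direction within strata.
from collections import defaultdict

studies = [
    {"id": "A", "risk_of_bias": "low",    "effect": 0.30},
    {"id": "B", "risk_of_bias": "low",    "effect": 0.25},
    {"id": "C", "risk_of_bias": "medium", "effect": 0.05},
    {"id": "D", "risk_of_bias": "high",   "effect": -0.10},
]

by_moderator = defaultdict(list)
for s in studies:
    by_moderator[s["risk_of_bias"]].append(s["effect"])

for level, effects in by_moderator.items():
    favorable = sum(e > 0 for e in effects)
    print(f"{level} risk of bias: {favorable}/{len(effects)} studies favor the intervention")
```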

F. Grading the Strength of Evidence for Individual Outcomes

We will grade the strength of evidence on the basis of guidance established for the EPC Program.38,41 Developed to grade the overall strength of a body of evidence, this approach incorporates four key domains: risk of bias (including study design and aggregate quality), consistency, directness, and precision of the evidence. The grades of evidence that can be assigned are defined in Table 10. Grades reflect the strength of the body of evidence to answer the KQs on the comparative effectiveness of the interventions in this review. Two reviewers will independently assess each domain for each key outcome listed in the analytic framework, and conflicts will be resolved by consensus or, if necessary, by adjudication by a third, senior investigator.

Table 10. Definitions of the grades of overall strength of evidence
Grade Definition
High High confidence that the evidence reflects the true effect. Further research is very unlikely to change our confidence in the estimate of effect.
Moderate Moderate confidence that the evidence reflects the true effect. Further research may change our confidence in the estimate of the effect and may change the estimate.
Low Low confidence that the evidence reflects the true effect. Further research is likely to change our confidence in the estimate of the effect and is likely to change the estimate.
Insufficient Evidence either is unavailable or does not permit estimation of an effect.
Source: Owens et al., 2010.41
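The sketch below is a deliberately simplified illustration of how judgments on the four domains might inform an overall grade. The scoring rule is an assumption for demonstration only; actual grades will follow EPC guidance and reviewer consensus rather than a formula.

```python
# Simplified, illustrative mapping from the four strength-of-evidence domains
# to an overall grade; thresholds are assumptions, not EPC policy.
def grade_evidence(risk_of_bias: str, consistency: str,
                   directness: str, precision: str) -> str:
    concerns = sum([
        risk_of_bias == "high",          # aggregate study quality is poor
        consistency == "inconsistent",   # effects differ in direction or magnitude
        directness == "indirect",        # evidence bears only indirectly on the KQ
        precision == "imprecise",        # estimates have wide confidence intervals
    ])
    return {0: "High", 1: "Moderate", 2: "Low"}.get(concerns, "Insufficient")

print(grade_evidence("low", "consistent", "direct", "imprecise"))  # -> Moderate
```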

G. Assessing Applicability

We will assess the applicability both of individual studies and of the body of evidence for specific KQs.38 For individual studies, we will examine characteristics that may limit applicability based on the PICOTS structure. Such characteristics may be associated with heterogeneity of treatment effect and may limit the ability to generalize the effectiveness of an intervention to everyday practice. Examples include the following:

  • Population: narrow eligibility criteria,
  • Outcomes: use of composite outcomes that mix outcomes of different significance to patients,
  • Settings: restrictions to certain types of health care institutions when services might be rendered in many different locales or venues, and
  • Timing: studies of different duration that may have various implications for applicability.

We will abstract key characteristics that may affect applicability and report them in the evidence tables. To assess the applicability of a body of evidence, we will consider the consistency of results across studies that represent an array of different populations.

References

  1. Agency for Healthcare Research and Quality. Effective Health Care: What Is the Effective Health Care Program? Rockville, MD: U.S. Department of Health and Human Services; 2012. Available at https://effectivehealthcare.ahrq.gov/about/. Accessed March 29, 2012.
  2. Institute of Medicine. Initial priorities for comparative effectiveness research. Washington, DC: The National Academies Press; 2009.
  3. U.S. Department of Health and Human Services. Health.gov. Washington, DC: Office of the Assistant Secretary for Health, Office of the Secretary, U.S. Department of Health and Human Services. Available at www.health.gov/communication/resources/Default.asp. Accessed April 5, 2012.
  4. Lomas J. Diffusion, dissemination, and implementation: who should do what? Ann N Y Acad Sci 1993 Dec 31;703:226-35; discussion 35-7. PMID: 8192299.
  5. National Institutes of Health. NIH Conference. Building the Science of Dissemination and Implementation in the Service of Public Health. 2007 Sep 10-11. Available at www.obssr.od.nih.gov/di2007/about.html. Accessed February 21, 2012.
  6. Lomas J. Diffusion, dissemination, and implementation: who should do what? Ann N Y Acad Sci 1993 Dec 31;703:226-35; discussion 35-37. PMID: 8192299.
  7. Noar SM, Benac CN, Harris MS. Does tailoring matter? Meta-analytic review of tailored print health behavior change interventions. Psychol Bull 2007 Jul;133(4):673-93. PMID: 17592961.
  8. Lustria MLA, Noar SMC, Van Stee SK, et al. A meta-analysis of web-delivered, tailored health behavior change interventions. J Health Commun 2012; in press.
  9. Slater MD. Choosing segmentation strategies and methods for health communication. In: Maibach E and Parrot EL, eds. Designing health messages. Thousand Oaks: Sage Publications; 1995. p. 186-98.
  10. Noar SM, Palmgreen P, Chabot M, et al. A 10-year systematic review of HIV/AIDS mass communication campaigns: have we made progress? J Health Commun 2009 Jan-Feb;14(1):15-42. PMID: 19180369.
  11. Hinyard LJ, Kreuter MW. Using narrative communication as a tool for health behavior change: a conceptual, theoretical, and empirical overview. Health Educ Behav 2007 Oct;34(5):777-92. PMID: 17200094.
  12. Winterbottom A, Bekker HL, Conner M, et al. Does narrative information bias individual's decision making? A systematic review. Soc Sci Med 2008 Dec;67(12):2079-88. PMID: 18951673.
  13. O'Keefe DJ, Jensen JD. The relative persuasiveness of gain-framed and loss-framed messages for encouraging disease prevention behaviors: a meta-analytic review. J Health Commun 2007 Oct-Nov;12(7):623-44. PMID: 17934940.
  14. Latimer AE, Brawley LR, Bassett RL. A systematic review of three approaches for constructing physical activity messages: what messages work and what improvements are needed? Int J Behav Nutr Phys Act 2010 May 11;7:36. PMID: 20459779.
  15. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82(4):581-629. PMID: 15595944.
  16. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care 2001 Aug;39(8 Suppl 2):II2-45. PMID: 11583120.
  17. Majumdar SR, Soumerai SB. Why most interventions to improve physician prescribing do not seem to work. CMAJ 2003 Jul 8;169(1):30-1. PMID: 15238494.
  18. McGettigan P, Sly K, O'Connell D, et al. The effects of information framing on the practices of physicians. J Gen Intern Med. 1999 Oct;14(10):633-42. PMID: 10571710.
  19. Edwards A, Elwyn G, Covey J, et al. Presenting risk information—a review of the effects of "framing" and other manipulations on patient outcomes. J Health Commun 2001 Jan-Mar;6(1):61-82. PMID: 11317424.
  20. Moxey A, O'Connell D, McGettigan P, et al. Describing treatment effects to patients. J Gen Intern Med 2003 Nov;18(11):948-59. PMID: 14687282.
  21. Covey J. A meta-analysis of the effects of presenting treatment benefits in different formats. Med Decis Making 2007 Sep-Oct;27(5):638-54. PMID: 17873250.
  22. Visschers VH, Meertens RM, Passchier WW, et al. Probability information in risk communication: a review of the research literature. Risk Anal 2009 Feb;29(2):267-87. PMID: 19000070.
  23. Cuite CL, Weinstein ND, Emmons K, et al. A test of numeric formats for communicating risk probabilities. Med Decis Making 2008 May-Jun;28(3):377-84. PMID: 18480036.
  24. Woloshin S, Schwartz LM. Communicating data about the benefits and harms of treatment: a randomized trial. Ann Intern Med 2011 Jul 19;155(2):87-96. PMID: 21768582.
  25. Garcia-Retamero R, Galesic M. Communicating treatment risk reduction to people with low numeracy skills: a cross-cultural comparison. Am J Public Health 2009 Dec;99(12):2196-202. PMID: 19833983.
  26. Han PK, Klein WM, Lehman T, et al. Communication of uncertainty regarding individualized cancer risk estimates: effects and influential factors. Med Decis Making 2011 Mar-Apr;31(2):354-66. PMID: 20671211.
  27. Owens DK, Lohr KN, Atkins D, et al. Grading the strength of a body of evidence when comparing medical interventions. In: Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2009. Available at www.ncbi.nlm.nih.gov/books/NBK47091.
  28. Sawaya GF, Guirguis-Blake J, LeFevre M, et al. Update on the methods of the U.S. Preventive Services Task Force: estimating certainty and magnitude of net benefit. Ann Intern Med 2007 Dec 18;147(12):871-5. PMID: 18087058.
  29. Politi MC, Han PK, Col NF. Communicating the uncertainty of harms and benefits of medical interventions. Med Decis Making 2007 Sep-Oct;27(5):681-95. PMID: 17873256.
  30. Helfand M, Tunis S, Whitlock EP, et al. A CTSA agenda to advance methods for comparative effectiveness research. Clin Transl Sci 2011 Jun;4(3):188-98. PMID: 21707950.
  31. Smith C. The role of health professionals in informing cancer patients: findings from The Teamwork Project (phase one). Health Expect 2000 Sep;3(3):217-9. PMID: 11281931.
  32. Mitton C, Adair CE, McKenzie E, et al. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007 Dec;85(4):729-68. PMID: 18070335.
  33. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999 Oct 20;282(15):1458-65. PMID: 10535437.
  34. Kick EL, McKinney LA, McDonald S, et al. A multiple-network analysis of the World System of Nations, 1995-1999. In: Scott J and Carrington P, eds. Sage handbook of social network analysis. Thousand Oaks, CA: Sage Publications; 2011. p. 311-27.
  35. Kuhn KM, Budescu DV. The relative importance of probabilities, outcomes, and vagueness in hazard risk decisions. Organ Behav Hum Decis Processes 1996 Dec;68(3): 301-17.
  36. Ibrekk H, Morgan GM. Graphical communication of uncertain quantities to nontechnical people. Risk Anal 1987 Dec;7(4):519-29.
  37. Steginga SK, Occhipinti S. Decision making about treatment of hypothetical prostate cancer: is deferring a decision an expert-opinion heuristic? J Psychosoc Oncol 2002;20:69-84.
  38. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(11)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality; April 2011. Chapters available at www.effectivehealthcare.ahrq.gov.
  39. Viswanathan M, Berkman ND. Development of the RTI item bank on risk of bias and precision of observational studies. J Clin Epidemiol. 2012 Feb;65(2):163-78. PMID: 21959223.
  40. Higgins JPT, Altman DG. Assessing risk of bias in included studies. In: Cochrane handbook for systematic reviews of interventions. John Wiley & Sons, Ltd; 2008. p. 187-241.
  41. Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health-Care Program. J Clin Epidemiol 2010 May;63(5):513-23. PMID: 19595577.

Definition of Terms

See Section I.

Summary of Protocol Amendments

Not applicable.

Review of KQs

For all EPC reviews, KQs were reviewed and refined as needed by the EPC, with input from Key Informants and the Technical Expert Panel (TEP), to ensure that the questions are specific and explicit about what information is being reviewed. In addition, for comparative effectiveness reviews, the KQs were posted for public comment and finalized by the EPC after review of the comments.

Key Informants

Key Informants are the end-users of research, including patients and caregivers, practicing clinicians, relevant professional and consumer organizations, purchasers of health care, and others with experience in making health care decisions. Within the EPC program, the Key Informant role is to provide input into identifying the KQs for research that will inform health care decisions. The EPC solicits input from Key Informants when developing questions for systematic review or when identifying high-priority research gaps and needed new research. Key Informants are not involved in analyzing the evidence or writing the report and have not reviewed the report, except as given the opportunity to do so through the peer or public review mechanism.

Key Informants must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Because of their role as end-users, individuals are invited to serve as Key Informants and those who present with potential conflicts may be retained. The Task Order Officer (TOO) and the EPC work to balance, manage, or mitigate any potential conflicts of interest identified.

Technical Experts

Technical experts constitute a multidisciplinary group of clinical, content, and methodological experts who provide input on methodological issues and the scope of the review. They are selected to provide broad expertise and perspectives specific to the topic under development. Divergent and conflicted opinions are common and perceived as healthy scientific discourse that results in a thoughtful, relevant systematic review. Therefore, study questions, design, and/or methodological approaches do not necessarily represent the views of individual technical and content experts. Technical experts provide information to the EPC to identify literature search strategies and recommend approaches to specific issues as requested by the EPC. Technical experts do not perform analysis of any kind, nor do they contribute to the writing of the report.

Technical experts must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Because of their unique clinical or content expertise, individuals are invited to serve as technical experts and those who present with potential conflicts may be retained. The TOO and the EPC work to balance, manage, or mitigate any potential conflicts of interest identified.

Peer Reviewers

Peer Reviewers are invited to provide written comments on the draft report based on their clinical, content, or methodological expertise. Peer review comments on the preliminary draft of the report are considered by the EPC in preparation of the final draft of the report. Peer Reviewers do not participate in writing or editing of the final report or other products. The synthesis of the scientific literature presented in the final report does not necessarily represent the views of individual reviewers. The dispositions of the peer review comments are documented and will, for comparative effectiveness reviews and technical briefs, be published 3 months after the publication of the evidence report.

Potential reviewers must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Invited Peer Reviewers may not have any financial conflict of interest greater than $10,000. Peer Reviewers who disclose potential business or professional conflicts of interest may submit comments on draft reports through the public comment mechanism.

EPC Team Disclosures

Not applicable.

Role of the Funder

This project was funded under Contract No. 290-2007-10056-I #7 from the Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services. The TOO reviewed contract deliverables for adherence to contract requirements and quality. The authors of this report are responsible for its content. Statements in the report should not be construed as endorsement by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

Project Timeline

Communication and Dissemination Strategies To Facilitate the Use of Health-Related Evidence

Mar 5, 2012: Topic Initiated
Jul 31, 2012: Research Protocol (Archived)
Nov 20, 2013

Internet Citation: Research Protocol: Communication and Dissemination Strategies To Facilitate the Use of Health-Related Evidence. Content last reviewed December 2019. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/medical-evidence-communication/research-protocol
