Effective Health Care Program

Public Reporting as a Quality Improvement Strategy: A Systematic Review of the Multiple Pathways Public Reporting May Influence Quality of Health Care

Research Protocol

Background and Objectives for the Systematic Review

Closing the Quality Gap: Revisiting the State of the Science (hereafter, CQG series) is a collection of evidence reviews that focuses on improving the quality of health care through critical assessment of relevant evidence for selected settings, interventions, and clinical conditions. The CQG series aims to assemble the evidence about effective strategies to close the “quality gap”—the difference between what is expected to work well for patients based on known evidence and what actually happens in day-to-day clinical practice across populations of patients. The evidence review, “Public Reporting as a Quality Improvement Strategy,” will be included in the CQG series.

A. Background and Context

Research demonstrates that health care frequently fails to meet the current standards of quality care.1,2 Errors, suboptimal management or control of disease, and overutilization or underutilization of services occur when high-quality, evidence-based health care is not provided. All these factors have potentially serious consequences for patients and their families, including higher mortality, increased morbidity, decreased quality of life, and higher cost of care. Additionally, low-quality care and inconsistencies in quality are linked to health care disparities.3,4

Federal and State governments, community quality collaboratives, and other organizations are investing resources in public reporting as one possible intervention to bridge the gap between current and high-quality practice in health care. Public reporting can be broadly defined as the provision of information about an organization or individual to a large audience. The assumptions underlying public reporting are: 1) given choices and information, patients and purchasers will choose higher quality providers; and 2) health care providers will strive to provide high-quality care when information about their performance is publicly available to patients, their peers, policymakers, and the media. These assumptions are based on theories in economics5,6 and behavior change.7 According to economic theory, public reporting corrects asymmetries in information. Public reporting accomplishes this by making previously unobservable quality of health care more transparent so everyone involved can use the information. Behavior change models and quality improvement theories stress the importance of accessible information on measurable, actionable processes and outcomes as motivation for practice improvement. Public reporting in this context can provide data that translate to goals or targets for practice change and quality improvement and to incentives to improve.

Situating public reporting in the context of quality improvement in health care requires the more specific definition we will use for this systematic review:

Public Reporting is data, publicly available or available to a broad audience free of charge or at a nominal cost, about a health care structure, process or outcome at any provider level (individual clinician, group, organization). While public reporting is generally understood to involve comparative data across providers, for purposes of this review we are adopting a broader approach to include findings in which a single provider is compared to a national/regional data report on performance for which there are accepted standards or best practices.


A recent Agency for Healthcare Research and Quality (AHRQ) series on best practices in public reporting,8-10 along with conferences about creating and using reports and other decision-support tools to engage consumers and providers, demonstrates that there is a need and an audience for a review of public reporting as a quality improvement strategy. A systematic review can substantially contribute to the field because questions remain regarding the extent to which public reports result in quality improvements. Previous research has examined the effect of public reporting on quality at several levels (e.g., health plans, individual clinicians) and in a variety of settings including hospitals,11,12 nursing homes,13 postacute care,14 and home care.15 However, the results were inconsistent. For example, some studies have reported improvements in specific health services, while other studies have documented unintended negative consequences, including motivating providers to select lower risk patients in order to improve their quality scores. A review published in 2008 (including studies through 2006) concluded that evidence that publishing performance data improves quality of care is scant and that evaluation of public reporting systems is needed, although some evidence suggests that public reports stimulate quality improvement activities at the hospital level.16

This review is timely, given the significant changes that have occurred in the scope and nature of public reporting since the last published review.16 Medicare now provides quality data via sections of the Medicare.gov Web site that include Hospital Compare, Nursing Home Compare, Home Health Compare, Dialysis Facility Compare, and Physician Compare. Health data from many more sources are also available with minimal restrictions to patients, health care providers, and purchasers. New technologies allow for innovative data collection (e.g., Global Positioning System tracking of asthma inhaler use, aggregating data from consumer feedback sites, and apps known as mashups that simplify the combination of data from multiple sources) and make more data available in real time.17 These efforts and continuing commitments to transparency and patient-centered health care are likely to contribute to substantial increases in the amount of publicly available health care–quality data. Changes under the 2010 Affordable Care Act (Public Law 111-148) may also increase the availability of data and the number of people making decisions about health care services.

Available studies may offer insights not only into the effectiveness of public reporting for quality improvement but also into such issues as when information is needed,18 how it is best formatted and presented, and what is perceived as useful by different audiences.19 A synthesis that includes consideration of these and other characteristics of reports and contextual factors can be used to inform the development of public reporting as a more effective quality improvement strategy.

The way in which health care data are publicly reported may affect the impact they have on intermediate and ultimate outcomes. Specific examples of report characteristics for which evidence may be available include the extent to which reports are:

Acceptable/Appropriate
Patients and health care providers find the data believable and have confidence in data quality/accuracy, and the data are applicable to their situation, including whether reports are general, disease specific, or specific to subgroups of the population.
Accessible
The format, language, and graphics of the reports can be understood by the target audiences, and the target population can grasp the meaning of the report. Accessibility also includes how reports are publicized and promoted.
Actionable
Patients: Reports are available when and where a decision needs to be made.
Individuals or organizations that provide care: Reports are related to practices they can, or perceive they can, change, or reports are related to other factors they can influence.

In addition to the characteristics of the public reports, contextual factors that could make a health care decision more or less amenable to influence from public reports include three nested levels. First, there are the characteristics of the specific decision to be made (e.g., what type of care is needed, how many health care options are available, how much time before the decision needs to be made, and what a provider can influence). Second, a person or organization makes each specific decision, and the characteristics of the decisionmaker (patient/patient representative/purchaser or individuals/health care organizations that deliver care) may be important. For example, patient literacy is assumed to affect the impact of public reports, and the importance of peer approval to a provider may motivate change. Third, the decision and the decisionmaker exist in an environment that includes factors such as market characteristics, public policies, and organizational requirements, all of which may enhance or diminish the impact of public reporting.

B. Objectives of the Systematic Review

Given the resources devoted to public reporting and the desire to synthesize existing research knowledge to inform future public-reporting efforts, the objectives of this systematic review are:

  1. To determine the effectiveness of public reporting as a quality improvement strategy by evaluating the evidence available about whether public reporting results in improvements in health care delivery and patient outcomes (Key Question [KQ] 1) and evidence of harms resulting from public reporting (KQ 2).
  2. To determine whether public reporting leads to changes in health care delivery or changes in patients’ or purchasers’ behaviors (intermediate outcomes) that may contribute to improved quality of care (KQs 3 and 4).
  3. To identify characteristics of public reports and contextual factors that can increase or decrease the impact of public reporting (KQs 5 and 6).

The Key Questions

A. Key Questions for Objective 1

Question 1

Does public reporting result in improvements in the quality of health care (including improvements in health care delivery structures, processes or patient outcomes)?

Question 2

What harms result from public reporting?

B. Key Questions for Objective 2

Question 3

Does public reporting lead to change in health care delivery structures or processes (at levels of individual providers, groups, or organizations [e.g., health plans, hospitals, nursing facilities])?

Question 4

Does public reporting lead to change in the behavior of patients, their representatives, or organizations that purchase care?

C. Key Questions for Objective 3

Question 5

What characteristics of public reporting increase its impact on quality of care?

Question 6

What contextual factors (population characteristics, decision type, and environmental factors) increase the impact of public reporting on quality of care?

D. PICOTS

Specifying the Population, Intervention, Comparators, Outcomes, Timing, and Settings (PICOTS) for a systematic review is an approach used to generate answerable research questions, to determine inclusion/exclusion criteria, and to organize reports.

For our review of public reporting as a quality improvement strategy, the PICOTS are as follows:

Populations

  1. Individuals or organizations that deliver health care and make decisions about how to deliver care.

    These include health care providers in all settings (inpatient, outpatient, nursing facility, home care, etc.) and at all levels (health plan, facility, group practice, individual provider) unless specifically excluded in the scope or inclusion criteria.
  2. Patients (or their representatives) making health care decisions and organizations that purchase health care services.

    Patients include any person seeking or receiving health care services. Patients may also be represented by family or designated guardians in specific decisions or by advocacy groups that call for changes in care delivery. Purchasers, or organizations that purchase care for patients, are included in this population because they make choices concerning which individuals and organizations that provide care are available to patients, or they may promote the use of certain providers.

Intervention

Public reporting of performance data on patient outcomes or health care delivery. Public reporting for this review is defined above under Background and Context.

Comparators

No publicly reported data or comparisons across different reports, different contexts for public reports, or differences in content and formats of reports.

Outcomes (Specified for Each KQ)

  • KQ 1. Improvements in quality of health care (includes improvements in health care–delivery structure or processes or patient outcomes).

    Improvements in care and patient outcomes may be combined in some studies and reviews under the heading of “clinical outcomes.” For this KQ the focus is on improvement. Examples of potential outcomes in this category include a decline in mortality for cardiac surgery patients, an increase in actual implementation of a guideline, or greater availability of services with known value. Actual improvements in care delivery and patient outcomes are the goals of quality improvement and of public reporting when it is used as a quality improvement strategy. In looking for improvement, it is possible that the findings will be no improvement or “negative improvement,” that is, worsening of quality.

    Change is an intermediate outcome included in KQ 3, as it is not a given that all change will lead to improvement; furthermore, some studies may only measure the change in care processes or providers’ behaviors and not have sufficient data to determine the impact of that change.

    Quality improvement in health care is the focus of the CQG series, and this review will conform to the definition for the series, which states that the “series aims to assemble the evidence about effective strategies to close the ‘quality gap,’” the difference between what is expected to work well for patients based on known evidence and what actually happens in day-to-day clinical practice across populations of patients. In this statement the implied definition of quality is “what is expected to work well,” which is similar to the Institute of Medicine definition, “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”20 We will apply this broad definition when determining whether the public reporting in studies to be included is aimed at improvements in quality of care.
  • KQ 2. Harms include any negative consequence or adverse events for any members of the populations listed above that result from public reporting.

    Harms could occur for patients and purchasers or for the individuals and organizations that provide care. Examples of harms could include:

    1. Reduced access to services if providers select patients or offer services in a different way (e.g., pull out of a home care market) in order to improve their publicly reported quality ranking or score
    2. Compromised data quality and reduced confidence in data if people attempt to manipulate the publicly reported data
    3. Reduced patient engagement and/or negative outcomes if patients believe, based on a report, that they are receiving services from a high-quality provider and therefore do not need to be vigilant and involved in their own care; a report provides too much information and reduces comprehension; or the meaning of the data is not understood and therefore not used
    4. Increased anxiety due to understanding that health care is not perfect and worry about one’s own health condition or care
    5. Misclassification of providers by the reporting, resulting in negative impacts on market share, contracting arrangements, or reputation
    6. Public reporting that results in no improvement or worsening of quality for any reason (including those listed above)
  • KQ 3. Changes in health care delivery structures and processes.

    This intermediate outcome, changes in health care delivery, may be of particular interest in this review. Individual providers or organizations might change processes (e.g., adopt guidelines, change policies, increase quality improvement efforts) or structures (e.g., electronic ordering, automated reminders, staff capacity) in an effort to improve their performance on the outcomes or indicators that are publicly reported. However, this change in delivery will not necessarily lead to improvement in quality of care—the ultimate outcome of interest. Changes could result in improvement, no improvement, or worsening of outcomes, or the study design may not include measures of the ultimate impact on quality of care.
  • KQ 4. Changes in patients’ or their representatives’ or purchasers’ health care behavior.

    Patients’ and purchasers’ behaviors include but are not limited to their selection of health care providers or use of health services. Their behaviors may also include more general advocacy for higher quality of care and for better information and decision support. Patient behaviors are limited to those related to the reporting of quality data. Changes can be negative as well as positive. An example of a positive change would be increased comprehension of health information by patients. Negative changes could include patients becoming overwhelmed by data and dismissing all reports, relying too much on a rating and not becoming engaged in their own care, or not understanding reports and relying on less reputable sources of information. These negative changes could result in harms. Change in behaviors can also include information seeking and developing the ability to retrieve the information desired.
  • KQs 5 and 6 focus on evidence that the outcomes listed above are affected by characteristics of the reports and by contextual factors. This is particularly important given the quality improvement focus of this review, which distinguishes its emphasis from that of other reviews. Quality improvement requires consideration not just of what works but also of what works for whom and when. Understanding whether the literature can tell us more about how the impact of public reporting varies across report characteristics and different contexts is important if the results of our review are to help inform future public reporting efforts. Particular attention will be paid to these characteristics and factors as we abstract information from the identified articles.

Timing

No minimum duration of followup time from the availability of the public report to the measurement of the intermediate or ultimate outcome will be required.

Settings

Any level or setting for health care delivery including health plans, health systems, hospitals, outpatient services or practices, individual clinicians, hospice, home health care, or nursing facilities.

Analytic Framework

The analytic framework in Figure 1 represents relationships among the Populations, Intervention, and Outcomes that are the focus of this systematic review and illustrates how these relationships translate into the KQs. The relationship between intermediate outcomes and the ultimate improvement in the quality of health care is included. It is represented with dashed lines and does not have corresponding KQs because this review will not explicitly evaluate evidence about these relationships. Rather, the focus will be on whether public reporting results in either the intermediate outcomes or improvements in quality of care.

Figure 1. Analytic framework

This figure depicts how the intervention, public reporting of health care quality data, may affect two populations: (1) individuals and organizations that deliver care and (2) patients, their representatives, and organizations that purchase care. The figure shows public reporting ultimately having an impact on quality of care. Public reporting may result in improvements in the quality of health care (Key Question 1) or harms (Key Question 2). These are the main outcomes of interest for this review; however, intermediate outcomes are included in the framework between the intervention and the quality of care to convey that these intermediate outcomes are of interest when they are reported in available research. These intermediate outcomes include changes in structures and processes by those that deliver care (Key Question 3) and changes in patients’ or purchasers’ related behavior (Key Question 4). The figure also represents the possibility that the characteristics of the public reports may affect the impact of public reports on the quality of care and intermediate outcomes (Key Question 5). Finally, the figure illustrates that contextual factors may influence the impact of public reporting on the intermediate outcomes or the quality of care (Key Question 6).

Abbreviations: KQ = key question; QI = quality improvement

Methods

A. Criteria for Inclusion/Exclusion of Studies in the Review

Research studies will be included if they conform to the definition of public reporting and the PICOTS listed above and address at least one of the KQs. Given the nature of the intervention, we will include a variety of study designs such as trials/experiments, nonrandomized experiments, observational studies, systematic reviews, and evaluation case studies. Grey literature will be searched using grants databases, specific grey literature databases, and a targeted Web site review. If an English abstract is available for non–English-language articles, it will be evaluated in terms of content. At the full-text review, it will be determined whether the non–English-language article adds significantly to the literature and whether it is feasible to obtain a translation.

Studies will be excluded if:

  • The data are not publicly available or are unavailable to a large group such as all members of a health plan. Studies in which the data are available to a limited number of stakeholders or to a certain type of stakeholder for feedback, quality improvement, benchmarking, or internal organization operations will not be included.
  • The data are available but have to be purchased for more than a nominal subscription fee (e.g., a nominal fee would be a subscription to Consumer Reports or a similar publication or Web site).
  • Data included in the report are only for one organization or individual and cannot be compared to other organizations directly or to data for a group (national, State, regional) of organizations or of individuals.
  • The public reporting is only about services that are not directly health related or medical (e.g., food service, room décor).
  • The public reporting is only about individual providers other than physicians and nurses (e.g., dentists, dieticians, therapists).
  • The study has no original data or is a commentary, an editorial, or a nonsystematic review.
  • The study was published before 1980.
  • No English abstract is available for a non–English-language article.

B. Searching for the Evidence: Literature Search Strategies for Identification of Relevant Studies To Answer the Key Questions

Our search will incorporate searches of bibliographic databases using keywords and indexing terms, as well as a forward search of citations of previously conducted systematic reviews (e.g., Fung et al.16). Keyword and index searching will be used to identify studies of public reporting not included in previous reviews either because they are more recent or because they address different topics related to public reporting. For example, public reporting has expanded into additional health care settings (e.g., nursing facilities, home care) and may be employing new technologies (e.g., applications that provide quality data that can be customized).

We will search for systematic reviews in The Cochrane Database of Systematic Reviews. We will then conduct a search for both reviews and individual studies in MEDLINE®, EMBASE®, EconLit, and PsycINFO®. Based on these results, additional searching for studies will be conducted in the Business Source® Premier, CINAHL® (Cumulative Index of Nursing and Allied Health Literature), and PAIS (Public Affairs Information Services) International databases. Web of Science or SciVerse Scopus (citation databases) will be used to identify articles that cite key studies, and these searches will be evaluated to determine if they contribute to the search results. Identified systematic reviews will be used to identify original studies for inclusion. The Grey Literature Report database maintained by the New York Academy of Medicine will be searched for additional studies and reports. The references of included articles, key Web sites, and recommendations of stakeholders and experts will be used to identify additional grey literature. The searches will include studies published or reported between January 1980 and March 2011. Two of the earliest public reports in the United States were the data on hospital mortality rates issued by the Health Care Financing Administration in 1986 and the mortality reports issued by the New York Cardiac Surgery Reporting System in 1989. Starting from January 1980 should ensure that the entire contemporary history of public reporting is represented. The searches will be updated while the draft report is under review, and any new studies identified will be reviewed and included in the final report.

Keyword and index term searches will be based on strategies used in previous systematic reviews and on words and terms used in selected recent articles. The search term lists will be reviewed by librarians with expertise in both biomedical and social science literature searching and will be provided to stakeholders and experts for comments and suggestions. The key concepts and search strings that will be the basis for all our searches are included in Table 1. The strategies and results from all the searches will be included in the final report.

Titles and abstracts of studies identified in the search will be subjected to dual review to determine if they should undergo a full-text review. At each step in the process an initial subset of citations will be reviewed, discussed, and then reconciled before continuing to increase inter-reviewer reliability. Decisions made by reviewers will be documented, and the level of initial agreement will be reported. Any discrepancies will be resolved through discussion and third-party review if necessary. We will retain data on excluded studies and document the reasons for their exclusion.
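The level of initial agreement between reviewers is often summarized with a chance-corrected statistic such as Cohen's kappa. The protocol does not specify which statistic will be reported, so the sketch below is illustrative only; the decision labels and example data are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items with identical decisions.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:  # both raters used a single, identical category
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions on ten abstracts by two reviewers.
a = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc"]
b = ["inc", "exc", "exc", "exc", "inc", "exc", "inc", "inc", "exc", "exc"]
print(round(cohens_kappa(a, b), 2))
```

Raw percent agreement (here 8 of 10) overstates reliability when one decision dominates, which is why a chance-corrected statistic is the usual choice for reporting dual-review agreement.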

Table 1. Public reporting review: concepts and search strings
Concept | Search string
Information dissemination and quality | Benchmarking/ or Information Services/ or Information Dissemination/ or Disclosure/ or Access to Information/ or Mandatory Reporting/ or Quality indicators, health care/ or Quality assurance, health care/ or Quality improvement/ or "process assessment (health care)"/ or "outcome assessment (health care)"/ or (quality adj2 indicator$).ti,ab
Health care settings | exp Hospitals/ or exp Physicians/ or Nursing Homes/ or Home Care Services/ or Competitive Medical Plans/ or Health Maintenance Organizations/ or Managed Care Programs/ or Insurance, Health/ or Medicare/ or Medicaid/ or Hospices/ or Ambulatory Care/ or Skilled Nursing Facilities/ or Group Practice/ or exp Primary Health Care/ or Institutional Practice/ or Private Practice/ or Family Practice/ or Physicians, Family/ or Professional Practice/ or Allied Health Personnel/ or Outpatient clinics, hospital/ or Academic Medical Center/ or Health Care Sector/ or Hospital Administration/ or Public Health Administration/ or Long Term Care Facilit$.ti,ab. or health care cent$3.ti,ab. or health care provider$.ti,ab. or (coronary or cardiac or cardiolog$).ti,ab.
Patient/consumer and provider behavior | Consumer Participation/ or Consumer Advocacy/ or Consumer Satisfaction/ or Patient Satisfaction/ or Decision Making/ or Choice Behavior/ or Attitude of Health Personnel/ or Physician's Practice Patterns/ or Nurse's Practice Patterns/ or Professional Practice/ or Guideline Adherence/ or Patient Selection/ or Patient Participation/ or Hospital Mortality/ or (decision$ or choice$ or choos$ or behav$ or patient outcome$).ti,ab.
Title abstract adjacency | (((Dissem$ or Disclos$ or Profil$ or Inform$ or Indicator$ or Metric$ or Rank$ or Compar$ or Score$ or Rating$ or Rate$ or data or measure$ or criteria or standard$ or account$ or report$ or release$ or initiative$ or Star) adj5 (Performan$ or assessment$ or evaluat$ or quality or public$ or consumer$ or patient$ or transparen$ or provider$)) or score card$ or (quality adj2 report$) or report card$ or league table$ or (star adj2 rating) or (Star adj2 performance)).ti,ab.
Known report cards | (Medicare Compare or nursing home compare or Calhospital Compare or California State Report Card or California Hospital Outcomes or myhealthcareadvisor or Massachusetts Health Quality or (Pennsylvania adj3 coronary) or (Hospital Quality adj2 Safety Survey) or Home health Compare or Physician Compare or (New York adj2 Cardiac adj2 Report$) or (New York adj5 surg$) or Cleveland Health Quality Choice or (HCFA adj5 mortality) or (HCFA adj5 death) or Federal employee health benefit guide or QualityCounts or CAHPS or HEDIS).ti,ab.

C. Data Abstraction and Data Management

To increase consistency among the data abstractors, all abstractors will independently abstract data from the same eight articles. They will then meet to review differences in their results and agree on common procedures. Ongoing accuracy will be monitored by randomly selecting articles abstracted by one abstractor to be checked by a second. The exact number of articles to be checked will depend on the number identified for abstraction but will be no less than 10 percent of the total completed by each abstractor.
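The random selection of articles for double-checking could be drawn as in the sketch below; the protocol specifies only the 10-percent floor, so the function name, seed, and minimum of one article are illustrative assumptions.

```python
import math
import random

def sample_for_double_check(article_ids, fraction=0.10, seed=2011):
    """Randomly draw at least `fraction` of one abstractor's articles
    for independent re-abstraction by a second abstractor.
    Note: the fixed seed (an assumption, for reproducibility) and the
    minimum of one article are not specified in the protocol."""
    k = max(1, math.ceil(fraction * len(article_ids)))
    return sorted(random.Random(seed).sample(list(article_ids), k))

# Hypothetical example: 25 articles completed by one abstractor.
checked = sample_for_double_check(range(1, 26))
print(len(checked))  # ceil(0.10 * 25) = 3 articles to re-abstract
```

Using `math.ceil` rather than rounding guarantees the "no less than 10 percent" requirement is met even when the count is not a multiple of ten.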

We will extract data from all included studies on domains such as:

  • Study design
  • Description of the public report
    • Type of health care setting or provider
    • Geographic location (country, State, city, or region)
    • The topic of the public reporting (e.g., mortality for cardiac procedures, patient-provider communication as measured in the CAHPS survey, health care–acquired infection rates)
  • Findings for each outcome (KQs 1–4)
  • Any reported characteristics of the public reports related to whether the reports are accessible, appropriate, or actionable, as described above, including the format (Web-based or print), the distribution strategy including timing, and perceptions of report quality or relevance to the populations; if this information is not included in selected studies, this will be noted
  • Reported contextual factors including characteristics of the decision (e.g., selection for care in the future or for a more immediate need); characteristics of the decisionmakers (e.g., any information on literacy, engagement, usual sources of information, or key influences on behavior); and the broader context of the decision (e.g., the availability of services in the market or public policies requiring reporting)

D. Assessment of Methodological Quality of Individual Studies

Our assessment of the quality of studies will be based on the recommendations in the AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews (hereafter, Methods Guide).21 Study designs will be classified according to type (e.g., randomized trial, nonrandomized trial, observational study, etc.) as part of the data abstraction phase, and each major type of study will be assessed according to relevant criteria. Quality ratings will be made on all articles by two raters. Differences will be resolved by discussion and the use of a third rater.

  • We will evaluate trials in terms of how the intervention and comparison groups were constructed (e.g., randomization), the comparability of the groups at baseline and throughout the study (including loss to followup for individuals or changes in the compositions for studies of groups), and the implementation of the intervention.
  • Our evaluation of nonrandomized experiments and observational studies will focus on potential bias in the construction of comparisons (either across groups or over time), potential confounders that could be responsible for effects on outcomes, and how potential confounders are addressed in the study.
  • If systematic reviews are used only to identify other studies, they will not be evaluated. If we identify systematic reviews that correspond to the KQs in this review, we will use a modification of the AMSTAR (assessment of multiple systematic reviews) measurement tool to evaluate the systematic reviews, as recommended in the Methods Guide,21 before we consider whether to incorporate summarized evidence from these reviews into our report.

E. Data Synthesis

We will construct summary tables that include study characteristics and quality ratings, population and public report characteristics, and outcome ascertainment and results for each study. We will synthesize the included studies according to the KQs they address and the health care settings in which they were conducted. We will also contrast and summarize results by other groupings (e.g., report format or the quality indicators included in the reports) if we are able to identify differences in effectiveness across other groups of reports. These will be summarized in tables or narratives as appropriate given the number of studies identified.

If several studies are identified that are similar with regard to the type of public reporting (the intervention), outcomes, and study design, we will consider quantitative meta-analysis. If this is not possible, we will use qualitative groupings of public reporting interventions to identify trends in study findings and to compare different approaches to public reporting.

F. Grading the Evidence for Each Key Question

We will assess the body of evidence for each KQ according to the recommendations in the chapter “Grading the Strength of a Body of Evidence When Comparing Medical Interventions” in the AHRQ Methods Guide.21 These assessments will be performed independently for each KQ by two raters, at least one of whom will be the principal investigator, a coinvestigator, or an expert reviewer. They will meet to discuss and resolve differences in evidence grading and will document their decisions. Our assessment of the strength of the evidence will be based on judgments about risk of bias, consistency, directness, and precision of the evidence for each outcome. The evidence for outcomes across the included studies will be graded as high (high confidence that the evidence reflects the true effect; further research is unlikely to change our confidence in the estimate of the effect), moderate (moderate confidence that the evidence reflects the true effect; further research may change our confidence in and the estimate of the effect), low (low confidence that the evidence reflects the true effect; further research is likely to change our confidence in and the estimate of the effect), or insufficient (evidence is unavailable or does not permit a conclusion).

G. Assessing Applicability

The applicability of studies of public reporting will be assessed by comparing studies to the PICOTS definitions and to the definition of public reporting. Applicability may vary according to the characteristics of the population and of the reports. For example, national studies may be more broadly applicable, whereas studies conducted in one geographic area may or may not apply to other areas because of differences in their health care markets, particularly the availability of health care providers or health plans. Likewise, national studies conducted in one country may be less applicable to countries whose health care systems differ significantly. Characteristics of the specific populations studied (e.g., high education and health literacy, older age) may also limit the generalizability of one study’s findings to populations with very different characteristics. Similarly, the data included in the public reports, their formatting, and their mode of delivery (e.g., paper, Web, apps) may limit the applicability of findings from studies of specific types of public reports to reports that are substantially different in form and content. For these reasons, we will abstract data about the reports and their context when provided and use these data in our assessment of applicability.

References

  1. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003 Jun 26;348(26):2635-45. PMID: 12826639.
  2. National Committee for Quality Assurance. The state of health care quality 2009. Washington, DC: National Committee for Quality Assurance. Available at: http://www.ncqa.org/Portals/0/Newsroom/SOHC/SOHC_2009.pdf. Accessed August 11, 2011.
  3. Agency for Healthcare Research and Quality. National healthcare disparities report, 2003: summary. Rockville, MD: Agency for Healthcare Research and Quality; February 2004. Available at: http://www.ahrq.gov/qual/nhdr03/nhdrsum03.htm. Accessed August 11, 2011.
  4. Sequist TD, Adams A, Zhang F, et al. Effect of quality improvement on racial disparities in diabetes care. Arch Intern Med 2006 Mar 27;166(6):675-81. PMID: 16567608.
  5. Akerlof GA. The market for 'lemons': quality uncertainty and the market mechanism. Q J Econ 1970;84(3):488-500.
  6. Stigler GJ. The economics of information. J Polit Econ 1961;69(3):213-25.
  7. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997 Aug 16;315(7105):418-21. PMID: 9277610.
  8. Hibbard JH, Sofaer S. Best Practices in Public Reporting No.1: How To Effectively Present Health Care Performance Data to Consumers (Prepared by Center for Health Improvement under Contract No. HHSA290200710022T). Rockville, MD: Agency for Healthcare Research and Quality; May 2010. AHRQ Publication No. 10-0082-EF.
  9. Sofaer S, Hibbard JH. Best Practices in Public Reporting No. 2: Maximizing Consumer Understanding of Public Comparative Quality Reports: Effective Use of Explanatory Information (Prepared by Center for Health Improvement under Contract No. HHSA290200710022T). Rockville, MD: Agency for Healthcare Research and Quality; June 2010. AHRQ Publication No. 10-0082-1-EF.
  10. Sofaer S, Hibbard JH. Best Practices in Public Reporting No. 3: How To Maximize Public Awareness and Use of Comparative Quality Reports Through Effective Promotion and Dissemination Strategies (Prepared by Center for Health Improvement under Contract No. HHSA290200710022T). Rockville, MD: Agency for Healthcare Research and Quality; June 2010. AHRQ Publication No. 10-0082-2-EF.
  11. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003 Mar-Apr;22(2):84-94. PMID: 12674410.
  12. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood) 2005 Jul-Aug;24(4):1150-60. PMID: 16012155.
  13. Mukamel DB, Spector WD, Zinn JS, et al. Nursing homes' response to the nursing home compare report card. J Gerontol B Psychol Sci Soc Sci 2007 Jul;62(4):S218-25. PMID: 17673535.
  14. Werner RM, Konetzka RT, Stuart EA, et al. Impact of public reporting on quality of postacute care. Health Serv Res 2009 Aug;44(4):1169-87. PMID: 19178586.
  15. Jung K. Proposal under development: assessing the impact of quality report cards on home health care. Totten A, personal communication, August 15, 2010.
  16. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008 Jan 15;148(2):111-23. PMID: 18195336.
  17. Health 2.0 Network. Health 2.0. Available at: http://www.health2con.com/. Accessed March 1, 2011.
  18. Jin GZ, Sorensen AT. Information and consumer choice: the value of publicized health plan ratings. J Health Econ 2006 Mar;25(2):248-75. PMID: 16107284.
  19. Vaiana ME, McGlynn EA. What cognitive science tells us about the design of reports for consumers. Med Care Res Rev 2002 Mar;59(1):3-35. PMID: 11877877.
  20. Committee on Quality of Health Care in America; Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academies Press; 2001. p. 232.
  21. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(11)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality; March 2011. Chapters available at: www.effectivehealthcare.ahrq.gov.

Definition of Terms

Definition of Public Reporting (repeated from section 1)

Public reporting is data, publicly available or available to a broad audience free of charge or at a nominal cost, about a health care structure, process, or outcome at any provider level (individual clinician, group, or organization). Although public reporting is generally understood to involve comparative data across providers, for purposes of this review we are adopting a broader approach that also includes findings in which a single provider is compared with a national/regional data report on performance for which there are accepted standards or best practices.

Additional clarification on the definition of public reporting

Comparative: Requiring that the data in a public report be comparative is meant to ensure that the data reported are relevant for decisionmaking and to exclude marketing and promotional materials and “word-of-mouth” reports such as individual opinions posted on Web sites. If a single organization or provider presents only its own data, but these data include information from a common, available source such as Medicare Compare, the Healthcare Effectiveness Data and Information Set (HEDIS), or the Consumer Assessment of Healthcare Providers and Systems (CAHPS®), the report is still considered comparative.

Accepted standards or best practices: This criterion is intended to be interpreted broadly; it does not require that indicators included in the public report be evidence based. It is meant to exclude reports that present facts about a provider or unique data (e.g., a nonstandard patient satisfaction survey) for marketing or promotional purposes and that are not comparative, as defined above.

Studies that include cost-effectiveness among reported indicators, or that examine an outcome with resource utilization implications that is currently accepted as a measure of quality of care (such as readmissions), will be included. Public reports of the cost of services alone will be excluded, as the relationship between cost and quality or quality improvement is not well understood.

Summary of Protocol Amendments

Amendments: None to date

In the event of protocol amendments, the date of each amendment will be accompanied by a description of the change and the rationale.

Review of Key Questions

For all Evidence-based Practice Center (EPC) reviews, KQs were reviewed and refined as needed by the EPC with input from the Technical Expert Panel (TEP) to assure that the questions are specific and explicit about what information is being reviewed.

Technical Experts

Technical Experts constitute a multidisciplinary group of clinical, content, and methodological experts who provide input in defining populations, interventions, comparisons, or outcomes, as well as in identifying particular studies or databases to search. They are selected to provide broad expertise and perspectives specific to the topic under development. Divergent and conflicting opinions are common and are perceived as healthy scientific discourse that results in a thoughtful, relevant systematic review. Therefore, study questions, design, and/or methodological approaches do not necessarily represent the views of individual technical and content experts. Technical Experts provide information to the EPC to identify literature search strategies and recommend approaches to specific issues as requested by the EPC. Technical Experts do not perform analysis of any kind, nor do they contribute to the writing of the report; they have not reviewed the report, except as given the opportunity to do so through the public review mechanism.

Technical Experts must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Because of their unique clinical or content expertise, individuals are invited to serve as Technical Experts, and those who present with potential conflicts may be retained. The Task Order Officer (TOO) and the EPC work to balance, manage, or mitigate any potential conflicts of interest identified.

Peer Reviewers

Peer reviewers are invited to provide written comments on the draft report based on their clinical, content, or methodological expertise. Peer review comments on the preliminary draft of the report are considered by the EPC in preparation of the final draft of the report. Peer reviewers do not participate in writing or editing of the final report or other products. The synthesis of the scientific literature presented in the final report does not necessarily represent the views of individual reviewers. The dispositions of the peer review comments are documented and, for Comparative Effectiveness Reviews (CERs) and Technical Briefs, will be published 3 months after the publication of the evidence report.

Potential Peer Reviewers must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Invited Peer Reviewers may not have any financial conflict of interest greater than $10,000. Peer Reviewers who disclose potential business or professional conflicts of interest may submit comments on draft reports through the public comment mechanism.