Making Healthcare Safer (MHS) IV: Patient Monitoring Systems to Prevent Failure to Rescue

Review Question: Based on the evidence published since the last MHS report, how effective are the patient safety practices (PSPs) involving patient monitoring systems and what are their unintended effects?

Contextual Questions

  1. How do the PSPs prevent or mitigate harms?
  2. What are common barriers and facilitators to implementation?
  3. What resources (e.g., cost, staff, time) are required for implementation?
  4. What toolkits are available to support implementation?

The Agency for Healthcare Research and Quality (AHRQ) Making Healthcare Safer (MHS) reports consolidate information for healthcare providers, health system administrators, researchers, and government agencies about PSPs that can improve patient safety across the healthcare system—from hospitals to primary care practices, long-term care facilities, and other healthcare settings. In Spring of 2023, AHRQ launched its fourth iteration of the MHS Report (MHS IV).

Patient monitoring systems were identified as high priority for inclusion in the MHS IV reports using a modified Delphi technique by a Technical Expert Panel (TEP) that met in December 2022. The TEP included 15 experts in patient safety with representatives of governmental agencies, healthcare stakeholders, clinical specialists, experts in patient safety issues, and a patient/consumer perspective. See the MHS IV Prioritization Report for additional details.1

For the purpose of this rapid review, we focus on electronic patient monitoring systems that scan patient data for signs of clinical deterioration in order to alert a clinician of a potential adverse condition. Such early warning systems generate a risk or deterioration score from multiple input parameters and may use artificial intelligence (AI) or machine learning algorithms.2 This definition of an early warning system does not include approaches that rely on a single vital sign threshold.
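To illustrate how such a system aggregates multiple input parameters into a single deterioration score, the sketch below scores two vital signs against banded thresholds and triggers an alert when the sum crosses an escalation threshold. All bands, point values, and the alert threshold are hypothetical and chosen for illustration only; they do not reproduce any validated instrument such as MEWS or NEWS2.

```python
# Illustrative multi-parameter early warning score.
# Bands, points, and threshold are hypothetical, not a validated clinical tool.

def vital_subscore(value, bands):
    """Return the sub-score for one vital sign.

    `bands` is a list of (lower, upper, points) tuples; the first band
    containing `value` determines the points contributed.
    """
    for lower, upper, points in bands:
        if lower <= value <= upper:
            return points
    return 3  # values outside all bands score maximally abnormal

# Hypothetical scoring bands for two parameters.
RESP_RATE_BANDS = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]
HEART_RATE_BANDS = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]

ALERT_THRESHOLD = 5  # hypothetical escalation trigger


def deterioration_score(resp_rate, heart_rate):
    """Aggregate the per-parameter sub-scores into one deterioration score."""
    return (vital_subscore(resp_rate, RESP_RATE_BANDS)
            + vital_subscore(heart_rate, HEART_RATE_BANDS))


def should_alert(resp_rate, heart_rate):
    """True if the aggregate score meets the escalation threshold."""
    return deterioration_score(resp_rate, heart_rate) >= ALERT_THRESHOLD
```

A single-threshold approach, by contrast, would alert on one abnormal vital sign in isolation; the aggregation step is what distinguishes the early warning systems considered in this review.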

Failure to rescue remains a persistent patient safety problem despite wide implementation of rapid response systems (RRSs), which are intended to improve the recognition of and response to patients experiencing clinical deterioration outside the intensive care unit. Clinical deterioration may progress to cardiorespiratory arrest, which itself carries a high mortality (nearly 75%) for hospitalized patients.3, 4 RRSs frequently are not activated in a timely manner, or at all, for patients who clearly meet criteria for RRS activation. These failures to activate and delays in activation have been associated with increased morbidity and mortality compared with deteriorating patients for whom the RRS was promptly activated.5-7 This failure of the recognition and activation component of the RRS is referred to as afferent limb failure.

PSPs addressing afferent limb failure have focused on improving patient monitoring systems to enhance recognition and activation. These improvements include implementation of early warning scores, automatic calculation of early warning scores, comparisons of different early warning scores, continuous monitoring as opposed to intermittent monitoring (which is the standard on general hospital wards), automatic escalation protocols, and the application of AI and machine learning to vital sign monitoring data.

Patient monitoring systems were reviewed in the previous MHS III report.8 That report identified the number of rescue events (RRS or code team response) as a major process measure used in studies of this type of PSP but questioned the relationship of this process measure to relevant outcomes such as mortality. The report noted that three studies using a continuous monitoring intervention found significant reductions in the incidence of rescue events.9-11 One study found no statistically significant improvement in the number of rescue events but did find that the number of initial activations decreased (enough patients had multiple activations that the total number was not affected).12 However, none of the studies found a decrease in other outcomes such as mortality or transfer to a higher level of care. The MHS III report also noted that a systematic review by Cardona-Morrell found that intermittent monitoring systems were associated with a statistically significant reduction in mortality, while continuous monitoring systems were not.13 Heterogeneity in the studies likely contributed to this counterintuitive result. Only one study identified in the MHS III report found a reduction in transfers to an intensive care unit,9 while several others did not.10-12, 14, 15 Four studies (three continuous monitoring studies and one intermittent monitoring study)10, 11, 15, 16 reported average hospital length of stay, and three of these found a significant effect of a patient monitoring system.10, 15, 16 Few unintended consequences were identified.12, 17

In the prioritization process, the MHS IV TEP noted the widespread scope of patient monitoring systems but continuing uncertainty about the effectiveness of PSPs involving patient monitoring systems. The TEP agreed that it was important to review the topic again because they expected an increasing number of studies about the use of AI in patient monitoring systems.

The main purpose of this review is to assess the evidence on the effectiveness of PSPs involving electronic patient monitoring systems that scan patient data for signs of clinical deterioration to alert a clinician of a potential adverse condition. The report should be of interest to critical care specialists and hospitals that must make decisions about deployment of patient monitoring systems on hospital wards.

For this rapid review, strategic adjustments will be made to streamline traditional systematic review processes and deliver an evidence product in the allotted time. We will follow the adjustments and streamlining processes proposed by the AHRQ Evidence-based Practice Center (EPC) Program. Adjustments include being as specific as possible about the questions, limiting the number of databases searched, modifying search strategies to focus on finding the most valuable studies (i.e., being flexible on sensitivity to increase the specificity of the search), restricting the search to English-language studies published since January 1, 2018 (when the search was done for the MHS III report), and having each study assessed by a single reviewer. The EPC team plans to use the AI feature of DistillerSR (AI Classifier Manager) as a second reviewer at the title and abstract screening stage, as described below in the section on Data Extraction.

We will search for good or fair quality systematic reviews published since the last MHS report, using the criteria developed by the United States Preventive Services Task Force Methods Workgroup18 for assessing the quality of systematic reviews:

  • Good - Recent relevant review with comprehensive sources and search strategies; explicit and relevant selection criteria; standard appraisal of included studies; and valid conclusions.
  • Fair - Recent relevant review that is not clearly biased but lacks comprehensive sources and search strategies.
  • Poor - Outdated, irrelevant, or biased review without systematic search for studies, explicit selection criteria, or standard appraisal of studies.  

We will rely primarily on the content of any good or fair systematic review that is found. We will not perform an independent assessment of original studies cited in any such systematic review.

For Contextual Questions 1 (rationale), 2 (barriers and facilitators), and 3 (resources), we will draw on information reported in the studies addressing the main Review Question. For Contextual Question 4, we will identify publicly available patient safety toolkits developed by AHRQ or other organizations that could help to support implementation of the PSPs. To accomplish that task, we will review AHRQ's Patient Safety Network (PSNet) and AHRQ's listing of patient safety related toolkits and we will include any toolkits mentioned in the studies we find for the Review Question. We will identify toolkits without assessing or endorsing them.


We will search for original studies and systematic reviews on the Review Question according to the inclusion and exclusion criteria presented in Table 1.

Table 1. Inclusion and Exclusion Criteria

Population
  Inclusion criteria:
  • Hospitalized patients on general hospital wards (non-ICU patients)
  Exclusion criteria:
  • Studies of ICU and emergency department patients (who are always or often on continuous monitoring) who are not eligible for rapid response system activation

Intervention
  Inclusion criteria:
  • Change in the standard of care for general ward patient monitoring
  • Change in monitoring modality (e.g., continuous vs. intermittent, wireless vs. wired) or incorporation of a new parameter/technology, not simply a new device manufacturer
  • Change in parameters monitored, implementation of a scoring system/index (manually or electronically in the electronic health record), AI/machine learning algorithms, or change in communication cascades for monitoring data outputs
  Exclusion criteria:
  • No change as compared to the standard of care process
  • Change in manufacturer and/or vendor of the monitoring system only (for example, switching pulse oximeter brands, unless the switch included a new technology)

Comparator
  Inclusion criteria:
  • Time period or patient population adhering to the previous standard of care monitoring practice
  Exclusion criteria:
  • No defined comparison group
  • Comparator group is not appropriate (would not have equivalent exposure to the intervention)

Outcome
  Inclusion criteria:
  • Activation of the rapid response system (a process measure)
  • Cardiorespiratory arrest
  • Serious adverse events (as defined by the study)
  • Hospital mortality
  • Measurement of alarm/alert fatigue
  • Transfer to a higher level of care
  • Hospital length of stay
  Exclusion criteria:
  • No outcome of interest

Timing
  Inclusion criteria:
  • All studies and new systematic reviews published since the inclusion period of the MHS III report (January 1, 2018)
  Exclusion criteria:
  • Published before January 1, 2018

Study Time Period
  Inclusion criteria:
  • Defined study periods with and without the patient monitoring system change for historically controlled trials
  • For cohort studies and trials, a specified study time frame
  Exclusion criteria:
  • Study time periods are not defined

Setting
  Inclusion criteria:
  • Any in-hospital general ward setting that implements an intervention of increased monitoring or a change in monitoring modality, applies an algorithmic approach (AI, early warning score, etc.) to the existing monitoring standard, or modifies how monitoring information is communicated to providers
  Exclusion criteria:
  • ICU, emergency department, operating room, invasive procedural locations such as interventional cardiology, and/or outpatient settings

Type of Studies
  Inclusion criteria:
  • RCTs, non-randomized trials, and observational studies with a comparison group
  Exclusion criteria:
  • Study design not specified or no control described
  • Opinion surveys or subjective data
  • Not published in English
  • Conducted in a low-income or lower middle-income country (according to World Bank criteria)

AI = artificial intelligence; ICU = intensive care unit; MHS = Making Healthcare Safer; RCT = randomized controlled trial; vs = versus

Our search strategy will focus on databases expected to have the highest yield of relevant studies, including PubMed and the Cochrane Library, supplemented by a narrowly focused search for unpublished reports that are publicly available from governmental agencies or professional societies having a strong interest in patient monitoring systems, including the Society of Critical Care Medicine, Society of Hospital Medicine, AHRQ, National Quality Forum, Joint Commission, and International Society for Rapid Response Systems.

To efficiently identify studies that meet the eligibility criteria, we will distribute citations from the literature search among team members, with the title and abstract of each citation reviewed by a single team member. The team will use the DistillerSR AI Classifier Manager as a semi-automated screening tool at the title and abstract screening stage, with the AI Classifier Manager serving as a second reviewer of each citation. Citations will be included for full-text review if both a team member and the AI Classifier Manager agree to include them. If a team member and the AI disagree, the lead investigator will resolve the disagreement. The full text of each remaining potentially eligible article will be reviewed by a single team member to confirm eligibility and extract data. A second team member will check a randomly selected 10% sample of the articles to verify that important studies were not excluded and to confirm the accuracy of extracted data.
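The screening decision rule above reduces to a small piece of logic: agreement between the human reviewer and the AI classifier is final, and disagreement escalates to the lead investigator. The sketch below expresses that rule; the function and vote labels are hypothetical and do not represent DistillerSR's actual interface.

```python
# Sketch of the dual-screening decision rule: one human reviewer plus the
# AI Classifier Manager as second reviewer. Names are hypothetical; this is
# not DistillerSR's API.

def screening_decision(human_vote: str, ai_vote: str) -> str:
    """Combine a human vote and an AI classifier vote on one citation.

    Votes are "include" or "exclude". Agreement is final at this stage;
    disagreements are escalated to the lead investigator.
    """
    if human_vote == ai_vote:
        return human_vote
    return "escalate to lead investigator"
```

For example, `screening_decision("include", "include")` moves a citation forward to full-text review, while a split vote is set aside for adjudication rather than decided automatically.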

Information will be organized to answer the Review Question and Contextual Questions, and will include author, year, study design, frequency and severity of the harms, measures of harm, characteristics of the PSP, outcomes, unintended consequences, implementation barriers and facilitators, required resources, and description of toolkits. In the results section of the report, the following information will be presented:

  1. Care setting;
  2. Patient population;
  3. Description of the intervention studied;
  4. Outcome measures;
  5. Outcomes;
  6. Findings
  7. Risk of bias or study quality.

To streamline data extraction, we will sort eligible studies by specific PSP, and focus on extracting information about characteristics, outcomes, and barriers/facilitators most pertinent to a specific PSP.

For studies that address the Review Question about the effectiveness of PSPs, the primary reviewer will use the Cochrane Collaboration’s tool for assessing the risk of bias of randomized controlled trials (RCTs) or the ROBINS-I tool for assessing the Risk Of Bias In Non-randomized Studies – of Interventions.19, 20 When assessing RCTs, we will use the 7 items in the Cochrane Collaboration’s tool that cover the domains of selection bias, performance bias, detection bias, attrition bias, reporting bias, and other bias.19 When assessing non-randomized studies, we will use specific items in the ROBINS-I tool that assess bias due to confounding, bias in selection of participants into the study, bias in classification of interventions, bias due to deviations from intended interventions, bias due to missing data, bias in measurement of outcomes, and bias in selection of the reported results.20 The risk of bias assessments will focus on the main outcome of interest in each study.

The Task Leader will review the risk of bias assessments and any disagreements will be resolved through discussion with the team.

Selected data will be compiled into evidence tables and synthesized narratively. We will not conduct a meta-analysis. For the Review Question about the effectiveness of PSPs, we will record information about the context of each study and whether the effectiveness of the PSP differs across patient subgroups. If any of the PSPs have more than one study of effectiveness, we will grade the strength of evidence for those PSPs using the methods outlined in the AHRQ Effective Health Care Program (EHC) Methods Guide for Effectiveness and Comparative Effectiveness Reviews.21 Evidence grading would not add value for PSPs that do not have more than one available study.

We will report if the effectiveness of the PSP differs across patient subgroups but will not conduct subgroup analyses.

We will submit the protocol to AHRQ and to the PROSPERO international prospective register of systematic reviews.

EPC core team members must disclose any financial conflicts of interest greater than $1,000 and any other relevant business or professional conflicts of interest. Related financial conflicts of interest that cumulatively total greater than $1,000 will usually disqualify EPC core team investigators from participation in the review.

Peer reviewers are invited to provide written comments on the draft report based on their clinical, content, or methodological expertise. The EPC considers all peer review comments on the draft report in preparation of the final report. Peer reviewers do not participate in writing or editing of the final report or other products. The final report does not necessarily represent the views of individual reviewers.

We will ask at least one clinical content expert and one methodological expert to review the draft report. Potential peer reviewers must disclose any financial conflicts of interest greater than $5,000 and any other relevant business or professional conflicts of interest. Invited peer reviewers may not have any financial conflict of interest greater than $5,000.

This project is funded under Contract No. 75Q80120D00003/75Q80122F32009 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The AHRQ Task Order Officer will review contract deliverables for adherence to contract requirements and quality. The authors of this report are responsible for its content. Statements in the report should not be construed as endorsement by AHRQ or the U.S. Department of Health and Human Services.

  1. Rosen M, Dy SM, Stewart CM, Shekelle P, Tsou A, Treadwell J, Sharma R, Zhang A, Vass M, Motala A, Bass EB. Final Report on Prioritization of Patient Safety Practices for a New Rapid Review or Rapid Response. Making Healthcare Safer IV. (Prepared by the Johns Hopkins, ECRI, and Southern California Evidence-based Practice Centers under Contract No. 75Q80120D00003). AHRQ Publication No. 23-EHC019-1. Rockville, MD: Agency for Healthcare Research and Quality. July 2023. DOI: https://doi.org/10.23970/AHRQEPC_MHS4PRIORITIZATION. Posted final reports are located on the Effective Health Care Program search page.
  2. Cho KJ, Kwon O, Kwon JM, et al. Detecting Patient Deterioration Using Artificial Intelligence in a Rapid Response System. Crit Care Med. 2020 Apr;48(4):e285-e9. doi: 10.1097/CCM.0000000000004236. PMID: 32205618.
  3. Reardon PM, Fernando SM, Murphy K, et al. Factors associated with delayed rapid response team activation. J Crit Care. 2018 Aug;46:73-8. doi: 10.1016/j.jcrc.2018.04.010. PMID: 29705408.
  4. Chua WL, See MTA, Legio-Quigley H, et al. Factors influencing the activation of the rapid response system for clinically deteriorating patients by frontline ward clinicians: a systematic review. Int J Qual Health Care. 2017 Dec 1;29(8):981-98. doi: 10.1093/intqhc/mzx149. PMID: 29177454.
  5. Difonzo M. Performance of the Afferent Limb of Rapid Response Systems in Managing Deteriorating Patients: A Systematic Review. Crit Care Res Pract. 2019;2019:6902420. doi: 10.1155/2019/6902420. PMID: 31781390.
  6. Walco JP, Mueller DA, Lakha S, et al. Etiology and Timing of Postoperative Rapid Response Team Activations. J Med Syst. 2021 Jul 14;45(8):82. doi: 10.1007/s10916-021-01754-3. PMID: 34263364.
  7. Whebell SF, Prower EJ, Zhang J, et al. Increased time from physiological derangement to critical care admission associates with mortality. Crit Care. 2021 Jun 30;25(1):226. doi: 10.1186/s13054-021-03650-1. PMID: 34193243.
  8. Hall KK, Shoemaker-Hunt S, Hoffman L, et al.  Making Healthcare Safer III: A Critical Analysis of Existing and Emerging Patient Safety Practices. Rockville (MD); 2020. (Prepared by Abt Associates Inc. under Contract No. 233-2015-00013-I.) AHRQ Publication No. 20-0029-EF. Rockville, MD: Agency for Healthcare Research and Quality; March 2020.
  9. Taenzer AH, Pyke JB, McGrath SP, et al. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before-and-after concurrence study. Anesthesiology. 2010 Feb;112(2):282-7. doi: 10.1097/ALN.0b013e3181ca7a9b. PMID: 20098128.
  10. Brown H, Terrence J, Vasquez P, et al. Continuous monitoring in an inpatient medical-surgical unit: a controlled clinical trial. Am J Med. 2014 Mar;127(3):226-32. doi: 10.1016/j.amjmed.2013.12.004. PMID: 24342543.
  11. Weller RS, Foard KL, Harwood TN. Evaluation of a wireless, portable, wearable multi-parameter vital signs monitor in hospitalized neurological and neurosurgical patients. J Clin Monit Comput. 2018 Oct;32(5):945-51. doi: 10.1007/s10877-017-0085-0. PMID: 29214598.
  12. Fletcher GS, Aaronson BA, White AA, et al. Effect of a Real-Time Electronic Dashboard on a Rapid Response System. J Med Syst. 2017 Nov 20;42(1):5. doi: 10.1007/s10916-017-0858-5. PMID: 29159719.
  13. Cardona-Morrell M, Prgomet M, Turner RM, et al. Effectiveness of continuous or intermittent vital signs monitoring in preventing adverse events on general wards: a systematic review and meta-analysis. Int J Clin Pract. 2016 Oct;70(10):806-24. doi: 10.1111/ijcp.12846. PMID: 27582503.
  14. Bailey TC, Chen Y, Mao Y, et al. A trial of a real-time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013 May;8(5):236-42. doi: 10.1002/jhm.2009. PMID: 23440923.
  15. Kollef MH, Chen Y, Heard K, et al. A randomized trial of real-time automated clinical deterioration alerts sent to a rapid response team. J Hosp Med. 2014 Jul;9(7):424-9. doi: 10.1002/jhm.2193. PMID: 24706596.
  16. Bellomo R, Ackerman M, Bailey M, et al. A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Crit Care Med. 2012 Aug;40(8):2349-61. doi: 10.1097/CCM.0b013e318255d9a0. PMID: 22809908.
  17. McGrath SP, Perreard IM, Garland MD, et al. Improving Patient Safety and Clinician Workflow in the General Care Setting With Enhanced Surveillance Monitoring. IEEE J Biomed Health Inform. 2019 Mar;23(2):857-66. doi: 10.1109/JBHI.2018.2834863. PMID: 29993903.
  18. U.S. Preventive Services Task Force Procedure Manual. Appendix VI. Criteria for Assessing Internal Validity of Individual Studies. U.S. Preventive Services Task Force. July 2017.
  19. Higgins JP, Altman DG, Gotzsche PC, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011 Oct 18;343:d5928. doi: 10.1136/bmj.d5928. PMID: 22008217.
  20. Sterne JA, Hernan MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354.
  21. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(14)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2014.

Project Timeline

Making Healthcare Safer (MHS) IV: Patient Monitoring Systems to Prevent Failure to Rescue

Jul 30, 2024
Topic Initiated
Aug 2, 2024
Research Protocol

Internet Citation: Research Protocol: Making Healthcare Safer (MHS) IV: Patient Monitoring Systems to Prevent Failure to Rescue. Content last reviewed August 2024. Effective Health Care Program, Agency for Healthcare Research and Quality, Rockville, MD.
https://effectivehealthcare.ahrq.gov/products/patient-monitoring-systems/protocol
