An automated system for identifying patients at risk for complications associated with the use of mechanical ventilators provided significantly more accurate results than traditional surveillance methods, which rely on manual recording and interpretation of individual patient data. In a paper published in Infection Control & Hospital Epidemiology, a Massachusetts General Hospital (MGH) research team reports that its system, which uses an algorithm developed through a collaboration among the hospital's Division of Infectious Diseases, Infection Control Unit and Clinical Data Animation Center (CDAC), was 100 percent accurate in identifying at-risk patients when provided with the necessary data.
"Ventilator-associated pneumonia is a very serious problem that is estimated to develop in up to half the patients receiving mechanical ventilator support," says Brandon Westover, MD, PhD, of the MGH Department of Neurology, director of CDAC and co-senior author of the report. "Many patients die each year from ventilator-associated pneumonia, which can be prevented by following good patient care practices, such as keeping the head of the bed elevated and taking measures to prevent the growth of harmful bacteria in patients' airways."
Traditional surveillance of patients receiving mechanical ventilation involves manual recording, every 12 hours and usually by a respiratory therapist, of ventilator settings, which are adjusted throughout the day to accommodate the patient's needs. Those settings, which reflect the pressure required to keep a patient's lungs open at the end of a breath (positive end-expiratory pressure, or PEEP) and the percentage of oxygen being delivered to the patient (FiO2), are reviewed by an infection control practitioner for signs of possible ventilator-associated pneumonia.
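To make the data concrete, a single 12-hour observation of the kind a therapist records might look like the following minimal sketch; the field names are illustrative and not drawn from MGH's actual records:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VentObservation:
    """One 12-hour ventilator-settings reading for a single patient.

    Field names are illustrative only, not MGH's actual schema.
    """
    patient_id: str
    recorded_at: datetime
    peep_cm_h2o: float  # positive end-expiratory pressure (PEEP)
    fio2: float         # fraction of inspired oxygen (FiO2), 0.21 to 1.0
```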
Lead and corresponding author Erica Shenoy, MD, PhD, of the MGH Division of Infectious Diseases and the Infection Control Unit, who serves as hospital epidemiology lead for CDAC, says, "In our study, manual surveillance made many more errors than automated surveillance, including false positives, reporting cases that, on review, did not meet criteria for what are called ventilator-associated events; misclassifications, reporting an event as more or less serious than it really was; and failures to detect and report cases that, on closer inspection, actually met criteria. In contrast, so long as the necessary electronic data were available, the automated method performed perfectly."
Updated surveillance standards issued in 2013 by the National Healthcare Safety Network of the U.S. Centers for Disease Control and Prevention (CDC) specified three levels of ventilator-associated events, which can be thought of as corresponding to yellow, orange and red alerts to the risk or presence of ventilator-associated pneumonia (a simplified classification sketch follows the list):
- Ventilator-associated condition (VAC) – an increase in a patient's need for oxygen without evidence of infection,
- Infection-related ventilator-associated complication (IVAC) – increased oxygen need accompanied by signs of infection, such as fever, elevated white blood cell count or an antibiotic prescription,
- Possible ventilator-associated pneumonia (PVAP) – evidence of bacterial growth in the respiratory system, along with the factors listed above.
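Because each tier builds on the one below it, determining the event level amounts to checking the criteria in order of severity. The following minimal sketch uses simplified boolean inputs in place of the CDC's detailed thresholds and time windows; it illustrates the hierarchy, not the MGH team's actual implementation:

```python
from enum import Enum
from typing import Optional

class VAEvent(Enum):
    VAC = "ventilator-associated condition"
    IVAC = "infection-related ventilator-associated complication"
    PVAP = "possible ventilator-associated pneumonia"

def classify_event(oxygen_need_increased: bool,
                   signs_of_infection: bool,
                   respiratory_bacterial_growth: bool) -> Optional[VAEvent]:
    """Return the most severe tier whose criteria are met, or None.

    The tiers nest: PVAP presupposes the IVAC criteria, which in
    turn presuppose the VAC criteria.
    """
    if not oxygen_need_increased:
        return None          # no ventilator-associated event
    if not signs_of_infection:
        return VAEvent.VAC   # yellow alert
    if not respiratory_bacterial_growth:
        return VAEvent.IVAC  # orange alert
    return VAEvent.PVAP      # red alert
```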
The CDC specifications were designed to enable large-scale, automated surveillance for ventilator-associated pneumonia, allowing efficient monitoring of infection rates throughout a hospital or a hospital system. To reduce both the time required to manually record and review ventilator settings and medical charts and the possibility of human error, members of the MGH research team developed an algorithm to provide automated, real-time monitoring of both ventilator settings and information from the electronic health record. Based on those data, the algorithm determined whether criteria were met for a ventilator-associated event and, if so, which level of event: VAC, IVAC or PVAP.
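The release does not describe the algorithm's internals, but the "increase in oxygen need" that defines a VAC can be illustrated from the CDC's published definitions, which look for a sustained rise in the daily minimum PEEP (at least 3 cm H2O) or FiO2 (at least 20 points) after two or more days of stable or decreasing settings. A sketch under those assumptions:

```python
from typing import Optional

def detect_vac(daily_min_peep: list[float],
               daily_min_fio2: list[float]) -> Optional[int]:
    """Scan daily-minimum ventilator settings for a possible VAC.

    Follows the general shape of the CDC criterion: after at least
    two calendar days of stable or decreasing daily minimums, a rise
    of >= 3 cm H2O in PEEP or >= 0.20 in FiO2 sustained for two days.
    Thresholds come from the published CDC definitions, not from the
    MGH algorithm. Returns the index of the first day of worsening,
    or None if no event is found.
    """
    n = len(daily_min_peep)
    for day in range(2, n - 1):
        # Baseline: the two days immediately before 'day' must be
        # stable or improving on both settings.
        if (daily_min_peep[day - 1] > daily_min_peep[day - 2] or
                daily_min_fio2[day - 1] > daily_min_fio2[day - 2]):
            continue
        baseline_peep = min(daily_min_peep[day - 2:day])
        baseline_fio2 = min(daily_min_fio2[day - 2:day])
        # Worsening must be sustained for two consecutive days.
        peep_rise = all(p >= baseline_peep + 3.0
                        for p in daily_min_peep[day:day + 2])
        fio2_rise = all(f >= baseline_fio2 + 0.20
                        for f in daily_min_fio2[day:day + 2])
        if peep_rise or fio2_rise:
            return day
    return None
```

For example, a patient whose daily minimum PEEP runs 5, 5, 8, 8 cm H2O on a steady FiO2 would be flagged on the third day (index 2).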
Initial testing and debugging of the automated system were carried out from January through March of 2015 in four MGH intensive care units. During that time, 1,325 patients were admitted to the units, 479 of whom received ventilator support. A retrospective analysis comparing manual and automated surveillance of data gathered from patients cared for during this development period revealed that the automated system was 100 percent accurate in detecting ventilator-associated events, distinguishing patients with such events from those without, and predicting the development of ventilator-associated pneumonia. In contrast, the accuracy of manual surveillance on those three measures was 40 percent, 89 percent and 70 percent, respectively.
A validation study to further test the algorithm used data from a similar three-month period in the subsequent year, during which 1,234 patients were admitted to the ICUs, 431 of whom received ventilator support. For that period, manual surveillance produced accuracies of 71 percent, 98 percent and 87 percent, while the automated system was 85 percent, 99 percent and 100 percent accurate. The automated system's drop in accuracy during the validation period reflects a temporary interruption of data availability while software was being upgraded; the team has since developed a monitoring system to alert staff to any future interruptions.
Westover says, "An automated surveillance system could relieve the manual effort of large-scale surveillance, freeing up more time for clinicians to focus on infection prevention. Automated surveillance is also much faster than manual surveillance and can be programmed to run as often as desired, which opens the way to using it for clinical monitoring, not just retrospective surveillance. Real-time, automated surveillance could help us design interventions to prevent, halt or shorten the course of an infection, something we hope to explore as we continue developing this project."
###
Westover is an assistant professor of Neurology, and Shenoy is an assistant professor of Medicine at Harvard Medical School. The co-senior author of the Infection Control & Hospital Epidemiology paper is David Hooper, MD, chief of the MGH Infection Control Unit. Additional co-authors are Eric Rosenthal, MD, Yu-Ping Shao, MS, Manohar Ghanta, MS, and Valdery Moura Junior, MS, MBA, MGH Neurology and CDAC; Erin Ryan, MPH, CCRP, Dolores Suslak, MSN, CIC, and Nancy Swanson, RN, CIC, MGH Infection Control Unit; and Siddharth Biswal, MS, Georgia Institute of Technology.
Support for the study includes National Institute of Allergy and Infectious Diseases grant K01 AI110524, National Institute of Neurological Disorders and Stroke grant 1K23 NS090900, and grants from the Andrew David Heitman Neuroendovascular Research Fund and The Rappaport Foundation.
Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. The MGH Research Institute conducts the largest hospital-based research program in the nation, with an annual research budget of more than $900 million and major research centers in HIV/AIDS, cardiovascular research, cancer, computational and integrative biology, cutaneous biology, genomic medicine, medical imaging, neurodegenerative disorders, regenerative medicine, reproductive biology, systems biology, photomedicine and transplantation biology. The MGH topped the 2015 Nature Index list of health care organizations publishing in leading scientific journals and earned the prestigious 2015 Foster G. McGaw Prize for Excellence in Community Service. In August 2017 the MGH was once again named to the Honor Roll in the U.S. News & World Report list of "America's Best Hospitals."
Media Contact
Julie Cunningham
[email protected]
617-724-6433
@MassGeneralNews
http://www.mgh.harvard.edu
http://dx.doi.org/10.1017/ice.2018.97