The Covid-19 pandemic created an unprecedented strain on healthcare systems across the globe. Beyond the clinical, financial, and emotional impact of this crisis, the logistical implications have been daunting, with crippled supply chains, diminished capacity for elective procedures and outpatient care, and a vulnerable labor force. Among the most challenging aspects of the pandemic has been predicting its spread. The healthcare delivery infrastructure in much of the United States has faced the equivalent of an impending hurricane but without a national weather service to warn us where and when it will hit, and how hard.
To build a forecasting model that works at the local level, such as within a hospital's service area, Beth Israel Deaconess Medical Center (BIDMC) relied on an embedded research group, the Center for Healthcare Delivery Science, which reports to the CMO and is dedicated to applying rigorous research methods to healthcare delivery questions. We used a series of methods drawn from epidemiology, machine learning, and causal inference to take a locally focused approach to predicting the timing and magnitude of Covid-19 clinical demands on our hospital. This forecasting serves as an example of a new opportunity in healthcare operations, one that is particularly useful in times of extreme uncertainty.
In early February, as the U.S. was grappling with the rapid spread of SARS-CoV-2, the virus that causes Covid-19, the healthcare community in Boston began to brace for the months ahead. Later that month, participants in a biotechnology conference and other residents returning from overseas travel were diagnosed with the new disease.
It was the start of a public health emergency. To understand how to respond, our hospital needed a Covid-warning system, just as coastal towns need hurricane warning systems. Our hospital is an academic medical center with over 670 licensed beds, of which 77 are intensive care beds. We knew it was hurricane season, but when would the storm arrive, and how hard would it hit? We were uncertain about what lay ahead.
Hurricane season — but where is the storm?
Lesson 1: National forecasting models broke down when predicting hospital capacity for Covid-19 patients because no local variables were included.
Our institution turned first to national models. The most widely used national model applied curve-fitting methods (which draw a best-fit curve through a series of data points) to earlier Covid-19 data from other countries to predict future developments in the United States. National models did not consider local hospital decision-making or local-level socioeconomic factors, which dramatically affect key variables like population density, pre-existing health status, and reliance on public transportation. For example, social media data showed many student-dense neighborhoods in Boston emptying after colleges canceled in-person classes at the beginning of March, which meant fewer people were in Boston to contract the virus. Another critical variable in hospital capacity forecasting, the rate of hospitalization for people with Covid-19, varied as the weeks went on, even though national models held this variable constant. For example, early on our hospital chose to admit rather than send home many SARS-CoV-2-positive patients, even those with mild infections, because the clinical trajectory of the disease was so uncertain. Thus we needed a dynamic, hyper-local model.
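To see why a fixed curve-fit can mislead, consider a toy version of the approach: fit an exponential trend to early case counts and extrapolate it forward. The case counts and parameters below are made-up illustrations, not data from any actual national model.

```python
import math

# Toy curve-fitting sketch (illustrative, not any specific national model):
# fit log(cases) = a + b*day by least squares, then project forward.
# The case counts below are invented for demonstration.
days = [0, 1, 2, 3, 4, 5, 6]
cases = [12, 16, 22, 30, 41, 55, 74]

logs = [math.log(c) for c in cases]
n = len(days)
mean_d = sum(days) / n
mean_l = sum(logs) / n
b = sum((d - mean_d) * (l - mean_l) for d, l in zip(days, logs)) / \
    sum((d - mean_d) ** 2 for d in days)  # fitted daily growth rate
a = mean_l - b * mean_d

def project(day):
    """Extrapolate the fitted exponential curve to a future day."""
    return math.exp(a + b * day)
```

A fit like this simply extends the curve it was given: it cannot reflect local changes such as students leaving the city or a shift in admission policy, which is why the paragraph above argues for a dynamic, hyper-local model instead.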
Building our storm alert system
Lesson 2: Local infection modeling required a range of different research methods, and the trust and commitment of operational leaders who recognized the value of the work.
The hospital turned to our research center to achieve these goals. The center, which is embedded in the hospital and reports to the Chief Medical Officer (Dr. Weiss), brought applied machine learning and epidemiological approaches to construct a hyper-local alert system.
To demonstrate the feasibility of forecasting local hospital-capacity needs for managing Covid-19 patients, we built a preliminary SIR model (a traditional epidemiological framework that models the number of Susceptible, Infected, and Recovered people in a population), which was integrated into our institution's incident command structure, an ad hoc team of hospital and disaster-management leaders created to respond to the pandemic. However, the accuracy of SIR models depends on the accuracy of estimates of disease characteristics such as incubation time, infectious period, and transmissibility, variables that are still not well understood. Therefore, we turned to machine learning approaches, harnessing real-time data from our electronic medical record to determine these variables directly from real patients. We also gathered Covid-patient census data from multiple hospitals simultaneously, using a common machine-learning technique called multi-task learning to capitalize on limited data. These methods allowed us to estimate when the demand for hospital capacity to treat Covid-19 patients would peak and plateau, predicting the timing to within five days of the true peak and modeling the slope of the peak and decline more accurately than national models did.
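The SIR framework itself is compact. Below is a minimal discrete-time sketch of the standard dynamics; the transmission rate (beta), recovery rate (gamma), and population figures are illustrative placeholders, not the hospital's fitted estimates, which as described above had to be learned from real patient data.

```python
# Minimal discrete-time SIR sketch. beta (transmission rate) and gamma
# (recovery rate) are illustrative placeholders, not fitted values.
def sir_simulate(population, infected0, beta, gamma, days):
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # S -> I flow
        new_recoveries = gamma * i                  # I -> R flow
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example: 500,000 people, 10 initial infections, beta = 0.3/day,
# gamma = 0.1/day (basic reproduction number R0 = beta/gamma = 3).
trajectory = sir_simulate(500_000, 10, 0.3, 0.1, 180)
peak_day = max(range(len(trajectory)), key=lambda d: trajectory[d][1])
```

The fragility the paragraph describes is visible here: every output of the simulation, including the timing and height of the peak, hinges on beta and gamma, which is why estimating those parameters from the electronic medical record mattered so much.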
Had leadership relied on national models, they would have expected a sharper peak and decline, and a peak two weeks earlier than the actual one. Our modeling affected key decisions, including the need to bolster personal protective equipment (PPE) supplies; to gauge the necessity of even urgent procedures, and postpone them if necessary to assure we had the capacity to absorb the peak; and to establish staffing schedules that extended further into the future than those originally planned.
Predicting the next hurricane
Lesson 3: Effective modeling in confusing times may require rapidly developing new methods for predicting the next storm.
Hospitals now face a difficult challenge. We need to open our doors to the patients without Covid-19 who didn't seek care or whose care was deferred. But how do we make sure we have enough protective equipment to safely bring back outpatient procedures? And when can nurses who had been redeployed to our ICUs return to the floors and interventional areas such as the endoscopy suite and cardiac catheterization lab? Complicating these questions is whether we will see another rise in infections with changes in statewide policies, the reopening of schools and businesses, or the coming influenza season.
In this new phase, we now need to develop methods for understanding how people will move within a community (going to school and visiting stores, for instance), how much they will interact with one another, and, therefore, how the risk of infection will change over time. To this end, we constructed a risk index for local businesses by comparing pre-pandemic traffic to traffic as they reopen, and by accounting for whether they operate indoors, partly outdoors, or entirely outdoors. Businesses where visitors are densely packed in indoor spaces, especially for longer periods, have a higher risk index, meaning they are more likely to be the site of infection spread. Using our risk index, we created and validated a model for identifying such potential “super-spreader” businesses in our service area. This analysis is part of another body of research that will undergo peer review and publication; its results are therefore provisional. Meanwhile, we can use our work with businesses to further inform our forecasting model by examining traffic in business locations we have identified as high-risk and assessing whether incorporating these data improves our model's ability to predict the demand on hospital capacity.
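A risk index of this general shape can be sketched in a few lines. Everything below is hypothetical: the field names, weights, and example businesses are invented to illustrate the idea that denser indoor traffic for longer stays scores higher, and none of it reflects the actual published model.

```python
# Hypothetical sketch of a business risk index of the kind described above.
# The scoring formula, field names, and example data are invented.
def risk_index(pre_pandemic_visits, current_visits, avg_visit_minutes,
               indoor_fraction):
    """Higher score = denser indoor traffic for longer stays."""
    if pre_pandemic_visits == 0:
        return 0.0
    recovery = current_visits / pre_pandemic_visits  # share of traffic that returned
    dwell = avg_visit_minutes / 60.0                 # dwell time in hours
    return recovery * dwell * indoor_fraction

businesses = [
    {"name": "indoor gym",     "pre": 900,  "now": 700,  "minutes": 75, "indoor": 1.0},
    {"name": "outdoor market", "pre": 1200, "now": 1100, "minutes": 40, "indoor": 0.1},
]
ranked = sorted(
    businesses,
    key=lambda b: risk_index(b["pre"], b["now"], b["minutes"], b["indoor"]),
    reverse=True,
)
```

In this invented example the indoor gym outranks the busier outdoor market because long indoor visits dominate the score, which is the intuition behind flagging potential “super-spreader” locations.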
Integrating rigorous research methods into hospital operations
Lesson 4: Given the profound future uncertainty in healthcare, small investments in trusted internal research groups that can answer operational questions with new methods can yield substantial returns.
Our institution made a prescient investment in creating an embedded and trusted research group made up of clinicians, economists, and epidemiologists studying healthcare operations. The team has brought specialized machine learning methods and expertise in extracting conclusions from messy data to quickly and accurately solve emerging real-world problems — capabilities that traditional business analytics groups are less likely to have. Other organizations can similarly unite the rigor and flexibility of methodological experts with the need to rapidly answer operational questions in dynamic and even chaotic environments.
The authors would like to thank Manu Tandon, Venkat Jegadeesan, Lawrence Markson, Tenzin Dechen, Karla Pollick and Joseph Wright for their valuable contributions to this work.