When the 1918 influenza pandemic swept the globe, American and European epidemiologists reeled. They hadn't predicted the devastation. The data were shaky, but something like 50 million people died worldwide, an incomprehensible number even for a period when life expectancy at birth was just 45 to 50 years. How, epidemiologists asked, could they make their reactive science predictive?

Prediction was becoming a vital aspect of science, and those fields that aspired to predictive success looked to physics. What they found, more important than predictive power, were techniques for measuring and managing uncertainty. Examining the historical relationships between uncertainty and prediction that defined their efforts helps us to understand the challenges covid-19 modelers face today and reminds us that models are useful, even if imperfect.

As deaths from the influenza pandemic declined, Albert Einstein became famous — on the back of a prediction. In November 1919, measurements taken during a solar eclipse confirmed that the sun’s gravitational field deflected the path of starlight according to the predictions of Einstein’s general theory of relativity. Newspapers worldwide trumpeted the results and Einstein catapulted to international stardom, unseating Isaac Newton’s view of the universe in the process.

Predicting the effect of gravity on starlight might seem like an unlikely route to fame, but arriving as it did amid the ravages of the flu and World War I, Einstein’s predictive success offered a salve to a world gripped by uncertainty, as historian Matthew Stanley has argued. But the data did not speak for themselves. The conclusion that Einstein was right required the judicious selection and analysis of data produced by finicky instruments deployed in challenging field conditions. Recent historical work has defended that analysis, but the larger point stands: Einstein’s prediction was vindicated only by wrestling with uncertainty.

Following the dual catastrophes of World War I and the influenza pandemic, epidemiology also sought predictive tools. Traditional public health-oriented fieldwork techniques led to a decline in infectious diseases such as typhoid fever, cholera, smallpox, scarlet fever, diphtheria and tuberculosis in the first decades of the 20th century, but they were no match for the 1918 flu.

Epidemiologists turned to a pluralistic approach, embracing the mathematical tools of physicists to conduct epidemic modeling. They looked to the laboratory work of bacteriologists and, later, virologists to understand the changing virulence of organisms. They undertook traditional field practice to unravel community infection. The goal was to make predictions about the recurrence and seasonal appearance of diseases such as measles and summer diarrhea that were not declining, and about looming epidemic threats such as influenza and plague.

Spanning the Atlantic, a group coalesced around the idea that prediction was the most valuable part of the science. Leading the charge in Britain was Major Greenwood at the Ministry of Health. Why were some diseases seasonal? How did diseases increase or decrease in virulence? What role did weather, and later host immunity, play in forecasting disease events? How did an endemic disease, specific to a particular region, explode into an epidemic or pandemic? How could experimental laboratory studies of outbreaks in mice hold the key to predicting outbreaks in humans? To answer these questions, Greenwood emphasized the “need of a systematic plan of forecasting epidemiological events,” and argued that the present state of epidemiology was one that “only gives warning of rain when unfurled umbrellas pass along the street.”

By the late 1920s and into the 1930s, new mathematical models, informed by techniques developed in physics, began to dominate epidemiological forecasting. Discussions of epidemic waves, endemicity and herd immunity were predicated not only on mathematical modeling but also on a deep recognition that public health decisions had to be made in light of the models’ uncertainties. Over subsequent decades, epidemiologists would struggle to tame that uncertainty and produce a prediction like the one that made Einstein famous.
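
To give a concrete flavor of the new mathematics, consider the best-known epidemic model of that era, published by W. O. Kermack and A. G. McKendrick in 1927. It tracks how a fixed population of size N flows among susceptible (S), infected (I) and recovered (R) compartments:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{SI}{N}, \\
\frac{dI}{dt} &= \beta \frac{SI}{N} - \gamma I, \\
\frac{dR}{dt} &= \gamma I.
\end{aligned}
```

Here β is the transmission rate and γ the recovery rate; their ratio, β/γ, is what we now call the basic reproduction number. Every forecast such a model makes hinges on how precisely those two uncertain quantities can be estimated.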

In the fall of 1957, a new strain of influenza — H2N2 — emerged in China and quickly spread across the world, killing around 1 million people globally. Based on forecasting models, many epidemiologists believed this was a variant of the 1918-19 virus and not a “new” disease. Having predicted another epidemic wave, some epidemiologists, such as Maurice Hilleman at the Walter Reed Army Institute of Research, warned of the impending crisis and helped rush a new vaccine into production.

But Western Europe and North America took little preventive action by way of quarantines, lockdowns or school closings, and the media paid the pandemic little mind. Much was the same during a subsequent influenza pandemic — H3N2 — in 1968. Flu pandemics had become normalized, and pandemic modeling for the flu was regarded as so uncertain as to be unreliable.

Beginning in the late 1990s, some epidemic models projected catastrophic pandemics from zoonotic spillovers: first in 1997 with H5N1, then in 2003 with SARS and again in 2009 with swine flu. Yet none blossomed into a globally devastating pandemic. Many epidemiologists predicted, for example, that SARS-CoV-1, a biological cousin of the current coronavirus, would explode throughout the world, though it remained largely confined to China and other parts of Asia, which implemented strict quarantines. In July 2003, just as the world braced for the disease to propagate globally, the WHO declared the outbreak contained.

Epidemiologists had accurately predicted the emergence of these illnesses, but the global devastation their models forecast did not come to pass, leading to further skepticism of the value of predictive epidemiology.

In the early months of the coronavirus pandemic, we inherited this rich and complicated history of epidemiology, as well as the central question it raises: Why trust an uncertain model? That much early epidemic modeling failed to correctly predict the course of the pandemic was no fault of the epidemiologists modeling a real-time disaster, who lacked such basic inputs as case fatality rates, infection rates, the virus’s reproduction number and the role of asymptomatic carriers. Poor data lead to high uncertainty.
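
To see how fast that uncertainty compounds, consider a deliberately naive sketch (an illustration, not any modeler’s actual code): project cases forward assuming each generation of infections is simply R times the last, for a few of the early and widely varying estimates of the coronavirus’s reproduction number.

```python
# A deliberately naive branching-process projection: each generation of
# cases is R times the previous one. Real epidemic models are far richer;
# this sketch only shows how small differences in R compound.
def projected_cases(r: float, initial_cases: int, generations: int) -> float:
    """Project case counts forward assuming a constant growth factor r per generation."""
    return initial_cases * r ** generations

# Early 2020 estimates of the coronavirus's reproduction number varied
# roughly this widely.
for r in (1.5, 2.0, 2.5):
    print(f"R = {r}: about {projected_cases(r, 100, 10):,.0f} cases after 10 generations")
```

Starting from the same 100 cases, the projections end up roughly 165-fold apart after just 10 generations. With shaky inputs, even a structurally sound model can only bound the future, not pin it down.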

Early model predictions motivated the “flatten the curve” slogan, stoking hopes that the pandemic would soon dwindle into a long, manageable tail. Both the models and the slogan failed. Some early models and self-styled experts greatly exaggerated death projections in 2020, while others underestimated the virus’s impact. Predictions that the 2020 summer heat would lower transmission rates were not borne out. More than anything else, models oversaturated public discourse.

By the time safe and effective vaccines were rolled out in Europe and the United States in early 2021, public trust in epidemic models had already deeply eroded, in part because of early failures. When a third wave struck in 2021, model mania morphed into discussions of vaccine uptake and efficacy. We had other types of data to visualize. The models turned moribund.

Models also lost their sheen because some commentators weaponized them to manufacture doubt, particularly around the question of herd immunity. Predictions that normal life would swiftly resume did not materialize. Vaccine hesitancy became a stubborn roadblock to effective pandemic prevention, but so too did the spurious use of models. Even after experiencing new waves, delta in summer 2021 and then omicron in late 2021, few have listened to modelers sounding the alarm about the current sharp rise in cases in Europe and Asia. Most states and municipalities have relaxed mask policies and stopped pushing vaccination mandates.

But the history of covid-19 may still be written largely as a story of the success of epidemiology — of uncovering the role of asymptomatic carriers, pinning down infection rates and demonstrating the value of preventive public health strategies including mask-wearing, air filtration and vaccines. Future epidemiological modelers will also use the messy but unprecedentedly rich data of the coronavirus pandemic to improve their tools so that we might better face down the next pandemic.

The failure more worrying than that of the early models is the widespread attitude that sees uncertainty itself as a failure of science. As it was in 1919, the promise of perfect prediction is a seductive but empty one. A model succeeds only insofar as it manages uncertainty.

Our pandemic is far from over, even amid spurious, unfounded claims of its endemicity. Now, more than ever, we should be following covid epidemiology, even the latest models. But not for their certainty. Rather, we should appreciate that uncertainty bedevils all predictive methods. To borrow an aphorism attributed to the statistician George E. P. Box: “All models are wrong, but some are useful.”


