
All together now: the most reliable covid-19 model is an ensemble


Each week, the teams submit not just point forecasts that predict a single outcome (say, that there will be 500 deaths in a week). They also submit probabilistic predictions that quantify the uncertainty by estimating the probability that the number of cases or deaths will fall within ranges that grow progressively narrower, zeroing in on the central prediction. For example, a model might predict a 90 percent probability of seeing 100 to 500 deaths, a 50 percent probability of seeing 300 to 400, and a 10 percent probability of seeing 350 to 360.
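One way to picture such a forecast is as a set of nested prediction intervals around a central estimate. Here is a minimal sketch in Python using the illustrative numbers above; the data structure and the point estimate of 355 are assumptions for illustration, not any hub's actual submission format.

```python
# A weekly-death forecast expressed as nested central prediction intervals
# (the illustrative numbers from the paragraph above).
forecast_intervals = {
    0.90: (100, 500),  # 90 percent chance the weekly death count falls in this range
    0.50: (300, 400),  # 50 percent chance it falls in this narrower range
    0.10: (350, 360),  # 10 percent chance it falls in this very narrow range
}
point_forecast = 355  # a single central estimate (hypothetical)

for level, (low, high) in sorted(forecast_intervals.items(), reverse=True):
    print(f"{int(level * 100)}% interval: {low} to {high} deaths")
```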

“It’s like a bull’s-eye, getting more and more focused,” Reich says.

Funk adds, “The more narrowly you define the target, the less likely you are to hit it.” The balance is tricky, since an overly broad prediction is likely to be correct but not very useful. “It should be as precise as possible,” Funk says, “while also giving the right answer.”

By gathering and evaluating all the individual models, the ensemble tries to make the most of their information and to smooth out their shortcomings. The result is a probabilistic prediction, a statistical average or “median forecast”: the consensus, essentially, with a finer and therefore more realistic expression of the uncertainty. All the various elements of doubt get averaged out.
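As a minimal sketch of one way such a median forecast can be built, assuming each model reports its forecast at the same quantile levels, one can take the median of the models' values at each level. The model names and numbers below are hypothetical.

```python
import statistics

# Hypothetical quantile forecasts of weekly deaths from three individual models.
model_forecasts = {
    "model_a": {0.25: 280, 0.50: 340, 0.75: 410},
    "model_b": {0.25: 310, 0.50: 365, 0.75: 450},
    "model_c": {0.25: 250, 0.50: 330, 0.75: 395},
}

# One simple ensembling rule: at every quantile level, take the median of the
# individual models' values. The result is itself a quantile forecast.
quantile_levels = [0.25, 0.50, 0.75]
ensemble_forecast = {
    q: statistics.median(m[q] for m in model_forecasts.values())
    for q in quantile_levels
}
print(ensemble_forecast)  # {0.25: 280, 0.5: 340, 0.75: 410}
```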

Research by Reich’s lab, which focused on forecasts of deaths and evaluated about 200,000 of them from mid-May 2020 to the end of December 2020 (a study that will soon be updated with four more months of data), found that the performance of individual models was highly variable. A model that was accurate one week could be far off the mark the next. But, as the authors wrote, in combining the forecasts of all teams, the ensemble showed the best overall probabilistic accuracy.
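Probabilistic forecasts like these are typically scored with rules that reward narrow intervals that still cover the observed value. As an illustration, and not necessarily the exact metric used in the study, here is the standard interval score for a single central prediction interval; the numbers are hypothetical.

```python
def interval_score(lower, upper, observed, alpha):
    """Score a central (1 - alpha) prediction interval against an observed count.

    Lower is better: the score rewards narrow intervals and adds a penalty,
    scaled by 2/alpha, for how far outside the interval the observation falls.
    """
    score = upper - lower
    if observed < lower:
        score += (2 / alpha) * (lower - observed)
    elif observed > upper:
        score += (2 / alpha) * (observed - upper)
    return score

# Hypothetical example: 430 deaths were actually observed that week.
print(interval_score(100, 500, 430, alpha=0.10))  # 400: wide interval, but it covered
print(interval_score(300, 400, 430, alpha=0.50))  # 220: narrower, small penalty for the miss
```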

These ensemble exercises not only improve forecasts but also help build people’s confidence in the models, says Ashleigh Tuite, an epidemiologist at the Dalla Lana School of Public Health at the University of Toronto. “One of the lessons of ensemble modeling is that none of the models is perfect,” Tuite says. “Models in general have difficulty forecasting inflection points: the peaks, or the moments when things suddenly start to accelerate or decelerate.”

“Models are not oracles.”

Alessandro Vespignani

The use of ensemble modeling is not unique to the pandemic. In fact, we consume probabilistic ensemble forecasts every day when we Google the weather and note that there is a 90 percent chance of precipitation. It is the gold standard for both weather and climate forecasting.

“It’s been a real success story for about three decades,” says Tilmann Gneiting, a computational statistician at the Heidelberg Institute for Theoretical Studies and the Karlsruhe Institute of Technology in Germany. Before ensembles, weather forecasting relied on a single numerical model, which in its raw form produced a deterministic forecast that was “ridiculously overconfident and unreliable,” Gneiting says (weather forecasters, aware of this problem, subjected the raw results to subsequent statistical analysis, which produced reasonably reliable probability-of-precipitation forecasts by the 1960s).

Gneiting warns, however, that the analogy between infectious disease and weather forecasting has its limits. For one thing, the probability of precipitation does not change in response to human behavior (it will rain, umbrella or no umbrella), whereas the trajectory of the pandemic responds to our preventive measures.

Prediction in a pandemic is a system subject to a feedback loop. “Models are not oracles,” says Alessandro Vespignani, a computational epidemiologist at Northeastern University and an ensemble-hub contributor, who studies complex networks and the spread of infectious disease with a focus on the “techno-social” systems that drive feedback mechanisms. “Any model provides an answer that depends on certain assumptions.”

When people process a model’s prediction, the resulting behavioral changes alter the assumptions, change the dynamics of the disease, and render the forecast inaccurate. In this way, modeling can be a “self-destroying prophecy.”

And there are other factors that can increase uncertainty: seasonality, variants, vaccine availability or uptake, and policy changes such as the CDC’s quick decision on masking. “All of these are unknowns, and if you wanted to capture the uncertainty of the future, they would really limit what you could say,” says Justin Lessler, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health who helps run a COVID-19 forecasting hub.

The ensemble study of death forecasts found that accuracy deteriorates and uncertainty grows the further into the future the models predict: there was roughly twice the error when forecasting four weeks out versus one week (four weeks is considered the limit for meaningful short-term forecasts), and at the 20-week time horizon there was roughly five times the error.

“It’s fair to discuss when things worked and when things didn’t.”

Johannes Bracher

But evaluating the quality of the models, warts and all, is an important secondary goal of the forecasting hubs. It is easy enough to do, since short-term forecasts are quickly confronted with the reality of the numbers tallied day by day, which serve as a measure of their success.

Most researchers draw a distinction between this kind of “forecast model,” which makes explicit and verifiable predictions about the future (something possible only in the short term), and a “scenario model,” which explores hypothetical “what ifs,” plausible storylines that might unfold over the medium or long term (since scenario models are not meant to be predictions, they should not be evaluated retrospectively against reality).

During the pandemic, critical attention has often focused on models whose predictions turned out to be wrong. “While long-term projections are difficult to evaluate, we shouldn’t shy away from comparing short-term forecasts with reality,” says Johannes Bracher, a biostatistician at the Heidelberg Institute for Theoretical Studies and the Karlsruhe Institute of Technology, who coordinates a German and Polish hub and advises the European hub. “It’s fair to discuss when things worked and when they didn’t,” he says. But an informed discussion requires knowing and taking into account the limitations and intentions of the models (some of the harshest criticism has come from confusing scenario models with forecast models).

“The main question is, can we improve?”

Nicholas Reich

Also, when forecasts are particularly uncertain in a given situation, modelers should say so. “If we have learned one thing, it is that cases are very difficult to model even in the short term,” says Bracher. “Deaths are a more lagging indicator and easier to predict.”

In April, some European models were overly pessimistic and missed a sudden drop in cases. A public debate ensued about the accuracy and reliability of pandemic models. Weighing in on Twitter, Bracher asked, “Is it surprising that models are (sometimes) wrong? After a one-year pandemic, I would say: no.” This makes it all the more important, he says, that models indicate their degree of certainty or uncertainty, acknowledge how unpredictable cases are, and take a realistic stance about the future trajectory.

Relying on some models more than others

As an often quoted statistical aphorism has it, “all models are wrong, but some are useful.” Bracher notes that “if you take the ensemble approach, you are in a way saying that all models are useful, that each model has something to contribute,” even though some models may be more informative or reliable than others.

Observing this fluctuation prompted Reich and others to try “training” the ensemble model: that is, Reich explains, building algorithms that teach the ensemble to “trust” some models more than others and to learn which combinations of models work well together. Bracher’s team now contributes a small ensemble, built only from models that have performed consistently well in the past, in an attempt to distill the clearest signal.
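As a rough illustration of what such training might look like, here is a sketch that weights each model by its recent accuracy rather than averaging all models equally. The inverse-error weighting rule, the model names, and the numbers are all assumptions for illustration, not the hubs' actual algorithm.

```python
# Hypothetical recent performance (e.g. mean absolute error over recent weeks)
# and current forecasts of weekly deaths for three models.
past_errors = {"model_a": 40.0, "model_b": 25.0, "model_c": 90.0}
current_forecasts = {"model_a": 340.0, "model_b": 365.0, "model_c": 520.0}

# Give more weight to models with smaller recent error (one simple choice
# among many), then normalize the weights so they sum to 1.
raw_weights = {name: 1.0 / err for name, err in past_errors.items()}
total = sum(raw_weights.values())
weights = {name: w / total for name, w in raw_weights.items()}

# The "trained" ensemble forecast is the weighted average of the models.
weighted_forecast = sum(weights[name] * current_forecasts[name] for name in weights)
print({name: round(w, 2) for name, w in weights.items()})
print(round(weighted_forecast, 1))
```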

“The main question is, can we improve?” Reich says. “The original method is very simple. It seems as if there should be a way to improve on just taking a simple average of all these models.” So far, however, it has proved harder than expected: small improvements seem feasible, but dramatic improvements may be impossible.

A complementary tool for improving the overall view of the pandemic, beyond the weekly outlook, is to look further out on the time horizon, four or even six months, with “scenario modeling.” Last December, motivated by the surge in cases and the imminent availability of vaccines, Lessler and collaborators launched the COVID-19 Scenario Modeling Hub, in consultation with the CDC.
