Alarmist or accurate? How the Burnet’s COVID-19 modelling stacks up


Examine, a weekly newsletter written by national science reporter Liam Mannix, is sent every Tuesday. Below is an excerpt – sign up to get the whole newsletter in your inbox.

Has the modelling Australia has relied on during this pandemic been inaccurate?

At first glance, you might come away with that impression. Remember the 50,000 to 150,000 deaths predicted at the start of the pandemic? Or the surging case numbers forecast for NSW, or the overflowing hospitals forecast for Victoria? So far, almost none of this has come to pass.

This has led to sharp criticism of the nation’s modellers, in particular the Burnet Institute, which has supplied key modelling to the Victorian and NSW governments. The Daily Telegraph last month described its work as “alarmist … dire predictions that have failed to come true”.

Let’s examine the evidence.

Models, accuracy and influencing the future

With rare exceptions, nearly every senior modelling scientist I have spoken to during the pandemic has been upset about how their modelling is covered by the media and understood by the public. If you’re in the business of science journalism, this is a problem. I must confess, despite having written about models a lot, I still don’t feel great about my coverage.

There are several problematic areas. First, uncertainty. The public and the media, scientists argue, have not appreciated the level of uncertainty contained in a model. Predicting the future is hard.

Some modellers complain about the focus on the models’ median outputs. Instead, they say, we should look at the entire range, and focus more on the trend than the numbers.

“The critical problem for decision-making is that the future is unpredictable – models cannot predict numbers accurately,” Graham Medley, a leading modeller in the UK, wrote in The Spectator.

For example, the Burnet’s Victorian road map model projected a peak in cases of between 2778 and 6761. The trend is clearer: cases are going to go up. Fair enough. We can do more to report the uncertainty (although I have been criticised by some modellers for calling their models too uncertain – sometimes you can’t win).

University of Melbourne professor James McCaw, who supplies modelling to the federal government, says: “Forecasts are like the weather: aiming to be accurate over two to four weeks. Scenarios can run for months, and attempt to guide policy by examining ‘What if …?’ questions.

“Scenarios are still calibrated to what has occurred in the past, but they do not, and cannot, anticipate all of the changes that will occur into the future. Therefore, they do not provide a quantitative prediction of the future epidemic. The Burnet’s models are scenarios, not forecasts.

“In that sense, they can’t be ‘alarmist’ or ‘optimistic’. And they can’t be ‘right’ or ‘wrong’.”

But perhaps the most compelling argument about models I came across is this: scenario modelling, by its very nature, influences the world it is trying to model. Horror models lead to strong public policy responses, and those horrors are avoided. Australia did not experience 50,000 to 150,000 deaths from COVID-19 because we resolutely refused to allow 60 per cent of the population to be infected, by closing our borders and instituting lockdowns.
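It is worth seeing where that early headline figure comes from. As a rough sanity check (the population figure and the infection fatality ratio range below are my illustrative assumptions, not the modellers' exact inputs), a 60 per cent attack rate combined with an infection fatality ratio of roughly 0.3 to 1 per cent does bracket the 50,000 to 150,000 range:

```python
# Back-of-envelope check on the early death projection.
# Assumptions (mine, for illustration): Australia's population ~25.7 million,
# a 60% infection attack rate, and an infection fatality ratio of 0.3%-1%.
population = 25_700_000
attack_rate = 0.60
infected = population * attack_rate          # ~15.4 million infections

for ifr in (0.003, 0.01):                    # low and high IFR assumptions
    print(f"IFR {ifr:.1%}: ~{infected * ifr:,.0f} deaths")
```

Both ends land close to the quoted range, which suggests the figure was unmitigated-spread arithmetic, a picture of what Australia refused to allow, rather than a forecast of what was expected to happen.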

The Burnet Institute’s director, Professor Brendan Crabb, said on a podcast last month: “[People say] your models were wrong because all those cases and deaths did not happen. Just skipping over the fact the models led to policy change that prevented them in the first place.”

Let’s examine some of the modelling’s predictions and what actually happened.

Model 1: Burnet modelling for the Victorian road map

The Burnet’s modelling underpinned Victoria’s road map to reopening. It predicted the seven-day average of new cases would peak between 2778 and 6761 on December 15, hospitalisations would peak between 1950 and 4400, and there would be 2202 deaths.

It nailed the caseloads, with current levels tracking as predicted, in fact a little above the median result.

But it was well off on the hospitalisations, with the total number of people in hospital tracking along the bottom end of the projected range.

Victoria has recorded 232 deaths in the current outbreak. The Burnet had us at 319 by this stage, per analysis by Professor Michael Fuhrer. The institute says both numbers are within the range of uncertainty.

The Burnet argues we should be less focused on the numbers and more focused on the scenarios, in particular the possibility of cutting caseloads by increasing testing and social distancing.

Why was the prediction wide of the mark on hospitalisations? Because, says Professor Margaret Hellard, co-author of the modelling, sick Victorians are staying in hospital for a far shorter amount of time than predicted.

If you look on page 13 of the Burnet’s modelling, you will note that length of hospital stay is listed as “unknown” under “uncertain assumptions”.

Summary: Spot on with cases, deaths within range of uncertainty, missed on hospitalisations, but that was an expected limitation. Points go to the Burnet.

Model 2: 8000 cases a day in Sydney

On August 2, in an MJA Insight article, the Burnet’s modelling team predicted new cases in Sydney would rise to 8000 a day by August 25.

However, the institute says the modelling did not account for restrictions introduced just before publication, on July 28. And, again, after the horror modelling was published, the NSW government acted.

On August 12, the government further tightened restrictions in three council hotspots. On August 14, the entirety of regional NSW went into lockdown. On August 20, a curfew was introduced and people were given only an hour out of home, and on August 23, a mask mandate was rolled out.

Summary: Again, we see the effect of restrictions announced after the model was published.

Model 3: The NSW modelling, late August

Done at the request of the NSW government, this modelling (released on September 7) projected a peak average daily caseload of 1219 to 2046 between September 13 and 20 across greater Sydney under current restrictions.

Hospital demand was forecast to peak at between 2286 and 4016 patients, with peak ICU demand of 947.

What happened? NSW’s seven-day caseload peaked at 1422 on September 7. Within the predicted range, but a week early.

NSW’s hospitalisations peaked at 1268 on September 21, with a peak ICU demand of 242 on the same day.

Both figures fall below the entire projected range of uncertainty.

This is where things get interesting. The Burnet went back to see what happened.

Its initial modelling of the outbreak assumed restrictions would be maintained. This did not happen.

Instead, on August 23, NSW introduced a curfew, closed many of the shops and schools, and limited outdoor exercise to an hour a day for the 12 council areas with major outbreaks.

This cut the epidemic’s effective reproduction number from 1.35 to 1, the Burnet’s follow-up work found. Sixteen days after the restrictions were implemented, the pandemic peaked and turned around.
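The arithmetic behind that turnaround is worth a moment. Assuming a serial interval of about five days (my assumption for illustration; the Burnet's report does not quote one here), an effective reproduction number of 1.35 multiplies daily cases roughly 2.6-fold over 16 days, while an Reff of 1 holds them flat:

```python
# Sketch of what cutting the effective reproduction number does over 16 days.
# Assumption (mine): a serial interval of about 5 days between generations.
SERIAL_INTERVAL_DAYS = 5

def growth_over(days: float, reff: float) -> float:
    """Multiplicative change in daily cases after `days` at a given Reff."""
    return reff ** (days / SERIAL_INTERVAL_DAYS)

print(growth_over(16, 1.35))  # ~2.6x: cases more than double in 16 days
print(growth_over(16, 1.0))   # 1.0x: the epidemic flatlines, then turns over
```

That difference, between a caseload that keeps compounding and one that stalls, is the gap the August 23 restrictions closed.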

The institute says the miss on hospitalisations was caused by the average length of stay in hospital being overestimated.

Summary: Looks like a miss, but I think this is actually a win for the Burnet. You can see a clear line from the Burnet providing such hairy modelling to the NSW government, to the government acting to enhance restrictions – which the institute in turn found helped the state avoid the predicted peaks.

What do other scientists think?

Several modellers have told me they are unhappy about copping criticism of their models from non-modellers. If you’re not a modelling expert, you can’t really critique the science, they argue.

I have sympathy for this argument, so I reached out to seven independent modellers around Australia.

Of those who responded, none thought the Burnet work alarmist or inaccurate and all stressed the difficulty journalists and the public have in understanding their work.

Professor James Trauer, head of epidemiological modelling at Monash University’s School of Public Health, was the only one willing to directly comment on the Burnet Institute’s work.

“I think their modelling has been very good on the whole,” he said. “The underlying code is publicly available, which is a major plus in my view and not the case for the large majority of what the Doherty has done for the Commonwealth.

“In terms of whether it has been accurate, I think it has generally been very good.”

(Professor Trauer holds an adjunct position at the Burnet.)

If you liked this article, consider signing up to receive Examine, a free newsletter, each week in your inbox.
