COVID-19 modelling FAQs

Take a deep dive into Burnet’s COVID-19 modelling to gain a greater understanding of this complex science that has helped inform and shape Australia’s COVID-19 pandemic response.

Modelling has become a vital scientific tool to assist governments in developing effective public health measures to address the devastating impact of COVID-19 and other emerging infectious diseases.

Burnet’s COVASIM model helped inform the public health policies of the Victorian, NSW and Australian governments during the challenging first two years of the COVID-19 pandemic.

COVASIM – an individual-based model assessing the impact of easing COVID-19 restrictions – examines how public health restrictions could be fine-tuned to alleviate the social and economic burden of lockdowns without compromising suppression of community transmission of the virus.

Hear from our modellers in a special How Science Matters podcast episode – Modelling COVID-19: Can we predict the future? featuring Professor Margaret Hellard AM and Dr Nick Scott.

Learn more about our COVID-19 modelling.

Check out the FAQs below to help demystify the science and answer your burning questions.

Q: What do modellers actually do?

Dr Nick Scott: It’s a lot of computer-based work. We set up our models on the computer, which involves writing code, sifting through datasets and incorporating them into the code.

We also do a lot of maths, and so the code consists of a series of equations that determine how people will behave in the population that we’re modelling.

Q: What is a model and what is the aim of a model?

Professor Margaret Hellard AM: Models are created to show the likelihood of something happening as a result of a decision. We’re trying to turn a milieu of numbers and information into a clear narrative to provide guidance to policymakers – it’s actually quite difficult.

People want exact numbers from models, but models can only offer ranges and probabilities or likelihoods of something occurring.

I think of models like life. You’re trying to understand what future risk might be. We do it all the time.

We do it with our finances. We do it with the car we buy. We do it with the house we buy. We do it all the time in life, but don’t realise we’re doing it.

We’re trying to understand the likelihood of something occurring in the future and the risk around that. Models are just trying to do that in a more precise way.

Q: Why do models give a range instead of a specific number?

Dr Nick Scott: Models provide a range and not one number for an outcome. Quite often the people we are working with will want the model to predict the future, “What is going to happen if we do this?” Our answer will be, “Well, you have a 10 per cent chance of this occurring, and a 30 per cent chance of this occurring, and so on.” There is a wide range of options linked to probabilities that we are providing.

With COVID-19 for example, you just need to look at the real world to know that anything can happen.

You can have one person enter the community with an undiagnosed infection and nothing can come of it, or you can have one person enter and everything can come of it.

The model needs to capture that, which means that our outcomes need to capture that uncertainty as well.
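The uncertainty Dr Scott describes can be illustrated with a toy stochastic branching model. This is a simplified sketch, not Burnet’s COVASIM: all parameters (contacts per case, transmission probability, the 5-case “fizzle” threshold) are invented for the example. Running the same scenario many times turns one question into a distribution of outcomes:

```python
import random

def run_outbreak(r_mean=1.5, max_gen=10, seed=None):
    """Toy branching process: each case has 3 contacts, and each contact
    is infected with probability r_mean / 3. Returns total cases."""
    rng = random.Random(seed)
    cases, total = 1, 1
    for _ in range(max_gen):
        new = sum(rng.random() < r_mean / 3 for _ in range(cases * 3))
        cases = new
        total += new
        if cases == 0:  # the chain of transmission has died out
            break
    return total

# Run the identical scenario 1,000 times: the answer is a range of
# outcomes with probabilities attached, not a single number.
sizes = [run_outbreak(seed=i) for i in range(1000)]
died_out = sum(s <= 5 for s in sizes) / len(sizes)
print(f"chance the introduction fizzles out (<=5 cases): {died_out:.0%}")
print(f"outbreak size range: {min(sizes)} to {max(sizes)}")
```

The same single undiagnosed case produces anything from no onward spread to a large outbreak, which is why model outputs are reported as probabilities rather than one prediction.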

Q: Do models predict the future? Why can models be quite wrong when compared with actual data?

Dr Nick Scott: There’s a bit of confusion out there about the difference between forecasting and running scenario analyses.

There are a lot of models that do forecasts. They try to predict what the numbers are going to be tomorrow, next week or next month.

And then there are models running scenario analyses, which are useful for policy.

You can be in a situation where you’ve got 10 options available to you, and so you run 10 different scenario analyses looking at what happens if we did any of these. By definition, nine of them are going to be wrong in the sense that they won’t match the data, because that scenario never actually happened in the real world.

This was the case with all the early models that were saying Australia was going to have tens of thousands of COVID-19 deaths in 2020. It’s not that those models were wrong, it’s just that they ran a scenario where we didn’t have public health responses to the pandemic.

And that never happened. Fortunately.

Professor Margaret Hellard AM: Models are one tool among a suite that should be used to inform COVID-19 responses. It’s important that people don’t expect the scenarios laid out in them to play out exactly, because models are not an exact science. There is no absolute certainty in a global pandemic. Models are helpful tools, but they’re built on assumptions. The reality of what you have on the ground is what you have to deal with.

Q: Does Burnet modelling instruct governments as to what policies need to be implemented?

Professor Margaret Hellard AM: Burnet modelling can inform policy but does not outline specific solutions. It is important to note where the modelling team’s work starts and where it ends. Modelling presents options to governments and policymakers, and the possible consequences of those options based on current information.

It is then in the hands of government and policymakers to make decisions on what restrictions or responses are needed. For example, it may be that a similar outcome will occur if you have light restrictions consistently held for a period of time, or have no restrictions with low case numbers but go in and out of harsh lockdowns as needed. These are decisions for government. These are conversations where they will know far better than us what they think the trade-offs should be.

Q: How does the COVASIM model work, what data is used to inform it and do you update it when more data becomes available?

Dr Nick Scott: Model outcomes are driven by the data and the estimates that go into them.

Professor Margaret Hellard AM: In early 2020, when we looked for true contact data, hardly any existed; we had to use data from studies conducted 10 years earlier. But now we can use more recent data from our own work. We set up studies such as the collaborative Optimise Study to gather information on people’s social contacts to inform the models.

It’s a complex business, the initial setting up, and then the constant adjustment.

Dr Nick Scott: With COVASIM, we simulate individual people in the model and that allows us flexibility because it means that each person in the model can have their own characteristics.

We don’t look at just the average number of contacts that someone has; we actually look at the distribution. If we think about social networks and how many friends we might have, it’s a really long-tailed distribution.

There are a lot of people who might have a small number of close friends they come in contact with all the time. And then there are groups of people who come in contact with hundreds of people. We can allocate those characteristics to individual people.

What that means is that whenever we run the model, sometimes if you introduce new cases, you can get lucky and the first person who gets infected just has a few contacts, or you can get really unlucky and the first couple of people who get infected have a large number of contacts, causing a large outbreak.

We get these distributions and we allocate them across the whole population.
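That long-tailed contact structure can be sketched in a few lines. This is a hypothetical illustration with made-up numbers (population size, the Pareto shape parameter, the cap on contacts), not the actual COVASIM code:

```python
import random

rng = random.Random(42)

# Draw each person's contact count from a long-tailed (Pareto-like)
# distribution: most people have a handful of contacts, a few have
# very many. The cap at 500 just keeps the toy example bounded.
population = [min(int(rng.paretovariate(1.5)), 500) for _ in range(10_000)]

average = sum(population) / len(population)
median = sorted(population)[len(population) // 2]
top_1pct = sorted(population)[-100:]  # the 100 best-connected people

print(f"average contacts: {average:.1f}")
print(f"median contacts: {median}")
print(f"max contacts: {max(population)}")
# An introduction that lands on one of these 'hubs' can seed a far
# larger outbreak than one that lands on a typical person.
print(f"least-connected person in the top 1%: {min(top_1pct)} contacts")
```

The mean sits well above the median, which is the signature of a long tail: the average is pulled up by a small number of highly connected people, and whether an introduced case lands on one of them largely decides whether an outbreak takes off.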

Looking at a lot of the transmissions, we need to calibrate the model to make sure it reproduces what we’ve already observed. So we do need to make sure it’s reliable in that sense.

Learn more about our COVID-19 modelling, including our work and our track record in COVID-19 and for other infectious diseases and health issues.
