COVID-19 Simulation Review Bonanza 1: Washington Post

Okay, time to get started on the simulation that gave me the idea of doing this series.

Two days ago, the Washington Post published an article with an in-browser agent-based model of a disease dubbed “simulitis”, which is meant to roughly resemble COVID-19 in simplistic terms. The article is freely available, and can be accessed here: https://www.washingtonpost.com/graphics/2020/world/corona-simulator/

(Oh, wait... there's no “simulitis” in the URL there. But don't be confused, it really is a simplified digital version of coronavirus.)

Anyway, the article showcases four simulation runs of the viral spread: a free-for-all without any measures, an attempted quarantine, and two degrees of social distancing.

The main thrust of the article is that a) taking measures helps, and b) social distancing works better than quarantining.

Simulation screenshot of a quarantined system (early stage). Source: *Washington Post*.

At the end of the article, the authors note that the simulation is not fully realistic, because COVID-19 can kill, whereas everyone stays alive in these runs.

The article was tremendously popular, even to the extent that former US president Barack Obama tweeted about it:

https://twitter.com/BarackObama/status/1239267360739074048

Now, let's look at this in a bit more detail...

The Review

In the review, we cover four areas (as mentioned before) in order: Robustness, Completeness, Quality of Claims, and Presentation. I maintain this order because the later criteria depend on the earlier ones: presentation is pointless if robustness isn't there, and shaky claims are not a total disaster as long as the underlying model is at least of some use. So let's have a look at the

Robustness

The code is a simplistic implementation that runs in the client's browser, and it rests on a number of assumptions. Several of those are clearly justified and documented: for instance, simulitis always spreads when an infected person touches a healthy one, so that the simulation doesn't have to run for weeks to produce a result.
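To make that documented assumption concrete, here is a minimal sketch of a contact-driven spread of that kind. This is my own illustration in Python (the actual simulation is JavaScript running in the browser), and all names and parameter values are invented, not taken from the article:

```python
import random
import math

# Invented parameters -- the article does not publish its actual values.
N_AGENTS = 200
RADIUS = 0.01       # agent radius; two agents "touch" when closer than 2 * RADIUS
SPEED = 0.005       # distance moved per time step

class Agent:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        angle = random.uniform(0, 2 * math.pi)
        self.vx, self.vy = SPEED * math.cos(angle), SPEED * math.sin(angle)
        self.infected = False

    def move(self):
        # Move and bounce off the walls of the unit square.
        self.x += self.vx
        self.y += self.vy
        if not 0 <= self.x <= 1:
            self.vx *= -1
        if not 0 <= self.y <= 1:
            self.vy *= -1

def step(agents):
    for a in agents:
        a.move()
    # The documented assumption: every contact between an infected and a healthy
    # agent results in infection, with no probability draw involved.
    for a in agents:
        if not a.infected:
            continue
        for b in agents:
            if not b.infected and math.dist((a.x, a.y), (b.x, b.y)) < 2 * RADIUS:
                b.infected = True

agents = [Agent() for _ in range(N_AGENTS)]
agents[0].infected = True   # patient zero
for _ in range(1000):
    step(agents)
print(sum(a.infected for a in agents), "of", N_AGENTS, "agents infected")
```

The point of the 100% transmission-on-contact rule is purely practical: with a realistic per-contact probability, the run would take far longer to produce a visible curve.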

But there are a number of other assumptions that are not so explicit. First, the quarantine run starts with a tiny gap in the wall around the quarantine area, and that gap widens as the run progresses. See the picture above for the starting stage, and the picture here for what it looks like later on:

Simulation screenshot of a quarantined system (late stage, quarantine area is on the left). Source: *Washington Post*.

In this image, we can see two important assumptions in this simulation: 1. the quarantine weakens as time progresses (the gap in the wall widens), and 2. agents are not moved (back) into the quarantine area after they've been classified as ill.

These assumptions are not explained in the article, and they may be much harder to justify. Regarding (1), there may be a case for weakening quarantine conditions over time, but does that make sense while the number of infections rises? And regarding (2), I think the whole point of quarantining is that newly ill persons are placed in it. A minimal sketch of what these two assumptions boil down to is given below.
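To illustrate, here is a hypothetical sketch of assumption (1): a vertical quarantine wall whose opening widens at a constant rate. Again, this is my own Python illustration with invented constants, not the article's code:

```python
# Hypothetical sketch of assumption (1): a quarantine wall whose gap widens over time.
WALL_X = 0.3          # x-position of the wall separating the quarantine area (left) from the rest
INITIAL_GAP = 0.02    # initial opening in the wall, as a fraction of the box height
GAP_GROWTH = 0.0005   # how much the opening widens per time step (invented rate)

def gap_bounds(t):
    """Return the (low, high) y-range of the opening at time step t."""
    half = (INITIAL_GAP + GAP_GROWTH * t) / 2
    return 0.5 - half, 0.5 + half

def crosses_wall(x_old, x_new, y, t):
    """True if a move from x_old to x_new is blocked by the wall at time step t."""
    if (x_old - WALL_X) * (x_new - WALL_X) >= 0:
        return False                   # the agent stayed on the same side of the wall
    low, high = gap_bounds(t)
    return not (low <= y <= high)      # blocked unless the agent passes through the opening
```

Assumption (2) then simply corresponds to the absence of any rule that moves an agent back behind the wall once it falls ill.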

The other two runs showcase social distancing, and those have a more straightforward implementation that is easier to justify. One implicit assumption there is that the percentage of people doing social distancing remains the same throughout the simulation. I think this assumption is slightly questionable, but less controversial than the two I highlighted in the previous simulation.

Basically, one could reasonably expect social distancing measures to become less effective over time, as people lose the discipline to adhere to them. In that case, the percentage of stationary agents would have to decrease over time. But, of course, one could also argue that discipline goes up over time. It would be interesting to see how the extent of public discipline affects the accuracy of these runs; a sketch of how this could be modelled follows below.
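For illustration, a time-varying adherence could be sketched as follows, assuming a linearly eroding (or, with a negative decay, growing) fraction of stationary agents. The structure and the numbers are hypothetical, not taken from the article:

```python
# Hypothetical sketch: instead of a fixed share of stationary agents,
# let the share of people adhering to social distancing vary over time.
INITIAL_ADHERENCE = 0.75   # fraction of agents that start out stationary (invented value)
DECAY_PER_STEP = 0.0002    # discipline lost per step; a negative value models growing discipline

def adherence(t):
    """Fraction of agents that stay put at time step t, clamped to [0, 1]."""
    return min(1.0, max(0.0, INITIAL_ADHERENCE - DECAY_PER_STEP * t))

def is_stationary(agent_id, n_agents, t):
    """Treat the first adherence(t) * n_agents agent IDs as stationary, so agents
    that give up distancing stay mobile for the rest of the run."""
    return agent_id < int(adherence(t) * n_agents)
```

Re-running the two social-distancing scenarios with a schedule like this would show how sensitive the outcome is to public discipline.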

Lastly, it is not clear to what extent the simulation has been validated against data.

Robustness rating: 3/5. The basic mechanisms broadly make sense, but the quarantine run has major issues.

Completeness

We have a basic proof-of-concept simulation in the browser, so obviously its completeness is going to be limited. Many of the shortcomings and omissions are detailed in the article, so I won't hold those against the completeness score.

One limitation that is not mentioned, however, is that only direct person-to-person transmission is incorporated; there is no mechanism for transmission via surfaces.

Although person-to-person transmission is understood to be the main mechanism, there is also evidence that points to transmission via surfaces. In addition, people move around randomly, whereas real-life movement is likely to be more structured and clustered.

Other elements lacking include: (a) an infection probability below 100% (in the simulation, every touch is an infection), which would make the progression somewhat more realistic, and (b) mortality, as already noted by the authors. The first of these would be a one-line change, as sketched below.
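Point (a) would indeed be a small change to a contact rule like the one sketched earlier: draw a random number against a per-contact transmission probability instead of infecting unconditionally. The value below is purely illustrative:

```python
import random

TRANSMISSION_PROB = 0.2   # invented per-contact infection probability (instead of 100%)

def maybe_infect(source, target):
    """Infect `target` on contact with an infected `source`, with probability TRANSMISSION_PROB."""
    if source.infected and not target.infected and random.random() < TRANSMISSION_PROB:
        target.infected = True
```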

But overall, given the simplicity of the set-up, it seems that many important elements have been incorporated.

Completeness rating: 3/5. No surface transmission and fully randomized movement limit the rating, but given the simplicity of the model, the choices about which elements to include are not unreasonable.

Quality of Claims

As a reminder, the main claims in the article are that a) taking measures helps if people stick to them, and b) social distancing works better than quarantining.

The first claim (a) is well justified: the simulations clearly show how good discipline helps 'flatten the curve', irrespective of the measure taken.

The claim that social distancing works better than quarantining (b) is not sound, however, because several of the weaker assumptions in the simulation bias the whole system towards making quarantining ineffective. Since these assumptions are not mentioned anywhere in the article, I would argue that this claim is not well justified.

Quality of Claims rating: 2.5/5. Two claims, of which one is clearly evidenced and one is borderline misleading, so it seems fair to give half the points.

Presentation

Running simulations inside a newspaper article is a great idea, and the system works really smoothly. There are no dials or buttons to play with, but that may be for the better, as those could soak up more compute resources on the user's side. Having different runs in different parts of the article adds even more depth to the experience, and the simulations neatly pause when you scroll away.

So overall, I think this is really well-done in terms of presentation. I only deduct half a point because I find the color scheme somewhat vomit-inducing...

Presentation rating: 4.5/5. This is how you present simulations in a newspaper article!

Wrap-up

The overall picture looks like this:

Robustness: 3/5
Completeness: 3/5
Quality of Claims: 2.5/5
Presentation: 4.5/5

RECOMMENDATION:

Play with the code, follow the advice, and forget about the rest of the article ;).

-> Continue to the Next Review (2)