
# Defining the assessment problem

Your first step in an assessment should always be to specify clearly and unambiguously exactly what quantity you are estimating. Decision analysts like to invoke the clairvoyant (or clarity) test [MH90 p.50, H88], which stipulates that two hypothetical clairvoyants, who can both perfectly see the future, would agree on the outcome or value of the quantity. For my present challenge, the assessment of the number of deaths from COVID-19 in the US in 2020, do you see any potential ambiguities?

The primary source of ambiguity revolves around which deaths should be counted. Location and time are fairly clear: we are considering human deaths that occur within the 50 US states and 5 US territories between 1 Jan 2020 and 31 Dec 2020, in the time zone where the death occurs. Somewhat more difficult is agreeing on when to consider the COVID-19 virus to be the cause of death, especially when you consider that those who are already gravely ill are the most at risk. Someone in that category may have died anyway, even if they hadn’t contracted the virus. A good criterion is to say that the death certificate, as registered with the National Center for Health Statistics (NCHS), should list COVID-19, or complications from COVID-19, as the underlying or contributing cause of death. Another potential ambiguity arises because the virus, like any other virus, is actively mutating [NS20], so at some point a descendant virus might be reclassified as a different infection. In the event that happens, I’ll include all descendant viruses.

Although these fine distinctions will have minimal, if any, impact on my assessment process at the granularity that is possible here, a primary benefit of being this precise upfront is that it makes it possible next year to evaluate how I did. I picked these criteria precisely because the US Centers for Disease Control uses the same criteria in its annual Summary of the Influenza Season [CDC18], so next year we can compare to their reported numbers. I look forward to conducting a critical review of my logic in this assessment a year from now, and learning from it.

When you create a forecast, it is really important that you come up with a probability distribution and not just a single number. If you are forecasting a binary event (whether something will or will not happen), then your forecast should be in the form of a probability (or equivalently, betting odds [A19]). The value of your forecast is greatly diminished if all you do is produce one guess. For example, if you were to send me a prediction that the S&P 500 index will close the year at 2,900, as an investor I would find this virtually useless. But if you provide me with a distribution, and I believe you to be a well-calibrated forecaster, I can use it to make rational decisions. It tells me how confident you are, and how probable the less likely outcomes are.
##### Introduction to probability distributions
If you are already comfortable with probability distributions, you can skip ahead to the next section. If you are scared away from probability distributions because you have no background in statistics, your anxiety is misplaced. Nowadays you can make use of distributions in your own decision making, and even create your own, without even knowing what a Normal distribution is, thanks in part to the availability of easy-to-use modeling software, such as the tool I work on, Analytica [A20].

One way to present a probability distribution for a numeric quantity (such as the one I’m forecasting in this article) is as a cumulative distribution function, or just CDF, sometimes just called a probability function, such as the following. Each point of the curve gives the probability (from the Y-axis) that the actual total cost will be less than or equal to the value on the X-axis. So, for example, in this plot, there is an 82% chance that the total cost will turn out to be less than or equal to $150M (=$150,000,000). You can equivalently say that there is an 18% chance that the total cost will exceed $150M. An alternate way of plotting the exact same information is to flip the graph in the Y-axis, which is known as the exceedance probability plot. Each data point on that plot is the probability (from the Y-axis) that the actual value will be equal to or greater than the value on the X-axis. Now you can read directly that there is an 18% chance of exceeding $150M; you don’t have to subtract 82% from 100%. CDF plots are more common, but exceedance plots have an advantage when you care about very rare but high-cost events in the right tail. For example, suppose you care about the really rare cases that occur only 0.01% of the time. You can’t see these on either of the above plots because the curve is less than one pixel from the axis, but you can see them quite well if you use a log-scaled Y-axis.

When you want to express your own forecast as a probability distribution, how do you do it without any background in statistics? What you need to do is decide on three (or more) percentile predictions. The most common convention is to use 10-50-90, denoting the very low-end outcome, the middle outcome, and the very high-end outcome. In the total cost example, you may have come up with that distribution by estimating that there is a 10% chance the actual cost will be less than $61M, a 10% chance it will exceed $170M, and an equal chance it will be less than or greater than $100M. With these three numbers, you can specify the full distribution in Analytica by setting the Definition of Total_cost to

UncertainLMH( 61M, 100M, 170M )

When you show the result, and select either the CDF or Exceedance probability view, you get (approximately) the above graphs. It is that easy. (Caveat: the exceedance probability view is new in the Analytica 5.4 release.) There is one further consideration, however. The distribution we just specified allows negative outcomes. In this example they occur with very low, but non-zero, probability; in other cases they may come out as more likely. So when negative values are nonsensical, you can also include a lower bound:

UncertainLMH( 61M, 100M, 170M, lb:0 )

When estimating a proportion or frequency, you should use a lower bound of 0 and an upper bound of 1 by adding ub:1. In the present article, I start out estimating 25-75-99 percentiles instead of 10-50-90, and eventually refine this into an even more nuanced 10-25-75-99, along with a lower bound in both cases. Although these are not symmetrical, it doesn’t require any additional statistical knowledge to turn them into a full distribution, but I do use a different function: Keelin( estimates, percentiles, I, lb ), where «estimates» are my estimates at each «percentile», both being arrays indexed by «I», and «lb» is the lower bound. With that, you have all the tools you’ll (perhaps ever) need to create your own forecasts as probability distributions. But you still need to make the forecasts, so read on.
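If you would like to see the mechanics in code, here is a minimal Python sketch (not Analytica, and not the actual UncertainLMH or Keelin machinery): it fits a rough lognormal stand-in to the same three percentile estimates and reads off exceedance probabilities by sampling. The numbers it prints will differ slightly from what UncertainLMH gives; treat it as illustrative only.

```python
import numpy as np

# Crude stand-in for a 10-50-90 specification like UncertainLMH(61M, 100M, 170M):
# fit a lognormal whose 10th, 50th and 90th percentiles roughly match the estimates,
# then estimate exceedance probabilities by Monte Carlo sampling.
p10, p50, p90 = 61e6, 100e6, 170e6

z90 = 1.2816                                     # standard normal 90th percentile
mu = np.log(p50)                                 # lognormal median determines mu
sigma = (np.log(p90) - np.log(p10)) / (2 * z90)  # spread from the 10-90 range

rng = np.random.default_rng(1)
samples = np.exp(rng.normal(mu, sigma, size=100_000))

for threshold in (100e6, 150e6, 200e6):
    print(f"P(total cost > ${threshold / 1e6:.0f}M) = {(samples > threshold).mean():.0%}")
```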
##### Accuracy, calibration, and informativeness of forecasts
How do you know if a probabilistic forecast is correct? This question sounds reasonable enough, but it does not make sense, and it exposes a misunderstanding of what a probabilistic forecast is. Suppose Alice and Bob are both excellent meteorologists, and for tomorrow Alice forecasts a 30% chance of rain whereas Bob forecasts an 80% chance. Is one of them wrong? The answer is that both could be equally good forecasts. What counts is how they do over time with repeated forecasts. Suppose you look through Alice’s track record to find all the times she predicted something had a 30% probability and then look up whether the event happened or not, and you find that the event happened 30% of the time and did not happen 70% of the time. It would be reasonable to conclude that Alice’s forecasts are accurate, or well-calibrated, and therefore her 30% prediction this time is credible. However, you also check Bob’s track record and find that his 80% forecasts happen 80% of the time, so he is also well-calibrated. Hence, both forecasts are good, even though they are different! Looking only at a single forecast, there is no right or wrong probability distribution. A forecast is accurate because the person who makes the forecast is well-calibrated, not because of the specifics of the forecast itself. The key is that the forecaster is held accountable: once we learn what happens, we can measure how often her 10%, 20%, 30%, ... predictions actually happen. And if you use well-calibrated forecasts to make decisions, the outcomes that result from your decisions will be predictable. The term accuracy, when applied to a probabilistic forecast, means that the person, organization or process that produced the forecast is well-calibrated. A forecast is a prediction about a non-recurring event. This leads to confusion, as well as to heated debates about frequentist vs. subjective probability, because probability distributions are also excellent representations of recurring random processes. But a forecast does not predict the outcome of a recurring process; it assigns probabilities to a single event with exactly one outcome, which will (hopefully) be known with certainty at some point in the future. Thus,
every forecast is a subjective assessment, in the form of subjective probabilities, and it represents an encapsulation of a body of knowledge in a useful and accessible form for decision making.
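As a small illustration of what “well-calibrated” means operationally, the following Python sketch buckets a forecaster’s past probability statements and compares each bucket to how often the events actually occurred. The track record here is simulated purely for illustration.

```python
import numpy as np

# Toy calibration check. The track record below is simulated from a forecaster
# who is perfectly calibrated by construction; a real check would use the
# forecaster's actual history of (stated probability, did it happen) pairs.
rng = np.random.default_rng(0)
stated = rng.choice([0.1, 0.2, 0.3, 0.5, 0.8], size=1_000)   # past probability forecasts
happened = rng.random(stated.size) < stated                   # outcomes of those events

for p in np.unique(stated):
    group = happened[stated == p]
    print(f"forecast {p:.0%}: event occurred {group.mean():.0%} of the time "
          f"({group.size} forecasts)")
```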

# Cognitive Bias

In most respects, humans are excellent at forecasting. We can predict what the driver in front of us is likely to do, what will happen when we turn a bucket of water upside-down, and whether we will be strong enough to pick up a suitcase. However, psychologists have found that our estimates and forecasts tend to be biased in predictable ways [TK74][K10], which are called cognitive biases. The impact these have on the probabilistic forecasts that people make is dramatic. Because of this, it matters how we approach a forecasting task. Some strategies for assessment can help to reduce cognitive biases, which I try to illustrate by example in this article. The highly readable book “Thinking, Fast and Slow” by Daniel Kahneman [K10] does an excellent job not only of reviewing 35 years of psychology research on cognitive biases, but also of unifying that research into an understanding of why the biases arise.
##### Anchoring
Perhaps the strongest cognitive bias to be aware of is anchoring. What happens is that you think of a number for your forecast; maybe it is your first guess, or maybe someone else suggests it. Regardless of how bad or ridiculous you know that guess to be, and even if you are aware of the anchoring bias, you will very likely end up with a forecast that is close to that number, or substantially closer than it otherwise would have been. For example, in one study conducted at the San Francisco Exploratorium, some visitors were first asked whether the tallest redwood tree is more or less than 1,200 feet tall, and were then asked for their best guess of the height of the tallest redwood. Other people were given a low anchor of 180 feet. The average difference in best guesses between the two groups was a whopping 562 feet [K10, p.123-4]. You might not believe you are influenced by a guess you know to be irrelevant, but psychology experiments have shown anchoring to be an incredibly robust bias. This is why, in the intro, I asked you to avoid making your own guess upfront.
##### Affect bias
We tend to judge outcomes that elicit strong emotions to have a higher probability than is warranted. This bias is quite relevant here, since the COVID-19 outbreak elicits strong emotional responses in all of us. In one study [K10 p.138], people were given two possible causes of death, for example death by lightning and death from botulism, and asked which is more frequent and by what ratio. People judged lightning to be less frequent, even though it is 52 times more frequent, with similarly incorrect orderings for many other pairs. Causes of death that evoked strong visual imagery, emotional repulsion, etc., were consistently overestimated. In the present case, the thought of an epidemic overtaking us is emotionally powerful, whereas heart disease and cancer seem mundane, so we should keep this bias in mind.

# Base Rates

##### Causes of death
My starting point is to look at how many people die of other causes each year in the US. The following data is lifted from the US CDC’s reports on causes of death in the US for the years 2017 and 2018 [CDC19b], [CDC20b]. Although not guaranteed, it is a good guess that when a similar graph is created 18 months from now with the data for 2020, and with a separate bar for COVID-19 deaths, the COVID-19 bar will fit in with the other bars. In other words, I already have a sense that the bulk of the probability mass for our estimated distribution should land within the range shown in this graph.
##### Seasonal flu
One bar in the above graph stands out, “Influenza + pneumonia”, since COVID-19 fits into this category, so we should place extra weight on the base rates for this category. The CDC’s Disease burden of influenza report [CDC20a] goes into more detail about seasonal influenza. Over the past 10 years, the number of deaths in the US from influenza has varied from 12,000 to 61,000. The CDC’s disease burden report also contains highly relevant data about the number of cases, medical visits, and hospitalizations, which I encourage you to review. Using the reported uncertainties for the past ten years depicted in the previous graph, I merged these into a single mixture distribution and plotted the resulting exceedance probability curve here. For completeness, I generated this plot using the Analytica expression

UncertainLMH( L,M,H, pLow:2.5% )[Season=ChanceDist(1,Season)]

after creating an index named Season (defined as Local Y:=2010..2018 Do Y&"-"&(Y+1) ), and tables L, M, and H indexed by season and populated with the CDC data depicted in the previous graph. This turns each CDC bar into a distribution, then composes an equally weighted mixture distribution from those. I then showed the result and selected the Exceedance probability view that is new in Analytica 5.4.
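For readers who don’t use Analytica, here is a rough Python sketch of the same mixture idea. The per-season low/mid/high values are placeholders, not the actual CDC burden estimates; the point is only to show how an equally weighted mixture of per-season distributions can be sampled.

```python
import numpy as np

# Each season gets a low/mid/high estimate (treated here, as in the Analytica
# expression above, as roughly the 2.5 / 50 / 97.5 percentiles of a lognormal).
# An equally weighted mixture is formed by picking a season at random per draw.
# These numbers are placeholders, NOT the CDC's published burden estimates.
low  = np.array([30e3, 11e3, 40e3, 32e3, 45e3, 20e3, 30e3, 55e3, 25e3])
mid  = np.array([37e3, 12e3, 43e3, 38e3, 51e3, 23e3, 38e3, 61e3, 34e3])
high = np.array([45e3, 14e3, 47e3, 45e3, 60e3, 27e3, 46e3, 70e3, 43e3])

rng = np.random.default_rng(2)
n = 100_000
season = rng.integers(low.size, size=n)                 # equally weighted season choice
z975 = 1.96                                             # standard normal 97.5th percentile
mu = np.log(mid[season])
sigma = (np.log(high[season]) - np.log(low[season])) / (2 * z975)
mixture = np.exp(rng.normal(mu, sigma))

for q in (25, 50, 75, 95):
    print(f"{q}th percentile of the mixture: {np.percentile(mixture, q):,.0f} deaths")
```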

#### The base rate distribution

I am ready to synthesize the above information to come up with a first estimate (as a probability distribution, of course) based only on base rate data. This will be my starting point, and then I will adjust it based on other knowledge sources specific to COVID-19. I’ll start with the 25th and 75th percentiles. The 25th percentile of the seasonal influenza death distribution shown earlier (based on 2010-2019 data) is 30,900 and the 75th percentile is 47,500. Where do the numbers of US deaths from past pandemics land on this distribution? The Spanish flu of 1918 is beyond the 99.9999th percentile, making it a true black swan [T10]. The Asian flu of 1957 is at the 97th percentile. The Hong Kong flu of 1968 is at the 32nd percentile. The H1N1 flu of 2009 is at the 6th percentile. And the SARS-CoV and MERS-CoV outbreaks are at the zeroth percentile, since there were no deaths from either in the US. We also need to take into account that the flu burden graph sums many strains of influenza, whereas the pandemic numbers are for individual strains. From there, I settled on a 25th percentile of 25,000 and a 75th percentile of 75,000 for my base rate distribution.

Next, I turn to the (right) tail estimation. Here the Spanish flu of 1918 is the best data point we have, but it is also limited in many ways in terms of how similar it really is to our modern situation. It is useful for identifying an “extremely dire” outcome, a 1-in-100-year event. Conveniently, it occurred 102 years ago. There is only a single data point, so we don’t really know whether it is representative of a 1-in-100-year event, but I have to go with what I have while trying to avoid too much inadvertent injection of “expert knowledge”. So I use 550,000 deaths as my 99th percentile. This leaves me with the following estimated percentiles for my base-rate distribution:
| Percentile | Base rate for deaths in US in 2020 |
|------------|------------------------------------|
| 0          | 22                                 |
| 25         | 25,000                             |
| 75         | 75,000                             |
| 99         | 550,000                            |
I promise it is pure coincidence that my 25th and 75th percentiles came out to be 25,000 and 75,000; I only noticed that coincidence when creating the above table. Thanks to the recently introduced Keelin (MetaLog) distribution [K16], it is easy to obtain a full distribution from the above estimates by supplying the table to the CumKeelinInv function in Analytica, which yields the following exceedance curve. A log-log depiction of the same distribution is also useful for viewing the extreme tail probabilities, and the probability density plot on a log-X axis is as follows:
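For the curious, here is a hand-rolled Python approximation of this step (it is not Analytica’s CumKeelinInv): a 3-term semi-bounded (log) metalog in the sense of Keelin [K16], fit through the 25/75/99 percentile estimates, with the 0th-percentile value of 22 used as the lower bound. The resulting quantile function can then be sampled by inverse transform.

```python
import numpy as np

# 3-term semi-bounded metalog: quantile(y) = lb + exp(a1 + a2*L + a3*(y-0.5)*L),
# where L = ln(y/(1-y)). Solve a 3x3 linear system so the curve passes exactly
# through the three (percentile, estimate) pairs from the base-rate table.
lb = 22.0
ys = np.array([0.25, 0.75, 0.99])                 # percentiles
qs = np.array([25_000.0, 75_000.0, 550_000.0])    # estimated deaths at those percentiles

L = np.log(ys / (1 - ys))
basis = np.column_stack([np.ones_like(ys), L, (ys - 0.5) * L])
a = np.linalg.solve(basis, np.log(qs - lb))       # metalog coefficients

def quantile(y):
    """Deaths at cumulative probability y under the fitted base-rate distribution."""
    Ly = np.log(y / (1 - y))
    return lb + np.exp(a[0] + a[1] * Ly + a[2] * (y - 0.5) * Ly)

rng = np.random.default_rng(3)
samples = quantile(rng.uniform(1e-6, 1 - 1e-6, size=200_000))
print(f"median: {np.median(samples):,.0f}")
print(f"P(deaths > 100,000): {(samples > 100_000).mean():.1%}")
```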

# Incorporating COVID-19 specifics

The next step is to use actual information that is specific to COVID-19 and its progression to date to adjust my base-rate distribution. The base-rate distribution is now my anchor. I view anchoring as an undesirable bias, but also as an unavoidable limitation of the way our minds work. The benefit of anchoring on the base rate is that it grounds me in actual probabilities, rather than in degree of effect, story coherence, ease of recall, and so on. There are many ways to approach this: trend extrapolation, several back-of-the-envelope decompositions, and many epidemiological models from the very simple to the very complex. Forecasts are typically improved if you can incorporate insights from multiple approaches.

#### Trends

The Worldometer.info website [W20] has been tracking the daily progression of reported cases and deaths. Early in the outbreak, on 1-Feb-2020, I published a blog article examining the scant information about mortality rates that was available at that time [C20]. At that time, I found a nearly perfect fit to exponential growth curves for both the reported cases and the reported deaths. From pretty much the day of that posting onward, the trend of both curves has been very much linear, as seen here in the reported deaths up to today. The reported cases follow the same pattern: exponential growth to 1-Feb, and linear growth for the subsequent five weeks to the present. I find linear growth of either curve hard to explain, since disease transmission is an intrinsically geometric process in the early stages. There is one obvious explanation for the linear growth of the reported cases, which is that the infrastructure for detecting and reporting cases, being a human bottleneck, can’t keep up with exponential growth, so it makes sense for reporting to grow linearly after some point. If that were the case, it would mean that the fraction of cases actually reported is shrinking fast. But I’m somewhat skeptical that the reporting of deaths would be hitting up against such a bottleneck, so I have to take the linearity of the reported deaths in February as real. A linear extrapolation of the reported deaths to the end of the year predicts a total of 33,640 deaths worldwide. What fraction of those would occur in the US? Rough ratios for past pandemics are shown here. So if, by the end of the year, a pessimistic 5% of all deaths occur in the US, the predicted number in the US would be 1,682, which is at the 0.002 quantile (i.e., the 0.2 percentile) of the base-rate distribution, suggesting that the left side of the distribution should be adjusted downward. I don’t know whether the linear trend will continue. Because transmission is a geometric process, I am not optimistic, but since it is happening, I have to consider it a realistic possibility. Adjustment: 10th percentile = 1,500.
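The extrapolation itself is trivial, but for concreteness here is a Python sketch of it. The cumulative death counts below are synthetic placeholders rather than the actual Worldometer series; only the structure matters: fit a line to the recent stretch, extend it to day 365, and apply an assumed US share.

```python
import numpy as np

# Linear extrapolation of cumulative worldwide reported deaths to 31-Dec-2020.
# The synthetic series below stands in for the real Worldometer data.
rng = np.random.default_rng(4)
day = np.arange(32, 69)                                     # 1-Feb .. 9-Mar as day-of-year
cum_deaths = 360 + 92.0 * (day - 32) + rng.normal(0, 25, day.size)

slope, intercept = np.polyfit(day, cum_deaths, deg=1)       # least-squares line
worldwide_eoy = intercept + slope * 365                     # extend to 31-Dec (day 365)
us_share = 0.05                                             # assumed (pessimistic) US fraction

print(f"extrapolated worldwide deaths by year end: {worldwide_eoy:,.0f}")
print(f"implied US deaths at a {us_share:.0%} share: {worldwide_eoy * us_share:,.0f}")
```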

#### Maximally simplistic models

I like to decompose difficult estimation problems into other variables that can be estimated in their own right. I refer to this as “model building” and I’ve spent the last 20 years designing the ideal software tool to assist critical thinkers with this process (i.e., the Analytica visual modeling software). The famous physicist Enrico Fermi was renowned for, among other things, emphasizing this style of problem-solving to his students, and I’ve seen it called “Fermi-izing” [TG15]. With the right tools, building large models is simply the recursive application of “Fermi-ization” to the sub-components of your model, over and over. There are few things as helpful for gaining insight into forecasts like these as building models. However, the degree to which it is helpful depends critically on the software you use, especially as models expand, as they invariably do. You can build models in spreadsheets, and you can implement models in Python, but I’ve learned that both of these extremes are pretty ineffective. Yes, you can burn a lot of time and experience the satisfaction of surmounting many tough technical challenges, but much of the good insight gets obscured in the tedious mechanics and complexity of the spreadsheet, or in the code of Python. I use Python quite a lot and consider myself to be at the top tier among Python users, but I don’t find it useful for this type of work. Don’t discount the simplest models, but don’t stop with them either. Here is the simplest model I could come up with:
• 330M people in US
• p1 percent of population contract virus
• p2 percent of those get sick
• p3 percent of those die
I’ve just replaced the original assessment problem with three new ones. Once you have these, you can just multiply the four numbers together. When you assess p1, p2 and p3, you can use distributions (even quick and dirty ones) for them as well, and then use, for example, Monte Carlo sampling to propagate the uncertainty. The model itself looks like this: What makes this especially useful is not any single prediction it produces, but rather that you can experiment with a lot of different scenarios. For example, one scenario I’ve heard suggested repeatedly (which may or may not have any merit, something I am going to judge) is that the virus has already substantially spread through the population, so that many people are carrying it, but most people don’t develop symptoms. The claim is that our current sampling of reported cases has come from the small percentage of people who actually get sick. So: high rates of infection in the population, a low percentage who get sick:
• p1 = 20%
• p2 = 2%
• p3 = 2%
• Number of deaths ≈ 26K
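To make the decomposition concrete, here is a Python sketch of this scenario (not the Analytica model itself): first the point-estimate multiplication, then a quick Monte Carlo version with rough lognormal spreads around the same values. The spreads are illustrative assumptions, so the percentiles it prints will differ from the Analytica results quoted next.

```python
import numpy as np

# The maximally simplistic model: deaths = population * p1 * p2 * p3.
# First as point estimates, then with rough (illustrative) uncertainty.
population = 330e6
p1, p2, p3 = 0.20, 0.02, 0.02                         # contract, get sick, die
print(f"point estimate: {population * p1 * p2 * p3:,.0f} deaths")

rng = np.random.default_rng(5)
n = 100_000

def lognormal_about(median, log_sigma, size):
    """Lognormal samples with the given median; log_sigma is an assumed spread."""
    return np.exp(rng.normal(np.log(median), log_sigma, size))

deaths = (population
          * lognormal_about(p1, 0.5, n)
          * lognormal_about(p2, 0.5, n)
          * lognormal_about(p3, 0.5, n))

print(f"median: {np.median(deaths):,.0f}")
print(f"P(deaths > 100,000): {(deaths > 100_000).mean():.1%}")
```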
A pitfall is that when you plug in single numbers for each estimated quantity, as I just did in this example, it is easy to be greatly misled [S12]. Let’s repeat the same scenario with distributions centered on these values, to at least incorporate a little uncertainty.
This results in the following exceedance probability graph. The median is now only 16,600, but the graph shows a 1% chance of exceeding 100K deaths. Although that is an unlikely (1%) outcome, it is the information most relevant for planning and decision making, and it gets missed entirely if you don’t explicitly include uncertainty. Now we can ask how the exceedance curve changes as our estimate of the percentage of people who don’t get sick varies. In the first scenario, we looked at the case where the vast majority of people (roughly 98%) show no symptoms. What if more people show symptoms (get sick), and the same percentage of those die? I added a variable, Est_Prob_nosick, to hold the “Estimated percentage who don’t get sick”, and (re)defined the percentage who get sick in terms of it.
The direst scenario on this graph corresponds to the case where roughly 20% of the population contracts the virus, roughly 50% of those who contract it show symptoms, and roughly 1.5% of those who show symptoms end up dying by the end of the year. This strikes me as a pretty low probability scenario, but not entirely far-fetched. The base-rate distribution assigns a 1% chance of exceeding 500,000 deaths, whereas the direst curve here assigns a 75% chance, with a 45% chance of exceeding 1M. My takeaway from this is that we need to extend the right tail of the base-rate distribution somewhat, such as moving the 98th percentile to around 1M. However, I’m prepared to adjust that again after additional modeling exercises.
##### Dynamic models (the SIR-like model)
A dynamic model explores how the future unfolds over time. It makes use of rates, such as the rate at which an infection spreads from person to person, to simulate (with uncertainty) how the number of infected and recovered people evolves over time. I put together a simplistic dynamic model, and made it available open-source (see below), as shown in this influence diagram. The model approximates the US as a closed system, with a certain number of people in each of 5 stages; any one person is in a single stage on any given day. People move from being Susceptible to Incubating (infected but not contagious) to Contagious to Recovered. Deaths occur only among those in the Contagious stage. There is no compartmentalization by age, geography or other criteria. Each of the dark blue nodes with heavy borders in the diagram is a module node, and each contains its own sub-model inside. Note: this is a simple extension of a classic SIR model, where SIR stands for Susceptible-Infectious-Recovered. I’ve split the “I” stage of SIR into the two stages of Incubating and Contagious, and I’ve also added explicit modeling of uncertainty.

At the top left, there is an edit table where you can enter your own estimates for the uncertain inputs. For each uncertainty, you specify a low, medium and high value, which become 10-50-90 percentile estimates. Pct with innate immunity is the percentage of the population who will never become contagious (and will never die from infection); for whatever unknown reason, they are innately immune. (I totally invented these numbers.) The Infection rate denotes how many people, on average, a contagious person infects per day when the entire population other than that person is susceptible. The Time to recover is the average number of days that a contagious person remains contagious. Initially infected is the number of people who are infected (either incubating or contagious) as of 8-Mar-2020 (the day I created the model), whether or not they’ve been officially diagnosed, and the Mortality rate is the percentage of contagious people who die from the disease; the model assumes the death happens while in the contagious phase. Time to contagious is the number of days a person spends in the incubation period.

I emphasize again that the real power of a model like this is not the output of a single run, but rather that it enables you to gain a deeper understanding of the problem by interacting with it, exploring how different estimates change the behavior, and so on. The following chart shows the mean values for the key stock variables of the model. The dynamic nature of the model through time becomes evident when you notice that the X-axis is Time. I want to emphasize that you should not fixate on the specific projection depicted here, since it is simply the mean projection based on the estimates for the chance inputs shown above. The real insight comes from interacting with the model, changing inputs and exploring different views. Rather than reproduce many cases here, I will instead summarize some of the insights I got after playing with the model for quite a long time. The largest surprise for me was how extremely heavy the right tail tended to be (the statistical term for this is leptokurtic). In the above simulation, the distribution for total deaths in 2020 has a kurtosis of 54. This is so unusual that I spent quite a bit of time debugging, trying to figure out what was going wrong, but I eventually satisfied myself that it was working right.
To give you a sense of how extreme this is, consider the following. The Number of deaths in 2020 variable, the model’s key output, had a median of 401 deaths, a mean of 881,000, and a 95th percentile of 3.7M. This comes from the amplifying power of geometric growth. With my uncertain inputs, a large number of Monte Carlo scenarios died down quickly: in nearly 50 percent of them, the disease died out before taking the lives of 400 people. But in the scenarios with a transmission factor (infection rate times time to recover) greater than 1, such as the 95th percentile simulations, the number of people infected took off very quickly. Here is a probability bands graph for the percentage of the population who become infected at some point on or before the indicated date. In the median case, the percentage of people who contract it stays extremely low, but in the 75th percentile case (and above), everyone catches it. This radical difference is due to the high kurtosis that arises from the exponential growth in the pessimistic cases. Interestingly, the surge in July that appears to be predicted in the graph of the mean of “Projected number of people” (two plots previous) is somewhat of an artifact. The surge occurs only in the most pessimistic cases, such as the 95th percentile, where the entire population reaches saturation near the beginning of July. The numbers of infected in those few outliers are so large that they dominate the average, causing it to appear that the model predicts a surge in July when viewing the mean.
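To show the shape of such a model in code, here is a heavily simplified Python sketch of an SICR-style Monte Carlo simulation. It is not the Analytica model described above: it omits the innate-immunity stage, uses deterministic daily flows within each run, and its input distributions are invented for illustration. Even so, it exhibits the same qualitative behavior: runs whose sampled transmission factor stays below 1 fizzle out, while those above 1 explode, so the mean lands far above the median.

```python
import numpy as np

# Simplified SICR Monte Carlo: Susceptible -> Incubating -> Contagious -> Recovered/Dead.
# Each run samples its own uncertain inputs (all distributions invented for illustration),
# then steps the compartments forward one day at a time with deterministic flows.
rng = np.random.default_rng(6)
N_RUNS, N_DAYS, POP = 2_000, 300, 330e6

def lognormal_about(median, log_sigma, size):
    return np.exp(rng.normal(np.log(median), log_sigma, size))

infection_rate  = lognormal_about(0.13, 0.30, N_RUNS)   # new infections per contagious person per day
days_incubating = lognormal_about(5.0,  0.20, N_RUNS)   # time to contagious
days_contagious = lognormal_about(7.0,  0.20, N_RUNS)   # time to recover (or die)
mortality_rate  = lognormal_about(0.01, 0.50, N_RUNS)   # fraction of contagious who die
initially_infected = 20_000.0                            # illustrative seed on day 0

deaths_2020 = np.zeros(N_RUNS)
for r in range(N_RUNS):
    S = POP - initially_infected
    E = C = initially_infected / 2
    D = 0.0
    for _ in range(N_DAYS):
        new_inf = infection_rate[r] * C * S / POP   # Susceptible -> Incubating
        leave_E = E / days_incubating[r]            # Incubating -> Contagious
        leave_C = C / days_contagious[r]            # Contagious -> Recovered or Dead
        S -= new_inf
        E += new_inf - leave_E
        C += leave_E - leave_C
        D += mortality_rate[r] * leave_C
    deaths_2020[r] = D

print(f"median: {np.median(deaths_2020):,.0f}   mean: {deaths_2020.mean():,.0f}")
print(f"95th percentile: {np.percentile(deaths_2020, 95):,.0f}")
```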
##### Obtaining a copy of the SICR model
I made this model available free and open source to encourage anyone interested to play with, modify, and enhance it. To run it:
1. Install Analytica Free 101 (if you don’t already have Analytica installed). Recommended: since exceedance plots are nice with this model, if you are an Analytica subscriber, use the Analytica 5.4 beta instead.
2. Download the model file.
3. Launch Analytica and open the model.
4. If you are a newbie to Analytica, consider going through at least the first few chapters of the Analytica Tutorial to learn your way around.
##### Insights from the dynamic model
I did not find the dynamic model to be very helpful for obtaining meaningful predictions of real numbers. The predictions it makes are hyper-sensitive to the uncertain inputs, especially inputs like infection rate and time to recover (the number of days infectious). Minuscule changes to those create dramatic swings in the numeric forecasts. For these purposes, the simplified models were far more useful. However, the dynamic model gave me an appreciation for just how heavy that right tail can be. Earlier in this article, I noted that the Spanish flu was at the 99.9999th percentile of my base-rate distribution. The dynamic model has convinced me that it should not be. It doesn’t take much to swing the dynamic model into forecasting an ultra-dire scenario. But on the upside, the ultra-sensitivity of the outcome to infection rate and mortality means that every precaution we take as a society has an amplified beneficial effect. So even the things that seem stupid and minor reduce the risk of the dire outcomes far more than my intuition would suggest. However, this article does not deal with social policy decisions.
##### Incorporating modeling insights
After incorporating the insights from the trends and models discussed in the text, I have revised my percentile assessments as follows to obtain the final forecast.
| Percentile | Revised estimate for deaths in US in 2020 |
|------------|-------------------------------------------|
| 0          | 22                                        |
| 10         | 1,500                                     |
| 25         | 15,000                                    |
| 75         | 75,000                                    |
| 95         | 550,000                                   |
| 99         | 2M                                        |
I show plots of the full distribution in the Results section next. The quantile estimates in the preceding table are expanded to a full distribution using a semi-bounded Keelin (MetaLog) distribution [K16].

# Results

In this section, I summarize my forecast for the Number of Deaths from the COVID-19 Coronavirus that will occur in the US in the year 2020. For those who have jumped directly to this section, the information and process that went into these estimates are documented in the text prior to this section. I can’t emphasize enough how important it is to pass these estimates around in the form of a probability distribution, and NOT to convert what you see here to just a single number. If you are a journalist who is relaying these results to lay audiences, at least communicate the ranges in some form. The following exceedance probability graph sums up the forecast, and I’ll explain how to interpret it following the plot.

The actual number of deaths has an equal chance of being less than or greater than the median value, making the median the “most typical” of the possible outcomes. To read the median, find 50% on the vertical axis, and find the corresponding value on the horizontal axis. The median estimate is that 36,447 people will die from COVID-19 in 2020. The 90% exceedance gives a reasonable “best-case” extreme scenario: the forecast gives a best case of 1,923 deaths, and there is a 10% chance of the actual number being this low. You can consider the 10% exceedance probability to be the “very pessimistic” scenario: there is a 90% chance it won’t get that bad, but a 10% chance it will be even worse. This 10% exceedance forecast (also called the 90th percentile) is 271,900 deaths. The right tail, corresponding to the part of the graph where exceedance probabilities are less than 10%, represents the very unlikely, but not impossible, catastrophic scenarios. I judged the probability of 1 million deaths or more to be 2%, with 2 million or more at 1%. The good news is that the probability of these extreme scenarios materializing is hyper-sensitive to parameters that we can influence through changes that lower transmission rates.

# References


### 19 thoughts on “Estimating US Deaths from COVID-19 Coronavirus in 2020”

1. Linda L Chambers

Lonnie, thanks for your call today. I am totally impressed with your write up and summary about this. I’m hoping this virus will turn around soon and be a memory. Most of this is “over my head,” as you know, but it is frightening to think that 100 million people will die from this virus in the U.S. I admire and appreciate all of your work on this presentation. Love you, Mom

1. Lonnie Chrisman

Thanks mom! Thanks for reading it. Just to be clear, we can be sure that the number of people who die in the US will not be 100M. I think it is plausible that 100M people might contract it, but only a small fraction of those will actually die from it. My forecast does include optimistic outcomes with substantial probability too. I think it will require extreme action from everyone to limit social gatherings (which is happening), and getting a lot more early testing out there (which people are working hard to do). Let’s turn this thing around!

1. Lonnie Chrisman

Sounds like the projections were communicated to the media by phone, so I don’t see a link in the article to a corresponding CDC publication. How does their projection compare to mine? Unfortunately, the details are not elaborated. They say there were 4 scenarios, but don’t elaborate on each one, nor give relative likelihoods. The article elaborates more on the more dire ones. But, my read is that the CDC expert’s estimate of the mean is 480,000 deaths, although he says that is conservative. It wasn’t explicit whether this was a mean or median or mode (best guess). To compare, my mean was 164,000, median was 36,477. They gave a range, which the wording sounded like it was a range for the dire scenario, of 200,000 to 1.7 million. That upper end is right where my upper end is.

Because the NY times article isn’t explicit about the distributions, there is a lot of ambiguity there for how to interpret those.

2. MORGAN EDWARDS

Hello,

I read your article with much interest. I even understood some of it. (LOL)

I am a retired communications operations executive with an engineering degree and an MBA. So reasonably well educated, but nothing like the educational background of yourself and those who have commented on this article. In retirement I am a competitive swimming coach (a skill I learned in my youth).

I am struggling with the draconian measures that we have taken to essentially close down the schools and much of commerce in the US as a result of this pandemic. I am old enough to have lived through the pandemics of the recent past – in fact I was really ill with the Asian flu in the late 1950s and lived through all the pandemics/epidemics that followed.

My question is this – if the range of deaths in the US is projected to be between 2,000 and 1,000,000 – but the 50/50 projection is 36,000 and the most likely range 20,000-100,000, that makes this pandemic look by itself like a typical flu year with 10,000-60,000 deaths. If you add those two numbers together (36K + 34K) – the result is 70,000. Higher than the typical number of flu season deaths but probably lower than 2017-2018, which my research suggests to be 60,000-90,000.

That being the case, why don’t we implement these very serious social restrictions every year? What’s different about
CV-19?…………..and this pandemic?

Last question ………what would a projection curve look like for a typical flu season?

Thank you for providing me an opportunity to make these comments and to ask these questions.

Morgan Edwards

1. Lonnie Chrisman

Morgan – those are really good questions. Thank you for asking them.

I think we all are struggling to comprehend and live with the drastic actions that we’ve implemented so far. At the same time, trying to figure out whether they are enough.

It is important to understand that in this forecast I’m estimating where we will end up AFTER whatever measures we decide to implement as a society. But that is a lot different from the forecast of where we would end up if we take no action, or just treat it like an ordinary flu. I have also spent time forecasting the progression if no special measures were to be implemented, which I find to be much easier to forecast. If we took that route, I think we would likely be looking at numbers of deaths from COVID-19 in the US between 1 million and 10 million. In addition to those, we would probably see another 1.5M people die from other causes as a result of lack of access to ICU care. So the outcome if we don’t take unprecedented measures would not be in the range of a typical flu season. Drastic measures, if we are lucky, may enable us to exit this in the range of a typical flu season.

In addition, even when the median forecast seems acceptable, we need to take appropriate responses to protect against the catastrophic but low probability outcomes. In the dynamic models I’ve built and played with, social distancing measures amplify that tail very dramatically.

I’d like to throw out a couple of numbers that you might be able to use while seeing whether you agree with what I just said. The Infection Fatality Ratio (IFR) for the age groups 60-69, 70-79 and >=80 has been estimated to be 2.2%, 5.1% and 9.3% in a functional US health care system (I took these from the Imperial College report). CFR rates in countries where the health system has become overloaded (like Iran and Italy) have experienced a roughly 4-fold increase compared to countries with functional ICU systems. Without measures, the need for ICU care for COVID-19 cases is expected to be about 30 times current capacity, so reasonable IFR estimates for those age groups would be 9%, 20% and 37%. The number of Americans in those age groups is 38M, 23M and 13M. And with no curtailment measures, likely 20% to 70% of all people would catch it. I invite you to adjust these, then multiply them. For example, if you accept the numbers I gave, with 25% of people in the 3 age groups catching it, you predict 833K, 1.17M and 1.19M deaths in each of those age groups (~3.1M among >=60 year olds).

Since I’m illustrating how to approach a difficult forecasting task here, doing a forecast of where we will actually end up has the advantage of being something that can be tested later — eventually we’ll know the true answer, and can compare that to the forecast. Forecasts of hypotheticals — “if we do this, then…” — don’t have that property.

Why don’t we implement these serious restrictions every year?

You are absolutely right that there is a big economic trade-off that we have to make. Slowing down / stopping our economy has serious health outcomes in its own right. We shut our economy down, we shorten lifespan through poverty, etc. So we have to treat this as a trade-off.

The two differences this time, compared to a normal flu season, are that COVID-19 appears to be a lot more infectious and a lot more virulent, which tips the trade-off scale. Both are aggravated by the fact that no one has ever had this disease before, so there is no immunity in the population.

> Last question ………what would a projection curve look like for a typical flu season?

For this one, go back up into my article and find the graph that says “Exceedance probability vs Seasonal influenza deaths”. That one is formed directly from the past 10 years of flu data, so I probably wouldn’t change it too much. I would expand the right tail somewhat though, because it is based only on 10 years of data, and R0 in particular tends to amplify the right tail (the low probability but very bad outcome cases).

1. Morgan Edwards

Lonnie,

Sorry for the delay in replying back.

Those are some really scary numbers – far worse for the US than 1917-18: 675K. Since we have a fairly robust but apparently unprepared health care system, it looks like in some countries, where the health care is not as good as in the US, this could wipe out 1/3 or perhaps 1/2 of the population.

I understand that you have to protect against the low probability but catastrophic outcomes, so social policy is driven by ensuring that you have taken adequate precautions so that every family does not lose one of their 4 grandparents to this pandemic, let alone the potential impact on children.

A couple of other questions. Maybe these are not questions for a numbers guy – but I am interested in your thoughts.

Why are the numbers so low for Russia? Are they cooking the books? Would appear not……….. as they are going to send us medical supplies.

Also, why low numbers in Mexico? I have been to Mexico quite a few times – not a place that’s exactly germ free?

Thanks

Morgan

1. Morgan – I assume you’ve seen my updated forecast (now a week old). (https://lumina.com/forecast-update-us-deaths-from-covid-19-coronavirus-in-2020/).

> Why are the numbers so low for Russia?
I don’t know too much about Russia. They’ve been doing a fair amount of testing, which can be a key component of a good strategy. It would be pure speculation on my part as to whether they are cooking the books.

>Also, why low numbers in Mexico?
Opposite for Mexico: they haven’t been doing much testing at all. Fewer than 9,500 tests as of 28-Mar (https://coronavirus.gob.mx/noticias/), of which 11% have been positive. (Of course, when you do low amounts of testing, you’re probably testing the cases you think have it.) They’ve been slow to take any significant measures to slow the spread.

This is a super interesting question, because I agree that Mexico City would be the perfect place for COVID-19 cases to skyrocket, given its high density and huge population. I haven’t yet seen news reports of overflowing hospitals, which it seems like we would be hearing about by now. Maybe in a couple weeks it is going to be super bad there. Or, maybe there is something about their lifestyle that makes them less susceptible (rates of smoking, diet, medications, climate, ???). Along with some colleagues, I’ve been marveling at how many of the COVID-19 metrics seem to be radically different between countries. There could be things that influence the spread or the susceptibility that no one has thought of yet.

1. Lonnie Chrisman

The dynamic SICR model discussed in the article automatically does this. People move through the stages Susceptible, Incubating, Contagious, Recovered (or from Contagious to Dead). The group Recovered are the people with acquired immunity, so they don’t return to the S pool.

3. Thank you for providing data, analysis and clear charts to support the discussion of facts and predictions. I really appreciate it.

4. Hi – I have been doing my own Fermi-style estimates for Covid-19 deaths since mid-March and finally decided to search to see if anyone else had similar thoughts, and found your post. Excellent write-up! Turns out my model is the same as your simple model, but with one key difference: my p2 = 1. And of course then my numbers are way off from yours, and from all of the ones I see online. Since you asked for comments on the subjective decision points, maybe I could run my values past you for feedback?

Using your simple model, and using it to model annual flu deaths, if 8% of the population catches the flu each year, and .1% die from it, then average deaths from the flu would be 330,000,000 x .08 x .001 = 26,400. And this is true for the CDC report you mention. To plug this in to your simple model:
p1 = 8% (percent of population that catches the flu)
p2 = 1 (all of whom are considered sick)
p3 = .1% (percent of population that dies)
330,000,000 x .08 x 1 x .001 = 26,400

My Covid-19 model has been p1 = 50%, p2 = 1, and p3 = .5% to .1% (I’ve been decreasing p3 as the anecdotal evidence mounts that many more people have had it than first suspected).
330,000,000 x .5 x 1 x .001 = 165,000 (same as the flu)
330,000,000 x .5 x 1 x .005 = 825,000 (if .5% of all infected)
330,000,000 x .5 x 1 x .020 = 3,300,000 (if 2% of all infected)

These numbers align almost exactly with the numbers in your model (the one that surprised you so you spent a lot of time debugging) with your results showing a mean of 881,000, and a 95th percentile of 3.7 million deaths.

Here’s my questions:
1. Why did you compare the Spanish Flu deaths directly to the 2018 deaths without adjusting for percentage of current population? Would that affect your model?
2. What would your values for p2 and p3 be for the flu example? I’m not sure how to use p2, and I’d like to try to adjust my simple model to include it. (As we get more and more testing done, perhaps there will even be data.)
3. Why did you spend so much time debugging your model when the mean number of deaths were 881k? Is it possible that you yourself have a cognitive bias against that result? (I ask that honestly, not facetiously, I’m quite familiar with the concept, having been part of a never-forgotten demonstration of it in a college class.)
4. It is now the end of April…what are your models showing now?

Thanks again!

1. Jackie – thanks for the questions.
> Why did you compare the Spanish Flu deaths directly to the 2018 deaths without adjusting for percentage of current population? Would that affect your model?

Very good question. I absolutely thought about doing that. I also felt there should be another adjustment in the opposite direction to account for scientific and medical advances. Both of these adjustments apply to all the past epidemics. At this stage of the assessment process, I’m trying to get a rough base rate, so I’m not trying for precision. I just felt an easy and reasonable compromise was to assume they roughly cancelled each other out. As you go through the process yourself, you might decide it is better to separate the two factors. The population in 1918 was 1/3 of what it is today, so that adjustment is easy. But you probably do want to adjust your base rate to semi-account for the differences in medical knowledge, etc. It doesn’t have to be equal to the population growth.

> What would your values for p2 and p3 be for the flu example?
You used p3=0.1%, which I think is the correct number.
When I created that, I was hypothesizing that there would be asymptomatic people who might not show up in fatality ratios. It is interesting that I did that, given how this has since been discovered and turns out to be a big deal. I don’t know what the numbers would be for the flu. With these models, you want to break the problem down into quantities that are convenient to estimate. In the case of the flu, the CDC does estimate the number who are unreported and the number who seek medical care, and then the number who get “reported” (because they sought medical care) who then die, etc. So you might alter it slightly to use p2 = percentage who touch the medical system and get reported, and p3 = percentage of the reported who die. You could also use the same formulation I did, but I have to admit I don’t know what the actual numbers are.

>Why did you spend so much time debugging your model when the mean number of deaths were 881k? Is it possible that you yourself have a cognitive bias against that result?
So that debugging was of the dynamic model, of course. When I get a model running and I see something that I can’t explain, I need to understand why it happens. If it is weird, it is not uncommon for it to be due to an error, rather than a real consequence. In this case, the thing that took me by surprise was the extremity of the distribution (as I mentioned, a kurtosis of 54), which means the mean and median are worlds apart. I don’t think the issue for me was the size of the death toll in that case; it was the wild swings in the distribution. Eventually, I came to understand why it was happening. What it comes down to is basically an extreme sensitivity to initial conditions, the so-called “butterfly effect”, where really minor changes to initial conditions cause huge swings. A lot of people who build dynamic epidemiological models (SIR, SCIR/SEIR style) don’t explicitly put uncertainty on their parameters, so the butterfly effect doesn’t slap you in the face; you get out a single reasonable-looking run. When you model the uncertainty explicitly, the butterfly effect manifests as distributions with huge kurtosis. I found that because I could alter parameters by really small amounts to fit nearly any behavior, it didn’t really constrain the possibilities much. But it did give me a huge qualitative appreciation for the potential extremes.

> It is now the end of April…what are your models showing now?
If you haven’t already seen it, I did do an update a bit over a month ago:
https://analytica.com/forecast-update-us-deaths-from-covid-19-coronavirus-in-2020/
Yes, I am due for another update to the forecast. Given time constraints, I don’t expect that to be this week, but please stay tuned. Please take a look at other blog articles (I’ve written quite a few), since I’ve also been focusing a fair amount on how long the current recovery will take.

1. Thank you for the detailed reply. Some follow-up thoughts:

> Spanish flu deaths in 1918 vs now: I also gave some thought to the differences in medical advancements. For many back then it wouldn’t have made a difference, since people were dying within 24 hours, especially in the autumn of 1918. Perhaps a second reason for not adjusting the numbers is that Covid-19 may not be as lethal as the Spanish flu.

> Thank you for explaining that a kurtosis of 54 means that the mean and median are far apart, I missed that in the first pass. And it was interesting to compare the results of your (and my even simpler) simple model vs the much more complex one.

I’m very much looking forward to your next update, and will be checking out the other blog articles as well.

5. Lawrence S. Wick

Lonnie Chrisman: Preliminary disclosure– my only course in logic, probability and statistics was “Intro to…” in 1966 at Northwestern U and my other studies there and at Columbia U, and my entire professional career, were about as far from AI, IT, computer learning, computational biology, virology, nanoswarm and predator networks, etc. as anyone could possibly get. So I have left the Covid-19 forecasting efforts to you, Analytica, et al and observe that you all are doing a very good job, given all the unknowns.
My lay-person’s effort is to determine, compare and contrast current fatalities with total recoveries plus fatalities in the U.S. and globally, for overall death rates for those who have been through Covid-19 to the end, so to speak, and attempt to determine what more, if anything, could be done everywhere, but especially in countries like the U.S. which appear to have significantly higher death rates from Covid-19. Obviously, these numbers are sort of slippery since introduction of the pandemic into different countries occurred at different times, exposure rates to date varied significantly, rates of effective testing differ, reporting incidence differs and some areas already are entering their second Covid-19 wave in a way comparable to the 1918 pandemic [e.g., the terrible 1919 experiences in the Bay Area (John M. Barry, The Great Influenza; Viking, 2004)].
I use your “dynamic model” together with the Bing Covid-19 Tracker’s current data as a starting point – in particular, the “recovered” and “fatal” numbers. As of May 18, 2020 reported Bing data, total U.S. “recoveries” are 281K and “fatalities” are 90K, for a total of 371K who, it must be said, have been through the whole disease one way or another, and for whom the ultimate Covid-19 death rate was 24.3% [90K / (90K + 281K) = 24.3%!]. Again using current Bing Tracker data, and looking at the rest of the world’s data excluding the U.S., total remaining global “recoveries” are 1,451K (1,732K total – 281K U.S.) while the fatalities for the rest of the world are 225K (315K total – 90K U.S.), for a total of 1,676K who also have been through the disease, whether to recovery or death, and for whom the ultimate Covid-19 death rate was considerably lower, at 13.4% [225K / (225K + 1,451K) = 13.4%!]
These percentages, frankly, scare me since we are talking about large numbers of real dead people here – and I realize that the assumptions, variables and data are changing almost 24/7 – but we – at least those of us who care about living people – must find more effective ways to lower the death rates. Forecasters like you will come up with your own numbers, which will be more useful, I am sure. But… many of those who have a political or economic “stake” in this disaster – whether elected politicians dependent upon certain “political contributions,” and ideologues of the left or right, and those who have large investment or business stakes in a certain outcome, and those who want people like me to die in order to reduce Social Security payments, welfare, tax burdens, increase tax revenues, etc. – should be expected to scream if they find out the true realities – and then come up with their own probably twisted and fake versions of the numbers.
SO, Lonnie Chrisman and you other experts– please keep fighting for the truth, for objectivity, for clear thinking, so that, hopefully, the virology, immunology and other health experts can use your observations to come up with better treatments and ultimately a cure for Covid 19 !
Sincerely, Lawrence S. Wick (Biloxi, MS)

1. Lonnie Chrisman

Lawrence – It is great that you’ve been really looking at and interpreting the numbers, and trying to make sense of them. I’ve been looking very deeply at these same questions about mortality rates and incidence. It is the recipe for solid, rational decision making.

I like to distinguish between the Case Fatality Rate (CFR) and the Infection Fatality Rate (IFR). CFR is Reported deaths from COVID-19 divided by Reported cases, with appropriate adjustments for the timing. IFR is the total deaths from COVID-19 divided by total cases (both reported and unreported). Everyone agrees that IFR is less than CFR, but it is harder to estimate since we don’t know how many cases and how many deaths aren’t being reported.

Your estimate using fatalities / (recoveries + fatalities) is one way to obtain an upper bound on CFR. It is an upper bound because many recoveries take a long time, so there are people who haven’t yet recovered but eventually will. Over time, this estimate will approach the true CFR. Another method is to divide number of deaths by the total number of cases reported as of 11 days ago. Death occurs on average 11-12 days after being reported, so this time-shift lines them up. This tends to underestimate the CFR slightly, since some people may be counted in the denominator who will eventually die.

What you are really looking for is the IFR, which as I said is lower than the CFR. Estimating the IFR requires estimates for how many unreported cases (and unreported deaths) there are. I have developed some interesting methods for obtaining these estimates, and hope to publish an article on that. I do believe that the IFR is substantially lower than the CFR, but still high enough to be quite concerning. In addition, unlike previously existing diseases, in this case no one has previous immunity, so if not suppressed we could end up with a lot more cases than we would with something like the flu. The social distancing measures have prevented that from happening so far.

I agree that it is unfortunate that the debate has become politically polarized, with at least some of the decision making being driven by interests other than minimizing societal damage (loss of life plus economic damage).