Where is everybody?
— Enrico Fermi
We build and use models to help us interpret complex situations, make predictions, and ultimately make decisions. The omnipresent uncertainty of the real world is a key factor that makes these tasks hard. We at Lumina are big advocates for treating uncertainty explicitly as part of our models by using probability distributions. Sadly, this is not yet as common as it should be.
A recent paper by Anders Sandberg, Eric Drexler and Toby Ord, “Dissolving the Fermi Paradox” (2018), argues that one can naturally resolve the nearly 70-year-old paradox just by including uncertainty in the model. The apparent paradox is simply the result of what Sam L. Savage calls the Flaw of Averages. When you explicitly account for how uncertain you are, the conclusion changes dramatically. In this blog article, we review the Sandberg et al. paper and provide a live Analytica version of their model that you can explore.
Flaw of Averages on Steroids
The paradox that we examine appears when you use point estimates and ignore how uncertain you are about those point estimates. To illustrate how dramatically this can distort your conclusions, Sandberg et al. give the following toy example. Suppose there are nine parameters, which multiplied together give the probability of extraterrestrial intelligence (ETI) arising on any given star. Suppose our state of knowledge is that each parameter could be anywhere between 0 and 0.2, with uniform uncertainty within this interval.
When you use a point estimate of 0.1 for each parameter, you conclude that there is a 10^{-9} probability of any given star harboring ETI. Since there are about 10^{11} stars in the Milky Way, the probability that no star other than our own harbors intelligent life is extremely small: (1-10^{-9})^{10^{11}} \approx 3.7\times 10^{-44}.
When you perform the same calculation using explicit Uniform(0, 0.2) distributions for each parameter, the mean probability that no star harbors ETI comes out to about 0.21, which is more than 5,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (about 5\times 10^{42}) times more likely!
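The toy example above is easy to reproduce with a short Monte Carlo simulation. Here is a minimal sketch in Python (our own illustration, not the paper's code), using the same nine Uniform(0, 0.2) parameters and roughly 10^{11} stars:

```python
import math
import random

random.seed(42)
N_STARS = 1e11
TRIALS = 100_000

# Point-estimate approach: each of the 9 parameters fixed at 0.1.
p_eti = 0.1 ** 9                             # P(ETI) per star = 1e-9
p_empty_point = math.exp(-N_STARS * p_eti)   # (1-p)^N ~= e^{-Np} for tiny p

# Uncertainty-aware approach: draw each parameter from Uniform(0, 0.2),
# then average P(no ETI anywhere in the galaxy) over the Monte Carlo draws.
total = 0.0
for _ in range(TRIALS):
    p = 1.0
    for _ in range(9):
        p *= random.uniform(0.0, 0.2)
    total += math.exp(-N_STARS * p)
p_empty_mc = total / TRIALS

print(p_empty_point)   # ~ 3.7e-44
print(p_empty_mc)      # ~ 0.21: an empty galaxy is ~5e42 times more likely
```

The point estimate makes an empty galaxy look essentially impossible, while the identical model with explicit uncertainty gives it a better-than-1-in-5 chance.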
The Fermi Paradox
One day in 1950, Enrico Fermi, the Nobel prize-winning builder of the first nuclear reactor, was having lunch with a few friends in Los Alamos. They included Edward Teller, the inventor of the hydrogen bomb. They were looking at a New Yorker cartoon of cheerful aliens emerging from a flying saucer, and Fermi famously asked, “Where is everybody?” Given the vast number of stars in the Galaxy and the likely development of extraterrestrial intelligent life, how come no ETs have come to visit, or at least been detected? This question came to be called the “Fermi Paradox”. Ever since, it has bothered those interested in the question of extraterrestrial intelligence and whether we are alone in the Universe.
The Drake Equation
In 1961 Frank Drake, a radio astronomer interested in the search for extraterrestrial intelligence (SETI), tried to formalize the question. He suggested that we can estimate N, the number of detectable, intelligent civilizations in the Milky Way galaxy, from what is now called the “Drake equation”, often referred to as the “second most famous equation in science (after E=mc^2)”:
N= R^* \times f_p \times n_e \times f_l \times f_i \times f_c \times L
where:
R^* is the average rate of formation of stars in our galaxy;
f_p is the fraction of stars with planets;
n_e is the average number of those planets that could potentially support life;
f_l is the fraction of those on which life has actually developed;
f_i is the fraction of those with life that is intelligent;
f_c is the fraction of those that have produced a technology detectable to us; and
L is the average lifetime of such civilizations.
Since Drake first proposed this calculation, quite a few people have tried to refine his calculation of N, the number of detectable extraterrestrial civilizations. Mostly they come up with a large number for N. The contradiction between expected proliferation of detectable ETs and their apparent absence came to be called the “Fermi paradox” after the famous lunch conversation.
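As a quick illustration of how the equation is used with point estimates, here is a sketch in Python. The numbers below are arbitrary placeholders for demonstration, not anyone's published estimates:

```python
# Drake equation with illustrative point estimates (arbitrary placeholder values).
R_star = 2.0     # stars formed per year in the galaxy
f_p    = 0.5     # fraction of stars with planets
n_e    = 1.0     # potentially habitable planets per star with planets
f_l    = 0.5     # fraction of habitable planets that develop life
f_i    = 0.1     # fraction of those that develop intelligence
f_c    = 0.1     # fraction of those that become detectable
L      = 1000.0  # average detectable lifetime (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # 5.0 detectable civilizations
```

Whatever values you pick, a single product of point estimates yields a single N, with no indication of how wildly it could vary.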
Past explanations of the Fermi Paradox
Many explanations have been advanced to resolve this: Maybe advanced civilizations avoid wasteful emission of energy into space in forms, such as electromagnetic radiation, that would be detectable by us. Maybe interstellar travel is simply impossible. Or, if it is technically possible, all ETs have decided it's not worth the effort. Or perhaps ETs do visit us but choose to be discreet, deeming us not ready for the shock of contact. Maybe there is a Great Filter that makes the progression of life to advanced stages exceedingly rare. Or perhaps the development of life from lifeless chemicals (abiogenesis) and/or the evolution of technological intelligence are just so rare that we are in fact the only ones in the Galaxy. Or, even more depressingly, those intelligent civilizations that do emerge all manage to destroy themselves in short order before perfecting interstellar travel, as indeed we Earthlings may plausibly do ourselves.
The recent Sandberg et al. paper proposes an elegant way to resolve the apparent paradox without resorting to any speculative explanations. Recognizing that most of the terms of the Drake equation are highly uncertain, they express each term as a probability distribution. To obtain the distributions, they reviewed the relevant scientific literature to characterize the range of opinions that appear for each parameter. They then use simple Monte Carlo simulation to estimate the probability distribution on N, and hence the probability that N<1, i.e., that there are too few or simply zero ETs to detect. They estimate this probability at about 52% (our reimplementation of their model comes up with 48%). In other words, there is a decent probability that no other observable civilization exists in our Milky Way galaxy, so we should not feel surprised. Thus, we might argue that the original Fermi paradox, as articulated by previous estimates of N, was the result of an application of Sam Savage's “Flaw of Averages”: if you use only “best estimates” and ignore the range of uncertainty in each assumption, you'll end up with a misleading result.
Quantifying the uncertainties
The Sandberg et al. paper reviews the range of estimates from the scientific literature for each factor in the Drake equation. They estimate how much uncertainty there is in our current scientific understanding for each one. Their estimates for nearly every factor vary over many orders of magnitude.
They rely heavily on the LogUniform distribution, which specifies that each order of magnitude between a minimum and maximum is equally likely. In other words, the logarithm of the value is uniformly distributed. This table summarizes their estimated uncertainty for each factor.
| Factor | Version 1 | Version 2 | Description |
|---|---|---|---|
| R^* | LogUniform(1, 100) | (same) | Rate of star formation (stars/year) |
| f_p | LogUniform(0.1, 1) | (same) | Fraction of stars with planets |
| n_e | LogUniform(0.1, 1) | (same) | Number of habitable planetary objects per system with planets (planets/star) |
| f_l | 1-e^{-e^{m}} (“LogNormal version”) | 1-e^{-t V \lambda} (“t V \lambda version”) | Fraction of habitable planets that develop life |
| f_i | LogUniform(0.001, 1) | (same) | Fraction of planets w/ life that develop intelligence |
| f_c | LogUniform(0.01, 1) | (same) | Fraction of intelligent civilizations that are detectable |
| L | LogUniform(100, 1e10) | (same) | Duration of detectability (years) |
Scientific notation such as 1e10, which appears above, is a way of writing 10^{10}. Abiogenesis refers to the first formation of life out of inanimate substances.
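Analytica provides LogUniform built in; if you want to experiment outside of Analytica, here is a minimal Python equivalent (the function name is ours):

```python
import math
import random

def log_uniform(lo, hi, rng=random):
    """Sample a value whose logarithm is uniformly distributed between
    log(lo) and log(hi), so that each order of magnitude between lo and hi
    is equally likely."""
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

random.seed(0)
draws = [log_uniform(1, 100) for _ in range(20_000)]
# Half of the draws should fall in the first decade [1, 10),
# the other half in the second decade [10, 100].
frac_first_decade = sum(d < 10 for d in draws) / len(draws)
print(frac_first_decade)   # ~ 0.5
```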
For the factor f_l, the fraction of habitable planets that develop some form of life, the paper describes two versions, both of which use the form 1-e^{-r}, where r is the number of abiogenesis events per habitable planet. Version 1 estimates r using a LogNormal distribution. Version 2 decomposes r into three other quantities, t, V and \lambda, which appear even more difficult to estimate! The 1-e^{-r} form is the probability that one or more events occur (on a given habitable planet) assuming a Poisson process with rate r. We encoded both versions in our model, since we could not tell from the text of the paper alone which results in the paper used which version of the model. It is worth noting the obvious: the paper uses an extraordinarily wide range for f_l in both versions.
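To make the 1-e^{-r} construction concrete, here is a sketch of a Version-1-style draw in Python. The LogNormal parameters below are illustrative placeholders chosen to show the qualitative behavior, not the paper's fitted values:

```python
import math
import random

random.seed(1)

def f_l_lognormal(mu=0.0, sigma=50.0):
    """Version-1-style draw of f_l: r, the number of abiogenesis events per
    habitable planet, is LogNormal; f_l = 1 - e^{-r} is then the Poisson
    probability of at least one event.
    mu and sigma are illustrative placeholders, not the paper's values."""
    log_r = random.gauss(mu, sigma)
    r = math.exp(min(log_r, 700.0))   # cap to avoid floating-point overflow
    return -math.expm1(-r)            # 1 - e^{-r}, computed stably

draws = [f_l_lognormal() for _ in range(10_000)]
# With such an enormous sigma, nearly all probability mass piles up
# near f_l = 0 or f_l = 1, with little in between.
middle = sum(0.01 < f < 0.99 for f in draws) / len(draws)
print(middle)
```

Even with made-up parameters, the bimodal pile-up at 0 and 1 appears whenever the spread of r covers many orders of magnitude, which is the behavior discussed in the f_l section below.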
This table gives some results from these models.
N = number of detectable civilizations in the Milky Way:

| Model | Median | Mean | Pr(N<1) “we are alone” | Pr(N>100M) “teeming with intelligent civilizations” |
|---|---|---|---|---|
| Reported in paper | 0.32 | 27 million | 52% | – |
| Version 1, i.e., f_l based on LogNormal | 1.8 | 27.8 million | 48% | 1.9% |
| Version 2, i.e., f_l based on t V \lambda | 9.9e-67 | 8.9 million | 84% | 0.6% |
| Using point estimates (Version 1) | 2000 | 2000 | 0% | 0% |
| Using point estimates (Version 2) | 1e-66 | 1e-66 | 100% | 0% |
| Using f_l=16\% | 500 | 8.9 million | 17% | 1.2% |
| Using f_l\sim Beta(1,10) | 170 | 5 million | 23% | 1.4% |
The top row, “Reported in paper”, shows values that appear in the text of Sandberg et al. The rest are from our Analytica implementation of their model. Their reported values seem more consistent with Version 1, but other results that appear in their paper seem to have been generated using Version 2. We believe our implementations of both versions are faithful to those described in the paper, and we even reviewed their Python code in a futile attempt to explain why our results aren't an exact match. We currently await a response to our email to the first author to clarify the situation. While we haven't been able to reproduce their exact results, the discrepancies do not affect their broad qualitative conclusions.
The row labeled “Using point estimates (version 1)” uses the median of their distributions as a point estimate for each of the seven factors of the Drake Equation. The row “Using point estimates (version 2)” uses the medians of t, V and \lambda rather than the median of f_l —i.e., there are 9 parameters. In the first row showing Version 1 of their model with uncertainty, the mean for N is 4 orders of magnitude larger than the corresponding point estimate, whereas with Version 2 it is 73 orders of magnitude larger.
The P(N<1) column shows the probability that there is no other detectable civilization in the Milky Way. The fact that it is so high means that we should not be surprised by Fermi's observation that we haven't detected any extraterrestrial civilization. In each case with uncertainty, there is a substantial probability (from 17% to 84%) that no other detectable civilization exists. The last column shows that the possibility that our galaxy is absolutely teeming with life, with over 100 million civilizations, or 1 out of every thousand stars hosting a detectable intelligent civilization, is also consistent with the uncertainty in the Drake parameters.
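For readers who want to see the mechanics outside of Analytica, here is a minimal Python sketch of the Monte Carlo calculation. It uses the LogUniform distributions from the table above for six factors, and for f_l substitutes the Beta(1,10) alternative from the last table row rather than either of the paper's versions:

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    # log of the value is uniform between log(lo) and log(hi)
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def draw_N():
    """One Monte Carlo draw of N from the Drake equation."""
    R_star = log_uniform(1, 100)        # star formation rate (stars/year)
    f_p    = log_uniform(0.1, 1)        # fraction of stars with planets
    n_e    = log_uniform(0.1, 1)        # habitable planets per system
    f_l    = random.betavariate(1, 10)  # stand-in for f_l, not the paper's version
    f_i    = log_uniform(0.001, 1)      # fraction developing intelligence
    f_c    = log_uniform(0.01, 1)       # fraction that are detectable
    L      = log_uniform(100, 1e10)     # detectable lifetime (years)
    return R_star * f_p * n_e * f_l * f_i * f_c * L

samples = [draw_N() for _ in range(50_000)]
p_alone = sum(n < 1 for n in samples) / len(samples)
print(p_alone)   # roughly 0.2, in the ballpark of the Beta(1,10) row above
```

Swapping in a different f_l model only changes the `f_l` line, which is essentially how our Analytica model lets you switch between versions.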
Fraction of habitable planets that develop life, f_l
Microscopic fossils suggest that life started on Earth around 3.5 to 3.8 billion years ago, quite soon after the planet formed. This suggests that abiogenesis is easy and nearly inevitable on a habitable planet. On the other hand, every known living creature on Earth uses essentially the same DNA-based genetic code, which suggests abiogenesis occurred only once in the planet's history. So perhaps it was an astoundingly rare event that just happened to occur here. The fact that it did occur here doesn't give us information about f_l, other than that f_l is not exactly zero, because of anthropic bias: the observation that we exist would be the same whether life on Earth was an incredibly rare accident or whether it was inevitable.
The f_l parameter is arguably the one about which we have the least information. The paper reflects this in the immense range of uncertainty for f_l in both versions of their model. A PDF plot of their f_l is shown here.
The plot is visually similar for both Version 1 and Version 2, with spikes at f_l\approx 0 and f_l\approx 1, and little probability mass between these extremes. In Version 1 the spikes are roughly equal, whereas in Version 2 the spike at f_l\approx 1 has about 16% of the total probability mass and the spike at f_l\approx 0 has about 84%. The interpretation of this distribution is that with 16% probability, every habitable planet develops life, and with 84% probability, essentially no planet ever does. (Earth did, of course, but this isn't inconsistent with f_l\approx 0, since those values are positive, just extremely small.) Thus, the distribution nearly degenerates into a Bernoulli (point) probability, interpreted appropriately. A (Bernoulli) point probability f_l=0.16 would mean that 16% of habitable planets develop life, which is a slightly different interpretation. To see this difference, we included f_l=0.16 in the results as a point of comparison (see the penultimate row of the table above).
The core problem here is that the range they used for r, the number of abiogenesis events per habitable planet, just seems implausibly large in both versions, with the 25–75 interquartile range spanning from 2e-15 to 4e+14. We see this as a flaw in their model. The nice thing about having a live model to play with is that it is possible to repeat the results using saner alternatives.
Number of detectable civilizations
Because the model includes information about how uncertain each factor is, we can plot the probability distribution for N, the number of detectable civilizations in the Milky Way. Here is the distribution from the Sandberg et al. paper.
These two are from the Analytica model, for the two versions of f_l.
The similarity between the first and third densities (a combination of a roughly LogNormal body centered around Log(N)=2 and a LogUniform tail extending down to 10^{-160}) suggests Sandberg et al. used Version 2 of f_l for this graph. However, as previously mentioned, the numbers given in the text are more consistent with Version 1.
These three graphs are examples of probability density plots, one way of visualizing the uncertainty of a continuous variable (here, N = the number of detectable civilizations). The density at a particular x-axis value is obtained by estimating (by Monte Carlo simulation) the probability that the true value falls within a small interval of width \epsilon around x, and then dividing by \epsilon to get the density.
The probability density of log_{10} N is not the same as the density of N, since the denominator is quite different. Although the paper labels it as the probability density of N, they are clearly showing the density of log_{10} N, which is a sensible scale to use given the focus on order-of-magnitude uncertainty. Another fact about probability density plots is that a Y-axis scale of density is meaningful, albeit not very intuitive, but a Y-axis scale of frequency is not: frequency is an artifact of the specific binning algorithm used to estimate the densities. Cumulative probability (CDF) graphs avoid these complications: it doesn't matter to the Y-scale whether you plot N or log_{10} N, and the Y-scale is easily interpreted.
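The density estimate described above can be sketched in a few lines of Python (a naive sliding-window estimator, just to make the definition concrete):

```python
import random

def density_at(samples, x, eps=0.1):
    """Estimate the probability density at x: the fraction of Monte Carlo
    samples within a window of width eps centered on x, divided by eps."""
    inside = sum(1 for s in samples if abs(s - x) <= eps / 2)
    return inside / (len(samples) * eps)

random.seed(0)
u = [random.uniform(0, 1) for _ in range(100_000)]
d = density_at(u, 0.5)
print(d)   # ~ 1.0, the true Uniform(0,1) density
```

Applying this to samples of N versus samples of log10(N) at corresponding points gives different values, which is exactly the denominator effect described above.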
These CDFs show a dramatic difference between Version 1 (using the LogNormal method) and Version 2 (using the t V \lambda method), and between those versions and our alternatives that remove the massive lower tails. An interesting aspect of these graphs is their qualitative shape. The bell-shaped body in the PDF is familiar, but the extreme left tail stands out as unusual. The previous section points out that both versions of f_l are so extreme that the effective distribution is degenerate. We think this is a flaw. Hence, it is interesting to see how the graph changes when we set f_l to a less degenerate distribution.
The LogNormal method is Version 1 of f_l, the t V \lambda method is Version 2, and the remaining three methods are less extreme: the 100% and 16% methods use those values as point probabilities for f_l, and the Beta method uses a Beta(1,10) distribution for f_l. Although the broad conclusions of the paper remain robust with less extreme distributions for f_l, the strange and extreme left tails of their models are not a robust phenomenon.
Bayesian updating on Fermi’s Observation
Fermi's question “Where is everybody?” refers to the observation that we haven't detected any extraterrestrial civilizations. Sandberg et al. apply Bayes' rule to update the estimates with this observation. To apply Bayes' rule, you need the likelihood P(¬D | N) for each possible value of N, where ¬D is the observation that no ET civilizations have been detected.
The paper explores four distinct models for this updating:
- Random sampling update assumes that we have sampled K stars, none of which harbors a detectable civilization. K is a parameter of this model.
- Spatial Poisson update conditions on the conclusion that there is no detectable civilization within a distance d of Earth. d is a parameter of this model.
- Settlement update attempts to incorporate the possibility that interstellar propagation would be likely among advanced civilizations. It introduces several new parameters, including settlement timescales and a geometric factor. It conditions on the observation that no nearby spacetime volume around Earth has been permanently settled.
- No K3 observed update conditions on the observation that no Kardashev type 3 civilizations exist, i.e., civilizations that harness energy at the galactic scale. It presumes that if such a civilization existed, either in the Milky Way or even in another visible galaxy, we would have noticed it. Among other parameters, it includes one for the probability that a K3 civilization is theoretically possible.
We implemented all of these update methods in the Analytica model. Our match to the paper's quantitative results is only approximate, and we are not sure why the results are not precisely reproducible. Figuring out which parameters they used for each case was quite challenging, since the paper and its Supplement 2 leave out many details. With the exception of the Settlement update method, we were able to get fairly close, at least in qualitative terms. We explored the space of possible parameter values for the Settlement update, but were unable to match the qualitative shape of the posterior reported in the paper. The likelihood equation for the K3 update appears to be in error in the paper, since it doesn't depend at all on N, but a more plausible version that does depend on N appears in Supplement 2.
In our Analytica model, you can select which update method(s) you want to view, and graph them sidebyside, along with (or without) the prior. For example,
One of the more interesting posterior results is P(N<1).
The paper (in Table 2) reports these numbers for P(N<1), in the same order as the rows of the above table: 52%, 53%, 57%, 66%, 99.6%. We think they may have based their first four posteriors on Version 1. We are not sure about the K3 posterior, which is substantially different from our calculation.
In this table, we see that the models with a non-extreme, non-degenerate version of f_l are not substantially changed by the posterior update on the negative Fermi observation. These are the models that use a point estimate for f_l of 100% or 16%, as well as the one that uses f_l \sim Beta(1,10).
How to compute the posteriors
We explored two ways to implement these posterior calculations in Analytica. The results were consistent, so we stuck with the more flexible method, which is interesting in its own right and also very simple to code in Analytica.
The calculation uses sample weighting, in which each Monte Carlo sample is weighted by P(¬D | N). The value of N is computed at each Monte Carlo sample, and from that, P(¬D | N) is also computed for each selected posterior method. The variable that computes P(¬D | N) has the identifier P_obs_given_N. To compute the posteriors, all we had to do was set the system variable SampleWeighting to P_obs_given_N.
We'll mention a second method for computing the posterior, which we found less elegant and more complex, but which computes the same results. This method extracts the histogram PDF(LogTen_N) and computes P(¬D | N) from the value of N at each point in the PDF. The product of the PDF column for LogTen_N with P(¬D | N) is the unnormalized PDF for P(N | ¬D). We would expect the second method to perform better than the first when the likelihood P(¬D | N) is extremely leptokurtic (sharply peaked); in this model, this is not the case.
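Outside of Analytica, the sample-weighting method boils down to importance weighting. Here is a Python sketch using a random-sampling-style update as the likelihood; the prior on N and the survey size K are illustrative stand-ins of our own, not values from the paper:

```python
import random

random.seed(0)

SAMPLES = 50_000
N_STARS = 1e11   # stars in the Milky Way
K       = 1e9    # hypothetical number of stars surveyed with no detection

def likelihood_no_detection(n):
    """P(not-D | N=n) under a random-sampling update: each of K surveyed
    stars independently fails to host one of the n detectable civilizations."""
    p_star = min(n / N_STARS, 1.0)   # chance a random star hosts a civilization
    return (1.0 - p_star) ** K

# Stand-in prior: N is LogUniform over 10^-5 .. 10^5 (illustrative only).
prior_N = [10 ** random.uniform(-5, 5) for _ in range(SAMPLES)]
weights = [likelihood_no_detection(n) for n in prior_N]

w_total = sum(weights)
p_alone_prior = sum(n < 1 for n in prior_N) / SAMPLES
p_alone_post  = sum(w for n, w in zip(prior_N, weights) if n < 1) / w_total
print(p_alone_prior, p_alone_post)
# Conditioning on non-detection shifts probability toward "we are alone".
```

Setting SampleWeighting in Analytica does exactly this renormalization for every downstream result, which is why the posterior required essentially no extra code.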
Updating on a positive observation
The Fermi observation is the negative observation that we have never detected another extraterrestrial civilization. We thought it would be interesting here to explore what happens when you condition on a positive observation.
Extraterrestrial microbes
In March 2011, the first author (Lonnie) attended an astronomy talk at Foothill College by NASA planetary scientist Dr. Chris McKay. Six years earlier, with the Huygens probe descending into the atmosphere of Saturn’s moon Titan, he and a graduate student, Heather Smith, undertook a thought experiment. They asked the question: If there is any life on Titan, what chemical signatures might we see? Especially, signatures that could not result from any known inanimate processes? What would organisms eat? What would their waste products be?
At −179°C (−290°F), life on Titan would be very different: not based on water, but rather on liquid methane. Without knowing what such life forms would look like, they could still make some inferences about what chemical bonds living organisms would likely utilize for metabolism. For example, the molecule with the most harvestable energy on Titan is acetylene ( C_2H_2 + 3H_2 \rightarrow 2CH_4, \Delta G = -80 kcal/mole ). They published a set of proposed signatures for life in
C.P. McKay and H.D. Smith (2005), “Possibilities for methanogenic life in liquid methane on the surface of Titan”, ICARUS 178(1):274–276, doi.org/10.1016/j.icarus.2005.05.018
and then moved on to other work. A few years later, analysis of data from the Huygens probe and the Cassini mission to Saturn found some unexplained chemical signatures:
Darrell F. Strobel (2010), “Molecular hydrogen in Titan’s atmosphere: Implications of the measured tropospheric and thermospheric mole fractions”, ICARUS 208(2):878–886, doi.org/10.1016/j.icarus.2010.03.003
These matched those predicted as possible signatures for life 5 years earlier in the McKay and Smith paper. One signature in particular, a net downward flux of hydrogen, is particularly intriguing, since it implies that something is absorbing or converting hydrogen near the surface, for which no inorganic processes are known. The data on this remains ambiguous. For example, during its descent the Huygens probe did not detect a depletion of hydrogen near the surface, which is what would be expected if organisms on the surface are consuming hydrogen.
The interesting question in the present context is how we should update our uncertainty based on a (hypothetical future) discovery of microbial life on an extraterrestrial body such as Titan. Such a discovery would influence our belief about n_e, the number of planetary objects per star that are potentially habitable, as well as f_l, the fraction of habitable planetary objects where life actually starts.
Our estimate for n_e would need to increase. In our own solar system, such a discovery would double the number of bodies we know to have life, and it would make the icy moons Europa (of Jupiter) and Enceladus (of Saturn) even more likely candidates. So n_e should plausibly be increased by something like a factor of 3:
P( n_e \mid D ) = LogUniform( 0.3, 3 )
To update f_l, we'll use P( D \mid f_l, habitable ) = f_l, where D is the observation that this one additional planetary object is habitable and has developed life.
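To see how this likelihood acts on a prior, here is a sketch using the Beta(1,10) alternative prior for f_l. By conjugacy, weighting a Beta(1,10) prior by the likelihood f_l yields a Beta(2,10) posterior, whose mean is 2/12 ≈ 0.167, and the Monte Carlo sample-weighting calculation reproduces that:

```python
import random

random.seed(0)

# Prior: f_l ~ Beta(1, 10), one of the alternative priors discussed above.
prior = [random.betavariate(1, 10) for _ in range(100_000)]

# Likelihood of observing life on one additional habitable body: P(D | f_l) = f_l.
# Sample weighting again: weight each prior sample by its own likelihood.
weights = prior
prior_mean = sum(prior) / len(prior)                            # ~ 1/11 ~ 0.091
post_mean  = sum(f * w for f, w in zip(prior, weights)) / sum(weights)
print(post_mean)   # ~ 2/12 ~ 0.167, the Beta(2,10) posterior mean
```

A single positive observation nearly doubles the expected f_l under this prior, which is why the discovery scenario moves the P(N<1) numbers so much in the table below.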
With these updates, the probability that there is no other intelligent, contactable civilization would drop from 48.5% to 6.6%, and the probability that the galaxy is teeming with intelligent life would increase from under 2% to over 6%, using the Version 1 prior. Here is the table for different versions of the prior (where only f_l varies among the five priors).
| Prior for f_l | P(N<1) “we are alone”: Prior | Posterior | P(N>100M) “galaxy teeming with life”: Prior | Posterior |
|---|---|---|---|---|
| Version 1 | 48.5% | 6.6% | 1.9% | 6.2% |
| Version 2 | 84% | 6.6% | 0.6% | 6.2% |
| f_l=100\% | 10% | 6.5% | 3.7% | 6.3% |
| f_l=16\% | 17% | 12.7% | 1.2% | 2.5% |
| f_l \sim Beta(1,10) | 23% | 13.6% | 1.4% | 2.4% |
One interesting thing when comparing across these different priors is how the extreme priors (Version 1 and Version 2) adjust to become nearly identical to the result obtained when setting f_l=100\%. The f_l=100\% prior models the most extreme assumption: that life always starts on every habitable planet. This reinforces our earlier criticism that the paper's two versions of f_l are flawed. Because they are so extreme, roughly equivalent to saying that life either always starts or essentially never starts on a habitable planetary body, the evidence that it happened on Titan leaves them with only the option that abiogenesis always happens.
Explore the model yourself
Our Analytica model is running here, where you can explore it (or click here to run it in its own browser tab).
Here are some things to try while exploring the model.
- Click on a term in the Drake Equation for a description.
- Select a particular “model” for f_l, the fraction of planets or planetary objects where life begins, or keep it at All to run all of them.
- View each of the results in the UI above.
- Calculate N and view the statistics view to see the Mean and Median. Select Mid to see what the result would be without including uncertainty.
- View the PDF for LogTen(N).
- Select an observation method (or multiple ones) and see how the results change in the posterior compared to the prior.
- Click on Model Internals to explore the full implementation.
Summary
It is easy to be misled without realizing it when you estimate a single number (a point estimate) for an unknown quantity. Fermi's question of why we have never detected or encountered other extraterrestrial civilizations has spawned decades of conjecture about the underlying reasons, yet Sandberg, Drexler and Ord show that there may be no paradox after all. We've reviewed and reimplemented the model they proposed. The possibility that there are no other detectable intelligent civilizations in the Milky Way is consistent with our level of uncertainty. The apparent paradox was simply the result of the “Flaw of Averages”.
We hope you are able to learn something by playing with the model. Enjoy!