The omnipresence of uncertainty is part of what makes prediction and decision making so hard. We at Lumina advocate treating uncertainty explicitly in our models using probability distributions. Sadly, this is not yet as common as it should be. A recent paper, titled “Dissolving the Fermi Paradox” (2018), is a powerful illustration of how including uncertainty can transform conclusions on the fascinating question of whether our Earth is the only place in the Universe harboring intelligent life. The paper argues that the apparent paradox is simply the result of what Sam L. Savage calls the Flaw of Averages, i.e. ignoring uncertainty. In this article, we review the paper by Anders Sandberg, Eric Drexler and Toby Ord (whom we shall call SDO) and provide a live Analytica version of their model that you can explore.
The Fermi Paradox
Enrico Fermi. From Wikimedia commons.
One day in 1950, Enrico Fermi, the Nobel prize-winning builder of the first nuclear reactor, was having lunch with a few friends in Los Alamos. They included Edward Teller, the inventor of the hydrogen bomb. They were looking at a New Yorker cartoon of cheerful aliens emerging from a flying saucer, and Fermi famously asked, “Where is everybody?” Given the vast number of stars in the Galaxy and the likely development of extraterrestrial intelligent life, how come no ETs have come to visit, or at least been detected? This question came to be called the “Fermi Paradox”. Ever since, it has bothered those interested in extraterrestrial intelligence and the question of whether we are alone in the Universe.
The Flaw of Averages on Steroids
Dr. Sam Savage who coined the term “Flaw of Averages”
To illustrate how dramatically ignoring uncertainty can distort your conclusions, SDO give a toy example. Suppose there are nine parameters, which multiplied together give the probability of extraterrestrial intelligence (ETI) arising on any given star. Suppose that, based on what we know, each parameter could be anywhere between 0 and 0.2, with uniform uncertainty over this interval.
When you use a point estimate of 0.1 for each parameter, you conclude that there is a 10^{-9} probability of any given star harboring ETI. There are about 10^{11} stars in the Milky Way, so the probability that no star other than our own has a planet harboring intelligent life is extremely small: (1 - 10^{-9})^{10^{11}} ≈ 3.7 × 10^{-44}.
When you perform the same calculation using explicit Uniform(0, 0.2) distributions for each parameter, the mean probability that no other star harbors ETI comes out to 0.21: over 5,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times more likely!
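Here is a minimal Monte Carlo sketch of this toy example in Python with NumPy (our illustration, not SDO's code; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars = 1e11          # roughly the number of stars in the Milky Way
n_samples = 1_000_000   # Monte Carlo sample size

# Point estimate: all nine parameters fixed at 0.1.
p_point = 0.1 ** 9
print((1 - p_point) ** n_stars)    # ~3.7e-44: ETI elsewhere seems near-certain

# Explicit uncertainty: each parameter ~ Uniform(0, 0.2).
params = rng.uniform(0.0, 0.2, size=(n_samples, 9))
p_eti = params.prod(axis=1)        # per-star probability of ETI, per sample
# Mean probability that NO star harbors ETI; log1p keeps (1-p)^n_stars
# numerically stable when p is tiny.
print(np.exp(n_stars * np.log1p(-p_eti)).mean())   # ~0.21
```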
The Drake Equation
The Drake equation estimates N, the number of detectable extraterrestrial civilizations in our galaxy, as a product of seven factors:
N = R^* × f_p × n_e × f_l × f_i × f_c × L
where:
R^* is the average rate of formation of stars in our galaxy,
f_p is the fraction of stars with planets,
n_e is the average number of those planets that could potentially support life,
f_l is the fraction of those on which life has actually developed,
f_i is the fraction of those with life that develops intelligence,
f_c is the fraction of those that have produced a technology detectable by us, and
L is the average lifetime of such civilizations.
Since Drake first proposed this calculation in 1961, there have been many attempts to refine the estimate of N, the number of detectable extraterrestrial civilizations. Most came up with a large number. The contradiction between the expected proliferation of detectable ETs and their apparent absence came to be called the “Fermi paradox”, after the famous lunch conversation.
Past explanations of the Fermi Paradox
Image attribution: Wikimedia commons
Many have tried to resolve the apparent paradox: Maybe advanced civilizations avoid wasteful emission of electromagnetic radiation into space that would be detectable by us. Maybe interstellar travel is simply impossible. Or, if it is technically possible, all ETs have decided it’s not worth the effort. Or perhaps ETs do visit us but choose to be discreet, deeming us not ready for the shock of contact. Maybe there is a Great Filter that makes the progression of life to advanced stages exceedingly rare. Or perhaps the development of life from lifeless chemicals (abiogenesis) and/or the evolution of technological intelligence are just so unlikely that we are in fact the only ones in the Galaxy. Or, perhaps even more depressingly, those intelligent civilizations that do emerge all manage to destroy themselves in short order, before perfecting interstellar communication, as indeed we Earthlings may plausibly do ourselves.
Quantifying Uncertainty in the Drake Equation
SDO resolve the apparent paradox elegantly, without resorting to any speculative explanations. Recognizing that most of the terms of the Drake equation are highly uncertain, they express each term as a probability distribution based on a review of the relevant scientific literature. They then use simple Monte Carlo simulation to estimate the probability distribution on N, and hence the probability that N < 1, i.e. that there are too few (or simply zero) ETIs to detect. They estimate this probability at about 52% (our reimplementation of their model comes up with 48%). In other words, we shouldn’t be surprised at our failure to observe any ETI, because there is a decent probability that there aren’t any. Thus, we may view the original Fermi paradox as the result of Sam Savage’s “Flaw of Averages”: if you use only “best estimates” and ignore the range of uncertainty in each assumption, you’ll end up with a misleading result.
For each factor in the Drake equation, SDO estimated a probability distribution to represent the uncertainty in the range of estimates from their review of the scientific literature. For most factors, the uncertainty spans many orders of magnitude. For all except one, they use the LogUniform distribution, which assumes that each order of magnitude between a minimum and a maximum is equally likely; in other words, the logarithm of the value is uniformly distributed. This table summarizes their estimated uncertainty for each factor.
| Factor | Distribution | Description |
|--------|--------------|-------------|
| R^* | LogUniform(1, 100) | Rate of star formation (stars/year) |
| f_p | LogUniform(0.1, 1) | Fraction of stars with planets |
| n_e | LogUniform(0.1, 1) | Number of habitable planetary objects per system with planets (planets/star) |
| f_l | Version 1 (LogNormal): 1 − e^{−e^m}, where m ~ Normal(0, 50). Version 2: 1 − e^{−tVλ}, where t ~ LogUniform(1e7, 1e10), V ~ LogUniform(1e2, 1e15), λ ~ LogUniform(1e-188, 1e15) | Fraction of habitable planets that develop life. Abiogenesis refers to the formation of life out of inanimate substances. Here t is the time available for abiogenesis (years), V is the volume of substrate for abiogenesis (m^3), and λ is the rate of abiogenesis (events per m^3 per year) |
| f_i | LogUniform(0.001, 1) | Fraction of planets with life that develop intelligence |
| f_c | LogUniform(0.01, 1) | Fraction of intelligent civilizations that are detectable |
| L | LogUniform(100, 1e10) | Duration of detectability (years) |

(The scientific notation 1e15 is a way of writing 10^{15}, and so on.)
SDO describe two versions for f_l, the fraction of habitable planets that develop some form of life. Both use the form 1 − e^{−r} for the probability that one or more abiogenesis events occur, assuming a Poisson process with rate r. Version 1 estimates a LogNormal distribution directly for r. Version 2 decomposes r into the product t·V·λ and assigns a LogUniform distribution to each factor. (It is not clear to us that these three quantities are any easier to estimate!) We couldn’t tell from the text of the paper alone which results used which version, so we included both versions in our model. At the risk of stating the obvious: SDO use enormous ranges for f_l in either version.
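To make the prior model concrete, here is a minimal Monte Carlo sketch in Python of the distributions in the table above, including both versions of f_l (a sketch, not SDO's code or our Analytica model; the seed, sample size, and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

def loguniform(lo, hi, size):
    """Sample so each order of magnitude between lo and hi is equally likely."""
    return 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = loguniform(1, 100, n)       # star formation rate (stars/year)
f_p    = loguniform(0.1, 1, n)       # fraction of stars with planets
n_e    = loguniform(0.1, 1, n)       # habitable planets per system with planets
f_i    = loguniform(0.001, 1, n)     # fraction of life-bearing planets w/ intelligence
f_c    = loguniform(0.01, 1, n)      # fraction of intelligent civs that are detectable
L      = loguniform(100, 1e10, n)    # duration of detectability (years)

# f_l Version 1: abiogenesis rate r = e^m with m ~ Normal(0, 50)
# (we read the table's Normal(0, 50) as mean 0, standard deviation 50).
m = rng.normal(0.0, 50.0, n)
f_l_v1 = -np.expm1(-np.exp(m))       # 1 - exp(-r), numerically stable for tiny r

# f_l Version 2: r = t * V * lambda, each factor LogUniform.
t   = loguniform(1e7, 1e10, n)       # time available for abiogenesis (years)
V   = loguniform(1e2, 1e15, n)       # volume of substrate (m^3)
lam = loguniform(1e-188, 1e15, n)    # abiogenesis rate (events per m^3 per year)
f_l_v2 = -np.expm1(-t * V * lam)

N = R_star * f_p * n_e * f_l_v1 * f_i * f_c * L   # Drake product (Version 1)
print("P(N < 1)    =", (N < 1).mean())
print("P(N > 100M) =", (N > 1e8).mean())
```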
This table gives some results from these models.
[Table: rows are the model variants discussed below; columns are N = # detectable civilizations in the Milky Way, Pr(N<1) (“we are alone”), and Pr(N>100M) (“teeming with intelligent civilizations”).]
The top row, “Reported in SDO”, shows the numbers from their text. The rest are from our Analytica implementation of their model. Their reported values seem more consistent with Version 1, but other results in their paper seem more consistent with Version 2. We believe our implementation of both versions reflects those described in the paper. We even examined their Python code in a futile attempt to explain why our results aren’t an exact match, and we have emailed the first author in the hope he can clarify the situation. While we can’t reproduce their exact results, the discrepancies do not affect their broad qualitative conclusions.
The rows “Version 1 with uncertainty” and “Version 2 with uncertainty” use SDO’s full distributions. The rows “Version 1 with point estimates” and “Version 2 with point estimates” use the median of their distributions as a point estimate for each of the seven factors of the Drake equation (or nine parameters for Version 2). In Version 1 with uncertainty, the mean of N is four orders of magnitude larger than the corresponding point estimate. In Version 2 with uncertainty, it is 73 orders of magnitude larger.
The Pr(N<1) column shows the probability that there is no other detectable civilization in the Milky Way. The fact that it is so high means that we should not be surprised by Fermi’s observation that we haven’t detected any extraterrestrial civilization. In each case with uncertainty, there is a substantial probability (from 17% to 84%) that no other detectable civilization exists. We added a last column with the probability that our galaxy is absolutely teeming with life, with over 100 million civilizations, i.e. about 1 out of every thousand stars hosting a detectable intelligent civilization. The models with uncertainty assign this case a probability between 0.6% and 1.9%.
Fraction of habitable planets that develop life, f_l
Original DALL-E 2 artwork. Prompt=Abiogenesis
The largest source of uncertainty is the factor f_l, the fraction of habitable planets with life. Microscopic fossils suggest that life started on Earth around 3.5 to 3.8 billion years ago, quite soon after the planet formed. This suggests that abiogenesis is easy and nearly inevitable on a habitable planet. On the other hand, every known living creature on Earth uses essentially the same DNA-based genetic code, which suggests abiogenesis occurred only once in the planet’s history. So perhaps it was an astoundingly rare event that just happened to occur here. Due to anthropic bias, the fact that life did occur here gives us little information about f_l beyond the fact that f_l is not exactly zero: the observation that we exist would be the same whether life on Earth was an incredibly rare accident or inevitable.
The f_l parameter is arguably the one about which we have the least information. The paper reflects this in the immense range of uncertainty for f_l in both versions of their model. A PDF plot of their f_l is shown here.
The PDF of f_l
The plot is visually similar for both Version 1 and Version 2, with spikes at f_l ≈ 0 and f_l ≈ 1 and little probability mass between these extremes. In Version 1 the spikes are roughly equal, whereas in Version 2 the spike at f_l ≈ 1 holds about 16% of the total probability mass and the spike at f_l ≈ 0 about 84%. The interpretation of this distribution is that with 16% probability every habitable planet develops life, and with 84% probability essentially no planet ever does. (Earth did, of course, but this isn’t inconsistent with f_l ≈ 0, since these values are positive, just extremely small.) Thus, the distribution nearly degenerates into a Bernoulli (point) probability, suitably interpreted. A (Bernoulli) point probability f_l = 0.16 would mean that 16% of habitable planets develop life, which is a slightly different interpretation. To see the difference, we included f_l = 0.16 in the results as a point of comparison (see the penultimate row of the table).
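You can check this near-degeneracy directly from Monte Carlo samples. Continuing the Python sketch above (the 1e-10 cutoffs defining the “spikes” are our arbitrary choice):

```python
# Fraction of f_l samples sitting in the spikes near 0 and near 1 (Version 2);
# compare with the ~84% and ~16% spike masses discussed above.
print("mass near 0:", (f_l_v2 < 1e-10).mean())
print("mass near 1:", (f_l_v2 > 1 - 1e-10).mean())
```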
The core problem here is that the range they use for the expected number of abiogenesis events per habitable planet (the rate r behind f_l) seems implausibly large in both versions, with the 25th to 75th percentile ranging from 2e-15 to 4e+14. We see this as a flaw in their model. The nice thing about having a live model to play with is that you can repeat the results using saner alternatives.
Number of detectable civilizations
Because the model includes information about how uncertain each factor is, we can plot the probability distribution for N, the number of detectable civilizations in the Milky Way. Here is the distribution from the Sandberg et al. paper.
The probability density for Log(N) from Sandberg et al.
These two are from our Analytica model, one for each version of f_l.
p(logten(N)) density plot using Version 1 of the model.
p(logten(N)) density plot using Version 2 of the model.
The first and third densities look similar: each combines a roughly LogNormal body centered around log(N) = 2 with a LogUniform tail extending down to 10^{-160}. This suggests Sandberg et al. used Version 2 of f_l for this graph. However, as previously mentioned, the numbers given in their text are more consistent with Version 1.
These three graphs are examples of probability density plots, one way of visualizing the uncertainty in a continuous variable (here, N = # of detectable civilizations). The density at a particular x-axis value is obtained by estimating (by Monte Carlo simulation) the probability that the true value falls within a small interval of width ε around x, and then dividing by ε.
The probability density of log_{10} N is not the same as the density of N, since the denominator ε is on quite a different scale. Although the paper labels it as the probability density of N, it is clearly showing the density of log_{10} N, which is a sensible scale to use given the focus on the order of magnitude of the uncertainty. Note also that a probability-density Y-axis is meaningful, albeit not very intuitive, whereas a Y-axis labeled “Frequency” is not: the frequency is an artifact of the specific binning algorithm used to estimate the densities. Cumulative distribution function (CDF) graphs avoid these complications: it doesn’t matter to the Y-scale whether you plot N or log_{10} N, and the Y-scale is easily interpreted.
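In the Python sketch, plotting the density of log10(N) rather than N amounts to binning in log space with density normalization (assuming matplotlib; the bin count is arbitrary):

```python
import matplotlib.pyplot as plt

# Histogram log10(N) with density=True so the Y axis is a true probability
# density (total area 1), not a bin-count "frequency".
log10_N = np.log10(N[N > 0])                 # guard against exact zeros
density, edges = np.histogram(log10_N, bins=200, density=True)
plt.stairs(density, edges)
plt.xlabel("log10(N)")
plt.ylabel("probability density")
plt.show()
```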
The CDF of N=# detectable civilizations in the Milky Way, for 5 variations of f_l.
These CDFs show a dramatic difference between Version 1 (the LogNormal method) and Version 2 (the t·V·λ method), and between those versions and our variants that remove the massive lower tails. An interesting aspect of these graphs is their qualitative shape. The bell-shaped body in the PDF is familiar, but the extreme left tail stands out as unusual. The previous section pointed out that both versions of f_l are so extreme that the effective distribution is degenerate, which we consider a flaw. Hence, it is interesting to see how the graph changes when we set f_l to a less degenerate distribution.
PDFs of Log(N) for 5 variations of f_l
The LogNormal method is Version 1 of f_l, the t·V·λ method is Version 2, and the remaining three methods are less extreme: “100%” and “16%” use those values as point probabilities for f_l, and “Beta” uses a Beta(1, 10) distribution for f_l. Although the broad conclusions of the paper remain robust with less extreme distributions for f_l, the strange and extreme left tails of their models are not a robust phenomenon.
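In the Monte Carlo sketch, these three tamer alternatives for f_l are one-liners (continuing the earlier Python):

```python
f_l_100  = np.ones(n)               # life always starts (f_l = 100%)
f_l_16   = np.full(n, 0.16)         # point probability f_l = 16%
f_l_beta = rng.beta(1.0, 10.0, n)   # f_l ~ Beta(1, 10)
```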
Bayesian updating on Fermi’s Observation
Fermi’s question “Where is everybody?” refers to the observation that we haven’t detected any extraterrestrial civilizations. Sandberg et al. apply Bayes’ rule to update their estimates with this observation. To apply Bayes’ rule, you need the likelihood P(¬D|N) for each possible value of N, where ¬D is the observation that no ET civilizations have been detected.
The paper explores four distinct models for this updating:
Random sampling update assumes that we have sampled K stars, none of which harbor a detectable civilization. K is a parameter of this model (see the sketch after this list).
Spatial Poisson update conditions on the conclusion that there is no detectable civilization within a distance d of Earth. d is a parameter of this model.
Settlement update attempts to incorporate the possibility that interstellar propagation would be likely among advanced civilizations. It introduces several new parameters, including settlement timescales and a geometric factor, and conditions on the observation that no nearby spacetime volume around Earth has been permanently settled.
No K3 observed update conditions on the observation that no Kardashev Type 3 civilizations exist: civilizations that harness energy at the galactic scale. It presumes that if such a civilization existed, either in the Milky Way or even in another visible galaxy, we would have noticed it. Among other parameters, it includes one for the probability that a K3 civilization is theoretically possible.
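To illustrate the flavor of these likelihoods, here is a sketch of just the random-sampling update, continuing the Python example. The functional form is our guess at a simple version, not necessarily the paper's exact equation, and K and the star count are assumed values:

```python
# Random-sampling likelihood P(no detection | N): if N detectable
# civilizations are spread among ~1e11 stars and we surveyed K stars
# without a detection, each surveyed star independently comes up empty.
N_MW_STARS = 1e11
K = 1000                                             # assumed survey size
p_obs_given_N = np.clip(1 - N / N_MW_STARS, 0, 1) ** K
```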
We implemented all these update methods in the Analytica model. Our match to the paper’s quantitative results is only approximate, and we are not sure why the results are not precisely reproducible. It was quite challenging to figure out what parameters they used for each case, since the paper and its Supplement 2 leave out many details. With the exception of the settlement update method, we were able to get fairly close, at least in qualitative terms. We explored the space of possible parameter values for the settlement update but were unable to match the qualitative shape of the posterior reported in the paper. The likelihood equation for the K3 update appears to be in error in the paper, since it doesn’t depend at all on N, but a more plausible version that does depend on N appears in Supplement 2.
In our Analytica model, you can select which update method(s) you want to view, and graph them side-by-side, along with (or without) the prior. For example,
Prior and posteriors for logten(N) based on Version 1 of the prior. Each posterior uses one of the methods for P(¬D|N) described in the paper.
One of the more interesting posterior results is P(N<1).
Priors and posteriors P(N<1) for different models of f_l and different models of likelihood P(¬D|N).
The paper (in Table 2) reports these numbers (in the same order as the rows of the above table) for P(N<1): 52%, 53%, 57%, 66%, 99.6%. We think they may have based their first four posteriors on Version 1. We are not sure about the K3 posterior, which is substantially different from our calculation.
In this table, we see that the models with a non-extreme, non-degenerate version of f_l are not substantially changed by the posterior update on the negative Fermi observation. These are the models that use a point estimate for f_l of 100% and 16%, as well as the one that uses f_l sim Beta(1,10).
How to compute the posteriors
We explored two ways to implement these posterior calculations in Analytica. We found the results to be consistent, so we stuck with the more flexible method, which is interesting in its own right and very simple to code in Analytica.
The calculation uses sample weighting, in which each Monte Carlo sample is weighted by P(¬D|N). N is computed for each Monte Carlo sample, and from it P(¬D|N) is computed for each selected posterior method. The variable that computes P(¬D|N) has the identifier P_obs_given_N. To compute the posteriors, all we had to do was set the system variable SampleWeighting to P_obs_given_N.
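In plain Monte Carlo terms, this is self-normalized likelihood weighting. A minimal sketch, continuing the Python example (Analytica's SampleWeighting does the equivalent for every result in the model):

```python
# Weight each Monte Carlo sample by the likelihood of the Fermi observation,
# then compute posterior statistics as weighted averages.
w = p_obs_given_N / p_obs_given_N.sum()
print("prior     P(N<1) =", (N < 1).mean())
print("posterior P(N<1) =", (w * (N < 1)).sum())
```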
The second method, which we found less elegant and more complex, computes the same results. It extracts the histogram PDF(LogTen_N), computes P(¬D|N) from the value of N at each histogram point, and multiplies: the product of the PDF column for LogTen_N and P(¬D|N) is the unnormalized posterior PDF for P(N|¬D).
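A sketch of this histogram-based method, under the same assumed random-sampling likelihood as before:

```python
# Bin log10(N), evaluate the likelihood at each bin center, multiply the
# (unnormalized) PDF by the likelihood, and renormalize.
counts, edges = np.histogram(np.log10(N[N > 0]), bins=200)
centers = (edges[:-1] + edges[1:]) / 2
likelihood = np.clip(1 - 10.0 ** centers / N_MW_STARS, 0, 1) ** K
posterior = counts * likelihood
posterior = posterior / posterior.sum()   # normalized posterior over the bins
```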
We would expect the second method to perform better than the first method when the likelihood P(¬D|N) is extremely leptokurtic. In this model, this is not the case.
Updating on a positive observation
The Fermi observation is the negative observation that we have never detected another extraterrestrial civilization. We thought it would be interesting here to explore what happens when you condition on a positive observation.
Extraterrestrial microbes
Saturn’s moon Titan.
In March 2011, the first author (Lonnie) attended an astronomy talk at Foothill College by NASA planetary scientist Dr. Chris McKay. Six years earlier, with the Huygens probe descending into the atmosphere of Saturn’s moon Titan, he and a graduate student, Heather Smith, undertook a thought experiment. They asked: if there is any life on Titan, what chemical signatures might we see, especially signatures that could not result from any known inanimate process? What would organisms eat? What would their waste products be?
At -190°C (-290°F), life on Titan would be very different: not based on water, but on liquid methane. Without knowing what such life forms would look like, they could still make some inferences about which chemical bonds living organisms would likely utilize for metabolism. For example, the molecule with the most harvestable energy on Titan is acetylene (C_2H_2 + 3H_2 → 2CH_4, ΔG ≈ -80 kcal/mole). They published a set of proposed signatures for life in a 2005 paper, and then moved on to other work. A few years later, analysis of data from the Huygens probe and the Cassini mission to Saturn found some unexplained chemical signatures.
Astro-geophysicist Dr. Chris McKay
These matched those predicted as possible signatures for life 5 years earlier in the McKay and Smith paper. One signature in particular, a net downward flux of hydrogen, is particularly intriguing, since it implies that something is absorbing or converting hydrogen near the surface, for which no inorganic processes are known. The data on this remains ambiguous. For example, during its descent the Huygens probe did not detect a depletion of hydrogen near the surface, which is what would be expected if organisms on the surface are consuming hydrogen.
The interesting question in the present context is how we should update our uncertainty based on a (hypothetical future) discovery of microbial life on an extraterrestrial body such as Titan. Such a discovery would influence our belief about n_e, the number of planetary objects per star that are potentially habitable, as well as f_l, the fraction of habitable planetary objects where life actually starts.
Our estimate for n_e would need to increase. In our own solar system, such a discovery would double the number of bodies we know have life, and it would make Jupiter’s moon Europa and Saturn’s moon Enceladus even more likely candidates. So n_e should plausibly be increased by something like a factor of 3:
n_e | D ~ LogUniform(0.3, 3)
To update f_l, we use P(D | f_l, habitable) = f_l, where D is the observation that this one additional planetary object is habitable and life has emerged on it.
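A sketch of this positive-observation update in the same Monte Carlo style (our illustration, not the Analytica implementation; it combines the revised n_e above with f_l likelihood weights):

```python
# Hypothetical discovery of microbial life on Titan:
# 1) replace n_e with the revised LogUniform(0.3, 3), and
# 2) weight each sample by f_l, since P(D | f_l) = f_l.
n_e_post = loguniform(0.3, 3, n)
N_post = R_star * f_p * n_e_post * f_l_v1 * f_i * f_c * L
w = f_l_v1 / f_l_v1.sum()
print("posterior P(N < 1)    =", (w * (N_post < 1)).sum())
print("posterior P(N > 100M) =", (w * (N_post > 1e8)).sum())
```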
With these updates, the probability that there is no other intelligent contactable civilization drops from 48.5% to 6.6%, and the probability that the galaxy is teeming with intelligent life increases from under 2% to over 6%, using the Version 1 prior. Here is the table for different versions of the prior (where only f_l varies among the five priors).
| Prior for f_l | P(N<1) prior | P(N<1) posterior | P(N>100M) prior | P(N>100M) posterior |
|---------------|--------------|------------------|-----------------|---------------------|
| Version 1 | 48.5% | 6.6% | 1.9% | 6.2% |
| Version 2 | 84% | 6.6% | 0.6% | 6.2% |
| f_l = 100% | 10% | 6.5% | 3.7% | 6.3% |
| f_l = 16% | 17% | 12.7% | 1.2% | 2.5% |
| f_l ~ Beta(1,10) | 23% | 13.6% | 1.4% | 2.4% |

Here P(N<1) means “we are alone” and P(N>100M) means the galaxy is teeming with life.
One interesting thing when comparing across these different priors is how the extreme priors (Version 1 and Version 2) adjust to be nearly identical to the result obtained when setting f_l = 100%, the most extreme assumption that life always starts on every habitable planet. This reinforces our earlier criticism that the paper’s two versions of f_l are flawed. Because they are so extreme, roughly equivalent to saying that life either always starts or essentially never starts on a habitable planetary body, the evidence that it happened on Titan leaves these priors with only one live option: that abiogenesis always happens.
Some things to try in the live Analytica model:
Click on a term in the Drake Equation for a description.
Select a particular “model” for f_l, the fraction of planets or planetary objects where life begins, or keep it at All to run all of them.
View each of the results in the UI above.
Calculate N and view the Statistics view to see the Mean and Median. Select Mid to see what the result would be without including uncertainty.
View the PDF for LogTen(N).
Select an observation method (or multiple ones) and see how the results change in the posterior compared to the prior.
Click on Model Internals to explore the full implementation.
Summary
It is easy to be misled without realizing it when you estimate a single number (a point estimate) for an unknown quantity. Fermi’s question of why we have never detected or encountered other extraterrestrial civilizations has spawned decades of conjecture about underlying reasons, yet Sandberg, Drexler and Ord show that there may be no paradox after all. We’ve reviewed and reimplemented the model they proposed. The possibility that there are no other detectable intelligent civilizations in the Milky Way is consistent with our level of uncertainty. The apparent paradox was simply the result of the “Flaw of Averages”.
We hope you are able to learn something by playing with the model. Enjoy!
Lonnie Chrisman is Lumina's Chief Technology Officer, where he heads engineering and development of Analytica. He has a Ph.D. in Artificial Intelligence and Computer Science from Carnegie Mellon University; and a BS in Electrical Engineering from University of California at Berkeley.
Aryeh Englander works on Artificial Intelligence at the Johns Hopkins University Applied Physics Laboratory, and is doing a Ph.D. in Information Systems at the University of Maryland, Baltimore County (UMBC), with a focus on analyzing potential risks from very advanced AI.
Yaakov Trachtman is an educator and independent researcher.
Max Henrion is the Founder and CEO of Lumina Decision Systems. He was awarded the prestigious 2018 Frank Ramsey Medal, the highest award from the Decision Analysis Society. Max has an MA in Natural Sciences from Cambridge University, Master of Design from the Royal College of Art, London, and a PhD from Carnegie Mellon.
2 thoughts on “Is the Fermi Paradox due to the Flaw of Averages?”
Gerardo M.
Very interesting subject. Is there an equation with uncertainty regarding climate change models? There is so much debate about that and I believe is also the result of trying to obtain the right answer instead of asking the likelihood of the hypothesis being TRUE. Thank you.
Glad you enjoyed the article!
We do have an overarching climate change model, although I will point out that it is quite old. Take a look and let us know what you think: https://lumina.com/case-studies/risk-analysis/integrated-climate-assessment-model/.
Additionally, we have a lot of other examples on various climate change initiatives and subjects: https://lumina.com/case-studies/environmental-modeling-with-analytica/ & https://lumina.com/case-studies/energy-and-power/