The Actuary, the magazine of the Institute & Faculty of Actuaries

The disturbing doomsday argument

Rarely does philosophy produce empirical predictions.
The doomsday argument is an
important exception. From seemingly trivial
premises, it seeks to show that the risk that
humankind will soon become extinct has been systematically
underestimated. Nearly everybody’s first
reaction is that there must be something wrong with
such an argument. Yet despite being subjected to
intense scrutiny by a growing number of philosophers,
no simple flaw in the argument has been identified.
It started some 15 years ago when astrophysicist
Brandon Carter discovered a previously unnoticed
consequence of a version of the weak anthropic principle.
Carter did not publish his finding, but the idea
was taken up by philosopher John Leslie, who has
been a prolific author on the subject, culminating in
his monograph The End of the World (Routledge,
1996). Versions of the doomsday argument have also
been independently rehearsed by other authors. In
recent years there have been numerous papers trying
to refute the argument, and an approximately equal
number of papers refuting these refutations.
I shall explain the doomsday argument in three steps.
Step I
Imagine a universe that consists of 100 cubicles. In
each cubicle there is one person. Ninety of the cubicles
are painted blue on the outside and the other ten
are painted red. Each person is asked to guess whether
they are in a blue or a red cubicle (and everybody
knows all this).
Now, suppose you find yourself in one of these
cubicles. What colour should you think it has? Since
90% of all people are in blue cubicles, and since you
don’t have any other relevant information, it seems
you should think that there is a 90% probability that
you are in a blue cubicle. Let’s call this idea, that you
should reason as if you were a random sample from
the set of all observers, the self-sampling assumption.
Suppose everyone accepts the self-sampling assumption
and everyone has to bet on whether they are in a
blue or red cubicle. Then 90% of all people will win
their bets and 10% will lose. Suppose, on the other
hand, that the self-sampling assumption is rejected
and people think that one is no more likely to be in a
blue cubicle; so they bet by flipping a coin. Then, on
average, 50% of the people will win and 50% will lose.
The rational thing to do seems to be to accept the
self-sampling assumption, at least in this case.
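The betting comparison above can be sketched in a few lines of Python (a Monte Carlo illustration; the trial count and function name are my own choices, not from the article):

```python
import random

# 90 blue cubicles, 10 red. One strategy follows the self-sampling
# assumption (always bet blue, the majority colour); the other bets
# by flipping a coin.
def run_trials(trials=10_000):
    ssa_wins = coin_wins = 0
    for _ in range(trials):
        colour = "blue" if random.random() < 0.9 else "red"
        if colour == "blue":              # self-sampler always guesses blue
            ssa_wins += 1
        if random.choice(["blue", "red"]) == colour:  # coin-flipper
            coin_wins += 1
    return ssa_wins / trials, coin_wins / trials

ssa_rate, coin_rate = run_trials()
print(f"self-sampling bettors win ~{ssa_rate:.0%}, coin-flippers ~{coin_rate:.0%}")
```

With enough trials the first rate settles near 90% and the second near 50%, matching the argument in the text.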
Step II
Now we modify the thought experiment a bit. We still
have the 100 cubicles, but this time they are not
painted blue or red. Instead they are numbered from
1 to 100. The numbers are painted on the outside.
Then a fair coin is tossed (by God perhaps). If the coin
falls heads, one person is created in each cubicle. If the
coin falls tails, then people are only created in cubicles
1 to 10.
You find yourself in one of the cubicles and are
asked to guess whether there are ten or 100 people.
Since the number was determined by the flip of a fair
coin, and since you haven’t seen how the coin fell and
you don’t have any other relevant information, it
seems you should believe that there is a 50% probability
that it fell heads (and thus that there are 100 people).
Moreover, you can use the self-sampling assumption
to assess the conditional probability of a number
between 1 and 10 being painted on your cubicle,
given how the coin fell. For example, conditional on
heads, the probability that the number on your cubicle
is between 1 and 10 is 10%, since one out of ten
people will then find themselves there. Conditional
on tails, the probability that you are in number 1 to
10 is 100%; for you then know that everybody is in
one of those cubicles.
Suppose that you open the door and discover that
you are in cubicle number 7. Again you are asked,
how did the coin fall? But now the probability is
greater than 50% that it fell tails. For what you are
observing is given a higher probability on that
hypothesis than on the hypothesis that it fell heads.
The precise new probability of tails can be calculated
using Bayes’s theorem. It is approximately 91%. So
after finding that you are in cubicle number 7, you
should think that with 91% probability there are only
ten people.
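The Bayes calculation behind that 91% figure can be checked directly (a minimal sketch using exact fractions; the variable names are mine):

```python
from fractions import Fraction

# Bayes's theorem for Step II:
# P(tails | cubicle <= 10)
#   = P(<=10 | tails) P(tails) / P(<=10)
p_tails = Fraction(1, 2)
p_low_given_tails = Fraction(1, 1)     # on tails, everyone is in 1-10
p_low_given_heads = Fraction(10, 100)  # on heads, 10 of 100 people are
p_low = p_low_given_tails * p_tails + p_low_given_heads * (1 - p_tails)
posterior_tails = p_low_given_tails * p_tails / p_low
print(posterior_tails, float(posterior_tails))  # 10/11, about 0.909
```

The exact value is 10/11, which rounds to the 91% quoted above.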
Step III
The last step is to transpose these results to our actual
situation here on Earth. Let’s formulate the following
two rival hypotheses:
‘Doom soon’
Humankind becomes extinct in the next century and
the total number of humans that will have existed is,
say, 200bn.
‘Doom late’
Humankind survives the next century and goes on to
colonise the galaxy; the total number of humans is,
say, 200 trillion. To simplify the exposition we will
consider only these two hypotheses. (Using a more
fine-grained partition of the hypothesis space doesn’t
change the principle, although it would give more exact
numerical values.) ‘Doom soon’ corresponds to there only being ten
people in the thought experiment of Step II. ‘Doom
late’ corresponds to there being 100 people. Corresponding
to the numbers on the cubicles, we now have
the ‘birth ranks’ of human beings, their positions in
the human race. Corresponding to the prior probability
(50%) of the coin falling heads or tails, we now
have some prior probability of ‘doom soon’ or ‘doom
late’. This will be based on our ordinary empirical estimates
of potential threats to human survival, such as
nuclear or biological warfare, a meteorite destroying
the planet, self-replicating nano-machines running
amok, a breakdown of a meta-stable vacuum state
resulting from high-energy particle experiments, and
so on (presumably there are dangers that we haven’t
yet thought of). Let’s say that based on such considerations,
you think that there is a 5% probability of
doom soon. The exact number doesn’t matter for the
structure of the argument.
Finally, corresponding to finding you are in cubicle
number 7 we have the fact that you find that your
birth rank is about 60bn (that’s approximately how
many humans have lived before you). Just as finding
you are in cubicle 7 increased the probability of the
coin having fallen tails, so finding you are human
number 60bn gives you reason to think that doom
soon is more probable than you previously thought.
Exactly how much more probable will depend on the
precise numbers you use. In the present example, the
posterior probability of doom soon will be very close
to 100%. You can with near certainty rule out doom late.
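The same update can be sketched numerically, taking a ‘doom soon’ total of 200bn (a birth rank near 60bn requires at least that many humans) together with the other illustrative figures above:

```python
# Step III posterior via Bayes. Under the self-sampling assumption,
# the likelihood of any particular birth rank is 1/N, where N is the
# total number of humans under the hypothesis in question.
prior_soon = 0.05   # illustrative prior for 'doom soon'
n_soon = 200e9      # 200bn humans ever, if doom comes soon
n_late = 200e12     # 200 trillion, if humankind colonises the galaxy

like_soon = 1 / n_soon  # probability of your rank given doom soon
like_late = 1 / n_late  # probability of your rank given doom late
posterior_soon = (like_soon * prior_soon) / (
    like_soon * prior_soon + like_late * (1 - prior_soon)
)
print(f"P(doom soon | birth rank about 60bn) = {posterior_soon:.1%}")
```

With these numbers the posterior comes out at roughly 98%, which is why the text says it is very close to 100%.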
That is the doomsday argument in a nutshell. After
hearing about it, many people think they know what
is wrong with it. But these objections tend to be mutually
incompatible, and often they hinge on some
simple misunderstanding. Be sure to read the literature
before feeling too confident that you have a refutation.
And the point is?
If the doomsday argument is correct, what precisely
does it show? It doesn’t show that there is no point in
trying to reduce threats to human survival ‘because
we’re doomed anyway’. On the contrary, it could
make such efforts seem even more urgent. Working to
reduce the risk that nano-technology will be abused to
destroy intelligent life, for example, would decrease
the prior probability of doom soon, and this would
reduce its posterior probability after taking the
doomsday argument into account; humankind’s life
expectancy would go up.
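This interplay between prior and posterior can be illustrated with the Step III update (same illustrative totals as before; the alternative priors are my own choices):

```python
# How the doomsday-shifted posterior tracks the prior: lowering the
# prior probability of doom soon also lowers the posterior.
def posterior_doom_soon(prior, n_soon=200e9, n_late=200e12):
    like_soon, like_late = 1 / n_soon, 1 / n_late
    num = like_soon * prior
    return num / (num + like_late * (1 - prior))

for prior in (0.05, 0.01, 0.001):
    print(f"prior {prior:.1%} -> posterior {posterior_doom_soon(prior):.1%}")
```

Successful risk-reduction work that cut the prior from 5% to 0.1% would, on these figures, pull the posterior down from about 98% to about 50%.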
There are also a number of possible loopholes in
what the doomsday argument shows. For instance, it
turns out that if there are many extraterrestrial civilisations
and you interpret the self-sampling
assumption as applying
equally to all intelligent beings and
not exclusively to humans, then
another probability shift occurs that
exactly counterbalances and cancels
the probability shift that the doomsday
argument implies.
Another possible loophole occurs if
there are to be infinitely many
humans: it’s not clear how to apply
the self-sampling assumption to the
infinite case. Further, if the human
species evolves into some vastly
more advanced species fairly soon
(within a century or two), then it is
not clear whether these post-humans would be in the
same reference class as us, so it is not clear how the
doomsday argument should be applied then. Yet
another possibility is that population figures go down
dramatically: it would then be much longer before
enough humans were born to begin to make your
birth rank look surprisingly low. And finally, it may be
that the reference class needs to be adjusted so that
not all observers, not even all humans, will belong to
the same reference class.
The justification for this adjustment would have to
come from a general theory of observational selection
effects, of which the self-sampling assumption would
be only one element. A theory of observational selection
effects (that is, of how to correct for biases introduced
by the fact that our evidence has been filtered
by the precondition that a suitably positioned observer
exists to ‘have’ the evidence) would have applications
in a number of scientific fields, including cosmology
and evolutionary biology.
So although the doomsday argument contains an
interesting idea, it needs to be combined with additional
assumptions and principles (some of which
remain to be worked out) before it can be applied to
the real world. In all likelihood there will be scope for
differing opinions about our future. Nonetheless, a
better understanding of observational selection effects
will rule out certain kinds of hypotheses and impose
surprising constraints on any coherent theorising
about the future of our species and about the distribution
of observers in the universe.