"Doubt is not a pleasant condition, but certainty is absurd." -- Voltaire (François-Marie Arouet)
My friend Jackie asked me today to explain why I said that I was 99.999% certain there was no God, but not 100% certain. I told her that, as a good scientist, I don't think we can ever be 100% certain about anything, to which she replied that this doesn't make sense. I sympathise with her confusion; this is a counterintuitive concept for many people, and I often see students tripped up by it. So I thought I would explain here why I don't think we can ever be 100% certain about almost anything.
Before beginning this explanation, I want to add a few caveats. There are a few things about which we can be 100% certain. For instance, I am 100% certain that there is no such thing as a round square (at least, not in Euclidean geometry.) That would be a contradiction, and I am 100% certain that there are no true contradictions. I am also certain that we can prove that 1+1=2 from the Peano Axioms (the generally accepted axioms of elementary arithmetic.) That I exist is a further thing I think I can be 100% certain of, with René Descartes' "cogito ergo sum" as sufficient grounds. In short: that certain mathematical statements can be deduced from certain axioms, that contradictions are false, and that I exist are among the few things I believe we can be 100% certain of.
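The 1+1=2 claim is not just folklore; it can be checked mechanically. In the Lean proof assistant, for instance, the statement follows by definitional computation on the Peano-style construction of the natural numbers:

```lean
-- In Lean, 1 is Nat.succ 0 and addition is defined by recursion,
-- so 1 + 1 = 2 holds by unfolding definitions: rfl ("reflexivity")
-- suffices as the entire proof.
example : 1 + 1 = 2 := rfl
```

This is the sense in which such statements earn 100% certainty: they are consequences of the axioms and definitions, not claims about the world.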
As for other statements, I do not think we can be 100% certain of them.
But, you might ask, doesn't this result in solipsism or skepticism, doubting that anything exists at all? Isn't that contrary to the scientific enterprise rather than aligned with it?
No. The scientific enterprise can proceed precisely because we are never 100% certain of most things. We continuously take in new evidence, evaluate our current positions against it, and lend those positions greater or lesser support accordingly. If the support for a position is sufficiently diminished, or if we find enough evidence contradicting it, then we should reject that position.
This process never confers 100% certainty on anything, and is in fact incapable of doing so. It is always possible that the next piece of evidence we gather will show that we were wrong about everything we thought so far. Of course, we can be pretty damn sure about a lot of things, in virtue of the evidence gathered so far. Just as in a trial, the burden on us (as the prosecution) is only to show that something is true beyond a reasonable doubt. The burden is not to show that it is true beyond any possible doubt, which is an entirely different task and quite a bit more difficult (if not downright impossible in most cases.) As John Stuart Mill put it, "There is no such thing as absolute certainty, but there is assurance sufficient for the purposes of human life."
Whenever we make a measurement -- whether by performing an experiment or making an observation -- we stand the chance of being wrong. It therefore stands to reason that we would like to calculate the probability that we are incorrect in whatever conclusion we draw. Statistics gives us the tools to answer precisely this question in a rigorous manner. Whenever we make a measurement or reach some conclusion in science, that conclusion comes coupled with a probability. We might say something like "we accept this hypothesis at the 95% level," which means, roughly speaking, that we stand only a 5% chance of being wrong.
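You can watch this 5% error rate appear in a simulation. The sketch below uses a made-up experiment of my own choosing: flip a fair coin 100 times, and "reject" the hypothesis that the coin is fair whenever the head count falls outside the range where roughly 95% of outcomes land. Because the coin really is fair, every rejection is a mistake, and by construction such mistakes happen about 5% of the time.

```python
import random

random.seed(0)  # make the simulation repeatable

def heads_in_100_flips():
    """Count heads in 100 flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(100))

# For 100 fair-coin flips the head count has mean 50 and standard
# deviation 5, so roughly 95% of outcomes land between 41 and 59.
# Counts outside that range trigger a (mistaken) rejection.
trials = 10_000
false_alarms = sum(not 41 <= heads_in_100_flips() <= 59 for _ in range(trials))

print(f"Wrongly rejected in {100 * false_alarms / trials:.1f}% of trials")
```

The exact rate wobbles around 5% from run to run; the point is that the "95% level" is a statement about how often this procedure fools us, not a guarantee about any single experiment.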
One thing worth drawing out here is what it means to be wrong in the light of new evidence. Many scientific disciplines -- philosophers of science call them "special sciences" -- develop theories that are largely non-mathematical. Think of Freud's psychoanalytic theories or Darwin's original "On the Origin of Species": both were originally conceptual theories. It's true that in Darwin's case the theory was later placed in a largely mathematical framework. Nonetheless, the standards of the disciplines these two practitioners worked in were such that one was not expected to be able to compute things using one's theories. In the special sciences, theories are not calculational devices.
The situation is very different in other areas, particularly physics (what philosophers of science would call "non-special sciences".) In these fields, anything worthy of the name "theory" must allow you to compute a numerical prediction for the outcome of any experiment dealing with the things the theory covers. If the experimental results agree with the predictions calculated from the theory, then the theory is lent support. If not, the theory is ruled out.
But let's imagine a situation in which a theory (in physics) is well supported for a very long time by a very large amount of evidence, but is then found to fail in some instances. What usually happens is that the theory is found to operate correctly within a certain domain but fails outside of it. This actually happened with Newtonian physics. Newtonian physics works extremely well for objects ranging in size from dust grains to planets, moving at everyday speeds, with masses in a familiar range. It works beautifully for designing bridges, cars, skyscrapers, and so on, which is why it is foundational in mechanical engineering. Nonetheless, physicists at the turn of the 20th century figured out how to peer deeper into matter, at objects on the atomic scale and smaller. They would eventually figure out how to make objects move much, much faster than everyday objects, and they would learn how to peer deep into outer space. In these three regimes, to their surprise, they found that Newtonian physics fails. Instead, we need quantum mechanics and the theory of relativity to explain how things work there. Quantum mechanics and relativity still apply to objects in everyday life, but there is no reason to use them when the effects they predict, in that domain, differ only negligibly from Newtonian physics.
For this reason, we say that physical theories remain true within their domain of validity, even if they fail outside of it.
But what about God? It's hard to see where the evidence is against His existence, if there is any at all. So why not put equal weight on Him not existing as we do in Him existing?
There's an additional concept in science known as the null hypothesis. Most hypotheses that you could possibly come up with are simply false; this is one of the things that makes science a tremendously aggravating affair. Therefore, before we have any evidence at all, any positive statement (i.e. a statement of the form "it is true that...") begins its life with a very small chance of being true and a very large chance of being false. Positing that God exists is a positive statement, so even in the absence of evidence to the contrary, we should assign it only an extremely small probability of being true. If there is no evidence either way, then we should say that, at present, due to its low probability, it is false beyond any reasonable doubt. This is similar to the presumption in the American legal system that suspects are innocent until proven guilty. Likewise, when confronted with a statement like "God exists," we should respond that it is most likely false until evidence is presented to the contrary.
But, in fact, there are more reasons to doubt God's existence than that. We are capable of explaining a very large number of things about our universe in terms of purely naturalistic processes, and we have never had to rely on the hypothesis that God exists in order to reliably explain anything. Any time God has been used as an explanation, it was only for things not yet understood in naturalistic terms (think of how people used to regard epidemics as the wrath of God, or mental illness as demonic possession.) Later, it was found that naturalistic processes could explain those things after all, leaving less room for God to actually do anything. This is known as the God of the gaps: the idea that we posit God only to explain those things which science has not yet explained (the gaps in our scientific understanding.) Since those gaps continuously close as we learn more and more about our world, there is less and less for God to do.
|Napoleon and Laplace in their fabled conversation.|
Furthermore, we are capable of understanding the mechanisms by which religions develop in cultures and why these sorts of ideas are so psychologically attractive. There is a broad and growing literature on the evolutionary origins, psychology, anthropology and sociology of religion, and several mechanisms for the emergence of such belief structures are already well understood.
Given these three facts -- first, that in the absence of evidence we should assign the claim that God exists a low probability; second, that, as scientists, we don't seem to require the God hypothesis; and third, that we understand in purely naturalistic terms the mechanisms by which religions originate in human cultures -- it seems remarkably unlikely that God exists. If this were a trial, the conclusion would be that the prosecution (the theists) failed to prove their case, and that it would be supremely unreasonable to cast a verdict in their favor.
Can we compute the probability that God exists? It would be difficult because we'd have to know
(1) what probability to assign to God's existence before we knew anything,
(2) how we should update that probability in the light of the fact that the hypothesis isn't useful, and
(3) how we should update that probability in light of the fact that we understand some of the mechanisms for belief in God.
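Although we can't honestly pin down any of those three numbers, the shape of the calculation is just Bayes' theorem applied once per piece of evidence. Here is a sketch with entirely invented values (the prior and both likelihoods are mine, chosen purely to illustrate the machinery, not to be defended):

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# (1) a hypothetical prior probability before any evidence at all
prior = 0.001

# (2) the hypothesis has never been needed to explain anything:
# treat that observation as more expected if the hypothesis is false.
p = update(prior, likelihood_if_true=0.2, likelihood_if_false=0.9)

# (3) belief in the hypothesis is explicable naturalistically:
# again, more expected if the hypothesis is false.
p = update(p, likelihood_if_true=0.3, likelihood_if_false=0.8)

print(f"posterior after both updates: {p:.6f}")
```

With these made-up inputs each update drags the probability downward; change the numbers and the machinery is unchanged, which is exactly why the hard part is justifying the inputs, not doing the arithmetic.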
Given the difficulty of those three things, I will instead attempt to calculate something simpler. Assuming that there is no evidence for the truth or falsity of any religion, what is the probability that any given religion is true?
First, we need to know how many distinct religions there are in the world. According to the World Christian Encyclopaedia, there are approximately 10,000 religions in the world. I don't know how accurate this is, but I did find a website claiming that there are 50,000 distinct cultures in the world. If true, the idea that there are 10,000 religions doesn't seem untenable.
Since each religion is weighted equally (we assumed none has any evidence for or against it) and no two religions can be simultaneously true, the probability that any given religion is true is 1/10,000 = 0.0001, which is 0.01%. That's a hundredth of a percent, roughly the commonly cited lifetime chance of being struck by lightning -- which means that any given religion is about as likely to be true as you are to be struck by lightning. Note that the probability given here is likely a gross overestimate, because I did not take into account whether these religions are simply ruled out by well-established scientific principles (many of them are), or whether there are other pragmatic justifications for rejecting the belief (as in considerations (1)-(3) listed above.) In this calculation, I also assumed that all religions being false is not an option; had I included that case, it would have further decreased the probability that any particular religion has it right.
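The arithmetic above is simple enough to spell out. A sketch, assuming the 10,000 figure and a uniform weighting over mutually exclusive religions:

```python
# Uniform weighting over n mutually exclusive religions, none with any
# evidence for or against it: each gets probability 1/n.
n_religions = 10_000
p_each = 1 / n_religions          # = 0.0001, i.e. 0.01%
print(f"probability any given religion is true: {p_each:.4%}")

# Admitting "all of them are false" as one more equally weighted,
# mutually exclusive possibility pushes each religion's share lower
# still (and any extra weight on that option lowers it further).
p_each_with_none = 1 / (n_religions + 1)
assert p_each_with_none < p_each
```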