"The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt." -- Bertrand Russell

Friday, August 12, 2011

Doubt & Scientific Certainty

 "Doubt is not a pleasant condition, but certainty is absurd." -- Voltaire (François-Marie Arouet)

My friend Jackie asked me today to explain why I said that I was 99.999% certain there was no God, but not 100% certain. I told her that, as a good scientist, I don't think we can ever be 100% certain about anything, to which she replied that this doesn't make sense. I sympathise with her puzzlement: this is a counterintuitive idea for many people, and I often see students confused by it. So I thought I would explain here why I don't think we can ever be 100% certain of almost anything.

Before beginning this explanation, I want to add a few caveats. There are a few things about which we can be 100% certain. For instance, I am 100% certain that there is no such thing as a round square (at least, not in Euclidean geometry.) That would be a contradiction, and I am 100% certain that there are no true contradictions. I am also certain that we can prove that 1+1=2 from the Peano Axioms (these are the generally accepted axioms of elementary arithmetic.) That I exist is a further thing I think I can be 100% certain of -- René Descartes' "cogito ergo sum" being a sufficiently good argument. So: that certain mathematical statements can be deduced from certain axioms, that contradictions are false, and that I myself exist are some of the things I believe we can be 100% certain of.

As for other statements, I do not think we can be 100% certain of them.

But doesn't this result in solipsism or skepticism, you might ask, doubting that anything exists at all? Isn't that contrary to the scientific enterprise instead of being aligned with it?

No. The scientific enterprise is capable of proceeding only because we are never 100% certain of most things. We continuously take in new evidence, evaluate our current positions against it, and lend those positions greater or lesser support accordingly. If the support for a position is sufficiently diminished, or if we find enough evidence that contradicts it, then we should reject that position.

This process never confers 100% certainty on anything and is actually incapable of doing so. It's always possible that the next piece of evidence we gather will show that we were wrong about everything we thought so far. Of course, we can be pretty damn sure about a lot of things, in virtue of the evidence we have gathered so far. Just as in a trial, the burden on us (as the prosecution) is only to show that something is true beyond a reasonable doubt. The burden is not to show that something is true beyond any possible doubt, which is an entirely different task and quite a bit more difficult (if not downright impossible in most cases.) As John Stuart Mill put it, "There is no such thing as absolute certainty, but there is assurance sufficient for the purposes of human life."

Whenever we make a measurement -- whether by performing an experiment or making an observation -- we stand the chance of being wrong. Therefore, it stands to reason that we would like to be able to calculate with what probability we might be incorrect in whatever conclusion we draw. Statistics gives us the tools to answer precisely this question in a rigorous manner. Whenever we make a measurement or reach some conclusion in science, that conclusion comes coupled with a probability. We might say something like "we accept this hypothesis at the 95% level." Roughly speaking, that means that if there were no real effect, data as extreme as ours would arise by chance only about 5% of the time -- so we stand only a small chance of being wrong in accepting the hypothesis.
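
To make the "95% level" talk concrete, here is a small Python sketch; the coin, the 100 flips, and the 62 heads are all invented purely for illustration, not data from any real experiment.

    # Toy example: is a coin fair, given 62 heads out of 100 flips?
    # We compute the chance that a genuinely fair coin would land at least
    # this far from 50/50; if that chance is below 5%, we reject "the coin
    # is fair" at the 95% level.
    from math import comb

    n, heads = 100, 62            # invented data
    expected = n * 0.5
    deviation = abs(heads - expected)

    # Two-sided p-value under the fair-coin hypothesis.
    p_value = sum(comb(n, k) * 0.5**n
                  for k in range(n + 1)
                  if abs(k - expected) >= deviation)

    print(f"p-value = {p_value:.4f}")                 # about 0.02
    print("reject fairness at the 95% level:", p_value < 0.05)

The 5% threshold just caps how often pure chance alone could produce a result that passes the test.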

One thing that should be drawn out here is what it means to be wrong in the light of new evidence. Many scientific disciplines -- philosophers of science call them "special sciences" -- develop theories that are largely non-mathematical. For example, think of Freud's psychoanalytic theories or Darwin's original "On the Origin of Species". Both were originally conceptual theories. It's true that, in Darwin's case, the theory was later put into a largely mathematical framework. Nonetheless, the standards in the disciplines these two practitioners were working in were such that one was not expected to be able to compute things using one's theory. In the special sciences, theories are not calculational devices.

The situation is very different in other areas, particularly physics (what philosophers of science would call "non-special sciences".) In these fields, anything that is to be called a theory is required to let you compute a numerical prediction for the outcome of any experiment within the theory's scope. If the experimental results agree with the predictions calculated from the theory, then the theory is lent support. If not, the theory is ruled out.

But let's imagine a situation in which a theory (in physics) is well supported for a very long time by a very large amount of evidence, but then is found to fail in some instances. What usually happens is that the theory is found to operate correctly in a certain domain, but fails outside that domain. This actually happened with Newtonian physics. Newtonian physics works extremely well for objects ranging in size from dust grains to planets, moving at reasonable everyday speeds, with masses in some familiar range. It works beautifully for designing bridges, cars, skyscrapers, and so on. For this reason, it is foundational in mechanical engineering. Nonetheless, physicists at the turn of the 20th century figured out how to peer deeper inside matter, at objects on the atomic scale and smaller. They would eventually figure out how to make objects move much, much faster than everyday objects, and they would learn how to peer deep into outer space. In these three regimes, to their surprise, they found that Newtonian physics fails. Instead, we need quantum mechanics and the theory of relativity to explain how things work there. Quantum mechanics and relativity still apply to objects in everyday life, but there's no reason to use them when the effects they predict differ only negligibly, in that domain, from the Newtonian ones.
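
To give a sense of scale, here is a small Python sketch with round, illustrative speeds of my own choosing: the Lorentz factor measures how far relativity departs from Newtonian physics, and at everyday speeds that departure is absurdly tiny.

    # gamma = 1 / sqrt(1 - v^2/c^2); gamma = 1 means no departure from
    # Newtonian physics at all.
    from math import sqrt

    C = 299_792_458.0  # speed of light, m/s

    def lorentz_factor(v):
        return 1.0 / sqrt(1.0 - (v / C) ** 2)

    for label, v in [("car on a highway (~30 m/s)", 30.0),
                     ("jet airliner (~250 m/s)", 250.0),
                     ("90% of the speed of light", 0.9 * C)]:
        gamma = lorentz_factor(v)
        print(f"{label:28s} gamma = {gamma:.15f} (correction ~ {gamma - 1:.1e})")

At highway speeds the correction appears only around the fifteenth decimal place, which is exactly why nobody needs relativity to design a bridge.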

For this reason, we say that physical theories remain valid within their domain of applicability, even if they fail outside of that domain.

But what about God? It's hard to see where the evidence against His existence is, if there is any at all. So why not put equal weight on His not existing as on His existing?

There's an additional concept in science known as the null hypothesis: the default position that a claimed effect is not there, which we abandon only when the evidence demands it. Most hypotheses that you could possibly come up with are simply false. This is one thing that makes science a tremendously aggravating affair; most ideas that anyone comes up with are simply wrong. Therefore, before we have any evidence at all, any positive statement (i.e. a statement like "it is true that...") that you could make begins its life with a very small chance of being true (and a very large chance of being false.) Positing that God exists is a positive statement, so even in the absence of evidence to the contrary, we should assign it only an extremely small probability of being true. If there is no evidence either way, then we should say that, at present, due to its low probability, it is false beyond any reasonable doubt. This is similar to the motto in the American legal system that suspects are presumed innocent until proven guilty. Likewise, when confronted with a statement like "God exists", we should respond that it is most likely false until evidence is presented to the contrary.

But, in fact, there are more reasons to doubt God's existence than that. We are capable of explaining a very large number of things about our universe in terms of purely naturalistic processes, and we have never had to rely on the hypothesis that God exists in order to reliably explain anything. Any time God has been used as an explanation, it was only for things that were not yet understood in naturalistic terms (think of how people used to regard epidemics as the wrath of God, or mental illness as demonic possession.) Later, it was found that naturalistic processes could explain those things after all, leaving less room for God to actually do anything. This is known as the God-of-the-gaps: the idea that we only posit God to explain those things which have not yet been explained by science (the gaps in our scientific understanding.) Since those gaps continuously close as we learn more and more about our world, there is less and less for God to do.

Napoleon and Laplace in their fabled conversation.
When presented with Pierre-Simon Laplace's account of the workings of the universe, Napoleon Bonaparte is said to have demanded to know why Laplace had left out God. Laplace responded that he "had no need of that hypothesis". To Laplace, God simply wasn't required to explain the functioning of the universe. To modern scientists, the same is true.

Furthermore, we are capable of understanding the mechanisms by which religions develop in cultures and why these sorts of ideas are so psychologically attractive. There is a broad and growing literature on the evolutionary origins, psychology, anthropology, and sociology of religion, and several mechanisms by which these sorts of belief structures arise are already well understood.

Given these three facts -- first, that in the absence of evidence we should assign a low probability to the claim that God exists; second, that, as scientists, we don't seem to require the God hypothesis; and third, that we understand in purely naturalistic terms the mechanisms by which religions originate in human cultures -- it seems remarkably unlikely that God exists. If this were a trial, the conclusion would be that the prosecution (the theists) had failed to prove their case, and that it would be supremely unreasonable to return a verdict in their favor.

-------------------------------------------------------------------------------------------------------

Can we compute the probability that God exists? It would be difficult, because we'd have to know the following (a toy sketch of the updating mechanics appears just after the list):

(1) what probability to assign to God's existence before we knew anything,
(2) how we should update that probability in the light of the fact that the hypothesis isn't useful, and
(3) how we should update that probability in light of the fact that we understand some of the mechanisms for belief in God.
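
Just to illustrate the mechanics of (1)-(3) -- with numbers that are entirely invented, since not knowing them is precisely the difficulty -- a sequential Bayesian update in odds form would look something like this:

    # Every value below is made up purely to show how the update proceeds;
    # none of them is a claim about the actual probabilities.
    prior = 1e-4             # (1) invented prior probability before any evidence
    lr_not_useful = 0.5      # (2) invented likelihood ratio: the hypothesis explains nothing extra
    lr_known_origins = 0.5   # (3) invented likelihood ratio: belief has known natural origins

    # Bayes' rule in odds form: posterior odds = prior odds x product of likelihood ratios.
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * lr_not_useful * lr_known_origins
    posterior = posterior_odds / (1.0 + posterior_odds)

    print(f"posterior probability = {posterior:.2e}")   # ~2.5e-05 with these made-up numbers

The arithmetic is trivial; the problem is that nobody knows what the three inputs should actually be.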

Given the difficulty of those three things, I will instead attempt to calculate something simpler. Assuming that there is no evidence for the truth or falsity of any religion, what is the probability that any given religion is true?

 First, we need to know how many distinct religions there are in the world. According to the World Christian Encyclopaedia, there are approximately 10,000 religions in the world. I don't know how accurate this is, but I did find a website claiming that there are 50,000 distinct cultures in the world. If true, the idea that there are 10,000 religions doesn't seem untenable.

Since we weighted each religion the same (we assumed each has the same amount of evidence) and no two religions can be simultaneously true, the probability that any given religion is true is 1/10,000 = 0.0001, which is 0.01% -- a hundredth of a percent. The commonly quoted lifetime odds of being struck by lightning are also roughly 0.01%, which means that any given religion is about as likely to be true as you are to be struck by lightning. Note that the probability given here is likely a gross overestimate, because I did not take into account whether these religions are simply ruled out by well-established scientific principles (many of them are) or whether there are other pragmatic grounds for eliminating the belief (as in points (1)-(3) I listed above.) In this calculation, I also assumed that all religions being false is not an option; had I included that case, it would have further decreased the probability that any particular religion has it right.
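
For what it's worth, here is the arithmetic as a small Python sketch, under the same assumptions (roughly 10,000 mutually exclusive, equally weighted religions) and taking the commonly quoted lightning figure of about 1 in 10,000 as a ballpark:

    n_religions = 10_000
    lightning = 1.0 / 10_000                  # ballpark lifetime odds of a lightning strike

    p_uniform = 1.0 / n_religions             # one religion must be true, all weighted equally
    p_with_none = 1.0 / (n_religions + 1)     # "none of them is true" added as one more equal outcome

    print(f"P(a given religion is true)              = {p_uniform:.3e} ({p_uniform:.4%})")
    print(f"... with 'all false' as an extra outcome = {p_with_none:.3e} ({p_with_none:.4%})")
    print(f"ballpark odds of a lightning strike      = {lightning:.3e} ({lightning:.4%})")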

Books:

Philosophy of Science: A Very Short Introduction

The Complete Idiot's Guide to Statistics, 2nd Edition

The Demon-Haunted World: Science as a Candle in the Dark

Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets

6 comments:

  1. Interesting post. It leads me to think about the difficulty of assigning truth values to religions. The first problem is that not all religions are mutually exclusive, so more than one can be true. Hinduism, for example, is henotheistic, Paganism is polytheistic, and Shintoism accepts all gods as manifestations of kami. Therefore, all, some, or none can be true without contradiction.

    On the other hand, Sikhism, Zoroastrianism and the Abrahamic religions all claim a single God, which means if one is true all others are false. How could you assign a higher probability of truthfulness to one over the others? Well, Abrahamic religions claim that God is both omnipotent and a "jealous god." If all gods claimed to be jealous and omnipotent then we could easily declare a winner: it would be the most worshiped god (a jealous, all powerful god who comes in second is a contradiction). But even this is not the case. Zoroastrians reject monasticism and discourage proselytizing. Sikhs believe that you can take as many lifetimes as you like to get to know god. These gods are omnipotent but not jealous, so there is no contradiction if they come in last.

    The only religion that politely rules itself out is the nastika school of Hinduism, which consists of atheists who just like the rituals.

    So here is the interesting conclusion. Say we have three stories, and all, some, or none can be true. Further, the last story contradicts the previous two, but the previous two don't contradict any other story. We will call them story 1, 2, and 3, with probability of being true p(1), p(2), and p(3). What is the probability 3 is true if we assume equipartition of outcomes?

    For comparison, let's first assume all three contradict each other. Then there are only four outcomes: 1 is true, or 2 is true, or 3 is true, or none are true. Therefore, p(1)=p(2)=p(3)= 1/4.

    However, if only 3 contradicts 1 and 2, then the outcomes are: none are true, 1 is true, 2 is true, 1 and 2 are true, 3 is true.

    Hence, p(3) = 1/5
    and p(1)=p(2)= 2/5.

    So interestingly, if you assign equipartition to every possible outcome, monotheistic religions are LESS PROBABLE than polytheistic religions.
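
    A quick brute-force check in Python (just a sketch of the enumeration above) reproduces these numbers by listing the consistent truth assignments and weighting them equally:

        # Story 3 contradicts stories 1 and 2, so any assignment where 3 is
        # true together with 1 or 2 is thrown out; the rest get equal weight.
        from itertools import product

        def consistent(world):
            s1, s2, s3 = world
            return not (s3 and (s1 or s2))

        worlds = [w for w in product([False, True], repeat=3) if consistent(w)]
        print("consistent outcomes:", len(worlds))      # 5

        for i in range(3):
            p = sum(w[i] for w in worlds) / len(worlds)
            print(f"p({i + 1}) = {p}")                  # p(1) = p(2) = 0.4, p(3) = 0.2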

  2. This comment has been removed by the author.

  3. To understand why monotheism is less probable, let

    n = # of religions that are true.

    Islam claims explicitly that n=1. Shintoism generally allows n>0, and is only false if n=0.

    Now, suppose we know for a fact that n=2. Islam is false, but Shintoism has not been ruled out.

    Now suppose we know for a fact that n=1. Neither Islam nor Shintoism has been ruled out, and at this level of analysis they are equally probable.

    So the probability goes like:

    probability Islam is true = p(n=1 | god is Allah)

    while

    probability Shintoism is true = p(n=1 | god is Izanagi-no-Mikoto) + p(n>1 | one of the gods is Izanagi-no-Mikoto)

    So you can see that probability of Shintoism exceeds that of Islam as long as p(n>1) > 0.

  4. Fonzo,

    That assumes that p(n>1|one of the gods is x) is constant for all x. Otherwise, we could have that:

    p(n=1 | god is Allah)>p(n=1 | god is Izanagi-no-Mikoto) + p(n>1 | one of the gods is Izanagi-no-Mikoto)

    and, therefore, Islam would be more probable than Shinto.

  5. True, but you'd have to update the theory to one of lesser ignorance and I, for one, am not going to spend the Sabbath doing it.

  6. Oh, fonzo, whoever you are, you continue to make my blog ever more epic.
