Fun with the Reverend Bayes

A recent review of Bayes' Rule by James V. Stone has reminded me of the delightful case of the mathematician's coloured balls. (Mathematicians often have cases of coloured balls. Don't ask me why.)

This is a thought experiment that helps illustrate why we have problems dealing with uncertainty and probability.

Imagine I've got a jar with 50 white balls and 50 black balls in it. I take out a ball but don't look at it. What's the chance that this ball is black?

I hope you said 50% or 50:50 or 1/2 or 0.5 - all ways of saying that it has equal chances of being either white or black. With no further information that's the only sensible assumption.

Now I keep that ball to one side, still not looking at it. I pull out another ball and this time I do look at it. (Mathematicians know how to have a good time.) It's white.

Now what's the chance that the first ball was black?

You might very sensibly suggest that it's still 50:50. After all, how could the probability change just because I took another ball out afterwards? But the branch of probability and statistics known as Bayesian statistics tells us that probabilities are not set in stone or absolute - they are only as good as the information we have, and gaining extra information can change the probability.

Initially you had no information about the balls other than that there were 50 of each colour in the pot. Now, however, you also know that a ball drawn from the remainder was white. If that first ball had been black, you would be slightly more likely to draw a white ball next time. So drawing a white makes it slightly more likely that the first ball was black than that it was white - you've got extra information. Not a lot of information, it's true. Yet it does shift the probability, even though the information comes in after the first ball was drawn.
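To see exactly how much the probability shifts, here's a sketch of the Bayes' Rule arithmetic for the jar in the example (50 black, 50 white); the variable names are my own:

```python
from fractions import Fraction

# Prior: before any ball is seen, the first ball is black with probability 1/2.
p_black = Fraction(1, 2)
p_white = Fraction(1, 2)

# Likelihoods: the chance the SECOND ball is white, given the first ball's colour.
# If the first was black, 50 whites remain among the 99 balls left; if white, only 49.
p_white2_given_black1 = Fraction(50, 99)
p_white2_given_white1 = Fraction(49, 99)

# Bayes' Rule: P(first black | second white)
evidence = p_white2_given_black1 * p_black + p_white2_given_white1 * p_white
posterior = (p_white2_given_black1 * p_black) / evidence

print(posterior)         # 50/99
print(float(posterior))  # about 0.505
```

So seeing a white second ball nudges the chance that the first was black from 1/2 up to 50/99 - a tiny shift, but a real one.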

If you find that hard to believe, imagine taking the example to the extreme. I've got a similar pot with just two balls in it, one black, one white. I draw one out but don't look at it. What's the chance that this ball is black? Again it's 50%. Now let's take the other ball out of the pot and look at it. It's white. Do you still think that looking at another ball doesn't change the chances of the first ball being black? If so, let's place a bet - because I now know that the other ball is definitely black.
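If the arithmetic still feels suspicious, a simulation makes the point without any formulas. This sketch (the function names are mine) repeatedly draws two balls without replacement and asks: among the runs where the second ball came up white, how often was the first ball black?

```python
import random

def draw_two(blacks, whites):
    """Draw two balls without replacement; report their colours."""
    pot = ['B'] * blacks + ['W'] * whites
    random.shuffle(pot)
    return pot[0] == 'B', pot[1] == 'W'  # (first is black, second is white)

def posterior_black(blacks, whites, trials=200_000):
    """Estimate P(first ball black | second ball white) by simulation."""
    first_black_count = second_white_count = 0
    for _ in range(trials):
        first_black, second_white = draw_two(blacks, whites)
        if second_white:
            second_white_count += 1
            if first_black:
                first_black_count += 1
    return first_black_count / second_white_count

random.seed(1)
print(posterior_black(50, 50))  # close to 50/99, about 0.505
print(posterior_black(1, 1))    # exactly 1.0 - if the second is white, the first must be black
```

With the 50:50 pot the estimate hovers just above a half, and with the two-ball pot it is exactly 1: every single time the second ball is white, the first turns out to be black.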

So even though it appears that there's a 0.5 chance of the ball being black initially, what is really the case is that 0.5 is our best bet given the information we had. It's not an absolute fact, it's our best guess given what we know. In reality the ball was either definitely white or definitely black, not in some quantum indeterminate state. But we didn't know which it was, so that 0.5 gave us a best guess.

One final example to show how information can change apparently fixed probabilities.

We'll go back to the first example to show another way that information can change probability. Again I've got a pot with 50 black and 50 white balls. I draw one out. What's the probability it's black? You very reasonably say 50%. So far this is exactly the same situation as the first time round.

I, however, have extra information. I now share that information with you - and you change your mind and say that the probability is 100% black, even though nothing has changed about the actual pot or ball drawn. Why? Because I have told you that all the balls at the bottom of the pot are white and all the balls at the top are black. My extra information changes the probabilities.
