
Everything is Predictable - Tom Chivers *****

There's a stereotype of computer users: Mac users are creative and cool, while PC users are businesslike and unimaginative. Less well-known is that the world of statistics has an equivalent division. Bayesians are the Mac users of the stats world, while frequentists are the PC people. This book sets out to show why Bayesians are not just cool, but also mostly right.

Tom Chivers does an excellent job of giving us some historical background, then dives into two key areas where statistics is used: science, where the standard approach is frequentist and Bayes only creeps into a few specific applications, such as the accuracy of medical tests; and decision theory, where Bayes is dominant.

If this all sounds very dry and unexciting, it's quite the reverse. I admit, I love probability and statistics, and I am something of a closet Bayesian*, but Chivers' light and entertaining style means that what could have been the mathematical equivalent of debating how many angels can dance on the head of a pin becomes both enthralling and relatively easy to understand. You may have to re-read a few sentences, because there is a bit of a head-scrambling concept at the heart of the debate - but it's well worth it.

A trivial way of illustrating the difference between Bayesian and frequentist statistics is how you respond to the question 'What's the chance of the result being a head?' when looking at a coin that has already been tossed, but that you haven't seen. Bayesian statistics takes into account what you already know. As you don't know what the outcome is, you can only realistically say it's 50:50, or 0.5 in the usual mathematical representation. By contrast, frequentist statistics says that as the coin has already been tossed, it is definitely heads or tails with probability 1... but we can't say which. This may seem unimportant - but the distinction becomes crucial when considering the outcome of scientific studies.

Thankfully, Chivers goes into significant detail on the problem that arises because, in most scientific use of (frequentist) probability, what the results show is not what we actually want to know. In the social sciences, a marker for a result being 'significant' is a p-value of less than 0.05. This means that if the null hypothesis is true (the effect you are considering doesn't exist), then you would only get a result like this 1 time in 20 or less. But what we really want to know is not the chance of this result if a hypothesis is true, but rather the chance that the hypothesis is true given this result - and that's a totally different thing.

Chivers gives the example of 'it's the difference between "There's only a 1 in 8 billion chance that a given human is the Pope" and "There's only a 1 in 8 billion chance that the Pope is human"'. At risk of repetition because it's so important, frequentist statistics, as used by most scientists, tells us the chance of getting the result if the hypothesis is true; Bayesian statistics works out the chance of the hypothesis being true given the result - which most would say is what we really want to know. In fact, as Chivers points out, most scientists don't even know that they aren't showing the chance of the hypothesis being true - and this is even true of many textbooks for scientists on how to use statistics.
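
To put toy numbers on that flip (a back-of-envelope sketch of my own, using the 8 billion figure from the quote and taking it as read that the Pope is human):

```python
# Toy sketch: P(A given B) is not the same thing as P(B given A).
# The 8 billion figure is from the quoted example; treating 'the Pope
# is human' as certain is my own simplifying assumption.
humans = 8_000_000_000
popes = 1

p_pope_given_human = popes / humans   # pick a human at random: about 1.3e-10
p_human_given_pope = 1.0              # the Pope is, of course, human

print(f"P(Pope | human) = {p_pope_given_human:.1e}")
print(f"P(human | Pope) = {p_human_given_pope:.0%}")
```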

At this point, most normal humans would say 'Why don't those stupid scientists use Bayes?' But there is a catch. To be able to find how likely the hypothesis is, we need a 'prior probability' - a starting point which Bayes' theorem then modifies using the evidence we have. This feels subjective, and for the first attempt at a study it certainly can be. But, as Chivers points out, in many scientific studies there is existing evidence to provide that starting point - the frequentist approach throws away this useful knowledge.
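
To see how the prior changes the picture, here is a minimal sketch with made-up but plausible numbers (they are mine, not Chivers'): suppose only 10% of the hypotheses a field tests are actually true, experiments have 80% power, and 'significant' means p < 0.05. Bayes' theorem then gives the chance a hypothesis is true once a significant result comes in:

```python
# Illustrative numbers only - the prior, power and alpha are assumptions
# for the sake of the example, not figures from the book.
prior = 0.10   # P(hypothesis true) before the experiment - the 'prior probability'
power = 0.80   # P(significant result | hypothesis true)
alpha = 0.05   # P(significant result | hypothesis false) - the p < 0.05 threshold

# Bayes' theorem: P(true | significant) = P(significant | true) * P(true) / P(significant)
p_significant = power * prior + alpha * (1 - prior)
posterior = power * prior / p_significant

print(f"P(hypothesis true | significant result) = {posterior:.2f}")  # about 0.64
```

So even with a 'significant' result, the hypothesis in this toy setup is only about 64% likely to be true - nowhere near the 95% that the p < 0.05 label is often casually taken to imply.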

Is the book perfect? Well, I suspect as a goodish Bayesian I can never say something is perfect. I found it hard to engage with an overlong chapter called 'the Bayesian brain' that is not about using Bayes, but rather trying to show that our brains take this approach, which all felt a bit too hypothetical for me. And Chivers repeats the oft-seen attack on poor old Fred Hoyle, taking his comment about evolution and 'a whirlwind passing through a junkyard creating a Boeing 747' in a way that oversimplifies Hoyle's original meaning. But these are trivial concerns.

I can't remember when I last enjoyed a popular maths book so much. It's a delight.

* Not entirely a closet Bayesian - my book Dice World includes an experiment using Bayesian statistics to work out what kind of dog I have, given a mug that's on my desk.

Review by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here
