
Something Doesn't Add Up - Paul Goodwin ***

If there's one thing that's better than a juicy statistic, it's enjoying the process of pulling apart a dodgy one. It's why the radio programme More or Less is so excellent - so Paul Goodwin's book, subtitled 'surviving statistics in a post-truth world', was something I was really looking forward to. But for reasons I find hard to put my finger on, it doesn't quite hit the spot.

Goodwin, a maths professor at the University of Bath, starts with a series of chapters telling us what's wrong with many of the statistics we see every day. And he makes good points. We discover the dangers of rankings and of trying to summarise a complex distinction in a single measure. We see why proxies are poor (essentially, when you can't actually measure what you want to, you use something else that might be an appropriate indicator, but often isn't). We explore why polls are problematic. And there's a bit on Bayesian statistics and how it still tends to be disregarded by some, including the courts.

This is almost all negative, which is fine. Books like The Tiger that Isn't, one of my favourite titles on dodgy numbers and statistics, take just such an approach. But they do so with lots of interesting stories and a plethora of examples. Although Goodwin does use some specifics, they feel more like case studies - they just don't engage in the way they should, and there are too many generalities.

The other side of the book is that we're promised a toolkit to help us cut through dodgy statistics. This is a good idea, but I'm really not sure how to use much of it in practice. For example, one instruction is 'If a questionnaire was used to obtain the number, was it biased?', with specifics such as checking whether, for example, it's based on leading questions, or on questions which unrealistically limit people's response options. But I don't see how this can be used. This is supposed to be a toolkit to help ordinary people deal with statistics in the media (social and mainstream) - but how often does an article include details of the questionnaire used, or even the sample size? How are we supposed to answer these questions?

One last observation - the author proved at one point to be, perhaps surprisingly, honest. He tells us of an experiment he did in which he showed students information on different tech products, asked which they would prefer, then repeated the exercise twice over four months, finding their choices were not set in stone but changed. Goodwin points out limitations: that there may have been changes in technology over that period, news and reviews could have changed opinions, or, as they weren't actually buying the technology, the students might not have cared much about the choice. 'But some of the inconsistency may have arisen simply because the respondents didn't really know what their true preferences were.' This is true, but equally it may not be - in effect, he's telling us it wasn't a very useful study. (It would be interesting to ask, for example, why products that stay the same for decades and aren't likely to be reviewed, such as chocolate bars, weren't used, rather than tech.) Admitting this is quite brave.

I didn't dislike the book, and although it inevitably wheels out a lot of familiar examples, there were some new ones I hadn't come across before. But there was something about the presentation that just didn't do it for me. Even so, if, like me, you collect titles on dodgy statistics and how to deal with them, it's definitely one to add to the collection.

Review by Brian Clegg

