
Writers are human

(Image caption: More like a similar number of bacteria)
One of my favourite things about being a science writer is getting emails from readers. Some say nice things about a book, others ask questions about science or writing, which I do my best to answer helpfully. However, every now and then there's an email pointing out an error in one of my books - and I have to admit those are distinctly depressing.

Let's be clear, I can say with reasonable confidence that every book I've ever written has a mistake or two in it. This isn't because I know what all the mistakes are - if I did, I would have corrected them before publication - but rather because pretty well every book I've ever read, fiction or non-fiction, has mistakes in it. It's just a fact of life. A book typically has between 60,000 and 120,000 words in it - the chances of an error slipping through are pretty high.

Many of these are typos: spelling mistakes, missing words, things that should be spotted in a general edit but often slip through. The worst of these can seem glaring when you spot them as a reader, but the reality of proof reading is that it can be very difficult to catch them. Once I'm reading, I can easily get absorbed in the content and miss the detail. Although professional proof readers are excellent, possessing an attention to detail that I could never match, even they miss some.

I have a suspicion that the move of proofs from paper to electronic form has not helped. I don't know if it's been researched in any detail, but there's a widely held belief that we are better at spotting errors on paper than on the screen. Time was when publishers sent out paper proofs. I used to be distinctly nervous when a US publisher would send me proofs across the Atlantic that had already been marked up by a proof reader - if they ever got lost in the mail all that work would have to be done again. I'm afraid there are some publishers now that seem to do hardly any proof reading (not my publishers, by the way), so I do occasionally get sent books littered with typos, but most still do a reasonable job.

However, the errors that are most likely to get my readers excited are factual errors. A historical observation that's wrong or a scientific 'fact' that doesn't hold true. Sometimes this reflects the gap between the book being written and the reader picking it up. Some of my books written, say, 20 years ago are still available. That's great - I'm not complaining. But science moves on. One example from a more recent title - in The Universe Inside You I mention that there are as many as ten times more bacteria than human cells in your body (see snippet above). This was the accepted figure at the time of writing, though it was based on a very vague guesstimate. The estimate has since been revised to roughly similar numbers of bacterial and human cells, so I was out by an order of magnitude - but it was not a recognisable error at the time of writing.

Other times I (or others involved in the book production) just make a mistake. Guess what - writers are human. In my biography of James Clerk Maxwell, Professor Maxwell's Duplicitous Demon, there were two interesting errors in this regard. I simply misread a number for the population of the Lancashire mill town of Bolton (I picked up the number from an adjacent line), meaning that the hardback has me incorrectly saying that Aberdeen in 1856 was only half the size of Bolton. (This was particularly embarrassing as it was picked up as an interesting fact in a Scottish newspaper.) By the time the paperback was published, I was able to correct this to say Aberdeen was around the size of Bolton.

(Image caption: The incorrect equations - bottom right B should be a D)
In the same book, I showed the iconic Maxwell's equations for electromagnetism, in one of the compact forms devised by Oliver Heaviside. I put these in correctly, but equations always have to be reset by the publisher and I didn't notice an error was introduced in doing this. This was pointed out to me in a discussion on my book at the Cheltenham Science Festival. The other contributor was a physicist. I had listened to her mangle a piece of history by attributing a quote to Maxwell that was actually said by someone else, but thought it would cause unnecessary embarrassment to point out the error. She had no such consideration in pointing out that the equation (which was only there to show the beauty of the form) was incorrect.
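For anyone wondering what that caption refers to: in a standard modern rendering of the compact vector form (which may not match the book's exact notation), the four equations read roughly as

∇ · D = ρ          ∇ × E = −∂B/∂t
∇ · B = 0          ∇ × H = J + ∂D/∂t

and in the usual two-by-two layout it is the bottom-right equation that should end with a D - the resetting turned that D into a B.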

In other cases, the error could be an oversimplification. In writing up their work, scientists are very careful about being as precise as they can, which can result in lacing every statement with provisos. These don't make for great reading, and part of the job of a science writer is to trim down the provisos without becoming too inaccurate. Interestingly, scientists writing popular science tend to oversimplify more than professional science writers - in part because they like to present their own pet theories, or the most widely accepted theory, as if it were fact. I've lost count of the number of physicists who, for example, say that dark matter exists, rather than that it is one option for explaining certain anomalies, alongside modified gravity theories and the like.

I have certainly been guilty of this myself, not always making it clear, when reporting something said by a historical figure, that the attribution is disputed or that the words may originally have been said by someone else. I recently got pulled up for attributing a comment to Lord Kelvin that definitely wasn't made on the occasion it is so often claimed to have been, and may not have been said by Kelvin at all. Yet a number of respectable sources do attribute it to Kelvin - so I feel no shame in using the quote, but I should have made the doubts about it clear.

So we writers do make mistakes. But do we want to be told about them? On a personal level, the answer is usually 'No.' No one likes being told 'you were wrong' - though a lot can depend on how we are told. An email I recently got about that Kelvin issue had the subject line 'Kelvin, being smug and...' - it feels like a good way to irritate someone. Probably the most important thing about telling an author about an error is timeliness. Please make sure the book was published in the last year or so.

If you point out an error in a book that was published a number of years ago, the chances are you are not the first - having a stream of people pointing out the same mistake is not particularly helpful. More to the point, unless the book is a continuing bestseller, it's probably too late to do anything about it. If it's caught early, the error can be corrected in later editions, but once most copies have already been printed - which for many books is in the first year - it's pretty much a fruitless exercise, all pain and no gain. (Admittedly ebooks can always be updated, but publishers rarely bother.)

To reiterate - we authors love hearing from readers. But bear in mind (again) that we are human. Our books are a bit like our children - we know they have their flaws, but don't necessarily appreciate having them pointed out.

Brian Clegg
