
The AI Delusion - Gary Smith *****

This is a very important little book ('little' isn't derogatory - it's just quite short and in a small format) - it gets to the heart of the problem with applying artificial intelligence techniques to large amounts of data and thinking that somehow this will result in wisdom.

Gary Smith is an economics professor who teaches statistics. He understands numbers and, despite being a self-confessed computer addict, is well aware of the limitations of computer algorithms and big data. What he makes clear here is that we forget at our peril that computers do not understand the data that they process, and as a result are very susceptible to GIGO - garbage in, garbage out. Yet we are increasingly dependent on computer-made decisions coming out of black box algorithms which mine vast quantities of data to find correlations and use these to make predictions. What's wrong with this? We don't know how the algorithms are making their predictions - and the algorithms don't know the difference between correlation and causality.

The scientist's (and statistician's) mantra is often 'correlation is not causality.' What this means is that if we measure two things happening in the world - let's call them A (it could be banana imports) and B (it could be the number of pregnancies in the country) - and B rises and falls as A does, it doesn't mean that B is caused by A. It could be that A is caused by B, that A and B are both caused by some third factor C, or that it's just a random coincidence. The banana import/pregnancy correlation actually happened in the UK for a number of years after the Second World War. Human statisticians would never think the pregnancies were caused by banana imports - but an algorithm would not know any better.

In the banana case there was probably a C linking the two, but because modern data mining systems handle vast quantities of data and look at hundreds or thousands of variables, it is almost inevitable that they will discover apparent links between two sets of information where the coincidence is totally random. The correlation happens to work for the data being mined, but is totally useless for predicting the future. 
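This effect is easy to demonstrate for yourself. The sketch below (my illustration, not from the book - the variable counts and data sizes are arbitrary) generates a 'target' series of pure noise plus 1,000 more series of pure noise, then 'mines' them for the one that best matches the target. Despite every series being random, the best match reliably shows a strong correlation:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n_points, n_candidates = 20, 1000
# The 'target' and every candidate series are independent random noise.
target = [random.gauss(0, 1) for _ in range(n_points)]
candidates = [[random.gauss(0, 1) for _ in range(n_points)]
              for _ in range(n_candidates)]

# 'Mine' the candidates for the one most correlated with the target.
best = max(candidates, key=lambda c: abs(corr(c, target)))
print(f"best |correlation| among {n_candidates} random series: "
      f"{abs(corr(best, target)):.2f}")
```

With only 20 data points and 1,000 candidate variables, the winning correlation typically comes out well above 0.5 - a figure that would look impressive in a report, yet means precisely nothing.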

This is the thesis at the heart of this book. Smith makes four major points that really should be drummed into all stock traders, politicians, banks, medics, social media companies... and anyone else who is tempted to think that letting a black box algorithm loose on vast quantities of data will make useful predictions. First, there are patterns in randomness. Given enough values, totally random data will have patterns embedded within it - it's easy to assume that these have a meaning, but they don't. Second, correlation is not causality. Third, cherry picking is dangerous. Often these systems pick the bits of the data that work and ignore the bits that don't - an absolute no-no in proper analysis. And finally, data without theory is treacherous. You need to have a theory and test it against the data - if you try to derive the theory from the data with no oversight, it will always fit that data, but is very unlikely to be correct.
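Smith's fourth point - that a pattern derived from the data with no theory behind it is very unlikely to hold up - can also be made concrete. In this sketch (again my own illustration, with arbitrary sizes, not an example from the book), we mine the first half of a random dataset for the variable that best 'predicts' a random target, then test that same variable on the held-out second half:

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n_points, n_vars = 40, 500
target = [random.gauss(0, 1) for _ in range(n_points)]
variables = [[random.gauss(0, 1) for _ in range(n_points)]
             for _ in range(n_vars)]

half = n_points // 2
# 'Mine' the first half of the data for the strongest apparent predictor...
best = max(variables, key=lambda v: abs(corr(v[:half], target[:half])))
in_sample = abs(corr(best[:half], target[:half]))
# ...then test that same predictor on the held-out second half.
out_of_sample = abs(corr(best[half:], target[half:]))
print(f"in-sample |r| = {in_sample:.2f}, "
      f"out-of-sample |r| = {out_of_sample:.2f}")
```

The mined correlation looks strong on the data it was fished from and typically collapses on the fresh data - which is exactly what happens when a black box system trained on historical stock prices or voter records meets the real future.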

My only problems with the book are that Smith insists for some reason on making databases two words ('data bases' - I know, not exactly terrible), and that it can feel a bit repetitious, because most of it consists of repeated examples of how the four points above lead AI systems to make terrible predictions - from Hillary Clinton's system mistakenly telling her team where to focus canvassing effort to the stock trading systems produced by 'quants'. But I think that repetition is important here, because it shows just how much we are under the sway of these badly thought-out systems - and how much we need to insist that algorithms that affect our lives are transparent and work from knowledge, not through data mining.

As Smith points out, we regularly hear worries that AI systems are going to get so clever that they will take over the world. But actually the big problem is that our AI systems are anything but intelligent: 'In the age of Big Data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us.'

This should be a big-selling book. A plea to the publisher: change the cover (it just looks like it's badly printed and smudged) and halve the price to give it wider appeal.


Review by Brian Clegg

