
The AI Delusion - Gary Smith *****

This is a very important little book ('little' isn't derogatory - it's just quite short and in a small format) - it gets to the heart of the problem with applying artificial intelligence techniques to large amounts of data and thinking that somehow this will result in wisdom.

Gary Smith is an economics professor who teaches statistics, understands numbers and, despite being a self-confessed computer addict, is well aware of the limitations of computer algorithms and big data. What he makes clear here is that we forget at our peril that computers do not understand the data that they process, and as a result are very susceptible to GIGO - garbage in, garbage out. Yet we are increasingly dependent on computer-made decisions coming out of black box algorithms which mine vast quantities of data to find correlations and use these to make predictions. What's wrong with this? We don't know how the algorithms are making their predictions - and the algorithms don't know the difference between correlation and causality.

The scientist's (and statistician's) mantra is often 'correlation is not causality.' What this means is that if we have two things happening in the world that we choose to measure - let's call them A (it could be banana imports) and B (it could be the number of pregnancies in the country) - and B rises and falls as A does, it doesn't mean that B is caused by A. It could be that A is caused by B, that A and B are both caused by C, or that it's just a random coincidence. The banana import/pregnancy correlation actually happened in the UK for a number of years after the second world war. Human statisticians would never think the pregnancies were caused by banana imports - but an algorithm would not know any better.

In the banana case there was probably a C linking the two, but because modern data mining systems handle vast quantities of data and look at hundreds or thousands of variables, it is almost inevitable that they will discover apparent links between two sets of information where the coincidence is totally random. The correlation happens to work for the data being mined, but is totally useless for predicting the future. 
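This effect is easy to demonstrate for yourself. The following sketch (my own illustration, not an example from the book) generates a couple of hundred entirely unrelated random 'variables' and then mines them for the strongest correlation - which turns out to be impressively high, despite the data being pure noise:

```python
# Smith's point in miniature: mine enough purely random variables
# and some pair will correlate strongly by sheer coincidence.
import random

random.seed(42)

n_vars, n_obs = 200, 20  # 200 unrelated 'variables', 20 observations each
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

# Data mining: check every pair and keep the strongest correlation.
best = max(abs(corr(data[i], data[j]))
           for i in range(n_vars)
           for j in range(i + 1, n_vars))
print(f"Strongest correlation found in pure noise: {best:.2f}")
```

With roughly 20,000 pairs to trawl through, a correlation strong enough to look 'significant' is practically guaranteed - and, being coincidence, it will be useless for predicting anything.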

This is the thesis at the heart of this book. Smith makes four major points that really should be drummed into all stock traders, politicians, banks, medics, social media companies... and anyone else who is tempted to think that letting a black box algorithm loose on vast quantities of data will make useful predictions. First, there are patterns in randomness. Given enough values, totally random data will have patterns embedded within it - it's easy to assume that these have a meaning, but they don't. Second, correlation is not causality. Third, cherry picking is dangerous. Often these systems pick the bits of the data that work and ignore the bits that don't - an absolute no-no in proper analysis. And finally, data without theory is treacherous. You need to have a theory and test it against the data - if you try to derive the theory from the data with no oversight, it will always fit that data, but is very unlikely to be correct.

My only problems with the book are that Smith insists for some reason on making databases two words ('data bases' - I know, not exactly terrible), and that it can feel a bit repetitious, because most of it consists of repeated examples of how the four points above lead AI systems to make terrible predictions - from Hillary Clinton's system mistakenly telling her team where to focus canvassing effort to the stock trading systems produced by 'quants'. But I think that repetition is important here, because it shows just how much we are under the sway of these badly thought-out systems - and how much we need to insist that algorithms that affect our lives are transparent and work from knowledge, not through data mining.

As Smith points out, we regularly hear worries that AI systems are going to get so clever that they will take over the world. But actually the big problem is that our AI systems are anything but intelligent: 'In the age of Big Data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us.'

This should be a big-selling book. A plea to the publisher: change the cover (it just looks like it's badly printed and smudged) and halve the price to give it wider appeal.


Review by Brian Clegg
