
Artificial Intelligence - Melanie Mitchell *****

As Melanie Mitchell makes plain, humans have limitations in their visual abilities, typified by optical illusions, but artificial intelligence (AI) struggles at a much deeper level with recognising what's going on in images. Similarly, in some ways the visual appearance of this book misleads. It's worryingly fat and bears the ascetic light blue cover of the Pelican series, which since my childhood has marked out books that were worthy but rarely readable. This, however, is an excellent book, giving a clear picture of how many AI systems go about their business and the huge problems designers of such systems face.

Not only does Mitchell explain the main approaches clearly, but her account is also readable and engaging. I read a lot of popular science books, and it's rare that I keep wanting to go back to one when I'm not scheduled to be reading it - this is one of those rare examples.

We discover how AI researchers have achieved the apparently remarkable abilities of, for example, the Go champion AlphaGo, or the Jeopardy!-playing Watson. In each case these systems are tightly designed for a particular purpose and arguably have no intelligence in the broad sense. As for what's probably the most impressively broad AI application of modern times, self-driving cars, Mitchell emphasises how limited they truly are. Like so many AI applications, the hype far exceeds the reality - when companies and individuals talk of self-driving cars being commonplace in a few years' time, it's quite clear that this could only be the case in a tightly controlled environment.

One example that Mitchell explores in considerable detail is so-called adversarial attacks, a peculiarly AI form of hacking where, for example, those in the know can make changes to images that are invisible to the human eye but that force an AI system to interpret what it is seeing as something totally different. It's a sobering thought that, by simply applying a small sticker to a stop sign on the road - unnoticeable to a human driver - an adversarial attacker can turn the sign into a speed limit sign as far as an AI system is concerned, with potentially fatal consequences.
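The core trick behind such attacks can be sketched in a few lines. Below is a minimal, illustrative example of the fast gradient sign method applied to a toy logistic classifier - the weights, inputs and epsilon value are invented for illustration, not taken from the book or from any real vision system. The idea is simply that nudging every input value a tiny amount in the direction that increases the model's loss can flip its answer, even though each individual change is small.

```python
# A toy sketch of a gradient-sign adversarial attack (illustrative only).
# The "model" is a hand-rolled logistic classifier on random data.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift each input coordinate by eps in the sign of the
    log-loss gradient with respect to the input."""
    p = sigmoid(w @ x + b)      # model's probability of class 1
    grad_x = (p - y) * w        # gradient of log-loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=20)         # pretend "trained" weights
b = 0.0
x = rng.normal(size=20)         # an ordinary input
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # the class the model assigns

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_before = sigmoid(w @ x + b)
p_after = sigmoid(w @ x_adv + b)
print(f"confidence before: {p_before:.3f}, after attack: {p_after:.3f}")
```

No single coordinate moves by more than 0.3, yet the model's confidence in its original answer is guaranteed to drop, because every coordinate pushes the loss in the same direction - a crude analogue of the imperceptible sticker on the stop sign.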

Don't get me wrong: Mitchell, a professor of computer science who has specialised in AI, is no AI luddite. But unlike many of the enthusiasts in the field (or, for that matter, those who are terrified AI is about to take over the world), she is able to give us a realistic, balanced view, showing us just how far AI has to go to come close to the more general abilities humans make use of all the time, even in simple tasks. AI does a great job in something like Siri or Google Translate or unlocking a phone with a face - but AI systems still have no concept of understanding (as opposed to recognising) what is in an image. Mitchell makes it clear that where systems learn from large amounts of data, it is usually impossible to uncover how they are making decisions (which makes the EU's law requiring transparent AI decisions pretty much impossible to implement), so we really shouldn't trust them with important outcomes, as they could easily be basing their decisions on totally irrelevant inputs.

Apart from occasionally finding the explanations of the workings of types of neural network a little hard to follow, the only thing that made me raise an eyebrow was being told that Marvin Minsky 'coined the phrase "suitcase word"' - I would have thought 'derived* the phrase from Lewis Carroll's term "portmanteau word"' would have been closer to reality.

There have been good books on the basics of AI already, and excellent ones on the problems that 'deep learning' and big data systems throw up. But without a doubt, Mitchell's book sets a new standard in giving an understanding of what's possible and how difficult it is to go further. It should be read by every journalist, PR person and politician before they pump out yet more hype on the AI future. Recommended.

* Polite term

