As Melanie Mitchell makes plain, humans have limitations in their visual abilities, typified by optical illusions, but artificial intelligence (AI) struggles at a much deeper level with recognising what's going on in images. In a similar way, the visual appearance of this book misleads. It's worryingly fat and bears the ascetic light blue cover of the Pelican series, which since my childhood has been a marker of books that were worthy but rarely readable. This, however, is an excellent book, giving a clear picture of how many AI systems go about their business and the huge problems designers of such systems face.
Not only does Mitchell explain the main approaches clearly, but her account is also readable and engaging. I read a lot of popular science books, and it's rare that I keep wanting to go back to one when I'm not scheduled to be reading it - this is one of those exceptions.
We discover how AI researchers have achieved the apparently remarkable abilities of, for example, the Go champion AlphaGo, or the Jeopardy!-playing Watson. In each case these systems are tightly designed for a particular purpose and arguably have no intelligence in the broad sense. As for what's probably the most impressively broad AI application of modern times, self-driving cars, Mitchell emphasises how limited they truly are. Like so many AI applications, the hype far exceeds the reality - when companies and individuals talk of self-driving cars being commonplace in a few years' time, it's quite clear that this could only be the case in a tightly controlled environment.
One example that Mitchell explores in considerable detail is so-called adversarial attacks, a form of hacking peculiar to AI in which, for example, those in the know can make changes to images that are invisible to the human eye but that force an AI system to interpret what it is seeing as something totally different. It's a sobering thought that by simply applying a small sticker to a stop sign on the road - unnoticeable to a human driver - an adversarial attacker can turn the sign into a speed limit sign as far as an AI system is concerned, with potentially fatal consequences.
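To give a flavour of how such attacks work under the hood - this is my own illustration, not one from the book - here is a minimal sketch of the fast gradient sign method, one well-known recipe for computing an imperceptible image perturbation. It assumes a differentiable PyTorch classifier called `model`; the function name and parameters are mine, chosen for clarity.

```python
# A minimal sketch of the fast gradient sign method (FGSM), one
# well-known way of producing the kind of invisible image tweak
# described above. Illustrative only - not the specific attacks
# Mitchell covers, and `model` is a placeholder classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # `image` is a batched tensor of pixel values in [0, 1];
    # `epsilon` caps the per-pixel change, which is why the
    # edited image can look identical to a human viewer.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases
    # the model's loss, i.e. away from the correct classification.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The striking point, which Mitchell brings out well, is how tiny `epsilon` can be while still flipping the system's answer completely.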
Don't get me wrong: Mitchell, a professor of computer science who has specialised in AI, is no AI Luddite. But unlike many of the enthusiasts in the field (or, for that matter, those who are terrified AI is about to take over the world), she is able to give us a realistic, balanced view, showing us just how far AI has to go to come close to the more general abilities humans make use of all the time, even in simple tasks. AI does a great job, for example, in something like Siri or Google Translate or unlocking a phone with a face - but AI systems still have no concept of, for example, understanding (as opposed to recognising) what is in an image. Mitchell makes it clear that where systems learn from large amounts of data, it is usually impossible to uncover how they are making decisions (which makes the EU's law requiring transparent AI decisions pretty much impossible to implement), so we really shouldn't trust them with important decisions, as they could easily be basing their conclusions on totally irrelevant features of the input.
Apart from occasionally finding the explanations of how various types of neural network operate a little hard to follow, the only thing that made me raise an eyebrow was being told that Marvin Minsky 'coined the phrase "suitcase word"' - I would have thought 'derived* the phrase from Lewis Carroll's term "portmanteau word"' would have been closer to reality.
There have been good books on the basics of AI already, and excellent ones on the problems that 'deep learning' and big data systems throw up. But without a doubt, Mitchell's book sets a new standard in giving an understanding of what's possible and how difficult it is to go further. It should be read by every journalist, PR person and politician before they pump out yet more hype on the AI future. Recommended.
* Polite term