Deep Learning - John Kelleher **

This is an entry in a series from the MIT Press that selects a small part of a topic (in this case, a subset of artificial intelligence) and gives it an 'essential knowledge' introduction. The problem is that there seems to be no consistency in the target audience of the series.

I previously reviewed Virtual Reality in the same series, and it kept things relatively simple and approachable for the general reader, even if it did overdo the hype. This book by John Kelleher starts gently, but by about halfway through it has become a full-blown simplified textbook with far too much in-depth technical content. That's exactly what you don't want in a popular science title.

What we get is plenty of detail on what deep learning-based systems are and how they work at the technical level, but there is practically nothing on how they fit with applications (unless you count playing games), which are described but not really explained, nor is there much on the problems that arise when deep learning is used for real-world applications. There is, admittedly, a passing reference to the difficulty of understanding how a deep learning AI system came to a decision and how this clashes with the EU's GDPR requirement for transparency and explanation, but it feels as if this is done more to criticise the naivety of the legislation than to highlight the danger of using such systems.

Similarly, I saw nothing about the danger of deep learning systems trained on big data picking up on correlations that don't involve any causal link, nor does the book discuss the long tail problem that arises with inputs that are relatively uncommon and so are unlikely to turn up in the training data. We also read nothing about the danger of adversarial attacks, which can fool these systems into misinterpreting inputs through tiny changes, or the difficulty such systems have with real, messy environments as opposed to the rigid rules of a game.

Overall, the book is pitched at the wrong level and doesn't cover the aspects that really matter to the public. It may well do fine as an introductory text for a computer science student, but that doesn't fit with the blurb on the back, which implies it is intended for public consumption.

Review by Brian Clegg
