
Rule of the Robots - Martin Ford ****

Douglas Adams described how the (fictional) Hitchhiker's Guide to the Galaxy started off in an overexcited manner, telling the reader how mind-bogglingly big space is - but after a while it settled down and started telling you things you actually needed to know. Rule of the Robots is a bit like this. It begins with far too much over-excitement about what artificial intelligence can do, but then settles down into a reasonable picture of what is achievable, what's good and bad about it, what it's likely to do and how it might need controlling.

The point where I started to be happier with Martin Ford was when he described the progress (and problems) with self-driving cars. For too long, AI enthusiasts have over-sold how easy it would be to have self-driving vehicles replacing all the error-prone human drivers on the road. It's certainly likely that over the next couple of decades we will see them in restricted applications on carefully managed stretches of road - but the chances of a self-driving car being able to operate safely in a busy city or on a winding country road remain very distant. Ford explains the difficulties well. And it's not just the technical problems. He points out, for example, that moving to self-driving taxis, which seems to be the goal of the likes of Uber and Lyft, has real problems, because their human drivers don't just do the driving - they provide the car, keep it clean and maintained, and more. Owning a fleet of very expensive self-driving cars is a whole different proposition - one that may not be financially viable when it can be undercut by an organisation relying on human drivers and car-owners.

Ford goes on to describe the capabilities and limitations of deep learning systems, and to consider the impact of AI automation on jobs. Here, perhaps, he is a little pessimistic: in the past, automation has tended to shift and expand activity rather than destroy jobs. But where he comes into his own is when he gets on to China and the rise of the AI surveillance state. I've read quite a bit about China's use of AI, but Ford goes into considerably clearer detail than I've seen elsewhere. He then goes on to examine the implications for the West, and the US in particular, pointing out the dilemma that arises when, say, US AI workers refuse to undertake projects whose politics they dislike, but in doing so risk leaving the US behind.

The book is also very good on the dangers of AI. For too long, we've had something close to hysteria about AIs taking over the world, driven by hype about the 'singularity' and other speculation about superintelligent AI. But, as Ford points out, the most likely prediction is that we are 80+ years away from the general artificial intelligence these panics are based on - in reality, the risk comes from misuses of the technology, whether for social control and autonomous weapons, or AI systems making decisions about us that can be accidentally or intentionally biased in various ways.

Although Ford does recognise the limitations that mean we won't have generally available self-driving cars for quite a long time, he does still skate over some of the weaknesses of AI - for example, he doesn't mention catastrophic forgetting. It's true that you can train a machine learning-based system to be good at distinguishing between, say, photos of cats and dogs. But imagine you then decide to add another distinction - say between chairs and tables - and train the system on that. It will now have forgotten how to distinguish cats and dogs. To be fair, Ford does mention the related 'brittleness' of many AI systems - he points out the example of the famous DeepMind system that proved great at playing some Atari video games: move the position of the paddle a couple of pixels up the screen and it's no longer any good. But more could have been made of this.
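For readers who would like to see the effect rather than take it on trust, here is a minimal sketch of catastrophic forgetting - not something from the book, just an illustration. It assumes PyTorch is installed and uses synthetic two-feature data in place of actual photos, with 'task A' and 'task B' standing in for the cat/dog and chair/table distinctions.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(feature):
    # 400 random points; the class label is simply the sign of one chosen feature.
    # Which feature it is defines the "task" (our stand-in for cats/dogs vs chairs/tables).
    x = torch.randn(400, 2) * 2
    y = (x[:, feature] > 0).long()
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

xa, ya = make_task(0)   # "task A": label depends on the first feature
xb, yb = make_task(1)   # "task B": label depends on the second feature

train(model, xa, ya)
print("Task A accuracy after training on A:", accuracy(model, xa, ya))  # close to 1.0

train(model, xb, yb)    # keep training the same network, but only on task B
print("Task A accuracy after training on B:", accuracy(model, xa, ya))  # drops towards 0.5
print("Task B accuracy after training on B:", accuracy(model, xb, yb))  # close to 1.0

Because nothing in the second round of training rewards the network for remembering task A, the weights that encoded the first distinction get overwritten - which is the essence of the problem.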

A bigger concern in the early, over-excited part was Ford's comparison of AI with electricity, suggesting it will be for this century what electricity was for the last. I had two problems with this analogy. Firstly, electricity is a universal power source that can drive almost anything, while AI does only one thing: information manipulation. It may have lots of applications, but it's not in the same category - a more apt comparison would be the electric motor or the silicon chip. The second problem is that AI is itself one of the (very) many things that depend on electricity - a clockwork AI is pretty unlikely. So it can hardly be said to be the next electricity.

When I first hit the over-excited bit I was not at all impressed with this book - less so than I was with Ford's previous title Rise of the Robots - but it grew on me. For its balanced view of self-driving cars, and for Ford's thoughts on China's use of AI, how the West should respond and the challenges this presents, this is a valuable book that deserves to be widely read.

Paperback: Bookshop.org

Kindle
Using these links earns us commission at no cost to you
Review by Brian Clegg

