
Rule of the Robots - Martin Ford ****

Douglas Adams described how the (fictional) Hitchhiker's Guide to the Galaxy started off in an overexcited manner, telling the reader how mind-bogglingly big space is - but after a while it settled down a bit and started telling you things you actually needed to know. Rule of the Robots is a bit like this. It begins with far too much over-excitement about what artificial intelligence can do, but then it settles down to a reasonable picture of what is achievable, what's good and bad about it, what it's likely to do and how it might need controlling.

The point where I started to be happier with Martin Ford was when he described the progress (and problems) with self-driving cars. For too long, AI enthusiasts have over-sold how easy it would be to have self-driving vehicles replacing all the error-prone human drivers on the road. It's certainly likely that over the next couple of decades we will see them in restricted applications on carefully managed bits of road - but the chances of a self-driving car being able to operate safely in a busy city or on a winding country road are very distant. Ford explains the difficulties well. It's not just the technical problems either. He points out that, for example, moving to self-driving taxis, which seems to be the goal of the likes of Uber and Lyft, has real problems, because their human drivers don't just do the driving - they provide the car, keep it clean and maintained and more. Owning a fleet of very expensive self-driving cars is a whole different proposition - one that may not be financially viable when it can be undercut by an organisation with human drivers and car-owners.

Ford goes on to describe the capabilities and limitations of deep learning systems, and to consider the impact of AI automation on jobs. Here, perhaps, he is a little pessimistic: in the past, automation has tended to shift and expand activity rather than destroy jobs. But where he comes into his own is when he gets on to China and the rise of the AI surveillance state. I've read quite a bit about China's use of AI, but Ford goes into considerably clearer detail than I've seen elsewhere. He then goes on to examine the implications for the West, and the US in particular, pointing out the dilemma between, say, US AI workers refusing to undertake projects whose politics they dislike, and the risk this poses of the US being left behind.

The book is also very good on the dangers of AI. For too long, we've had something close to hysteria about AIs taking over the world, driven by hype about the 'singularity' and other superintelligent AI speculation. But, as Ford points out, the most likely prediction is that we are 80+ years away from the general artificial intelligence these panics are based on - in reality, the risk comes from misuses of the technology, whether it be for social control and autonomous weapons or AI systems making decisions about us that can be accidentally or intentionally biased in various ways.

Although Ford does recognise the limitations that mean we won't have generally available self-driving cars for quite a long time, he does still skate over some of the weaknesses of AI - for example, he doesn't mention catastrophic forgetting. It's true that you can train a machine learning-based system to be good at distinguishing between, say, photos of cats and dogs. Let's imagine you decide to add another distinction - say between chairs and tables. You train the system up. But now it will have forgotten how to distinguish cats and dogs. To be fair, Ford does mention the related 'brittleness' of many AI systems - he points out an example of the famous DeepMind system that proved great at playing some Atari video games. Move the position of the paddle a couple of pixels up the screen and it's no longer any good. But more could have been made of this.
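(A minimal illustration of catastrophic forgetting, not from the book: the sketch below uses scikit-learn and invented synthetic 'task A' and 'task B' data as stand-ins for the cats/dogs and chairs/tables example - a small network is trained on one task, then only on the other, and its skill at the first typically collapses.)

```python
# Illustrative sketch of catastrophic forgetting (synthetic data, hypothetical setup).
# Train a small neural network on 'task A', then continue training only on 'task B';
# accuracy on task A typically collapses towards chance once task B training is done.
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier

SEED = 42

# Task A: stand-in for 'cats vs dogs' - two clusters in one region of feature space
X_a, y_a = make_blobs(n_samples=1000, centers=[[0, 0], [3, 3]], random_state=SEED)
# Task B: stand-in for 'chairs vs tables' - two clusters somewhere else entirely
X_b, y_b = make_blobs(n_samples=1000, centers=[[10, -10], [13, -7]], random_state=SEED)

model = MLPClassifier(hidden_layer_sizes=(32,), random_state=SEED)

# Phase 1: train only on task A (the first partial_fit call must declare the classes)
model.partial_fit(X_a, y_a, classes=[0, 1])
for _ in range(200):
    model.partial_fit(X_a, y_a)
print("Task A accuracy after learning A:", model.score(X_a, y_a))

# Phase 2: keep training, but only on task B - no task A examples are revisited
for _ in range(200):
    model.partial_fit(X_b, y_b)
print("Task A accuracy after learning B:", model.score(X_a, y_a))  # usually much worse
print("Task B accuracy after learning B:", model.score(X_b, y_b))
```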

A bigger concern in the early, over-excited part was Ford's comparison of AI with electricity, suggesting it will be an equivalent for our century. I had two problems with this analogy. Firstly, electricity is a universal power source that can be used to do anything - AI can only do one thing: information manipulation. It may have lots of applications, but it's not in the same category. A more apt comparison would be the electric motor or the silicon chip. The second problem is that AI is also one of the (very) many things that depends on electricity - a clockwork AI is pretty unlikely. So it can hardly be said to be the next electricity.

When I first hit the over-excited bit I was not at all impressed with this book - less so than I was with Ford's previous title The Rise of the Robots - but it grew on me. For its balanced view of self-driving cars and Ford's thoughts on China's use of AI, how the West should respond and the challenges it presents, this is a valuable book that deserves to be widely read.

Paperback: Bookshop.org

Kindle
Using these links earns us commission at no cost to you
Review by Brian Clegg

