Human-Centered AI - Ben Shneiderman ****

Reading some popular science books is like biting into a luscious peach. Others are more like being presented with an almond - you have to do a lot of work to get through a difficult shell to get to the bit you want. This is very much an almond of a book, but it's worth the effort.

At the time of writing, two popular science topics have become so ubiquitous that it's hard to find anything new to say about them - neuroscience and artificial intelligence. Almost all the (many) AI books I've read have either been paeans to its wonders or dire warnings of how AI will take over the world or make opaque and biased decisions that destroy lives. What is really refreshing about Ben Shneiderman's book is that it does neither of these - instead it provides an approach to benefit from AI without suffering the negative consequences. That's why it's an important piece of work.

To do this, Shneiderman takes us right back to the philosophical contrast between rationalism and empiricism. Rationalism, we discover, is driven by rules, logic and well-defined boundaries. Empiricists derive their understanding from observation of the real world, where things are more fuzzy. Shneiderman then expands this distinction to that between science and innovation. Here, science is seen as focussing on the essence of what is happening, while innovation is driven by applications.

When we get to AI, Shneiderman argues that many AI researchers take the science approach - they want to understand how people think and to reproduce human-like intelligence in computers and human-like robots. The empirical, innovation-driven AI researchers, meanwhile, focus on ways that AI can support human abilities rather than duplicate and supplant them. It's the difference between providing a human replacement and an AI-driven super tool that enables the human to work far better. Although Shneiderman makes an effort to portray both sides fairly, there is no doubt that he comes down strongly on the empirical, innovation-driven side - human-centred AI. It is exploring this distinction that makes the book important. Shneiderman argues convincingly that we need to move from AI taking decisions and actions, replacing humans, to human-centred AI that augments human abilities.

Quite a lot of this is driven by the importance of the human-computer interface. Science-driven AI tends to have a poor or non-existent user interface, with the AI's processes opaque and impossible to control, whereas innovation-driven AI puts a lot of importance on having meaningful controls and interfaces. It's frustrating, then, that someone so strong on good user interfaces produces a book that has such a bad one - instead of the narrative structure of good writing, Human-Centered AI has the dire, rigid structure of a business book or textbook. We get sections with an opening summary, then an introductory chapter that tells you what the section is going to tell you, then a bit of useful content, before a closing chapter that summarises the section. There is so much repetition of the basic points that it becomes really irritating. The interface of smartphone cameras, for example, is used as an exemplar almost word for word many times over.

The useful content could be covered in a couple of magazine articles - yet when you hit the good stuff it is really good stuff. This is by no means the best way of putting the information across - nevertheless, by dint of this valuable message, it is one of the most important AI books of the last few years.

Hardback: 
Bookshop.org

  

Kindle 
Using these links earns us commission at no cost to you
Review by Brian Clegg
