Human-Centered AI - Ben Shneiderman ****

Reading some popular science books is like biting into a luscious peach. Others are more like being presented with an almond - you have to do a lot of work to get through a difficult shell to get to the bit you want. This is very much an almond of a book, but it's worth the effort.

At the time of writing, two popular science topics have become so ubiquitous that it's hard to find anything new to say about them - neuroscience and artificial intelligence. Almost all the (many) AI books I've read have either been paeans to its wonders or dire warnings of how AI will take over the world or make opaque and biased decisions that destroy lives. What is really refreshing about Ben Shneiderman's book is that it does neither of these - instead it provides an approach to benefit from AI without suffering the negative consequences. That's why it's an important piece of work.

To do this, Shneiderman takes us right back to the philosophical contrast between rationalism and empiricism. Rationalism, we discover, is driven by rules, logic and well-defined boundaries. Empiricists derive their understanding from observation of the real world, where things are more fuzzy. Shneiderman then expands this distinction to that between science and innovation. Here, science is seen as focusing on the essence of what is happening, while innovation is driven by applications.

When we get to AI, Shneiderman argues that many AI researchers take the science approach - they want to understand how people think and to reproduce human-like intelligence in computers and human-like robots. The empirical, innovation-driven AI researchers, meanwhile, focus on ways that AI can support human abilities rather than duplicate and supplant them. It's the difference between providing a human replacement and an AI-driven super tool that enables the human to work far better. Although Shneiderman makes an effort to portray both sides fairly, there is no doubt that he comes down strongly on the empirical, innovation-driven side - human-centred AI. It is exploring this distinction that makes the book important. Shneiderman argues convincingly that we need to move from AI taking decisions and actions, replacing humans, to human-centred AI that augments human abilities.

Quite a lot of this is driven by the importance of the human-computer interface. Science-driven AI tends to have a poor or non-existent user interface, with the AI's processes opaque and impossible to control, whereas innovation-driven AI puts a lot of importance on having meaningful controls and interfaces. It's frustrating, then, that someone so strong on good user interfaces produces a book that has such a bad one - instead of the narrative structure of good writing, Human-Centred AI has the dire, rigid structure of a business book or textbook. We get sections with an opening summary, then an introductory chapter that tells you what the section is going to tell you, then a bit of useful content, before a closing chapter that summarises the section. There is so much repetition of the basic points that it becomes really irritating. The interface of smartphone cameras, for example, is used as an exemplar almost word for word many times over.

The useful content could be covered in a couple of magazine articles - yet when you hit the good stuff it is really good stuff. This is by no means the best way of putting the information across - nevertheless, by dint of this valuable message, it is one of the most important AI books of the last few years.

Hardback: 
Bookshop.org

  

Kindle 
Using these links earns us commission at no cost to you
Review by Brian Clegg
