Ian Watson – Four Way Interview

Ian Watson is a professor of artificial intelligence at the University of Auckland, New Zealand. His latest book is The Universal Machine: From the Dawn of Computing to Digital Consciousness, exploring the legacy of Alan Turing, the inventor of the computer. He is a keen blogger and is currently researching game AI.
Why science?
I’ve never felt it’s a matter of choosing science over something else. At school I specialised in Biology, Chemistry and English Literature for my university entrance exams. My school said, “You can’t do that! You’ve got to specialise in either the sciences or the arts.” I replied, “I can do that. The timetable permits it – I checked.” Going from a chemistry or biology lab to an English Lit. class or vice versa was constantly refreshing and stimulating.
Steve Jobs was often photographed in front of a mythical street intersection: Liberal Arts and Technology. He famously said “In my perspective … computer science is a liberal art.” I agree with him; to be inventive in computer science you have to imagine it first. It’s a creative act – computer scientists are not discoverers exploring reality and bringing back theories, we’re creative artists, imagining things and making them happen.
Why this book?
There are many books about different parts of the history of computing, from Charles Babbage to the present day. I've read many of them, but most are only for the real enthusiast: 600-plus pages on Steve Jobs or Facebook is not for everyone. I wanted to write one book that would cover all the basics from the 1830s to the present day and be accessible and easy to read. 2012 is also the centenary of Alan Turing's birth, and I felt that a book that put his legacy in its full context would be a great contribution to the Alan Turing Year celebrations.
What’s next?
I've now started work on a popular science history of artificial intelligence. AI is probably the aspect of computer science that most fascinates and even frightens people. I intend to look at the history of AI from the mythical creations of ancient history, through mechanical automata, to the birth of AI in the 1950s and on to today. The book will then deal, in layperson's terms, with the main techniques of AI and look at how AI has been applied in business and industry, health care, the arts and entertainment, and the military. The final chapter will look over the horizon at what AI may have in store for us in the future.
What’s exciting you at the moment?
In computing it's the power of the cloud to provide us with unlimited processing power and data storage wherever we are, via our mobile devices. Soon we'll no longer care how much processing power or memory our new gadget has – this will be utterly irrelevant. This will enable a completely new class of intelligent applications to become feasible. I call it cloud intelligence – perhaps I should write a book about it.