Ian Watson – Four Way Interview

Ian Watson is a professor of artificial intelligence at the University of Auckland, New Zealand. His latest book is The Universal Machine: from the dawn of computing to digital consciousness, exploring the legacy of Alan Turing, the inventor of the computer. He is a keen blogger and is currently researching game AI.
Why science?
I’ve never felt it’s a matter of choosing science over something else. At school I specialised in Biology, Chemistry and English Literature for my university entrance exams. My school said, “You can’t do that! You’ve got to specialise in either the sciences or the arts.” I replied, “I can do that. The timetable permits it – I checked.” Going from a chemistry or biology lab to an English Lit. class or vice versa was constantly refreshing and stimulating.
Steve Jobs was often photographed in front of a mythical street intersection: Liberal Arts and Technology. He famously said “In my perspective … computer science is a liberal art.” I agree with him; to be inventive in computer science you have to imagine it first. It’s a creative act – computer scientists are not discoverers exploring reality and bringing back theories, we’re creative artists, imagining things and making them happen.
Why this book?
There are many books about different parts of the history of computing, from Charles Babbage to the present day. I’ve read many of them, but most are only for the real enthusiast: 600-plus pages on Steve Jobs or Facebook is not for everyone. I wanted to write one book that would cover all the basics from the 1830s to the present day and be accessible and easy to read. 2012 is also the centenary of Alan Turing’s birth, and I felt that a book that put his legacy in its full context would be a great contribution to the Alan Turing Year celebrations.
What’s next?
I’ve now started work on a popular science history of artificial intelligence. AI is probably the aspect of computer science that most fascinates and even frightens people. I intend to look at the history of AI from the mythical creations of ancient history, through mechanical automata, to the birth of AI in the 1950s and on to today. The book will then deal, in layperson’s terms, with the main techniques of AI and look at how AI has been applied in business and industry, health care, the arts and entertainment, and the military. The final chapter will look over the horizon at what AI may have in store for us in the future.
What’s exciting you at the moment?
In computing it’s the power of the cloud to provide us with unlimited processing power and data storage wherever we are, via our mobile devices. Soon we’ll no longer care how much processing power or memory our new gadget has – this will be utterly irrelevant. This will make a completely new class of intelligent applications feasible – I call it cloud intelligence. Perhaps I should write a book about it.
