Four Way Interview - Hector Levesque

Hector Levesque is Professor Emeritus in the Department of Computer Science at the University of Toronto. He worked in the area of knowledge representation and reasoning in artificial intelligence. He is the co-author of a graduate textbook and co-founder of a conference in this area. He received the Computers and Thought Award in 1985 near the start of his career, and the Research Excellence Award in 2013 near the end, both from IJCAI (the International Joint Conferences on Artificial Intelligence). His latest title is Common Sense, The Turing Test, and the Quest for Real AI.

Why computer science?

Computer science is not really the science of computers, but the science of computation, a certain kind of information processing, with only a marginal connection to electronics. (I prefer the term used in French and other languages, informatics, but it never really caught on in North America.) Information is somewhat like gravity: once you are made aware of it, you realize that it is everywhere. You certainly cannot have a Theory of Everything without a clear understanding of the role of information. 

Why this book?

AI is the part of computer science concerned with the use of information in the sort of intelligent behaviour exhibited by people. While there is an incredible amount of buzz (and money) surrounding AI technology these days, it is mostly concerned with what can be learned by training on massive amounts of data. My book makes the case that this is an overly narrow view of intelligence, that what people are able to do, and what early AI researchers first proposed to study, goes well beyond this.

What's next?

I have a technical monograph with Gerhard Lakemeyer published in 2000 by MIT Press on the logic of knowledge bases, that is, on the relationship between large-scale symbolic representations and abstract states of knowledge. We are working on a new edition that would incorporate some of what we have learned about knowledge and knowledge bases since then. 

What's exciting you at the moment?

For me, the most exciting work in AI these days, at least on the theoretical side, concerns the general mathematical and computational integration of logical and probabilistic reasoning, seen, for example, in the work of Vaishak Belle. It is clear to all but diehards that both kinds of knowledge will be needed, but previous solutions have been somewhat ad hoc, requiring us to give up something from one or the other.
