Four Way Interview - Hector Levesque

Hector Levesque is Professor Emeritus in the Department of Computer Science at the University of Toronto. He worked in the area of knowledge representation and reasoning in artificial intelligence. He is the co-author of a graduate textbook and co-founder of a conference in this area. He received the Computers and Thought Award in 1985 near the start of his career, and the Research Excellence Award in 2013 near the end, both from IJCAI (the International Joint Conferences on Artificial Intelligence). His latest title is Common Sense, The Turing Test, and the Quest for Real AI.

Why computer science?

Computer science is not really the science of computers, but the science of computation, a certain kind of information processing, with only a marginal connection to electronics. (I prefer the term used in French and other languages, informatics, but it never really caught on in North America.) Information is somewhat like gravity: once you are made aware of it, you realize that it is everywhere. You certainly cannot have a Theory of Everything without a clear understanding of the role of information. 

Why this book?

AI is the part of computer science concerned with the use of information in the sort of intelligent behaviour exhibited by people. While there is an incredible amount of buzz (and money) surrounding AI technology these days, it is mostly concerned with what can be learned by training on massive amounts of data. My book makes the case that this is an overly narrow view of intelligence, that what people are able to do, and what early AI researchers first proposed to study, goes well beyond this.

What's next?

I have a technical monograph with Gerhard Lakemeyer published in 2000 by MIT Press on the logic of knowledge bases, that is, on the relationship between large-scale symbolic representations and abstract states of knowledge. We are working on a new edition that would incorporate some of what we have learned about knowledge and knowledge bases since then. 

What's exciting you at the moment?

For me, the most exciting work in AI these days, at least in the theoretical part of AI, concerns the general mathematical and computational integration of logical and probabilistic reasoning, seen, for example, in the work of Vaishak Belle. It's pretty clear to all but diehards that both types of knowledge will be needed, but previous solutions have been somewhat ad hoc and have required giving up something from one side or the other.
