
The Laws of Thought - Tom Griffiths *****

In giving us a history of attempts to explain our thinking abilities, Tom Griffiths demonstrates an excellent ability to pitch information just right for the informed general reader. 

We begin with Aristotelian logic and the way Boole and others transformed it into a kind of arithmetic, before a first introduction to computing and theories of language. Griffiths covers a surprising amount of ground - we don't just get, for instance, the obvious figures of Turing, von Neumann and Shannon, but also the interaction between the computing pioneers and those trying to understand the way we think - for example in the work of Jerome Bruner, of whom I confess I'd never heard.

This would prove to be the case with a whole host of people who have made interesting contributions to the understanding of human thought processes. Sometimes their theories were contradictory - this isn't an easy field to successfully observe - but always they were interesting. But for me, at least, what made the book doubly fascinating was that combination of the computing/AI side and the attempts to understand our mental processes - both in the similarities and overlaps, and the huge differences.

My only complaint about this book is that there is too much detail on the development of neural networks - too many characters are introduced and, while I appreciate how long it took to get to modern LLMs and the like, we don't need every nuanced step. There were also a couple of small items that surprised me - when talking about the way we sometimes need far more evidence than at other times when applying induction, I was surprised there was no mention of Bayesian thinking - this is left until late in the book, despite Bayes and Bayesian approaches to probability predating all the work on AI. I appreciate most in the field were late coming to its specific application, but its relevance to induction was obvious long before this. And in the description of John Wilkins' fascinating seventeenth-century attempt at a universal language it would have been fun to have brought in Samuel Delany's 1966 SF novel Babel-17. But these are small moans indeed.

By merging an exploration of the development of psychologists' ideas of the way the mind processes language and concepts with the development of information theory and neural networks, Griffiths does the best job I've ever seen of helping the reader understand how we think, and how technology approximates the results of intelligence without any understanding of the information involved.

Review by Brian Clegg
