
The Demon in the Machine - Paul Davies *****

Physicists have a habit of dabbling in biology and, perhaps surprisingly, biologists tend to be quite tolerant of it. (I find it hard to believe the reverse would be true if biologists tried to do physics.) Perhaps one reason for that tolerance is Schrödinger's lecture series and book What is Life?, which had a huge impact on molecular biology - and it is with a reference to Schrödinger that, not surprisingly, Paul Davies begins his fascinating book.

At the heart of The Demon in the Machine (we'll come back to that demon in a moment) is the relationship between life and information. In essence, Davies points out that trying to reduce life to its simple physical components is like trying to work with a computer that has no software. The equivalent of software here is information - not just the best-publicised aspect, the information stored in DNA, but information on a far broader scale, operating in networks across the organism.

This information and its processing give life its emergent complexity, which is why, Davies suggests, Dawkins-style reductionism to the gene level entirely misses the point. What's more, the biological setup provides a particularly sophisticated relationship between information and the physical aspects of the organism, because the information can modify itself - it's as if a computer program could redesign itself as it went along.

The subtitle 'how hidden webs of information are solving the mystery of life' probably over-promises. As Davies makes clear, we still have no idea how life came into being in the first place. However, by bringing in this physical/information aspect we can at least get a better grip on the workings of the molecular machines inside organisms and on how biology does so much with so little. Here's where the demon of the title comes in. This is Maxwell's demon, the hypothetical miniature being dreamed up by the great nineteenth-century Scottish physicist James Clerk Maxwell.

Maxwell's demon has the remarkable ability to tweak the second law of thermodynamics, allowing, for example, heat to flow from a colder to a hotter body - or, to put it another way, providing a mechanism for entropy (the measure of disorder in a system) to decrease spontaneously. Entropy has a strong (negative) relationship with information, and Davies shows how miniature biological systems act in a demon-like fashion to effectively manage information.
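The review doesn't spell out the numbers, but the standard modern resolution of the demon puzzle is Landauer's principle: erasing one bit of information dissipates a minimum of kT ln 2 of heat, which is how the demon ultimately pays for the entropy it appears to cheat. A minimal sketch of that arithmetic (the temperature choice is illustrative):

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) of energy.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 310.0           # roughly body temperature, in kelvin (illustrative choice)

energy_per_bit = k_B * T * math.log(2)
print(f"Minimum cost to erase one bit at {T} K: {energy_per_bit:.3e} J")
```

At around 3 × 10⁻²¹ joules per bit, the cost is tiny - part of why demon-like molecular machinery can be so thermodynamically efficient.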

There's lots to like here, from the best explanation I've seen of the relationship of information and entropy to fascinating coverage of how far we've gone beyond the selfish gene. This is not just about basic epigenetic processes (operating outside of genes, switching them on and off and so on) but how, for example, the electric field of a (biological) cell apparently has a role to play in 'sculpting' the physical structure of an organism.

My only real complaint is that in the part of the chapter 'Enter the Demon' dealing with information engines, and in most of the chapter 'The Logic of Life', describing the relationship between living organisms and computation, Davies fails to put across clearly just what is going on. I read it, but didn't feel I gained as much information (ironically) as I needed from it. There was also one very odd statistic. We're told the information in a strand of DNA comes to 'about 2 billion bits - more than the information contained in all the books in the Library of Congress.' There are about 32 million books in the Library of Congress, which gives us on average 62.5 bits per book. Unless those are very short books, some information has gone astray.
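That back-of-envelope check is trivial to reproduce (the 32 million figure is the review's approximation of the Library of Congress book count):

```python
bits_in_dna = 2_000_000_000   # 'about 2 billion bits', as quoted from the book
books_in_loc = 32_000_000     # approximate Library of Congress book count

bits_per_book = bits_in_dna / books_in_loc
print(bits_per_book)  # 62.5 bits - roughly eight ASCII characters per 'book'
```

At eight bits per character, 62.5 bits is about an eight-letter word per book, hence the complaint.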

Really interesting, then, from a transformed understanding of the importance of information in living organisms through to Davies' speculation on whether biological systems need new physical laws to describe them. But expect to come away feeling you need to read it again to be sure what it said.
Review by Brian Clegg

Comments

  1. Nick Lane argues in his book The Vital Question that biological systems are not really working to decrease entropy; they exist because they essentially don't violate the second law of thermodynamics. I wonder how that reconciles with the theory here.

  2. I find it difficult to follow some of this stuff. Not the fluffy conceptual models, but the simple, detail-level stuff.

    e.g. p95, the text describing Fig 11 says 'barred lines [indicate inhibition]'. Then it seems to use a different term - 'loopy broken arrows' - to denote self-inhibition. The change of term is confusing but not fatal.

    But Fig 11 contains no broken or barred loopy arrows, only solid ones, thus indicating self-activation, not self-inhibition. Again, I guess, not fatal, if the reader is expected to be alert for errors or misprints.

    But Table 2, p96 shows that in at least two cases the loop is self-suppression, so the loop should be broken or barred. Still not quite fatal.

    But then, following Table 2 and Fig 11 together, node G suddenly gets activated in step 8, although in Fig 11 there is no 'incoming [activation] arrow' pointing at it, except for its own self-activation loop, which can never be initiated.

    So what's going on? AFAIK just errors, which I'd hope not to have been included in this book.

    So, how do we read this book? Perhaps best just to gloss over all the clever diagrams and tables and say 'Gee whizz, that's all amazing!'

    OTOH if I'm really not understanding a perfectly valid exposition, I'd love to know.
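For readers trying to follow the commenter's point: the kind of network being described updates all its nodes synchronously at each step, with solid arrows activating and barred lines inhibiting. A minimal sketch, with entirely hypothetical nodes and wiring (not the actual network of Davies' Fig 11):

```python
# A tiny synchronous Boolean network. Node names and wiring are
# hypothetical illustrations, not the network from Davies' Fig 11.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": a,            # self-activation loop: A sustains its own state
        "B": a and not c,  # solid arrow from A activates B; barred line from C inhibits it
        "C": b,            # solid arrow from B activates C
    }

state = {"A": True, "B": False, "C": False}
for t in range(4):
    state = step(state)
    print(t, state)
```

Under rules like these, a node whose only input is its own self-activation loop, and which starts off inactive, can indeed never switch on - which is exactly the inconsistency the commenter spots in step 8.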

