I have a list of popular science book ideas that I occasionally revisit – things I quite fancy writing in the future. Now I have to cross one of them off the list, because Simon Flynn has written it for me. My list describes it as a popular science version of Schott’s Miscellany, but Flynn has called it by the (to me, rather clumsy) title The Science Magpie.
At least, that’s what I assume it is, because I have to confess, I’ve never read Schott’s Miscellany, so I don’t know what it contains – I merely assume it’s this kind of kaleidoscopic mix of all manner of facts, from the quite interesting to the downright weird. It’s the sort of book you can imagine Stephen Fry curling up with of an evening before hosting QI.
Inevitably in such an inspired hotch-potch some entries will appeal more than most. I loved, for instance, real molecules with silly names, the 1858 Cambridge University exam questions and the curly snail periodic table. Other parts are more ‘Hmm’ moments, like a whole page of digits of pi, while still others simply get a little dull. Often this is a transcription of a historical document – a trap I have fallen into myself. Such transcriptions fascinate if you are researching the particular topic, but to the general reader they can fall a little flat.
The nice thing though is that, even if there’s a topic that doesn’t really grab you, you know that in the next page or two there will be something completely different. There is no order to all this, it is just stuff accumulated at random, like one of those wonderful old-fashioned museums where you get a Victorian vacuum cleaner alongside an Egyptian mummy. Delicious.
If anyone ought to know what grabs the science-reading public’s attention it’s Flynn, who used to be MD of Icon Books and is now training as a science teacher. I know this is going to be a book that will find its way into many science-loving people’s present piles.
It was really interesting coming to this book soon after reading The Black Swan, as in some ways they cover similar ground – but take a very different approach. I ought to say straight away that this book is too long at a wrist-busting 534 pages, but on the whole it is much better than its rival. Where Black Swan is written in a highly self-indulgent fashion, telling us far too much about the author and really only containing one significant piece of information, Signal and Noise has much more content. (Strangely, the biggest omission is properly covering Taleb’s black swan concept.)
What we’re dealing with is a book about forecasting, randomness, probability and chance. You will find plenty about all the interesting stuff – weather forecasting, the stock market, climate change, political forecasts and more, and with the exception of one chapter which I will come back to in a moment it is very readable and well-written (though inevitably takes a long time to get through). It has one of the best explanations of Bayes’ theorem I’ve ever seen in a popular science book, and (properly to my mind) makes significant use of Bayesian statistics.
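For the curious, the core of Bayes’ theorem is compact enough to sketch in a few lines of Python. The numbers here are my own invented toy example (a weather-forecasting flavour in keeping with the book’s subject), not anything taken from Silver’s text:

```python
# Toy illustration of Bayes' theorem: updating a belief with new evidence.
# Suppose we give rain a 10% prior probability, a "rain" signal fires 80%
# of the time when it does rain, but also 20% of the time when it doesn't.
# (All three numbers are invented for illustration.)

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.10, likelihood=0.80, false_positive_rate=0.20)
print(round(posterior, 3))  # → 0.308
```

The striking (and very Bayesian) point is that even a fairly reliable signal only lifts a 10 per cent prior to about a 31 per cent chance of rain: the prior matters.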
What’s not to like? Well, frankly, if you aren’t American, you might find it more than a trifle parochial. There is a huge section on baseball and predicting baseball results that is unlikely to mean anything to the vast majority of the world’s readers. I’m afraid I had to skip chunks of that. And there’s a bizarre chapter about terrorism. I have two problems with this. One is the fawning approach to Donald Rumsfeld. Nate Silver seems so thrilled Rumsfeld gives him an interview that he treats his every word as sheer gold. Unfortunately, he seems to miss that for much of the world, Rumsfeld is hardly highly regarded (that parochialism again).
There is also a moment where Silver falls into one of the very traps he warns it is easy to succumb to when analyzing data: on one subject he cherry-picks information to present the picture he wants. He contrasts the distribution of deaths in terrorist attacks in the US and Israel, pointing out that where the US numbers follow a rough power law, deaths in Israel tail off before 100 people killed in an incident, which he puts down to their approach to security. What he fails to point out is that this is also true of pretty well every European country, none of which has Israeli-style security.
I also couldn’t help pointing out one of the funniest typos I have ever seen. He quotes physicist Richard Rood as saying ‘At NASA, I finally realised that the definition of rocket science is using relatively simple psychics to solve complex problems.’ Love it! Bring on the simple psychics.
Overall, despite a few issues it was a good read with a lot of meat on probability and forecasting and a good introduction to the basics of Bayesian statistics thrown in. Recommended.
On 4 July 2012, scientists at CERN announced the discovery of a new elementary particle that they judged to be consistent with the long-sought Higgs boson. The next step is therefore reasonably obvious. Physicists involved in the ATLAS and CMS detector collaborations at CERN’s Large Hadron Collider (LHC) facility will be keen to push ahead and fully characterize the new particle. They will want to know if this is indeed the Higgs boson, the one ingredient missing from the so-called standard model of particle physics.
How will they tell?
Physicists at Fermilab’s Tevatron collider and CERN’s LHC have been searching for the Higgs boson by looking for the tell-tale products of its different predicted decay pathways. The current standard model is used to predict both the rates of production of the Higgs boson in high-energy particle collisions and the rates of its various decay modes. After subtracting the ‘background’ that arises from all the other ways in which the same decay products can be produced, the physicists are left with an excess of events that can be ascribed to Higgs boson decays.
Now that we know the new particle has a mass of between 125 and 126 billion electron volts (equivalent to the mass of about 134 protons), both the calculations and the experiments can be focused tightly on this specific mass value.
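The ‘about 134 protons’ comparison is easy to check for yourself, taking the midpoint of the measured range and the standard proton rest mass-energy of roughly 0.938 GeV:

```python
# Rough check of the mass comparison in the text: a ~125-126 GeV particle
# expressed in units of proton masses (values rounded).
higgs_gev = 125.5        # midpoint of the 125-126 GeV range
proton_gev = 0.938272    # proton rest mass-energy in GeV
print(round(higgs_gev / proton_gev))  # → 134
```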
So far, excess events have been observed for three important decay pathways. These involve the decay of the Higgs boson to two photons (written H → γγ), decay to two Z bosons (H → ZZ → l+l-l+l-, where l signifies leptons, such as electrons and muons and their anti-particles) and decay to two W particles (H → W+W- → l+ν l-ν, where ν signifies neutrinos). All these decay pathways involve the production of bosons. This should come as no real surprise, as the Higgs field was originally invented to break the symmetry between the weak and electromagnetic forces, thereby giving mass to the W and Z particles and leaving the photon massless. There is therefore an intimate connection between the Higgs, photons and W and Z particles.
The decay rates to these three pathways are broadly as predicted by the standard model. There is an observed enhancement in the rate of decay to two photons compared to predictions, but this may be the result of statistical fluctuations. Further data on this pathway will determine whether or not there’s a problem (or maybe a clue to some new physics) in this channel.
But the Higgs field is also involved in giving mass to fermions – matter particles, such as electrons and quarks. The Higgs boson is therefore also predicted to decay into fermions, specifically very massive fermions such as bottom and anti-bottom quarks and tau and anti-tau leptons. Bottom quarks and tau leptons (the tau being a heavy version of the electron) are third-generation matter particles with masses respectively of about 4.2 billion electron volts (about four and a half proton masses) and 1.8 billion electron volts (about 1.9 proton masses).
But these decay pathways are a little more problematic. The backgrounds from other processes are more significant and so considerably more data are required to discriminate the background from genuine Higgs decay events. The decay to bottom and anti-bottom quarks was studied at the Tevatron before it was shut down earlier this year. But the collider had insufficient collision energy and luminosity (a measure of the number of collisions that the particle beams can produce) to enable independent discovery of the Higgs boson.
ATLAS physicist Jon Butterworth, who writes a blog for the British newspaper The Guardian, recently gave this assessment:
If and when we see the Higgs decaying in these two [fermion] channels at roughly the predicted rates, I will probably start calling this new boson the Higgs rather than a Higgs. It won’t prove it is exactly the Standard Model Higgs boson of course, and looking for subtle differences will be very interesting. But it will be close enough to justify [calling it] the definite article.
When will this happen? This is hard to judge, but perhaps we will have an answer by the end of this year.
This is a rather poetic book, something of a rarity in popular science and not necessarily one that fits well with the genre. The author, who has a botanical background, tries to give the reader a portrait of the air as it influences mostly living things on the planet.
I’m afraid that for me it just didn’t work. I found the attempt to be arty in descriptions simply plodding and hard work. I just wasn’t getting anywhere quickly enough: I found myself making excuses for why I wasn’t coming back to the book every time I put it down. I can see it will work for some people, but it didn’t for me.
Apart from anything else, the title is a bit misleading. The book is called ‘AIR – the restless shaper of the world’ – but very little of it is actually about the air, it’s much more about how living things on the Earth make use of the air. Even when you get a section labelled ‘Shining’ with chapters like ‘Why the daytime sky is light’ (which spends most of its time explaining why it’s not about why the sky is blue), there is very little content about the air and soon William Bryant Logan is off on one of his pet topics again.
I haven’t read the author’s previous books Dirt and Oak, but by the sound of them they are much more the kind of thing he ought to be writing. Air is not his kind of thing.
Whenever someone famous dies or there’s a major royal event you will see a book arrive in the shops with undue haste. It’s hard to imagine it wasn’t thrown together with minimum effort – and with equally minimal quality. So when I saw that Jim Baggott had produced a book on the Higgs boson all of five weeks after the likely detection was announced following several years’ work by the Large Hadron Collider at CERN, it seemed likely that this too was a botched rush job. But the reality is very different.
In one sense it has to be a rushed job – the announcement was made on 4 July 2012 and the book was out by mid-August, featuring said announcement. So that bit of the book could hardly have had much time for careful editing, bearing in mind publishers usually take at least a couple of months from final versions of the text to having a physical book. (Much of the rest of the book was written well in advance.) But the remarkable trick that Baggott and OUP have pulled off is that the rush doesn’t show. This is an excellent book throughout.
The first, but probably not most important way it’s great is that it provides by far the best explanation of what the Higgs field is and how it is thought to work (and what the Higgs boson has to do with anything) I’ve seen – and that by a long margin. However, for me it’s not so much that, as the way it provides a superb introduction to the development of the standard model of particle physics, our current best guess of what everything’s made of. Again, this is the best I’ve ever read and yet it’s here just as a setting for the Higgs business. It is really well done, and the book deserves a wide readership for that alone, not to mention the way it puts the Higgs into context.
Is it perfect? Well, no. Like every other book I’ve read on the subject it falls down on making the linkage between the mathematics of symmetry and the particle physics comprehensible. That is immensely difficult to do, but ought to be possible. However, as long as you take some of the symmetry stuff on trust, the rest works superbly well.
Congratulations, then, to author and publisher alike. Both in its timing and its content this is a tour de force. Recommended.
There is something stunning about the colours of a peacock feather. It’s not just a simple matter of the sort of coloured pigments an artist mixes up on a palette. The colours in the feathers almost glow in their iridescence, changing subtly with angle to catch the eye. To produce this effect, the feather contains a natural nanotechnology that has the potential to transform optics when this remarkable approach is adapted for use in human technology.
Both the iridescence of that peacock’s tail and the swirly, glittering appearance of the semi-precious stone opal are caused by forms of photonic lattices. These are physical structures at the nano level that act on light in a way reminiscent of electronics: just as semiconductors switch and control electrons, photonic lattices give unparalleled manipulation of photons.
The colours of the peacock feather bear no resemblance to those of a pigment. In blue paint, for example, the pigment is a material that tends to absorb most of the spectrum of white light but re-emits primarily blue, so we see anything painted with the pigment as being blue. In the peacock feathers it’s the internal structure of the feather (or to be precise the tiny ‘barbules’ on the feather) that produces the hue.
The colouration is primarily due to internal reflections off the repeated structure of the barbule, similar to the way the lattice arrangement of a crystal can produce enhanced reflection. What happens is that photons reflected from a deeper layer are in phase with those from an outer layer, reinforcing the particular colours of light (or energies of photons) that fit best with the lattice spacing. This is a photonic lattice. These effects depend on the angle at which the light reflects, giving the typical ‘shimmer’ of iridescence.
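The simplest model of this layered reinforcement is thin-film interference, where reflections from successive layers are in phase when 2nd·cos(θ) equals a whole number of wavelengths. A minimal sketch, using an illustrative layer spacing and refractive index rather than measured peacock values:

```python
import math

# Hedged sketch of constructive reflection from a layered structure:
# reflections reinforce when 2 * n * d * cos(theta) = m * lambda.
# The spacing (150 nm) and index (1.5) below are illustrative only.

def reinforced_wavelengths(n, d_nm, theta_deg, orders=(1, 2, 3)):
    """Wavelengths (nm) constructively reflected, for the first few orders m."""
    theta = math.radians(theta_deg)
    return [2 * n * d_nm * math.cos(theta) / m for m in orders]

# At normal incidence a ~150 nm spacing with n ~ 1.5 reinforces ~450 nm (blue):
print([round(w) for w in reinforced_wavelengths(1.5, 150, 0)])  # → [450, 225, 150]
```

Note the cos(θ) term: change the viewing angle and the reinforced wavelength shifts, which is exactly the shimmer of iridescence described above.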
The practical applications of artificially created photonic crystals go far beyond producing a pretty effect and striking colours. Because photonic lattices act on light as semiconductors do on electrons, they are essential components if we are ever to build optical computers.
These theoretical machines would use photons to represent bits, rather than the electrical impulses we currently employ in a conventional computer. This could vastly increase computing power. Because photons don’t interact with each other, many more can be crammed into a tiny space. What’s more, one of the biggest restrictions in current computer architecture is the complex spaghetti of links joining together different parts of the structure. Unlike wires and circuits, photonic links can flow through each other in a basket of light, allowing more complex and faster architectures. Equally, optical switching – and in the end, a computer is just a huge array of switches – could be much faster than the electronic equivalent.
There are significant technical problems to be overcome, but the potential is great. Photonic crystals are already used in special paint and ink systems which change colour depending on the angle at which the paint is viewed, reflection reducing coatings on lenses and high transmission photonic fibre optics.
Another example of nanotechnology having a quantum effect on light is plasmonics. Something remarkable happens, for example, if light is shone on a gold foil peppered with millions of nanoholes. It seems reasonable that only a tiny fraction of the light hitting the foil would pass through these negligible punctures, but in fact in a process known as ‘extraordinary optical transmission’ they act like funnels, channelling all the light that hits the foil through the sub-microscopic apertures. This bizarre phenomenon results from the interaction between the light and plasmons, waves in the two-dimensional ocean of electrons in the metal.
The potential applications of plasmonics are remarkable. Not only the more obvious optical ones – near perfect lenses and supplementing the photonic lattices in superfast computers that use light rather than electrons to function – but also in the medical sphere to support diagnostics, by detecting particular molecules, and for drug delivery. Naomi Halas of Rice University in Texas envisions implanting tiny cylinders containing billions of plasmonic spheres, each carrying a minuscule dose of insulin. Infra red light, shone from outside the body, could trigger an exact release of the required dose. ‘Basically, people could wear a pancreas on their arm,’ said Halas.
Over the last seven weeks since the first post, we have explored a wide range of the ways that nanotechnology, given a push in the right direction by nature, is starting to be important in our lives. At the moment we are most likely to come across relatively simple applications like the nanoparticles in sun block or technology making fabrics and electronics water repellent.
As our abilities to construct nanostructures improve we will see increased use of the likes of carbon nanotubes and the nano-optics described in this piece. And eventually? It is entirely possible that we will see Richard Feynman’s 1950s speculation about nanomachines come to fruition, though they are likely to be more like the ‘wet’ machines of nature than a traditional mechanical device.
When nanotechnology appears in the news it is often in a negative light. We might hear that Prince Charles is worrying about the threat of grey goo, or the Soil Association won’t allow artificial nanoparticles in organic products. But the reality is very different. Nanotechnology is both fascinating and immensely valuable in its applications. I, for one, can’t wait to see what comes next.
This series has been sponsored by P2i, a British company that specializes in producing nanoscale water repellent coatings. P2i was founded in 2004 to bring technologies developed at the UK Government’s Defence Science & Technology Laboratory to the commercial market. Applications range from the Aridion coating, applied to mobile technology inside and out after manufacture using a plasma, to protection for filtration media preventing clogging and coatings for trainers that reduce water absorption.
Anyone who talks to young children about science knows that there are two things that really grab their attention – dinosaurs and space. While I’m not aware of any antediluvian nanotechnology, there is certainly an absolutely stunning potential space application that has some natural inspirations. (I’m aware, by the way, that the word ‘antediluvian’ is both anachronistic and unscientific… but it’s a lovely word that we really shouldn’t lose from the language.)
Nature has some amazing, extremely fine fibres. Take, for example, that everyday wonder, a spider’s web. The spider silk that makes up the web is a spun fibre constructed from proteins. Though light, these filaments are extremely resistant to fracture – tougher than steel. Spider silk is typically 3,000 nanometers across, but its toughness is down to its structure at the nano level.
A team at MIT discovered that the unusual strength is down to a substructure of ‘beta sheet crystals’, which hold the silk together. The linking is done by hydrogen bonds, the same kind of bonding that stops water from boiling at room temperature. Such bonds are easy to break, but the MIT scientists discovered that if they are confined to spaces just a few nanometers in span – as they are in the beta sheet crystals – they become exceedingly strong. So spider silk depends on a kind of nano-glue for its strength.
In the nanotechnology world, the equivalent of spider silk is the carbon nanotube. We are all familiar with the way carbon comes in different physical structures or ‘allotropes’ that have remarkably different properties. Chemically there is no difference between diamond and the graphite in a pencil ‘lead’ but physically one is extremely hard and the other has multiple planes that slide easily over each other making it effectively soft (although those planes themselves are surprisingly tough).
Another way to fit together a structure of carbon atoms is to form a tube. Imagine taking a plane of graphite a single atom thick (technically graphene) and folding it around to make a tubular shape. Carbon nanotubes are amongst the most amazing artefacts ever made. Though simple in structure, they are remarkable both in their strength and their other physical properties.
Electrically they can behave as if they were a metal or a semiconductor, simply as a result of the shape of the tube. Although carbon nanotube electronics is in its infancy, there is considerable speculation about the capabilities of nanotube products. They could be used to make everything from transistors that are switched by a single electron to batteries built into a sheet of paper. But their pièce de résistance is their strength. Carbon nanotubes make spider silk look like tissue. When you compare a nanotube’s strength per unit weight with steel it comes out around 300 times greater.
All kinds of applications are possible for such a remarkable material. Nanotubes are present in the much thicker carbon fibres used to reinforce everything from tennis rackets to bike frames, but only incidentally and in small quantities. At the moment they tend to be used in random bulk combinations of many small fragments – not as strong as a set of individual aligned nanotubes, but still enough to add strength and to change electrical properties. But one potential application could totally transform the space industry.
Getting things into space is expensive. Hugely expensive. To reach a geosynchronous orbit (of which more later) typically costs around $20,000 per kilogram. But there is a hypothetical nanotube technology that once developed could deliver satellites and even people into space for around 1/100th of this cost. What’s more, rocket technology is inherently risky. You will inevitably lose some of your space missions. Yet the nanotube technology could, once established, run day after day without problem.
Imagine you were sitting on top of a house and wanted to get something up there. You could have someone attach your payload to a rocket and shoot it in your direction. But like the space launch it’s a dangerous and expensive solution. Instead you are more likely to throw a piece of string off the roof, have a basket tied to it and then haul the object up.
Now extend this picture to the Empire State Building. Your piece of string would have to be very strong, which would make it quite heavy to haul up and down, increasing the cost of the process. What might be better is to keep the string (or more likely a piece of metal) in place and have the basket haul itself up and down along the supporting structure.
Time to take another jump into that geosynchronous orbit. An object in orbit is in a very strange state. It is in free fall, dropping towards the Earth – but at the same time it is moving sideways at just the right speed so it always misses. This, incidentally, is why people float around in the International Space Station. It’s not because there’s no gravity – the Earth’s gravitational pull at its height (350 kilometres above the Earth’s surface) is around 90% Earth normal. The astronauts float because they and the station are falling. But they stay in orbit because their sideways motion means they keep missing the planet.
Because of this balance, at any particular height there is one speed that keeps you in orbit. And if you go high enough – around 35,786 kilometres up – that speed is the same as the rotational speed of the Earth, making you geosynchronous. Point the orbit in the right direction and you will stay over the same point on the Earth’s surface (this is a geostationary orbit).
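That 35,786 kilometre figure drops straight out of the orbital balance just described: the gravitational pull must supply exactly the centripetal acceleration for one circuit per (sidereal) day. A quick sketch of the calculation:

```python
import math

# For a circular orbit, GM / r^2 = (2*pi/T)^2 * r, which rearranges to
# r = (GM * T^2 / (4 * pi^2)) ** (1/3). Using standard Earth values:
GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
T = 86164.0905            # sidereal day (Earth's rotation period), seconds
R_EARTH = 6378.137e3      # Earth's equatorial radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius from Earth's centre
altitude_km = (r - R_EARTH) / 1000
print(round(altitude_km))  # → 35786
```

Note that T is the sidereal day (about 23 hours 56 minutes), the true rotation period of the Earth relative to the stars, not the 24-hour solar day.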
So, imagine you could drop a piece of string from a geostationary satellite down to the ground. You could then just send a lift (elevator) up the string and replace all that dangerous, expensive rocketry. What you’ve got is a space elevator – and to make it work, that string needs to be made from carbon nanotubes.
Of course this is a long way in the future, though a range of companies (including, bizarrely, Google) are working on the technology required. There’s no doubt that Bradley Edwards of NASA’s Institute of Advanced Concepts was being over-optimistic when in 2002 he commented ‘[With nanotubes] I’m convinced that the space elevator is practical and doable. In 12 years, we could be launching tons of payload…’ However, in a more reasonable timescale – perhaps another 30 or 40 years – it is entirely feasible. And you can’t fault the scope of imagination that allows the inspiration of spider silk to transport us into space.
Jim Baggott is a freelance science writer. He trained as a scientist, completing a doctorate in physical chemistry at Oxford in the early 80s, before embarking on post-doctoral research studies at Oxford and at Stanford University in California. He gave up a tenured lectureship at the University of Reading after five years in order to gain experience in the commercial world. He worked for Shell International Petroleum for 11 years before leaving to establish his own business consultancy and training practice. He writes about science, science history and philosophy in what spare time he can find. His books include Beyond Measure: Modern Physics, Philosophy and the Meaning of Quantum Theory (2003), A Beginner’s Guide to Reality (2005), Atomic: The First War of Physics and the Secret History of the Atom Bomb (2009), The Quantum Story: A History in 40 Moments (2011) and, most recently, Higgs: The Invention and Discovery of the ‘God Particle’ (2012).
Why science? I guess I’ve always had an innate, child-like curiosity about the nature of physical reality – matter, force, space, time and the universe. I was influenced in the direction of science by some truly great schoolteachers, and I became a chemist, for the simple reason that my competence in maths wasn’t strong enough for me to contemplate a career as a physicist. That said, my desire to seek explanations for things led me to physical chemistry (or even ‘chemical physics’) and it was with a great sense of pride and pleasure that I did manage to publish some entirely theoretical research papers, full of mathematical equations!
I left academia carrying a very strong desire to maintain my interests in science and, with the conspicuous help of a one-time features editor at New Scientist magazine, I learned a bit about writing popular science. I write principally to learn about a subject that interests me, a process that has been made considerably easier in the last 15 years by the emergence and development of resources on the web.
Why this book? The idea for this book came to me in the summer of 2010 as I was putting the finishing touches to the manuscript of The Quantum Story. That book sets out a history of quantum physics, told through 40 crucial ‘moments’ or turning-points both in theory and experiment. In March 2010, the Large Hadron Collider at CERN achieved record proton-proton collision energies of seven trillion electron-volts and I figured that if the Higgs boson really did exist (and there were going to be all kinds of trouble if it didn’t), then there was a very good chance that it would be discovered soon. I approached Oxford University Press with a proposal to write a book tracing the history of the so-called standard model of particle physics, placing the ‘invention’ of the Higgs field and the Higgs boson in its proper context, right up to the most recent developments at CERN. The idea was to have the book typeset and ready to go. I continued to update the final chapter through 2011 and early 2012, leaving 1500 words or so to describe the discovery itself. I followed the 4 July announcement live via a CERN webcast, and drafted the final words describing the discovery of ‘something that looks very much like the Higgs boson’ the following day. This was how we were able to publish Higgs so soon after the discovery announcement.
What’s next? I’ve already submitted the manuscript of my next book, called Farewell to Reality: How Fairy-tale Physics Betrays the Search for Scientific Truth. This will be published in the UK early next year by Constable & Robinson. The book was born out of a growing sense of frustration with the way that a lot of unproven (and arguably unprovable) contemporary theoretical physics is paraded as accepted science in the popular science literature. Farewell to Reality attempts to set the record straight. It provides a hopefully accessible summary of what I call the ‘authorised’ version of reality – quantum theory, the standard model of particle physics, the special and general theories of relativity and the standard model of big bang cosmology – and explains why this version can’t be right or, at least, why it can’t be the whole story. Attempts to fix the problems with this version of reality have given us supersymmetry, superstring/M-theory, various flavours of multiverse theory and the anthropic cosmological principle.
I give reasons why I think that much of this theory should be regarded as metaphysics rather than science. As Einstein once said: ‘Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations – in short, by metaphysics.’ The book will be controversial, and I’m really looking forward to its reception.
What’s exciting you at the moment? We’re still far from the end of the story about the Higgs boson. The evidence gathered by the two detector collaborations at the LHC – ATLAS and CMS – point very clearly to the existence of a new particle with many of the characteristics expected of ‘the’ Higgs boson as demanded by the current standard model. But further data are needed to be sure, and surprises are not impossible. I’m continuing to follow developments as best I can.
In the meantime, I’m mulling over potential topics for my next writing project. I’m intrigued by the possibility of going back to a short period in cold war history leading up to the decision to build the hydrogen bomb. In the United States, cold war nuclear strategy was framed by aspects of game theory. I’m thinking it might be interesting to personalize the different strategies in a prisoner’s dilemma type game using Einstein, Oppenheimer and von Neumann, all of whom were together at the Institute for Advanced Study in Princeton in this period. I’d like to explore these strategies through a sequence of imagined conversations between these three extraordinary intellects. What excites me is the challenge to get this right and make it compelling reading.
There is an interesting premise (and a dubious assumption) underlying this book. The premise is that some of the most interesting bits of science are the bits where we don’t actually know the answers – in this case, in the ‘evolutionary puzzles of human nature’ to quote the subtitle. The dubious bit is the author’s assumption that this is a new idea. David Barash comments ‘One of these days I will design a course titled something like “What we don’t know about biology,” hoping that my colleagues in chemistry, physics, geology, mathematics, psychology, and the like will join in the fun.’
It may be true that biologists often present their science as if it were all known facts, but I think physicists, for example, have always emphasized the gaps in our knowledge in their courses. If you think of cosmology, for example, with about 95% of the mass-energy of the universe unexplained, or the uncertainty over the standard model or quantum gravity, I think that it’s clear that in at least some sciences there is already a fairly widespread awareness, and maybe it’s just a matter of biology catching up.
Even so, it’s a good thing to acknowledge this – homing in on the detailed human biology aspects of what Stuart Firestein identified as the driving force of science: ignorance (in a good way). Barash takes on a detailed exploration of many of the mysteries of human biology – primarily sexual features (particularly in the female), homosexuality, art and religion. He does this by examining different hypotheses for why, for example, menstruation takes such a dramatic form in humans (different from pretty well every other mammal), or why we create art.
In the process of examining these hypotheses, Barash can be quite vicious in attacking some ideas that he doesn’t like (particularly those proposed some while ago by poor old Desmond Morris, who gets a lot of stick). On the whole Barash’s writing style is good – amiable and approachable (though I think Richard Dawkins goes over the top in calling it ‘a beautifully written book’).
In principle this should be good stuff, and bits of it are – particularly, I think, the first of the two chapters on art. However the reality is, to be honest, rather boring in far too many places. It’s partly because there’s no resolution. Of course it’s important to know that there are these areas where we don’t know the answers, but we all like to reach some conclusions, so the sheer open-endedness of it can be trying. But it’s also because reading hypothesis after hypothesis to explain particular traits, many of them rather samey, just gets dull after a while.
If this is an area that particularly interests you, then these different possibilities should be both informative and exciting. But if you are coming at this from a general interest in science, wanting to explore a new area, then the lack of conclusions will probably prove a touch frustrating, and the strings of hypotheses will test your boredom threshold.
Isaac Asimov was a great science fiction writer, but even the best has his off days, and Asimov’s low point was probably his involvement with the dire science fiction movie Fantastic Voyage. Asimov wasn’t responsible for the story, but provided the novelization – and he probably regretted it. The premise of the film was that miniaturization technology had made it possible to shrink a submarine and its crew down to around 1,000 nanometres, sending it into a man’s bloodstream to find and destroy a blood clot on his brain.
Along the way the crew have various silly encounters with the body’s systems – but strip away the Hollywood schlock and underneath is an idea that has been developed in a lot more detail by IT pioneer and life extension enthusiast Ray Kurzweil. Based on the idea of miniature robotic devices – nanobots – Kurzweil believes that in the future we will not have a single manned Proteus submarine, as featured in Fantastic Voyage, in our bloodstreams, but rather a whole host of nanobots that will undertake medical functions and keep humans of the future alive indefinitely.
As we have seen in The Importance of Being Wet, the chances are that any such devices would not be a simple miniaturization of existing mechanical robots with their flat metal surfaces and gears, but rather would be based on the wet technology of the natural nanoscale world.
Among the possibilities Kurzweil suggests are on the cards are self-propelling robotic replacements for blood cells (this eliminates the importance of the heart as a pump, and hence the risk of heart disease), built in monitors for any sign of the body drifting away from ideal operation, nanobots that can deliver drugs to control cancer or remove cancer cells, and even miniature robots that make direct repairs to genes.
Kurzweil also expects we might separate the pleasure of eating from getting the nutrients we need, leaving the latter to nanobots in the bloodstream that release the essentials when we need them, while other nano devices remove toxins from the blood and destroy unwanted food without it ever influencing our metabolism. You could pig out on anything you wanted, all day and every day, and never suffer the consequences. (Given Kurzweil is notorious for living on an unpleasant diet to attempt to extend his life until nanotechnology is available, perhaps this is wishful thinking.)
If we are to develop this kind of nanotechnology, there are two aspects of nature that we will need to use as guides. One is to listen to the bees. Bearing in mind just how small a medical nanobot would have to be, even with the best developments in electronics the chances are it would have to be relatively unintelligent – yet it would need to achieve quite complex tasks. Bees are an excellent natural model for a way to achieve this.
A colony of bees achieves remarkable things in the construction and maintenance of its hive – yet taken as individuals, bees have very little capacity for mental activity. The realization that transformed our understanding of bees is that they form a super-organism. In effect, a whole colony is a single organism, not a collection of individual bees. A bee is more like a cell in a typical animal than it is a whole creature. By having appropriate mechanisms for communicating between the component parts – in the case of bees, using everything from chemical scent markers to waggle dances – relatively incapable individuals can come together to make a greater whole.
It would be sensible to expect something similar from medical nanobots at work in a human body. Individually they could not be intelligent enough to carry out their functions properly – but collectively, if they can interact to form a super-organism, they could operate autonomously without an external control mechanism continuously providing them with orders.
A second model for these miniature medics is a piece of natural nanotechnology that we usually regard as a bad guy – the virus. Viruses are very small – typically between 20 and 400 nanometres in size – and they lack many of the essential components of a living entity. However, they are able to reproduce and thrive by using a remarkably clever cheat. Lacking the physical space to carry all the components of a living cell, they take over an existing cell in their host and subvert its mechanism to do their reproduction for them.
The particular class of virus that may prove especially useful as a model for medical nanobots is the phage. These are amongst the weirdest-looking natural structures – some bear an uncanny resemblance to the Apollo Lunar Module: they actually look as if they are the sort of nanotechnology we might construct.
The word ‘phage’ is short for bacteriophage – ‘bacteria eater’. These are viruses that, instead of preying on human cells – or those of any other large-scale animal – attack and destroy bacteria. Because there are so many bacteria out there (even the human body has ten times more bacteria than human cells on board), their predators are also immensely populous and diverse. Phages may not be common fare on David Attenborough’s nature programmes, but they play a major role in the overall biological life of the Earth.
Because phages attack bacteria, they can be beneficial to human life. Throughout human existence we have been plagued with bacterial infections. (Literally – bacteria, for example, cause bubonic plague.) It is only relatively recently that antibiotics have provided us with a miracle cure for bacterial attacks – but that miracle is weakening. Bacteria breed and evolve quickly. There are strains of bacteria that can resist most of the existing antibiotics. But phages have the potential to attack bacteria resistant to all antibiotics. For a long time phage therapy was restricted to the former Soviet Union, but interest is spreading in making use of phages in medical procedures.
The biggest problem with phages is getting them to the right place. But medical nanobots based on a phage’s ability to attack or modify particular cells, combined with a super-organism’s ability to act in a collective manner, would have huge potential. Modified viruses are already used to insert genetic payloads into cells – but the nanotechnology of the future, inspired by the phage and the bee, could see something much closer to Kurzweil’s vision.
Moving away from the medical, and from individual nanoscale elements, in the next installment of Nature’s Nanotech we will see how natural nanotechnology plays a role in silk and how fibres based on a nanotechnology structure could make rockets obsolete for putting satellites into space.