
Nature’s Nanotech #4 – The importance of being wet – Brian Clegg


Fourth in our Nature’s Nanotech series
The image that almost always springs to mind when nanotechnology is mentioned is Eric Drexler’s tiny army of assemblers and the threat of being overwhelmed by grey goo. But what many forget is that there is a fundamental problem in physics facing anyone building invisibly small robots (nanobots) – something that was spotted by the man who first came up with the concept of working on the nanoscale.
That man was Richard Feynman. His name may not be as well known outside physics circles as, say, Stephen Hawking, but ask a physicist to add a third to a triumvirate of heroes with Newton and Einstein and most would immediately choose Feynman. It didn’t hurt that Richard Feynman was a bongo-playing charmer whose lectures delighted even those who couldn’t understand the science, helped by an unexpected Bronx accent – imagine Tony Curtis lecturing on quantum theory.
Feynman became best known to the media for his dramatic contribution to the Challenger inquiry, when in front of the cameras he plunged an O-ring into iced water to show how it lost its elasticity. But on an evening in December 1959 he gave a lecture that laid the foundation for all future ideas of nanobots. His talk at the annual meeting of the American Physical Society was titled There’s Plenty of Room at the Bottom, and his subject was manipulating and controlling things on a small scale.
Feynman pointed out that people were amazed by a device that could write the Lord’s Prayer on the head of a pin. But ‘Why cannot we write the entire 24 volumes of the Encyclopedia Britannica on the head of a pin?’ As he pointed out, the dots that make up a printed image, if reduced to a scale that took the area of paper in the encyclopedia down to pinhead size, would still contain 1,000 atoms each – plenty of material to make a pixel. And it could be read with technology that already existed.
Feynman went on to describe how it would be possible to write at this scale, and also noted that the monster computers of his day would have to become smaller and smaller to cram in the extra circuits required for sophisticated computation. Then he described how engineering could be undertaken on the nanoscale – and to do so, he let his imagination run a little wild.
What Feynman envisaged was making use of the remotely operated servo ‘hands’ found in nuclear plants, but instead of making the hands the same size as the original human hands, building them at a quarter of the scale. He would also construct quarter-scale lathes to produce scaled-down parts for new devices. These quarter-scale tools would be used to produce sixteenth-scale hands and lathes, which themselves would produce sixty-fourth-scale items… and so on, until reaching the nanoscale.
The second component of Feynman’s vision was a corresponding multiplication of quantity, as you would need billions of nanobots to do anything practical. So he would not make one set of quarter-scale hands, but ten. Each of those would produce ten sixteenth-scale devices, so there would be 100 of them – and so on. Feynman pointed out there would not be a problem of space or materials, because one billion 1/4000-scale lathes would only take up two percent of the space and materials of a conventional lathe.
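For the curious, the arithmetic checks out – here is a quick back-of-the-envelope sketch in Python (my own illustration, not anything from Feynman’s lecture). Material use goes with volume, so with the cube of the linear scale factor, and six successive quarterings get you close to Feynman’s 1/4000 scale:

```python
# A rough check of Feynman's numbers (illustrative only).
# Six successive quarterings of the linear scale:
scale = (1 / 4) ** 6
print(f"linear scale after six generations: 1/{1 / scale:.0f}")   # -> 1/4096

# Material use scales with volume, i.e. the cube of the linear scale:
count = 1_000_000_000                          # a billion tiny lathes
volume_fraction = count * (1 / 4000) ** 3
print(f"material needed: {volume_fraction:.1%} of one full-size lathe")  # -> 1.6%
```

A billion 1/4000-scale lathes really do amount to about 1.6 percent of the material in one full-size lathe – comfortably under Feynman’s ‘two percent’.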
When he discussed running nanoscale machines, Feynman even considered the effect on lubrication. The mechanical devices we are familiar with need oil to prevent them seizing up. As he pointed out, the effective viscosity of oil gets higher and higher in proportion as you go down in scale. It stops being a lubricant – running the machinery would be like attempting to operate in a bowl of tar. But, he argued, you may well not need lubricants, as the bearings won’t run hot: the heat would escape very rapidly from such a small device.
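That heat argument is itself a scaling argument. Roughly speaking (again, my own sketch, with round numbers): the heat a machine generates goes with its volume, while the heat it sheds goes with its surface area, so the smaller the machine, the better its cooling relative to its heating.

```python
# Illustrative only: heat generated scales with volume (L**3), heat shed
# through the surface scales with area (L**2), so the cooling-to-heating
# ratio goes as 1/L and improves as machines shrink.
for scale in (1.0, 1 / 4, 1 / 4000):
    cooling_advantage = scale ** 2 / scale ** 3   # proportional to 1/L
    print(f"linear scale {scale:g}: relative cooling advantage x{cooling_advantage:g}")
```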
So far, so good, but what is the problem Feynman mentions? He points out that ‘As we go down in size there are a number of interesting problems that arise. All things do not simply scale down in proportion.’ Specifically, as things get smaller they begin to stick together. If you unscrewed a nanonut from a nanobolt it wouldn’t fall off – the van der Waals force we met on the gecko’s foot is stronger than the force of gravity on this scale. Small things stick together in a big way.
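To put a rough number on that stickiness, here is a toy comparison (my own illustration, using the textbook Hamaker formula for a sphere near a flat surface, with assumed typical values): weight falls off with the cube of size, while van der Waals attraction falls off far more gently, so stickiness inevitably wins as things shrink.

```python
import math

# Weight of a 'nanonut' scales with volume (L**3); van der Waals attraction -
# here the textbook Hamaker sphere-on-flat formula F = A*R/(6*D**2) - scales
# only linearly with size.
HAMAKER = 1e-19   # J, a typical order of magnitude for solids (assumed)
GAP = 4e-10       # m, rough atomic contact separation (assumed)
DENSITY = 8000    # kg/m^3, something steel-like (assumed)
G = 9.81          # m/s^2

for radius in (1e-3, 1e-6, 1e-9):   # a 1 mm, 1 micron and 1 nm part
    weight = DENSITY * (4 / 3) * math.pi * radius ** 3 * G
    adhesion = HAMAKER * radius / (6 * GAP ** 2)
    print(f"R = {radius:g} m: adhesion is {adhesion / weight:.2g}x the weight")
```

At millimetre scale gravity still wins; a micron-sized part already sticks with hundreds of thousands of times its own weight, and a nanoscale one with billions.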
Feynman was aware there would be problems. ‘It would be like those old movies of a man with his hands full of molasses, trying to get rid of a glass of water.’ But he effectively dismissed these problems. In reality, the nano-engineer doesn’t just have van der Waals forces to deal with. Mechanical engineering generally involves flat surfaces briefly coming together to transfer force from one to the other, as when the teeth of a pair of gears mesh. But down at the nanoscale a new, almost magical, force springs into life – the Casimir effect.
If two plates get very close, they are attracted towards each other. This has nothing to do with electromagnetism, like the van der Waals force, but is the result of a weird aspect of quantum theory. All the time, throughout all of space, quantum particles briefly spring into existence, then annihilate each other. An apparently empty vacuum is, in fact, a seething mass of particles that exist for such a short space of time that we don’t notice them.
However, one circumstance when these particles do come to the fore is when there are two sheets of material very close to each other. If the gap separating the sheets is narrow enough, far fewer of these ‘virtual’ particles can appear between them than outside them. The result is a real pressure that pushes the plates together. Tiny parallel surfaces slam together under this pressure.
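The idealised textbook formula for two perfectly conducting parallel plates gives a feel for how violently this kicks in as the gap closes (the code is my own quick sketch of that standard result):

```python
import math

# Idealised Casimir pressure between perfectly conducting parallel plates,
# a standard textbook result: P = pi**2 * hbar * c / (240 * d**4).
HBAR = 1.054571817e-34   # J*s
C = 299_792_458          # m/s

def casimir_pressure(gap):
    """Attractive pressure in pascals for a plate separation gap (metres)."""
    return math.pi ** 2 * HBAR * C / (240 * gap ** 4)

for gap_nm in (1000, 100, 10):
    print(f"gap {gap_nm:>4} nm: {casimir_pressure(gap_nm * 1e-9):.3g} Pa")
```

Because the pressure grows with the fourth power as the gap shrinks, it is utterly negligible at a micron but reaches roughly one atmosphere by the time two surfaces are 10 nanometres apart.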
The result of these effects is that even though toy nanoscale gears have been constructed from atoms, a real nanotechnology machine – a nanobot – would simply not work using conventional engineering. Instead the makers of nanobots need to look to nature. Because the natural world has plenty of nanoscale machines, moving around, interacting and working. What’s the big difference? Biological machines are wet and soft.
By this I don’t mean that they use water as a lubricant rather than oil. Rather, they are not usually devices made up of a series of interlocking mechanical components like our machines, but use a totally different approach to mechanisms and interaction – one that results in a ‘wet’, soft environment lacking flat surfaces and the opportunities for small-scale stickiness to get in the way of their workings.
If we are to build nanomachines, our engineers need to think in a totally different way. We need to dismiss Feynman’s picture of miniature lathes, nuts, bolts and gears. Instead our model has to be the natural world and the mechanisms that evolution has generated to make its admittedly inefficient, but still functioning, nanoscale technology work and thrive. The challenge is huge – but so is the potential.
In the next article in this series we will look at the lessons we can learn from a specific example of nature’s ability to manufacture technology on the nanoscale – the remarkable virus.
