Harry Cliff - Space Oddities Interview

Harry Cliff is a particle physicist at the University of Cambridge working on the LHCb experiment, a huge particle detector buried 100 metres underground at CERN near Geneva. He is a member of an international team of around 1400 physicists, engineers and computer scientists who are using LHCb to study the basic building blocks of our universe. Harry also spends a big chunk of his time sharing his love of physics with the public. From 2012 to 2018 he held a joint post between Cambridge and the Science Museum in London, where he curated two major exhibitions: Collider (2013) and The Sun (2018). His latest title is Space Oddities.

Why science?

It’s hard to remember a single moment that turned me on to science, but like a lot of small children, I was fascinated by dinosaurs and tried to drag my parents up to the Natural History Museum as often as I could. Aged about six, I wanted to be a palaeontologist, which was probably the longest word I knew at the time. Another moment came at secondary school when we started to learn about atoms and how the orbits of electrons explained the properties of the chemical elements. I found that idea, that all these radically different materials emerged from rearrangements of tiny constituents, electrifying. That’s still what excites me about physics today – that you can explain all the richness of the world around us in terms of a tiny number of basic building blocks interacting through a small number of forces, all following remarkably simple laws.

There’s another aspect too – science is a source of optimism. Science progresses, our knowledge deepens, our understanding grows, as does the frontier of our ignorance. Not only that, but scientific understanding allows us to achieve things we never could have imagined without it. It’s the most powerful tool humans have for understanding and controlling the world around us. At a time when it feels like we’re going backwards in a lot of ways, the fact that science continues to advance is a source of hope.

Why are anomalies important to science?

Isaac Asimov, the American science fiction author, once wrote that ‘the most exciting phrase to hear in science isn’t “Eureka!” (I found it!) but “that’s funny...”’. We tend to have this romantic idea that breakthroughs in science come from flashes of inspiration, like the myth of Newton seeing an apple fall from a tree and inventing his law of gravitation or Archimedes jumping out of the bath, but many of the biggest advances have come from small niggling effects in experiments that could all too easily have been dismissed.

A great example of this is the evidence that finally clinched the Big Bang as the correct theory of the universe. In 1963-64 two radio astronomers, Arno Penzias and Robert Wilson, were plagued by a low-level microwave noise in the antenna that they were trying to use to make astronomical observations. After eliminating a catalogue of possible causes, including famously evicting two pigeons that had taken up roost in their instrument and cleaning up ‘the white dielectric material’ that they left behind, they eventually discovered that the noise that was spoiling their measurements was in fact the faded light from the fireball of the Big Bang. It was their dogged pursuit of this irritating microwave buzz that led to what was arguably the most profound realisation in the history of science – that the universe had a beginning.

Stories like this crop up again and again through the history of physics. Our modern theory of particles and forces was founded on the basis of an anomaly in the spectrum of hydrogen. The first evidence for Einstein’s general theory of relativity came from a minute anomaly in the orbit of Mercury around the Sun. Of course, more often than not, anomalies have a mundane explanation, but occasionally they give us the key to a new view of nature. That’s what makes them so fascinating and exciting.

Which current anomaly has the most potential to create a paradigm shift?

Of the five big anomalies I discuss in Space Oddities, I find the disagreement over how fast the universe is expanding the most compelling. This anomaly is known as the ‘Hubble tension’ in cosmology, and essentially boils down to the fact that when astronomers measure the expansion rate of space by observing distant galaxies, they get a significantly different result to what is predicted. These predictions are based on the measured properties of the early universe deduced from the cosmic microwave background, the faded light from the fireball of the Big Bang, projected forward to the present day using the standard cosmological model. 
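To put rough numbers on the disagreement (these figures are not quoted in the interview; they are approximate, widely reported values given purely for illustration), the expansion rate appears in Hubble’s law, which relates a distant galaxy’s recession velocity to its distance:

% Hubble's law: recession velocity v of a galaxy at distance d,
% with H_0 the present-day expansion rate in km/s per megaparsec.
v = H_0 \, d
% Early-universe prediction (cosmic microwave background + standard cosmological model), roughly:
H_0 \approx 67 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}
% Late-universe measurement (e.g. Cepheid-calibrated supernovae), roughly:
H_0 \approx 73 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}

The gap of a few kilometres per second per megaparsec between these two numbers, small as it looks, is the Hubble tension.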

The fact that this anomaly has persisted for over a decade, comes from multiple different measurements, and is, statistically speaking at least, very significant makes it challenging to explain as a missed bias in either the measurements or the prediction. That said, there is still a healthy debate between different teams over the crucial issue of how you measure distances to galaxies, and just a month or so ago a new study using the James Webb Space Telescope came out that cast some doubt on the anomaly. But if I were forced to put my money on an anomaly panning out, it’d be that one.

If the Hubble tension is the real deal, then it’s potentially telling us several things about the universe at once, perhaps that dark energy varies with time, that dark matter has its own set of forces or even that our theory of gravity needs revision. Any of these would have an enormous impact on the way we think about the universe and its history.

The description of the LHCb anomalies and how they turned out to be due to incorrectly filtering out misleading signals is fascinating - but it makes you think, can we know for certain that any modern particle physics result is not due to a failure to rule out misleading signals?

I’d turn this around and say that the way the mistake was discovered and corrected is actually a reason why we can trust particle physics results. It shows that science is a self-correcting process: when mistakes are made, they are invariably found out one way or another and then fixed.

We went through this rollercoaster on LHCb, where for a long while we were seeing evidence that beauty quarks were decaying in ways that cannot be explained by the standard model of particle physics. The implications were huge: if confirmed, the results would have implied the existence of a new fundamental force. Several different results seemed to confirm this until, to our horror, we gradually realised that there was a background process that was biasing the results. However, to the credit of my colleagues, they didn’t hide the mistake and worked incredibly hard to put out a corrected result that the community could have confidence in.

However, let’s imagine that we hadn’t found the mistake, or that we’d been unscrupulous so-and-sos and hidden it. Well, rightly, the community would never have fully believed that a new force had been discovered until other experiments had verified it, and when that happened the truth would have outed anyway. That’s why there are two big general-purpose detectors at the Large Hadron Collider – they exist to cross-check each other. One of the crucial facts that convinced the world that the Higgs boson really was discovered in 2012 was that two independent experiments saw the same particle at the same energy. 

Ultimately, any discovery needs verification. And even when mistakes are made, you always learn from them, whether that’s how to perform certain kinds of experiments or the correct way to carry out theoretical calculations. My undergraduate physics tutor used to have a card above his desk that read, ‘I’ve learned so much from my mistakes, I think I’ll make another’.

Is there still a chance for the LHC to produce results that change the standard model, or have its limits been reached?

There’s definitely a very good chance that the LHC will still find something from beyond the standard model. By 2029 the collider will have undergone a major upgrade that dramatically increases the rate at which it collides protons, meaning that we will accumulate data far more quickly. This will make it possible to eke out ever rarer or more subtle signals that could have been missed so far. What’s more, physicists are constantly coming up with new ways to look at the existing data to spy signals that might be hiding in it.

However, even in the absence of new physics, by the time the LHC switches off some time in the early 2040s it will have left an enormous legacy in terms of understanding the standard model itself. There are still many phenomena predicted by our current theory that we don’t fully understand. For instance, we don’t actually know whether the Higgs boson discovered in 2012 is really the bog-standard model Higgs or perhaps something more exotic. The LHC will continue to put the Higgs under the microscope over the next decade and a half, giving us a much clearer picture of this all-important particle. At the same time, we’re constantly learning more about the properties of the known particles and discovering new phenomena associated with the strong force, like tetraquarks and pentaquarks, or probing the super-heated liquid known as ‘quark-gluon plasma’ that filled the very early universe. The LHC still has a lot to teach us.
