
Shannon Vallor - AI Mirror Interview

Shannon Vallor is the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence in the Department of Philosophy at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a Fellow of the Alan Turing Institute and former AI Ethicist at Google. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, 2016). Her latest book is The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking.

Why philosophy?

I’m a philosopher because there’s no other area of research that investigates what our knowledge rests on, what justifies our beliefs, values, and choices, what legitimizes the power of the institutions and laws that govern us, what we owe to one another, and how we ought to live in order to flourish together. I have a hard time imagining more important questions to ask right now.

Why this book?

I’ve been writing about the ethics of artificial intelligence and robotics for over a decade, long before large language models and ChatGPT. But about five years ago I started thinking about the unique moral, political and philosophical questions being raised by AI systems trained on large volumes of human-generated data. These are machine learning models designed to mirror and reproduce the same patterns found in our past behaviours: our decisions, our speech and writing, our movements, our creative work and our thought processes.

So it was fortunate that I was already writing this book when large language models arose, because the book was about exactly this approach to developing AI. But the book became much more urgent when the media hype around ChatGPT started fuelling distorted fears and fantasies about ‘superhuman’ AI that distract us from the truth about what these systems do, and what their benefits and risks to us really are. 

I wrote the book to help people better understand that today’s AI systems are not the ‘intelligent machine minds’ that our science fiction has long predicted, but instead are really just powerful but heavily distorted mirror images of our own intelligence. I also wanted to write a more readable, nonacademic book on AI, weaving in literature and humour, that would still allow readers to get at these really deep philosophical questions. 

Most of all I wanted readers to understand the danger that comes when we lose sight of the difference between our own humanity and its reflection in a machine mirror, and the folly of automating our lives and societies with mindless tools that cannot help us chart new and wiser paths into the future, but can only reproduce the unsustainable patterns of humanity’s past. The book’s message isn’t doom, but hope; we are still free to chart those better paths. It’s a call to action: not to reject technology, but to understand and relate to it in a new and better way.

AI as a mirror of human intelligence is a great metaphor - are there aspects of AI, though, where the metaphor can mislead us?

I have been asking my audiences to test the limits of the metaphor, because all metaphors have limits and break down eventually! But the metaphor has been surprisingly robust – more often people come up to me after a talk to tell me about other ways that the metaphor fits – additional properties that mirrors share with large machine learning models. 

That said, mirrors can’t be automated; their surfaces reproduce the pattern of your body or face only so long as you are standing there in front of them. The algorithmic surface of a machine learning model, on the other hand, reproduces the patterns it has received – the patterns in the ‘light’ of our data –  over and over again, every time we ask for it. And generative AI tools like GPT do something else mirrors don’t, which is add statistical noise to every reflection. That is, the outputs of a generative AI tool, whether text, image, or video, are designed to weave in some randomness, giving the output a degree of novelty or surprise that makes it seem more ‘creative’ or ‘intelligent’. But it’s just a trick of the light, so to speak; a random distortion added to the mirror image so that you don’t realize it’s a reflection, so that you might mistake it for something else.

What’s next?

That’s a great question, although I find that after a book is written it’s a good idea to take a year off and let new ideas enter into your brain before you start a new project. Otherwise you just end up writing the same thing in different ways. That said, I think the accelerating climate crisis, and the nearing prospects of catastrophic upheaval like an AMOC collapse*, are going to quickly force us to invent new ways of living with technology. These are explored in the last two chapters of The AI Mirror, but I think they need to be pushed further. I plan to spend the next year reading the work of others who are envisioning what a transition to a sustainable human culture looks like.

What’s exciting you at the moment?

The philosopher Albert Borgmann, four decades ago, wrote about how technological threats to automate humane values and experiences can, paradoxically, reawaken our appreciation of them, and spark a new commitment to protecting and holding space in our lives for them. I’m seeing exciting signs of this kind of wider awakening, as a response to AI and machine automation. I see a reinvigoration of public awareness of the value of the arts and creative labour, and a greater appreciation for and commitment to protect humane thought and expression. 

When Apple makes an ad like the May 2024 one for the iPad that celebrated the machine flattening of human culture, and people react with visceral anger and disgust, I think that’s healthy. When people see a Google ad at the Summer Olympics that celebrates AI’s ability to replace a child’s authentic self-expression, and they uniformly hate it, that’s a good sign. Sometimes the counterreaction is a misplaced hatred of AI and technology itself, which I firmly reject; I’m ultimately calling for us to restore a healthy, free human relationship to technology, not break it. But at their best, these reactions are responses of humane protection, of cherishing and jointly celebrating what we refuse to lose of ourselves. That’s something to get excited about.

* Ed: AMOC is the Atlantic Meridional Overturning Circulation, a seawater circulation system which is separate from but interacts with the much smaller Gulf Stream.

Image © Callum Bennetts – Maverick Photo Agency

Interview by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here
