
Shannon Vallor - AI Mirror Interview

Shannon Vallor is the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence in the Department of Philosophy at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a Fellow of the Alan Turing Institute and a former AI Ethicist at Google. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, 2016). Her latest book is The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking.

Why philosophy?

I’m a philosopher because there’s no other area of research that investigates what our knowledge rests on, what justifies our beliefs, values, and choices, what legitimizes the power of the institutions and laws that govern us, what we owe to one another, and how we ought to live in order to flourish together. I have a hard time imagining more important questions to ask right now.

Why this book?

I’ve been writing about the ethics of artificial intelligence and robotics for over a decade, long before large language models and ChatGPT. But about five years ago I started thinking about the unique moral, political and philosophical questions being raised by AI systems trained on large volumes of human-generated data. These are machine learning models designed to mirror and reproduce the same patterns found in our past behaviours: our decisions, our speech and writing, our movements, our creative work and our thought processes.

So it was fortunate that I was already writing this book when large language models arose, because the book was about exactly this approach to developing AI. But the book became much more urgent when the media hype around ChatGPT started fuelling distorted fears and fantasies about ‘superhuman’ AI that distract us from the truth about what these systems do, and what their benefits and risks to us really are. 

I wrote the book to help people better understand that today’s AI systems are not the ‘intelligent machine minds’ that our science fiction has long predicted, but instead are really just powerful but heavily distorted mirror images of our own intelligence. I also wanted to write a more readable, nonacademic book on AI, weaving in literature and humour, that would still allow readers to get at these really deep philosophical questions. 

Most of all I wanted readers to understand the danger that comes when we lose sight of the difference between our own humanity and its reflection in a machine mirror, and the folly of automating our lives and societies with mindless tools that cannot help us chart new and wiser paths into the future, but can only reproduce the unsustainable patterns of humanity’s past. The book’s message isn’t doom, but hope; we are still free to chart those better paths. It’s a call to action: not to reject technology, but to understand and relate to it in a new and better way.

AI as a mirror of human intelligence is a great metaphor - are there aspects of AI, though, where the metaphor can mislead us?

I have been asking my audiences to test the limits of the metaphor, because all metaphors have limits and break down eventually! But the metaphor has been surprisingly robust – more often people come up to me after a talk to tell me about other ways that the metaphor fits – additional properties that mirrors share with large machine learning models. 

That said, mirrors can’t be automated; their surfaces reproduce the pattern of your body or face only so long as you are standing there in front of them. The algorithmic surface of a machine learning model, on the other hand, reproduces the patterns it has received – the patterns in the ‘light’ of our data –  over and over again, every time we ask for it. And generative AI tools like GPT do something else mirrors don’t, which is add statistical noise to every reflection. That is, the outputs of a generative AI tool, whether text, image, or video, are designed to weave in some randomness, giving the output a degree of novelty or surprise that makes it seem more ‘creative’ or ‘intelligent’. But it’s just a trick of the light, so to speak; a random distortion added to the mirror image so that you don’t realize it’s a reflection, so that you might mistake it for something else.

What’s next?

That’s a great question, although I find that after a book is written it’s a good idea to take a year off and let new ideas enter into your brain before you start a new project. Otherwise you just end up writing the same thing in different ways. That said, I think the accelerating climate crisis, and the nearing prospects of catastrophic upheaval like an AMOC collapse*, are going to quickly force us to invent new ways of living with technology. These are explored in the last two chapters of The AI Mirror, but I think they need to be pushed further. I plan to spend the next year reading the work of others who are envisioning what a transition to a sustainable human culture looks like.

What’s exciting you at the moment?

The philosopher Albert Borgmann, four decades ago, wrote about how technological threats to automate humane values and experiences can, paradoxically, reawaken our appreciation of them, and spark a new commitment to protecting and holding space in our lives for them. I’m seeing exciting signs of this kind of wider awakening, as a response to AI and machine automation. I see a reinvigoration of public awareness of the value of the arts and creative labour, and a greater appreciation for and commitment to protect humane thought and expression. 

When Apple makes an ad like the May 2024 one for the iPad that celebrated the machine flattening of human culture, and people react with visceral anger and disgust, I think that’s healthy. When people see a Google ad at the Summer Olympics that celebrates AI’s ability to replace a child’s authentic self-expression, and they uniformly hate it, that’s a good sign. Sometimes the counterreaction is a misplaced hatred of AI and technology itself, which I firmly reject; I’m ultimately calling for us to restore a healthy, free human relationship to technology, not break it. But at their best, these reactions are responses of humane protection, of cherishing and jointly celebrating what we refuse to lose of ourselves. That’s something to get excited about.

* Ed: AMOC is the Atlantic Meridional Overturning Circulation, a seawater circulation system which is separate from but interacts with the much smaller Gulf Stream.

Image © Callum Bennetts – Maverick Photo Agency

Interview by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here
