
Shannon Vallor - AI Mirror Interview

Shannon Vallor is the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence in the Department of Philosophy at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a Fellow of the Alan Turing Institute and a former AI Ethicist at Google. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, 2016). Her latest book is The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking.

Why philosophy?

I’m a philosopher because there’s no other area of research that investigates what our knowledge rests on, what justifies our beliefs, values, and choices, what legitimizes the power of the institutions and laws that govern us, what we owe to one another, and how we ought to live in order to flourish together. I have a hard time imagining more important questions to ask right now.

Why this book?

I’ve been writing about the ethics of artificial intelligence and robotics for over a decade, long before large language models and ChatGPT. But about five years ago I started thinking about the unique moral, political and philosophical questions being raised by AI systems trained on large volumes of human-generated data. These are machine learning models designed to mirror and reproduce the same patterns found in our past behaviours: our decisions, our speech and writing, our movements, our creative work and our thought processes.

So it was fortunate that I was already writing this book when large language models arose, because the book was about exactly this approach to developing AI. But the book became much more urgent when the media hype around ChatGPT started fuelling distorted fears and fantasies about ‘superhuman’ AI that distract us from the truth about what these systems do, and what their benefits and risks to us really are. 

I wrote the book to help people better understand that today’s AI systems are not the ‘intelligent machine minds’ that our science fiction has long predicted, but instead are really just powerful but heavily distorted mirror images of our own intelligence. I also wanted to write a more readable, nonacademic book on AI, weaving in literature and humour, that would still allow readers to get at these really deep philosophical questions. 

Most of all I wanted readers to understand the danger that comes when we lose sight of the difference between our own humanity and its reflection in a machine mirror, and the folly of automating our lives and societies with mindless tools that cannot help us chart new and wiser paths into the future, but can only reproduce the unsustainable patterns of humanity’s past. The book’s message isn’t doom, but hope; we are still free to chart those better paths. It’s a call to action: not to reject technology, but to understand and relate to it in a new and better way.

AI as a mirror of human intelligence is a great metaphor - are there aspects of AI, though, where the metaphor can mislead us?

I have been asking my audiences to test the limits of the metaphor, because all metaphors have limits and break down eventually! But the metaphor has been surprisingly robust – more often people come up to me after a talk to tell me about other ways that the metaphor fits – additional properties that mirrors share with large machine learning models. 

That said, mirrors can’t be automated; their surfaces reproduce the pattern of your body or face only so long as you are standing there in front of them. The algorithmic surface of a machine learning model, on the other hand, reproduces the patterns it has received – the patterns in the ‘light’ of our data – over and over again, every time we ask for it. And generative AI tools like GPT do something else mirrors don’t, which is add statistical noise to every reflection. That is, the outputs of a generative AI tool, whether text, image, or video, are designed to weave in some randomness, giving the output a degree of novelty or surprise that makes it seem more ‘creative’ or ‘intelligent’. But it’s just a trick of the light, so to speak; a random distortion added to the mirror image so that you don’t realize it’s a reflection, so that you might mistake it for something else.

What’s next?

That’s a great question, although I find that after a book is written it’s a good idea to take a year off and let new ideas enter into your brain before you start a new project. Otherwise you just end up writing the same thing in different ways. That said, I think the accelerating climate crisis, and the nearing prospects of catastrophic upheaval like an AMOC collapse*, are going to quickly force us to invent new ways of living with technology. These are explored in the last two chapters of The AI Mirror, but I think they need to be pushed further. I plan to spend the next year reading the work of others who are envisioning what a transition to a sustainable human culture looks like.

What’s exciting you at the moment?

The philosopher Albert Borgmann, four decades ago, wrote about how technological threats to automate humane values and experiences can, paradoxically, reawaken our appreciation of them, and spark a new commitment to protecting and holding space in our lives for them. I’m seeing exciting signs of this kind of wider awakening, as a response to AI and machine automation. I see a reinvigoration of public awareness of the value of the arts and creative labour, and a greater appreciation for and commitment to protect humane thought and expression. 

When Apple makes an ad like the May 2024 one for the iPad that celebrated the machine flattening of human culture, and people react with visceral anger and disgust, I think that’s healthy. When people see a Google ad at the Summer Olympics that celebrates AI’s ability to replace a child’s authentic self-expression, and they uniformly hate it, that’s a good sign. Sometimes the counterreaction is a misplaced hatred of AI and technology itself, which I firmly reject; I’m ultimately calling for us to restore a healthy, free human relationship to technology, not break it. But at their best, these reactions are responses of humane protection, of cherishing and jointly celebrating what we refuse to lose of ourselves. That’s something to get excited about.

* Ed: AMOC is the Atlantic Meridional Overturning Circulation, a seawater circulation system which is separate from but interacts with the much smaller Gulf Stream.

Image © Callum Bennetts – Maverick Photo Agency

These articles will always be free - but if you'd like to support my online work, consider buying a virtual coffee:
Interview by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here

