
Shannon Vallor - AI Mirror Interview

Shannon Vallor is the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence in the Department of Philosophy at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a Fellow of the Alan Turing Institute and former AI Ethicist at Google. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, 2016). Her latest book is The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking.

Why philosophy?

I’m a philosopher because there’s no other area of research that investigates what our knowledge rests on, what justifies our beliefs, values, and choices, what legitimizes the power of the institutions and laws that govern us, what we owe to one another, and how we ought to live in order to flourish together. I have a hard time imagining more important questions to ask right now.

Why this book?

I’ve been writing about the ethics of artificial intelligence and robotics for over a decade, long before large language models and ChatGPT. But about five years ago I started thinking about the unique moral, political and philosophical questions raised by AI systems trained on large volumes of human-generated data. These are machine learning models designed to mirror and reproduce the patterns found in our past behaviours: our decisions, our speech and writing, our movements, our creative work and our thought processes.

So it was fortunate that I was already writing this book when large language models arose, because the book was about exactly this approach to developing AI. But the book became much more urgent when the media hype around ChatGPT started fuelling distorted fears and fantasies about ‘superhuman’ AI that distract us from the truth about what these systems do, and what their benefits and risks to us really are. 

I wrote the book to help people better understand that today’s AI systems are not the ‘intelligent machine minds’ that our science fiction has long predicted, but are really just powerful, heavily distorted mirror images of our own intelligence. I also wanted to write a more readable, non-academic book on AI, weaving in literature and humour, that would still let readers get at these really deep philosophical questions.

Most of all I wanted readers to understand the danger that comes when we lose sight of the difference between our own humanity and its reflection in a machine mirror, and the folly of automating our lives and societies with mindless tools that cannot help us chart new and wiser paths into the future, but can only reproduce the unsustainable patterns of humanity’s past. The book’s message isn’t doom, but hope; we are still free to chart those better paths. It’s a call to action: not to reject technology, but to understand and relate to it in a new and better way.

AI as a mirror of human intelligence is a great metaphor - are there aspects of AI, though, where the metaphor can mislead us?

I have been asking my audiences to test the limits of the metaphor, because all metaphors have limits and break down eventually! But the metaphor has been surprisingly robust – more often people come up to me after a talk to tell me about other ways that the metaphor fits – additional properties that mirrors share with large machine learning models. 

That said, mirrors can’t be automated; their surfaces reproduce the pattern of your body or face only so long as you are standing there in front of them. The algorithmic surface of a machine learning model, on the other hand, reproduces the patterns it has received – the patterns in the ‘light’ of our data – over and over again, every time we ask for it. And generative AI tools like GPT do something else mirrors don’t, which is add statistical noise to every reflection. That is, the outputs of a generative AI tool, whether text, image, or video, are designed to weave in some randomness, giving the output a degree of novelty or surprise that makes it seem more ‘creative’ or ‘intelligent’. But it’s just a trick of the light, so to speak; a random distortion added to the mirror image so that you don’t realize it’s a reflection, so that you might mistake it for something else.
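
[Ed: to make that ‘statistical noise’ concrete, here is a minimal, hypothetical Python sketch of temperature sampling, one common way a generative language model injects randomness when picking each next word. It is an illustration only, not the code of any particular system, and the score values are invented.]

import math
import random

def sample_next_token(logits, temperature=1.0):
    # 'logits' are hypothetical scores a model might assign to candidate
    # next words. Dividing by the temperature reshapes the distribution:
    # low temperatures make the top choice dominate (mirror-like), while
    # high temperatures flatten it, letting unlikely words through.
    scaled = [score / temperature for score in logits]
    # Softmax: turn scores into probabilities (subtract max for stability).
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one candidate at random, weighted by those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

scores = [2.0, 1.0, 0.5, 0.1]  # invented scores for four candidate words
print(sample_next_token(scores, temperature=0.1))  # almost always picks word 0
print(sample_next_token(scores, temperature=1.5))  # noticeably more random

The randomness is deliberate: with it, each ‘reflection’ comes out slightly different, which is exactly the surprise Vallor says we can mistake for creativity.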

What’s next?

That’s a great question, although I find that after a book is written it’s a good idea to take a year off and let new ideas enter into your brain before you start a new project. Otherwise you just end up writing the same thing in different ways. That said, I think the accelerating climate crisis, and the nearing prospects of catastrophic upheaval like an AMOC collapse*, are going to quickly force us to invent new ways of living with technology. These are explored in the last two chapters of The AI Mirror, but I think they need to be pushed further. I plan to spend the next year reading the work of others who are envisioning what a transition to a sustainable human culture looks like.

What’s exciting you at the moment?

The philosopher Albert Borgmann, four decades ago, wrote about how technological threats to automate humane values and experiences can, paradoxically, reawaken our appreciation of them, and spark a new commitment to protecting and holding space in our lives for them. I’m seeing exciting signs of this kind of wider awakening, as a response to AI and machine automation. I see a reinvigoration of public awareness of the value of the arts and creative labour, and a greater appreciation for and commitment to protect humane thought and expression. 

When Apple makes an ad like the May 2024 one for the iPad that celebrated the machine flattening of human culture, and people react with visceral anger and disgust, I think that’s healthy. When people see a Google ad at the Summer Olympics that celebrates AI’s ability to replace a child’s authentic self-expression, and they uniformly hate it, that’s a good sign. Sometimes the counterreaction is a misplaced hatred of AI and technology itself, which I firmly reject; I’m ultimately calling for us to restore a healthy, free human relationship to technology, not break it. But at their best, these reactions are responses of humane protection, of cherishing and jointly celebrating what we refuse to lose of ourselves. That’s something to get excited about.

* Ed: AMOC is the Atlantic Meridional Overturning Circulation, a seawater circulation system which is separate from, but interacts with, the much smaller Gulf Stream.

Image © Callum Bennetts – Maverick Photo Agency

Interview by Brian Clegg - See all Brian's online articles or subscribe to a free weekly email here

