
Scott Shapiro - Five Way Interview

Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale Law School, where he is Director of the Center for Law and Philosophy and of the CyberSecurity Lab. He is the author of Legality and the co-author, with Oona Hathaway, of The Internationalists: How a Radical Plan to Outlaw War Remade the World. His latest book is Fancy Bear Goes Phishing.

Why information technology?

Two reasons. First, I like playing with computers. They’re fun. Second, information technology is a vital interest. As venture capitalist Marc Andreessen said in his 2011 essay, 'software is eating the world.' It’s transformed industries, businesses, and society. Of the ten richest people in the world, with a combined wealth of $250 billion, six are tech billionaires and two—Bill Gates and Larry Ellison—are founders of operating system companies—Microsoft and Oracle. That means a couple of things: first, vast global power is now in the hands of those who understand and control these technologies, and second, we have become so interconnected digitally that if one system fails, it can compromise the rest. If we can’t understand how the internet works, how our information is stored, transferred, and manipulated, we can’t keep our information safe.

Why this book?

Most cybersecurity books fall into two categories: a joyless, 'eat-your-vegetables' style or a breathless, 'run-for-the-hills-now' one. I didn’t want to make people feel guilty about their password choices or terrify them with catastrophic scenarios. That’s counterproductive. The goal of Fancy Bear Goes Phishing is to avoid the two extremes. If anything, it’s more funny than scary. It offers an engaging, entertaining perspective on hacking culture. Hackers are often bored, anxious, insecure teenage boys or underemployed men with the capacity to destabilize our digital lives. I try to provide a compelling account of this human side of hacking while also explaining the more technical elements to people with no background in computer science. 

Does AI have anything to offer to either side of the cybercrime story?

Generative AI is AI that can create new things from existing data. This creates new potential attacks, such as the hacking of biometric authentication systems. With deepfakes, voice cloning, and fingerprint generation, hackers can impersonate victims using these new AI technologies. More interesting are 'Hallucination Attacks'. Chatbots often 'hallucinate'—make stuff up (events, people, books, computer programs). In a Hallucination Attack, the hacker creates programs that correspond to hallucinated ones but secretly places malware in them. This is extremely dangerous because users willingly download the poisoned program directly from the hacker’s website. People using AI engines for coding, therefore, should make sure that the AI-recommended module hasn’t been maliciously created by a hacker.
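To make that advice concrete, here is a minimal sketch (my own illustration, not something from the book) of how a developer might sanity-check an AI-suggested Python package before installing it. It uses PyPI's public JSON API; the specific red flags chosen (a name that doesn't exist, or one whose first upload is very recent) are assumptions for demonstration, not a complete defence.

```python
# Illustrative sketch: quick sanity check on an AI-suggested package name
# before "pip install". The heuristics here are assumptions, not a full defence.
from datetime import datetime, timezone

import requests  # third-party HTTP client (pip install requests)


def check_package(name: str) -> str:
    """Look up a package on PyPI and flag obvious red flags."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        # The package does not exist: a hallucinated name an attacker could register.
        return "MISSING: do not install; the name may be a hallucination."
    resp.raise_for_status()
    data = resp.json()

    # Collect upload timestamps across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "SUSPICIOUS: package page exists but has no released files."

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < 30:
        return f"SUSPICIOUS: first upload only {age_days} days ago; verify the author."
    return f"Looks established: {len(data['releases'])} releases, first upload {age_days} days ago."


if __name__ == "__main__":
    print(check_package("requests"))                          # well-established package
    print(check_package("definitely-not-a-real-module-xyz"))  # likely MISSING
```

A check like this only catches the crudest cases; the broader point in the interview stands, which is that provenance of AI-recommended code should never be taken on trust.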

What’s next?

I’m writing a history of Artificial Intelligence and the philosophical critique of AI. The premise of the book is that much of our current confusion with AI—not understanding black box algorithms and how to hold them accountable—has been tackled, and for the most part solved, by 20th century philosophers. These philosophers were perplexed by how we can understand each other; after all, as far as we’re concerned, our skulls are black boxes to one another. Privacy is the hallmark of mental life, yet we understand each other constantly—our intentions, beliefs, feelings—and we hold each other accountable for our actions daily. The goal of the book is to show how the philosophical principles developed to explain natural intelligence can be used to explain artificial intelligence.

What’s exciting you at the moment?

Two years ago, I teamed up with the Yale Computer Science Department and won an NSF grant on Designing Accountable Software Systems. The following example illustrates the focus of our research: Imagine there’s a car accident, and a child is hit by a self-driving car using AI. How do we respond? Whom do we hold responsible? The sad truth is that we don’t know, because we don’t really understand how self-driving cars work. Using philosophical techniques developed in the 20th century and automated reasoning techniques from the 21st century, our team has developed tools to determine the intentions of a self-driving car. No one can immediately tell what the car intends just by looking at the code, but our tool can figure out its intentions by asking the car questions. We can ask the car how it would respond if the child were, say, on the sidewalk. Would it climb the sidewalk? If so, that would be evidence that the car wasn’t suffering from a sensor malfunction, but intended to hit the child. What excites me about the project is that we are using philosophical tests for the existence of intentions, beliefs, and desires and then implementing them computationally using techniques in automated reasoning. Going from a high-level philosophical theory to an app is exciting.
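For readers who want a feel for the counterfactual-probing idea described here, the toy sketch below treats a driving policy as a black box that can only be queried, not read, and asks it "what would you do if the child were somewhere else?" It is my own simplified illustration, not the Yale team's tool; the scenario fields, action labels, and decision rules are invented for the example.

```python
# Toy sketch of counterfactual probing: infer something about a black-box
# policy's "intentions" from how its answers change across scenarios.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Scenario:
    child_location: str   # "road" or "sidewalk" (invented labels)
    sensors_working: bool


def probe_intent(policy: Callable[[Scenario], str]) -> str:
    """Ask the policy counterfactual questions and interpret its answers."""
    on_road = policy(Scenario(child_location="road", sensors_working=True))
    on_sidewalk = policy(Scenario(child_location="sidewalk", sensors_working=True))

    if on_road == "steer_toward_child" and on_sidewalk == "steer_toward_child":
        # It pursues the child even when the child moves: evidence of intent.
        return "evidence of intent to hit"
    if on_road == "steer_toward_child" and on_sidewalk == "stay_in_lane":
        # Collision only when the child is in its path: consistent with malfunction or accident.
        return "consistent with sensor malfunction or accident"
    return "no evidence of intent"


# A hypothetical black-box policy we can only query, never inspect.
def opaque_policy(s: Scenario) -> str:
    return "steer_toward_child" if s.child_location == "road" else "stay_in_lane"


if __name__ == "__main__":
    print(probe_intent(opaque_policy))  # -> consistent with sensor malfunction or accident
```

The real research replaces these hand-written rules with automated reasoning over the system itself, but the underlying move is the same: intentions are attributed from patterns of answers to counterfactual questions, not read directly off the code.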

Image © 2018 Guy Jordan


