
Scott Shapiro - Five Way Interview

Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale Law School, where he directs the Center for Law and Philosophy and the Yale CyberSecurity Lab. He is the author of Legality and the co-author, with Oona Hathaway, of The Internationalists: How a Radical Plan to Outlaw War Remade the World. His latest book is Fancy Bear Goes Phishing.

Why information technology?

Two reasons. First, I like playing with computers. They’re fun. Second, information technology is a vital interest. As venture capitalist Marc Andreessen put it in his 2011 essay, 'software is eating the world.' It has transformed industries, businesses, and society. Of the ten richest people in the world, with a combined wealth of $250 billion, six are tech billionaires, and two of them, Bill Gates and Larry Ellison, founded the software giants Microsoft and Oracle. That means a couple of things: first, vast global power is now in the hands of those who understand and control these technologies, and second, we have become so interconnected digitally that if one system fails, it can compromise the rest. If we can’t understand how the internet works, and how our information is stored, transferred, and manipulated, we can’t keep our information safe.

Why this book?

Most cybersecurity books fall into two categories: a joyless, 'eat-your-vegetables' style or a breathless, 'run-for-the-hills-now' one. I didn’t want to make people feel guilty about their password choices or terrify them with catastrophic scenarios. That’s counterproductive. The goal of Fancy Bear Goes Phishing is to avoid both extremes. If anything, it’s funnier than it is scary. It offers an engaging, entertaining perspective on hacking culture. Hackers are often bored, anxious, insecure teenage boys or underemployed men with the capacity to destabilize our digital lives. I try to provide a compelling account of this human side of hacking while also explaining the more technical elements to people with no background in computer science.

Does AI have anything to offer to either side of the cybercrime story?

Generative AI is AI that can create new content from existing data. This opens up new potential attacks, such as the hacking of biometric authentication systems: with deepfakes, voice cloning, and fingerprint generation, hackers can impersonate victims using these new AI technologies. More interesting are 'hallucination' attacks. Chatbots often 'hallucinate', that is, they make things up: events, people, books, even computer programs. In a hallucination attack, the hacker creates real programs that correspond to the hallucinated ones but secretly plants malware in them. This is extremely dangerous because users willingly download the poisoned program directly from the hacker’s website. People using AI engines for coding should therefore make sure that an AI-recommended module hasn’t been maliciously created by a hacker.
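That last piece of advice can be sketched in a few lines of Python. This is an illustrative toy, not a real security tool: the trusted set, the threshold, and the package names are all assumptions for the example. A real check would also consult the package registry, the project's maintainers, and its download history before installing anything an AI assistant suggests.

```python
# Toy vetting step for an AI-suggested package name (illustrative only).
# Hallucination attacks register plausible-sounding names, so we check a
# suggestion against names we already trust (e.g. from a lockfile) and
# flag near-misses as possible typosquats.
import difflib

TRUSTED = {"requests", "numpy", "pandas", "cryptography"}  # assumed lockfile contents

def vet_suggestion(name: str, trusted=TRUSTED, cutoff=0.8) -> str:
    """Return a verdict string for an AI-recommended module name."""
    if name in trusted:
        return "trusted"
    close = difflib.get_close_matches(name, trusted, n=1, cutoff=cutoff)
    if close:
        # Very similar to a trusted name: classic typosquat territory.
        return f"lookalike of {close[0]} - possible typosquat"
    return "unknown - verify the project page and maintainers before installing"

# Example verdicts:
vet_suggestion("requests")        # a name we already depend on
vet_suggestion("requestss")       # one letter off a trusted name
vet_suggestion("torch-gpt-utils") # plausible-sounding but unknown
```

The design point is simply that the check happens before `pip install`, not after: an unknown or lookalike name is a prompt to verify, not a reason to trust the chatbot.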

What’s next?

I’m writing a history of Artificial Intelligence and the philosophical critique of AI. The premise of the book is that much of our current confusion about AI, such as how to understand black-box algorithms and how to hold them accountable, was tackled, and for the most part solved, by twentieth-century philosophers. These philosophers were perplexed by how we can understand each other; after all, as far as we’re concerned, our skulls are black boxes to one another. Privacy is the hallmark of mental life, yet we constantly grasp each other’s intentions, beliefs, and feelings, and we hold each other accountable for our actions daily. The goal of the book is to show how the principles philosophers developed to explain natural intelligence can be applied to explain artificial intelligence.

What’s exciting you at the moment?

Two years ago, I teamed up with the Yale Computer Science Department and won an NSF grant on Designing Accountable Software Systems. The following example illustrates the focus of our research: imagine there’s an accident, and a child is hit by a self-driving car using AI. How do we respond? Whom do we hold responsible? The sad truth is that we don’t know, because we don’t really understand how self-driving cars work. Using philosophical techniques developed in the 20th century and automated reasoning techniques from the 21st, our team has developed tools to determine the intentions of a self-driving car. No one can tell what the car intends just by looking at its code, but our tool can infer its intentions by asking the car questions. We can ask the car how it would respond if the child were, say, on the sidewalk. Would it drive onto the sidewalk? If so, that would be evidence that the car wasn’t suffering from a sensor malfunction but intended to hit the child. What excites me about the project is that we take philosophical tests for the existence of intentions, beliefs, and desires and implement them computationally using techniques from automated reasoning. Going from a high-level philosophical theory to an app is exciting.
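The counterfactual questioning described here can be caricatured in a few lines of Python. Everything in this sketch, the policy functions, the action names, and the scenario field, is hypothetical; the actual research uses automated reasoning over real controller code. The toy only shows the logic of the test: vary the scenario and see whether the car's behavior tracks the child.

```python
# Toy sketch of counterfactual intention-probing (illustrative only).
# We treat the driving policy as a black box and ask it what it would do
# in varied scenarios, then classify: chasing the child across scenarios
# suggests intent; never reacting suggests a sensor fault.

def infer_intent(policy) -> str:
    """Classify a black-box policy by querying counterfactual scenarios."""
    on_road = policy({"child_location": "road"})
    on_sidewalk = policy({"child_location": "sidewalk"})
    if on_sidewalk == "mount_sidewalk":
        # The car follows the child off the road: evidence of intent, not fault.
        return "intended to hit"
    if on_road == "no_reaction":
        # The car never responds to the child at all: consistent with a fault.
        return "possible sensor malfunction"
    return "avoided the child"

# Hypothetical policies for illustration:
def broken_sensor_policy(scenario):
    return "no_reaction"  # never perceives the child in any scenario

def malicious_policy(scenario):
    if scenario["child_location"] == "sidewalk":
        return "mount_sidewalk"
    return "steer_toward_child"

def cautious_policy(scenario):
    if scenario["child_location"] == "road":
        return "brake_and_steer_away"
    return "stay_in_lane"
```

The point of the design is that intent is read off a *pattern* of answers rather than any single line of code: only the policy that pursues the child across counterfactual scenarios gets classified as intending the harm.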

Image © 2018 Guy Jordan



