
Scott Shapiro - Five Way Interview

Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale Law School, where he is Director of the Center for Law and Philosophy and of the Yale CyberSecurity Lab. He is the author of Legality and the co-author, with Oona Hathaway, of The Internationalists: How a Radical Plan to Outlaw War Remade the World. His latest book is Fancy Bear Goes Phishing.

Why information technology?

Two reasons. First, I like playing with computers. They’re fun. Second, information technology is a vital interest. As venture capitalist Marc Andreessen said in his 2011 essay, 'software is eating the world.' It has transformed industries, businesses, and society. Of the ten richest people in the world, with a combined wealth of $250 billion, six are tech billionaires, and two of them, Bill Gates and Larry Ellison, founded operating system companies, Microsoft and Oracle. That means a couple of things: first, vast global power is now in the hands of those who understand and control these technologies, and second, we have become so interconnected digitally that if one system fails, it can compromise the rest. If we can’t understand how the internet works, and how our information is stored, transferred, and manipulated, we can’t keep our information safe.

Why this book?

Most cybersecurity books fall into two categories: a joyless, 'eat-your-vegetables' style or a breathless, 'run-for-the-hills-now' one. I didn’t want to make people feel guilty about their password choices or terrify them with catastrophic scenarios. That’s counterproductive. The goal of Fancy Bear Goes Phishing is to avoid the two extremes. If anything, it’s more funny than scary. It offers an engaging, entertaining perspective on hacking culture. Hackers are often bored, anxious, insecure teenage boys or underemployed men with the capacity to destabilize our digital lives. I try to provide a compelling account of this human side of hacking while also explaining the more technical elements to people with no background in computer science. 

Does AI have anything to offer to either side of the cybercrime story?

Generative AI is AI that can create new things from existing data. This opens up new potential attacks, such as the hacking of biometric authentication systems: with deepfakes, voice cloning, and fingerprint generation, hackers can impersonate their victims. More interesting are 'Hallucination' Attacks. Chatbots often 'hallucinate', making up events, people, books, and computer programs. In a Hallucination Attack, the hacker creates real programs that correspond to the hallucinated ones, but secretly places malware in them. This is extremely dangerous because users willingly download the poisoned program directly from the hacker’s website. People using AI engines for coding, therefore, should make sure that an AI-recommended module hasn’t been maliciously created by a hacker.
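As a concrete illustration of that last piece of advice, here is a minimal sketch, not from the book, of one way to sanity-check an AI-recommended Python package before installing it. It queries PyPI's public JSON API and applies two purely illustrative heuristics: a hallucinated name usually will not exist at all, while a freshly 'squatted' hallucinated name tends to have a very short and sparse release history. The thresholds and heuristics below are assumptions made for the example, not anything Shapiro describes.

import json
import sys
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import urlopen

def check_package(name: str) -> None:
    """Look up a package on PyPI and report some simple trust signals."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urlopen(url) as resp:
            data = json.load(resp)
    except HTTPError:
        # A 404 here is exactly what a hallucinated (or just misspelled) name looks like.
        print(f"'{name}' does not exist on PyPI - possibly a hallucinated name.")
        return

    releases = data.get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        print(f"'{name}' exists but has no uploaded files - treat with suspicion.")
        return

    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    print(f"'{name}': {len(releases)} release(s), first upload {age_days} days ago.")
    if age_days < 30 or len(releases) < 3:
        # Illustrative thresholds only: very new, sparse projects deserve a closer look.
        print("Very new or sparse history - verify the project before installing.")

if __name__ == "__main__":
    check_package(sys.argv[1] if len(sys.argv) > 1 else "requests")

None of this proves a package is safe, of course; but a module name that fails even these basic checks is a strong hint that a chatbot invented it and that someone may have registered it since.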

What’s next?

I’m writing a history of Artificial Intelligence and the philosophical critique of AI. The premise of the book is that much of our current confusion about AI (how to understand black-box algorithms and how to hold them accountable) has been tackled, and for the most part solved, by 20th century philosophers. These philosophers were perplexed by how we can understand each other; after all, as far as we’re concerned, our skulls are black boxes to one another. Privacy is the hallmark of mental life, yet we understand each other constantly, grasping one another's intentions, beliefs, and feelings, and we hold each other accountable for our actions daily. The goal of the book is to show how the principles philosophers developed to explain natural intelligence can be applied to explain artificial intelligence.

What’s exciting you at the moment?

Two years ago, I teamed up with the Yale Computer Science Department and won an NSF grant on Designing Accountable Software Systems. The following example illustrates the focus of our research: imagine there’s a car accident, and a child is hit by a self-driving car using AI. How do we respond? Whom do we hold responsible? The sad truth is that we don’t know, because we don’t really understand how self-driving cars work. Using philosophical techniques developed in the 20th century and automated reasoning techniques from the 21st century, our team has developed tools to determine the intentions of a self-driving car. No one can immediately tell what the car intends just by looking at the code, but our tool can work out its intentions by asking the car questions. We can ask the car how it would respond if the child were, say, on the sidewalk. Would it climb the sidewalk? If so, that would be evidence that the car wasn’t suffering from a sensor malfunction, but intended to hit the child. What excites me about the project is that we are using philosophical tests for the existence of intentions, beliefs, and desires and then implementing them computationally using techniques in automated reasoning. Going from a high-level philosophical theory to an app is exciting.
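To make the flavour of that counterfactual questioning concrete, here is a toy sketch in Python. It is emphatically not the Yale team's tool; the scenarios, action names, and policies are invented for illustration. The idea it demonstrates is simply that a black-box decision policy can be probed with systematically varied scenarios: if its behaviour tracks the pedestrian's position, that is evidence of something intention-like, whereas behaviour that ignores the pedestrian entirely is more naturally explained as a sensor failure.

from typing import Callable, Dict

# A scenario is a tiny description of the world, e.g. {"pedestrian": "road"}.
Scenario = Dict[str, str]
# A policy maps a scenario to an action such as "brake", "continue",
# or "steer_onto_sidewalk". We treat it as a black box.
Policy = Callable[[Scenario], str]

def probe(policy: Policy) -> str:
    """Ask the black-box policy counterfactual questions and interpret the answers."""
    on_road = policy({"pedestrian": "road"})
    on_sidewalk = policy({"pedestrian": "sidewalk"})

    if on_road == "continue" and on_sidewalk == "continue":
        return "Ignores the pedestrian in both cases: consistent with a sensor malfunction."
    if on_sidewalk == "steer_onto_sidewalk":
        return "Alters course to follow the pedestrian: behaviour directed at the pedestrian, not a sensing failure."
    return "Responds appropriately to the pedestrian (e.g. braking): no evidence of harmful intent."

# Two hypothetical policies to show the contrast.
def broken_sensor_policy(scenario: Scenario) -> str:
    return "continue"  # never 'sees' the pedestrian at all

def malicious_policy(scenario: Scenario) -> str:
    return "steer_onto_sidewalk" if scenario["pedestrian"] == "sidewalk" else "continue"

if __name__ == "__main__":
    print(probe(broken_sensor_policy))
    print(probe(malicious_policy))

The actual project works with automated reasoning over real driving software rather than hand-written toy policies, but the logic of the test is the same: vary the counterfactual and see what the behaviour is sensitive to.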

Image © 2018 Guy Jordan


