Scott Shapiro - Five Way Interview

Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale Law School, where he directs the Center for Law and Philosophy and the Yale CyberSecurity Lab. He is the author of Legality and, with Oona Hathaway, co-author of The Internationalists: How a Radical Plan to Outlaw War Remade the World. His latest book is Fancy Bear Goes Phishing.

Why information technology?

Two reasons. First, I like playing with computers. They’re fun. Second, information technology is a vital interest. As the venture capitalist Marc Andreessen said in his 2011 essay, 'software is eating the world.' It has transformed industries, businesses, and society. Of the ten richest people in the world, with a combined wealth of $250 billion, six are tech billionaires, and two of them, Bill Gates and Larry Ellison, founded operating system companies: Microsoft and Oracle. That means a couple of things: first, vast global power is now in the hands of those who understand and control the technologies, and second, we have become so interconnected digitally that if one system fails, it can compromise the rest. If we can’t understand how the internet works, and how our information is stored, transferred, and manipulated, we can’t keep our information safe.

Why this book?

Most cybersecurity books fall into two categories: a joyless, 'eat-your-vegetables' style or a breathless, 'run-for-the-hills-now' one. I didn’t want to make people feel guilty about their password choices or terrify them with catastrophic scenarios. That’s counterproductive. The goal of Fancy Bear Goes Phishing is to avoid both extremes. If anything, it’s funnier than it is scary. It offers an engaging, entertaining perspective on hacking culture. Hackers are often bored, anxious, insecure teenage boys or underemployed men with the capacity to destabilize our digital lives. I try to provide a compelling account of this human side of hacking while also explaining the more technical elements to people with no background in computer science.

Does AI have anything to offer to either side of the cybercrime story?

Generative AI is AI that can create new things from existing data. This opens up new potential attacks, such as the hacking of biometric authentication systems: with deepfakes, voice cloning, and fingerprint generation, hackers can impersonate victims using these new AI technologies. More interesting are 'hallucination' attacks. Chatbots often 'hallucinate', that is, make things up: events, people, books, computer programs. In a hallucination attack, the hacker creates real programs that correspond to the hallucinated ones but secretly places malware in them. This is extremely dangerous because users willingly download the poisoned program directly from the hacker’s website. People using AI engines for coding should therefore make sure that an AI-recommended module hasn’t been maliciously created by a hacker.
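That last defensive habit can be sketched in a few lines. This is only a minimal illustration of the idea, not a tool from the book: the function name and the registry snapshot are my own invented examples. The point is simply that any dependency name an AI assistant suggests which is absent from a trusted registry snapshot deserves human review, because a name the chatbot hallucinated is exactly what an attacker could register with malware inside.

```python
def vet_suggested_packages(suggested, registry_snapshot):
    """Split AI-suggested dependency names into known and unknown.

    suggested:         iterable of package names an AI assistant proposed
    registry_snapshot: set of lowercase names known to exist in a trusted
                       registry (e.g. a vetted mirror of a package index)

    A name missing from the snapshot may be a hallucination, and a
    hallucinated name is precisely what a hacker can register with
    malware, so it should be checked by hand before installing.
    """
    known = [name for name in suggested if name.lower() in registry_snapshot]
    unknown = [name for name in suggested if name.lower() not in registry_snapshot]
    return known, unknown


# Toy usage: 'requests' is a real package; 'fastjson-pro' stands in for
# a plausible-sounding name a chatbot might invent.
snapshot = {"requests", "numpy", "flask"}
known, unknown = vet_suggested_packages(["requests", "fastjson-pro"], snapshot)
print(known)    # safe to consider installing
print(unknown)  # verify by hand before trusting
```

In practice the snapshot would come from the package index itself; the design choice here is just to fail toward human review rather than silent installation.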

What’s next?

I’m writing a history of Artificial Intelligence and the philosophical critique of AI. The premise of the book is that much of our current confusion with AI, such as not understanding black box algorithms or how to hold them accountable, was tackled, and for the most part solved, by 20th century philosophers. These philosophers were perplexed by how we can understand each other; after all, as far as we’re concerned, our skulls are black boxes to one another. Privacy is the hallmark of mental life, yet we understand each other constantly, grasping one another’s intentions, beliefs, and feelings, and we hold each other accountable for our actions daily. The goal of the book is to show how the principles philosophers developed to explain natural intelligence can also be used to explain artificial intelligence.

What’s exciting you at the moment?

Two years ago, I teamed up with the Yale Computer Science Department and won an NSF grant on Designing Accountable Software Systems. The following example illustrates the focus of our research: Imagine there’s a car accident, and a child is hit by a self-driving car using AI. How do we respond? Whom do we hold responsible? The sad truth is that we don’t know, because we don’t really understand how self-driving cars work. Using philosophical techniques developed in the 20th century and automated reasoning techniques from the 21st century, our team has developed tools to determine the intentions of a self-driving car. No one can immediately tell what the car intends just by looking at the code, but our tool can figure them out by asking the car questions. We can ask the car how it would respond if the child were, say, on the sidewalk. Would it climb the sidewalk? If so, that would be evidence that the car wasn’t suffering from a sensor malfunction, but intended to hit the child. What excites me about the project is that we are using philosophical tests for the existence of intentions, beliefs, and desires and then implementing them computationally using techniques in automated reasoning. Going from a high-level philosophical theory to an app is exciting. 
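The question-asking procedure described above can be illustrated with a toy sketch. To be clear, this is my illustration of the general idea of counterfactual probing, not the Yale team's actual tool; the policy, the scenario encoding, and all names here are invented. The driving software is treated as a black box, and we learn something about its 'intentions' by querying it on counterfactual variants of the same scene, such as moving the child to the sidewalk.

```python
def probe(policy, base_scene, counterfactuals):
    """Query a black-box driving policy on a scene and on counterfactual
    variants of it, returning the decision made in each case."""
    results = {"actual": policy(base_scene)}
    for label, changes in counterfactuals.items():
        results[label] = policy({**base_scene, **changes})
    return results


# A deliberately simple stand-in policy: it steers toward wherever the
# child is, a pattern the probe should expose as intent rather than a
# sensor malfunction.
def toy_policy(scene):
    if scene["child_location"] == "sidewalk":
        return "mount_sidewalk"
    return "stay_in_lane"


scene = {"child_location": "road", "speed_mph": 30}
answers = probe(toy_policy, scene,
                {"child on sidewalk": {"child_location": "sidewalk"}})
print(answers)
# If the car mounts the sidewalk when the child moves there, that
# pattern across counterfactuals is evidence of intent, not malfunction.
```

The design point is that intent is read off from a *pattern* of answers across counterfactual scenes, never from any single decision or from inspecting the code directly.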

Image © 2018 Guy Jordan


