Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale Law School, where he is Director of the Center for Law and Philosophy and of the Yale CyberSecurity Lab. He is the author of Legality and the co-author, with Oona Hathaway, of The Internationalists: How a Radical Plan to Outlaw War Remade the World. His latest book is Fancy Bear Goes Phishing.
Why information technology?
Two reasons. First, I like playing with computers. They’re fun. Second, information technology is a vital interest. As the venture capitalist Marc Andreessen put it in his 2011 essay, 'software is eating the world.' It has transformed industries, businesses, and society. Of the ten richest people in the world, with a combined wealth of $250 billion, six are tech billionaires, and two of them, Bill Gates and Larry Ellison, founded software companies: Microsoft and Oracle. That means a couple of things: first, vast global power is now in the hands of those who understand and control the technologies, and second, we have become so interconnected digitally that if one system fails, it can compromise the rest. If we can’t understand how the internet works, how our information is stored, transferred, and manipulated, we can’t keep our information safe.
Why this book?
Most cybersecurity books fall into two categories: a joyless, 'eat-your-vegetables' style or a breathless, 'run-for-the-hills-now' one. I didn’t want to make people feel guilty about their password choices or terrify them with catastrophic scenarios. That’s counterproductive. The goal of Fancy Bear Goes Phishing is to avoid both extremes. If anything, it’s funnier than it is scary. It offers an engaging, entertaining perspective on hacking culture. Hackers are often bored, anxious, insecure teenage boys or underemployed men with the capacity to destabilize our digital lives. I try to provide a compelling account of this human side of hacking while also explaining the more technical elements to people with no background in computer science.
Does AI have anything to offer to either side of the cybercrime story?
Generative AI is AI that can create new things from existing data. It opens up new potential attacks, such as the hacking of biometric authentication systems: with deepfakes, voice cloning, and fingerprint generation, hackers can impersonate their victims. More interesting are 'hallucination' attacks. Chatbots often 'hallucinate', making up events, people, books, and computer programs. In a hallucination attack, the hacker creates a program that corresponds to a hallucinated one but secretly plants malware in it. This is extremely dangerous because users willingly download the poisoned program directly from the hacker’s website. People using AI engines for coding, therefore, should make sure that an AI-recommended module hasn’t been maliciously created by a hacker.
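To make that last piece of advice concrete, here is a minimal sketch, in Python, of one way a developer might vet an AI-recommended module before installing it: query the public PyPI JSON API and look at basic provenance signals, such as whether the package exists at all and when it first appeared. The package name exampletool is hypothetical and the checks are only illustrative; this is not a tool from the book.

```python
# Minimal sketch: vet an AI-suggested Python package against PyPI before installing.
# The package name "exampletool" is hypothetical.
import json
import urllib.error
import urllib.request


def vet_package(name: str) -> None:
    """Print basic provenance signals for a package name on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        # No such package: the chatbot may have hallucinated the name, or a hacker
        # has not registered it yet. Either way, do not install it blindly.
        print(f"{name}: not found on PyPI, treat the suggestion as suspect")
        return

    releases = data.get("releases", {})
    upload_times = [f["upload_time"] for files in releases.values() for f in files]
    first_seen = min(upload_times) if upload_times else "unknown"
    # A brand-new package whose name matches a chatbot's suggestion is a red flag:
    # hallucination attacks depend on hackers registering exactly such names.
    print(f"{name}: {len(releases)} release(s), first upload {first_seen}")
    print(f"summary: {data['info'].get('summary')!r}")


if __name__ == "__main__":
    vet_package("exampletool")  # hypothetical AI-recommended module name
```

A package that does not exist, or that appeared only days ago with a single release, is exactly the kind of name a hallucination attacker hopes a developer will install without checking.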
What’s next?
I’m writing a history of Artificial Intelligence and the philosophical critique of AI. The premise of the book is that much of our current confusion about AI—not understanding black-box algorithms or how to hold them accountable—has been tackled, and for the most part solved, by 20th-century philosophers. These philosophers were perplexed by how we can understand each other; after all, as far as we’re concerned, our skulls are black boxes to one another. Privacy is the hallmark of mental life, yet we understand each other constantly—our intentions, beliefs, feelings—and we hold each other accountable for our actions daily. The goal of the book is to show how principles that philosophers developed to explain natural intelligence can be used to explain artificial intelligence.
What’s exciting you at the moment?
Two years ago, I teamed up with the Yale Computer Science Department and won an NSF grant on Designing Accountable Software Systems. The following example illustrates the focus of our research: Imagine there’s a car accident, and a child is hit by a self-driving car using AI. How do we respond? Whom do we hold responsible? The sad truth is that we don’t know, because we don’t really understand how self-driving cars work. Using philosophical techniques developed in the 20th century and automated reasoning techniques from the 21st, our team has developed tools to determine the intentions of a self-driving car. No one can tell what the car intends just by looking at the code, but our tools can work its intentions out by asking the car questions. We can ask the car how it would respond if the child were, say, on the sidewalk. Would it mount the sidewalk? If so, that would be evidence that the car wasn’t suffering from a sensor malfunction but intended to hit the child. What excites me about the project is that we take philosophical tests for the existence of intentions, beliefs, and desires and implement them computationally using techniques from automated reasoning. Going from a high-level philosophical theory to an app is exciting.
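One way to picture that questioning, for readers who want something concrete, is the toy sketch below: treat the driving policy as a black box, vary where the child is standing, and check whether the planned behaviour tracks the child. Everything in it (the Scene type, the action strings, the deliberately bad bad_policy stub) is hypothetical and assumed for illustration; it is not the project’s actual tool.

```python
# Toy sketch only: a counterfactual probe of a black-box driving policy.
# All names here (Scene, the action strings, bad_policy) are hypothetical.
from dataclasses import dataclass


@dataclass
class Scene:
    child_position: str  # "road" or "sidewalk"


def probe_intention(policy) -> str:
    """Ask the policy counterfactual questions instead of reading its code."""
    on_road = policy(Scene(child_position="road"))
    on_sidewalk = policy(Scene(child_position="sidewalk"))
    if on_road == "hit" and on_sidewalk == "mount_sidewalk":
        # The planned behaviour follows the child wherever the child goes:
        # evidence of an intention to hit, not a mere sensor malfunction.
        return "intention-like: the planned behaviour tracks the child's position"
    if on_road == "hit" and on_sidewalk == "stay_on_road":
        return "more consistent with a sensor malfunction or a planning bug"
    return "no collision planned in either scenario"


def bad_policy(scene: Scene) -> str:
    """A deliberately malicious stand-in policy, used only to exercise the probe."""
    return "hit" if scene.child_position == "road" else "mount_sidewalk"


if __name__ == "__main__":
    print(probe_intention(bad_policy))
```

The point of the design is that the probe never reads the policy’s code; it only asks the policy what it would do in counterfactual scenes, which is the black-box stance described above.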
Image © 2018 Guy Jordan
Interview by Brian Clegg