
Securing a World of Physically Capable Computers with Bruce Schneier - overview

Computer security is no longer about data; it's about life and property. This change makes an enormous difference, and will shake up our industry in many ways. First, data authentication and integrity will become more important than confidentiality. And second, our largely regulation-free Internet will become a thing of the past.

Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation. Given this future, it's vital that we look back at what we've learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future.

Key topics on Securing a World of Physically Capable Computers with Bruce Schneier

  • Computers affect us in a direct, physical manner. In the world we're creating, everything is a computer.
  • Computers are hard to secure, and this can be summed up in Bruce Schneier's seven lessons on the topic.
  • Security is failing as everything becomes interconnected.
  • The significance of the tech space in our daily lives makes regulating it necessary. Today the risks are too great, and the stakes are too high.
  • Tech can subvert law, but law can also subvert tech. Law and tech have to work together.
  • Right now, most large enterprises are most profitable by being as insecure as they can get away with.
  • The cost of security should be cheaper than the cost of insecurity so that companies have incentives to become more secure.


Introduction - Securing a World of Physically Capable Computers with Bruce Schneier
(The transcript of the session)

Ev: 00:00:02.182 Hey, good morning, or good afternoon, everyone. I am Ev Kontsevoy, CEO of a company called Teleport. We're sponsoring today's webinar, and we are the easiest and most secure way to access all of your infrastructure. You can go and download Teleport at goteleport.com. It's an open source product. But this conversation is not really about that. This conversation is about computer security, and we have Bruce Schneier joining us today. It doesn't feel like Bruce even needs an introduction. He's a well-known, internationally recognized cryptographer, security researcher, and bestselling author of several books, and he blogs about computer security on his website as well. Bruce asked me, "Why did you decide to invite me here and talk about security?" And the answer is that personally, when I started my career, I used to think that security was a kind of boring subject, computer security in particular. I felt that it's an industry where people sell fear, that they try to convince you to go and buy things you truly don't need. But as I was growing older and wiser, I realized that so many people need to get things done. We need to launch new products, enable new services, make impossible things possible. And to do all of these things, engineers who don't themselves do security day-to-day need a lot of help from the security industry. And the security industry, in turn, turned out to be filled with very interesting, very unusual people. And one of the most visible ones is Bruce. And that is really why we invited him: simply to remind everyone that security is not boring. Security is actually quite interesting, exciting, and a very dynamically evolving field. So, Bruce, perhaps the best way to start our conversation is to ask you a question about the title of your talk. You say that you want to talk about securing the world of physically capable computers. Physically capable sounds a little bit threatening, doesn't it?

Bruce: 00:02:13.478 I mean, it is a little threatening, right? But that's the world we live in. Computers affect us in a direct, physical manner. They turn our heat on and off. They are increasingly driving our cars. They are controlling the insulin that goes into our bodies. So, yeah, our computers are becoming physical. They're no longer screens. They're in our world, in our lives, and they're doing things. And that is what I want to talk about. We're creating a world where everything is a computer. This is a small portable computer that happens to make phone calls. And very similarly, your refrigerator is a computer that keeps things cold. Your microwave oven is a computer that makes things hot. An ATM is a computer with money inside. And your car is a computer with four wheels and an engine. Actually, that's not true. Your car is a hundred-plus-computer distributed system with four wheels and an engine. I mean, this is more than the Internet of Things. It's more than the internet. It is this immersive, interconnected world of data and control and communications. And it means a couple of things for internet security, for us, for me. It means that internet security becomes everything security. And it means all the lessons from my world of internet security become broadly applicable to everything. So I want to start by giving — I guess this is my very quick "computers are still hard to secure" in six easy lessons, right?

Why computers are hard to secure

Lessons 1 and 2

Bruce: 00:04:02.466 Lesson one. Most software is poorly written and insecure, and this is basically economic. We don't want to pay for quality software. I've heard this joke about restaurants: good, fast, cheap, pick any two. It's basically true for software as well. Right? And we in the industry have chosen fast and cheap over good. The economics has spoken. Most software is poor. Now, poor software is full of bugs. Some of those bugs are security vulnerabilities. Some of those are exploitable, and some of those are exploited, which means the software we use, from the operating systems to our applications, is full of exploitable vulnerabilities. And there's an economic reason why. Second lesson. The internet was never designed with security in mind. Now, that sounds absolutely crazy when I say it today. But if you go back to the early days of the internet, the '70s and '80s, two things were true. One, it wasn't used for anything important ever. And two, there were organizational constraints that limited who had access to it in the first place. And because of those things, there was a conscious decision to ignore security in the internet and leave it to the endpoints. And that was a design decision. And then, into the '90s, we started connecting computers to the internet that were never designed to be connected to a network in the first place. Those were the PCs. And we are still living with the effects of those decisions. The domain name system, internet routing, packet security, email: all of those protocols are fundamentally insecure because we never wrote them to be secure.

Lessons 3 and 4

Bruce: 00:06:01.430 Third lesson. The extensibility of computerized systems means that everything can be used against us. Now, extensibility is a property of computers that isn't true for other things. Basically, it means you can't constrain the functionality of a computerized device, because it runs software. Okay, so when I was a kid, I had a phone at home. Big black thing attached to the wall with a cord. Great object, but it could never be anything other than a phone. Right? Remember, this is a computer that happens to make phone calls. It can do whatever you want. Right? And if you remember Apple's first iPhone slogan, "There's an app for that." You can download new functionality onto this device all the time, in a way you could never do with an old analog phone. Now, for security, that means this is a constantly evolving system. It's hard to secure. The designers can't anticipate every use condition, right? And any new feature can add more insecurities. Right? I mean, putting a virus on this phone gives it new features. Not features you paid for. Sorry, I just turned the camera on. Not features you want. But because this is extensible, right, it can be attacked. The fourth lesson. The complexity of computerized systems means that attack is easier than defense. This is a complicated concept, but basically, complexity is the worst enemy of security. Complex systems are harder to secure than simple systems, for a bunch of reasons. You could think of it this way: the attacker just has to find one avenue of attack, and the defender needs to defend the entire attack surface. The attacker can choose the time, place, and method of attack. The defender has to defend everything.

Bruce: 00:08:02.784 Right? Complex systems have larger attack surfaces. They're harder to defend. And it means attack is easier than defense. It also means security testing is hard. Internet is easily the most complex machine mankind has ever built.

Ev: 00:08:18.722 Bruce?

Bruce: 00:08:19.834 Yes.

Ev: 00:08:20.298 About that. I just wanted to mention that I hope at some point in the conversation we will also be able to talk about what people, like the folks listening to this show, can do about these issues. And to encourage that, I wanted to remind everyone that they can ask questions in the Q&A.

Lessons 5 and 6

Bruce: 00:08:44.641 And I will see the questions, and I will answer them. That's right. Yeah, I mean, well, we'll talk about that. There's not a lot we can do, because a lot of these systems are not in our hands. I mean, Google runs most of it, and I can either use them or not. We're stuck in this world of cloud computing. What we can do as individuals is narrowing, which is interesting to talk about, and I will talk about it later. All right. So I want to talk about vulnerabilities of interconnection. This is my lesson five. The more we connect things to each other, the more vulnerabilities in one thing affect other things. Right? And you see this in botnets, right? Vulnerabilities in DVRs and CCTV cameras allow attackers to launch massive DDoS attacks against something else. Or all of our supply chain vulnerabilities: the vulnerability in SolarWinds allowed hackers to break into networks and do other things. A story from 2018: there was a Vegas casino, I think it was the Sands, we're not sure, that was attacked through a vulnerability in its internet-connected fish tank. We'll talk about this more. We're going to talk about supply chain attacks. But these vulnerabilities are really hard to fix, because the interconnection is not always obvious, because it could be insecure interactions of two individually secure systems, all sorts of reasons. My lesson six is that attacks always get better, easier, and faster. Some of it is Moore's law. Computers get faster, but attackers also get smarter and adapt, right? These are adaptive, malicious adversaries. And expertise flows downhill. What today is a top secret NSA program, tomorrow is someone's PhD thesis, and the next day it's a criminal hacker tool. And we see that progression again and again. Now, none of these lessons are new. And up to now, this has all been a manageable problem. But for a confluence of reasons, we're reaching a crisis.

Bruce: 00:10:49.653 And this is the physically capable computers. Automation, autonomy, and physical agency bring new dangers. So, traditionally in computer security, we're concerned with confidentiality. Actually, let me step back. There's something called the CIA triad: confidentiality, integrity, and availability. Those are the three properties we are supposed to provide. Normally, security attacks target confidentiality, right? Privacy breaches, data theft, data misuse. This is Cambridge Analytica, OPM, Target, Equifax, all of those hacks that make the news, like SolarWinds. But the threats come in many forms. There are availability threats. Ransomware is an availability threat, right? You can't get at your data. DDoS is an availability attack. Your network no longer works. There are integrity attacks. If I hack a bank and manipulate some bank balances, I'm just changing data in a spreadsheet. That's data integrity, but I steal money that way. But when you get to physically capable computers, the integrity and availability threats are much worse than the confidentiality threats. Their effects are greater. There are real risks to life and property. This is the difference. So I'm concerned if someone hacks the hospital and steals my confidential medical records, but I am much more concerned that they change my blood type. Right? That's a data integrity attack, and that can kill me. And I'm concerned if someone hacks my car and eavesdrops on my conversations through the Bluetooth microphone, but I am much more concerned that they remotely disable the brakes. That is a data availability attack, and that can kill me. Cars, medical devices, drones, weapon systems, thermostats, power plants, smart cities, anything we are worried about — actually, anything Ukraine is worried about today.

Lesson 7

Bruce: 00:12:51.998 DDoS attacks against critical systems. We are concerned about ransomware on your car, hopefully not at speed. Right? There's this fundamental difference between "your spreadsheet crashes and you lose your data" and "your pacemaker crashes and you lose your life." And it could be the exact same CPU and operating system and application software, the same vulnerability, the same attack tool, the same attack. The only thing different is what the computer is attached to, and what it can do. And these trends become even more dangerous as our systems become more critical. So I have a seventh lesson: computers fail differently. They all work perfectly until one day when none of them do. Right? So imagine cars. Cars have parts. Parts have [inaudible] failures. And there is this steady industry of auto repair shops to deal with the cars that break, a little bit every day, in our society. Right? Computers don't fail that way. Computers fail all at once. So there's the story of Onity. Onity makes keyless entry systems for hotel rooms, right? You have a key card, you wave it at a reader, and the door opens up. You've probably all seen them. A bunch of years ago, someone found a vulnerability in Onity locks that enabled someone to break into any hotel room. And because of the way the system works, the way you updated the lock was you went in manually and changed the firmware. There was no remote update. Now, think about a hotel. They have a system for dealing with broken door locks. I'm going to make it up: there's a locksmith on call, he'll drive over, he'll rehang the door, or fix the bolt, or replace the lock. He'll do something.

Bruce: 00:14:49.792 They have no system for "all 500 of our hotel room doors need to be updated." And in most cases, it wasn't done, right? Those lock vulnerabilities still exist. And that's because the way locks fail is not the way computerized locks fail, and this industry couldn't deal with the computer way. Right? So we're worried about crashing all the cars, shutting down all the power plants, all of that. And at the same time as all of this, some of our longstanding security systems are failing. I'll talk about three of them. The first one is patching. All right, so patching is failing. Now, our computers and phones are as secure as they are for practically two reasons. One is that there are engineers at Microsoft, Apple, and Google, and they do their best to design these things as securely as possible in the first place. And the second is that those teams are able to quickly and effectively push security patches down to these devices when someone discovers a vulnerability. That's the way it works, right? We can't secure it out of the box, so we are agile, and we patch. This doesn't work for low-cost embedded systems like DVRs and home routers, let alone toys or appliances. They're built at a much lower profit margin. They're often designed offshore, by third parties. They don't have full-time security teams associated with them. There's no one around to write those patches. Even worse, many devices are like the Onity locks: they have no way to patch their software or firmware. Right now, the way you patch your home router is you throw it away and buy a new one. That's the patch mechanism. Now, that's wasteful and expensive, but it actually works. And we also get security from the fact that we replace our computers and phones every few years. That's also not true for embedded systems. You buy a video player, you're going to replace it in, what, five to ten years? A refrigerator, 25 years?

Bruce: 00:16:56.287 I bought a computerized home thermostat a few years ago. I expect to replace it approximately never. And think of it this way. You buy a car today. Let's make this up: the software is, say, a couple of years old. You buy it, you drive it for ten years, you sell it, somebody else buys it, they drive it for another ten years, they sell it. At this point, it's probably put on a boat and shipped, in my case, to Latin America. I don't know where you are. Someone else buys it, and they drive it another 10 to 20 years. All right. You try to find a computer from 1980, try to boot it up, try to make it run, try to make it secure. We have no idea how to maintain 40-year-old computer software. There is a reason Microsoft and Apple deprecate their operating systems every few generations. Right? And the problem is even worse for low-cost consumer devices. We have no answer to this. The second thing is authentication. Authentication is starting to fail also. And honestly, it only ever just barely worked. Right? Human-memorizable passwords no longer work in a lot of situations. Two-factor isn't suitable for some situations. Backup authentication systems are terrible, and the amount of authenticating is about to explode exponentially. So right now, when you authenticate—

Ev: 00:18:27.347 Bruce, why would the amount of authentication explode exponentially? Are you referring to the fact that machines also need to authenticate to each other, because the human population does not—

Bruce: 00:18:36.761 Don't tell them the answer before I get there. That's not fair.

Ev: 00:18:41.182 I just guessed. I'm sorry if it was the right answer.

Bruce: 00:18:42.949 It's okay. It's okay. All right, so when you authenticate, you do one of two things. I'm going to demonstrate authenticating. All right? I log onto my phone and I check my email. Okay, so that's me authenticating to an object, using the face recognition system, and then authenticating to a remote service. Now, the Internet of Things is all about things talking to each other behind our back. Things authenticating to other things. So you got it right. And if you have 100 IoT devices that need to talk to each other, that's 10,000 authentications. You have 1,000 devices? That's a million authentications. Or imagine some driverless car system. You have to authenticate in real time to thousands of other cars and traffic alerts and road signs and police messages and whatever. We have no idea how to authenticate thing-to-thing at that scale. Now, we do have a system. When I get into my car, this phone authenticates to the car, right, and turns on the cabin microphone. That's Bluetooth. But if you remember the way Bluetooth works, I set it up manually. I'll do that 1, 10, 20 times. I'm not doing it 10,000 times. I'm not doing it a million times. We don't have good authentication systems for the next decade.
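
The arithmetic behind those numbers is ordered pairs: n devices that all need to talk to one another require on the order of n² authentication relationships. A quick sketch to make that concrete (purely illustrative):

```python
# Sketch of the quadratic growth Bruce describes: with n devices, every
# device authenticating to every other device means n * (n - 1) ordered
# pairs, roughly n^2 authentication relationships to set up and manage.

def pairwise_authentications(n_devices: int) -> int:
    return n_devices * (n_devices - 1)

for n in (100, 1_000, 10_000):
    print(f"{n:>6} devices -> {pairwise_authentications(n):>13,} authentications")

# 100 devices -> 9,900 (Bruce's ~10,000); 1,000 -> 999,000 (~1 million);
# 10,000 -> 99,990,000. Manual, Bluetooth-style pairing cannot keep up.
```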

The zero trust concept

Ev: 00:20:04.983 So this is actually related to the question that Ray asked earlier about the zero trust approach to systems design. And I would add a little dissatisfaction of my own, because zero trust, I think, is a very healthy concept. It basically means that the resource that something or someone needs to authenticate into shouldn't trust the network. So local client, remote client, it doesn't matter. Which means that every system must be designed with the assumption that it's accessible by everyone in the world, that it has a public IP. But then at the same time, we have network solution vendors selling "zero trust networking." That makes little sense. So what are your thoughts on the zero trust approach, and whether there may be better alternatives?

Bruce: 00:20:53.773 Yeah. I mean, zero trust right now, I think, is a buzzword that's losing its meaning. And it has two meanings, right? One is: assume everything is open, which makes a whole lot of sense. Right? And the other is continual trust, where the notion that I can log into my bank account and then do whatever I want feels crazy. Right? You log into your bank account. You start doing normal things. If you do something abnormal, right, like sending my entire life savings to some random address in the Philippines, maybe I should get a phone call. Or you see Facebook do this. From your normal computer, they do one type of authentication. You log in from a strange computer, and they have additional authentication steps. Right? So this notion of continual trust and variable trust makes a lot of sense. And the notion of assuming everything is a public resource also makes a lot of sense. I mean, it doesn't solve any of the problems I talk about, but it is a good model.
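
To make the continual-trust idea concrete, here is a minimal sketch of risk-based session scoring. Everything in it (the signals, weights, and thresholds) is invented for illustration; real systems use far richer behavioral models:

```python
# Hypothetical sketch of continual (risk-based) trust: rather than trusting
# a session forever after login, every action is scored against the user's
# normal behavior, and abnormal actions trigger step-up checks or a block.

def risk_score(action: dict, profile: dict) -> float:
    score = 0.0
    if action["device"] not in profile["known_devices"]:
        score += 0.4  # strange computer, like Facebook's extra checks
    if action["country"] != profile["home_country"]:
        score += 0.3
    if action.get("amount", 0) > 10 * profile["typical_amount"]:
        score += 0.5  # e.g., wiring your life savings to the Philippines
    return score

def handle(action: dict, profile: dict) -> str:
    score = risk_score(action, profile)
    if score >= 0.7:
        return "block and phone the customer"
    if score >= 0.4:
        return "require step-up authentication"
    return "allow"

profile = {"known_devices": {"laptop-1"}, "home_country": "US", "typical_amount": 200}
print(handle({"device": "laptop-1", "country": "US", "amount": 50}, profile))    # allow
print(handle({"device": "cafe-pc", "country": "PH", "amount": 80_000}, profile)) # block
```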

Ev: 00:21:50.983 So basically it's one of the components, but it's not the entire solution. And at the same time we don't have anything better at the moment.

The difficulty of securing supply chains

Bruce: 00:21:56.901 In general, you will find that there is no one answer. There are always lots of little answers. The third thing that is failing is supply chain, which is in the news all the time. I think it is an insurmountably hard problem. A few years ago, it was about Huawei: should we trust a Chinese networking company? Or Kaspersky: should we trust a Russian antivirus company? These days it is more about vulnerabilities in the systems in our supply chain attacking our entire network, SolarWinds being the example. This is actually a really important question: can you trust technology from a country whose government you don't trust? Right? That's a good question. But it is just the beginning of a much bigger problem, right? This is a US product, but it's not made in the US. Its chips aren't made in the US. Its software is written all over the world. Its programmers carry, what, 100 different passports? So this supply chain is actually a really robustly insecure system. We are forced to trust everything. We have to trust the development process; there have been lots of hacked software libraries. We have to trust the distribution mechanism; we have seen fake apps in the Google Play Store and the Apple App Store. And again, the update mechanism: you remember NotPetya, distributed through a fake update of a popular Ukrainian accounting package. SolarWinds was infected through a hacked update. We have to trust the shipping mechanism. There's a great document from Snowden of the NSA installing a backdoor in Cisco equipment being shipped to the Syrian telephone company. And I remember a paper from maybe a decade ago: you can hack smartphones through malicious replacement screens.

Bruce: 00:23:59.989 These are very complex risks, and it's hard to know what to worry about, what's true. We have no choice but to trust everyone, and we can't trust anyone, and we have no better answers. So I think this is a perfect storm. Right? Security is failing just as everything is becoming interconnected. And a lot of what I talk about is regulation. We've largely been okay with an unregulated tech space because it didn't matter. That is no longer sustainable. So I think this is a policy problem. Right? Getting the policy right here is critical, and law and tech have to work together. To me, actually, this is the most important lesson of Edward Snowden. We always knew that tech could subvert law. He showed us that law could subvert technology as well. It goes both ways. Now, I wrote a book on this. This is my latest book. It talks about a lot of the policy here: standards, regulations, liabilities, treaties, agreements. I want to talk about three things quickly. First, a policy principle: defense must dominate. And this is important. I think of it as one world, one network, one answer. Back in the Cold War, we would defend our stuff and attack their stuff. Today, everyone uses the same stuff. We all use Microsoft Windows, TCP/IP, and the Google Cloud. And because we all use it, either everyone gets security, or no one gets security. Either everyone gets to spy, or no one gets to spy. And as long as this device is in the pocket of every single, oh, I don't know, elected official and CEO and judge and police officer and nuclear power plant operator, we have to design it for security and not for surveillance.

Raising the cost of insecurity to incent security

Bruce: 00:26:02.199 And that's critical. Second, I have a tech principle: we need to build in resilience. Right? Assume insecurity, and then design systems that work anyway. And here's where things like zero trust fit in. Right? Defense in depth, compartmentalization, avoiding single points of failure, fail-safe, fail-secure, removing functionality, deleting data, building systems that monitor each other. Okay? So I think there is a research question here that rivals the internet itself. If you think back to the early days of the internet, the internet was created to answer a basic question: can we build a reliable network out of unreliable parts in an unreliable world? I have a similar question: can we build a secure network out of insecure parts in an insecure world? The answer isn't obviously yes, but it's not obviously no either. And we need research here. And lastly, I have an economic principle: we need to build in security incentives. Right? So, I mean, you all know the SolarWinds story. The backstory is that the company was owned by a private equity firm, Thoma Bravo. And the way Thoma Bravo works, like most private equity firms, is they take companies that have established, sticky user bases, and basically they suck as much money out of them as possible. They make the product as lousy as they can get away with without losing customers. This is what happened with SolarWinds. This is why SolarWinds was hacked. Their security was terrible because they had no money for it. And this kind of story is not an anomaly. Right? The economics matter a lot here. It's why banking websites are insecure. It's why your phone is vulnerable to SIM swapping. It's why Facebook only does a cursory job at removing misinformation. It's why IoT devices are insecure. Right? We need to raise the cost of insecurity, and thereby incent security. And that's what spurs innovation.

Ev: 00:28:21.970 Bruce, on this point, I noticed that you would present the problem and then suggest the solution. And in your solution section, you would say, "[inaudible] be designed." You used the word "designed" multiple times. "Designed" is very different from "regulate," because in the beginning you promised to talk a little bit about additional government regulations. We also had that question from Thomas earlier about car recalls. So shall we have someone — because the economic incentives are not there. At the very beginning, you said people don't want to pay for security, period. It almost doesn't matter who owns the supplier. Is it Thoma Bravo or is it —

The need for regulation

Bruce: 00:29:04.337 But that's not true at all. Basically, if the CEO of the company goes to jail when the bad thing happens, the bad thing doesn't happen. And I'll talk about it. This is why the food you buy doesn't poison you. It's not because the food companies like you; it's because of regulation. I mean, markets don't solve this. Markets are short-term, and they're profit-motivated at the expense of society. Markets cannot solve collective action problems. Government is the entity we use to solve this. Now, of course there are problems. It's hard for governments to be proactive; there's regulatory capture. I mean, I could spend an hour on security versus safety and the difference between those environments, and on the notion of fast-moving tech. The devil is in the details, but the alternative doesn't work anymore. And this is important: governments get involved regardless. The risks are too great, and the stakes are too high. Governments are already involved in physical systems: cars, planes, consumer goods. The physical threats of the IoT will spur them into action. And your choice is no longer government involvement or no government involvement. Our choice is now smarter government involvement or stupider government involvement. And this is how we need to start thinking. And if you do this right, and I think this gets to your question, regulation incents private industry. Right? The thing you always hear is that regulation stifles innovation. That is complete and total bullshit. It is what is told to you by the companies that don't want to be regulated. You hear it about everything from restaurant sanitation codes to automobile safety regulations, and there is no evidence. The problem is economic. I have a lot of technology that nobody will use because it is not in their financial best interest to use it. Right?

Bruce: 00:31:05.553 The bank would rather have poor security than spend the money, right? SolarWinds, they're better off having lousy security, pushing the risk onto their users, and taking the profit. Right? If you do regulation right, you regulate outcomes. It spurs innovation in achieving those outcomes.

Ev: 00:31:27.817 So as an industry, what shall we do to make sure that the regulations are actually smart?

Bruce: 00:31:34.255 Well, we have to get involved.

Ev: 00:31:34.784 Because we all see this “accept cookies” in the browser on every single website.

Bruce: 00:31:39.547 Yeah, but you don't do that. Good regulation regulates outcomes. So look at what's going on. GDPR in Europe: strong privacy and security requirements, good penalties; it's doing good. The California data protection law. We're seeing movement in New York, in Massachusetts, and I think the government agencies are getting involved here. And what's interesting, I think, is the international considerations. Right? Because software is, right, one product sold everywhere. So, right, the car I buy in the US is not the same car I buy in Mexico. Environmental regulations are different, and the manufacturers tune their engines to the market. But software doesn't work that way. Right? The Facebook I get in the US is the same Facebook you get everywhere. And regulations propagate. So California passed an IoT security law. As of, I think, the beginning of last year, you couldn't sell a device in California with a default password. A really basic security measure. Now, nobody who makes an interconnected anything is going to have two versions of the thing, one for California and one for everybody else. They're going to fix the problem, and we all benefit. And this is true with GDPR. I mean, you know that many companies implement GDPR worldwide because that is easier than figuring out who is a European. So smart regulations in a few large markets improve security everywhere. And I don't see any alternative. There is no industry in the past, what, 200 years that has improved safety or security without being forced to by the government. Cars, planes, pharmaceuticals, food production, medical devices, restaurants, consumer goods, workplace conditions, and most recently, financial products.
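
For concreteness, the usual way a manufacturer satisfies a "no default passwords" rule is to provision each unit with a unique credential, or force the user to set one on first boot. A hypothetical sketch; the function and field names are mine, not from the California law or any product:

```python
# Hypothetical sketch of complying with a "no default passwords" rule:
# every unit gets a unique, randomly generated credential at provisioning
# time (or is forced to set one on first boot), instead of a shared
# factory default like "admin"/"admin".

import secrets

def provision_device(serial_number: str) -> dict:
    return {
        "serial": serial_number,
        "initial_password": secrets.token_urlsafe(12),  # unique per device
        "must_change_on_first_login": True,
    }

print(provision_device("CAM-00017"))
```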

Bruce: 00:33:39.682 The government is how we do this. And to your question, how do we get smarter regulation? We technologists need to get involved in policy. We have no choice. As internet security becomes everything security, internet security technology becomes more important to overall security policy. And all of the security policy issues have strong tech components. And we will never get the policy right if policymakers get the tech wrong. Whether it's the going dark debate, the vulnerability equities debate, voting machines, driverless cars, or regulating social media.

Bruce: 00:34:25.542 You need technologists in the room, because otherwise you get a lot of bad decisions. And this is bigger than security. I mean, I'm trying to build a career path for what we call public-interest technologists. If we don't do that, bad policy happens to us. So that's kind of my talk. I'm happy to poke at the questions. What are people asking? You've been looking more than I have.

Q&A

Ev: 00:34:57.645 So the question at the top basically asks: what are the things that ordinary citizens can do?

Bruce: 00:35:05.947 So this is hard, right?

Ev: 00:35:07.523 For example, your personal computing hygiene. If you travel around the world, if you go to Russia, if you go to China, when you buy computers, when you buy software, how do you deal with your email? So what's your advice for just —

Bruce: 00:35:22.641 I mean, basically, we do the best we can. I don't know if you saw that the US is now advising people going to the Chinese Olympics to leave their devices at home.

Ev: 00:35:33.791 Well, it's actually been standard practice across many, many, many companies for years.

Bruce: 00:35:37.975 But that's, in some ways, unusable advice. Phones are an extension of our lives. You leave your phone at home, you're going to get lost, and you're not going to communicate with anybody. You might as well not go. We're not living in a world where you can do that. And it's gotten worse recently, because nowadays our computing is no longer under our control. Right? We use Gmail, we use Google Docs, our photographs are on Flickr. I have minimal control over this device, and this is my phone. We can do things around the edges, and there are a whole bunch of security survival guides on the internet. I don't want to dispense advice here, but largely I think we need to address this as a policy issue. There's not a lot we can do, because we are losing control. And I think that's the way the change is happening, and that's really important to think about. I don't do anything special that everyone else doesn't do. I probably do less. You simply have to accept it.

Ev: 00:36:45.558 Okay, let's maybe expand this question a little bit. Let's say you are in a leadership role in an organization. Let's say you have several cloud environments that your teams are running, and you are acquiring different products. Maybe you're thinking, "Well, should I adopt Kubernetes or not? Should I use MongoDB or CockroachDB?" And keep in mind that you're constantly adding more and more technology into your cloud accounts, which continues to expand the attack surface. So do you have any practical advice for people who are in this position? Because, look, they do need that additional database, because they're building applications. So there is no way to say no to growing complexity. What do we do then?

Bruce: 00:37:37.152 That's right. We figure it out. I mean, the best advice is: don't take advice from a random webinar. These are hard questions. The answers will depend on your network, your risk tolerance, what's going on. A lot of it is you have to accept it. I mean, things are bad and you have no choice. What we do depends on who you are. If you're running a major bank, you'll have a different answer. But these are actually hard questions. There are no quick answers. I want to mention something. I see a question from Mirage, basically saying: I've seen many tech people get the policy wrong; engineers are not good policymakers either. How do we manage that, right? And this is an important part of the solution: we need technologists and policy people working together. And there are people who straddle, who do both, techies who do policy and policy people who do tech. But largely it is about getting both groups together. And this is true for pretty much all of our complicated policy, whether it's the future of work or agriculture or pretty much anything. Everything we do has a strong tech component. We have such a technological society, and it's about getting the tech people in the room with the policymakers, hopefully talking with each other and not past each other. This is hard.

Ev: 00:39:02.911 So the audience was actually asking about this. On one hand, it could be said that engineers are often not good at policy. And at the same time, could you give an example? Because you said, "Let's regulate outcomes." Can you give the audience an example of —

Bruce: 00:39:19.781 Oh, sure. I mean, it was so —

Ev: 00:39:21.156 Kind of two questions rolled into one.

Bruce: 00:39:22.688 Yeah, we see that in the good regulations on emitting carbon. You just set a ceiling. We don't care how you reduce your carbon footprint. You could conserve. You could go out of business. We actually don't care how. Right? You figure out the most cost-effective way to do it, and you do it. And with that kind of regulation, suddenly an entire industry appears in carbon reduction. Right? You don't describe how; you just specify the outcome you want. And then, right, you start lowering your thresholds. You do this with food safety. Right? We don't care how you achieve food safety; you just need to do it. You figure out the cost-effective way to do it. This is better regulation than, say, the California law saying no default passwords. That's mandating how. You mandate what, right? Mandating what is better.

Ev: 00:40:25.420 And how is this related to the earlier question, that engineers might actually not be the best people to define policy?

Bruce: 00:40:31.804 I never said they should define policy. They should be in the room where policy is defined. You need both. Engineers are terrible at policy. Policymakers are terrible at engineering. But the answer has to include both areas of expertise. So we need the two groups to work together. Some people straddle; most people are in one camp or the other.

Ev: 00:40:56.794 And how do you recommend technologists even get into the room where the policy is made?

Bruce: 00:40:59.896 So this is something we're still working on. There are programs, though they tend to be nascent. TechCongress puts technologists on congressional staffs in the United States. Really great program. There are programs for staff technologists at the Federal Trade Commission and at the SEC, doing that kind of work. We have technologists now working on antitrust legislation. So it is starting to happen, but it's a matter of convincing the policymakers they need it, and then giving them the ability to hire those people. And that is starting to happen.

Ev: 00:41:40.605 And there was a related question, again from the audience: being compliant with regulation can be frustrating. It's expensive. It takes a lot of time. I would also inject my own observation that in the industry right now, not just the security industry but the tech industry, there's a huge talent shortage. We, for example, get people who call us and say it's not just that we need FedRAMP compliance; we actually need someone to tell us what it actually means for us. So what are your thoughts on that?

Bruce: 00:42:15.761 I mean, there's a lot of bad regulation out there. But I'm okay with regulation being hard to comply with. There's actually no reason why Facebook should be guaranteed to be profitable. If the only way for them to be profitable is to destroy democracy, maybe they shouldn't exist. Right? We can say that about entire industries. A hundred years ago, they sent five-year-olds up chimneys to clean them. We decided that industry should no longer exist. We can make decisions that are broader than economics. So regulations can be hard to comply with. Airline regulations are really hard to comply with, because otherwise planes drop out of the sky and people die. And we're okay with that. We in software have been used to this sort of libertarian, no-regulation, anything-goes world, because it didn't matter. Now it matters, and society gets to say what works and what doesn't. As for the notion that software engineers can design the world as they see fit, I think that era is over, and it's okay that it's over.

Ev: 00:43:38.226 Yeah. And also, as you said, if the regulation is introduced by a big enough market, let's just say the United States of America, then it will raise the bar for everybody, and we won't have the competition issue that someone is just — it will be impossible for anybody to sell insecure systems, which means we will have economic incentives for everyone to pay better salaries, to hire more —

Bruce: 00:44:00.991 It's a continuum. It's not secure versus insecure. It's going to be levels. But you see that in all areas; there's no such thing as perfectly safe. I mean, if you want to have a bad day, go on the FDA website and read the document that talks about minimum contaminants. There's an amount of insect parts that's allowed in your breakfast cereal. It's not zero. It's low, but it's not zero.

Ev: 00:44:33.881 A little bit related to that, someone asked about three immediate things. Remember, we actually predicted that this question would be asked. If there were three immediate things you, Bruce Schneier, could do right now to make the situation better, what would those be?

Bruce: 00:44:51.347 Okay, so I think liability is really important. I mean, the fact that software has been exempt from product liability has been a disaster. It made sense early on, because the industry was moving fast and it didn't matter. But now we need real liabilities in software, and that will do an enormous amount. The second is to break up the tech monopolies. The monopoly system is really bad for security, because it just removes consumer choice. All right, Antonio, third: outlaw surveillance capitalism. Right? The notion that there's an entire industry designed to spy on people is a disaster for security.

Ev: 00:45:37.123 So this actually — I think it answers the question that I had in mind when I was listening to you earlier. Are you familiar with NSO Group out of Israel?

Bruce: 00:45:47.072 Oh, of course.

Ev: 00:45:48.424 So there are actually multiple firms. It's basically a private industry: companies that make cyberattack weapons.

Bruce: 00:45:56.957 I mean, think of them as cyber weapons arms manufacturers, right?

Ev: 00:45:59.652 Pretty much, yeah.

Bruce: 00:46:00.498 Right. They sell them to countries that we don't want to have these capabilities, like Saudi Arabia and Sudan and Ethiopia and Kazakhstan. Countries that are not good countries have access to these pretty high-grade cyber weapons through companies like this. Yes, this is bad.

Ev: 00:46:21.641 So your proposal is to just outlaw that completely, or just make them basically be like SpaceX and have a single customer, which is —

Bruce: 00:46:28.667 Right, you can't outlaw defense contractors, but you can regulate who they're allowed to sell to. Right? NSO Group is Israeli, and it's sort of unclear how connected they are to the government, probably more than they admit. But, yeah, reducing the power of these cyber weapons arms manufacturers would be a really good thing for us.

Ev: 00:46:56.696 And several people in the audience were wondering: how are recent advances in AI, machine learning, and related technologies relevant to cybersecurity? I mean, how do they —

Bruce: 00:47:15.485 It's a good question, right? I mean, AI is a huge wild card, because you just don't know. Advances are very discontinuous, and things that we think are easy turn out to be hard, and things that we think are hard turn out to be easy. The real question, I think, is: who benefits long-term, the attacker or the defender? The answer is, we don't know. In the near term, I think the defender benefits, because the attacker is already attacking at machine speeds while the defender is defending at human speeds. So speeding up the defense will be an enormous benefit. And I think there's one area where the defense benefits greatly, and that's vulnerability finding. Imagine AI vulnerability-finding tools that could just trawl through software and find thousands of vulnerabilities. That's a benefit for the attacker. But you can [inaudible] those tools on software in development. You can imagine that in ten years, software vulnerabilities are a thing of the past, because the AIs are built into the compilers and they clean them up before the software is ever released. So there in particular, I think the defense benefits. So I'm betting on defense benefiting from AI. It also helps solve our problem of not enough trained people. By making people more powerful, it will benefit the attacker, but I think it benefits the defender more, is my guess. But the real answer is we don't know.

Ev: 00:48:45.620 And speaking of attackers, earlier you mentioned that attackers are getting more sophisticated, that tech makes them more productive. So it almost sounds like it's getting cheaper and cheaper to build offensive capabilities. Do you mind telling us a little bit about how that works? Because in the world of commercial software, open source obviously created a major revolution in how quickly and how cheaply we can build new things by stitching components together. So in the cybercriminal world, is there a similar dynamic, where a 14-year-old, after reading How to Be a Hacker in 21 Days or Hacking for Dummies, can just go and build something really impressive really quickly?

Bruce: 00:49:28.369 But that teenager doesn't build things. That teenager goes online and downloads things, right? That's a script kiddie, right? They're using tools other people build. So there's this entire ecosystem on the attack side. There are those 14-year-old script kiddies, there's organized crime, there are professionals who sell tools, there are the cyber weapons arms manufacturers, and there are national governments like the NSA or China or Russia. I mean, there are hackers at all levels, and there's a lot of interplay between them, and it is extremely complex, that attacker ecosystem. It's very hard to sum up.

Ev: 00:50:06.184 So would it be fair to say that the purpose of regulations, or better regulations, is to increase the economic incentives on the defensive side so they will be greater than those on the offensive side? Because at some point we want all the 14-year-olds to look at it and say, "You know what? Hacking doesn't pay."

Bruce: 00:50:25.456 Yeah. So, you'll always get — it's tough, right? I mean, it's hard for crime to pay because the risks are so great. You can do much better with a legal job than with an illegal job. You don't go to jail if you make a mistake. Really, the economic incentives are less about that. A company like SolarWinds will never flip and become an attacker. What you want for SolarWinds is for the cost of security to be cheaper than the cost of insecurity. You want it to be cheaper for Colonial Pipeline to secure their systems than to take the hit when they're attacked. You want a bank, a phone company, Facebook to be more profitable if they are secure than if they are insecure. Right now, in pretty much all of those companies, all of those industries, you are most profitable by being as insecure as you can get away with. Raise that floor. That's what we need to do. And then, right, the tech takes care of itself. We are incredibly innovative. Right? Go to the RSA show floor. There are thousands of products. Nobody's buying them. Right? We need to change the economic incentives so they are bought.
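
One way to read "the cost of security should be cheaper than the cost of insecurity" is as an expected-loss comparison, in the spirit of the classic annualized loss expectancy calculation from risk management. A toy illustration in which every number is invented:

```python
# Toy expected-loss comparison (all numbers invented): a rational company
# buys security only when the control costs less than the expected loss it
# prevents. Liability raises the cost of insecurity and flips the decision.

breach_probability = 0.05   # assumed chance of a breach per year
breach_impact = 2_000_000   # assumed direct cost of one breach
security_program = 150_000  # assumed annual cost of real security

expected_loss = breach_probability * breach_impact           # 100,000
print("worth securing?", security_program < expected_loss)   # False: insecurity is cheaper

liability_if_breached = 3_000_000  # add fines / product liability
expected_loss_regulated = breach_probability * (breach_impact + liability_if_breached)
print("worth securing now?", security_program < expected_loss_regulated)  # True: 250,000
```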

Ev: 00:51:49.867 So we've talked about companies and the decisions that organizations should be making. But at the same time, if you're an average Joe buying some IoT device for your house, you are unaware of all of these things, so it's really hard for you to make an intelligent decision. I think there was a question about this earlier from Raj. So does it mean that someone else has to take on this role of —

Bruce: 00:52:18.757 100%. Right? When I get on an airplane, I know nothing about airplane maintenance. I know nothing about crew training. I know nothing about anything. I trust that the FAA puts well-trained, well-[inaudible] into a well-repaired aircraft all the time. Right? When I walk into a restaurant, I don't check the kitchen. Right? I'm trusting whoever does restaurant inspections to ensure that there's not some fungus growing in the salad. Right? Everywhere in our society, we trust someone else to ensure safety and security. We never do it ourselves. Nowhere. This is going to be no different.

Ev: 00:53:08.276 I like the aviation analogy. And we have a question that's actually been asked many times as I'm scrolling; the latest version comes from Elena Krug. The question is: generally, it seems that the people in charge of regulations are just incompetent; they don't understand technology. At the same time, when I look at the airline industry, the regulations they have in place probably make sense, since it's working so well. But we all know that it's actually extremely expensive to launch a new plane. So increased regulation around the safety of flying led to massive consolidation in the space. We effectively have maybe four major aircraft manufacturers in the world, because no one else can do it. I actually looked into it recently; it's extremely expensive to design even a tiny aircraft. So what shall we do about it? How do we ensure that regulations are smart, but at the same time don't kill competition and don't kill innovation?

Bruce: 00:54:16.387 Yeah. So my guess is it won't be as bad as airlines. I mean, you should look at industries that aren't that consolidated. Think of the medical industry. There are a lot of medical startups, even though there are enormous regulations. And that's probably a better comparison. I mean, yes, there's a lot of money in getting a new drug onto the market; that's very expensive. But there's an entire ecosystem of startups that do innovation. I mean, we got our COVID vaccines through some pretty impressive innovation. So we know we can innovate in this space. Yes, this is hard, right? Regulatory bodies do not understand tech, and regulatory capture is a thing. I mean, the fact that Boeing could self-certify the 737 MAX was a freaking disaster, literally. But these are problems to solve. They are not reasons not to do the thing. So Richard asked, "What was the name of that company?" I have no idea what company he's referring to.

Ev: 00:55:18.118 I suspect it was an earlier example. You may be —

Bruce: 00:55:21.832 Yeah. I don't know. I use SolarWinds a lot. Oh, NSO Group, probably, was the name of the company. Someone asked me to wave my book around again. This is my latest book, with the awesome title of Click Here to Kill Everybody. So I can recommend that. My previous book, you can see it up there, is called Data and Goliath. What else have we got? We've got to bring this thing in for a landing. Any other good questions?

Ev: 00:55:48.263 So, okay: "Spacecraft, weapon systems, etc. have complex supply chains. They manage to secure them."

Bruce: 00:55:57.640 No, they don't. Who said they managed to secure them? The security of our weapon systems is a freaking disaster, not even —

Ev: 00:56:04.671 Ryan, you're learning something you forget.

Bruce: 00:56:06.998 There have been some US government reports. It's way worse than anybody wants to admit. We are not securing our supply chains and weapons systems.

Ev: 00:56:14.756 So which means the second part of this question no longer makes sense, because it asks, "What will it take for other tech to do —"

Bruce: 00:56:21.593 If you think about the development process of software for spacecraft, or for aircraft, it's like 10x as expensive, because they have a safety process that the rest of the industry does not have at all. I don't think we're going to get there, but I think we're going to get a little further along that line.

Ev: 00:56:43.241 Oh, fantastic. So we are out of time. It's 12 o'clock. Bruce, thank you so much for coming up here and telling us —

Bruce: 00:56:52.997 It's 3 o'clock where I am. How weird.

Ev: 00:56:54.314 So yeah, trying to be — next time we'll be more inclusive of the time zones. Thank you so much for giving us your perspective on what's happening in the world.

Bruce: 00:57:03.150 Next time we're going to be in person somewhere, and there'll only be one time zone. How about that?

Ev: 00:57:07.796 How about that? Absolutely. Well, and thank you —

Bruce: 00:57:09.624 That'd be great.

Ev: 00:57:10.432 —everyone else who joined us today. And this is not the last talk in the series, so looking forward to seeing more of you all. Goodbye.

Bruce: 00:57:18.675 Bye, all.

Our next Security Visionaries 2022 speaker - Troy Hunt

Ben: 00:57:21.428 Thank you for watching the first talk in our Security Visionaries 2022 series. We've gathered the world's foremost security researchers, practitioners, and thinkers to see what they think is on the horizon for 2022. Next up we'll have Troy Hunt. Troy is an information security author and the founder of Have I Been Pwned. He'll be telling us about lessons from billions of breached records. Join us on February 16th at 11:00 am Pacific. The event is free to register for.
