Balancing Security and Agility While Scaling Your Company - overview

Fast-growth companies are some of the richest targets for hackers because that’s where the user data is. How do you balance the security you need to protect your customers/users with the agility you need to build a business? This talk provides practical tips drawn from Michael Coates' experience as CISO of an iconic brand with hundreds of millions of users. The talk will also explore current threats, data breaches, and the new reality of risk to identify what security controls are actually needed for enterprises that are moving fast, leaning into new technology, and want effective security defenses.

Key topics on Balancing Security and Agility While Scaling Your Company

  • A great way to start in the cybersecurity field is to dispel myths by demonstrating the risk at hand to company stakeholders.
  • While security is technical in the details, success in security is more about humans, behaviors, and psychology.
  • In cybersecurity, there are shades of gray that have to be considered based on target user groups and specific context.
  • Destroying the economics of automated attacks is the type of thinking needed to build your security program.
  • As security leaders, we are not in the profession of eliminating risk but of managing risk.

Expanding your knowledge on Balancing Security and Agility While Scaling Your Company

Introduction - Balancing Security and Agility While Scaling Your Company

(The transcript of the session)

Reed: 00:00:02.558 [music] Hi everyone. Thank you all for joining our speaking session today. My name is Reed Loden, and I’m the Vice President of Security here at Teleport. If you don’t know much about Teleport, we are the easiest, most secure way for engineers to access infrastructure resources. But that’s not why we’re here. This is the 2022 Security Visionaries Speakers Series, and we’ve got a great one for you today. We are super excited to have Michael Coates joining us to talk about Balancing Security and Agility While Scaling Your Company. As the former CISO at Twitter and current Co-founder & CEO at Altitude Networks, Michael has a few lessons he can share with us, I’m sure. I’ve actually known Michael for over a decade, back from when we were both at Mozilla, so I know this is going to be an awesome talk. If you have any questions for Michael, please ask them via the Q&A option. I will be presenting or moderating the Q&A at the end of the talk. This will just be a conversation, so no slides will be presented. And without further ado, Michael, welcome.


Michael: 00:00:58.282 Thanks so much, Reed. And thanks everyone at Teleport for having me. I’m super excited to be here. I think for many of you, and myself included, security is not just what we do to put food on the table, though of course it does, but it’s also an interest, a passion, a hobby, all sorts of things, for us. And I love the opportunity to chat with others. Share some of the things I’ve learned through successes, through failures, and missteps. [laughter] But hopefully, give some different thoughts and perspectives to take back to your organizations that may help you as you build your security programs and build your companies as well. So today I’m going to talk about the topic of how you’re balancing security and agility and thinking about this as your company is going through rapid growth and scaling. Hopefully, I’m going to be able to give you, again, a different perspective to think about from maybe the status quo in security. And as I go through this, share some of the interesting stories I’ve been a part of.

Early start as hacker for hire

Michael: 00:02:03.666 So in the last 20 years I’ve been in the cybersecurity field, I’ve certainly seen a lot. I had the opportunity to get my feet wet in the field, actually being the hacker for hire. Legally, in my case. Banks, governments, telcos, international corporations, they hired me and my team to break into their technology, into their banks, actually literally into their vaults from time to time, and show them where the flaws were and how those could be fixed before the bad guys found and exploited them. This was incredibly exciting. A really great way to start in the field to dispel myths. And I think that’s one of the things that’s been very helpful for me — and hopefully, it can be helpful for you — is whenever you put your hands on the keyboard, when you actually demonstrate the item, the exploit, the vulnerability, what that might be, it makes it real and it takes away the fog around it when people are discussing, “Hey, that’s not possible,” or, “I don’t believe that.” Things of that nature. And what a great way to start my foray into the field.

Leading security in exciting environments

Michael: 00:03:08.551 As Reed mentioned, I’ve had the opportunity to lead security at some really exciting places. I was head of security at Mozilla Firefox for a number of years, protecting, at the time, maybe half a billion users around the world through what we worked on with the browser and with the back-end technology. And I was also CISO for Twitter. And I think it’s pretty clear now, but looking back when I started moving toward Twitter, I think there was confusion over why you would want to go and do security at Twitter. People asked, “Okay. If your users are telling the world they’re making a ham sandwich, who really cares about security for that kind of technology?” But lo and behold, I think as many of us now understand, Twitter is a vehicle for speech. And with that comes all sorts of interesting challenges from individuals around the world that may be speaking out on topics that are not desired by the entities around them. Their corporations. Their governments. And protecting the identity of those people and those pseudonymous accounts has incredible cybersecurity considerations around it, amongst many others.

Why security matters and why its success depends on humans

Michael: 00:04:24.933 So as we start to think about this topic of balancing security, what I want to talk about is, yeah, let’s step back and think about how we approach security. Let’s understand, why does that actually matter? We’re in it every day. But why does it actually matter that we’re doing cybersecurity? What if we didn’t do it? [laughter] Which, of course, is not the direction we’re going to go. I want to talk about a couple of different items. Also, I want to crush this myth that, to achieve security, we actually have to limit functionality. And as you start to reflect on some of the kind of status quo decisions that are made in cybersecurity in the industry, I think you’re going to see that, wow, we’ve actually taken this weird approach of saying no far too often, and really because of our own failures. And then what I want to show you is that, while security is so technical in the details, success in security is actually more about humans, behaviors, and psychology, and that’s going to be some of the items that really make your program successful in your company.

Michael: 00:05:28.568 So diving into this first part, let’s think about why do we do security at all. And what I’ve often told my teams is, “Do this exercise of, so what?” Or some might call it the 5 Whys question. And if you can’t explain to someone in your business who’s outside of the field of cybersecurity why it matters that they need to patch their system, we have a disconnect. And that’s an incredibly important thing. Because while our lens is on cybersecurity and those details, the business as a whole is looking to function, to be successful, to get to market, to build customers, to have revenue. And what we need to do is take the items we’re trying to achieve from a cybersecurity perspective that we know are critical and bring them back to the business. And that’s one of the most important things, just in the conversations in general, and how we present this to others outside of the team.

Michael: 00:06:25.982 Which is, “All right. I understand that patching’s important as a security professional, but the reason we have to do it for the business is because if we don’t patch that machine, it’s going to get compromised and someone’s going to have root on it.” From, again, the business leader, “So what?” “Well, if they have root on the machine, then they can access anything on there.” “Well, so what? It’s not an important machine.” “They could use that to pivot to another machine that is important, and that machine has critical user data.” “All right. So what?” “If they access that critical user data, then we have a data breach. Then, we have reporting requirements. Then, we have potential fines. Then, we have impact to user trust, etc.” “Ah.” Now, your business leader understands why that matters and is behind the chain of events and can see why you’re pushing this particular security item.

Michael: 00:07:10.065 Now, when you think about every business, every business today is powered by technology, no question asked. And I think because we are just in the thick of it, it’s not even a revelation to us. But it is such a change as to where businesses have been from 10 years ago, from 20 years ago. So while we may have this conversation that we can say, “All right. User data, compromise of machines, etc.,” because technology is so interwoven, we actually have this concern around cybersecurity for everything. Even a business that is dealing with baked goods, they’re going to have logistical issues. They have suppliers. Those suppliers run on technology. Third-party suppliers. So now if you have a security breach, you could impact food delivery services. So it actually plays into everything. And our goal as we think about security is to understand that big picture and be able to put a plan in place and get support for that.

Learning from past lessons

Michael: 00:08:11.944 So as we think about how we do security, I think it’s important to think about how we’ve failed in the past and what we need to do differently. For example, cybersecurity is not clean desk policies. That is just an outdated, ineffective approach. It doesn’t really achieve much. It’s not even complex password policies. That is the old world, the ways we’ve failed. It’s not tricking and punishing employees, either; that’s one of the things that really sets me off, a sign that we’re going down the wrong path. Think about those really advanced phishing emails. You might have seen the news story where a company sent a phishing test that was talking about an annual bonus for employees. Well, of course, they got people to click. That’s not right. We’re going to the wrong extreme. And I’m going to talk about the right way to do that and why some amount of phishing training is good, but conceptually, we’re actually missing the boat.

Michael: 00:09:16.876 Now, the other thing in terms of setting up our perspective on cybersecurity is we need to step away from absolute statements. Far too often in cybersecurity when we’re talking about approaches, we talk about them absolutely and say, “This is the answer. The right way of doing this is 2FA with a hardware token. The wrong way of doing this is anything less than that. SMS is broken. That’s the wrong way.” Let’s just call it. That approach is actually naïve and it’s wrong. So it’s important to know that all of your businesses are different from one another. I’m talking with the perspective of the tech companies and the user data, and the impacts and risks that happen with compromise for those companies. But of course, you have to interpret this differently if your business is a government nuclear facility. If your user is a high-net-worth individual in an oppressive regime that is speaking out against the government. All of those things have different user bases, different risk concerns, etc. But the principles on how we think about [inaudible] do hold; it’s just a question of where you put that dial.

Michael: 00:10:35.879 Now, I want to talk about that example about 2FA. And this is something we dealt with at Twitter. When you think about the conversation for authentication security, and you hear people say, “SMS two factor is broken. You should never use it. Only use a hardware token,” think about your user base. If it’s the employees of your company and they all work in the U.S., maybe you can dictate that a 2FA hardware token is the right answer. But what if your user base is the customer base and they are spread out across the entire world? Have you given thought to whether or not your user base even has a smartphone? Could they use an OTP-style 2FA approach? What if they only have flip phones? If you actually look at the statistics globally, the number of people with smartphones is surprisingly low. And so, as we sit oftentimes in Silicon Valley and build things somewhat in our bubbles, we sometimes forget about that. So when you go and say, “SMS is unacceptable. We should never even offer it,” what you’re actually saying is, “The few people who adopt it will have a hardware token, and the rest of your user base will have nothing.”

Considering the shades of gray within security

Michael: 00:11:56.214 And so now you need to think about the shades of gray and say, “Well, wait a minute. Would I rather our users have no 2FA or would I rather them have SMS 2FA?” And then you could say, “Well, all right. There are risks in SMS 2FA. You could potentially have SS7 over-the-air vulnerabilities. You could potentially have malicious insiders at telcos.” But then again, ask yourself, “Are we designing for the masses of the user base, where we want to have broad acceptance?” Because those users who, again, are individuals in oppressive regimes targeted by nation-states, that is a separate case, and we have to think about isolating out our different groups of users with their different threat models. And so this is just a small example, but the important nuance here is, when you see people make broad statements, saying, “This is insecure, this is never acceptable. How dare you do this?” we actually need to be way more nuanced, because there are plenty of times when something is actually appropriate for particular threat models.

The ‘security dial’ concept

Michael: 00:13:03.678 Now, I want to get to this concept of what I think about as the security dial. What I think we’ve done far too often as we’ve approached this in the industry is say, “The way to be secure in cybersecurity is to limit functionality. So if you turn the dial this way, you are more secure, but you have more limitations. And if you give more access to things, more functionality in general, you are less secure.” And this is again, unfortunately, I think a failure of us in the cybersecurity industry to evolve and be creative. Now, when you think about the kind of stereotypical cybersecurity “team of no,” this is where it really comes through. Think about examples where we make these statements. “You are not allowed to do this.” You are not allowed to work from a coffeehouse because wireless is dangerous, or you are not allowed to browse to non-work websites because the web is dangerous. You are not allowed to use a personal device because personal devices are dangerous.

Michael: 00:14:10.680 On the surface, those have some security merit and lots of people will agree with you. But if you start to implement those in your fast-growing companies, people are going to revolt. They’re going to say, “That’s unacceptable. Why can’t I work in a coffeehouse?” And I think we’ve actually gone through this front and center over the last couple of years. And then they may say, “Well, why are there some websites that are trusted and some that aren’t? And what does it even mean to be trusted?” The actual issue as you pull this apart is we have failed at technology. So let’s use the wireless network example in the coffeehouse. The reason we said you should never use a coffeehouse network, because malicious wireless exists, is that it used to be a genuine problem. It used to be an issue where websites wouldn’t use SSL/TLS correctly or wouldn’t use it at all. And if you were on a network with a malicious entity, they could easily, trivially, view your communication.

Michael: 00:15:10.718 And I remember myself actually sitting in North Beach here in San Francisco in 2010, demonstrating the Firesheep attack against a popular social network site, just proving how simple it was to do. We’re not in that reality anymore. Ten years on, the technology has moved eons beyond where it was. Every meaningful site is using SSL/TLS correctly. The browsers are enforcing controls on this. This is not a real issue that carries extensive risk to the broad set of users, not one that necessitates such a broad, restrictive statement. Now, also, when you think about that, like, “Oh, well, if you’re on a wireless network, you could conceptually have another entity that is malicious and actually attacking your device because of network proximity.” Well, what we’re saying actually is, “As a security program, we don’t feel like we have enough control to enforce security controls on your device, from a local firewall to patching, to know that it will actually be secure against an attack.” We’re also saying, “We’re nervous that if it was compromised, you would have lateral movement inside the network, the ability to attack other devices, and potential follow-on effects.”

Building security for an evolving company

Michael: 00:16:29.890 So it sounds like we’re projecting. It sounds like we’re projecting our failure as a security team onto the user and saying, “Because we can’t get our act together, you don’t get to use a coffeehouse network.” But as you think about building your program for your evolving company in today’s modern world, what you actually can do is address those underlying, quote, “failures,” we used to have and take away this restrictive policy. And that’s where I think we can go with modern security programs. And so just to put a point on this kind of analogy, what we’ve been doing in the past is telling people, “You get our car, but our car is not very good. If you go above 40 miles an hour, the car’s wheels are going to fall off. So you are not allowed to go above 40. If it’s raining, you’re not allowed to drive because the brakes don’t really work.” And that kind of thinking, when you say it about a car, well, that’s not a policy problem, that’s not a user problem, that’s a technology problem. The car is the problem, the thing we built. And that’s how we can think about security in terms of what we do.

Michael: 00:17:41.988 And if you want to pull it back into a very relevant, timely example, I would challenge you individually and even your security teams to say, “If you watched the Super Bowl or if you went to South by Southwest this past week, when the QR code went on the commercial, what was your reaction?” You may have had an initial knee-jerk reaction of, “Oh, that is dangerous.” But what is that rooted in? And if you told other people, “Don’t scan that QR code,” think through the threat models. Think through the different controls. Are we telling them that they have a really crappy car? Or do we have an opportunity to say, “Wait a minute, this should be totally safe. You should be able to scan a QR code. You should be able to open a website.” Because if you can’t, then we have an issue: we don’t think you’re patching your device, or we don’t think your browser can contain malicious code. And I think that is where we have a breakdown. So I think just like you should be able to click a link, you should be able to open a QR code. And I think we should be able to get technology there to allow us to do those things.

Examples of solving security challenges

Michael: 00:19:02.423 Now moving away from this notion of saying no, I want to talk about a couple kind of examples of things that have been very effective for solving actual challenges that I faced at Twitter, at Mozilla. So at Twitter, we had a challenge, as many companies do, of being very global and having people travel around the world, travel to different countries. And as many of you may do in your companies, you may have different risk profiles for different parts of the world based on your business. For example, perhaps there is intellectual property or user data that is not appropriate to be in different parts of the world, or perhaps you have different threat models with different nation-state entities and presence in those countries is a higher risk to your employees or to your devices.

Michael: 00:19:59.812 Now, what we attempted to do initially was what many of you probably do: forbid, by policy, actions that we can’t have occurring. And as you might expect, these kinds of paper-tiger approaches don’t really work very well. So we would end up having people go to the countries they’re not supposed to with their corporate devices. We’d later find out about it and say, “Hey, you violated policy, etc., etc.” And as you might imagine, this is not the real way to do things because, well, you’ve realized the risk. They have gone to the country. You now have an incident you have to respond to in terms of, “Did something happen?” Investigations, etc. And so, just as we’ve tried to move from what I said before, this blanket “don’t do things” posture, to something more practical, we moved this from a policy-based approach to an actual technology-based approach.

Michael: 00:21:00.926 And what we did in this situation was we actually used our endpoint device software. We opened up its ability to have visibility outside of the corporate network and then used reporting back to our central server to understand changes in geocodes and such. Instead of a paper-based approach, where the policy said thou shalt not take this device into said country, now we had a technology approach that said, we understand that you have moved into this country. We are able to automatically restrict access to things that are sensitive. We are able to notify you that you’ve done something we’ve asked you not to. We are able to follow up with managers and such automatically. We moved to something that is both more scalable and more realistic. And that’s really, I think, one of the ideas that’s so important here: we have to move cybersecurity into something that’s more realistic.
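The check-in logic described here can be sketched roughly as below. This is a hypothetical illustration, not any real endpoint product's API; the names (`HIGH_RISK_COUNTRIES`, `handle_checkin`) and the specific automated actions are assumptions made for the example.

```python
# Hypothetical sketch: an endpoint agent reports its location to a central
# server, which reacts automatically instead of relying on written policy.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder ISO codes your policy flags

def handle_checkin(device_id, country_code, state):
    """Record a device's reported country and return automated actions."""
    previous = state.get(device_id)
    state[device_id] = country_code
    actions = []
    # Trigger only on a *change* into a flagged country, so each move
    # produces one round of automated responses rather than a flood.
    if country_code in HIGH_RISK_COUNTRIES and previous != country_code:
        actions = [
            f"restrict-sensitive-access:{device_id}",
            f"notify-user:{device_id}",
            f"notify-manager:{device_id}",
        ]
    return actions
```

The point of the sketch is the shape of the control: the policy lives in code that runs on every check-in, so the response to a violation is immediate and consistent rather than an after-the-fact investigation.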

A challenge around passwords

Michael: 00:22:00.910 Another item that we focused on at Twitter, quite interesting and something that really hit us in the summer of, I think, wow, was that 2015? It was an interesting challenge around passwords. And the reason I bring this up is because passwords very much represent quintessential cybersecurity thinking, and they also encapsulate a lot of our, maybe not failures, but our need to move forward and progress in our thinking. And so for passwords, of course, we always rant on strong, complex, unique passwords for every site. But if you step back and think about that, it’s very challenging for users, or even employees, to actually internalize and realistically follow that guidance.

Michael: 00:22:57.966 What we faced at Twitter was the challenge not that attackers were brute-forcing the Twitter website. They weren’t trying thousands of logins per account. But instead, what they were doing is taking commonly available breaches — taking emails and passwords — and saying, “I bet people reuse passwords.” And they would then try those passwords across numerous popular websites. So Facebook, Bank of America, Twitter, etc., etc. And so we faced this challenge, as do many other companies, of attackers automatically trying one password for a user account. And so this throws off all of our previous thinking as an industry on how you prevent brute-force attacks. It’s not many passwords against an account. It’s one failed login attempt for an account. And of course, you could go back and say, “Well, you could just identify them by IP address and block the IP address.” First of all, blocking IP addresses is really kind of an antiquated way of thinking. And second, it’s really trivial for them to spin up a cloud instance, or move through different devices to try their one attack per account.

Michael: 00:24:12.268 And so it put us in a scenario where we had to again re-evaluate, what is the threat here and how do we actually prevent this? And it wasn’t going back to hundreds of millions of users and saying, “We told you so. You need 2FA. This is the issue.” Because while you can get up on your high horse and say that, it doesn’t actually translate into meaningful adoption across your user base. So instead, you have to get creative. And in this situation, there’s a few different approaches. You can use anti-automation-type defenses. But the notion of putting a captcha on a login page for all your users is pretty horrendous. You can use some more advanced anti-automation tools that exist. That’s one approach. There are vendors in that space. You can also use a notion of challenging those new logins with other pieces of data that may not be commonly available from data breaches. And this is not going to the degree of what you might see with, “Verify your identity. Where did you live 12 years ago?” I sometimes fail those myself. But you can ask for another piece of data to block this type of attack.

Destroying the economics of automated attacks

Michael: 00:25:27.884 The interesting thing about that approach is, if you were to say to the broader cybersecurity community, “Oh, we’re going to ask this question to help stop this attack,” the community may say, “Well, that is defeatable. An attacker could figure that data out. It’s not perfect.” And again, that’s a red flag we have to watch out for. It’s not about having something that is perfect or can’t be defeated. It’s about saying, “This particular threat is based on automated attacks. It works because the attackers can get the data. They can perpetrate the attacks automatically with no human involvement.” If you can destroy the ability to automate that and make it so that if the attacker wanted to do it, they’d have to do it manually, one account at a time, you have destroyed the economics of that attack. You’ve destroyed that vector and moved into a whole other category. And that type of thinking is what you need to build your security program. That’s ultimately what we did at Twitter: the ability to have a secondary check of another piece of data. It’s something we had to build internally, but it ultimately worked.

The human side of security

Michael: 00:26:45.263 Now, while much of security is our mindset around technology and risk, the other side of security is very much around humans. And this is going to be incredibly important as your company is scaling. So if your company is at 20 people or 100 people, you are just now forming the bedrock of your security culture. If your company is 1,000 people, your security culture has formed, whether or not you’ve been a part of it, [laughter] and your job now is to manage and shape that and maybe even reposition that into a way that is positive. Now, you can do this in a few different ways, from both focusing on your engineering and technical teams, and also from the kind of workflow and human element of it. Now, one of the things that we always think about in security is we call it shift left, security integrated from the beginning, etc., etc. But oftentimes we fail, and we say, “Oh, I wish they would have brought us in sooner,” or, “we just got added at the end. We were a checkpoint at the end.” And there’s a lot of ways you can address that, again, to form the culture that is incredibly helpful for your company.

Establishing a security culture

Michael: 00:28:08.095 So let’s look at the approach to engineering. One of the best things that you can do with engineering to help establish your security culture and just build things securely by design is some of the stuff that you’d expect. Do security training with your developers. This is something we did at Twitter. Every new engineer (I believe even before they could ship code; it was in their first week or two) had secure developer training. Now, at the beginning, the secure developer training was a little bit generic. It took common principles and talked about them. And that’s great. We should do that. What it moved into was using real-world examples from the Twitter code base that had issues. “This was a vulnerability we found internally and fixed. Here’s exactly how it worked.” The training was led by security engineers who used to be developers. That was a popular way for us to build our security team: bringing experts in from other teams, training them up on security skills, and then they would join our team with a strong foundation in development. So the security training is key, and I think that’s probably not unexpected.

Michael: 00:29:21.715 One of the other things that can help scale your program is looking at the technology stack. So now that you have your developers, security engineers that understand how code is shipped in your company, have those individuals look at your frameworks and say, “What opportunities do we have to turn on security by default in the framework?” And this is something that we specifically did at Mozilla. I may be dating myself, but this was almost 10 years ago. The web dev team at Mozilla used the Django framework. And what we found was, so much of the time when we were involved at the end, we would say, “Hey, you missed these core framework controls,” for, I think, the HttpOnly cookie flag or maybe something around content encoding. And we would go and turn those on afterwards. Then, code would break. Then, they’d have to fix things, etc. We finally said, “What if we just changed the framework default for the Mozilla project so that when you start a new project, it has all these controls turned on at the beginning?”

Michael: 00:30:29.584 That, although a simple concept, was actually pretty monumental. Because we worked with the web dev teams, we had those changes made in their core frameworks, and now when they started a project, the security controls were on. So if they had compatibility issues, if they had conflicts, they were able to just fix those in real time as they were building. And that resulted in basically eliminating these classes of issues from our security concerns. This is a concept that you may see in other companies. Netflix uses this kind of concept. They call it their Paved Road or Gold Road approach. They applied that to their virtual instances. You can get a virtual instance. It’s available. It’s ready to go. We make it easy for you. And it’s also secure by default. So one action item as you scale is to look at how you’re building technology and say, “How can we just make this easy and secure for our development teams out of the box?”
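To make the "secure by default in the framework" idea concrete, here is what such defaults can look like in a Django settings module today. The talk does not name the exact settings Mozilla changed; these are standard, currently documented Django options chosen purely for illustration.

```python
# Sketch of secure-by-default settings in a Django project's settings.py.
# New projects started from a template like this get the controls for free.

SESSION_COOKIE_HTTPONLY = True      # session cookie invisible to JavaScript
SESSION_COOKIE_SECURE = True        # session cookie sent only over HTTPS
CSRF_COOKIE_SECURE = True           # CSRF cookie sent only over HTTPS
SECURE_SSL_REDIRECT = True          # redirect all plain-HTTP requests
SECURE_HSTS_SECONDS = 31536000      # tell browsers to require HTTPS for a year
SECURE_CONTENT_TYPE_NOSNIFF = True  # send X-Content-Type-Options: nosniff
X_FRAME_OPTIONS = "DENY"            # refuse framing (clickjacking defense)
```

The design point is the same one Michael makes: any compatibility breakage surfaces while the team is actively building, not after a security review at the end.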

Sources of risk

Michael: 00:31:33.161 But engineers are not going to be your only source of risk. You’re going to have projects that involve third-party companies. You’re going to have business relationships. You’re going to have data leaving your organization for these relationships or third parties connecting in. They present risk too. One of the items that we did at Mozilla to get ahead of these concerns — and again, not to be at the end of it — was to say, “How do we find our way into all new ideas, all new projects at the very beginning so we can give those very gentle nudges that help?” Because at the very beginning, if you say, “Hey, if you just do it this way instead of that way,” a lot of times, the teams are like, “Hey, sure. I wouldn’t have thought to do it that way, but if that makes a huge deal for security, great. I’ll do that.” But you have to be there at the beginning. It’s like trying to retrofit the basement of a skyscraper. You can do it at the beginning. That’s no problem. You can’t do it at the end.

Michael: 00:32:28.089 So what we did at Mozilla was we created what was called a project kick-off form, and this is where we also connected with our other key stakeholders. This kick-off form, just an operational item, was a way to say, “Hey, everybody in the business, if you’re doing something new, fill out the core details so we can give you all the resources and support you need.” It linked together the security team, the legal team, the privacy team, the finance team. All of those people became aware of what this new endeavor was, and they were ready to say, “Hey. Oh, it looks like you could use a third-party organization. Legal’s here to help you with any contracting. Here’s the stuff they have out of the box. If you want to engage with them, here’s their way of doing it. Here’s the workflows.” Same thing with the cybersecurity team. “Here are the frameworks. Here’s the guidance. Here’s how we can be a part of the build process. Here’s how we can be at your sprints and help do code reviews or what have you.” All of those things happened, and they were streamlined because of this simple kick-off form. So how complex is the technology to do that? Not very. How big was the impact on the organization as a whole? Pretty huge.

Michael: 00:33:38.804 Now, on the other side, we did something similar at Twitter. And this was very popular because of third-party risk reviews. As you may have in your companies, you establish these third-party relationships and you want to do a review of that third party to understand their risk posture and whether or not that’s highly risky. There’s a lot of lessons learned here, and this is something that a lot of companies struggle with. So a few things that we found that worked really well. One, think about, again, standardizing and streamlining. If people can fill out a simple form and select what type of data they are interacting with or what type of connectivity they’re interacting with, you can, again, automatically or programmatically weed out the vast majority of cases where the risk level is acceptable. And this is really important. As security leaders, we have got to understand that threshold of what do we really care about and what is acceptable to let through? We are not in the profession of eliminating risk. We’re in the profession of managing risk.
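[Editor’s note: the “weed out the acceptable-risk majority automatically” idea can be sketched as a small triage function over an intake form. Everything here is illustrative, not Twitter’s actual process: the data-class names, the form fields, and the escalation rule are assumptions for the example.]

```python
# Sketch: programmatic triage of third-party risk intake, so the security
# team only spends review time on the small fraction that warrants it.

LOW_RISK = "auto-approve"
FULL_REVIEW = "full security review"
VP_SIGNOFF = "VP risk sign-off required"

SENSITIVE = {"user_pii", "credentials", "payment"}

def triage_vendor(form):
    """form: dict with 'data_classes' (set of strings) and confidence
    signals like 'has_soc2' and 'accepts_security_terms'."""
    data = form.get("data_classes", set())
    if not (data & SENSITIVE):
        # The vast majority of requests: risk level is acceptable,
        # let them through without consuming review time.
        return LOW_RISK
    if form.get("has_soc2") and form.get("accepts_security_terms"):
        # Sensitive data, but the confidence layers are present.
        return FULL_REVIEW
    # Sensitive data plus red flags: accountability moves up the chain.
    return VP_SIGNOFF
```

The thresholds encode the “managing risk, not eliminating risk” stance: most of the funnel exits at the first branch.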

Michael: 00:34:48.216 And so with this third-party risk process, if someone was interacting with very sensitive data with a third party, we would then go through the process to evaluate the security of that organization. We would look at some of the things that indicate whether or not they are taking security more seriously than those that are not. These are not foolproof, but they are layers of confidence. So if the third party had a SOC 2, by no means is that foolproof. But if they didn’t have a SOC 2, and they needed sensitive data, that certainly is a red flag. So that’s one of the things. If they wouldn’t agree to our security terms in the legal contracts, that was another item. Of course, legal contracts alone are not going to prevent you from being breached, but a company that won’t even agree to it in a legal contract, again, that’s something that’s certainly concerning. And of course, you can imagine there are other levels that you go to, all the way to actually testing them, if you really needed to, which I will say is a really costly way of doing things.

Michael: 00:35:47.981 The other part that we brought into our third-party risk review at Twitter that was really important was the notion of making sure the risk accountability lay with the team, not with security. And this is an incredibly important item in how you build your security culture. Security accountability cannot entirely fall on the cybersecurity team. If you have all accountability for all breaches and all security mistakes, you also need all authority to prevent them. Which means you basically need unilateral authority to veto every decision of the business. That is a horrible business strategy. Businesses take calculated risks. Sometimes they work, sometimes they don’t. But those risks need to be made within the departments of the business that have the understanding of what they’re doing. Time to market is a risk calculation. Features are a risk calculation. Cybersecurity will help advise on those, but the accountability ultimately needs to be with those business leaders.

Michael: 00:36:51.383 So stepping back to the third-party risk form, if they got to a spot where their data was incredibly sensitive and the business partner was not meeting our check-the-box, get-through-this-quickly level of security, that would then go to the VP of that other team to sign off on the risk. Now, when you think about risk sign-off, that sometimes gets treated as something that people rubber stamp, but in practice it wasn’t. When you raise it up the chain to a VP and say, “This is now your neck on the line. This is what’s going on,” what we found happened was that those VPs got into a meeting with us. We explained the reality. We again explained, “Hey, if you make these types of tweaks or change the relationship in these ways, you actually drive down risk dramatically.” And suddenly, there’s a lot of traction on doing those things. And in some cases, it did end up being, “Yep. This is a risk we want to take. This is something we’ll sign off on.”

Organizational elements of security and accountability

Michael: 00:37:53.107 Other ways you can look at organizational elements of security, accountability, and authority are, again, to not try and shoulder this blame or this mountain yourself as a security team. Shine a light on the reality and bring accountability to others. Another item that we used at Twitter that was very effective was security scorecards. Security ratings of teams. So we didn’t have all elements of endpoint security fully automated in their entirety. We were, of course, pursuing it. But ultimately people had to reboot their system or accept an update. What we did is we measured that. We measured the level of security in different teams, and then we reported on it, rolled up to business leaders. And I reported that to the C-suite. I reported it in front of those peers. And they knew it was going to get reported, and I was the messenger. I said, “Hey, we’ve established that security is important. We’ve established these are the things we need to do. Here’s how we’re doing.” And that’s all I had to say.

Michael: 00:38:58.563 Imagine what happened to those people at the bottom. Nobody came after them and said, “Hey, you need to do better.” They knew as a leader of the business that they were at the bottom of this thing and that it mattered. They went back to their teams and said, “Hey, I don’t actually know what this is, but I want you to fix it because it’s important. We have trust amongst our leadership. You need to go and patch your machines,” or whatever the issue might have been. So it was a good example of saying, “You don’t have to be the person running around bothering people saying, hey, you need to do this. You don’t need to be a security team that takes the blame that other people aren’t doing things.” Build in measurements, build in controls, and use those to shine a light, and the business will respond.
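[Editor’s note: the scorecard mechanic is simple enough to sketch. This is an illustrative example, not Twitter’s actual tooling: measure one per-host control, roll it up by team, and rank teams worst-first for the leadership report. The field names are assumptions.]

```python
# Sketch: per-team security scorecard rolled up from host-level data.

from collections import defaultdict

def team_scorecard(hosts):
    """hosts: iterable of dicts like {"team": "infra", "patched": True}.
    Returns (team, percent_patched) pairs, worst team first, which is
    the order you'd present to the C-suite."""
    totals = defaultdict(lambda: [0, 0])  # team -> [patched_count, total]
    for h in hosts:
        totals[h["team"]][0] += int(h["patched"])
        totals[h["team"]][1] += 1
    scores = {team: round(100 * ok / n) for team, (ok, n) in totals.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

The point of the worst-first sort is exactly the dynamic described above: the messenger only presents the numbers, and the ranking does the persuading.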

Authentication, remote work, workflows

Michael: 00:39:41.333 So a few other things that I want to mention, as we’re watching the time before we open this up for a conversation, which I’m excited about. Some of the things I’ve talked about here are technical. We need to think about the right types of controls for things as obvious as authentication. We need to think about the right kind of controls to enable people to work from anywhere, work from coffee houses, and still prevent lateral movement. The whole notion of this abstract zero trust idea is a fundamental concept, which is that you should not inherently be able to do things just because you’re an employee or just because it’s a device or just because you’re in an office or not. We need to actually have the right security checks at each step. And so that is our job from a technology side, to build those in and to crush these situations where we’ve told people no.

Michael: 00:40:39.908 On an organizational side, we need to build workflows that make it easier for people to do the right thing and to give that kind of Paved Road notion. That the easy path is also the secure path. But also, there’s the notion of kind of the compliance sides of things. And when we talk about compliance and security, sometimes, again, it’s a dirty word. And I think that’s, again, because of our failures previously. We’ve unfortunately in the past had companies where they say, “Just be compliant. I don’t care about anything else. Get the box checked. That’s all the money you get for your funding, etc.” So compliance got this dirty word of like, “Oh, well, we’re compliant, but it doesn’t actually mean we’re secure.” Which is true. Having compliance does not mean you are necessarily secure. But not having compliance is also bad for many other reasons, not to mention just the business friction you get.

Achieving compliance to enhance security

Michael: 00:41:38.549 Compliance brings predictability. And if you do compliance well, you can increase and enhance your security process. For example, if you take on something like a SOC 2 or an ISO 27001, you’re going to be building standardized practices that build predictability across your business. It’s going to be building shared responsibility and documented processes. All things that are incredibly valuable for security. If you find, when you’re doing that, that you’re doing something just to check a box, that doesn’t provide security value, that’s actually an opportunity for that security engineer or that other team member outside of security to raise that back to your security leadership. And you can definitely fight those things, because nothing about compliance requires implementing meaningless checkbox stuff. And I’ve done this many times at Twitter and said, “All right, auditor, this is what the control is asking for. What that means is you’re asking for this item to be achieved, this security principle to be achieved. We are achieving it this way because that’s what matters to our security, to our technology stack and framework.” We’ve even had things that are not attainable as written, and we had to translate that, because the prescriptive language is often written for older technology. So you can actually use compliance in a really powerful way to help your business move forward. I’ll tell you, at Altitude, we brought in SOC 2. We’re SOC 2 Type II certified. And we did that at eight people. You can do it early on, and you can establish norms that help you grow in a really strong way.

Testing security in practice

Michael: 00:43:18.779 And just a few more notes as we wrap up here. The other thing that I want to mention is, in theory, it works. In practice, sometimes it doesn’t. In theory, everything works. Always, always, always test it live. One of the best learning opportunities I had at Twitter, in the first few months that I got there, we did war games, basically. We created a scenario where only a couple of people knew that we were going to do a fake attack against the company. What we did is we had one trusted insider who authorized their account to be used in such a way. We asked one database administrator to retrieve a piece of data from this account. And then what we did is we said, “All right. We’re going to create this scenario.” We went back and sent an anonymous email to press that said, “I’m a reporter. I have a hacker who has contacted me that says they breached Twitter systems. Here is a screenshot of the data to prove that they have it. I’m writing a story in two hours. Do you have any comment?”

Michael: 00:44:30.422 Imagine what that would be like if that happened to your company. And so we sent that email to press, and we said, “All right. Let’s see what happens.” Where does that email go? Where does it bounce around through the organization? How long does it even take to get to the security team? It finally gets to the security team. “Okay. All right, guys. Let’s mobilize an incident response. What are you going to do?” And they’re like, “All right.” They’re looking at playbooks. They’re finding out very quickly, like, “Oh, we thought we had this, but we didn’t,” or this or that. They bring the right teams together to have the conversation. So now they’re pulling in the technology teams that were involved. They’re doing research to figure out what’s going on. Like, “All right. What system could this have come from? Is this real? Is this fake?” They’re going through all these paces.

Michael: 00:45:09.055 Now one thing I’ll tell you is incident responses can be very costly in terms of time. So one of the things we did here was, at different intervals, we kind of shortcut that to speed it up to get to the value of what we’re doing. We said, “All right,” after several hours. “Hey, guys. This is a test. We’re trying to demonstrate what things we can get better at,” etc., etc. I was really thrilled to see that that was okay. They’re like, “Hey, we just want to get the answer. This is a challenge.” And we got through this whole process. And at the very end, we did find out what system they had used, etc., etc. We are able to solve it. But along the way, we had so many moments of, “Oh, my gosh. I wish we had this ready or this didn’t work.” And that is perfect because when you find those things out, you’re like, “oh, I wish we would have known this before a real situation.” And that was exactly it.

Michael: 00:45:57.596 So while you’re thinking about changing your perspective on security, while you’re thinking about how to build in core technical controls that work automatically in crushing these previous approaches of no, while you’re thinking about building organizational structures and workflows that actually enable security to move fast in your growing business, also go back at the very end and say, “Let’s do some examples. Let’s do some war games where we actually do something like this or let’s do some tabletop scenarios where we talk through it in a two-hour window.” All of those things are going to give you incredibly valuable pieces of data that are going to help you be in a much better position.

Security takeaways to consider for your company

Michael: 00:46:35.984 So as we open this up for a little bit of a conversation and questions, I want to leave you with a couple of items to think about yourself. In your company, what is your security culture? What is the feedback from your employees and your business leaders? Have you asked them what they’re concerned about most? What are the top three disaster scenarios for the company as a whole, and do you have plans for them? Do you feel like your leaders have accountability for security even though they’re not in the security team? And do they even know what that is? And then lastly, if you went on vacation, would security continue working? And what about if your company doubled or tripled in size? How would you scale to that? So all good things that will hopefully start to pull you back to, “All right. What fundamentally matters, and how do we move the ball of security in a way that works?” Because if security constrains the business such that it goes out of business, that doesn’t help anyone. But if we have a secure business that innovates and wins, that’s amazing. Reed, I’ll turn it back over to you. Thank you so much for the time thus far. And I can’t wait for some good, wild questions.

Q&A time

Reed: 00:47:45.445 Awesome. Thank you again, Mike. So far, this has been an excellent conversation. I’ve been doing security a long time, and I’ve even picked up some things. Like, “Hey, I should try and think that out.” So we have a few questions from the audience. Again, audience, please feel free to use the Q&A functionality to ask some questions live and get those answered. We have one question asking about compliance, saying that a lot of times in the compliance world, your vendors will actually provide the SOC 2 or the compliance reports of their cloud provider to you, as an example of, “Hey, we’re compliant because our cloud provider is compliant with these things.” So what do you do to push back on that vendor or supplier to make sure that they’re actually setting the proper security controls that will actually meet SOC 2 or any of the compliance requirements you might have?

Michael: 00:48:39.743 It’s a good question. It’s a sign that we need some more maturity as an industry. Because obviously, I think as we look at that as cybersecurity professionals, we go, “Well, that’s the wrong answer.” [laughter] Like, “That doesn’t apply.” And if so many companies are doing that, they don’t yet know that that is not the right way of answering it. I think what that gets back to is many companies then say, “Well, if you’re not able to provide a standardized attestation of your security processes,” which is the easy path, “we then have to take the hard path.” And the hard path is the 100-question third-party questionnaire. One thing that we tried to do at Twitter was we joined an alliance of other companies that said, “How about we not all have our individual sets of questions, but we make one set of questions that people can answer once?” We were a part of this creation called the Vendor Security Alliance, and it still exists. A good number of tech companies got together and said, “Here are the standard questions.” And if you are a vendor, that’s a good place to start. You could build your answers to those questions and give people that. But also, if you’re a vendor, it’s going to behoove you to get SOC 2 certified. It’s going to help streamline your deals. It’s going to help build your security process. And I guess just to address the question directly, if you do receive a SOC 2 of your cloud provider from a vendor, you, unfortunately, have to go back to them and have a conversation and say, “I hope you realize why this doesn’t apply. We actually care about what you do with our data, aside from where it sits in the cloud.” And that will either get you to questionnaires or conversations or threat models. Which is not as fast, but it’s what you need to do to protect your data.

Reed: 00:50:28.832 Awesome. Have another question asking that, hey, we’re at an early-stage start-up trying to understand what to focus on. Should I focus on building kind of a security team or should I start learning about some security practices instead? And what resources should I look to or study from?

Michael: 00:50:48.565 Yeah. At the very beginning, you’re kind of just saying, “We’ve got to just start doing the basics.” The blocking and tackling of cybersecurity will make tremendous gains. So focus on things like: as a company, we have the first few employees, we have less than 10 employees. You’re going to go to Google, and you’re going to enforce 2FA on all your users, and you’re going to buy them hardware tokens, and you’re going to send them to every one of them and say, “This is how this company is going to work. We are going to use these, and we’re going to eliminate the concern of phishing.” Because you will never win against phishing unless you really have a hardware token, like a FIDO token. Do it at 10 people. Because you’re going to find people like, “I don’t know how it works.” You’re going to build your guidance once to tell them how it works. And then when you’re 100 people, you’re like, “Wow. How do we have tokens for 100 people? This is amazing.” Do that. Think about patching. You’ve got to have a regular way that you’re always patched. The reason people get breached is not the zero-day. If you look at the Verizon data breach report, the average breaches are happening because of the 100-day or the 365-day. People are getting breached by things that have had patches available for quite some time. And then I think that the other thing about an early-stage start-up is to think about the core controls in what you’re building. If you’re building a microservices app, start to ask yourself, “Why can one part talk to another?” We should have this notion of authentication from the beginning. And then lastly, resources to look at? Look at the SANS lists, the OWASP lists, the CIS secure hardening benchmarks. Those are all good prescriptive lists that are generic and very applicable.
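[Editor’s note: the “100-day, not the zero-day” point lends itself to a tiny sketch: flag anything whose available patch is older than your SLA. The 30-day SLA and the record fields are assumptions for illustration, not a recommendation from the talk.]

```python
# Sketch: report hosts whose available patches have been out longer
# than the patching SLA. Breaches cluster around old, unapplied patches,
# so this list is the one to drive to zero.

from datetime import date

def overdue_patches(systems, today, sla_days=30):
    """systems: iterable of dicts like
    {"host": "web1", "patch_released": date(2022, 1, 1)}.
    Returns hostnames whose pending patch exceeds the SLA window."""
    return [
        s["host"]
        for s in systems
        if (today - s["patch_released"]).days > sla_days
    ]
```

Feeding this into the team scorecard idea mentioned earlier in the talk would be a natural next step.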

Reed: 00:52:31.687 Awesome. So on that same topic, I just want to take it a little bit further from my own question here. So you talk a lot about patching and about 2FA things. Obviously, a lot of people don’t have a security team and they can’t spend a lot of time on those things. What are kind of like the top three things that you recommend that people kind of focus on? If you only had time to do three things to improve security, what would those be?

Michael: 00:52:54.332 One of the things would be actually understanding where your most critical data lives. There are a lot of different things you can start to worry about, but number one, what’s the most important thing to you, so you can draw your attention to that? Second would be strong authentication. Understanding, how do your users connect to your systems and services? How do your employees connect into your systems and services? How do the services connect to each other? And then number three would be, if you’re a small security team, using other experts. This is not something that you’re going to be able to scale on your own. But the most important thing is to say, “This problem is the most important one to solve, and I will solve it with the expertise of someone else that can do it.” At Twitter, I did this a lot. Like, “Does this problem need to be solved by the few engineers I have?” And of course, we had lots. [laughter] But still, relatively speaking, the few. “Or does this need to be solved with money?” Because I can take money and I can go get something. And if it’s the right solution for the right problem, then that’s perfect, too.

Reed: 00:54:05.369 Awesome. You mentioned a lot of things that have worked in the past, but I think it’s just as important to learn from our failures and the things we’ve tried that didn’t work, so that we’re not repeating the same mistakes. So what are some security improvements or enhancements that you’ve tried in the past that just ended up not working or were not worthwhile at all?

Michael: 00:54:28.928 Yeah. At Mozilla, I attempted to implement a data classification policy. And I think everybody’s like, “Oh, that makes sense. You should have that.” And so we documented what the criteria were for different data security controls. Then we went back to the company, presented it, and said, “Here are the controls, and here’s how you should label your documents with the right classification label.” The only document that was ever labeled was the example document. Nobody did it. And so the learning here was that the security control was right. Nobody’s going to argue about that. The expectations of users were wrong. The users are like, “No. We’re not going to do that.” And that was a moment where we realized, “All right. If we ever want to pursue this, it has to be automatic. It has to be transparent to the user, because this is not a reasonable expectation of them.” Another one that I remember from those days that is similar, and many of you might remember this: do you remember the insecure content warnings in the browsers?
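[Editor’s note: the lesson, that classification must be automatic rather than a label users apply by hand, can be sketched as a toy content scanner. The patterns and tier names below are illustrative assumptions only; a real system would use far more robust detection.]

```python
# Sketch: automatic, transparent-to-the-user classification. The system
# tags content by what it contains, instead of asking users to label it.

import re

PATTERNS = {
    "confidential": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number
    "internal": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email address
}

def classify(text):
    """Return the most sensitive tier whose pattern matches the text."""
    for label in ("confidential", "internal"):  # most sensitive wins
        if PATTERNS[label].search(text):
            return label
    return "public"
```

The design choice mirrors the anecdote: the default does the work, and no document depends on a human remembering to apply a label.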

Reed: 00:55:32.357 Yep.

Michael: 00:55:33.082 We prompted users with this. “This page contains insecure content. Do you want to load it? Yes? No?” That is a false choice. Users have no idea. They’re like, “I don’t care. I just want to see who won the game.” So again, it was another thing where we put unreasonable expectations on the users. Which gets me to a very important point: defaults rule the world. Whatever you make the default is going to have a huge impact on security or usability. And don’t fool yourself into thinking that giving users a choice changes the equation. You have to figure it out. And it’s hard, and it matters a lot.

Reed: 00:56:07.615 So you mentioned that, and I think that’s a great segue, too. Okay, you’ve done a lot of stuff. You’ve talked about these tips for navigating this kind of hyper-growth at companies. Are there a few things that you wish young Michael had known back then? Like, “Okay. I’ve been doing security now for 10, 15-plus years. What could have changed back then to make my life so much easier now?”

Michael: 00:56:37.818 One of the things that’s going to be most helpful as you grow in your impact in security, and as the company grows, is, after you start in the security details and the weeds and the fundamentals, moving on to thinking about, “How is the business successful, and how can we do things that help the business be more successful and move faster? What matters to the business?” The more you can be an intermediary who translates between security and technology fundamentals and business objectives: how that impacts your bottom line, your top line, your P&L, how that impacts risk, whether or not you need a D&O insurance policy. All of those things are where you become even more valuable. And one way to start to think about that is, if you are going to go sit in front of your board or your board audit committee, which you should be doing, what are you going to tell them? Because if you tell them nuanced things about patching, it’s going to be that “so what?” thing again. And that ability to up-level things into how the business can be successful, that’s why the CISO role exists, and that’s how we take it from, as you might have heard, a little C to a big C.


Reed: 00:57:48.177 Awesome. Well, that’s all the time we have for today. [music] Thank you to everybody who was able to join us. And special thanks to Michael for sharing those insights into how best to balance security and agility at today’s fast-moving companies. As always, this recording will be available on the Teleport Security Visionaries website in the coming days. Our next speaker as part of the Security Visionaries series will be on April 20th at 10:00 am Pacific, with Keren Elazari presenting on Innovation Lessons We Can Learn From Hackers. We hope that you are all able to join us again then. Until then, have an awesome day and a great rest of your week.
