
Infosec for startups - Overview

Key topics on Infosec for startups

  • One of the harder jobs in security is to be the first security person at a startup since startups typically have various types of security problems, and you can’t expect one person to cover all of those fields.
  • Considerations when evaluating security consultants are the breadth of services being offered and the billing model being used.
  • One way to describe the SOC 2 standard in the least number of words is: do you do what you say, and do you say what you do?
  • Seven best practices to pass SOC 2 are defined in LVH’s The SOC 2 Starting Seven blog post.
  • The Crypto 101 e-book is an introduction to cryptography basics for application developers.
  • When determining the programming language to solve a given problem, it’s important to use the right tool for the job.

Transcript

Ben: 00:00:02.803 Welcome to Access Control, a podcast providing practical security advice for startups — advice from people who've been there. In each episode, we'll interview a leader in their field and learn best practices and practical tips for securing your org. For today's episode, I'll be talking to LVH. LVH is a principal and co-founder of Latacora, a security consultancy focused on creating security practices and maturing in-house capabilities. Teleport has been partnering with Latacora for a number of years, and we've found them valuable as we've grown. I was fortunate enough to work in the same office as LVH during my time at Rackspace. Along with enjoying LVH's hack day projects, I always learned a lot about new security and encryption technologies. LVH, thanks for joining us today.

LVH: 00:00:48.347 Thanks for having me.

Having a security practice in a startup

Ben: 00:00:49.340 To kick things off, can you tell me what it means to have a security practice in a startup?

LVH: 00:00:54.452 Great question. So I think one of the original challenges that we saw when we started Latacora is that there were lots of startups who were trying to do security things. For many of them, that might mean — I'm going to go get a pen test, right? For many of them, unfortunately, if you look a year later at that pen test and its impact on security for the company, it's not necessarily that valuable. And I don't think that's an intrinsic problem with pen tests. I think that's a problem with the context of the pen test, because if you go look at a large, mature organization with a large, mature security practice, they're going to be using pen tests in a completely different way. It's not the entirety of their security practice. They're looking at a pen test almost in the sense of hypothesis validation. It's like, "I have this idea of what might be wrong, and I'm going to ask a third party to go double-check and make sure that I have a decent [inaudible] on that." And so, as a consequence, you can't really expect to get the same results if you just copy the "I'm doing the pen testing" part but not the rest of the proverbial owl. The idea behind running a security practice — I mean, it's very broad, but effectively it is to identify and then mitigate risk for the company. What does that mean? Well, that can take a lot of angles, which is one of the reasons why I think it's very challenging for individuals to do that.

LVH: 00:02:16.658 One of the harder jobs in security is to be the first security person at a startup. It's not because people are just not capable or something. It's because we say "security person at a startup," but what does that mean? Startups don't have a security person problem. They have an application security problem on Monday; they have a cloud security problem on Tuesday; they have an IT security problem on Wednesday; they have a compliance problem on Thursday; and a network security problem on Friday, right? And I say application security as if that's one field. The person who is going to be competent at auditing your web front-end, or your mobile application, or — in the case of Teleport — a bunch of Go backend services with fairly complex cryptography going on: they're all different people. You might be able to get one or two expertises in one person; if you're really lucky, you can get three or four, but you can't reasonably expect someone to cover all of those fields. There's nobody who's going to be excellent at AWS, and Azure, and GCP, and also knows everything there is to know about SOC 2 and HIPAA and whatever it might be. And not every startup is going to need every one of those. You can hire the world's best Kubernetes security expert; the day that you have an Android security problem, it doesn't really help you. I guess I didn't really answer your question directly, but the idea behind a security practice is to identify and mitigate risk for an organization — what does that mean specifically? For us, we do that for startups, so we're going to be doing startup-looking things, right? We're going to be doing that on AWS most of the time, or at least on a major cloud provider. We're going to be doing that typically with reasonably modern languages. Most of our customers are on [inaudible], Python, or Go or something, etc., etc.

Ben: 00:03:53.589 And then is there a stage at which people come to you? Is there an inflection point where they're like, "Okay. I understand what a security practice is. I need to bring in an external party"? At what point do you see people pick up the phone?

LVH: 00:04:07.504 It used to be the case that we started working with people where the low-water mark was about 12 engineers, and the reason for that is because Latacora's remit is so broad. It has a tendency to expand horizontally. By which I mean, for example, we have a compliance practice, and we have a cloud security practice, and an AppSec practice, etc., etc., and there's no particular reason why a new practice wouldn't exist. So it tends to expand a little bit. And as a consequence, there's a minimum viable team on the other end to be able to just drink from the fire hose. And it was always really hard to pick, like, "Could we have done a smaller version that only did AppSec?" Yeah, but then we'd only be doing AppSec. And the problem that we wanted to solve was to do security services for startups — to build security practices for startups. You can't do that by only caring about AppSec, so it tends to naturally expand. What we've been doing — I want to say we've been building it out for maybe the last year or so — is a program that we call Latacora Jumpstart, where we're working with far smaller customers. The smallest customer that we started with, I believe, had three people when we started. And there we're really focused on just the advisory work, plus all of the stuff that we have tooling for in the background, because that's relatively easy — it's expensive for us to maintain and develop, but it's reasonably economical for us to run for an extra customer. We focus on the advisory work because they're still building the app. Doing a full-bore assessment for six weeks is kind of pointless, because I'd be looking at a bunch of code that's not going to exist a year from now. And the one thing I have been surprised about with Latacora is the range. So we started at like 15 people. We had a couple of companies go public under our watch, which I would not have ever, ever believed if you'd told me that when we started.

LVH: 00:05:58.063 It's a pretty broad range. I would say that if you're trying to hire a security person, the industry-standard ratio of security people to engineers seems to be somewhere like 1 in 25 to 1 in 50, depending on how security-sensitive and how tightly regulated you are. Of course, you have all the aforementioned problems of which security person you're going to hire. So the minimum viable security team might be, I don't know, three or four.

Questions to ask when evaluating consultants

Ben: 00:06:19.285 So for people who are thinking of hiring someone, and especially bringing in external consultants, what sort of questions do you look for them to ask when they're bringing you on board?

LVH: 00:06:31.503 So I can tell you what makes an easy Latacora customer, which is a slightly different interpretation of the question. Because our remit is so broad, we're trying to find the commonalities between customers and then exploit those mercilessly — and therefore it's AWS, right? If somebody came to us with Oracle Cloud — I'm not saying that I have a problem with Oracle Cloud; I'm saying I don't have enough Oracle Cloud customers to warrant building an Oracle Cloud practice internally, right? And similarly, GitHub versus Perforce. Startups don't use a lot of Perforce, but if you were using Perforce, then I don't have a way to engage with that. Same thing with Slack, same thing with Google Workspace as opposed to Office 365, something like that. When you're evaluating a consultant, what sorts of things should people ask? There are definitely some services out there that I have some questions about, and I think one of the bigger questions is: "What's the breadth of services that you're offering?" And the second part is: "What does the billing model look like?" Because of one of the flaws that we've seen with customers, Latacora is flat-rate. The reason it's flat-rate is that I never want to be responsible for somebody deciding about a particular feature, "Oh, is this dangerous? Is this not dangerous? Well, I don't know, but I'm not going to spend 10 grand to find out." The benefit of having us is that a lot of what we do is modeled after the sort of perfect, mythical unicorn security person who is able to do everything — all those fields. Obviously we're a service, so we get to cheat a little bit on the inside. We don't have a lot of unicorns, so we spread the load across the team. I would say: be very careful about what the things are that a service is allegedly good at.

LVH: 00:08:04.812 There's nothing wrong with someone just being a compliance person. If you need compliance, go do compliance. There are so many companies, especially these days — SOC 2 is table stakes for so many enterprise deals, even smaller business-to-business deals. There's nothing wrong with just, "Go get your SOC 2." I would generally say that most security services are designed by and for security people — and for security people at large companies. Let me put it this way. Let's say you buy EDR, endpoint detection and response. They're going to install a bunch of agents on all your endpoints, with a bunch of log management, it's going to go back to their SIEM, and they're going to do all sorts of analyses on that and maybe alert you. If you don't have a security practice internally, what are you going to do with that information, right? If you don't have someone who is also doing the work — I don't want to say full BeyondCorp — so that you can reasonably respond: let's say the machine's compromised. Okay. What do you do, right? If you don't have the security practice to make sure that you can reasonably, effectively act on that information, then congratulations — you just paid a bunch of money for a thing that gives you anxiety. The only thing worse than not getting alerts is getting alerts. It just doesn't seem useful. And keep in mind, I'm not saying EDR is bad, right? There are classes of companies for whom EDR is great. I just don't think it should be your first security purchase. Same thing with buying your own SIEM. I think that's a really questionable expenditure. It's like a bathtub curve of value provided.

LVH: 00:09:33.561 The first couple of dollars that you spend on a SIEM are fantastic. You go from no visibility to minimum viable visibility. That's fantastic. And then there's a giant chasm of Splunk spend where you're not — and I'm not trying to rant about Splunk. Splunk is a fantastic tool; they're one of the best in the industry. But you could spend so much money on Splunk and get zero freaking value out of it. I see plenty of startups do that. So you've got to have the backing, right? You need a senior security leader to actually go build out the program so that it makes sense for you to have a Splunk — so that you have something for it to plug into and go effect organizational change. Because if it's just a thing that produces Slack alerts, then, I mean, I don't know — you could do that yourself. You don't need to spend money on anything.

Ben: 00:10:20.486 Once you start an engagement, what are some of the things you look out for?

LVH: 00:10:23.871 We start by doing what we call a security architecture review, where we basically just interview everyone, because one of the observations is that a lot of audits are designed to be just that — audits. They're intentionally validating. I'm not saying that's not useful, but it's useful in a different context — it's useful for the hypothesis validation that I mentioned earlier. We just interview a bunch of people to begin with and ask them questions like, "What's keeping you up at night? What's the code that you're worried about?" We've got a whole host of questions across the application, from really high-level stuff — "Who are your users? If you had to describe the stakeholders to me: who cares that this entire application even exists? Walk me through that. What would they do with this? How do they engage with it?" — all the way down to pretty detailed stuff like, "Can you give me a reasonable description of why you think you're not vulnerable to, I don't know, systemic server-side request forgery attacks?" And the answer might be, "We don't have webhook functionality or anything like that." Or it might be, "We deployed Smokescreen." I don't care what the answer is necessarily, but we're trying to do a super broad assessment across all of those things, including overall device management: bring your own device, or tightly locked-down Chromebooks, or something in between? And there are parts of that we have to go validate, because the client might not know. For example, we started asking, "Are you using multiple AWS accounts?" when AWS Organizations was still super new and almost nobody was using it. And a lot of clients were like, "I didn't realize that — I suppose that's possible, but I don't think so."

LVH: 00:12:00.432 There are things that we have to go validate — did anyone actually do that? Same thing on the AppSec side, same thing on the CorpSec side. But most of it is really just asking questions, because it's not an adversarial setting. Nobody's trying to swindle me into believing their security practice is better than it is — kind of like lying to your lawyer or lying to your doctor, it's not exactly a plan for success. So that's how we start, and that informs the roadmap. And then from the roadmap, we start doing things like, "Hey, let's do an application security assessment." We did a bunch recently for Teleport. And it's not the only kind — a cloud security assessment, or: you've got this many apps with this many permissions in your Google Workspace, let's go get that number down. That sort of thing. But the point is, it's informed by business risk. I could just randomly go do that, or I could say, "Hey, sounds like Google Drive is super important for you. Tons of super-sensitive information is in there. It sounds like I should be going through that with a fine-toothed comb." A different customer might barely use Google Drive, barely use Gmail — they're lucky if they use the Calendar once a week — and I'm like, "Okay. Great. Maybe I'm not that worried about how many authorized apps you've got."

Ben: 00:13:08.168 And this is a good segue into a post you shared with me, which is the SOC 2 Starting Seven. I think it covered a bunch of key areas for the SOC 2 compliance process, and things which are not necessarily security — they're more compliance. We could probably touch on all seven of them, but just to give people a high level: what exactly is SOC 2, for people who aren't familiar?

LVH: 00:13:33.527 SOC 2 is an auditing standard — or an audit — I mean, people use the term to mean slightly different but related things. Typically, when a company talks about their SOC 2, they're talking about their SOC 2 audits, meaning either a Type 1, which is a point-in-time assessment of the organization, or a Type 2, which covers a larger span of time. Historically, that's always been a year, but these days we've seen six months, nine months, even three months. The way that I would describe SOC 2 in the minimum number of words is: do you do what you say, and do you say what you do? So the idea is: do you have policy work that reasonably describes what you actually do? Do you actually go do those things? And where possible, do you have evidence to prove that you're actually doing those things?

The significance of SOC 2

LVH: 00:14:23.675 One of the things that people don't always get about SOC 2 is — I feel like SOC 2 has gotten a bit of a reputation; people talk about it with some fear in their voices sometimes, I get the impression. And I understand why, because if they were to somehow miss out on it — if you're B2B, I don't want to say it's a death sentence, but it's really hard to sell without getting a SOC 2. I think part of that is because it's become so commoditized. Even five years ago, when we were starting Latacora, SOC 2 was something you maybe cared about if you sold to a major financial institution. One of the interesting things is that SOC 2 is issued by the AICPA — the organization of, yes, tax accountants. Which sort of makes sense if you consider that, obviously, there are forensic accountants who are trying to find fraud, so they're used to this concept of auditing. But it's still kind of interesting that whoever signs your SOC 2 is a CPA. They can technically do your taxes. They're probably more specialized than that, but technically — regardless, it really doesn't say a lot, right? SOC 2 is not — and I don't mean this in a bad way — that high a bar. It really just says: have you thought about what you do, do you say what you do, and do you then prove that you actually do that thing? For that reason, I don't want to say SOC 2 isn't hard — it's not like it takes no effort at all — but it's a reasonably low bar, and if you do it well — the thing that I always find, maybe not frustrating, but certainly a missed opportunity: there's this historical divide sometimes between compliance people and security people, or compliance people and developers, where a developer is like, "Why are you making me write down all this encryption policy stuff? I'm going to go do good things. Why can't you just trust me to do my job?" And the answer is: because lots of people screw it up. And you might not have thought about this as much as you think, because you assume you're going to do a good job. Have you actually thought about what you're doing, and why? And have you then written that down? When you say you're only going to use AES-256 — one of my favorite examples. Some of the people that we —

LVH: 00:16:20.525 So we've heard, I think twice, maybe three times, where somebody eventually got the report — because a SOC 2 report does not mean you passed with flying colors. There are three levels, and the words are escaping me right now, but it's basically: pass with no comments, pass with comments, fail. You have to pull off something special to fail — pretty much you'd have to be an outright fraud, and you would know if you were going to fail. Most reports are just pass with no comments, because issues end up getting remediated during SOC 2 prep or during the Type 1. Sometimes there's a pass with comments. Really, the bar being set there is relatively low. There are pretty much two standards that are relevant here: the TSC and SSAE 18. Honestly, for compliance documents, they're not impenetrable. You could go read them, and you could reasonably learn something from them. The vast majority of the time, though, they describe some organizational controls and a handful of technical controls. I think it's the TSC that mandates anti-malware, for example. As for the level of specificity — I think a technical person would not describe those controls as specific. It's just saying, "You need some anti-malware." Okay. Well, I've got a bunch of Windows machines. They're running Windows Defender. Is that not anti-malware? I think it's anti-malware. It's described as anti-malware. And if, for whatever reason, you're running a Mac, that's where it gets weird. Lots of people buy anti-malware software or antivirus for Macs because "Macs don't come with antivirus" — and that's not really true. There's XProtect, which Apple builds in as anti-malware. If Windows Defender counts, XProtect definitely counts.

LVH: 00:17:54.438 And so there are little things like that. You effectively write down what your controls are. An auditor will go through them with you, review them, and make sure that — if they're auditing to a specific standard, which almost always is going to be a combination of the TSC and SSAE 18; technically you don't have to, but that's de facto what happens, at least for small organizations — they're going to come back and say, "Hey, give me evidence for controls X, Y, Z and the other thing." So, for example, you might say, "We will check our cloud infrastructure and audit it against commonly known misconfigurations, and we will do so on a weekly basis" or something. And the auditor will say, "Great. Prove that you do that on a weekly basis." And the evidence for that can be really broad, right, because at that point you're talking to a human. You pretty much just have to convince the auditor that they're going to sign their name to something that is reasonably descriptive of reality. If you say, "Yeah, we've got this bot. It runs. It dumps some stuff into Slack. It's happening on a weekly basis" — great, that's fine. There are no specific technologies that you need — you don't need endpoint monitoring. There's tons of stuff that I've heard people say. One of my favorites was a client asking, "Hey, what do you like for host intrusion detection systems?" And I'm like, "I'm sorry. I don't like any host intrusion detection system. Why would you ever — why would you ever —?" In the context of this particular customer, it really made no sense for them to deploy HIDS. I'm not casting aspersions generally in the direction of HIDS. And it turns out that it was literally just a misunderstanding with their auditor. Their auditor was like, "Says here in your — you have a document with an inventory; it says you've got a bunch of hosts. It says here that you're going to do something to detect intrusions for them, with an MTTR, an MTTD. So therefore it stands to reason you have some kind of system to detect the intrusions on the hosts."

LVH: 00:19:42.137 And so literally they'd just stumbled onto "host intrusion detection system" not realizing that it was a term of art — me nearly blowing a vein in my forehead. Turns out it was fine. They ended up having intrusion detection to the auditor's satisfaction. But that's really common for SOC 2. Auditors will say a thing, it will get interpreted very literally, and it's not necessarily a thing that is actually required. It could be that that same auditor last week was working with an organization 1,000 times your size, or with a grocery store, and the controls that they come up with are completely different and make no sense for your particular organization. So there's a little bit of care and feeding for that process.
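For a concrete picture of the "bot that dumps evidence into Slack" pattern LVH mentions above, here is a minimal sketch in Python — one hypothetical S3 check posted to a placeholder Slack webhook, not anyone's actual tooling:

```python
import json
import urllib.request

import boto3

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def flagged_buckets():
    """Flag S3 buckets whose public access block is missing or disabled."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)
            if not all(cfg["PublicAccessBlockConfiguration"].values()):
                flagged.append(name)
        except s3.exceptions.ClientError:
            flagged.append(name)  # no public access block configured at all
    return flagged

def post_to_slack(text):
    body = json.dumps({"text": text}).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":  # run weekly via cron or a scheduler
    bad = flagged_buckets()
    post_to_slack(f"Weekly S3 audit: {len(bad)} bucket(s) flagged: {bad or 'none'}")
```

Run on a weekly schedule, the timestamped Slack messages themselves become the kind of evidence trail an auditor can sample.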

Seven best practices to pass SOC 2

Ben: 00:20:23.004 Yeah. So your blog post covers the seven — I think they're just great best practices for any organization. And I think the thing that's sort of interesting is that a lot of what it covers — at Teleport, that's actually just IT administration. It's not developer work or cryptography or what you'd necessarily think of as traditional security.

LVH: 00:20:45.456 Yeah, I'd say that's fair, and I think part of the reason is that SOC 2 is a compliance standard, and a lot of compliance standards are focused on: what are people going to do? What sort of access control mechanisms do you have? How do you know that that is actually happening? As a consequence, a lot of the controls end up being more in what you might call the IT space. So we actually have a fully-fledged pillar for that — we've got application security; SecOps, which is cloud security and a couple of other things; and what we call CorpSec, corporate security, which is mostly IT — mobile device management security, managing Google Workspace, etc., etc. As a security person, I would love to be known for the ninja alien space hacker wizard vulnerabilities that we find, but the truth of the matter is, I think by body count, CorpSec might be leading, because a lot of the problems they find end up being these company-ending events caused by what feels like a relatively minor misconfiguration. For example, on the AppSec side, let's say that we find Cross-Site Scripting, right? It depends on context — sometimes it might be pretty much nothing. We'll find this bug and, okay, fine, you fix it. But it's pretty difficult to go from a Cross-Site Scripting vulnerability to "startup is over now." Whereas one of my favorite bugs that CorpSec keeps finding is a self-service Google group that allows you to add yourself, and then there's some critical third-party service that is only protected by a password and has an account recovery process involving that group. So you add yourself to the group, and suddenly you have escalated privileges, just like that. That one we've found to be way worse in terms of company impact. And then, to your point, a lot of it is just really important from a [inaudible] — a GRC, governance, risk, and compliance — perspective, and it's, yeah, really helpful. So we have a full-time person for that who actually started their career as a security engineer. They're properly technical; then they went on to solving really hairy compliance problems, and now they do that for our customers.

Zooming in on two best practices

Ben: 00:22:48.377 I think two recommendations stood out to me in this blog post. One is using multiple AWS accounts, and the second is making sure CloudTrail is turned on and using AssumeRole. Can you say why those two things are so important for many startups who run on AWS?

LVH: 00:23:07.802 The main reason behind the multiple accounts thing is that we've found that, in many cases, it's really hard to get people to do a great job of managing AWS IAM. People just write a particular permission, it's super broad, and they keep adding infrastructure and infrastructure and infrastructure. And before you know it, that ability to read from every S3 bucket is suddenly far more terrifying than it was when you agreed to it. Add to that that the tooling within AWS for minimizing your permissions is nowhere near as mature as GCP's. GCP decided to do a very good job of solving a simpler problem, and AWS has taken much longer to solve a much harder problem. By which I mean, AWS gives you far more granularity: there are plenty of very granular IAM constraints that you can express in AWS that GCP just has no option for at all. Whereas in GCP — I'm sure anyone who's used GCP for any amount of time has logged in and looked at the IAM — there's a button there; it says remove — I forget exactly what it says — like "remove permissions" or something like that. It's very user friendly. It's very hard to miss, honestly. And part of the reason they can do that is simply because they have fewer permissions and a far less complex [inaudible].

LVH: 00:24:22.553 One particularly hard boundary in AWS IAM is between accounts, and we've found that multiple accounts make a very good first pass: if your engineers want to mess with a particular service, great — give them an AWS account, right? It's reasonably well sandboxed. It doesn't really matter if you're clicking around in the AWS Console and it tells you, "I'm just going to create this one cluster" — yeah, I know: you created a cluster and a VPC and a security group and like five IAM roles, and I don't have a mastery of everything that just got created. That also makes it pretty hard to tear down again, but the wizard is user-friendly enough. Whereas if you tell someone, "You're not allowed to touch the service until it's fully Terraformed," you've basically just told them not to go mess with services at all. So we think multiple AWS accounts are a really, really fruitful way to get super-basic separation — almost product-level, or per-engineer, or dividing dev and prod. And conveniently, it means you don't have to do a ton of work now, because hopefully you're already thinking about whether you're in dev or prod before you drop the tables. So it's not that much extra mental bandwidth being asked for. But on the flip side, from a security perspective, I get an extremely strong guarantee that things are going to stay separate. And similarly, if an auditor comes and asks, we don't have to spend hours talking about AWS IAM policies. We can just say, "Yeah, separate accounts" — completely separate auth domains. It's not even a problem. So it's just a question of how much value you get for how much work you have to put in, and multiple AWS accounts are such a no-brainer.
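For what it's worth, a minimal sketch of how cheap that separation is via the AWS Organizations API — the account name and email below are placeholders:

```python
import boto3

org = boto3.client("organizations")

# Creating a member account is a single API call; the new account comes with
# an OrganizationAccountAccessRole the management account can assume.
resp = org.create_account(
    Email="aws+sandbox-alice@example.com",  # hypothetical root-email alias
    AccountName="sandbox-alice",            # e.g. a per-engineer sandbox
)

# Creation is asynchronous, so poll the request status.
status = org.describe_create_account_status(
    CreateAccountRequestId=resp["CreateAccountStatus"]["Id"]
)
print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```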

Ben: 00:26:09.404 And for the prod account — I think we do this — if you use AssumeRole, you get more audit events, as opposed to a shared login or other —

LVH: 00:26:19.185 There are a bunch of reasons why we like AssumeRole. One problem is, if you're using AWS access keys directly, they're more or less necessarily sitting in your home directory, generally in plaintext. I know you mentioned that you'd like to talk about supply chain attacks at some point, so I'll try to leave that one for a little bit later. To your point, you mentioned that you get more audit events — that is certainly true. In particular, it becomes easier to start doing IAM minimization down the road. Back when — the Segment blog — Segment, before they got acquired, had these wonderful articles about their AWS environment. And their AWS environment was a thing of beauty. I'm sorry — I said "was"? I assume it still is. They were doing all of these super great things. But one of the challenges is that they're showing you the end state, right? And I get why; this is not a criticism. They wanted to entice people to come work at Segment, and it was pretty effective. What they're very often not talking about is how you get there, right? What are the minimum viable things that I have to do along the path — that I can plausibly get engineers to do tomorrow — that are going to materially mitigate risk? Account separation: fantastic first example. AssumeRole is the next step. Why? Because if you've got a bunch of roles that you're using reasonably judiciously — even if you're assigning all of them star permissions — just by the fact that you're separating them out, and they have names I can reason about, things become a lot easier later. You don't have to make all the decisions up front and know what every single person in your company will ever need. You can do that down the line, one at a time. We have a project where we're doing analysis on CloudTrail in order to infer — similar to what GCP does — what the permissions ought to be. You can do all those things, but you don't have to do them today, right? A lot of our programs are set up like that: you've got to start somewhere, and if you start by looking at what Segment has, I think it's really likely that you just give up, because it takes a while to get there. It's a really impressive setup.
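A minimal sketch of the AssumeRole step in boto3 — the role ARN and session name are placeholders; the session name is what gets recorded in CloudTrail, which is where the extra audit events Ben mentions come from:

```python
import boto3

sts = boto3.client("sts")

# Trade long-lived credentials for a short-lived, named role session.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/deploy",  # hypothetical role
    RoleSessionName="alice-deploy",                   # recorded in CloudTrail
    DurationSeconds=3600,
)["Credentials"]

# Use the temporary credentials; they expire on their own.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(ec2.describe_instances()["Reservations"]))
```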

The best practice of evaluating VendorSec

Ben: 00:28:15.398 Shifting gears to another one of the Starting Seven: you mentioned VendorSec. And the thing that piqued my interest is how you talked about evaluating even which Chrome extensions can do certain things on the DOM, which was a more fine-grained review than I've seen previously. Can you talk about what vendors you look out for, and things that might make you think, "That's a bit suspicious"?

LVH: 00:28:43.769 It is definitely true that our VendorSec process is more in-depth than a lot of others. For example, we've found SAML bugs in vendors for clients, and I doubt that the median VendorSec process involves auditing for several months. That said, the extension one was actually reasonably easy. There's a tool called CRXcavator — CRX as in the Chrome extension format: if you were to ship a Chrome extension, it would be in a file, and that file would be called a CRX file. And if you've never written a Chrome extension before: the access a Chrome extension has is described in a file called manifest.json, which lives at the top level of that file. You can simply inspect that and then know: if I allow this Chrome extension, it's going to have effectively Cross-Site Scripting on every website. The star permission is basically "on every origin, give me permission to inject code," and I think that's what a Cross-Site Scripting attack is all about. There are mitigations, though. For example, there are some extensions that I'm not comfortable running in my production Chrome profile, so I have a separate Chrome profile that I just use for those. There might be a Video Downloader extension or something, and because it works on pretty much everything that uses the video tag, it asks for access on every page — and, yeah, I'm not giving you access to Latacora's [inaudible].
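The manifest review LVH describes can be approximated in a few lines; a sketch that flags broad host permissions in an unpacked extension's manifest.json (the patterns listed are the usual "every origin" ones):

```python
import json
import sys

# Host patterns that amount to "inject code on every origin".
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def broad_permissions(manifest):
    # Manifest V2 keeps host patterns under "permissions";
    # Manifest V3 moved them to "host_permissions".
    requested = manifest.get("permissions", []) + manifest.get("host_permissions", [])
    return [p for p in requested if p in BROAD]

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # path to an extension's manifest.json
        flagged = broad_permissions(json.load(f))
    if flagged:
        print("Requests script access on every origin:", flagged)
```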

Ben: 00:30:12.319 Did you know someone had a social sharing web extension that was very popular, and they made money by rewriting everyone's links to Amazon affiliate links, back when that was a possible thing?

LVH: 00:30:23.214 Yeah, that —

Ben: 00:30:24.569 It did show you some of the — I mean, that's not great, if you can rewrite a link.

LVH: 00:30:28.790 So a Chrome extension can do pretty much anything that Chrome can.

Ben: 00:30:31.577 Another thing that kind of comes up a few times: is it risky to run the 1Password Chrome extension? Or is it better to use the Mac app, or just use it on your phone?

LVH: 00:30:40.946 That's a rough one. It's not near the top of my list of things that I'm super worried about. That said, I don't run the Chrome extension. This is not necessarily an endorsement, but for what it's worth, I do two things: I use Chrome's built-in password manager, combined with pass on the CLI, which is a command-line password manager. It's not particularly fancy. It does what it says on the tin, and you move on with your life.

Ben: 00:31:09.642 Yeah. I mean, I guess talking about secrets, one thing you recommend is using AWS Vault, which I think is a project from —

LVH: 00:31:18.514 99designs.

Ben: 00:31:19.055 99designs. Yeah. Can you describe this project?

LVH: 00:31:21.535 Yeah, absolutely. AWS Vault is 99designs/aws-vault on GitHub. It's effectively a steppingstone — a perfect example of that "slowly improving your IAM practice" thing I was mentioning — because it's really easy to get started. What it does is take your existing AWS access key ID — the thing that starts with AKIA — and your secret, and put them into your operating system's closest equivalent of a keychain. So if you're on macOS, it's literally the Keychain. You can use it to assume roles, which I recommend you always do, because there are weird little pitfalls if you're not using roles — I won't go too deep into that. And then, in order to use that credential, what you do is aws-vault exec, the name of a particular AWS profile — which typically maps to a role — and then, I don't know, aws sts get-caller-identity, which is the AWS command for "tell me who you think I am."

LVH: 00:32:23.194 The beauty of AWS Vault is that it will take that permanent credential living in your Keychain, ask AWS to transmute it into a temporary credential, and then go use the temporary credential. Another benefit is that you can start doing things like — let's say you're rolling out those multiple roles, and you don't have them right now. Right now everything's administrator access, because of course it is. I don't mean that in a snide way — you're a brand-new startup; don't feel bad about it. Nobody has decent access control at that point. What you can do is start getting into the habit. For example: aws-vault exec prod-deploy -- terraform apply, or whatever, right? And even though right now all of those profiles point to exactly the same thing — one account, one role, one everything — down the line you just modify your AWS profile. It integrates neatly with AWS's existing profile concepts in its configuration files. Down the line, you can — effectively transparently, or pretty close to it, without having to modify what people type on the command line — introduce new roles, and potentially new regions, accounts, etc. I think we have multiple scripts called manage_profiles that go edit that config file and modify it to be the thing we need. From that perspective, as long as you regularly run that script, your process never changes, but you just shed a ton of permissions and had to do nothing, which is nice. I like the steppingstone.
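A rough sketch of the mechanics being described — the permanent key lives in the OS keychain, and only a short-lived role credential ever reaches the tool. This is an illustration of the pattern, not aws-vault's actual code; it leans on the third-party keyring package, and the role ARN is a placeholder:

```python
import boto3
import keyring  # pip install keyring; backed by the macOS Keychain, etc.

# One-time setup (normally done by the tool, not by hand):
# keyring.set_password("aws", "default.access_key_id", "AKIA...")
# keyring.set_password("aws", "default.secret_access_key", "...")

sts = boto3.client(
    "sts",
    aws_access_key_id=keyring.get_password("aws", "default.access_key_id"),
    aws_secret_access_key=keyring.get_password("aws", "default.secret_access_key"),
)

# Transmute the permanent credential into a temporary one for a named role.
temp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/prod-deploy",  # hypothetical
    RoleSessionName="prod-deploy",
)["Credentials"]

# Roughly what `aws-vault exec prod-deploy -- aws sts get-caller-identity` does.
who = boto3.client(
    "sts",
    aws_access_key_id=temp["AccessKeyId"],
    aws_secret_access_key=temp["SecretAccessKey"],
    aws_session_token=temp["SessionToken"],
).get_caller_identity()
print(who["Arn"])
```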

Ben: 00:33:56.024 Yeah, and especially as your team grows, it's like a nice —

LVH: 00:33:59.068 Exactly.

The LatacoraAWSAuditRole

Ben: 00:33:59.068 — easy ramp for them. If you can use this tool, it's not a huge change in your process. [inaudible] will ever hold up almost around centralized logging. But I'm going to segue in this into another question. So because we use Latacora, I've noticed in — our AWS account, we have the AWSAuditRole in our sort of CloudTrail logs. What do you look out for and monitor from that role?

LVH: 00:34:23.062 We do two types of auditing — and I'm using the term in the technical sense here, not the CPA sense we were using when we talked about SOC 2. One is what I would call resource-level, and the other is event-level. Event-level would be: ingest CloudTrail and learn that somebody is logging in with the AWS root account, or something like that. The other one is literally calling APIs — like, I don't know, [inaudible] to describe instances — and the audit role is for the latter. So we use it to call a pile of APIs and save all that information. We have a really cool internal tool for this: if you want to call all the AWS APIs, it's actually a graph — you can't just call all of them directly, because they have dependencies, arguments that they depend on, right? So if you wanted to, I don't know, describe EC2 image properties, then you first need image IDs. You have to know which image IDs exist, so you have to first call the API that lists images, etc., etc. And that goes through all of AWS.

LVH: 00:35:21.318 So we've got a tool that figures that out on its own, calls all of AWS, sucks all that information down, puts it in a usable format, and then we run queries on top of that. The reason we have both the event side and the infrastructure side: the event side is nice because it tells me about something literally as it is happening, which is really hard to do on the resource side — you don't know in advance when something interesting is going to happen. But the real power comes from being able to combine them, because it's not necessarily so interesting if I see a CloudWatch — oh, sorry, CloudTrail — event where someone modified some particular security group. Okay, but I need to know what's in that security group before I know whether that's interesting, right? Is that the VPN concentrator? A new rule in the VPN concentrator's group I care about a lot more than some internal security group that's locked down anyway. And so we combine the information from those two places to enrich incoming events and ask more interesting questions.
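A minimal sketch of that event/resource combination: enrich a CloudTrail security group change with a resource inventory before deciding whether to care. The event shape matches CloudTrail's AuthorizeSecurityGroupIngress records; the group naming convention is made up:

```python
import boto3

ec2 = boto3.client("ec2")

# Resource level: periodically snapshot security groups into an inventory.
inventory = {
    sg["GroupId"]: sg
    for sg in ec2.describe_security_groups()["SecurityGroups"]
}

def triage(event):
    """Enrich an AuthorizeSecurityGroupIngress CloudTrail event."""
    group_id = event["requestParameters"]["groupId"]
    sg = inventory.get(group_id)
    if sg is None:
        return f"{group_id}: unknown group, investigate"
    if sg["GroupName"] == "vpn-concentrator":  # hypothetical naming convention
        return f"{group_id}: change to an exposed group, page someone"
    return f"{group_id}: internal group, log and move on"
```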

Ben: 00:36:24.934 And I guess that kind of goes into — I think your SOC 2 [inaudible] mentioned that the UI and the AWS Console are sort of evil, that everything should be infrastructure as code. I don't know if it actually said "evil." [laughter] That's what I got.

LVH: 00:36:38.211 There are things about the AWS Console that are great. There is certain functionality that is either not available outside of it or is very, very difficult to get to without using the console. For example, there are resources that, if you attempt to delete them through the API, AWS will just tell you "nope." Why? I don't know. But if you go delete them in the AWS Console, it will find all of the dependencies and actually do the [inaudible] that you asked for. So there are quite a few things that are significantly nicer to do in the AWS Console than elsewhere. However, two problems. One: don't use the auth portion of the AWS Console. Sign into it with SSO, sure. Or — most people don't know this — AWS makes it sound like you've got two types of credentials: the username and password, which you use to sign into the console, and the access key, which you use to call the API. You can totally sign into the console with the access key. It's just that the API for it is kind of obnoxious to drive by hand. Conveniently, aws-vault has this built in. Just run aws-vault login with the name of your profile, and it will open a browser that's already signed in. It does that with AWS STS GetFederationToken under the hood. You never use the username-password-MFA side of things. One of the things that I — I love AWS to bits. I say this with all the love in the world; I'm using it internally for everything. But what the heck happened with that U2F integration? I don't know. But I'm not going to start talking about it.
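The console-without-passwords flow is reproducible with a few calls; a sketch of the GetFederationToken dance against AWS's documented federation endpoint (the user name and session policy are placeholders):

```python
import json
import urllib.parse
import urllib.request

import boto3

sts = boto3.client("sts")
creds = sts.get_federation_token(
    Name="alice",  # hypothetical; shows up in CloudTrail
    Policy=json.dumps({  # federated sessions need an inline session policy
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
    }),
)["Credentials"]

# Exchange the temporary credentials for a console sign-in token.
session = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
qs = urllib.parse.urlencode({"Action": "getSigninToken", "Session": session})
with urllib.request.urlopen("https://signin.aws.amazon.com/federation?" + qs) as r:
    signin_token = json.load(r)["SigninToken"]

# A URL you can open in a browser: console access, no username or password.
print("https://signin.aws.amazon.com/federation?" + urllib.parse.urlencode({
    "Action": "login",
    "Destination": "https://console.aws.amazon.com/",
    "SigninToken": signin_token,
}))
```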

Ben: 00:38:15.028 So you're saying it's better to use your Single Sign-On provider instead of adding an MFA in the console?

LVH: 00:38:20.600 Absolutely, yes. We are trying to get all of our clients to a point where there are literally no usernames and passwords attached to IAM users. You can have a password on the root user, because you can't prevent that — and then we disable the root user through SCPs in [inaudible] accounts.
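A minimal sketch of such an SCP, using the aws:PrincipalArn condition AWS documents for matching the root user; the OU ID is a placeholder:

```python
import json

import boto3

org = boto3.client("organizations")

deny_root = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRootUser",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        # Matches the root principal of whichever member account the SCP lands on.
        "Condition": {"StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}},
    }],
}

policy = org.create_policy(
    Name="deny-root-user",
    Description="Block all actions by member-account root users",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_root),
)["Policy"]["PolicySummary"]

org.attach_policy(PolicyId=policy["Id"], TargetId="ou-examp-12345678")  # hypothetical OU
```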

Ben: 00:38:38.314 A while ago, I lost one of my tokens for my personal AWS account. So I had to go through account recovery, and I had to get my driver's license notarized. And I think one of the funniest quirks was they locked my AWS account, but it was linked to my Amazon account. So [inaudible] your Amazon AWS accounts — don't link it to your personal Amazon account.

LVH: 00:39:00.728 Yes. One of the reasons I tell people not to use that functionality is the way your Amazon account and your AWS account are linked. You would reasonably think that they are completely separate, and the ways that they are linked I can only describe as arcane and eldritch and mildly terrifying. General point: don't ever — but one of the flaws, for example — and I don't know if that's the problem that you ran into — is that you can only ever have one U2F key, I believe.

Ben: 00:39:30.175 Yeah, I think at the time. Yeah.

LVH: 00:39:31.252 They might have fixed that by now. It's a mess. Don't use the front door. Do a couple of things. One: know when you're using the console, and use it judiciously, for very specific reasons — and I'm not saying never, ever touch it. Certainly things that are completely fine: once you have a read-only role, yeah, sure, use the console all you like, right? If you feel like the console does a better job of describing information than what you're getting out of the API call, great — go use the console. But ideally do it for read-only stuff. The second concern is that if you do it with read-write, then you're basically encouraging people to use the wizards and leave a bunch of stale infrastructure behind. And very often that infrastructure is optimized for being easy to get started with; it is not optimized for being locked down. You end up with quite a bit of stale infra. If you're going to have people do that, fine — but that's where the multiple-AWS-account recommendation comes from: make sure it's in a sandbox account. If there was one feature I could get out of AWS, it would be far easier account deletion. Right now, if you're in an AWS organization, creating an account is an API call. That's easy — it's like five seconds, and you've got a full-fledged AWS account. It's great. But deleting an AWS account involves resetting the root account, adding a password, adding a new billing account to it, confirming a phone number — it's a whole manual mess. If that was easier — especially for accounts that were created within the organization to begin with; if it was an invited account, then sure, fine, whatever, I'll go do the manual process for that. But like I said, it's an account that I just created. Come on. Let me delete it. And the reason is, it's still annoying to have all of that stale infrastructure floating around in your development account, so I'd rather have that gone as well. But maybe next [crosstalk].

Ben: 00:41:26.696 Yeah. I know I've seen one project — I think it's called AWS-Nuke — that will —

LVH: 00:41:30.647 Oh, yeah — sorry, I got excited because I thought you were talking about a different project. So there is AWS-Nuke, which tries to delete a bunch of infrastructure, but like so many tools of that kind, it has the flaw of always lagging behind. And usually the infrastructure I most want it for is the most recent service, so it's always the one that AWS-Nuke doesn't actually work for yet. AWS-Nuke is great; it's not AWS-Nuke's fault that AWS has probably launched, what, three new services since we started talking — and it's been a whole 15 minutes.

Supply chain attacks

Ben: 00:41:58.485 [crosstalk]. Another tenet is PRs, protected branches, and CI/CD. And I want to segue from this into supply chain attacks, which have been a hot topic of 2020. I don't even necessarily know where to start — it's such a broad topic. What are your thoughts on risks and mitigations for supply chain attacks?

LVH: 00:42:22.488 So I'll start by carving out a bunch, because somebody pointed out to me recently that when we say supply chain attacks, we're typically talking about software supply chain attacks, and specifically in the form of open source software. They pointed out — and I'm trying to make this specific enough to be useful to the listener but not specific enough that it's identifying — that they're in the hardware business, and from their perspective, they have supply chain attack problems too. A supply chain attack problem, to them, is something like the Supermicro backdoored-chip story — that's what they think of when they say that. So I'm assuming your question is specifically about software supply chain attacks?

Ben: 00:42:56.122 Software. Yeah.

LVH: 00:42:57.139 For the listeners who aren't familiar, a brief summary. The idea is: when you write any kind of software these days — let's say you're doing it in [inaudible], or even if it's frontend, you're relying on the npm ecosystem; if you're writing in Python, you're relying on PyPI; if you're writing in Java, you're relying on Maven Central. No matter what, you're pulling in tons and tons and tons of open source software. Most of these ecosystems are subject to this kind of attack in some way, some more than others. For example, the npm ecosystem — left-pad being sort of the example that most people make fun of — has this property that as soon as you've got any amount of software, the bar for creating a new package is super low. And so the consequence is: you do React, you create your first React application from a template, you look at your transitive dependency tree in npm — and it's something frightful. The same thing is to some extent true for Python, to some extent true for Java, for pretty much everything.

LVH: 00:44:02.241 One big difference is that with Go, you're typically vendoring, so you're less likely to be dependent on a random third party you have to download the dependency from all the time. But the core attack is basically: somebody compromises, typically, the credentials of the author. Compromise can happen a lot of ways. Or, alternatively, I show up and do two or three PRs on this poor anemic project that one person in the middle of nowhere has been lovingly maintaining for the last 11 years. They only ever hear from anyone when it breaks, and trillions of dollars of business depend on this one little project. So this person comes along, adds a couple of really useful PRs, and goes, "Hey, I know that you're kind of overstretched. I'm happy to pick up maintainership." And now you have push rights to a repository that half the world depends on, and the next thing you do is put some malicious code in there. Because if you think about it — first of all, many of these ecosystems require running code at installation time. Certainly for Python, de facto, you're running setup.py files all the time. And so you can put whatever you want in there.

LVH: 00:45:16.964 But even if it's not installation time, it still doesn't really matter, because why are you adding the dependency? You're about to require or import it, or whatever your language calls it — you're about to run some code, right? So you have the malicious code; what happens? Well, empirically, what tends to happen right now is smash-and-grab attacks. They're super broad. They're typically not going after a specific company. They're trying to get as many AWS access keys as possible, and then they go mine Bitcoin with them, because it's an easily automatable way to monetize having access to a bunch of computers. But as soon as it's any more targeted than that — imagine the type of access that your median developer has in your cloud environment, and imagine what kind of damage you could do with that. It might be pretty sizable, right?
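To make the install-time point concrete, here's a deliberately benign setup.py sketch. Anything at module level runs with the installing user's privileges the moment pip builds the source package — a malicious maintainer would read credentials here instead of printing a path:

```python
from pathlib import Path

from setuptools import setup

# Arbitrary code, executed at install time, before you ever import the package:
print("Install-time code can read, e.g.,", Path.home() / ".aws" / "credentials")

setup(
    name="innocuous-package",  # hypothetical package
    version="0.0.1",
    py_modules=["innocuous"],
)
```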

Ben: 00:46:04.254 The SolarWinds one — I guess they're not an open source or open core company — seemed to be that they had an open CI/CD system.

LVH: 00:46:13.221 It's a form of supply chain attack, and this is where it gets tricky, especially in terms of remediation and possible controls. The first observation that we landed on is that we can't really mitigate this risk on the source side. I can't plausibly look at the median React application — React, obviously, is super popular, and the same is true among our customers — and tell a customer, "Hey, nice npm dependency tree you've got there. It would be really great if there were like 10% of the dependencies in there," because that's just not how it works. You've basically told them that their app isn't allowed to exist, right? I mean, there are ways you can mitigate — there are unforced errors. You can chip back at it a little bit and identify, for example, packages that have relatively few maintainers, or are relatively undermaintained, or have a lot of reverse dependencies, that sort of thing. You can do some graph analysis on that and try to identify particularly risky packages. But the vast majority of the time, you're not depending on a particular package because you really want to, right?

LVH: 00:47:27.718 The vast majority of packages are going to be in the transitive tree, where you're dependent on them because someone else is. So the way we've approached this problem is to start looking at what attackers actually do when they get these creds, and how we can mitigate that. Because the problem is, you can't really do it from the source side — you can't significantly put a dent in the number of dependencies that people have. Particularly so, I'd say, for the npm ecosystem, because a lot of people look at their transitive dependency trees and haven't heard of the vast majority of packages in there, right? They're not pulling them in — [inaudible] is pulling them in, or React is pulling them in, possibly transitively itself. You can't really tell them to make a dent in those dependencies, because you'd basically be telling them their application isn't allowed to exist. There are places where we might be able to do that, and usually it's combined with some other form of risk mitigation to warrant the amount of effort it takes to remove the dependency, because you're depending on it for a reason, right? Presumably you're calling it a bunch of times. And so we do stuff like — we're working with a company that is doing graph analysis on the open source database side.

LVH: 00:48:38.005 They're looking at things like: which packages have relatively many reverse dependencies and relatively few maintainers? What are some of the other indicators of potential problems with a package — code quality issues, etc., etc.? So they're attacking it both from the supply chain perspective — the open source software supply chain attack vector that I mentioned, meaning somebody gets maintainer access to a critical package and starts pushing malware through that channel — and also from: what are the odds that this package has significant vulnerabilities in it, or is just going to break all the time and cause damage that way? The company is called Phylum, P-H-Y-L-U-M. They're very early on, but they're doing really, really cool work, and we're really excited to be working with them. So you get something like a package score, effectively: how bad is this package and why? That could be licenses, could be code quality, could be questionable maintenance. But you also have to combine that with other risk factors, which we do — like, "Hey, you're running this inside your Django web application that is party to all these other requests. You're also doing, I don't know, image resizing or something." How much do I trust that image parser? Not a lot, especially when you're directly feeding it [inaudible]-controlled data — [inaudible]. But that's a lot of work. So the other approach is: how about we put it in a Lambda that has no permissions and gets called with a pre-signed S3 URL?

LVH: 00:50:14.265 So it literally has an empty role. And then, I mean, sure, maybe somebody gets remote code execution. Congratulations — it's remote code execution in a completely uninteresting part of the application. Bottom line, I don't think getting the set of dependencies down is the entire answer. It's an important part of the answer, and when there are unforced errors you should go attack those, but it's not the complete picture. That's where stuff like AWS Vault comes from, for example, and the OpenSSH work — we did a blog post about OpenSSH keys and their password protection; that's where that came from as well. So we start with threat modeling: "Okay, let's take it as a given that there's going to be a dependency, it's going to get popped, and we're not going to know which one it is in advance." There's nothing I can do about that. I can't prevent the problem from happening, because that would basically mean prohibiting React or Django or some other extremely common package. So where do we go from there? What do we do after that? AWS Vault is a perfect example, because we have to guess what people do when they get these creds. When they get remote code execution, they're stealing AWS and GCP creds, right? And they're using them to mine cryptocurrency. So from there we start thinking, "Well, what else can I do?" And SSH keys is a big one, because that's how I would get persistence — GitHub access is SSH-key-based, so I don't even have to know what your GitHub username is; I can just authenticate with the stolen key and ask GitHub. That's what I would do, and that's where I went down the rabbit hole of figuring out how password encryption worked for SSH keys.
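
To make the empty-role Lambda pattern above concrete, here's a minimal sketch under stated assumptions: the bucket, keys, and function name are hypothetical, and the Lambda's execution role grants no permissions at all, so even full remote code execution inside it can only touch the two short-lived pre-signed URLs it was handed.

```python
# Sketch of the "risky code in a no-permissions Lambda" pattern. The Lambda
# never uses AWS credentials -- just plain HTTP against pre-signed URLs --
# so its IAM role can be completely empty.
import io
import json
import urllib.request

import boto3
from PIL import Image  # the scary parsing code we want to sandbox

def invoke_sandboxed_resize(bucket, src_key, dst_key):
    """Caller side: mint short-lived pre-signed URLs and pass them in."""
    s3 = boto3.client("s3")
    event = {
        "get_url": s3.generate_presigned_url(
            "get_object", Params={"Bucket": bucket, "Key": src_key}, ExpiresIn=60),
        "put_url": s3.generate_presigned_url(
            "put_object", Params={"Bucket": bucket, "Key": dst_key}, ExpiresIn=60),
    }
    boto3.client("lambda").invoke(
        FunctionName="image-resizer",  # hypothetical function name
        Payload=json.dumps(event).encode())

def handler(event, context):
    """Lambda side: no AWS credentials needed, just the URLs it was given."""
    with urllib.request.urlopen(event["get_url"]) as resp:
        data = resp.read()
    img = Image.open(io.BytesIO(data))  # untrusted input parsed here
    img.thumbnail((512, 512))
    out = io.BytesIO()
    img.save(out, format=img.format or "PNG")
    req = urllib.request.Request(event["put_url"], data=out.getvalue(), method="PUT")
    urllib.request.urlopen(req)
```

The design point is that the blast radius of a parser exploit shrinks to "whatever those two URLs allow for sixty seconds," rather than the whole application's credentials.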

Other security considerations related to OpenSSH and passwords

Ben: 00:51:47.234 [inaudible] you had this blog post, and it stuck in my mind, too. It went over the default OpenSSH key encryption, but you also said that adding a password can be worse than not having one. Can you explain why adding a password can be less secure?

LVH: 00:52:05.769 For what it's worth, I think it's true, but I will admit that was a little — there's a reason it was in the title. It was clickbaity, and I'm not ashamed of that at all. The basic story is — and to OpenSSH's credit, I have no idea if they did it in response to the blog post, but when we posted it, the next release made the new key format the default, the one that does not [inaudible]. Good for OpenSSH, and good for all of us. So the flaw is: you had a key, typically an RSA one, so you'd have the public key and the private key. Presumably most people who have used SSH know about that, because you've had to go get id_rsa.pub from your home directory and save it into GitHub or whatever. You have the private key, and that private key is typically password protected. And in the old format, that password protection was not great — it was, to wit, MD5 over the password. The consequence is that it's reasonably easy to enumerate. It's unsafe for the same reason it would be unsafe to use MD5 as a password hash in a database.

LVH: 00:53:12.077 So you would simply enumerate every possible password, run it through a fast GPU that does a couple of gazillion MD5 guesses per second, and before you knew it, you'd have the actual password. The reason I said it's worse than plain text is because, at least at the time — and to some extent this is still true — plenty of people were upgrading. Say you're on macOS and you were getting your SSH from Homebrew: it wasn't integrated into the OS Keychain. [inaudible] Keychain was fine, right? Keychain didn't have that flaw. So you'd be typing your password all the time into something that would unlock when Keychain unlocked, and because it was a password you typed all the time, you would of course reuse it — because what other password would you use? You're going to use your desktop password. And we validated that. We asked a bunch of people and they were like, "Yeah, that's definitely what I did." I'm not blaming anyone.

LVH: 00:54:03.895 Absolutely — my SSH key password, I was expecting it to be the same thing; I was expecting it to unlock with Keychain, right? It's not a great-looking admission. So the problem is that it's much easier — I can't brute force your password against Keychain, because Keychain is hardened against that sort of thing. But I can absolutely brute force your password against your private key, because it's crypto, not software: I can just try as many guesses as I want, offline. So basically it produced an alternative route to get people's passwords that was significantly more performant than any other such route. Whereas every other system — 1Password or your Keychain or Chrome or whatever it is — is going to be built with password-guessing in mind. I mean, Chrome often by virtue of using Keychain, but you know what I mean. They're going to use better password hashes where there's something I can log on to, or whatever [inaudible], they're going to use hardware-backed cryptography, like in the case of your phone typically, or a combination of both.
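
A toy illustration of the asymmetry LVH describes: the legacy PEM key format derived its cipher key from roughly one MD5 pass over the password (via OpenSSL's EVP_BytesToKey), so an attacker pays about one MD5 per guess. The sketch below only shows that cost gap, using scrypt as a stand-in for a deliberately slow KDF (the new OpenSSH format actually uses a bcrypt-based KDF); a real cracker would also test each derived key against the encrypted blob.

```python
# Toy demo: one-MD5-per-guess (legacy PEM key format) versus a deliberately
# slow, memory-hard KDF. A GPU rig makes the gap many orders of magnitude
# wider than this single-core comparison.
import hashlib
import time

CANDIDATES = [f"password{i}" for i in range(2000)]
SALT = b"\x00" * 8  # in the legacy PEM format, the first 8 bytes of the IV

start = time.perf_counter()
for pw in CANDIDATES:
    # One EVP_BytesToKey-style round: MD5(password || salt) -> 16-byte key.
    hashlib.md5(pw.encode() + SALT).digest()
md5_time = time.perf_counter() - start

start = time.perf_counter()
for pw in CANDIDATES[:20]:  # 100x fewer guesses, still far slower overall
    hashlib.scrypt(pw.encode(), salt=SALT, n=2**14, r=8, p=1, dklen=16)
scrypt_time = time.perf_counter() - start

print(f"MD5:    {md5_time:.3f}s for {len(CANDIDATES)} guesses")
print(f"scrypt: {scrypt_time:.3f}s for 20 guesses")
```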

Thoughts on cryptocurrencies and cryptography

Ben: 00:55:10.150 Moving on — one thing I remember from Rackspace is that you'd always introduce me to lots of cryptography basics. In that time, the space of cryptocurrencies has also blown up, and it's an interesting cross-section of distributed systems and cryptography. What do you think we can learn from that ecosystem?

LVH: 00:55:31.552 That's a very challenging question, and I'm glad you asked it, because it forces me to not just be a pure cynic. It is no secret that I am no fan of cryptocurrency, for a variety of reasons — right now, the absolutely absurd energy usage of Bitcoin is certainly near the top of the list. But to your point, a lot of fantastic things did come out of it. A couple that come to mind: certainly the number of cryptographers — first, people who became cryptographers, and second, cryptographers who were gainfully employed doing really cool cryptography research. Sure, they were doing it for cryptocurrency, but a lot of that research ends up carrying over. In particular, a lot of the modern work on signatures was prompted by the requirements of fancier crypto schemes, which tended to be cryptocurrencies. I don't know for a fact — I don't have data to prove — which specific research cryptocurrency paid for, but I think it's pretty fair to say that cryptocurrency is the 500-pound gorilla in the room there. All sorts of really interesting crypto research being paid for by two things — cryptocurrency and Google — seems to be the summary.

LVH: 00:56:47.887 And that's not entirely fair: Google is doing a lot of really cool practical applied crypto research, and so is Microsoft — or Microsoft Research, which is a reasonably separate arm. For me, the big one would be the advances in zero-knowledge proofs. Recently I had a pet project — one I'm not quite ready to lift the veil on — that required a designated-verifier zero-knowledge proof. Did that exist before cryptocurrency? Yeah. Would it be as broadly available, and would I have even thought about it, if it wasn't for cryptocurrency? Probably not. I think the same thing is true for Certificate Transparency. Certificate Transparency is this project by Google where the mission is: what if we captured every TLS certificate that was ever created, that anyone ever saw? We're going to put them all in this big tree, and the tree is designed in such a way that it's nice and efficient, but the consequence is that you can't present a certificate to a browser without having publicly committed to that certificate. And that, for example, addresses any kind of mis-issuance.

LVH: 00:57:56.258 So it could be relatively benign — a CA screwed up by, I don't know, issuing a certificate with the wrong validity period or something. I say relatively benign because the really bad failure mode is an oppressive regime trying to impersonate Facebook, or another critical service, and using that to actively hunt down journalists. That's Certificate Transparency. Is it a consequence of cryptocurrency? I mean, no. First of all, it's not solving a Byzantine consensus problem, right? It's not proof of work or proof of stake or whatever. Most of the logs are operated by Google; Google signs stuff, and you trust it because Google says so. It's single-writer. But it looks like a blockchain, right — each entry references the previous entry — and people might not have been thinking about that if cryptocurrency hadn't been taking up so much headspace. As much as I'm generally not super excited about cryptocurrencies, or at least have some questions, it is absolutely because of them that so much cryptography research is funded, and a lot of it is pretty practical. Some of it is pretty far out there, but good for them — I'm glad people get a chance to work on it. What used to be a purely academic field is now one where you don't have to go into academia.
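
A toy sketch of the append-only Merkle tree idea behind Certificate Transparency — real CT, per RFC 6962, adds leaf/node domain-separation bytes, signed tree heads, and inclusion/consistency proofs; this only shows how a single root hash commits to every entry:

```python
# Toy Merkle tree in the spirit of Certificate Transparency: the root hash
# commits to every certificate ever appended, so an entry can't be shown
# later without having been publicly committed to.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of entries up to a single committing root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

certs = [b"cert: example.com", b"cert: facebook.com", b"cert: github.com"]
root = merkle_root(certs)
print(root.hex())

# Appending anything changes the root, so anyone holding the old root can
# detect that the log grew -- and a cert absent from the log was simply
# never committed to.
certs.append(b"cert: rogue-issuance.example")
assert merkle_root(certs) != root
```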

The Crypto 101 e-book and revisions made

Ben: 00:59:25.359 Thanks. That's a great answer. Sticking with cryptography: you wrote the great e-book called Crypto 101, which is an introduction to cryptography basics for, I guess, application developers, and which you presented at PyCon. Since it was written in 2013, if you were to make a revision, what would you add to or remove from the book?

LVH: 00:59:47.464 First off, we've been adding parts to the book since 2013, so it has gotten a decent chunk of revisions. As for specific things I'd add — the structure Crypto 101 takes is that it starts from the minimum viable primitive we can talk about, literally an XOR gate, and then walks you all the way up to TLS, covering all of the primitives you need along the way to get a real, useful cryptosystem — TLS being kind of the [inaudible] for everything else, partially because it's got kind of everything in it, right? It's got block ciphers, it's got authenticated encryption, it's got signatures, key agreement, the whole nine yards. So what's still missing? The attacks in there are all direct cryptographic attacks: you've got unauthenticated CBC, so let's do a bit-flipping attack and then expand that into a padding oracle attack. And those are cool, and they're useful. I would add a couple of attacks that currently aren't in there but that I think are really cool. Mostly I haven't added them because they're pretty complex, and I like the fact that right now Crypto 101 is very upfront that you need middle-school-level math at most — and most of the time, not even that. Some of the attacks I find really interesting and kind of want to add would come with a bit of a warning, because I don't want people to feel stupid just because they haven't seen a particular math thing, right? It's still not complicated, but I find that math scares people. And if you're there to explain it, that's fine, but a book is static, and —

Ben: 01:01:32.574 You can't.

LVH: 01:01:32.574 I don't want to dissuade people. But the two attacks I would add: first, AES-GCM truncated tag attacks, where basically, if you ever use a truncated tag, the total security of GCM against forgery is whatever the shortest tag was — which is very, very surprising to a lot of people. There are all sorts of cool attacks on GCM that I think people don't know about, because it's the good cryptosystem, right? It's the one that you want, and most of the time it is, but it's also kind of dangerous, and it's safe in the very specific ways it's used in TLS. Second, a lot of elliptic curve attacks, because I don't think I've ever added, for example, off-curve point attacks. I might have described that one — I forget if it was in a blog post or if it actually made it into the book. So there's a couple of fancier crypto attacks I'd love to write up. And then the other thing I'd add is protocol design. The book currently builds up from cryptographic primitive to primitive to primitive, and I'd keep doing that, but once you've got all of that: build a protocol, like the Noise protocol framework, and go through things like key compromise impersonation and so on.
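
To make the truncated-tag point concrete: a t-bit tag can be forged by blind guessing in about 2^t attempts, so a short tag is nowhere near "128-bit secure" — and Ferguson's attack on GCM specifically does even better, since each lucky forgery leaks information about the hash key. The sketch below only measures the blind-guessing baseline, with HMAC standing in as the MAC; the tag-length arithmetic is the same for GCM.

```python
# Toy demo: forging a truncated authentication tag by blind guessing.
# With a t-bit tag, random guesses succeed after ~2**t attempts on average.
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(16)
TAG_BITS = 16  # tiny on purpose so the demo finishes in seconds

def tag(msg: bytes) -> bytes:
    full = hmac.new(KEY, msg, hashlib.sha256).digest()
    return full[: TAG_BITS // 8]  # truncation is where the security goes

target = b"pay mallory $100"
real_tag = tag(target)

attempts = 0
while True:
    attempts += 1
    guess = secrets.token_bytes(TAG_BITS // 8)
    if hmac.compare_digest(guess, real_tag):
        break
print(f"forged a {TAG_BITS}-bit tag after {attempts} guesses (~2^{TAG_BITS})")
```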

Determining the programming language to solve a given problem

Ben: 01:02:41.660 So moving on, towards the end: you're both a fellow of the Python Software Foundation and a co-founder of Clojurists Together Foundation — obviously two different programming languages. When you think about a problem, how do you pick a programming language to solve it?

LVH: 01:02:59.050 Super easy: you pick Clojure, or you've made a mistake. It's fine. Jokes aside, I like Clojure quite a bit — case in point, to your point, I do a decent chunk of work funding the open source ecosystem for it. Do you mind if I rephrase the question and aim it at clients, because then it's not about my weird [crosstalk] —

Ben: 01:03:19.019 Perfect.

LVH: 01:03:19.019 — activities? So: what should people who are listening to this do? And the answer is, of course, use Clojure. Weren't you paying attention? There are a lot of really good choices these days, and for the vast majority of applications — honestly, if you're using a halfway modern framework, do whatever you want. People have asked me the question in terms of, "What should I do if I want to write secure code? Should I write Python? Or should I write Ruby, or should I write —?"

Thoughts on using Rust

Ben: 01:03:46.729 I think I had Rust, was my follow-up question.

LVH: 01:03:48.726 Don't get me wrong, I love Rust. I think Rust is fantastic. It's one of the things I'm most excited about for putting a dent in the long tail of gnarly C libraries that everyone depends on. But the vast majority of our clients are B2B SaaS companies. Sure, are there safety gains to be had from Rust? Yeah — but realistically, their alternative wasn't C. If their alternative was C, then I would say yes, definitely go write Rust. Do not write C; do not write C++ either. And stop telling me how modern C++ is basically the same thing as Rust — if I have to hear that one more time, I'm going to take a Sharpie, or I might actually write a counterexample in blood, of the miscellaneous, terrible, terrible C++ features that people use all the time. Am I excited about Rust? Yeah. I'm excited about Rust for replacing parts of the Linux kernel, and the same thing for iOS — though there it's more likely to be Swift, but Swift, sure, great. I'm excited about memory safety for the long tail of things that are memory-unsafe. But most of our clients were never going to use C; they were going to use Python. And if you already have a Python app — a Django web application or a Django REST Framework service or something — I can't see a super-compelling reason for you to rewrite that in Rust. It's not that I think Rust is bad. It's just that I don't think Rust gets you anything there.

Ben: 01:05:19.389 It's the right tool for the job.

LVH: 01:05:20.797 Exactly.

Ben: 01:05:21.429 And I guess, if you're in the business of CRUD, stick to [inaudible] that make it easy to create forms.

LVH: 01:05:26.917 That might be Rust, right? I'm not saying not Rust — it's fine. I'm just saying: don't rewrite it unless it's buying you something. If you feel, for example, "Hey, my application has a bunch of state machines, and I'm much more comfortable if I can express those in the type system" — and the Rust type system is way better at expressing that than Python — then go hog-wild. But I can't remember the last time we found an exploitable memory corruption vulnerability at one of our customers where the way we told them to prevent it was "go rewrite the entire thing in Rust." It was: go take this tiny piece of code — like I mentioned earlier with the Lambda — and run it somewhere else where it can't do any damage, because realistically, you're not going to reimplement libjpeg, right? You're just going to keep doing what you're doing. It's also a question of how much effort it is. Rewriting the entire app in Rust is quite a bit of work, whereas taking a piece of code you've already written, ripping it out, and putting it in one other Lambda — it's not trivial, but it's plausibly a day's worth of work, right? It's not that bad.

Exciting tech wins and security advancements

Ben: 01:06:34.615 During our time at Rackspace, you always had a good pulse on the latest tech and security tech. What's got you excited recently?

LVH: 01:06:40.628 I feel like depending on which security person you ask — which might be the same person on different days — you're either going to get a horribly cynical answer, where it's, "Turns out I've convinced myself I've got job security for the next 100 years" —

Ben: 01:06:55.419 You're moving to the woods.

LVH: 01:06:57.356 Or it's [inaudible]. I'll tell you what I'm excited about, but it's not necessarily an endorsement — or a counter-endorsement, for that matter. It's just me being very cognizant that just because something is exciting to me doesn't mean it's the most useful thing for anyone to go deploy tomorrow. I do get excited about the little wins — every time we can figure out a way to make our IAM minimization strategies slightly easier to get started with. We talked about AWS Vault: I love that entire document because it's so easy to get started. The material, real-world impact of that has been so much greater than if I had just described the beautiful end state. I'm doing a lot of AWS IAM work recently, which is why I keep track of AWS IAM. In terms of things where, every time I play with them, I'm reminded that the world is full of beauty: I had a reason to play with the Linux perf subsystem and eBPF the other day. Most of the time I don't get an opportunity to use them, because we're running in highly restricted environments like Lambda and Fargate — which most of the time are absolutely the right call for us — but it's so cool to be able to play with those. And I am super excited — since we talked about Rust earlier —

LVH: 01:08:20.023 — yeah, Rust in the Linux kernel. I am super excited about cutting down the number of [inaudible] where I have to care about code execution in the kernel. Other things we've been messing with recently — speaking of kernel exposure — gVisor. gVisor is a user-space program by Google that basically emulates portions of the Linux kernel API: you make syscalls into gVisor instead of making syscalls into the Linux kernel. That's really cool from a sandboxing perspective and has all sorts of great advantages, so I'm kind of excited about that, but it's a niche application, right? There's a lot to be excited about. Okay, here's one that's applicable to a lot of people: GitHub Actions, if you're using it with AWS — we can put a link in the chat, I assume. Recently they made it possible to directly integrate GitHub and AWS, so that you can assume a role directly instead of needing to save an access key in GitHub secrets. Because, come on, it's a matter of time before that leaks somewhere. I'm sure it's safe, but it always gives me the heebie-jeebies, because that system has admin access half of the time, right? People are using it to deploy infrastructure and stuff, so it's not a mildly scoped credential — it's star-star. A lot to be excited about.
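
The integration LVH mentions is GitHub's OIDC provider for Actions: the workflow exchanges a signed GitHub token for short-lived AWS credentials via sts:AssumeRoleWithWebIdentity, so no long-lived access key ever sits in GitHub secrets. A hedged sketch of the AWS side with boto3 — the provider URL, audience, and condition keys are the documented ones, while the account, repo, role name, and thumbprint are placeholders to verify against current docs:

```python
# Sketch: set up GitHub Actions OIDC federation on the AWS side, replacing
# a stored access key with a role that only a specific repo/branch can assume.
import json

import boto3

iam = boto3.client("iam")

provider = iam.create_open_id_connect_provider(
    Url="https://token.actions.githubusercontent.com",
    ClientIDList=["sts.amazonaws.com"],
    ThumbprintList=["6938fd4d98bab03faadb97b34396831e3780aea1"],  # verify current value
)

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["OpenIDConnectProviderArn"]},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                # Only workflows on main in this one repo may assume the role:
                "token.actions.githubusercontent.com:sub":
                    "repo:example-org/example-repo:ref:refs/heads/main",
            }
        },
    }],
}

iam.create_role(
    RoleName="github-actions-deploy",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# From here, attach only the narrow permissions the workflow actually needs --
# not the star-star admin policy the stored access key typically had.
```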

Top 3 lesser known tips to stay secure

Ben: 01:09:44.843 Cool. That's great. So just closing up, what are your top three lesser-known tips to stay secure?

LVH: 01:09:53.540 Yeah, so my favorites — we talked about a bunch of them already, but certainly AWS Vault. It's hard to oversell AWS Vault, for how easy it is and the return on investment you get from it. Then WebAuthn and U2F keys, which, again, I imagine are maybe not that unknown to most of your listeners, but the reason I bring them up is the same as AWS Vault: just the ruthless efficiency of it, the number of attacks it makes completely irrelevant. Google has come out and said, "Phishing is dead. We've killed it. WebAuthn did it. We're good." The third one — that one I'm going to have to think about. Maybe next time.

Ben: 01:10:31.436 Maybe next time. All right. Well, thank you so much for your time today, LVH. I've had a great time with you.

LVH: 01:10:37.389 Likewise, thank you for having me.
