
Hot Takes Episode 1

What is Hot Takes?

Hot Takes covers contrarian points of view on identity, access, infrastructure, and security. Each month, industry panelists discuss a Hot Take and share their own perspective on the topic. In the second half of the show, the audience breaks out into small groups to share their own opinions.

This Month’s Hot Take

Five renowned experts from the Kubernetes community discuss why human error, not hackers and ransomware, is the most significant threat to the security of your infrastructure.

Panelists

Kat Cosgrove - Developer Advocate @ Dell

Divya Mohan - Tech Evangelist @ Rancher Labs

Frederick Kautz - KubeCon Co-Chair, Director R&D @ TestifySec

Chris Short - Senior Developer Advocate @ AWS

Kunal Kushwaha - Developer Relations Manager @ Civo

Ben Arent - Hot Takes Moderator & VP Developer Relations @ Teleport

Questions addressed in this episode:

- Is it more secure to host Kubernetes?

- Who is ultimately responsible for managing IAM?

- Should you give developers direct access to the Kubernetes API through kubectl?

- What causes misconfiguration in Kubernetes clusters in organizations?

- Poll: What extra security measures do you currently have in place for your Kubernetes cluster?


Transcript - Hot Takes Episode 1: Protect Your Infrastructure from Yourself

Ben Arent: Okay, hi everyone. Good morning, good afternoon, good evening, no matter where you are. Let's see, do we have everyone here? My name is Ben Arent and I will be running Hot Takes today. This is a new series from Teleport, and we're going to be focusing on possibly contrarian points of view around identity, infrastructure, and security. Today we're primarily focused on protecting your infrastructure from yourself, with a deep dive mainly into Kubernetes. As for the format, we're going to go round-robin through some questions that I have prepared, and then at the halfway point we'll break out into breakout rooms and have a more intimate discussion around things that are on your mind.

Ben: This week in particular, there are a few interesting things that have come out, and I've actually put some resources here. We had the Datadog Security Report and also the Wiz Kubernetes Security Report, which had a few interesting nuggets that are relevant to this discussion today. Wiz found that it takes 22 minutes until an EKS cluster has a malicious scan run against it; and when people probe these Kubernetes clusters, they keep finding long-lived credentials available.
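
That long-lived-credential finding is something you can check for yourself. Below is a minimal audit sketch, assuming the official kubernetes Python client and a working kubeconfig; the 90-day threshold is an arbitrary illustration, not a number from either report.

# Hedged sketch: flag service account token Secrets that have been around a
# long time, the kind of long-lived credential the Wiz report warns about.
from datetime import datetime, timezone, timedelta

from kubernetes import client, config

MAX_AGE = timedelta(days=90)  # illustrative threshold, not a recommendation

config.load_kube_config()     # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

now = datetime.now(timezone.utc)
for secret in v1.list_secret_for_all_namespaces().items:
    if secret.type != "kubernetes.io/service-account-token":
        continue
    age = now - secret.metadata.creation_timestamp
    if age > MAX_AGE:
        print(f"{secret.metadata.namespace}/{secret.metadata.name}: "
              f"token secret is {age.days} days old; consider rotating it")

On recent Kubernetes versions, workloads generally receive short-lived projected tokens, so anything an audit like this flags is worth a closer look.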

Ben: And so, to kick things off for the Hot Takes, I want to sort of dive into an easy one: Kubernetes can be a complicated platform to run. And I think this might be a good one for Chris, since — oh, actually, let me do introductions before I talk Kubernetes with Chris. So today we have Chris Short, who's a senior developer advocate at AWS; Kat Cosgrove, a developer advocate at Dell; Divya Mohan, who's a tech evangelist at Rancher Labs; Frederick Kautz, who's a KubeCon co-chair; and Kunal Kushwaha (I hope that's correct), a developer relations manager at Civo. So this is actually probably a good one for both Kunal and Chris: is it more secure to host Kubernetes yourself, or to use a hosted Kubernetes provider?

Is it more secure to host Kubernetes?

Chris Short: Oh, that's a loaded question. I think in general, yes, but just like all things on the internet, if you don't want it to get kicked over, don't plug it into the internet, right? Like the default nature of, "Oh, open to the internet by default so I can get to everything," is a really terrible idea these days, even to the point where you need to start scanning your outbound traffic to look for malicious things. So yeah, the cloud will give you overall more protection than your — I mean, unless you have an excellent security team of some sort that I don't know about — in general, it'll give you more protection out of the box, but you still have to configure it correctly, which is where the 22 minutes comes from.

Kunal Kushwaha: I can go next. Thanks for sharing, Chris. Couldn't agree more. Since we're talking about hot takes, maybe I can use an analogy: using a hosted Kubernetes provider is like putting all your precious jewelry and your house papers and everything in a bank's safe deposit box. Now it may seem secure, but you're trusting someone else with all of the assets that you have. The bank has all the fancy security measures and stuff, but what if they decide to close up shop, or they get robbed themselves? And when you run your own Kubernetes cluster, it's like burying your own jewels in your own backyard. You have complete control over it, and as long as you keep your secrets well hidden — no pun intended — it is as secure as you can make it. So I'm not gonna say yes or no. I think if you know what you're doing — and like Chris mentioned, even if you're trusting another provider, you still have some measures to take. But think of it like, who needs a fancy bank when you have a shovel in your backyard?

Kat Cosgrove: Kunal, do you keep your paychecks under your mattress? Just curious.

Kunal: No comment?

Kat: That's all I could think of with that. But no, it is a trust thing in part, but it's also an ease-of-use thing for me. You can absolutely frame the difference as: do you trust Google's security team more than you trust your own security team? I'm not trying to be snarky or say that Google's security team is inherently better and your security team is inherently worse. Genuinely, you could have more targeted experts than they do. It's also about how easy one is to stand up and maintain versus the other. Like, I work at Dell. I work at a hardware company, right? So obviously, by virtue of who signs my paychecks, I have a bias towards on-prem. But in actual reality, I have always just stood up a cluster on GKE instead. It is easier to get going. It is faster to get going. It is easier and faster to maintain. A lot of things you don't even think about just happen for you. There are just more considerations with managing a cluster on your own, rolling your own, that I personally do not want to deal with. And I'm a big fan of automating myself out of a job as often as I can.

Divya Mohan: I'd like to add a point to what Kat just said regarding standing up Kubernetes clusters in GKE, because that's pretty much how I got started when we were onboarding — I mean, not at my current job. This was at my previous job before I joined Rancher. So our first exposure — my first exposure, rather — to Kubernetes was via GKE and administering clusters on GKE, to be honest. And the only thing there that I would not call a great user experience, especially if you are in a financial or otherwise heavily regulated industry, is the number of in-between steps that you have to go through to actually get a cluster to work on GKE, primarily because a lot of people do not trust cloud providers with their data. Which is fair, I'm not saying otherwise. But there are a lot of intermediate steps, and it also becomes very difficult from a management and administration perspective.

Divya: So it truly does boil down to how you as an organization want to deal with that complexity. If you're a regulated or financial organization, you're trading off against the complicated nature of setting things up in the cloud; whereas if you are just doing an on-prem installation on your own infrastructure, it's not less complicated, but it can be easier to get started. But again, there's a whole trade-off involved, in my opinion.

Ben: Yeah. Frederick, do you have anything to add, or should I go on —?

Frederick Kautz: Yeah. But I'll try to keep it short. I've done both. I think doing Kubernetes on-premises is hard mode. The number of times I've seen people set up an initial cluster with kubeadm and then not upgrade it, because they think, "Why am I going to upgrade this thing? It's behind. It's not connected to anything. I have to —" And a year later, their certificates expire and the cluster grinds to a halt. If I recall, there are seven different places where you have to rotate certificates across different areas. So if you have this expertise on hand, if you have the people who can do it, by all means, there's no problem with doing it yourself. That being said, if you go with a cloud, whether that's AWS or GKE or Azure or others, many of them have FedRAMP-compliant environments. They have HIPAA environments. They have things that are there.
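
Frederick's expiring-certificate failure mode is easy to watch for. Here is a minimal sketch that checks how long the API server's serving certificate has left; the endpoint is a placeholder, and recent kubeadm releases also ship a "kubeadm certs check-expiration" subcommand that covers the other certificates he mentions.

# Hedged monitoring sketch: how many days until the API server's TLS
# certificate expires. Requires the `cryptography` package (>= 42 for
# not_valid_after_utc) and network access to the API server endpoint.
import ssl
from datetime import datetime, timezone

from cryptography import x509

API_HOST, API_PORT = "kube-apiserver.example.internal", 6443  # placeholder endpoint

pem = ssl.get_server_certificate((API_HOST, API_PORT))
cert = x509.load_pem_x509_certificate(pem.encode())
remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
print(f"API server certificate expires in {remaining.days} days")

Wiring a check like this into existing monitoring gives plenty of warning before the one-year expiry Frederick describes.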

Frederick: Now you can't just hop on a service and say, "Hey, I'm starting to use it." You actually have to let them know, like, "I need this," because they have requirements on their side as well. Legal agreements need to be signed. But in general, I do think that going with the cloud is a trade-off on risk. I do think that it's a great option. Especially if you know you don't have the resources on hand to do it properly, you should outsource that risk. I do wonder, if there are issues with Kubernetes clusters being compromised, why we don't have clouds scanning their own customers and trying to do those compromises themselves first, and then saying, "Hey, this thing has a misconfiguration." Or if there's a lead time, like, "Hey, it takes you 30 seconds to configure a Kubernetes cluster properly," why don't we keep those things behind a firewall and only open them up after those configurations have been performed and are ready to go? Maybe that's a complicated question, maybe there are reasons for not doing that, but those are thoughts that do run through my mind. I'll leave it at that for this point.

Who is ultimately responsible for managing IAM?

Ben: I know we could go deep on this one topic. But one thing we kind of mentioned was the initial setup. Whether you're setting up on premises or where the hosting provider is, you often have the one sort of super-user account. And as you sort of roll out to production, you have the whole process of dealing with IAM and identity access management. Who is ultimately responsible for sort of locking down IAM or giving people the tools that they need to get their job done? I don't know if anyone wants to take this question.

Kunal: Oh, I'd just say that it depends on the organization: its complexity, its size, and how it's structured. But ideally, IAM should be shared amongst all the stakeholders. If you want a hot take: maybe instead, why not let each employee control their own access permissions? After all, who knows better what they need than the users themselves? That's a hot take. But my real answer is that I think it should be shared amongst various stakeholders, like sysadmins and developers as well as business owners. Yeah, I'll keep it short, so I'm happy to hear others.

Kat: Yeah, I think the answer is — not you. The answer is your boss. Personally, that is not something I would ever want to take responsibility for or ownership of out of fear. Look at the number of enormous breaches that have happened over the last few years that were strictly the result of bad IAM configuration. I'm scared of that. That's magic. I am not qualified to handle that appropriately. But serious answer, I think the whole company is responsible for that. The business entity is responsible for that. That should never, ever, ever come down to one individual person or one individual job role. The potential ramifications of messing it up are business-ending, in some cases. And I just do not think it is worth — you can't put that liability on a single person or a single role.

Chris: No, to put it on one team, even, is quite a stretch.

Kat: Not okay.

Chris: Yeah. IAM is one of those things that ends up looking a lot like your org structure, right? That rule applies. But your org structure also needs to realize the impact a misconfiguration can have. And if they don't, you've got to come up with an education process of some sort to explain, "These are the permissions you have. Use them wisely," right? You could easily expose a cluster to the internet and watch it get kicked over because you didn't patch it to the latest version, or you didn't use the right binary. Who knows? There are so many things that can go horrifically wrong with IAM that, yes, you basically need a trusted pipeline to manage it.

Frederick: I think if somebody can expose something that's that sensitive unilaterally, that is a bug in the process. And it really upsets me when I hear some — especially at the executive level, they'll say, "That intern is the reason that we had this massive breach," or that engineer, or that [crosstalk] —

Kat: Infuriating.

Chris: Absolutely enraging.

Frederick: Yeah. That person just discovered a bug in the process, and it was a terrible bug. Unless there's malicious intent involved, that's a whole other story. But day-to-day, people are going to make mistakes. And we have this loop: people at the very top, all of us, work on our processes and try to get those processes in place, and then those processes impact us as people again. So I do think we need to treat this not as a bespoke "this is the group that's doing IAM," but as a unified effort where we cross-check and double-check each other. And if we don't understand something, like "Why is it configured like that?", then we need to be empowered to ask questions and not face repercussions.

Divya: Just to add on to what Frederick said: where I used to work previously, we had IAM as a shared service. And I think IAM should be a shared service, but also more of a shared responsibility, with the guidelines and the guardrails in place around the policies and everything else, because putting it all on one person, one team, or one group of people really is not the best way to go. Wherever you are, whether it's in the cloud or outside of the cloud, it's just not the best way to go.

Should you give developers direct access to the Kubernetes API through kubectl?

Ben: Yeah. And going back to our topic of protecting infrastructure from yourself: I actually worked on a team (this was for server access) where if anyone opened an SSH session into a server, that machine was automatically scheduled to be recycled and removed after an hour. And this kind of brings me to my next question: should you give developers direct access to the Kubernetes API through kubectl?

Chris: I have a pre-Kubernetes-era story to tell about that, right? I was at a company where data centers were their business, and they had a Java-based web portal. I was brought in to simplify the deployment of that, because it took three days, as opposed to the three hours I cut it down to. But in the process of doing that, we were implementing monitoring, telemetry, all these things. And then all of a sudden you get an alert and it's like, "Oh, the lead dev somehow has access to production and they just changed something and it broke everything," because, well, you didn't audit who had keys and access to the various systems. And yeah, you kind of have to take that into account, right? Like, who gets access to the entire kingdom? If you're opening up the API server to everyone in your organization, that might be something that you need to put some gateways and pathways of safety around.

Frederick: I don't want to be the person who deletes all the cats. I think I'll leave it at that for the moment.

Kat: I am going to say it depends. I hate saying it depends, but it does. RBAC exists for a reason, and there are access control tools like Teleport. Obviously, there are ways to do this relatively safely. Not letting your engineers have access to kubectl can limit what they're able to do; or at least, it makes it more difficult for them to do what they need to do, to a degree that I think may not be necessary, considering the tools we have today. I would rather be able to do what I need to do quickly than have to bother the cluster admin every time I need something.

Kat: This is a discussion of, how do you weigh the security risk of allowing your developers access to kubectl versus business needs, right? I don't think that question exists in a bubble. My opinion is — developers should have access to kubectl just by virtue of there are tools now to make that safe. This isn't like Kubernetes 1.0, right? We have ways to make that relatively — reduce the impact of them screwing something up, right? So I say let them have it, but it is a thing that you have to weigh versus business value.
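
For readers who want to see what "RBAC makes kubectl access safer" can look like in practice, here is a hedged sketch using the official kubernetes Python client. The staging namespace, the resource list, and the "developers" group are illustrative assumptions, not anything the panel prescribed.

# Sketch of the kind of guardrail Kat is pointing at: a namespaced, read-only
# Role bound to a "developers" group. Assumes a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

namespace = "staging"  # illustrative namespace

# Read-only access to a handful of common resources in one namespace.
role = {
    "metadata": {"name": "dev-read-only", "namespace": namespace},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "pods/log", "deployments", "services"],
        "verbs": ["get", "list", "watch"],
    }],
}

# Bind the Role to whatever group your identity provider puts developers in.
binding = {
    "metadata": {"name": "dev-read-only", "namespace": namespace},
    "subjects": [{
        "kind": "Group",
        "name": "developers",
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "Role",
        "name": "dev-read-only",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

rbac.create_namespaced_role(namespace=namespace, body=role)
rbac.create_namespaced_role_binding(namespace=namespace, body=binding)

With something like this in place, a developer in that group can run kubectl get pods in the staging namespace but cannot delete or edit anything there.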

Kunal: Yeah, I can go next.

Frederick: [inaudible].

Kunal: Oh, Fred, go for it.

Frederick: I was going to say, I do have a more complete answer as well. I think in development modes, developers should always have access. We should never tell them, "You do not have access while you're developing something to debug, to go find information." Production — it depends entirely, as Kat mentioned below; or before, rather. She's below on my screen. But part of the reason for that is we want to make sure that it's right-sized for the business. So if you're working in a highly regulated environment, the developers may legally be allowed no access to it. And if those controls are not put in place, you sometimes could even have a person held legally responsible. We're hearing about, just recently, a CISO being charged for things related to access and controls not being put in place. So I think that it really depends entirely on what your risk model is and what you're trying to defend against. And it's not an easy question.

Kat: I'm glad I'm not the only person that had an "it depends" answer to that one.

Kunal: You can make it three. Yeah, this discussion is so good. But yeah, like what Kat mentioned: if this were Kubernetes 1.0? Maybe, yeah, sure. Because a follow-up question could be, would you hand a chainsaw to a toddler and expect them to build a sandcastle? But now that we have these tools in place — so it's like what Kat mentioned around RBAC. And going back to Chris's point, I don't know about others, but I think developers are notorious for being risk takers, and kubectl gives you the power to bring something down. Which brings me to Frederick's point: in development environments, sure; but in production, yeah, maybe there are some more granular rules. But it's a really big "it depends" question. I think before these tools existed, my answer would have been somewhere in between: just leave it in the hands of administrators who understand the complexities. But I think there are solutions now.

Divya: I'm just gonna say I'm agreeing with everyone here, because honestly speaking, I've worked in environments where we did not have access to very specific servers. So giving developers an indirect route to things actually helped when it was required for them to have that access. We were not being malicious actors there, just clearing that up. But when you're talking about the current state, like all of the folks on the panel mentioned, I think we have tools in place to actually provide the guardrails; and it truly does depend on your risk appetite, your organizational complexity, and the kind of data that you work with.

Divya: Because honestly speaking, there's a lot of data that I have worked with previously that actually is restricted data and it's not accessible to people out here. And the servers themselves were not allowed to be accessed by us. So in such situations, having an indirect layer helps, but if there is a direct way to access the API with sufficient guardrails enforced, I think it would be more empowering for developers to actually take that route as well.

What causes misconfiguration in Kubernetes clusters in organizations?

Ben: All right. Moving on a little bit from IAM and access to another common issue for Kubernetes clusters is misconfiguration. And to kind of go a little bit deeper into misconfiguration, I just wondered if people have any stories about what have they seen as the biggest reason that's led to misconfiguration in their organizations?

Kat: Hubris. Yeah. Just —

Chris: Just enough knowledge to be dangerous.

Kat: Yeah.

Chris: Yeah. That's very much a thing. There was a local company here in Detroit that had their Kubernetes clusters compromised because of a knowledge gap that they didn't even know they had. So yeah, [crosstalk] —

Kat: You don't know what you don't know, right?

Chris: And there are very good papers on this — like Dr. Richard I. Cook's "Above the Line, Below the Line." Look that up and read that paper. It's a pretty short read, and it'll give you a sense of, "You don't understand everything. You can't. It's impossible. But there are things you can understand, and that's good. We just have to acknowledge that and not be so full of pride in our work," I guess. Yeah.

Kunal: I blame human evolution, because our brains are wired to seek novelty and take risks, which can lead to all this experimentation; and you're changing settings and configurations, like you mentioned, without fully understanding them, like what Kat said: not knowing what you don't know. So I think it's important to strike the right balance between curiosity and caution.

Chris: Yes. Agreed. The implications are just too great, I think.

Frederick: I tend to think about misconfiguration in terms of the risk attached, granted that's not talking about the source. And one of the things that I like to tell my teams is that if you misconfigure a system and you have an attacker who's focusing on you, they're not going to use a zero-day on you. They're just going to walk in through the front door, because the JWT library was not set up properly or your RBAC was not set up properly. And we often want to focus on delivering features. In fact, we are incentivized as developers to deliver features. We're often not incentivized to secure the system properly. There is an impact if we don't, but very often we find that the human incentives are not aligned to make us spend more time in that area, when we really should.
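
To make Frederick's "front door" example concrete, here is a small hedged sketch of the difference between a decorative JWT check and one that actually verifies the token, using the PyJWT library. The issuer, audience, and key handling are placeholder assumptions, not a drop-in implementation.

# Hedged sketch of strict JWT validation with PyJWT. The misconfiguration
# attackers love is the commented-out line: decoding with verification off.
import jwt  # pip install PyJWT

def verify_token(token: str, public_key_pem: str) -> dict:
    # jwt.decode(token, options={"verify_signature": False})  # <- the "front door"
    return jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],                 # pin the algorithm; never accept "none"
        audience="my-api",                    # hypothetical audience claim
        issuer="https://issuer.example.com",  # hypothetical issuer
        options={"require": ["exp", "aud", "iss"]},  # reject tokens missing these claims
    )

Pinning the algorithm and requiring the exp, aud, and iss claims closes the most common ways a token check quietly becomes a no-op.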

Kat: Kubernetes is hard, but people are harder.

Divya: Yep.

Ben: All right, I know we have — Divya, do you have anything else to close out for misconfiguration?

Divya: No, not really, except that please don't resort to defaults. That's a thing that I want to just — if anybody's tuning in, please don't resort to defaults because that creates a lot of problems for admins. And it's not the best-case scenario when it is in production. So, yeah.

Kat: Actually, how did we end up with this public perception that Kubernetes is secure by default? Because that's a myth that's been around for a hot minute.

Chris: Yeah. Nothing. No, like, absolutely not. That's just not true.

Kat: It's not. It's not. But people still go with that assumption. When they don't have enough experience to know otherwise, they still go with that assumption, at least when they're new. I don't know how that popped up in the first place.

Kunal: Linux is secure by default, right?

Kat: Yeah, totally.

Frederick: Yeah.

Kat: Yeah, yeah. For sure. I've heard that.

Chris: I'm sure when you install that web server, it's not going to just open itself up to the world. No, absolutely not. No.

Kunal: Yeah. I think it also depends on how it's configured and deployed and managed.

Kat: Yeah, but out of the box, default?

Chris: Never.

Kunal: Yeah, no. No.

Divya: It's as secure as that pen drive on the street, in my opinion.

Chris: Yeah.

Frederick: So it's securable. And one of the things that I've spent a lot of time thinking about is, why are things not secure by default? I think it comes down to when you want to show it off, when you want that first initial spark of magic. If it's secure by default, it's "Okay, I have to go fix RBAC. I have to go fix IAM. Okay, now I can run the app," versus "install, run," and then a minute later I'm jumping up and down, excited. I think it's the dopamine.

Poll: What extra security measures do you currently have in place for your Kubernetes cluster?

Ben: Yeah. So we have, I think, a minute and a half before we go into the breakout rooms. Each of the speakers here will be moderating a breakout room. I actually have a poll, which I'm going to launch, which might be a good way to start the discussion: what extra security measures do you currently have in place for your Kubernetes cluster? I'm kind of interested to see where people land on this. So if you're currently watching, this is a good way to interact with us before you're chatting with us. You should see it pop up. Let me see. You have to go to the polls tab. No one? Oh, I think it —

Kat: I see it.

Ben: Did it show? Yeah. Share it. Oh, someone shared the wrong — okay, I think someone has shared the wrong one. Here we are. Oh, okay. Obviously, something's glitchy with the polls today. Well, we'll have to do the polls in real life, but I at least got two votes for misconfiguration. So I don't know. I think we'll just be teleported to these breakout rooms in 20 seconds, hopefully, so I guess we'll hold in here and see where it goes.

Frederick: One giant breakout room.
