Managing Multi-Account AWS Console and CLI Access with Teleport

Are you using multiple AWS accounts to separate your environments like many of the engineering teams we work with? If you are, you may be looking for a better way to access those accounts.

Join us for a session in which Nivathan Somasundharam, Implementation Engineer at Teleport, and Ashok Mahajan, Senior Partner Solutions Architect at AWS, discuss common challenges that arise when using multiple AWS accounts to separate cloud environments and how to use Teleport to solve them. Some of the most common issues people face include:

  • Onboarding, off-boarding, and restricting engineer access to AWS accounts
  • Auditing access to AWS accounts
  • Restricting access to sensitive AWS accounts, solved with Teleport Access Requests
  • Running Terraform, boto3 scripts, and CLI commands without AWS credentials

Transcript - Managing Multi-Account AWS Console and CLI Access with Teleport

Nivathan: Hey. Hi, everyone. Hi, Ashok.

Ashok: Hi, Nivathan. Let's share the slide deck.

Nivathan: Yeah.

[silence]

Nivathan: Greetings, everyone. Thank you for joining today's webinar. Let's wait for a couple of minutes for others to join. Let's see where everyone is from. Feel free to drop your city name on the chat. There is Ali from Austin, Lexi from Chicago, Robert from Calgary. Hi, Robert.

[silence]

Nivathan: Welcome, everyone, again.

[silence]

Nivathan: Chintan from San Jose. Hi, Chintan. Let's wait for one more minute and get started.

[silence]

Nivathan: Maybe 30 more seconds, we'll get started. Hi, everyone, again.

[silence]

Nivathan: Okay. Let's get started. Once again, thank you for joining today's webinar: Managing Multi-Account AWS Console and CLI Access with Teleport. I'm Nivathan Somasundharam, an implementation engineer at Teleport. Before Teleport, I was a DevSecOps practitioner at VMware and Cradlepoint, building, managing, and securing cloud environments. I've been at Teleport for a year and a half, and I live in San Jose, California. I'm partnering with Ashok from AWS on this webinar. I'll let Ashok introduce himself.

Ashok: Hi, everyone. My name is Ashok Mahajan, and I'm a senior partner solutions architect with Amazon Web Services. I'm part of the Global Startup team focusing on ISVs, especially our partners in the security segment. I've been part of the information security community throughout my career, a little over 17 years now, and I've implemented a lot of solutions around data security, identity, and access management across multiple domains.

Nivathan: Very nice. Here is the agenda for today's webinar. We'll start with the challenges of delegating access to multiple AWS accounts, and then Ashok will go over the AWS Shared Responsibility Model with respect to AWS IAM. Then I will introduce Teleport and show how it solves some of the challenges that we are discussing today. Finally, I'll show a demo of how AWS access can be done with Teleport, followed by some Q&A. Let's talk about the challenges. We all know it's a great strategy to isolate AWS environments and segregate AWS accounts for billing, governance, and resource isolation purposes.

Challenges with multi-account AWS access

Nivathan: But there are some complexities involved when it comes to delegating and managing access to multiple AWS accounts. Let's get into them. The first one is that it's hard to centralize authorization across different AWS accounts for all your users at different levels. As an engineer, you have to switch between accounts to access different infrastructure components, which requires assuming roles and changing the configuration on the terminal, especially for CLI access. We all have to maintain various compliance requirements based on the standards of the industry we operate in. Auditing is a core part of compliance, and when we have multiple AWS accounts, there are challenges with auditing all of them in a centralized place or system. And the last one is that we strive hard to achieve least privilege.

Nivathan: So we cannot let developers have default access to your critical or production accounts or your critical infrastructure all the time. But when we restrict engineers' access to production or critical infrastructure, they are handicapped and their productivity goes down. To address this, we can use just-in-time access, also known as temporary access to critical resources, but providing just-in-time access to AWS accounts is quite challenging. That is the last challenge. Now I'll pass it on to Ashok to discuss the AWS Shared Responsibility Model with respect to AWS accounts. Over to you, Ashok.

Fine-grained access on AWS reduces cybersecurity risk

Ashok: Thanks, Nivathan. Next slide. Yeah. So I think we all can agree that the core of IAM revolves around three critical elements: the who, the what, and the access that connects them. "Who" can be your users, employees, and your customers. "What" represents the resources you're trying to protect. And what connects the two is access management. With AWS Identity and Access Management, you can specify who can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS. So in the AWS context, "who" can be an IAM user, a group, or an IAM role. And an IAM user is not just restricted to a human; it can also be associated with an application or a service.

Ashok: For example, when you set up access for Amazon Elastic Compute Cloud, commonly known as EC2, to access an object in S3, the instance is acting as an IAM identity in that context. The "what" can be an AWS account, an AWS resource such as an Amazon EC2 instance, or even a fine-grained object when you associate it with Amazon Simple Storage Service (S3), or a DynamoDB table item. The access is managed by policies and permissions. Policies contain the permissions that define whether a request should be allowed or denied. AWS IAM is offered at no additional charge and integrates with many AWS services so that these services can leverage the same IAM-based permission model to manage and control access. Next slide. Since we are talking about AWS accounts, I just want to do a quick recap of what an AWS account is.
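To make the allow-or-deny evaluation concrete, here is a minimal sketch of creating an identity-based policy with the AWS CLI. The bucket name, policy name, and file name are placeholders, not values from the webinar.

```sh
# Hypothetical example: a minimal identity-based policy allowing read-only
# access to a single S3 bucket. Bucket and policy names are placeholders.
cat > s3-readonly-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
EOF

# Create the managed policy; it can then be attached to a role or group.
aws iam create-policy \
  --policy-name ExampleS3ReadOnly \
  --policy-document file://s3-readonly-policy.json
```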

Ashok: It is basically a container for all the AWS resources that you create and manage. It provides you that administrative capability for access and billing. An AWS account is basically your security boundary for all your AWS resources. The resources that you create in an account are available to users who have credentials for that account. Among the key resources you create in your AWS account are your identities, such as users and roles. Identities have credentials that you use to sign in, or authenticate, to AWS, and they have associated permissions and policies that define their authorization. Now let's take a look at the AWS Shared Responsibility Model. Nivathan, next slide. So cloud security at AWS is our highest priority, and leveraging this shared responsibility model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which these services operate.

The Shared Responsibility Model

Ashok: You as a customer assume the responsibility and management of the guest operating system, including updates, patching, and any associated software, as well as the configuration of AWS-provided security groups. As shown in this chart, this differentiation is commonly referred to as security of the cloud versus security in the cloud. So in this equation, AWS is responsible for security of the cloud, which includes the physical infrastructure, facilities, and compute building blocks that run the AWS services. You as a customer are responsible for security in the cloud. That means you secure the workloads and applications that you deploy in the cloud and have the flexibility to put more emphasis on the security of sensitive data as per your business requirements.

Ashok: You own and are responsible for managing your data, classifying your assets, and most importantly, using the IAM tools to make sure appropriate permissions are defined based on your business needs. Next slide. Some customers have sensitive workloads and prefer to have more control over what data and applications are running and where. The shared responsibility model gives you that flexibility, and it changes based on the services being used. For managed services, AWS takes on more responsibility to alleviate some of the heavy lifting, with the trade-off of less general control. As an example, when we look at Amazon Elastic Compute Cloud, commonly known as EC2, AWS is responsible for protecting the infrastructure that runs AWS services in the cloud. Your responsibility lies with controlling network access to your instances, managing credentials, managing the guest operating system, and even managing the IAM roles that give access to those instances.

Ashok: Another example is Amazon Simple Storage Service, or S3, where you don't have to manage the operating system, but you are responsible for managing your data, classifying the objects, setting the level of encryption, and on top of that, managing access through IAM roles. Note the IAM requirement in these services. Right? IAM was common to both scenarios. So AWS provides you mechanisms to configure the identity and access management side of the requirements so that you can adapt based on your needs, your compliance, and your security objectives. As an example, when it comes to different users in AWS, there is an account owner, also known as the root user; there is an IAM Identity Center user; there is a federated user; and there is an IAM user. And you have options to provide long-term or temporary credentials and to adapt and configure these based on your specific needs and requirements.

Ashok: Now in the next slide, let's take a look at what AWS provides in terms of core identity management, access control, and governance features. I think we all can agree that these form the core security pillars for any organization of any size and type. As shown on this slide, AWS provides these identity controls for Zero Trust, which you can use to authenticate identities, evaluate identity context, and enforce fine-grained identity-based authorization before allowing any identity to access your applications, data, and systems. In the next few slides, we will learn more about some of these services and how they help you manage access to your critical applications and data. Next slide.

AWS IAM Identity Center

Ashok: Let's begin by talking about AWS IAM Identity Center, which helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications. In fact, IAM Identity Center is the recommended approach for workforce authentication and authorization on AWS for organizations of any size and type. You can create and manage identities in AWS, or you can connect to your existing identity sources, including Microsoft Active Directory, Okta, Ping, and others. With multi-account permissions, you can plan for and centrally implement permissions across multiple AWS accounts at any time without needing to configure each of these accounts manually. Next slide. Now, as Nivathan was pointing out, in today's ecosystem we have many applications and multiple teams, and each of these teams has different operational, regulatory, and budgetary requirements. Many companies follow a multi-account model for these reasons.

AWS Organizations

Ashok: This model simplifies billing, since resources used within an AWS account can be allocated to the business unit responsible for that account, and it also provides isolation and tight security boundaries enforced by the built-in isolation between accounts and by consolidating workloads with similar risk profiles. AWS Organizations is basically an account management service that enables you to consolidate multiple accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing features that enable you to meet your budgetary, security, and compliance requirements, and it also allows you to manage and control policies and define guardrails for your organization. It provides you native tools to build your environment so that you can scale quickly by creating accounts and allocating resources, customize your environment based on your specific governance needs, and organize costs and identify cost-saving measures.
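For readers who want to see what this looks like in practice, here is a rough sketch using the AWS Organizations CLI; the email address, account name, and IDs below are placeholders.

```sh
# Hypothetical example: create a member account and list accounts in the
# organization. Email, account name, and IDs are placeholders.
aws organizations create-account \
  --email dev-team@example.com \
  --account-name "dev"

aws organizations list-accounts

# Group accounts under organizational units for governance.
aws organizations create-organizational-unit \
  --parent-id r-examplerootid \
  --name "Workloads"
```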

Ashok: And if you would like to jumpstart your AWS environment using a simple UI and built-in best practices, we recommend using AWS Control Tower, which we'll be talking about on the next slide. I think we all can agree that managing a large-scale enterprise with a large number of applications, distributed teams, cloud setup, and governance can be complex and time-consuming. This can sometimes slow down the innovation you need to meet your customer requirements. That's where AWS Control Tower comes in. AWS Control Tower offers you a straightforward way to set up and govern a multi-account environment following prescriptive best practices. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center, to build a landing zone in less than an hour.

AWS Control Tower

Ashok: Resources are set up and managed on your behalf. AWS Control Tower enables end users on your distributed teams to provision new accounts quickly by means of configurable account templates in the form of an account factory. Meanwhile, you can use your central admin capabilities to monitor all your accounts and make sure they are aligned with your established organization-wide best practices and controls. In short, AWS Control Tower offers the easiest way to set up and govern a secure, compliant, multi-account AWS environment based on best practices established by working with thousands of enterprises. Next slide. AWS IAM provides a number of security features to consider, develop, and implement based on your specific needs.

IAM best practices

Ashok: Along with these tools and controls, there are also some best practices, which are guidelines that you can follow. They do not represent a complete security solution, but they are practices we have learned over years of experience. As an example, protect your root credentials like you protect your credit card or any sensitive secret, and use them only for the tasks that require them. Use IAM roles to manage access, use temporary credentials whenever possible, and use multi-factor authentication for extra security. You can use AWS Organizations service control policies, commonly known as SCPs, to limit the permissions of root users and to establish permission guardrails for your accounts, grant only the permissions required to achieve a task, and use IAM Access Analyzer to validate your IAM policies to ensure you follow the principle of least privilege.
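As an illustration of those last two points, here is a minimal sketch of an SCP that restricts root-user actions, validated with IAM Access Analyzer and then attached to an organizational unit. The policy content, file name, and IDs are placeholders.

```sh
# Hypothetical example: an SCP that denies actions performed with root
# credentials. Names and IDs below are placeholders.
cat > deny-root.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": { "aws:PrincipalArn": "arn:aws:iam::*:root" }
      }
    }
  ]
}
EOF

# Lint the policy with IAM Access Analyzer before attaching it.
aws accessanalyzer validate-policy \
  --policy-document file://deny-root.json \
  --policy-type SERVICE_CONTROL_POLICY

# Create the SCP and attach it to an organizational unit.
aws organizations create-policy \
  --name DenyRootUser \
  --type SERVICE_CONTROL_POLICY \
  --content file://deny-root.json

aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id ou-exampleouid
```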

Ashok: When it comes to managing access, along with these native AWS services, we also have a strong network of AWS partners with deep expertise and specialized solutions to meet your specific needs. By leveraging AWS services along with partner solutions like Teleport to manage your connectivity, authentication, authorization, and even your audit needs, you can scale your AWS infrastructure while reducing the potential of unintentionally opening up AWS resources to unauthorized access. Now let's hear from Nivathan about how Teleport solves the complexity involved in managing access to multiple accounts and makes life easier. Over to you, Nivathan.

Why Teleport?

Nivathan: Thanks, Ashok, for going over the Shared Responsibility Model, some of the great services from AWS, and the AWS IAM best practices. Now let's see why we need Teleport and what problems Teleport solves. Let's go over some introductions before getting into the details. Here is how engineers access infrastructure today. We all know that secure infrastructure access is hard. As engineers, we want to access various pieces of virtual infrastructure, like servers, Kubernetes clusters, Windows machines, databases, CI/CD systems, AWS consoles, and other applications. And the list just goes on. How do we access all this infrastructure today?

Nivathan: Either through a bastion or a VPN-style solution, or we are still using secrets like passwords, SSH keys, and so on. So everywhere, there is a dependency on either a secret or VPN-style perimeter-based security to secure the infrastructure. But there are problems with both the VPN-based and the secret-based approaches. If a single piece of infrastructure here is compromised, the bad actor can move laterally inside your infrastructure, dump your sensitive information, and make the blast radius huge. To address this issue, let's see how Teleport solves it. You can see how much better it is when you access your infrastructure through Teleport.

Nivathan: Teleport unifies access to all your infrastructure components. This is one of those rare times when a security solution makes it easier and faster for your engineers to get their job done while also increasing your security posture at the same time. Many organizations we speak with have implemented several security solutions, but developers didn't adopt them. Do you know why? Because they are a pain to use, and developers are smart enough to get around them. If you want someone to adopt a security solution, it has to be dead simple. And that's what Teleport has done with Identity-Native Infrastructure Access. Let's get right into it. The solution to address all the challenges is Identity-Native Infrastructure Access. Identity-native consists of two things.

Identity-native: secretless + zero trust

Nivathan: The first one is secretless and the second one is Zero Trust. Let's talk about secretless first. Passwords are always getting stolen or phished. Passwords and secrets end up being shared between teammates, which is a bad practice. And passwords and secrets don't have an identity attached to them. So let's try to avoid legacy PAM systems and key vaults for managing passwords. Clearly, the better approach is secretless. The second component of identity-native is Zero Trust, which aligns with Google's BeyondCorp initiative for securing their own infrastructure. With Zero Trust, access between human and machine is based entirely on identity. Zero Trust doesn't trust anyone based on the network.

Nivathan: The user's identity is always checked, and access is granted based on that. Now let's see how Teleport does Identity-Native Infrastructure Access. Here is a high-level architecture diagram of Teleport. The identity-native proxy sits between a user and the infrastructure resources, like servers, Kubernetes clusters, databases, applications, and cloud consoles like the AWS Console. And Teleport itself is the certificate authority. Whenever an engineer logs into Teleport by proving his or her identity, they are issued a short-lived X.509 certificate, so they stay logged in only for that short time, based on the TTL that we set.
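For example, the CLI flow for picking up that short-lived certificate could look roughly like this; the proxy address and SSO connector name are placeholders.

```sh
# Hypothetical sketch: log in via SSO and inspect the short-lived certificate.
tsh login --proxy=teleport.example.com --auth=github

# Shows the logged-in user, roles, and how long the certificate remains valid.
tsh status
```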

Teleport integrations with AWS infrastructure resources

Nivathan: Teleport also has fine-grained RBAC capabilities that control who can access what inside Teleport. So based on the identity, the RBAC takes care of deciding who can access what. Teleport can be integrated with SSO providers like Okta, Microsoft Entra ID, GitHub, and various others. You can also use WebAuthn-based hardware and biometric authenticators like YubiKeys, Touch ID on your Mac, and Windows Hello to authenticate to Teleport. Everything is audited, and those audit logs can also be exported to your preferred SIEM. Users can also request access through Teleport. For example, whenever an engineer requires elevated privileges, they can request them, and admins can approve that request.
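As a rough sketch of what that RBAC can look like, the role below limits AWS console access by app label and by which IAM role ARNs a user may assume; the role name, labels, and ARN are placeholders, not the roles used in the demo.

```sh
# Hypothetical example of a Teleport role resource created with tctl.
cat > aws-dev-access.yaml <<'EOF'
kind: role
version: v7
metadata:
  name: aws-dev-access
spec:
  allow:
    app_labels:
      env: dev
    aws_role_arns:
      - arn:aws:iam::111111111111:role/DevReadOnly
EOF

tctl create -f aws-dev-access.yaml
```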

Nivathan: Teleport integrates with a wide range of AWS infrastructure resources, and we are also platform agnostic. Teleport can integrate well with all your day-to-day workflow systems as well. I want to remind everyone again that Teleport is open source. If you compare us with legacy solutions, the key difference is that Teleport is user-friendly for developers, and developers will love Teleport because we created the solution from the ground up with developers, DevOps engineers, and SREs in mind. Our founding team came from this world and experienced these problems firsthand. That is what truly makes us different. Once again, you can check out our open source code on GitHub. Now let's go over some advantages of using Teleport, specifically for AWS Console and CLI access.

Advantages of using Teleport for AWS access

Nivathan: The first one is that Teleport integrates well with SSO providers. So your identity source can remain your IdP, like Okta, Microsoft Entra ID, GitHub, etc. Teleport helps manage the employee lifecycle; workflows like onboarding and offboarding can be automated, and we can add and remove users just by making IdP changes. The second one is just-in-time access for AWS accounts. Consider a highly critical piece of infrastructure that sits in one of your AWS accounts. As Ashok mentioned, it is better to segregate your accounts for security, governance, and billing reasons.

Nivathan: So consider a very sensitive account. We don't want to give engineers default access to that account. Engineers can request access to that particular account and gain it for a short time using Teleport. No infrastructure-as-code changes are required here, and there is no waiting for PRs to be approved. It can also be integrated with your daily workflows, like Jira, Slack, Mattermost, Teams, PagerDuty, Discord, etc. In fact, Teleport can do automated reviews based on the PagerDuty on-call schedule and grant access based on that schedule. Compliance and auditing for your AWS account activity using Teleport is one of the main advantages that I want to mention.
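For a sense of what that just-in-time flow could look like from the CLI, here is a minimal sketch; the role name, reason, and request ID are placeholders.

```sh
# Hypothetical sketch: an engineer requests a role, and an approver reviews it.
tsh request create --roles aws-prod-access --reason "Debugging a prod incident"

# On the admin side, list pending requests and approve one by its ID.
tctl requests ls
tctl requests approve <request-id>
```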

Live demo

Nivathan: And Teleport's detailed audit logs help maintain compliance and keep the activity of users across different AWS accounts in a single, centralized place. You can see all the account activity in one place, and you can also export these audit logs from Teleport to your preferred SIEM. Okay. We have gone through the advantages of using Teleport. Now let's see them in action and look at how to access AWS accounts from both the Teleport UI and CLI. I have another tab here where I can log in to Teleport. I'm going to use my SSO, GitHub, to log in to the Teleport cluster, the Teleport environment. Okay. I'm logged in. This is my GitHub; I used my GitHub user to log in.

Nivathan: And you can see these are the resources that I have access to. I have AWS consoles, Kubernetes, some RDS databases, and Windows machines, everything in one single place. And the second tab here is more of a management view. As an admin, I can manage things over here. Let me quickly log in to one of my AWS accounts. You can see there is a dev account, staging, and then prod. Let me log in to the dev account. I don't need to enter any passwords here; everything is secretless. I just logged into the AWS account straight into the EC2 console, and I can go and see the machine that is running in it. I can do something very similar on the CLI as well.

Nivathan: Let me check if I am logged in. Okay. I'm not logged in now, so I'll go and log in. So I have successfully logged in; I used my SSO provider to log in again here from my CLI. You can see I have acquired the certificate, the short-lived credential that I mentioned, and it's valid for 1 hour. This can be set to a maximum of 12 hours based on your developers' working hours. You can set it to 8 hours, which is what we commonly see. And now that I have logged in, I'll do `tsh status` again.

Nivathan: Yes. I'm active now, and I'll do `tsh app ls`. This is the single binary that a developer needs to use. They just install tsh on their local machine, and that is good enough for all access, not just applications: servers, Kubernetes, everything. Here, I have the three AWS Console apps, plus Grafana and some other applications as well. So let me log in to this `prod` AWS account and try to run some AWS commands. Okay. I'm logged in now. I can do `tsh aws s3 ls`, and I'm able to see these buckets.
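Put together, the CLI flow from the demo could look roughly like this; the app name and IAM role are placeholders rather than the demo's actual values.

```sh
# Hypothetical sketch of the CLI flow shown above.
tsh apps ls                                   # list the AWS console apps you can reach
tsh apps login aws-prod --aws-role DevReadOnly
tsh aws s3 ls                                 # AWS CLI calls are proxied through Teleport
tsh aws ec2 describe-instances --region us-east-1
```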

Nivathan: And I can do all the AWS CLI operations. I can also start a proxy that helps me run Terraform or any other automation with AWS, especially boto3 scripts. So it works very natively with both the CLI and the UI. You don't have to use any secrets, and all your access is unified. Now I have one more user, Alice. Let's consider Alice to be a junior developer who doesn't have any access initially when they log in. There is an access request feature in Teleport which Alice can use to request any of these resources or a role, in this case the AWS prod access role. So let me request that role.
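As a rough sketch of that proxy workflow under assumed names (the app name, script, and printed variables are illustrative), running Terraform or boto3 without long-lived AWS credentials could look like this:

```sh
# Hypothetical sketch: start a local AWS proxy backed by Teleport credentials.
tsh proxy aws --app aws-prod
# The command prints temporary credentials and proxy settings to export, e.g.:
#   export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... HTTPS_PROXY=...
# With those variables set in the shell, AWS SDK-based tools work as usual:
terraform plan
python3 my_boto3_script.py
```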

[silence]

Nivathan: And the request is pending now, created by Alice. As an admin, on the other hand, I can go and see if there are any requests that came in. You can also send a notification for this request to Slack, Discord, or any other day-to-day workflow tool, and you can automate these approvals through PagerDuty schedules. If Alice were on the PagerDuty schedule, we could automate that approval process. So here is the request from Alice, and I'm going to approve it. Everything is customizable, like how long we provide the access.

Nivathan: And everything can be customized based on who can review and who can request. It's all configurable through Teleport's extensive RBAC capabilities. And, yes, I have approved it. Let me refresh on Alice's side. You can see the role has been approved, and Alice can assume this role. Once Alice assumes it, you can see "assumed and expires in 23 minutes," and checking the resources, Alice can see the two resources. And here, I can go to the same prod account and see the S3 bucket that I showed on the CLI.

Nivathan: Yep. You can make this access as granular as you like. It can be mapped to any role, and a user can choose a role based on the RBAC setup that we configured before. So that is the demo of just-in-time access. Along with that, I wanted to show the auditing. Here are the audit logs. You can see who has accessed what; there was an app session started by Alice to the AWS Console prod. And every detail of that application access is in JSON, so you can parse it and send it to your preferred SIEM.

Resources to get started

Nivathan: And you can also see the access request that was created and who reviewed it, everything in detail here. This will definitely help with a lot of compliance requirements and with auditing your users' access to different pieces of infrastructure. That's all for the demo. So where to begin? You can join our Slack community, and there is a Teleport AWS Marketplace listing you can check out. You can connect all your AWS accounts to Teleport, and not just for AWS Console access: you can connect your EKS clusters, RDS databases, EC2 machines, everything. We have great integrations with AWS to enroll your resources automatically in an easy way.

Nivathan: Your end users can then use Teleport as a unified platform to access all these infrastructure components. Check out our GitHub page; we are open source, again. Teleport also has a Team plan, which is more like a freemium trial version that you can use for a few days and try out. And since we are open source, you can also try it out by setting up the cluster yourself. I am open for Q&A.

Q&A time

Kateryna: Awesome. Thank you so much for the presentation today, Nivathan and Ashok. Hi, everyone. My name is Kateryna. I'm just jumping in to help out with the Q&A. You may have seen my name in the chat. Again, thanks so much for joining. While we go through these questions, we still have some time in the session today, so please go ahead and submit any questions you may have to the Q&A section at the top of the panel on your right, right next to the chat. We've also put together a list of the top questions we get asked around today's topic, so we'll be asking those as well. Ashok, Nivathan, I hope you're both ready. We have a question from Fabiano that we can start with, so let's share that on the screen. Fabiano was asking, "Hey, I have three different environments: CI/CD, dev, and prod. How would my topology look? I'd like access to applications inside Kubernetes in each account, and to databases." And the question continues, "Do I need to install the cluster into one of these accounts? And in the other accounts, what components should be installed to join the initial cluster and be able to connect to applications inside the different clusters?" Nivathan, I think this one is going to you.

Nivathan: Sure. Great question, Fabiano. Yeah. Your three different environments can be in any AWS accounts, and they can include Kubernetes clusters and databases. There is a Teleport cluster that you can deploy in your environment, and we also have a SaaS offering. So you can set up the cluster in any one of your AWS accounts, and all your resources will be connected to Teleport through a reverse proxy tunnel. To simplify, you don't need to open up any ports, and there aren't any infrastructure-level changes that you need to make. All you have to do is install an agent on your resources, like your servers and Kubernetes clusters.

Nivathan: If you are on AWS, you don't have to install the agents at all. We have other ways to integrate, like role-based integrations, which make it even easier. You can enroll those EKS clusters, servers, and databases from different environments into Teleport, use Teleport to access all the environments, and give granular access to CI/CD, dev, and prod by using labels and RBAC inside Teleport. Hope this helps.
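To make the agent path concrete, here is a rough sketch of enrolling an EKS cluster over a reverse tunnel with the teleport-kube-agent Helm chart; the proxy address, join token, and cluster name are placeholders.

```sh
# Hypothetical example of joining an EKS cluster to a Teleport cluster.
helm repo add teleport https://charts.releases.teleport.dev
helm install teleport-agent teleport/teleport-kube-agent \
  --namespace teleport-agent --create-namespace \
  --set proxyAddr=teleport.example.com:443 \
  --set authToken=<join-token> \
  --set kubeClusterName=eks-dev \
  --set roles=kube
```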

Kateryna: Awesome. Thank you so much, Nivathan. Fabiano, I hope that was helpful. Please let us know if you have additional follow-up questions. Looks like that was helpful. We've got another question from Harsh, so let's go ahead and share that on the screen. Harsh is asking, "The role-based solution — is it available in open source?"

Nivathan: The RBAC capability is available in open source, but some of the features, like IdP integrations, are only available in Enterprise.

Kateryna: Cool. Awesome. We've got another question. Wow, these are coming in hot. Another question — what's the recommended EC2 instance type for database access? I think, Ashok, that one might be going to you.

Ashok: Yeah. So I think the instance type recommendation is based on your architecture. Right? There is no, I would say, prescriptive guidance that you have to use t3.micro or t2.micro. It's all based on your specific needs: how many users are going to log in, what type of access they need, whether it is just acting as a bastion host for one scenario or multiple. So it all depends on those specific needs. And AWS offers more than 500 different instance types, so you can pick the instance that fits your specific needs.

Nivathan: Yeah. If this is for the AWS database agents to run, t2.micro should be enough. The Teleport agents that run on your target systems are very lightweight; they don't take much of your resources.

Kateryna: Cool. Awesome. There was a question from Robert. It was asked in the chat, so I'll go ahead and direct it into the Q&A and share it on the screen. Yes, this presentation is recorded, and it will be shared with you. You should be getting an email with the recording link 24 hours after, so tomorrow you'll be getting an email. It will have a purple button that says, "Watch on demand." If you have any questions, please feel free to reach out to any of us on Community Slack or find us on LinkedIn or Twitter; we'll make sure to get the recording over to you. It will also be posted on our public YouTube channel. We have another question from Maksim: there are two options on the pricing page, $15 per user and Enterprise. Are those options for the self-hosted solution or SaaS? That's a great question. I could answer it, but Nivathan, I'll let you handle that one as well.

Nivathan: This is for the SaaS. You can also reach out to sales; they can give you the best answers related to pricing. But the $15 per user mentioned on the website is for the SaaS.

Kateryna: Yeah. So that's the Teleport Team. And Nivathan, I think the broader question can also be, can Teleport be hosted on-prem, or is it just a SaaS solution?

Nivathan: Teleport can be hosted on-prem as well. It can run in any cloud or any data center, and it is also available as a SaaS platform. With SaaS, we take care of managing the whole cluster and version upgrades; everything is taken care of by us. On-prem, it has to be managed by the customer.

Kateryna: Awesome. Thank you. I'll go ahead and pull one question from the ones that we get asked often, and this one's going to be for Ashok. So how is the AWS Control Tower different from the AWS Organizations, and when should we use it?

Ashok: Yeah. I think that's a good question. Right? AWS Control Tower basically provides an abstracted, prescriptive experience on top of AWS Organizations, so in a way it complements Organizations for efficient management. Control Tower orchestrates and extends the capabilities of AWS Organizations and automatically sets up Organizations as an underlying service to help you organize accounts and implement guardrails using the SCPs we talked about earlier. Typically, AWS Control Tower is for customers who want to create and manage their multi-account AWS environment with the built-in best practices that are already available in Control Tower.

Kateryna: Awesome. Thank you. We'll have more questions for you shortly, Ashok. But the next one is from Guna, and that one's also for you, Nivathan. Will the Teleport agents create any overhead on the server? Is there an agentless implementation of Teleport?

Nivathan: So Teleport has agentless capabilities. I mean, Teleport has various integrations. With AWS resources in particular, like RDS and EC2 machines, there is no overhead at all to install these agents. But for some other servers, like on-prem servers where there is no integration, the agent has to be installed. The agents are very lightweight. The reason we use agents is that Teleport's auditing capabilities are more detailed with them. Teleport uses eBPF, the extended Berkeley Packet Filter, to gather all the system-level calls that happen inside a server. So basically, any command that you run inside a server or even a Kubernetes pod will be recorded, and that is done through the agents. That is the huge benefit of agents. And there is no big overhead, because we have integrations that take care of the installation of agents, and we also have automation that helps you install these agents anywhere. At the end of the day, it's lightweight and doesn't take up significant resources on your machines.
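As a minimal sketch of the eBPF-based recording mentioned here, assuming a systemd-based agent install, enhanced session recording is switched on in the agent's configuration; the file path is a placeholder, and in practice you would merge this into your existing ssh_service section rather than appending a duplicate.

```sh
# Hypothetical example: enable eBPF-based enhanced session recording on an agent.
sudo tee -a /etc/teleport.yaml <<'EOF'
ssh_service:
  enabled: true
  enhanced_recording:
    enabled: true
EOF

sudo systemctl restart teleport
```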

Kateryna: Awesome. Thank you. And thank you for the question, Guna. Next up, another question for you, Ashok. You mentioned during the presentation that we should follow least-privilege permissions. How do users work towards achieving that?

Ashok: Yeah. So I think implementing the principle of least privilege is a journey. It's not like there is a switch that you can just turn on and have it enabled. Right? As you start migrating or implementing your solutions in AWS, you can start with a broader permission set, especially in your lower environments. And as you mature, you can refine your permissions to grant access based on the type of access your developers and admins need. AWS provides you various tools to refine your permissions. You can use AWS managed policies to begin with, you can use AWS IAM Access Analyzer, and you can even use AWS CloudTrail logs to inspect and adapt based on your specific needs. You can use the IAM policy simulator to test and troubleshoot your policies. So there are multiple mechanisms available, but as I mentioned at the beginning, it's a journey. You can start with a broader set of permissions, and as you progress and mature, you can refine the permission set based on your specific needs.
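Two of the tools mentioned here can be exercised from the CLI; the analyzer name, role ARN, and actions below are placeholders.

```sh
# Hypothetical example: create an Access Analyzer for the account to surface
# unintended access to your resources.
aws accessanalyzer create-analyzer \
  --analyzer-name example-analyzer \
  --type ACCOUNT

# Hypothetical example: simulate whether a principal's policies allow actions.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111111111111:role/DevReadOnly \
  --action-names s3:GetObject s3:PutObject
```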

Kateryna: Gotcha. Yeah. I think it's a very nice way of putting it, as a journey, especially when there are so many solutions and ways to approach infrastructure access. We've got one more question from our list of top questions. This one will be for you, Nivathan. Do we need to make any changes to the existing infrastructure to add and use Teleport?

Nivathan: No, actually. As I mentioned earlier, Teleport doesn't need any changes to your infrastructure. All we need to do is enroll resources with the integrations that we have or install the agents on them, and the resources will connect over a reverse proxy tunnel. So there is no whitelisting of IPs, no port openings, and no other network-level or infrastructure-level changes that have to be made in the customer's environment. It's super simple: we just need the cluster and the agents installed, and everything is enrolled automatically. Once resources are enrolled, users can take advantage of Teleport and use a single unified platform for all infrastructure access.

Kateryna: Awesome. Thank you. We've only got a few minutes left, so if you have any last-minute questions, please go ahead and submit them in the Q&A box. We only have one or two more. So Ashok, one more question for you. Does AWS share any best practices for managing multiple accounts? I think that ties into the topic of today's conversation really well.

Ashok: Absolutely. As you saw on the last slide that I was sharing, there are some best practices for IAM in general, but much of the time this guidance and these best practices vary by service as well. For multi-account scenarios, we do offer guidance. Some examples: use multiple accounts to organize your workloads; use a single organization to bring those accounts together; group workloads based on business purpose, not on your reporting structure; and enable AWS services at the organization level rather than at the individual account level, which gives you more flexibility. There are various guidelines, and most of them are available in the user guide for the specific service, so feel free to look at the guidelines for the service you're interested in.

Kateryna: Awesome. Thank you. And everyone, you'll be getting the recording of this, so you'll have that slide with the best practices that Ashok was sharing. Okay. Awesome. So we'll wrap this up with one last question, and then I'll jump off video so we can wrap up today's session. Nivathan, what other AWS services does Teleport integrate with?

Nivathan: Actually, Teleport can integrate with everything: your EC2 machines and your Kubernetes clusters, particularly EKS clusters. We have an integration that requires no agent installation, especially for EKS, and there are other simple ways to do it as well. There is server access, a wide variety of RDS and other AWS databases are supported, and also Windows machines. Any application that is hosted on AWS can also be integrated with Teleport, and users can use Teleport to log into it. Basically, we cover most of the AWS services.

Wrap-up

Kateryna: Awesome. Well, that's a wrap for Q&A. Thank you so much for joining for the Q&A. I'll hand it back to Nivathan and Ashok to wrap up today's session.

Nivathan: It was really great. Thank you. Great questions from everyone. Thank you for attending the session today, and wishing you all very happy holidays. Merry Christmas and Happy New Year to you and your loved ones. Thank you, Ashok, for partnering with us on this webinar.

Ashok: Thank you so much for having me here. And everyone, happy holidays and Happy New Year in advance.

Nivathan: Sure. Thank you.
