Securing RDS with Teleport
The Amazon Relational Database Service (RDS) is one of the most popular AWS services, making it simple to set up, operate, and scale databases in the cloud. As these databases often hold sensitive data, implementing secure access is often one of the first security challenges cloud engineering teams must solve. In this session, we will explore how Teleport can be used to achieve the following:
- Identity-based, time-bound access to private RDS endpoints
- Just-in-time elevated access
- Full audit trail connecting identities to database queries
Transcript - Securing RDS with Teleport
Dan Kirkwood: Hi, everyone. Thanks very much for joining. We will give everyone just a minute or two to join, and then we'll start the session. So hang tight.
Dan Kirkwood: All right. Hopefully, everyone has been able to join the session. And thank you very much for joining and giving us your time today. We've got a lot to get through, so we might get started. Before I start, I just want to verify - maybe a show of hands, or just let me know - are you seeing a slide shared through the webinar?
Ashok Mahajan: Yes, Dan.
Dan Kirkwood: Awesome. Perfect. Well, welcome, everyone. And thanks for joining us from AWS and Teleport today for the topic, which is securing RDS with Teleport. We're going to be spending the next hour with you on this topic. And by way of introduction, my name is Dan. I'm a senior solutions engineer at Teleport, and I'm based out of Sydney, Australia. And it's really a privilege for me to join you in this session today. Joining me is Ashok. Ashok, do you want to introduce yourself?
Ashok Mahajan: Sure, Dan. Hello, everyone. My name is Ashok Mahajan, and I'm a senior partner solution architect with Amazon Web Services. I am part of the global startup team focusing on ISVs, especially our partners in security segment. And I'm based out of New York, Tri-State area.
Dan Kirkwood: Awesome. Thanks, Ashok. So to understand what we're going to be covering over the next hour, here is our agenda. What we're going to do is start off with an overview of Amazon RDS, and Ashok will lead us through that. After that, I'm going to give you a brief overview of what Teleport is and what it does before we get into the specifics of Teleport for RDS access. I'll run through a demo of how this works, and then we'll have some time for Q&A at the end. If you have questions as we go, please feel free to put them into the chat. You should see some messages from Katarina in that chat already. Please place them there. As I'm sharing, I can't see the questions as we go, but if we're not able to get to them during the session, we'll hopefully get to them at the end. So with that, we'll kick off. And, Ashok, if you want to take us through Amazon RDS.
Ashok Mahajan: Thanks, Dan. Let's jump onto the-- yeah, thank you. So before we dive deeper into RDS, I just want to quickly talk about the needs of modern applications today, right? Most of the applications that we see today are built using microservices and in the cloud. Most of these applications have varying needs when it comes to databases, and we have seen that one-size-fits-all does not really work, and ultimately leads to trade-offs and compromises for our developers. Moreover, as businesses today are spending more time innovating and building new applications, they don't want to spend time on managing infrastructure. As you innovate quickly and move to a DevOps model, you also need to think about the rapid rate of change that comes along with it. These dynamics combined have changed the way businesses are building applications and why modern applications, along with databases, are moving to the cloud. In this regard, Amazon offers a wide range of database services that are purpose-built for every major use case. These services are fully managed, battle-tested, and provide deep functionality that allows you to build applications that scale easily. Our fully managed database services include relational databases for your transactional application needs. These include Amazon RDS and Amazon Aurora.
Ashok Mahajan: And when you think of relational databases, you can choose from seven popular database engines, which include Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. And if you have a need where a relational database would be ineffective, we have Amazon DynamoDB, a database that is optimized for running key-value workloads at single-digit millisecond latency. We have ElastiCache for in-memory workloads, and we have Neptune for graph databases. For today's conversation, we'll be focusing on Amazon RDS. Dan, if you can move to the next slide, please? Thank you. Amazon Relational Database Service, or as it's commonly known, RDS, makes it easier to set up, operate, and scale a relational database in the AWS cloud. This service was launched in 2009, and customers benefit from over a decade of proven operational excellence, security best practices, and innovation. Amazon RDS removes the burden of managing a number of database tasks like automated failover, backup, recovery, and push-button scaling, so that you can focus on innovating on behalf of your customers. As a result of this innovation, Amazon RDS offers you the ability to customize your managed workload for the high availability and durability that you need. Next slide.
Ashok Mahajan: Before we dive deeper into RDS and the security mechanisms AWS offers, I just want to put some emphasis on our shared security model, or rather, I should say, shared responsibility model. Cloud security at AWS is our highest priority, and we basically follow this shared responsibility model where AWS is responsible for security of the cloud: the physical infrastructure, facilities, and the actual compute and building blocks on top of which AWS services run. You, as a customer, are responsible for security in the cloud. This means you secure the workloads and applications that you deploy in the cloud and have the flexibility to put emphasis on the security of sensitive data and adjust as needed in areas where you feel you need more or less security. You own and are responsible for your data, including encryption options, classifying your assets, and even using the IAM tools to apply the appropriate permissions. In terms of databases as an example, Amazon RDS is responsible for hosting the software components and the infrastructure of DB instances and DB clusters. AWS also provides you with mechanisms to implement security and compliance controls. You, as a customer, are responsible for using these mechanisms properly. This also includes query tuning, improving query performance by using Amazon RDS monitoring, and RDS Performance Insights, just to name a few. Next slide.
Ashok Mahajan: And if you can click next, yeah. Most of us are familiar with the design principle of defense-in-depth. Common relational database permissions and security controls follow this concept. Defense-in-depth basically refers to layering multiple security controls together in order to provide redundancy in case a single control fails. So in the next slide, we'll talk more about the different layering mechanisms RDS provides. So when it comes to the security of your database, you can manage access to Amazon RDS resources following this principle. And the method you choose really depends on the workload that you have and on your users' needs. RDS makes it easy to control network access to your database. You run Amazon RDS in an Amazon Virtual Private Cloud, which enables you to isolate the database instance. And in case you want connectivity with your on-prem instances, you can use an industry-standard encrypted IPsec VPN. RDS allows you to encrypt data at rest and data in transit. It uses AWS Identity and Access Management policies to assign permissions that determine who is allowed to manage Amazon RDS resources. RDS also lets you use the security features of your DB engine to control who can log in to the database on a DB instance. And these features work just as if you had the database running on your local network. RDS also offers a wide range of compliance readiness for finance, healthcare, government, and more. As an example, RDS offers HIPAA eligibility under a business associate agreement with AWS.
Ashok Mahajan: Beyond these external security threats, Amazon RDS also offers protection from insider threats using a mechanism called Database Activity Streams, which is currently supported for Amazon Aurora and Amazon RDS, and which provides a real-time data stream of the activity that is happening in your database. When integrated with a third-party database activity monitoring tool, you can monitor and audit database activities to provide safeguards for your database and also meet compliance and regulatory requirements. Next slide. An Amazon VPC security group basically acts as a firewall to control inbound and outbound traffic and allows you to specify rules for your database. You can use a security group to control which IP addresses or EC2 instances can connect to your database. Each security group rule is a combination of a protocol, a port range, and the source of the traffic that you allow into the database. For the source, as you can see from the diagram, you can set up an IP address, a particular CIDR block covering multiple IP addresses, or even another security group. This gives you the flexibility to have a multi-tier architecture for your database access. Amazon RDS encrypts your database using keys you manage in the AWS Key Management Service, also known as KMS. KMS is a managed service that provides you the ability to create and manage encryption keys, and then encrypt and decrypt your data using those keys. All of these keys are tied to your AWS account and are fully managed by you. KMS uses industry-standard AES-256 encryption to protect the data. On database instances running Amazon RDS encryption, data stored at rest is encrypted, as well as automated backups, read replicas, and snapshots.
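The security-group evaluation Ashok describes - a rule as a protocol plus a port range plus a source - can be sketched in a few lines. This is purely illustrative stdlib code, not how AWS implements security groups, and the rule shape is simplified (a single CIDR source, no security-group references):

```python
# Illustrative sketch (not AWS code): how a security-group inbound rule
# admits or rejects a connection based on protocol, port range, and source.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class InboundRule:
    protocol: str      # e.g. "tcp"
    from_port: int     # start of allowed port range
    to_port: int       # end of allowed port range
    source_cidr: str   # e.g. "10.0.1.0/24"

def allows(rule: InboundRule, protocol: str, port: int, source_ip: str) -> bool:
    """True only if the connection matches the rule's protocol, port range, and CIDR."""
    return (
        rule.protocol == protocol
        and rule.from_port <= port <= rule.to_port
        and ip_address(source_ip) in ip_network(rule.source_cidr)
    )

# A rule admitting PostgreSQL traffic (port 5432) from one application subnet.
pg_rule = InboundRule("tcp", 5432, 5432, "10.0.1.0/24")
print(allows(pg_rule, "tcp", 5432, "10.0.1.17"))    # in-subnet app server: True
print(allows(pg_rule, "tcp", 5432, "203.0.113.9"))  # internet address: False
```

Layering rules like this per source (an IP, a CIDR block, or another security group) is what gives the multi-tier access architecture described above.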
Ashok Mahajan: RDS also encrypts communication between your application and your DB instance using SSL and TLS, which provides a layer of security by encrypting data that moves between your clients and your DB instances. For managing access, RDS is integrated with AWS Identity and Access Management and provides you the ability to control the actions that your IAM users and groups can take on specific RDS resources, from a DB instance through snapshots, parameter groups, and option groups. One of the features that helps a lot is that you can also tag your RDS resources and control the actions that IAM users and groups can take on a group of resources together. For example, you can configure IAM policies to ensure developers are able to modify development databases, but only a DB administrator can make changes to production databases. Amazon RDS supports several ways to authenticate a database user: password authentication, Kerberos, and another concept, IAM database authentication. With password authentication, your database performs all the administration of user accounts. Whereas when it comes to IAM database authentication, you don't need to use a password when you connect to a DB instance. Instead, you basically use an authentication token, which is a unique string of characters that Amazon RDS generates on request and which has a lifespan of 15 minutes. You do not need to store credentials in the database because authentication is managed externally using IAM. Additionally, Amazon RDS supports external authentication of database users using Kerberos and Microsoft Active Directory. You can use Kerberos authentication to authenticate users when they connect to your database.
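The 15-minute IAM authentication token can be modeled with a small sketch. This is illustrative stdlib code, not the real SigV4 presigning that RDS performs (in practice you would call the AWS SDK's `generate_db_auth_token` helper); the point is that the credential is derived on request, is never stored in the database, and expires on its own:

```python
# Illustrative model (not AWS's actual token format): a short-lived,
# signature-based database auth token with a 15-minute lifespan.
import hashlib, hmac, time

TOKEN_TTL_SECONDS = 15 * 60  # RDS IAM auth tokens live for 15 minutes

def issue_token(secret: bytes, db_user: str, issued_at: float) -> str:
    """Derive a token from a signing secret; nothing is stored in the database."""
    msg = f"{db_user}:{int(issued_at)}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{db_user}:{int(issued_at)}:{sig}"

def token_is_valid(secret: bytes, token: str, now: float) -> bool:
    """Check the signature, then the 15-minute validity window."""
    db_user, issued_at, sig = token.rsplit(":", 2)
    expected = hmac.new(secret, f"{db_user}:{issued_at}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return (now - int(issued_at)) < TOKEN_TTL_SECONDS

secret = b"demo-signing-key"   # stands in for IAM's signing material
t0 = time.time()
token = issue_token(secret, "app_reader", t0)
print(token_is_valid(secret, token, t0 + 60))       # one minute later: True
print(token_is_valid(secret, token, t0 + 16 * 60))  # after 15 minutes: False
```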
Ashok Mahajan: In case you still want to use passwords to connect, Amazon RDS integrates with Secrets Manager to manage your master user password for DB instances and even multi-AZ DB clusters. AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. It lets you replace hard-coded credentials in your code, including database passwords, with API calls to Secrets Manager to retrieve the credentials. You can encrypt secrets at rest to reduce the likelihood of an unauthorized user viewing sensitive information. To retrieve secrets, you simply replace plain-text secrets in your application code with an API call to the Secrets Manager API. You can then also use IAM policies to control who can access these secrets. And the best part is you can rotate passwords on a scheduled or on-demand basis for supported databases without risking any impact on the application. I think we can all agree that monitoring is an important part of maintaining the reliability, availability, and performance of any application. And Amazon RDS is no different. In fact, it provides you several tools for monitoring your resources and responding to potential incidents. With Amazon CloudWatch, you get a collection of around 15 to 18 metrics automatically. You can view these metrics, like CPU utilization, memory, storage, and latency, in the RDS Console, in the CloudWatch Console, or even using the CloudWatch API. And you can pull them into any monitoring tool of your choice.
Ashok Mahajan: On top of that, RDS also gives you enhanced monitoring with access to over 50 additional metrics when enabled. And you can define the granularity at which you collect these metrics, from 60-second intervals all the way down to one-second intervals. Using Amazon CloudWatch alarms, you can also have metrics send you notifications via an Amazon SNS topic when a threshold you define is crossed. AWS CloudTrail monitors and records every activity that happens in your AWS infrastructure, giving you control over storage, analysis, and remediation actions. CloudTrail provides a record of actions taken by a user, role, or even an AWS service in Amazon RDS. It captures all the API calls for Amazon RDS as events, including calls from the Console or from your code using the RDS API. Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. RDS Protection in Amazon GuardDuty analyzes and profiles RDS login activity for potential threats. When RDS Protection detects a potentially suspicious or anomalous login attempt that indicates a threat to your database, it generates a new finding with details about the potentially compromised database and sends it to you.
Ashok Mahajan: Along with these native mechanisms and services that I just described, we have an advanced network of partners like Teleport. Teleport and AWS together empower an organization to easily control who can provision and access your critical AWS resources in order to improve security and compliance for your infrastructure. Now let's hear from Dan about how Teleport manages the connectivity, authentication, authorization, and audit needs you have as you scale your AWS infrastructure, while reducing standing access, increasing developer productivity, and saving time.
Dan Kirkwood: Great. Thank you very much, Ashok. Appreciate you taking us through that. And I would say talking to customers and people that are looking to use Teleport, RDS is up there in terms of one of the most popular services that we see in use on AWS. So what I wanted to do next is take a few minutes to talk about what Teleport is and just level set to understand where Teleport fits in your cloud architecture. And the first thing that I want to look at is just to understand that today we are really focusing on one technology. But usually, when teams are thinking about secure access, your database makes up one part of a much larger stack. And the challenge that we see for teams that are heavily using their cloud environments just like AWS is that they end up with a mix of different access technologies that can be dependent on the service that they're using. It might be different depending on which region they're accessing. It might be different for dev, different for prod, different for accessing a data center or on-prem, different for accessing a different cloud. And the problem here is that these teams with many different solutions end up with a bit of a mess when it comes to understanding who has access to what. And there is a follow-on effect there where it then becomes very difficult to follow best security practices, such as rotating credentials. It becomes very difficult to audit to understand, "Okay, if I have an engineer in the dev team, which RDS instances do they have access to at any one point in time, let alone other technologies that make up the application stack?"
Dan Kirkwood: So what we do with Teleport is we standardize that access layer. This means that for servers such as those running on EC2, Kubernetes clusters that might be on EKS, databases including RDS and some of the other database types that Ashok pointed out earlier in the presentation, internal web applications, and Windows Servers, Teleport can facilitate secure access to those services. And it does this for two different types of client. We'll focus mostly today on users. And this might be engineers. In the context of databases, this might be analysts. This could be platform engineers, security engineers. But we do this for services as well. And the benefit of standardizing this is you then have a single place to apply good security controls. Something like role-based access control can be done in a single place. Putting in a control like single sign-on or multi-factor authentication becomes much simpler once this is standardized. And of course, you've got one place for good audit logging to understand who is doing what across your infrastructure estate. Now, we are not the only solution in this space, and we're not the only tool that achieves this kind of access standardization for infrastructure. But there are two key reasons why I see people choose Teleport over other solutions. And they're on the screen here. First is the idea of being secretless.
Dan Kirkwood: I speak to a lot of teams that want to, as much as possible, move away from any kind of credential handling by their end users. They, of course, do not want developers to be hard-coding credentials anywhere. They don't want end users like developers even having to think about credentials at all. They don't want to have to use a cumbersome privileged access management system, which they might have in some of their legacy environments but isn't necessarily suited to a very dynamic environment like the AWS cloud, right? The kind of application architectures, which Ashok was mentioning at the start of this webinar, aren't necessarily suited to a legacy privileged access management solution. These are teams that maybe don't want to use something like a Key Vault. Again, this idea of having long-lived credentials, even if they are centralized in a kind of hardened solution like a Key Vault, still represent some risk around credential handling. And of course, most importantly, these are teams that even though they want a security uplift, they're very sensitive to the idea of user experience. They don't want productivity loss for their engineers. They know that any security solution that gets in the way introduces the risk of developers trying to work around that solution. Okay. So this idea of secretless and that security uplift is something that we offer with Teleport.
Dan Kirkwood: On the other side, we have Zero Trust. Now this is a very broad term. Lots of teams use Teleport to achieve Zero Trust because it is so focused on identity. With Teleport, there is no concept of having IP-based rules focusing on subnets for deciding who does get access or not. Everything must be identity-based by default. It's kind of a very opinionated stance that we take. With Teleport, you can decide which elements of the solution are internet-facing, user-facing, public-facing. But then you can also have very secure access for any resources that you want to keep private, for example, your RDS instances. And for a lot of teams, this gives them the opportunity to remove a VPN architecture. And we see teams that have maybe started with a quite nice VPN architecture for their cloud environment, but as they've scaled in their AWS usage and they've grown the amount of applications, the amount of VPCs, the number of regions that they're using, their VPN architecture becomes too complex to manage securely. So for them, using Teleport is an opportunity to move away from that architecture. And there is a huge number of integrations that are possible with Teleport. And of course, we partner heavily with AWS. It's the environment that we see most when we talk to customers. It's the place where we focus most of our engineering effort. It is the cloud where we have the richest level of integrations. But we try and cover any part of the infrastructure stack which might be supporting your applications.
Dan Kirkwood: So to get a little bit deeper into the topic for today, when we talk about RDS access through Teleport, what does that mean? And I'm going to show you what the end-user experience is for this. But first, I just want to draw out a very simple architecture for how this works with Teleport. On the right-hand side, we have a VPC, which has some RDS instances. Now, this could be plain Amazon RDS. This could be Aurora. This could be RDS proxy instances. We work with all of them. We also work with all of the database engines which Ashok mentioned earlier. I'll put an asterisk on that: RDS for Oracle today is not supported, but it is on our roadmap. So anywhere that you have your RDS instances, you can also place a Teleport gateway service, which is sitting on an EC2 instance within this VPC. This is a lightweight single binary. You can run this on EC2. You could run this as well via a Helm install in EKS. And this gateway service will be configured to discover RDS instances, okay? And you can give it some parameters which are listed here as to which regions you would like it to discover in. You can give it tags that it should look for when auto-discovering RDS instances. The other thing is that this gateway service will be configured to reach out to a Teleport control plane, which is in the middle left of the screen here.
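The discovery parameters Dan mentions - which regions to scan and which tags to match - live in the gateway's configuration file. As a rough sketch (field names follow Teleport's documented `db_service` AWS matchers; values here are examples, so check the current Teleport docs before relying on them):

```yaml
# teleport.yaml on the EC2 gateway instance (illustrative sketch)
db_service:
  enabled: "yes"
  aws:
    - types: ["rds"]              # discover RDS/Aurora instances
      regions: ["ap-southeast-2"] # example region to scan
      tags:
        "env": "dev"              # only register instances carrying this tag
```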
Dan Kirkwood: And in this example, I have this hosted on Teleport cloud, which is our SaaS service. This is where we host and maintain a Teleport control plane for you. And it's definitely a very popular option for people that are using cloud heavily. You can host Teleport completely yourself. So you could host and manage this control plane. But in this example, it's sitting on Teleport cloud. So you will have a tenant on Teleport cloud. Your gateway service will be configured to reach out to that tenant. And once it does that, it will start to populate a service catalog of all the discovered RDS instances. On the very left-hand side, we have our end users. End users can also reach out to this Teleport tenant. There's nothing that they can do against that tenant unless they authenticate. And for us, the ideal form of authentication will be some kind of identity provider. And we integrate with anything that speaks SAML or OIDC. Once end users have authenticated, they will be able to use Teleport to reach out to these RDS instances using whatever tooling makes sense for them in their environment. And this is important when we think about that idea of end-user experience and no productivity loss. This is not a kind of hosted bastion solution where we are forcing end users to jump onto a hardened box or environment where they then talk to the instance. This is facilitated connectivity from the end user's workstation.
Dan Kirkwood: And what sits in between is our RBAC model. And this is a really nice RBAC model because it can move you very close to this idea of attribute-based access control where I can match tags from my discovered RDS instances and compare them to metadata that I get from my identity provider to decide who should have access to what. So what I want to do next is show you this in action. I'm going to be using our desktop application called Teleport Connect. It's not the only way to connect to Teleport. We also distribute a command-line tool called tsh. We also have browser-based access. And all three of these work across Linux, Windows, and Mac. Okay. But our desktop application is pretty handy for database access. And you'll see why in a second. Now, as I mentioned, I can't do anything against Teleport unless I authenticate. So I'm first going to log in via Okta. Okay. I've signed into my Okta account already. So I'm logged straight into Teleport. And I can see once I log in that I have access to a single RDS instance. And there is RBAC at play here, okay, because there are more resources in my account. But I have RBAC that says anyone who is a developer - and I'm logged in as Arwen today - gets access to dev instances only.
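The tag-to-metadata matching described here can be sketched as a simple label check. This is a simplified model only; real Teleport roles are YAML documents with `db_labels`, wildcards, and templating from identity-provider traits:

```python
# Minimal sketch of label-based RBAC, loosely modeled on Teleport's
# db_labels matching. Not Teleport's actual implementation.
def role_allows(allow_labels: dict[str, list[str]], resource_labels: dict[str, str]) -> bool:
    """Every label key required by the role must match; '*' acts as a wildcard value."""
    for key, allowed_values in allow_labels.items():
        value = resource_labels.get(key)
        if value is None:
            return False
        if "*" not in allowed_values and value not in allowed_values:
            return False
    return True

# A "developer" role that only reaches dev-tagged databases, mirroring the demo.
developer_role = {"env": ["dev"]}
print(role_allows(developer_role, {"env": "dev", "engine": "postgres"}))   # True
print(role_allows(developer_role, {"env": "prod", "engine": "postgres"}))  # False
```

Because the labels come from discovered AWS tags and the allow-list can be driven by identity-provider metadata, this is what moves the model toward attribute-based access control.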
Dan Kirkwood: And it goes further than that to say that I should only have read access to those instances. Okay. So I have my service catalog here. I can connect to this database. And you'll notice here that the user that I get access to is related to my single sign-on user. This is another nice quality-of-life thing with Teleport in that I don't need to necessarily pre-provision users in my target databases. Now we support this for Postgres today. We're about to add MySQL for dynamic user provisioning. We can work with pre-provisioned users as well. But this makes the whole operational overhead of managing the database for access a lot easier. So I can connect with my user. I get to choose a port here, and we'll see why. I can choose a database that I want to connect with. And I then have two options. So I can use my local CLI tool, which for me is psql, to connect to this database. I could also use what I see most developers or analysts using, which is some kind of GUI tool. And we absolutely facilitate that kind of connectivity as well. And just to show you how easy that is, I've got DBeaver here. I'm going to set up a new connection for Postgres. All I have to do is point DBeaver at localhost on the port that I chose earlier. I'll choose the right database and the right user.
Dan Kirkwood: And this connection should work, right? So my connection test is successful. And I'm now able to jump into this database as though I was adjacent to it, right? And this is the idea of Teleport. And I should be able to get to some data pretty easily here, right? So I've got this vendor's table which I can read from. Now, I went through that pretty quickly. One thing that you might have noticed is that there weren't really any credentials that Arwen was handling here. Whether she's connecting over this CLI tool or via the GUI tool, there's no handling of any kind of passwords or credentials. And how we handle this with Teleport is through two mechanisms. And I like to think about this as a domain of trust. There is a domain of trust that is set up between Arwen and Teleport, which is all based on certificates. Every request that I make to the Teleport control plane is authenticated via a certificate. This certificate is time-bound, right? So I've authenticated via single sign-on. I then have a certificate that only lasts for eight hours, in my case. Between Teleport and RDS, we then use another domain of trust, which is IAM, right? And hopefully, everyone here who's familiar with AWS is aware of the power of IAM. It's a great construct to use for trusted access. And that's how Teleport is able to reach out into the RDS instance and facilitate this connectivity. But the nice thing is that end users don't think about any credentials beyond that initial single sign-on login.
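The time-bound certificate idea is easy to sketch: once the control plane only honors certificates inside their validity window, expiry needs no extra machinery. This is illustrative code only; Teleport's actual credentials are X.509/SSH certificates with the TTL embedded in the certificate itself:

```python
# Illustrative model of time-bound, certificate-based access: after SSO
# login the user holds a certificate with a fixed lifetime (eight hours
# in the demo), and anything outside that window is refused.
from datetime import datetime, timedelta, timezone

CERT_TTL = timedelta(hours=8)  # session length described in the demo

def cert_valid(issued_at: datetime, now: datetime) -> bool:
    """Valid only inside the issuance window; expiry needs no revocation list."""
    return issued_at <= now < issued_at + CERT_TTL

login = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
print(cert_valid(login, login + timedelta(hours=4)))  # mid-session: True
print(cert_valid(login, login + timedelta(hours=9)))  # expired: False
```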
Dan Kirkwood: Okay. So this is nice. I've got access. I want to show you the RBAC at play here, right? Because this is read-only access. Let's say I try to insert some data. We're going to add a new vendor here, which is AWS. I'm going to try to put this into the table, which I have read access to. Okay. And that is going to be denied. Now you might be thinking, okay, "This is nice. I got connectivity pretty quickly. There's no credential involved." But if this RBAC system is too inflexible, we might run into that problem where developers try to get around it, right? What if we build an RBAC system but someone does need access to prod suddenly, or someone does need write access in the event of some kind of outage or troubleshooting? How do you facilitate that? And the nice thing is that Teleport is built for that situation as well. So I'll close out this connection. We're going to take a look at access requests. With access requests, I can offer out the opportunity for someone to connect to something that they don't normally have access to. Okay. So here is the dev database that we connected to earlier. However, I've also got a prod database, [Ariador?], which I can't connect to by default. There is no way for Arwen to get access to this with her regular level of access, which is decided by the metadata that we get from Okta. However, she can request access to this database.
Dan Kirkwood: Okay. So she's going to request read access to the prod database. She needs to give a reason. Maybe there's a ticket number. And once Arwen hits submit on this request, it triggers a workflow. There is a workflow in Teleport around notifying the right people that this request exists. And this is also one of our integration points. So you might have an existing tool that you use for ticketing. Something like Jira. You might have collaboration tooling like Slack or Microsoft Teams that you use for incidents. You might have something like PagerDuty. Teleport will integrate with all of these tools in the event of an access request if you need to tie into some existing approval mechanism. I'm going to show you the view of another user within Teleport. This is someone on the security team. This is Gandalf. He has access to be able to approve requests. So he can see this request that's come in from Arwen. He can see that she's requesting read access to prod for a specific database. And he can choose whether to approve this or not. So he's going to approve that request. If we go back to Arwen's view, she can now see that that request was approved and any message that was included as well, and she can straight away assume that role.
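The request-and-approve flow shown here can be sketched as a tiny state machine. Names and fields are hypothetical, not Teleport's actual API; the point is that extra roles exist only behind an approved, reviewed request:

```python
# Hypothetical sketch of a just-in-time access request flow: a requester
# asks for a role with a reason, a reviewer approves or denies it, and
# only an approved request grants the extra role.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester: str
    role: str
    reason: str
    state: str = "pending"   # pending -> approved / denied
    reviewer: str = ""

def review(req: AccessRequest, reviewer: str, approve: bool) -> None:
    """Record the reviewer's decision on a pending request."""
    req.state = "approved" if approve else "denied"
    req.reviewer = reviewer

def assumable_roles(user: str, base_roles: list[str], requests: list[AccessRequest]) -> list[str]:
    """The user's standing roles plus any roles granted via approved requests."""
    granted = [r.role for r in requests if r.requester == user and r.state == "approved"]
    return base_roles + granted

req = AccessRequest("arwen", "prod-read", "Ticket #1234: read access for incident triage")
review(req, "gandalf", approve=True)
print(assumable_roles("arwen", ["dev-read"], [req]))  # ['dev-read', 'prod-read']
```

In the real product, the notification step would fan out to whatever integration is configured (Slack, Jira, PagerDuty, and so on) before the reviewer decides.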
Dan Kirkwood: My view of the service catalog has now changed. I can now access prod, whereas I had no access before. This kind of flexibility or dynamic access is something that a lot of our customers really love. It's something that's quite difficult to achieve with the traditional constructs of single sign-on and IAM. So this kind of dynamism might be something that you're looking for when you think about how you do differentiated RDS access with a tool like Teleport. And it means I can connect. We use the same port. And I'm in. Now, someone on the security team like Gandalf also gets a few other tools for visibility and control using Teleport. I want to show you a couple of these. And the first thing I want to look at is this idea of control in a breach scenario. Teleport has the concept of locks. And locks mean I can choose a user or a resource which I suspect may be compromised. Maybe it's just a resource that I want to take down for maintenance. But for whatever reason, I want to cut off connectivity to or from a specific thing at a specific time. Coming back to that graphic that I showed earlier, where Teleport is kind of centralizing this connectivity, because you've got a single control point, this becomes very, very easy to do. So what I'm going to do is I'm actually going to add a lock onto Arwen.
Dan Kirkwood: Let's say I suspect some kind of compromise. Maybe I've got an alert from the service that I was showing earlier around suspicious activity. So I'll give a reason again. I can set a TTL. I'm just going to put five minutes on here. Keep it very short. And Arwen is now locked out. So if we come back to Arwen's view, if she tries to do a read operation here on this vendor's table, it's going to fail. And actually, if she tries to connect to anything else, that's going to fail as well because everything between Arwen and the control plane is locked out. This comes back to the power of something like a certificate, right? A certificate is a really useful artifact because it's very easy to identify a certificate and revoke access from that specific certificate. And the final thing that I'll show you here. I mentioned visibility earlier. Teleport has a great audit log. And some teams use Teleport just so they can centralize all of their access-based logging in one place. So if we work backwards here, we can see when Arwen tried to connect to a database even though she was locked out. We can see the access request that was created by Arwen. We can see that it was reviewed by Gandalf. We can see detail about that approval. For example, the reason why it was approved, which might be really handy when you're looking back through an incident.
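The lock Dan applies can be sketched the same way: a centralized control plane checks every request against a set of time-bound locks, so a single entry cuts off everything for that identity until the TTL expires. This is an illustrative model, not Teleport's actual data format:

```python
# Illustrative sketch of time-bound locks: while a lock on a user is live,
# every request from that user is refused; when the TTL lapses, access
# resumes without any cleanup step.
import time

locks: dict[str, float] = {}  # user -> lock expiry (epoch seconds)

def add_lock(user: str, ttl_seconds: float, now: float) -> None:
    """Lock a user out until now + ttl_seconds."""
    locks[user] = now + ttl_seconds

def is_locked(user: str, now: float) -> bool:
    """True while an unexpired lock exists for the user."""
    expiry = locks.get(user)
    return expiry is not None and now < expiry

t0 = time.time()
add_lock("arwen", ttl_seconds=5 * 60, now=t0)  # five-minute lock, as in the demo
print(is_locked("arwen", t0 + 60))      # one minute in: True (still locked)
print(is_locked("arwen", t0 + 6 * 60))  # after the TTL: False (access restored)
```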
Dan Kirkwood: And we can of course see all of the relevant information about the connection between Arwen and the RDS instance. So here is the log where Arwen tried to insert something into the database. We can see all of the logs from the operations that DBeaver was doing to display the information through the GUI. And everything is always linked back to Arwen as a single sign-on user. And correlation is another thing that we see teams struggle with when logging across different parts of their application stack. Here I can see a query against the database. Here I can see an API request to Kubernetes. Tying that back to a real user can be very, very difficult. It's something that we include with Teleport out of the box. Okay. So that's it for the demo. I've got a couple more slides that we'll go through before we finish up. First, just to touch on why this is better together. Why would you use Teleport with a service like RDS? And I summarize this into four buckets. I won't go through all of these out of respect for time. But the most important ones that I would pick out here are, for connectivity, keeping your RDS endpoints private, right? How do you make that security group construct that I showed earlier? How do you make that as simple as possible, as foolproof as possible?
Dan Kirkwood: Teleport makes it very easy to do that. For authentication, you can link a user from your identity provider of choice through to a set of database permissions very, very easily. End users don't need to be aware of authenticating first via the CLI to AWS to get a set of IAM permissions and then authenticate again through to their database. Everything is handled fairly seamlessly. And because it's centralized, it means that I don't just have access to one RDS instance like I showed today. I might have multiple regions. I might have very isolated network segments. I might have other MySQL instances which exist on-prem. All of those can be standardized behind a single layer of authentication. Authorization. This idea of eliminating standing privileges is something that I'm seeing a lot of cloud teams move towards. They've had a history of allowing some kind of base access for everyone. And they want to try and restrict that as much as possible and move to just-in-time access. And finally, under audit, this idea of having queries linked back to single sign-on users, which we just saw within the Teleport audit log. If you're interested in what you just saw and you're thinking, "Hey, I would love to just test this out in my environment. I've got this particular quirk about how I do AWS. I want to make sure this works," we make it very easy for you to do that because Teleport is open source.
Dan Kirkwood: You can head to our GitHub repo. You can head to our documentation, which I'll show you on the next slide, and get started. A lot of what I just showed you is available in our open-source tooling. And the open-source tooling will take you very, very far. If you're really interested in starting, the repository is a great place to start. Here are the next places that you should go. So we have a Slack community where myself, a lot of my colleagues, a lot of our software engineers are very active in interacting with the community and answering questions when people are testing out integrations like RDS. We've also got our listings on AWS Marketplace to help you get started there as well. We've got a free trial for our cloud products, Teleport Teams, which will give you two weeks to test out our hosted control plane. And of course, we've got our documentation around working with RDS and all of the other AWS services that we support. So with that, I want to thank you for sticking with us through this webinar. And we do have a little bit of time for Q&A. I'm going to reduce this screen because I can't see everything. If you have any questions for Ashok or myself, please pop them into the chat. Okay. So I can see there's some questions there already. We'll go from top to bottom. Or have these been answered already?
Dan Kirkwood: Sorry, Katarina. I can't see if there's an answer. No, I don't think so. Okay. So the first question is, is it possible to enable MFA for login? Yes. So you could enable MFA at the single sign-on layer if you like. You can also enable MFA at the Teleport layer. You can also enable MFA per session. So you might choose to trust single sign-on and only make someone do single sign-on every eight hours. But you might want proof of presence anytime someone opens up a database connection. So you can have an MFA tap. I've got a YubiKey here that I use. You can link this to something like Touch ID or Windows Hello. You can have a token that you use. But those are the three places where you might choose to do multi-factor authentication. There's another question. Can access requests be timed, give access for 24 hours only? Absolutely. So the access request that I created there, I don't know if you saw it in the application, but it actually had a one-hour TTL. So as soon as I access that, I've got a counter, which will go down from an hour to zero. As soon as it hits zero, that access request disappears. And that value is configurable. So you can make it as short or as long as you like. Okay. There's another question. How does the instance with the Teleport binary access RDS? Is it via roles? Absolutely. So the EC2 instance that I showed, or if you're running this on EKS, needs to have its own IAM permissions to be able to list RDS instances. It needs an RDS connect permission via IAM as well.
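As a rough sketch of how the per-session MFA and just-in-time request behavior mentioned above are expressed in Teleport, both live in a role definition. The role and label names below are illustrative, not taken from the demo environment:

```yaml
kind: role
version: v5
metadata:
  name: db-requester
spec:
  options:
    # Require a fresh MFA check (e.g. a YubiKey tap) every time
    # a new session is started, on top of single sign-on.
    require_session_mfa: true
  allow:
    # Roles this user may request just-in-time; the granted
    # access expires when the request's TTL runs out.
    request:
      roles: ["prod-db-access"]
```

The request TTL itself is set when the access request is created or approved, which is why the counter shown in the demo could be configured to an hour, 24 hours, or anything else.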
Dan Kirkwood: Okay. And then there's a little bit of bootstrapping that you need to do for that binary to be able to create and manage users if you want to do that. And again, that's in Postgres today. MySQL coming very, very soon. Or those users can be preexisting on the RDS instance, and they just need to have IAM auth enabled against those users if you want to use users that are existing. So Partha asks, do we need to add external IAM roles into AWS IAM to allow? If I understand this correctly, I guess you're saying because Teleport runs in our SaaS service, are you giving some permissions to our SaaS service to reach into your AWS environment? If I've understood that question correctly, the answer is no because you will have that Teleport gateway service which sits in your AWS environment. That is an element of the solution which is fully managed by you. So you can just give that service the appropriate IAM permissions, and then it will facilitate connectivity for you. There is also an OIDC join that you can do between your Teleport instance and your AWS account to facilitate things like connectivity if you want to do it at a more kind of umbrella level rather than service by service. And Tristan asks, will a recording be available? I believe, yes, we record all of these. I'm not certain on that. But yeah, I believe we record everything here. So we will have a recording available and out for you.
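To illustrate the IAM side of that answer, the EC2 instance or EKS pod running the Teleport database service typically needs a policy along these lines. The account ID, region, and DB resource ID below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["rds:DescribeDBInstances", "rds:DescribeDBClusters"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL0123456789/teleport"
    }
  ]
}
```

For preexisting Postgres users, IAM authentication is enabled on the database side with `GRANT rds_iam TO <user>;`, which matches the "IAM auth enabled against those users" point above.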
Dan Kirkwood: Okay, Partha asks, if there is an attack on the network layer, can Teleport help? Not in a kind of responsive manner, right? But once Teleport is implemented, can VPN be taken off completely? Yes. So the way that we do this, and it comes back to that idea of Zero Trust that I spoke about earlier, if Teleport is in place, the VPC where I have my RDS instances, suddenly the security group rules that I have on that VPC or those database subnets become very, very restrictive, right? I can turn off all ingress into those subnets. I have no ingress at all, right? And just by doing that, you're already reducing your attack surface and really denying any of these network layer attacks from even being able to start because I don't allow any ingress. Once I've got that Teleport database service in the subnet, I can allow communication just from the Teleport database service to the RDS instance, okay, in the same subnet. And then I just allow communication out from the Teleport database service back to the Teleport control plane. So I'm allowing very specific egress over a single port between the service and the control plane, but I'm allowing no ingress at all. So a lot of teams like this posture because it means that you're reducing the possible vectors for some kind of network layer attack. You're really drastically reducing those compared to other solutions. Absolutely right.
Dan Kirkwood: So you've still got to think about what are the trusted elements of your application stack. Those need to be allowed. I'm not suggesting that things are locked away to the extent where you break connectivity for things that are important. Now, Teleport can facilitate connectivity for services as well. So what we typically see our customers doing is breaking up the programmatic connectivity to databases. Anything that needs low-latency, very critical application communication, they allow that directly. And there are some really good AWS constructs that allow that connectivity to happen. For things like ad hoc queries, CI/CD pipelines, scripts, things that aren't very sensitive to latency, they might put that through Teleport. And the idea there is, again, that you're simplifying your posture. You're not thinking about, "Okay, what are the security rules that I need to allow for users? What about admin users? What about applications? What about CI/CD?" You're saying, "Okay, most things go through Teleport. I'm going to really lock that down. Apps that are really sensitive to latency, we're going to have this set of very defined security group rules for that connectivity."
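The security group posture described above can be sketched with the AWS CLI. The security group IDs, port, and CIDR below are placeholders for illustration only:

```shell
# Illustrative only: security group IDs are placeholders.
# Allow the Teleport database service's security group to reach
# the RDS instance's security group on the Postgres port.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0rds0000000000000 \
  --protocol tcp --port 5432 \
  --source-group sg-0teleport00000000

# Remove the broad rule that previously allowed ingress from
# the whole VPC CIDR (example rule).
aws ec2 revoke-security-group-ingress \
  --group-id sg-0rds0000000000000 \
  --protocol tcp --port 5432 \
  --cidr 10.0.0.0/16
```

The Teleport database service itself needs no ingress rules at all, since it dials out to the control plane; only its egress to the control plane and to the RDS security group must be permitted.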
Dan Kirkwood: So with that, I want to say thank you to everyone for joining us today. Really appreciate your time. Thank you, Ashok, for spending time and running through RDS with us.
Ashok Mahajan: Thanks, Dan, for having me here.
Dan Kirkwood: Really love partnering with the AWS guys. Such big customers. Big use cases. Great services. So it's always a lot of fun. And if there are any more questions, we might close it up. Thanks very much, everyone.
Ashok Mahajan: Thank you.