
Multi-layered Trust with Yash Kosaraju

For this 18th episode of Access Control Podcast, a podcast providing practical security advice for startups, Developer Relations Manager at Teleport Ben Arent chats with Yash Kosaraju. Yash is Chief Security Officer at Sendbird, whose mission is to build connections in a digital world by providing APIs and services for chat, with tools to integrate them into apps. This episode dives into how teams can build multi-layered security systems that go beyond zero trust, letting teams do their work while still providing checks and balances.

Key topics on Access Control Podcast: Episode 18 - Multi-Layered Trust

  • Sendbird provides APIs and services for chat that products integrate into their applications.
  • As Sendbird is B2B, B2B2B, and B2B2C, its customers use Sendbird to build chat applications which their own customers use, resulting in a lot of data that enters Sendbird's system and that needs to be secured.
  • Compliance and security go hand in hand. You determine how compliance requirements fit your business, and use them as a baseline to improve your company's security posture.
  • Two guidelines for access control are a multi-layered design (where more than one thing should go wrong for something bad to happen within the company) and keeping things as simple as possible.
  • A sound access control philosophy ensures that people in the company have access to what they need to do their jobs.
  • Security should always be a balance between usability and protection.


Transcript

Ben: 00:00:00.000 Welcome to Access Control, a podcast providing practical security advice for startups. Advice from people who've been there. Each episode, we'll interview leaders in their field and learn best practices and practical tips for securing your org. For this episode, I'll be talking to Yash Kosaraju. Yash is the Chief Security Officer at Sendbird. Sendbird's mission is to build connections in the digital world, providing APIs and services for chat that products integrate into applications. Today, we'll deep-dive into how teams can build multi-layered security systems to go beyond zero trust, letting teams do their work while still providing checks and balances. Welcome. Thanks for joining us today. To kick things off, can you tell me a little bit more about your background?

Guest background

Yash: 00:00:40.246 Thanks, Ben. I'm excited to be here. Most security practitioners start off with a non-traditional background; I'm sort of the opposite. I have a traditional background. I got my master's in cybersecurity from Johns Hopkins. From there, I did some consulting with iSEC Partners. From that, I moved on to Box. I was there for about four years doing AppSec and a bit of everything else. From Box, I moved to Twilio, built product security teams there, and did a whole bunch of interesting stuff for four and a half years. From there, I moved on to Sendbird, where I took on the CSO role.

Sendbird overview

Ben: 00:01:16.102 And then for people who aren't familiar, can you just give a little bit of an overview of what Sendbird is?

Yash: 00:01:21.095 Sendbird provides a conversations platform for different applications. We allow any app, website, or platform to quickly and very easily embed rich real-time chat, voice, or video experiences to create connections with users and/or between users.

Ben: 00:01:40.506 So if you are a startup and you're looking to add some sort of real-time communication, Sendbird is sort of a place to go to get the APIs to connect and make that app magic happen.

Yash: 00:01:50.537 Yes. So we have rich APIs and SDKs. If you're looking for one-on-one communication, support chat, anything that you want to quickly embed into your application, Sendbird would help you do that.

Role at Sendbird

Ben: 00:02:01.978 And then what is your role there at Sendbird?

Yash: 00:02:04.301 I oversee security, compliance, IT, and some aspects of physical security as well.

The first 30/60/90 days as CSO at Sendbird

Ben: 00:02:11.283 I think you said you came in about a year ago as one of the first security people at Sendbird. Can you tell me a bit about what you did within your first 30/60/90 days as Chief Security Officer?

Yash: 00:02:25.185 I was the first leader in the space that they hired externally. The first 30 days were purely listening. I was talking to existing people that were doing IT and compliance, talking to other heads of departments, the CTO, engineering, infrastructure, and just listening to how things were done, what the pain points were, what we had in place, what their expectations were from a security team, and what the gaps were. After a lot of listening, you kind of get an idea of what needs to be done and what the company is expecting from that role. After that, it was putting a plan in place, putting together a strategy and a vision for the security team, along with getting started on the resourcing aspect. That was moving a few folks from other teams into a consolidated security organization, identifying gaps within the capabilities we had, and then starting to hire to fill those. As that progressed between 60 and 90 days, you start tackling some of the projects from your project planning. I usually pick a few that are low effort and high visibility and high impact, because it shows that you can move the needle, plus it earns you some street cred, which you can later use to build and do more things as you go on.

Security playbook and challenges

Ben: 00:03:48.928 And so you mentioned at Sendbird, you oversee security, IT, and compliance. Is there any sort of specific playbook that you go for each one of those sort of key areas?

Yash: 00:03:59.608 They didn't have a playbook. Each company is different. Their needs are different; it's very individualistic in that sense. You see what we do today and what we need to do to get the company to where we as a company need to be in the next couple of years.

Ben: 00:04:14.526 And are there any specific compliance regimes that you currently have to follow?

Yash: 00:04:19.431 Yeah. So we are SOC 2, HIPAA, and ISO compliant.

Ben: 00:04:22.749 And so as you were coming in and building this dedicated security team, can you tell me about some of the challenges you faced and how you overcame them?

Yash: 00:04:31.351 Sure. So you have the traditional challenges when you build a new program, right? The security team essentially needs other teams to do things that they weren't used to doing, like a vulnerability management program where they have to fix bugs, which takes time away from the engineering work that they love doing. So I had the typical challenges other practitioners have. However, some challenges were unique to the role, because Sendbird has a big presence in the US and in South Korea. So on top of the general challenges, I had a lot of cultural, language, and time difference barriers to work through, because things are done very differently in South Korea than how they're done in the US. That was a very big learning curve for me: learning how they do things, getting onto that level, building relations, and building the security program on top of those things.

Ben: 00:05:29.604 And how big is their presence in South Korea?

Yash: 00:05:32.295 So interestingly, the company actually started in South Korea. And as it grew, it moved over to the US and got incorporated here. A significant part of the engineering teams are still based out of South Korea. I'd say about a 65/35 split, 65 being in South Korea.

Ben: 00:05:49.814 Super interesting. Does that mean that Sendbird has more of an Asian presence, or are most of your customers in the US?

Yash: 00:06:00.275 So it's a pretty global presence. We have customers from all over, and even the employees are distributed. We have the US, South Korea, some in Singapore, some in India, and in the EU as well. That's not what I was expecting, but it's a pretty global company.

Ben: 00:06:18.211 And then just to get back to some of the challenges you faced, do you have any sort of examples of sort of communication between the different cultures that you've sort of had to overcome?

Yash: 00:06:26.129 So the language barrier's number one; not everybody there speaks English. And the interesting thing is they think less of themselves because they don't speak English. Time and again I have to remind them, saying, "Hey, your English is better than my Korean. My Korean is like zero. I probably know like three words of it." So the language problem is big. You have three layers, right? They're in a different culture, they speak a different language, and they're in a different geography. So you're not face-to-face, you don't speak the same language, and you think differently inherently. It's layer upon layer of complexity which adds to communication. So I usually travel back and forth at least once every quarter to go there, break bread with people, sit down, talk after office hours, open those communication channels in a much stronger way, and build relations that then enable my team to do what they need to do.

Risks that are specific to the messaging business

Ben: 00:07:19.589 Yeah. Yeah. Definitely makes sense. And so what are some of the risks that you see specific in the messaging business that you're in?

Yash: 00:07:26.161 We have a lot of data. We're B2B, but it's also B2B2B or B2B2C, right? Our customers use Sendbird to build chat applications which their customers use, and their customers could be enterprises or end users. So that's a lot of data that goes into our system, and securing that data is one of the interesting challenges we have. Then, being an API company, API security is very different from the traditional security challenges that you get. We do have a web presence, but it's more of, "Go and create an application. Get a token." After that, it's mostly using the SDKs and the APIs to build things up. Even if you take things from a bug-bounty perspective, the majority of bug-bounty hunters are more focused on web and those sorts of things. But when you say SDKs, where you have to reverse-engineer something, or APIs, and things like that, the pool of researchers is slightly smaller. So you have those challenges as well.

Ben: 00:08:22.637 Let's say I was to integrate a Sendbird SDK into my mobile phone app. What sort of best practices do you recommend to make sure that I don't leak a token that gives read access to all of my messages?

Yash: 00:08:34.936 You get the Sendbird token, which pretty much gives you access to everything you need to do with our APIs, including getting messages. One of the main things is not to embed the Sendbird token into your application for people to extract. Instead, give the user of the application specific permissions and specifically scoped tokens, so that they can do what they need to do as the end user of your application without having access to the Sendbird token that's powering your chat and every other user of the application's chat as well.

Ben: 00:09:09.694 Yes. That would be a server-side token that the mobile app authenticates with, and then there's a different token to figure out which person is talking to which user.

Yash: 00:09:17.433 Yes. Yeah.
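To make the pattern above concrete, here is a minimal server-side sketch of what Yash describes: the master API token never leaves your backend, and the mobile client only ever receives a short-lived token scoped to a single user. The endpoint path and response field here are illustrative assumptions rather than confirmed Sendbird API details; check the actual Platform API documentation before relying on them.

```python
# Hypothetical sketch: mint a short-lived, per-user session token on the
# server so the master API token never ships inside the client app.
import os
import time
import requests

SENDBIRD_API_TOKEN = os.environ["SENDBIRD_API_TOKEN"]  # stays server-side only
APP_ID = os.environ["SENDBIRD_APP_ID"]

def issue_session_token(user_id: str, ttl_seconds: int = 600) -> str:
    """Exchange the master token for an expiring token scoped to one user."""
    expires_at_ms = int((time.time() + ttl_seconds) * 1000)
    resp = requests.post(
        # Assumed endpoint shape, for illustration only.
        f"https://api-{APP_ID}.sendbird.com/v3/users/{user_id}/token",
        headers={"Api-Token": SENDBIRD_API_TOKEN},
        json={"expires_at": expires_at_ms},
    )
    resp.raise_for_status()
    return resp.json()["token"]  # assumed response field
```

The client then authenticates with this expiring token, so a leaked client build exposes at most one user's short-lived session rather than the key powering every customer's chat.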

Ben: 00:09:18.375 Have you seen any sort of interesting security vulnerabilities just with people poking around sort of API keys? I guess it's kind of interesting because I guess, a third party might implement Sendbird incorrectly and it may not necessarily be a Sendbird issue.

Yash: 00:09:31.461 I think that's a gray area, at least in my head. I feel we shouldn't remove ourselves from the responsibility by saying, "Hey, you implemented this." It's almost a shared responsibility model of sorts, where we work with our customers to make sure they know what the best practices are and what they should and should not be doing. So whenever we make changes, when we find something like, "Oh, this API is leaking or giving out more data than it should," and we make a change, we work with our solutions engineering to talk to our customers, see how they're implementing things, and work with them to change things on their end to make them more secure. I would say we work very closely with our customers to make sure they use Sendbird the right way, and we work together to keep their customer data safe as well.

Ben: 00:10:19.036 And so I guess an example in that case would be: if my server-side app that talks to Sendbird is accidentally public and I publish my env file, do you do anything as far as scanning GitHub for Sendbird tokens?

Yash: 00:10:32.033 GitHub does inherently scan for tokens. We don't do that ourselves today; that's something we will explore in the future. But if we find something that shows a customer is leaking Sendbird's sensitive or private information in any form, and we get that through any channel, we will work with the customer to remediate it.
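As a rough illustration of what such scanning could look like, here is a sketch that walks a repository checkout and flags token-shaped strings. The regex is a generic placeholder, not Sendbird's actual token format; in practice you would lean on GitHub's built-in secret scanning or a dedicated tool such as gitleaks.

```python
# Sketch: flag token-shaped strings in a checked-out repository.
import re
from pathlib import Path

TOKEN_PATTERN = re.compile(r"\b[0-9a-f]{40}\b")  # placeholder token shape

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, line) for every suspicious match."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue  # skip directories and large binaries
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if TOKEN_PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in scan_repo("."):
        print(f"{path}:{lineno}: possible leaked token: {line[:80]}")
```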

The role of compliance as a security officer

Ben: 00:10:54.950 This is a good segue, because my next question is around leaking of customer data. There's a whole area of compliance, GDPR, the California data privacy act, where a leak of your customer data has huge implications. How do you think about what we were just talking about, not from a security standpoint, but from a compliance standpoint?

Yash: 00:11:13.529 I think compliance and security should go hand in hand. The way I've thought about compliance has changed and evolved over time. When I was very early in my career, I thought of compliance as, "Oh, it's this list of checkboxes that you need to go through to get a stamp that says, 'You're SOC 2 compliant,' or, 'You're ISO compliant,' so customers can trust you." Fast forward to now, I feel compliance and security should work hand in hand. You take those compliance requirements, see how they fit your business, and use them as a baseline to improve your company's security posture.

Ben: 00:11:49.167 Yeah. I think that's the process people go through from SOC 1 to SOC 2: you first rate yourself and follow standard practice, and then you get an external auditor. And even though you make many of the compliance checks yourself, the auditor will come in and ask, "Are you sure you're offboarding every person? Are you sure you're doing these checks and verifications?" So I think having those checks and balances is a very valuable thing to have. Also, we're talking today because Sendbird is a user of Teleport, and I know you've been a long-term open-source user. We don't always have Teleport users on the podcast, so it's great to have one. For those who don't know, Teleport is a tool for accessing infrastructure. Just before we go deep into Teleport, I wonder what your thoughts are on access control and accessing infrastructure in general?

Thoughts on access control

Yash: 00:12:37.896 There are two things that I go by. One — more than one thing should go wrong for something bad to happen within the company. The second one is to keep things as simple as possible. And both of these do apply to access control, as well. Multi-layered access control and making it very simple for users to use.

Ben: 00:12:59.029 Yeah. And so when you have access control systems, what's your philosophy? For example, I worked somewhere where, if anyone SSH'd into a box, a script would automatically decommission that machine. It was seen that once a human was on a machine, that VM was tainted and you needed to get rid of it. Do you have any specific philosophies about who can access which machines and what they can do?

Yash: 00:13:22.428 My philosophy is people in the company should get access to what they need to do their jobs, and it's the security team's job to work with the different teams to make sure that they're given access to what they need in the most secure way possible.

Ben: 00:13:37.616 Can you just sort of go a bit deeper about how you view the most secure way possible? What do you look for as far as that definition of the most secure way possible?

Yash: 00:13:46.261 It again depends on what systems they're trying to access. I'm not going to make a user do 10-step verification to go change their email for HR purposes or something like that, right? But if you're talking in general, I would say a multi-layered approach is what we look at. And multi-layered doesn't always need the user's touch. It could be device-based authentication and other heuristics which don't need the user to interact with anything. On top of that, you have hardware-based tokens, and with the new MacBooks and other laptops coming with fingerprint readers, that makes MFA a lot smoother than it used to be. So what we try to do is a multi-layered approach that's easy for people to use. They may not even know that other authentications and authorizations from their laptop are happening in the background.

Philosophy on SSO and hardware tokens when locking down systems

Ben: 00:14:40.017 Yeah. And so when you lock down a system, you have to think about, you have single sign-on, but then you might also add a hardware token. I think most people click through Okta, or whatever they use, and then there's a second hardware token, so you have two checks in place.

Yash: 00:14:57.973 A little more than that. We do use Okta. So we have the username, password, and then there's hardware-based MFA on top of it. And for critical applications that authenticate and authorize through Okta, we have device-based trust as well. So in theory, you shouldn't be able to access sensitive assets like cloud infra or code from a machine that wasn't provisioned by IT.

Ben: 00:15:24.054 And so that guards against the malware attacks that we've seen exfiltrating cookies. And you can make sure people's machines are up to date from a device trust posture.

Yash: 00:15:35.633 For the malware that extracts session cookies, I don't think this will stop that, because the attacker gets a session cookie that's already authenticated and authorized, so they pretty much get in. That's a very different scenario. For malware, we have EDRs and things like that on our devices that will look for malware signatures and alert our incident response teams. What we have will protect against phishing or other password-stealing attacks, where attackers try to get the username and password and then trick the user into doing an MFA, either through SMS-based stuff, or acting as a proxy, or push fatigue, and other techniques that they use.

Ben: 00:16:18.827 Am I correct in saying you're using SSO and hardware tokens, like a YubiKey, in your current setup?

Yash: 00:16:24.257 Yes. So we have either YubiKey or fingerprint.

Ben: 00:16:27.819 Was this rolled out before you joined? Or was this an initiative that you rolled out?

Yash: 00:16:31.505 The IT team did have Okta in place before I came in. After I came in, we made a big push to put as many applications behind Okta as we could. Today, I think we have every application that supports SSO behind Okta. The YubiKeys were something we rolled out recently, after I came in. Initially, it was a username, password, and any second factor. We slowly changed that: we removed SMS, went to Okta push, or YubiKey if you had one, or fingerprint, and slowly made the transition to hardware tokens.

The wording around Zero Trust

Ben: 00:17:06.495 I think that's a good rollout. Before this interview, we were chatting, and one thing we talked about was the concept of zero trust. Some of the topics you've touched on are in that realm of zero trust, but we also said that the choice and branding of the term "zero trust" isn't really ideal. Can you explain why you think the zero trust wording isn't ideal?

Yash: 00:17:30.345 My opinion is you need to trust something. It can be a certificate you have, a password you know, or a hardware token you possess. I feel that's different from what you'd call zero trust. Zero trust, the terminology, has usually been used where people want to convey that, "Hey, being on a network doesn't give you implicit access to something." That I completely agree with, right? Just jumping on the VPN shouldn't automatically give you access to some application behind the VPN. VPN can be a layer, but then you still would want your SSO authentication on it. So again, coming back to zero trust, I feel you need to trust some things, so "zero trust" is not quite the right term.

System inputs involved in multi-layered trust

Ben: 00:18:13.536 I think that goes into the topic for this podcast, which is multi-layered trust. So let's say, on one side, we have zero trust, where you don't trust any entity on your network; you are trusting an individual. What sort of checks and balances do you see put in place to provide that multi-layered trust, to make sure the person using that device is doing what they should be?

Yash: 00:18:37.391 Security, as always, should be a balance between usability and protection. So keeping those two in mind, as long as we can implement something that's multi-factored, relatively phishing-proof, and easy to use, where it doesn't bother the user too many times (I don't need the user to do a hardware-based MFA every 15 minutes on every app they try to access), somewhere in between you have a good security practice which the end users will also love using.

Ben: 00:19:12.750 And with all this information that you collect, do you put it into a SIEM solution? Or do you have telemetry to figure out, "Okay, this user's behavior is changing, so possibly adjust this trust level," those sorts of feedback loops?

Yash: 00:19:27.758 We do send our logs to a SIEM, and we have alerts based on the logs coming in. Also, Okta and other SSO providers do provide some of those capabilities. One of them is like, "Hey, if you logged in from California, you shouldn't be logging in from China 15 minutes later." Basic rules like that. And they restrict access, or they ask for additional MFA. So we have those capabilities turned on and tested. We also have automation on our end from our SIEM solution where, if a malicious login does appear, our workflow notifies the IR team and also sends a Slack DM to the user saying, "This is what we noticed. Do you know anything about it? What were you doing?" And based on the response the user gives, the IR response will vary. If they give an explanation and say, "This is what's happening," maybe it's a VPN they connected to or something else, that response is recorded in our ticketing system. If they say, "I have no idea. I didn't touch anything in the last 30 minutes," then that automatically triggers an incident response on our end.
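The "impossible travel" rule Yash mentions is straightforward to sketch: if two logins by the same user imply a travel speed no airliner could manage, alert and DM the user. The login-event field names below are assumptions about what a SIEM might emit; the Slack call uses the real chat.postMessage endpoint.

```python
# Sketch: impossible-travel detection plus a Slack DM to the affected user.
import math
import os
import requests

MAX_PLAUSIBLE_KMH = 900  # roughly airliner cruising speed

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def is_impossible_travel(prev, cur) -> bool:
    """Events are dicts with ts (epoch seconds), lat, lon -- assumed SIEM fields."""
    hours = max((cur["ts"] - prev["ts"]) / 3600, 1e-6)
    speed = km_between(prev["lat"], prev["lon"], cur["lat"], cur["lon"]) / hours
    return speed > MAX_PLAUSIBLE_KMH

def dm_user(slack_user_id: str, text: str) -> None:
    """Ask the user about the suspicious login, as described above."""
    requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
        json={"channel": slack_user_id, "text": text},
        timeout=10,
    ).raise_for_status()
```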

Ben: 00:20:29.793 So I guess that's quite helpful, because it covers both security and compliance: you have a paper record of the checks and balances, that it went through a procedure, and that the person checked or verified. I think that's a good example of security and compliance working together without being a blocker. With all these multi-layered systems, are there any other inputs? You mentioned device trust. Do you use IP-based blocks on access to infrastructure?

Yash: 00:20:59.484 The device-based checks are more certificate-based; certificates are pushed onto the laptops during provisioning through our MDM solution, and we have an integration between our MDM and Okta. So the certificate that's in Okta and on the device, plus Okta Verify on the MacBook, as a combination give out signals that say, "This device is owned and managed by the Sendbird IT infrastructure team." When you're logging in, there's communication between the Okta login and these things on the backend, and that acts as a layer of authorization.
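At its core, the device signal Yash describes reduces to a question: does this laptop hold a certificate issued by the CA our MDM provisions? Okta device trust handles this (and more) as a product feature; the sketch below, using the Python cryptography library and assuming an RSA-signed device certificate, only illustrates the underlying check.

```python
# Sketch: verify that a device certificate was signed by the MDM's CA.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def is_company_device(device_cert_pem: bytes, mdm_ca_pem: bytes) -> bool:
    device_cert = x509.load_pem_x509_certificate(device_cert_pem)
    ca_cert = x509.load_pem_x509_certificate(mdm_ca_pem)
    try:
        # Assumes an RSA CA key; a production check would also validate
        # expiry, revocation, and the full chain.
        ca_cert.public_key().verify(
            device_cert.signature,
            device_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            device_cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False
```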

Ben: 00:21:38.485 Yeah. So you sort of trust in the device and the person combined. And do you support a range of operating systems? Or do you just provide macOS or Windows?

Yash: 00:21:47.573 We are very heavily macOS, so that makes things relatively easier for us.

Thoughts on break-glass scenarios in the world of multi-layered trust

Ben: 00:21:53.985 We're in the same place at Teleport. I think we've limited it mostly to macOS for production systems as well; it makes it very easy to lock down. And then let's say there's a break-glass scenario: you're on holiday, or you only have access to a Chromebook, and you need to get access to systems. You have all these layers of trust, but sometimes you might need to break them to fix a problem. How do you approach those scenarios?

Yash: 00:22:24.741 That was one of the biggest concerns from engineering and other teams when we said, "Hey, we're going to lock these things down because of X, Y, and Z." Their question was, "What if we have an incident? What if we cannot access a system during an incident and we need something else?" Break-glass scenarios can also be multi-layered. A good friend of mine from back at Twilio used to run cloud security, and he had a good approach for at least the AWS accounts that were used as break-glass accounts. A couple of things we did there can be extrapolated to break-glass accounts for other services too. One, the email confirmation that comes in when you log into that account or change the password goes to a PagerDuty account or a group alias, so that multiple people know the break-glass account has been accessed. Two, throw away the password, because then you have to go through a password reset. For MFA, you either put it in a safe with two people sharing the code. This was back in the day when we would go into the office; we had a physical safe with an MFA hardware device in it. But since you can't do that anymore, the other option is that the MFA enrollment is basically a QR code, and you can break that down into an actual seed. So you break the seed into multiple pieces and give them to multiple people. If you break it into two pieces, A and B, you can give A to two people and B to two people, so a combination of two of those four guardians of the seed can let you get into the break-glass account. And when you do get in, it triggers email alerts to a larger audience, so that the IR team and the infra team, and whoever else needs to know, knows that somebody has accessed the break-glass account.
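The seed-splitting idea is simple to sketch. A TOTP enrollment QR code encodes a seed; XOR-splitting that seed into two shares means neither share alone reveals anything, and both "guardians" must cooperate to reconstruct it. This is a minimal two-of-two split; a deployment wanting Yash's two-of-four arrangement, or more flexibility, might use Shamir secret sharing instead.

```python
# Sketch: XOR-split a TOTP seed so no single person can mint break-glass MFA codes.
import secrets

def split_seed(seed: bytes) -> tuple[bytes, bytes]:
    """Return two shares; each alone is indistinguishable from random."""
    share_a = secrets.token_bytes(len(seed))
    share_b = bytes(x ^ y for x, y in zip(seed, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original seed."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

if __name__ == "__main__":
    seed = secrets.token_bytes(20)  # TOTP seeds are commonly 20 bytes
    a, b = split_seed(seed)
    assert recombine(a, b) == seed and a != seed and b != seed
```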

Balancing security vs. convenience

Ben: 00:24:08.301 Yeah. That sounds very similar to multi-signature wallets in the crypto world, where you can have multiple parties and that extra verification, which I guess also solves the wrench attack made famous by xkcd. It's an unfortunate attack, and I hope no one faces it, but it means you need to compromise multiple people. And so, we talked a little bit about security vs. convenience. What are some of the things you've put in place that have been unpopular, that people have seen as slowing them down?

Yash: 00:24:36.696 I think the most unpopular thing we put in place was restricting access to our code on GitHub to company-provisioned laptops. Based on everything we spoke about, we put out Okta rules, for GitHub to start with, saying, "You can only access code from company-provisioned laptops and through Okta." People weren't happy. Some of the complaints were, "Now I cannot review PRs from my phone when I'm on the train commuting to work." And as these complaints were coming in, I think around the same time, Okta had their breach. That helped me make my case of, "Look, this is why we are doing this. I don't want our source code to leak." But it was very unpopular.

Ben: 00:25:23.031 Do you have engineers who work around this? Engineers will always find ways around any system. I know we've got a bunch of people who like just deploying things. Sometimes it's due to old provisioned laptop hardware; it depends on your refresh cycle. And so by the end of it, they just fire up a big virtual machine and develop in the cloud. Do you facilitate or allow any of those cloud-based workflows?

Yash: 00:25:46.024 No, not today. And to your point, I've had people who used to do that. When I was talking to somebody, they were like, "Hey, this is how I bypass your MDM controls." When I heard that, there were two trains of thought going through my head. One was, "I'm absolutely mad at this person." Two was, "This is brilliant. You're red teaming my systems and giving me actionable feedback, and I don't have to pay you for this red teaming service." So I take the second approach: "If you can break these things, so can the bad guys. If you do break it, come work with us." It's been really helpful to work with people in the company who have found ways to bypass these controls. We just use those findings and make our systems better.

Approach to incident response and post-incident analysis

Ben: 00:26:25.548 Yeah. I think it's always good to have that as a feedback loop. How do you make yourself approachable for people to give you feedback?

Yash: 00:26:31.955 I'm pretty open; I'm available on Slack. I come into the office on office days. I also just go talk to people that I don't know, either here in the US headquarters or when I'm in the Korean office. I make it a point to go introduce myself, tell them what we do, and explain why we do certain things. And then just be conscious of the fact that we may be adding inconveniences for people, give them a chance to tell us why they don't like something, and work with them to make things better. As you start doing that, people talk, right? "Oh, I didn't like this. I went and talked to security. They made it better." So that's going from a security team that puts barriers in place and slows people down to a team that is empathetic, is open to feedback, will acknowledge its mistakes, and works with everyone else to do what's right for everyone.

Ben: 00:27:23.682 Yeah. No, that's great. And so if there is an incident, let's say an engineer gets phished, some cultures can be kind of embarrassed about it. What are your thoughts on approaching incident response and post-incident analysis within Sendbird?

Yash: 00:27:39.004 So the first thing I'd mention here, which is slightly tangential to what you asked, is that somebody getting phished or causing an incident is not that employee's fault. It's on the security team to make processes and tooling and detections robust and good enough that they will detect and stop phishing. Not blaming the employee is point number one. Someone falling for a phish is not why you were breached. You got breached because you didn't have things in place that would prevent it.

Ben: 00:28:09.852 And I think your second factor setup seems almost phish-proof. So even if someone does click a link in an email or puts their password in, it's not going to become an incident.

Yash: 00:28:18.555 Yeah. My goal is that people should be able to click links in their email. That's what emails are for: you have links in them, you have information in them that you should be able to access. And you as a security team need to have things in place, EDRs, phishing protection, MFA, all of those things, to help the user do that in a secure way. Coming back to your question about the IR process: once something happens, hopefully we get an alert from our [inaudible] systems, we start our investigation, we bring the relevant parties in, block access, contain the incident, and then start investigating what went wrong, how deep it was, and what we need to do, bringing in the relevant parties depending on what data is in question, and then go from there.

Ben: 00:29:04.372 Talking about post-incident analysis, how often do you find that you have all the information you need, and how often do you use it as a learning mechanism to improve your systems?

Yash: 00:29:16.076 Every incident that I've been a part of, at Sendbird and before, has learnings of some nature, right? The learnings could be: we could have detected this faster, or we could have had something in place to prevent this, or we could do documentation better. That was the learning from one of the incidents I was part of: we need to do documentation better, so that six months later when we look back, we know exactly what happened. So irrespective of how robust or how mature your program is, I always think there is room for improvement, and incidents are a good way to pick up these little snippets of improvement and follow up on them.

Security measures and technologies implemented at Sendbird

Ben: 00:29:54.046 And so talking of sort of improvements, can you speak to any specific security measures or technologies that you've implemented to protect user data and prevent breaches?

Yash: 00:30:04.155 Access control, since that's what we're talking about on this podcast. Access control is a big one: things like having our AWS infrastructure access go through Okta and making it time-bound and device-bound, using a low-privileged user versus a root user, things like that, to secrets management being in place, to other sorts of detections in the cloud that will alert you when an anomaly happens. So I think it's a combination. I wouldn't necessarily focus only on measures that protect the data itself, because a breach doesn't start at the source of the data, right? It starts at the weakest point you have, and then escalates from there until the attacker gets to the data. So all of the points along the path from the breach point to the data are things you need to secure.
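As one concrete example of the time-bound, low-privilege access Yash mentions, here is a sketch using AWS STS: engineers assume a narrowly scoped role whose credentials expire within the hour instead of holding long-lived keys. The role ARN is a placeholder.

```python
# Sketch: short-lived, scoped AWS credentials via STS instead of standing keys.
import boto3

def get_scoped_credentials(role_arn: str, session_name: str) -> dict:
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,              # e.g. a read-only operations role (placeholder)
        RoleSessionName=session_name,  # shows up in CloudTrail for auditing
        DurationSeconds=3600,          # credentials expire after one hour
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```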

Final practical security tip

Ben: 00:30:58.342 Great. So I think we're coming near the end of our time now. I always like to close it out with one sort of practical security or access control tip. Do you have any recommendations for listeners?

Yash: 00:31:08.293 Since we're talking about access control, I'd say: make sure you have more than one thing in place that needs to go bad for you to lose data. That could be a VPN in front of your authentication, a hardware-based token, or a two-person rule, but always make sure you have two things in place protecting your sensitive data.

Ben: 00:31:27.363 Awesome. I think that's a great way to close it out. Well, Yash, thank you for your time today. I really enjoyed our conversation.

Yash: 00:31:33.014 Thanks, Ben. It is my pleasure to be here.

Ben: 00:31:35.283 Oh, and do you have anything else you would like to say about hiring? Do you have any other sort of pieces of information you'd like us to talk about?

Yash: 00:31:43.153 Sendbird is definitely hiring. There are security roles open; there are other roles open. We're definitely growing, and it's an exciting time to be here. So do take a look and see if something interests you.

Ben: 00:31:56.309 This podcast is brought to you by Teleport. Teleport is the easiest, most secure way to access all your infrastructure. The open-source Teleport access plane consolidates connectivity, authentication, authorization, and auditing into a single platform. By consolidating all aspects of infrastructure access, Teleport reduces attack surface area, cuts operational overhead, easily enforces compliance, and improves engineering productivity. Learn more at goteleport.com or find us on GitHub, github.com/gravitational/teleport.
