Hacker-Powered Security - Overview
Key topics on Hacker-Powered Security
- Bug bounty programs and vuln disclosure programs are similar, except the first pays and the second doesn’t.
- The scope of bounty programs usually covers a company’s main production applications; third-party services are mostly out of scope.
- Rules of engagement depend on the bug bounty program and the company.
- Some programs pay for credential stuffing, but not for phishing, since companies don’t want you to phish their employees and customers.
- How much hackers are paid in a bug bounty program is entirely up to the company and depends on its budget.
- Determining a bug’s severity depends on a combination of the vuln type, how critical it is, and the asset itself.
- Hackers care more about how fast they get paid than about how quickly the company fixes the issue.
- A bug bounty program doesn’t make you a bigger target.
- Building a public bug bounty program depends on the product and size of the company.
- Improve input validation to reduce the number of bugs introduced
Expanding your knowledge on Hacker-Powered Security
- SSRF Attack Examples and Mitigations
- Preventing Data Exfiltration with eBPF
- Teleport Kubernetes Access
- Teleport Application Access
- Teleport Access Platform
Ben A.: Welcome to Access Control, a podcast providing practical security advice for startups, advice from people who have been there. In each episode, we'll interview a leader in their field and learn best practices and practical tips for securing your org. For today's episode, I'll be talking to Ben Sadeghipour, Head of Hacker Education at HackerOne and a hacker by night, who has found over 685 vulnerabilities in major sites such as Snapchat, Airbnb, and even the US Department of Defense. HackerOne helps companies by providing tools for response assessments and running their bug bounty programs. Hi, Ben. Thanks for joining us today.
Ben S.: Yeah, thank you for having me.
Bug bounty finding in the early days
Ben A.: So to kick off, can you tell me about the first bug bounty that you found?
Ben S.: It's been a very long time since I've been doing the bug bounty thing. But I want to say the first bug I found was back in late 2014, 2015. There weren't a lot of bug bounty programs then, but it was a private program, I believe, on a bug bounty platform where they wanted us to pretty much just test their assets. And I was able to identify a self-XSS. If you're familiar with self-XSS, it's not a vulnerability; you really can't do much with it. But they were nice enough, and because it was the earlier days and people were still figuring out, like, "Do we pay for these things or do we not?", they actually paid me for it. Thinking back on it, I probably shouldn't have earned that bounty anyway. Fast forward, the first real bounty I can think of that was a pivot point for me was actually discovering the Yahoo Bug Bounty Program. The Yahoo Bug Bounty Program is where I had a lot of my success. I learned a lot and was able to find a lot of the vulnerability types I still use in my methodology today.
Introducing bug bounty programs
Ben A.: Yeah. And so for people unfamiliar with the concept, can you introduce what a bug bounty program is?
Ben S.: So there's bug bounties and vuln disclosure programs. They are similar things, except one pays and the other one doesn't. So the bug bounty is where you receive a bounty, you get paid for your work. And the vuln disclosure programs are the ones that they recognize you, but they don't give you any monetary reward. For a bug bounty program itself or a vuln disclosure program, they both come up on a web page. It could be on their website. You could go to — for Facebook, for example, it's facebook.com/whitehat, or for some of the other bug bounty programs, it's security. Whatever the page is, you go to it. It's an agreement between the companies and the hackers. They pretty much tell you: "Hey, as long as you do these things and don't do these other things, you work within the scope of our program," so if my website is site.com, "if you only hack on site.com, you're good. Otherwise, we may not pay you or recognize it." Most of the time, they're not going to come after you anyways if you accidentally find something out of scope. But the whole point is, "Come hack on our assets, and we will pay you or reward you in return."
The scope of bug bounty programs
Ben A.: And so could you give an example of things that people decide to put in scope as opposed to things that they don't put in scope?
Ben S.: So the things that are in scope are usually their main application, where the production sites are, where customers and users typically go. The stuff that's out of scope is mostly third parties. Think about companies that use Zendesk, for example; that's not owned and operated by them. Their status page is another example; those are the things you usually see out of scope. There are also times when a company has a giant infrastructure but doesn't want everything in scope because they're just building their bug bounty program and only want to test a few assets. In that case, they only put in the main applications that are valuable to them, the ones hosting the majority of their users' data.
Internal vs. external services
Ben A.: How do you divide up between, sort of, internal services? So let's say it could be an internal admin dashboard for creating accounts as opposed to the users' login at a dedicated .com site?
Ben S.: So those all depend on the company. A lot of the companies I personally work on are the ones that actually allow you to do anything: everything they own is in scope, or everything within the application, whether it's the customer, the user, or the manager positions with their different access roles, is all in scope. It all depends on the program or the company, and they're very, very good and very vocal about what they want and how they want it to be done within their policy.
Ben A.: And so in policy, I think, would phishing and getting credentials from employees be in or out of a program?
Ben S.: Funny enough, in our industry, a lot of red teamers and a lot of offensive security folks rely on phishing and credential stuffing and that kind of stuff. With credential stuffing, I've seen some programs pay for it, but phishing has never been something that I've been able to participate in with bug bounty programs. They don't actually want you to go after their employees or their customers and phish them. They want you to purely hack on their web assets and find a way in as an outsider.
Ben A.: So mainly code, infrastructure, other services they have, and that's core that they can sort of update, manage, and maintain?
Ben S.: Yeah. Think about it as purely testing on web apps. Or it could be even beyond that. It could be mobile applications, IoT. It could be anything, but it's just not the humans behind the application, but the actual thing, the product that this company owns and provides to their users or customers.
Deciding what to pay
Ben A.: Yeah. And so you said at the beginning, there's bug bounty programs which pay and then vulnerability programs which don't. How do companies decide how much to pay for different bounties?
Ben S.: That's entirely up to the company a lot of times. It's on their budget — how much they can afford for that quarter or for that year, how much they want to spend. Obviously, the bigger the company — the enterprise-level companies — pay more. If you look at companies like Uber, Verizon Media, and Google, and Facebook, they pay top dollar. They have the money. They have the budget. They burn through it.
Ben A.: And so can you give an example of a sort of monetary value and what would be the vulnerability?
Ben S.: So yeah, the bugs that get paid are typically based on criticality. With bug bounties, you want to show impact. One of the things I tell all my friends, hackers, and new hackers is to always ask yourself, "So what?" So what can you do with this vulnerability to affect the user or the infrastructure that belongs to this company? If you can't answer that question, you probably don't have anything, or any impact. So, for example, if you have a CSRF that turns off notifications, so what? I may miss a notification. It's a bug, they're going to pay you, but it's not going to be that much.
How to set severity
Ben A.: A remote code execution would be a much more severe bug.
Ben S.: Exactly. When you find an RCE, for example, and you can actually run arbitrary commands on a production server, pivot through the network, get access to internal stuff, and dump databases, then you have the keys to the kingdom at that point, and it's going to be a top-of-the-line vuln. But there's also the concept of production versus marketing site. If you're getting RCE on a marketing site that hosts a landing page, some images, and marketing data, so what? You have RCE, but that's gated off completely from the production side, versus finding an RCE in production itself.
Ben A.: Is the severity also set by the company, too? I guess that's probably the perfect example, marketing site versus production. There could be some companies for which the marketing site is very sensitive (people could short the company if you're an oil company and you post poor earnings), as opposed to a B2B SaaS product, which might have more sensitive customer data and just care about its production databases.
Ben S.: Yeah. With HackerOne specifically, the platform has the ability for you to mark what assets are critical, medium severity, or low. And it's not that the others aren't valuable, but again, if the company is spending $50,000 on a critical, they can't give you $50,000 for owning a marketing site and also give you $50,000 for owning the production side, right? So there's going to be a scale combining the criticality of the bug, the asset, what you can actually do at the end, and how much of that impact you can actually prove. Some companies have a different approach, but a lot of times, we'll see similar patterns: a combination of the vuln type, how critical it is, and the asset itself. And we use CVSS, which is a blessing and a curse. But I enjoy using CVSS personally because, as a hacker, it gives me the tools to make an argument for why this is a critical versus a high, or a high versus a medium, based on the CIA factors (confidentiality, integrity, availability) that come with the CVSS calculator.
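The CVSS severity bands Ben argues within are fixed by the spec: 0.0 is None, 0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, and 9.0–10.0 Critical. A minimal Python sketch of that mapping (the example scores in the comments are hypothetical, not tied to any program):

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # e.g. an unauthenticated RCE: prints "Critical"
print(cvss_rating(4.3))  # e.g. a low-impact CSRF: prints "Medium"
```

Programs then attach bounty ranges to these bands, scaled by the asset's criticality as Ben describes.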
The resolution time for a company to fix an issue
Ben A.: Once a company has the bug report, is there a set resolution time that they need to sort of resolve it and then pay whoever found it?
Ben S.: The thing with HackerOne is, on the right-hand side of the program, you can actually see the stats: how long it takes them to first respond to you, how long it takes to actually triage a vulnerability, how long it takes them to pay and then resolve it. For me as a hacker, personally, I don't care when this gets fixed. I would hope you fix it soon, because if it's an RCE, you don't want it to be there for days. But as a hacker, personally, I care more about how fast I'm going to get paid. Am I going to wait three months, or am I going to wait three days? The numbers shown are averages over the last 90 days. And a lot of companies have gotten better with the payment part of it because they understand that you did your work — you've done everything you could. You have provided all the information. You have a working proof of concept. The impact is there — everything is there. They pay you, and they ask you to re-test it once it's fixed.
Whether a bug bounty program makes you a bigger target
Ben A.: Some teams still like to use security through obscurity. Does having a bug bounty make you a bigger target?
Ben S.: If an adversary is going to come after your company, they're going to come after you. They're not going to think: "Oh, let's just look at what companies have bug bounties and own them." They come after you as what we call a hacker with malicious intent, someone with bad intent. They're not going to care. They're not going to look for your bug bounty program. They're going to go after you. Hackers are doing that no matter what. They don't care what company you are. You have customer data, Social Security numbers, credit card numbers, PII — whatever you do, you have it, they want it. They're not going to care whether you have a vuln disclosure program or not. But if you do have a vuln disclosure program or a bug bounty program, the same hackers, or other hackers with the same skill sets, are going to be looking for similar vulnerabilities, which can help you prevent a potential breach or catch a vulnerability before it affects your infrastructure or your users.
Ben A.: So it's more a case of not if but when, and do you want it to be responsibly disclosed or not?
Ben S.: Yeah. It's just a matter of, are you going to work with the hacker community to find these vulnerabilities, or are you going to try and do a pen test, whatever the engagements are internally? It all depends on the company size and their budgeting and that sort of stuff. But at the end of the day — I get this question asked by a lot of CISOs, by a lot of folks at conferences like, "Do I become a bigger target?" No, you are already a target by having stuff on the Internet. The day you put a site up and you started collecting data is when you became a target.
Stopping abuse of APIs and services
Ben A.: And then is there any way to stop abuse of, let's say, internal APIs or services through, sort of, port scanners or other tools which might be used by people sort of probing the services?
Ben S.: It all depends. In the case of a company the size of, let's say, Apple, I'm always going to be port scanning their stuff. I'm going to be looking at all their web assets. And a company like that has the resources to handle that kind of traffic. I'm not the only person doing it, either; there are thousands of companies doing this already, let alone hackers. You can't fully guarantee that hackers — whether they're white-hat hackers you're working with or random hackers on the internet — aren't going to port scan you. But you can ask them nicely in your policy, and that's something that we do and encourage: limit the number of requests per second to X. So if you're directory brute-forcing, don't do 1,000 requests per second. Tone it down. Don't do multi-threading. I've also seen companies ask for a particular request header that identifies you as a white-hat hacker, with your username in it. There are different ways you can ask, and a lot of hackers will comply. At the end of the day, the white-hat hackers, the bug bounty hunters you're working with, are there to help you and to abide by whatever rules you've put up. They may choose not to participate because the rules don't make sense to them, but that doesn't mean they're going to break those rules, especially when there's money involved and they want to get paid for their work.
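The etiquette Ben describes, throttling your tooling and tagging traffic with an identifying header, might be sketched like this in Python. The header name, username, and rate cap below are made-up placeholders; a real program's policy page specifies its own.

```python
import time
import urllib.request

# Placeholder values: check the program's policy page for the actual
# header name and rate limit it asks for.
IDENT_HEADERS = {"X-Bug-Bounty": "h1-yourusername"}  # tags traffic as white-hat
MAX_REQUESTS_PER_SEC = 5  # stay well under whatever cap the policy sets

def polite_fetch(url: str) -> int:
    """Fetch one URL with the identifying header, then pause so a
    single-threaded loop never exceeds the rate cap."""
    request = urllib.request.Request(url, headers=IDENT_HEADERS)
    with urllib.request.urlopen(request) as response:
        status = response.status
    time.sleep(1.0 / MAX_REQUESTS_PER_SEC)
    return status
```

A single-threaded loop over `polite_fetch` gives the program's defenders an easy way to distinguish your scan from an attack and to reach you if it causes trouble.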
Steps for building a public bug bounty program
Ben A.: All right. So I'm sold on a bug bounty program. As many of our listeners are startups, what are the steps for considering a public bug bounty program as a company? So just listening to you, I'm like: "Okay. I'm going to sign up." How do I go about it?
Ben S.: So it all depends on the product, the size of the company, and what you have done in the past. I would do more of a private approach first. So maybe work with one of these bug bounty platforms. Come work with us at HackerOne. Get a private program. You can choose how many hackers you want to work with, and the team that we have will advise you on that. We'll build up your program for you. You don't want to jump into a public program and have thousands of hackers look at it, where overnight you're receiving thousands of reports. You don't have the staff to triage these, let alone validate them, fix them, and then award them. You want to start small. You want to work with a small group of hackers and a small scope, and then eventually expand that out. Previously, I worked at a media company and created a bug bounty program for them, and our approach was literally this: we started with one single asset and 10 hackers in our program, then built that up, 20 hackers, 50 hackers, more and more assets. By the time I left that year, because of how much we had grown our hacker base and how strategically we were adding each asset and deciding how long to keep it in scope, we were up to maybe 100 to 200 hackers, with almost everything we owned in scope.
Ben S.: So you want to slowly work up on those and make sure you are taking these findings internally and fixing them. For example, if you see a pattern of cross-site scripting happening everywhere you allow user input, you have some things to change in your code before you start working with more hackers. If you're seeing that your developers are making mistakes when they deploy code, leaving things behind, Git folders, SVN directories, whatever it is, that's a pattern you want to work on and fix. Then it's also about creating that internal process. What happens when you receive a bug? What is the lifecycle of a vulnerability? Obviously, the bigger the organization, the bigger that process becomes, but you want to have it. What happens? Who's on the front line? Is it being triaged by a company? Perfect. But what happens after it's triaged? What are your SLAs internally? Who are you going to work with? Do you know how to identify who owns each of these products or micro applications within your company? Starting small allows you to build these processes: taking these bugs internally, working with your teams, creating the right teams and the right processes to make sure things are getting fixed in time and fixed properly as well.
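For the cross-site scripting pattern Ben mentions, the usual code-level fix is escaping user-controlled input when it is written into a page. A minimal Python sketch, using a hypothetical comment-rendering helper:

```python
import html

def render_comment(comment: str) -> str:
    """Escape user-controlled text before embedding it in HTML, so a
    script payload renders as inert text instead of executing."""
    return f"<p>{html.escape(comment)}</p>"

print(render_comment("<script>alert(1)</script>"))
# prints: <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Most template engines do this automatically; the recurring bug pattern Ben describes usually comes from paths that bypass that escaping.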
Ben A.: And does that normally fall in the office of the CISO, or would it go to, sort of, engineering?
Ben S.: I think it's a mix of both. I think the CISO should be involved. At the end of the day, you are the chief of security, pretty much, so it's something you want to be involved in. But we have seen CTOs get involved before. We have seen engineers and VPs of engineering be involved. A lot of times, though, the more successful bug bounty programs that I have personally worked in as a hacker were the ones directly run by the security teams. And they internally work with the engineering teams or DevOps teams or whatever teams they have to create these processes and explain: "Why is this vulnerability happening? Can we use libraries to prevent it from now on?" They're educating the engineers on how to do these things better, because the point of the bug bounty, for the hackers, is to get paid, but as a company, you want to take those learnings and make sure you, A, educate your engineers internally, and B, prevent the same pattern from happening over and over and over again.
Ben A.: For example, using content security policies (CSPs) to stop cross-site scripting.
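A content security policy like the one Ben A. mentions is just an HTTP response header. A minimal sketch of assembling one in Python; the directive set here is illustrative, not a recommendation for any particular app:

```python
# Illustrative directive set; tune the allowed sources to your own app.
CSP_DIRECTIVES = {
    "default-src": "'self'",  # only load resources from our own origin
    "script-src": "'self'",   # no inline or third-party scripts
    "object-src": "'none'",   # block plugins entirely
    "base-uri": "'self'",     # prevent <base>-tag hijacking
}

def csp_header_value() -> str:
    """Serialize the directives into a Content-Security-Policy header value."""
    return "; ".join(f"{name} {value}" for name, value in CSP_DIRECTIVES.items())

print(csp_header_value())
# prints: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'
```

Set as the `Content-Security-Policy` response header, this acts as a defense-in-depth layer: even if an XSS payload slips past escaping, the browser refuses to execute inline or third-party scripts.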
Ben S.: Yeah. The other thing is you want to run a few scanners. If you work with a company that provides web scanning, they're not going to find everything, but they're going to find the low-hanging fruit, the XSS bugs that are easy to identify. Do that before you go to a bug bounty program so you understand your flaws, your weaknesses, the things you need to fix first. Because once you open up that floodgate, especially if you're going down the public route, there are going to be hundreds of people, if not thousands, looking at your bug bounty program. So take your time. Start small. Build a process internally. Know who your key players and stakeholders are internally, who you're going to work with when a bug comes in, who's accountable for getting it fixed, what an SLA looks like, and all that good stuff.
Ben A.: And what would be a good budget to start with this private group of hackers?
Ben S.: That really depends on your budget. We have had companies that pay anywhere from $100 to $3,000 as a start. I've also seen companies paying $100 to $10,000. It all depends on how much money you have. But also, the more you give, obviously, the more the hackers are going to come back. If I, as a hacker, participate in your program, I'm making a crap ton of money, your team's very nice to me, you're fast at paying me, you communicate with me, you tell me whether the bugs are valuable to you, and if they're not, why not and why I shouldn't look for those bugs. If you communicate that with me, I'm going to come back to you more and more. So that budgeting depends, but you want to make it worth their time. And if you want to go after top hackers, remember some of these hackers are making millions. I think we have 9 or 10 hackers who have made over a million dollars on bug bounties. They're not going to look at a program that pays $1,000 max when other companies pay 15, 20 thousand dollars. So you have to understand what kind of talent you're going to attract first. It's okay not to have that $20,000 bounty max, but work up towards paying more eventually, once you add more assets and expand the scope of your program.
Ben A.: If a bug is found and has gone through this whole process, as a company, what are the best recommendations for disclosing that an issue has been found, reported, and resolved?
Ben S.: So that all depends on the company itself. On HackerOne specifically, some companies, like Shopify, disclose everything that's been found. They have full, 100% transparency, which is great for their customers and the hackers. The entire community enjoys seeing it, and it shows that you're working on these things. And there are companies that have a zero-disclosure policy; they don't disclose anything. I think disclosing that you're working with white-hat hackers to fix your products is a great thing. It can be used as an advantage: "Hey, we're working with professionals in the industry who are helping us secure our applications and products." But it's not a requirement to disclose them directly yourself. You can also allow the hackers to disclose it on their blogs. They can write about it; some of them may make videos, whatever it is. It's not a requirement, but as a hacker, it's very nice to be able to write about my experience and talk about the research I've done. There's some amazing research that comes out of bug bounties; if you look at recent examples, you'll be mind-blown at how these things were even found. So it helps the entire security industry, not just bug bounty itself.
Ben A.: But what happens if something comes in and the bug is through sort of an upstream dependency?
Ben S.: That depends. So actually, I disclosed a bug on Lyft about a year ago. We found a 0-day in an application they were using in their backend, a PDF generator. We really wanted to own Lyft, so we owned the entire application. And they actually worked with that third party to get it fixed, and they awarded us for it. Most of the time, companies help or pay for these things, because it's also sometimes about how they implemented the software or the dependency, whatever it is. And going back to your earlier question about being a target: hackers with malicious intent have 0-days in their back pockets a lot of the time, and they're using those against companies, so it's going to happen to you. It comes down to this: do you not pay because it's not your fault that the software or dependency you're using is vulnerable, since it's not maintained by you, or do you help get it secured through your company? For example, I've had hackers find things in Jira. They report it to Jira. A lot of hackers wait the 90 days; some of them don't. But if you forgot to update within those first 90 days, the hacker is going to report it to you. Those things are going to happen. It just comes down to how fast you want to fix them and whether you want to be involved in the process of getting that third party fixed. There was a vulnerability that came out not too long ago from a hacker named Alex. And sorry, Alex, if I say your last name wrong; I think Birsan is how you say it. I can't remember exactly what it was, but he found that there was a repo that was unclaimed or something of that sort. I'll see if I can find it and link it for you to put up somewhere. But he had access to a lot of different companies where he could have owned them.
Ben A.: He squatted on an old NPM module.
Ben S.: Yeah, it was a module you could commit code to, and it was affecting everybody. And every single company paid for it because it was a big deal. Think about it: if that had been an adversary, what would have happened? It could have been very, very bad, depending on the company and how many companies were involved. So this was actually dependency confusion. He went after companies like Apple, Microsoft, and I think Yelp was one of them. And it was all because they were using some dependency through a misconfigured pip installation of some sort. A lot of companies took it as a valid finding, because it also comes down to how you implement it. You're not expected to check for it as an engineer, but you also kind of are, because you're deploying something that's not yours. And at the end of the day, it's in your infrastructure.
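The dependency confusion Alex Birsan demonstrated comes down to a public index serving a package with the same name as an internal one, often at a higher version. A minimal Python sketch of checking whether a hypothetical internal package name is still unclaimed on PyPI; real defenses also include pinning a single index and pre-registering internal names:

```python
import urllib.error
import urllib.request

def public_index_url(package: str) -> str:
    """PyPI 'simple' index URL where a same-named public package would live
    (simplified name normalization: lowercase, underscores to hyphens)."""
    return f"https://pypi.org/simple/{package.lower().replace('_', '-')}/"

def is_claimable(package: str) -> bool:
    """True if no package with this (hypothetical internal) name exists on
    public PyPI, meaning an attacker could still register it."""
    try:
        with urllib.request.urlopen(public_index_url(package), timeout=10):
            return False  # name already exists on the public index
    except urllib.error.HTTPError as err:
        return err.code == 404  # 404 means the name is unclaimed
```

`urllib` raises `HTTPError` for 4xx responses, so a 404 from the public index means the name is still free for anyone to register and confuse your resolver with.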
Ben A.: Yeah. And I think the other risk is it can also be part of a build system, basically CI/CD, which may not be in the final artifact, but it gets into your application at some point.
Ben S.: Yeah. It's a weird thing because I get it — you're not responsible for this thing. Even if it's you — I was mentioning Jira and all these different — Atlassian tools or GitHub, GitLab Enterprise, all those things. Yeah, they're not yours. But first of all, if you're exposing it, why is it exposable in the first place? And second of all, update them. You see a notification coming in — they're really good at sending those notifications when there is a really impactful 0-day coming out. Patch it right away. It's still your infrastructure. It still affects you as an organization, and you can't blame those third parties when something bad happens.
Ben A.: And it goes back to the resolution. As part of starting small, you get better at this cycle of reviewing things as they come in, patching, and rolling updates out, so when something does happen, it's much easier for your organization to be resilient to it.
Ben S.: Yeah. The thing that I've realized a lot, with really good hackers, is pattern recognition. Companies fall into these weird patterns of mistakes. It's the same pattern that repeats itself over and over and over again. It's about getting better at identifying those patterns early on, when hackers are sending them to you, and getting in front of them: finding ways to test for that pattern, prevent that pattern, educate your internal employees on those patterns, and make sure it doesn't happen again. A lot of the time, it's either just basic vulnerabilities in the web app or exposing things that aren't supposed to be exposed, missed updates, dependencies, that sort of stuff. But it's always, always, always a few patterns that keep repeating themselves.
Approach to finding bugs
Ben A.: So as someone who's been on the bug hunting side, can you tell me about what your approach is to finding bugs and what patterns you look for?
Ben S.: It depends. It depends on the company. Like I mentioned, I like to go after big companies that allow me to go after all their infrastructure. Companies like Apple are great, Red Bull was a fun one, Snapchat, Airbnb, they all allow you to do that kind of stuff. Verizon Media, Google, Facebook. I personally like the holistic approach of looking at an organization all the way zoomed out: what they own, all their acquisitions, all their domains, subdomains, their ASNs, and looking for those things. But my favorite thing to do is usually looking for what you mentioned, the CI/CD pipelines, finding all these tools that they use. Finding GitLab Enterprise, GitHub Enterprise, Jenkins, you name it, things they could be using in their deployment process. Those seem to be the funnest things to look for because they're usually pretty well-hidden, but they also have the most impact. Getting into a Jenkins, for example, having access to a CI, is pretty fun because you can get the code base, there are internal tokens you may get to, you can arbitrate —
Ben A.: Get secrets. There's everything.
Ben S.: Yeah, there are times you can arbitrarily execute code. There's a lot of stuff in there. There have been times I've found GitLab instances for big, big companies that were exposed, and there are endpoints you can hit that give you access to repositories that aren't gated from anonymous users. And guess what? Some of those have tokens inside the code. I've gone as far as seeing build tokens exposed to a guest user, an anonymous user, because they were in the logs of one of the builds they had on their GitLab instance. Those are the things that I like to do. I went from hacking to find bugs and make money to, "I want to see the internal parts of this company." My end goal is: I want to see what that corporatesite.com environment looks like. What do you have behind that corporate site? I want to see what tools you use. What does it look like internally? That's been the funnest challenge for me as a hacker personally.
Ben A.: Yeah, I know. And especially at the larger companies, I think at one point, Uber had something like 9 or 10 thousand microservices. And you know that all those different builds — nothing is standardized. So you can imagine there's lots of different avenues for entering a large organization.
Ben S.: Yeah. Seeing the internal parts of the company as a non-employee is always fun because you wonder, like, "How much of this stuff is actually password-protected once you get past the VPN or the access control they have in place?" With SSRFs, for example, when one gives you access to an internal network, you can see a lot of that stuff. And a lot of times, it's been just fascinating seeing what tools they have internally, like, "What assets do I have access to internally with this application that I just owned?" And just seeing the insides of a company and how messy or nice, how insecure or secure, it is. It's a fascinating thing because I can say, "I don't work for this company, but I've seen your internal stuff." There have been a few companies where I have friends who work there and I'm like, "Hey, I've seen this domain you guys have internally. I know what's on it now. I've always wondered what it was. You wouldn't tell me, but I did it on my own. I know exactly what's on there now."
Ben A.: Yeah. So I think that's another good tidbit of advice: securing these internal apps is often an afterthought — "No one's going to see it. It doesn't really matter." But people will find it and see it.
Ben S.: There's been a lot of times now where I've found things through an SSRF where there's no authentication in front of it at all because they're like, "Oh, you have to be VPNing into our network to see this." Doesn't matter. If I own your application that could talk to these services, it's game over.
The most rewarding bugs found
Ben A.: Yeah. So what's the funnest issue you've found?
Ben S.: There have been a few. There have been a very few that just raised my eyebrows, because these are big companies — nothing should get past their red teams — but you just go, "Holy crap. Imagine if I had bad intentions, if I wanted to really screw this company. It would have been really easy to do." A lot of the time, it's server-side vulns. SSRFs, in particular, are really, really fun. The most entertaining one was when I found some backend of some sort for a company, and it asked you for a login, or you could hit Forgot Password. After a directory brute force, I found this endpoint that was like, "Register a new user."
Ben A.: So you just signed up.
Ben S.: So I signed up, but it was read-only. I could only read data, not modify it. So I dumped the list of users I could see, pulled all their email addresses through the API, sent them to Burp Suite, and had it check passwords like 1234567, 123456789, 1234567A, Password, Password01, and Password plus the year — it was 2019, so Password2019. A few of them came back, and a few of those accounts had really good access, which was really, really fun. Those are some really good ones. With Snapchat, for example, I found access to their Jenkins. It was a development environment, so they didn't have anything really sensitive, but it allowed me to execute arbitrary commands on their network, which was really cool. I've had a few SSRFs — Lyft was a really good one I did recently. I actually wrote it up; it's on my blog, and I think somewhere on my YouTube too. I could see everyone's expenses of some sort. I had access to their AWS instances, so I could pull the keys for those instances. I'm pretty sure that's where they were generating all the invoices for their enterprise customers so you could do your expenses. I'm pretty sure if we had dug through it, we could have had more access, but at that point, they were very quick to fix it. Those were some of the highlights — besides the vulnerability type itself, it was also the fact that I can say I've seen the insides of Snapchat a little bit. I've had some impactful bugs on Lyft, and just being able to say I've seen it has been a very, very good feeling.
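The weak-password check Ben describes — testing a dumped list of emails against a handful of predictable passwords — can be sketched roughly as below. The `try_login` callable stands in for the target's login endpoint and is hypothetical; the interview names Burp Suite, not this script. The point is how short the candidate list is.

```python
import datetime


def candidate_passwords(year=None):
    """Build the short list of predictable passwords described above."""
    year = year or datetime.date.today().year
    return [
        "1234567", "123456789", "1234567A",
        "Password", "Password01", f"Password{year}",
    ]


def spray(emails, passwords, try_login):
    """Return (email, password) pairs that authenticate.

    try_login is a caller-supplied function that attempts a login
    against the target (hypothetical here) and returns True on success.
    """
    hits = []
    for email in emails:
        for pw in passwords:
            if try_login(email, pw):
                hits.append((email, pw))
                break  # one hit per account is enough
    return hits
```

Defensively, the same list works in reverse: auditing your own user base against it (or blocking these values at registration) removes exactly the accounts this technique finds.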
Fixing issues from the root cause
Ben A.: So we have a lot of security engineers listening. Can you tell me how they can make your life harder?
Ben S.: Focus on input validation. I think 9 out of 10 times, it's input validation on user input that causes a lot of issues. Forgetting those colons, semicolons, quotes, single quotes — those are always the biggest ones. Don't just filter things. Actually fix it instead of filtering. Don't try to find a way to delete whatever I'm giving you because it matches a pattern. Go look at the root cause of it. Spend some extra days to fix it at the root cause instead of just filtering. Quit the web app firewall crap. It can only go so far; people find bypasses all the time. The other thing is I look for a lot of server-side stuff in two ways. One is SSRF — server-side request forgery — which pretty much lets you make a web request, or use other protocols, to fetch internal or external resources and abuse that. Make sure you're gating your applications. If an application doesn't need to talk to your internal network, don't let it talk to your internal network. If it only needs to talk to a database and a few other services, make sure it's restricted to that. Then if it does get owned, the damage doesn't extend to other parts of your network or infrastructure, right?
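One common application-level gate for the SSRF case Ben describes is to refuse user-supplied fetch URLs that resolve to private, loopback, or link-local addresses (such as the 169.254.169.254 cloud metadata IP). A minimal sketch, with function names of my own — note this check alone doesn't stop DNS rebinding, so network-level egress restrictions as Ben suggests are still the stronger fix:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def resolves_internal(url):
    """True if any address the URL's host resolves to is private,
    loopback, or link-local -- i.e. a fetch we should refuse."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable: treat as unsafe
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False
```

A fetcher would call this before issuing the request and, ideally, re-validate the IP at connect time rather than trusting an earlier DNS answer.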
Ben S.: The other one is like — this is something that I'm actually talking about at DEFCON's Recon Village as a part of my keynote — is your assets, managing your assets. Making sure you have a process created for when you spin up a new subdomain. What happens after you take that subdomain down? Did you delete the record, or did you just take down the AWS EC2 instance? Those are things that could make a huge difference. There's also the “what are you exposing”? If it's supposed to be a corporate website that's supposed to be internally facing, make sure it's not directly accessible by an IP address because that's also a thing. The domain I type in, Jenkins-corp.site.com doesn't load. I hit it by the IP address, it loads. Those are the things that you want to make sure your assets are really gated. If it's not supposed to be externally accessible, make sure it's not accessible at all.
Ben A.: So kind of a good place to start would be get your DNS records and just do an audit of everything that's showing up.
Ben S.: Good inventory of what you have, making sure you know what's supposed to be internal, what's supposed to be externally facing, what's supposed to be accessible by users, what's supposed to be only accessible by your VPN, all that stuff. It's just asset management. Asset management is a big part. A lot of companies ask me, like, "How did you even find this asset?" Like, "You didn't know this existed?"
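The audit Ben A. suggests — pull your DNS records and check them against what you think you own — can start as a simple diff. This sketch and its record format are illustrative, not a specific tool from the interview:

```python
def audit(dns_records, inventory):
    """Diff exported DNS records against the asset inventory.

    dns_records: {name: target} as exported from the DNS provider.
    inventory:   {name: metadata} of assets the org knows it owns.
    Returns names that are unknown (shadow IT: in DNS but claimed by
    nobody) and names that are stale (in the inventory but whose DNS
    record is gone -- or was never cleaned up on the other side).
    """
    dns_names = set(dns_records)
    known = set(inventory)
    return {
        "unknown": sorted(dns_names - known),  # nobody claims these
        "stale": sorted(known - dns_names),    # inventory out of date
    }
```

Every "unknown" entry is a candidate for the "how did you even find this asset?" conversation; every "stale" one is a candidate dangling record of the kind Ben warns about when an EC2 instance is torn down but its DNS entry is not.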
Ben A.: Yes. This is in the world of, like, sort of shadow IT people that are creating domains and apps without the rest of IT knowing.
Ben S.: I just put up this dev site; the next day I get fired, or the next day I quit. Does anybody else know that I spun this up? Right?
Advice for those not yet ready for a public bug bounty program
Ben A.: So for people who aren't quite ready for a bug bounty program yet, what advice would you give them?
Ben S.: If you're not ready for it just yet, maybe try an internal one. If you have enough engineers on your team, do a bash internally. Say, "Hey, come find vulnerabilities with our red team or security teams." Whichever team finds the most gets a voucher, a day off, whatever that incentive is. Pay your internal employees if you want to. But work with your teams internally. Obviously, do your pen tests. Get some decent pen tests done on your apps — you've got to do them for compliance anyway. But start with the basics: get some scans done, run Burp Suite on your own. Make sure the low-hanging fruit is handled before you jump into a bug bounty program. And if you really can't afford to pay for it, I hate to say it, but I'd rather see a vulnerability disclosure program than none at all. Create a way for hackers who accidentally find a vulnerability in your platform to report it. A lot of times when I order stuff — I could buy a mattress, I could buy a chair — I wonder, if I change that ID at the top of the URL from 12345 to 12346, is it going to give me someone else's invoice? That's the first thing I've always wondered, and a lot of hackers check for it. If I find out that everyone's information is being leaked through their invoice number, don't make me go on LinkedIn and look for security engineers that work for you. Just have a security page that says, "Hey, email us at [email protected], and we'll get back to you if you find something vulnerable on our applications." Because at this point, it's not only affecting me but also thousands of other users that put their phone numbers, their address, their email address, their PII on your website. You want to make sure you hear from those people that have accidentally found the vulnerability. I'm not promoting going and looking for these kinds of things, but at this point, it's personal. 
My information is on your website. What are you doing to protect me as a consumer?
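The invoice-ID example above is a classic insecure direct object reference (IDOR); the server-side fix is an ownership check on every object lookup rather than trusting the sequential ID in the URL. A minimal sketch, with a data model invented for illustration:

```python
class Forbidden(Exception):
    """Raised when a lookup fails or the requester doesn't own the object."""


# Toy store: invoice id -> record, standing in for a real database.
INVOICES = {
    12345: {"owner": "alice", "total": 499.00},
    12346: {"owner": "bob", "total": 129.00},
}


def get_invoice(user, invoice_id):
    """Look up an invoice, refusing access unless the requester owns it.

    Without this check, changing 12345 to 12346 in the URL would leak
    another customer's invoice -- exactly the scenario described above.
    """
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != user:
        # Same error for "missing" and "not yours", so attackers can't
        # use the response to enumerate which IDs exist.
        raise Forbidden("invoice not found")
    return invoice
```

The design choice worth noting is returning an identical error for nonexistent and unauthorized IDs, which stops the incrementing-ID probe from even confirming that a neighboring invoice exists.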
Using bug hunting to open new doors
Ben A.: And lastly, I know bug hunting has opened lots of opportunities for you. Can you tell me about your story and how other people can use bug hunting to open new doors?
Ben S.: One thing I always want to end with in bug bounty interviews — in anything I do with regard to bug bounty — is: don't just think about it as a way to make quick money. It's also a very good tool for education. Whether you're a student or a security engineer, or you want to make a change — you could be in QA, IT, helpdesk — you have technical knowledge but want to take that leap to the next point, bug bounties are a really, really good way to get hands-on experience that doesn't cost you anything but your time. You're not spending a dime on a certificate. You don't have to submit your resume to us. You just sign up on the platform, whether it's us at HackerOne or other bug bounty platforms. You don't need to do a single thing to be involved. And you can hack on these applications to gain that experience — whether you want to become a pentester, red teamer, or security engineer — and learn how to do these things. It's a very good place to learn. And for security engineers, it's a great place to understand the hacker mentality, because once you engage with other programs, you see how they run their programs, how they triage their vulnerabilities, what kinds of vulns you can find. You understand the ecosystem better, and you can go back and improve those processes within your own bug bounty program as well. I just want people to understand that it's not always about making money. It's done a lot more for me than that. A lot of people underestimate what you can learn from participating in a bug bounty program and taking it back to your organization, your day-to-day job, or just yourself, for personal growth.
Ben A.: And I think that kind of goes back to your point of pattern matching. You sort of can see how companies of the same size likely have the same issues that you have, but they just may not have been found yet.
Ben S.: A lot of the good teams I've worked with have either hired people directly from the community — from the top hackers — or hired security engineers who also moonlight for fun after work, hacking on bug bounty programs. And they've had the best bug bounty programs I've ever worked with. A big example is Peter Yaworski, a good friend of mine. He's the author of Real World Bug Hunting, a book on bug bounty hunting. He works at Shopify now, and that program has been one of the best I've seen: the best-paying, the fastest triage, the experience that comes with it, the rules of engagement. It's because they hired somebody from the community, and he's brought that hacker perspective into the culture of their bug bounty program — they have someone representing the hackers. And it doesn't always have to be directly hiring from the community. If you go and participate as a security engineer, you understand how things work as a hacker, so when hackers get upset, you can relate to it and not just go, "You're being ungrateful," or "They don't understand it's not worth more," or whatever the reasoning is. You have done it. You've been in their shoes. You understand it. You have more empathy for them.
Ben A.: Well, thanks, Ben, for your time today. Do you have any last closing thoughts?
Ben S.: I would say hack the world, but no, I don't have anything else. Thank you so much for having me. And I'm on all social media platforms. So if anyone ever wants to ask me a question about bug hunting, bug bounty programs, you're always welcome to find me on Twitter, Instagram, whatever social media platform you're on.
Ben A.: Cool. Great. I'll put your information in the show notes below.
Ben S.: Awesome. Thank you so much for having me. This was a blast.
Ben A.: [music] This podcast was created by Teleport. Teleport allows engineers and security professionals to unify access to SSH servers, Kubernetes clusters, web applications, and databases across all environments. To learn more, visit us at Goteleport.com.