November 18, 2019 • 47 minutes
Jeremy chats with Ory Segal about the differences between traditional and serverless security, the importance of the CSA's 12 Most Critical Risks for Serverless Applications, and what the future of serverless security looks like.
Jeremy: Hi everyone. I'm Jeremy Daly and you're listening to Serverless Chats. This week I'm chatting with Ory Segal. Hi, Ory, thanks for joining me.
Ory: My pleasure.
Jeremy: So you are a senior distinguished research engineer at Palo Alto Networks. So, why don't you tell the listeners a bit about your background and what you're doing at Palo Alto Networks?
Ory: Sure. First of all, congratulations for managing to actually say it, it's a mouthful. So yeah, actually I got this title after PureSec, the company that I co-founded and was the CTO of, got acquired in June of 2019 by Palo Alto Networks. So, as I said, I used to be the CTO and co-founder of PureSec, a small vendor, actually the first vendor to offer a serverless security platform. And my current role at Palo Alto is mainly to oversee the research for the security algorithms and the product features for serverless security within the Prisma brand, which is the cloud security brand in Palo Alto.
Jeremy: Awesome. All right, so I want to talk to you about what you've been working on for, I don't know how many years now it seems like: serverless application security. And I want to start by discussing what's different about traditional security and why serverless security is a bit different.
Ory: First of all, I think it's important to get some background. I've been doing application security for, I guess, over 20 years, since the end of the 90s. Starting with Sanctum, which was the first company that built the world's first web application firewall, and later on AppScan, which was the first DAST scanner, and which was later acquired by IBM. And after doing that for a while, I worked at Akamai for about five years leading the threat research for the Kona cloud security products. And at some point, somebody approached me and started talking to me about serverless security. I can already tell you, that was one of the other co-founders. And the story or the technology behind serverless sounded very interesting, both from an innovation aspect but also from security. Everything I knew about application security seemed, at least from a protections perspective, to be sort of irrelevant, or not exactly fit the serverless model.
So obviously, and we'll talk about that later, you still need to do input validation, business logic enforcement and all of those things, but the form factor and the way you deploy serverless applications made it very challenging, to the point that it was mind-boggling and interested me very much, and I started thinking about, okay, how can we apply runtime protection to serverless applications? And that, I guess, got me interested, and eventually I left Akamai to join PureSec.
So, back to your question, serverless security, and we should actually refer to this as serverless application security, is indeed application security. The same old application security that we know and some of us love from other places like mobile and web apps. So input validation, and configuring the platform, and hardening, and all of those things, but it has some twists, some very interesting twists that you definitely have to keep in mind when you're building those applications. It's a different way of performing threat modeling, and different methods of input validation that you need to think about, where inputs are coming from.
Obviously, configuring the platform is very different, we're talking about cloud native environments, usually public cloud. And again, we'll get back to that a bit later. So that twist is what I think makes it more interesting and obviously more challenging. That's a high level overview.
Jeremy: So let's get into a little bit more of the details there. So I think one of the things that changes quite dramatically, and I know you've written about this, is that shared responsibility model that the cloud gives us, right. So what changes with that shared responsibility model?
Ory: That's actually one of the topics that I really love talking about, and I enjoy discussing this offline, not always in conferences, because this is something that I usually bring up when I talk about serverless security. So in every public cloud scenario, there's a shared responsibility model between the customer and/or the app owner and the cloud provider. And there's a line at some point, and really where that line is drawn depends on the type of cloud model or public cloud model that you're using. So if we think about infrastructure as a service, the cloud provider is responsible for the physical infrastructure, but anything above that is your responsibility. So the VM, the host, the hardening of the operating system, and the users and everything, that's your responsibility, not the cloud provider's.
And in serverless that line reaches new heights, which is something very interesting, because for the first time you're really not responsible for the majority of security requirements or demands. If you look at PCI compliance requirements and you compare, and I have an article about that as well, between infrastructure as a service and functions or serverless, you see that your role is reduced, or your responsibility is reduced, to even less than half. Which brings me to the next point, that, theoretically speaking, serverless applications are actually a terrific enabler for application security. It takes away a lot of the things that we usually miss or we usually screw up. Like patching, which we all know is a very tedious task that you have to constantly be on top of.
So in serverless your starting point, from a security perspective, is actually much better. Somebody else is responsible for almost everything except for the application itself, which is, I think, what I was hoping the future of application security would look like: all those things, patching, and OS updates, and physical infrastructure, taken care of by somebody else, leaving you to deal with the things you actually understand, which is your core business and the business logic that you own.
Ory: I recently heard a very cool analogy about serverless. Somebody was comparing it to the transportation or automobile industry, where, when you own servers, it's basically like owning your own car. Then infrastructure as a service is more like renting or leasing a car, and then serverless is more like Uber, where you just drive the car when you need it, or actually you don't drive, somebody drives you to where you need to go, and that's your only responsibility basically. And from a security perspective, I think that's brilliant if you think about it. Do application security and leave the rest to somebody else. You just use the infrastructure and then dump it, which is very cool.
Jeremy: It's the pets versus cattle analogy as well.
Jeremy: Yeah. So I think that's a super important point, right, this idea of having to worry less about security, getting all of this perimeter security out of the box with the cloud providers, which gives you a greater security posture right off the bat, which is awesome. But then you can go even further, right, because you can take IAM roles and you can assign those to individual functions, which gives you really, really fine-grained security.
Ory: Yeah. And I think the IAM topic, which by the way used to be mostly relevant for AWS until recently, but other cloud providers are now closing the gaps there, is a very interesting topic, because for the first time, as you mentioned, you can get very granular with access controls, to the point where, and that's almost unheard of in the world of serverful or, I don't know how you want to call those, traditional applications, you can dictate that a specific function can only do a very specific action on, let's say, a database table. So think about allowing a function to only read. You could never do that before, and you can see that when somebody used to hack into a system and take over an application, usually it would end in game over. A remote code execution or SQL injection, that's pretty much game over, because you can elevate privileges very quickly and do lateral movement inside the network.
In serverless, if you do IAM, like identity and access controls, properly, you can get to a point where somebody completely exploits a single function, but is left unable to do pretty much anything other than what that function is allowed to do. So think about a function that, let's say, registers a user, it's like a user creation function. The only thing that attacker will be able to do is create users, which is, okay, it's not good, but it's not the worst case that you can think of. That wasn't the case before. If you had a SQL injection or some kind of injection in an app, you would probably dump the entire database in seconds, and that's something that, if you do IAM properly, you are reducing the blast radius. And that could be, by the way, very frustrating from an attacker's perspective. I'm pretty sure you remember the article about Lambda Shell, and if not, go check it out.
So from an attacker's perspective, that could actually be the thing that will block any lateral movement further on. So, again, kudos to whoever thought about that very, very granular IAM model, and about using it, obviously, in cloud native environments.
Jeremy: Yeah, because I think you make an awesome point about that, where in the traditional serverful application, where you've got some application spread across servers, the entire code base basically has access to your entire database, right. So dropping tables or deleting rows, and not that you can just issue a standard delete or drop table or something like that in a DynamoDB environment, but the fact that you can just say, look, you can create as many users as you want to, but you can't access the table, you can't select records from that table. And if you really want to get fine-grained with SQL as well in serverless, and I think not enough people take advantage of this, but if you have Lambda calling SQL, you can create separate users that have limited access as well. So Lambda A has access to maybe the write role, and Lambda B has access to the read role, and then Lambda C has access to just delete. So there are certainly ways that you can add that granularity, even if it goes somewhat beyond just the IAM configurations. So, very cool stuff.
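To make the blast radius point concrete, here is a minimal sketch, expressed as a plain Python dict, of what a least-privilege IAM policy for the user creation function described above might look like. The account ID, region, table name, and the audit helper are all illustrative, not an official AWS API; only the policy document format itself follows the real IAM JSON structure.

```python
# A hypothetical least-privilege policy for a "create user" function:
# it may only put new items into one DynamoDB table. The account ID,
# region, and table name are made-up examples.
CREATE_USER_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutItemOnly",
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],  # no GetItem, Scan, or DeleteItem
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        }
    ],
}


def allowed_actions(policy):
    """Collect every action the policy allows, as a quick audit helper."""
    actions = set()
    for statement in policy["Statement"]:
        if statement["Effect"] == "Allow":
            actions.update(statement["Action"])
    return actions
```

An attacker who fully compromises this function can insert items into the users table and nothing else, which is exactly the reduced blast radius being described.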
Ory: Yeah. You know, I'll give you a real world example of the exact opposite. So my wife runs a blog, it's running on a WordPress deployment that she has somewhere at some hosting service provider. And I don't know, she's using some WordPress plugin which is obviously vulnerable, and every month or two we have to completely destroy the entire installation and install everything from scratch, because somebody manages to find some vulnerability inside that specific plugin, which was written very poorly, and take over the entire infrastructure. I'm not talking only about the WordPress installation, but also the OS. They're destroying everything, destroying files, rewriting database tables, and all of that because of a plugin. So, if that plugin had the minimal permissions it actually needs to run, which would probably be nothing, there's no way that would happen. So I think the new IAM model is a blessing.
Jeremy: Yeah. And we'll talk about insecure third party modules or dependencies in a minute. But before we move on to that, just quickly maybe: you've been doing this for the last 20-plus years. What are some of those security practices that we're missing the tools for in serverless now?
Ory: Okay. Where do we begin? Let's start with the simplest one, static analysis. I have yet to see an adequate static analysis solution that can handle serverless applications. There's just so much complexity in serverless applications. There's a lot of logic that's not inside your code, there's a lot of logic that spreads across functions, there's a lot of glue between functions when people use cloud services like message queues, and Kinesis, and things like that. Simply statically scanning the code of the functions, and maybe even the configuration, is not going to help you. It's going to be impossible to actually locate some data flow related issues like injection attacks and things like that. So static analysis is extremely limited today. These vendors are going to have to do a lot of research and improve the technology to support, I guess, cloud native environments. So that's one.
I'm not saying that they're completely useless. So, if you have vulnerabilities inside a specific function, then obviously if they support the runtime language, that should be okay. But more complex vulnerabilities, they're just not going to find.
Dynamic analysis tools. Again, if it's a web based application and serverless is just the backend, then you might get lucky, and you'll be able to instrument the APIs and then fuzz and send in attacks. I'm not sure regarding the automated validation, though. You don't always get a response directly. It could be something asynchronous, where you send an API call or some data and then it shows up somewhere entirely different, not in the HTTP response. And so being able to validate that the injection succeeded is going to be a problem. And obviously IAST, which is, I think, interactive or integrated application security testing, where you deploy an agent, and then you run tests, and then the agent hooks into the different sinks inside the application. I haven't seen anything being offered there yet by any vendor. So, security testing is currently a big drawback, or automated security testing, I should say, regardless of whether it's dynamic or static.
With regards to protection, I think I mentioned that in one of the ... I might not have blogged about it because it was after the acquisition already. But if you look at the range of inputs that serverless applications consume, and I'm basing this on, I think, a slide from Chris Munns from a while ago, or a tweet about a slide, then web triggers or API gateways are just some percentage, I think less than 20%. I have the number somewhere, I don't want to say the exact number. But not a lot of the triggers going to Lambda functions are actually coming from API Gateway, which means that applying a WAF, even if it's a cloud WAF, is not very efficient from a coverage perspective. You have a lot of functions triggering from asynchronous events, from message queues, from S3 buckets. Where would you place a web application firewall there? And even if you did, we're talking about non web traffic, so eventually it is RESTful API calls, but it's not the classic standard HTTP message parsing that WAFs are used to.
So, WAFs were irrelevant for serverless, or for large chunks of serverless applications, which is, by the way, one of the reasons why we came up with the serverless security platform at PureSec, given that a WAF is not giving you a lot of coverage.
What else? So host based solutions and network based solutions like IPS, IDS, host based intrusion detection, endpoint protection solutions, even outbound traffic inspection like web security gateways that can help you avoid server side request forgery and remote file inclusion attacks and things like that. Again, not relevant, there's no place for you to deploy them, which is a problem.
Jeremy: All right. Well, I'm glad I asked that question. I think the things that you mentioned bring up a really, really good point, and I've spoken about this before, and I've even been criticized in some way in the past for this idea of FUD, right, this fear, uncertainty, and doubt, especially when it comes to serverless security. And I have always been a practitioner of good security policies, or at least I like to think I have been, and I try to be hyper-vigilant, and maybe I'm a little paranoid, but I ran a web development company that hosted servers and things like that. And so I have a lot of scars to prove why some of these things are more important or need to be worried about. So, I want to talk about the CSA top 12 that you spearheaded and did quite a bit of work with. This is the list of the most critical risks for serverless applications. It was inspired by the OWASP Top 10, a lot of similarities there, although because serverless goes well beyond just web application security, there's a lot more happening behind the scenes, so this is a little bit of a broader list. And I know there's been criticism, again, because I've gotten some of it, that maybe this is instilling a lot of fear, saying, oh, now here are all these other things you have to worry about, and serverless isn't secure. And I don't know how many times I've said this, and I know you've said it: serverless, right out of the box, is probably more secure than any other application paradigm that exists. So maybe just give your thoughts on that. Why is this list important? I know we've seen a lot of things with misconfigurations lately causing lots of problems. But why is this list so important to you, or important to everybody?
Ory: Well, you brought up the topic of FUD, and I think it's worth spending maybe two minutes on that and my views of it. So, I think FUD is not necessarily something negative. It is used in a negative context, especially when people want to call out FUD in a very critical way. However, there are two types of FUD. The negative one, which I think is the one that people are usually referring to, is the vendor FUD, where you scare people that the end of the world is coming and you have to buy my product or else you're doomed. That's, I agree, not the best approach for security marketing, although many security vendors, by the way, take that approach. But here's the reason why I think FUD actually has some positive aspects to it.
To convince developers and stakeholders that they need to take security seriously, the toolbox that you have at your disposal is not very rich, I guess. You can say, I don't know, maybe people won't see you as a good developer, or you're a lousy product manager, or things like that, or you can threaten them that there's a liability issue here, and if somebody finds a vulnerability and exploits it, it pretty much means that they will lose their job and the company is going to suffer the consequences.
So, I think that telling or explaining what the worst case outcome of a security issue is, is not necessarily a bad thing. So, yes, sometimes fear, I'm not sure about uncertainty and doubt, but instilling fear in people, that they need to think about security and it's critical, and not because ... I can't even find positive things to say about why you need security. You need security because-
Jeremy: You need security.
Ory: Yeah, because people will hack, and exploit, and steal, and exfiltrate your data, and you need to worry about that. It's a risk, you need to fear the risk. So I think it's not all negative. And by the way, this might be a scoop, that a security person is actually admitting that FUD is not necessarily bad, and I'm accepting the fact that I'm sometimes spreading fear. I think, especially in presentations, when you give talks at conferences, showing sexy attacks and spreading some fear generates more interest than when you speak in a very monotonous way and talk about, I don't know, the fact that you need to apply strict IAM permissions. If you don't give the scary examples, people don't go away with anything. So, using drama is always, I think, good in this case.
You have to make sure you don't overdo it, of course, and that actually takes me to the top 10, and later on the top 12, that we published. You have to keep in mind that without the top 12 document around, developers and architects wouldn't have any materials about serverless security to learn from. And this document talks about potential risks that we prioritized based on what we've seen with customers and prospects. And we collected the data from other evangelists and industry experts. And prior to this effort specifically, if you take a look back two and a half years ago, before the original top 10 came out, the majority of materials, if you looked for serverless security, dealt with IAM permissions and third party library vulnerabilities. And there's a reason why that was the case: that's either what the cloud providers allowed you to control, or what the legacy security vendors had to offer.
But nobody touched on the actual risks that we're going to cover in a second, and so I think, at the end of the day, this document provides architects and developers with a good starting point, good reading material that they can rely on and turn to, to understand what they need to worry about. It's in no way very dramatic. It goes through the list, it offers you the right remediation depending on the cloud provider, it shows examples. I don't think at any point in the document there's anything scary or FUD-like in that negative sense.
Now, there were some people who mentioned that the majority of the document deals with the security of functions, that the document exclusively looks at what you can do to secure functions while neglecting other aspects of the cloud native environments in which those functions run. And that's, I think, simply incorrect. If you look at the list itself, it talks about authentication issues, and cloud configurations, and permissions, and monitoring, and how to handle application secrets, and pruning obsolete resources, and things like that. There's a lot more than just the security of the functions. Truth be told, there is an emphasis on functions, but if you think about it, in serverless, today's serverless at least, the point or the location in which a developer can actually control input and control business logic is the function. It's where your custom code lives. And so I think it's almost trivial to say that in serverless, at least in serverless that's function oriented or centric, application security is going to be mostly, or some of it will be, applied inside the functions. And so I think it's not entirely wrong to pay attention to functions.
In the future, where, I don't know, serverless platforms will not necessarily mandate you to write functions, like codeless or whatever, where you take some Lego pieces and glue them together. Even today, if you have an application that only uses API Gateway and then goes from there to some S3 bucket or some queue, and there's no function logic, then obviously the only type of application security you will be able to do is configuration based.
Jeremy: And I totally agree with you. I think that there are criticisms of some of these things. One of them is, well, a lot of this is just plain application security. Well, good, right? That's a good thing. You should know that. I mean, the first one we're going to talk about is basically based around SQL injection, or this idea of function event injection, and I wrote a post about this, where you can upload an S3 key that has SQL in it, right. And so if you aren't practicing good sanitization, or using the right way to parameterize your SQL, and you're using that key inside of a function that processes it, that has no WAF in front of it or whatever, these are just good security practices.
So I certainly agree with you that application security is really at the root of most of what this does. I also think there's a lot of overlap between what this points out and what also applies to other types of microservices architectures or event driven architectures, certainly. But that's not the point, right. The point of this document is to take these top 12 things so that when you're writing a serverless application, whether some of it spills over into configuration, some of it is more traditional application security, some of it is just good practice and maybe not specific to serverless, I do really like this document, because it is something you can put in front of a developer and just say, hey, be aware of these things, because they can cause problems down the road. But that's my opinion, so I certainly agree with you on that.
Ory: There's another comment, I think, that's important in this phase of the lifecycle of this document, and that is the question of how much of this is practical versus how much of this is theoretical? Because at the end of the day, we haven't seen a lot of attacks and a lot of vulnerabilities in that sense. So...
Jeremy: But how would you see those attacks?
Ory: Exactly. And actually, I'll get back to that point in a second. But you have to remember that since the early, I don't know, the dawn of this internet age, security researchers usually dealt with theoretical issues. If you look at some of the things that I was a part of the effort to discover, things like HTTP response splitting, and SQL injection, and XPath injection, and LDAP injection, and cross-site scripting, when we published those advisories 20 years ago, nobody was exploiting them. It was entirely theoretical. You could say we are to blame that people later on...
Jeremy: You made people aware of it, that was the problem.
Ory: Exactly. But you have to remember that as security practitioners, and specifically as researchers, we are trying to flag potential future risks. If we were to only look at what's being used and exploited today, we would always be chasing the attackers. So, I think it's very good that security experts and security researchers look for the next attacks in a new technology and find them before they're being exploited. And then you can teach developers how to avoid them and hopefully reduce the attack surface.
So I think it's not necessarily bad that we're pointing out things that haven't been exploited yet. And as you mentioned, regarding the evidence, there aren't web forums where attackers share war stories of how they hacked into a system. So obviously, attackers don't publish anything about their techniques. And especially if you talk to the application owners and companies, most of them also don't like to share information. In fact, usually when somebody gives a serverless security conference talk, at the end there's that five minutes that you save for questions. Usually, I know that nobody's going to actually ask a serious technical question, because they are embarrassed. It's something that you don't want to talk about around other people from maybe competing organizations.
So there's no resource to go and look at to see how people are exploiting things and what the vulnerabilities are. We collected information from customers and prospects, I've reviewed dozens if not hundreds of serverless apps at this point, and we collected this information to see which risks people repeat most often.
Jeremy: And I think the proactive versus reactive approach is exactly what security people should be doing, because, again, it's no fun to go and clean up security breaches, it's much easier to stop them right away. So anyways, all right, let's get into this top 12 list. There is an entire document on this, and I will put the link to it in the show notes, because it is certainly something people should go and download and take a look at. But just for the benefit of people listening, why don't we go through these, and you can give me a quick minute or so on each one, just to make people aware of them, and then, like I said, they can definitely dive into these in more detail. So the first one is this idea of function event data injection. What's that all about?
Ory: So here's one that's actually interesting when people ask about the difference between serverless and, I don't know, maybe web apps. In web applications this used to be called, originally, historically, parameter tampering; I think later on it was just called injection attacks. And that's simply when a malicious actor or a user can control some of the data fields that your application relies on, and manipulate those fields to inject some kind of attack payload. So think about SQL injection, cross-site scripting, path traversals, command injection, all those injection based attacks; function event data injection basically encompasses them. The main difference here is, I guess, the rich set of events that you can consume in a serverless function.
Again, if you talk to a web developer or an API developer, they know which fields they need to rigorously inspect. They know about parameters, body, and query. They know about headers and cookies, maybe the path of the URL; they're really used to all of that, and there are a lot of frameworks that help you actually validate that input. But when we're talking about serverless applications, I'm not sure everybody knows which fields they should be inspecting. So think about when you get an S3 bucket event. I'm not even sure you know all the fields that actually arrive in that event to the function; obviously it includes the name of the file that changed, but other fields as well. So which fields do I have to inspect? Obviously the ones I rely on, but the other fields, can an attacker even manipulate them? How is that going to affect my application? I don't necessarily know.
So the problem is the same problem. It's input validation. It has been input validation since we wrote COBOL applications for mainframes, and on mobile, on web apps, and now serverless. But what you inspect, how you inspect it, and what environments the value then goes to require some different attention than what we are used to.
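As a concrete sketch of this input validation point, consider a handler that receives an S3 style notification and records the object key in a relational database. The event shape follows the documented S3 notification format, but the handler and table names are made up for illustration, and SQLite stands in for whatever database the function would actually call:

```python
import sqlite3


def handle_s3_event(event, conn):
    """Record uploaded object keys, treating the key as untrusted input."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        # Parameterized query: the key travels as data, never spliced into
        # the SQL text, so a key that contains SQL is stored, not executed.
        conn.execute("INSERT INTO uploads (object_key) VALUES (?)", (key,))
    conn.commit()
```

With this, a file name like `photo.jpg'); DROP TABLE uploads;--` is simply stored as an odd key instead of destroying the table, which is the S3-key-with-SQL scenario Jeremy mentioned earlier.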
Jeremy: Yeah. And I don't think this is specific to serverless. Any application you're building now that is getting events, certainly from SNS, or SQS, or any of the other AWS services or other cloud provider services, I don't think this is specific to it, but certainly, yes, if somebody can dump poisonous data into one of these hoses, then your system does need to be responsible for parsing that. And I actually think this is one of the areas where the CNCF's CloudEvents project, which is working on standardizing these events and what these payloads might look like, could be really interesting, for a vendor or some open source effort to come in and build a WAF, to some degree, that could inspect these events.
But it goes back to trust too, right. I mean, that's the S3 example with the SQL in the key: which fields do you trust? So, I mean, you might say, I don't trust user input, but the name of the file might be one of those things you could overlook. So, certainly something to be aware of.
Ory: There are even more basic things that people haven't thought about yet, I think. How do you actually trust the event? How do you know that the event actually came from the source you think it did? I can easily spoof an S3 event, send it to the function to invoke it, and claim that I'm S3. Is there any way for the function to know that the event actually came from the service that claims to be that service? What about schema validation? You mentioned the CNCF events. The schema for these events is not even standardized within a single provider; if you look at AWS or Google, different types of events have completely different formats, with different fields. And schema validation, which used to be, if you look at XML security gateways, web service security gateways, schema validation is the bread and butter of how you protect APIs. But how do you do schema validation on something whose schema you can only guess, based on some examples you see in some documentation on the web?
So other than input validation there's also, as you mentioned, the issue of trust and well-formedness of the event itself, which is interesting.
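A minimal sketch of the kind of check Ory is describing. The field names below follow AWS's documented S3 notification event shape, but his point stands: since there is no signed, standardized schema to verify, this is best-effort structural validation, not proof of origin:

```python
REQUIRED_S3_FIELDS = ("eventSource", "eventName", "s3")

def looks_like_s3_event(event):
    """Best-effort structural check of an incoming event.

    This cannot prove the event really came from S3 -- any caller with
    invoke permission can hand-craft the same JSON -- it only rejects
    payloads that don't even match the expected shape.
    """
    records = event.get("Records")
    if not isinstance(records, list) or not records:
        return False
    for record in records:
        if record.get("eventSource") != "aws:s3":
            return False
        if any(field not in record for field in REQUIRED_S3_FIELDS):
            return False
    return True

spoofed = {"Records": [{"eventSource": "aws:s3"}]}  # claims to be S3, missing fields
print(looks_like_s3_event(spoofed))  # False
```

A stricter version would validate nested fields (bucket name, key encoding) with a JSON Schema library, but the trust gap Ory raises, that the invoker's identity is not cryptographically bound to the payload, remains.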
Jeremy: Yeah. All right, so number two, broken authentication. So again, this is something that can be a problem for anything, but how does this apply to serverless?
Ory: It's the same old broken authentication that we know from any other type of application, like you said. I think the main difference is, we as serverless practitioners try to preach for a reduction in, I guess, focus. Each function should have a very laser-focused task that it should be doing, at least in our [crosstalk 00:39:54]. Yeah, exactly. Exactly. The principle of single responsibility. You want a function to do one very specific thing. You don't want people to write monolithic functions. And so we are pushing people to break their application into dozens or more functions, and suddenly you have a lot of input vectors into the application, and you have to think about how you authenticate and authorize invokers.
So it's not a different problem; it just increases the scale of the problem. If you didn't do authentication well in a standard web app, and now you're breaking the web app into dozens of functions and each one can be invoked by God knows who, then the authentication issue becomes, I guess, more complex. It's not different, it's not worse, there's just more of it. [crosstalk 00:41:01]
Jeremy: Typically, right, you're putting some authentication logic in middleware or something that every part of your application has to pass through, but now you have the ability to write a function that has no authentication at all, and the authentication for that function comes from a higher-level service like API Gateway that is controlling that access. Which, again, is just different and just something to be aware of. All right, so then what about number three, insecure serverless deployment configurations?
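A hedged sketch of that pattern: when API Gateway with an authorizer fronts the function, the authorizer's output lands in the event's request context (the field paths below follow AWS's documented proxy-integration event format; the handler itself is hypothetical). The handler can still fail closed if that context is missing, for instance if the function is ever wired to an unauthenticated route or invoked directly:

```python
def handler(event, context=None):
    """Reject requests that arrive without an authorizer context."""
    claims = (
        event.get("requestContext", {})
             .get("authorizer", {})
             .get("claims")
    )
    # Fail closed: no claims means upstream authentication never happened.
    if not claims or "sub" not in claims:
        return {"statusCode": 401, "body": "unauthorized"}
    return {"statusCode": 200, "body": "hello " + claims["sub"]}

print(handler({"requestContext": {}})["statusCode"])  # 401
```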
Ory: Yeah, if you drop the "serverless" and maybe turn it into cloud and cloud native, I think it becomes more obvious. So, configuration: we hear about that a lot, and we see examples almost every day of people not doing their cloud configuration properly, and it's the same problem. In this case it's just that serverless is in the public cloud, and your applications use buckets and databases. I think I read a while ago that Adobe left some cloud database open and people exfiltrated data. So yeah, without configuring... This is not different than, okay, you could have misconfigured your Apache server and left directory [crosstalk 00:42:28].
Jeremy: You probably didn't misconfigure your Apache or [crosstalk 00:42:32].
Ory: Exactly. And your PHP file included some nasty configurations. That's not something new, but you have to remember that, at least in serverless, with the lack of other types of runtime protections and firewalls, the cloud configuration and IAM are your new perimeter. So this is your way of securing your account and your application, and that is probably one of the most important things you have to make sure you cover. And there's a good reason why cloud security posture management vendors are very successful; everybody understands there's a need for that. You need to be aware of all your cloud assets: where they are deployed, how they are deployed, whether they have configuration issues and improper permissions. Those cloud security posture management tools are going to be the new command and control for CISOs.
Jeremy: Yeah, and I think this is one of the points here that goes well beyond the idea of serverless being only functions, right? It doesn't matter what application you're building that's using these services. It's just that when you're building a serverless application and you're using Lambda as the glue, or using API Gateway to do some of these serverless integrations, the configuration of these managed services is very important. I mean, we've seen people leave Elasticsearch wide open, right, and be able to query Elasticsearch just knowing the domain, and obviously the S3 bucket stuff in the Capital One breach. There are many issues that can happen when you don't configure these things correctly; it's a broader topic, well beyond just the serverless aspect. But bringing it back to serverless... Sorry, go ahead.
Ory: No, I wanted to say, and I think you're just about to take it back to serverless and functions: functions also have configurations that are very important and have security consequences. Even the timeout settings. Because serverless applications are based on functions which auto-scale and support a lot of concurrent executions, people tend to think that these applications are automatically resilient to denial of service attacks. But, and we've proven that's not the case, you really have to tweak and configure your functions properly, the memory, the timeouts, the dead-letter queues and so on, if you want to actually enjoy these benefits. So it's not automatically secure.
Jeremy: Right. And a lot of that configuration actually falls back on the developer, who is probably not used to doing those things.
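As one illustration, in the Serverless Framework's YAML those settings Ory mentions are all explicit, per-function choices rather than things the platform tunes for you (the function name and ARN below are hypothetical):

```yaml
functions:
  processUpload:
    handler: handler.main
    timeout: 10              # seconds; tighten from the default if calls should be short
    memorySize: 256          # MB; also scales allocated CPU
    reservedConcurrency: 20  # cap concurrent executions so a flood can't exhaust the account
    onError: arn:aws:sns:us-east-1:123456789012:upload-dlq  # dead-letter target for failed async invocations
```

Leaving these at their defaults is exactly the "automatically resilient" assumption Ory warns against: an unbounded-concurrency function with a long timeout is an easy denial-of-wallet target.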
Jeremy: All right. So let's move on to number four, over-privileged function permissions and roles. This is one of my favorites because I feel like this is something people do wrong all the time, because it's just easy to put a star permission.
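A minimal sketch of the kind of check a posture tool or a CI step might run against a function's IAM policy: flag `Allow` statements that grant wildcard actions or resources (the policy document here is hypothetical):

```python
def find_wildcards(policy):
    """Return Allow statements granting '*' actions or '*' resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a bare string or a list for both fields; normalize.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

over_privileged = {
    "Statement": [
        {"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"},          # too broad
        {"Effect": "Allow", "Action": "dynamodb:GetItem",
         "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"},  # scoped
    ]
}
print(len(find_wildcards(over_privileged)))  # 1
```

The second statement shows the alternative to the star permission Jeremy mentions: name the exact action and the exact resource ARN the function needs, and nothing more.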
ON THE NEXT EPISODE, I CONTINUE MY CHAT WITH ORY SEGAL...