Episode #139: Tactical Serverless with Lee Gilmore

May 30, 2022 • 44 minutes

On this episode, Jeremy and Rebecca chat with Lee Gilmore about the complexities of productionizing serverless applications, what is Serverless Tactical DD(R), why serverless threat modeling is so important, how to think about your architecture layers, and so much more.

About Lee James Gilmore

Lee is a mentor, blogger, and cloud architect passionate about resolving complex problems with simple solutions, with a key focus on serverless technologies on AWS. He's currently a Global Serverless Architect at City Electrical Factors. Before that he worked as a Principal Developer / AWS Architect at AO across the five CeX (Customer Experience) teams: Customer Interactions, ChatBots, Order Management, My Account, and Agent Experience. He also previously worked as a Technical Cloud Architect / Technical Lead on cloud native projects @ Sage PLC, after transitioning from Principal Software Developer, and has over 17 years of professional experience in the industry.

He was a member of the extended leadership team at Sage within product delivery, with a keen interest in innovation, serverless architectures, and technology, and has historically held long-term senior technology positions in two separate FTSE 100 companies, as well as running his own start-up, writing articles for ‘The Startup’ (which has 680K followers), and mentoring in his free time. He's also 6x AWS Certified.

Transcript

Jeremy: Hi everyone I'm Jeremy Daly.

Rebecca: And I'm Rebecca Marshburn.

Jeremy: And this is Serverless Chats. Hey, Rebecca!

Rebecca: Hey Jeremy! It's good to see you again. I feel like I just saw you yesterday.

Jeremy: I think it was just yesterday that you saw me, yes.

Rebecca: Oh wow! Well, that's great and it feels quite nice actually to be less than 24 hours later and here we are again. I don't know if we have anything to talk about that's new? Do you have anything to talk about? Or should we just dive in?

Jeremy: Yeah we might just need to dive in. And I think we should dive in because we have an extremely knowledgeable guest today who is writing a lot of content, sharing a lot of content. But what I really like about this guest's content is it's not just 'okay, here's a tutorial on, I don't know, how you set up streams on DynamoDB.' Not that those tutorials aren't super helpful and needed, but I think that with this guest, the documentation and the blogs that he writes are very much expanding the knowledge of where serverless fits into the larger architectural patterns that we're already using in organizations, and how you can bring that serverless thinking and that serverless mindset into those organizations and apply them to very, very common patterns. So, I don't know if you'd like to introduce him?

Rebecca: I would love to. And I'm also really excited to talk about that content because this person is known for, I would say, 'multi-parters' of content. Our guest today is Global Serverless Architect at City, Lee Gilmore. Hey, Lee. Thank you for joining us!

Lee: Hey, it's great to be here. Thanks for the invite.

Rebecca: Yeah. Will you tell the audience a bit about your role and what you do at City to kick us off?

Lee: Yeah. Sounds good. So yeah, I'm Lee Gilmore. I'm a Global Serverless Architect at City, so that's based across [inaudible] and the US. And what I'm currently doing is working with the teams on a big serverless transformation at an enterprise level. So yeah, very busy at the moment. Lots going on. But it's very enjoyable.

Jeremy: Yeah, so I think it's really interesting to start seeing titles, too. Like 'Global Serverless Architect.' I think we just started getting titles like 'cloud architect' and some of these other things, but there is a very different mindset when it comes to serverless applications. And I think building cloud applications, again, whether you're doing Kubernetes or doing any of these things, there's a lot of complexity that is there. And one of the things that we talk about a lot on this show, and I think we've had Ant Stanley talk about this before as well, is it is pretty easy to grab a template for the CDK or grab a 'hello world' from Serverless Framework or from SAM or whatever. And get that very simple 'hello world' up and running. And then even beyond that, a little bit of research, a little bit of time watching some videos, maybe reading some of your blog posts and so forth. People can get to that next step and they can maybe even get to the MVP. Maybe they're even using Amplify or something like that to get to it. But what I always find is that you get this really nice proof of concept, things are running, and then all of a sudden it's like 'boom,' you hit a wall because there's just so much that goes into that next step, which is really getting it to that sort of, you know, enterprise-ready or production level. So, I'd love to get your thoughts a little bit on that. You write a lot about this. But what are your thoughts on the complexity of productionizing the serverless app versus, you know, getting that original MVP up and running?

Lee: Yeah, so this is where I'm quite big on what I call the serverless Dunning-Kruger effect. And I think it's so easy for teams to spin up a solid stack. So you know, a React app or something in S3. An API layer with AppSync or API Gateway, Lambda, and something like DynamoDB. And within an hour you could quickly get this put together, [inaudible] deploy. You know, it's in the cloud. And it's like, yes, you know- serverless is so easy. It's not, I think it's very hard. I think, you know, you're plugging together a lot of services. Each of those has its own complexities, they have their own config. So I think it's very easy to fall into the trap of thinking serverless is very, very easy, and that's where, at a certain, you know, at an enterprise level, I typically look at kind of three main areas to focus on as we kind of transition from traditional architecture to, you know, enterprise-level serverless solutions. So they would typically be: architectural layers- so, how do we build, in an enterprise, serverless applications. Tech radar- so, you know, get some governance around the technologies that we use, get some kind of alignment. And non-functional requirements- so an approach I call Tactical DD(R).
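
For readers following along, a minimal sketch of the kind of quick stack Lee is describing- an API in front of a Lambda function backed by DynamoDB- might look like the following, assuming the AWS CDK v2 in TypeScript. The construct names, handler path, and table design are illustrative, not anything from the episode:

```typescript
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

export class QuickStartStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // DynamoDB store for the prototype.
    const table = new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      removalPolicy: RemovalPolicy.DESTROY, // fine for a prototype, not for production
    });

    // Lambda handler; the code asset path is illustrative.
    const handler = new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'orders.handler',
      code: lambda.Code.fromAsset('src'),
      environment: { TABLE_NAME: table.tableName },
    });
    table.grantReadWriteData(handler);

    // REST API fronting the function.
    new apigw.LambdaRestApi(this, 'OrdersApi', { handler });
  }
}
```

Getting this deployed is the easy hour Lee mentions; everything discussed in the rest of the episode is about what this sketch deliberately leaves out.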

Rebecca: I would love it. When you first talked about what you were doing at City- I want to take a step back. You were talking about how you're leading some of the transformation towards serverless. And can you tell us a little bit about the backstory, right? Like, what was even the- it's not necessarily an easy decision to be like 'okay, we're just going to go serverless.' And so there's a lot of discussion, exploration, I'm sure debate, that goes into what that means and what it will take to actually- it's not easy, like you said- do that transformation and move in that direction. Can you tell us a bit about the pre-story, if you will, to get us to this place?

Lee: Yeah, sure. So the original architecture that we've got has allowed the company to be very successful. But that kind of monolith, you know, with all the technologies and an older database, that's not going to serve us maybe in the future as we scale out and kind of scale up. So, in that regard, I think from the top down it was very much: serverless is the present and future of most production workloads, and it's going to give us the benefits- quick agility and moving quickly, allowing us to prototype very quickly and then productionize that and kind of push changes regularly into production. So I think we're using serverless for all the right reasons. But again, around that, the teams have taken time to build out POCs and learn serverless, learn the complexities that you have. And we're at a stage now where parts of the business are now moving to the serverless approach. And I think this is where you need to take people on that journey. There was a tweet by Elon Musk that said, 'Prototypes are easy but production is hard.' And I think that's very much the case, personally.

Jeremy: That's definitely true. So you mentioned this idea of a journey- you have these people that are going through the journey, and basically the maturity of serverless, or the maturity of the adoption of serverless, really only happens within an organization by having people experience it, right? You can't just say 'we're going to adopt serverless tomorrow' and then know everything that you have to know, right? So you write about, and you mentioned this, this 'serverless tactical approach,' right? Or the serverless Tactical DD(R). And it very much complements this idea of the definition of ready and the definition of done that you write about. So I'm very curious about, I mean, I want to talk about this whole thing. And we can go wherever you want to go on this because it's just super interesting. And again, you have a whole blog post on this. I think you might have a series on this. But essentially the idea of, as an organization starts to adopt serverless, what are the things that they can do initially, and how does that progress? When do you say 'okay, this is a prototype that we can test or whatever'? When do you say 'this is a prototype we can actually put into production'? And, you know, increase that maturity over time. What does that process look like, and how do companies do that?

Lee: Yeah, so for our company, using a kind of Tactical DD(R) approach, it's essentially looking at guardrails. At the 'definition of ready' stage- so, as they pick up a ticket and they're looking at a feature- what kind of criteria, what things do we need to think about at that early stage before we start the work? And then as the team kind of progresses through, it comes to the kind of 'definition of done.' So what standards need to be in place before we actually push this to production? And, you know, the things that make up Tactical DD(R) are very much the things that I personally feel teams don't think about. So that covers a multitude of things. You've got things like threat modeling. A lot of teams don't think about threat modeling. I know people like Sheen Brisals have done a fantastic post on that, which really got me hooked into kind of threat modeling. And I think that just allows teams, you know, at the very start, drawing out the architecture, just having a look at it and kind of finding any holes in it, as a group. Really fun. Sticking the architecture diagram on a Miro board and quickly just adding on tickets, like traditional STRIDE modeling. You know, looking at [inaudible] and non-repudiation and those kinds of things. Even that, at that kind of early stage, allows you to kind of plug some of the gaps before it actually even starts- you know, before the teams do any work whatsoever. But it also covers things like authorization. So again, very easy to push something into production. But have the team thought about how do we authenticate the APIs? Authorization- who should actually have access to that? You know, I've seen in production people use API keys and usage plans as the only kind of way of tying down those API endpoints. And things like compliance- so a lot of teams don't think about PII, personally identifiable information. GDPR- that's a massive consideration when we're building out. And PCI DSS compliance. It covers things like testing. So do we have adequate testing? So end-to-end tests and unit tests and integration tests, and it might be using synthetic canaries and, you know, having things running in production, which is pretty cool. And then internationalization- so, you know, if we think about, especially for City, if we build a domain service we want to be able to lift that and put it in a different locale. So have you thought about currencies and dates and, you know, front-end internationalization? That kind of thing.

Jeremy: Nobody thinks about that though.

Lee: Until somebody actually says, 'right, we need to drop this in a completely different country or locale.' And then it's a ton of rework. So why not think about this at the very start? And I think that's where having this at the 'definition of ready' stage just means people think about this as they pick up a ticket. And things like caching- would this help, any kind of caching, with regards to costs or latency? Again, this is usually something that happens when teams are struggling with one of those things: cost or increased latency for the end user. And auditing- so again, this kind of feeds back into threat modeling a little bit. But can we tell who's doing what on the system? So, you know, have you got CloudTrail enabled? Have you got versioning on buckets and, you know, kind of access logging? Can we see who's actually accessing anything? If we've got customers that are busy interacting with a system, are we logging what they're actually doing, for, again, non-repudiation? So if they turn around and say 'no, I definitely didn't click on that button,' you can see that they actually did because you're logging it. That kind of thing. It also covers load testing. So people think with serverless, 'well, you don't have to load test it.' I think that's definitely not the case, because obviously your services might scale up very, very quickly, but downstream services might be affected by that. So how do you have buffering in there, with maybe SQS and any kind of batching, that kind of thing? Disaster recovery- most people don't think about this with production workloads. But, you know, with DynamoDB, have you got point-in-time recovery, for example? What's your RPO and RTO? These are things that, again, with serverless it's so quick to get things pushed out, but typically people don't think about. It also covers documentation. So have we got OpenAPI and Swagger docs? Are we using EventBridge schemas so we can document what an event looks like within a domain? And it also covers reporting. So, as soon as you push something into production, you know, management are going to be asking you 'how is it performing?' You know, how do you allow teams to be data driven? Very quick run-through of Tactical DD(R), but that's the approach. So far it's been useful for the teams I've worked with.
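
To ground a couple of those items, here is a hedged CDK (TypeScript) sketch of the auditing and disaster recovery guardrails Lee mentions: point-in-time recovery on a DynamoDB table, versioning and server access logging on an S3 bucket, and a CloudTrail trail. The construct names are illustrative and this is not City's actual setup:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudtrail from 'aws-cdk-lib/aws-cloudtrail';

export class GuardrailsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Disaster recovery: point-in-time recovery gives a rolling 35-day restore window,
    // which is the starting point for the RPO/RTO conversation.
    new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      pointInTimeRecovery: true,
    });

    // Auditing: versioned bucket with server access logs delivered to a separate log bucket.
    const accessLogs = new s3.Bucket(this, 'AccessLogsBucket');
    new s3.Bucket(this, 'DocumentsBucket', {
      versioned: true,
      serverAccessLogsBucket: accessLogs,
      encryption: s3.BucketEncryption.S3_MANAGED,
    });

    // Non-repudiation: record API activity in the account.
    new cloudtrail.Trail(this, 'AuditTrail');
  }
}
```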

Rebecca: So that is a lot of truths and ideas to hold in your head at the same time, right? Anything from PII and GDPR to caching to load testing to performance testing...

Jeremy: I can't even remember all the things he said. There are so many things!

Rebecca: I know, I'm trying to list it off. I almost started writing it down and I was like 'no, I'll be able to list it.' And then you kept going. And I was like 'oh my gosh, I should have started by writing this down!' But so that's so many things to hold at once. I'm wondering if- I imagine that you have a, you know, bullets/sub-bullets checklist, like 'have we done all these things?' If you have that, is it somewhere shareable that people can access that checklist? And, if not, what are some of the resources that helped you learn how to put this together, right? And how can teams start to think about their own checklist of what they need to consider if they're making a Tactical DD(R)?

Lee: Yeah so, I do have a [inaudible] which has an infographic with the acronym and a little bit of a bullet point list against each of those to kind of think about. But most of the teams that I've worked with have taken the items that have got 'definition of ready' against them and added those to tickets. So, you know, a standard template- it might be in Azure DevOps, so, you know, wherever you're picking up those tickets. And the same with 'definition of done.' You know, this isn't going to be applicable to really small changes. It may be a text change on the front end- well, obviously, these won't be applicable. But, you know, I think just having them there for maybe the feature level, even at an epic level when architects kind of start to have a look at this. It's worth just a quick glance through, and at least ensure you've covered the main areas that most people forget about, because I've done a couple of polls on LinkedIn and these are the things that just generally don't get thought about, in my experience.

Jeremy: Yeah, and I'm curious too. I mean again, you listed a lot of things. There's a lot, and I think Rebecca put it well when she said 'bullets and sub-bullets,' right? Cause there's a lot of little things. And as that checklist grows and there's more things that have to be done- and again, I get it, not everything has to apply every time you launch some small thing. But I'm wondering how specialized team members need to be; how much they have to know about a lot of these individual things? So software development life cycle and a lot of these broader ideas around 'definition of ready' and 'definition of done' and so forth. Like, these are things that I think teams eventually learn over time. But I'm curious about the specifics, right? And the specialization of, you know- security in the cloud is tough. And I want to talk about serverless threat modeling in a minute, but security is tough even if you have all kinds of tools in place to do it. There's still things you have to think about. App security- it's gotta be top of mind. You know, even just configuring resiliency. You talked about load balancing, or you talked about load testing, right? So load testing- that's one of those things where 'what happens when this Lambda function's concurrency is met?' Or 'what happens when events start dropping and they start sending to a queue?' 'What happens when a queue backs up and it can't process because, you know, everything starts flooding a dead letter queue or something like that?' Like, there's a lot to know in there beyond just going through and checking off the checklist. So how specialized do people have to be? Do they have to be specialized? Do people need to be generalists in this stuff? Or, as your teams grow, is that something that you really should think about as saying, like, we need people to specifically focus on these areas?

Lee: That's a great question. So I'd say, from my perspective, having sessions with the teams as an architect, or an architect working across multiple products or domains- I think working through something like Tactical DD(R) means teams do start to understand that. And when you do serverless threat modeling, for example, and we're talking about have you got versioning on that bucket, or is that API secured- I think just, almost through osmosis, teams start to kind of take this in, and next time you do a threat modeling session, those things have already been thought about in the design, like the initial design. But again, I think teams fall into the trap of thinking serverless is easy, and it definitely isn't. I mean, as you know, I think it's just a lot of education. You know, I do a lot of presentations at work and write a lot of blog articles and then sit with the teams and go through the 'whys.' For example, recently talking to the team about Lambda scaling up and talking to legacy databases, and talking about connection management- Lambda scaling up quickly, opening and closing connections, CPU going up on the DB, and memory. And these are things that, again, teams don't typically think about. So I don't know if I answered your question there, because it's very difficult.
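
As a rough illustration of that connection management point, the sketch below assumes a Node.js/TypeScript Lambda talking to a legacy Postgres database with the widely used 'pg' client: the pool is created outside the handler so warm invocations reuse it, and it's kept to a single connection so a Lambda scale-out doesn't open a flood of connections against the database. The table, query, and environment variable names are made up:

```typescript
import { Pool } from 'pg';

// Initialised once per execution environment, not once per invocation,
// so warm invocations reuse the same connection.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // one connection per concurrent Lambda environment
  idleTimeoutMillis: 10_000,
});

export const handler = async (event: { orderId: string }) => {
  const { rows } = await pool.query(
    'SELECT status FROM orders WHERE order_id = $1',
    [event.orderId],
  );
  return { orderId: event.orderId, status: rows[0]?.status ?? 'UNKNOWN' };
};
```

RDS Proxy, which Jeremy mentions later for Aurora and RDS, is another way to absorb the connection churn when the function scales out.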

Jeremy: Well, it might...

Lee: You know, it takes time.

Jeremy: It might be unanswerable, if that's a word. Yeah, no, I think that's right. It's one of those things where it happens over time. It's definitely education. That's the big piece of it. And piecemeal education is always tough too. So it's like, well, how do I do X? And then you go find a few blog posts, you read some documentation on it, and then you learn that and that's fine. But if that's not applied to the larger- or if it doesn't become part of that culture and part of the process- you know, then I can see those things quickly falling through the cracks.

Lee: Yeah, and I would say that, at an enterprise level, this is where building out reference architectures comes in. You know, teams shouldn't be reinventing the wheel continuously, and it falls into the architectural layers- having a kind of platform there at the bottom. Things like undifferentiated heavy lifting- so, you know, serverless teams shouldn't have to worry about VPCs or transit gateways and that kind of thing. All of that networking should be done, from like a developer experience perspective, allowing teams to kind of use those reference architectures from a serverless perspective, and, you know, bake in that kind of good practice and that security at an early stage.

Rebecca: Let's talk about threat modeling, because we've said it a few different times in there. It's almost like something that you want to be thinking about so it's baked into all of the ways that teams are going to be developing something. But specifically, let's talk about how and why you should threat model. I want to say it's almost self-explanatory, but let's talk about it. And then really, I think where Jeremy and I want to end up, around this idea, is that in the new era of serverless, right, you can quickly chain services together, you can make super complex architectures, you end up having to probably try to simplify them a bit into reference architectures that other teams can use. And with the ability to build on serverless, or to build quickly in any sense of the cloud, right, there's also the added downside of increasing the overall threat landscape. And you have the added layer that serverless is quote-unquote relatively new. And so there are these threats that maybe have not yet been discovered. Or maybe people are still getting educated around best practices, compared to more traditional architectures where the threats are known, per se. Or, you know, heavy air quotes on 'all known threats.' And so let's talk a little bit more about this threat modeling. First, can you give us a quick, like, 'how and why?' Like, what goes wrong when you don't? And then some of this downside of the threat landscape of serverless and how we should be thinking about it?

Lee: Yeah, so I think that the biggest benefit of doing threat modeling is quite often, what I see is teams missing certain things. You know, we talked about access logs, for example, on S3, or have you got CloudTrail enabled. And if teams don't think about this, as soon as there's a security incident, that's when you wish you had these things. Or it could be an API that's only tied down with an API key and not using Cognito. For example, what I typically do is just work with the teams on that initial architecture. It should be fun, it should just be 40 minutes on the [inaudible] Miro boards where we can all just add tickets on at the same time. Little Post-it notes. And nothing's too silly either. It could be things like [inaudible]. You know, what happens if somebody offboards and leaves the company but you haven't actually removed them from your [inaudible]? Can they still access internal services? Do they have access to things they shouldn't have access to, including data? So this is typically why I kind of work with the teams on looking at that.
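
On the point about APIs tied down with nothing but an API key: an API key and usage plan mainly give you metering and throttling, not authentication. A hedged CDK (TypeScript) sketch of putting a Cognito user pools authorizer in front of an endpoint might look like this; the user pool, function, and resource names are illustrative:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as cognito from 'aws-cdk-lib/aws-cognito';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class SecuredApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const ordersHandler = new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'orders.handler',
      code: lambda.Code.fromAsset('src'),
    });

    // Authentication: callers must present a valid token from this user pool.
    const userPool = new cognito.UserPool(this, 'CustomersUserPool');
    const authorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'ApiAuthorizer', {
      cognitoUserPools: [userPool],
    });

    const api = new apigw.RestApi(this, 'OrdersApi');
    api.root.addResource('orders').addMethod(
      'GET',
      new apigw.LambdaIntegration(ordersHandler),
      {
        authorizer,
        authorizationType: apigw.AuthorizationType.COGNITO,
      },
    );
  }
}
```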

Jeremy: Now I'm curious from your perspective, though. I mean, we talk a lot about the shared security model, and just the things that AWS does- there's a great- Ory Segal used to do this really, really well in all his presentations, where he would basically show: here's the shared security model for VMs and for containers, and then here's the shared security model for serverless. And there were so many more things that you didn't have to think about when you were building serverless. So things like even patching, like Spectre and whatever. That was taken care of for you. Before you even heard about it, your Lambda functions were already patched. And then of course, there's this ongoing thing where each service that you use- DynamoDB keeps getting better, right? It gets faster. It has more features. Things get added. Same thing with AppSync. More levers and switches show up in Lambda functions and the different event mappings and things like that. As that progresses, clearly there is a huge benefit- you get this huge weight taken off your shoulders with some of these risks. But some of these risks still exist. Just in terms of when you're doing threat modeling and when you're looking at this: what are the biggest ones? Like, what are the biggest things where you say, 'if I had to choose one thing that I could make sure was secure, or that I was really paying attention to with my serverless applications,' what would that be?

Lee: I'd probably say just ensuring that- and this sounds silly, but I've seen this so many times in production workloads- are APIs actually tied down? Because a lot of the time people just don't think about it. Or it could be that you're federated, your authentication, with AD. That's purely the authentication, but nothing around authorization. So it could be an internal application, and somebody in finance or somebody in the warehouse, because they've got an active login and they're part of the company, and because the authorization's not done, they have access. Or it could be the cleaner, it could be the CTO, it could be anybody- just because you actually have the authentication side of it. So again, I've seen that quite a few times, where the authorization aspect- you know, taking a principal ID from the token and then actually having some kind of lookup within the domain service- that's just not there. So definitely around authorization, I would say. And the same goes with people offboarding. You know, that's something that most teams, in my experience, forget about.
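
To make that authentication-versus-authorization split concrete, here is a hedged sketch of the kind of lookup Lee describes, assuming a TypeScript Lambda behind an API Gateway REST API with a Cognito authorizer: the gateway has already verified the token, and the handler still takes the principal ID from the claims and checks it against the domain's own permissions table before returning anything. The table name, key shape, and 'orders:read' role are all illustrative:

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  // Authentication already happened at the gateway; 'sub' identifies the caller.
  const principalId = event.requestContext.authorizer?.claims?.sub;
  if (!principalId) return { statusCode: 401, body: 'Unauthenticated' };

  // Authorization is a separate question: look the principal up in the domain.
  const { Item } = await ddb.send(
    new GetCommand({
      TableName: process.env.PERMISSIONS_TABLE,
      Key: { pk: `USER#${principalId}` },
    }),
  );

  if (!Item || !Item.roles?.includes('orders:read')) {
    return { statusCode: 403, body: 'Forbidden' };
  }

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```

Offboarding then becomes a data problem rather than an afterthought: remove the principal's row and the valid login no longer grants access.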

Jeremy: So beyond just serverless: locking down your APIs is just good practice, period. But in terms of specific serverless things- I mean, you get functions that are infinitely scalable, let's say. So we always hear about denial of wallet attacks and things like that, though we've never seen that actually happen. At least to my knowledge, I've never seen it. But you also have things like shadow APIs. Just things like that, where people keep publishing API gateways and publishing endpoints and then they don't tear down the stack. So you might have hundreds of APIs out there that could access internal data, could be running old code. Oftentimes I see that those shadow APIs run old shared code that wasn't patched. Because again, you make a change to shared code- most people aren't using layers and updating their layers every time there's a deployment. So you get a lot of this old stuff in there. I'm just curious- I mean, you know I'm only pushing you on this cause I know you know more here. And we want to extract this from our guests. But I mean, serverless specific: are there things specifically for serverless, beyond best practices around security in general, anything in serverless specifically that is something you think people might need to look out for?

Lee: So one thing you mentioned there was denial of wallet attacks. So, a lot of teams don't think about reserved concurrency on what could be just a health check endpoint, for example. Say somebody's got an active token and they hit that a hundred thousand times a second, something silly like that. That's obviously gonna use all of your Lambda concurrency within your account. That's then affecting production workloads. It's essentially throttling everything that you've got. So I don't see a lot of teams using reserved concurrency, for that reason. But definitely, if I've got an internal application used by a handful of people, I would definitely add reserved concurrency onto those endpoints, regardless of it being internal or external, I guess.
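
A hedged CDK (TypeScript) sketch of that reserved concurrency guardrail, with the function name and the limit being illustrative:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class InternalToolStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'HealthCheckHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'health.handler',
      code: lambda.Code.fromAsset('src'),
      // Only a handful of internal users: cap the function so a flood of calls
      // gets throttled here instead of draining the account's unreserved
      // concurrency and starving production workloads.
      reservedConcurrentExecutions: 5,
    });
  }
}
```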

Rebecca: So you're talking about, and I appreciate you Jeremy for having done this as well, where you're like 'Lee I know you know more.' Like the word 'extract' is a funny word to use but thank you, Lee, for letting Jeremy extract that from you.

Jeremy: We're in tech. It's not an interview. This is an interrogation.

Rebecca: Yeah. In terms of general security best practices, and then even more specific things you can do to and for when building with serverless. And then there's this other thing that you're passionate about, which is architecture layers. And I think that these two things are going to be interrelated, if the way that we think about architecture layers also needs to feed into, or needs to play nicely with, the way that we think about security and securing each of those layers. So will you dive into this a little bit? Because you actually explained it to Jeremy and me quite well before we were recording, and I don't want to take those words out of your mouth.

Lee: Okay, cool. Yeah, so this is something that I'm quite passionate about. And this all kind of stems from, again, going back to that serverless Dunning-Kruger effect. It's so easy for teams, within their kind of silos- and very much Conway's law- to just develop within their own, like I say, silo. And what happens off the back of that is things like business logic is not reusable, it's not in the right place. People haven't thought about cross-cutting layers, so cross-cutting concerns. Things like developer experience and kind of platforms. So it covers five main layers. And the first one's the experience layer. So that could be Alexa apps, it could be mobile apps, websites, anything like that. And these shouldn't really have any business logic within them. They should be backend-for-frontend APIs; very thin. The only logic that would be in there is kind of synonymous with what it is. So if it's an Alexa app, that might have code in there around doing that integration with Alexa. But that just means that domain logic, which is in a domain layer, is then reusable. So I typically go, with my teams, we talk about order tracking. So if that order tracking business logic was in the Alexa app, then the mobile app can't use it and the website can't use it. So having that domain logic within a well-encapsulated domain- and that's, you know, private, not accessible on layer seven, you can't curl it or use Postman. Well encapsulated, it's got a versioned API on there. It might be, for asynchronous calls, it might be using EventBridge. It's got, again, versioned event schemas. So that would be the domain layer within there. And then you've got the cross-cutting layer. So things like logging, observability- things that you don't want every single team to think about. And how would you do that at an enterprise level? I'm quite big on that as well. And that covers things like authentication as well. How are you doing authentication between your backend-for-frontend APIs and your domain APIs? So, you know, it could be a client credentials [inaudible] flow with OpenID Connect. Or, again, very secure- talking about security and, kind of, threat modeling. Then you've got the data layer. So, for me, one thing I'm very big on is using EventBridge as that enterprise service bus. And that then allows, obviously, your domains to communicate asynchronously based on events. But that could also include things like data lakes, [inaudible] reporting. And things, again, that you want to think about at that kind of enterprise level. I guess what you don't want is one team using MSK, one team using RabbitMQ, and no standardized way of actually communicating across domains. And if you've got 40 teams, you know, this is why you need that kind of governance, I feel, around that. And then the kind of bottom layer is the platform layer. So that's, like I said before, the undifferentiated heavy lifting. So we don't want every single team to think about VPCs and transit gateways, that kind of thing. How can teams very quickly spin that up, with a click of a button, if it's a brand new domain or product? So I think that's very important. But also developer experience. If we're talking about OpenID Connect, client credentials [inaudible] flow- how do you allow developers very quickly to say, you know, 'I've got this resource server, I've got a client, I want to give it these particular scopes'? You know, because for me, again, that authorization should be in a shared AWS account, like a shared tools account, almost. It should have an SLA around it and a team around it. And I guess these are the things that, thinking at that enterprise level, this is where you've kind of got the three things of: Tactical DD(R)- how are we building things. Architectural layers- how we build them in the right way. Then looking at the technologies that we use- so doing a tech radar, a concept created by a company called ThoughtWorks, which is very useful.
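
As a small illustration of that data layer idea, a domain might publish a versioned event onto a shared bus roughly like this, using the AWS SDK v3 EventBridge client. The bus name, source, detail type, and event envelope are illustrative, not a published standard:

```typescript
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

// Publish a versioned 'OrderCreated' event onto the shared enterprise bus.
export const publishOrderCreated = async (orderId: string, total: number) => {
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: 'enterprise-service-bus',
          Source: 'com.city.orders',
          DetailType: 'OrderCreated',
          Detail: JSON.stringify({
            metadata: { version: '1', occurredAt: new Date().toISOString() },
            data: { orderId, total },
          }),
        },
      ],
    }),
  );
};
```

The version in the event envelope is what lets consumers keep working while the schema evolves, which is the same reason Lee mentions versioned schemas in the EventBridge schema registry.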

Jeremy: Yeah, so I love breaking those layers down. And also thinking about where that ties into maybe domain-driven design for the different domains. We've had a lot of guests that have talked about this before, too. And just this idea of, like, common language and so forth. Like, order tracking for one domain might actually mean something different for another domain. So order tracking in the warehouse might be different than order tracking on the website, or an order ID might mean something different, or whatever. And having that language sort of encapsulated in that business logic, or in that domain logic, is super important. Also, you make a really, really good point: if you have an Alexa app that is allowing you to check your order status, and that has to do some sort of business logic in order to make that work, that's specific to a domain, then you're just rewriting that across all of your different experience layers. And so you don't have that reuse there. The interesting thing you mentioned about the data layer- so I think there's two ways to think of the data layer. Or maybe I'd just love to hear your thoughts on this. So within your domain logic, within an individual microservice, or individual domain, my assumption- I guess my contention would always be- that you want to have data specific to that domain in that specific microservice, or within that bounded context. But I think what you mean by the data layer is, sort of, the way that data is shared and aggregated across these different services.

Lee: Exactly. Like you say, you want each of your domains to have polyglot databases that suit the specific requirements of that domain. But I guess having that enterprise service [inaudible] in there allows you to listen to events and start building your own read stores. So again, keeping things loosely coupled. It could be, say, the order domain, and you're listening to stock events because you want to build up your own [inaudible] store within the orders domain, so you don't have that tight coupling with the stock domain. Probably a bad example, because you want that to be up to date, I guess. But that kind of premise- you know, that's what it's allowing you to do. And stitching all that data together as well for reporting [inaudible].
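
A hedged CDK (TypeScript) sketch of that read store idea: the orders domain keeps its own copy of the stock data it needs, populated by a rule on the shared bus rather than by calling the stock service directly. The bus, event, table, and function names are illustrative:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

export class OrdersReadStoreStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The orders domain's own copy of the stock data it cares about.
    const stockReadStore = new dynamodb.Table(this, 'StockReadStore', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    const projectionHandler = new lambda.Function(this, 'StockProjectionHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'stock-projection.handler',
      code: lambda.Code.fromAsset('src'),
      environment: { TABLE_NAME: stockReadStore.tableName },
    });
    stockReadStore.grantWriteData(projectionHandler);

    // Listen to the stock domain's events on the shared bus; no direct call
    // into the stock service, so the coupling stays loose.
    const bus = events.EventBus.fromEventBusName(this, 'EnterpriseBus', 'enterprise-service-bus');
    new events.Rule(this, 'OnStockLevelChanged', {
      eventBus: bus,
      eventPattern: { source: ['com.city.stock'], detailType: ['StockLevelChanged'] },
      targets: [new targets.LambdaFunction(projectionHandler)],
    });
  }
}
```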

Rebecca: So a pastime that people may or may not know about me by now is that I like to read our guests' recent tweets and then...

Jeremy: We call it "stalking" but that's alright.

Rebecca: No people just call that Twitter now.

Jeremy: That's right.

Rebecca: I'm sorry that I use Twitter as Twitter and I read recent tweets of guests. Sorry, not sorry. But I really liked what you had tweeted recently, May 13th 8:06 AM. No I'm kidding. I don't know actually. It is May 13th but I don't know what time it was. And you talked about your AWS wishlist. I wanted to bring this up here because I would love for you to talk through what inspired it and then talk us through what the wish is of this wishlist and a little bit more detail, I guess. So you say: 'My hashtag AWS wishlist- for enterprise organizations: a service that allows you to share your versioned open API definitions for API gateway and Eventbridge in one place tied down by OIDC, allowing internal domain platform teams to discover integration details as well as external customers.'

Lee: Yeah. So what this was about, basically, is within the enterprise- again, going back to that [inaudible] example- you know, making it easy for teams, from a developer experience point of view, to see what that up-to-date versioned schema looks like without actually reaching out to that team. So I guess a centralized service or place where you could go to list out your domains or your products, however your business is kind of set out. And that would allow you just to view any events from the EventBridge schema registry that are synonymous with that domain or platform. And the same with the APIs as well. Because again, for me, this should be very easy for teams to do and find- in the enterprise, obviously, with a lot of different APIs and a lot of different teams, that becomes a lot of communication, a lot of emails back and forward, a lot of Slack messages. So I think just an easy way of encapsulating that in one place, I guess. But there are other services that I would definitely love to see, as well.

Jeremy: Well, that's what I was going to ask you. Speaking of AWS wishlists- so we actually asked Werner Vogels this. We said 'what's missing?' What is the next step for serverless? What's the next major EventBridge or Step Functions or AppSync? Is there something major that we're missing? Or are we just at the point of incremental expansion or improvement?

Lee: Yeah, I think the biggest things that are missing, personally, would be serverless OpenSearch. I think that would be pretty cool. And serverless DocumentDB. So I've written quite a few blog posts around connection management with DocumentDB and Lambda scaling out, and different ways of doing that. But I think those two things would be kind of the cherry on top of the serverless services, personally.

Jeremy: I've been saying forever, like every re:Invent, I'm like 'alright, serverless Elasticsearch' or 'serverless search' or whatever. Like, that's going to happen this year, right? Yeah, that's tough. And you're right. The connection model still, with Lambda functions- I know RDS Proxy is a good solution when connecting to Aurora and RDS, but MongoDB and some of these other ones that also require connection management there, depending on how you connect to them- it would be helpful to have a really nice solution for those, as well.

Rebecca: I do love how wishlists often come out of, like wishlist is a nice term, but it usually comes out of some form of user pain right? Where you're like 'Ugh, I wish I could...' It's not like a wishlist is like, 'Man, I wish I could do this because I'm slightly suffering right now.'

Jeremy: My frustration list, maybe.

Rebecca: Yeah, my frustration list. So, as someone who enjoys language and marketing, it's good marketing. Yeah, instead of calling it like 'my why list' it's like 'oh, what's your wishlist?' Isn't that nice? So Lee, you create such a breadth and depth, a wealth of content. And you share this out through blogs, all sorts of ways, but your blogs are, I would say, like tomes. It's like a special experience to be led through the blog posts that you write. So let's talk a little bit about your creation process, right? Like, how do you- in the same way where you had the AWS wishlist and you're like, I wish that there was, you know, XYZ- how do you decide even what you want to write about? And then what's your process for writing and then sharing them out?

Lee: So the kind of inspiration side of what I do the blog posts on is typically I'll be working with a team and they're struggling with a particular area- so it could be caching, or it could be something around that kind of area. And if they're struggling with it, then probably other people are struggling. And again, we talked about the complexities of serverless. And what I typically do is create a repo alongside it- either Serverless Framework or CDK, but something tangible for somebody to very quickly deploy, have a play around with, reading the article at the same time and having a look at some code snippets. And I like to do a lot of visuals in there- I'm personally a very visual learner. So if something's really drawn out like that, I'll find it very easy to kind of take the concepts on board. So that's why I do it, typically. So I've got a big list of articles I still need to write. There must be about 20 on there currently. But yeah, I think it's just: how can you help people very, very quickly? And sometimes I'll have some kind of analogy or some kind of fake business, or usually a fake logo and things like that- what is serverless, with like a taxi example. Again, kind of talking about shared responsibility and that kind of thing, but how would you make it easy for people to kind of pick this up and have a bit of a play around with and get that concept?

Rebecca: So you said that you have a running list. I think that, especially for someone who loves to write, that list will probably never get smaller. Only get longer. But I'm curious, what's at the top of your list? What's like one, two, or three that you're like, 'I'm itching to write'?

Lee: So looking at the list at the moment, I've got change data capture on there. So, how do you do things like using a transactional outbox pattern and then use change data capture to take data from one system to another? I've got storage-first APIs, L3 constructs, location services, WebSocket subscriptions, AppSync auth- it's just this huge list of things that, you know, I've just never had the chance to write an article on, but hopefully they're going to be useful to people.
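
That article isn't written yet, but as a rough idea of the pattern Lee names, a hedged sketch of change data capture with DynamoDB Streams might look like this: the domain writes only to its own table, and a stream handler forwards committed changes on as events, so the write and the publish can't drift apart. The bus name, source, detail type, and event shape are all illustrative:

```typescript
import { DynamoDBStreamEvent } from 'aws-lambda';
import { unmarshall } from '@aws-sdk/util-dynamodb';
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  const entries = event.Records
    // Only newly committed items: the table write is the source of truth and
    // the publish happens off the stream, rather than as a second direct call.
    .filter((record) => record.eventName === 'INSERT' && record.dynamodb?.NewImage)
    .map((record) => {
      // The stream record carries the new image of the item that was committed.
      const newImage = unmarshall(record.dynamodb!.NewImage as any);
      return {
        EventBusName: 'enterprise-service-bus',
        Source: 'com.city.orders',
        DetailType: 'OrderCreated',
        Detail: JSON.stringify(newImage),
      };
    });

  // PutEvents accepts up to 10 entries per call; batching is omitted for brevity.
  if (entries.length > 0) {
    await eventBridge.send(new PutEventsCommand({ Entries: entries }));
  }
};
```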

Jeremy: Well, one of the things you mentioned- that was a great list, and I'm sure people will really appreciate you writing those things. But one of the things that we always try to encourage people to do too- because you mentioned this, it's part of the reason why you do this- is because you notice a team having a problem with something or somebody struggling with something, and then you just figure somebody else is probably struggling with this too. And this is one of the things that we hear all the time. Especially, I hear this from people who are like, 'well, I don't know if I should write this down because there's already a blog post kind of like this,' and whatever. And my advice to them is always, like, 'just write it.' Like, just put it out there. If you think you figured out something or you had an experience, sharing your experience can really help somebody else. Even if somebody else had a similar experience, your presentation of the experience may be different and it may click better with them. And I'm just curious what your advice would be to other writers- I mean, again, as prolific a writer as you are, is there some advice you might give other writers in terms of how they can either come up with ideas or, you know, just get started with blogging and things like that?

Lee: Yeah, I think the first step is- obviously it's scary, kind of building in public and putting yourself out there. You know, you're opening yourself up to criticism and, you know, that kind of thing. But I think it's just about helping the community. I mean, the serverless community is fantastic as a whole, which obviously you know. And being a community builder, you see the breadth of articles that people write- it is just absolutely fantastic. But I think, just go for it. Start small. You know, just write a small article on something you're doing. And just get it out there. Put it on LinkedIn or Twitter or something like that. But I think it's just taking that first step and then kind of building upon that over time.

Jeremy: Amazing. Alright, well, Lee, we are out of time. But this was amazing. And I think that there's a lot for listeners to get out of this. But if they want to learn even more, and they want to read all these articles you write, or be able to creep on you on Twitter and follow your AWS wishlist...

Rebecca: ...or your really, really, really long checklist- I mean, that thing is gold. The checklist for, like, figuring out the bullets/sub-bullets. Yeah, where do they find you?

Jeremy: How do people find you?

Lee: Yeah. So if you do a search for Lee James Gilmore and serverless on Google, there's quite a few articles there. Some on Medium. I'm on LinkedIn, so people can reach out. I love connecting with people and chatting all things serverless. And that's got links to GitHub and things like that.

Rebecca: Well, that's excellent. And just in case people want a really short, A-to-B shortcut, he's also on GitHub @leegilmorecode, because I think that list is going to be total gold. So I'm going to go ahead and say that out loud here. We'll get that all in the show notes so anyone listening can also click on those links. And Lee, thank you so much for joining us.

Lee: Awesome. Thank you for having me.

Jeremy: Thank you