Episode #81: The Best of 2020

December 28, 2020 • 73 minutes

In this episode, Jeremy shares his favorite moments from the episodes of 2020.

Watch this episode on YouTube: https://youtu.be/QVauc83L8WU


Episode #30: What to expect from serverless in 2020 with James Beswick
Yeah, it's really snowballing in terms of popularity, and we're certainly seeing just the sheer number of people from all these different companies. You have startups and enterprises and so many different types of industry all starting to pick up serverless tools. And a lot of the things that we talked about just a year ago, which really seems an incredibly long time ago now, are conversations that don't really matter that much anymore.

There was a discussion about what is serverless and all these sorts of things. And now we're starting to talk about architectural patterns, and starting to talk about how it's not just Lambda anymore. Serverless is this concept of taking different services from different providers and combining them. So I think, you know, we see people building things where you connect API Gateway, DynamoDB, S3, but also services like Stripe or Auth0, and then Lambda is just connecting things in the middle.

Episode #57: Building Serverless Applications using Webiny with Sven Al Hamad
So when you're a small business, it's the cost of infrastructure that really matters to you, because it's really efficient. You don't pay if you're not using it. But for the big guys, it's a combination of factors. And sure, your bill might be slightly higher in some cases running on serverless, the cost of infrastructure. But the cost of managing infrastructure will go way down. You will have to hire fewer people, or the people you have will have to spend fewer hours working on it.

But also, what that does is release a big chunk of budget, resources, or man-hours that you can now focus on product iterations. So your product can grow faster. And if your product grows faster, you can potentially out-innovate your competitors, who can't afford that same level of innovation. So what I see with enterprises is that they see serverless as a competitive advantage, and that's why they're moving to serverless. You see all the blog posts about cost savings and things like that, and yes, that's true, but there's that agenda of outpacing my competitor, which serverless actually unlocks. And the moment you migrate to serverless, you can use that potential.

Episode #35: Advanced NoSQL Data Modeling in DynamoDB with Rick Houlihan (Part 2)
I mean, I get the question of, is DynamoDB powerful enough for my app? Well, absolutely. As a matter of fact, it's the most scaled-out NoSQL database in the world; nothing does anything like what DynamoDB has delivered. I know of single tables delivering over 11 million WCUs. It's absolutely phenomenal. And then the other question is, isn't DynamoDB overkill for the application that I'm building?

I think we have great examples across the CDO of services. Not every one of our services is massively scaled out. Hell, I've got services out there with five gigabytes of data, and they're all using DynamoDB. I used to think that NoSQL was the domain of the large, scaled-out, high-performance application, but with cloud-native NoSQL, when you look at the consumption-based pricing and the pay-per-use and auto scaling and on-demand, I just think you'd be crazy not to use it.

If you have an OLTP application, you'd be crazy to deploy on anything else because you're just going to pay a fraction of the cost. I mean, literally, whatever that EC2 instance cost you, I will charge you 10% to run the same workload on DynamoDB.

Episode #44: Data Modeling Strategies from The DynamoDB Book with Alex DeBrie
I introduced the concept of item collections and their importance pretty early on. I think it's in chapter two. And it was actually one of the solutions architects at AWS named Pete Naylor that turned me on to this and really made me key into its importance.

But the idea behind item collections is that you're writing all these items into DynamoDB (records are called items in DynamoDB), and all the items that have the same partition key are going to be grouped together in the same partition into what's called an item collection. And you can do different operations on those item collections, including reading a bunch of those items in a single request.

So as you're handling these different access patterns, what you're doing is basically creating different item collections that handle your access patterns. And that can be a join-like access pattern. If you want to have a parent entity and some related entities in a one-to-many or many-to-many relationship, you can model those into an item collection and fetch all of them in one request.

You can also handle different filtering mechanisms within an item collection, and you can handle specific sorting requirements within an item collection. But you really need to think, hey, what I'm doing is building these item collections to handle my access patterns specifically.
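Alex's description of item collections can be sketched in plain Python. This is only an illustration of the grouping idea, not DynamoDB's actual API; the table class, key names, and sample items are all invented for the example:

```python
from collections import defaultdict

# Toy model of a DynamoDB table: items sharing a partition key ("pk")
# form an item collection, kept sorted by the sort key ("sk").
class ToyTable:
    def __init__(self):
        self._collections = defaultdict(list)

    def put_item(self, item):
        coll = self._collections[item["pk"]]
        coll.append(item)
        coll.sort(key=lambda i: i["sk"])  # DynamoDB keeps items ordered by sk

    def query(self, pk, sk_prefix=""):
        # One request reads many related items from the same collection,
        # which is how join-like access patterns are handled.
        return [i for i in self._collections[pk] if i["sk"].startswith(sk_prefix)]

table = ToyTable()
table.put_item({"pk": "USER#alice", "sk": "PROFILE", "name": "Alice"})
table.put_item({"pk": "USER#alice", "sk": "ORDER#2020-01", "total": 40})
table.put_item({"pk": "USER#alice", "sk": "ORDER#2020-02", "total": 25})

print(len(table.query("USER#alice")))            # 3: parent + related entities
print(len(table.query("USER#alice", "ORDER#")))  # 2: just the orders
```

The sort-key prefix is what gives you the filtering and sorting within a collection that Alex mentions: one partition key, many entity types, fetched together or narrowed by prefix.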

Episode #79: What to do with your data in a serverless world with Angela Timofte
So the scenario was that people can sign up, but then they have to activate their account. That's quite a normal scenario, right? So they have to activate, and if they don't activate within 30 days, then we need to delete the account. And we were doing that in our Mongo database, where we're keeping all the data for consumers, and of course we were putting a lot of unnecessary load on our primary database. So we decided to take this entire scenario out, and we started using events. When consumers sign up, we send an event to store some data in DynamoDB saying this consumer signed up, and then we have another event coming from the activation API saying this consumer activated, so then we delete the data in DynamoDB. So we had one DynamoDB table with all the unactivated accounts. And then from there we could look at when the account was created, and we could delete whatever accounts were not activated in time. So this way we took that whole pipeline to serverless, in its own context and its own service, spinning there separate from our primary data. And we did it with three events and DynamoDB, and then another Lambda that was querying this database.

So it was a very simple scenario, but we took a lot of load off the main database by not querying it, I think it was every day, to get all the unactivated accounts. And, yeah, it was a very simple scenario, but it shows how you don't have to refactor your whole database. You can just take parts of it, or certain queries. This was just one scenario, and we took it out into its own thing. I haven't checked it in a very long time because it's just working, you know? I'm thinking maybe I should go and check it. But that's one example where, as I said, you don't have to refactor the entire thing.
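The cleanup step Angela describes can be sketched in plain Python. This is a hypothetical illustration of the scheduled Lambda's core logic, not her actual code; the function name, account IDs, and the 30-day window are assumptions taken from the description above (a real table could also lean on DynamoDB's TTL feature to expire items automatically):

```python
from datetime import datetime, timedelta

# The DynamoDB table holds only unactivated accounts; the scheduled job
# picks out any account older than the activation window for deletion.
def accounts_to_delete(unactivated, now, max_age_days=30):
    cutoff = now - timedelta(days=max_age_days)
    return [acct_id
            for acct_id, created_at in unactivated.items()
            if created_at < cutoff]

now = datetime(2020, 12, 28)
pending = {
    "consumer-1": datetime(2020, 11, 1),   # well past 30 days -> delete
    "consumer-2": datetime(2020, 12, 20),  # signed up recently -> keep
}
print(accounts_to_delete(pending, now))  # ['consumer-1']
```

Because activation events delete rows as they arrive, this table only ever contains the unactivated accounts, which is exactly why the daily scan no longer has to touch the primary database.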

Episode #33: The Frontlines of Serverless with Yan Cui
I don't know about the major breakthrough, but I definitely think more education and more guidance, not just in terms of what these features do, but also when to use them and how to choose between different event triggers. That's a question I get all the time: "How do I decide when to use API Gateway versus ALB? How do I choose between SNS, SQS, Kinesis, DynamoDB Streams, EventBridge, IoT Core?" That's just six application integration services off the top of my head. There's just no guidance around any of that stuff, and it's really difficult for someone new coming into this space to understand all the ins and outs and trade-offs between SNS and SQS and Kinesis and so on.

Having more education around that, having more official guidance from AWS around that, would be really useful. Technology-wise, I like the trajectory that AWS has been on. No flashy new things, but rather continuously solving those day-to-day annoyances, the day-to-day problems that people run into. The whole cold start thing, again, is often overplayed and often underplayed; it's never as good as some people say, and it's never as bad as some other people say. But having solutions for people with real problems is what matters, because cold starts happen for various different reasons.

I really like what they've done with provisioned concurrency, even if the implementation is still, I guess, a version one. So hopefully some of the kinks it currently has will be solved. Other than that, I'd like to see them do more on the multi-account management side of things. Control Tower is great, but again, there's a lot of clicking stuff in the console to get anything set up, and it's also very easy to rack up a pretty big bill if you're not careful, because you can provision a lot.

NAT gateways, for example, and things like that. One of the companies I've been talking to recently, a Dutch bank, is actually building some really cool tooling themselves to essentially give you infrastructure as code for this. Think of it as a CloudFormation extension that allows you to capture your entire org. Imagine I have a resource type that defines my org and the different accounts, and then they can configure CloudTrail, set up multi-account security tooling and things like that, all within one template, which looks just like CloudFormation. So some really amazing tooling that those guys have built.

But having something like that from AWS would be pretty amazing as well. Because again, we've seen more and more people getting to the point where they have a very complex ecosystem of lots of different enterprise accounts, and managing them and setting up the right things, the SCPs and things like that, is not easy. We certainly don't want people to be constantly going to the console and clicking things. And that's another annoyance I constantly have with AWS documentation: they keep talking about infrastructure as code, but every single page of documentation just tells us, go to this console, click this button.

Episode #37: The State of Serverless Education with Dr. Peter Sbarski
I think that's going to be what makes education effective in the future. It's that curated, personalized education. You spoke about A Cloud Guru; we have full-time training architects, instructors, and what they do every day, right, is create content. Whenever anything changes, they update it. Right?

So when you go to the platform, you know that what you're getting is the latest version. You're getting that latest best practice. So suddenly, what you are learning in a lecture hall, right, doesn't really match what you could be learning online, because that content is much more up to date. So that's an interesting aspect as well: the currency and the quality. Because we can continuously iterate on it. But I think that universities, too, have an important function that cannot be fulfilled with just an online delivery of education.

That element is really that ... It's going to sound harsh, but it's babysitting, right? Because just after you finish school, right? There's still a little bit of time for a lot of people to mature, right? They need to go through that maturation phase. Going to university, going to college allows people to do that, right? It allows them to build social connections. It allows them to learn how to work in a team, maybe better than they did at school. So it gives them that opportunity to mature before they go into the industry.

As much as I love online education, and I think it is the future, there is that element that still needs to be solved, that social element. But I think we'll figure things out, maybe it'll be some blended learning, where you do get that up to date curated delivery of education online. Then there's an additional element where you go and you socialize with your peers. So yeah, we'll see how all that pans out.

Episode #60: Going Green with Serverless with Paul Johnston (Part 2)
In the end, I know the joke is that the cloud is just other people's servers and all that kind of stuff. Underneath it all, there are just servers. But I think that in trying to make these servers more efficient, trying to make these data centers more efficient, there is still constant churn. We don't keep things efficient. Two, three years down the line, the server that you were using is not efficient. Six years, seven years, it's old. You don't want to be running stuff on there; you want to be running stuff on something that's efficient and new. Actually, there's an enormous amount of e-waste in the data center industry. It's not straightforward.

The conversations around all of this are not straightforward. I think everyone needs to start thinking about moving to the cloud simply because we need to be reducing our impact. If you're running stuff, I think it's important to be able to go, "Actually, we need to be able to reduce the amount we run." But that means, understanding how that cloud, that you're choosing to work with, is working in terms of its sustainability. You can't just go, "We'll move it to X cloud, or Y cloud or Z cloud, or whoever it is, but we'll trust them to do the right thing."

You've got to still have that relationship. You've got to still be able to go to that cloud, "You, Mr. or Mrs. Cloud person, you've got to tell me, are you using green electricity? Are you using renewables? How are you disposing of everything? What is your supply chain?" I think that conversation, over the next few years, is actually going to become a much more common conversation. It's going to become more important. You are not going to be able to get away with, "We just run efficient data centers." That's not going to be a standard and reasonable response. It's going to be table stakes. A green data center will be a table-stakes conversation, and the best practice will be, "Well, we're actually running 100% renewables and we're putting more into the grid, and we're being as good a partner as we possibly can. And all of that. We haven't got diesel generators; we've got batteries."

It's all of that conversation that I think comes back to. Maybe we will end up not using certain companies because their data centers are not green enough. Maybe that is where we end up, that actually societal pressure actually pushes these companies to do better. But I don't think we're there yet. I think we're probably a couple of years away, two, three, or four maybe, away from that.

Episode #42: Better Serverless Microservices using Domain Driven Design with Susanne Kaiser
...As mentioned, a domain model cannot exist without a boundary, and that's where we come to bounded contexts. A bounded context provides different types of boundaries for a domain model. It forms a consistency boundary around the domain model and protects its integrity, and it also forms a linguistic and semantic boundary, so that the language's terms are only consistent inside of its bounded context. So, for example, "pending" in one bounded context could have a different meaning than in another bounded context. And it also serves as an ownership boundary: a bounded context should be implemented, evolved, and maintained by one team only, and a single team can, on the other hand, own multiple bounded contexts. But it's really, really relevant that multiple teams are not working on the same bounded context, because this enables autonomous teams working on their bounded contexts independently, at their own pace, and with minimal impact across other teams.

A bounded context also serves as a physical boundary. It can be implemented as a separate solution and deployed independently as a separate artifact, and it enables separate data stores which are not accessible by other bounded contexts. The source code of each bounded context can also be maintained in a separate git repository with its own CI/CD pipeline.
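The linguistic boundary Susanne describes can be roughly sketched in Python. The contexts, class names, and states here are invented for illustration; the point is only that "pending" maps to a different meaning inside each bounded context:

```python
from dataclasses import dataclass

# Ordering context: "pending" means the order awaits payment.
@dataclass
class OrderStatus:
    state: str
    def is_pending(self):
        return self.state == "awaiting_payment"

# Shipping context: "pending" means the parcel is not yet dispatched.
@dataclass
class ShipmentStatus:
    state: str
    def is_pending(self):
        return self.state == "not_dispatched"

order = OrderStatus("awaiting_payment")
shipment = ShipmentStatus("not_dispatched")
print(order.is_pending(), shipment.is_pending())  # True True
```

Each context owns its own model of the word, so neither team has to coordinate with the other when the meaning of "pending" evolves inside its boundary, which is exactly the independent-evolution property described above.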

Episode #65: Serverless Transformation at AWS with Holly Mesrobian
Yeah. We recommend using a separate account per microservice and then also thinking about an account for each of your environments as well, your pre-production environment and your prod environment. Each one should have its own account as well. What that does for you, if you think about it, a lot of times, two pizza teams own a service or a small set of microservices, and you want to reduce the number of people who can actually access those services and make changes. I mean, it's an operational risk.

It's also a security risk having too many people with their hands on a microservice. You really want to make sure that the people who can access it are knowledgeable and know what they're doing. That will help you maintain high availability as well as ensure security. Of course, availability comes back not only to the potential for someone to make a breaking change, but also to things like ensuring that your limits are used and planned for in a way that makes sense for you.

Episode #36: The Cloud Database Landscape with Suphatra Rufo
Yeah. I think this is where things get really interesting. When I was at AWS, I worked almost exclusively on migrations off of Oracle and Azure. And a database migration is, by and large, the most difficult thing that you can do in cloud computing. It's really hard. You've got to do a lot of data modeling. You've got to do your schema conversions. I mean, it's really just a ton of work, and what I have found is that when people are charged with, "All right, we've got to migrate our database," they tend to do it in multiple phases, and that will take multiple years. So oftentimes they'll first just re-host. Let's say they're on Oracle. They want to get off Oracle, but they don't want to be penalized. So they take their Oracle license and bring it to a different cloud provider. They keep all their data with Oracle still; they're just moving it. That takes six months to a year. Then afterwards, they say, "Okay, I think we're now going to replatform." And that's a whole other workload, and that's even more work. And even harder down the line is refactoring, which is where they might actually go from a relational database to a NoSQL database.

It's much more rare that you see people do a database migration where they go from a traditional relational database on one provider to a NoSQL database on another provider, because it's a really difficult piece of work.

Episode #39: Big Data and Serverless with Lynn Langit
Well, it goes to the CAP theorem, which is consistency, availability, and partition tolerance. This is sort of classic database theory: what are the capabilities of a database? And it's really kind of common sense. A database can have two of the three, but not all three. So, to simplify it, you can have the ability to do transactions, which is relational databases, or you can have the ability to add partitions. Because if you think about it, when you're adding partitions, you're adding redundancy. It's a trade-off. And so, are you adding partitions for scalability? So when adding partitions makes a relational database too slow, then what do you do? What you then do is you partition the data across SQL and NoSQL databases.

And again, I did a whole bunch of work back in 2011, 2012, 2013. I worked with MongoDB, I worked with Redis. And one of the key books, I guess, would be Seven Databases in Seven Weeks. It's still a very valid book even though it's many years old. It shows how you do that progression, and it really turned the light on for me, because prior to that point it was, oh, just scale out your SQL Server, scale out your Oracle Server, which still would work, but these NoSQL databases were providing much more economical alternatives. And of course, I'm always trying to provide the best value to my customer. So if it wasn't a great value to buy more licenses for SQL Server or Oracle, and you could instead get a Mongo cluster or a Redis cluster up, you could partition your data if that was possible, because there's a cost to partitioning your data and rewriting your application.

So I just found those trade offs really, really fascinating. And of course during that time, cloud was launched, led by AWS. Microsoft had an offering, but they didn't really understand the market until a little bit later. So Amazon had an offering and they first started, it was really interesting. They started by just lift and shift with RDS at a PaaS level taking SQL Server and actually making it run effectively in the cloud. That was how I got started, because my customers wanted to lift and shift and maybe go to an enterprise edition and run it on cloud scale servers.

Episode #71: Serverless Privacy & Compliance with Mark Nunnikhoven (PART 1)
Yeah. And as frustrating as the tiering system is for a lot of users, a lot of it does have that purpose. If we use the AWS term, it's about reducing the blast radius. You don't want everyone in support to be able to blow up everything. And if you look at the Twitter hack, it was actually an interesting example. Somebody raised the question and said, "Why didn't the president's account get hacked? Why wasn't it used as part of this?" And it's because it has additional protections around it, because it's the leader of the free world, ostensibly, so you want to make sure that it's not the average temporary employee on a support contract being able to adjust that. So the tiering actually is a strong play, but also understand that defense in depth is something we talk about a lot in security. And it gets kind of a bad rap, but essentially it means don't put all your eggs in one basket.

So don't use one control to stop just one thing. So you want to do separation of duties. You want to have multiple controls to make sure that not everybody can do certain things, but you also want to still maintain that good customer service. And I think that's where, again, it comes down to a very pragmatic business decision. If you have two sprints to get something out the door and you go, well, I'm going to build a proper admin tool, or you're just going to write a simple command that your team can run, that will give them the access, you're just going to write a command that does the job. And you know what, in your head, you always say the same thing.

You put it in your ticket notes, you put it in your Jira, and you say, we'll come back to this and fix it later. Later never happens, so most admin tools are this hacked-together collection of stuff just to get the job done. And I totally get it from a business perspective. It makes sense. You need to go that route, but from a security and privacy perspective, you need to really think holistically. And I think this is a question I get asked often. Actually, somebody just asked me this on my YouTube channel the other day. They said, "I'm looking for a cybersecurity degree, and I can't find one. All I can find is information security. What's the deal?" And I said, well, actually, what you're looking for is information security. In the industry, and especially in the vendor space, we talk cybersecurity because that's typically the system security.

So locking down your laptop, locking down your tablet, locking down your Lambda function, that's cybersecurity, because we're taking some sort of cyber thing and applying security controls to it. Information security, as a field of study, looks at the flow of information as it transits through systems. Well, part of those systems are people, the wetware, right? Or the fact that people print things out. This was a big challenge with work from home: you said, well, your home environment isn't necessarily secure, and yes, it has different risk models. But the fact that I can connect into my corporate system and download a bunch of stuff and then print it, that's information that still needs to be protected.

So I think if you think information security, you start to include these people and go, wait a minute, Joe from support, we're paying him 15 bucks an hour, but he's got a mountain of student debt he's never going to get out of. That's a vulnerability that we need to address, not by locking it down, but by helping that person out and making them feel included, making them feel part of the team, so that they're not a risk when a cybercriminal rolls up with some cash and says, hey, give me access to the support tools.

Episode #52: The Past, Present, and Future of Serverless with Tim Wagner
I have these two strong reactions to that statement, right? One of them is I would say in some ways the most successful thing Lambda has done is to challenge thinking, right? To get people to say, do you really need a server stood up, turned on taking 20 minutes to fire up with a bazillion libraries on it and then you have to keep that thing alive and in perfect condition for its entire life cycle in order to get something done in terms of a practical enterprise application? And challenging that assumption is one of the most exciting, important and successful things that I think Lambda and other serverless offerings have accomplished in our industry. The flip side to this is to be useful, sometimes you have to be practical. And it's equally true that you can't walk up to an enterprise and say, "All right, step one, let's throw all your stuff away and then step two, you're not going to get past step one."

It's funny, we talk about greenfields and brownfields; it's all brown in the enterprise. Even if you write a net new Lambda function, it's running against existing storage, existing data, existing APIs, whatever that is. Nothing is ever completely de novo. And so I think, to be successful and as widely adopted as possible in the long run, serverless offerings are also going to have to be flexible. And I think you see this with things like provisioned concurrency. I mean, when I was at Lambda, we had long, painful debates about whether this was the right thing to do, and for understandable reasons, because it is less stateless. It's obviously optional; we don't force anyone to use it. But by doing it, it makes Lambda look more like a conventional, well, server, container, conventional application approach, because there is this piece that is a little bit stateful now.

And I think the arc here is for the serverless offerings to not lose their way, to find this kind of middle ground that is useful enough to the enterprises that still challenges assumptions that gets people to write stuff in a way that is better than what came before and doesn't pander completely to just make it feel like a server. But is also practical and helps enterprises get their job done instead of just telling them that ... because just sermonizing to them is also not the right way to do it.

Episode #78: Statefulness and Serverless with Rodric Rabbah
I think accessibility of the platform. And I remember when I first met you, right, we had this conversation about what we called the "serverless bubble" at the time, and maybe "bubble" isn't the right word, because bubbles burst and that's not a good thing. Maybe "echo chamber" is better. But one thing I've learned, and I learned this very early on when I left IBM and went to a developer conference, was this: I was saying, yeah, there's a thing called serverless, it's the greatest thing, and the response was, what's a microservice? Right? We weren't recognizing that the world hasn't yet caught on. There is part of the technology community that has, and good for them. But recognize that there is still a large interest in Kubernetes, still a large interest in EC2 instances and VMs. There's a massive world out there where building applications for the cloud is still hard. You know, just log onto the Amazon console and look at everything you can get. Where do you get started? Right? So the opportunity for us is making the cloud more accessible.

And so we like to think that, from a Nimbella perspective, you can create an account within 60 seconds. You can deploy your first project, not even having to install any tools, right out of GitHub. And hey, I have stood up an entire application. It's got a front end. It's got a dedicated domain. It's served from a CDN. My functions are entirely serverless; they scale. I can have state. I just did that, right? So it's about really making the cloud accessible for a large class of developers, from the enterprise all the way to the indie developer who just has an idea for a mobile app or a website that they want to build. I think this is where the opportunity really is. Whether you're running things in a container or an isolate like Cloudflare does, that becomes an implementation detail nobody's going to care about in the future.

Episode #67: The Story of the Serverless Framework with Austen Collins (PART 2)
It's the potential that's democratized for everybody. Whether you're a large organization or just a solo hacker in the basement trying to get something off the ground, this power has been democratized for everybody. And going back to our mission, yeah, we want to help every single person build more, manage less, leverage higher levels of abstraction, and help them focus on outcomes more than ever. We're going to try and rethink developer tools and what that means in order to deliver that experience. And the last part for us is that we firmly believe serverless is bigger than any one vendor at the end of the day. And we feel very strongly that there needs to be an application framework that provides an open, level playing field for serverless cloud infrastructure across any vendor, because yes, we've talked a lot about AWS, and the majority of our users are using AWS.

And the majority of the infrastructure is AWS, but not all of it, actually. Our users, our audience, are very product-focused. And if you want to build the best products, you've got to be free to use the best-of-breed services that are out there. And so we see a lot of people still bringing in Stripe, still bringing in Algolia, still bringing in MongoDB Atlas, Twilio, right? There are so many great things out there. And helping developers have this open framework that treats all these things as neutral, this level playing field where they can compose serverless infrastructure across any vendor into applications really, really easily, feels like the destiny of the Serverless Framework to us.

Episode #58: Observing Serverless Observability with Erica Windisch 
From the perspective of open source developers, though, my biggest issue is the culture. Take any one of these open source projects, however small or big they are. I mean, things like Kubernetes, right? They are now multiple projects. You have things like Falco and so forth that are sub-projects or adjacent projects, or however you want to define them. But you have a community here that operates a certain way; they have their own culture. And that culture is potentially different from the culture that you, as a company founder or as HR or a manager or whoever, want your company or your team to have. Right? And how do you resolve that difference? Because one of the other things is that a lot of people hire from these open source communities.

So if you are building a team that is going to work in open source, and you want to make this a diverse team, for instance, but it's not a diverse project, how does that work? Right? Are the project and the other people in that project going to discriminate against you, either implicitly or explicitly? It may not be intentional, right? There are implicit biases that exist. And I think it becomes very difficult because, when you have your own closed-source application and you're building things for your own self and your own teams, you have control over what you're building, how you're building it, and the construction of your team, etc. And I think that you lose a lot of that when you're working in an open community.

Because if you're only working on open source, it's almost like, while you're employed by one company, your co-workers are, in a sense, a set of people who are not hired by your company, and who may not actually hold the same values that you or your company hold. And I don't have a solution for this. But it's something I think about a lot. And it's one of the reasons I no longer really contribute much to open source.

Episode #50: Static First Using Serverless Front-ends with Guillermo Rauch
I think what's amazing about serverless is that it's exposed the essential complexity of the problem. It stopped developers from sweeping hacks under the rug. The best example of this, I think, is that you can no longer do async computation as a result of invoking a function that easily anymore. In the world of Node.js, I would see a lot of customers just put lots of state in a process. When they respond, they continue doing things behind the scenes in that same process.

Functions have altogether made this impossible, but for a great reason, right? They were exposing, "Hey, that side effect that you were computing, you should not have been doing in that same process. You should have used a primitive like a queue to put your side effect, your event, there, and then used other functions that respond to that event." Then it's so smart that they also put the developer into this state of success of saying, "Well, it's a side effect that now can no longer be retried by the client," because the client is just executing the function.

The function is responding with 200, and it queued the side effect, so there's no reason for the client to retry. Now, the side effect is loose in the universe of computation. That means that we need a system that can retry it, because we want that side effect to run to fruition. Now, it forces you to put that into a queue, and the queue can retry, and then eventually also fail and go into a dead letter queue. So now, just going through all this in my head, I'm going crazy about the amount of complexity.

But here's the thing, and this is why I love serverless. That was the essential complexity that had to be managed to begin with. What we were doing before was chaos: it was side effects that maybe sometimes ran correctly and sometimes not, it was unscalable systems, and so on and so forth. But it is a complicated world.
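To make the flow he describes concrete, here is a minimal Python sketch (not from the episode): an in-memory queue stands in for a managed queue like SQS, the handler returns immediately after enqueuing the side effect, and messages that keep failing end up in a dead letter queue. All names here are illustrative.

```python
class Queue:
    """In-memory stand-in for a managed queue (e.g. SQS) with retries and a DLQ."""

    def __init__(self, max_retries=3):
        self.messages = []
        self.dead_letter = []
        self.max_retries = max_retries

    def enqueue(self, event):
        self.messages.append({"event": event, "attempts": 0})

    def drain(self, worker):
        # Deliver each message; failed messages are retried up to max_retries,
        # then moved to the dead letter queue instead of being lost.
        while self.messages:
            msg = self.messages.pop(0)
            try:
                worker(msg["event"])
            except Exception:
                msg["attempts"] += 1
                if msg["attempts"] >= self.max_retries:
                    self.dead_letter.append(msg)  # give up: dead letter queue
                else:
                    self.messages.append(msg)     # retry later


def handler(request, queue):
    """The function responds with 200 right away; the side effect is queued,
    not executed in the request process, so the client never needs to retry."""
    queue.enqueue({"send_email": request["user"]})
    return {"status": 200}
```

A worker that fails transiently gets retried by the queue, while a worker that always fails sends the message to the dead letter queue, exactly the containment the quote describes.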

Episode #40: HTTP APIs for API Gateway with Eric Johnson and Alan Tan
The most dangerous part of an application that I'll ever build is my code. Right?

So, when I build an application, I want to get that data stored first. That's the thing. I tend to go with DynamoDB because that's what I like, that's what I use, but there are different purposes.

I know, Jeremy, you and I have had this conversation before, and you're an SQS guy, so that's where you tend to go. And we do this because we look at, okay, what's the pattern for the retry or the DLQ or different things like that.

For me, it's because I'm going to continually write back to Dynamo in the app I'm specifically thinking about. But the idea is, if API Gateway can directly integrate with the storage, be it S3, be it DynamoDB, something like that, then I've stored the data, and I don't have to go back to the customer if my logic fails, right?

So, in an application, I've stored the data. Let's say I'm using DynamoDB: I do a stream, it triggers a Lambda, I start processing that data. If somewhere in there something breaks, and again, it's going to be my code, but let's say something breaks, then I don't have to go back to the customer and say, hey, guess what, I blew it.

Can you give me your data again? Can you resubmit that? And continue to trust me, because I'm sure I won't lose it again. Instead, I've got that data stored, and I can write in some retry or take advantage of the retry from an SQS or an SNS or something like that.

So, I think it's a really cool pattern for building resilience into our applications. Serverless comes with a lot of resilience anyway; that's how AWS has approached this: look, as much as we'd like to say nothing ever breaks, let's write as if it does, right?

So, let's degrade gracefully. I think this adds even another layer of that, where I can degrade in my code and know hey I've still got the data. I can write some retry logic. I can use existing retry logic. I think it's a safer pattern.

It does require ... Storage first is what I call the pattern, but it requires thinking asynchronously. What can I do after I've responded to the client, and how do I work with them?
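As a rough illustration of the storage-first idea, here is a small Python simulation (my sketch, not from the episode): a dict stands in for DynamoDB, the payload is persisted before any business logic runs, and a processing failure can be retried from storage without ever asking the client to resubmit. All names are hypothetical.

```python
class StorageFirstApp:
    """Toy storage-first flow: persist the payload before any business logic
    runs, so a processing failure never requires the client to resubmit."""

    def __init__(self):
        self.table = {}      # stands in for DynamoDB
        self.processed = {}  # results of downstream processing

    def receive(self, request_id, payload):
        # Step 1: the direct service integration stores the data.
        # No custom code has run yet, so nothing can have broken.
        self.table[request_id] = payload
        return {"status": 202}  # accepted; processing happens asynchronously

    def process(self, request_id, transform):
        # Step 2: stream-triggered processing. This may fail and be retried
        # safely, because the original payload is still in the table.
        self.processed[request_id] = transform(self.table[request_id])
```

If the first processing attempt blows up in "my code," the stored payload survives, and a later retry completes without going back to the customer.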

Episode #51: Globally Resilient Architectures with Adrian Hornsby
So a soft Time To Live is your requirement in terms of staleness, right? So you say, for my Twitter trends list, I want to refresh it every, let's say, every 30 seconds. So you give it a soft TTL of 30 seconds. So if my service requests the cache and the soft TTL is expired, and everything is fine, you go and query the service, right? But if my service doesn't answer at that moment, so you've passed the soft TTL and your downstream service doesn't give you the data, what do you do? Do you return a 404, or do you actually fall back and say, alright, my soft TTL is expired, but I'm still within the hard TTL, which is, let's say, one hour, right?

And then your service returns the data within the hard TTL, and you say, "Oh, sorry, we just have one-hour-old data, because we're experiencing issues." So again, it's a possible degradation. And actually, quite often a cache can be used like this. I think it's all about how you create your cache, and how you define your eviction policies and things like this.
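The soft/hard TTL fallback he describes can be sketched in a few lines of Python (my illustration, not code from the episode; the class and parameter names are made up). Within the soft TTL the cached value is served; past it, the origin is queried; if the origin fails, the stale value is served as a graceful degradation until the hard TTL expires.

```python
import time


class SoftHardTTLCache:
    """Cache with a soft TTL (staleness target) and a hard TTL (absolute limit).
    Past the soft TTL we try to refresh; if the origin fails, we serve the
    stale value as a graceful degradation until the hard TTL expires."""

    def __init__(self, soft_ttl, hard_ttl, clock=time.time):
        self.soft_ttl, self.hard_ttl, self.clock = soft_ttl, hard_ttl, clock
        self.value, self.stored_at = None, None

    def get(self, fetch):
        now = self.clock()
        if self.stored_at is not None and now - self.stored_at < self.soft_ttl:
            return self.value, False           # still fresh: serve from cache
        try:
            self.value, self.stored_at = fetch(), now
            return self.value, False           # refreshed from the origin
        except Exception:
            # Origin is down: fall back to stale data within the hard TTL.
            if self.stored_at is not None and now - self.stored_at < self.hard_ttl:
                return self.value, True        # True flags "stale" to the caller
            raise                              # past hard TTL: surface the error
```

The second element of the return value lets the caller say "sorry, this is one-hour-old data" instead of returning a 404.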

Episode #48: Serverless Developer Culture with Linda Nichols
I just started thinking about the fact that if I was developing something for the cloud, or just in general, if I start typing a lot, I pause and I go, okay, somebody's already written this. I'm not that clever. Not really, there are a lot of smart people in the world. There are a lot of people who code all the time. This is already done somewhere. It's either in a library or it's a service. And I talk to so many customers and people who are like, "Oh, here's my great idea of a thing." And almost always I'm like, "No, okay, so that is this." And I mean, even messaging systems like Service Bus on Azure. I mean, there are so many developers who have tried to write messaging systems. And there are so many out there; there are so many people who have tried to write Kafka. And they still are.

And sometimes I talk to people who are trying to create something, and they will say, "Okay, well, I'm going to put this in a function, this in a function." I'll say, "No, you don't need functions here. This is already a service, or you can already use something like Logic Apps; you don't have to write any code." And you just kind of connect some things together, or there are already built-in services, and that's still serverless, right? Like, serverless is not just FaaS. Like, I don't have to write 100 Lambdas to be a serverless developer.

Episode #49: Things I Wish I Knew Before Migrating to the Cloud with Jared Short
I think it takes practice, right? You're giving up a lot of fundamental control that I think people are used to having, right? I can't walk into my data center, open a rack, and turn a server off or on, or pull wires or things. That's a huge fundamental shift for a lot of folks. And we're moving to people now, these days, who have never even walked into a rack of servers; we have people coming out of college for whom AWS, going into EC2 and clicking launch instance, is their concept of a server.

I think what we're starting to build towards, in terms of this cloud-native mindset, is that we fundamentally can trust these larger providers to provide mostly good experiences, let me be careful there, mostly good experiences around these cloud primitive services. And we have S3, which has kind of been referred to as the seventh or eighth wonder of the world. It's like this modern marvel, right? That thing holds so much data and performs so well, and it's so scalable.

When it goes down, the internet is just basically done. That's incredible, that they have this service and we're trusting it. As cloud-natives, we're trusting these providers, and I don't care if it's Azure or GCP or anybody, to provide these primitives that we can build on top of. I think cloud-natives look at those primitives with an implied level of trust, and they're willing to build businesses and business value on top of them.

And I think it's control, and being able to trust somebody else by giving up that control, so you can accelerate what you're doing and what you're looking to build in terms of business value, that is more of a cloud-native mindset than anything else.

Episode #73: Optimizing for Maintainability with Joe Emison
In general, I do choose third-party services for everything. My general view is, prove to me that this third-party service won't work. Now, again, I draw a very strong distinction between a third-party service that's serverless and one that isn't. You can find third-party services where they want you to go into the AWS Marketplace and run it on a VM. That's not serverless, and I'm not interested in that. Or like, "Oh, it's an open-source project. Run it yourself." Again, I'm not interested in that. But when it's serverless ... My short definition of serverless is: it's not my uptime. I literally can't influence uptime. Beyond that, I could put bad configuration or bad code in, but if some server fails, it's not on me to bring it back up. I think if you can have a serverless third-party API, your default should be to use it unless you can prove that you shouldn't.

Episode #80: Revolutionary Serverless at re:Invent with Ajay Nair
Flat out, I think that is the biggest factor in the speed that serverless brings to the table. Like the fact that you can cherry-pick components of your customer product by relying on other people's expertise, right? So going out there and saying, hey, Jeremy Daly, you have built this great chat service, and I trust you to offer me four nines of availability and a certain performance guarantee, and as long as I use the API, I'm good. The incentive for me to go and rebuild that elsewhere is negligible. Like, it doesn't help my business to go and rely on anything else.

And I think what that basically does is, you're now recruiting an entire collection of really deep domain experts to be part of your operational team, to be part of your development team, where they're continuously improving their portion of that tiny little product and making it better so you move faster. The scale is getting better. The performance is getting better. The capabilities are getting better, while you innovate on the part of the stack that you want to. And what's fascinating for me is, you know, that is the true vision we all had when we went down the microservices path as well. Like, you can do independent development of different pieces. They're all, you know, small pieces loosely joined that talk to each other, and they can innovate separately. The only difference is, it's not just your organization sitting and doing it, your two-person startup. You now have, you know, 22-person startups and AWS innovating on your behalf, just to make your product better. Right?

Like, your 1 millisecond example is a great one. Like, if you were a startup who was running on us today and you happened to use Lambda for your backend compute, your bill just got 40% cheaper, which you can now pass on as end-user savings with you doing nothing. Like, imagine how much work you would have to do to go and get that kind of behavior otherwise. And just one more thing, Jeremy, since you brought that up: I do believe the true power is going to be connecting all these services together and getting them to interconnect a lot more.
You're starting to see this with some of the bigger ones, right? So Twilio, Workday, Atlassian, they've all added this programmable component to them. They've got Lambda-based extensions that are showing up, like Twilio Functions and Netlify Functions and others, that allow them to add just a little bit of logic to then talk to other services via API calls, and kind of build forward over there. So I think the flexibility and power this enables is really, really cool. And the fact that you can swap out one API for another is quite a testament to the whole dance around "am I really locked into a particular provider or not?" because it's quite easy to change the API call more than anything else.

Episode #76: Building Well-Architected Serverless using CDK Patterns with Matt Coulter
Yeah. So, it helps that Liberty Mutual as a whole is split up into different business segments, so my segment, GRS we call it, Global Risk Solutions, I'm lucky I remembered that, we're basically large commercial and specialist insurance. But our CIO made a mandate; he put down what our vision is as a company and where we want to go, and he wrote down that we want to be a serverless-first company. So whenever you have buy-in at the executive level, it goes a long way. But the second part of it is, I haven't mandated anything to any engineer who works anywhere, because I've seen an awful lot of times that it doesn't matter how good your idea is, if you come in and tell people, "I think I know better than you," they just say no.

So that’s why I started with CDK Patterns externally, which is, given I haven’t introduced it yet, an open source collection of serverless architecture patterns. And the idea was, if I could go external and say, “Here is a thing, here is an actual industry thing, here are all the AWS Heroes that talk about the patterns that are in this, here are the links to all their blog posts, here are all their articles, here is me talking about it in the world,” and then go to a team and conduct a well-architected review with them, and then instead of mandating it, just ask them, “Okay, I see you’re trying to build this particular solution, have you considered this?” Then at that point, because the thing already exists, it’s already coded and they can pick up on it, I think you’ve reduced the barrier to the direction you want them to go rather than forcing it.