Episode #132: The Evolution of Serverless at AWS with Dr. Werner Vogels
April 11, 2022 • 44 minutes
On this episode, Jeremy and Rebecca chat with Dr. Werner Vogels about the customer pain points that led to the creation of Lambda, the patterns that emerged to create the larger serverless ecosystem, why we should be building sustainable architectures, the importance of developer community programs, and so much more.
Dr. Werner Vogels is Chief Technology Officer at Amazon.com where he is responsible for driving the company’s customer-centric technology vision.
As one of the forces behind Amazon’s approach to cloud computing, he is passionate about helping young businesses reach global scale, and transforming enterprises into fast-moving digital organizations.
Vogels joined Amazon in 2004 from Cornell University where he was a distributed systems researcher. He has held technology leadership positions in companies that handle the transition of academic technology into industry. Vogels holds a PhD from the Vrije Universiteit in Amsterdam and has authored many articles on distributed systems technologies for enterprise computing.
Hi everyone. I'm Jeremy Daly. Rebecca:
And I'm Rebecca Marshburn. Jeremy:
And this is Serverless Chats. Hey, Rebecca, how are you doing today? Rebecca:
I am doing extra well today. If you've listened to our show before or seen screenshots of me drinking out of a Common Room mug, this guest is one of the big reasons why I joined Common Room. It's focused on developer communities and community leaders. And this person is super influential in developer communities, a super big supporter of developer communities. And I'm a little giddy. So I'm just going to say, Jeremy, would you do us the honor of introducing our guest today? Jeremy:
Absolutely. This is an epic guest and we are so happy to have him here, spending some time with us to share some thoughts with our audience. Our guest today is the chief technology officer at Amazon.com, Dr. Werner Vogels. Dr. Vogels, thank you so much for being here. Werner:
Oh, thanks for having me, guys. I'm looking forward to it. Jeremy:
Awesome. So, I have a feeling that most of our listeners know who Werner Vogels is. You know, re:Invent and all the other amazing things that have happened. But just in case, could you take a minute and tell the audience a little bit about yourself and what the CTO of Amazon.com does? Werner:
Oh, how much time do we have? So, I've been with Amazon now for 17 years. So I joined in 2004. I was an academic before that, worked on distributed systems at Cornell, and the fun story is actually that I almost didn't join Amazon. Yeah. So in those days, Amazon was just a retailer. Yeah. And so I got invited to come give a talk there. And, "Really? Do I have to go? It's a bookshop, it's a web server and a database. How hard can it be?"
And I did go, and I was blown away by the technology behind the scenes at Amazon.
Really, anything that you could find in any computer science textbook was done to the absolute max. High volume transaction processing, machine learning already way before everybody was doing it, robotics, everything. Supply chain. And so it was an incredible, let's say, challenge to join them. And I've had a good time since. Now, CTOs have different roles. Yeah. And I think there are sort of four or five different types of what you see around the world with CTOs. Some of them are pure infrastructure managers, others actually manage large teams. And some CTOs are sort of big thinkers. And I like to believe that, when I joined, that was sort of my role as an academic coming in, bringing a bit of academic rigor to the kind of things that we were doing, because we were reaching scale that nobody else in the world had done before.
And so I think the leadership was hoping that I would bring a little bit of rigor, such that we were really scalable not only from a practical point of view, but also sort of from a theoretical, fundamental point. Now then all of these things happened and we built AWS. You launch AWS and suddenly you become a technology provider. And then your role as CTO changes.
You become what I call an external facing technologist, where it's important to talk to your customers, to get information back, to sort of start this whole feedback loop, and then start to think about what kind of new products we should be building for our customers, or what are the things that we're not doing right. And so your role starts to get fairly different then. Not that I asked for it, but it's just sort of evolved over time. Jeremy:
Right. So, speaking of, you know, building products for customers, I think one of the really great things about AWS and being customer obsessed is taking that feedback from customers and then building products with it. And so, let's go way back to maybe 2014, and talk about Lambda functions. Because I'm really curious, you know, we've had a lot of conversations with other guests about what was the genesis of Lambda functions and so forth, but I'm really curious from sort of your point of view: What were the customer pain points that you were hearing from customers that sort of led to this idea of building Lambda functions and this idea of, you know, sort of serverless? Werner:
So first of all, you know, Lambda and serverless are not the same thing. Almost everything else in AWS is serverless by nature. Yeah, whether it's SQS, whether it's S3, whether it's Dynamo. Pick any product, almost, and they're all serverless. It basically means that customers don't have to worry about scale. They don't have to worry about reliability. They don't have to worry about consistent performance, managing costs, things like that. And so that was, sort of, the basis of all the services that we were building, quite a few of them, in fact. The only thing that wasn't really serverless was actually compute. Now, we weren't aiming for serverless compute, but what we did see is that quite a few customers had these small tasks that they just wanted to perform.
And to do that, you had to run a fleet of EC2 instances, because that was the only option that you had. So a good example there is a company called WeTransfer. I don't know if you've ever used them; if you have very large files, media companies use them all the time. Yeah. And so basically what they do is you upload the file to S3, and then whoever needs to receive it can download it from there. But what they also did, once the file was uploaded, they would run a virus check on it and compress it to sort of save bandwidth. To do that they basically had to run dozens of EC2 instances waiting for work.
Yeah. And so in essence, it was an event-based system, but there wasn't really any mechanism to trigger the work that you wanted to do. And especially if this work was really small, it was hugely inefficient to run these very large EC2 fleets. So that made us think, you know, in the [inaudible] first, what kind of operations would we want customers to do on objects in S3? It's basically bringing compute as close to the data as possible and then triggering it by changes in the data. And I've always been a big fan of loosely coupled architectures. And so we were thinking about what kind of primitives can be built, such that we can enable these new patterns that customers were trying to solve for, because it was way too complex. And so all the things that serverless does for customers, like what we just talked about, scale and reliability, things like that, we also wanted to bring to compute. And that is, in essence, the birth of Lambda. Now, this thinking happened way before 2014, because in the early days we could already see these patterns happening. We just didn't have the experience and the knowledge of exactly how to build this in a way that would be affordable for our customers. Yeah? Because of course, one part of being truly serverless, whether it's compute or not, is pay as you go. And customers with EC2 were paying as they go, but at that moment, I think they were still paying by the hour. And it took a while before it went to the minute, but still: whether you were doing any work on the EC2 instance or not, you had to pay for it.
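The pattern Werner describes, an upload to S3 emitting an event that triggers a small piece of compute, is roughly what an S3-triggered Lambda handler looks like. This is an illustrative Python sketch, not WeTransfer's actual code; the scan and compress steps are only noted in comments.

```python
import urllib.parse

def handler(event, context=None):
    """Sketch of an S3-triggered Lambda: invoked per batch of
    object-created events, instead of a fleet of idle EC2 pollers."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real handler would fetch the object here (e.g. with boto3),
        # run the virus scan, compress it, and write the result back.
        processed.append((bucket, key))
    return processed
```

The function only runs, and only bills, while an event is being handled; when no files are being uploaded, nothing is running at all.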
And so making that mental switch, that you only pay for actual execution time, was a radical shift in thinking about how we could change compute to be lightweight and nimble, and allow a whole new set of patterns to arrive.
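That shift can be made concrete with some back-of-the-envelope arithmetic. The rates below are invented for illustration only and are not current AWS pricing.

```python
# Hypothetical rates, for illustration only -- not real AWS pricing.
VM_RATE_PER_HOUR = 0.10     # a VM bills for every hour it is up, busy or idle
FN_RATE_PER_MS = 0.0000002  # serverless compute bills per millisecond of execution

def vm_cost_per_day(hours_up=24):
    # The instance costs the same whether it does work or sits idle.
    return VM_RATE_PER_HOUR * hours_up

def fn_cost_per_day(invocations, ms_each):
    # You pay only for actual execution time.
    return FN_RATE_PER_MS * invocations * ms_each

# A small task that runs 10,000 times a day for 100 ms each:
idle_fleet = vm_cost_per_day()               # pay for 24 hours regardless of work
pay_per_use = fn_cost_per_day(10_000, 100)   # pay for ~17 minutes of actual compute
```

With these made-up numbers, the always-on instance costs roughly an order of magnitude more than paying per execution, which is the economics the enterprises in this conversation noticed immediately.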
Now, you all know what happened since. And it's gone a bit further than where we were in 2014. And I think that has to do with how well it resonated with customers. Because pretty quickly we saw a significant uptick in the use of Lambda.
Strangely enough, mostly by enterprises. Mostly because, normally, when you build this very innovative new technology, you would think it will be the young, edgy startups, that part of the world, that would be first adopting it. But the enterprises immediately figured out that this was way too good a deal for them as well. Yeah? Because now suddenly you only had to pay for the compute you used. Most enterprises had never encountered that.
A typical enterprise data center gets 12, 15% utilization. If you're lucky. Which basically means that 85% of the energy that flows through their servers is useless. And so really going down to the smallest primitive and only having to pay for the execution of that, that also was a game changer. Rebecca:
So I love the idea of thinking about enterprises and being nimble. So often people don't put those two words together, right? But this idea of nimble compute, that that is what the serverless paradigm is, and with Lambda functions and everything that was already serverless, like you were saying, we're moving towards more nimbleness. And so I'm wondering: what were the different ways you saw customers using Lambda functions when they came out, or this larger, expanded paradigm of serverless, that continued to push AWS to launch even more services and grow that portfolio so much in the past seven, eight years? Werner:
Well, I don't think there is any one answer. I mean, so many customers, millions of customers. Big, small, you know, life sciences, oil, gas, e-commerce; there isn't a vertical that is not making use of AWS. And, of course, they all have their own challenges in that sense. But if I think, for example, about anyone that runs very large e-commerce operations: yeah, quite a few of those can really be encoded as serverless operations, because most of those are very small actions that you want to take.
Put this in my shopping cart. Yeah? Move the shopping cart to this particular state. Yeah. And so the operations on it are relatively small. It allows you to build, I mean, I wouldn't say truly microservices, but I think the smallest unit of compute that you'd want. Now, I think it has lots of other implications.
For example, your security surface, your attack surface, will become really small. And you can become way more agile in developing your functionality. Now, I've seen customers take mainframe code, which is often actually largely event driven, and start chopping it up, just taking it out of the mainframe and running the pieces as Lambda functions. Now, if it becomes popular, then of course, given that there's lots of development around it, you need to do all the languages. Yeah. And then everybody is happy with having Ruby, and then you go on to, why don't we have this? Why don't we have Rust? Where is Go? Rebecca:
Where is Go? Werner:
Hey, .NET 6 launched, was it a week ago? So yeah, we continue to make progress there. And so, coming back to what you actually asked about enterprises: I do think enterprises love efficiency. You know? And there are two sides of efficiency here. You don't need a big DevOps operation to babysit these, because after all, you don't need to manage instances. You don't have to replicate yourself over multiple AZs and automatically scale up and whatever, all the challenges you have if you have a VM-based approach.
Because, in essence, you're still doing servers. It's just not your server, not your physical server underneath, but you're still running what we knew as a server. It's still a Linux kernel or Windows. And, as such, you know, you can significantly reduce your staff, or have them focus on things that actually really matter for the enterprise, which is not doing this undifferentiated heavy lifting where nobody wins. So let Amazon do that for us. Let AWS do that for us. So I think enterprises really understand that part of efficiency, and then, indeed, only having to pay for what you've used is great. I think a great example in the earlier days, I think we had them on stage, is Ben Kehoe of iRobot. Why did they build a completely serverless environment?
Because the digital services that they were offering came for free. You buy a Roomba and then you get all this house mapping and all this other stuff that comes with it. You don't have to pay for that. Or at least in those days, you didn't.
So for their business, it was important to reduce every possible cost, yet have this wonderful digital experience around it. And they are an enterprise, after all. So they really went down the serverless path to make sure they could be as nimble as possible. Jeremy:
Right. And again, Ben Kehoe and iRobot, you always hear the story of, you know, Christmas day, where nobody gets woken up because it just scales as everybody opens their new Roomba for Christmas. But, you mentioned the smallest unit of compute, right? Which brings us to this, I'm very passionate about this idea of micro VM architecture. When AWS launched Firecracker, most people were like, 'Oh, I don't think I have anything to do with this.' I'm like, 'no, no, no. Think about this.' Right? So this is one of these things where, you know, this is as close to the metal as possible. And this gets us to this larger idea where, I'm kind of bothered by the term serverless containers.
I don't know why, I just am. But I feel like this idea of adding too many layers of abstraction between your code and the machine is just less efficient. It's not, you know, it's not where we want to be. So I'm just curious, what are your thoughts on serverless containers? And do you think that we need to move more towards this micro VM architecture? So that all of that security, all of that smallest unit of compute, is running as close to the metal as possible, to make these services more efficient. Werner:
Well, you really gave the answer. So, what do you want me to say? Rebecca:
I'll step in here. No, I'm kidding. Jeremy:
Just want you to agree with me. That's all. Werner:
No, I do think compute lives on a continuum. Yeah? So you start off with the VMs, then the containers, then your functions. Each has its own application area. I mean, we still have tons of software, especially if you bought it or had it made for your own data center, that just needs to run on a server. It just needs to run in a big VM.
And so on this continuum, you know, if you look at containers, containers still run on VMs. It's still the same VM underneath, and you have to manage those clusters. So that, again, is work that has no impact on your application. It's just stuff you have to do.
Again, it's heavy lifting. So if we can take that away, that would be great. Now, the roots of Firecracker are more underneath Lambda. Because when we launched Lambda, we were still using our regular compute infrastructure, which basically is VMs.
But if you look at the Linux kernel and the VMs, the attack surface is huge there. And also, there's so much overhead; there are so many devices in there, all these things that you absolutely don't need. Now, in a parallel track, we started working on Nitro, which does basically the same thing we do with virtualization, but then in hardware. We can come back to that in a minute. But basically we were taking the same approach: what is the minimum VM that you need so that you can guarantee security, and so that you can really optimize and manage, sort of, resource usage on your hardware? And that was the birth of Firecracker. You know, and I think it's massively increased the efficiency of Lambda. It drove the cost down as well, both for you guys and for us. And it also gave us a very good platform for, for example, working on how to get pre-warmed much faster. I mean, we would never have been able to do controlled concurrency using regular VMs. Yeah. So Firecracker initially originated for Lambda. Then we started looking at containers and saying, 'yeah, but these guys have the same problem.'
Yeah. They also have to run on big VMs; they don't run on small VMs. You know, I wasn't really that keen on the security posture: if you want to run a lot of these things next to each other, then you want to have better control. So I think that's really what Firecracker gave us. And do I believe that containers can be truly serverless? Yeah. Because it takes care of the same things. You know, if you have configured it right, we do the AZ spreading for you, we do performance management, we can have them grow and shrink.
There may be some more parameters that you need to set about, sort of, the kinds of things you want to do, but I do think containers are a great pattern for those engineers or those companies that want to move off the monolith that they have sitting in VMs, and actually start to think about modernizing their architecture.
Yeah. And so the decomposition of these big monoliths: if you look at it, you know, there are probably components in there that need to scale massively. Let's say your login or your security, or anything that hits your identity, for example, on each page that comes in. And maybe things that don't need to scale that much, maybe the shopping cart, which only gets used when you actually have to check out. However, in a monolith, if you need to scale to the component with the highest need for scale, you have to scale everything.
So you can think about, 'I can decompose for performance reasons, for scale. I can decompose for security reasons.' Just remember, I don't really want my shopping cart code to have access to the credential store for identity.
However, in one monolith, you don't have that separation. So there are multiple reasons, performance, scale, security, why you would want to decompose. And then the step going from there, from the monolith to a microservices architecture, if you want to call it that, is easiest to do with containers. And why? Because the development tools are almost identical. You can use the same compilers. You use the same frameworks. Things that you can do in a regular VM, you can also do in a container. Now, the tools for serverless development, serverless compute development, still have a long way to go until they are really at the same level as the tools we have for compute in VMs or compute in containers. As such, there is room for all of this.
Maybe there's a next phase, which is nano services or something like that, where you decompose your microservice into even smaller building blocks. And then you can do that with Lambda.
And these things all need to work seamlessly together. So I don't think you can see Lambda without API Gateway. I mean, the two are interlinked with each other. Yeah. And you'll also use API Gateway to talk to your containers. So maybe, you know, some of your URLs go to Lambda and others may go to containers. Jeremy:
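That split, one API Gateway fronting both Lambda functions and container services, can be sketched as a simple route table. The paths and target names below are hypothetical, and real API Gateway routing is configured in the service rather than written as application code; this is just a stand-in to show the idea.

```python
# Hypothetical route table: some paths go to Lambda, others to containers.
ROUTES = {
    "/cart":   {"integration": "lambda", "target": "cart-function"},
    "/search": {"integration": "http",   "target": "search-container-service"},
    "/login":  {"integration": "lambda", "target": "auth-function"},
}

def backend_for(path):
    """Longest-prefix match, a simple stand-in for gateway routing."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    if not matches:
        return None
    return ROUTES[max(matches, key=len)]
```

Routing at the gateway is what makes a gradual migration possible: one path at a time can be repointed from a container to a function, or back, without clients noticing.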
Alright. And you get that strangler fig pattern too, right? So you can. Werner:
Yeah, but it's also the reason why I bring up Nitro: we continue to think about what is the minimum set of functionality that we need to have on the box. Right? And in this particular case of Nitro, what we did is we started offloading it all to boards that we build ourselves. And that allowed us to build a VM that is absolutely minimal. Or at least the Dom0; it still runs regular VMs on top of that Dom0. We can now use our own hypervisor, and that one is minimal compared to the old Dom0, which used to be a complete Linux kernel. Rebecca:
I love this. You're talking about efficiency across different dimensions, right? Is it the dimension of performance? Is it the dimension of security? Well, security is always, like, priority zero, as we know. And so you're talking about decreasing overhead. And I love how you also say there's so much further to go, in terms of this technology.
'Cause I feel like people already feel like they have come so far, and they're like, 'oh my gosh, where are we going next?' And you're like, 'Listen, we're going nano, micro, mini.' But you made this prediction that something else will become more efficient, right? It's this move toward sustainability, and that sustainability will get its own architecture. And I'm wondering if you can talk a little bit about what you mean by sustainable architectures, how these things fit together, and how serverless fits into that? Werner:
Oh, well, the first thing, of course, is that sustainability and efficiency are kind of linked to each other. If you start running ARM processors and you suddenly reduce the cost by 40%, you probably also have used less energy.
But that's not all. There's the whole list of things we have been doing since the first days, or even maybe before AWS, where we've done so much innovation inside our data centers.
We are the largest wind farm operator east of the Mississippi or something like that. And so we've been investing in these massive, power parks, whether it's solar energy or wind energy, to make sure that every electron that we take in is a green electron.
So one way of thinking about sustainability is that, just as that we have for security, there's a shared responsibility model. There's parts of let's say the lower layers where we talk about data centers and hardware and chip development and things like that, where we've become extremely efficient because we've done so much innovation in that space.
It also helped us drive costs down, of course, for our customers: the way that we do power recycling, water recycling, and moving power through the data centers. And so we have a whole series of options for our customers there to pick from. Yeah. Which hardware do you choose, which containers do you choose. And, of course, we need to make sure that our customers are aware of, sort of, their carbon footprint when they make certain choices.
And I think there's a lot more work going to happen in that space, but the carbon footprint tool already gives customers some insight into their historical carbon footprint. Then, on top of that, there's the layer where our customers have to make decisions. And I think a good example there is if they use the sustainability pillar of the Well-Architected Framework. What technology should I be picking for the particular solution that I have, if I have these sustainability goals? Yeah. There are hardware choices, or do you choose Lambda or do you choose containers under certain conditions, or should I run my own database, or should I use RDS or Aurora? Those kinds of choices are choices that you make at the, sort of, technology selection level.
Then there's also the architecture level. How am I building this? Am I able to stitch multiple Lambdas together in a, sort of, event-driven architecture? Or, you know, how do I think about resilience? Do I really need to run in multiple AZs? Because, of course, that's how we've grown up, because we gave you these AZs. I mean, we're very proud of that, and you can build these highly fault-tolerant applications on top of it.
But we don't all process financial data or life sciences or medical data and things like that. Under a number of conditions, you know, you will be fine for this particular subset of your application to just do failover and have a one-minute outage. And so that might actually significantly reduce your usage. But then I also think that we need to start thinking not only about, sort of, that architecture, the impact on your customers, but also what do we present to our customers?
We've all become addicted to very heavyweight webpages, massive amounts of video and imagery and the like. And you know, some webpages literally are tens of megabytes. The question is, if you think about it from a sustainability point of view, could we do with less? And as such, you know, save on storage, save on compute, save on bandwidth; you might do that purely from a sustainability point of view. And I think we need to start to reflect on whether how we've built our applications is the most sustainable way. Assuming that that is something that you want to pursue. Jeremy:
Yeah. And I think that has to do with this idea of resiliency too, right? Like, if you can build resilient applications, you know, if part of it goes down, is there a way that you can start serving, you know, a little bit more of it? And I know you're a big fan of distributed systems and I can't remember who said the quote, but somebody said something like, "Everything fails all the time." I can't remember who said that, but essentially, that's true in distributed systems, right? You always have information flying around.
And one of the things that has sort of come back up, you mentioned event driven applications were sort of, originally, you know, mainframe things, but this idea of EDA has come back big time now. Especially with, you know, building a tiny Lambda function that does this, or having to share information across multiple microservices and things like that. So I'm just curious about your thoughts on where we are with event driven applications now, or at least the tools that AWS has built to do this. And is there more investment? Are there more things we have to do to continue to enable people to build better event driven applications? Werner:
So I think, if you just look at the Lambda ecosystem itself, Layers, SAM, all these other tools, they actually all have helped us become more efficient. But I also think you have to look at the complete ecosystem around it. API Gateway, EventBridge, you know, and then take everything else. Take SQS, take DynamoDB, take S3. I mean, all of them are serverless. And the integration between them is always important, but also: how much easier can we make it to build solutions on top of this? Because in essence, you know, we can talk a lot about this one function that you want to build, but in the end, our systems are slightly more complex than that one function.
Yeah. And so I think Step Functions have become a crucial tool in all of that. Because again, you build this Lambda, and it's fun because you thought, 'oh, you know, you deliver this file, or this message in this queue or whatever, and this one function gets triggered.' Well, it turns out it's never one function, and then too many customers had to start doing all the heavy lifting again. They had to figure out, 'oh, has this function failed? What are the steps that we need to take after this function has failed?' And so building Step Functions, for example, has, I think, greatly improved the composition model for Lambda. But I can never see Lambda separate from all the other pieces that we have, because I think serverless is just as important for those areas.
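The failure handling Werner mentions is expressed declaratively in Step Functions using the Amazon States Language. The sketch below shows the shape of such a definition as a Python dict; the function ARNs and state names are hypothetical, chosen to echo the file-processing example from earlier in the conversation.

```python
import json

# Sketch of an Amazon States Language definition: retries and failure
# paths are declared here, instead of hand-written in each function.
state_machine = {
    "StartAt": "ScanFile",
    "States": {
        "ScanFile": {
            "Type": "Task",
            # Hypothetical function ARN, for illustration.
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:scan",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Quarantine"}],
            "Next": "Compress",
        },
        "Compress": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:compress",
            "End": True,
        },
        "Quarantine": {"Type": "Fail", "Error": "ScanFailed"},
    },
}

# The definition is uploaded as JSON when the state machine is created.
definition = json.dumps(state_machine)
```

The point of the composition model is visible in the `Retry` and `Catch` blocks: the "what if this function fails?" logic lives in the state machine, not in every caller.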
I mean, now there's Redshift Serverless, so you no longer have to figure out exactly how many of these clusters you need. Or, you know, Aurora, RDS. Take RDS. RDS is also one of these sort of old-fashioned things that we started off with. Yeah. In essence, you know, we had object storage, we had network and security and things like that, EC2, and the database, because everybody needed a database. But in the beginning, RDS definitely still meant that you had to manage your database. Anything you wanted to do: do you want to scale up? Do you want to scale down? Things like that were impossible. Now, it was also software built by other people, of course, largely, because it was MySQL or Postgres. But then moving those to a serverless architecture means, essentially, 'hey, you know, we take away the heavy lifting around it.' And then it also gave us the opportunity to do this massive innovation under the covers that became Aurora. Because I think the sort of log-based file layout is a complete departure from how we used to build relational databases anyway. Rebecca:
So thinking about, Werner:
No, but good. As I said, for me it still holds, you know: I still get annoyed by every piece of AWS that is not serverless. Rebecca:
That's it. That's actually the title of this episode: I still get annoyed by every piece of AWS that is not serverless. Jeremy:
Today's podcast is both literally sponsored by Common Room, the intelligent community growth platform that helps you deepen relationships, build better products, and drive business impact, and figuratively sponsored by it, because my favorite co-host, Rebecca Marshburn, is the head of community there. Today's fastest growing companies that you know and love, like Grafana Labs, Temporal, Confluent, dbt Labs, Imply, Webflow, and Atlassian, use Common Room to grow, engage with, and support their communities. With Common Room, they can see who's interacting across all of their different community sources, including GitHub, Slack, Stack Overflow, Twitter, and Discourse, to quickly understand things like how people are feeling about a new feature release, who needs product help, or where there's a bug.
Common Room also delivers granular insights by providing filters that allow them to find community members by specific skill sets and contributions, like who uses which programming languages, who made more than 25 pull requests, and who's creating excellent content that should be rewarded and amplified across the community. You can try out Common Room for email@example.com. Rebecca:
So you're talking about service evolution, right? And obviously all these services evolve. And I'm curious about the serverless launches at re:Invent last year in 2021, like SageMaker Serverless Inference. I think sometimes these things happen, right? Where AWS launches them in preview, knowing that there's a full product vision, and it will continue to evolve.
And then at some point it goes GA and that's not the end of its evolution. Obviously it keeps going, depending on what customers need and what you're hearing. But I'm curious if you can talk about some of your, some of the serverless launches from re:Invent that came out in preview and how they're moving toward that evolution as they reach their full vision that you all had for them. Werner:
Well, having a full product vision is a bit of an overstatement, yeah? Because if you look, sort of, two years after the launch: is the product still the same one that we went GA with? No, absolutely not. And I think that's a little bit of an old style of thinking, and I'm not blaming you for that, but it's the sort of thinking from when software releases came in really big batches. And I'm not even talking about patching and bugs and things like that; just a release every two years, maybe? Something like that.
That works really well if you have total control over the ecosystem. You're the one who decides what it's going to look like, nobody else cares, and everybody follows your rules. Now, I think, you know, Microsoft has done an amazingly great job at that over the years. And indeed, you know, when you switched from, whatever, Win32 to .NET, yeah, it was a radical departure. But you know, it came with all the guidance. It came with lots of documentation, again, with lots of community and all these things around it to get that done. And it was sort of the old style. And that's fine, as long as you control the complete stack.
Now, when we started the cloud, one thing that we really wanted to do was make sure that we wouldn't build in the things that were relevant two or five years ago, when you started architecting this really big piece of software, but that we would again be nimble, focused on primitives, yeah, instead of these big blocks. Well, that meant that we needed to create a culture in which we would get things into the hands of our customers really quickly, and then see what they were going to do with it. Because to be honest, you know, we didn't know how people would want to develop in 2025.
And I think between now and 2025, there's a lot of cool stuff still going to happen, based on how, let's say, modern development is happening. And as such, we don't want to be the gatekeepers that tell you what you can and what you cannot do.
Yeah. But that means that we do need to get things into your hands pretty quickly, and then see what you would want to do with it. Yeah. So when we launched Dynamo, we already knew customers wanted secondary indices. But we left them off when we launched it.
Why? Because first of all, it was quite new technology. People hadn't really used it before, as such. We wanted them to get their hands dirty, and then see what they were doing with it and what they would start to complain about. And as such, the culture within AWS is such that we basically allow our customers to reorder our roadmap.
And for Dynamo, for example, it became clear that customers wanted certain security features with a much higher priority than they wanted secondary indices.
And then what you start doing is sort of looking at what customers are doing with your product. Where are they actually putting work in that you think they shouldn't be putting work in? Cross-region replication for S3, for example. Customers were doing that by themselves.
Or, what we saw quite a bit before we changed the consistency model in S3: we saw that customers were doing all the work to make it strongly consistent, where actually we should have been doing that for them. Yeah. And, as such, you know, you continuously look at your customers.
They may not even consider it to be heavy lifting. But we can look at it and go like, 'Yeah, but it's not only you doing it. I mean, Fox is doing it as well, and others are too. So maybe we should fix this for you.' And we can do that because we're not tied into this one big architecture that we decided on 5 or 10 years ago. We are continuously looking at how modern development is happening and what we need to do. And that allows us, even after GA, to continuously evolve the product. I think the official number is that somewhere between 90 and 95% of all features and services in AWS are driven by customers. Not us.Jeremy:
So another thing that I know you're a big supporter of, Werner, are all the AWS community programs, like the Heroes program and the Community Builders. You gave that awesome award, the Now Go Build Award, to Matt Coulter on the stage at re:Invent 2021.
Which was awesome. He's done so much amazing work. And I'm just curious, you know, what is it about these communities that encourages all these developers to, you know, sort of learn and advance their careers and just build new things? Why is that so important to you? Werner:
Well, first of all, I think, you know, we're so fortunate as developers. Yeah, I think we have the most creative jobs in the world. We can go to work every morning, or wherever we work these days, and create something new. Or learn a new technology. And, as such, learning is a crucial part of my job. I don't know how many people are still stuck on C++ just because they don't want to learn any new languages, but I don't think there are that terribly many.
And as such, you know, we learn new programming languages, we learn new frameworks, we start thinking about how to architect things differently. We are continuously learning. And there is no better learning than from your peers. Now, of course, I can stand on stage and lecture you: this is how thou shalt develop software. But it's much, much more interesting to hear about the things that didn't go well, the things that were hard or that you had to work around. You know what? Your peers probably know that. And I've been very fortunate to have such a huge group of people that are so passionate about our technology, that are willing to brag about it and talk about it or do podcasts about it.
That is so valuable, because you probably learn more from your peers than you will learn from us. Yeah. The really hard parts. That's the reason why you go to Stack Overflow to actually copy and paste some code. Yeah. And you can see Stack Overflow, or platforms like that, as a community.
You ask a question, you get an answer.
Or you copy and paste something. But yeah. And plus, you know, as such, I think it is important for us at Amazon to make sure that the people that work in our community, or that are our Heroes, actually get the right tools. Or get early access to the tools, get their hands dirty, such that they can actually be valuable players as well. Yeah. And, I don't know, Jeremy, whether you've written a book already, but I saw that Coulter came out with a book on CDK.Jeremy:
Right. No book for me yet. Werner:
So there may be some other perks around it as well. And you know what, I don't know. As technologists, we also like to be heroes. We like to help other people. And as such, I think these programs are tremendously important, not only to AWS, but definitely to all the other customers.Jeremy:
So, Werner, thank you so much for being here and just sharing all this knowledge with, you know, with our community. And, of course, all the other work that you do. if people want to sort of, keep tabs on all the amazing things that you're doing and Amazon's doing, or even maybe share some of their customer success stories with you, what's the best way for people to do that?Werner:
[inaudible] a good answer.Werner:
No, to keep track of that, there's a whole series of blogs these days at AWS. And one of my favorites, actually, is the Amazon Science blog.
We talk a bit about hardware stuff there that doesn't necessarily look immediately applicable, but that sort of [inaudible]. For example, all the posts that have gone up there on automated reasoning: all the tools that sit underneath Inspector and the Reachability Analyzer and Macie and all these kinds of tools, the fundamental science underneath that. We have a whole blog on that. And so the whole series of blogs that we have at AWS, I think, are an excellent source for that.
But yeah, you also need to be able to get your event-driven ping. And as such, I think platforms like Twitter are ideal for that, to sort of follow those four or five official AWS spokespersons there. And I think you'll get a good handle on the kinds of things that are happening.Jeremy:
Awesome. And then, of course, you have your blog, allthingsdistributed.com. Thank you again. And one thing I just want to say before I let you go: you know, the work that you've done over the years at Amazon, for me and millions of other AWS developers, I mean, this is our livelihood. I don't even know what I would be doing if serverless wasn't a thing, and all of the stuff you've built.
So, thank you on behalf of me and all the other developers that, you know, make our living off of the amazing things and innovations that have been created. Thank you again for that. We'll get all of these links in the show notes. Thanks again, Werner. It was great. Werner:
Again, remember it's still day one. You've got a lot more things to do. Jeremy: