Episode #130: Serverless Framework v3 with Matthieu Napoli and Mariusz Nowak

March 28, 2022 • 47 minutes

On this episode, Jeremy and Rebecca chat with Matthieu Napoli and Mariusz Nowak about the latest major release of the Serverless Framework, how they streamlined the CLI developer experience, the new params feature, the future of the Serverless Framework, and much more.

About Matthieu Napoli

Matthieu is a software engineer passionate about helping developers to create. He’s the founder of Null, and currently a Senior Product Manager for Serverless Framework at Serverless Inc. Fascinated by how serverless unlocks creativity, he works on making serverless accessible to everyone.

Apart from consulting for clients, Matthieu also spends his time maintaining open-source projects. That includes Bref, a framework for creating serverless PHP applications on AWS. Alongside Bref, he sends a monthly newsletter containing serverless news relevant to PHP developers.

After years of talking at conferences and training teams on serverless, Matthieu created the Serverless Visually Explained course. Packed with use cases, visual explanations, and code samples, the course focuses on being practical and accessible.

About Mariusz Nowak

Mariusz has been involved with full-stack development of web applications since 2004 and is actively engaged in the open source community. He has developed and published many JavaScript tools and modules, which play an important part in the implementation of the modern web applications (client- and server-side) that he's worked with.

He also implemented a light, highly configurable, in-memory database engine that allows decentralized, network-independent, and (while a network connection is available) real-time distribution/replication of database data: https://github.com/medikoo/dbjs.


Transcript

Jeremy: Hi everyone. I'm Jeremy Daly.

Rebecca: And I'm Rebecca Marshburn.

Jeremy: And this is Serverless Chats. Hey Rebecca!

Rebecca: Hey Jeremy, how are you doing this morning?

Jeremy: I'm doing well, but you are in an interesting place right now.

Rebecca: I am. This podcast recording is coming at you live from Anchorage, Alaska, in the Spenard neighborhood. So that's me.

Jeremy: That's awesome. I am sitting in my office like I usually do, listening to electricians make a lot of noise in my basement. So we will do our best to get through this podcast, but we have an awesome podcast today. We have two amazing guests who are from Serverless Inc., where I work. These two gentlemen work on the Serverless Framework team.

And there was recently a launch of Serverless Framework V3. So our two guests today are Matthieu Napoli and Mariusz Nowak. Gentlemen, thank you for joining us.

Matthieu: Thank you.

Mariusz: Thank you. It's great to be here.

Rebecca: So, before we dive into questions around Serverless Framework and Serverless Framework V3, why don't you tell us a bit about yourselves. Matthieu, will you kick us off?

Matthieu: Yeah, sure. So my name is Matthieu. I work on the Serverless Framework with Mariusz, as a product manager on the Serverless Framework specifically. Beyond that, I've been working with serverless technologies for a few years. I absolutely love these technologies.

I've been working on a few open source projects as well, like Bref, which is about PHP and serverless. And I've also been an AWS Serverless Hero for, yeah, more than a year now. That's about it.

Rebecca: Oh, that's it? Just a small little thing?

Matthieu: Oh, ya. I mean.

Rebecca: And Mariusz, what about you?

Mariusz: Yeah, sure. Yes. I'm Mariusz Nowak. I also work at Serverless, Inc. with Matthieu.

I'm currently an engineering manager at Serverless. I work mostly on the Framework, but also on the Console product. It's been about three years that I've been involved with the Framework as an employee at Serverless, but about five years that I've worked with the Framework. And yes, for many years I've been mostly a JavaScript programmer, like down to the core. And yeah, that'd be the short introduction, I guess.

Jeremy: Awesome. Well, so let's talk about Serverless Framework version three. This is a recent release. So before we get into some of the details, maybe Matthieu, we could get your perspective on this: what was sort of the motivation behind this new major release?

Matthieu: Right. It was kind of twofold. On one hand, we really wanted to simplify the developer experience. We'll be talking about the details, but that meant simplifying the CLI experience and adding features, like quality-of-life improvements. And on the other hand, with a major release there's still the chance to clean up a few things, both internally for our own benefit, but also for users' benefit: cleaning up old behaviors and features that were deprecated, and so on.

Rebecca: Will you tell us a little bit more about that? So you said that you have this idea of like simplifying and cleaning different parts up about it. There's also a completely new CLI experience. So will you tell us about that as well?

Matthieu: Right. So the idea with the CLI is that the Serverless Framework is a very mature project; it's been around for years. And of course, over the years, the whole experience with the CLI got more complex: more and more information, because we obviously kept adding features to it. The idea with V3 was to kind of start over with what you get when you interact with the CLI in the terminal, and try to remove as much as we could.

So it was really hard. Like, what can we remove? What should we make more visible? What's the actionable information, the most useful part we want to emphasize, depending on the command you are running? Whether you're deploying, invoking functions, fetching logs, all of that. And thinking about, how do we format the output as best as we can for that specific use case?

So we've redesigned the outputs of the CLI basically. And we can get into the details.

Jeremy: Yeah. I mean, I think, you know, I'd certainly love to get Mariusz's perspective here, too. Cause I look at the pull requests on GitHub and the comments that you put in. You are so detailed; honestly, I don't know how you keep all of that in your head. And I know that, over time, like you said, Matthieu, the CLI just got bloated, right?

Like, we just kept on adding things. We had the Tencent component that was integrated in, and then there were Components for a while. So this thing just got really, really, really big. So, besides the fact that we've simplified it, I think one of the things now too is that it's like a 40% lighter package. So, I don't know, Mariusz, you want to talk a little bit about some of the things that were cleaned up in there, and some of those package size reductions?

Mariusz: Yeah, sure. So the thing is that technically Serverless Framework V2 was one CLI, but it consisted of three very different CLI programs. There was the Serverless Framework itself, but on the other hand we also bundled Serverless Components version one, which was totally a different CLI, and Components V2, which was another totally different CLI that also included the Tencent version.

So the big problem was that it was two, three very different products, maintained differently, with different sets of dependencies. And some of them were not really actively maintained for the past months, and they carried some security issues, et cetera. And as we didn't have plans to evolve them further, we decided to actually drop those dependencies from V3.

They can still be installed by users, but they need to use them directly; they are not part of the Serverless CLI now. And one thing to add is that the serverless-tencent CLI still needed to be part of the serverless package, but we didn't want to burden regular Framework users with that dependency.

So now it's installed on demand. Whenever we detect that someone is using Serverless with Tencent (mostly in China, it can happen), we download it on first usage and just, you know, redirect the command to it. So that's what happens. In the same way, we removed the tabtab autocompletion.

That was one minor part of this cleanup. The thing with the tabtab autocompletion was that it carried some security vulnerabilities, and it was also no longer maintained. And as our CLI currently doesn't have a really prompt startup time, which is quite important for tab completion to be effective, we decided it wasn't worth the hassle. We also didn't see much usage of it in our usage tracking. So basically, we removed the CLIs that were technically not really used; what's left is mostly the Serverless Framework, which is what people are really using. And in that way, it's also 40% lighter now.

Rebecca: So before we get into it, there is a really big feature that was added to the Framework itself, and then there are obviously more deep details that we could get into about the new CLI experience. I love how both of you said, you know, we redesigned it: we noticed there was a lot of bloat, we knew that there were different things happening that made it too complex. I'm wondering if you could talk a bit about the almost like bellwethers or the indicators of those things. So for example, was it users that were saying like, 'Hey, this is getting really difficult to use.' Or was it customers that were like, 'Hey, we want to use this, but now we're trying to, you know, onboard teams to it and they're getting stuck or lost.'

So, if you could talk a little bit about like that, when we redesign something, right? We're redesigning it for the person using our product. So maybe some of those like inputs into, like, okay, how do we start to break down what the issues are, where people are getting stuck? What feels too bloaty? Or is it something that just, you knew already from internally trying to use it yourself?

Matthieu: Yeah. That's the great thing about working with people that love serverless. I mean, if you work at Serverless Inc., you are kind of invested in serverless. So many people at Serverless Inc. use the Serverless Framework. Everybody has an opinion, and it's really interesting to work with that. It's fun. But it's also challenging, because when you talk about designing anything, everyone has an opinion.

So it was kind of a fun process where we had to iterate. Everyone, even users we talked to, kind of wanted a new, simpler design. We had ongoing discussions about deprecations, for example, which were kind of verbose and visible in the V2 output. But also a lot of details: when you deploy, you have different details and steps going on, with logs as well, kind of noise in the output, too.

So we had all of these kinds of signals. Everyone had an opinion, and then we tried to collaborate on the design; we had some iterations internally on different proposals, trying to get feedback from everyone. We also had issues, and maybe Mariusz can talk about that, that were a bit more technical but quite interesting as well, about standard error output versus standard output, and how we improved that part.

Mariusz: Yeah, sure. Technically, the old logging was pretty primitive; everything was just flushed out. It was a bit configurable, but not in any really great manner. What we achieved with the new logs, also thanks to Matthieu's great design for them, is that we're now providing very slick output, but it's also very configurable.

If users still want all of the information that was there before, they can easily get it by raising the log levels. Like, we have the verbose flag for more verbose output, which will give you technically all of the output that was in V2. You also have a new debug mode, which serves more the developers of the framework and plugin developers. Before, V2 was outputting this debug information in some scenarios to regular users, which was totally noise for them, not really comprehensible, and it really polluted the picture for them.

So now it's very well organized, and it's configurable. And as Matthieu mentioned, we also separated the substantial command output, which goes to stdout, from the message logging, which goes to stderr [inaudible] to be processed. We had requests from users over time that they want to, for example, consume logs programmatically, or the result of a local invocation.

And it was not really possible before; with "sls print", for example, it was not really possible, because some warning messages were also getting into the output. And now, with clearly separated streams, it's all possible.

Jeremy: Yeah, and I think that's a huge sign of maturity of any product. You mentioned maturity or, you know, just that it's been around for five some odd years and all of this user feedback, all of these people hitting up against it. I mean, one of the things that's added here is just strict handling of the parameters, like a validation on parameters that are going in, which is just a really nice feature.

There's nothing more frustrating than putting in a parameter, and it's not right, and you're just getting error messages back and you're not sure what it does. But also that simplification of the logs. Now you just get back exactly what you need, right? Just the feedback that you need, and if you really need to push further, then you have, like you said, that verbose mode and all these different modes. But the motivation is really limiting the logs.

And just getting to the point where this is what you need to know, sort of that minimal approach. Like, what was some of the process that you went through to narrow that down?

Matthieu: Yeah, it was challenging. Just to take one example, the errors. So you have an error, and you try to be as specific as possible, try to order the information in the right way. We went through, again, different iterations: like, okay, how do we use color effectively and not have too many colors? For errors, we want them to be really in your face, like a red cross and the error message extremely visible, and then have supporting information, like where can you open a bug issue?

How can you get into the forums to ask your questions? How can we output the version information, so that if you report a bug, we have the right version in there? Fitting all that information in was kind of hard. It took a few tries. With the deployment itself, we also figured the idea is to show information that is useful.

So the deployment goes through so many steps, and the question is, do you need to see all those steps by default? We settled on no; you need to see information when, for example, a step takes really long. You need to know what's going on, you need to be reassured that the thing is not stuck or broken. So we show the steps that take a long time, like maybe packaging or uploading the files to AWS.

These we show. And as soon as that step is finished, we replace it. So it's kind of an interactive output and we show the next step. And there are things that we intentionally skip because you don't need to know about it. It doesn't bring any useful information. And when the deployment is done, we show again like the URL, because this is what you want to see.

If you deployed an application, you want the URL and you want to see the confirmation that everything went well. So you have the check mark and something that explicitly says service deployed. These kinds of things.

Jeremy: Yeah. And the thing is, you know, working with Austen, who is the CEO of Serverless, Inc., you have to be very particular about the colors that you use on the CLI. But you're totally right; with CLIs, by the way, it's not like you're designing a webpage, right, where you've got all these design choices. You're very limited in terms of what you can do. But I think the team nailed it. It really is the right balance of colors and information and output to let you know that things are working.

Rebecca: So let's talk about the big feature that was added to the framework itself, which was actually, I won't give it away. You tell me what it was, that way you can do the big reveal and then tell us what it's all about.

Matthieu: Right. We call the feature 'stage parameters', and this is a feature that was actually inspired by what some users already do. They have the need to deploy the application to different stages: development, staging, production. And they need to change the configuration based on the environment. So in production, you want to send emails and enable transactions and everything, but in development, you don't really want to do that. That's just one example, and it is possible with V2.

You can figure out how to make it work, but it should be simpler. It should always be simpler. You shouldn't have to mess with nested variables and come up with your own way of solving the problem. So we figured we could ship a very small feature, again a quality-of-life improvement, that solves that problem. You can define parameters and values for each stage, and override them. So by default, I don't know, say sending email is disabled, and then in production it's enabled, or stuff like that.

Mariusz: Yeah. And what's also, I think, nice to add is that it kind of overlaps with an improvement to the handling of CLI params, because we now restricted the CLI params to only those defined by the schemas. Users were using all those free-form CLI params in further configuration variable resolution, and we wanted to actually restrict that, to have this validation right. So, you know, nothing that's not intended to be deployed gets deployed, for example. And now with params, since version 3.3, we also allow passing those params through the CLI. That way users can still use arbitrary values: through the param CLI option they can pass the name and the value of the params they intend to use, and those will be resolved in the configuration, as they have top priority over configuration params and, eventually, dashboard params.
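For readers following along, here is a minimal sketch of what stage parameters look like in practice. The service name, the sendEmail param, and the stage values are illustrative, not taken from the episode:

```yaml
# Illustrative serverless.yml fragment: top-level 'params' defines values
# per stage, and ${param:...} references them anywhere in the config.
service: my-service

params:
  default:
    sendEmail: false      # used by any stage without an override
  prod:
    sendEmail: true       # production override

provider:
  name: aws
  runtime: nodejs14.x
  environment:
    SEND_EMAIL: ${param:sendEmail}
```

From v3.3 onward, a param can also be supplied at deploy time, for example with "serverless deploy --stage dev --param='sendEmail=true'", and CLI params take priority over those defined in the configuration.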

Jeremy: Yeah, and I think you're underselling it as a minor feature, because honestly, every serverless application I've ever built with the framework uses custom variables. And then, you know, you create objects under those, like these are the prod ones and whatever, and you're repeating variables and things like that. And then anywhere you reference it, you have to have those nested variables, nested variables, and so forth.

And that just becomes super confusing. So this just makes it, in my opinion, super clear. So don't undersell it. I think it's a really good feature. And then I think the other thing that sort of happened as part of that was that you rewrote the variable resolver, the entire variable resolver behind the scenes as well, right, Mariusz? So, can you tell us a little bit about that?

Mariusz: Yes. Yes, correct. We had the old resolver, and there were tons of issues with that. I mean, the old resolver was regex-based. It was pretty flaky. Users could also override this regex, so in some scenarios not all expected characters were recognized. The other problem with the old resolver was that, for example, a function that was supposed to resolve variables would receive the framework instance, which was in an unresolved state.

So it had properties that were not yet resolved by the resolver. [inaudible] It was quite a complex problem, but we really wanted to get that right, as we would constantly, you know, get issue reports about very quirky issues with the resolver. So, technically, I rewrote it.

I didn't actually use regex resolution, but more like a state-machine resolution. So the syntax is now very, very strict. There's prevention against recursive resolution; recursive resolution would make the old resolver just hang indefinitely. And yes, there were many other things fixed there. Like, some core configuration properties, such as provider or plugins, can now be set behind variable resolution. This variable resolution also now happens at an earlier stage, to give more powerful means for resolution, so we can technically put variables against almost all of the configuration properties.

We improved the error handling. With the old resolver, if there was a crash during variable resolution, for example, it was just silent: there was just a warning printed, while now we actually crash. Also, when the variable could not be resolved at the target source, it was silent as well.

Now we let you decide whether that's a crash, or whether it should be, like, cleanly overridden [inaudible]. So I think it's all really, really solid now. Of course, we had some issues with the new resolver at the beginning; we had some bug reports come in, because this is quite a complex engine. But at this point, I'm not aware of any issues with it, and I think it's pretty solid and works really well.
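As a rough sketch of the variable sources and strict syntax the new resolver handles (all names here are made up for the example):

```yaml
# Illustrative serverless.yml fragment: the v3 resolver supports defaults,
# cross-references, and fails loudly instead of silently on bad variables.
service: demo

provider:
  name: aws
  stage: ${opt:stage, 'dev'}                # CLI option with a fallback

custom:
  tableName: ${self:service}-${sls:stage}   # self-reference plus a built-in

functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: ${self:custom.tableName}
      LOG_LEVEL: ${env:LOG_LEVEL, 'info'}   # env var with a default value
```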

Rebecca: And I'm guessing, so does this have any effect on plug-ins and plug-in authors, like anything that people should know?

Mariusz: Well,

Rebecca: question...

Mariusz: Yeah, sure. Plugins can add their own variable sources, and it was the same with the old resolver. Although with the new one, they have, I think, a nicer interface, and they also have access to all the other configuration properties, already resolved. Assuming they do not introduce some recursive resolution, but they will get a meaningful error if they try to do that.

So they have a more solid, more consistent interface for that. There was just one issue, actually, for plugin authors: now the whole variable resolution happens earlier. But I think it was not really related to the variable resolution itself, but more to environment variables, so I'm not going to dive into that. I think it's just, you know, all the positive aspects of it.

Jeremy: Yeah and again, the plugins are like, one of the things that make the Serverless Framework so great, is that there's so many things you can do and you can just extend it yourself. So it's great that if you're a, you know, if you're a company or a team and you want to do all custom things just for you, you can do those right through the plugins.

But one of the things I wanted to mention, because this is something that was really big for me: when EventBridge first came out, I immediately was like, 'oh, let's go ahead and deploy it.' And I'm like, 'oh, there's no CloudFormation support.' I'm like, well, that's crappy, which, you know, happens every once in a while. But then all of a sudden Serverless Framework's like, 'oh no, no, no, we support it.'

So I do that. And then I see there's an extra Lambda function that gets deployed, cuz it's using custom resources, and I'm like, 'oh, this is not the way I want to do it.' But at least Serverless Framework supported it. And now CloudFormation support exists for EventBridge and stuff like that. So that's one of the things that I know you changed.

So now it uses CloudFormation to deploy EventBridge. But are there other little changes? I mean, there were so many things that were done as part of this, but what are some of the other little tiny things that were probably bugging people? You mentioned the tabtab completion was removed. Anything else, like, you know, little features that were sort of added or plugged in?

Mariusz: Well, a few things, definitely. For example, we needed to update the runtimes supported by AWS, because we cannot just remove them from the configuration without making a breaking change, so this is what we cleaned out. And we also made Node.js version 14 the default.

And the same with Python, although in the Python case it was not about the default. It was actually that we no longer recognize runtimes that are no longer supported on the AWS side. It's a slightly different thing. And there are other, really rather small things.

I think, for example, we introduced support for the HTTP API in V2, but it was kind of missed that we didn't support provider tags being applied automatically to it, while in the case of API Gateway they are. And this is a very minor thing, but with V3 we fixed this issue and they are now applied by default as well. In V2 it's behind a flag.
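A sketch of the tags behavior being described, assuming provider-level tags are the setting in question; in v2 the propagation to HTTP APIs sat behind an opt-in flag, while in v3 it is the default:

```yaml
# Illustrative fragment: tags defined once at the provider level.
provider:
  name: aws
  tags:
    team: payments          # in v3, applied to the HTTP API by default
  # httpApi:
  #   useProviderTags: true # the v2 opt-in flag for the same behavior

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi: 'GET /hello'
```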

We also improved programmatic usage of the framework, although it's not something that was really requested by users. But I think it's not requested because the Framework was very, very painful to use programmatically, and I think everyone who tried it just stumbled on it, resigned, and treated it as a CLI.

Mariusz: So now, currently, the framework still needs some steps to be really perfect at that. But when you require the framework and initialize the serverless instance, you just pass the command name, the options, and the configuration, which should be already resolved at this stage, and you can initialize and run the instance. It will also not produce any console output, for example. So it's really clean; it's now way cleaner for programmatic usage.

Rebecca: I probably should have asked this in the beginning, but I've heard you all speak about quality of life improvements, right? And I actually have only heard this like a few different times, in terms of like building software. And so maybe it's like used all the time, but maybe it's also something that you all talk about internally a lot.

And so I'm wondering, right? It sounds like all these improvements, like you're reducing complexity. You have, what was it, stage parameter support, right? You have excellent, correct color usage. You're making sure plugin authors are taken care of. Do you have, like, an internal board or stack ranking of quality of life?

Like, if I could change this, it would help me save this much, like, mental burden, right? Or this much actual time. Or it would help my colleague be able to perform their job X much better. So I like this idea of quality of life, like approaching all things that you're building in terms of, not just how does this make the product better, but actually improve the quality of life for everyone who's trying to use it. And I wonder if you can talk a little bit about this idea of when you're looking at what you want to solve and then drawing a straight line to quality of life improvement.

Matthieu: So I'm not sure if I'm answering exactly your question, but I feel it's kind of related. One thing that is really challenging when you look at the Serverless Framework, and especially when you look at the repository, is that there are so many people that use it, with so many different use cases, that someone will want to add a flag to support this small CloudFormation feature or this small thing that applies to 1% of the user base. And it's kind of an accumulation of so many of these little changes. If you look at the number of releases of the Serverless Framework, it's not just V3 and that's it. There are so many releases every week with all of these changes, sometimes contributed, sometimes done by Mariusz or Piotr. And all these small improvements are about slowly improving the developer experience on all of these small details, to accommodate all of these different use cases.

To me, this is the hard part, because yes, when you have a feature that applies to so many people, like stage parameters, we can kind of imagine that it would impact a lot of the user base. But so many features are really for small use cases. It's hard to really judge and prioritize sometimes. So to me, that's the challenging part. We are trying, by the way, to kind of improve that, to have a process where we also ask the community to participate in issues or feature requests by, you know, voting on GitHub.

That's kind of working out. Sometimes we use those votes for prioritization on GitHub. It's kind of interesting to see which issues people are really interested in, by reactions or comments. So that's something that we use. And we also look at how the Serverless Framework is being used.

Like, what do people deploy the most with it? What are the most popular use cases, the most popular languages it's being used with? To try and figure out how we can have the most impact with some features, whether through big or small changes. That's, yeah, that's how we try to work.

Jeremy: Yeah. No, I think that makes a ton of sense. And one of the things that's crazy about the Framework, and this is something people maybe don't think about: with a paid product, right, you tend to hopefully get a lot of users and things like that. But being an open source product, I think people demand more of an open source product than they do of one that they pay for. Like, if you're not paying for it, for some reason you want more from it.

So you do have all these different use cases. You do have all these requests for features or flags or whatever it is. But I think one thing that's shared across all use cases is security. And I think a lot of people don't do security well, and I know there were a couple of security enhancements that were made to V3. And it'd be great if you could talk about that for a minute.

Mariusz: Yeah. Well, I think most of them were already mentioned: we actually sorted out those insecure dependencies, the ones that were technically just producing those npm audit warnings. So maybe, you know, in real usage they were not really that insecure, but these days they raise the alarms.

And the other thing was that we improved handling of the CLI params. I'm not sure if it's that much about security exactly; it's security in the sense that users won't accidentally deploy something they don't want, for example. So I think it's more about that.

And in terms of other things, we try to keep our dependencies up to date, but that's not really specific to the V3 release. I mean, we did that for 3.0.0, but it's just that we are constantly on that path.

Rebecca: Bottom line feels very secure, is very secure. Not even feels, just is.

Mariusz: Definitely.

Rebecca: I'll reframe: is very secure.

Jeremy: It feels secure because it is secure.





Rebecca: Well, how can people start using version three? And are there any major upgrade considerations, or things that people should know? Is it going to be plug and play? How long should people expect it to take: like, okay, I want to upgrade to V3, is this going to take 10 minutes, an hour, a day? Tell us.

Mariusz: Well, it depends. I mean, it all depends a lot on the plugins used, because sometimes users really rely on old, unmaintained plugins, and we sometimes tried really hard to get them into shape for v3, and for most of them we succeeded. And for some of them, we needed to relax our approach to the changes in v3 a bit.

So I think there are just very, very few plugins which will not work in v3, and those users are really affected. But I believe there are forks for those plugins which have those issues fixed, or a user can also easily make a fork and do that. I think in most cases it should be very easy for a user to upgrade; there are probably just individual cases where there can be issues. There was one specific thing related to how we hash Lambda versions. We needed to change it a bit, but we couldn't really make it the default in v2, because it was sort of breaking: if someone deploys the same service with different hashes, then AWS complains and throws an error.

So we changed this in v3, and this may also be a blocker for users who want to upgrade. But we created a really nice guide on how to overcome this, and Piotr prepared special command handling that directs users to the new Lambda hashing very easily: run two commands, and they are migrated.
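For context, the "two commands" migration Mariusz describes matches the documented v2-to-v3 upgrade path roughly as follows; this is a hedged sketch based on the official upgrade guide, and the exact flag should be checked there:

```shell
# 1. In serverless.yml (still on v2), opt in to the new hashing algorithm:
#      provider:
#        lambdaHashingVersion: 20201221

# 2. Deploy once while forcing new Lambda versions to be published:
serverless deploy --enforce-hash-update

# 3. Deploy a second time without the flag; the hashes are now migrated:
serverless deploy
```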

So if a user is willing to upgrade to v3 and doesn't rely on any really outdated and problematic plugins, of which there are very, very few to my knowledge, I think you should be able to do it in an hour max. But you know, that's what I believe; it totally depends.

Matthieu: I mean, even beyond the time involved, the deprecation system we put in place in v2 was kind of helpful, because if you have a project on v2 and you have no deprecations, that means you can upgrade to v3 immediately without any change. It's kind of instant. And it just works.

And if you do have deprecations, then for each deprecation we have a notice on how to solve it. Sometimes it's about renaming an option in YAML or just using a different option. And if you follow those deprecations and solve them in v2, then you don't have any deprecations left, and you can upgrade to v3.

So yeah, the summary is kind of: upgrade to the latest v2 version. Check if you have deprecations. If you have them, solve them. And then you're good to upgrade.
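The summary Matthieu gives can be sketched as a short command sequence; this assumes the CLI is installed globally via npm, and the version specifiers are illustrative:

```shell
# 1. Get the latest v2 release first:
npm install -g serverless@2

# 2. Package or deploy and watch the output for deprecation warnings,
#    fixing each one as directed by its notice:
serverless package

# 3. Once no deprecations remain, move to v3:
npm install -g serverless@3
```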

Mariusz: Exactly. This is the way to go, as Matthieu mentioned. Technically, the deprecation information tells you whether you can upgrade. If you have some deprecations, solve them; most of them are really straightforward to solve, unless they are triggered by plugins, in which case users should seek plugin alternatives, but that should be rare. And once that's done, they should just upgrade and it should work. Yes.

Jeremy: Yeah, that Lambda hashing version warning used to drive me nuts though. I was like, why do I have to change this? But actually, that's another question: for people who are using v2, maybe they get a whole bunch of these deprecation warnings because they're using older plugins or things like that.

Are we going to continue to maintain support for version two for a while, and then eventually roll everybody over to v3? Do you know what the timeline is for that?

Matthieu: It's kind of a yes and no. If there is a critical bug or a security issue in v2, of course it makes sense to fix it. We are a bit more than a month after the v3 release, and we already have 30% of the user base on v3, so we still have people on v2. Of course, we don't want to leave them stranded. But yeah, most of the work happens now in the v3 branch, and new features obviously are added in v3. And so far we haven't seen any critical bug in v2. So that's the current status.

Jeremy: Awesome. Alright, well, speaking of future features and things like that, let's talk a little bit about the future of the Serverless Framework, because v3, I think, was a big step that cleaned up a whole bunch of things. Just another step in the maturity journey of a very mature open source project.

And again, it's always great that Serverless Inc is putting a lot of time, energy, and resources behind maintaining and investing in the framework, you know, to keep that going. Now, five or six years ago, it was CloudFormation and the Serverless Framework. Those seemed to be the only two frameworks out there.

Now we have Terraform, we have CDK, and there's SAM, and SST, and Pulumi, and, you know, Architect. There are a million frameworks out there now. Luckily, and I think rightfully, the Serverless Framework still has a lot of usage and a very, very high market share.

But I'm curious, one of the trends that we see is that most of these frameworks, like SAM, even SST, are starting to adopt support for CDK. And of course, Pulumi is like its own CDK. So is this something you're seeing a lot of people doing? It feels to me like everyone's mixing and matching, right? Everyone's like, 'oh, use the CDK for this, maybe we put a stack up for shared resources, but then we're using the Framework to launch this part of our application.' Are there some future plans to integrate CDK, or some sort of programmatic scripting of infrastructure, into the Framework?

Matthieu: Yeah, it's true that when you look at small teams, they have one tool, but as soon as you look at larger teams, enterprises, they don't choose one specific tool and say 'we will use Serverless Framework everywhere in every project.' We do hear and see people using Serverless Framework and Terraform and/or CDK and other technologies together. So that's definitely here, and we need to build with that instead of just focusing on getting everyone to use our tool, and just our tool. So yes, this is something we are looking into. It's interesting to talk with users about how they integrate these different technologies, and why they go and use Terraform, for example, or CDK. We do see that the Serverless Framework is great for deploying Lambda functions, deploying APIs, and using all of this for applications. Then some people need Terraform or CDK to deploy shared resources like databases, queues, more complex pieces of infrastructure, and they connect them with many different solutions.

Right now they can use things like shared parameters or CloudFormation imports and exports, which is what we have today. We try to talk with as many people as possible to understand their needs and figure out how we can basically improve the situation and make that easier. So we don't have a solution right now, but this is definitely a problem that we want to solve. This is something that we want to address and, again, make simpler.

And also, if I can just say, there are so many interesting ideas out there. That's also something that we need to acknowledge. I've used different things; I've used Terraform, and I've used CDK as well. There are things in these tools that are really awesome. There are so many ideas to consider.

Rebecca: And so in the spirit of building for people's needs, right? And listening to what they're asking, and building excellent tools that also work with other tools and incorporate other people's ideas. You are all working on a new feature, which I know is still in its early phase, I believe, but the problem you're trying to solve is the ability to deploy multiple services in tandem.

So especially those in monorepos. I'm sure this will resonate with a few folks, but can you tell us a little bit more about this problem, the needs it's solving, what you've been hearing from folks, and some learnings you might be able to share from discussions with users thus far?

Matthieu: Yeah, sure. You have teams that grow from deploying just one service with a few functions; we've discussed with teams deploying like 25 Serverless Framework services and trying to orchestrate all of them, the whole microservice approach with many services. And that's a very interesting problem. They manage to do it, but again, it's not as simple as it should be. When you look at these large projects, how can we make them simpler? Especially since some of them, again, are integrated with Terraform or CDK, or even raw CloudFormation. So yeah, we don't have that feature ready with all the details to share, but that's the problem we're going to solve. So we go around and try to spark a discussion with users: how do you deploy these things, and how do you pass values between services?

So for example, I may deploy a service that creates a database, or that creates an API, and I need to use that database or call that API from a different service. So I need access to the table ARN or to the API URL. How do I pass the values around? What do I use? What do you use to bundle and compile, say, TypeScript or whatever, for every service, and to share code between services? All of these questions. What's really interesting is that we definitely see a common approach: monorepositories with shared libraries, and usually, not every time, but usually a stack that contains the shared resources like the database or queues or buckets or domain records.

And then the details are always different. So that's interesting as well, because there isn't one solution that works well for everyone. Some people share values using SSM, some use CloudFormation exports and imports, with all the upsides and downsides that come with them. For compilation and orchestration, some people use npm workspaces, or Lerna, or Nx, or Turborepo. And, by the way, these tools are amazing in terms of ideas; again, there are a lot of ideas to pick up from. But what we are thinking is: there are so many different solutions being applied, each one of them with pros and cons. How can we pick the best ideas, provide some guidance, and provide some features to help with that? So that's kind of what we are doing right now.
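The two value-sharing styles Matthieu mentions can be sketched in serverless.yml using the Framework's documented variable sources; the stack, export, and parameter names below are invented for illustration:

```yaml
service: consumer-service

provider:
  name: aws
  environment:
    # Read a value another service published to SSM Parameter Store:
    TABLE_NAME: ${ssm:/shared/users-table-name}
    # Read a CloudFormation output exported by another stack
    # (syntax: ${cf:stackName.outputKey}):
    API_URL: ${cf:shared-resources-dev.HttpApiUrl}
```

The trade-off he alludes to: CloudFormation imports lock the exporting stack (it cannot delete or change an export while it is imported), while SSM values are loosely coupled but only resolved at deploy time.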

Jeremy: Yeah. And I was lucky enough, because I also work at Serverless Inc, to see an early preview of this feature. And it's super cool. I think there are a lot more learnings in there. Like you said, with the shared packages: that is such a huge problem to solve in so many cases.

I think Architect is one of the best examples of how well the shared library thing can be solved. But then the dependency graph, that was one of the cool things you had in that feature, or that demo you showed: the ability to say this has to be deployed first, but then these two can be deployed at the same time, as long as that one's deployed.

And then this has to wait for that, and this has to wait. So there's an ordering to these things and so forth. And I think that's just amazing. And again, you get into the idea of saying like, you know, you change shared code somewhere and the ability to detect that and know, 'okay, these four services have to be redeployed because code has changed.'

I mean, it's just amazing stuff. And I'm curious, I know Mariusz, you did some work on that as well, but what are your thoughts on that whole composability aspect of deploying serverless apps?
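The ordering Jeremy describes is essentially a topological sort of the service dependency graph. As a rough illustration (this is not how the Framework implements it; the service names and the `deployWaves` helper are invented), services can be grouped into "waves" where everything in a wave only depends on services deployed in earlier waves:

```javascript
// Group services into deploy waves: each wave can be deployed in
// parallel once every earlier wave has finished.
function deployWaves(deps) {
  const remaining = new Map(Object.entries(deps).map(([s, d]) => [s, new Set(d)]));
  const waves = [];
  const done = new Set();
  while (remaining.size > 0) {
    // Every service whose dependencies are all deployed joins this wave.
    const wave = [...remaining.keys()].filter(
      (s) => [...remaining.get(s)].every((d) => done.has(d))
    );
    if (wave.length === 0) throw new Error('Circular dependency between services');
    for (const s of wave) { remaining.delete(s); done.add(s); }
    waves.push(wave.sort());
  }
  return waves;
}

// "shared" deploys first; "api" and "worker" can then run in parallel;
// "frontend" waits for "api".
console.log(deployWaves({
  shared: [],
  api: ['shared'],
  worker: ['shared'],
  frontend: ['api'],
}));
// → [ [ 'shared' ], [ 'api', 'worker' ], [ 'frontend' ] ]
```

The same idea extends to the change detection Jeremy mentions: if a shared package changes, every service downstream of it in this graph gets redeployed.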

Mariusz: Yeah, definitely, I think it should work that way. Technically, you know, in Serverless we kind of had such an engine before, because Components, which we wanted in v2, worked that way. There were also other design constraints there that are no longer relevant. I believe this new solution will lead us to the really final solution for that. It's definitely what we should offer users: the ability to deploy multiple services, for multiple providers, through one service setup, rather than as independent pieces, which is what they are doing now with monorepos in the Serverless Framework, totally distributed and not really connected.

Jeremy: Yeah. So what are other people doing, or what other tools are there? I mean, you mentioned things that, I don't want to call hacky, but like using Lerna workspaces or some of that stuff, it's just a lot to set up, right? Are there any other tools or community projects out there now that people are working on to try to solve this issue?

Matthieu: Yeah, I want to call out two projects that are extremely interesting, and that we've been discussing with their authors. There's Purple Stack, which is being created by Purple Technologies, and we've been chatting with Phillip, who is one of the people behind the project.

They have really great ideas. For example, they use git to check for changes and see which services need to be deployed.

They have the project on GitHub and a whole blog post about that; it's really interesting to dive into. We are also talking with Theodo about a new project of theirs, which I think is called Swarmion. They use monorepos a lot and they have, again, so many ideas, sometimes crazy ideas that I love, about how to solve these problems. In Swarmion they're diving into an idea that I find crazy: defining API contracts for each service and then defining dependencies between services. Like, this service calls this API, so this service depends on the API contract of that one, and if the contract changes, then you need to deploy them in a specific order. Which is, again, really interesting to dive into. Plenty of good ideas there.

Rebecca: So before we sign off, we don't want to go anywhere until you tell us about the Serverless Console product. Could you give us, and our listeners, a little bit of a preview of what's going on with it?

Mariusz: Yeah, sure. It's at a very early stage, but technically it's like a new, better version of the Serverless Dashboard. It's no longer tied to the framework, in the sense that it can now also be used with other serverless tools. So it's more generic in that sense. The idea is that it can be used to monitor whatever you deploy.

We currently started with AWS, with Node.js, and with the framework, but the extension we prepared to provide the monitoring is generic. It's open source and can be used with any other tool. Also, very importantly, we are relying on OpenTelemetry. So we are using standards, which I know are also adopted by Datadog. I think it will make cooperation between all those console products easier. And I really, really believe that we will be one of the key players there.

Jeremy: Yeah, it's a very cool product. And again, I've gotten to see all kinds of early previews of that as well. The OpenTelemetry support is awesome, and so is the tight integration with the framework, but you'll also be able to use it outside the framework. So pretty cool stuff there.

Well, unfortunately we are out of time. But we'd love it if you could share with our listeners how people can find out more about each of you, and how they can find out more about the Serverless Framework. So Matthieu, how do people contact you?

Matthieu: I would say the best is Twitter; you can find me at @MatthieuNapoli. And if you have any questions or topics you want to bring up, I'm always up for long discussions, or even short ones.

Jeremy: And you, Mariusz?

Mariusz: Yes, with me it's similar. It's Twitter; I'm medikoo, with a double "o" at the end. You can also see my profile on GitHub; it's the very same handle, medikoo. And there's also, I think, an email over there on GitHub if you prefer to contact me that way. But I'm also open to every discussion that matters.

Jeremy: Awesome. And you can go to serverless.com and check out the Framework.

Rebecca: I would also like to note that in the time we've been recording this podcast, the sun has come up in Alaska, and it has clearly set for Mariusz.

Mariusz: Yeah.

Rebecca: It is now dark where he is and it started out light.

Mariusz: Yeah. I'm not sure about Matt Napoli...

Matthieu: Yeah, it's still there for now, but just for half an hour, maybe.

Rebecca: That's something. Thank you so much, everyone, for joining from all corners of the world for this discussion.

Jeremy: Right. Yes. And we will get all of those links into the show notes. And the two of you. Thank you again for being here and go enjoy the evening.

Mariusz: Yeah.

Thank you very much.

Matthieu: Yeah, thank you for having us.

Mariusz: It was great speaking with you.