Episode #34: Advanced NoSQL Data Modeling in DynamoDB with Rick Houlihan (Part 1)

February 3, 2020 • 47 minutes

Jeremy chats with Rick Houlihan about the use cases for NoSQL, why single table designs are so powerful, ways to model relational data with GSIs, and so much more in PART 1 of this two-part conversation.

About Rick Houlihan

Rick has 30+ years of software and IT expertise and holds nine patents in Cloud Virtualization, Complex Event Processing, Root Cause Analysis, Microprocessor Architecture, and NoSQL Database technology. He currently runs the NoSQL Blackbelt team at AWS and for the last 5 years has been responsible for consulting with and onboarding the largest and most strategic customers our business supports. His role spans technology sectors, and as part of his engagements he routinely provides guidance on industry best practices, distributed systems implementation, cloud migration, and more. He led the architecture and design effort at Amazon for migrating thousands of relational workloads from Oracle to NoSQL and built the center of excellence team responsible for defining the best practices and design patterns used today by thousands of Amazon internal service teams and AWS customers. He currently works on the DynamoDB service team as a Principal Technologist focused on building the market for NoSQL services through design consultations, content creation, evangelism, and training.


Transcript

Jeremy: Hi everyone, I'm Jeremy Daly and you're listening to Serverless Chats. This week, I'm chatting with Rick Houlihan. Hey Rick, thanks for joining me.

Rick: Hey Jeremy. Thanks for having me on.

Jeremy: So you are a principal technologist for NoSQL at AWS. So why don't you tell the listeners a little bit about yourself and what it is that you do at AWS?

Rick: Yeah, sure. So I've been at AWS almost six years now, I guess five and a half years. My primary focus in life since joining AWS has really been NoSQL technologies. Shortly after joining the organization, I joined the specialist team, and then for about two years, I spent a large amount of my time focused on the migration of Amazon's internal application services from a relational database, specifically Oracle, to a NoSQL technology, which was DynamoDB, of course.

So that was my mission in life and the last two years or so, I've been more focused externally taking the learnings that we gained from that exercise out to our customers and helping them solve similar problems.

Jeremy: Awesome, and I love the fact that you are actually working with customers and working with these big data problems, actually solving these problems as opposed to just sort of advocating for ...

Rick: ... thinking about them, right?

Jeremy: Exactly. Exactly.

Rick: We got called out. Yeah.

Jeremy: And that's, I mean, again, obviously this firsthand experience and all this work you do, you see all these different permutations and different ways that you can use DynamoDB and NoSQL. So I think if anybody is in the sort of AWS ecosystem, if they've ever sort of thought about using DynamoDB, your name has probably come up. You've become somewhat of a legend at AWS re:Invent with your NoSQL talks on data modeling in NoSQL.

So there are a million different things that we could talk about obviously, and I could probably talk to you for quite some time. I don't want to spend a lot of time rehashing the things that are in your presentations and I will put these in the show notes and people can go and spend some time looking at these things. But I actually, I want to be a little bit selfish here because I have all these questions that I've sort of come across and some people have asked me and now that I've got you here, I would love to sort of ask you those and see if we can dig a little bit deeper in them.

But what I do want to do is be fair to people who are not overly familiar with DynamoDB or NoSQL in general. So maybe we start quickly and just kind of explain or if you could explain to us what's the difference between NoSQL and relational databases or RDBMS.

Rick: Sure. Yeah, great place to start. So if you think about the relational database today, it's about normalized data. We're all very familiar with the idea of a normalized data model where you have multiple tables, we have all these relationships, parent-child relationships and many-to-many relationships. And so we build these tables that contain this data, and then we have this ad hoc query language called SQL. We write queries in SQL to return the data that our application needs.

So the server, the database actually restructures the data and reformats the data on the fly whenever we need it to satisfy a request. Well, NoSQL on the other hand eliminates that CPU overhead and that's really what the cost of the relational database is and the reason why it can't scale because it takes so much CPU to reformat that data.

So with NoSQL, what we're going to do is we're going to actually denormalize the data somewhat and we're going to tune it to what we call the access pattern, tune it to the access pattern to create an environment that allows the server to satisfy the request with simple queries.

So we don't actually have to join the data together. So we talk about the modeling and whatnot in my sessions, we can get into how do we do that, but the fundamental crux of the issue here is that the relational database burns a lot of CPU to join the data and produce these materialized views whereas the NoSQL database kind of stores the data that way and makes it easier for the application to use it.

Jeremy: All right, and then one of the things I think that you see all the time when people are sort of migrating or trying to figure out that NoSQL mindset is they think about access patterns and one of their access patterns is something like list all customers.

Rick: Right.

Jeremy: And it's one of those things where I think it's really hard for people to make that jump from just being able to say select star from customers and understanding what data will come back, and we can add limits and things like that. It's not quite the same with NoSQL, so when do you suggest people not use NoSQL?

Rick: Okay, so that's actually a really good question. So NoSQL is really suited to, as we talked about ... we have to denormalize the data, right? Which means I have to structure it and tune it to the access pattern. So if I don't really understand those access patterns, if they're not really well-defined, then maybe what we're looking at is a different type of application that's not necessarily so well-suited for NoSQL, right?

And that's really what it comes down to. There's two types of applications out there. There's the OLTP or online transaction processing application, which is really built using well-defined access patterns. You're going to have a limited number of queries that are going to execute against the data, they're going to execute very frequently, and we're not going to expect to see any change, or we're going to see limited change, in this collection of queries over time.

And that's a really good application for NoSQL because as I said, we have to kind of tune the data to the access pattern. So if I only have a small number of access patterns, then it makes sense, but if the customer comes in and tells me, "I don't know what questions are going to be asked. This is maybe my trading analytics platform and who knows what the brokers are going to be interested in today or tomorrow and I look at the query logs of the server and there's a thousand different queries and some of them execute once or twice and never to be seen again and others execute dozens of times."

These are things that are indicative of an application workload that maybe is not so good for NoSQL because what we're going to want is a data model that's kind of agnostic to all those access patterns, right? And it has that ad hoc query engine that lets us reproduce those results. So lucky for us in the NoSQL world, that's actually a small subset of the applications, right? 90% of the applications we build have a very limited number of access patterns. They execute those queries regularly and repeatedly throughout the day. So that's the area that we're going to focus on when we talk about NoSQL.

Jeremy: Yeah, and so with those access patterns and you talk about highly tuned access patterns, and if you think about an application that says maybe it has to bring back customer orders, right? Maybe a customer might have 10 orders, maybe they have a thousand orders. I mean, really, the amount of processing it takes to pull back either 10 or a thousand is pretty much the same, but there's other things that might come back with a customer as well.

Maybe you want to see their billing information, their shipping information or maybe they have a rewards program or something like that that they're attached to and one of the interesting things that I learned from you from seeing what you've done at your talks at re:Invent was the ability to put multiple entities if we want to call them that in the same table.

Rick: Sure.

Jeremy: And that was one of those optimizations where in SQL we say, "Okay, select star from customers where customer ID equals one, two, three," then "select star from orders where this equals that," and maybe join it on order items on ... things like that. And also, now I can make another query and I've got to bring back those rewards or bring back the shipping information. So you have all these different queries that you have to run, but when you put everything into a single table and you optimize that access pattern, you can make one query that brings back all of those entities in one round trip to the server.

So I'd love to get your take, I know you're a huge proponent of single table design, but I just like to get your take on why that's so powerful.

Rick: Why that's so powerful. Sure. It makes perfect sense. So if you think about the time complexity of a query, right? When I have to join through multiple tables, I will use the example you talked about. I've got customers, I've got orders and I have order items, right? Let's just use those three tables for example. And what I really want is I want all the customer's orders in the last 30 days, right? So I can select star from customer where customer ID equals X, inner join orders on customer ID equals customer ID, inner join order items on order ID equals order ID, right?

So I've gone through multiple tables and the time complexity, that is going to be significant because I've got a ... And let's just assume that the joins are all accurate or that the indexes are all accurate and we're sorted on the join dimensions, and so we're doing really nice, efficient nested loop joins, we're not doing any crazy things in the database, and we're still looking at a time complexity. It's going to require a log(n) search or an index scan of the primary table and n log(n) complexity for the next table, right? I got to scan that table for each item that comes back off the parent table.

Now you might argue with me that the outer table has a single row returned, so it's just a log(n), but then the next one comes, right? It's the order items table and now I have to scan through all the customer's orders and then I have to join in all the items from each order on that table and that's an n log(n) scan.

So it just increases the complexity, right? The more tables you join in. Now, if we were to take the NoSQL approach where I treat the table like a big object collection, I'm just going to drop everything in there. You can think of these as all the rows from all those tables; we just take all those rows, we're going to shove them into one table, and then I'm going to go ahead and index these objects, maybe tag each one of those objects with the customer ID, including all the order items and everything.

And then I can index and I say, "Okay, well give me everything for customer ID X." And it brings back the customer object, the order objects, the order item objects for all those orders. Now, I might want to include some additional sort constructs to return a limited number of objects, but you get the idea. What I've really done is I create a collection of objects and I use indexes to join those objects together.

There is no join operator, but I can still achieve the same result because a join is essentially a grouping of objects, and so that's what we're really doing with the NoSQL table. So it's much, much more time efficient to do an index scan than it is to do nested loop joins, and that's really what it comes down to: it's about cost and efficiency.
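To make that concrete, here is a minimal sketch, not from the episode, of the query Rick describes, using Python and boto3 against a hypothetical single-table design named CustomerOrders where every item related to a customer carries that customer's ID in a generic PK attribute:

```python
# Illustrative sketch: table name, key names, and values are assumptions,
# not details from the episode.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerOrders")  # hypothetical single-table design

# Relational equivalent, for contrast:
#   SELECT * FROM customer c
#   JOIN orders o      ON o.customer_id = c.customer_id
#   JOIN order_items i ON i.order_id    = o.order_id
#   WHERE c.customer_id = 'C123';

# Single-table version: the customer item, its orders, and the order items
# all live in the same partition, so one Query returns the whole group.
response = table.query(KeyConditionExpression=Key("PK").eq("CUST#C123"))
for item in response["Items"]:
    # Each item is one of the "rows" Rick mentions: a customer, an order,
    # or an order item, distinguished by its sort key or a type attribute.
    print(item)
```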

Jeremy: Yeah, definitely. Yeah. And I like thinking of them as entities. And we're going to talk about this in just a minute, but I like to think of partition keys as almost like folders in a directory structure, and you're grouping all of the things in those folders that are common to one another that you might want to bring back.

And you've got a few sort features and you've got some ways to limit it, like by date and things like that. So anyways, let's actually do that. We can maybe come back to the single table thing as we talk about some of this other stuff, but just for people who don't know, and just to sort of hear it again so that we can all kind of be on the same page, because I think we're going to start geeking out pretty heavily here in a minute, and if you don't understand these basic concepts ... I don't want your eyes to gloss over too much if you're listening here.

So let's start with a couple of these key concepts. So I mentioned partition keys and we also have sort keys. So just give us a quick, what do those mean?

Rick: What are they?

Jeremy: Yeah, what are they?

Rick: So in a DynamoDB table, or in any wide column database, in any NoSQL database, you have to have some attribute that uniquely defines the item. In a wide column database like DynamoDB, that's called the partition key. So if I define a table in DynamoDB and I define it as a partition key only table, then each item I insert in the table must have a partition key attribute and that partition key attribute must contain a unique value, and that supports kind of a key value access pattern, right?

Give me everything with this partition key value, brings back one item. If I add the sort key, what you said comes into effect now is now the partition turns into a folder, right? So the partition uniquely identifies the folder and the sort key uniquely identifies the item within the folder and now I can start to collect items together and I can use creative filtering conditions on the sort key value to return only the objects that matter.

And I really liked that folder analogy because if you think about it, when I store things, when I collect documents at home and I'm putting things in my file cabinet, well what am I doing? I'm putting related objects into a folder and I'm putting it in the file cabinet and because that's the easiest way for me to go access it in the end, right?

I say, "Here's everything associated with my mortgage." Right? I don't say, "Here's the home inspection." And put that in one place and then, "Oh, here's the title document, title report. I'm going to put that in another place." And then when I need to go get all those documents for my home loan, I don't go searching through 20 folders to pull together all those documents, review them, and then sort them back into their individual locations, right?

We put everything in one folder and it turns out that that's actually the easiest way to access data, not just for us, but for the computer too. So this is really what we're doing. We're using these partition keys to create folders, we're using sort keys to individually identify the objects within those folders and then we're using conditions on the sort key to restrict the objects that are being returned when I query that particular partition and that's kind of the fundamental construct that we're trying to create here.

Jeremy: Yeah, and so the ... So when you use those two together, when you use a partition key and the sort key, we call that a compound key and then the sort key itself, if we add extra data in there, you had mentioned we could filter on it for example and use something like a begins with query and things like that. So if we combine multiple pieces of information into that sort key, that's what we call a composite key?

Rick: That's correct. So a composite key typically takes multiple, like you said, multiple pieces of information to give me some interesting constructs to go query, right? So I might say, "Well, a customer's use case is to give me all of his orders within the last 30 days." I might choose to prefix all the order objects in the customer's partition with an indicator that they are an order, like O, and then concatenate that with the date of the order.

And then I can say, "Give me everything that's between O#date one and O#date two." And it will return the order objects from that customer's partition. I might have many, many types of objects in that customer's partition, but that sort condition will allow me to filter out just the specific order objects in that date range that I'm interested in.

So that's kind of an example of the types of composite keys that we're going to create to produce queries. Those things can include things like states, dates, rankings, all kinds of different things that we're going to do to try and create conditions that we can query.
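As an illustration only, here is what that kind of composite sort key query might look like in boto3; the O# prefix format and table name are assumptions, and ISO-formatted dates (or zero-padded numbers) keep the string sort correct:

```python
# Illustrative sketch of querying a composite sort key range.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("CustomerOrders")  # hypothetical

# Orders stored with a composite sort key such as "O#2020-01-05".
# ISO-8601 dates (or zero-padded numbers like "SONG#0001") sort correctly
# as strings, which is all the sort key relies on.
response = table.query(
    KeyConditionExpression=Key("PK").eq("CUST#C123")
    & Key("SK").between("O#2020-01-01", "O#2020-01-31")
)
orders = response["Items"]  # only the order objects in that date range

# Or, for every order regardless of date, a prefix match:
all_orders = table.query(
    KeyConditionExpression=Key("PK").eq("CUST#C123")
    & Key("SK").begins_with("O#")
)["Items"]
```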

Jeremy: And automatically, no matter what you create for that sort key ... they call it a sort key because it's what gets sorted on. But so the way that that sorting works is that if you had O# and then some date, for example, and you did a between query, you would have to include the entire sort key, the entire prefix, in there as well.

I think that's something that trips people up. If you were trying to order ... Maybe you're trying to order a list of songs or something like that. If you had your sort key as one, two, three, four, five, six, seven, eight, nine, 10, 11, 12, then it would sort as one, 10, 11, 12, two, three, four. So you just need to be conscious of that to say it needs to be zero one, zero two, zero three, zero four, and so forth in order to get that sorted.

Rick: Yeah, zero padded, or convert those to maybe a four-byte hexadecimal string or something that is string sortable. This is one of the kind of little caveats in using DynamoDB: we really only give you one sort key attribute. So if we want to create these composites, you have to kind of create these string composites and stick them in that sort key attribute. If you look at other wide column databases like Cassandra, Cassandra gives you the ability to actually support true composite keys, right?

You can actually say, "I want this attribute dot, that attribute dot, this attribute ..." and the concatenation of those is your key, and it will allow you to sort correctly between the data types. So that's a little bit of a difference between Cassandra and DynamoDB, that you have to maintain these composites kind of yourself in DynamoDB, but other than that, it's almost identical.

It's the same type of query conditions, right? In Cassandra, when I query the first attribute, it has to be an equality condition and the range condition can only apply for the last attribute that you query on the composite and the reason why is because all those range queries have to return a contiguous range of items from the CFP.

Jeremy: Right.

Rick: Otherwise, what I'm really doing is filtering the items from the partition.

Jeremy: Right. All right, so the other thing that all databases have is some sort of index, right? And obviously, what we're talking about with the partition key and the sort key, that's our primary index and we're going to talk about that a little bit more because there's some interesting things there, but one of the things that you can do is create other indexes LSIs, GSIs.

Again, we'll talk about all of that sort of, "Don't get lost if you're listening." But I do want to talk about this idea of overloaded indexes because this goes back to the single table design aspect of things where you have different values in different indexes. So that partition key for example, it might be a customer ID for a bunch of entities, but then it might be an order ID for other entities. It might be some other enumerated value to specify something else. So talk a little more about overloaded indexes.

Rick: Okay. So in DynamoDB, we've talked about the primary construct being this partition key and a sort key. The idea there is I'm creating groupings of objects and those object's grouping should be kind of related to the primary access patterns of your application, right?

In the customer example, we were talking about the partition key, it might be a customer ID, but there might be other objects on this table that I'm interested in tracking, right? Not all these objects might be customers, not all these objects might be related to customers, and so let's say we had sales reps also. So I've got customer IDs and I've got sales rep IDs.

So in order to kind of store customers and sales reps on the same table, when I define the table, I can't use a strongly typed attribute name, right? If I use customer ID as the partition key, then every object I insert into the table has to have a customer ID as a partition key, but sales reps aren't customers so they don't have a customer ID.

So when we define the table, we're going to use generic names. Things like PK for partition key, SK for sort key and this allows me to create multiple types of partitions. The object that I insert into the partition just needs to have a PK attribute. The value of the PK attribute depends on the type of the object that I'm inserting on the table and I don't have to worry about collisions here because when I query the system, I'm going to give that partition key quality condition.

It's going to have a value that my application is aware of; it knows what it's trying to get, right? I'm going to query for a customer, I'm going to query for a sales rep, right? I know what I'm asking for. And so then what I'm going to do now is I'm actually going to say on the primary table that first access pattern is supported by the table, might be orders by customer, but I also might have a workflow that says at the end of the month, I need orders by sales rep, but my orders aren't organized on the table by sales rep, they're organized by customer.

So what I'll do is I'll create an index, a GSI, and I'll use a ... I'll declare a new attribute on the GSI and I'm going to use that same naming construct, right? I don't want only sales rep items to show up on the GSI. As a matter of fact, I actually need customer objects. I need order objects to show up on the GSI and on the primary table, my orders are partitioned on the customer ID, not the sales rep ID.

So I'll create an attribute called GSIPK and all of the orders are going to have it and all of the sales rep items are going to have it and you know when we insert the sales rep item onto the table, it's going to have a partition key of the sales rep ID, it will have a sort key of the sales rep ID and then it will have this extended attribute which is also the sales rep ID. So you kind of notice we've duplicated that sales rep ID a few times, right? The reason why is because I'm going to index this item in multiple dimensions on the sales rep ID.

The first dimension that it's actually indexed on is the table: that's within the sales rep partition. Okay, while all those other order items are also going to have a customer ID as their PK, they're going to have maybe an order date as their SK, and then they're going to have a GSIPK, the sales rep ID. And so if I create a GSI on sales rep ... if I bring up an index with GSIPK as the partition key, and the existing sort key as the sort key for the GSI, what I've really done is I've re-aggregated or regrouped the items on the table.

Now, the orders on the GSI are grouped by sales rep, and if I query the GSI by sales rep ID, I'm going to get a copy of the sales rep's item and each order that he had, and if I query it by sales rep ID with a sort key greater than 30 days ago, I'll get only the stuff in the last 30 days, right? And so this is kind of what we're going to do. We're going to re-aggregate, regroup the data in indexes to support secondary access patterns in the application, and that's what we're going to use indexes for.
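Here is a rough sketch of what those overloaded, generically named attributes might look like; the index name GSI1, the key formats, and the attribute names are all illustrative rather than quoted from the episode:

```python
# Illustrative sketch of an overloaded GSI; names and formats are assumed.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("CustomerOrders")  # hypothetical

# An order: grouped by customer on the table, by sales rep on the GSI.
table.put_item(Item={
    "PK": "CUST#C123",      # table partition: the customer
    "SK": "O#2020-01-15",   # table sort key: order date
    "GSI1PK": "REP#R42",    # GSI partition: the sales rep
    "total": 250,
})

# The sales rep item: the rep ID repeated in each indexed dimension.
table.put_item(Item={
    "PK": "REP#R42",
    "SK": "REP#R42",
    "GSI1PK": "REP#R42",
    "name": "Pat",
})

# "Orders by sales rep in the last 30 days" through the overloaded GSI,
# which reuses SK as its sort key. The rep item sorts after the O# orders,
# so this returns the rep item plus the recent orders, as Rick describes.
resp = table.query(
    IndexName="GSI1",
    KeyConditionExpression=Key("GSI1PK").eq("REP#R42") & Key("SK").gte("O#2020-01-04"),
)
```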

Jeremy: All right. So you mentioned GSIs in there and I want to talk about those a little bit more in detail, but let's start first with just this idea of accessing data via the primary index, right? So every table has the primary index. That is where you have your PK and your SK or whatever you want to call it.

Rick: Correct.

Jeremy: And one of the things that I see a lot of people do, especially when they're asking me these questions and they're trying to format that, is ... I think what they're trying to do, I guess what's the right word for this, is they're trying to make DynamoDB work more like a SQL database, in that they're trying to find a way to either create really, really large partitions and then use the sort key to trick the system into doing very large queries. But the thing that you need to remember, or I think that the listeners need to remember, I know you know this, is that we want to be careful about how much we're writing to the database when we're using some of these other indexes, right?

So for the primary key itself, or for the primary index itself, we want to make sure that we have all of our, or as many of our, primary access patterns accessible via that, right? And we want to be limited in terms of the amount of data we copy over to another index, right?

Rick: Yeah, I mean, if you think about it, every index I create is a copy of the data.

Jeremy: Right.

Rick: So if I can write the data onto the table one time and satisfy more than one access pattern, then that saves me copies of the data. Those copies of the data costs me money, they cost storage, they cost WCUs. So yes, we would like to store as many ... We would like to satisfy as many access patterns as possible with the primary table and then what you'll notice is as I start to create indexes, objects start to drop off because not all objects need to be indexed.

Not all objects need more than one access pattern. Some objects only have one and other objects might have three or four. So as you kind of go out across the number of indexes, you're going to see fewer and fewer items transferring from the table to those extended indexes because they just don't need to be accessed on as many dimensions as others. And so that's kind of what we're going to end up doing is looking at these individual items and saying, "How do I need to group these?" Right? In the case of orders, and I believe we've talked about so far, I need to group orders by customer, I need to group orders by sales rep, right?

But order items, I don't know, we haven't defined any other access pattern than group order items by order. But maybe there's another access pattern for order items, which might be the state of that order, right? Or of that item, is it back ordered, has it been shipped or whatnot. So we already have an index, called our first GSI, that we're indexing by sales rep and we're indexing order items by sales rep, but I can also use the same index to index those order items.

If I decorate each one of those order items with, say, an order state that also goes in GSIPK, and then I just use the sort key of the item, I can say, "Okay, well which one of these things ... What orders were in this state at this date and time?" Right?

Jeremy: Yeah.

Rick: And so those are the types of access patterns we're going to do.

Jeremy: Yeah, and I think what I was ... I think the point I'm trying to get to is because I get this question a lot and I think you've explained it well in the past where when you ... So RCUs and this is something we didn't really talk about, but there's read capacity units and write capacity units for DynamoDB tables and the read capacity units, I think it's three...

Rick: 3,000. 3,000 RCUs.

Jeremy: Yeah, 3,000 versus 1,000. And the amount of data that you can read or write, it's ... a WCU is ...

Rick: A WCU is four kilobytes, an RCU is one kilobyte. And I'm sorry, I said that backwards: a WCU is one kilobyte, an RCU is four kilobytes.

Jeremy: Yeah, okay. So the point is that you can read four times as much data as you can write in a single capacity unit, and then obviously, write capacity units cost a lot more than the read ones do.

Jeremy: So I think what people tend to do, and this ... You say this all the time, that you should optimize for the write, and I don't think people quite get what that means, and that's, I think, what I'm trying to get to here: for every piece of data that you copy, you have to write that to another index. So if you're writing it to a GSI, you're paying for another WCU, and it's also one kilobyte written versus four kilobytes read. So if you are trying to write a lot of data, if you have a high write throughput, but you need to access that, adding another index there just for the sake of maybe querying on one dimension might not always be the most cost effective thing.
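To put rough, illustrative numbers on that (these are not figures from the episode, just the published unit sizes): writing a 2.5 KB item costs ceil(2.5 / 1) = 3 WCUs on the table, and if that item is also projected in full into two GSIs, the same write consumes roughly another 3 WCUs per index, about 9 WCUs in total. Reading that same item back costs ceil(2.5 / 4) = 1 RCU strongly consistent, or half that eventually consistent. Each fully projected index multiplies the write-side cost while reads stay comparatively cheap, which is the arithmetic behind "optimize for the write."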

Rick: Yeah, no. I mean, yeah, absolutely. And we see, I mean, it's a good point because it's important to understand that when you create a table in DynamoDB, you should use meaningful values for those partition keys, right? I mean, one of the mistakes we see people make is that they'll import their data from their relational database. They're going to go ahead and use those auto incrementing primary keys from their tables as their partition keys and they don't realize that the application is never really accessing the data using that value, right?

It's always saying, "Hey, get me all the objects for this customer login." Or, "Get me the things that are related to this." And customers don't usually call up a help desk, for example, and say, "Hey, I'm customer UUID." So if your primary access pattern is get the information by customer, then probably using a UUID as a partition key is not a good idea, and using something like an email or a login name or something like that that the customer is going to know is better, because if I store the data on the table, that's one cost.

If I have to index it, again, that's another cost. Every index I create is an additional chunk of storage, additional write capacity consumed, so make sure that every time I write the data to the table, there's a meaningful access pattern associated with it. Otherwise, I've got to create an index to be able to read the data and that's just some dead data. Now, that's definitely a write optimization, but the other thing to consider when you're optimizing for the write versus the read is not just the structure of the data, but also the velocity of the access pattern, right?

If I'm not necessarily reading at a high frequency, then maybe an inefficient read is fine and I don't need an index, right? I can actually just write all these items on the table and maybe once a week I need to find the exception items. Okay, I'll table scan it once a week. If I maintain an index, I have a nice efficient read, I can make that nice efficient query once a week, but I'm going to pay every single time somebody updates the table, I'm going to pay to maintain the index. Right?

I'm going to pay the constant storage cost of all those items on the index. So sometimes, the very inefficient read is actually the most cost effective read because of the ongoing cost of maintaining the index. So understand that there's a break point, right, when you're trying to define an index, and that break point usually has a lot to do with the velocity of the access pattern, right? If it's not a frequently used access pattern, then maybe optimizing for that read is not a good idea and instead, we should optimize for the cost of the write.

Jeremy: Yeah, and I love that because this is one of those things that I think most people get wrong, just this idea of saying, "I need to be able to access this data in a number of different ways." You have to understand when and why and how often you need to be able to do that.

So anyway, there are certainly cases where you can't just only do that, right? So we talked about GSIs briefly. So I do want to get into GSIs a little bit more and essentially, a GSI, you can basically choose any two attributes, right? And make one the PK and make one the SK.

Sometimes we see people do, like we mentioned, the index overloading. Well, we can kind of get back into that, but I want to start, and I want to kind of reference the talk that you did at the last re:Invent, 2019, and I've been looking at your table designs.

I posted something on Twitter. I've printed them out, I highlight or I make notes, totally geeking out over it because it really is fascinating to me and one of the progressions I've seen over the last couple of years, and I think this is maybe a change in your thinking too is that you started ... Rather than trying to reuse attributes, maybe do things like inverted indexes and some of that stuff, you've actually created new attributes that are specifically labeled GSI1PK or GSI2PK.

And you mentioned this earlier, you're copying data. So you're denormalizing data, not just across the table, but actually in the same records...

Rick: In the same item. Yeah, yeah, exactly.

Jeremy: ... Item in order to do that. So just what's the ...

Rick: So yeah, a lot of times we will do this because maybe I'm going to use ... Well, I actually want to create an index on the sort key dimension, right? But there are a lot of items on the table, and not all of those items on the table do I want to have indexed on that sort key, right?

So simply flipping the PK and the SK will cause every single item on the table to replicate to the GSI, and so I think you're exactly correct. It's been kind of an evolution in thinking. It's also an evolution in the complexity of the application services that we've been working with, because that's becoming much more of a common case now that we've got lots and lots of types of partitions on these tables, some of which we want to have reverse indexed and others we don't.

And so in those cases, what we'll do is we'll take that sort key and we'll copy it to another attribute on the table, and then we're going to create a GSIPK on that extended attribute, and that kind of filters out all the items that we don't want, which we'd get if I just took the sort key and the partition key and flipped them around. And so in some cases, it can save a significant amount of money for the customer.

Jeremy: Yeah, because you mentioned that if you just do the inverted index and you flip the PK and the SK for another GSI, the problem is that every single SK value then gets indexed, and then all of the attribute data, depending on projections (we'll talk about that in a second), gets copied over as well. So by not doing that ... and this is something that I think is an extremely powerful technique, this idea of a sparse index, right?

Rick: Exactly.

Jeremy: Where, yeah, you just take a little bit of the data and you put it into the field, or in an attribute, and if there is no data in that attribute, whether it's the PK or the SK, it doesn't get copied over.

Rick: That's correct. Yeah. I mean, if the attribute exists on the item, then the item gets copied. If the attribute does not exist on the item, then it just sits on the table and never goes to the GSI. So this is the technique that we'll use, like you said, when creating just the inverted index with the key flip is going to cause a problem. And why might that cause a problem? Well, because some of those items are going to have things like maybe a date stamp hash UUID and ... that's a useless value on a ...

Jeremy: You're never going to be able to use that...

Rick: So why do I want to pay to store it and replicate it and do all that? Right. So, and we've got a lot more of those complex use cases these days that are requiring that. So that pattern has always been around, it just used to be a lot less of a common case. Now it's becoming more of a common case, so we talk more to it.
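A minimal sketch of the sparse index idea, with assumed attribute names: only items that actually carry the GSI key attribute are replicated into the index.

```python
# Illustrative sketch of a sparse GSI; table, index, and attribute names assumed.
import boto3

table = boto3.resource("dynamodb").Table("CustomerOrders")  # hypothetical

# Carries GSI1PK, so it is replicated into GSI1.
table.put_item(Item={
    "PK": "CUST#C123",
    "SK": "O#2020-01-15",
    "GSI1PK": "REP#R42",
})

# Omits GSI1PK entirely, so it stays on the table only and never consumes
# storage or write capacity on the index. That is the sparse index pattern.
table.put_item(Item={
    "PK": "CUST#C123",
    "SK": "ADDR#home",
})
```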

Jeremy: Yeah, and I think the other thing that I sort of noticed about doing this is, you had mentioned, you've always said this for quite some time, that NoSQL doesn't mean flexible, right? It's not a flexible data model. But what you did say in your talk at re:Invent 2019, and I didn't quite reference it the same way, or maybe I was just reading between the lines, or I'm just too much of a geek and I picked up on it, but this idea that there is some flexibility here, especially if you do this copying of attributes within the same item.

You can go back and run some sort of batch query, redecorate the items, move some things around and if you're not relying on existing attributes in a sense, there is actually some flexibility there, right?

Rick: Oh, there absolutely is. I mean, we've now had to go back and extend, revisit, retool hundreds of services that were built as part of the migration at Amazon, as you can imagine. I mean, applications don't just stay static forever. So in many cases, like I said, it's just simply add a new partition type, maybe add a new GSI or overload an existing GSI in a different way, right?

Rick: If I'm adding new types of items, I get to reuse all my existing GSIs again, right? Because those items need to be indexed and I have, let's say three indexes on the table. Every new item type I add can be indexed up to three times without having to create a new additional index.

Jeremy: Yeah.

Rick: So it gives you a lot of flexibility. If I need to regroup the existing items and the existing attributes, maybe refine those access patterns somewhat, I can, like you said, do a table scan, do a slight ETL, maybe annotate those existing keys with an additional composite, or change the hierarchy of the composite or something.

But yeah, it's not impossible. It's just simply a process and honestly, as I've gone more and more through this process, I don't see how it's different than relational. I just don't. We went through all this with relational databases, right? I got to add a new column. Oh boy, what's ... It can't have a default value, but it has to have a default value. Oh my gosh. It's a terabyte table. And so it will be three weeks while the database is offline, right?

Jeremy: Or adding an index. Even adding an index. It's like I'm going to go take a two week vacation on large tables and then come back.

Rick: Exactly. These are things that ... I mean it's just kind of like ... Look, altering the data model stinks. It's always been a pain. I don't know if it's any more of a pain in NoSQL. Now that I've done this dozens if not hundreds of times, I don't think it's any harder. I think the one thing that might be different is that the ETL portion of the process is more up to you as a developer. In a relational database, when I add an index and things like this, it just kind of happens in the background.

If I want to add a column with a default value, I don't have to worry about doing a table scan and write back, but I mean, other than that, there's no real difference. So one thing I could say is that at least with NoSQL, when you're doing these types of things, the data is online [crosstalk] possible, right? Each one of these updates is kind of atomic in its own envelope, whereas with that relational database, you add that index and it's like, come back tomorrow, right?
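As a sketch of the kind of "slight ETL" Rick mentions, here is one way a scan-and-redecorate backfill might look; the attribute being added (GSI2PK), the key formats, and the status field are all hypothetical:

```python
# Illustrative sketch of an online backfill; names and key formats are assumed.
import boto3

table = boto3.resource("dynamodb").Table("CustomerOrders")  # hypothetical

# Scan the table in pages and decorate existing order items with a new GSI
# key attribute so a newly added index can pick them up. The table stays
# online; each update is its own atomic write.
scan_kwargs = {}
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        if item["SK"].startswith("O#") and "GSI2PK" not in item:
            table.update_item(
                Key={"PK": item["PK"], "SK": item["SK"]},
                UpdateExpression="SET GSI2PK = :s",
                ExpressionAttributeValues={":s": "STATE#" + item.get("status", "UNKNOWN")},
            )
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```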

Rick: This is the kind of thing where ... And that's the other advantage of a cloud data service like DynamoDB: the ability to add a GSI, and I want that GSI to be available quickly, so I'm going to give it a million WCUs for an hour or so, so that it will very quickly suck the data off of the table, populate itself, and come online as fast as possible, right?

We can't do that in a relational database because you're stealing IOPS, you're stealing throughput and bandwidth from your workload when you add that index, right?

Jeremy: All of a sudden, every query slows down and you're starting to wonder what's happening ...

Rick: "What's going on?" Exactly. Exactly.

Jeremy: So another thing, and this is important to me, I think, is cost optimization, and that's why thinking about sort of these write patterns and optimizing for that matters. There's another way, if you are copying data into a GSI, that you can optimize what gets copied over, and that's projecting data into those indexes.

And you can just copy the keys, you can include some select attributes, or you can project all of it, and so obviously, it depends on the pattern, but let's talk about projections for a second.

Rick: Cool. So projections are one of the most powerful features of DynamoDB. It's the ability, as you said, to restrict the data that hits the index, right? So in document databases like DocumentDB or MongoDB, when I create a compound index, I'm able to project additional attributes onto the index, but for each attribute I add to the index, it increases the complexity of the insert, right? Because those are essentially sorting dimensions, right? So if I say I want these three attributes on the index, I get to sort by those three dimensions on every single insert that I make.

So that becomes a significant overhead on the system, whereas with DynamoDB, what you're going to do is you're going to specify those two keys, the partition key and the sort key, and we don't sort on any other attributes, but you can choose what other attributes you want to project into that write, and we'll go ahead and just take those attributes along, store them on the index.

Now, when I query the index, I don't have to ... I don't just get the items that matched, I get the items that matched plus the data that I projected. That saves me a round trip back to the table to go get the items, get the data from those items. So like if you look at MongoDB or DocumentDB, when you look at a query and you explain the query, it's going to tell you two numbers. It will say documents returned and documents scanned.

If documents returned is X and document scanned is zero, that means that the query was covered by the index. It means every attribute that I requested existed on the index definition. That is extremely rare. It almost never happens, right?

Jeremy: Right.

Rick: Usually what I'm doing when I query a document database is saying, "Get me those documents." Or get me some subset of those documents, get me something from those documents. So this is what happens in that situation: you're going to get a query explain back that says documents returned X, documents scanned X. That means I found all the documents in the index and I had to go back to the collection and get all those documents and, "Oh, I have no choice, but I'm going to return the entire document because that's the only choice the document database gives you." Right?

Rick: So you're going to read the entire mess, right? So if you've got a bunch of 16 megabyte documents and what you're really requesting is just a couple of kilobytes of data, then you get to read all those 16 megabyte documents and pull that couple of kilobytes of data out and serve it up. So it's a very inefficient way to do it. One of the best things about DynamoDB is it just lets you choose which parts of the data need to tag along for that pattern. Maybe this item might be a couple of hundred kilobytes, but that pattern only needs five kilobytes of data. I'll only project the five kilobytes onto the index, right? So it's a lot more efficient.
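For reference, the projection choice is made when the index is defined; this sketch adds a hypothetical GSI with an INCLUDE projection (index, table, and attribute names are invented):

```python
# Illustrative sketch of declaring a GSI projection; all names are assumed.
import boto3

client = boto3.client("dynamodb")

client.update_table(
    TableName="CustomerOrders",  # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "GSI2PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "GSI2",
            "KeySchema": [
                {"AttributeName": "GSI2PK", "KeyType": "HASH"},
                {"AttributeName": "SK", "KeyType": "RANGE"},
            ],
            # ProjectionType can be "KEYS_ONLY" (just the keys),
            # "INCLUDE" (keys plus chosen attributes), or "ALL".
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["status", "total"],
            },
            # Add "ProvisionedThroughput" here if the table is not on-demand.
        }
    }],
)
```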

Jeremy: Yeah, and I think that what you see is, again, people just sort of copy everything over into that index, and I see this all the time. It's just like, "Oh, I'll just copy all the data over because it's just easier and then I have it."

Rick: Right. Right. [crosstalk], project all.

Jeremy: Yeah. And I think that in some cases, maybe that makes sense, but I think in a lot of other cases, it doesn't make sense because again, every bit of data you're writing over, every attribute you copy, is costing you, and if you start going over that one kilobyte write limit, then you're using more and more of these write capacity units to do that.

So the other bit of optimization around that, though, is that if you ... and again, this depends on the access pattern, but if you were to write just the keys, for example, maybe you have an infrequent access pattern. You have to think about, and I'm sure you would agree with me on this, how often do you need all of that data? Is there a way that maybe you just need the keys, and then it might be cheaper to go look that up on the primary index once you have [crosstalk]

Rick: Right. As a matter of fact, a great use case for this: multiple service teams inside of Amazon look for exception cases, right? They don't expect exception cases, and when they get those exception cases, they don't expect very many of them, and so what we do is we maintain ... but they need to check quite frequently, right?

So in our order processing workflows, for example, if orders are languishing, we've got to get those things out. I mean, we've got Prime customers, they've got to get shipments out. If fulfillments and orders are languishing, we need to know, and we need to know in minutes, right? So they're literally running these types of exception handlers every five to 10 minutes and all they're really doing is just looking for things that are in a state that is considered to be a languishing state or an error condition state that must be expedited.

Most of the time, those queries are coming back empty. Every now and then, they come back with a handful of items. So what are we really interested in? We're just interested in the keys, right? Oh, these items are an exception. Okay, go get those items and notify somebody and run the escalation workflow. But we don't have to store all those items on the table all the time because most of the time, we don't even get any items and when we do, it's not a big deal for us to go get them from the table. So save the throughput, right?
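A hedged sketch of that exception-handler pattern with invented names: query a sparse, keys-only index that is usually empty, and only go back to the table for the rare hits.

```python
# Illustrative sketch; the index, key names, and state value are assumptions.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical

# The sparse GSI only contains orders currently flagged as languishing and
# projects keys only, so this query is cheap and usually returns nothing.
resp = table.query(
    IndexName="ExceptionIndex",
    KeyConditionExpression=Key("ExcPK").eq("STATE#LANGUISHING"),
)

for key_item in resp["Items"]:
    # A keys-only projection still returns the table's primary key, so the
    # handful of exceptions can be fetched in full and escalated.
    full = table.get_item(Key={"PK": key_item["PK"], "SK": key_item["SK"]})["Item"]
    # escalate(full)  # placeholder for the notification workflow
```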

Jeremy: Absolutely. Yeah, and then one last thing on optimization here and again, I think ... I'm not trying to take money away from Amazon, I'm just trying to let people know that there's some good ways that you can think about this and one of those is the attribute name itself, right? The size of the attribute names, that affects the size of the item, right?

Rick: Yeah, sure. In all NoSQL databases, not just DynamoDB, right?

Jeremy: Well, of course, of course not, yeah. Of course.

Rick: As a matter of fact, one of my favorite stories is from my days at MongoDB. I was working with a university customer, and I think 80% of the data they had in their system was attribute names, in 64 terabytes of data.

Jeremy: My goodness.

Rick: Yeah, so when you think about the impact of that, every item I write to the database carries a copy of those attribute names. So as a developer, I mean obviously, the simplest thing to do would be one, two, three, four, five or A, B, C, D, but that's obviously not very meaningful and you don't want to have to map all that. But if you use kind of meaningful abbreviations, right? Like customer ID, CID. Your developers are going to understand what CID means, these are going to become kind of just part of your vocabulary. Please do that. You'll save yourself a fortune in the long run, especially at scale and that's [crosstalk]...

Jeremy: Yeah, absolutely, absolutely. All right, so another thing that GSIs allow you to do, and this is ... You explained this brilliantly in your talks, but it allows you to actually do a lot of really cool relational modeling and there's things like z-index and there's ... You can do sort of graph or edge nodes and some of this other fancy stuff and there's a whole bunch of stuff on the Amazon website that kind of explains some of this. But I think one that is really, really interesting is this idea of adjacency lists. Could you just explain that quickly?

Rick: Yeah, sure. So now, adjacency lists have been around a long time, right? An adjacency list is really a graph. A graph is nothing more than, at its core, a many-to-many relationship, right? You have a bunch of nodes, nodes have relationships with each other, nodes can be related to many others and can have many others related to them, right? So what we're really doing when we build an adjacency list on a DynamoDB table is we're just creating a lot of partitions on the table that are related to each other, right?

Those partitions are kind of like nodes, right? In the example we're talking about, which was salespeople and customers, we've got customer nodes and sales rep nodes, and every time a customer buys something, or a sales rep sells something to a customer, we can create a relationship between that customer and that sales rep, and the way we do that is by sticking an item into the customer's partition that says, "Hey, I have an edge that points to this guy."

And so maybe the first item we stick inside of the customer's partition would be a customer item, right? It describes the customer. So the partition key is the customer ID, the sort key is the customer ID, and then we have some extended attributes that describe the customer: name, login, email, all that stuff. Then let's say a sales rep comes along and sells something to the customer. We can put an item inside of that customer's partition that's sorted on the sales rep ID, right? And inside of that item, we can say, "Well, what did the sales rep sell, on what date, and for how much?" Right?

Jeremy: Right.

Rick: And so that's how he's related to the customer, right? So you get the idea, what are we doing? We're building a graph, we're putting edges into the graph, and these edges have properties to describe those relationships. So it's kind of the same thing as having two tables with a mapping table in between them, right? That many-to-many relationship. Now, inside of that structure, I've got a table with the edges inside of one partition.

So I have customer partitions, I've got sales rep partitions and the customer partition has all these edges, but the sales rep partition doesn't. So it's a directed graph, right?

Jeremy: Yeah.

Rick: It means that the customer knows what sales reps he's related to, but the sales rep doesn't. So if I want to create an undirected graph where both sides know what they're connected to, then I'll create a GSI, and in the GSI, what I'll do is I'll take the sort key and the partition key and I'll just flip them around.

And now, all the items, when I query by sales rep, I'm going to see all those edges for all the customers that he's related to, right? Because all those edges were sorted on the sales rep ID inside of the customer partitions. When I flip the keys around, they're going to be sorted on the customer ID inside of the sales rep partition, and that gives me the ability to query the other side of the many-to-many relationship. So this is one of the things about data modeling in NoSQL, it's an interesting exercise because it demonstrates that I do not have to denormalize to be able to maintain these many-to-many relationships.

Jeremy: Right. Yes.

Rick: And that is a very important learning, so to speak, because a lot of people will go ahead and create two documents, one for the customer, one for the sales rep, and inside of each document, they'll update the other with the order and the details and all of this, and that's how they're going to track those many-to-many relationships. But that is problematic, right? I mean, there's a lot of work you have to do at the application layer. What happens if I update one half of the relationship and not the other? What happens [crosstalk] dies in the middle. I mean, all kinds of crazy things can happen, right?

So using this type of construct, where I just make an insert onto the table, there's a 100% SLA guarantee in DynamoDB that that GSI replication will absolutely occur. You can count on the fact that both of those relationships are going to get updated, and that's actually very, very powerful.
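A minimal sketch of the adjacency list Rick describes, with assumed table, index, and key formats: node and edge items share the customer's partition, and an inverted GSI exposes the sales rep side of the relationship.

```python
# Illustrative sketch of an adjacency list; all names and formats are assumed.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("SalesGraph")  # hypothetical

# Node item: the customer itself.
table.put_item(Item={"PK": "CUST#C123", "SK": "CUST#C123", "name": "Acme"})

# Edge item: a sale links the customer to a sales rep, and the edge carries
# the properties of the relationship.
table.put_item(Item={"PK": "CUST#C123", "SK": "REP#R42",
                     "saleDate": "2020-01-15", "amount": 250})

# Customer side of the many-to-many: edges live in the customer's partition.
reps_for_customer = table.query(
    KeyConditionExpression=Key("PK").eq("CUST#C123") & Key("SK").begins_with("REP#")
)["Items"]

# Sales rep side: an inverted GSI (SK as partition key, PK as sort key)
# regroups the same edge items under the rep.
customers_for_rep = table.query(
    IndexName="InvertedIndex",  # hypothetical GSI with the keys flipped
    KeyConditionExpression=Key("SK").eq("REP#R42"),
)["Items"]
```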

Jeremy: Yeah, and like I said, there is a lot of information out there; the DynamoDB best practices on the AWS site explain some of this stuff as well. But it is very, very cool, and I will say, I had modeled quite a few tables using that approach, trying to do these overloaded indexes and reuse attributes and SKs and things. And then as soon as I changed to doing it with these separate GSI attributes and so forth, it actually makes it so much easier to do.

So if you're confused with DynamoDB modeling, really take this approach of those extra attributes for GSIs. All right, so one of the things that you have never mentioned or at least I don't think I've ever seen you mention it, at least not in any of your talks for your modeling is local secondary indexes.

And I used to think, "Hey, this is great. They've got really strong guarantees and then it's sort of this great use case if you want to do a couple of different sorts." But LSIs are not quite ...

Rick: Not the panacea you might think they are.

Jeremy: Yes, correct.

ON THE NEXT EPISODE, I CONTINUE MY CHAT WITH RICK HOULIHAN...