
64. Post Gluecon Thoughts and How Aurora Serverless Works

Co-hosts Chris Hickman and Rich Staats listen as Jon Christensen shares his thoughts on GlueCon 2019 and discusses one of his favorite sessions, How We Built Aurora Serverless, presented by Anoop Gupta of Amazon Web Services (AWS).

Some of the highlights of the show include:

  • Out-there Talk: Quantum Computing Application Development is Here by Murray Thom of D-Wave Systems; details how to program for quantum computers
  • GlueCon has gone commercial; 70% of talks seemed like a sales and marketing pitch: “Here’s this tool we’re building. Here’s how it works. Here’s why you should use it.”
  • Aurora Serverless = SQL in the Cloud: give a relational database a schema, query it, and scaling up or down is handled for you; don’t worry about managing connections, instance size, etc.
  • Overview of RDS: Aurora came along to handle scaling and licensing issues with other relational databases; it’s all about speed, availability, and storage
  • Normal vs. Serverless: the premium you pay for Aurora Serverless
  • What’s a serverless database? Automated capacity management, up and down scalability, pay for what you use, built-in availability, and same customer interfaces
  • Different Workloads: If your usage is sporadic and unpredictable, serverless makes sense; normal, steady workloads make serverless not the best solution
  • How it Works: Aurora Serverless scales down to smallest or up to largest instance size, but doesn’t scale out; switch out instance type via proxy and pool
  • Reasons Not to Use Aurora Serverless:
    • You know exactly what you need and can save money by provisioning for a consistent transaction flow.
    • You’re not able to use it due to constant, long-running transactions.
    • You need to react to scaling in less than 30–90 seconds.
  • A new thing in AWS called cellular architecture; basically, it’s just partitioning: breaking things up into cells and limiting blast radius
  • Aurora Serverless doesn’t support read replicas; if you have a read-heavy system and have been relying on read replicas to scale, don’t choose Aurora Serverless

Links and Resources

GlueCon

Amazon Aurora Serverless

Amazon Relational Database Service (RDS)

MySQL

PostgreSQL

SQL Server

MariaDB

Amazon DynamoDB

DockerCon

Ghost

TechCrunch

Chris Hickman’s Blog

Kelsus

Secret Stache Media

Rich: In Episode 64 of Mobycast, Jon shares his thoughts on Gluecon 2019 and then dives into one of his favorite sessions which focused on AWS’ Aurora Serverless. Welcome to Mobycast, a weekly conversation about cloud-native development, AWS, and building distributed systems. Let’s jump right in.

Jon: Welcome, Chris and Rich. It’s another episode of Mobycast.

Chris: Hey.

Rich: Hey guys. It’s good to be back.

Jon: Right on. Rich, what have you been up to this past week?

Rich: Closing deals.

Jon: That’s what I like to hear. Can you give me a few while you’re at it?

Rich: If I come across any that I think you should have, I’ll definitely pass them your way. That’s for sure.

Jon: Right on.

Rich: Yeah, it’s been a good week. We’ve been working really hard on outbound sales. We’re doing LinkedIn outreach, we’re doing physical mailer outreach, and we’ve been doing it for a little over a year-and-a-half. It’s really starting to pay back all the investment, so it’s a nice feeling. Although now the problem is whether we have the resources, the team, and the time to fulfill all this work, but it’s a better problem to have than no work, I guess.

Jon: Yeah. Well, I plugged your company, Secret Stache, at the conference I was at this past week. […] really good WordPress development. You are the one to go to, for sure. How about you, Chris? What have you been up to?

Chris: It’s been a pretty normal week.

Jon: I like normal.

Chris: Yeah. Getting ready for a trip coming up here to San Diego, although looking at the weather, Seattle’s a bit warmer than San Diego right now, which, go figure. I have been listening to a really cool podcast recently that I’ve been enjoying quite a bit. It’s called To Live and Die in LA, and it’s a crazy story. It’s one of these true crime podcasts where a Rolling Stone reporter gets introduced to a missing persons case within the first week of this woman going missing, and then just really goes on, researches it, and goes deep into it with a private investigator. It’s a crazy story, truth stranger than fiction. I’ve been enjoying it a lot.

Jon: As for me, what I’ve been up to lately is going to be the topic of the rest of the show. I went to Gluecon this week, which is a really fun conference just northwest of Denver, between Denver and Boulder. I’ve been going for about four years and it’s run by Eric Norlin, who I consider a friend. Usually, it has lots of cool, innovative talks around open source, around cloud, around APIs, and, back when blockchain was hot, some deep talks about how blockchain worked and what people were trying to do with it.

Also, they tend to sprinkle in a few out-there talks. This year’s out-there talk was about how to program for quantum computers, which I was really excited to learn about. I was actually super surprised: I thought it was going to be like, “Oh yeah, here’s how the scientists do it,” but it was more like, “Here’s a place where you can go and sign up for your own account. We have tools that help you restate the problem you’re trying to solve in a way that the quantum computer can understand, and everyone gets a free minute.” I was like, “What? This is cloud computing now? You can get to a quantum computer in the cloud today?” and they’re like, “Yeah, we’re just trying to get people to realize that this is something they can use.” So, wow. Here we go. Quantum computing.

Chris, when you talked about DockerCon, you were sort of disappointed, and I think it’s a Docker-specific thing, that there’s not a lot of innovation coming out of Docker, the company, now. It was a really enterprisey-feeling conference: “Use Docker’s enterprise tools and here’s how Docker works” kind of talks, for the most part, right?

Chris: Yeah. It’s evolved over the years, kind of a feeling of lost excitement, more enterprise, more corporate. Again, it’s the transition from a new startup with lots of innovation to “Hey, we’ve got to make money”: what’s the business model, focus on customers, and the financials. Being that it is DockerCon, the official conference of Docker, it’s run by Docker itself. They set the agenda, they set the talks, and the overall personality of the conference.

Jon: Yup. At Gluecon, I’m used to general talks that aren’t really commercial. Maybe a couple of commercial talks by some of the top sponsors, but everybody was talking about how this year, a lot of the talks, maybe even as many as 70%, felt a little bit like, “Hey, here’s this tool we’re building. Here’s how it works. Here’s why you should use it,” which is disappointing. I was going to a lot of talks excited, ready to take notes, thinking this could be a great one for Mobycast, and then halfway through I’d realize I was getting pitched and stop taking notes.

Chris: This feels like a trend now. This happened at DockerCon as well, and it’s super disappointing. It basically feels like a marketing pitch, very specific to that particular company’s product or service that they’re selling. It’s just not very useful; it’s a pitch. It feels like that’s been taboo in the past, like it just wasn’t allowed.

Jon: Right, and how is this happening? Is it that people are clever in their CFPs so the talks seem general? I didn’t know a lot of these were going to be pitches until I was in there realizing I was getting pitched. So, yeah, it must be that. Companies are working around it.

Chris: The same thing. Every time I went to a talk, from the description it looked interesting, like, “Oh, this is pretty cool: how do you debug complicated distributed systems, microservices architectures, distributed tracing? That sounds interesting. Let’s go check it out.” And then it’s like, “What? This is someone’s vaporware.” Come on.

Jon: Yeah. There was a lot of that. What’s ironic is that what I’m about to talk about, one of my favorite talks, is unfortunately actually a product plug for AWS. But since it’s AWS, since we talk about AWS a lot on the show, and since honestly the real, trustworthy innovation currently seems to be happening inside the public clouds rather than outside them, I guess it makes sense to give AWS a pass on this and go through the talk, even though it’s a pitch for an AWS product.

Chris: There’s a big difference here, though, right?

Jon: Yeah.

Chris: The title of the talk was Aurora Serverless. It wasn’t like, How You Too Can Run a Relational Database in a Serverless Way in the Cloud, right? That’s the problem, right? It’s totally okay as long as you tell me upfront this is what it’s about.

Jon: Yup. That’s a great point. So, the talk was very good, and I really am interested, and I think a lot of our listeners are too, in how AWS pulls off some of the really cool operational stuff they do with their cloud tooling.

This is Aurora Serverless, which is essentially a database in the cloud that you don’t have to scale up or scale down, because you just send data to it and they take care of the rest. It’s just SQL in the cloud, basically. With a relational database, you give it a schema, then you can query it, and you don’t worry about managing connections, you don’t worry about tuning it, you don’t worry about how big your instances are; you just worry about sending SQL to it.

How did they pull this off? I was curious. It’s been a thing I’ve been excited about because scaling a database up or down, vertical scaling of your database in AWS prior to Aurora Serverless, has always been painful. You’ve got to find a window where you can migrate your data to a bigger database, nothing can be running during that window, and it’s not something you can do when you’re getting hammered by a traffic spike and everybody’s yelling, “Why can’t this handle the load?” That’s not a time when you can scale up your database.
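As a concrete aside, here’s a minimal sketch of what “just send SQL to it” looks like from the client side, with a hypothetical endpoint and made-up credentials; an Aurora Serverless cluster is reached with an ordinary MySQL driver, the same as a provisioned database:

```python
# A minimal sketch, assuming a hypothetical cluster endpoint and
# made-up credentials. Nothing here is Aurora-specific: the cluster
# looks like any ordinary MySQL server to the client.
import pymysql

conn = pymysql.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical
    user="admin",
    password="...",
    database="app",
    connect_timeout=60,  # generous, in case a paused cluster needs ~30s to resume
)

with conn.cursor() as cur:
    cur.execute("SELECT id, name FROM customers WHERE id = %s", (42,))
    print(cur.fetchone())
conn.close()
```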

Chris, this will be one of those ones where I’ll have to do most of the talking, even though I’m not really the expert here on the show. What I’ll try to do is talk through some of the things that I learned and see if you have any comments or ways to think about it that I’m not considering. Sound good?

Chris: Sure. Sounds good.

Jon: Okay. First, the talk was by Anoop Gupta. He’s, I think, basically a Product Manager at AWS, really product-focused on how Aurora Serverless works, but he’s quite technical, as all the product managers at AWS are, and he was able to give a good sense of the architecture, the software architecture, and how it’s all put together.

He started by giving an overview of RDS. We’ve done that on the show before: the database in the cloud, everybody knows that, but it’s provisioned, and RDS is literally MySQL, or literally PostgreSQL, or SQL Server, or Oracle, or MariaDB running in the cloud. Then Aurora came after that to deal with some scaling issues, as well as licensing issues, that those other databases have, and also just to make an AWS-branded tool to pull people in, right?

Chris: Yeah. The big deal with Aurora is its underlying storage layer. Aurora is all about speed and availability, what you would expect from the commercial RDBMS engines, but with engines like MySQL and PostgreSQL on top of a purpose-built underlying storage architecture, which, by the way, is also used by Dynamo. This is one of their core gems at AWS: the Aurora storage technology.

Jon: Right. So, Aurora today is provisioned. You say this is the size you want, maybe the micro, or medium, or large, and then you pay based on that size and based on the amount of data you store in it. The size pricing, I think, is per minute or hour that it runs. Do you actually know off the top of your head whether it’s minutes or hours?

Chris: I think Aurora is really no different than the other RDS services. Again, it’s all about the underlying storage. You still have to provision the servers, you still have to pick the instance type; it’s running MySQL, it’s running PostgreSQL, but what you’re getting is the performance and the availability at a fraction of the cost you’d normally pay. It’s really that storage layer.

Jon: Right. The reason I was talking about how you pay for it is because that will come up again when we talk about Aurora Serverless. You don’t really pay for the instance type anymore, and you do pay for how long things are running. I’m sure on Serverless it’s by the minute, so it’s probably the same on regular Aurora: by the minute, not rounded up to the hour.

Chris: Yeah. In a way, it doesn’t matter because with Aurora normal, it’s always on. You’re paying for 24 hours a day, right?

Jon: Yeah, for sure.

Chris: So it doesn’t matter whether they’re really billing you by the second, or the minute, or the hour. The important point here is that you’re paying 24/7 for as long as your database exists on the network. […] Serverless, totally different story.

Jon: Yup. So, he then went through the question, “What is a serverless database?” I guess inside AWS they literally asked themselves that: if we’re all about making things serverless, what does that mean for a database? Some points he talked about: it has to have automated capacity management, scaling down and scaling up on its own without people doing anything. You only pay for what you use. You don’t have to worry about things like patch management or performance management, tweaking database parameters. And it’s got to have built-in availability, so you don’t have to make sure it’s running in multiple AZs or across regions or anything like that; it’s just got to do that for you.

Then finally, it’s got to have the same customer interfaces that you’re used to. Putting in a schema or making SQL queries has to be the same as it was for Aurora, be that PostgreSQL or MySQL syntax. It can’t introduce a whole new language to talk to it. It’s got to be something you’re used to. Those were their tenets when they decided to make Aurora Serverless.

Chris: Jon, did they make any comparisons to Dynamo? DynamoDB is NoSQL serverless, and Aurora Serverless is basically the exact same thing, but for a relational database instead of a NoSQL database. I’m just wondering, as they were talking about what a serverless database is and what it means, did they compare it to what they’ve already done with Dynamo?

Jon: They didn’t compare it to what they’ve done with Dynamo, and Aurora as a serverless database has a major limitation in how it scales that Dynamo does not have; I guess we’ll get to that in a minute.

When they were looking at how to build this, they thought about the different workloads that RDBMS databases typically handle. They wanted to make sure they could do things that are useful for people with really episodic workloads like dev and test, where traffic just comes in and goes away and the database is mostly idle, or really spiky workloads like gaming, where you have to provision for peak usage. You want to make sure they can handle things across the realm of workload types, not just consistent transactions coming in at a regular rhythm, because if that’s all you have, then maybe it does make sense to just provision the database you need. Right?

Chris: Yeah. It’s absolutely the case that you’re paying a premium for having the convenience of serverless, right?

Jon: Yeah.

Chris: So, if you don’t have predictability, if you’re not using it 24/7 or you’re just using it sporadically, then something like serverless makes a lot of sense. But if you have a normal, steady workload, then serverless is probably not a great solution for you.

Jon: Exactly. Then he got into how it works. This is where we’ll spend the rest of the conversation. I guess the thing that was most surprising to me, although it makes sense, is that it’s just vertical scaling. Aurora Serverless will scale you down to the smallest instance sizes if that’s all you need, or scale you up to the largest instance sizes if that’s what you need. It does not scale you out whatsoever, so no horizontal scaling capabilities at all. They’re not adding servers; they’re just changing servers.

He talked about how customers were basically already doing this themselves, and I guess this is something that Kelsus has never done, but it is doable. If you want to switch out your instance type, you can put a proxy in front of your database, get another instance type warm, then switch the database instance underneath, and the proxy can handle rerouting to the new instance.

Basically, the new instance is still pointing at the same data on disk, so changing instances is just a matter of changing which program is pointing at the data on the disks. The data does not have to be moved; it’s just that the database engine fetching and inserting the data moves to a machine with more or less capability.
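As a rough sketch of that do-it-yourself pattern, here’s one hypothetical way to get the indirection with a short-TTL DNS record instead of a full proxy, assuming a Route 53 private zone; all the names are made up, and the talk itself described a proxy, not DNS:

```python
# Sketch of the manual switch-over pattern using DNS indirection:
# the app connects to a stable name while an operator repoints it
# from the old instance to a pre-warmed bigger (or smaller) one.
# Zone ID, record name, and endpoints are hypothetical.
import boto3

route53 = boto3.client("route53")

def repoint_db(zone_id: str, record_name: str, new_endpoint: str) -> None:
    """Repoint the stable DB name at the freshly warmed instance."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,   # e.g. "db.example.internal."
                    "Type": "CNAME",
                    "TTL": 5,              # short TTL so clients re-resolve quickly
                    "ResourceRecords": [{"Value": new_endpoint}],
                },
            }]
        },
    )

# repoint_db("Z123EXAMPLE", "db.example.internal.",
#            "big-instance.abc123.us-east-1.rds.amazonaws.com")
```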

That’s essentially what this is. They have a proxy layer, and then they have what’s called a pool of warmed instances. As they see that you’re about to scale, they’ll warm up an instance for you, make sure it’s ready, and then switch it in.

Let me tell you a little bit more about exactly how that works. The proxy layer is a multi-tenant proxy layer. Of course, it’s multi-AZ, multi-region, and ready for you. It’s multi-tenant, meaning it’s not just you hitting that proxy layer. I think that’s partly to protect AWS’s own costs: they don’t want to waste containers or machines on customers that aren’t really pushing traffic through that proxy layer. And it is a place where they can keep customer data totally separated and segregated, and it’s fine.

The different calls through that proxy are not going to be able to see each other, and I guess they also did a lot of testing to make sure there are no noisy neighbor problems in the proxy; all the proxy is doing is taking the SQL statements and sending them into the database. Then there’s that warm pool, where each instance is for a particular customer and that customer’s workload. The pool itself might have lots and lots of instances for lots and lots of customers, but whatever it’s warming up for your workload is going to be either a bigger or smaller instance, based on the direction your workload seems to be scaling.

Let me find here in my notes. There were some things about the timing in this that were just amazing. There’s a third component to this: along with the multi-tenant proxy layer and the pool of warming instances, they have a monitoring system. The monitoring system is essentially just watching CPU usage and connections. As it sees CPU usage starting to climb, or connections getting up toward max connections, that’s when it warms up a new instance. If you’re scaling up, it’s going to get one ready. I guess they have the ability to read in pages of the database very quickly and have that buffer of the database ready to go.

Then they look for a little window of lower traffic, and it takes one second to switch from the old database to the new database. So you do have essentially a one-second outage in your system as you scale up or scale down between databases, which is just amazing. Most applications can handle that, especially if they’re web-based applications. One person can wait an extra second; they’ll just think their internet connection had a hiccup.
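Since that one-second window shows up to the application as a dropped connection, here’s a hedged sketch of how an app might ride through it; the retry counts and backoff are assumptions, not numbers from the talk:

```python
# Sketch: ride out a ~1s scaling switchover by reconnecting and
# retrying with a short backoff. All timings here are illustrative.
import time
import pymysql

def query_with_retry(connect, sql, params=(), attempts=5, backoff_s=0.5):
    """Run a query, reconnecting if the connection drops mid-switchover."""
    for attempt in range(attempts):
        try:
            conn = connect()  # e.g. a pymysql.connect(...) factory
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            finally:
                conn.close()
        except pymysql.err.OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff_s * (attempt + 1))  # linear backoff
```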

Chris: Yeah, pretty cool. It all makes sense. If you had to do this yourself, you could do it with your own proxy and by playing around with things like snapshots, backups, and all that kind of stuff. They have the same problem with that one-second hiccup; that’s basically where they find a spot where they can lock the database, stop the writes, stop the transactions, so they can switch what the compute is pointing to on the storage side.

You have to do that pause so that you don’t miss any of the operations that come in. For them, because of the Aurora storage layer, that can be really fast, versus if we had built it ourselves with things like RDS snapshots and restores, it would take a lot longer.

Jon: Right. One of the things Aurora Serverless can do is scale all the way down to zero. If you leave it in its default configuration, it’ll scale to nothing. You can tell it how many capacity units to provision and set the minimum to one, in which case it’ll never scale all the way down to zero. If you do let it scale all the way down to zero, then the time to get a database up for you, as soon as you start getting traffic again, is 30 seconds. So, if you can’t handle a 30-second wait, then you need to keep one capacity unit provisioned at all times.
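For reference, a hedged sketch of what those knobs look like through boto3 for the Aurora Serverless of this episode’s era (v1); the identifiers, password, and capacity numbers are placeholders:

```python
# Sketch: create a serverless cluster that keeps a minimum of one
# capacity unit so it never pauses to zero. Identifiers, password,
# and capacity numbers are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",
    Engine="aurora",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="...",
    ScalingConfiguration={
        "MinCapacity": 1,    # keep one unit warm: never scale to zero
        "MaxCapacity": 64,   # ceiling for vertical scaling
        "AutoPause": False,  # set True (plus SecondsUntilAutoPause) to allow
                             # pause-to-zero and accept the ~30s resume
    },
)
```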

One of the questions was, “When would you not use this? When is this not a good idea?” You already brought one up, Chris: it’s maybe not the greatest idea if you know exactly what you need, because then you could save money with your consistent transaction flow by provisioning exactly the size you need. Another reason you might not be able to use it is if you have constant, long-running transactions. In that case, it’ll just never be able to find that scale point, because there’s never that brief window where it can get in there and stop transactions. I guess another one, and maybe you can help me understand this because I didn’t quite understand what it would mean, is if you use a lot of temporary tables, then it also can’t find a time to scale. Maybe because those are just in memory? Is that the reason why, Chris? Do you know?

Chris: Yeah, sure. It just has to do with concurrency and locking when something like a temp table is being built on the fly. There are issues around that, and I don’t actually know whether or not it’s backed by disk storage. I would imagine that it is.

Jon: I guess the last reason you wouldn’t use it is if you need to be able to react to scaling in less than 30–90 seconds. If you’re really spiky, you start scaling suddenly, and you need to react almost instantly instead of, “Okay, it’s going to be 30–90 seconds before that new, warm instance is ready to get switched in,” then it’s also not a great use case. But arguably, if you need to react to scaling that quickly, you probably just need to size for peak. There’s nothing that’s going to react that quickly.

Chris: I think in that case, too, that’s going from zero to something […] 30–90 seconds.

Jon: No. The switchover happens in one second, but the reaction to the scaling does take 30–90 seconds. If it sees transactions climbing, it’s like, “Oh my goodness. You’re running out of CPU. I’m going to get one ready and then find that one second to switch you over.” That whole process can take 30–90 seconds, which is also amazing. It’s just incredible.

Chris: Then, that’s just like a policy setting, because it’s to minimize the thrashing of scaling up and down. It’s just verifying that, “Yeah. There’s been enough data here that I’m confident that the scaling decision is going to be the right thing to do.”

Jon: Right, which actually is also cool, because that means maybe you can scale up and down a few times per 10 minutes, which would be like, “Wow. That’s a lot of moving around instances. That’s cool.”

Let’s see here. It’s that simple. I talked about the noisy neighbor stuff. One of the things that was outside the scope of this talk, but came up when he was talking about noisy neighbors, is a new thing in AWS called cellular architecture. Have you heard them talking about this, Chris?

Chris: Yeah. This is basically just partitioning. You’re breaking things up into their own cells that each handle a slice of the load, and I think, at the end of the day, you’re relying on just partitioning it up.

Jon: Right. They have a rule where a cell has at most 5000 instances in it. So, for that multi-tenant proxy layer, there are going to be lots and lots of those, but each one is going to have no more than 5000 instances. They’re going to be multi-AZ, but that cell-level design gives them better outage protection than even just multi-AZ deployments.

Chris: It’s all about limiting the blast radius.

Jon: Exactly. The same goes for the warm pool of instances. There cannot be more than 5000 instances in that warm pool before they create a new cell. Then, the cell level is also where they were doing the testing to make sure there are no noisy neighbor problems. I guess most of the work they did on noisy neighbor problems was making sure there are no garbage collection pauses, because apparently garbage collection is what really hinders other users of the proxy. So, it’s basically about keeping memory usage smooth. He didn’t get into exactly how they accomplished that, though.
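To illustrate just the partitioning idea, here’s a toy sketch, not AWS’s implementation: capacity goes into the current cell until it hits the 5000-instance cap, then a fresh cell opens, so a failure in any one cell touches a bounded set of customers.

```python
# Illustrative sketch of the cell idea only, not AWS's implementation:
# assign instances to cells capped at 5,000, so any single cell
# failure has a bounded blast radius.
CELL_CAP = 5000

class CellRouter:
    def __init__(self):
        self.cells = [[]]  # each cell is a list of instance ids

    def assign(self, instance_id: str) -> int:
        """Place an instance in the newest cell, opening a new cell at the cap."""
        if len(self.cells[-1]) >= CELL_CAP:
            self.cells.append([])
        self.cells[-1].append(instance_id)
        return len(self.cells) - 1  # index of the cell it landed in

router = CellRouter()
print(router.assign("customer-42-instance"))  # -> 0
```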

The other reason the instances are not multi-tenant is that, for reasons of security and of just not really knowing exactly how much usage the instances were going to get, they couldn’t hold the requirements around noisy neighbors and around security all the way down to the database level, so they just had to give everybody separate database instances. There was not a way to make the databases themselves multi-tenant, which they also don’t do in regular Aurora. I was really excited; I met Anoop later and told him that we were going to talk about this. Hopefully, he’s excited to hear our conversation.

Chris: Very cool.

Jon: Anything else you wanted to bring up? I think we’ll probably use this; I’m imagining a couple of our clients that would definitely benefit from it, where we’ve had issues trying to scale our database up quickly under spiky traffic. We’ll definitely look at this. What about you, Chris?

Chris: You know, I have my own personal blog and I’ve been meaning to move to the latest version. I’m using Ghost, and the latest version supports MySQL and no longer supports PostgreSQL, so I need to switch to MySQL anyhow. Right now I’m backing it with RDS, which, for a personal blog, is expensive. So, this is perfect for it. Running Aurora Serverless for my own personal blog would be a perfect use case.

Jon: It is.

Chris: I get like eight hits a week.

Jon: Right, and it can handle that. If you were to get TechCrunched, it’d be fine.

Chris: Yeah.

Rich: You should plug your blog, otherwise, you’re going to continue getting eight hits a week.

Chris: Excuse me?

Jon: Go ahead and plug your blog. What is it?

Chris: upstart.chris.hick.com

Jon: Cool.

Rich: Now you’re going to get more than eight hits a week, hopefully.

Chris: We’ll see. I guess the other thing I want to point out is, with Aurora Serverless, there’s not a lot of magic here. It’s just doing vertical scaling, which makes a lot of sense. They can’t do horizontal scaling because that’s one of the […] of the relational database model. In order to get horizontal scaling, you have to partition, and partitioning relational data is really difficult. That’s why we use NoSQL when systems have to be very, very scalable; NoSQL is really the only option there, and that’s why it was built. Something like Aurora Serverless handling the vertical scaling is totally in line with that philosophy. The real magic here comes from that Aurora storage layer and the ability to quickly switch. Basically, it’s not EBS, but it’s the equivalent of detaching from one EBS volume and wiring up to another one, flipping a switch, and changing the routing.

Jon: Actually, that reminds me of something. This is an open question, I’m not exactly sure, but Anoop mentioned that Aurora Serverless doesn’t, I think he said, support read replicas, which is kind of a big deal. So, if you have a really read-heavy system and you’ve been relying on read replicas to scale out of it, Aurora Serverless might not be the one you want to choose. It’s probably better for transaction-processing-type workloads.

Chris: I feel that probably you’d be paying, you’d be overpaying, because it doesn’t really distinguish between read and write traffic. Again, you’re not provisioning anything yourself, so you’re not specifying things like read replicas. You have the single endpoint coming in. It just doesn’t have that support.

Jon: It does feel like a bit of a weakness to me, and it feels like a solvable problem, where they could notice that most of the traffic is reads and add read replicas. I wouldn’t be surprised if they do at some point. That’s just my prediction.

Chris: There’s nothing technically stopping them from doing it. They have all the metrics. They see every transaction coming in, they know what your traffic pattern is, they know whether it would be the right thing to do to add read capacity. It’s a lot of work, but it’s almost a foregone conclusion if serverless gets the traction, if folks are really using it, adopting it, and it takes off. If they keep pitching it as, “This is really good for test databases, for devs to spin things up and down quickly, and for infrequently visited websites,” you kind of wonder whether they’d devote those resources to it, but you never know.

Jon: I think they will. And just to clarify, they talked about those workloads because they want to be good at those and other workloads, too. I don’t know that those were the only types of workloads they were considering; they just wanted to make sure they could handle a variety of workloads. But hopefully I didn’t mischaracterize that.

Chris: I think the other way I’m looking at this is that something like Aurora Serverless makes a lot of sense when you’re first starting out, but once your software or your service has been out there for a while, you have some predictability. Then it probably makes sense to switch off of it.

It’s almost the same model across a bunch of different things. It’s definitely true for when you host your stuff in the cloud versus going on-prem. You’ll see a lot of companies do this. They get to an inflection point where it’s like, “I’m completely overpaying to be running in the cloud,” so you see them pull themselves out of the cloud and go on-prem. […] Dropbox did this not too long ago. There are other companies that have done it.

That’s the great thing about the cloud. You don’t have this big capital expense. You don’t have to make any predictions. You don’t have to know how much you need. It’s just capacity on demand, but you’re paying a premium for it. The same thing applies here with Aurora Serverless, or any of these serverless technologies: you’re paying a premium for the convenience. At some point, you’re going to hit an inflection point where it’s like, “I’m overpaying now. This is not the right solution for me.”

Jon: Right. All right, great. Well, thank you very much for talking with us about that today and thanks, Rich, for putting this together. We’ll talk to you next week.

Chris: All right, see you guys.

Rich: Later.

Well, dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes, and other valuable resources, is available at mobycast.fm/64. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.
