91. That’s a Wrap – AWS re:Invent 2019 Takeaways – Part 1

Summary

We’re happy to report that we are back and survived AWS re:Invent. As promised, re:Invent is a heavyweight of a conference and this year did not disappoint!

With 4 keynotes, over 3,000 sessions, and hundreds of new product and feature announcements, we’ve got a lot of ground to cover. In fact, we have so much to share with you, that we are splitting this into a special two-part mini-series.

In this episode of Mobycast, we start by recapping some of the big keynote sessions and discuss the new products and technologies that we are most excited about.

Show Details

In this episode, we cover the following topics:

  • re:Invent 2019 by the numbers: 65,000 attendees, 3,000+ sessions, 4 keynotes, 6 venues.
  • Recap and analysis of Monday Night Live keynote with Peter DeSantis, including:
    • What is high performance computing (HPC)?
    • How AWS is reinventing the supercomputer.
    • Why everyone should care about HPC, not just the scientists.
    • How networking advancements are paving the way forward for cluster computing and enabling entirely new types of problem solving.
    • A discussion of the Elastic Fabric Adapter (EFA) and the new Scalable Reliable Datagram (SRD) networking protocol as a replacement for TCP for high performance networking.
    • Using the Nitro System to enable new instance types for machine learning infrastructure, such as the P3dn, G4dn and Inf1 instance types.
    • Utilizing custom silicon to make the “Inferentia” processor, which is a high-performance ML inference chip.
  • Recap and analysis of Andy Jassy’s keynote, including:
    • The theme of this year’s keynote is transformation, presented via 6 theme songs.
    • “Don’t wait until tomorrow” (Van Halen, “Right Now”)
      • Your transformation needs to start today. The problems will only get harder, deeper tomorrow.
    • “Don’t stop me now, I’m having such a good time” (Queen, “Don’t Stop Me Now”)
      • Developers love AWS and its capabilities (175+ services, breadth & depth).
      • AWS is rapidly innovating its compute capabilities with new instance types (driven by Nitro System) and new ways of running containers (including the just announced Fargate for EKS).
    • “Is that all you get for your money?” (Billy Joel, “Movin’ Out (Anthony’s Song)”)
      • You need to modernize your technology stack. Get off mainframes, migrate away from the old guard databases with their licensing tricks, and switch from Windows to Linux.
    • “The hunger keeps on growing” (Dave Matthews Band, “Too Much”)
      • Data is exploding, and customers are moving from data silos to data lakes, with S3 the most popular choice for data lakes.
      • New feature, Amazon S3 Access Points, makes granting access to S3 data easier on a per-user/application basis.
      • AWS is a leader in analytics infrastructure with Athena, EMR, Redshift, Elasticsearch, Kinesis, and QuickSight.
      • AWS is investing heavily in its Redshift platform, with many new features announced including the ability to now manage compute and storage separately using the new Redshift RA3 instances.

Links

End Song

You Just Can’t, by Roy England

More Info

We’d love to hear from you! You can reach us at:

Stevie Rose: We’re happy to report that we are back and survived AWS re:Invent. As promised, re:Invent is a heavyweight of a conference, and this year it did not disappoint. With four keynotes, over 3,000 sessions, and hundreds of new product and feature announcements, we’ve got a lot of ground to cover. In fact, we have so much to share with you that we are splitting this into a special two-part mini-series.
In this episode of Mobycast, we start by recapping some of the big keynote sessions and discuss the new products and technologies that we are most excited about. Welcome to Mobycast, a show about the techniques and technologies used by the best cloud-native software teams. Each week your hosts, Jon Christensen and Chris Hickman, pick a software concept and dive deep to figure it out.

Jon Christensen: Welcome Chris, it’s another episode of Mobycast.

Chris Hickman: Hey, Jon it’s good to be back.

Jon Christensen: Yeah, good to have you back. So, Chris, what were you up to last week from Sunday to Friday?

Chris Hickman: Sunday to Friday was a whirlwind blur. So yes, we were at re:Invent in Las Vegas, and it definitely felt like a heavyweight 12-round match.

Jon Christensen: Yeah, I didn’t have a second. I think we mentioned before, the team from Argentina came this year and I was so excited to see them, and Raul wrote to me on Slack just yesterday. I was sort of like, “Yeah, it would have been nice to hang out more, but we were pretty busy.” And he goes, “Yeah, we didn’t even get to have dinner together.” I was like, “Oh man, that’s too bad, there was so much going on.”

Chris Hickman: And even when we all went to re:Play, we still didn’t have dinner together.

Jon Christensen: I know we lost each other at the beginning and then that was it. Everyone was just gone and then felt like there was no cell service. And then, eventually I finally got a little window of cell service of Raul saying, “Hey, I’m about to leave.”

Chris Hickman: Yeah, so welcome to the madness that is re:Invent, 65,000 of your closest friends packed over six venues, 3,000-plus sessions. I don’t know about you, but I was out the door every morning at 7:00 AM and really didn’t get back until about 10:00 PM at night, and lather, rinse, repeat each day. So it was quite the buffet, if you will, of learning and content and activities and everything else.

Jon Christensen: Yeah, and I didn’t get to all the sessions this year, only about two thirds of them, only about 2000. How about you Chris?

Chris Hickman: I fared a little bit better than that. I went to … So I did 17 sessions. I went to three keynotes, went to several after-hours receptions for various things, and then there was re:Play. So one of the days I think I hit six sessions, basically all back to back, and that was pretty rough. Yeah, but I’m mostly going there for the learning, there is so much learning to do. It’s a great place to do it and just really drink from the firehose.

Jon Christensen: Yeah. And let’s see, I didn’t do as many sessions because I definitely wanted to spend time trying to track people down and meet with people, which is a project in and of itself. And then also I did a couple of workshops this year. So Kelsus, the other company that we sometimes talk about, has some clients that are doing IoT, and while I’m familiar with IoT architecture and principles, we haven’t used a lot of the IoT services from AWS yet.
It just was like, “Oh, I need to go figure this stuff out, I need to make sure that I have my head around it.” And so I did some workshops in IoT, and they were, A, super fun. I’ve got a freaking drink dispenser that I brought home that’s so cool, that you can use from the internet. And then B, just really exactly what I needed. Just seeing how all the pieces work and putting them all together, it was very pragmatic for me to go to these workshops.

Chris Hickman: Cool, which is a little bit of a different take versus our pre-show that we did, where we kind of poo-pooed the builders labs, the builder sessions, the workshops, and the chalk talks and said the lecture-style sessions are where it’s at. So it’s good that you did find some alternative sessions that you enjoyed and got a lot out of.

Jon Christensen: Yeah, and I think the difference for me is the one I had been referring to before was all done for you and you used their computers. These ones were different; it was bring your own computer. It wasn’t bring your own AWS account, but bring your own computer and get in there and do stuff with your own computer. And there’s something about that that just helps things stick for me.

Chris Hickman: Yeah, and again, that’s the difference between the workshops and the builder sessions versus the hands-on labs. The hands-on labs are what you did before, where they have like a hundred different pre-canned labs to work on using their computers. You just walk up, and if there’s a computer station available, you sit down and start playing around with it. And it’s the same labs that you can get online to do from your desk at home.

Jon Christensen: Right, so cool. I’m sure that some listeners want to know more than just our personal experience. Maybe some of our more professional thoughts about what we learned, what AWS is up to in the world, and how that changes the whole landscape of being software developers.

Chris Hickman: Yeah, absolutely. There’s tons of content there, and rather than doing a recap or deep dive on each one of the sessions, or even selected ones, we figured today we would just recap the three primary keynotes. And then as we recap them, we’ll tease out some of what we think are the important themes and takeaways.
And that’s what we’ll really finish up with: okay, what really are the main things to take away from this? Where do we see the innovation happening? What’s important to know going forward? What should we start to come up to speed on? What were the lessons learned? And we’ll finish with that.

Jon Christensen: Cool, all right. Do you want to talk about Andy’s first or do you want to talk about the Monday Night Madness first?

Chris Hickman: Let’s do the Monday night one. We’ll go in chronological order. So [inaudible 00:07:02] I don’t know about you, but I’ve never been to a Monday Night Madness keynote before. And so it was kind of interesting to me, being my first one. It’s a-

Jon Christensen: Oh, sorry, it was Monday Night Live. You’re confusing it with Midnight Madness, which is Sunday night.

Chris Hickman: Monday Night Live, right, with Peter DeSantis. So this is about infrastructure, and this ended up being mostly all about high performance computing, supercomputing, networking, and machine learning infrastructure, and basically how all this stuff is intertwined. It was really a focus on what they’re doing from an infrastructure standpoint that is allowing them to continue to innovate and take us into the future.
And so it was pretty interesting just to see the progression. Moore’s Law has existed for quite some time, right? But we’re starting to run up against some very real problems with that. And so it’s almost like you’ve got to start thinking about different approaches. There are different types of compute, like quantum computing, which we’ll talk about a little bit. But then you really kind of go from the monolithic model, if you will, to the microservices approach, where you’re going to have clusters. And that’s how you get the gains.
And so that was really a large part of this particular talk. It started off with just discussing high performance computing. What is it? It’s abbreviated HPC; I think we’ve all seen the acronym, talked about it, or seen references to it. But it’s like, “Is that really something that I care about? I don’t have a need for it.” He started off framing it in that context, and by the end of it, I think it was made much more relevant to us that these kinds of techniques are really going to be necessary going forward.

Jon Christensen: I want to try to tell you what my mental model of this is, and bounce it off you and see if it makes sense to you. My mental model is: in the beginning the chips were able to do one instruction at a time. You give it an instruction, it does it. Next one, next one, next one, it’s all in serial. Then comes along this idea of pipelining instructions, so you can kind of be working on more than one instruction at once with a single chip. Then after that comes this whole concept of multiple cores. So you have a single chip and it’s got different cores on it. And there’s a way for it to move some instructions to different cores, so that it’s almost like you’ve got four computers inside of one computer, but everything is really close together.
So it’s easy to move the instructions around to the core that they need to run on and get more work done with a single computer. And then what this feels like is, let’s take that one step further. And yeah, we’ve always had networked computers. And they’ve always been able to do different things, and maybe they can all be involved in solving the same problem.
But what if we could sort of compile one executable and have it run across a cluster of computers without actually thinking of it as a cluster of computers? What would it take to be able to do that? How could we get the networking latency across those computers low enough that the whole set of computers acts like a single computer? Is that along the lines of what this is about?

Chris Hickman: Yeah, absolutely. I mean this is the whole basically divide and conquer approach to scalability. We’ve seen it in software, we’ve seen it in data. And so it’s the same thing now with computers themselves. And so that progression of like, okay, what are the ways that we can basically just divvy it up and spread it across multiple things. Whether it be across multiple cores or multiple pipelines.
Now in order to scale, we needed to partition the problem up into discrete parts that can be executed simultaneously on other computers. And that’s how we scale. That’s going to be the method of scaling and that’s literally what a supercomputer is.

Jon Christensen: And the two pieces of magic to this feel like, one, the actual hardware and protocol upgrades to make the networking faster, to make the hardware able to talk from computer to computer fast enough that it all feels like a single computer. That feels like one part of the magic. But the other part of the magic feels like kind of doing some stuff automatically, so that you’re not having to actually break the problem down yourself. That intelligent routing of instructions is happening based on your executable. Yeah, you see what I mean?

Chris Hickman: Yeah, and I think we’ve been limited by the former, and the latter is more real; it’s been here. The software to basically do that partitioning and then bring it back together, that’s kind of existed. It’s just that we’ve been limited by the hardware. The number of computers that we can have in these clusters is limited based upon certain fundamental constraints. And that’s really where this talk was going with HPC, high performance computing, and leading into supercomputers. Talking about how the largest supercomputers can have thousands of servers, but really it ends up becoming this hardware bottleneck, where they can’t communicate with each other fast enough and that becomes the limiting factor.
The fabric of communication was the problem. So AWS has made lots of investments here, and that’s primarily what they were highlighting: a couple of new, big advancements that they’ve been working on. One is that they have really rethought how to do system design, and that has resulted in the creation of their Nitro System, which we’ll definitely talk about a bit more. And then from a networking standpoint, they’ve re-imagined networking, looked at what the problems are there, and come up with some novel solutions. They’ve come out with a new way of networking with their Elastic Fabric Adapter, EFA, and a new networking protocol called Scalable Reliable Datagram, SRD, which is really going after the fundamental problems that they’re running into with TCP.
So TCP was built and defined many years ago for networking between two computers over a network. It became the backbone for the internet and point-to-point communications, and it works great for that. But when you have a lot of computers all on basically the same network, trying to communicate with each other very, very quickly and passing around lots of data, TCP is not nearly consistent enough. There’s too much variability built into it, and it just has some fundamental technical problems. Between Nitro and EFA, with SRD, those are two really big advancements that they’ve made that now get them past this problem.
So there were a lot of charts and benchmarks and whatnot. But the net result was, without Nitro and without EFA, you could scale up to, let’s say, about 150 nodes, instances, VMs basically strung together to make a supercomputer. And after that, you didn’t really get any more gains from adding nodes to it. And that was because of these fundamental problems that they had. And then with Nitro and with EFA, that problem, that bottleneck, goes away. So they’re now seeing linear performance gains as they continue to add nodes, which is just wonderful.
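For listeners who want to poke at the EFA side of this themselves, here is a minimal, hypothetical sketch of launching an EFA-enabled instance into a cluster placement group with boto3. The AMI, subnet, and security group IDs are placeholders, and c5n.18xlarge is just one instance size that supports EFA; none of this comes from the keynote itself.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group keeps nodes physically close for low-latency networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch an EFA-capable instance type, attaching the primary network
# interface with InterfaceType="efa" to request an Elastic Fabric Adapter.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5n.18xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
    }],
)
```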

Jon Christensen: It’s cool, I’m laughing, but this is it, Chris, this is it. This is where the innovation is. You went to this keynote and I missed it because I was just a fool about how I signed up for the keynotes, and then you told me afterwards about what they talked about, and that was my takeaway: this is where software is progressing, inside the cloud, with companies like AWS totally re-imagining computers and how they talk to each other, because they control it all, top to bottom.
So let’s break up computers into different parts, let’s make chips to do different things. You can do so much more. And I think of Apple too, the whole ecosystem around the iPhone, top to bottom. Of course it’s so great, because they control everything. Well, it’s the same thing with the cloud. If you control everything top to bottom in the cloud, you can really make a lot of progress.

Chris Hickman: Yeah, and everything is building on the previous work. It’s absolutely an exponential chart of gains. So early on you’re putting in all that foundational work to build up your platform and to give you all the various capabilities. So think about how much work folks like AWS have done just on compute, on storage, on networking, on security, management, all that stuff. And now that’s a great core competitive advantage. They’ve got that foundation in place, and now, rolling out stuff on top of that, being able to integrate with it, they can innovate so much faster and every, it’s just-

Jon Christensen: It’s amazing.

Chris Hickman: It’s crazy.

Jon Christensen: It’s amazing that they could be innovating on top of this stuff, because part of me just imagines people running around with their hair on fire, trying to add enough disk fast enough. As everyone’s like, “I’m putting my data in AWS,” like, “Oh my God, get us another disk, we’ve got to get there.” Meanwhile, they’re like, “Oh, actually we’re going to design a new chip and do 150 million other things at the same time.”

Chris Hickman: It’s all done on a scale that it’s kind of hard to comprehend.

Jon Christensen: That’s what I’m talking about.

Chris Hickman: Remember, Amazon has half a million employees now, or 600,000, I mean it’s some ridiculous number right now. Now, that’s not all AWS, right? But AWS is a very large portion of it as well. It’s not half, I’m sure, but is it 100,000 or 150,000? I don’t know. But they have a lot of people, they have a lot of experts, they have a lot of PhDs. They have tons of infrastructure in place. They have-

Jon Christensen: Yeah, so you can have 10,000 people running around with their hair on fire plugging in disks and still have enough left over to think about the big picture.

Chris Hickman: Yeah, they have over 200 PoPs, points of presence. So these are essentially data centers, and they have over 200 spread over the entire world. We talk about the networking backbone: they have their own dedicated backbone that they built, they have their own cables, undersea cables, that carry just Amazon traffic.

Jon Christensen: Wow.

Chris Hickman: It’s not shared with anyone else, and so they have this global network in place, big fat pipes to build that networking backbone. They have 200-plus data centers. Everything they do is at a scale that’s really kind of hard to understand. So they aren’t running around with their hair on fire looking for [inaudible 00:19:51] or anything like that. They are over-provisioned. But one of the quotes was, “There’s no compression algorithm for experience.”

Jon Christensen: I loved that one.

Chris Hickman: That’s really true here. They have been doing this for 20-plus years now, starting off with Amazon and going through that rapid growth of Amazon and the retail site crashing and burning. Really hard lessons.

Jon Christensen: Right, that’s what I’m talking about. It’s not that long ago that they were running around with their hair on fire. How they turned that corner is amazing.

Chris Hickman: Yeah, well, in order to get to where they are now, they had to do that, otherwise they wouldn’t be around. There’s just a tremendous amount of innovation now that’s based upon all that previous experience and all of the technology that they’ve built to date. And it’s just, again, kind of mind-boggling what the result of it is. In this Monday Night Live talk it was: okay, we can now build these supercomputers out of clusters of VMs. Instead of having this huge big-box, traditional supercomputer that costs so much more money and that there are only 50 of in the whole world, we can now build these with all of our standard infrastructure.
And so we’re now getting results where something like a computational fluid dynamics problem that took six hours to simulate before now takes 30 minutes.

Jon Christensen: Wow.

Chris Hickman: That’s a 12x performance increase there. Another example: networking has gotten a 20x increase in performance just in the last six years. It’s now enabling … They can string together 5,000 C5n instances, VMs with 100 Gigabit networking. So that’s 360,000 cores with 960 terabytes of memory. And that’s what they can do now. So the computing power that you have there, the kinds of problems you can solve and how quickly you can solve them, five years ago that was unthinkable, and it’s now reality. And so what is it going to be five years from now?
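As a quick sanity check on those cluster numbers, here is a back-of-the-envelope calculation assuming the largest C5n size, c5n.18xlarge (72 vCPUs and 192 GiB of memory per instance). The keynote did not specify the exact instance size, so treat this only as an illustration of where the totals come from.

```python
# Rough arithmetic behind the "5,000 C5n instances" cluster figures,
# assuming c5n.18xlarge: 72 vCPUs and 192 GiB of memory per instance.
instances = 5_000
vcpus_per_instance = 72
mem_gib_per_instance = 192

total_vcpus = instances * vcpus_per_instance        # 360,000 vCPUs, matching the quoted core count
total_mem_gib = instances * mem_gib_per_instance    # 960,000 GiB, i.e. roughly the quoted "960 terabytes"

print(total_vcpus, total_mem_gib)
```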

Jon Christensen: Yeah, and I don’t know, I mentioned my friend Andrew Blum’s book The Weather Machine before, but I hadn’t read it, and I actually listened to it on the way to re:Invent. And it talked about using supercomputers to model the weather. The European weather model, the “Euro” that you hear about, lives in a supercomputer. They might be looking over at AWS now, and they probably should be.

Chris Hickman: This is one of the classic problems, right?

Jon Christensen: Absolutely.

Chris Hickman: This is weather simulation, CFD, computational fluid dynamics, AI, ML. There’s a lot of drug simulation. Like, let me think, there are going to be so many new problems and application domains that can now be modeled with this kind of power. Things that before seemed just so … I kind of see it, it’s like at some point we’ll be able to actually model the human system and all the biology that goes on in it. There are so many inputs to it, there are so many variables, but once you can build that model, we have the computing power to actually run it. And so once you do that, now it’s like, can you get rid of [inaudible 00:23:51].

Jon Christensen: What will happen if I have this chocolate?

Chris Hickman: Exactly, so really pretty interesting. And so that keynote was rounded out with talking about the advances that they’re making with AI/ML infrastructure. So they did announce three new types of instances, and they’re all based … All instances going forward are going to be based on Nitro, and so that gives you the enhanced networking and a lot of the other features that come along with Nitro, which we will talk about a little bit here. Three new instances: the P3dn instances, which are analogous to the P family instances, so definitely for ML, for training models. The G4dn instances are new; that’s the G family, which is more for inference in ML.
And then the new Inf1 instances. These are utilizing that custom silicon that they built, the Inferentia processor. This is a custom high-performance machine learning inference chip done by AWS. And so this is another big theme that we’re seeing: AWS acquired Annapurna Labs, I think about four or five years ago. They’re a chip-making company, and they are just going gangbusters now on making custom silicon for all these various instance types and features. So there’s custom silicon going after AI and ML. There’s custom silicon in the Nitro controller that’s allowing all that to happen, and even in some of the new product launches they had. So we’ll talk a little bit about Aqua, but Aqua is another new system that has custom silicon in it, to get much better performance when communicating with S3 and bringing that data into Redshift.

Jon Christensen: And that’s not even a hard decision. It’s like, any software that they have that gets deployed on every single one of their instances, or gets run across the entire infrastructure kind of always, and that’s stable: move that into silicon, we don’t need that as software anymore.

Chris Hickman: Yeah, absolutely. So they have that breadth of knowledge and experience, and again, there’s no compression algorithm for that. They’ve been doing this for a while now. This is now just part of their tool set. It’s not just throwing computers together. Now they’re building chips, they’re building networking hardware and protocols and algorithms, just everything across the whole gamut.

Jon Christensen: Yeah, so the only place my mind goes here is like, ah, I just want to be able to touch and play with that and have access to it. And it’s a little unfortunate that it’s all proprietary, all owned by Amazon, and who knows, right? We don’t really get to have a say in their decisions.
We cover a lot of information here on Mobycast, and if you’ve ever wanted to go back and remind yourself of something we talked about in a previous episode, it can be hard to search through our website and transcripts to find exactly what you’re looking for. Well, now it’s a lot easier. All you have to do is go to Mobycast.fm/show-notes and sign up. We’ll send you our weekly, super-detailed outline that we use to actually record the show. And a lot of times this outline contains more information than we get to during our hour on the air. Just sign up and get weekly Mobycast cheat sheets for our episodes delivered right to your inbox.
Let’s talk about this. I want to go to the next part, or talk about another keynote, but one of the big takeaways that we have, maybe we can jump right to that takeaway: where is innovation happening? This takeaway comes from looking at the expo floor, listening to the keynotes, and going to sessions. And in my mind, innovation is happening in the clouds. It’s happening inside Google, inside Microsoft, inside Amazon. It’s good that it’s happening across at least three clouds; they’re not just in one. I do think AWS probably has a big head start on at least some of this stuff, but they all have head starts in certain areas. How much innovation did you see on the expo floor, Chris?

Chris Hickman: I would say, walking the expo floor myself, I didn’t get over to the Aria, so I don’t know about that one. But I did do the Venetian, which is the bigger one, and there weren’t a lot of booths there that caught my eye. It did feel like a lot of the same stuff. There were a lot of folks there dealing with hybrid cloud. There are still a lot of people that are on-prem that are looking to get to the cloud, or at least partially, and so they need whatever support to do that. Whether it’s, how do I move data from this place to that place? How do I do backup? How do I-
What’s the story there? So there are a lot of companies out there in that space that are helping with that, and that’s a big part of the trade show floor. There were obviously all of our good friends in the APM space. So many booths out there in APM: Datadog and New Relic and Splunk and SignalFx, which is now owned by Splunk. There were those guys, and there were some of the bigger players: Dell was there, and VMware. But again, they have their place in what they’re doing. And so a lot of these players are there, again, really in support of, “Hey, you’ve been on-prem, you want to move to cloud. It’s a big, hard problem. It’s a lot of work and we’re going to help you with that.”

Jon Christensen: There’s also a bunch of vendors that were like, “Let’s take this problem that’s kind of hard and try to make it easier for you. [inaudible 00:30:36] make it blocks-and-arrows kind of stuff.” And that’s really, I don’t know, it’s underwhelming to see those companies too.

Chris Hickman: Yeah, there was one company there whose whole value proposition, I think, was that they were a GUI for Snowball. And I was just a little … That’s a bit of a head-scratcher, because first of all, how many people are using Snowball? Is your market big enough? And then how much are they going to pay for a GUI when, at the end of the day, it’s an S3 copy? Get your data onto a Snowball and then you just ship it on its way. There are quite a few companies, I think, in that space as well. So it’s hard, right? Because people know … You don’t want to be in a space where you know you’re going to be competing with the big ones like AWS.
So you’re not going to see someone doing, say, voice transcription, right? And the ones that are there that do end up in the cross-hairs are the ones that get surprised, I think. One of the ones that caught my eye was a booth that said, “Oh, we have AI for enterprise search.” So it’s a company that says, “Hey, this is for the enterprise.” So as an [inaudible 00:32:03], you have information that’s spread across a whole bunch of different places. It can be in Dropbox, it could be in Google Drive, it could be in SharePoint. It’s a mixture of Word documents and maybe, I don’t know, source code, it could be whatever.
It’s just spread everywhere, and it’s kind of hard to find. So there’s this value proposition of having this one place where you can go and type in a natural-language query, say, “Hey, how do I get a VPN connection account set up?” or something like that, right? To be able to type that in and then, boom, here are the results. It went out and it knew where the information was, it understood the context of your query, and it presented you the information. That’s a really strong value proposition, because it’s a hard problem that I think just about every company is dealing with. Even just the onboarding process for someone: what does it take to tell someone, here’s all the information you need?
Here’s how you get all your accounts set up. Here’s how you get signed up for benefits. Here’s how you access this system, here’s how you get credentials for that, or whatnot. It’s a mess. So there was a company there doing that. And then of course, in one of the keynotes, Amazon announced, “Oh, by the way, here’s the new service we have that basically does exactly this.” And so this has happened before, and it’s kind of always interesting. I was talking with the vendor doing this, asking questions like, “How does it work? And what about this? What about that?” And then my last question was, “Well, so what do you think about Kendra?”, which is Amazon’s version of this.
And usually the answer is always like, “Oh, we’re better.” And it’s like, that’s not a good place to be in [inaudible 00:33:57] just say, we’re better, or we’ve been doing it longer. Those are not good reasons why you’re a better choice. I think those companies get caught off guard by this. The saving grace, I think, for those companies is going to be that they have a multi-cloud approach; that is going to be the way for them to compete. Because I can think of other companies in the same space. Pulumi is one, based here in the Seattle area, where they’re doing basically infrastructure as code, but via actual code, like JavaScript or Python or whatnot. And that’s what the CDK is that Amazon has now.
So really a pretty apples-to-apples comparison, but the big thing I think Pulumi has is, they’re multi-cloud [crosstalk 00:34:54].
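To make that comparison concrete, here is a minimal sketch of the infrastructure-as-code-in-a-real-language idea using the CDK’s Python bindings from that era (v1-style imports). The stack and bucket names are made up for illustration; Pulumi programs look broadly similar in shape but can target multiple clouds.

```python
# Minimal AWS CDK (v1-era) app in Python: infrastructure defined as ordinary code.
from aws_cdk import core
from aws_cdk import aws_s3 as s3


class DataLakeStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # A versioned S3 bucket, declared like any other Python object.
        s3.Bucket(self, "DataLakeBucket", versioned=True)


app = core.App()
DataLakeStack(app, "data-lake")
app.synth()  # emits a CloudFormation template, deployed with `cdk deploy`
```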

Jon Christensen: Good luck using the CDK on Google Cloud, it’s not going to happen.

Chris Hickman: That’s definitely a good competitive advantage for them, and it keeps them from being completely squashed by AWS. The same thing goes for Spotinst, which is another company. They’re all about how you best leverage Spot Instances as a fleet to lower your AWS bill. They’ve done really well, even though AWS continues to roll out features that enable that as well. So they have Spot Fleet, they now have Spot support for Fargate, there’s just so much that they’re doing. They’ve come out with Savings Plans, and they’ve really simplified the whole pricing model for Spot Instances. So AWS has been doing a lot to enable that. But Spotinst is still growing very healthily, they’re doing well. I think, again, they have a multi-cloud approach; they work with the other providers.
And I think it’s one of those things where AWS doesn’t care if Spotinst is successful. They don’t really view them as a competitive threat, because at the end of the day, it’s better business for AWS. Spot instances are just unused capacity sitting there idle that they’re not making any money off of. And so if there are other folks out there that are helping to increase that utilization and bring it up, then that’s a great thing for AWS. And so they’re going to have a friendly relationship.
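As an aside on the “Spot support for Fargate” piece: it shipped as Fargate Spot, used through ECS capacity providers. Here is a minimal, hypothetical boto3 sketch of biasing a cluster’s default strategy toward FARGATE_SPOT; the cluster name and weights are just illustrative assumptions.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Attach the built-in Fargate capacity providers to a cluster and bias the
# default strategy toward the cheaper, interruptible FARGATE_SPOT capacity.
ecs.put_cluster_capacity_providers(
    cluster="demo-cluster",
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},  # most tasks on Spot
        {"capacityProvider": "FARGATE", "weight": 1},       # the rest on on-demand
    ],
)
```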

Jon Christensen: And it’s like, they’ll have a friendly relationship and they’ll be happy about Spotinst, but they’re also not going to worry at all about stepping over Spotinst and cutting into its market share. It goes both ways, it’s kind of just-

Chris Hickman: Yeah, it’s true.

Jon Christensen: Should we talk about one of the other keynotes, or the other two keynotes, real quick?

Chris Hickman: Absolutely.

Jon Christensen: Even though they were only five hours total of keynote?

Chris Hickman: Yeah. So, let’s talk about Andy Jassy’s keynote. This is definitely the biggie. This year it was three hours long, so a long time to sit in your seat, but honestly, it didn’t even feel that long. There was just so much information there, so much to talk about. As usual, Andy always has a … There’s a theme to it, and then it’s broken up into sections, and usually each section has a theme song that goes along with it. This year the theme for Andy was transformation, and there were six aspects to it. So starting off at the beginning, the first one was “Don’t wait until tomorrow,” which is from the Van Halen song “Right Now.” And the point here was just, look, this transformation of moving from on-prem into the cloud, or using more of the cloud, is a hard problem, but you know what, it’s only going to get harder tomorrow if you don’t start today. This was more of a high-level one.

Jon Christensen: He was talking to the enterprise CEOs in the room.

Chris Hickman: Yeah, absolutely. Pointing out that from the senior leadership team all the way down, you need that conviction, that alignment, to tackle those big problems now, and to put your aggressive top-down goals in place. Don’t go for the small little projects and wins; have some big, audacious goals to go after. Very much a cultural, process thing. But that was the first facet of transformation, if you will. The next one was “Don’t stop me now, I’m having such a good time,” which is a Queen song. And basically the gist of this was, “Hey, developers love AWS and its capabilities.” It’s got the breadth, it’s got the depth, we’re talking over 175 services, and this is the best place to be building. But he also pointed out there’s a lot of room for growth here. I don’t know if it’s a surprising metric, but it is what it is, and that is that out of total IT spend, 97% is on-prem and only 3% is cloud. That’s a big disparity, and that means there’s a lot of room for growth there.

Jon Christensen: Yeah, you know what though, all of a sudden I just had this thought: companies that are super cloud-native almost don’t really have an IT department. You know what I mean? Their IT departments are so much smaller. So when you’re adding up all the IT spend, it’s obviously going to lean towards the on-prem companies, because they’re the ones that are running around plugging in computers. So there’s a little bit of weirdness there.

Chris Hickman: This IT spend also includes whatever you’re spending towards that stuff. So whether you bought a computer for on-prem, or whether you have an AWS bill that’s 10,000 bucks a month, right? That’s all IT spend. Again, it’s a very real number.

Jon Christensen: You convinced me.

Chris Hickman: Yeah, so there’s the very real fact that the cloud is new-ish; we’re really only talking about the last 10 years or so, versus the amount of on-prem investment, which goes back 50 years for some companies. There is a lot of stuff that’s still there on-prem, and people are still trying to figure out how they get to the cloud. For a long time there were people who were like, “Hey, we’re not going to move to the cloud because we don’t think it’s secure,” or “we think there are going to be performance issues,” or whatnot. A lot of those concerns have gone away. And so, folding right back into this transformation theme that he had for this keynote of “Don’t put off those hard problems anymore,” maybe he gets that to more like 50/50 instead of 97/3.
Of course, he pointed out that AWS is the leader here in infrastructure as a service, with almost 50% market share. Microsoft comes in second at 15.5%, followed by Alibaba at 7.7% and then Google at 4%.

Jon Christensen: Wow.

Chris Hickman: So it’s kind of interesting, Alibaba is really making some gains there.

Jon Christensen: Yeah, [crosstalk 00:41:42] not.

Chris Hickman: And then as part of this theme about developers having fun and AWS being a great place to build, he really talked about their innovation in compute, and then he also talked a bit about containers. So on the compute side, he brought up Nitro again and just how important that is: it’s really reinventing the hypervisor and enabling a lot of new types of instances and capabilities. And then also their chip innovations. So we talked about the custom silicon that they’re making, their A1 instances and the Graviton chip, which is ARM-based and seeing upwards of 40% performance gains over the standard Intel-based architectures. And then also the Inferentia chip for doing inference. So pretty interesting innovations there on silicon.
And then on the container side, he pointed out kind of an interesting statistic: 40% of customers are choosing Fargate. Almost half of the customers coming onto AWS to run containers with ECS or EKS are choosing Fargate. And so with that, they did announce a new capability, Fargate for EKS. So now, just like we’ve had Fargate for ECS, which we talked about at length in a previous series of Mobycast, we now have that for EKS as well, for Kubernetes. You can run Kubernetes in the cloud and not have to manage your hardware, your machines. And so that was the second facet, if you will, of transformation.
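For context on what Fargate for EKS looks like in practice, here is a minimal, hypothetical boto3 sketch of creating an EKS Fargate profile so pods in a given namespace run on Fargate instead of managed worker nodes. The cluster name, role ARN, subnets, and namespace are illustrative assumptions.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# A Fargate profile tells EKS which pods (matched by namespace/labels) should be
# scheduled onto Fargate, so no EC2 worker nodes are needed for them.
eks.create_fargate_profile(
    fargateProfileName="default-namespace-on-fargate",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-execution",
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    selectors=[{"namespace": "default"}],
)
```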
And the third was “Is that all you get for your money?”, which was the Billy Joel song “Movin’ Out.” And so this was really kind of a jab at modernization, going back to some of the typical suspects that they like to poke fun at. The main themes here were, “Hey, get off the mainframes.” So IBM: get off that, become modern, and move to the cloud. Then the old guard databases with their tricks, so again, we’re talking Oracle and SQL Server, where they’re really expensive with high licensing fees and they keep changing the game. AWS’s position is you just need to go to open-source-based databases like Postgres and MySQL and, of course, Aurora. And then the third thing there was switching from Windows to Linux.
With Windows, again, you’re paying licensing fees for every one of those server installations, versus Linux, where you’re not. So Andy made some jabs against Microsoft for some of the changes they’ve made recently with their licensing of Windows, where it’s not nearly as portable. With the newer versions, you can’t do BYOL, bring your own license, to the cloud. And so there was definitely some angst with that, and the message was just, kill your Windows. I think you referred to it as closing Windows.

Jon Christensen: There used to be a nice migration path to AWS. So you have all these machines in your data center, let’s move to AWS, and you can just bring the licenses with you. You don’t have to buy them again.

Chris Hickman: Yeah, and so I guess in recent versions of Windows, they’ve changed that, where you just can’t do that anymore. But I think you probably can still use them on Azure, right? So-

Jon Christensen: Right, exactly.

Chris Hickman: The fourth one was “The hunger keeps on growing,” which comes from the Dave Matthews Band song “Too Much.” And basically the overall point here was that data is exploding. We know this, right? We’re moving from silos of data to big data lakes, storage is growing really rapidly, and we need ways of dealing with all this data. He talked about how data lakes are becoming really important, and AWS is the most popular place for data lakes, including the biggest ones. And S3 in particular is the choice for implementation.
They had a new announcement for Amazon S3 Access Points. And this is just a much easier, simpler way to share data in S3 among multiple types of applications, or users, or entities, without really getting into the weeds of bucket policies. So you can define these access points for each of your users or applications, on a per-application basis if you will. You can have hundreds of these access points per bucket, and you can have policies on the access points. So you can say, “Hey, I only want this traffic to come from within the VPC,” or whatever other kinds of limitations you want. It’s a really flexible, easy way of managing permissions and access to your S3 buckets.
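To make that concrete, here is a minimal, hypothetical boto3 sketch of creating a VPC-restricted access point on a bucket and attaching a policy to it. The account ID, bucket, VPC ID, and the application role in the policy are all illustrative assumptions.

```python
import json
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")
account_id = "123456789012"  # placeholder account

# Create an access point on an existing bucket, restricted to traffic from one VPC.
s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-app-ap",
    Bucket="my-data-lake-bucket",
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)

# Grant a single application role read access through this access point only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/analytics-app"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/analytics-app-ap/object/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="analytics-app-ap", Policy=json.dumps(policy)
)
```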

Jon Christensen: Yeah, but at the same time, and this may be a little snarky, but now there’s yet another thing to troubleshoot when you can’t access the data that you’re trying to get to in S3. “I’ve been looking at the IAM policies forever and everything seems fine.” “Oh, well, did you look at the access point?”

Chris Hickman: Yeah, I would imagine, especially for new buckets that people create, you’re going to pick one way or the other. So it’s either going to be a bucket policy or it’s going to be access points.

Jon Christensen: Interesting.

Chris Hickman: Yeah, and then after that he talked about analytics. Again, you have all this data, and really what’s the use of having it if you can’t actually look at it, analyze it, and use it? And so this was a little bit of a surprise for me, just how much they really focused on and highlighted Redshift. So Redshift is their data warehouse technology. They mentioned that it was launched in 2012 and it was the fastest-growing AWS service until Aurora came along and displaced it. Now Aurora is the fastest-growing service, but up until Aurora, Redshift was.
Lots of people using it, lots of adoption. They announced five or six major new features for Redshift. We’re not going to go through all of them, but it was pretty impressive what they’ve done. They announced new instance types for Redshift, the RA3 instances with managed storage. So basically now you can separate the compute and the storage and manage those separately, which gets into the wheelhouse of some of the other platforms out there; some of the big advantages that they’ve had over Redshift are now going away. So if you need more processing power, you can manage that, and if you really need more storage and processing power is not an issue, then you can just up the storage.
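As a rough illustration of the “compute and storage managed separately” point, here is a minimal, hypothetical boto3 sketch of creating an RA3 cluster and later resizing only its node count, while managed storage grows independently as data lands. The identifiers and credentials are placeholders.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Create a Redshift cluster on RA3 nodes, which use managed storage so disk
# capacity scales independently of the compute nodes you pay for.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    NodeType="ra3.16xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",  # placeholder credential
    DBName="warehouse",
)

# Later, scale compute only: add nodes for a heavy reporting window
# without changing how much data is stored.
redshift.resize_cluster(
    ClusterIdentifier="analytics-ra3",
    NumberOfNodes=4,
)
```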

Jon Christensen: Well, and the main thing around that is that a lot of times people are doing their data analysis during daylight hours, or they’ll run one big query and need a bunch of power for it and then not need all of that anymore. And with Redshift it was really hard to provision up and down. Snowflake, the non-AWS offering that actually runs on AWS, was a way to deal with that. I think this is going directly after Snowflake, just like they went after Mongo last year.

Stevie Rose: Nobody listens to podcast outros. Why are you still here? Oh, that’s right, it’s the outro song. Come talk to us at Mobycast.fm or on Reddit at r/Mobycast.
