
12. Key Takeaways from Gluecon 2018

Jon Christensen shares five key takeaways from Gluecon 2018 in Denver, which focuses on the “glue” of software – such as APIs, containers, and building distributed systems. He discusses the current state of containerization based on the pulse he felt at the conference. Expectations have gone up, and things have become difficult.

Some of the highlights of the show include:

  • Takeaway 1 – Kubernetes, Kubernetes, Kubernetes: About 60% of the talks were about Kubernetes, and the remaining 40% at least mentioned Kubernetes; AWS ECS and Docker Swarm didn’t get much if any mention.
  • Elastic Kubernetes Service (EKS) is AWS’ managed Kubernetes service to run and configure Kubernetes. It’s an alternative to ECS, and it’s especially useful for people who have already made a significant investment in running their own Kubernetes clusters on AWS.
  • Takeaway 2 – Content: Talks were not technically complex, yet some attendees were confused and overwhelmed with topics like Service Mesh because of being unfamiliar with Docker. The audience could benefit from ECS & Docker, but this conference told them only about Kubernetes.
  • Tool vendors have made it easier to run distributed systems with Kubernetes, but their next step should be making it easier to build distributed systems. Today, it requires 9 files to run ‘Hello World’ with Kubernetes.
  • As technology makes some things easier, expectations become higher, and things get difficult again until the tools catch up with expectations. For example, having a nightly maintenance window is no longer acceptable.
  • Takeaway 3 – Cloud lock-in: Many people are worried about getting locked into a single cloud provider and having difficulties transitioning to a different provider if needed.
  • Takeaway 4 – Going serverless: People are wanting to use serverless for everything, but there are boundaries and limitations to this technology.  The biggest users of AWS Lambda (serverless technology) are not large enterprises building event-driven systems, but startups who just want to quickly deploy a small application without needing to manage a server. If those applications succeed and need to scale, it will be interesting to see if they can still be manageable on Lambda.
  • Takeaway 5 – Fewer vendors were present at Gluecon this year than last. We’re not sure if vendors are struggling or if Gluecon is less attractive for them.

Links and Resources

Kelsus
Secret Stache Media
Jon’s Gluecon Slides: CI/CD Pipelines
Gluecon
Kubernetes
Docker Swarm
Amazon ECS
re:Invent
AWS and Docker Training
Stackify
Lambda

 

Rich: In episode 12 of Mobycast,  John discusses his key takeaways from GlueCon 2018. Welcome to Mobycast, a weekly conversation about containerization, Docker and modern software deployment. Let’s jump right in.

John: Welcome. Rich and Chris, it’s Mobycast Number 12. I’m excited to be here today. We’d like to get started every week with just a little recap of what we’ve been up to this week. So, what have you been up to, Rich?

Rich: We were launching a site this week and, as a result, I didn’t have a whole lot that I could jump into, so I started to look through our Stache framework to see if I could build anything new, and I went down this rabbit hole of trying to build my own set of scaffolding generators or whatever. I’ve always liked building stuff like that, and so what I’m trying to accomplish is just to write some sort of CLI command and have some things created and put into the right folders. I’m about halfway through that and everything’s broken, so hopefully next week we’ll have a little bit of an update on that.

John: Nice. You spent your week buried in tech. How about you, Chris?

Chris: Kind of the same deal after being on a plane for the past five weeks or so. This was the first week back in the office, typing on the keyboard and rolling up the sleeves and getting back into it. I did quite a bit of AWS maintenance and updating to our clusters and also building out a new environment just purely for doing demos and a stable, basically, production copy environment that we can do demos on. Then, also doing quite a bit of just planning and strategy for the projects that I’ve been leading.

John: I figured you just kicked back, because all of our AWS build-outs and deploys are totally automated and stuff, so it all just ran itself. You’ve got a new game that you’re playing or something.

Chris: No, that was about one day of work–well, not quite one day of work–but I tore down all of our machines in one day and rebuilt them all, redid the way that we’re doing our log collection, which was also a sweet deal, and updated them all to the latest, greatest AMIs.

John: That’s actually good. That was obviously a poor attempt at a joke. The dream would be that everything is automated, and the reality is that this stuff does take some time, but I’m impressed at how much you got done in such a short time. As for me, I went to a conference this week, one called GlueCon. It’s in Denver. It’s been held for 10 years now and it is about the glue: about APIs, now about containers, about building distributed systems. They also started introducing the B-word this year, but we won’t talk about Blockchain.

That was what GlueCon was about, and I thought this is the segue, this is what we’ll talk about today. Today, we’ll talk about the state of the containerization world based on, essentially, the pulse that I took over the last two days at GlueCon. The attendees at GlueCon are a lot of Colorado companies: a lot of telecom companies, some healthcare companies, and then big companies like IBM are there. Microsoft was there. Oracle was there, and then random other folks from software companies. It’s a lot of developers and development managers.

It’s not really an executive-focused conference. The instruction that they give to speakers–and I was also lucky enough to be a speaker–is that your talk should have code in it, so every talk did have some code in it, whether it should have or not. Anyway, I wrote down about five takeaways that I thought we could try to discuss in just a short, 20-minute podcast today. The first one is Kubernetes, Kubernetes, Kubernetes.

Something like 60% of the talks were specifically about Kubernetes-related things and then the other 40% at least mentioned Kubernetes. This is not a surprise. Kubernetes is the main orchestrator and it’s what all big enterprise companies are focused on, but I was a little surprised that I didn’t hear a bit more talk of ECS and very surprised that I did not hear the word “swarm” once. What are your thoughts on that, Chris?

Chris: Given the conference that you were at, GlueCon, I’m not surprised at all that people weren’t talking about ECS. Now, if you were at re:Invent, that’s a totally different story. As far as Swarm goes, I think the writing’s pretty much on the wall there for Docker that that’s not going to survive, and they’ve acknowledged that by building native support for Kubernetes into the engine, so I would definitely see Swarm going off into the sunset, and Kubernetes and ECS are going to battle it out.

John: Right on, and I talked to Joe Beda from Heptio, and I talked about ECS and Kubernetes a little bit. Specifically, I also talked about EKS, which is in beta now and hopefully will be GA soon. Joe said that he does think that there’s just going to be an onslaught of new Kubernetes usage as soon as EKS is available. I sort of expect Kelsus will be looking pretty hard at EKS. I don’t know. What do you think? Are we going to look hard at that?

Chris: What is EKS?

John: EKS is Elastic Kubernetes Service, and that will be AWS’ managed Kubernetes service. It helps you configure and run Kubernetes.

Chris: I got you.

John: Chris, do you think that we’re going to be taking a hard look at EKS?

Chris: I don’t know if it’s a hard look but definitely a tangential one. I see that it’s been really useful and interesting for people that have been using Kubernetes and also still want that flexibility to say, “We have some stuff running on prem, we have some stuff running in the cloud,” and may even have multiple cloud providers, so they don’t want to be locked in to the AWS orchestrator or ECS. If you’re in AWS, nothing is going to be tighter for running Docker and containers than Amazon’s version of its orchestrator.

I don’t see a compelling reason for us to say, “We’re going to switch from ECS to EKS.” I think it’s more of a migration path. Like you said, there are so many people running Kubernetes right now inside Amazon and they’re all managing those cluster hosts themselves and it’s not integrated in with load balancers and all the other goodness that you get with ECS. That’s where EKS comes in; it’s for those folks that are using that. They can now use it and take advantage of some of that great AWS integration goodness.

John: Right on, and I agree with you. It kind of leads into another sort of thing. You had mentioned that you’re not surprised, given the audience at GlueCon, that it was all about Kubernetes, and it’s true. When you have cable companies, and telecom companies, and other big enterprise, do-it-the-old-way companies, and not cozied-up-to-AWS types of companies, it’s not surprising to see Kubernetes.

What was interesting, though–and this is the second takeaway–is that there were all of these talks, and a lot of the talks were not that technically complex. If you’ve been able to follow along with Mobycast, you would have been fine in a lot of these talks, but the rest of the audience was not ready for them. A lot of the attendees–I would say more than half of the attendees–just need to go through the process of Docker-izing their stuff and getting out of their legacy, treating-servers-as-pets world.

To sit there and listen to somebody talk about something called Istio, which is a service mesh toolset, they’re like, “Service mesh? Oh my god, I just have this legacy monolith that’s a collection of mini-monoliths, and I don’t even really understand what the value of a service mesh would be, let alone some of these detailed pieces of functionality that you’re talking about that come with this service mesh! It’s just going to blow right by me.”

I just say that because I think that a lot of the audience could benefit from ECS and does just need to get some services behind a load balancer running in Docker that can scale up and down, and that’s what ECS is great at, but they’re sitting here and going to conferences that are all about Kubernetes, Kubernetes, service meshes, service meshes, and I think that there’s something to that.
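To give a feel for what that ECS pattern looks like in practice, here is a minimal, hypothetical sketch (not from the episode) using boto3 to register a task definition and run a service behind an existing load balancer target group; the cluster name, image URI, and target group ARN are all placeholders.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Describe the container: which Docker image to run and which port it listens on.
    ecs.register_task_definition(
        family="hello-api",  # hypothetical service name
        networkMode="bridge",
        containerDefinitions=[{
            "name": "hello-api",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-api:latest",  # placeholder image
            "memory": 256,
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],  # dynamic host port
        }],
    )

    # Run two copies behind an existing ALB target group; ECS handles placement and replacement.
    ecs.create_service(
        cluster="demo-cluster",  # hypothetical cluster
        serviceName="hello-api",
        taskDefinition="hello-api",
        desiredCount=2,
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/hello-api/abc123",  # placeholder
            "containerName": "hello-api",
            "containerPort": 8080,
        }],
    )

Scaling up or down is then largely a matter of changing desiredCount (or attaching auto scaling), which is the part John suggests many of these teams actually need first.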

I think that this conference and the sort of people that are at it are leading a charge, and whether or not the people that need to be the next followers should be using ECS or Kubernetes, they’re probably going to find themselves using Kubernetes just because that’s the loudest sound in the room. Do you hear what I’m saying, Chris? We’re just all following each other, and if everybody’s shouting about Kubernetes, it’s hard to hear the quieter voice, maybe off in a corner, talking about ECS.

Chris: Yeah, we talked about this in a previous episode, how the pace of innovation is relentless. Technology is changing constantly. No one was talking about service meshes two or three years ago and now you have a whole bunch of products in that space for dealing with the control plane across all your nodes and whatnot. You have things like service discoverability. There are a lot of pieces to this.

Especially at these conferences that are geared more towards the open-source world, it’s not packaged up. There are a lot of knobs to be turned and it can be very overwhelming and confusing, and I think a lot of people look at that and they’re just like a deer in the headlights and don’t know what to do with it other than, “Wow, it sounds really cool,” but they’re not sure how practical it is for them. It’s an ongoing problem of how do you keep up and transition, and how do you get it in bite-sized chunks that make sense for you to start using and adopting? People just have to make it a priority, because it’s really easy to just say, “It’s too hard,” throw your arms up, and go back to business as usual.

John: Right. There was a talk from Brendan Burns, who is one of the people who originally created Kubernetes, and the talk was titled This Is Too Hard. You just said those exact words, Chris, “This is too hard,” but one of the points that he made at the beginning of the talk was, “You can’t throw your hands up in the air,” or, “You can’t walk away from this,” because the expectation of everybody now is that you’re going to be creating distributed systems that work, that are available.

Even if you’re making a little mobile app, you’re going to make some distributed systems behind that app that provide APIs that are always up and always available, and there’s not going to be downtime from midnight to 2:00 AM; they’re just available, and you do rolling updates and you do things the modern way. That’s everybody’s expectation now: users, CEOs, everybody. And that really is a problem that is too hard.

John: That is a desperate problem, and I think that everybody’s working really hard at solving this, including Kelsus. We’re trying to help with training, we’re trying to help by talking about this on this podcast, and then tool builders are building tools to try to deal with this on the other side. Brendan said that he feels that they’ve done a good job of making it easier to run distributed systems with Kubernetes and that the next step is to make it easier to build distributed systems. I totally agree. A point that he made is that the minimum number of files and configurations that you need to build a distributed system is something like nine, and that’s too many. If you need to have nine different files with configurations in them just to stand up a hello world service in Kubernetes, that’s too much; that’s too hard.
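To make that point concrete, here is a rough, hypothetical sketch of just the Deployment piece of a Kubernetes hello world, written with the official Kubernetes Python client rather than YAML; the image name, labels, and replica count are placeholders, and even after this you would still need at least a Service (and typically an Ingress, ConfigMaps, RBAC, and so on) before anything is reachable.

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use load_incluster_config() inside a cluster).
    config.load_kube_config()

    app_labels = {"app": "hello-world"}  # placeholder labels

    container = client.V1Container(
        name="hello-world",
        image="nginxdemos/hello:latest",  # placeholder hello-world image
        ports=[client.V1ContainerPort(container_port=80)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hello-world"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=app_labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=app_labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Create the Deployment; exposing it still requires a separate Service object.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

And that is only one of the pieces Brendan is counting, which is why even a trivial service ends up touching so many files.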

Rich: Hey, this is Rich. You might recognize me as the guy who introduces the show but is pretty much silent during the meat of the podcast. The truth is these topics are oftentimes incredibly complex and I’m just too inexperienced to provide much value. What you might not know is that John and Chris created a training product to help developers of all skill sets get up to speed on AWS and Docker. If you’re like me and feel underwater in these conversations, head on over to www.prodockertraining.com and get on the mailing list for the inaugural course. Okay, let’s dive back in.

Rich: I want to challenge that real quick if I can, this idea that it’s too hard, because part of me thinks it’s only too hard for people who don’t have the education and the experience. As a result of what the open source community has done, people like myself have emerged, people who couldn’t have been in this field 20 years ago and never had the opportunity until now. Yeah, it’s too hard for me. It’s probably too hard for a lot of people at that conference but, like you said, you sat through it and everything made sense. Chris would sit through it and it would make sense.

Is it too hard, or do we have too high expectations as end-consumers of what we think is possible on a budget? And, on the other end, are people just saying that they’re more experienced than they really are? Are they getting involved without the experience and education that they really need, and isn’t that more of the problem? I can never call myself an engineer because I don’t have a degree, but I can call myself anything else. No one’s ever really going to argue with that. I can call myself a senior developer. I guess the one thing I can’t call myself is an engineer, because you need a degree for that. But it seems to me that it’s only too hard because access to it is easier; if you’re using all of those services just to throw up a hello world, then you don’t get the point, because you don’t need to do that.

John: Let me just address that a little bit. I think that things are cyclical and that we’re at a point in time right now where doing something that should be fairly straightforward is too hard. If the idea is that you want to put up a service that can essentially look up what parts are available by part-ID number and, in order to do that, you have to do all this configuration and think about availability zones on AWS and think about failover, all kinds of things that don’t actually just get you that part by its ID, then it is too hard.

If you look back on why this is cyclical, it’s because early on, in order to do the same thing in the ’80s, it was also too hard. You maybe had to write assembly language to create a lock so that your code could run without getting stepped on by other code. That was also very hard, and then, later, higher-order programming languages came along that made that stuff a lot easier. As each thing becomes easier, expectations go up and then things get hard again.

In the mid-2000s, things got a little bit easy for a while. It was possible to create web applications without really knowing too much and get them out there and running, and expectations were fairly low about their stability and scalability, but then those expectations went way up with the introduction of mobile and relentless progress by companies like Netflix, Google and Twitter. As we move to the cloud, expectations have gone up and things have become difficult, and so it’s our job as developers to make them easy again, and then the next difficult thing will come along. I haven’t given you a chance to speak to that, Chris. Do you disagree?

Chris: I think it’s just the evolution of the ecosystem. It wasn’t expected back in the late ’90s or early 2000s because the tools, the infrastructure, the ability to be available to everyone just wasn’t there, but it is now. It’s now part of the expectation: having a fail whale is just not acceptable; having scheduled downtime is not acceptable anymore, because all of the infrastructure and the capabilities and the tools are there to do it.

Anyone can do it now; it’s in the hands of the many. It’s no longer just in the hands of the big ones like AOL or Microsoft or Netscape. Everything is advancing and evolving. Technology is weird. We’re staring in the face of AI and ML and blockchain and Moore’s law and, who knows, quantum computing, self-driving cars, drones–like, “It’s not slowing down, people.” That’s interesting because it actually makes me think that maybe there’s this inflection point that we tip past where, as humans, we just can’t keep up with it.

Rich: All those things that you just said are mind-boggling individually, and in aggregate, they’re just not even something you can comprehend. It’s too hard, but I think it’s because our expectations for what we can do are sort of outpacing our ability, or our potential, to meet them.

Chris: We could totally sidetrack on this for a whole session, but this is where augmentation comes in, whatever you call it: personal assistants or robots or AI. Again, the expectation will be that people will be able to do this because you have these other things that help you deal with it. It’s no longer going to be just your own brain that has to keep up with all this stuff; you’re going to have help. You’re going to have robots. You’re going to have really deep AI capabilities. Alexa is still in its early stages, but what is it going to be five years from now? You can’t even imagine what it’s going to be.

John: We’re coming up on the 20-minute mark here and we’ve only hit on two of the five takeaways from GlueCon. I just want to read off what the other three are, because maybe we can fit in one more quick one for our listeners. There’s a lot of worry about cloud lock-in. There are a lot of people that want to jump straight into serverless, and there’s a particular reason for that that I wanted to talk about. Then, the final thing is that I saw that there were fewer vendors this year, and that can be anything from things happening within the GlueCon planning world to actual reasons that some of the vendors that I saw last year might not be there anymore–maybe they didn’t survive.

Those are three more things that I wanted to talk about. Because we’ve touched on this in a previous episode, I think we could quickly touch on people wanting to jump heavy and hard into serverless. We’ve talked about how serverless has a right-fit, right-tool-for-the-job kind of fit, but what we’re seeing is that people want to use it for everything. I had a conversation with one of the founders of a company called Stackify that provides, essentially, GitHub-to-deploy, dashboarding, and monitoring, and help for serverless applications.

When I talked to him about who is using serverless and what his target market was, he talked about how Lambda’s biggest user base is not enterprises, not companies with sophisticated needs around event management and event-driven systems, but people that just want to get something done quickly: startups and people that don’t want to think about infrastructure. I thought that was interesting. Whether or not it’s a good idea for people to be using Lambda for their whole application, that’s what people are using it for, and it’s seen exponential growth in that area. I find it fascinating and I think it’s something to be aware of when we think about the software market. Would you have guessed the same, Chris?

Chris: Yeah, it’s not surprising. In a way, you can kind of think of it as what the BASIC programming language did for folks. It’s something that makes it completely accessible. You can put together these really quick things that do, for instance, a hello world, or resize an image, or whatever it is that you’re doing in Lambda. Maybe you’re doing a very easy Alexa skill or something. It doesn’t surprise me at all, but actually building true engineering software, that’s a whole other story. That may be really good news for Stackify, but I think what happens after that, how long do they stay with that? If it does grow from being a prototype or a toy into something that really needs to be mature, that they can depend on, then is that the right thing?
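As a purely illustrative sketch, not something discussed on the show, this is roughly the scale of thing those folks are deploying: a single Python Lambda handler that looks up a part by ID, in the spirit of John’s earlier example, with no servers, AMIs, or clusters to manage. The event shape assumes an API Gateway proxy integration, and the hard-coded table stands in for a real data store.

    import json

    # Stand-in for a real data store; a production version would query DynamoDB or similar.
    PARTS = {"42": {"name": "flux capacitor", "in_stock": True}}

    def lambda_handler(event, context):
        # Assumes an API Gateway proxy-style event with the part ID as a path parameter.
        part_id = (event.get("pathParameters") or {}).get("id")
        part = PARTS.get(part_id)
        if part is None:
            return {"statusCode": 404, "body": json.dumps({"error": "part not found"})}
        return {"statusCode": 200, "body": json.dumps(part)}

The appeal is that this is essentially the whole deployment unit; the open questions John raises next, around monitoring, scaling limits, and cost, are what’s left once something like this succeeds.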

John: Right, and what are those boundaries? For a microservice, if you put a microservice on Lambda, what kind of limitations does it have around monitoring? What kind of limitations does it have around scaling up quickly, and how big can it scale, and how much does that cost versus doing some of the hard work of creating a distributed service? Lots to talk about there, but more questions as well, because if the reality is that all these companies are building up big systems on Lambda, and the reality is also that some of those companies are going to succeed, I guess I wonder how screwed are they going to be when they achieve that success, and is it going to kill them, are they going to move to other systems, or are they going to be able to figure out a way forward with Lambda? It’ll be interesting.

Chris: Indeed. For me, a fundamental question for those folks is, “Why are you so excited about serverless?” because motivations will dictate whether they are successful in the future. If the motivations are in line with what it gives you, then it makes sense, but otherwise, if your motivation is, “Well, it’s too hard to figure out how to run servers or infrastructure,” it’s probably not the right reason. There are other things that you’re going to have to deal with.

John: It doesn’t prevent you from having to operate a running system.

Rich: At some point, we should do an episode just on serverless so I can wrap my head around this a little bit more. I’d like to know, inherently, what’s wrong with what they’re doing, why it becomes a real problem when they grow to certain sizes.

John: Maybe we could do that next week. I think that would be a good topic to dive a little further into, because I think it’s on everybody’s mind, and I think there are definitely people out there that wonder, “If I’m not using serverless, am I making a bad architectural decision? Isn’t this the modern way?” Let’s talk about that some more for sure, maybe next week. Well, I want to make sure that we don’t lose listeners by talking for an hour, so let’s wrap it up.

Thank you so much, again, this week for your time, Rich and Chris.

Rich: Thanks, John.

Chris: Thanks, guys.

Rich: Well, dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with the show notes and other valuable resources, is available at www.mobycast.fm/12. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.
