The Docker Transition Checklist

19 steps to better prepare you & your engineering team for migration to containers

02. Transitioning Legacy Applications to Docker

What goes on inside an organization when it decides to make the switch to Docker? What do a team and a company go through? Chris Hickman and Jon Christensen of Kelsus discuss transitioning legacy applications to Docker. About two to three years ago, Chris joined a company that was in the midst of transitioning to Docker to handle its AWS workloads. He was tasked with learning how to Dockerize all of the microservices the company deployed inside AWS.

Some of the highlights of the show include:

  • Transitioning to Docker is a major paradigm shift in how you think about software and how it is developed, debugged, deployed, packaged.
  • Chris started using Docker at a startup company with a small engineering team. They started with hand-written scripts to deploy their four microservices in Docker. When they started, each container ran on its own AWS EC2 instance (virtual machine). Over time, their deployment process evolved with more sophistication, better tools, and improved automation.
  • Jon learned about Docker for Ruby apps many years ago at a Meetup. The speaker described how it changed his own dev process, and it sounded more cumbersome than the standard Rails deployment process. The speaker said Docker was worth the effort, but it was hard to convince others in the audience at that time.
  • Making the transition to Docker requires buy-in at the business level about the benefits of containerization, and buy-in from the engineers who need to understand how it will impact their day-to-day work.
  • There is some friction to adopting Docker. It may feel uncomfortable for a while until it gets easier and the benefits are apparent. This is especially true for teams that are implementing other changes to their deployment process, such as test automation and CI/CD, at the same time as they are implementing Docker.
  • Platforms-as-a-Service (PaaS), such as Heroku and Elastic Beanstalk, hide a lot of complexity and make cloud deployment easy, at least for simple monolithic applications. But that apparent simplicity comes with downsides: a lack of control and flexibility, and less scalability.
  • PaaS offerings don’t work as well with microservices architectures, where many services must be deployed. For example, it’s difficult to manage a collection of related services (applications) and report across those services as a single logical unit.
  • Developers need to understand what it means to run in a container. What does it mean for my code to be isolated inside this container, which is not meant to be seen from the outside? What are the moving parts involved in a containerized app?
  • Developers need to learn new ways to troubleshoot their code. Containers can terminate if your code has an unhandled exception, for example. Developers will need to learn Docker commands to find both running and terminated containers and examine log files, stdout and stderr.
  • Developers will need to learn Docker commands and command-line parameters to enable troubleshooting. They may need to learn tricks such as volume mounts so log files can be written to the host file system, using ‘sleep’ to keep a container running if their app crashes, and using SSH to access the running container to see what’s happening inside.
  • Hot reloading (or hot deployment) is a productivity boost, and developers will need to learn new ways to do this as well. Volume mounts can ‘punch a hole’ in the container to enable hot reloading.
  • Software developers tend to fall into two camps: one camp only focuses on the application logic itself; the other camp knows how that application runs and interacts with the computer and operating system.
  • One of the surprising and truly valuable benefits of Docker is that it makes your team become much more capable as engineers. With simple apps and with PaaS, developers might scrape by without knowing much about how a computer works: networking, memory, I/O, file systems. Docker forces you to learn a little bit about each of these important parts of computing, and that makes you a better developer.
  • A Docker image is a description of how to start a container with your app inside. It is self-describing, meaning it contains everything needed to run on any machine that has the Docker platform installed. The image will run the same way in any environment, which makes sharing images and deploying code easy and reliable.
  • Just as PaaS fills a need in the market by enabling ‘one click’ deployment of apps without any cloud expertise, there are now services, such as Containership, that promise similarly easy deployment of containerized apps in the cloud. This may be a good option for simple apps that don’t require much scalability, and for organizations without any infrastructure expertise. Just as with PaaS, though, this ease of deployment comes at the cost of control and scalability.
  • Kelsus’ journey with Docker has been closely linked to our adoption of more and more AWS services.
  • Should you be agnostic about your cloud provider? AWS is the market leader, investing billions in providing tons of valuable services. Google Cloud, Azure, and others are investing similar efforts. Our advice is to pick one cloud provider and leverage their investments to your advantage.
  • Trying to remain cloud-agnostic makes sense only if you’re using a very small number of basic cloud services.
  • There’s a tipping point where you are using many of your cloud provider’s inter-related services and the cost to change providers is too high.
  • If you have so much business success that your monthly cloud costs explode, and therefore it makes sense to bring your software on-prem, that’s a good problem to have. In this case, you might decide, for example, to use Kubernetes in your own data center instead of ECS on AWS.
  • It is much easier to adopt Docker today than it was 3 years ago. The tools available and bundled services from cloud providers are a big help.
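The “self-describing image” point above is easiest to see in a concrete sketch. Everything here is hypothetical (the app, file names, and the image tag hello-node are invented for illustration), but it shows the whole loop: a Dockerfile captures the runtime, the code, and the start command, so the resulting image runs the same way on any machine with Docker installed.

```shell
# Hypothetical example -- the app, file names, and the "hello-node"
# image tag are invented for illustration.
mkdir -p hello-docker && cd hello-docker

# A trivial app: print a message and exit.
cat > app.js <<'EOF'
console.log('Hello from inside a container');
EOF

# The Dockerfile describes everything needed to run the app:
# base runtime, working directory, code, and start command.
cat > Dockerfile <<'EOF'
FROM node:18-alpine
WORKDIR /app
COPY app.js .
CMD ["node", "app.js"]
EOF

docker build -t hello-node .   # package app + runtime into one image
docker run --rm hello-node     # behaves the same on any Docker host
```

Because the image bundles the Node runtime itself, the host only needs Docker installed; it does not need Node.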

Links and Resources:

Amazon Web Services (AWS): ECS; Elastic Beanstalk; Relational Database Service

Ruby on Rails

Secret Stache Media

Google Compute Cloud

Microsoft Azure

Jon Christensen 00:41

Cool. So here we are, on our second Mobicast. This is pretty good. The first one was fun and a learning experience. And so the second one, we hope to learn even more and have even more fun. So in order to make it more fun, I decided that this week’s topic is going to be comedy, so we have to be funny today. You ready for that, Chris and Rich?

Chris Hickman 01:04

The pressure.

Jon Christensen 01:05

Yes. Just kidding. So what we’re going to talk about is what goes on inside of an organization, when you decide to make the switch to Docker. Who usually makes that decision? What it’s like when people make that decision, when it comes from the top down or the bottom up? And then what does the company and the team that decide to do it, what did they go through? So let’s just see– I have a list of things to get us started. Oh, yeah. So I just wanted to start with a question for Chris, because Chris you’re the one of the three of us that was the first to say, “I’m going to give this a try. I’m going to learn Docker.” So can you tell us about the company situation you were in, or if it was a personal project, or what got you into it and how did that go?

Chris Hickman 02:02

Sure. So I had heard about Docker in various circles and things that I read. I’d played around with it a little bit, but hadn’t really had the opportunity to dive too deeply into it, until I– about, let’s see, three years ago– two, three years ago, joined a company and they had just started to transition to Docker, specifically to run their AWS workloads. And so one of my very first tasks, day one of starting there, was to start doing that. Start Dockerizing all of the microservices that we had running inside AWS. So it was very much get thrown into the deep end of the pool, ramp up very, very quickly, and figure out how to do this and what does it mean, and to start running into all these brand new concepts and really different ways of thinking when it comes to–

Jon Christensen 03:05

So do you know who at that company, what their role is? Who was it that decided to make that decision in the first place? It sounded like it was already made before you joined them, but do you know where it came about?

Chris Hickman 03:20

It was. So at that point in time, I think we had about four services that were currently being deployed. And actually, most of those were– I believe all of those were actually running– I’m trying to think back now. I think all of those were running in– they were dockerized at that point. I don’t know when that decision was made, by whom. It was a relatively recent decision, because I actually joined that team when it was early on. So when I joined that team, we were only about four developers– when I signed my employment agreement, there were four developers on that team, on the engineering team. And by the time I left about a year later, we were up to over 20. So a pretty small team and also not very experienced. So I believe it was pretty much ad hoc. We had a great engineering leader for that team, came from Amazon, Microsoft as well, but I think Docker was new to him as well. So I think they said– it was, again, pretty much an ad hoc process, where it was like, “Hey we understand at a high level, there’s some great features here for Docker. Let’s go ahead and get our toes wet and start playing around with it.” Especially at that point in time, we still weren’t really running production. We hadn’t opened the doors, if you will, for the first version of the service. So that experimentation phase was still kind of non-critical. So when I came on board, basically these services were kind of hand-tuned for Docker. Docker was installed by hand, the deploys were literally scripts to stop Docker, copy images, start Docker. I mean, it was really rudimentary. And so we took it from there.

Jon Christensen 05:29

Interesting, so it does– you said that there was a strong engineering leader there that had come from Amazon and Microsoft. I mean, we don’t know for sure, it sounds like. But it wouldn’t be surprising if that was the person that said, “Let’s try this out. Let’s see if we can make good use of this.”

Chris Hickman 05:44

I suspect so. It’s just part of overall just good engineering practices. So whether it be things like logging or monitoring or just deployment, just kind of knowing that, “Hey, this is definitely a technology that warrants us a closer look, and so let’s start using it.”

Jon Christensen 06:07

Right. It’s so interesting to me, because I’m just kind of wondering for a lot of companies how it makes its way in. So the first time I heard about it was, I went– it was probably three or four years ago, I went to a Ruby on Rails meetup in San Diego. Their SD Ruby Group, it’s called. Everything, all the meetups in San Diego are called SD and then the name of the topic, because people are not that creative about their meetup names. See, we’re on the comedy still [laughter]. The person that stood up and talked about Docker was a pretty young person, somebody early to mid-20s, and essentially trying to convince the group. And the group was ranging from– I think I was mid-30s at the time, and the group ranged from probably mid-40s to several people that were just even still in college. So this person, who had maybe a couple of years of professional software development experience, stood up and said, “Hey, I found this cool thing called Docker, and here’s some tools that you can use to use Docker in your Ruby development environment.” And it’s great because of all the benefits of Docker, but he was really focused on saying how it had changed his development processes. And he wasn’t talking much about using it in production, and I think a lot of us sort of graybeards in the audience were kind of looking at it and going, “Wow, that looks like a lot of overhead.” You can’t even get into your server to see what’s going on, without doing an extra level of shelling. You have to create a terminal to get into the Docker thing, in order to see the logs and see what’s going on. And you also have to map these ports, and do all this extra stuff that feels like a lot of extra work. When we could just be typing rails s, and have a server running on our machine.

Jon Christensen 08:06

But I think this guy was convinced, and we all were kind of not convinced. So it’s sort of interesting to see to have that be my first impression, and then being proven totally wrong over the years. Like completely wrong, that was the future and I just didn’t see it at the time. But I bring it up because if that was the person in an organization that was fired up about Docker, and there were people that had the bulk of their experience, their software development experience pre-Docker, I could see there being a lot of push back.

Chris Hickman 08:47

Absolutely. And I think this is kind of true across the board with lots of new technologies. There’s, with people in general, resistance to change. And as much as we all kind of say we like to learn new things and whatnot, actually changing the way that we work and adopting these new technologies– a lot of times, there’s a lot of resistance there, right? Because it is different, there is friction in adopting those things. So Docker, it is a huge paradigm shift in how you think about software and how it is developed, debugged, deployed, packaged. There’s some pretty radical changes in the way that you work, and there is going to be this period of adoption where it just doesn’t feel good. It’s uncomfortable, it’s messy, it’s frustrating. But if you stick with it, you’ll get over that and you will come to appreciate the features that you do get from that, the benefits that you do get from that. And it gets a lot easier too; as you start to really assimilate those concepts and those paradigm shifts, you become more comfortable with that. So I’ve kind of gone through this now with three separate engineering teams, kind of introducing them to Docker, switching us over from a non-Docker process to a Docker process. And it’s been the exact same adoption pattern each time, which is very, very interesting. But also kind of comforting for me to know now as well, because I kind of know how to– I know how the movie ends.

Jon Christensen 10:35

Right. Right. So I do want to talk about what that pattern is. And maybe you can talk about it in terms of Kelsus, and what you saw when you introduced Docker to Kelsus. But before we get there, I still want to talk about this idea of push back a little bit. Because in fact when you joined Kelsus, it was in January of 2017. I think you got some push back from me on Docker, because I was still not convinced at the time. You remember any of our conversations about that?

Chris Hickman 11:04

I do remember me– you and I talking, and I believe that was one of the first things that I noticed. It was like, “Oh man, we’ve got to fix this. We’ve got to get on Docker. We’ve got to start using ECS. We’ve got some work here to do.” And I do remember you kind of expressing some reservations, if you will. [inaudible], “Whoa, wait a minute. Why do we need to change? Things are working just fine. What do you mean Docker? What do you mean ECS? What are you talking about?” But I think once you start just kind of talking about, “Okay. Well, let’s talk about what it is. Why would we do this? There are some great benefits to doing this,” and let’s talk about that stuff, you can work through that. And I think, kind of what you’re alluding to here too, there are a couple of different levels of buy-in that you have to get, right? You have to get buy-in at the benefit level, kind of at the business decision level, really understanding, “Yeah, this is the right thing to do from just a business efficiency and operations standpoint.” And then you need to get buy-in from the people whose day-to-day work you’re actually going to change, right? So the developers, the engineers, you have to get buy-in there. So there’s two levels of buy-in that you need to get.

Jon Christensen 12:29

Right. And I think one of the things that happened early on, was that we were doing a deployment for some clients and I had it checked in early in the week, and then checked in the middle of the week, and then checked in at the end of the week, and I was like, “Are you serious, we’re still deploying? This should have taken a couple of hours. What are we doing?” So what was happening during that time? And why did something that use to take a couple of hours, take a week?

Chris Hickman 13:02

Right. So now we’re talking about– so when we first adopted Docker, why the deployment process got longer for us.

Jon Christensen 13:12

Yeah. Because still on that topic of push back, and I guess now we’re talking about push back from the business level a little bit.

Chris Hickman 13:18

Yeah. And so that’s that kind of a cycle of adoption where it’s when you first start doing the switch over, there are some really big just paradigm shifts and changes in the way that you think about developing, testing, deploying, packaging the software, that makes things uglier and messier in the short term. And there’s a lot of concepts to ramp up on, and they’re usually concepts that are pretty new for most people, or at least they haven’t thought about them in a deeper way. And so that’s kind of what we went through at Kelsus, was just that ramping up on various new things. Not only was it Docker and containerization, it was also just some of the additional engineering sophistication that went around that, right? So not only were we moving to Docker, but we were also kind of figuring out how to automate our builds and automate our deployments. First through batch scripts, and then actually starting to use a continuous integration system, which we weren’t using. So there’s definitely some new concepts, technology, tools that we were using along the way that just required quite a bit of ramp up. A lot of– there’s pitfalls, there’s the common stumbling blocks that we all go through type thing. So, yeah, there’s an adoption period where it’s messy and it takes longer, but it gets better.

Jon Christensen 15:03

Right. And also to bring you along for the ride, Rich, and anybody that’s listening. Part of what I’m getting at is that we had relied heavily on platforms as a service before Docker, so we were using things like Heroku or Amazon’s Elastic Beanstalk to do our deployments. And they do take care of a lot of things that you need to think about when you use Docker and ECS and Amazon outside of the platform-as-a-service arena. So they are doing stuff for you for free. And they don’t do it in a way that lets you control that stuff, they just do it. And so when you’re working with a team that doesn’t have experience building web applications at scale, and the web applications that you’re building are sort of guaranteed to not need scale, those still might be a good way to go. You still might need to throw together some software real quick and just get it online somewhere, without taking even the extra few hours to do anything other than deploy it and show it to people and say, “Does this look like where you want to go from here?” But once you have serious software that you know is going to scale, and that’s going to require lots of different services and lots of people working on it, then the platform-as-a-service model – where it’s completely opinionated how your software is deployed – starts to fall over a little bit. Every new application is an entirely new set of machines or an entirely new application in– like in Heroku, it’s an entirely new application that you have to manage, and they’re not connected to one another, you can’t do reports across them, you can’t look at your infrastructure all at once, and so many other problems. I mean, we can talk about the benefits of Docker in general, and that’s not what the point is.

Jon Christensen 17:11

The point is, coming from that to Docker feels a little daunting. And it is. Quite simply, it’s more. There’s more to it. There’s more work you have to do. And I think I, as the person who was running Kelsus, was used to not having to do a lot of work to get a piece of software in front of clients. But the reality was that right around the time Chris joined Kelsus, our clients were needing more. They were actually needing more than just a quick, almost toy application thrown up on a platform as a service. They were needing real applications with real operations, support, and infrastructure, that a lot of people worked on all together at once and that had different deployment schedules, etc. Before we got into the push back on Docker, Chris, you were about to talk about the pattern or the path that you’ve seen happen a few times. You want to talk about that a little bit?

Chris Hickman 18:17

Sure. Absolutely. Definitely, the first big issue that folks run into with Docker is just kind of this idea of what does it mean to be running inside of a container, and what are now the moving pieces as I’m developing things? So kind of instantly, right out of the gate, people end up running into the issue of, “Okay, I’m used to writing my code, I can run it, and there’s some support for hot reloading. And I’ll bring up my editor, change some lines of code and expect that to be kind of instantly updated and just work.” And that’s one of the first use case scenarios that, right out of the box, just doesn’t work for developers. So hot reloading is definitely not something that is going to be straightforward, right out of the box. You can achieve those kinds of setups, but there is some setup that you have to do to achieve that, and then also you have to realize what it is that you’re doing and some of the pros and cons of doing that. But for me, it’s definitely been one of the hardest things for anyone that’s adopting Docker to understand, it’s just like containers. And what does that really mean? It’s not just running a program or running your software on a machine anymore. You run it inside of a container, and the container is very much this isolated bubble that really is, for the most part, isolated from everything else. It’s completely disparate. And that is just this huge mental shift for developers, and that causes a lot of the initial struggle with adopting it.
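The volume-mount setup Chris alludes to, ‘punching a hole’ in the container so hot reloading works again, looks roughly like this. The image name my-web-app and the paths are hypothetical, not from the episode:

```shell
# Bind-mount the host source directory over the code baked into the image.
# Edits made on the host are immediately visible inside the container,
# so a framework's hot-reload watcher picks them up as usual.
# "my-web-app", port 3000, and the /app/src path are hypothetical.
docker run --rm \
  -p 3000:3000 \
  -v "$(pwd)/src":/app/src \
  my-web-app
```

The trade-off is that the running container no longer matches the built image exactly while you develop, which is part of the pros and cons Chris mentions.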

Jon Christensen 20:22

I think that one of the first things that I remember along those lines was, “Okay. I put my thing in a container, I ran it and– oh, wait a minute, now my container’s not running anymore. What happened? Why is it not running anymore? I thought I started it.” And it’s because the process inside the container had some kind of error, and it caused the container to shut down. And the logs are in the shut-down container, so I can’t even see what happened and I have no idea how to troubleshoot that scenario. And I think that it’s pretty common for people’s first experience with Docker, wouldn’t you say?

Chris Hickman 21:01

Absolutely. Because again, it kind of boils back down to the fact that you are running inside this container that’s completely isolated from the rest of it. So unless you do something different in your setup to kind of punch a hole into that barrier, it’s not going to happen. And so, yeah, you have something that goes wrong inside your code that’s running inside that container. It has an exception, it terminates, and that’s it. And so you can get access to that stuff, but you have to do some extra steps, right? So it’s like you have to go look and say, “Okay. What was that container that was running?” And there’s a special flag when kind of looking at Docker processes and containers. And then from that, you have to say, “Okay. I’ve got to go use this logs command in Docker,” and then that will allow me to see a little bit more of what happened. What went to standard out, standard error inside of the container, to kind of understand why did my software fail. So as a developer that’s new to this, it’s like there’s a lot of screaming and yelling and frustration, right? Because it’s like, “Man, you’ve made my life a lot more difficult. I can’t believe how crazy this is.”

Chris Hickman 22:20

But again, it’s an adjustment, right? So it’s something that’s new and, yeah, there’s a bit more hoops you have to jump through there because of that barrier of the container. But you have to keep in mind that that’s one of the great things about Docker, is that it does create that barrier for you. That’s what enables all these benefits of Docker. So you kind of have to take the good with the bad, and just learn to adjust, right? There’s many, many strategies to mitigate some of these downsides and these typical problem areas that developers have. And you can make your process a lot smoother, and that’s what happens, right? A part of that adoption period is that you start learning. It’s like, “Okay. Yeah. I can do my logs a little bit differently. Maybe I’ll write them to the file system and allow that to be shared with my hosts.” So now I actually can see what’s going on inside the container without being inside the container, and understanding how that works and why that might be okay for me to do, type thing.
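The two commands Chris is describing here, finding the dead container and then reading its output, map to roughly the following. The container ID shown by the first command is what you pass to the second:

```shell
# List all containers, including ones that have exited (the -a flag),
# together with their exit codes in the STATUS column.
docker ps -a --filter status=exited

# Dump whatever the app wrote to stdout/stderr before it died.
# Replace <container-id> with an ID from the listing above.
docker logs <container-id>
```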

Jon Christensen 23:30

I want to put a placeholder on that, for us to talk about that a little bit later. The things that people learn when they’re overcoming these sort of painful new boundaries. But before we talk about that, I want to go on– okay. So you talked about phase one, was understanding containers. So then what happens after that? What’s phase two, or what’s the next step in the evolution?

Chris Hickman 23:57

Right. And so I think they’re not necessarily these really nice discrete phases, right? Because what we’re calling phase one, that kind of like really understanding and grokking this idea of containers, and that your software is running inside this protected box that really is not meant to be seen from the outside. That is the overall encompassing thing that you have to get over, and just really, really kind of get your head around that type thing. I don’t know if this is going back a little bit, but there’s a movie called The Abyss by James Cameron. And in that movie, some undersea– I believe they’re oil well drillers, but there’s a Navy SEALs mission that comes along with them, because the Navy SEALs have some top secret project thing that they want to do as well. So they go down with them. And at one point in the movie, a Navy SEAL shows them this new technology they have, where he takes the pet rat of one of the oil drilling employees and gives him this fluid. He puts him in this container of fluid, and the guy is kind of freaking out because this is his pet and his pet’s going to drown. And the rat is submerged in this fluid and you can see it’s just struggling, it’s trying to get free. It looks like it’s suffocating, but the Navy SEAL guy is saying, “No, no, this is like a special fluid. It’s actually highly oxygenated, and the rat will be able to breathe. It just has to stop resisting it and just let it happen normally.” And so gradually there’s less and less resistance from the rat, until finally it’s breathing normally. So with technology like this, that scene always comes back to me. It kind of feels like this breathing fluid analogy, where you kind of go through this. It’s a very radically different way of thinking of things. It’s so different that there’s a lot of resistance and struggling with it, but gradually you get more and more comfortable with it until it’s like, “Okay, I got it. I understand this. This feels normal.”

Jon Christensen 26:27

Right. Okay. Gotcha. So if we can’t talk about it in phases, I still want to talk about the whole process a little bit. Because I think where we’ve left off is, developers are getting their heads around what this is. And they’ve maybe been kicking and screaming a little bit, because it’s changed their lives a little bit. So how did they start over– beyond the analogy that you gave where it’s like, “Well, just breathe,” what is kind of the next thing that happens? What causes them to start realizing that they can breathe? What is the benefit that they start seeing? Or is it– what happens next?

Chris Hickman 27:07

Right. So part of it is definitely just the whole experience of actually just doing it. So like you said, “Hey, I just Dockerized my software, I ran it inside a container, it immediately dies. I don’t know what’s going on.” So now I’ve got to figure out, “Okay, how do I debug this? How do I figure out what happened?” So now you start learning about, “Oh, this is how I can go list the containers that ran inside Docker,” and, “Oh, here’s this flag that shows not only active containers, but containers that are no longer active, along with their exit codes. And this is how I can see what happened inside that container with this other Docker command.” You start using techniques like, “Oh, maybe I can– I’m still having problems figuring out what’s going on. So I’m going to make it so that my container doesn’t exit. I’m going to put like a sleep command in there that will keep that container running, so it doesn’t prematurely exit.” And then, as we talked about before, you can now shell into your container as if it were a whole other machine. And so you can now SSH into your Docker container, and peek around inside the live container. And maybe it turns out files that you thought were getting written to a certain place aren’t getting written there and they’re missing. Or maybe the way that your configuration was set up is not the way that you thought it was, and you can start debugging that way. And so you’re learning more about how that containerization works, you’re learning some techniques to help you work with it. That leads to experimenting further and learning more about other various problems that you may be having, like, “Okay, hot reloading. How can I do that?” And so people will go and do some Googling, look at some blog articles and say, “Okay, now I understand. There’s ways to do this,” and then experiment with that. So I think it’s this process of just getting in there, rolling up your sleeves, doing the work. And as you kind of run into these various stumbling blocks, you get over each one of them individually, until the stumbling blocks that you come across become fewer and fewer and fewer.
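The keep-it-alive trick Chris describes can be sketched like this. The image name my-app is hypothetical, and what the episode calls ‘SSH into the container’ is usually done with docker exec, no SSH daemon required:

```shell
# Override the entrypoint with a long sleep so the container stays up
# instead of crashing on startup ("my-app" is a hypothetical image).
docker run -d --name debug-me --entrypoint sleep my-app 3600

# Open an interactive shell inside the live container to poke around:
# check files, config, environment variables.
docker exec -it debug-me sh

# Clean up when done.
docker rm -f debug-me
```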

Jon Christensen 29:30

Right. I think you’re hitting on something that, to me, is one of the biggest benefits that I’ve seen from Kelsus switching to Docker. And it’s the one I tell other business people about most, especially business people that run software development teams, or that have software development teams inside their businesses. Because it’s one thing to save on infrastructure costs, and it’s another thing to believe that you can get benefits like not having to shut down your application to upgrade it, things like that. Those are all great. But especially for companies that don’t operate at massive scale, I think this one thing is just– it’s just mind-blowing and it’s bananas how good it is. And it’s that, up until now– and this may just be me and it may just be my bias– I feel like I’ve seen that software developers end up in kind of two camps. There’s the camp that writes code that makes the application follow the business rules. And there’s the camp that knows what’s really going on inside that application, knows how to run it. And back in the days of Java, it was those people that could debug a classpath issue. In the days of Ruby on Rails, it was the people that could figure out how to get through some sort of gem compilation problem or some sort of dependency problem. They understood what was going on in the machine, and there was kind of a divide between those two. And I think more people fall into the first camp where, as long as everything goes well, they can write code that makes stuff happen on the screen that reacts to what users are trying to do. But then when things get difficult, they need to go find the senior person, or the person in that latter camp who actually knows what’s going on in the machine.

Jon Christensen 31:30

And I would say Docker forces the people in that “I can only write code” camp to deal with machines. They can’t get around it. There’s no, “Let me just look at the logs and see what’s wrong. Everything else is just me looking up at this other screen and typing in the commands that it tells me to.” They have to actually understand that ports are getting mapped, and they have to understand that a machine is starting and stopping when a process is running. And like I said, they may have to shell into that container and know more about it than they did before. And the thing is, if you can learn how to code and get good at that, there’s nothing that prevents you from learning about how machines work and how your code runs on machines. I think it happens almost out of a certain kind of laziness or lack of interest. And Docker’s like, “Nope, you don’t get to be lazy anymore. You don’t get to not be interested in this anymore. You have to learn this.” And so then, all of a sudden, your whole team is better. They can all do stuff that they didn’t used to be able to do. And I could be overselling this, but is that what you’ve seen as well, Chris?
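
The port mapping and start/stop life cycle Jon mentions show up directly in everyday commands. For example (ports and names here are illustrative):

```shell
# Map host port 8080 to container port 3000; the container is a process with a life cycle
docker run -d --name web -p 8080:3000 my-web-app

# Inspect which host ports the container's ports are actually bound to
docker port web

# Stopping the container stops the process; the "machine" goes away with it
docker stop web
```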

Chris Hickman 32:53

Yeah. Absolutely. I mean, Docker containers in general – you’re virtualizing the machine, right? So you have various subsystems that are being virtualized, whether it be networking, or I/O devices, or storage. All these things come to the forefront now with Docker, right? Before, if you’re just writing in a dynamic language such as Python, or JavaScript, or whatnot, there’s no real necessity. As you were talking about, you can get by with writing software without really understanding how a computer works. You really can, which is kind of an unfortunate thing. I think, as you alluded to, there are certain folks out there who are okay with that, not having to know about that stuff. They can write some code. And there’s going to be a lot of stuff that’s mysterious to them, but they can get by with their job without really knowing that stuff. With Docker, sorry, it’s not going to happen. You’re going to have to know about networking. You’re going to have to know about storage. You’re going to have to start thinking about different types of file systems. That’s just kind of the way it is. Because of that, it does make you a better developer, because now you understand more about how the computer actually works. Those kinds of principles and bits of knowledge are going to be useful for other things. So someone that maybe hadn’t really done anything with cloud-type things, now maybe things like DNS start making a lot more sense to them, so they can go use something like Route 53 and understand it. Definitely, [inaudible] into a machine over a VPN or something like that becomes much more approachable and whatnot. So, absolutely. Makes sense.

Jon Christensen 35:14

Exactly. Exactly. So when I was upset that a deployment was taking a week that should have taken a couple of hours, maybe a better response would have been, “Well, guess what? The developers that are taking a week, it’s taking this long because they’re learning DNS, they’re learning file systems, they’re learning networking. They’re learning all of these things that they just didn’t know before now, and that’s why it’s going to take a week or longer to get them up to speed.” And, yeah, then now they do. And it’s like, “Oh my goodness, the team is so much more capable.” Okay. So in the process of, “Okay. We’re taking on Docker. We’re learning this. We’re doing it,” we’ve sort of been focused for this whole first part of the conversation on what happens to developers inside the organization that decides to take this on, and that is kind of the hardest place. It’s the place where there’s the most push back and the most pain, but I think it’s not everything. Hopefully, you’re never stuck 100% in development. Eventually, put some software in staging environments and let other people see it, or you test it in some testing environment, or you put it in production and give it to the world. So let’s talk about some of the things that happen after development, and what it’s like for those things.

Chris Hickman 36:47

Right. So you’re specifically talking about like–?

Jon Christensen 36:51

We’re out of development and [crosstalk]. Exactly.

Chris Hickman 36:56

So this is definitely one of the huge features of Docker: when you Dockerize your code, you’re basically packaging it up as an image. That image is basically a description of how to go start one of these containers with your software, and the configuration, and the operating system, and all the support it needs in order to run. And one of the big promises of Docker is that if it runs inside Docker on your machine, it’s going to run inside Docker on someone else’s machine. It’s fully self-describing. There really shouldn’t be anything else that someone needs to do. So having this interoperability, this ease of passing these things around so that anyone can come up to speed with it, is a huge benefit. Even just working locally, just being able to have another developer, someone else on your team, be able to sync up with your code, have that Docker image be built, and then run the code without really having to do anything, is a pretty huge win. In the past, I’ve been on teams where – before Docker was around – it might take you two or three days to get your development environment up and running in order for you to actually run the software that you’re meant to be building. You have to go install compilers and various libraries, and sometimes device drivers, and you have to get these dependencies and you have to have these credentials, and there will be a big long Wiki post telling you how to do all this stuff. With Docker, that’s really done as part of the Dockerization process. And after that, no one else has to deal with it. So you get the benefit of passing these images around your teammates and [inaudible]. But then also, when you want to deploy to other various machines – whether they be on-premise or in the cloud, in your staging environments or production environments – kind of the same principle applies, right? You’ve built up this pre-packaged image of your software. 
And as long as you have Docker support running in the environment that you wanted to be deployed in, you have utmost confidence that you’re going to be good to go and it’s literally just a matter of starting a new container based on that image.
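
The “image as a description” idea can be made concrete with a minimal Dockerfile. This is an illustrative sketch assuming a Node.js service, not the actual setup from the episode:

```dockerfile
# Operating system + runtime the container carries with it
FROM node:18-alpine

# Install dependencies and copy in the application code
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Port the service listens on, and the command that starts it
EXPOSE 3000
CMD ["node", "server.js"]
```

Anyone with Docker installed can `docker build` this and get the same environment, which is the self-describing property Chris mentions.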

Jon Christensen 39:28

So I’m going to stop you before we talk about– before we get too much into bringing it up on staging our production. There was something that you said that I kind of got stuck on. And I was wondering if it made sense to you, Rich. Chris, you said, “It’s fully self-describing.” And as soon as you said that, I was like, “Oh, that sounds difficult to understand.” Rich, did you know what he meant when he said that it was fully self-describing?

Rich Staats 39:54

Yeah. What I got out of it is that, basically, it reads that config file and doesn’t really need to do anything else internally. Is that all right?

Jon Christensen 40:06

Yeah. That is pretty much right. So the configuration file says, “Here’s everything I need in order to make myself, and I can make myself anywhere. I don’t care if I’m making myself in saltwater or freshwater, or in the mountains or in the valley. I can make myself.” Okay. Cool. And that is exactly the point that Chris was about to make. So I’ve made myself in a development environment on somebody’s MacBook Pro, and now– continue from there, Chris.

Chris Hickman 40:40

Sure. So, yeah. In a typical workflow, developers are working on software, doing their development locally on their machines, and they go through their development process testing their application. Now they’re like, “Okay, I’m ready. I want to have this be deployed.” Deployment becomes so much simpler now with Docker. Because, again, what you’re really deploying is literally just that image. You don’t have to go worry about, “Okay, now I have to make sure I have the right operating system running on my instance in the cloud, or I need to make sure that it has this compiler or that compiler installed on it, or it has this library or that library.” All that stuff goes out the window. The only thing you need on that host machine in your cloud environment is essentially Docker. So as long as Docker is running on that, you can very easily script it. It can be completely automated if you want; just say, “Hey, Docker, run this image.” And containers, because the surface area that they virtualize is smaller than a full virtual machine, can start up very, very quickly. So in the past, we might have been used to deployments taking minutes, 10 minutes sometimes, depending on what you were doing. Now with Docker, deployments might take seconds – 5 seconds, 10 seconds, that type of thing. So much, much faster.
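
That deployment flow might look like this in practice (the registry and image names are hypothetical):

```shell
# On the developer's machine: build the image and push it to a registry
docker build -t registry.example.com/my-service:1.2.0 .
docker push registry.example.com/my-service:1.2.0

# On any host where the Docker daemon is running – the only real prerequisite:
docker pull registry.example.com/my-service:1.2.0
docker run -d -p 80:3000 registry.example.com/my-service:1.2.0
```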

Jon Christensen 42:23

So the first company that you’re working with as you were going through this process, were you spinning up EC2 instances on AWS? You’re spinning up real sort of physical machines. They’re actually virtual machines, I guess. But for all intents and purposes, they’re full machines. Were you spinning one of those up, logging into it, and then putting a container on it manually, and then running that container? And that was your staging or your production environment?

Chris Hickman 42:55

Yeah, for all intents and purposes. So it was pretty rudimentary. That was actually one of the things, just throwing it out there: take Docker out of the equation, and you still had these dedicated machines not behind load balancers, using Elastic IPs to run the actual production software. So definitely rudimentary, and a lot of work there to get everything to a really production-worthy environment. So, yeah, starting off, these were just dedicated EC2s for each one of the services. The Docker daemon had been hand-installed and was running there, so deployment was literally: stop the running container, pull the new image, and then start a container with the new image. And so we quickly got away from that, saying, “Hey, this is really not very scalable, it’s not very available. There’s a lot of room for improvement here.”
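
The hand-rolled deployment Chris describes boils down to roughly this sequence on each EC2 instance (the service and registry names are invented for the sketch):

```shell
# Stop and remove the running container
docker stop my-service && docker rm my-service

# Pull the new image and start a replacement container from it
docker pull registry.example.com/my-service:latest
docker run -d --name my-service -p 80:3000 registry.example.com/my-service:latest
```

Note there is a brief outage between the stop and the new start – one reason he calls this setup “not very available.”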

Jon Christensen 44:14

How do you think you ended up there? Because it sort of seems like, for people that have a lot of experience developing and deploying software, was it an accident that you ended up in that situation where you were sort of hand-managing containers? Or was it, “We know this is going to be painful for a little bit. We know this is not the right way to do it, but at least it’s a step in the right direction towards eventual automated container management.”

Chris Hickman 44:40

So it’s the life cycle of adoption, and it’s also just where the technology was too, right? So that process was kind of solidified in mid-2015. And at that time, if you asked for a show of hands, like who knows about Docker, and you had a hundred people in the room, there were not going to be many people raising their hands that even knew what Docker was, right, let alone what it does and why you would use it. So this was still pretty early days for Docker, and a lot of the tooling around it – that we have, that we enjoy today – didn’t exist back then. So that was part of it. Same thing with Amazon. So Amazon now has its orchestration environment for Docker called EC2 Container Service, ECS, which is a great, great system. It’s really easy to run your Docker images across a fleet of EC2 nodes. In mid-2015, that may not have existed. Or if it did, it was just getting started, right? And it was pretty cumbersome.

Chris Hickman 45:55

So there weren’t a lot of great options, right? So that’s part of the reason for that. The other part of the reason – and this is super natural for so many companies – is that the company was very early on. Like I said, it was a team of four developers; it had recently done its B round and was really starting to ramp up. But it was early days, so it’s a small early-stage startup with a small engineering team. It’s all about the MVP and pragmatism, and getting stuff done. So that kind of stuff gets rolled out in an incremental fashion. You build your sophistication incrementally. It’s not boiling the ocean from day one. It’s instead just making progress each day. And then as you get bigger and get more people on board, then you can start doing the things that you know you want to do.

Jon Christensen 46:53

Sure. Okay. That makes sense. I can imagine that it might have been a little difficult for people that have had experience, say, with platforms as a service to deal with the fact that they’re touching machines and hand-managing containers. When they may have just said, “git up,” or something – that’s not quite the right command. But, “git push to this production repository,” and then everything kind of automatically happens after that with their platform as a service.

Chris Hickman 47:31

Yeah. I mean, absolutely. These platforms as a service, they were great. They fulfilled a need where it was basically– it was hands off for the developer, right? You really didn’t have to know anything about the cloud, right? You were in the cloud for free. All you had to do was just– it was magic, right? It was mystery, and you really didn’t even have to know anything about that magic and mystery. You really didn’t care, right? All that you knew is after 10 minutes, your application was up and running, right? And it’s like, “Okay. Cool. I’m a cloud expert.” It’s like, “No, not really.”

Jon Christensen 48:09

And I think 2015 was kind of the peak of platform as a service. And because of that, I think maybe there are a lot of people out there that are coming to Docker and coming to containers wanting that. And I think that there’s just a glut of companies out there right now, quite literally trying to provide exactly that, so that you can be Dockerized and containerized in name only. One of the ones that occurs to me off the top of my head is Containership: “Write your code, tell us where your code is and we will put it in containers, and we’ll put it on– click the checkbox for which cloud you want it on.” Thoughts about some of those? I guess here’s a question. Would it be better to do that, or stay in a platform as a service, if going all in and learning Docker, and using something more sophisticated like Kubernetes or ECS, is not an option?

Chris Hickman 49:22

Yeah. I mean, I think all these solutions exist out there because there are various slices of the market where they make sense. So I think it really boils down to what your level of sophistication is, what your capabilities are. Also, what is important to you as a business? It may be that as a business, technology is just really not too terribly important to you; it’s just something that you need in order to have your business running. And you may not have a huge tech team, or you may not care about that, or you may not have huge scalability requirements. And it may be okay for you to overpay on a platform-as-a-service option, because going with a better – in air quotes – “technical solution” that may cost half as much may save you $500 a month. But it might cost you $100,000 to save that 500 bucks a month, right, to build out the technology. And that may not be your core competency, right? That may not make you better in the marketplace for your business to compete. So I think it just kind of depends on what your environment is, what your goals are, what you need to do. But there are always going to be spots in there, because the technology landscape is changing so quickly, it’s so dizzying. With just Amazon alone, it’s kind of crazy how many new services they’re launching – I think they’re up to like 130 services on their console page now, and it was like maybe 10 four years ago. And so they’re literally launching brand new services and features every month, and just keeping up with the innovation that’s happening is very difficult even for the experts. So you extend that out to the entire marketplace in general, and everyone that needs to have something running in the cloud, and it’s super daunting.

Chris Hickman 51:32

So there’s huge opportunities out there for companies to mitigate that rapid pace of innovation, to make it digestible and consumable by other folks. Right? So there’s going to be opportunities there, for sure. And whether that’s platforms as a service or integrations or whatnot, and that it’s going to be there.

Jon Christensen 51:58

Right. As you were talking about all the things that have been added to AWS, you reminded me of Kelsus’s adoption of Docker and containers, and it’s really been very much married to our adoption of AWS as a whole. So I think we started with just using a few AWS services here and there, “Oh, let’s grab an EC2 machine to do something with,” or, “Oh, RDS, their database management system is so good, let’s just use that for all of our databases from now on.” And then after EC2 and RDS, then we looked at Elastic Beanstalk, and it was a great thing for a while. But now with containerization in Docker and the proliferation of AWS services, of that list of 100 and some, I think we must be using 30 or 40. And so it’s really married to AWS. And I imagine for companies that are tied into the Google Cloud or the Microsoft Cloud, they’re going through the same thing: taking on containerization really means getting in deep with a cloud provider and all the services that it offers. And it may not make much sense to try to be cloud-agnostic.

Chris Hickman 53:23

Yeah. This has kind of been an age-old issue that teams grapple with, and they have for the last five years, right? It’s like, “Do you lock yourself in to your cloud vendor,” right, by integrating with the stuff that’s specific just to them? And with a provider like Amazon, the undisputed leader in this space of cloud services – the amount of research and development they’re throwing at this, the innovation that they’re developing, the ease of use you get, the efficiency of scale, just the better cost dynamics by using and integrating with these services – it makes it much harder to justify keeping yourself agnostic, right? To say, “Oh, I’m going to pick up my stuff and go somewhere else. I’m going to go to Azure, I’m going to go to Google Cloud.” So I think at some point, while you’re using a minimal amount of cloud services, you can have abstraction layers on them, so that you can say, “Yeah, I’m portable, and I can pick my stuff up and go somewhere else.” But there’s a tipping point where you kind of just say, “You know what? I’m all in now, and this is now my provider.” And honestly, from a business perspective, that’s what all these cloud providers are doing, right? That’s why they’re building these new services, that’s why each one of these services builds on the previous services that have been developed, right? The synergy, the integration with them – they become more powerful the more that you use them. They’re tendrils. They don’t want you to leave; they want you to be locked in, right?

Jon Christensen 55:16

Yeah. Exactly.

Chris Hickman 55:17

So it’s creating benefits to users, because you’re getting a much more powerful platform to work on. And it makes you much more capable, and it’s more cost-effective and just easier to work with, but it’s causing some pretty severe lock-in for you as well.

Jon Christensen 55:41

Right. And I would say I think the advice that I would give to people that are looking at moving into Docker and containerization is pick a cloud, just pick one. If you tried not to, it will make the transition more difficult.

Jon Christensen 55:56

And then once you do pick one, you’ll find that you’ll be using more services within that cloud provider than maybe you had before you decided to do Docker and containerization. So it’s natural– like unfortunately, the decision to go with a cloud and the decision to go with Docker and containerization is going to push you over that tipping point and get you locked into whichever cloud you picked.

Chris Hickman 56:22

Yeah. And the truth is that, usually – especially if you’re at the kind of workloads and levels of scale where you’re adopting stuff like that – you’re not at the level where it even makes sense, from a cost standpoint, to think about, “Oh, should I be running this myself on-prem?” Once you actually get to that point – if you get successful enough and you have enough traffic where it’s like, “Okay, now I’m really overspending by using a cloud provider, and I want to go look at doing on-prem” – at that point, great, because you’re so successful, you’ve obviously got a lot of resources and you can go do that, right? You can go spend that R&D to say, “Okay, now we’re going to get off of this.” And so instead of using ECS, “We’re going to go to Kubernetes and that will be our orchestrator, and we’ll run it ourselves inside our own data center on bare metal.” But at that point, if your bill is $300,000 a month on Amazon, then that’s a good problem to have.

Jon Christensen 57:30

Yes. Yes. That can be our conclusion for today. So at the end of the Docker and containerization choice life cycle lies a good problem, where you’re spending $300,000 a month on your AWS system and you decide to move it on-prem. But the bigger conclusion, I think, is that as we’ve seen, it can be painful, but the pain is the pain that is associated with learning, and not necessarily pain that’s just for pain’s sake.

Chris Hickman 58:08

Absolutely. And I will say the level of pain today to learn Docker is a lot less than the level of pain three years ago. It’s so much easier today, right? It’s gotten so much more sophisticated; there’s so much more tooling around it, and there’s so much more of an ecosystem and information. So it’s even easier to adopt it today. So even though there are still challenges and there’s going to be that friction, it’s a lot less so.

Jon Christensen 58:35

Right on. Well, let’s wrap it up. Thanks, Chris. Thanks, Rich.

Rich Staats 58:41

Thanks, guys.

Chris Hickman 58:42

Yeah. Thank you.
