
59. Node.js Rocks in Docker for Dev and Ops (A DockerCon 2019 Recap)

Jon Christensen and Chris Hickman of Kelsus and Rich Staats of Secret Stache discuss Bret Fisher's DockerCon 2019 session titled Node.js Rocks in Docker for Dev and Ops.

Some of the highlights of the show include:

  • New this Year: Open source summits
  • Where DockerCon 2019 lands on the scale: ‘I’ll never go back’ to ‘Best conference ever’
  • Lack of new innovation makes attending every year feel less necessary
  • Is Docker getting old and long in the tooth? Or, very mature and robust technology?
  • Sticker Theme: Do things to get stickers; go play with a robot, fill out a survey, etc.
  • Keynotes lacked energy and followed the same look and feel as previous years
  • Brown is the new Green: Mixture of legacy and greenfield work
  • Same players, some growth, still going strong; but not at DockerCon
  • Docker Captain's presentation provided best practices for managing Node and JavaScript projects when developing, testing, and operating containers
  • Stick with even-numbered releases of Node for long-term support (LTS) of base image
  • Caching issue: an image tagged 'latest' that's cached on your local host and hasn't been reaped can leave you on a stale version; don't use 'latest,' specify the version
  • The less software you have, the fewer doorways into your code
  • Add node_modules to your .dockerignore file
  • Running as Root: All the privilege in the world, but only use what you need to avoid issues

Links and Resources

DockerCon 2019

Bret Fisher’s Node.js Rocks in Docker for Dev and Ops

re:Invent

Docker

Node.js

Debian Jessie

Debian Stretch

Debian Wheezy

Nodemon

Slim

Alpine

Microsoft Azure

apt-get

Yum

Kelsus

Secret Stache Media

Rich: In Episode 59 of Mobycast, Jon and Chris break down Bret Fisher’s DockerCon 2019 session which was titled, Node.js Rocks in Docker for Dev and Ops. Welcome to Mobycast, a weekly conversation about cloud-native development, AWS, and building distributed systems. Let’s jump right in.

Jon: Welcome, Chris and Rich. It’s another episode of Mobycast.

Rich: Hey.

Chris: Hey guys. Good to be back.

Jon: Hey, Chris. I haven't talked to you since you went to DockerCon and I've been curious to find out how it went. […] fresh back today, right?

Chris: Yes. I got back late last night from beautiful San Francisco. This was DockerCon IV for me, fourth year in a row. It’s been interesting to see how it’s evolved over the years and changed and what not, but same […] format, two full days of content and sessions, and then the last day, the third day was a repeat of some of the more popular breakout sessions. New this year were the open source summits. It was definitely an interesting thing and I wasn’t able to participate in any of those or attend them just because of the timing, but definitely it’s a good addition to the conference.

Jon: Cool. Let’s just get right to it. On a scale of ‘I will never go back’ to ‘best conference ever,’ where does this one land?

Chris: That's a tough one, because from a personal standpoint, it's probably my last DockerCon. Just because after four, I think it's kind of reached the saturation point. There's not enough new innovation and not enough new things happening with Docker itself. The return on learnings there for someone that's more experienced with it becomes less and less each year.

I think it's actually true across the board for DockerCon. Every year, they always ask, "Hey, raise your hand if this is your first DockerCon," and it's well over half the audience […] their hand. I do think Docker and DockerCon, in particular, is definitely one of those conferences where the need to go every year becomes less and less, as opposed to something like re:Invent where, as we've talked about, the pace of innovation is just relentless and there's so much going on there, there's so much to learn. It's almost like you have to go each time just to try to keep up with what's going on. Like I said, I think this was probably my last DockerCon, but I'm glad I went.

Jon: Docker getting old and long in the tooth?

Chris: How about a very mature and robust technology?

Jon: There we go. I think it might be worth you sharing a couple of your takeaways from how the keynotes were, then just maybe a couple of sessions, and then maybe we can dive in. What we want to do with this episode is dive into one of the better sessions. I think it's useful for our audience and for any people that also aren't developers of other companies. Maybe a one-minute overview on a few sessions and then we'll start diving into one that we like.

Chris: All right. In general, something that really stuck out is that this felt much smaller this year. In previous years, last year and the year before that in Austin, the numbers were about 5000 attendees. I was seeing numbers around 3000 in some of the emails that the Docker folks were sending out. That's a big difference, to go from 5000 to 3000. It definitely felt smaller.

That has its pluses and advantages, too. I mean at re:Invent, there is so much FOMO going on, like, "Am I going to get into any sessions?" and […] battle people out for, versus DockerCon is just chill. Docker and chill, like you're not going to have any problem getting into a session. You don't even need to pre-register or anything. It's just so simple. You just walk in and no big deal.

That was the big difference. We talked about this with last year's session, the changeover from then and […] and now to Steve Singh running Docker. I think the Docker folks […] more on what their business model is and really becoming a business, not just a technology. You're seeing more and more of that focus, and it has more and more of the feel that this is about the enterprise, it's about business, it's about making money.

Jon: It feels so yucky. It goes from a two-day celebration of innovation to a two-day infomercial.

Chris: It's definitely a different vibe, for sure, absolutely. It is what it is. That's just what they need right now and that's what they're focused on. It is what it is.

I thought it was interesting that for whatever reason this year, the theme felt like it was all about stickers. Stickers are fine. I've got more than a few on my laptop, but this was like everything. The big thing during the Con was just doing things to get stickers, like go play around with one of the robots and you get a sticker, you fill out the survey for the conference and you get a sticker, you go do something special at the conference party and you get a sticker, and they give you a sticker book.

It was strange. I mean, in previous years when I was in Seattle, for the keynote they put an umbrella with the Docker logo on it on every chair. This year, you got this blank sticker book. It literally was just a little notebook; it wasn't even much of a sticker book. It's weird. I definitely would have preferred the umbrella as opposed to the notebook, but it is what it is.

The keynotes this year kind of followed the same look and feel again. The energy, the vibe wasn't there. There was a big chunk […] with a roundtable of Docker customers. They had five different representatives from five companies up on the stage and spent about 20 minutes doing a roundtable discussion Q&A. I would have preferred they did something else, which is the nicest way to say it.

Jon: You get that many C-level people on a stage together in a panel and it’s just going to be a game of saying nothing but seeming important.

Chris: What they should have done is they should have passed out beers and we should have played buzzword bingo and drink. That would have been pretty interesting. This was the first time ever I heard the term ‘brownfield.’ We’ve all heard of greenfield. Brownfield, have you heard of this?

Jon: No. Not outside of the sewage connotation.

Chris: I've never heard of it before. I heard other people, speakers there, mention that they hadn't heard of it before either. Apparently it's a new term. Basically, it's what you do when you have a mixture of legacy and greenfield work. That's brownfield. Decomposing your monolith, where you have your legacy app and then you start building up some […] functionality in the form of new microservices around it, that's brownfield. So, something learned there. Maybe a final takeaway is that I was surprised at just how small the expo floor was. It was really small.

Jon: Interesting. That means companies didn't want to spend the money to be there. Does that mean there aren't that many companies, or are there tons of companies that just didn't feel the spend was worth it because their potential customers weren't there?

Chris: Yeah and it’s probably all of that and more. I was surprised. A lot of the companies that I remember being there last year were not here this year.

Jon: I heard that there are fewer companies. You think about why companies are out there building stuff that's really all about the container. The number of new companies in the container landscape probably peaked last year or maybe even two years ago, and then it started getting consolidated and lots died off, too.

Chris: Part of it is that the market still has the same players and maybe even some growth. In years past there have been a lot of vendors there in the storage driver space, and obviously in the monitoring, tracing, and diagnostics area, hybrid and OS vendors, and the like. All of those folks are still out there and going strong. They just weren't there this year. It was surprising.

Jon: All right. I actually don't think we should spend too much time talking about talks that we're not going to talk about. Let's just say that it isn't really exactly the first time I've talked to you since you went to DockerCon, and you went to a lot of talks where you're never going to get that time back in your life. But there were a couple of good ones, and we like to break down talks here on Mobycast and really get into them, talk about them, and go a little deeper than you can when you're just sitting there in the moment. Let's do that with one of your favorite talks from this past show.

Chris: Yes. I did attend a session from Bret Fisher, who's a long-time Docker Captain and just very big in the Docker […] a long-time part of the Docker community. He does a lot of training and online courses now, and blogging, and like I said, he's a Docker Captain, just really active in the community.

He gave a talk all about Node.js and using it in Docker. It's called Node.js Rocks in Docker for Dev and Ops. The high-level description was best practices for managing Node and JavaScript projects when developing, testing, and operating containers. The session covered local development of Node and JavaScript-specific projects, and also how to optimize Docker Desktop and Docker Compose configs for the best of both worlds with JavaScript and Docker.

Here at Kelsus, where we're a Node.js shop and now […] a big Docker shop, this looked like a good session to go to.

Jon: Without knowing anything about the talk, just the name of it, Node.js Rocks in Docker for Dev and Ops, it does kind of look like a perfect marriage. Node makes these little servers that die if any error happens, so having an infrastructural way of spinning them back up and treating them as cattle seems perfect for the exact types of solutions that it provides.

Chris: Absolutely. This is a very good, relevant, practical talk and it was really just a hit list of some of the tips and tricks. Some of these are somewhat known and there were some more detailed nuanced gems as well. So pretty good session.

Maybe just as an overall outline, this covered various topics around how to do Node in a Docker environment. Some of the high-level topics that were covered were: what are some of the best practices for Dockerfiles for Node, how do we do process management in containers with Node, and healthy shutdowns, like how do you actually properly shut down Node.

That's actually a very big gotcha. Then it went into a bit about things like multi-stage Dockerfiles and how you can use those for various different environments. It also got into some of the things we all should be doing, like security scanning and hardening. Then it wrapped up with some practical tips for dealing with the classic node_modules problem with npm: when that gets built, what platform is it going to be built for, and making sure you don't run into problems, especially if you have volume mount shares. You can more easily do things like hard rebuilds for debugging and what not, so make sure your node_modules doesn't get bit by that.

Jon: […] that particular problem. We’ll get there when we get there […]. Everyone listening, this is definitely going to be a two-parter.

Chris: Absolutely. Then a little bit about health checks and how those fit into the whole scenario. That’s the general outline.

Jon: So, it starts with the Dockerfile and the best practices around that for Node.

Chris: Yes, so let's go into that. The first thing that Bret touched on was the base image: what base image do you use? Stick with the even-numbered major releases of Node. These are the ones that are going to be supported and have LTS. Node just recently released 12.

Jon: You said LTS. What does that stand for?

Chris: Long-term support. The odd ones won't have long-term support but the even ones will. I believe LTS guarantees something like 18 months or 2 years' worth of support. So stick to the even-numbered major releases of Node. I think we've definitely talked about this in the past: don't use the latest tag. Be very explicit about what version you want and […] the actual image. This is the tag on the actual Docker image that you're using, so don't use latest. Instead, just pick the version that you want and use that, because with latest, you don't necessarily know what you're going to get. It's going to change out from underneath you.

We've also talked about the issue with caching. If on your Docker host you have an image that's tagged latest cached locally, and it hasn't been reaped, that could mean you're going to be using an old version without really wanting to. So, just don't use latest. Specify the version that you want.
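In Dockerfile terms, the difference is just the tag on the FROM line. A minimal sketch, with the version number here purely as an illustration rather than anything from the talk:

    # Avoid this: "latest" can change out from under you, or a stale cached copy can linger on your host.
    # FROM node:latest

    # Prefer pinning an even-numbered (LTS) release explicitly:
    FROM node:10.16.0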

Jon: Cool. Makes sense.

Chris: This is actually something that has probably […]. If you're using Docker with Node, you've probably run into this: if you're using one of the versions that's based on the Jessie distribution, you probably got bit by the issue where you can no longer build your Docker image. You go to pull from the Debian repo for Jessie and it's not there anymore. This is because Jessie is no longer supported by Debian. They archived it. So basically, don't use Jessie. Instead, Stretch is the flavor that you should be looking for in the official Node images. Use Stretch, not Jessie. That will be a supported version.

Jon: When it comes to Node, supported by whom? Just various maintainers of Node, or a company?

Chris: This is specifically Debian. This is the Debian Linux distro. Jessie is a flavor of Debian. So, they’re just no longer supported.

Jon: Oh, okay.

Chris: Both Wheezy and Jessie have been archived. What it means is that you're not going to get any security patches. It's the same thing with Node and LTS support. If it's not an LTS version, if you're using one of those odd ones, then after six months it's no longer going to get updates for security CVEs that happen and what not. It's just not supported anymore.

Jon: Got it.

Chris: Use Stretch. He also talked a bit about whether you should use Alpine versus Slim. I think we've talked about this in the past as well, about using Alpine for your images. Alpine has a lot of great things about it, you'll hear a lot of people talk about Alpine, and he talked about the advantages a little bit. It's a very minimal base image. It's stripped down to just what you absolutely need from a software standpoint. It's good for two reasons. One is file size: if you're concerned about file size, Alpine is going to be very small. The full Node images are almost a gig if you're pulling in everything, versus Alpine at 77 megs. It's a huge difference.

But not only that. Regardless of the space, the other big advantage is security and the breadth of the surface area you have to cover. The more software you have, the more surface area you have, and the more possibilities that, "Hey, there's going to be something wrong here with security. There's just more that I have to protect." The less software you have there, the fewer doorways into your code. Those two things are something to look at.

The downsides of Alpine are that it's much more difficult to work with. It's just not convenient. Everything is different in Alpine. Bash is not there by default.

Jon: Everything you want, you're going to have to add to it.

Chris: Yeah. It's got its own package manager, totally different than apt-get or yum. It's more difficult to work with. Also, CVE scans don't work so well with Alpine. And because it is so minimal, it has its own compatibility issues.

Recently there was a file-watching issue with nodemon, which is a popular package […] to monitor file system changes so you can hot reload your app. That wasn't working on Alpine but was fine on other distributions. Again, minimal software, harder to configure. You're more likely to run into these issues.

The reason he went into this so much is the recommendation: just use Slim. Slim is actually pretty darn small. It's not too much bigger than Alpine. In his example with the official Node.js image, Slim is about 150 megabytes versus Alpine at 77. 150 megabytes is still so much better than a gig or 800 megabytes or what not. So, 150 megabytes is totally reasonable. It's only like 75 megabytes more than Alpine, and you get a lot of the software that you're used to. So start off with Slim, just go with that, and make life a little bit easier unless you have a really compelling reason why you need to go Alpine. Start with Slim.
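Pulling those recommendations together, here is a rough sketch of the base-image choices; the specific tags are illustrative of the variants discussed, not lines from Bret's slides:

    # Jessie-based variants (e.g. node:8-jessie) no longer build cleanly now that Debian has archived Jessie.
    # A Stretch-based "slim" tag keeps most of the usual tooling at roughly 150 MB:
    FROM node:10-slim

    # Other options and their trade-offs:
    #   FROM node:10          full image, closer to a gig, with far more tooling baked in
    #   FROM node:10-alpine   around 77 MB, but apk instead of apt-get and no bash by default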

Jon: Interesting. We just have a couple more minutes. I think we can get through one more point and then finish up with this Dockerfile best practices section.

Chris: Sure. We talked a little bit about node_modules and dealing with that. This does become more of an issue when you have volume mounts between your containers and your host. When you're developing locally on your machine, for things like hot reloading or just to speed it all up, sometimes you'll want to share node_modules and your source with the host.

The problem is that your Node app runs under a certain platform. It's probably going to run under Linux, if you're on AWS or Azure or some cloud or what not, probably a Linux machine, and some Node modules definitely have native code. When you do an npm install, it's going to run node-gyp to compile the native code, the C code, the C++ code, whatever it may be, into object code, and that's done for the platform it's going to run on.

If you're working on a Mac, on macOS, you're sharing your node_modules, and you run your npm install outside the container, node-gyp is going to compile against macOS. If that gets baked into your image and you then go deploy your image on AWS, things are not going to go well.

Jon: Right. That’s an interesting problem.

Chris: He just pointed this out: be careful with it. Definitely one of the things you should be doing is adding node_modules to your .dockerignore file. If you do that, you don't have to worry about it.

Jon: I can't recall. Your .dockerignore file is basically going to say, "When you're putting this image in an image repository, don't bring stuff with it"?

Chris: It's when you're building an image. When you're building the image, it says, "Whenever you're adding files or copying files, I don't care what you're saying in the instruction. Don't include these types of files when you do it."

Jon: And when you say any instruction, you're talking about the instructions in the Dockerfile.

Chris: In the actual Dockerfile itself, if it's an ADD or a COPY instruction in your Dockerfile, whatever you're specifying in there, the .dockerignore file will override it. So typically in your .dockerignore, you want to make sure something like .git is ignored, because you don't want your git repo to be part of the actual Docker image. node_modules should be in there as well to protect against that when you're building an image. Of course, you shouldn't be building your image on your local machine, you should be doing that in your CI system, but again, sometimes you may or may not be doing that, so add it to your .dockerignore.
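As a rough illustration of that advice (not a file from the talk), a .dockerignore along these lines keeps the usual offenders out of the build context; npm install then happens inside the image build, so node-gyp compiles native modules for the container's Linux platform instead of your Mac:

    # .dockerignore

    # Never copy host-compiled modules into the image; install them during the build instead
    node_modules

    # Keep the git repo out of the image
    .git

    # Misc. local noise
    npm-debug.log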

Wrapping up the Dockerfile best practices, the final thing discussed was security and the principle of least privilege: definitely switch over and use the node user to run your application. By default in Docker, if you don't change your user, you're going to be running as root. Obviously, running as root gives you all the privilege in the world. You really shouldn't be doing that. You should be operating from the principle of least privilege and only using the privilege that you need. Root is way too powerful and there's way too much that can go wrong. You're opening yourself up to security problems and vulnerabilities and what not.

All the official Node images actually have built-in support for a user called node that's already created in the operating system for you, but it's up to you to enable it. You can do this in your Dockerfile really easily: just use the USER instruction to switch to the node user. This may cause some permission and access issues. You're going to have some gotchas along the way, so it's going to take a little bit more work to tweak it and make sure that you're using the right directories and have the right permissions and ownership and what not. There are probably a few more instructions that you'll need in your Dockerfile.
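A minimal sketch of what that can look like, assuming a hypothetical app with a server.js entry point; the chown and --chown steps are the kind of permission tweaks Chris alludes to:

    FROM node:10-slim
    WORKDIR /app

    # WORKDIR creates /app owned by root, so hand it to the unprivileged node user
    # that the official images ship with, then switch away from root.
    RUN chown node:node /app
    USER node

    # Everything from here on runs (and is owned) as node, not root.
    COPY --chown=node:node package*.json ./
    RUN npm ci --only=production
    COPY --chown=node:node . .

    # "server.js" is just a placeholder entry point for this sketch.
    CMD ["node", "server.js"]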

Jon: It's so much better to do that during the course of development than when you've got a working application, you're all done, and now you're working with a bigger company, so they're going to put it through a security audit, and it's like, "Oh, you've got to change this." That's not […] you want to change that.

Chris: Exactly. So switch users and don't run as root. Run under a lesser-privileged account like the built-in node account.

Jon: That’s cool that they’ve built that in, made it ready to go switch over, and good to go.

Chris: Absolutely.

Jon: All right. Well, thank you. Next week, we’re gonna talk a lot more about this talk because already it’s super useful stuff. It’s been a while since we talked much about Docker so it feels good to go back to our container roots on this podcast. So, thank you.

Chris: Moby 101.

Jon: Exactly.

Rich: All right. See you guys.

Jon: Yeah. Thanks, Rich. Thanks, Chris. Bye-bye.

Chris: Bye.

Rich: Well dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes and other valuable resources is available at mobycast.fm/59. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.
