
65. Practical AWS – Hosting a Personal Blog The Hard Way, Then The Easy Way

Chris Hickman describes to his co-hosts, Jon Christensen and Rich Staats, how he set up his personal blog using Ghost.

Some of the highlights of the show include:

  • Why Ghost over WordPress? Wanting to do things the hard way, while having complete control and ownership of blog
  • Choosing this option involved more DevOps and Site Reliability Engineering (SRE) work
  • Personal Blog: Web application with no limits to what you can do with it
  • Grand aspirations of starting a personal blog: Thousands of posts, kudos, and followers
  • Ghost: Open source, Node.js application created in reaction to lessons learned from other options
  • Rich’s reaction to Ghost and his initial lack of interest, having been in the WordPress world
  • What could dethrone WordPress? API-first content management system (CMS), where admin creates endpoints with no consideration of what’s on frontend
  • The decision to host the blog on AWS ECS and Dockerize it meant dealing with secrets for database and mail service credentials
  • Cost was a major consideration to make blog available via DNS and Elastic Load Balancing (ELB), plus hosting other microservices on ECS using personal AWS account
  • Ports make one load balancer work with multiple services; NGINX made the mapping friendlier on the frontend
  • Game Changer: AWS Application Load Balancer (ALB) is now available; simple and straightforward process to convert blog’s architecture and configuration
  • An ECS cluster is backed by an Auto Scaling group, which is defined by a launch configuration
  • CMSs connect marketing and technology; choosing one needs to be as much a business decision as a technical one

Links and Resources

Ghost

Chris Hickman’s Personal Blog

Amazon Web Services (AWS)

WordPress Blog Package

Node.js

GraphCMS

Craft CMS

Laravel’s Lavalite CMS

Docker

AWS ECS

AWS ELB

AWS ALB

AWS Amazon Machine Image (AMI)

AWS Fargate

AWS Lambda

NGINX

Cosmic JS

Webiny Serverless CMS

Microsoft Azure

Google Cloud

Kelsus

Secret Stache Media

Rich: In Episode 65 of Mobycast, Chris walks us through the setup of his personal blog using Ghost. Welcome to Mobycast, a weekly conversation about cloud-native development, AWS, and building distributed systems. Let’s jump right in.

Jon: Welcome, Chris and Rich. It’s another episode of Mobycast.

Chris: Hey.

Rich: Hey, guys.

Jon: Alright. In deciding this week’s topic: Chris has been doing some work on his personal blog. The thing is, Chris is hosting his personal blog in a way that nobody in their right mind does. I think you’re kind of doing it because it gives you a chance to do what you want in AWS. It’s kind of a learning tool to make sure that you’re good to go on new services and changes inside of AWS, and to get some tricky stuff to play with, instead of just going to WordPress.com and buying their blog hosting package. Am I right? Or is it also that you have some specific data ownership concerns and you want to make sure you’re in charge of your own database for your own blog? Or a little bit of both?

Chris: Yeah, a little bit of both. Probably first and foremost is being a […] for punishment, doing things the hard way. Then, also, just having the option of complete, total control and flexibility over whatever it is I want to do with it.

Jon: Right on. It turns out that option involves lots of software operations work or DevOps work or whatever you want to call it. It involves a lot of SRE work.

Chris: We’ll get into this, but at the end of the day, it’s hosting a personal blog. The amount of work, and the amount of surface area this has touched, is pretty breathtaking. It’s been fun, and it’s been worth it for that alone.

Jon: Right. That’s the thing. A personal blog is a web application, and there’s no limit to what you can do in terms of making that web application distributed, reliable, available, and all those sorts of things. I’m also hoping, Rich, to get your take on a couple of things during this conversation, because you offer development services for WordPress. You’re not hosting people’s personal blogs, but there’s some overlap: WordPress grew out of the personal blog space into a huge chunk of the internet. Yeah, we’ll get your take on a couple of things as we get into it.

To start out, maybe you can just give us an overview of the decisions you made at the beginning, Chris, when you decided, “I’m going to make a little blog and I’m going to host it myself.”

Chris: Yeah. This process started about three or four years ago. Just like everyone else: “I know I should have my own personal blog.” Grand aspirations of weekly blog posts and thousands upon thousands of followers, all the kudos and the claps. Of course, being a tech guy, going and paying $9 a month or whatever it is for some hosted service and just typing in my content, that was too easy. I didn’t want to do it that way. Let’s do it the hard way. Of course, WordPress has definitely been popular and the de facto standard for blogs.

But around that time, I heard about a tool called Ghost. Ghost was a relative newcomer on the scene with lots of buzz around it, and the thing that probably got me interested was the fact that it was a Node.js application. It started as a Kickstarter. I think some of the people who were at WordPress, or had worked on WordPress, had the feeling of, “Hey, we’ve learned a lot with CMS systems and working with WordPress. Here are the things we want to address now with Ghost. We’re going to do this Kickstarter and it’s going to be open source.” It was just this really cool thing. Being a Node.js application, it caught my eye: “Okay, I’m going to use this Ghost thing.” That’s what started this process.

Jon: Cool. With Ghost, Rich, you’ve been in the WordPress world for quite a while. You were already deep in WordPress when Ghost came out. What was your thought about Ghost when it came out? Have you still been following it at all?

Rich: I remember when it came out, but I’m guessing that was five years ago or more.

Chris: I think the Kickstarter was in 2013.

Jon: Okay.

Rich: Yeah. Back then, I didn’t have a whole lot of interest in it because I felt like WordPress was still headed in the right direction, and I thought there was a lot I could still learn from WordPress. Today, it’s a little different, because there are so many options that I’m starting to, like, look over, the […]. I know a lot of people who were super excited about it. I came back to WordPress because, a lot of times, in their early stages those products are great; fundamentally, they’re built in a way that’s more modern, but at the same time they don’t have the rich feature set we’re accustomed to with WordPress.

Jon: Yeah. WordPress being a platform essentially means it has a whole ecosystem of things that enrich it. I’m just curious, you mentioned that you’re looking at a couple of things now. What are you looking at now that could dethrone WordPress in your world?

Rich: The whole idea of API-first CMSs, where you have this admin but it basically just creates a bunch of endpoints and has no real consideration of what will be on the frontend, that’s really exciting.

Jon: Yeah, for sure.

Rich: That’s where everything’s headed. Craft CMS, which uses GraphQL. Contentful. If you have the […], Contentful starts at like $200 a month. It provides these admin UIs that give you the ability to create these endpoints on the fly and do whatever you want with them. I think that’s sort of where we would head if we were to leave. There are plenty of PHP options, like Craft CMS, which I think is on Symfony, maybe. Laravel has its own version of a CMS. But I don’t think that if we were to leave WordPress we would stick with PHP.

Jon: Yeah, that makes sense. Probably a good idea.

Rich: Yeah.

Jon: Alright, so you’re going to host this thing, Chris. How are you going to host it? At this point, you’re already looking at AWS quite a bit. What did you decide to do?

Chris: This was like three or four years ago, and by then I’d been using AWS for quite some time, so hosting this on AWS was definitely going to happen. Also Docker as well: I was going to Dockerize this and run it within AWS.

There was actually a Docker image for Ghost. It was definitely a bit rough around the edges, and it took some effort to get it into shape with the functionality that I needed. I had things like secrets for database credentials and mail service credentials, and also things like logging and various other things. I had to take that base Docker image, customize it, extend it, and make my own custom Docker image from it. Once that was done, I could go ahead and host it on ECS inside AWS.

Jon: I’m imagining, when you say secrets, mail host credentials, and things like that, that maybe the Docker image you got just made some assumptions about those things being in places they shouldn’t be. You had to get them out and put them in a place where they could be read, maybe off of S3 with keys or something.

Chris: It was essentially left as an exercise to the reader. The Docker image they had kind of made some assumptions. Configuration was coming from a file; at that point, it was a JavaScript file. It was just expecting a JavaScript file to exist on the file system, and that’s where the secrets are, and it reads from that.

Jon: Right on. You had to go and make that JavaScript file, and then make sure not to put your secrets in it; it has to read them from somewhere else. Got it. Okay, cool. You ran it in ECS. I think from our notes here (I sometimes talk about our notes; it’s no secret that we have to look at our notes to have a conversation), it looks like what you want to talk about is mostly not the Dockerizing or putting things on ECS, but really making the thing available via DNS, load balancing, and things like that. Maybe we can go straight to that.

Chris: Yeah. The process of Dockerizing Ghost and getting all of this stuff to work was an effort in itself. But the next thing I was faced with was that, in addition to my own personal blog, […] had other things that I was hosting on ECS as well, with their own DNS names. Things like various microservices with RESTful APIs that I wanted to host.

This was for my own personal interest, on my personal AWS account, so really, cost was a huge factor here. I’m thinking, “Okay, what am I going to do? I basically have a handful of services that I want to be able to host in ECS, and I want to be able to address them by different DNS names. How can I do this without having to spin up an ELB, an Elastic Load Balancer, for each one of these DNS names?” These are not necessarily cheap, especially if you’re the one paying for them. That’s going out of your Starbucks money: about $20 a pop per month, and if you have five, that’s $100 a month just on load balancers alone. The other big caveat here is that this was before AWS had announced or released ALBs, Application Load Balancers. There was just the one load balancer type, the Classic Load Balancer. Things like host-based routing or path-based routing didn’t exist at the load balancer at that point. Thinking about this, I’m like, “How can I do this?” This is where it gets a little crazy.

Jon: I’ve got a question before you get there, because I’m just trying to understand your architecture. Did you have multiple ECS clusters, one for each of your different projects? Or were you trying to have these other APIs, these other microservices you had for whatever reason, and your blog all reside as services within the same ECS cluster?

Chris: Yeah. It was all just a single ECS cluster, with an ECS service for each one of these. You specify a load balancer for each one of your ECS services that’s fronted by a load balancer. If I had just done that the straightforward way, it would have been: five services means five ECS services, and I’d have five load balancers, and boom, I’d be done.

Jon: Okay. Just thinking that through. I’m still trying to get my head around that. The reason you would have five is because that load balancer is going to direct all traffic to that service. For you to direct traffic to different services, you would need different load balancers, right?

Chris: Mm-hmm.

Jon: Okay, cool.

Chris: Again, with the Classic Load Balancers, the way the routing was dictated was based on a port. The world’s a lot different now. Essentially, for each one of these services to coexist, they have to be listening on different ports. That was the mapping that was done on the load balancer: “For this particular service, this is the port it’s on, so this is where you’re forwarding to.”

Jon: Right on. You could have one load balancer talk to many different services as long as the services were on different ports.

Chris: Yes. The idea was, “Okay, I only want to have one load balancer. How do I have one load balancer and these five different services?” Definitely, one of the things I had to do there is that each one of these services has to be listening on a different port. When you go to my blog, you go to the DNS name, colon 8080. When you want to hit this one other service, you’ve got to call it on port 8081. That feels kind of bad. Not so nice.

Given that’s the underlying mechanism of how this stuff works, how can I make this friendlier for people on the frontend? What I ended up doing was basically using Nginx to do this mapping. Essentially, how it works is that for each of these five services, their DNS would resolve to the same ELB. Then what happens is the traffic comes into the ELB on, obviously, port 80 or port 443. That traffic would be forwarded to this custom app that was basically Nginx. Nginx would then do the inspection of the Host header and see, “Okay, what host were they trying to talk to?” The request came in for the blog URL, or for one of those service URLs, and it would see that. Based upon that, it would say, “You want to go back to that load balancer, but now go on to this port.” Nginx would then basically loop the request back around to the load balancer, but this time using the custom port that service was using.

Jon: Alright.

Chris: When that happens, now, the load balancers would actually forward it to the correct service in ECS.

Jon: Yup. Yeah. We’ve done this before at Kelsus on a project, I think also because of the lack of ALBs and host-based routing, for the same kind of reason. Some stuff coming in should go to this WordPress application, some other stuff should actually go to this Node service, and Nginx is the thing that figured that out.

I can’t remember off the top of my head which configuration file it is where you tell Nginx to do this. Nginx has this place where you go and say, “Anything that looks like this in the path, send it here.” It could be anything. It could be, “Send it to disney.com.”

Chris: Yeah. Nginx has a config file; it’s nginx.conf by default. That’s where you specify this. The pattern typically would be something like this: you’d have a mixture of normal static web assets, like JavaScript, images, and CSS, and that would be served up by Nginx itself. Then you’d perhaps have a custom RESTful API service, and that would be forwarded to an upstream in Nginx. But usually, in that case, it’s on its own ELB; it would just be making calls out to this other thing. This is a little bit different in that it’s looping back around to the same load balancer. It kind of hurts to say this out loud.
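
For readers following along, here is a minimal sketch of the kind of nginx.conf host-to-port mapping Chris describes. It is not his actual configuration: the hostnames, the ELB address, and the ports are all placeholders, and it assumes Nginx is itself the app that the Classic ELB forwards port 80 traffic to.

```nginx
# Hypothetical sketch of the mapping described above. Hostnames, the ELB DNS
# name, and the ports are placeholders, not the real values from the episode.
events {}

http {
    # Blog traffic: loop back to the same Classic ELB on the blog's custom port.
    server {
        listen 80;
        server_name upstart.example.com www.example.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://my-classic-elb-123456.us-east-1.elb.amazonaws.com:8080;
        }
    }

    # One of the microservice APIs: same ELB, different custom port.
    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://my-classic-elb-123456.us-east-1.elb.amazonaws.com:8081;
        }
    }
}
```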

Jon: The only problem with it, though, is that the load balancer’s there, and what the ELB understood was ports. I think we literally did that exact thing, not even just for a Kelsus project, but all the way back in 2001 or 2002 at […], just having hardware load balancers that we would reuse, sending stuff back to a hardware load balancer from a path, I guess, to different ports to get that hardware load balancer to do different things. It’s an old trick.

Chris: Yeah. It worked pretty well. It worked like that for two or three years. Every once in a while, things would get locked up a bit; I think Nginx got confused, or the load balancer did, or a port, or whatnot, and we’d have to kick things over. But for the most part, it worked pretty well. Recently, within the last few months, I was like, “You know, it’s really kind of silly that I’m still using this technique. ALBs exist now. This is a slam dunk for ALBs. I’m just going to go ahead and convert this over.”

Jon: ALB—Application Load Balancers?

Chris: Yeah. ALBs support things like host-based routing, which is essentially what I had to […]. This is what I was basically implementing myself: my own host-based routing. With ALBs, it’s a first-class feature. I figured it would take a couple of days of doing this or whatnot. Man, I was just blown away by how easy and straightforward this whole process was. I think, start to finish, it took literally […] maybe not five minutes, but it felt like five minutes. It almost felt like magic.

Jon: […] because it’s like a riff on the blog entries: “Get your ALB-based Ghost blog up in five minutes or less. It’s all in my blog post.” I think we should really point out, sort of for the history of Mobycast, that at the time you were literally sitting for some AWS certification stuff. Neither of us is elbow-deep in code for our clients; we’re often more at the solutions architecture level, and sometimes just straight-up project management to get work done for our clients. To be able to get our hands dirty with actual coding or AWS configuration or DevOps, we have to create those opportunities for ourselves. This is one of those.

Chris: Yeah, absolutely. To convert it, […] I wanted minimal downtime, and this is a pretty radically different architecture, so I just wanted to make sure everything was working and whatnot. I was like, “Okay, what are the steps that I need to do?” Basically, the first thing I did was go create a new ALB, and not worry about the configuration, because that comes later. I created my ALB. I then decided to create a brand new ECS cluster, as opposed to reusing the existing one. The reason for that is because I wanted to keep the same service name. Instead of My Blog Two or My New Blog or something like that, I just wanted My Blog still.

Jon: Not to mention that you get brand new fresh AMIs.

Chris: Yes, absolutely. This whole process of creating the ECS cluster takes literally five minutes.

Jon: Quick, Rich, what’s an AMI?

Rich: You’re killing me.

Jon: I know. AMI is Application—I don’t even know what it is. It’s just an […]

Chris: Amazon Machine Image, I believe.

Jon: Amazon Machine Image. I was like, “What does the A stand for?” Okay. It’s an EC2 image, basically, Rich. When I said you’d get a new one, what I meant was that those things kind of get old; they’re constantly getting updated with different capabilities or different versions of the ECS thing. Agent, that’s the word I was looking for. By creating a new cluster, he’s getting a new ECS AMI that has all the latest features and capabilities.

Chris: Indeed. Once I had my new ECS cluster created, I created a new ECS service to host the blog. The big thing here is making sure I’m going to use dynamic port mapping. This is a great feature of ALBs, and it’s exactly what I wanted. In the past, I had to keep that map of which port each particular service was going to listen on, because, again, you can’t have more than one service listening on the same port, and you have no idea which services are going to land on which machines and have those conflicts. I had to build that mapping myself and manage it. But with dynamic port mapping, you no longer have to do that; you let AWS do it. It’s just a really nice, straightforward, awesome feature. Just check that box and now […].

Jon: So the service runs on ECS, and you tell AWS, “I don’t care what port you run it on. You figure it out.” When you point the ALB at the service, you just say, “Point it at this service name.” AWS knows what port it’s running on, so the ALB knows what port it’s running on. Is that essentially what you’re saying?

Chris: Yeah.

Jon: Sweet.
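
As a rough illustration of the “check that box” step, here is what dynamic port mapping looks like in a task definition if you script it with boto3. This is a hedged sketch, not Chris’s actual setup: the family name, image, and sizes are placeholders, and the key detail is hostPort set to 0.

```python
# Hypothetical sketch: an ECS task definition that opts into dynamic port
# mapping by setting hostPort to 0 (bridge networking, EC2 launch type).
# Family name, image, and sizes are placeholders, not Chris's real values.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="blog",
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "ghost",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ghost-custom:latest",
            "memory": 512,
            "essential": True,
            # containerPort 2368 is Ghost's default; hostPort 0 tells ECS to
            # pick a free ephemeral port on the instance and register it with
            # the ALB target group automatically.
            "portMappings": [
                {"containerPort": 2368, "hostPort": 0, "protocol": "tcp"}
            ],
        }
    ],
)
```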

Chris: Yeah. I have my new ECS service that’s going to use dynamic port mapping. I wire it up and say this is the ALB that it’s using. Now I can go and configure the ALB, because now I have the application, which ends up basically being the target group, if you will. Now that that’s there, I can go to the ALB and configure the host-based routing. I can say, “Okay, for these particular host names…” For me, the name of my blog is Upstart, so whenever a request comes in and the host name is upstart.chris.com, it now gets forwarded to that new ECS service. If I also want www.chris.com to be an alias for it, I just configure that as another host forwarded to this particular ECS service target group.

Basically, that’s all I had to do. At that point, it’s just a DNS change. I go into DNS and say, “Okay, for upstart.chris.com, point to this new ALB instead of the old Classic Load Balancer.” Now everything just works. We just walked through that whole process, and it really took less than two hours. It simplified everything so much too, which is really nice.
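
The host-based routing Chris describes boils down to a single listener rule. Here is a minimal boto3 sketch with placeholder ARNs and hostnames; the same thing can be done with a few clicks in the console.

```python
# Hypothetical sketch of an ALB host-based routing rule: requests whose Host
# header matches the blog's hostnames get forwarded to the blog's target group.
# The listener/target group ARNs and hostnames are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[
        {"Field": "host-header", "Values": ["upstart.example.com", "www.example.com"]}
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blog/0123456789abcdef",
        }
    ],
)
```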

I still have just the single load balancer. I can host almost as many services as I want, and I don’t have to manage ports. It’s just very simple. Not only that, there are some additional benefits I get with this as well. Before, with the previous architecture, there were issues like, “Okay, how do I do TLS correctly? How is that going to work? How do I handle the redirects?” With the ALB, it’s very simple. I wanted to force all traffic to go over TLS, and all I have to do is set up the port […] in the ALB. I can say, “Whenever anything comes in on port 80, just force it over to 443.” I’m done. It’s nothing but TLS traffic, and it’s all handled at the load balancer level. That was a really nice and really easy checkbox to add for that functionality.
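
The “force everything over TLS” piece is similarly small. A hedged boto3 sketch, with a placeholder load balancer ARN: the port 80 listener’s only job becomes issuing a permanent redirect to HTTPS on 443, so only the 443 listener ever reaches the target groups.

```python
# Hypothetical sketch: an HTTP:80 listener whose only job is to redirect
# every request to HTTPS:443. The load balancer ARN is a placeholder.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",          # RedirectConfig values are strings
                "StatusCode": "HTTP_301",
            },
        }
    ],
)
```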

Jon: Yeah. I think one of the hardest things about ECS is the part where you have to understand how EC2 really works. That’s like setting up your autoscaling stuff. I can’t remember the two names. It’s the autoscaling something, and the thing that autoscaling uses to know what kind of machine it should use when it adds or […] a machine.

Chris: When you create an ECS cluster, that’s backed by an autoscale group.

Jon: Autoscale group, yep.

Chris: Autoscale group is defined by launch configuration.

Jon: Launch configuration. I literally was not paying attention to you for about 30 seconds while I was trying to remember the words “launch configuration.” Thank you for reminding me. Honestly, I do think that’s the part where it’s like, “Hey, this is kind of ugly.” But you can get around that by using Fargate if you’re just like, “Scale up or scale down, I don’t want to think about it.”

Chris: Truth be told, all this stuff is wrapped by AWS ECS to begin with. If you’re using ECS, you just go create a cluster. You tell it, when you create the cluster, how many machines you want in it. If you want three or whatever, it just sets the number of machines in the autoscale group to three. It never tells you that it created an ASG, an Auto Scaling group; it never tells you it created a launch configuration. You just give it the parameters it needed to do that.

I’m sure there are plenty of people out there who use ECS, they’ve spun it up, and they don’t even know they have an ASG and a launch configuration behind it.
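
For the curious, here is roughly what the ECS console wires up behind the scenes, sketched with boto3. The AMI ID, instance type, role, and subnet IDs are placeholders; the important detail is the user data that tells the ECS agent which cluster to join.

```python
# Hypothetical sketch of what "create cluster" sets up for you: a launch
# configuration plus an Auto Scaling group whose instances join the ECS
# cluster via user data. AMI, instance type, role, and subnets are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="blog-cluster-lc",
    ImageId="ami-0123456789abcdef0",       # an ECS-optimized AMI
    InstanceType="t3.micro",
    IamInstanceProfile="ecsInstanceRole",  # role that lets the ECS agent register
    UserData="#!/bin/bash\necho ECS_CLUSTER=blog-cluster >> /etc/ecs/ecs.config\n",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="blog-cluster-asg",
    LaunchConfigurationName="blog-cluster-lc",
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
```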

Jon: Interesting. I’ve been thinking a lot about serverless lately, and this whole thing makes me realize that with Ghost, and some of the things Rich was talking about, CMSs, you’ve got to think that somebody’s maybe working on a serverless CMS, which we kind of need if you want to do self-hosting. You could create a serverless CMS that would install itself inside your AWS account, basically grab the CloudFormation to spin up a serverless CMS inside an AWS account, so that you’re in control of all the resources while AWS actually owns the physical hardware. It’s slightly different from having to run a Docker image and do the […] of a Docker image; you could do a fully serverless thing that can scale down to zero or scale up to millions. I bet somebody’s working on that.

Chris: Yeah, I’m sure there are probably several; I wouldn’t be surprised if they exist. For this particular architecture, Ghost is the only thing that can’t be serverless. It supports MySQL, and in a previous episode we talked about Aurora Serverless, so I could use that. It’s really just the hosting of the actual CMS itself. At the end of the day, you can totally see a CMS being implemented as Lambda functions: […], manipulating, making database calls when it needs to, and whatnot. It’s not too difficult to see that path happening. I’m sure it does exist out there.

Jon: […] for the listener: go Google it and find out if it’s happening. Whoever is working on that would get that one little extra hit, and hopefully you get to name it.

Rich: There’s a few, Cosmic JS.

Jon: Oh, yeah. I’ve heard of that.

Rich: […] is actually one that was on Product Hunt not too long ago.

Jon: Interesting. Let me be clear: the architecture of your CMS is not really that important. At the end of the day, as long as it can scale, and as long as it can scale quickly, that’s the requirement. The business requirement is being able to scale up and scale down quickly; not that it uses AWS in a specific way, or Google Cloud, or Azure in a specific way, but just that it can do what it needs to do to handle low or high and spiky traffic. CMSs […] to be, what’s the word? Disrupted. We’ll watch it. We’ll keep watching it.

In the meantime, there’s rarely a project that goes by at Kelsus that doesn’t require some level of integration with a CMS. There’s always that line between what marketing and the people who want to make changes to what they see on web pages can do without getting the development team involved, and the stuff that really does need the development team to change it.

Rich: Absolutely. CMSs need to be as much a business decision as a technology decision for that reason. You need to have the marketing team in there; they’re going to be comfortable with a UI, and they’re going to be uncomfortable with new UIs. I think WordPress ends up being the choice not for the tech stack (I don’t think WordPress […]), but because people are familiar with it. The ecosystem is huge. A lot of problems are already solved; you can pay $100 a year for a license to solve really hard problems. When you start to go to these obscure CMSs that are still in their infancy, you’re typically building everything from scratch.

Jon: It gets back to that thing that I was saying. You don’t want the development team involved on some of these simple things you’re doing on your user facing website.

Rich: Yeah. One of my marketing taglines, if I were to target software engineering companies like Kelsus, would be, “Software engineers should not build marketing websites.” Not that they can’t; they shouldn’t. It’s not a good use of their time.

Jon: Right. Well, thanks, Rich. Thanks, Chris. That’s really interesting.

Chris: Yeah.

Jon: We’ll catch you next week.

Chris: Alright, see you next week.

Rich: Later.

Well, dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes and other valuable resources, is available at mobycast.fm/65. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you. We’ll see you again next week.
