87. Serverless Containers with ECS Fargate – Part 3

You may not be an expert on container networking, but wouldn’t you like to impress guests at your next party by explaining the difference between “host” and “bridge” networking?

This week on Mobycast, Jon and Chris conclude their three-part series on serverless containers with AWS Fargate. We wrap our heads around container networking and its various networking modes, with particular emphasis on task networking (aka “awsvpc” mode).

We finish by pulling together everything we learned over these 3 episodes to walk step-by-step through the migration of a container from EC2 to Fargate. After this episode, you’ll be the life of the party!

Show Details

In this episode, we cover the following topics:

  • Container networking
    • ECS networking mode
      • Configures the Docker networking mode to use for the containers in the task
        • Specified as part of the task definition
      • Valid values:
        • none
          • Containers do not have external connectivity and port mappings can’t be specified in the container definition
        • bridge
          • Utilizes Docker’s built-in virtual network which runs inside each container instance
            • Containers on an instance are connected to each other using the docker0 bridge
            • Containers use this bridge to communicate with endpoints outside of the instance using primary ENI of instance they are running on
            • Containers share networking properties of the primary ENI, including the firewall rules and IP addressing
            • Containers are addressed by combination of IP address of primary ENI and host port to which they are mapped
          • Cons:
            • You cannot address these containers with the IP address allocated by Docker
              • It comes from pool of locally scoped addresses
            • You cannot enforce finely grained network ACLs and firewall rules
        • host
          • Bypasses Docker’s built-in virtual network and maps container ports directly to the EC2 instance’s NIC
          • You can’t run multiple instantiations of the same task on a single container instance when port mappings are used
        • awsvpc
          • Each task is allocated its own ENI and IP address
            • Multiple applications (including multiple copies of same app) can run on same port number without conflict
          • You must specify a NetworkConfiguration when you create a service or run a task with the task definition
      • Default networking mode is bridge
      • host and awsvpc network modes offer the highest networking performance
        • They use the Amazon EC2 network stack instead of the virtualized network stack provided by the bridge mode
        • Cannot take advantage of dynamic host port mappings
        • Exposed container ports are mapped directly…
          • host: to corresponding host port
          • awsvpc: to attached elastic network interface port
    • Task networking (aka awsvpc mode networking)
      • Benefits
        • Each task has its own attached ENI
          • With primary private IP address and internal DNS hostname
        • Simplifies container networking
          • No host port specified
            • Container port is what is used by task ENI
            • Container ports must be unique in a single task definition
        • Gives more control over how tasks communicate
          • With other tasks
            • Containers share a network namespace
            • Communicate with each other over localhost interface
              • e.g. curl
          • With other services in VPC
          • Note: containers that belong to the same task can communicate over the localhost interface
        • Take advantage of VPC Flow Logs
        • Better security through use of security groups
          • You can assign different security groups to each task, which gives you more fine-grained security
      • Limitations
        • The number of ENIs that can be attached to EC2 instances is fairly small
          • E.g. c5.large EC2 may have up to 3 ENIs attached to it
            • 1 primary, and 2 for task networking
            • Therefore, you can only host 2 tasks using awsvpc mode networking on a c5.large
        • However, you can increase ENI density using “VPC trunking”
    • VPC trunking
      • Allows for overcoming ENI density limits
      • Multiplexes data over shared communication link
      • How it works
        • Two ENIs are attached to the instance
          • Primary ENI
          • Trunk ENI
            • Note that enabling trunking consumes an additional IP address per instance
        • Your account, IAM user, or role must opt in to the awsvpcTrunking account setting
      • Benefits
        • Up to 5x-17x more ENIs per instance
        • E.g. with trunking, c5.large goes from 3 to 12 ENIs
          • 1 primary, 1 trunk, and 10 for task networking
  • Migrating a container from EC2 to Fargate
    • IAM roles
      • Roles created automatically by ECS
        • Amazon ECS service-linked IAM role, AWSServiceRoleForECS
          • Gives permission to attach ENI to instance
        • Task Execution IAM Role (ecsTaskExecutionRole)
          • Needed for:
            • Pulling images from ECR
            • Pushing logs to CloudWatch
      • Create a task-based IAM role
        • Required because we don’t have an ecsInstanceRole anymore
        • Create an IAM policy that gives minimal privileges needed by task
          • Remember two categories of policies:
            • AWS Managed
            • Customer Managed
          • We are going to create a new customer managed policy that contains only the permissions our app needs
            • KMS Decrypt, S3 GETs from specific bucket
          • IAM -> Policies -> Create Policy -> JSON
        • Create role based on “Elastic Container Service Task” service role
          • This service role gives permission to ECS to use STS to assume role (sts:AssumeRole) and perform actions on its behalf
          • IAM -> Roles -> Create Role
            • “Select type of trusted entity”: AWS Service
            • Choose “Elastic Container Service”, and then “Elastic Container Service Task” use case
            • Next, then attach IAM policy we created to the role and save
    • Task definition file changes
      • Task-level parameters
        • Add FARGATE for requiresCompatibilities
        • Use awsvpc as the network mode
        • Specify cpu and memory limits at the task level
        • Specify Task Execution IAM Role (executionRoleArn)
          • Allows task to pull images from ECR and send logs to CloudWatch Logs
        • Specify task-based IAM role (taskRoleArn)
          • Needed to give task permissions to perform AWS API calls (such as S3 reads)
      • Container-level parameters
        • Only specify containerPort (do not specify hostPort)
      • See Task Definition example below
    • Create ECS service
      • Choose cluster
      • Specify networking
        • VPC, subnets
        • Create a security group for this task
          • Security group is attached to the ENI
          • Allow inbound port 80 traffic
        • Auto-assign public IP
      • Attach to existing application load balancer
        • Specify production listener (port/protocol)
        • Create a new target group
          • When creating target group, you specify “target type”
            • Instance
            • IP
            • Lambda function
          • For awsvpc mode (and by default, Fargate), you must use the IP target type
        • Specify path pattern for ALB listener, health check path
          • Note: you cannot specify host-based routing through the console
            • You can update that after creating the service through the ALB console
    • Update security groups
      • Security group for ALB
        • Allow outbound port 80 to the security group we attached to our ENI
      • Security group for RDS
        • Allow inbound port 3306 from the security group for our ENI
    • Create Route 53 record
      • ALIAS pointing to our ALB
    • Log integration with SumoLogic
      • Update task to send logs to stdout/stderr
        • Do not log to file
      • Configure containers for CloudWatch logging (“awslogs” driver)
      • Create Lambda function that subscribes to CloudWatch Log Group
      • Lambda function converts from CloudWatch format to Sumo, then POSTs data to Sumo HTTP Source
      • DeadLetterQueue is recommended to handle/retry failed sends to Sumo
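The task-level and container-level changes above can be pulled together into a minimal sketch of the migrated task definition. The family name, account ID, role names, region, and image URI below are placeholders, not values from the episode:

```python
# Hypothetical Fargate task definition for the migrated service; every
# ARN, name, and image URI below is a placeholder.
task_definition = {
    "family": "my-app",
    "requiresCompatibilities": ["FARGATE"],  # task-level: run on Fargate
    "networkMode": "awsvpc",                 # required by Fargate
    "cpu": "256",                            # task-level limits (0.25 vCPU)
    "memory": "512",                         # MiB
    # Lets ECS pull the image from ECR and write to CloudWatch Logs
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    # Task-based role with the app's minimal permissions (KMS Decrypt, S3 GETs)
    "taskRoleArn": "arn:aws:iam::123456789012:role/my-app-task-role",
    "containerDefinitions": [{
        "name": "my-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
        # Only containerPort; no hostPort in awsvpc mode
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/my-app",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "my-app",
            },
        },
    }],
}
```

With boto3, a dict shaped like this could be passed as keyword arguments to `ecs.register_task_definition(**task_definition)`.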
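The ENI-density arithmetic above (a c5.large hosting 2 awsvpc tasks without trunking, 10 with it) can be sketched as a quick calculation. The helper name is ours; the instance figures are the ones quoted in the notes:

```python
def max_awsvpc_tasks(eni_limit: int, trunking: bool = False, branch_enis: int = 10) -> int:
    """How many awsvpc-mode tasks an instance can host.

    Without trunking, one ENI is reserved as the instance's primary and
    the rest are available for task networking. With trunking, the primary
    and the trunk ENI are both consumed, and the trunk carries
    `branch_enis` branch ENIs for tasks (10 on a c5.large, per the
    figures above).
    """
    return branch_enis if trunking else eni_limit - 1

# c5.large: limit of 3 ENIs
print(max_awsvpc_tasks(3))                 # 2 tasks without trunking
print(max_awsvpc_tasks(3, trunking=True))  # 10 tasks with trunking
```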
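The security group updates above reference another security group (the one on the task ENI) rather than an IP range. As a sketch, they can be expressed as EC2 `IpPermissions` entries; the helper function and group IDs are illustrative, not from the episode:

```python
def sg_rule_from_group(port: int, source_group_id: str) -> dict:
    """Build an EC2 IpPermissions entry that allows TCP traffic on `port`
    from another security group (here, the group attached to the task ENI)."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_group_id}],
    }

task_sg = "sg-0123456789abcdef0"  # placeholder: the task ENI's security group

# ALB security group: allow outbound port 80 to the task's group.
alb_egress = sg_rule_from_group(80, task_sg)

# RDS security group: allow inbound port 3306 from the task's group.
rds_ingress = sg_rule_from_group(3306, task_sg)
```

With boto3, `rds_ingress` could then be applied via `ec2.authorize_security_group_ingress(GroupId=..., IpPermissions=[rds_ingress])`.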
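For the Sumo Logic step, CloudWatch delivers subscription data to the Lambda function as base64-encoded, gzip-compressed JSON. A sketch of just the decode step is below (Sumo Logic publishes a ready-made Lambda for the full pipeline; the function name here is ours):

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event: dict) -> list:
    """Decode the payload a CloudWatch Logs subscription sends to Lambda:
    event["awslogs"]["data"] is base64-encoded, gzip-compressed JSON."""
    raw = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(raw))
    return data["logEvents"]  # each event has "id", "timestamp", "message"

# A real handler would then convert and POST each message to the Sumo
# HTTP Source URL, routing failed sends to a dead-letter queue for retry.
```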


End Song

Drifter by Roy England

We’d love to hear from you! You can reach us at:

Voiceover: Wouldn’t you like to impress guests at your next party by explaining the difference between host and bridge networking? This week on Mobycast, Jon and Chris conclude their three-part series on serverless containers with AWS Fargate. We wrap our heads around container networking and its various networking modes with particular emphasis on task networking, a.k.a. AWS VPC mode. We finish by pulling together everything we learned over these three episodes to walk step by step through the migration of a container from EC2 to Fargate. After this episode, you’ll be the life of the party. Welcome to Mobycast, a show about the techniques and technologies used by the best cloud native software teams. Each week your hosts, Jon Christensen and Chris Hickman, pick a software concept and dive deep to figure it out.

Jon Christensen: Welcome Chris. It’s another episode of Mobycast.

Chris Hickman: Hey Jon, it’s good to be back.

Jon Christensen: Hey, good to have you back. Here we go. We’re going to talk a little bit more today about ECS. This is episode three on ECS, isn’t it?

Chris Hickman: It is.

Jon Christensen: Yep.

Chris Hickman: That was not planned. But turns out there’s a lot here and truth is we could probably do a 12 part mini series workshop all about ECS and lots of just really interesting good stuff with it.

Jon Christensen: I mean, let’s be honest, we could have like a whole podcast on ECS.

Chris Hickman: We could.

Jon Christensen: You totally could. You could go through all the features and all the changes, and then you can talk to different people about how they’re using ECS, problems they’re having with ECS. You can interview us developers about ECS. You could do a whole on ECS podcast.

Chris Hickman: Absolutely.

Jon Christensen: We’re not going to do that. We’ll try to keep it to this last episode or is it this? Or this and two more? Or this and four more?

Chris Hickman: We’re going to wrap up. So this is a three-part series on serverless containers with Fargate. We’re also taking some detours here to some of the really important core key aspects of ECS like identity and access management. We’re going to be talking about container networking. We’ve talked a little bit about auto scaling and how you scale these things up and down. So we’re touching on various key aspects of ECS that you need to know on a practical basis as we discuss it in the context of, how do I go from an EC2 launch type to a Fargate launch type and go to this serverless model of running containers, as opposed to managing the EC2s myself.

Jon Christensen: And just in case somebody happens to stumble onto this series on part three and hasn’t listened to the other two parts. One of the questions I asked in part one, I think it’s embarrassing asking again and those people that stumble in can hear the answer to this and then decide whether to start on part one. The question is, if you’re not an ECS user, is there any value here, Chris?

Chris Hickman: Yeah. The additional context that we’re going to be talking about, the different areas and topics of just stuff you need to know when running containers in production. So there’s quite a bit of overlap here that applies to whatever orchestration system that you’re using. That said, there is a lot that’s AWS specific, but again, when we talk about container networking and just we’re going to get into like what the bridge network mode is. And that applies wherever you’re running Docker, whether it’s inside ECS or Kubernetes or on your own or however else it is. Understanding identity and access management too is the implementation of it is specific to AWS, but the concepts apply across the board. Right?

Jon Christensen: Yeah, I can imagine it’d be useful for almost anyone too. Like even if you’re on Azure and you’re on Kubernetes, or GCP on Kubernetes, I wish there was somebody doing exactly what we were doing on those so that I could listen to it and just find out what I don’t know about those other cloud engines and do some comparison and learn from that. I can imagine people that are on those other clouds might get something out of this too.

Chris Hickman: Yeah, absolutely.

Jon Christensen: Cool. All right. Again, no pleasantries this week. No weather discussion, no biking discussion. We’re just going to jump right in, because last week at the end we had to take a listener poll and decide whether we were going to extend this into three episodes and listeners overwhelmingly said, “Yes, let’s do it.” So here we are, episode three. Can we jump in Chris? How is that? [crosstalk 00:04:34]

Chris Hickman: The responses were deafening.

Jon Christensen: Yes.

Chris Hickman: So yeah, let’s jump in. Just a real quick recap. Last week we started going through this process of, how do you go from the EC2 launch type to the Fargate launch type with the container. And we said, “Hey, as part of that, let’s dive deep into two really important areas of ECS that are going to help inform us as we do that migration.” One is identity and access management with ECS and the various roles that are required by ECS itself as a service and other services. And then also, just as important, or very important from a security posture, is to use task-based [inaudible 00:05:15] for containers. So instead of sharing a single identity for all containers running on our instances, we should narrow it down so that each specific service has an identity associated with it that has the minimal permissions it needs to do whatever it is it needs to do. And you do that on a service-by-service basis.
So that was last week. Today we’re gonna dive deep into container networking and understand at least at the ECS level, what’s important there and what we need to know. And we’re definitely going to be touching on some things that are probably not very common that maybe not a lot of people know about. And so it should be interesting. And then after that, we will wrap up by now going through these practical steps of, “Okay, I have an existing container, it’s on my EC2 launch type. I now want to get it running on the Fargate launch type.” What exactly do I have to do? Like how hard is that? What are the steps? And so we’re just going to walk through the steps. That should be fun as well.

Jon Christensen: Cool.

Chris Hickman: All right. With that, why don’t we dive right into container networking. So Docker, whenever you’re running a container, you’re going to be specifying a networking mode for that container to use, right? And there’s lots of different options here. The default networking mode is bridge, but there’s other options as well. So with ECS, we’re going to specify that networking mode as part of our task definition. With ECS, we have four different values for this networking mode. Four different types. So one is none. This one is really straightforward. Pretty easy. There is no network. So we have a hundred percent coverage on that now. We can nail that. That’s not an exam question. We got everything we need to know about networking. Good. So the next one is bridge. And so we talked about that. That’s the default mode.
And then two other ones, host and then AWS VPC. And so we touched on AWS VPC last episode and we’re going to dive even deeper into it today. But let’s talk about host first. And so what host networking mode does is bypass Docker’s built-in virtual networking and instead just map straight from the container to the host that we’re running on. So there’s no proxy between us. And there’s no difference between the container port versus the host port. They’re one and the same; it’s basically just the host port.

Jon Christensen: Okay.

Chris Hickman: And so one implication of this is that you’re going to have port conflicts if you’re not careful. You can’t run multiple instantiations of the same task on a single instance when you have these port mappings.
So if I have a container that says I want to listen on port 80, I’m only able to spin up one of those on that particular instance, because it’s now using the host port 80. And not only that, it’s not just additional copies of that particular container, but any other container that wants to listen on port 80. And so if you wanted to use host mode and you’re running multiple containers and different apps and services and whatnot, it’s going to get pretty messy pretty quickly, where you’re going to have to go and keep track of these mappings and know, “Okay, hey, for this particular app, I want him to be listening on port 8080, and then this other one’s going to listen on port 8081.” And then you have to manage all this stuff yourself manually. And you can set up listeners on your load balancers to forward appropriately to those ports. But overall, kind of messy, but pretty straightforward as a networking model. There’s just one networking system there. There’s no virtual network, there’s no proxy. It’s just straight through.

Jon Christensen: So I haven’t said much yet, Chris, and I think I just have to say that I don’t really understand how container networking works in the first place. So talking through these different types of networking isn’t creating an image in my mind. I’m like, I get the words and they make sense, and I know the rules so I could follow them, but I don’t have a picture in my head of how this is all working in the first place. Because I remember where we left off in the series on containers versus virtual machines, is we got to a place where we had a really good understanding that a container is just a process that’s running on the host operating system. And so already that’s like, “Well wait, processes don’t have their own network cards.” So how is this happening, where it’s able to do networking? And bridged networking makes sense. Well, you’re going to use the host network interface and you’re just going to be on a port, and behind it you’re just like any other application, you’re going to bind to a port. Yeah, I’m going to bind to port 7775. Okay, that makes sense to me. That’s how networking works. Applications bind to ports. So I can do that. But once we get outside of that world and we’re not on my favorite category, which is none, then I’m a little confused.

Chris Hickman: Yeah, this is a good time to talk about bridge network mode. Because this might shed some light onto that.

Jon Christensen: Okay.

Chris Hickman: So bridge, again, this is basically the default networking mode for just running Docker in general. And it’s also the default mode when you run your containers in ECS. And so what bridge does is utilize Docker’s built-in virtual network service, which runs on each instance; basically each container is now using that. So think of it as a proxy. It’s a network that’s set up by the Docker daemon itself. That means that you’re creating this, for lack of a better term, Docker network. And so containers on that instance, they’re all connected to each other through this bridge network.

Jon Christensen: But in order for me to understand that I have to go a little bit deeper. I have to have a network interface, and it has to conform to all of the layers of networking. It’s got to have a hardware address and it’s got to have an IP address and all those things that I’ve learned about in the, I can’t remember the name of it, but that layer diagram. It’s got each machine, and that network has to have all those layers in the diagram for the networking to work. So I don’t understand, are we creating a virtual piece of hardware? Is Docker creating virtual hardware for each, and if so, how does that work? I need a little more groundwork, a little more foundation.

Chris Hickman: Well, we’re not going to go there. Because it gets really deep and technical, with things like iptables. But you can think of it as routing. And so it’s creating networks, it’s creating routes, and it has route tables that indicate, if this is on the Docker network, these are the IP addresses, the virtual IP addresses, and this is how you can go from this point to that point. There’s that networking world, and then you have the world of everything outside that, to access the outside world. And how you get from the local Docker network out to the internet or any other machine off that machine. So those are the two networking scopes for our containers. On the actual host, containers can talk to each other over that virtual Docker network. When they need to talk outside of it, they’re going to go through that virtual Docker network to the actual NIC on the host, and then they can go out. So think of it as a middleman, as a proxy. Docker is setting up these networks, and it’s using things like iptables and routing to create that virtual Docker network, and then also to know when traffic should go out to the NIC and out to the open world.

Jon Christensen: Okay.

Chris Hickman: So what this does, it’s like if you’re setting up, you can think of it like private subnets. You’re creating these virtual subnets for your containers. And so this is how, inside those virtual subnets, they can be on whatever ports they want to be, but when they have to go out to the outside world, that’s when now we have to do the port mapping, and that’s what Docker is helping to manage.
It’s helping to manage, like, “Oh, on the host I’m listening on port 8080, but I know that that’s going to go to this particular container, and it’s going to be on port 80,” and to do that mapping between them. So that’s what bridge networking gives us. It gives us that ability. So now we can have multiple containers all listening on port 80 on their container network, and it’s not going to conflict with the actual host network, because there’s going to be different mappings there. So outside the containers, they have to listen on unique addresses, but inside them they can be duplicated. So it makes it much easier to develop our code. We don’t have to keep track of the mappings when we’re writing code or configuring our application at startup. It’s really only at deploy time that we have to worry about it.

Jon Christensen: Okay. So I can just follow that. I can be all right with the fact that Docker is creating this sort of virtual network for me, and inside the virtual network the containers have their own IP addresses and the applications are bound to ports on those IP addresses. And that internal subnet, if you will, of Docker containers is not addressable to the outside world. And then if you want to get out to the outside world, you have to go through this bridge that you’re talking about. Okay. I can follow that.

Chris Hickman: Yeah. So our containers, they’re sharing all the networking properties of the primary NIC, the network interface card, or in the case of AWS, the ENI, the elastic network interface. So they’re all sharing that same IP address, and then all the rules that go along with that: firewall rules, IP addressing, everything. And then Docker is providing that translation between it. And so if something wants to address a container from the outside world, they’re addressing it not just by IP address, but by the combination of IP address and port.

Jon Christensen: Okay.

Chris Hickman: So that’s bridge. And so what are some of the disadvantages, also limitations? One is you can’t address these containers with the IP address allocated by Docker. That kind of makes sense, right? We said that this is the virtual, the container network, the Docker network, with this different private subnet if you will, coming from a pool of locally scoped addresses. No one else outside of that container network knows about those, so you can’t use them. We said that you have to use the IP address of the host plus the port that the container’s been mapped to. And then the other disadvantage here is that you just can’t enforce [inaudible 00:16:28] fine-grained network ACLs or firewall rules on a per-container basis. You can only do it at the instance level. And that’s a pretty big limitation.

Jon Christensen: And I would need to know an example of a rule that you would want to do that [crosstalk 00:16:46] I can’t because I’m using bridged networking. I need a use case. Can you come up with these Chris?

Chris Hickman: Imagine you had a bunch of services and one of them was just like a really sensitive app. It’s the admin app that maybe has access to maybe financial data or something like that.

Jon Christensen: [crosstalk 00:17:08] I was thinking about the Uber app, the one that lets admins see where everybody is.

Chris Hickman: Yeah, sure. So that one, but you just want to have a cluster of EC2s and you want to run it as an ECS service, and so it’s going to get scheduled on one of these instances. You would like to lock that down and say, “You know what, I only want traffic from this particular other network,” or maybe it’s just a particular IP address, or only this particular subnet can access that particular app. I want to lock it down. I don’t want to use just straight user authentication and authorization. I also want to use some networking rules as well to really lock it down. And so with bridge networking, you just can’t do that. The only way you could do it is you’d have to create multiple clusters.

Jon Christensen: [crosstalk 00:18:03] And then you could have cluster to cluster networking nightmares.

Chris Hickman: Yeah.

Jon Christensen: Okay. Cool.

Chris Hickman: So that’s what you can’t do with bridge. Now we’re going to talk about the fourth one, AWS VPC mode. And so again, we started talking about this last week because Fargate requires this networking mode. You can’t run Fargate tasks with any networking mode other than AWS VPC. And so what happens here is that the network interface card is being created on a per-task basis instead of on a per-instance basis.

Jon Christensen: Right.

Chris Hickman: And so keep in mind, AWS VPC mode, which is also known as task networking. You can use this both in the EC2 launch type as well as the Fargate launch type. It’s not only available with Fargate, it’s also with EC2. It’s just keep in mind with Fargate, it’s required. You have no other option versus on the EC2 launch type, you have all four of these options available to you.

Jon Christensen: Okay.

Chris Hickman: So with AWS VPC mode, each task gets its own ENI and consequently its own IP address. It’s nice because you can have multiple applications, including multiple copies of the same app, all running on the same port number without any conflict. Because there are separate IPs.

Jon Christensen: Yeah. Super fascinating, because it’s literally a feature of virtual machines that makes this possible and not a feature of containers. Since every task is running in its own virtual machine, it’s on a Firecracker virtual machine. That’s how this becomes even possible. Without that, this would be impossible.

Chris Hickman: Keep in mind we can do this with the EC2 launch type as well, which I don’t think… that’s definitely not Firecracker for sure.

Jon Christensen: Or is it? I guess not.

Chris Hickman: No, it’s not. These are just EC2 instances.

Jon Christensen: So then that doesn’t make sense. I just shot myself in the foot. But I totally get how you can put in a network interface onto a firecracker instance. I get that. Because like the whole series we did on virtual machines and how they virtualize everything makes it very obvious how you had virtualized a network interface. But the whole thing we also talked about during that series about how a container is just a process makes me really confused about how all of a sudden my container has got its own network interface.

Chris Hickman: Which I mean it’s a bit orthogonal. Because it’s like you can imagine having a machine and you can have, you can insert multiple NIC cards on it. And you can just… it’s up to your code. Which one does it want to talk to. [crosstalk 00:20:48] Do I want to use NIC one or NIC two? And so this is the equivalent of just on that instance, you’re just slapping [crosstalk 00:20:55] the NIC card in there. And saying, okay, task this is yours. You get to use it.

Jon Christensen: That’s like an aha moment for me. Thank you.

Chris Hickman: Cool. So some other things to keep in mind. Of these, the host and AWS VPC network modes are going to give you the best networking performance, because there’s no virtualization, there’s no proxy, there’s no middleman. Well, I guess none is the fastest, but other than that, bridge is proxied, so you’re going to have some performance impacts there. Host and AWS VPC, you’re going straight through the NIC, with no intermediary. That’s going to give you the best networking performance. There’s just no virtualized network stack that you’re dealing with there. And the exposed container ports that you have, as we talked about: if you’re using host networking, it’s attached to the corresponding host port, and if you’re using AWS VPC, then it’s going to be mapped to the ENI port for that particular task. The thing to keep in mind here: with host and AWS VPC, you’re going to get better network performance than you are going to with bridge. So it sounds like, “Hey, [inaudible 00:22:09] AWS VPC mode, this is great.”

Jon Christensen: Yeah. [crosstalk 00:22:13] It doesn’t like the way to go except for the thing you were just saying, it was like, “Oh! Now all of a sudden I have a bunch of tasks that are on the network a lot more surface area there to be aware of.”

Chris Hickman: There is. I guess, with great power comes great responsibility.

Jon Christensen: Yes. Tell me more. Spidey.

Chris Hickman: Yes. So on one hand, doing this task networking and having our own attached ENI, one of the things it does is simplify container networking quite a bit. So remember, we talked about how we can think of container networking in two different modes. One is on the container network, and then the other one is outside of the machine. Out over the… just everything outside the machine, whether it be over the internet or other downstream services or whatever it may be. If we want to talk on the container network, it’s really easy, because they all share a network namespace and you can communicate with them over localhost. Now, this is at the task level. So for this situation, if you had a task with three containers defined in it, inside that task definition, whatever ports your containers listen on, those have to be unique, because they’re all running on the same NIC, so they have to be on distinct ports.

Jon Christensen: Okay that makes sense.

Chris Hickman: But because they’re all sharing the same NIC, basically, this is all localhost. So all those three containers can refer to each other by just localhost and then whatever port number they’re on. So it makes it pretty straightforward and easy for them to all talk to each other. You don’t have to set up your own custom namespaces or DNS or anything like that. So it makes it very easy for containers within a task to talk to each other. And there are some other nice things with task networking. Again, because you now have this dedicated ENI for your task, that lights up things like VPC flow logs on a per-task basis. So we can see exactly the traffic at the task level. And that’s going to help us troubleshoot problems a lot more easily than if we were using bridge networking, where we had 10 different services all sharing the same ENI.

Jon Christensen: So what kind of problem would you troubleshoot with a VPC flow log?

Chris Hickman: If you have, say, a security group problem where a particular port's not open and you don't understand why this thing is timing out, flow logs might be able to show you what's going on, because you're going to see deny actions in those flow logs as well. Kind of like basic networking questions: "Hey, do I have connectivity to this? Is it a firewall that's blocking it?" Those kinds of things. Full disclosure, though: VPC flow logs are not a fun place to be. If you've ever looked, it's basically all the header information from your packets. You can't actually look at the packets. You're just going to see things like, "What was the source, what was the destination, what was it trying to do, and what was the result?"

Jon Christensen: [crosstalk 00:25:57] And then, but I can see what you’re saying. It’s like if you have task level networking, then VPC flow logs might actually tell you something useful. Whereas if you don’t, then you might not even bother ever cracking those things open. Like, why bother? It’s just a bunch of noise.

Chris Hickman: And then of course, finally, the other thing is that we have better security now, because we can do much more fine-grained security. We can now have different security groups for each one of our tasks. And this lights up the scenario we talked about before: if you want to lock down a particular task to really restrict the ingress traffic to it, well, you have a security group just for that ENI that's associated with that task, and you can lock it down as much as you want. So we now have that fine-grained security control over it. A pretty big benefit.

Jon Christensen: All right.

Chris Hickman: So, the limitations. Where's the downside to this? The downside is all about networking density. You can only attach a certain number of ENIs to an instance.

Jon Christensen: Okay. Yeah, that was my aha moment when we were just talking about that. Like, "Oh, how are all these containers getting their own ENI?" And you were like, "Yeah, you just slap them on the instance." Got it.

Chris Hickman: In the Fargate world, we don't have this problem, because it's AWS's problem. They have to go figure out how to do it, so we don't have to worry about it there. If we are in the EC2 launch type world, well, we do have to worry about it. By default, how many ENIs can be attached is going to depend on the type of instance you're running, and it's a fairly small number. So for example, on a C5 large EC2 instance you can have up to three ENIs attached.

Jon Christensen: Okay.

Chris Hickman: So one of those is the primary; that EC2 instance has to have its primary ENI, which leaves two ENIs for task networking. That means on a standard C5 large EC2 instance, you can only run two tasks using awsvpc mode.

Jon Christensen: I'm sitting here and this is what's going on in my head. I'm like, "Ooh, so you could make a task that all it does is a single XR, and then you could say that you want 100,000 of them and that you want task-level networking," and then be like, "All right AWS, figure out how to deal with this." And now they've got to give out so many IP addresses just for your little XRs, a task that basically does nothing. Yeah, that'd be a hard problem.

Chris Hickman: Yeah. And as Fargate, that's what they have to do. For every Fargate task, they're allocating an ENI and wiring it all up. So for however many tasks you run, that's how many ENIs are created for you. And keep in mind, those ENIs are being created inside your VPC, on your subnets, and they are taking up IP addresses from your address space. So be careful. If you go spin up too many of them, you'll find out, "Okay, I don't have any more IP addresses and [inaudible 00:29:10] they're all gone." So that's something to keep in mind. Now, there's a little bit of a bummer with this ENI density thing. It sounds like it's not very useful with ECS, because we're typically running 10 tasks, 20 tasks on an EC2 instance.
So how does that work? AWS has addressed this. This is one of the recent advancements over the last year or so: you can now increase your ENI density through something called trunking. VPC trunking is a networking feature they offer that increases the number of ENIs you can attach to your EC2 instances. The way they do it is basically by multiplexing data over a shared communication link.
What you end up having is, for each one of your instances that has trunking enabled, you're going to have the primary ENI and then a trunk ENI. The trunk ENI is basically that multiplexer, and your task ENIs are now all going through that trunk. With this particular model, you get 5X to 17X more ENIs per instance. So to go back to our example of the C5 large: with trunking enabled, you can now go from three to 12 ENIs. Two of those are spoken for, one ENI dedicated to the primary and one to the trunk, which leaves 10 for task networking. So instead of just being able to run two tasks on a C5 large, we can actually run 10 tasks, which is quite a bit better. This is something you have to opt into.

Jon Christensen: Okay. Yeah, I imagine it costs more or whatever.

Chris Hickman: Yeah. Well I mean you’re going to be allocating additional ENIs.

Jon Christensen: Okay. [inaudible 00:31:07] on ENI.

Chris Hickman: So yeah.

Jon Christensen: That's interesting.

Chris Hickman: You can opt into this at the account level, or per IAM role, and it's a specific setting. Inside ECS, they make it pretty easy to do this; it's actually in the settings section of the console for ECS. You can go in there and say, "Yes, I want to opt into VPC trunking." Once you do that, it will be enabled, and whenever you spin up an instance, it will allocate the trunk ENI and you'll now be able to attach more ENIs to it.
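The same opt-in can be done from code instead of the console settings page. A minimal boto3 sketch, assuming you have AWS credentials configured for the account in question ("awsvpcTrunking" is the actual setting name):

```python
import boto3

# Opt into VPC trunking programmatically instead of via the ECS console.
# Requires configured AWS credentials; acts on whatever account/region
# your session points at.
ecs = boto3.client("ecs")

# Opt in the current IAM identity...
ecs.put_account_setting(name="awsvpcTrunking", value="enabled")

# ...or make it the default for every identity in the account.
ecs.put_account_setting_default(name="awsvpcTrunking", value="enabled")
```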

Jon Christensen: So now we have two highly related things that I'm allowing us to be squishy on, that we have to come back and talk about at some point. One is how that container bridge networking really works, like how that Docker network actually works. And the other is how this VPC trunking works. Because while I can completely understand throwing network interfaces at a computer, this whole VPC trunking thing is just magic. It's "click here for magic." Someday I want to know how that magic actually works. Because at the end of the day, it's going to be a little bit of code, and all this magic stuff that seems so complicated generally ends up being like, "Oh yeah, that makes sense," once you figure it out.

Chris Hickman: Yeah. And I guess a good way to think of it is, it's just software. At the hardware level, the machine only supports a certain number, a certain density. As far as the machine knows, there are just those couple of ENIs, those network cards. And so now you're using software to add these additional ones, and they're all going to share and go through that trunk ENI that's been assigned to them. You're using software to build on top of that without changing the operating system or the hardware of the actual system itself.

Jon Christensen: Yeah. I definitely just want to know more about how that actually works. At some point, someday, we'll get there.

Speaker 1: Just a sec. There's something important you need to do. You must have noticed that Mobycast is ad free, but Chris and Jon need your help to make this work for everyone. Please help the Mobycast team by giving us five stars on iTunes, writing positive reviews and telling your colleagues, friends, neighbors, children, and pets about the show. Go ahead and do it now. Great. I promise not to ask you to do that again.

Chris Hickman: Cool. All right, well, we've gotten through it. Now that we've talked about identity and access management and container networking, I think we're ready to talk about: okay, we've got a task and we want to convert it over from the EC2 launch type to Fargate. What do we need to do?

Jon Christensen: So this wouldn't be all that common for anybody that's new to this. It would only be people that were like, "Oh yeah, I'm all in on the EC2 ECS stuff. Oh, look at Fargate, it's all of a sudden more reasonably priced. Maybe I should move over."

Chris Hickman: Which is probably going to be the bulk of [inaudible 00:34:08]. ECS has been out there now since what, 2015? And Fargate really didn't become viable until early 2018, as we talked about. When it first launched, it was quite a bit more expensive, and prices have been coming down. So in general, you've got a lot of people out there running with the EC2 launch type.

Jon Christensen: Okay.

Chris Hickman: So it makes sense to go look at Fargate, like, "Hey, should we switch over to it? And is it cost effective?"

Jon Christensen: And is it a button click? Like how hard is it going to be?

Chris Hickman: Exactly. What are the steps? Do I have to rearchitect? Is it going to be a complete app rewrite, or is it a couple of checkboxes? That's what we're going to talk about now: what do we have to do differently? Just imagine we have an existing service. I actually went through this process for my own personal blog site. I basically have a Node.js app that is my blog app.

Jon Christensen: Should I give you a hard time about spending more time architecting and building your blog than blogging?

Chris Hickman: That's the reason why we all have blogs. It's actually to mess around with the blog software, the platform, and it gives excuses for not actually producing content. So yeah, it's shameful. I think my last post was about a year ago. It's on the to-do list. There are tons of things to write about, but it's always more fun to play with code and deploys and infrastructure. [crosstalk 00:35:47]

Jon Christensen: And we've got Mobycast. Here's your sort of blog, right here.

Chris Hickman: Yes.

Jon Christensen: You are listening to it right now.

Chris Hickman: Yeah, indeed. So I have this existing app. It's fronted by an ALB, and I've got DNS set up with Route 53 to point to that. So the ALB is listening on ports 80 and 443, and those are forwarded into my ECS tasks through the listeners and rules that I set up. And now I want to move that over to Fargate. What are the steps? So one is, okay, we've got to do some stuff with IAM. Now that I'm going to Fargate, there are going to be two additional roles that come into play. One is the Amazon ECS service-linked role, and that is what gives ECS permission to attach ENIs on our behalf.

Chris Hickman: Just know that this exists and it has to be there. We don't have to do anything; AWS is going to automatically create it for us, and it's just going to work behind the scenes. That's what's happening. And then likewise, we're also going to need a task execution IAM role, like what we talked about last time. That's for giving access to that task-management-type functionality: pulling images from ECR, pushing logs to CloudWatch. So that role needs to be set up, and again, it's going to be created automatically for us by ECS if it doesn't exist when we go through this process.

Jon Christensen: And that’s only if you’re using the console, right? Like if you’re trying to do this with CloudFormation, you’d have to be aware of this and go make a role and refer to it from CloudFormation.

Chris Hickman: That's a good question. I suspect that you're right, that you would have to do that yourself. That's one of the advantages of going through the console: it does some of this for you, as a wizard if you will. But if you go through CloudFormation, then you're going to have to be much more explicit and specify, "I need to go create this role."

Jon Christensen: Yeah, that's a good point. I just want to quickly divert on that. I think a lot of folks are kind of like, "I'm not really using AWS if I'm not doing infrastructure as… what's the word? Infrastructure as code. Yes, that's what I'm looking for." And you don't have to. If you're just going to build a system that's going to have a couple of environments, maybe dev, stage, and production, and you know that's what it's going to be for a couple of years, you can save yourself a lot of time by not going down the CloudFormation path. It really is a time-consuming killer. And if you're not spinning up and tearing down environments willy-nilly, you can just save yourself those 125 hours and use them on your features. Anyway, just a little side note.

Chris Hickman: Yeah, it is definitely one of those things where it boils down to how many people are involved, how much flexibility you want, and how much of a down payment, how much of an investment, you want to make in doing that. Learning how to do infrastructure as code, whatever platform you use, whether it be CloudFormation or CDK or Terraform, there's a big learning curve there. And it's an ongoing learning curve as well. It's pretty sophisticated, and it's going to require a lot of time and resources. Think of it as a whole other software project, for all intents and purposes. [crosstalk 00:39:10]

Jon Christensen: Think of it as an additional software project, as opposed to, "Oh yeah, that's just the edges of the main thing that I'm working on." Yeah.

Chris Hickman: Yeah, indeed. Cool. So we have those two roles. Again, they're going to be created for us automatically, so just be aware of them. And then the other thing we're going to do now is create a task-based IAM role for our task. Personally, I didn't have to do this before, because I was running on the EC2 launch type, and I was being lazy and just using the ECS instance role. So I was sharing that one role for all of my containers. It turns out the only thing running on my EC2 instance is just this one service, so it's not that big of a deal. But that's what I was sharing; I was reusing that role for any programmatic access that my app needs. And my app does need AWS access to things like S3 and KMS.
When we run in Fargate, we no longer have an ECS instance role, because we're not running on our own instance anymore. That just doesn't exist. So I have to create this task-based IAM role, and it's pretty straightforward, pretty easy. First, I need to go create an IAM policy that gives the minimal privileges needed by my task. And remember, we talked about how there are two categories of policies. There are the AWS managed policies, a catalog of pre-made policies for various services, giving things like, "Oh, I want full S3 access" or "I want read-only S3 access." There are over 500 of these AWS managed policies, and [crosstalk 00:40:58] one or more of those may fit the bill for you, [crosstalk 00:41:02] or you can have a customer managed policy.
And for me, I'm going to go with a customer managed policy, because I need KMS, and there's no KMS managed policy that fits the bill here. So what I'm going to do is go to IAM and Policies, click on the Create Policy button there, and then just paste in the JSON for my policy. And my policy is going to be really straightforward. It's going to be an allow effect, and there are going to be two actions that it allows. One is going to be S3 GetObject, so that I can go read a file from a bucket. And then the other action I'm going to allow is KMS Decrypt.

Jon Christensen: Okay.

Chris Hickman: And then I'm going to specify the resources that those actions can act upon. So I'm going to [crosstalk 00:41:59] list the ARN for my S3 bucket that this task is going to talk to, and then I'm also going to list the ARN for the KMS master key used for the decrypt operations. Because really, what [inaudible 00:42:13] is I have some secrets for this particular app that are KMS-encrypted and stored as a JSON document in S3. So it needs S3 access to go read that file, and then it needs KMS to decrypt it so it can read those credentials.

Jon Christensen: Cool.

Chris Hickman: So I've got my policy now, this custom policy I've created, and now I have to go and create a role.
So I'll go back to IAM and create a new role. You're going to get some options there, and the first one is actually going to be a service role, so I'm going to base this on the ECS task service role. By doing that, it's basically just setting up the trust relationship that will allow ECS to use the Security Token Service to assume the role and perform the actions on your behalf. [crosstalk 00:43:09] That's what you're doing by choosing that service role.

Jon Christensen: Yeah, that's what we talked about last episode, right? [crosstalk 00:43:14] I was like, "Okay, I need to get a role. It's got to have this name. Oh, here it is. Here's the one."

Chris Hickman: Yeah. So then once I choose that, [inaudible 00:43:23] I'm going to choose ECS as the service, and Elastic Container Service Task as my use case. Once I do that, I can just attach the IAM policy that I created before and save. So now I have my role, and my task-based IAM role is locked down. It can only do those two things: the S3 GetObject and the KMS Decrypt. Now I'm ready to move on, and the next step is to go build our task definition file and make the changes to it.

Jon Christensen: Yeah. Because you already have a task definition file, you just need to edit it, [crosstalk 00:43:55] right? Yup.

Chris Hickman: And it's pretty straightforward here, not much work to do. So one is, there's a required compatibilities section in there, and I'm just going to specify Fargate. That's going to say that this task is Fargate eligible.

Jon Christensen: Okay.

Chris Hickman: The other very important thing I have to do is switch my network mode specifically to awsvpc mode. That's going to be a task-level parameter there. I'm also going to change my CPU and memory limits to be defined at the task level, and make sure they're not at the container level. We talked about that [crosstalk 00:44:30] last week as well. [crosstalk 00:44:31] It's just saying the CPU and memory limits are being allocated at the task and not necessarily the container. Although again, usually we have one container per task…

Jon Christensen: Right.

Chris Hickman: And so it’s usually a one on one mapping.

Jon Christensen: Also, just a quick interruption: hey, Gus. I just want to say hi. I guess everybody knows about Gus. He's our third member. [crosstalk 00:44:52]

Chris Hickman: He's not even close, either. Something has got him excited. He's [crosstalk 00:44:57] let out a bark or two and…

Jon Christensen: He’s a good dog.

Chris Hickman: So, two other very important things that we're going to do in our task definition are specify these roles that we talked about. One is we need to specify the task execution IAM role. That's the execution role ARN, and again, remember, that's the one that allows the task to pull images from ECR and send logs to CloudWatch. [crosstalk 00:45:23] So we're going to specify that in the task definition file. And then we're also going to specify that task-based IAM role [crosstalk 00:45:29] that we just created, right? To say, "Hey, this is the task-based IAM role that this task is going to use when it gets instantiated." So go get the ARN for that and put it here in the task definition file.

Jon Christensen: Cool.

Chris Hickman: And really, the only other change we need to make here is in the networking for our container: just make sure that we're specifying only the container port. We're not going to be specifying a host port anymore. Although I don't think you'll get an error if you do specify a host port; it will just be ignored.

Jon Christensen: Okay.

Chris Hickman: So now we have our roles set up, we have our task definition, and now it's time to go create a service. We're going to go into the console, go to ECS, and choose the cluster we want to launch this into. We can either create a new cluster or use an existing one. For me, I already had an existing cluster that was based on the EC2 Linux + Networking mode.
So I have a cluster of EC2 hosts, and I'm just going to use that…

Jon Christensen: Okay?

Chris Hickman: Because I can launch Fargate tasks into that cluster. [crosstalk 00:46:28] If I wanted to, I could go create a new one, a networking-only cluster; I could've used that as well. Both are equally valid and fine. Now that I've chosen my cluster, I'm going to define everything about the service that I need to. One of the important things is now I'm going to specify networking. And this is new. In general, with the EC2 launch type and using bridge networking, I don't need to specify this. But with [inaudible 00:46:57] the Fargate task, because it's awsvpc mode, now I have to specify all the networking with it. So that means I have to specify what VPC it should use.

Jon Christensen: Okay?

Chris Hickman: Also all the subnets that it should use.

Jon Christensen: Okay?

Chris Hickman: And then I also need to specifically declare what security group this is going to use.

Jon Christensen: If you give it more than one subnet, does it just go, "Oh, I'll just spray the tasks across the subnets"?

Chris Hickman: Yeah. It’s just going to, it’s going to spread them across.

Jon Christensen: Okay.

Chris Hickman: Yeah.

Jon Christensen: And another silly question, but I assume as you're doing all this, your blog is up and running on the existing cluster and you never took it down. It's just running. It's happening. Okay. Yeah.

Chris Hickman: And then, because I'm running in public subnets, I'm also going to want to auto-assign a public IP. That's just a little checkbox here I can check off as I go through. So now these [inaudible 00:47:45] ENIs will also have a public IP address associated with them that will allow them to talk to the outside world.

Jon Christensen: Okay.

Chris Hickman: So now that I've specified my networking, I can go ahead and attach to my existing load balancer. Like I said, I already have an existing application load balancer, so I'm just going to use that for this particular service and piggyback on it.

Jon Christensen: Right.

Chris Hickman: And so I just specify my production listener and my port and protocol. I'm listening on port 443, because it's all TLS traffic. And then I specify my target group.
So I'm going to create a new target group. When you specify the target group, you're also going to be specifying the target type. Target groups allow three types. One is instance, which is basically for bridge [crosstalk 00:48:39] type networking or host. There's IP, which is what's used by awsvpc mode. And then you can also specify Lambda functions as targets. For us, because we're on Fargate and that requires awsvpc, our target type is going to be IP.

Jon Christensen: Okay?

Chris Hickman: That'll actually be set up for us. There's really nothing to do; we're not even able to change it. That's what we're basically forced to go with. And then after that, we can specify a path pattern for our listener, to tell our application load balancer, for given input traffic, when it should route to this target group versus some other target group.
So now we have our service. It's created, and tasks are being launched. Now, for me, I also had to do some security group modifications. There were two other security groups that came into play here. One was the security group for my ALB. For a better security posture, I have it locked down for outbound traffic, because I only want the ALB to talk to my ECS services; it really shouldn't be making requests anywhere else. So I had to add a new rule in there to allow outbound traffic on port 80 to the security group for the ENI that we just created for the Fargate service.

Jon Christensen: I lost you there a little bit. We're talking about outbound traffic, and outbound traffic to me means maybe the ECS tasks are trying to talk out to the network, and I'm not sure what they would be doing in terms of a blog. Like, what?

Chris Hickman: Yeah, so, inbound and outbound, right? The load balancer has a security group. Inbound is like, okay, from the open internet, [crosstalk 00:50:37] what are the rules for allowing that? So I have listeners for things like ports 80 and 443, [crosstalk 00:50:41] open to everyone, 0.0.0.0/0; the whole world can talk to it. But then you have outbound, and that is what connections the load balancer itself can make when it's making calls out.

Jon Christensen: Oh, okay. Got it.

Chris Hickman: So I'm going to lock it down and say, "Load balancer, I only want you to be able to talk to my ECS services." I'm not going to make it 0.0.0.0/0. Instead, since it's port 80 that it talks to my Fargate service on, I'm going to create a rule that allows outbound port 80, but the destination is going to be the security group for that ENI.

Jon Christensen: Oh okay, got it.

Chris Hickman: Right? It's just, you know, locking it down.

Jon Christensen: Yeah. Makes sense.

Chris Hickman: And then likewise, I had a similar rule to set up on my RDS instance. So my RDS instance, I’m only allowing inbound access from the security groups of my ECS services.

Jon Christensen: Okay.

Chris Hickman: And so now I add an inbound rule on my RDS security group to allow traffic from the security group of the ENI as well.

Jon Christensen: Okay.

Chris Hickman: Right? So that now allows my task to attach and make calls to my Postgres database.

Jon Christensen: That makes sense.

Chris Hickman: And so that's basically it. That's what I did. I kept my main site untouched; this is spinning up a new instance of my site, and now I have it running under a different subdomain. So it's beta dot [inaudible 00:52:24] dot com instead of, like, you know, blog dot [crosstalk 00:52:28] or whatever.

Jon Christensen: Okay.

Chris Hickman: So that way I can test it and say, "Okay, yeah, this works." Or I could have just switched it over.

Jon Christensen: Right.

Chris Hickman: I could have just changed DNS. Or, since I was using the same ALB, I could have changed my routing rules to just say route to the Fargate task

Jon Christensen: Yeah

Chris Hickman: Instead of route to both target groups.

Jon Christensen: Yeah.

Chris Hickman: So it's whatever you want. I knew I was just using this as an exercise; I wasn't going to stay with Fargate. So I just created a separate thing for it, which was really easy to do with things like host-based routing on my ALB, and really quickly added a new Route 53 record.

Jon Christensen: Yeah. It's so funny. "I knew I wasn't going to stay with Fargate. I'm a single-speed bike rider." You took all the effort to get switched over to Fargate, but you're like, "No, I still need to be able to touch those machines."

Chris Hickman: Yeah. Indeed.

Jon Christensen: All right.

Chris Hickman: And just really quickly, to follow up: one other thing would be, okay, logs. What do we do there? So like we talked about, Fargate only supports the awslogs driver. Well, it also supports Splunk. So if you're using Splunk, you're in business; otherwise it's CloudWatch Logs. And that means we have some extra work now if we want to use a third-party service like Sumo or Loggly or whatever it may be. One of the things I had to do was change my code. I was redirecting standard out and standard error to a file, so that I could easily see it on the host itself.
So I had to change that for this [inaudible 00:54:01] to not trap that anymore, and let standard out and standard error just be emitted so that Docker would grab it [crosstalk 00:54:10] through the awslogs driver, and it would be shipped over to CloudWatch. Once that happens, you have your logs in CloudWatch, and then it's basically an exercise for the reader to figure out how to get those CloudWatch logs into whatever other system you want. And there are many ways of doing that. You could have a Lambda function that subscribes to the CloudWatch log group, so that as logs are created, it gets notified, runs, and can reformat and send them over to wherever they need to go. You can use things like Kinesis, and there are other techniques as well.

Jon Christensen: Please tell me that you forgot to do that at first, and things weren't working, and you were like, "Where are my logs?" And then you flipped over your desk.

Chris Hickman: Yeah.

Jon Christensen: That’s what Chris [inaudible 00:54:58] Yeah, yeah.

Chris Hickman: That’s how I roll.
Pretty common, I think. Expectation management, pretty much. No, I mean, I knew what I was getting into. This is one of the reasons why, with Fargate, I knew there was going to be work involved, just because of the logging thing.

Jon Christensen: Right.

Chris Hickman: I'm not a big fan of CloudWatch Logs. It's getting better with CloudWatch Logs Insights, but it's still not… well, I love me some Sumo Logic. Sumo, if you're listening: sponsor us.

Jon Christensen: [inaudible 00:55:29] there we go.

Chris Hickman: We will give you some love. So that's been one of the things; I just knew that there was going to be extra work there in order to get the integration that I need. So…

Jon Christensen: Cool. Well, how awesome. I think we got through it. We did ECS, in the bag, [crosstalk 00:55:46] no problem. And you know, I think you did a write-up a little while back that was like, here's how to do an ECS installation with EC2, and you had it in bullet-point form. I followed it, and we did it with a client, and it was like, "Oh, that is so valuable." It was bullet points. How much would somebody pay to just get their hands on this? And here it is, for free, on Mobycast.

Chris Hickman: Yes. And hopefully it was even a little bit entertaining as well. So…

Jon Christensen: Well, thanks very much. I'm on vacation next week, but Mobycast will keep rolling, though.

Chris Hickman: All right.

Jon Christensen: I will talk to you in a couple of weeks, Chris.

Chris Hickman: All right. Enjoy. Bye.

Jon Christensen: Bye.

Voiceover: Nobody listens to podcast outros. Why are you still here? Oh, that’s right. It’s the outro song. Come talk to us. At Mobycast.fm or on reddit at r/Mobycast.
