95. The Future of Containers – Part 1 – Making Sense of MicroVMs

Summary

With cloud computing, we started with virtual machines. They allow us to virtualize an entire server, while providing strong isolation and security.

Then containers came along. They allow us to virtualize just our applications, making containers faster and less resource intensive than VMs. But with these gains we lose strong isolation.

What if we could have the speed and resource efficiency of containers coupled with the enhanced security and isolation of VMs?

In this episode of Mobycast, Jon and Chris kick off a three-part series on the future of containers. We dive deep on microVMs, unikernels and container sandboxes, understanding what they are, how they work, and how well they combine the best of both VM and container worlds.

Companion Blog Post

The Future Of Containers – What’s Next

Show Details

In this episode, we cover the following topics:

  • We review virtual machines (full virtualization) and their benefits and tradeoffs.
  • We then revisit containers (OS-level virtualization) and briefly recap how they use OS kernel features to enable virtualization.
  • Containers provide great performance and resource efficiency, but at the cost of losing strong isolation. Can we have the performance and efficiency benefits of containers but with the strong isolation of VMs? There are some promising technologies that aim to combine the best of both VM and container worlds: microVMs, unikernels and container sandboxes.
  • What are microVMs?
  • What are unikernels?
  • What are container sandboxes?
  • AWS Firecracker is one of the most talked about microVMs. We discuss what it is, and the key benefits of using Firecracker.

Links

End Song

Smooth Modulator by aMIGAaMIGO

More Info

We’d love to hear from you! You can reach us at:

 

Stevie Rose: With cloud computing, we started with virtual machines. They allow us to virtualize an entire server while providing strong isolation and security. Then containers came along. They allow us to virtualize just our applications, making containers faster and less resource-intensive than VMs, but with these gains we lose strong isolation. What if we could have the speed and resource efficiency of containers coupled with the enhanced security and isolation of VMs? In this episode of Mobycast, Jon and Chris kick off a three-part series on the future of containers. We dive deep on microVMs, unikernels and container sandboxes, understanding what they are, how they work, and how well they combine the best of both VM and container worlds.
Welcome to Mobycast, a show about the techniques and technologies used by the best cloud native software teams. Each week your hosts, Jon Christensen and Chris Hickman, pick a software concept and dive deep to figure it out.

Jon Christensen: Welcome Chris, to another episode of Mobycast.

Chris Hickman: Hey Jon, it’s good to be back.

Jon Christensen: Yeah, good to have you back. All right, before we get started today I just got a quick story to tell you, and it’s related to software engineering. So yesterday I decided … Was it yesterday? No it was two days ago, I decided to switch out the kitchen sink faucet. And no problem, couple hour job, right? You’ve done that before. Easy peasy.

Chris Hickman: Mm-hmm (affirmative).

Jon Christensen: Sometimes it’s easier than others. Depends.

Chris Hickman: Right.

Jon Christensen: So it turned into a 10-hour day of lots of trips to the hardware store. Actually giving up, being like, “Kelly, I’m not going to be able to do this. We’re going to have to call a plumber tomorrow.” After I’d already spent nine and a half hours. And then a final push at the end, and success. I did it after the kids went to bed.

Chris Hickman: Yes, there you go.

Jon Christensen: Yeah. And the thing was that it made me think about software development, because it was like peeling an onion. I had issues with a particular valve and I tried to fix it a number of different ways. And finally I got it fixed at the end, after giving up and then coming back to it, which is so, so similar to the feeling of doing software development: facing issues, trying to figure out how to do things, watching videos. The whole process felt similar.

Chris Hickman: Yeah. At the end of the day it’s problem solving, right? That’s what software development is. It’s you have a problem and then okay, what are the steps to go and fix it? And you have to do a combination of experience and trial and error and looking for help from others. And so those kinds of skills are so important. Not just in writing software but in just real life.

Jon Christensen: Right. And there was a place … I think the whole thing would have gone a lot faster, but there was this situation I ran into where I shut off the water at the sink valve and opened the pipe and it was like, oh, the water’s still kind of coming through this valve even though it’s shut off. So obviously the valve is bad. And it’s not a problem, right? It was mostly shut off. And if I connect the hose to it, then it’s not going to create enough pressure in that hose to really leak or anything. So I could have just lived with that and Kelly was kind of like, “Yeah, just leave it. Hit it up next time.” And I was like, “No, but I’m in here and I got this broken valve. This is the time. I’m in this code right now. This is the time to fix it.” Yeah.

Chris Hickman: Yeah, that wouldn’t have been an option for me at all. That would drive me nuts.

Jon Christensen: To just leave it?

Chris Hickman: Yes.

Jon Christensen: Yeah, totally.

Chris Hickman: Absolutely.

Jon Christensen: There’s no way I could leave that. Yeah. Yeah. But that was the difference between a couple hour job and a 10 hour job, is that one stupid valve and how close it was to the wall and yeah, you can start to imagine.

Chris Hickman: Oh yeah, yeah, what happened. I know just from experience with me, in my house, the house is a little over 20 years old now and so all the valves are obviously over 20 years old on the water lines for all the sinks and whatnot. So if you go and turn them off, it usually works. But what happens is those rubber gaskets inside those valves are so old and brittle that the mere fact of actually turning that valve and applying some stress to it just rips those rubber gaskets completely apart, right?

Jon Christensen: Yeah.

Chris Hickman: So now the valve doesn’t work anymore, and not only that, those pieces of rubber then go up into the water line and into the faucet. And now they get stuck in the faucet itself. And so sometimes I get lucky where I can just remove the aerator on the faucet and pluck out the bits of rubber. And then other, more unlucky times they actually get stuck inside the faucet housing itself. It’s like, okay, now I’ve got to go change out the faucet.

Jon Christensen: Right.

Chris Hickman: But those are the … Yeah, some of the joys of home ownership for sure.

Jon Christensen: Right? And I think most of our audience is already professional software developers. They’re already kind of into their career, but some people are probably just doing a bootcamp or making their way into software development. And not to say don’t do it, but if you’re finding that sort of abstract, cerebral side of software development to be not for you, but you do like problem solving and you do like thinking about systems, then actually plumbing is a really good option. I think that contract plumbers make about the same as … Not Silicon Valley software developers, but at least outside of Silicon Valley software developers.
I know that some good friends of mine do really amazing work. They live in … They both have houses in Vail. That’s not something that just anybody gets. And when I look at the systems that they built … So you picture a plumber as being up to their elbows in shit basically. But these dudes are not. They never snake a drain unless it’s at their own house, right? So it’s such an underserved area. They’re master plumbers and they make great money and they do systems-level thinking every day. Such a great career for them.

Chris Hickman: Yeah.

Jon Christensen: Yeah. Anyway, that’s my plug for becoming a plumber. The world needs more of them because it’s never going to … We’re never going to be able to afford plumbers soon if there’s fewer and fewer, which is what is actually happening in the world.

Chris Hickman: Yeah. They’ll become like COBOL programmers.

Jon Christensen: Yeah. Right on. So today I’m super excited to do our topic because it’s a topic that people have requested kind of. The last time we did virtual machines and containers, somebody was like, “Oh, are you doing microVMs?” And it’s like, shut up. We didn’t get to it. Today we’re finally here. We got to it. We’re getting to it. Well we’re not there yet. We’re about to get there.

Chris Hickman: We’re still on plumbing.

Jon Christensen: Yeah. So we’re going to talk about microVMs, and this is going to be a series, I believe. Chris, do you want to kick us off?

Chris Hickman: Yeah, sure. At first when I was looking at putting this together, I was thinking, okay, let’s do an episode on microVMs and kind of tease out, are these the future for containers? And then kind of doing some research and looking at the landscape, it kind of made sense to think of this more in the context of really, what’s next for containers? So we started off, there were virtual machines, then we had containers. Those have been pretty well established. Now we have a great ecosystem around them and all the tooling and whatnot.
But now we’re starting to hear some other new things. MicroVMs are definitely a big one. But microVMs are not the only thing out there. There are actually other things going on that are pretty interesting. Some of them are pretty established as well; it’s not brand new. So, things like unikernels. What is a unikernel and what are some of the problems it’s trying to address?

Jon Christensen: It’s a rare and elusive kernel.

Chris Hickman: Yes. But so, just a little bit of a spoiler alert, unikernels basically run a single process, right? So single process, single horn. So there are some comparisons there to rainbows and sparkles and sprinkles and whatnot with the unikernels. And then there are other things like new sandboxing techniques, like what Google’s doing with their gVisor project as well. So lots of interesting things going on. And a lot of this is being driven by the demands of cloud computing. It’s a vertical, right? It’s a niche in the market, with a particular set of features and capabilities it needs. So we’ve talked about virtual machines and how basically they’re mocking the entire computer and all the devices that you need in order to run an unmodified guest OS. But in a cloud computing environment, we don’t need that stuff.
And so that’s where a lot of the development is now. And for some of these new technologies that are starting to get a lot of play, they’re really addressing that space. We’re starting to give and take, right? So these are the combinations, the mutations, the merging if you will, between the good things about virtual machines and the good things about containers. And so that’s what we’ll be talking about over this episode. And definitely a followup episode. It may be three depending on how far along we get. But really looking at things like microVMs, unikernels, sandboxes and just why these things exist, why you should know about them, how you might be able to use them, whether you should use them. Those kinds of questions. Just breaking it down.

Jon Christensen: Cool. All right. Yeah, you sold me. So let’s get into it.

Chris Hickman: So with that, why don’t we just kind of just do a little bit of recap on just what the current container landscape looks like. And so we have our tried and true friends, virtual machines and containers, right?

Jon Christensen: Right.

Chris Hickman: And so again, for those folks listening, if you haven’t listened to our four episode series on virtual machines versus containers revisited, which are episodes 81 through 84, please go back to those. Lots of really, really good solid foundational knowledge there for just really understanding what virtual machines are, what they do, how they do it, and then what containers are and what they do and how they do it. And then comparing and contrasting those two technologies, right? Because this is what we live with each and every day.
So just as a quick recap of that, so with virtual machines this is dealing with full virtualization, right? So the virtual machine, it’s running a full copy of the operating system and a virtual copy of all the hardware. And so it is basically simulating everything that it needs to from a hardware standpoint so that you can run an unmodified guest OS on that virtual machine without that guest OS knowing that it’s running virtually. It thinks it’s running on just a bare metal computer, right?

Jon Christensen: Right.

Chris Hickman: So everything that needs to be emulated or simulated in that environment is provided by the virtual machine.

Jon Christensen: And if you want the part where I interrupt Chris and say, “What does all that mean actually?” Then go back to the episodes because that’s where I do that. I’m not going to do that right now.

Chris Hickman: Perfect. Yeah. And so part of that is there’s the hypervisor, right? And so the hypervisor is that piece of software that’s managing the hardware on behalf of the virtual machines. So that hypervisor is a key part of virtual machines. And then another key thing to keep in mind here is that each virtual machine is isolated from any other virtual machines on that particular physical server that it may be on, right? So each virtual machine, they have their own guest OS, and therefore they have their own kernel, right? So pretty strong isolation.

Jon Christensen: Right.

Chris Hickman: But on the other hand, it’s also like, because it’s kind of a generic environment where it’s basically trying to simulate everything, emulating everything it needs to in order to run that unmodified guest OS, it’s pretty heavy-handed, right? It’s a pretty heavyweight stack. And so therefore, VM startup times are typically quite lengthy. They require a lot of resources, but they do provide that strong isolation and security. So after that, we have containers. And containers are virtualization at the OS level.
And this really arises from some of the virtualization, isolation and resource management mechanisms that come from the operating system kernel. And typically when we talk about containers, we’re talking about Linux containers, because Linux has some core technologies there that we’ve talked about. Again, we’ve talked about these at length, but in general, we’re talking about things like namespaces and cgroups. We’re using kernel-level features to isolate processes that are running on the computer to provide that OS-level virtualization.
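To make the namespaces-and-cgroups idea concrete, here is a minimal sketch (not from the episode) of cloning a shell into its own UTS, PID and mount namespaces using Go’s syscall package. It assumes a Linux host and root privileges; real runtimes like runc layer cgroups, pivot_root, seccomp and more on top of this.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Toy demo of OS-level virtualization: start /bin/sh in new UTS, PID, and
// mount namespaces. Inside the shell, `echo $$` prints 1 and `hostname foo`
// doesn't leak to the host, yet the process still shares the host kernel.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

This is the shared-kernel point in miniature: the shell feels like it has its own machine, but every namespaced process is still making syscalls against the same host kernel.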

Jon Christensen: Right.

Chris Hickman: The key thing with containers is that they are sharing the same single kernel with each other and the host. So again, these containers are relying on those virtualization isolation mechanisms of the Linux kernel. They’re all sharing the same kernel. So this is one of the … So it’s both a pro and a con, right? So the pro is that this means that containers … And they’re not emulating all the hardware of the system. They’re not spinning up a brand new version of the operating system.
So they’re very lightweight. They’re very quick to spin up. But on the downside, we have some security isolation concerns, right? Because they are sharing the same kernel across all these different containers as well as with the host. And so in general, it’s a less secure posture, right?

Jon Christensen: Yeah. Yeah. And I think that’s the key, right? It’s like, virtual machine isolation is trusted enough to the point where literally Pepsi and Coke can be running on the same machine and the two companies have no idea, and they kind of trust that. And it literally does happen in the cloud. You literally have competing companies potentially running on the same physical hardware. Whereas, really nobody is saying that that’s an okay thing to do with containers. There’s nobody that’s like, “Oh yeah, yeah, trust the container. It’s going to keep your stuff completely isolated from any other container that’s running on the same machine.” At least that’s the sense that I get. Containers, yeah sure, go ahead and run your two different programs on the same machine, your two different applications for your company. But you’d best not be putting in sensitive data that’s not allowed to mix … one app that has sensitive data and another app that has other sensitive data, and the two should not be mixed. They should not be running in the same guest or host OS.

Chris Hickman: Yeah, absolutely. As a general rule, that’s absolutely the case. If you’re in a multi-tenant environment, having containers from multiple tenants on the same host has been very problematic. And most vendors out there have gone to lengths to kind of give those isolation guarantees by essentially having separate VMs for every one of the customers that are running containers, right? To give that isolation. And so those are the … That’s virtual machines versus containers. We now kind of talked about some of the limitations, problems with containers. And so that’s really given rise now to some of these other things that are these new technologies that are coming online that are trying to address it, right?

Jon Christensen: Right.

Chris Hickman: They’re basically, they’re all about …

Jon Christensen: Just Chris, it’s such a simple problem, right? It’s like the … It’s so dead simple. It’s like, what do we want? Trustworthy isolation? How do we want it? As fast as containers. That’s it.

Chris Hickman: Could you chant that a little more melodically? Come on, let’s have some passion [inaudible 00:17:02] or something, right? What do we want? Isolation. How do we want it? We want it now and fast.

Jon Christensen: Right. Yes. Thank you for doing that for me. I wasn’t going to do it.

Chris Hickman: All right, you’re welcome.

Jon Christensen: Too embarrassed on the podcast to go all the way to third circle.

Chris Hickman: I’m okay with it, obviously. So yeah, and that’s exactly what’s happening, right? So we’re hearing these terms like microVMs, you may or may not have heard about unikernels. But these things are getting more attention now, right? Because they really are doing exactly that. What we just chanted for. It’s all about, we need that security and the isolation that’s offered by VMs, but we don’t want this big heavyweight stack that slows us down and takes up an inordinate amount of resources.
Especially given the fact that we’re running in a very specialized environment for these particular applications, which is we’re running in the cloud, right? So this is lights out data centers, right? We don’t need to emulate a video terminal. We don’t need to have all these different hardware devices to be emulated.

Jon Christensen: I love that term you just used. I’ve never heard that one before. Lights out data centers.

Chris Hickman: Oh really? Yeah, that’s an old term. But it really … I guess because I kind of lived through that, right? This goes back to the days at Microsoft when I was working at the Microsoft Network and we were moving MSN from that proprietary dial-up X.25 network to the internet. So we had a data center, and to begin with, there weren’t the remote tools to connect to these machines. These were Windows servers, not … Linux actually at that point didn’t even exist yet, I don’t think, right?

Jon Christensen: Sure, yeah.

Chris Hickman: So we had to actually physically go to the data center, right? And to go get on the machine. And really the solution there was every one of these racks would have a KVM tray. So a 1U high tray, you slide it out, pop it open, and then there’s a keyboard, there’s a display that flips up, and then you have a little track pad mouse type thing. And then you have cables connected to all the different servers that are in that rack that you can then toggle between them and whatnot. And now you can control these computers and do whatever it is that you needed to do.
So a few years later being able to now remote into these machines, that was such a big deal, right? It’s like you didn’t have to go to the data center anymore. You didn’t have to use these KVM switches. And that’s when we kind of started hearing this lights out data center, right? There’s no reason to turn on the lights because there’s no one in there.

Jon Christensen: Right, got it.

Chris Hickman: So keep the lights out and you’re just remoting in. And that was definitely just a huge, huge step. And obviously that’s where we’re at now.

Jon Christensen: Right.

Chris Hickman: Yeah. So yeah, it’s in a very constrained environment. And we can take advantage of the fact that there’s only a limited list of requirements that we have to support. It’s much less than what it was when VMware came out with their software for virtual machines. So that’s what these new technologies are addressing. And so microVMs are exactly what they sound like. So they are virtual machines, they have a hypervisor, but they are only emulating the bare minimum, right? The absolute minimum of what needs to be simulated for these types of environments. And so therefore they do have the speed and resource efficiency of containers, but because they are VMs, they give us that security and that workload isolation.

Jon Christensen: Can we give a quick list of things that a virtual machine might virtualize and then list a couple of the things that a microVM doesn’t need to virtualize?

Chris Hickman: Sure. So we talked about video. We talked about … You could have things like sound. You could have other IO devices like pens and styluses and whatnot. As a flip side of that, you can think of a microVM, one in particular being Firecracker, which we’re going to talk a lot more about. They only have support for five. There are only five emulated devices in the Firecracker VM: there’s one network device, there’s one block IO device, there’s one vsock device for host-guest communication, there’s a serial console, and then they emulate a one-button keyboard controller.

Jon Christensen: Yeah, that’s kind of what …

Chris Hickman: And that one button is just … They only have that in there because they needed something in order to allow a VM to be reset. And so this is basically the on/off button. So those are the only five things being emulated. So could you imagine having a full blown computer with only those devices, right? It’d be very limited.

Jon Christensen: Right. And I just want to say that it’s … The thing that makes it micro is not just … And this is what I’m coming to understand as I speak, it’s not just leaving out whole things, which is awesome. Leaving out the whole video virtualization is awesome. You don’t need any of it. But it’s also narrowing the path for other things. So networking for a typical [inaudible 00:22:51] OS, networking is like, phew, it’s this big thing and you have to have generic handlers: okay, well, what kind of networking am I doing? Okay, now what kind of drivers am I loading?
Whereas, if you’re making a microVM, you can know all of that in advance. And so it’s like, I’m going to just cut away all of this extra garbage code that we don’t need because I know this is my network device, this is the type of networking I’m doing and this is the driver I need for it. And instead of even having a driver, it’s just like the driver is in the microVM code itself maybe. There isn’t a specialized driver loading system even. It’s just like, Oh we don’t need that.

Chris Hickman: Right. Yep. Absolutely. And so that’s exactly how these designs are being driven. And they’re making design decisions as well, right? So like, yeah, Firecracker has these five emulated devices. There’s one for network. And when they did that, they made very specific design decisions about what that network device looks like and its capabilities, right? And there’s vsock, very specifically. And that’s a whole other episode to go into the details of what that means and the differences between tap networking versus veth, virtual ethernet. But it’s just a very specific design decision to, again, limit that surface area, and it causes folks that are developing against these things to have to make trade-offs. So there are certain environments, like Firecracker only supports a certain number of guests that can actually run on the Firecracker hypervisor.

Jon Christensen: Sure. Yeah.

Chris Hickman: But again, these are the trade-offs that they’re making, right, in order to … Again, it’s specialized to the very specific environment. And by doing so, that’s how they can just really cut away all this other support and reduce that heavyweight stack into something very, very lightweight that is super fast and doesn’t require a lot of resources. But again, gives us that high isolation that we get from a VM.

Jon Christensen: Right, right. Cool.

Chris Hickman: So, that’s microVMs. So microVMs, really just virtual machines. They’re doing the same thing. They have a hypervisor, they can take advantage of hardware virtualization, but they’re just very, very fast and they’re not using a lot of resources. So, and we’ll get into what are some of the current implementations of microVMs and how they work. Another thing I wanted to just bring up as one of these other kind of technology stacks that’s getting more and more play is unikernels. And unikernels, they’re not new.
They’ve been around for a while. For easily four or five years, if not even longer. And they’re different in how they’re implemented. But some of the design goals are the same. So, what is a unikernel? So what it is, is it’s a lightweight, immutable OS that’s compiled specifically to run a single application.

Jon Christensen: Okay.

Chris Hickman: Right? So really what it means is you can think of this as, I’m only going to run one process, one application. So let’s just say it’s an NGINX web server. So, it needs certain operating system library primitives and whatnot, but not the whole gamut. And so what a unikernel does is it says, okay, let’s go compile this application and pull in all the operating system resources and functionality that it does need, and put that all together into a single bundle. And now that becomes a unikernel. So we no longer have a full-blown operating system. There’s just this one thing, this one entity. And it’s not going to be possible for us to run more than one of these things. If we wanted to run another process, like in addition to the NGINX web server we also wanted to run Tomcat or something like that, we couldn’t do that, right? We’d have to go through the …

Jon Christensen: Yeah, you can’t even … Yeah, you think of it … I’m just thinking about containers, which is obviously a totally different animal, but with those you can run different processes in a container. But with this it sounds like you can’t.

Chris Hickman: You can’t.

Jon Christensen: Yeah, you can only run NGINX. So one of the things that occurs to me is that maybe you could get rid of a lot of operating system decisions, even some scheduling type stuff. Yeah.

Chris Hickman: Absolutely. And that’s what this has taken advantage of, right? So saying there’s only a single process. So I don’t have to worry about scheduling amongst processes. You’re still going to have multi-threading support, right?

Jon Christensen: Yeah. Threads, yeah.

Chris Hickman: So there’s that. But you don’t have to worry about scheduling processes and whatnot, right? You don’t have to …

Jon Christensen: A whole bunch of code just gone.

Chris Hickman: Yeah. Or just the code for creating processes, right?

Jon Christensen: Gone.

Chris Hickman: Gone. Yeah.

Jon Christensen: Boy, but what a pain though, right? It feels hard, right? It essentially feels like you’re compiling your application into, or with, the kernel. So it better be an application that you run a lot of, because you don’t want to do this for your little forum website, right? It feels like if you’re going to be running 100,000 instances of this application across a fleet of machines, okay, then maybe this starts to feel worth it. Otherwise, having to compile a specialized kernel for every application that you write feels hard to me just off the top of my head, unless maybe the tooling around it makes it easy.

Chris Hickman: Sure. Yeah. So there’s tooling around this, and there’s more of an ecosystem that’s building up. So actually building these images is not too terribly difficult anymore, especially with some of the tooling that’s coming online. And so it’s one of those things … Okay, so what does this give you? Something else to consider with unikernels is, because there’s only one process running, it has a single address space model. And so what this means is that there’s no user versus kernel address space. There’s only a single address space, right? Which is really kind of like, whoa, when you think about it, right? Because it’s like back, if …

Jon Christensen: Well actually can you just remind us what an address space is?

Chris Hickman: Yeah, so this is … Okay. This goes back to just operating systems, especially multitasking operating systems 101. So back in the days of say Windows 3.1 or whatnot, right? That didn’t have this concept … It didn’t have the concept of a kernel and a protected kernel and memory that only the kernel had access to. And so that meant that you could have an application that crashed your entire machine, right? And so you had to go and reboot the entire machine. And so then operating systems got more sophisticated and they had that clear separation of it. So this way, if an application had an error in it and crashed, only the application crashed, not the entire operating system, right?

Jon Christensen: Right.

Chris Hickman: So there’s this separation, right, between the kernel of the operating system and application programs, user programs, user space programs.

Jon Christensen: That’s so funny that you bring up Windows as an example because there’s been separation of kernel and user address spaces forever, since VMS, like way back, but Windows just didn’t do that until later.

Chris Hickman: Right, right. Yeah.

Jon Christensen: Cool.

Chris Hickman: Yeah. So with unikernels they’re like, look, you only have one process running. So it doesn’t even make sense to do that, and we already pay … You pay a huge performance penalty of doing the translations between user and kernel address space. And so with unikernels, you don’t have that. So you get a huge performance boon just from having that, right? Along with the fact that you only have the very minimal amount of code on there, the operating system code to support your application.
There’s not all the other overhead that goes along with it. And so they’re very … so it’s very … So performance-wise, very, very fast. But then also it has a great security model as well, right? Because every single, every unikernel has its own kernel. There’s no sharing of kernels, right? Because the kernel’s basically baked into this image type thing. So it can only talk … There’s that strong isolation, right? And typically unikernels will run on hypervisors or maybe bare metal as well. So, and there’s lots of implementations for unikernels out there. We’ll hopefully get to this in more depth as well as we go on. But just something I wanted to kind of highlight that it’s not just microVMs, there’s also this really different approach that is gaining momentum with unikernels.

Jon Christensen: Right. We cover a lot of information here on Mobycast. And if you’ve ever wanted to go back and remind yourself of something we talked about in a previous episode, it can be hard to search through our website and transcripts to find exactly what you’re looking for. Well now it’s a lot easier. All you have to do is go to Mobycast.FM/show-notes and sign up. We’ll send you our weekly super detailed outline that we use to actually record the show. And a lot of times this outline contains more information than we get to during our hour on the air. So sign up and get weekly Mobycast cheat sheets to all of our episodes delivered right to your inbox.

Chris Hickman: A third approach that is out there is this idea of just sandboxing. And so sandboxing is a technique that’s really addressing those security concerns that we have with containers where they have the shared kernel. And what it does is give those containers a separate, sandboxed kernel through a re-implementation of the Linux kernel itself. So a project doing this is something called gVisor from Google.

Jon Christensen: Okay.

Chris Hickman: And what gVisor really is, is they’ve basically implemented the Linux kernel as software written in Go, in the Go programming language. And that’s what containers are using when they need to access kernel-level functionality. So syscalls, they’re actually … They’re not talking to the host kernel anymore, they’re actually going through this intermediary. They’re going through gVisor, and that provides this isolation. So it’s kind of … It’s not a virtual machine, but it’s also not doing that shared kernel as well. So it’s somewhere in between. So it doesn’t really help with performance at all with containers or anything. But it is addressing some of those security concerns, because they’re no longer sharing the same kernel.

Jon Christensen: Well I was going to say it seems like it would be addressing performance, right? Because it’s containers and they’re already fast, right? Sandboxes are essentially containers, yeah?

Chris Hickman: Well it is, but it’s actually slowing down containers.

Jon Christensen: Okay.

Chris Hickman: Right? Because you’re going now through an intermediary.

Jon Christensen: Right, right. But slowing down containers, that’s kind of interesting. It’s like slowing down … You can either speed up the VMs or slow down the containers, but yeah.

Chris Hickman: Yeah. Yeah, security comes at a price, right?

Jon Christensen: Right, yeah.

Chris Hickman: Yeah.

Jon Christensen: For whatever reason, when you were talking about that, talking about essentially doing a better job of making the OS respect security boundaries between processes. That’s essentially what that sandbox sounds like it’s doing. It seems reasonable, right? Because even hardware has sometimes been responsible for the problem of leaking information across OSs. I think it was Heartbleed was the name of the Intel issue where memory was leaking. Memory from one OS was readable from another, like one virtual machine was readable inside another virtual machine, and it was literally because of this leak of information in silicon sandboxes, right? Is that the name of the one?

Chris Hickman: No. But yeah, we have these bugs and security stuff. Heartbleed was specific to SSL and it had to do with …

Jon Christensen: Oh yeah, it’s not that one.

Chris Hickman: You send a certain …

Jon Christensen: Yeah, it’s not Heartbleed.

Chris Hickman: You send the heartbeat command and it basically exposed a buffer of memory that you [inaudible 00:36:25].

Jon Christensen: Yeah, Heartbleed was a couple of years before the one I’m thinking of, and I just can’t remember the name of it. It hit every single Intel chip, every single one. And it was like two years ago.

Chris Hickman: Yes, yes.

Jon Christensen: Whatever that one was is the one I’m talking about.

Chris Hickman: Yes, yes. Yeah.

Jon Christensen: So we’ll put it in the show notes.

Chris Hickman: Yes.

Jon Christensen: But yeah, I guess the point is if you’ve got multiple things happening on a chip, then the chip has to provide some security to prevent those things from knowing about each other. And also the actual operating systems and hypervisor do that too. And there’s no reason I guess that like in these sandboxes that an individual kernel couldn’t also do that, but it just would have to have better isolation between its processes. And I guess as I’m sitting here thinking about this, one of the reasons that probably kernels don’t provide out of the box better isolation between processes is just because being able to access information across processes is so valuable. It’s a thing that people want typically rather than a thing that people don’t want.

Chris Hickman: Yes. Yeah, yeah. Indeed.

Jon Christensen: Okay, well cool. So here we are. We’ve talked about sandboxes, we’ve talked about unikernels and we’ve talked about microVMs.

Chris Hickman: Right. Yeah. So why don’t we dive in and start talking about some of the microVM implementations, now that we kind of understand a little bit about the characteristics and the landscape out there. What is out there and how do we use these things? So why don’t we start with Firecracker. So Firecracker is a microVM. This has been developed by Amazon for AWS, and it really comes out of the issues that they had in supporting Lambda. So Lambda launched in 2014, functions as a service, right? Where you could go and be like, here’s my … I’m now going to write code at the function level and go and run this in a serverless fashion for me on some machine. And so really great from a user standpoint. It kicked off a whole new portion of the industry with it.
But on the Amazon side they had to figure out, well, how are they going to do this and how are they going to ensure things like security and isolation for their customers? So when Lambda first launched, they ended up going … They had to go with the decision that they would have a virtual machine per customer. So I think the scheduler is smart enough so that if you had multiple functions running in Lambda, as long as they were in the same account, they could be scheduled on the same virtual machine. But you couldn’t run a Lambda that was being used by one customer on a virtual machine that was used by some other customer, right? So pretty heavyweight, if you will, for running these … And think about it, in a worst case scenario, you’re spinning up a VM for every function that gets run on Lambda, right?
But they had to do that because that was the only way they could really guarantee security and the isolation that they know that their customers wanted. They also … Security is job one for AWS. So they don’t … Could you imagine? What would happen is six months into this and it’s like they’re getting all this great usage of people using it. And then they find out, oh, someone’s figured out how to get through to someone else’s functions on a machine. So they’ve written some code or whatnot to go and access someone else’s functions and have a security leak. That would be just catastrophic for someone.

Jon Christensen: I can’t imagine what would happen.

Chris Hickman: Yeah.

Jon Christensen: And now I know that it was called Meltdown and it happened in January of 2018. And I also know that it was an Intel chip issue that was making it possible for a single process to read all of the memory on the chip. And what happened was, there was [inaudible 00:40:51] figured out, and Amazon had it applied to every single computer across their entire cloud super fast, despite the fact that it actually slowed everything down a little bit. People were noticing, oh, some of my stuff got slowed down a little bit. But security being job one, they’d rather slow all of their customers down than potentially let that memory leak from one customer to another.

Chris Hickman: Yeah, that was one where I was like, oh, the fix is going to cause things to run about 30% slower.

Jon Christensen: Yeah.

Chris Hickman: And so it’s like boom, overnight. It’s like you kind of see those performance degradations, but you’re not leaking memory.

Jon Christensen: I’ll take it. Yeah.

Chris Hickman: Yeah. Here, you actually got some security. Yeah. Yeah. So yeah, Meltdown and Spectre were definitely some of those chip issues.

Jon Christensen: Right.

Chris Hickman: Yep.

Jon Christensen: Well cool. So we’ve got Lambda and it’s a daunting problem. We’re trying to run functions. We had to start up virtual machines for every single function invocation. Unbelievably huge overhead, but they decided to take it on anyway because they saw the future.

Chris Hickman: Absolutely. Well, and again, it was just from a business standpoint, very expensive for them to run these functions, right? Given that they have to spin up a VM for every one of them. So that gave rise to some thinking of, well how can we make this better? And that gave rise to the development of Firecracker. So they, I believe development started in 2017.

Jon Christensen: Okay.

Chris Hickman: And started going to production in 2018.

Jon Christensen: Okay.

Chris Hickman: And so just as a bit of a benchmark as of now, so call it December 2019 or January 2020, they’re running trillions of Lambda invokes per month on Firecracker.

Jon Christensen: No problem.

Chris Hickman: That’s a lot. Yeah, that’s a lot. Right?

Jon Christensen: Yeah.

Chris Hickman: So a little bit about … Okay, so how does it work? So Firecracker, it’s a lightweight virtual machine monitor. So a VMM, otherwise known as a hypervisor, right? So it is a hypervisor. It’s using the Linux kernel-based virtual machine, so KVM. So this is a KVM-based hypervisor. And so just as a … So we talked a bit about KVM as one of the popular hypervisors back in the VMs versus containers miniseries. But just kind of as a refresher, so KVM, it’s a virtualization module inside the Linux kernel, which can turn the Linux kernel into a hypervisor.
And it’s able to run multiple VMs running unmodified guest OS images. Each VM has its own private virtualized hardware, and it’s also leveraging hardware virtualization, right? So those CPU virtualization extensions like Intel VT or AMD-V, it can use those chip capabilities for doing the hardware virtualization.
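As a rough illustration of what “turning the Linux kernel into a hypervisor” means in practice, user-space VMMs like QEMU and Firecracker drive KVM through ioctls on /dev/kvm. Here is a minimal sketch (ours, not Firecracker’s code) that just asks the kernel for its KVM API version; real VMMs go on to issue calls like KVM_CREATE_VM, KVM_CREATE_VCPU and KVM_RUN.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// /dev/kvm is the user-space entry point to the in-kernel KVM hypervisor.
	kvm, err := os.OpenFile("/dev/kvm", os.O_RDWR, 0)
	if err != nil {
		fmt.Println("KVM not available on this machine:", err)
		return
	}
	defer kvm.Close()

	// KVM_GET_API_VERSION is ioctl _IO(0xAE, 0x00); the stable API reports 12.
	version, _, errno := unix.Syscall(unix.SYS_IOCTL, kvm.Fd(), 0xAE00, 0)
	if errno != 0 {
		fmt.Println("ioctl failed:", errno)
		return
	}
	fmt.Println("KVM API version:", version)
}
```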

Jon Christensen: Right.

Chris Hickman: So very, very fast hypervisor, very performant, strong security and isolation. And from an architecture standpoint, there are two pieces to it. There’s the kernel component and then there’s a user space component. And the kernel component is a loadable kernel module that provides the core virtualization infrastructure. And then it also has a processor-specific portion to it, so whether you’re running on Intel or AMD or whatnot. The user space component, that’s responsible for doing the hardware emulation. And typically here you’ll see there’s software called QEMU. And if you’re in the space you’ll see this is pretty popular. It’s used a lot. So it’s full-blown emulation. It’s a user-land program that does this hardware emulation. And it’s used by KVM for IO emulation. And it’s, again, very popular. It’s performant. The QEMU team says they can run KVM and Xen virtual machines with near-native performance.
The reason why we’re talking about this now is that this is where Firecracker comes in. So Firecracker is KVM-based, so it still uses that KVM kernel component, but the user space component, that’s what Firecracker is. It’s an alternative to QEMU, right? So that’s where Firecracker slots into this. And so again, keep in mind QEMU, it’s that full-blown emulation, versus Firecracker, which has been purpose-built for just this space. And specifically, they purpose-built it to begin with to run Lambda functions. So think of this as, it’s really more in the realm of serverless computing.

Jon Christensen: Yeah.

Chris Hickman: And so it’s only emulating the bare minimum that it needs to.

Jon Christensen: Would that be related at all to some of the things that you can’t do in Lambda? Do you think that that is part of it? Or is it just … You know what I mean? There were certain things at least that for a while you couldn’t do. Accessing certain hardware for example, from a function. Maybe you wanted to do some calculations on a GPU. That didn’t seem like it was a possibility with Lambda. Yeah? Do you think that some of the limitations around Lambda were based on just limitations of what the VM was able to do?

Chris Hickman: Probably not so much just because again, Lambda launched in 2014 and that wasn’t using Firecracker, right? That was using basically just EC2 instances and virtual machines.

Jon Christensen: Yeah, and you still couldn’t do some of that stuff.

Chris Hickman: Yeah, so a big part of that was just, okay, these are the types of instances that we’re going to be using. This is how we’re going to lock them down. We don’t want people accessing just disk space indiscriminately. We’re going to put some limitations around it just for them to be able to operate it in a scalable way, right? And so limitations came from that. And then once they started using Firecracker, that may or may not have imposed additional limits. But I think from a user standpoint, it was probably pretty much transparent, right? So I don’t think it was like, oh, before I could do this in my code and now that doesn’t work anymore.

Jon Christensen: Right. I could see maybe some of those limitations getting baked into Firecracker. So maybe, you talk about disk access and stuff, maybe some of that was just a code limitation that turned into a microVM-style limitation.

Chris Hickman: Yeah, absolutely. They are make … Make no mistake about it, they made design decisions and trade-offs based upon this very specific environment that they were shooting for, right?

Jon Christensen: Yeah.

Chris Hickman: So they definitely, if it wasn’t supported in the original version and they really didn’t see a need for it and people weren’t complaining about it, they’re definitely not going to support it when they did Firecracker.

Jon Christensen: Right, especially if not supporting it could lower startup times or something.

Chris Hickman: Yeah, absolutely. Yeah.

Jon Christensen: Cool.

Chris Hickman: And so in addition to being this very purpose-built alternative to QEMU for basically serverless containers and only emulating the bare minimum, it’s got some additional features above and beyond that are really useful in this environment. So one, there is a RESTful API for controlling Firecracker that’s offered. They do things like resource rate limiting for microVMs, which obviously is pretty important for things like Lambda functions, to be able to say, hey, you can only use this much memory, right? Or this much CPU. So having this concept of resource rate limiting. And then there’s also a microVM metadata service that allows for sharing of configuration data between the host and guest as well. So just some additional features that really are useful in this environment of, hey, I want to run lots of these things on a single host, and I need some way of managing them and doing coordination and whatnot.
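To give a feel for that RESTful API, here is a rough sketch of configuring and starting a single microVM over Firecracker’s Unix socket from Go. The endpoints and JSON fields follow Firecracker’s published API as we understand it, but treat the socket path, the kernel and rootfs paths, and the exact field names as illustrative placeholders and check the current docs before relying on them. It assumes a `firecracker --api-sock /tmp/firecracker.socket` process is already running.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Firecracker exposes its control API on a local Unix domain socket.
	sock := "/tmp/firecracker.socket"
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}

	put := func(path, body string) {
		req, err := http.NewRequest(http.MethodPut, "http://localhost"+path, bytes.NewBufferString(body))
		if err != nil {
			panic(err)
		}
		req.Header.Set("Content-Type", "application/json")
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println("PUT", path, "->", resp.Status)
	}

	// Size the microVM, point it at a kernel and root filesystem, then boot it.
	put("/machine-config", `{"vcpu_count": 1, "mem_size_mib": 128}`)
	put("/boot-source", `{"kernel_image_path": "vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}`)
	put("/drives/rootfs", `{"drive_id": "rootfs", "path_on_host": "rootfs.ext4", "is_root_device": true, "is_read_only": false}`)
	put("/actions", `{"action_type": "InstanceStart"}`)
}
```

The rate limiting and metadata service Chris mentions are driven through this same API, with rate-limiter fields on the drive and network resources and a separate metadata endpoint.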

Jon Christensen: Okay. That actually probably was inspired by the container world and the Docker Daemon. Yeah.

Chris Hickman: Yep. Yeah, some other things to keep in mind with Firecracker: it can run Linux guests, basically. It can also run OSv guests. OSv is a unikernel implementation, and we’ll kind of talk about OSv later. But this is not going to run Windows guests, right? It’s not going to run any and every type of operating system. It’s, again, very specific, and for the most part we can just think of it as being Linux. And obviously it’s on KVM as well, and KVM can run Windows images as well. It’s also written in Rust, which is kind of interesting. One of the reasons why they did that is because it’s a memory-safe programming language. So it’s a lot easier to write this low-level code safely using a language like Rust as opposed to C or C++.

Jon Christensen: You know what? Are we getting close to the end here? Because if we are, and we still have like 10 minutes left, we should say what that means, a memory-safe programming language. If not, we’ll have to save it for another time.

Chris Hickman: We’re nowhere near being done. There’s still a lot to talk about here. So if you want to talk about what we mean by memory-safe programming language we can.

Jon Christensen: Let’s do it. I already interrupted us. I don’t know what that means exactly and I feel like, gosh, I wish I knew that so that this whole thing just sat in my head better.

Chris Hickman: Yeah. So, it’s really the evolution of programming languages, and it kind of goes back to the past with things like C and C++, where you’re doing all the memory management yourself, right? So when you need to access memory, you’re allocating the memory, and then when you’re done with it, you’re supposed to free it. But, if you forget to free the memory that you used, that’s a memory leak, right?

Jon Christensen: Yep.

Chris Hickman: So that’s a problem. So especially for long-running processes, over time if you’ve got this subtle bug where you forgot to free some memory [inaudible 00:52:01] marked as no longer being used, then eventually your program would crash because it’d run out of memory.

Jon Christensen: Sure.

Chris Hickman: So that’s not memory safe, right?

Jon Christensen: Right.

Chris Hickman: So that’s what languages like Rust and Go give you, is some of those memory management capabilities. It also protects you from overwriting memory addresses and whatnot, so that you’re not accessing memory that you really shouldn’t be accessing.

Jon Christensen: I think I can put this into the slot of: it has the memory management of a modern language like Java, but compiles all the way down to talk directly to the kernel instead of into its own little VM like the Java virtual machine.

Chris Hickman: Right, right. It’s not running as a virtual machine like the Java virtual machine, which is such an unfortunate term, right? Because the Java virtual machine is not the same thing as what we’re talking about right now as virtual machines.

Jon Christensen: Right. Yeah. Okay, thanks. That helps.

Chris Hickman: Sure. So that’s what Firecracker is and kind of what its general architecture looks like. So let’s talk a little bit about just what are the benefits of Firecracker. Obviously I think we kind of already know, we’ve kind of harped on this. So security is a big benefit, performance and efficiency. So let’s talk about security a bit. So because it is a virtual machine, because it has the hypervisor, right, there is that strong isolation between the virtual machines and their host. It also has this very limited device model where it’s only emulating devices that are really needed for this particular type of environment. So we talked about this. There are only five emulated devices here, right? So there’s the one for networking, there’s one for block-level IO, there’s the vsock virtual device, the serial console, and that one-button keyboard for doing the reset.

Jon Christensen: I want to nitpick on just another term. You’re sort of using virtualizing and emulating interchangeably. Are you seeing that in the literature too that you’re reading?

Chris Hickman: Mm-hmm (affirmative).

Jon Christensen: Okay.

Chris Hickman: Yeah. I think it depends on … There might be some subtle differences like when one term is more correct than the other one. But definitely, typically with devices you’ll see emulation for hardware, like emulating devices. And virtualization is more of just I think more of a general term.

Jon Christensen: Yeah, like …

Chris Hickman: But at the end of the day, as far as we’re concerned, it’s kind of the same thing.

Jon Christensen: Okay.

Chris Hickman: And we can kind of almost interchange them.

Jon Christensen: Okay, cool.

Chris Hickman: Also from a security standpoint, they have kind of a twofold approach to security. So one is virtualization, so that just by virtue of having virtual machines and their own guest OS that’s separate from everything else, that’s one boundary. But then they also have the concept of this jailer. And really what that is, is they’re sandboxing or jailing the hypervisor in a process jail.
So what they’re doing is they’re using Linux cgroups and they’re also using seccomp-BPF. So it’s kind of interesting. So it’s using some of the same techniques that containers use to get isolation. And so they’re using some of that stuff to jail the hypervisor and to make sure that it has access to only a small, tightly controlled list of syscalls.
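As a rough sketch of just the cgroup half of that jailing (our illustration, not the actual Firecracker jailer, which also applies a seccomp-BPF filter, chroots, and drops privileges), this is roughly what confining a process looks like at the filesystem level on a cgroup v2 host, run as root and assuming the memory controller is enabled for child groups:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a new cgroup and cap its memory at 128 MiB.
	cg := "/sys/fs/cgroup/demo-jail"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("134217728"), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the group; from here on, the kernel
	// enforces the limit on it and anything it spawns.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "is now confined to", cg)
}
```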

Jon Christensen: Oh interesting.

Chris Hickman: So just an additional layer, right? It’s just, again, security is top of mind. It’s job one. And so the Firecracker process doesn’t need access to a whole bunch of stuff. So let’s jail it.

Jon Christensen: Okay.

Chris Hickman: And there is also a single VM per Firecracker process as part of that model as well. So a lot of stuff on the security front. So it definitely gives us strong security isolation. From a performance standpoint, again, because everything is so streamlined and we have the bare minimum amount of emulation being done, and the minimal amount of features that we need … So the microVM startup time, just the microVM startup itself, is four milliseconds.

Jon Christensen: That’s so fast. But before you continue, there was just something that you said really quickly that I kind of tripped over and I think it’s because I maybe wasn’t crystal clear on my understanding of the actual definition of Firecracker. But you said there’s a single VM … Our future selves are traveling back in time to address this confusion instead of having you wait for our next episode. So the phrase that we’ve been talking about and kind of stumbling over is single VM per firecracker process. And Chris, maybe you can help me and help everybody by explaining what my confusion was and what our miscommunication was around this phrase.

Chris Hickman: Sorry, yeah. I’m like, wow, what great technology to be able to go back in time and kind of edit what happened. So yeah, we kind of got wrapped around the axle a little bit for 10 or 15 minutes over this. We were going through the benefits of Firecracker and specifically the security benefits of it. And one of the bullet points that we talked about was there is a single VM per Firecracker process. And that kind of piqued your interest, Jon, where you were like, wait a minute. Let’s talk about that some more. That sounds kind of weird.

Jon Christensen: Yeah.

Chris Hickman: And I was like, Jon, don’t worry about it. This is just part of the architecture, right? There are lots of components here and whatnot. And this is just one of those ways that it provides some further isolation. And so the confusion was, earlier in the episode we kind of talked about, here’s the landscape of containers now, and then here’s where it’s going in the future to solve some of these issues with, hey, we don’t have the strong isolation with containers like we do with VMs and we’d like to have that back.
And so in surveying that landscape, we said there are microVMs, which we were talking about. We talked about unikernels and kind of introduced that concept. And when we did introduce unikernels, we said they are focused on running a single process, right? So it’s like a custom-built OS that’s just built to run a single process. It pulls in the operating system libraries and all the support code it needs just to run that one process. And that’s all that runs inside of that. And that’s a unikernel.

Jon Christensen: Right.

Chris Hickman: And so then I think when we got to this part of the Firecracker discussion and we said, hey, there’s a single VM per Firecracker process, that led to the connection like, wait, wait, you mean I can only run a single process inside of a Firecracker VM? And that’s not the case at all.

Jon Christensen: Yeah, there’s potential ambiguity around the phrase VM per Firecracker process especially because it’s like which direction does that go? Does that mean one process per VM or one VM per process? And I just wasn’t picturing that a Firecracker process is a thing that lives outside of the virtual machine. That microVM, like the actual virtual machine that’s running your user app. I guess that’s not what’s called a Firecracker process confusingly, right? In my mind, that was the firecracker process.

Chris Hickman: Right, exactly. So this Firecracker process, this is the runtime support that’s on the host. It’s outside of the VM. Inside that Firecracker VM is a full guest operating system, which does include the container runtime, and you can run as many processes as you want inside it. But you can kind of think of it as pairing up: there’s that piece of the Firecracker architecture that sits outside the VM on the host, and it’s responsible for kind of guiding that virtual machine and providing the runtime support. There’s a one-to-one correlation between that particular process and the VM.
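To make that one-to-one relationship concrete, here’s a minimal sketch of driving a single Firecracker process through its API. It assumes a firecracker process has already been started with its API socket at a hypothetical /tmp/firecracker.sock, and that a vmlinux.bin kernel and rootfs.ext4 root filesystem exist on the host; the endpoints and fields follow Firecracker’s published REST API, but treat the paths and values as illustrative rather than production settings.

import json
import socket
import http.client

# Each firecracker process exposes its own API socket; one socket means one microVM.
API_SOCKET = "/tmp/firecracker.sock"  # hypothetical path passed to firecracker via --api-sock


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix domain socket (Firecracker's API is not served over TCP)."""

    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock


def put(resource, body):
    """Send one PUT request to this process's API socket and return the status code."""
    conn = UnixHTTPConnection(API_SOCKET)
    conn.request("PUT", resource, json.dumps(body),
                 {"Content-Type": "application/json"})
    response = conn.getresponse()
    response.read()
    conn.close()
    return response.status


# Configure and boot the single microVM owned by this firecracker process.
put("/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
put("/boot-source", {"kernel_image_path": "vmlinux.bin",
                     "boot_args": "console=ttyS0 reboot=k panic=1"})
put("/drives/rootfs", {"drive_id": "rootfs", "path_on_host": "rootfs.ext4",
                       "is_root_device": True, "is_read_only": False})
put("/actions", {"action_type": "InstanceStart"})

Every microVM on the host gets its own copy of this picture: its own firecracker process, its own socket, its own configuration.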

Jon Christensen: Yep. So one Firecracker process is not spinning up a bunch of VMs? Basically, that’s all it was saying?

Chris Hickman: Yes. It’s a little bit kind of tricky, right? Because Firecracker, the technology, right, it’s composed of a bunch of different pieces, right? There’s agents, there’s controllers, there’s …

Jon Christensen: Right.

Chris Hickman: And then there’s this runtime support, right? Which we just called the Firecracker process. And it’s like, well, which process are we talking about? So I think, again, just realize there’s lots of pieces to it. This is the corresponding runtime support that’s running on the host outside the VM. And it’s that one-to-one mapping. So I think at the end of the day, it just means, hey, this gives us further strong isolation, in that Firecracker VMs, and the code running inside them, are not sharing this runtime support on the host with other VMs, right? So there’s less possibility for crosstalk and leaks and breakouts and whatnot.

Jon Christensen: I’ve got to push this, Chris. So I like to draw parallels. And a parallel that I want to try to draw is that the Firecracker process that we keep talking about is kind of the equivalent, does the equivalent job, as runC does for containers. Obviously containers and microVMs are different things. But in sort of a parallel-inspired world, I feel like that Firecracker process is kind of like the runC of microVMs.

Chris Hickman: Yeah, it’s a good way of thinking of it. And we’ll actually, next week … Trying to keep this straight with time travel. Next week we’ll talk a little bit about Kata Containers. And that will be a much more familiar analogy we’ll be able to use, right, to understand this.

Jon Christensen: Excellent.

Chris Hickman: So, it’s a good point, right? We already talked about containerd and runC and how they support containers, and how runC is the low-level runtime support for instantiating the containers and bringing them down. So we need something like that for VMs. And so there’s some piece of Firecracker that does that, and that’s this.
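If it helps to see that parallel in process terms, here’s a hypothetical sketch: much like containerd invokes runC once per container, an orchestrator can start one firecracker process per microVM, each listening on its own API socket (this is roughly the role the firecracker-containerd project plays). The loop and socket paths below are purely illustrative; only the firecracker --api-sock invocation mirrors the real CLI.

import subprocess

def launch_microvm(vm_id: str) -> subprocess.Popen:
    # One firecracker process per microVM, each with its own (hypothetical) API socket.
    api_sock = f"/tmp/firecracker-{vm_id}.sock"
    return subprocess.Popen(["firecracker", "--api-sock", api_sock])

# Three microVMs means three independent firecracker processes: one process per VM.
vmm_processes = [launch_microvm(str(i)) for i in range(3)]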

Jon Christensen: Cool. Awesome. Well now I get it. Thank you Chris.

Chris Hickman: All right, perfect.

Jon Christensen: All right, thanks Chris.

Chris Hickman: Yeah. Well, before we wrap, let’s just talk real quick. We were just going through the benefits of Firecracker. So we talked … So security, we have beat that horse. Hopefully everyone agrees that Firecracker takes security very seriously, and you’re getting really strong isolation there. We talked about performance again, because this is so streamlined, with minimal devices emulated. We said the base microVM startup time is four milliseconds.

Jon Christensen: Yeah.

Chris Hickman: To actually boot to a guest Linux user space, you’re looking at 125 milliseconds. So that’s really the number we care about, right? Because it’s the … The VM’s kind of not useful until you actually have your guest [inaudible 01:03:53] in it, right?

Jon Christensen: Right. Yeah.

Chris Hickman: So 125 milliseconds.

Jon Christensen: The vanity metric, that four milliseconds.

Chris Hickman: Kind of, yeah. So it’s an eighth of a second to bring it up. So that’s very, very fast.

Jon Christensen: That’s fast. Yeah.

Chris Hickman: Very fast. And then from an efficiency standpoint, just super low memory and CPU overhead. You’re looking at less than five megabytes of memory for each one of these VMs. And so all this means, again, you can have thousands of microVMs per host. And so things like 4,000 microVMs per host machine are absolutely doable, which is kind of mind-boggling if you think about it.
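To put that density claim in perspective, here’s a quick back-of-envelope calculation, assuming the roughly five megabytes of per-VM overhead quoted above and a hypothetical host with 256 GiB of RAM (real hosts and real overhead will vary):

# Back-of-envelope density math under the assumptions stated above.
vm_overhead_mib = 5
host_ram_mib = 256 * 1024                                  # 262,144 MiB

vmm_overhead_total = 4000 * vm_overhead_mib                # 20,000 MiB (~20 GiB) for 4,000 VMMs
ram_left_per_guest = (host_ram_mib - vmm_overhead_total) / 4000
print(vmm_overhead_total, round(ram_left_per_guest))       # 20000, ~61 MiB left per guest

In other words, the VMM overhead for 4,000 microVMs is only about 20 GiB; what actually constrains density is the memory the guests themselves use, which is exactly where the conversation goes next.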

Jon Christensen: Yeah, I don’t understand that at all.

Chris Hickman: And it’s a great achievement … You can’t run 4,000 containers. I don’t think …

Jon Christensen: Right. It is just too weird. That’s so little memory that I feel like it should be like … You should be able to browse through the code of all of it in a day and be like, oh cool. That was simple and quick, right?

Chris Hickman: Yeah.

Jon Christensen: Because any sufficiently large amount of code is going to create a much bigger memory footprint as it does stuff.

Chris Hickman: Yeah. This is the memory footprint obviously of the VM, the VM overhead, right?

Jon Christensen: Yeah, yeah, yeah.

Chris Hickman: It’s not the memory load of the guest OS, whatever you do in the guest OS, right?

Jon Christensen: Yes, yes, yes. Yeah, good point.

Chris Hickman: So there’s that too, right? But assuming you’re not using a lot of memory inside the guest OS, this is the baseline overhead for the Firecracker VM. So you can run thousands of these things. So again, if you’re using this for running Lambda functions, each one of those Lambda functions is not going to be using up a lot of memory. You can put limits on that. So running thousands of Lambda functions on a single host is now totally doable and possible, and they all have that strong isolation. And so it’s just a really strong benefit and a big win for running these kinds of application workloads.
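As a rough sketch of why that pencils out, assume every function is configured at Lambda’s 128 MB minimum memory setting and add the roughly five megabytes of Firecracker overhead discussed above; configured memory is a limit rather than guaranteed resident memory, so real packing decisions are more nuanced than this:

# Rough sizing sketch for the Lambda scenario under the assumptions stated above.
functions = 4000
per_function_mb = 128 + 5                  # guest memory limit plus per-VM overhead
total_mb = functions * per_function_mb     # 532,000 MB, on the order of half a terabyte
print(total_mb)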

Jon Christensen: Cool. There’s another one for next week too, because when you said thousands of these on a single machine, you also said that doesn’t count the guest OS. It’s like, well then wait a minute. So how big is a Lambda function, and how big is this guest OS? And so how can you really have thousands of these? So I’d like to know that too. Yeah.

Chris Hickman: All right. We have our homework cut out for us.

Jon Christensen: I know, man. But this is cutting, cutting, cutting edge stuff. Not very many people are immersed in this, and so to be able to talk about it fluently, come on. That’s really, really hard for anybody. So kudos to you for doing what you did. Thank you, Chris.

Chris Hickman: Yeah. All right, thank you. Thank you, Jon.

Jon Christensen: Yeah. Talk to you next week.

Chris Hickman: All right, see you then.

Jon Christensen: Bye.

Chris Hickman: Bye.

Stevie Rose: Thanks for being aboard with us on this week’s episode of Mobycast. Also, thanks to our producer, Roy England. And I’m our announcer, Stevie Rose. Come talk to us on Mobycast.fm or on Reddit at R/Mobycast.
