What is the future of containers? In this three-part series, we are exploring the promising technologies aiming to make our cloud-native apps more secure without giving up performance.
Previously, we learned all about microVMs, taking a deep dive on the most talked about microVM – AWS Firecracker.
This week on Mobycast, we finish looking at microVMs with a discussion of Kata Containers. Then we explore the world of unikernels, which promise the same benefits as microVMs but with a dramatically different approach. Oh… and somewhere along the way, Chris accidentally invents a new technology – “conternels”.
Companion Blog Post
In this episode, we cover the following topics:
- We continue our discussion of microVMs with a look at Kata Containers.
- Kata Containers was formed by the merger of two projects: Intel's Clear Containers and Hyper's runV.
- How does Kata Containers integrate with existing container tooling?
- How mature are Kata Containers – are they ready for production?
- We then take a look at unikernels, which take a dramatically different approach to solving the problem of providing high security with blazing performance.
- The benefits of unikernels, along with a comparison of how they differ from containers.
- We discuss some of the most popular unikernel implementations, including OSv and MirageOS.
- Does the future point to a deathmatch between containers and unikernels, or will there be a need for both approaches to cloud-native apps?
- Kata Containers
- Intel Clear Containers
- Containers vs. Unikernels: An Apples-to-Oranges Comparison
- OSv – Github
- Making OSv Run on Firecracker
- Docker Acquires Unikernel Systems to Extend the Breadth of the Docker Platform
We’d love to hear from you! You can reach us at:
- Web: https://mobycast.fm
- Voicemail: 844-818-0993
- Email: email@example.com
- Twitter: https://twitter.com/hashtag/mobycast
- Reddit: https://reddit.com/r/mobycast
Stevie Rose: What is the future of containers? In this three-part series, we’re exploring the promising technologies aiming to make our cloud-native apps more secure without giving up performance. Previously, we learned all about microVMs, taking a deep dive on the most talked about microVM, AWS Firecracker. This week on Mobycast, we finish looking at microVMs with a discussion of Kata Containers. Then we explore the world of unikernels, which promise the same benefits as microVMs but with a dramatically different approach. Oh, and somewhere along the way Chris accidentally invents a new technology, conternels.
Welcome to Mobycast, a show about the techniques and technologies used by the best cloud-native software teams. Each week, your hosts, Jon Christensen and Chris Hickman, pick a software concept and dive deep to figure it out.
Jon Christensen: Welcome Chris. It’s another episode of Mobycast.
Chris Hickman: Hey Jon. It’s good to be back.
Jon Christensen: Good to have you back. Here we are in a series on microVMs, which is super cool. And the second episode of the series I think was one of the best episodes of Mobycast that we’ve done in a long time. So we’ve got our work cut out for us today. Let’s try not to be boring and yeah, we’re going to-
Chris Hickman: Speak for yourself Jon.
Jon Christensen: What’s that?
Chris Hickman: Speak for yourself. I’ll hold my own. I’ll be witty and entertaining.
Jon Christensen: Yeah. So we’re going to talk today and we’re not going to get into pleasantries and how the weather is there because I’m sure it’s awful. And it is here too.
Chris Hickman: Snow.
Jon Christensen: Yeah. Same here. So let’s talk about software and computers. Yeah. So in part three we’re going to talk about Kata Containers and unikernels.
Chris Hickman: Yes.
Jon Christensen: Just really wanted to say unikernels.
Chris Hickman: Yeah, I mean let’s … Yeah, it might catch on, right? There’s your meme right there. Go ahead and do it. Post it to Twitter.
Jon Christensen: My shot of unikernels.
Chris Hickman: There you go. Yeah. So I mean we’re on this journey of just like looking at what’s the future of containers. We’ve talked about like the benefits, the pros and cons of VMs and containers. So VMs, strong isolation, high security. Containers though give us much better speed, performance and efficiency of resources, but giving up some of that security and isolation. Basically all the direction going forward now is like how do we kind of bring those two things together and resolve them?
And there are three main things that we’ve been looking at. So we’ve been looking at microVMs and what they are. That’s what we’ve talked about in the first two episodes. Last episode we really went deep on one of the most popular microVMs out there, which is Firecracker from AWS. And so today we’re going to finish talking about another one called Kata Containers. And then after that we can move on to unikernels, which is a whole nother way of going about getting some of the same benefits. And then we’ll maybe briefly just touch on container sandboxes as like a third kind of prong out there, although probably much less interesting for us. So yeah.
Jon Christensen: But is any of this interesting to anybody? Like what’s the point? Why are we even talking about all of this?
Chris Hickman: Yeah. So it’s interesting in the aspect of like, this is the evolution of technology, and a big part of it is just all being driven by cloud computing, which is really kind of interesting. And not only that, everything we’re talking about is going to be really important for devices as well. So IoT, right? Because they have some of the same constraints, where it’s like very special purpose, but you really have to take advantage. You need either speed or you need really great resource efficiency, right? So like that sensor in your home, it can’t require eight gigs of RAM or a huge CPU, right? So it needs to be very resource efficient.
Jon Christensen: And another thing, I’m not sure if microVMs will have the same kind of trajectory that Docker did. I kind of think that they won’t, and that they’ll be mostly behind-the-scenes stuff, and that once you get your head around working in a containerized workflow, that will continue to be what you need to know into the future as microVMs become more popular. But I say that with a caveat, because I distinctly remember somebody saying, “Oh yeah.” And like, this is like 10 years ago now.
Somebody being like, “Yeah, Docker. That seems kind of cool, but it’s only like script kiddies talking about it. And so it’s pretty uninteresting to me.” And now that guy’s probably using Docker every day as part of his job and being like, “Huh, I was kind of immature in my reasoning about what Docker was all about back then.” And maybe it’s not quite the same with microVMs. Maybe they do sort of stay in the background in the domain of people that work inside big cloud companies and stuff like that. But they are along that same trajectory. So if you were [inaudible 00:05:33] Docker 10 years ago, and you’re [inaudible 00:05:34] microVMs today, then your job is probably going to involve doing something with microVMs like 9:00 to 5:00 every single day of the week in 10 years.
Chris Hickman: Yeah, I mean here’s the way I look at it. So we said that there’s these three kind of main prongs: microVMs, unikernels and sandboxes. So let’s just forget about sandboxes, because sandboxes are just really like, how do we take standard containers and give them some stronger isolation by sandboxing the kernel? Right? So Google’s gVisor project is doing that, but it’s not really dramatically different and there’s not a lot of interest there, right? Because it’s really only addressing the security part of it. It doesn’t give us any other improvements like the … it actually slows down containers because you’re going through a proxy now.
So let’s talk about microVMs and unikernels, and the way you can look at it is that microVMs slot into the abstractions that we have right now with containers, right? It just becomes part of the container infrastructure that we use to run it. But it doesn’t change how we … we’re still going to use containers and we still use all that rich ecosystem of tooling. So think of microVMs as basically a drop-in replacement for runC.
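To make that “drop-in replacement” idea concrete, here’s a sketch of how a Kata-style runtime can be registered with the Docker daemon. The file path and the `kata-runtime` binary location shown here are assumptions that depend on your particular install:

```json
{
  "runtimes": {
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

With something like this in `/etc/docker/daemon.json`, you could launch a container under the alternate runtime with `docker run --runtime kata …` while all the familiar tooling around it stays exactly the same.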
Like everyone that’s using containers now, how many of them really know what … Do they, one: even know that runC exists? And two, what does it do? And you just don’t need to know too much about that. Right? It’s like the guts of it, the implementation of the infrastructure. It’s just not too terribly important. And I kind of see that’s where microVMs slot in, versus unikernels, which we’re going to talk about much more in depth. They represent this radical shift in how we build our applications. And that is more analogous to going back 10 years when containers came online, right?
Like it was this huge shift of going from, I’m building an application that’s running on a VM, to now I’m building an application that runs inside a container. And we’ve talked about this quite a bit on previous episodes of Mobycast, where it’s just like, that is a huge mental shift. It’s a huge culture shift. That’s a process, and it’s taken a long time for that to get traction. And there is that pushback from folks when they make that change.
So you’re going to have that same kind of adoption problem with unikernels, but not with microVMs. MicroVMs just slot right in and people won’t even have to know about it. But I guarantee you going forward, the odds that you’ll be running your containers, one container per VM in a microVM is extremely high.
If you’re running in the cloud, that is going to be the pattern. But it’s just going to be infrastructure, undifferentiated heavy lifting that for the most part we’re not going to need to know about. But it is important for us to know at least what that is and what gains we get from it, and be able to talk intelligently about like, “Oh, I know what a microVM is.”
Jon Christensen: Cool. All right. Now let’s jump in. Are we going to go straight to Kata Containers?
Chris Hickman: I think so. So we’ve kind of set the groundwork. I think we know where we’re at. We’re talking through microVMs. Like I said, we spent a lot of time going into Firecracker and we finished up with that last time. Now let’s just look at another one real quick with Kata Containers. Because that’s another popular implementation that’s out there that folks may have heard about.
Jon Christensen: Right. And we even did a spoiler alert at the end of the last episode. So people already know what you’re going to say.
Chris Hickman: Yeah. And there’s not as much to say about this, because unlike Firecracker, which is a hypervisor, Kata Containers is up the stack a bit. It’s not a hypervisor itself. It is really more like that runC replacement, so it’s more at the container runtime level. It’s that portion of the container runtime where the services above it think they’re asking it to create a container, but instead what it’s doing is first creating a VM and then creating the container inside that VM.
So just like firecracker-containerd is doing that for Firecracker VMs, Kata Containers is doing that for other types of VMs. And so Kata Containers works with other hypervisors out there. So it does work with things like QEMU and KVM, and there’s also been work to integrate it with Firecracker as well. So think of this as more of like the container runtime integration point in the infrastructure.
Jon Christensen: Well, I guess what I’d like to know then, if it’s creating a VM and then running a container inside it … Maybe you just answered that and I’m not quite familiar enough with … I can’t remember exactly what KVM and QEMU do. I think they’re just different hypervisor implementations.
Chris Hickman: So yeah, together they are a hypervisor. So KVM is the Linux kernel virtual machine module, right? And so that is a way of making the Linux kernel a hypervisor. There’s two parts to it: there’s the kernel component and then there’s the userspace component. And the userspace component is responsible for emulating a lot of the devices and some of the other emulation capabilities. QEMU is that userspace component that does the emulation. So you’ll see KVM paired up with something else. KVM plus QEMU is your hypervisor. Or in the Firecracker space, it’s KVM plus Firecracker. Firecracker is that alternative to QEMU.
Jon Christensen: So there we go. That totally answers my question, because I was like, well, if Kata Containers is making VMs to run containers, then what VMs is it making? And it sounds like it’s either making … like they’re going to do a Firecracker one, so that’ll actually be a microVM, like super lightweight with a bunch of stuff removed from it. Or it’ll be a QEMU one, which is like a full-up Linux kernel that does everything.
Chris Hickman: Yeah, even QEMU, when it’s configured to use that as the hypervisor … I mean, that’s still really lightweight.
Jon Christensen: Okay.
Chris Hickman: All right. So it’s as much or as little emulation as you want, I think. So they’re very, very fast. So if you go to the QEMU project page, you’ll see, it’s like, wow! We are designed to be very high performance and you can spin up these VMs very, very quickly with this as the hypervisor. So it’s by no means this full-up, heavyweight, long-boot-time alternative. So it’s very viable to have Kata Containers and have all the benefits of Kata Containers, but using QEMU as your hypervisor as opposed to, say, Firecracker.
Jon Christensen: Okay cool.
Chris Hickman: You’re still going to get great performance gains. It’s still going to be extremely fast and really great with resource efficiency. Again, the difference there is that Firecracker has its own, basically, userspace component for the emulation, and it’s just purpose-built for AWS land and the kind of use cases that it has, versus QEMU is a little bit broader.
Jon Christensen: Okay. Super. That’s exactly what I needed to know. Thank you.
Chris Hickman: Cool. Yeah. And so maybe some things with Kata Containers that are maybe important, just from a project history standpoint: Kata is the result of a merger between two projects. So one was the Clear Containers project from Intel, and with that project they were focused on very fast boot times and enhanced security for containers. So they were looking at less than a hundred millisecond boot time and enhanced security. And so that was Clear Containers from Intel. And then the other project is from Hyper, the runV project. And that was a hypervisor-based runtime for OCI. So remember, OCI is that Open Container Initiative runtime spec that we use for our containers … like Docker was one of the folks behind that, to define what is that common protocol that we can expect from the container runtime, and be able to slot in different types of runtimes into our system, not just containerd and runC and whatnot.
Jon Christensen: And that protocol, just to kind of bring that conversation back to life for some people. What would a protocol even do? Well, containers need to be stopped. They need to be started. They need to be identified, and a few other things.
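As a rough illustration of the kind of lifecycle operations an OCI-style runtime protocol covers (create, start, kill, delete, and querying state), here’s a toy sketch in Python. This is purely a model for intuition — the class, names, and simplified state machine are illustrative assumptions, not runc’s actual implementation:

```python
# Toy model of an OCI-style container runtime lifecycle.
# Illustrative only -- not the real runc. The OCI runtime spec
# defines operations along these lines: create, start, kill,
# delete, and state (query/identify).

class ToyRuntime:
    def __init__(self):
        self.containers = {}  # container id -> lifecycle status

    def create(self, cid):
        # "created": resources allocated, process not yet running
        self.containers[cid] = "created"

    def start(self, cid):
        assert self.containers[cid] == "created"
        self.containers[cid] = "running"

    def kill(self, cid):
        assert self.containers[cid] == "running"
        self.containers[cid] = "stopped"

    def delete(self, cid):
        assert self.containers[cid] == "stopped"
        del self.containers[cid]

    def state(self, cid):
        # "identify" a container: report its id and current status
        return {"id": cid, "status": self.containers[cid]}

rt = ToyRuntime()
rt.create("web-1")
rt.start("web-1")
print(rt.state("web-1"))  # {'id': 'web-1', 'status': 'running'}
rt.kill("web-1")
rt.delete("web-1")
```

Because higher layers (Docker, Kubernetes) only speak this kind of lifecycle protocol, anything that implements it — runC, Kata, or something else — can be slotted in underneath.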
Chris Hickman: Yeah, exactly. So really the Hyper runV project was pretty similar to, again, what Kata is. So it’s that OCI-compliant piece of the stack for slotting in and saying, I’m creating VMs with containers inside those VMs, as opposed to just creating containers purely. And then the Clear Containers project was bringing some of the performance and security gains that they were implementing with containers. So those two things merged together, I think near the end of 2017, and became the Kata Containers project. It’s still relatively newish, right? So-
Jon Christensen: They’re still working on merge conflict resolution.
Chris Hickman: I think they’re beyond that, but it’s less than two years old, Kata Containers as a thing. Which in the technology world is still pretty early on, pretty early days. And so they still have a lot of work ahead of them. There has been work to integrate them with other systems. So in particular, on the Kubernetes side of the world, with the Kubernetes runtime specification, there’s CRI-O. And if anyone knows if there’s a way that folks out there pronounce that, like if it’s “cry-oh” or something like that, I would love to know, but I will just spell it out for now.
So that is the runtime spec for Kubernetes. And so Kata is one of those things that you can now slot in. So Kata is now an alternative to runC for the container runtime in Kubernetes.
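In Kubernetes, that slotting-in is typically expressed with a RuntimeClass. Here’s a hedged sketch — it assumes Kata is installed on the nodes and that the node’s CRI runtime exposes a handler named `kata`:

```yaml
# Register the Kata runtime with the cluster.
# Assumes the node's CRI runtime (containerd or CRI-O) has a
# handler configured under the name "kata".
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod opting into that runtime; everything else about the
# pod spec stays exactly the same as with runC.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
    - name: nginx
      image: nginx
```

The point is that only one line in the pod spec changes; the rest of the container abstraction is untouched.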
Jon Christensen: Cool.
Chris Hickman: So that’s one way you can use Kata containers today with Kubernetes.
Jon Christensen: So it seems like there’s a death match happening a little bit between Firecracker and Kata. And the advantage that Kata has is that they started from a point of view of like, well, let’s make sure that this is generally useful. And Firecracker started from the point of view of like, let’s make sure this solves AWS problems very, very well. And so once Firecracker was open-sourced, people were like, well, how can we make this more generally useful? And so that’s the direction it’s going. And Kata is like … I guess if you’re in AWS, maybe just use Firecracker because it’s already really well built for that environment. But outside of AWS, maybe Kata feels like the better choice.
Chris Hickman: Yeah. I mean, here’s the other way to look at it. It’s not a death match between Kata and Firecracker. It’s a death match between Kata and firecracker-containerd, right? Because with Kata, you can use Firecracker VMs, but you wouldn’t use firecracker-containerd. Right? So what’s going head to head here is Kata versus firecracker-containerd.
Jon Christensen: Well, Kata is like a couple of years old, and firecracker-containerd … I mean, how old is it? Is that even like six months old?
Chris Hickman: Yeah. So Kata is definitely further along in that space. Right?
Jon Christensen: Interesting.
Chris Hickman: So it may be like, hey, if you do want to use Firecracker VMs with Kubernetes, then we talked about last episode how [inaudible 00:18:04] has Ignite, which basically slots in like Kata Containers would, right? It’s another runtime. So it’s kind of interesting to see this explosion now of runtimes. And it’ll be interesting to see, like, are there even more of these projects out there that happen? Because for a while it was really just basically containerd and runC. But now, I mean, Docker’s kind of like, what’s going to happen there? Right?
We’ll see where this goes. But I think if you are running Kubernetes, then you’re probably looking at things like Kata or Ignite to run Firecracker VMs.
Jon Christensen: Cool.
Chris Hickman: Yeah. So that’s Kata. So again, think of it as an alternative to firecracker-containerd. And again, it’s something that’s been out there. It’s been used in production. It started off like, Clear Containers was at scale in production at companies like JD.com, which is like the big Chinese eCommerce company, and Kata Containers is being run in production by Baidu, another big Chinese internet company. So it’s there, it’s established, it’s being used in production workloads. And it’s probably further along than some of these other projects that we’ve talked about.
Jon Christensen: Right, right. It’s kind of funny though, like we say that, this container runtime is being used in big production workloads and while there is some amount of stability that you need and some amount of operational capability that you need to do that. The whole point is that like containers are kind of throwaway. So even if they’re like total chaos inside of that system, it doesn’t really matter as long as there’s enough of them running to handle your traffic. So you know what I mean? Like the irony is maybe all this stuff doesn’t need to be as mature as it once did.
Chris Hickman: Yeah. I mean it has to be mature enough though, right? So it’s like there is a big difference between like 80% like correct versus like 97% correct. So you might be able to run production 97% correct. But 80% is just too many crashes. And it’s just not going to work. So there is that. So I think we can safely assume that things like clear and Kata are at that 99% or a higher level. So pretty tried and true.
Jon Christensen: Cool.
Chris Hickman: Yeah. So I think we can kind of wrap up microVMs with that. So all those techniques are basically just, we’re going to run our containers inside a VM, and that’s going to get us back that great security and workload isolation capability that we have with VMs, but we still retain the performance, the resource efficiency, and more importantly that abstraction, the tooling and the ecosystem. We still get to work with containers without really even having to know the details of like, it’s now like [inaudible 00:21:14] is inside its own custom bespoke VM.
So let’s move on to the next technique, which is unikernels. Yes. And this is totally different. It is. And there’s not a lot of … I mean, they really are unicorns. There’s not a lot of unikernels out there in practice. I mean, we’ll get into this. We talked about this earlier in this episode, where they do represent a fundamental shift in how you go about using this tech, and how you package your application as a unikernel is completely different than containers.
It’s a completely different approach. It requires totally different tooling. Some of them require totally different programming languages. So this is a big, big, big, big change. So let’s just recount, what are they? So basically what they are is they are a lightweight and immutable operating system that’s been compiled specifically to run a single application.
So what does that mean? So normally on a system you have an operating system, and it’s designed to be multitasking and multi-threaded. It’s got all the support that an operating system would have: managing kernel versus userspace, interrupts, devices, IO, storage, all that kind of stuff. So with unikernels, what you’re doing is you’re saying, I have the single application, the single process that I want to run. Instead of running a full-blown operating system, I only want the pieces of the operating system that I need to run that single process, and I’m going to go and compile basically an executable, pulling in just that very specific code that I need. And the output is, instead of running an operating system on my server with my application on top, it’s just this executable that has only the bare minimum amount of code that it needs.
And so it’s just a single process, single application running on that machine inside that unikernel.
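To give a flavor of what unikernel tooling looks like, OSv (discussed below) builds images with a tool called Capstan, driven by a small config file. This is a hedged sketch — the base image name, paths, and build command here are illustrative assumptions, so check the OSv/Capstan docs for the real conventions:

```yaml
# Capstanfile: tells Capstan how to assemble an OSv unikernel image
# around a single application (names and paths are illustrative).
base: cloudius/osv-base   # minimal OSv base image to link against
cmdline: /hello.so        # the single process the unikernel will run
build: make               # host-side command that produces hello.so
files:
  /hello.so: hello.so     # copy the build output into the image
```

Something like `capstan build hello` would then produce the image, and `capstan run hello` boots it as a VM — one application, one kernel, nothing else.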
Jon Christensen: Right. I mean, I think when I hear this description, it just feels like everything that was old becomes new again.
Chris Hickman: Absolutely. Right? I mean is this is kind of how it all … I mean it really goes back. I mean this is-
Jon Christensen: This goes back to like the pre-operating system day, like way back.
Chris Hickman: Yeah. Before operating systems became multitasking. I think we talked about this last episode or the previous one, giving an example of the early versions of Windows, like Windows 3.1, where it didn’t make a distinction between protected kernel and userspace, and how an application, when it crashed, could possibly take down the kernel, which means you have to reboot your machine. And then they said no, we’re going to build this in a more secure way and we’re going to isolate that, so that rogue apps or poorly behaving apps can’t take down a whole system.
And so this is getting rid of all those protections, right? And so there is no difference now between kernel and userspace. You don’t need them, because there is no kernel anymore, and no differentiation between the kernel and userspace. There’s just one, and that’s just your application.
Jon Christensen: But even further, like even further back than that, I think before there were even operating systems, it was sort of like this whole, how can I just get a piece of silicon to run an application? And I just want to kind of bootstrap this application into existence. Like when you turn on this machine, this is the application that’s running.
Chris Hickman: Yeah, sure. I mean, if you go back to the … the name escapes me right now, but what’s kind of credited as the first PC, the one that Bill Gates and Paul Allen saw on the cover of Popular Electronics. In that particular one, they had to load their program in as part of turning on the machine. And that’s it. So you turned it on, and it would go run and just run through the program. And then we evolved from there.
Jon Christensen: Right, right.
Chris Hickman: Yeah. So this is kind of exactly that, right? But it really is being driven by the exact same use cases that we see with microVMs. So it’s very special purpose. We’re using unikernels for things like cloud applications that need to be very, very fast. They’re very special purpose. They’re really kind of doing one thing and one thing only. And they need to be very resource efficient. Or it’s something like an IoT application where you’re in a device and, again, same kind of constraints. So unikernels are good for the exact same things that microVMs are, targeting those environments.
Jon Christensen: Right. I guess the one thing that this makes me think about that is worth pointing out is, with a unikernel you’re still probably running code that’s generally kind of generalized. Like you kind of need all of the instructions of a general purpose processor. So it needs to be able to do everything that a general purpose processor can do, as opposed to some of the things we’ve talked about before, where it’s so specific that you can drop it down and put the code into silicon. Like the example of just turning an app into actual silicon, putting it on a chip. You see what I mean? There’s a difference between these two things.
I guess maybe though, maybe if you’re going down the unikernel path, like if you’re like, “Well, I’m going to build a unikernel for this, because we need to run it on 100,000 machines.” Perhaps if you’re like, “Actually, we need to run it on 100 million machines,” you might be like, let’s take our unikernel and figure out how to codify it into a chip. And then we just have an exec for doing what these unikernels we had before are doing. Yeah? Like that-
Chris Hickman: Well, I mean, it’s really easy just to go and throw it into a ROM. I mean, if that’s what you wanted to do, right? This could be pure hardware with the unikernel stored in ROM or something like that. Or PROM. It’s very, very minimal. It’s very, very tight. And it’s very, very specialized. So …
Jon Christensen: So I guess without knowing exactly where that line is and when you do it and how those tradeoffs work, because that’s not really our job and our expertise here at Mobycast, we can at least identify that gradient, right? Like all the way down becomes silicon, becomes ROM, becomes chip stuff, and all the way up becomes, like, continuing to run software on actual machines, any kind of software on machines, sort of custom-as-needed, bespoke software. You see that gradient? Yeah. Am I speaking gibberish here?
Chris Hickman: No, I mean it. So I think we’ll kind of get into this too, right? Where it’s just, I mean maybe just jumping to the conclusion a little bit. I’d say that microVMs and containers is really how we’re going to be interacting with this stuff. Unikernels are really interesting, and it’s important to talk about them, because they’re really aiming at some of the same problems that microVMs are addressing, but they’re just like … The adoption, the people that are going to be using those kernels are going to be a much smaller set, right? It’s really for very specific things.
So like, if you are a big cloud company and you just need to run a bunch of web servers, you may turn your web server into a unikernel and just employ the effort to do that, because it just makes sense to. You’re going to overcome the investment that goes into doing that. And the learning curve and the … like, it’s hard. That’s one of the things that’s kind of hard with unikernels: you have to recompile whenever there’s a device driver change. Right?
So versus the VMs, they abstract that away from you. So that’s one of the benefits of something like … Kata doesn’t have to worry about that. The hypervisor is responsible for dealing with devices; it just has to deal with the virtualized device. So is that kind of what you were getting at with the gradient?
Jon Christensen: Yeah. It is that, and essentially I think the key driver on the gradient is essentially how many of these things are we dealing with, and how important is efficiency. So the more we’re dealing with and the more important efficiency is, like the more you get down into unikernels. And then even beyond that, all the way to putting things into Silicon. And the less of them we’re dealing with and the less important efficiency is, the more likely you are to just type in a command to run the program for yourself so you can just use it. Like that’s a single instance of a program.
Chris Hickman: Yeah. And I think, by and large, almost all of us, like 99.99% of us, are going to be in the container microVM world, and there’s a very, very small fraction where unikernels make a lot of sense and it’s worth going through the effort to use them.
Jon Christensen: Right. We cover a lot of information here on Mobycast. And if you’ve ever wanted to go back and remind yourself of something we talked about in a previous episode, it can be hard to search through our website and transcripts to find exactly what you’re looking for.
Well now it’s a lot easier. All you have to do is go to mobycast.fm/show-notes and sign up. We’ll send you our weekly super detailed outline that we use to actually record the show. And a lot of times this outline contains more information than we get to during our hour on the air. Sign up and get weekly Mobycast cheat sheets to all of our episodes delivered right to your inbox.
Chris Hickman: Like if you’re working for a company where you’re planning to have a bunch of IoT devices that are basically just running a single application for capturing data and then sending it on, or something like that, then you should be looking at unikernels, right? It’s just one of the possibilities there. Maybe you’re on-prem and you have a very large data center and you have a really core application that’s serving a lot of requests; that may very well be a candidate.
It’s like, you know what? We’re going to turn that into a unikernel. But it’s still going to be … like in that particular case, you could have an application landscape, maybe there’s a thousand applications that your engineering team and IT department are deploying. Well, maybe only a handful of those really qualify as being worthy of being unikernels, and the rest will still be containers.
Jon Christensen: Right. I guess I want to point out that tooling matters, and if a lot of people put in a bunch of hard work around the unikernel tooling, and the next thing we know there’s a dropdown in Visual Studio that says “compile as unikernel” and that’s all you have to do … Like, man, wow! Then maybe we all start using them, right? Like, compile as a unikernel and deploy to Azure right now. Click.
Chris Hickman: Yeah, I mean, that's what's missing, right? So with a unikernel, one, you are giving up some flexibility there, right? It has to be a single process, and that may or may not be a deal breaker for your particular application. But if it's not, and if the tooling is there so that it's just as easy to create a unikernel as it is to create a container, then absolutely. I just don't see that happening anytime soon. Like the unikernel-
Jon Christensen: You’re on tape saying that.
Chris Hickman: Absolutely. And I will totally take that bet. Anyone who wants to make one, I will. Go survey the unikernel ecosystem and it's so fragmented. It's all fits and starts. There are some interesting projects out there, but-
Jon Christensen: Would you say that it's just for script kiddies? I'm just writing this down. Just for script kiddies?
Chris Hickman: I think it's for folks who just really love geeking out on hardware, working on hard problems, and dealing with kernel-level code, right? So there have been 10 or more projects out there dealing with unikernels. One of the ones that seems to get a lot of traction right now is OSv, which is really designed for Linux apps in the cloud.
And another one that's been around for quite some time is MirageOS. And there are other ones like IncludeOS. There's rump and the rump kernels. There's one called runtime.js. A lot of these were started four, five, six, seven years ago, and they had a flurry of activity and then they've kind of died off.
And with MirageOS, it looks like a lot of the activity has trailed off in the last couple of years as well. MirageOS also requires you to write your application in OCaml. So that's a really big change, right?
Like I said, we're now going to have to go learn this new language. I mean, come on, it's not even like Rust or Python. It's OCaml. So...
Jon Christensen: Yeah. You know, join us at the OCaml meetup on Thursday, where Jerry talks about how he's incorporating unikernels into his dev process.
Chris Hickman: I know. It's like, "Wow, instead of three people, there's four people tonight. This is amazing. Let's go attack the new guy." He's like, "Oh, I thought I was here for the Rust meetup." Yeah. So it just-
Jon Christensen: But seriously though, I mean, there were people having the same conversation about Docker 10 years ago, saying, "Well, it's just for people that like playing around with cgroups."
Chris Hickman: Right. But here's the difference, though. Unikernels have been around for... I mean, this community has been there for five-plus years.
Jon Christensen: Yeah. But didn’t we also do an episode where we talked about how containers had been around for 20 years or something like that? Or like a really long time?
Chris Hickman: Yeah. Until the tooling came about. Yes. So …
Jon Christensen: Okay. That's the only point I'm trying to make. If it turns out that they're wildly beneficial once certain tooling comes around, then everybody's going to jump on, because that's what we do as developers. We love the new and...
And the next generation that's tired of hearing graybeards talk about containers and "how I was there when blah, blah, blah" is going to be like, where is our thing? And it could be unikernels. Who knows?
Chris Hickman: Yeah, again, unikernels are just so specialized. It's going to limit the use cases you can use them for. So I think if the right tooling comes along, they could become much more popular than they are right now. But again, it's not going to be general purpose. And it's that big shift, right? It's really similar to what it took when we went from building for VMs to building for containers, to go to this idea of: I am no longer in an operating system. I'm a single address space, a single process. It's just a radically different way of building an application and you just-
Jon Christensen: But listening to you say that, it sounds so cool. That's cool. I want to do that.
Chris Hickman: Yeah. And there's nothing... so, okay, maybe we should talk about that. There are many projects out there in this unikernel space. Really, OSv, I think, is the one that has the most momentum right now. It's also interesting to note that with Firecracker, the only guest other than Linux that it supports is OSv.
So that gives some credence to the OSv project, for sure. So you can have your unikernel done in OSv, and then you can run it inside a Firecracker VM. And that is supported today.
Jon Christensen: I want to get my head around what OSv is, like how to think about it. Because I can kind of think about a unikernel, and I can kind of think about, well, somehow you have to statically link your code to the kernel, right? That's essentially what you have to do. And I want to picture what OSv does, or what it is, that enables me to do that.
Chris Hickman: I think you can really just think of it as a compiler. And it requires a hypervisor, right? Because they don't want to go and try to have support for all the different device drivers and everything else. So it requires a hypervisor. It's leaning on that virtualized model so it doesn't have to worry about that, and lets the hypervisor worry about device drivers and whatnot. But basically, it is a compiler, right?
So you have your application, and you're now going to run it through a compiler that's going to spit out basically an executable kernel in an image format; it's basically going to create a VM image, right? But it's no longer this full-blown operating system. There's no fork command. You can't spawn off a new process or anything like that. It's just been statically linked with all the code that it does need. These unikernels, they say that they focus on the protocol implementations rather than device drivers. And that really just means they are writing the OS libraries needed to give the support the program expects, so that it's as if it were running on a full-blown operating system.
I think of it as a compiler that's building a VM image. That's really how to look at it.
Jon Christensen: It's like with all of these projects, the name kind of gets applied to everything. So you go and you get the OSv toolkit, and that's the compiler that you're actually downloading, probably. And then you run, probably, a CLI, I bet it's like "osv build" or something, who knows what it is. And then you point it at your directory and it builds it, and out comes an image, like you just mentioned. And the part where you said Firecracker has support for OSv, what that means is that that image, you would call it an OSv image. And when it's actually instantiated and run, then you would call that an OSv unikernel. So everything kind of gets that name OSv on it, even though they're different things: there's the OSv compiler, the OSv image, and the OSv actual running unikernel.
Chris Hickman: Yeah. Cool. When you tell Firecracker, hey, I want to create the VM, there are API commands and command-line flags that tell it: do you want to use a Linux OS, or do you want to use OSv? And if OSv, here's the path to my image to go use.
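For concreteness, here's a rough sketch in Python of the two request bodies Firecracker's Unix-socket REST API expects when you point it at a kernel image like OSv: a PUT to /boot-source, then a PUT to /actions to boot. The file paths and boot_args value are assumptions for illustration, not taken from the episode.

```python
import json

# Hypothetical locations; adjust to wherever your OSv image and
# Firecracker API socket actually live.
OSV_IMAGE = "/var/osv/app.img"
API_SOCKET = "/tmp/firecracker.socket"

# Firecracker is configured over a REST API served on a Unix socket.
# PUT /boot-source names the kernel to boot (here an OSv image in
# place of a Linux kernel); PUT /actions then starts the microVM.
boot_source = {"kernel_image_path": OSV_IMAGE, "boot_args": "console=ttyS0"}
start_action = {"action_type": "InstanceStart"}

for path, body in [("/boot-source", boot_source), ("/actions", start_action)]:
    # Something like `curl --unix-socket /tmp/firecracker.socket -X PUT`
    # would send these; here we just print the payloads.
    print(path, json.dumps(body))
```

The point is that from Firecracker's perspective, an OSv unikernel is just another kernel image to boot; nothing else in the VM configuration has to change.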
Jon Christensen: But that sounds fun. I want to go do that right now.
Chris Hickman: Well, I mean, you can. There'll be links in the show notes to all these different project pages, and they all have tutorials. And if you want to go write some OCaml, you can; MirageOS will show you how to do that.
Jon Christensen: That I don’t want to do.
Chris Hickman: If you want to go create something on OSv, you can go download Firecracker yourself and OSv yourself, and go build out this stuff and play around with it.
So, just on the tooling landscape here, there is a kind of newish tool out there called... it's pronounced "you-neek". It's spelled out-
Jon Christensen: Oh! I totally just read that as Unix. I didn’t even notice that was a K.
Chris Hickman: Yeah. So the way it's spelled, it looks like "Uni" and then a capital K, but apparently it's pronounced "you-neek".
Jon Christensen: And it looks like the word Unix. Bad job naming, people, whoever did that. Try harder next time.
Chris Hickman: Yes. And so what this is, is it's trying to be some of that Docker-like tooling for making unikernels. What it does is support building unikernels for MirageOS, for IncludeOS, for OSv, and for rump. For each one of those, though, it's got to be in specific languages, right? So if you want to make a MirageOS unikernel, then you need to write your application in OCaml. If it's IncludeOS, it's C++ code. OSv supports Java, Node, C, and C++. Rump is Python, Node, and Go. And then for Firecracker, it needs to be Go.
So if you write your application in Go, then you can use UniK to compile that into a Firecracker VM.
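As a sketch of what that workflow might look like, here's a small Python helper that assembles a UniK build invocation. The flag names follow the general shape of UniK's documented CLI, but the exact command, and whether "firecracker" is a valid provider name on a given install, are assumptions to verify against `unik --help`.

```python
import shlex

def unik_build_cmd(name: str, path: str, language: str, provider: str) -> str:
    """Assemble a `unik build` command line (flag shape is an assumption)."""
    parts = [
        "unik", "build",
        "--name", name,          # name for the resulting unikernel image
        "--path", path,          # directory containing the application source
        "--language", language,  # source language, e.g. go, java, python
        "--provider", provider,  # target platform the image will run on
    ]
    return " ".join(shlex.quote(p) for p in parts)

# A Go app compiled into an image for a hypothetical Firecracker provider:
cmd = unik_build_cmd("hello", "./hello-go", "go", "firecracker")
print(cmd)
```

The appeal is exactly the Docker analogy from the conversation: one build command, pointed at a source directory, that spits out a bootable image.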
Jon Christensen: Cool.
Chris Hickman: So yeah, UniK is open source. It's from the solo.io team, and we've talked about them in the past with things like microservices observability. They have a whole suite of open-source software for dealing with cloud development. So they have things like Gloo, and there are some tools around observability; lots of really low-level, technically interesting stuff they're doing.
I think I've mentioned this in the past: I've felt a little bamboozled at some of these conferences where I go to a talk that sounds really cool and it ends up being a demo of one of these solo.io projects, as opposed to being more general purpose, right? It kind of feels like it's an ad. But all of that said-
Jon Christensen: Yeah. Which is tough because it’s like it’s an open source project, right? So is it an ad or is it not?
Chris Hickman: But yeah, so UniK is out there, and that's its goal, right? To be that Docker-like tooling for unikernels. So something else to-
Jon Christensen: Those folks who are listening to Mobycast, when I was saying I think this is all going in this direction, they were like, "Yeah, yeah, he's right." And when you were saying it's not, they were like, "No, no, he's wrong."
Chris Hickman: I would gladly have a smackdown with them, and it'll be friendly, but I'll still stick to my guns. And I think we'll all come to a point where we can agree it's not a death match between containers and unikernels, that they each are going to have their place, and it's just containers by far. Conternals. There we go. That's the future: conternals. Yes. TM. I'm trademarking that right now. I'm going to go get that domain name.
Jon Christensen: I love that.
Chris Hickman: Conternals.io. You bet. So there's room for both, but by and large, containers will have the majority of the market share, and unikernels will be relegated to a small, specialized, but important part of that.
Jon Christensen: Cool. We have a couple of minutes left, Chris and I just wanted to … Are we done with the outline for that?
Chris Hickman: So we went through microVMs in depth, we now, I think, really understand unikernels and their landscape, and we talked about sandboxes. And really quick on sandboxes: it's really Google's gVisor project that's taking that approach. We've talked about containers; one of the security problems they have is that there's a shared kernel across all containers, including with the host. And so what that project is doing is creating a kernel proxy for every container, and they're actually implementing the syscalls of the Linux kernel in the code for that proxy. And that way, instead of having just a single kernel for the entire system, each container has its own kernel, and that becomes the isolation.
So it’s not a VM in that really strong isolation, but it is tighter and it’s more locked down because they’re no longer sharing a kernel.
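To make that concrete: gVisor ships its user-space kernel as a runtime binary called runsc, and wiring it into Docker is a small daemon.json entry, after which a container's syscalls are handled by gVisor's kernel proxy instead of going straight to the host kernel. A minimal sketch (the install path is an assumption):

```python
import json

# Registering gVisor's runsc runtime with Docker via daemon.json.
# The binary path is an assumption; use wherever runsc is installed.
daemon_config = {
    "runtimes": {
        "runsc": {"path": "/usr/local/bin/runsc"}
    }
}

# This would be written to /etc/docker/daemon.json; after restarting
# dockerd, a sandboxed container can be run with:
#   docker run --runtime=runsc --rm hello-world
print(json.dumps(daemon_config, indent=2))
```

Note the trade-off described above: containers opt into the runsc runtime per-run, so ordinary containers and gVisor-sandboxed ones can coexist on the same host.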
Jon Christensen: Right. I don't know. It does seem like, well, if you're not taking advantage of the hardware capabilities of the chip that lets you do virtualization, then can you really trust that software to really break it down and isolate it?
Anyway. Maybe. It'll be interesting to see where that goes. And working on the isolation problem from the container side, instead of working on the speeds-and-feeds problem from the VM side, does seem like a worthwhile thing for some people to throw their minds at, for sure.
Chris Hickman: Yeah.
Jon Christensen: So yeah, just because we had a couple more minutes here, I wanted to get your thoughts on something, just completely off the cuff. I think I sent you an article the other day, or just a link to Oxide Computer; that's Jess Frazelle's new company. Did you see that? Are you aware of them?
Yeah, I just thought it'd be kind of fun to talk about it for five minutes. It's interesting because we've been saying all of the innovation is happening inside the cloud, and we're even talking about that exact innovation in these past three episodes. But we also talked about it in our re:Invent post-shows: moving the hypervisor into silicon, and moving some of the networking, and changing the way the networking protocols work. And so basically, Jess and her partners are making a bet that they can give people that operate on-prem some of the advances that have happened, and build an actual server. And so, I don't know, do you think they can? Do you think that they're going to...? Are you interested in seeing what they do with that?
Chris Hickman: So, yeah. I guess the caveat is I'm not an expert in the on-prem market space, right?
Jon Christensen: Sure. Me neither.
Chris Hickman: So what we do know is we've seen numbers where it's like, oh, 3% of the compute workload is in the cloud and 97% is not. So okay, that sounds like quite a bit. But I don't know: are we talking about folks where a single rack of computers makes up the bulk of that? Or is it people with full-blown data centers with 20,000 servers in them?
So it really boils down to: what's the market for this? Are there enough folks out there that really have the need for this? That's one of the reasons why cloud is driving innovation, because they have a very real business need.
So Google and Facebook started off leading the way with this, just because they had to. Google was like, we have to have millions of computers, so we're going to start building our own, and we're going to build based on a blade-type architecture and just keep going from there. We need to have hardware folks on the team continually doing that, because that's the only way we're going to be able to scale.
And Facebook had the exact same problem. And then obviously public clouds like AWS, and now Google Cloud and Azure, they all have the same problems too. So it makes a lot of sense. I'm just not familiar enough with the customers in the on-prem market space. Is this a compelling enough solution for them? Do they really have that problem, or are they okay buying blade servers or 1U servers from folks like Dell or whoever else they're getting their hardware from? I've been so removed from that on-prem space, thankfully, for so long. I can't imagine going and building out my own data center now.
Jon Christensen: Right. You kind of can get a sense of it, though, right? It just feels like, when you go to conferences and you see the kinds of companies that might sell that kind of stuff, it's like, yeah, that doesn't look new-ish at all. And I can kind of imagine that maybe a reason for building this company, this Oxide Computer, is because the blade server companies are like, "Yeah, we're on version 5.8, and with 5.8 we've added this faster processor that we bought from Intel. We just put it in there."
And they're just adding more and more of whatever the commercially available hardware is. But inside the cloud, they're like, yeah, we're buying these machines, we're ripping them open, and we're changing out all these components. We're making them our own. Or we're even manufacturing them ourselves, or whatever.
So it does feel like there's probably a pretty substantial difference between what's inside the box on a computer sitting inside an AWS data center or a Google data center, versus what's inside the box on a computer running in a proprietary healthcare company's data center.
Chris Hickman: And that's totally true, right? And again, it's just: is there a need for having that AWS level of specialization on-prem? Are people willing to pay for it? And not only pay for it, but are they willing to take the chance, right?
Jon Christensen: Yeah, for sure. That’s a big one.
Chris Hickman: This is really important stuff, versus saying, I'm going to bet it all on this brand new thing coming out of a startup. I mean, this is the meat and potatoes.
I'll go out on a limb with a prediction: they're acquired within 18 months to two years, by a hardware company.
Jon Christensen: Yeah. And then, yeah, that makes sense. Because-
Chris Hickman: That to me is like a no-brainer.
Jon Christensen: Yeah. Because they'll probably be successful at making a pretty cool machine. And if they can make that, they'll have a hard time selling it because of the problem you just said. But then somebody will look at it and go, "Oh, we want that. We can just pull that in-house." And they have all the sales relationships and everything they need to make it go. Yeah.
Chris Hickman: Yeah. I think it just makes a lot of sense: this is core technology that can really benefit one of the bigger companies selling hardware, and it'll be a no-brainer. And they have a great team, a great founding team. Bryan Cantrill is one of the three founders; he spent a long time at Joyent as one of their primary leaders. And I mean, he was one of the key stewards of Node.js during those formative years and whatnot. And I think before that he was at Sun. So, really strong pedigree in that space.
Jon Christensen: Super cool. Yeah, and it's a fun company to watch, just because, I mean, you literally get to think about Halt and Catch Fire, that TV show, and just like that, starting a computer company. Wow! Here we are in 2020 and somebody's starting a computer company. How fun. Very cool. Let's talk again next week. I'm not sure what we're going to talk about. Do you know?
Chris Hickman: You know, who knows? It's going to be a surprise.
Jon Christensen: Okay, we’re going to surprise everybody.
Chris Hickman: Okay.
Jon Christensen: All right. Thanks so much for this episode, Chris.
Chris Hickman: Thanks Jon. Talk to you later.
Jon Christensen: Talk to you next week.
Chris Hickman: Bye.
Jon Christensen: Bye.
Stevie Rose: Thanks for being on board with us on this week's episode of Mobycast. Also, thanks to our producer, Roy England. And I'm our announcer, Stevie Rose. Come talk to us on mobycast.fm or on Reddit at r/mobycast.