82. Virtual Machines vs. Containers Revisited – Part 2
Summary
Cloud computing would not be possible without virtual machines; they are the fundamental resource underpinning cloud-native applications. Then along came Docker and its containers, and the virtualization scene got a bit more complicated and confusing.
So, we kicked off a new series where we go deep on virtual machines and containers, aiming to clear up any confusion between these important technologies.
In episode #81 of Mobycast, we discussed full virtualization, also known as virtual machines. We explained hypervisors, the fundamental technology that enables virtual machines. And then we took a detailed look at how Type 1 hypervisors work.
In today’s episode of Mobycast, Jon and Chris bring these concepts to life by examining several popular hypervisor implementations.
Show Details
In this episode, we cover the following topics:
- Hypervisor implementations
  - Hyper-V
    - Type 1 hypervisor from Microsoft
    - Architecture
      - Implements isolation of virtual machines in terms of partitions
        - A partition is the logical unit of isolation in which each guest OS executes
      - Parent partition
        - Virtualization software runs in the parent partition and has direct access to hardware
        - Requires a supported version of Windows Server
        - There must be at least one parent partition
        - The parent partition creates the child partitions which host the guest OSes
          - Done via the Hyper-V "hypercall" API
        - Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus
          - Handles device access requests from child partitions
      - Child partition
        - Does not have direct access to hardware
        - Has a virtual view of the processor and runs in a Guest Virtual Address space (not necessarily the entire virtual address space)
        - The hypervisor handles interrupts to the processor and redirects them to the respective partition
        - Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition
      - VMBus
        - Logical channel which enables inter-partition communication
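The parent/child partition split above can be sketched as a toy simulation. Everything here is illustrative: the class and method names are invented for the sketch and are not real Hyper-V APIs.

```python
# Toy model of Hyper-V's partition architecture: child partitions have no
# direct hardware access, so device requests travel over the VMBus to a
# Virtualization Service Provider (VSP) in the parent partition.
# All names are illustrative -- none of this is a real Hyper-V API.

class ParentPartition:
    """Runs the virtualization stack; has direct access to hardware."""
    def handle(self, request):
        return f"parent handled {request} against physical hardware"

class VMBus:
    """Logical channel for inter-partition communication."""
    def __init__(self, parent):
        self.parent = parent

    def send(self, request):
        # Redirect the child's device request to the parent partition.
        return self.parent.handle(request)

class ChildPartition:
    """Hosts a guest OS; sees only virtual devices."""
    def __init__(self, vmbus):
        self.vmbus = vmbus

    def read_disk(self):
        # No direct hardware access: the request goes over the VMBus.
        return self.vmbus.send("disk read")

child = ChildPartition(VMBus(ParentPartition()))
print(child.read_disk())  # the parent partition services the request
```

The point of the sketch is the indirection: the child never touches hardware itself; every device request is funneled through the VMBus to the parent.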
  - KVM (Kernel-based Virtual Machine)
    - Virtualization module in the Linux kernel
      - Turns the Linux kernel into a hypervisor
      - Available in mainline Linux since 2007
    - Can run multiple VMs running unmodified Linux or Windows images
    - Leverages hardware virtualization
      - Via CPU virtualization extensions (Intel VT or AMD-V)
      - But also provides paravirtualization support for Linux/FreeBSD/NetBSD/Windows guests via the VirtIO API
    - Architecture
      - Kernel component, consisting of:
        - A loadable kernel module, kvm.ko, that provides the core virtualization infrastructure
        - A processor-specific module, kvm-intel.ko or kvm-amd.ko
      - Userspace component
        - QEMU (Quick Emulator)
          - Userland program that does hardware emulation
          - Used by KVM for I/O emulation
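KVM depends on the CPU extensions mentioned above, which Linux reports in `/proc/cpuinfo`: Intel VT-x shows up as the `vmx` flag, AMD-V as `svm`. A small sketch of that check, written against cpuinfo-style text so it can run anywhere:

```python
# Detect hardware virtualization support the way it appears in
# /proc/cpuinfo: "vmx" means Intel VT-x, "svm" means AMD-V.
# The parser takes the file contents as a string so it is testable
# on machines without a /proc filesystem.

def virt_extensions(cpuinfo_text):
    """Return the hardware virtualization flags ('vmx'/'svm') present."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(f for f in ("svm", "vmx") if f in flags)

# On a real Linux host you would feed it the actual file:
#   with open("/proc/cpuinfo") as f:
#       print(virt_extensions(f.read()))

sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
print(virt_extensions(sample))  # ['vmx'] -> KVM can use Intel VT
```

An empty result means KVM would have no hardware extensions to lean on (for example, inside a VM without nested virtualization enabled).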
  - AWS hypervisor choices & history
    - AWS uses custom hardware for faster EC2 VM performance
    - Original EC2 technology ran a highly customized version of the Xen hypervisor
      - VMs can run using either paravirtualization (PV) or hardware virtual machine (HVM) mode
        - HVM guests are fully virtualized
      - VMs on top of the hypervisor are not aware they are sharing the host with other VMs
      - Memory allocated to guest OSes is scrubbed by the hypervisor when it is de-allocated
      - Only AWS admins have access to the hypervisors
    - AWS found that Xen has many limitations that impede their growth
      - Engineers improved performance by moving parts of the software stack to purpose-built hardware components
    - C3 instance family (2013)
      - Debut of custom chips in Amazon EC2
        - Custom network interface for faster bandwidth and throughput
    - C4 instance family (2015)
      - Offloads network virtualization to custom hardware, with an ASIC optimized for storage services
    - C5 instance family (2017)
      - Project Nitro
        - Traditional hypervisors do everything
          - Protect the physical hardware and BIOS; virtualize the CPU, storage, and networking; handle management tasks
        - Nitro breaks apart those functions, offloading them to dedicated hardware and software
        - Replaces Xen with a highly optimized KVM hypervisor tightly coupled with an ASIC
        - Very fast VMs, approaching the performance of a bare metal server
    - Amazon EC2 bare metal instances (2017)
      - Use Project Nitro
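The instance-family progression above can be condensed into a small lookup table. This is a study aid distilled from the episode's timeline, not an official or exhaustive AWS mapping:

```python
# EC2 virtualization timeline as discussed in the episode:
# family -> (year, hypervisor generation, what moved to custom hardware).
# A summary of the show notes, not an authoritative AWS reference.

EC2_VIRT_TIMELINE = {
    "C3": (2013, "customized Xen", "custom network interface chips"),
    "C4": (2015, "customized Xen", "network virtualization offloaded to an ASIC"),
    "C5": (2017, "Nitro (KVM-based)", "hypervisor functions offloaded to dedicated hardware"),
}

def describe(family):
    year, hypervisor, offload = EC2_VIRT_TIMELINE[family]
    return f"{family} ({year}): {hypervisor}; {offload}"

for family in EC2_VIRT_TIMELINE:
    print(describe(family))
```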
Links
- Xen Project
- Kernel Virtual Machine
- QEMU
- Mastering KVM Virtualization
- Hyper-V
- AWS Nitro System
- AWS re:Invent 2018: Powering Next-Gen EC2 Instances: Deep Dive into the Nitro System
- AWS re:Invent 2017: C5 Instances and the Evolution of Amazon EC2 Virtualization
We’d love to hear from you! You can reach us at:
- Web: https://mobycast.fm
- Voicemail: 844-818-0993
- Email: ask@mobycast.fm
- Twitter: https://twitter.com/hashtag/mobycast
Coming soon…