Nested virtualization is the ability to run a virtual machine inside another virtual machine. In a standard setup, a hypervisor (software like VMware, Hyper-V, or KVM) runs on physical hardware and creates virtual machines. With nested virtualization, one of those virtual machines acts as its own hypervisor and spins up additional virtual machines within it. The physical server is called the L0 host, the first virtual machine is L1, and the virtual machine running inside it is L2.
How Nested Virtualization Works
Modern CPUs include hardware virtualization extensions, Intel VT-x and AMD-V, that let hypervisors efficiently manage virtual machines. Normally, these extensions are consumed by the first hypervisor running on the physical hardware. Nested virtualization works by exposing those same hardware extensions to a guest VM, so the guest can use them to run its own hypervisor and create VMs of its own.
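Whether the extensions actually reach a guest is easy to check from inside it: on Linux, the CPU flags in /proc/cpuinfo include vmx (Intel VT-x) or svm (AMD-V) when the parent hypervisor passes them through. A minimal sketch of that check, using a hypothetical sample flags line in place of the live file:

```shell
# On a real guest, read the live file instead of the sample below:
#   grep -E 'vmx|svm' /proc/cpuinfo
# The flags line here is a hypothetical sample, for illustration only.
sample='flags : fpu vme de pse msr pae vmx smx est tm2'
case "$sample" in
  *vmx*) msg="Intel VT-x exposed to this guest" ;;
  *svm*) msg="AMD-V exposed to this guest" ;;
  *)     msg="no hardware virtualization extensions visible" ;;
esac
echo "$msg"
```

If the flags are missing inside an L1 guest, the L0 hypervisor has not been configured to expose them, and no nested hypervisor will be able to start hardware-accelerated VMs.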
On Intel systems, VT-x can be toggled on or off in the BIOS or UEFI firmware, typically under menus labeled Chipset, Advanced CPU Configuration, or Security Settings. The exact label varies by manufacturer and may appear as “Virtualization Extensions,” “Vanderpool,” or “Intel Virtualization Technology.” AMD-V extensions, by contrast, cannot be disabled in the BIOS and are enabled by default. Even when the physical CPU supports the extensions, the L0 hypervisor still needs to be configured to pass them through to the L1 guest.
Why People Use It
The most common reason to nest virtual machines is software development and testing. If you’re building tools that interact with hypervisors, or you need to test deployment scripts across multiple operating systems, nested virtualization lets you create throwaway environments without needing extra physical servers. A single workstation can simulate an entire multi-machine infrastructure.
Training and certification labs are another major use case. IT instructors can give each student a single VM that contains an entire virtualized lab, complete with its own network of inner VMs. This is far simpler than provisioning dedicated hardware for every learner. Network engineers use tools like GNS3 inside nested setups to emulate complex router and switch topologies.
Consulting firms use nested virtualization for client demos and pilot tests, spinning up isolated proof-of-concept environments that can be torn down after a presentation. It also plays a role in disaster recovery testing, where organizations can validate backup and restore procedures inside a sandboxed virtual environment without risking production systems.
Where You’re Already Using It
If you run Windows Subsystem for Linux (WSL2) on Windows 10 or 11, you’re already benefiting from nested virtualization concepts. WSL2 uses a lightweight utility VM built on a subset of the Hyper-V architecture to run a real Linux kernel. That VM starts and stops automatically and manages its own resources. If you want to run WSL2 itself inside a virtual machine, such as a Hyper-V guest, nested virtualization must be explicitly enabled on the parent host.
Hypervisor Support and Configuration
Not every hypervisor supports nesting, and the ones that do each require specific configuration steps.
Hyper-V requires a single PowerShell command on the host to expose virtualization extensions to a guest:
Set-VMProcessor -VMName "YourVMName" -ExposeVirtualizationExtensions $true
The VM must be powered off when you run this. Once enabled, the guest can install its own copy of Hyper-V or another hypervisor and create inner VMs.
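Two related settings are worth knowing once nesting is on. A hedged sketch ("LabVM" is a placeholder VM name): you can confirm the processor setting with Get-VMProcessor, and if the inner VMs need network access, Hyper-V's documentation also calls for enabling MAC address spoofing on the outer VM's network adapter (or using a NAT network instead).

```powershell
# "LabVM" is a placeholder name; run these on the L0 host.
# Confirm the virtualization extensions are exposed to the guest:
Get-VMProcessor -VMName "LabVM" | Select-Object ExposeVirtualizationExtensions

# Let packets from inner (L2) VMs pass through the L1 VM's adapter:
Set-VMNetworkAdapter -VMName "LabVM" -MacAddressSpoofing On
```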
KVM on Linux enables nesting through a kernel module parameter: loading kvm_intel (or, on AMD systems, kvm_amd) with nested=1 exposes the virtualization extensions to guest VMs. Red Hat tests nested KVM primarily on RHEL systems, supporting both RHEL and Windows as L1 guest hypervisors. One AMD-specific limitation to be aware of: live migration of VMs does not work while nested virtualization is enabled on the L0 host.
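In practice, checking and enabling the parameter looks roughly like this (a sketch for Intel hosts; substitute kvm_amd on AMD, and note the modprobe options file path is the conventional location, not a requirement):

```shell
# Check whether nesting is currently enabled; prints Y or 1 when it is.
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
  || echo "kvm_intel not loaded"

# To enable it persistently, add a modprobe options file and reload
# the module (all L1 guests must be shut down first):
#   echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
#   sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
```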
VMware ESXi supports nesting as well, though when used as an L0 host with a Linux-based L1 guest running KVM, Red Hat notes this configuration is not formally tested. It generally works for development and lab purposes, but you may encounter edge cases.
Nesting is currently supported on Intel, AMD, IBM POWER9, and IBM Z architectures. ARM processors do not support it. Going deeper than two levels (creating an L3 guest inside an L2 guest) has not been properly tested and is not expected to work reliably.
Nested Virtualization in the Cloud
Major cloud providers offer nested virtualization on select instance types, which is useful when you need to run a hypervisor inside a cloud VM for testing or development.
On AWS, nested virtualization is supported on C8i, M8i, and R8i instance types. These are standard (non-bare-metal) EC2 instances where you can run hypervisors like Hyper-V or KVM inside the guest. For workloads that are performance-sensitive or have strict latency requirements, AWS recommends bare metal instances instead, since they give you direct access to the hardware without the overhead of a parent hypervisor.
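Before configuring a nested hypervisor on EC2, it's worth confirming you're actually on one of those instance types. A sketch using the instance metadata service (IMDSv2, only reachable from within an EC2 instance):

```shell
# Request a short-lived IMDSv2 token, then query the instance type.
TOKEN=$(curl -s --max-time 5 -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s --max-time 5 -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-type
```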
Microsoft Azure initially launched nested virtualization support on Dv3 and Ev3 VM sizes and has expanded to additional sizes since then. If you’re running Hyper-V labs or testing deployment automation in Azure, these are the instance families to look at.
Performance Overhead
Running a virtual machine inside a virtual machine adds measurable overhead. Every instruction from the L2 guest must pass through two layers of virtualization instead of one, and I/O operations (disk reads, network traffic) take the biggest hit.
Research from USENIX found that nested file system interactions can degrade throughput by as much as 67% in worst-case scenarios, such as running certain Linux file systems layered on top of each other. Even in the best cases, latency increased by 5 to 15% across the board for typical workloads like web serving. CPU-bound tasks tend to fare better than storage-heavy ones, but you should expect noticeable slowdowns compared to running the same workload in a single-layer VM.
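As a back-of-envelope illustration of what those percentages mean in practice (the 500 MB/s single-layer baseline and the 100 ms operation are hypothetical numbers, not measurements):

```shell
awk 'BEGIN {
  base = 500   # hypothetical single-layer throughput, MB/s
  # Worst-case 67% throughput degradation from the USENIX figures:
  printf "worst-case nested throughput: %.0f MB/s\n", base * (1 - 0.67)
  # Typical 5-15% latency overhead applied to a hypothetical 100 ms op:
  printf "added latency: %.0f-%.0f ms\n", 100 * 0.05, 100 * 0.15
}'
```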
This is why nested virtualization is primarily a tool for development, testing, and training rather than production workloads. The convenience of running hypervisors inside VMs comes at a real performance cost, and most organizations reserve it for scenarios where flexibility matters more than speed.
Security Implications
Each layer of virtualization adds an additional isolation boundary. An L2 guest must escape through both the L1 hypervisor and the L0 hypervisor to reach the physical host, and each nested hypervisor can enforce its own security policies and configurations. In that sense, nesting can add defense in depth.
The tradeoff is a larger attack surface. More hypervisor code running means more potential vulnerabilities. A bug in the L1 hypervisor’s handling of nested virtualization extensions could expose the L0 host in ways that wouldn’t exist in a single-layer setup. For production environments where security is critical, running workloads directly on L1 VMs with a well-hardened L0 hypervisor is a simpler, more auditable approach. Nested setups are best suited to isolated lab and development environments where the risk profile is more forgiving.

