An LXC container is a lightweight virtual environment that runs a complete Linux operating system using isolation features built into the Linux kernel, rather than emulating hardware the way a traditional virtual machine does. Think of it as a way to run multiple independent Linux systems on a single host, where each one gets its own files, processes, network, and users, but they all share the same underlying kernel. This makes LXC containers fast to start, efficient with resources, and closer to bare-metal performance than hypervisor-based alternatives like KVM or VirtualBox.
How LXC Isolation Works
LXC relies on two core Linux kernel features: namespaces and control groups. Namespaces create the illusion that each container has its own private system. There are six key namespaces that LXC uses, each isolating a different part of the operating system:
- PID namespace: Each container sees only its own processes. Process ID 1 inside the container is just a regular process on the host.
- Network namespace: Each container gets its own network stack with separate IP addresses, routing tables, and firewall rules.
- Mount namespace: Each container has its own filesystem tree, so it can mount and unmount volumes without affecting the host or other containers.
- User namespace: User and group IDs inside the container are mapped to different IDs on the host, which is critical for security.
- UTS namespace: Each container can have its own hostname.
- IPC namespace: Inter-process communication channels are isolated so containers can’t eavesdrop on each other’s messages.
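Namespaces aren't LXC-specific; the kernel exposes every process's namespace memberships under /proc, so you can inspect them on any Linux host without LXC installed:

```shell
# Each entry under /proc/<pid>/ns is a handle to one namespace.
# Two processes share a namespace when their handles show the same
# inode number; $$ is the current shell's PID.
ls -l /proc/$$/ns
```

On a typical host this lists handles named pid, net, mnt, user, uts, and ipc (plus a few others, such as cgroup and time, that the kernel added later).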
Control groups (cgroups) handle the resource side. They let the host limit how much CPU, memory, disk I/O, and network bandwidth each container can consume. Without cgroups, a single runaway container could starve the rest of the system. Together, namespaces provide isolation and cgroups provide resource control. LXC also layers on additional security through kernel capability restrictions, mandatory access control profiles (like AppArmor or SELinux), and system call filtering to limit what the container can ask the kernel to do.
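In LXC these limits are set with lxc.cgroup2.* keys in the container's config file. As a sketch (assuming a cgroup v2 host; the specific values here are illustrative):

```
# Fragment of an LXC container config (e.g. /var/lib/lxc/<name>/config).
# Cap the container at 512 MiB of RAM:
lxc.cgroup2.memory.max = 512M
# Allow 50ms of CPU time per 100ms period, i.e. half of one CPU:
lxc.cgroup2.cpu.max = 50000 100000
```

Each key maps directly onto a file the kernel exposes in the container's cgroup directory (memory.max, cpu.max), so anything cgroups can limit, an LXC config can express.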
System Containers vs. Application Containers
The biggest conceptual difference between LXC and tools like Docker is what lives inside the container. LXC creates “system containers,” which house a complete Linux environment: an init system, user space utilities, package managers, and potentially dozens of running services. They behave like a full operating system. You can SSH into them, install packages, run cron jobs, and treat them much like a lightweight virtual machine.
Docker, by contrast, builds “application containers.” Each Docker container typically runs a single application and its dependencies, packaged into a portable image. Docker containers are designed to be ephemeral and replaceable. LXC containers are designed to be persistent and long-lived.
This makes LXC particularly useful for running legacy applications that expect a full system environment, for development machines where you need a complete OS to work in, or for hosting providers who want to give users isolated Linux instances without the overhead of full virtualization. Docker is a better fit when you want to package and ship a single application consistently across different environments.
Performance Compared to Virtual Machines
Because LXC shares the host kernel instead of emulating hardware, its performance is very close to running directly on the physical machine. Benchmarks on Proxmox comparing LXC to KVM (a hypervisor that presents virtualized hardware to each guest) show the difference clearly.
For CPU-bound tasks, LXC and KVM perform similarly when given the same number of cores. Single-threaded workloads like chess AI calculations and MP3 encoding show nearly identical numbers. Where LXC pulls ahead is disk I/O. In a 4GB read test, LXC achieved roughly 1,319 MB/s compared to KVM’s 376 MB/s. Write performance showed a similar gap: 352 MB/s for LXC versus 257 MB/s for KVM. Web serving benchmarks also favored LXC, handling about 7,000 requests per second compared to KVM’s 5,400.
The reason is straightforward. KVM has to pass every disk and network request through a virtualized hardware layer. LXC talks directly to the host’s real hardware through the shared kernel, cutting out that middleman entirely. For workloads that are I/O heavy or need to serve many network requests, this difference matters.
Unprivileged Containers and Security
One of LXC’s most important security features is the unprivileged container. In a standard (privileged) container, the root user inside the container is the same root user on the host. If an attacker escapes the container, they have full control of the host system.
Unprivileged containers solve this by remapping user IDs. Root (UID 0) inside the container is mapped to an unprivileged, high-numbered UID on the host, such as UID 100000. UID 1 becomes 100001, and so on. If someone exploits a vulnerability and breaks out of an unprivileged container, they land on the host as an ordinary high-numbered user with no special privileges. The LXC project considers unprivileged containers safe by design, since escaping one would require a generic kernel bug rather than an LXC-specific flaw.
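The remapping itself is plain arithmetic over a configured base offset. The base of 100000 and range of 65536 IDs below are common defaults (the kind recorded in /etc/subuid), not requirements:

```shell
# Typical idmap: container UIDs 0-65535 map to host UIDs 100000-165535.
# A matching /etc/subuid entry would look like:  youruser:100000:65536
base=100000
for cuid in 0 1 1000; do
  echo "container UID $cuid -> host UID $((base + cuid))"
done
# container UID 0 -> host UID 100000
# container UID 1 -> host UID 100001
# container UID 1000 -> host UID 101000
```

Because container-root maps to a host UID that owns nothing and is a member of no groups, an escaped process has nothing on the host it is allowed to touch.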
Managing LXC Containers
At its core, LXC provides a set of low-level command-line tools. Creating a container means running lxc-create -n mycontainer with a template (for example, -t download), which sets up a persistent container you can then configure. Starting it uses lxc-start -n mycontainer, and stopping it uses lxc-stop -n mycontainer. You can list all containers with lxc-ls -f or get details about a specific one with lxc-info -n mycontainer. There’s also lxc-monitor for tracking state changes across multiple containers, which is useful for scripting and automation.
LXC distinguishes between two ways of running things. lxc-start boots a full system inside the container (running an init process), while lxc-execute runs a single application. This flexibility is part of what separates LXC from purely application-focused container tools.
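Put together, a minimal lifecycle with the low-level tools looks like this sketch (it assumes LXC is installed; the download template interactively prompts for a distribution, release, and architecture):

```
# Create a persistent container from the download template:
lxc-create -n mycontainer -t download

lxc-start -n mycontainer        # boot the container's init system
lxc-ls -f                       # list containers with state and IPs
lxc-info -n mycontainer         # show PID, IP, memory use, etc.

# Or skip the full boot and run a single application instead:
lxc-execute -n mycontainer -- /bin/sh

lxc-stop -n mycontainer         # clean shutdown
```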
LXD: A Friendlier Management Layer
While LXC’s low-level tools are powerful, they can be tedious to configure manually. LXD is a management daemon built on top of LXC that simplifies the entire experience. Setting up a new container through LXD takes a single command and about two minutes, compared to the more involved setup process with raw LXC tools.
LXD adds features that LXC alone doesn’t provide out of the box: a REST API for managing containers over the network, easy container export and migration for portability, built-in snapshot support, and integration with storage backends like ZFS, Btrfs, LVM, and Ceph. ZFS and Btrfs are particularly popular choices because they support copy-on-write snapshots, meaning creating a new container from an existing image is nearly instant and uses minimal extra disk space. LXD also integrates with OpenStack for large-scale cloud deployments.
The naming can be confusing: the lxc command (without a hyphen) is actually the LXD client, while the older lxc-* commands (with hyphens, like lxc-create) are the low-level LXC tools. Most users today interact with LXD rather than raw LXC, since it provides feature parity plus significant convenience improvements.
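For comparison, the same lifecycle through the LXD client is a handful of short commands (a sketch; the image alias and container name here are illustrative):

```
lxc launch ubuntu:22.04 web1      # fetch image, create, and start in one step
lxc list                          # analogous to lxc-ls -f
lxc snapshot web1 before-upgrade  # snapshot (near-instant on ZFS/Btrfs)
lxc exec web1 -- bash             # get a shell inside the container
lxc stop web1
```

The snapshot and launch commands are where the copy-on-write storage backends pay off: on ZFS or Btrfs they clone metadata rather than copying the filesystem.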
Common Use Cases
LXC containers fill a specific niche between full virtual machines and application containers. They’re well suited for running multiple isolated Linux environments on a single server without the overhead of hardware emulation. Hosting providers use them to give customers dedicated-feeling environments at a fraction of the resource cost of VMs. Developers use them as lightweight development environments that can mirror production systems. System administrators use them to isolate services that expect a full OS, like mail servers or database clusters, without dedicating entire physical or virtual machines to each one.
Legacy applications that depend on specific system configurations or older library versions are another strong fit. Rather than trying to modernize the application or maintain an aging physical server, you can run the legacy environment inside an LXC container on modern hardware, fully isolated from the rest of your infrastructure.