What Is Mixed Reality? MR vs. VR and AR Explained

Mixed reality (MR) is a technology that blends digital objects into your physical environment so they can interact with real-world surfaces, furniture, and even your hands. Unlike virtual reality, which blocks out the real world entirely and places you in a fully digital space, mixed reality keeps you anchored in your actual surroundings while layering in 3D content that behaves as if it’s physically there. A virtual ball can roll off your real desk, a holographic screen can hang on your actual wall, and a digital character can walk behind your couch and disappear from view.

How MR Differs From VR and AR

These three technologies sit along what’s called the virtuality continuum, a spectrum running from the completely physical world on one end to a completely digital world on the other. VR lives at the fully virtual extreme: you put on a headset and everything you see is computer-generated. Augmented reality (AR) is closer to the real-world end, overlaying flat digital information onto your view, like navigation arrows on a phone screen, but those digital elements don’t respond to physical objects around them.

Mixed reality occupies the middle of that continuum. The key distinction is interaction. In MR, digital elements take input from your physical environment and respond to it. A virtual object knows where your table is and can sit on top of it. It understands the shape of your room and responds to depth, lighting, and obstacles. That two-way relationship between the physical and digital is what separates MR from simple AR overlays and from VR's total immersion.

How Headsets Create the Blend

MR headsets use one of two approaches to let you see both the real world and digital content at the same time.

Optical see-through uses transparent glass or plastic lenses. You look through the display the same way you’d look through a pair of glasses, and the headset projects holograms onto that transparent surface. The real world looks natural because you’re seeing it directly with your eyes. The tradeoff is that virtual content can appear washed out or hard to see depending on room lighting, since the holograms are competing with ambient light. Microsoft’s HoloLens 2 is the most well-known example of this design.

Video passthrough takes a different route. Cameras on the front of the headset capture your surroundings in real time, then the device composites digital objects into that video feed and displays the combined image on opaque screens inside the headset. This gives the system more control over how the blend looks, since both the real and virtual elements are rendered on the same screen. Virtual content appears more vivid and solid. The downside is that the quality of your real-world view depends entirely on those cameras. On the Meta Quest 3, for instance, passthrough latency (the delay between real movement and what you see on screen) is about 39 milliseconds. Apple’s Vision Pro cuts that to roughly 11 milliseconds, making the real world feel more immediate and responsive.
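At its core, the compositing step in video passthrough is per-pixel alpha blending: the rendered hologram layer is laid over the camera frame. Here is a minimal sketch in Python with NumPy; the function name and array layout are illustrative, not any headset's actual pipeline.

```python
import numpy as np

def composite_passthrough(camera_frame, virtual_rgb, virtual_alpha):
    """Blend a rendered virtual layer over the camera feed.

    camera_frame:  (H, W, 3) floats in [0, 1] -- the real-world video
    virtual_rgb:   (H, W, 3) floats in [0, 1] -- rendered holograms
    virtual_alpha: (H, W, 1) floats in [0, 1] -- 1 where a hologram
                   fully covers the pixel, 0 where only the room shows
    """
    return virtual_alpha * virtual_rgb + (1.0 - virtual_alpha) * camera_frame

# A tiny 2x2 frame: the virtual layer covers only the top-left pixel.
camera = np.full((2, 2, 3), 0.5)                    # grey camera feed
virtual = np.zeros((2, 2, 3)); virtual[0, 0] = [1.0, 0.0, 0.0]
alpha = np.zeros((2, 2, 1)); alpha[0, 0] = 1.0

out = composite_passthrough(camera, virtual, alpha)
# Top-left pixel shows the red hologram; the rest shows the camera feed.
```

Because both layers end up on the same opaque screen, the system can rebalance brightness and color freely, which is why passthrough holograms look more solid than optical see-through ones.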

How MR Understands Your Room

For digital objects to sit on your coffee table or hide behind your bookshelf, the headset needs a detailed 3D map of your space. MR devices accomplish this through a process called spatial mapping. Cameras and sensors on the headset continuously scan your environment, estimating the device's position while simultaneously reconstructing the structure of walls, floors, furniture, and other surfaces (an approach known in robotics as simultaneous localization and mapping, or SLAM). This happens in real time as you move around.

That spatial map is what makes convincing mixed reality possible. It lets the system place a virtual lamp on a real shelf at the correct height, or make a holographic fish tank appear to rest on your actual countertop. Without it, digital content would just float in space with no relationship to the room around you.
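As a rough illustration of what that map enables: once spatial mapping has detected horizontal surfaces and their heights, anchoring an object reduces to finding the surface under a point and snapping to its height. The surface representation below is a simplified assumption for the sketch, not a real headset API.

```python
def place_on_surface(x, z, surfaces):
    """Return the y-height at which to anchor an object at (x, z).

    surfaces: list of dicts with 'height' (y, in metres) and 'bounds'
              (min_x, max_x, min_z, max_z), as a spatial-mapping pass
              might report detected horizontal planes.
    """
    candidates = [
        s["height"]
        for s in surfaces
        if s["bounds"][0] <= x <= s["bounds"][1]
        and s["bounds"][2] <= z <= s["bounds"][3]
    ]
    # Snap to the highest surface under the point (tabletop beats floor).
    return max(candidates) if candidates else 0.0

room = [
    {"height": 0.0,  "bounds": (-3.0, 3.0, -3.0, 3.0)},  # floor
    {"height": 0.72, "bounds": (0.5, 1.5, 0.5, 1.2)},    # coffee table
]
print(place_on_surface(1.0, 0.8, room))    # lands on the table: 0.72
print(place_on_surface(-2.0, -2.0, room))  # only floor there: 0.0
```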

Why Occlusion Matters

One of the trickiest problems in mixed reality is occlusion: making a real object correctly block your view of a virtual one. If you place a holographic robot on the floor behind your couch, you should only see the parts of the robot that aren’t hidden by the couch. Without proper occlusion handling, the digital robot would appear to float in front of the couch, breaking the illusion that it actually exists in your room.

MR systems solve this by tracking the boundaries of real objects and redrawing those physical surfaces on top of the virtual content in the final image. The border between the real object and the hidden virtual content is smoothed so the transition looks seamless. It’s a computationally demanding task, but it’s essential for making digital content feel like it belongs in your space rather than pasted on top of it.
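The core of occlusion handling is a per-pixel depth test: a hologram pixel is drawn only where the virtual object is closer to the viewer than the real surface at that pixel. A minimal sketch, assuming the headset supplies a depth map of the room; the edge smoothing described above is omitted here.

```python
import numpy as np

def resolve_occlusion(real_depth, virtual_depth, virtual_rgb, camera_rgb):
    """Per-pixel depth test: show the hologram only where it is nearer
    than the real surface in front of it.

    real_depth:    (H, W) metres to the nearest real surface
    virtual_depth: (H, W) metres to the virtual object (np.inf = none)
    """
    hologram_visible = virtual_depth < real_depth
    out = camera_rgb.copy()
    out[hologram_visible] = virtual_rgb[hologram_visible]
    return out

# Couch 2 m away in the left column, far wall 5 m away on the right;
# the holographic robot stands 3 m away across the top row.
real_depth = np.array([[2.0, 5.0], [2.0, 5.0]])
virtual_depth = np.array([[3.0, 3.0], [np.inf, np.inf]])
camera = np.full((2, 2, 3), 0.5)                   # grey camera feed
robot = np.zeros((2, 2, 3)); robot[..., 0] = 1.0   # red hologram layer

frame = resolve_occlusion(real_depth, virtual_depth, robot, camera)
# frame[0, 0] stays grey: the couch (2 m) occludes the robot (3 m).
# frame[0, 1] is red: the robot is visible against the far wall (5 m).
```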

Hand Tracking and Natural Interaction

Because MR keeps you connected to the real world, traditional VR controllers can feel out of place. Many MR experiences instead rely on hand tracking, letting you reach out and tap a virtual button, pinch to resize a holographic window, or grab and rotate a 3D model using your bare hands. Cameras on the headset track the position of each finger and reconstruct a digital skeleton of your hand in real time.

This approach removes a barrier between you and the content. You don't need to learn which buttons to press; you just interact the way you would with a real object. The latency of that hand tracking varies by device. On the Quest 3, the total delay from moving your hand to seeing your virtual hand respond is about 70 milliseconds. On the Vision Pro, it's closer to 128 milliseconds, partly because Apple's pipeline processes hand input differently, even though its passthrough latency is lower.
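A gesture like the pinch can be recognized with simple geometry on the tracked fingertip positions. A minimal sketch; the coordinate convention and the 2 cm threshold are illustrative assumptions, not any vendor's actual values.

```python
import math

def is_pinching(thumb_tip, index_tip, threshold_m=0.02):
    """Detect a pinch from tracked fingertip positions.

    thumb_tip / index_tip: (x, y, z) in metres, as a hand-tracking
    system might report them. A pinch is registered when the two
    fingertips come within the threshold distance (2 cm here).
    """
    return math.dist(thumb_tip, index_tip) < threshold_m

print(is_pinching((0.10, 1.20, 0.30), (0.11, 1.20, 0.30)))  # 1 cm apart -> True
print(is_pinching((0.10, 1.20, 0.30), (0.18, 1.20, 0.30)))  # 8 cm apart -> False
```

Real systems add hysteresis (separate thresholds for starting and releasing the pinch) so the gesture doesn't flicker at the boundary, but the distance test above is the heart of it.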

Where MR Is Already Being Used

Mixed reality has gained the most traction in professional settings where overlaying digital information onto real-world tasks provides a clear advantage.

In manufacturing, MR headsets guide workers through complex assembly processes by projecting step-by-step instructions directly onto the parts they’re handling. The results can be dramatic. In one automotive factory, daily assembly failures dropped from 1,600 to just 80 after adopting MR guidance, a 95% reduction. At another auto parts facility, component shortages fell from 253 to 19, and incorrect assemblies dropped from 168 to 13. When a worker can see exactly which part goes where, overlaid right on the workstation, mistakes become far less likely.

In surgical training, MR is being used to blend 3D anatomical models with physical training tools like 3D-printed replicas. Trainees wearing MR headsets can see a patient’s scan data overlaid directly onto a model, improving their spatial understanding of the procedure. Studies have found that MR-assisted training reduces the time spent on certain guidance steps and improves the accuracy of procedures practiced on those models.

Other applications include architecture (walking through a building design overlaid on an empty lot), remote collaboration (a technician seeing an expert’s annotations floating on the equipment they’re repairing), and education (students interacting with 3D models of molecules or historical artifacts placed on their desks).

MR Market Growth

The mixed reality market is expanding quickly. Valued at roughly $5.9 billion in 2025, it’s projected to reach $8.4 billion in 2026 and could grow to nearly $51 billion by 2031, reflecting a compound annual growth rate of about 43%. That growth is being driven by both consumer headsets like the Quest 3 and Vision Pro and enterprise adoption in healthcare, manufacturing, and defense.
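The quoted growth rate can be sanity-checked from the two endpoint valuations with the standard compound-annual-growth-rate formula:

```python
# CAGR from the 2025 valuation to the 2031 projection.
# The dollar figures come from the market estimates quoted above.
start_value = 5.9   # $B, 2025
end_value = 51.0    # $B, 2031
years = 6

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~43%, matching the rate quoted above
```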

For most people today, the easiest entry point into mixed reality is a consumer headset with video passthrough. These devices can switch between full VR (blocking out the room entirely) and mixed reality (blending digital content into your space), giving you access to both ends of the continuum in a single piece of hardware. As passthrough camera quality improves and spatial mapping becomes more precise, the line between “VR headset” and “MR headset” is becoming less about the hardware and more about which mode you choose to use.