How to Make AR Glasses With Open-Source Hardware

Building AR glasses from scratch requires solving five core engineering problems at once: a tiny display bright enough to see through, sensors that track your head position in real time, software that renders graphics onto the real world, a battery small enough to fit in a glasses frame, and thermal management that keeps it all from burning your skin. No single component is easy, but open-source projects and off-the-shelf parts have made it possible for skilled hobbyists to build working prototypes.

Start With an Open-Source Reference Design

The most accessible entry point for building your own AR glasses is Project North Star, an open-source hardware platform with published schematics, 3D-printable parts, and assembly guides. The core bill of materials includes a Leap Motion Controller for hand tracking, two 3.5-inch 120Hz LCD displays (BOE VS035ZSM at 1440×1600 resolution per eye), a custom display driver board, and two acrylic reflectors that act as optical combiners. The reflectors bounce the display image toward your eyes while letting you see the real world through them.

Assembly involves fastening the displays and a facial interface into a 3D-printed optics bracket with four screws per side, then aligning the acrylic reflectors with tabs on the bracket and tacking them in place with hot glue. It’s not elegant, and the result looks more like a bulky headset than a pair of glasses, but it produces a functional wide-field-of-view AR display you can develop software for. Some components, particularly the reflectors and high-refresh displays, are difficult to source individually since manufacturers like BOE sell them only in bulk. Community forums for the project maintain updated sourcing guides.

Choosing a Display Technology

The display is the hardest component to get right. You need something small enough to fit near your eye, bright enough to remain visible against sunlight, and efficient enough not to drain your battery in minutes. The three main options for DIY builders are micro-OLED panels, LCoS (liquid crystal on silicon) modules, and the LCD approach used by North Star.

LCDs are the easiest to source and the cheapest, but they’re also the largest and heaviest. They work well for headset-style builds where size isn’t the primary constraint. Micro-OLED panels offer better contrast and smaller form factors, and they show up in commercial products from companies like XREAL (formerly Nreal). LCoS modules project an image onto the combiner lens and can be very compact, but they require more complex optical alignment.

For outdoor visibility, you need brightness in the thousands of nits. Most small off-the-shelf displays top out well below that, which is why even commercial AR glasses struggle in direct sunlight. If you’re prototyping indoors, a display producing 500 to 1,000 nits will work fine. For any outdoor use, you’ll likely need to add a tinted visor or accept washed-out overlays.
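The washed-out-overlay problem comes directly from how a combiner works: it adds display light on top of the see-through background. A rough contrast calculation makes the point; the luminance figures below are assumed, illustrative values (a sunlit scene is on the order of thousands of nits, an office a few hundred), not measurements.

```python
# Perceived contrast of an additive AR overlay is
# (overlay + background) / background, since the combiner adds display
# light on top of the real scene. A tinted visor dims only the
# background light, not the display. All luminance values are assumed,
# illustrative figures.

def overlay_contrast(display_nits, background_nits, visor_transmission=1.0):
    """Contrast ratio of an additive overlay against the see-through
    background, optionally with a visor that attenuates the background."""
    bg = background_nits * visor_transmission
    return (display_nits + bg) / bg

print(f"indoors (800 nit display, 250 nit scene): "
      f"{overlay_contrast(800, 250):.2f}:1")
print(f"outdoors (5,000 nit scene): {overlay_contrast(800, 5000):.2f}:1")
print(f"outdoors with a 25%-transmission visor: "
      f"{overlay_contrast(800, 5000, 0.25):.2f}:1")
```

The indoor case gives a comfortably readable overlay, while the untinted outdoor case barely rises above the background, which is exactly the washed-out effect described above.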

Tracking: How the Glasses Know Where You’re Looking

AR glasses need to know their exact position and orientation in 3D space so that virtual objects stay locked to real-world locations. This is called six-degrees-of-freedom (6DOF) tracking, meaning the system tracks movement along three axes plus rotation around each one. The standard approach fuses camera data with an inertial measurement unit (IMU) using a SLAM algorithm (simultaneous localization and mapping).

For a DIY build, you need at minimum one wide-angle camera (a fisheye or global-shutter camera works best) and an IMU sampling at around 100 Hz for both its gyroscope and accelerometer. The IMU fills in position estimates between camera frames, which is critical because even brief tracking gaps cause virtual objects to visibly jitter or drift. RTAB-Map is one open-source SLAM implementation that uses camera data for loop closure, the process of recognizing when you’ve returned to a previously mapped area. Intel RealSense depth cameras are popular in prototypes because they combine a depth sensor with an IMU in a single module.
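The division of labor between camera and IMU can be sketched in one dimension: the IMU dead-reckons between camera pose fixes, and each fix pulls the estimate back toward the measurement. This is a toy complementary filter, not a real SLAM back end; the 100 Hz rate and 0.3 blend factor are assumed figures.

```python
# One-dimensional sketch of camera/IMU fusion: integrate IMU
# accelerations between sparse camera fixes, and blend each camera
# measurement into the running estimate. A real visual-inertial SLAM
# system does this in 3D with full state estimation; this toy version
# only shows the structure.

IMU_DT = 0.01   # 100 Hz IMU sampling interval (seconds) -- assumed rate

def fuse(accels, camera_fixes, alpha=0.3):
    """Estimate position from IMU accelerations plus sparse camera fixes.

    accels: accelerations (m/s^2), one per IMU step.
    camera_fixes: {step_index: camera-measured position (m)}.
    Returns the position estimate after each IMU step.
    """
    pos, vel, estimates = 0.0, 0.0, []
    for i, a in enumerate(accels):
        vel += a * IMU_DT            # integrate acceleration -> velocity
        pos += vel * IMU_DT          # integrate velocity -> position
        if i in camera_fixes:        # a camera frame arrived: blend it in
            pos += alpha * (camera_fixes[i] - pos)
        estimates.append(pos)
    return estimates

# Constant 1 m/s^2 acceleration, with a camera fix every 5th IMU step
# reporting the analytic position 0.5 * t^2:
truth = {i: 0.5 * ((i + 1) * IMU_DT) ** 2 for i in range(4, 20, 5)}
print(fuse([1.0] * 20, truth)[-1])
```

The camera fixes keep the integrated estimate from drifting, which is the same role loop closure plays at a larger scale in a full SLAM pipeline.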

The processing demands are significant. Running SLAM in real time while also rendering graphics typically requires at least a mobile-class GPU. Many DIY projects offload this to a tethered phone or a small single-board computer like an NVIDIA Jetson clipped to a belt. Fully self-contained processing inside a glasses frame remains largely out of reach for hobbyist builds.

The Optics: Combining Real and Virtual

The optical combiner is what makes AR glasses different from a VR headset. It’s a semi-transparent element positioned in front of your eye that reflects the display image toward your retina while allowing real-world light to pass through. In the simplest builds, this is just a piece of angled half-silvered acrylic, essentially a beam splitter. Project North Star uses this approach with custom-cut acrylic reflectors.

More advanced combiners include diffractive waveguides, which use nanoscale surface gratings to guide light from a tiny projector across a flat lens and into your eye. These are what give commercial products like HoloLens their thin, glasses-like appearance. Waveguides are not realistically manufacturable at home; they require semiconductor-grade fabrication. If you want a slim form factor, your best option is to purchase waveguide evaluation kits from optical component suppliers, though these typically cost several hundred to several thousand dollars.

Regardless of combiner type, you’ll spend a lot of time on alignment. Even a millimeter of offset between the display, combiner, and your pupil position can distort or cut off the virtual image. Build in adjustment mechanisms (slotted screw holes, set screws, or shimming surfaces) from the start.
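A flat-mirror approximation shows why those tolerances are so tight: tilting a combiner by some angle deviates the reflected ray by twice that angle, so the virtual image walks off quickly at typical eye distances. The 25 mm eye relief used here is an assumed, illustrative figure.

```python
import math

# Back-of-the-envelope for combiner alignment sensitivity: a flat
# mirror tilted by theta deviates the reflected ray by 2 * theta, so
# the image shifts laterally by roughly eye_relief * tan(2 * theta).
# The default 25 mm eye relief is an assumed, illustrative value.

def image_shift_mm(tilt_deg, eye_relief_mm=25.0):
    """Approximate lateral image shift for a small combiner tilt."""
    return eye_relief_mm * math.tan(math.radians(2 * tilt_deg))

for tilt in (0.5, 1.0, 2.0):
    print(f"{tilt}° combiner tilt -> "
          f"{image_shift_mm(tilt):.2f} mm image shift")
```

Even a one-degree tilt shifts the image by nearly a millimeter, which is why slotted screw holes and shims are worth designing in from the start.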

Power and Battery Placement

Current AR glasses typically target 2 to 4 hours of runtime per charge, and achieving even that modest goal requires careful component selection. Lithium polymer (LiPo) cells are the standard choice because they can be shaped into flat, narrow profiles that fit inside glasses temples. Cells in compact form factors such as the LP603450 or LP703496 provide enough energy for high-performance displays and computing while supporting the high peak currents that rendering workloads demand.
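The runtime arithmetic is simple enough to sanity-check before you buy anything. The LP603450 code encodes the cell's dimensions (roughly 6.0 × 34 × 50 mm); the ~1,100 mAh capacity, 2.5 W average load, and 85% conversion efficiency below are assumed, illustrative figures, so check the datasheet for the specific cell you buy.

```python
# Rough runtime estimate: usable energy (Wh) divided by average draw
# (W). Capacity, load, and efficiency figures here are assumed,
# illustrative values, not datasheet numbers.

def runtime_hours(capacity_mah, nominal_v, avg_load_w, efficiency=0.85):
    """Hours of runtime for a given battery and average power draw."""
    return capacity_mah / 1000 * nominal_v * efficiency / avg_load_w

total_mah = 2 * 1100    # one assumed ~1,100 mAh cell per temple
print(f"~{runtime_hours(total_mah, 3.7, 2.5):.1f} h "
      f"at a 2.5 W average draw")
```

Two temple cells at these assumed figures land near the bottom of the 2-to-4-hour target, which illustrates how little margin a glasses-sized battery budget leaves.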

Weight distribution matters as much as capacity. A single large battery on one side makes the glasses slide and dig into your nose. The best approach splits the battery across both temples, balancing the load. For a tethered prototype where the computer lives in your pocket, you can also run power through the tether cable, which dramatically simplifies the glasses themselves and lets you use a larger, cheaper battery pack.

Keeping It Cool Enough to Wear

Any surface touching your skin needs to stay below 43°C (about 109°F). Above that threshold, prolonged contact can cause low-temperature burns. This is a real constraint, not a theoretical one, because processors, display drivers, and batteries all generate heat in a very small space pressed against your head.

Research on smart glasses thermal management found that the best frame design uses aluminum for the rims and front section (where the electronics sit) and cellulose acetate, a common plastic used in regular eyeglasses, for the temples and ear pieces. Aluminum’s thermal conductivity of 236 W/mK spreads heat efficiently across the frame, while the plastic temples (0.2 W/mK) insulate your skin from that heat. This combination keeps skin-contact surfaces below the burn threshold while giving the electronics enough surface area to dissipate heat passively.
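A quick steady-state estimate shows how the 43°C limit constrains the power budget: model heat flow as a chain of thermal resistances from the chips through the frame surface to ambient air. The power and resistance figures below are assumed, illustrative values, not measurements of any real frame.

```python
# Steady-state sanity check against the 43 °C skin-contact limit.
# All dissipated power flows out through the frame surface, so each
# stage adds power * thermal_resistance on top of ambient. Resistance
# values (K/W) are assumed, illustrative figures.

def temps_c(power_w, r_chip_to_surface_k_w, r_surface_to_ambient_k_w,
            ambient_c=25.0):
    """Chip and outer-surface temperatures for a given heat flow."""
    t_surface = ambient_c + power_w * r_surface_to_ambient_k_w
    t_chip = t_surface + power_w * r_chip_to_surface_k_w
    return t_chip, t_surface

# ~1.5 W of electronics spreading through an aluminum front section:
chip, surface = temps_c(1.5, r_chip_to_surface_k_w=2.0,
                        r_surface_to_ambient_k_w=10.0)
print(f"chip ≈ {chip:.0f} °C, skin-contact surface ≈ {surface:.0f} °C")
```

At these assumed figures the surface sits just under the burn threshold, which is why reducing power (brightness, clock speed) is usually the first fix when a prototype runs hot.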

In practice, a prototype can achieve something similar by mounting the hottest components (the processor and display driver) on a small aluminum heat spreader at the front of the frame and keeping the ear-contact areas plastic. Adding even a thin thermal pad between the chips and the frame helps. If your prototype runs too hot, the first fix is usually reducing display brightness or processor clock speed rather than adding fans, which would add noise and weight.

Software: Rendering the AR Experience

The software stack for AR glasses has three main layers: the runtime that talks to hardware, the rendering engine that draws 3D graphics, and the application logic that decides what to show and where.

OpenXR is the standard API that ties these layers together. It’s a royalty-free, open specification that provides a common interface for accessing tracking data, controller input, and display output across different hardware. A typical OpenXR program creates an instance (connecting to the runtime), selects a system (your physical display and sensors), sets up rendering buffers, and then enters a loop where it continuously gets tracking data, renders a frame, and submits it to the display. Building on OpenXR means your software can potentially run on other AR and VR hardware without a full rewrite.
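The instance-system-session-loop structure can be sketched in code. Note that the real OpenXR API is a C API; the classes and method names below are hypothetical Python stand-ins invented for illustration, not actual OpenXR calls. Only the overall shape of the program matches the description above.

```python
# Shape of an OpenXR-style frame loop, using hypothetical stand-in
# classes (none of these names are real OpenXR API calls). The point
# is the structure: connect to the runtime, create a session, then
# loop: get fresh tracking, render, submit.

class Session:
    """Stand-in for a session: owns rendering buffers and frame timing."""
    def __init__(self):
        self.submitted = 0

    def wait_frame(self):
        # A real runtime blocks here until the display needs a frame,
        # then returns the head pose predicted for display time.
        return {"position": (0.0, 0.0, 0.0), "orientation": (0, 0, 0, 1)}

    def submit(self, frame):
        self.submitted += 1          # hand the frame to the compositor

class Instance:
    """Stand-in for the runtime connection."""
    def create_session(self):
        # Real code would first select a system (display + sensors)
        # and set up swapchain images before creating the session.
        return Session()

def render_scene(pose):
    return {"drawn_from": pose}      # placeholder renderer

def run_ar_app(num_frames):
    session = Instance().create_session()
    for _ in range(num_frames):
        pose = session.wait_frame()          # freshest tracking data
        session.submit(render_scene(pose))   # render and submit
    return session.submitted
```

Calling `run_ar_app(90)`, for example, simulates one second of a 90 Hz loop; in a real application the loop runs until the runtime signals the session should end.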

For the rendering engine, Unity and Unreal both support OpenXR and have AR toolkits. If you want something lighter weight for a custom Linux-based build, OpenGL or Vulkan can render directly to the displays with your own compositor. The key rendering challenge specific to AR is reprojection: adjusting the rendered image at the last possible moment based on the newest tracking data to minimize perceived latency. Even 20 milliseconds of delay between a head movement and the display update creates visible drift that makes virtual objects feel disconnected from reality.
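Putting a number on that latency helps: during a head turn at some angular velocity, a motion-to-photon delay leaves the overlay behind its real-world anchor by velocity times delay. The pixels-per-degree figure below assumes the 1440-pixel-wide panel mentioned earlier spread over an assumed round-number 50° horizontal field of view per eye.

```python
# How far an overlay lags its anchor during a head turn: the angular
# error is head_speed * latency, converted to pixels via an assumed
# pixels-per-degree figure (1440 px across an assumed 50° horizontal
# field of view per eye).

def drift_pixels(head_speed_deg_s, latency_ms, px_per_degree=1440 / 50):
    """Apparent pixel offset of a world-locked object due to latency."""
    angular_error_deg = head_speed_deg_s * latency_ms / 1000
    return angular_error_deg * px_per_degree

# A casual 100°/s head turn with 20 ms motion-to-photon latency:
print(f"{drift_pixels(100, 20):.0f} px behind the anchor")
```

Tens of pixels of lag during an ordinary head turn is easily visible, which is why late-stage reprojection against the newest tracking sample is standard practice.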

Realistic Expectations for a First Build

A hobbyist’s first AR glasses prototype will almost certainly be bulky, tethered to an external computer, and limited to indoor use. That’s normal. The goal of a first build should be getting a stable virtual overlay that tracks with your head movement, not building something you’d wear to a coffee shop. Budget roughly $300 to $800 for components depending on your display choice and whether you already own a suitable compute platform.

The skills you’ll need span 3D printing or CNC work for the frame, basic soldering for display and sensor connections, and enough programming ability to work with OpenXR or a similar framework. If you’re coming from a software background, the optics will be the steepest learning curve. If you’re coming from hardware, the real-time rendering pipeline will take the most time to get right. Community resources around Project North Star, Relativty (an open-source VR project with transferable knowledge), and various maker forums are the best places to find people who’ve solved the specific problems you’ll run into.