What Is Aliasing in Audio: Causes and Prevention

Aliasing is a type of audio distortion that happens when a digital system encounters a frequency higher than it can accurately capture. Instead of recording that frequency correctly, the system misinterprets it as a lower, false frequency that wasn’t in the original sound. These phantom frequencies sound harsh and unmusical, and they’re one of the core problems that digital audio engineers work to prevent.

How Digital Sampling Creates Aliasing

To understand aliasing, you need to know one thing about how digital audio works: sound is captured by taking thousands of snapshots (called samples) every second. A CD-quality recording takes 44,100 samples per second, written as a sample rate of 44.1 kHz. A video soundtrack typically uses 48 kHz.

The Nyquist-Shannon sampling theorem defines the hard rule that governs all digital audio: your sample rate must be greater than twice the highest frequency you want to capture. This means a 44.1 kHz sample rate can accurately represent frequencies up to 22.05 kHz, and a 48 kHz rate handles frequencies up to 24 kHz. That upper limit, half the sample rate, is called the Nyquist frequency.

Any frequency that sneaks above the Nyquist frequency doesn’t just vanish. It gets “reflected” back down into the audible range as a completely different, lower frequency. That reflected imposter is the alias. So if you’re recording at 44.1 kHz and a 23 kHz tone enters the system, it doesn’t get captured as 23 kHz. It folds back and appears at 21.1 kHz (the 44.1 kHz sample rate minus the 23 kHz input), producing a tone that has no musical relationship to the original.
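That folding arithmetic is simple enough to sketch in a few lines of Python. The function below (a hypothetical helper, not from any audio library) reports where a tone actually lands after sampling:

```python
def alias_frequency(f, sample_rate):
    """Return the frequency at which a tone actually appears after sampling.

    Frequencies above the Nyquist frequency reflect ("fold") back
    into the 0..Nyquist band.
    """
    nyquist = sample_rate / 2
    f = f % sample_rate          # sampling is periodic in the sample rate
    if f > nyquist:
        f = sample_rate - f      # reflect back below the Nyquist frequency
    return f

# A 23 kHz tone sampled at 44.1 kHz folds down to 21.1 kHz:
print(alias_frequency(23_000, 44_100))  # → 21100
```

Feeding in a frequency below Nyquist, say 10 kHz, returns it unchanged, which is exactly the point: only content above the ceiling gets misrepresented.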

The Wagon Wheel Analogy

The easiest way to visualize aliasing is the wagon wheel effect you’ve seen in old movies. When a stagecoach speeds up, there’s a point where the wheel appears to slow down, stop, or even spin backward. The wheel hasn’t actually changed direction. The camera is simply taking pictures too slowly to keep up with the true rotation, so the motion gets misinterpreted as something completely different.

A UC Davis computer science course illustrates this with a clock’s minute hand. If you photograph the clock every 5 minutes, you’ll clearly see the hand moving clockwise. But if you only photograph it every 55 minutes, the hand appears to creep counter-clockwise. If that movie were your only source of information, you’d have no way to know the hand was actually moving clockwise. Audio aliasing works the same way: sample too slowly for the frequency you’re trying to capture, and the system reconstructs a false signal that looks nothing like the real one.
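The clock example can be checked with a couple of lines of arithmetic. Each 55-minute snapshot catches the hand 55 minutes further along its 60-minute rotation, which on the clock face reads as 5 minutes backward (positions below are minutes past 12):

```python
# Minute-hand position (0-59) at each snapshot, taken every 55 minutes.
positions = [(55 * k) % 60 for k in range(5)]
print(positions)  # → [0, 55, 50, 45, 40]
# Each frame the hand sits 5 minutes earlier: apparent counter-clockwise motion.
```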

What Aliasing Sounds Like

Aliased frequencies sound like strange, inharmonic tones that have no musical relationship to the original signal. In clean recordings of simple tones, you might hear an unexpected pitch that shifts in the wrong direction as the original pitch rises. In more complex material like music, aliasing shows up as a metallic, gritty quality, especially in the high frequencies. The artifacts tend to be most noticeable on sustained high notes, cymbals, or any bright, harmonically rich sound.

Audio aliases are sometimes described as frequencies that “bounce off” the Nyquist ceiling. If you run a sine wave that sweeps upward in pitch through a system prone to aliasing and watch the result on a spectrum analyzer, you’ll see the distinctive V-shaped pattern: the frequency rises toward the Nyquist limit, hits it, then reflects back downward. That reflected energy is entirely artificial.

Aliasing in Music Production

In modern music production, the most common source of aliasing isn’t the initial recording. It’s what happens afterward, inside your software. Any plugin that creates new harmonics, particularly saturation, distortion, and overdrive effects, can generate aliasing. Here’s why: when you distort a signal, the process creates overtones at multiples of the original frequency. A note with a fundamental at 5 kHz, run through a distortion that generates a fifth harmonic, produces energy at 25 kHz. At a 44.1 kHz sample rate, the Nyquist frequency is 22.05 kHz, so that 25 kHz harmonic folds back into the audible range as an alias.
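That arithmetic generalizes: list the harmonics a distortion might create and check which ones cross Nyquist. Here is a small sketch (the helper function is illustrative, not from any plugin SDK):

```python
def harmonic_aliases(fundamental, sample_rate, n_harmonics):
    """For each harmonic, report where it ends up after sampling."""
    nyquist = sample_rate / 2
    results = []
    for n in range(1, n_harmonics + 1):
        f = fundamental * n
        folded = f % sample_rate
        if folded > nyquist:
            folded = sample_rate - folded   # fold back below Nyquist
        results.append((n, f, folded, f > nyquist))
    return results

for n, f, folded, aliased in harmonic_aliases(5_000, 44_100, 5):
    label = "ALIASES to" if aliased else "stays at"
    print(f"harmonic {n}: {f} Hz {label} {folded} Hz")
```

For the 5 kHz example above, harmonics 1 through 4 stay put, but the 5th harmonic at 25 kHz folds back to 19.1 kHz, a pitch with no harmonic relationship to the original note.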

Louder signals distort more aggressively, producing more and higher harmonics, which means more of those harmonics cross the Nyquist boundary. This is why aliasing from plugins tends to get worse as you push a saturator harder. A subtle touch of warmth might produce no audible aliasing at all, while cranking the same plugin introduces noticeable grit that wasn’t part of the intended sound.

One practical test: run a sine wave sweep through a saturation plugin and watch the output on a spectrograph. If the plugin is producing aliasing, you’ll see spectral peaks rising toward the top of the frequency range, then bouncing back down in those telltale V shapes.
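In code, a simplified version of that experiment looks roughly like this (numpy only, using a single high tone rather than a sweep): drive a sine through a crude clipper and inspect the spectrum. Any strong peak that isn’t a harmonic of the input is fold-back:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs                     # one second of audio
tone = np.sin(2 * np.pi * 9_000 * t)       # 9 kHz sine

clipped = np.clip(3.0 * tone, -1.0, 1.0)   # crude saturation: creates odd harmonics

spectrum = np.abs(np.fft.rfft(clipped))
freqs = np.fft.rfftfreq(len(clipped), 1 / fs)

# The 3rd harmonic (27 kHz) exceeds Nyquist (22.05 kHz) and folds to 17.1 kHz.
strongest = freqs[np.argsort(spectrum)[-3:]]
print(sorted(strongest))                   # top peaks include 9000 Hz and the alias
```

Alongside the 9 kHz fundamental you’ll find a prominent peak near 17.1 kHz, which is the 27 kHz harmonic reflected off the Nyquist ceiling, exactly the kind of energy an oversampled plugin would have filtered away.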

How Aliasing Is Prevented

The primary defense against aliasing is filtering out frequencies above the Nyquist limit before they enter the digital system. In hardware, this means placing an analog low-pass filter (called an anti-aliasing filter) directly before the analog-to-digital converter. This filter blocks any frequency content above the Nyquist frequency so it never reaches the sampling stage. Every audio interface and digital recorder has one built in.

For software plugins that generate new harmonics internally, the solution is oversampling. The plugin temporarily increases the effective sample rate, often by 2x, 4x, or even higher, performs its distortion processing at that elevated rate, then filters and converts back down. With the sample rate doubled, the Nyquist ceiling is twice as high, giving those new harmonics room to exist without folding back. Many modern saturation and distortion plugins offer oversampling as a toggle, sometimes labeled “HQ” or “high quality.” The tradeoff is higher CPU usage.
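A plugin-style oversampled saturator can be sketched with scipy’s polyphase resampler. This is a simplified illustration of the idea, not production-grade DSP:

```python
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, factor=4):
    """Apply tanh saturation at an elevated rate, then return to the original rate.

    resample_poly low-pass filters on the way up and on the way down, so
    harmonics created above the original Nyquist are removed before they can fold.
    """
    up = resample_poly(x, factor, 1)         # upsample: Nyquist is now `factor`x higher
    shaped = np.tanh(3.0 * up)               # nonlinearity runs at the elevated rate
    return resample_poly(shaped, 1, factor)  # filter + downsample back

fs = 44_100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 9_000 * t)
clean = saturate_oversampled(tone)   # far less fold-back than np.tanh(3.0 * tone)
```

Comparing the spectrum of `clean` against plain `np.tanh(3.0 * tone)` shows the aliased component near 17.1 kHz dramatically reduced, which is precisely what a plugin’s “HQ” toggle is doing internally.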

Working at higher session sample rates like 88.2 kHz or 96 kHz also gives more headroom above the audible range, which can reduce aliasing from plugin processing. However, most professional producers and mastering engineers still work at 44.1 or 48 kHz. High-quality converters sound excellent at these standard rates, and the CPU savings from lower sample rates are significant when a session has dozens of tracks and plugins running simultaneously.

Aliasing vs. Quantization Noise

Aliasing is sometimes confused with quantization noise, but they’re caused by entirely different aspects of the digital audio process. Aliasing is a sample rate problem: not enough snapshots per second to capture the frequency content. Quantization noise is a bit depth problem: not enough precision in each individual snapshot to represent the exact amplitude of the signal.

Think of it this way: sample rate determines how high in frequency you can go, while bit depth determines how finely you can measure the loudness at each sample point. An 8-bit system, for example, can only represent amplitude using 256 discrete levels, so every sample carries a tiny rounding error. That accumulated rounding error becomes a low-level noise floor. Aliasing, by contrast, creates distinct false tones at specific frequencies. One sounds like hiss or fuzz in the background; the other sounds like wrong notes that shouldn’t be there.
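The difference is easy to see in a simulation: quantize a sine to 8 bits and the error is a small, broadband residue rather than a discrete false tone. A minimal sketch (the `quantize` helper is illustrative):

```python
import numpy as np

def quantize(x, bits):
    """Round each sample to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** (bits - 1)          # e.g. 128 steps per polarity at 8 bits
    return np.round(x * levels) / levels

t = np.arange(44_100) / 44_100
tone = np.sin(2 * np.pi * 1_000 * t)

error = quantize(tone, 8) - tone      # the per-sample rounding error
print(np.abs(error).max())            # never exceeds half a quantization step
```

The error here is bounded by half a step and spread across the spectrum as noise; an aliased tone, by contrast, would concentrate its energy at one specific wrong frequency.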