Phase cancellation happens when two copies of a sound wave meet and their peaks line up with each other’s valleys, causing them to cancel out and reduce or eliminate the sound entirely. It’s the same physics that makes noise-canceling headphones work, but in recording and live sound, it’s usually an accident that makes your audio sound thin, hollow, or weirdly quiet.
How Waves Cancel Each Other
Sound travels as a wave of pressure changes in the air, alternating between compressions (high pressure) and rarefactions (low pressure). When two waves arrive at the same point at the same time, their amplitudes simply add together. If both waves push the air in the same direction, you get a louder sound. This is constructive interference.
Destructive interference is the opposite. When one wave’s peak arrives at the exact same moment as another wave’s valley, the two pressures work against each other. A peak of +1 combined with a valley of -1 produces zero. If the two waves are identical in shape and amplitude but shifted by exactly half a wavelength, they cancel completely and you hear nothing. In practice, perfect total cancellation is rare outside a laboratory. What you typically get is partial cancellation, where some frequencies lose energy while others don’t, creating an uneven and unnatural sound.
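The arithmetic of superposition is easy to verify numerically. This short sketch (Python with NumPy, purely illustrative) sums a 1 kHz sine with an identical copy and with a copy shifted by half a period:

```python
import numpy as np

fs = 48_000                      # sample rate in Hz
f = 1_000                        # test tone in Hz
t = np.arange(fs) / fs           # one second of time points

wave = np.sin(2 * np.pi * f * t)

# Constructive interference: identical copies sum to twice the amplitude.
constructive = wave + wave

# Destructive interference: the second copy is shifted by half a period
# (a 180-degree offset), so peaks meet valleys everywhere.
shifted = np.sin(2 * np.pi * f * t + np.pi)
destructive = wave + shifted

print(round(float(np.max(np.abs(constructive))), 6))  # 2.0
print(float(np.max(np.abs(destructive))) < 1e-12)     # True
```

The cancelled sum is not exactly zero only because of floating-point rounding, which mirrors the real-world point above: perfect total cancellation is a limiting case.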
Common Causes in Recording
The most frequent cause of phase cancellation in a studio is using multiple microphones to record a single source. Drum recording is the classic example. When one mic is closer to the snare than another, the sound reaches each mic at a slightly different time. That time difference means the two signals are offset from each other, so when they’re combined in a mix, certain frequencies cancel while others reinforce. The result is a thin, scooped-out tone that no amount of EQ can fully fix.
The same thing happens when recording acoustic guitar with two mics, capturing a piano from different angles, or even placing a single mic near a reflective surface. The reflected sound bouncing off a wall or floor arrives at the mic slightly after the direct sound, creating a second copy that’s delayed just enough to cause partial cancellation.
Comb Filtering: Partial Cancellation
When the delay between two copies of a signal is very short, you don’t get a clean cancellation of all frequencies. Instead, you get a pattern called comb filtering. Some frequencies cancel (those where the delay equals half the wave’s period, or any odd multiple of that half), while others reinforce (those where the delay equals a whole number of periods). The result, if you plot it on a frequency graph, looks like the teeth of a comb: a series of evenly spaced notches cutting through the frequency spectrum.
The first cancelled frequency depends on the delay time. If the sound’s path to one mic is about 34 centimeters longer than its path to the other (one millisecond of delay at the speed of sound), the first notch appears at 500 Hz, with additional notches repeating at 1,500 Hz, 2,500 Hz, and so on. Move the mics closer together and the notches shift to higher frequencies. Move them farther apart and the notches drop lower. This is why comb filtering sounds different depending on mic placement, but it always has that characteristic hollow, “phasey” quality.
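The notch positions follow directly from the delay. A small sketch (the helper name is my own) computes them:

```python
def comb_notches(delay_ms, count=4):
    """First `count` cancelled frequencies (Hz) for a given delay.

    A notch falls wherever the delay equals an odd number of
    half-periods: f = (2k + 1) / (2 * delay).
    """
    return [1000.0 * (2 * k + 1) / (2.0 * delay_ms) for k in range(count)]

# A path difference of about one millisecond (roughly 34 cm at ~343 m/s):
print(comb_notches(1.0))   # [500.0, 1500.0, 2500.0, 3500.0]

# Halve the delay and every notch moves up an octave:
print(comb_notches(0.5))   # [1000.0, 3000.0, 5000.0, 7000.0]
```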
Stereo-to-Mono Collapse
Phase cancellation can also strike when a stereo recording gets played back in mono. A phase correlation meter, found in most digital audio workstations, measures how similarly the left and right channels behave on a scale from +1 to -1. A reading of +1 means both channels are identical (pure mono). A reading of 0 means the channels are uncorrelated, statistically independent of each other, which corresponds to a very wide stereo image. A reading of -1 means the left and right channels are identical but with opposite polarity, which means folding them together into mono produces silence.
This matters because your music will be summed to mono more often than you might think: phone speakers, smart speakers, some PA systems, and many retail environments all play audio in mono. If your stereo mix has elements that are out of phase, those elements vanish or lose low-end energy the moment the left and right channels combine.
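A correlation meter is essentially a normalized dot product of the two channels. A minimal sketch, not modeled on any particular DAW’s implementation:

```python
import numpy as np

def correlation(left, right):
    """Phase correlation from +1 (identical) to -1 (opposite polarity)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    return float(np.sum(left * right) / denom)

t = np.arange(48_000) / 48_000
sig = np.sin(2 * np.pi * 440 * t)

print(round(correlation(sig, sig), 3))    # 1.0   pure mono
print(round(correlation(sig, -sig), 3))   # -1.0  mono sum would be silence
```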
Phase Problems in Live Sound
In a live setting, phase cancellation most commonly shows up at speaker crossover points. A two-way speaker system hands off bass frequencies to a subwoofer and midrange-and-above to the main cabinets. At the crossover frequency, both speakers are active simultaneously. If the sound leaves each speaker at a slightly different time, destructive interference creates an audible dip right at that transition frequency. These dips can’t be corrected with EQ because the problem is timing, not level.
Sound takes about 3 milliseconds to travel one meter, so even small differences in speaker placement matter. Engineers fix this by adjusting speaker positions, adding digital delay to one set of speakers, or using a phase control (often a knob labeled 0 to 180 degrees on subwoofers) until the output at the crossover frequency reaches its maximum level and the dip disappears.
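The arithmetic behind that delay adjustment is simple. This sketch (the function name and sample-rate default are my own) converts a placement offset into the delay to dial in:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 °C

def alignment_delay(extra_distance_m, sample_rate=48_000):
    """Delay needed so a closer speaker waits for a farther one.

    Returns the delay in milliseconds and in whole samples.
    """
    seconds = extra_distance_m / SPEED_OF_SOUND
    return seconds * 1000.0, round(seconds * sample_rate)

ms, samples = alignment_delay(1.0)   # mains one metre closer than the subs
print(round(ms, 2), samples)         # 2.92 140
```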
Polarity vs. Phase
These two terms get used interchangeably in audio, but they describe different things. Polarity is a simple flip: positive voltage becomes negative and vice versa. Nothing moves earlier or later in time. If you have two identical, perfectly time-aligned signals and flip the polarity of one, they cancel completely. The polarity invert button (often labeled with a “ø” symbol) on a mixing console or audio interface does exactly this.
Phase, on the other hand, is about delay. It’s the time offset between two signals. When a second mic picks up a sound a fraction of a millisecond after the first mic, that’s a phase difference. Flipping polarity might help in some cases, but it won’t perfectly align two signals that are offset by an arbitrary amount of time. For that, you need a dedicated phase alignment tool that lets you shift one signal continuously between 0 and 180 degrees, or a simple delay adjustment measured in milliseconds or samples.
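The distinction is easy to see in numbers. In this sketch (the 0.3 ms offset is an arbitrary stand-in for a second mic), a polarity flip cancels a time-aligned copy perfectly but only partially cancels a delayed one:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 200 * t)

# Polarity: an instantaneous sign flip, no movement in time.
flipped = -sig

# Phase: the same signal arriving 0.3 ms late.
d = round(0.0003 * fs)                          # about 14 samples
delayed = np.concatenate([np.zeros(d), sig[:-d]])

print(float(np.max(np.abs(sig + flipped))))           # 0.0 (complete cancellation)
print(round(float(np.max(np.abs(sig - delayed))), 2)) # 0.36 (the offset survives)
```

Subtracting the delayed copy (equivalent to flipping its polarity and summing) leaves a residual signal, which is exactly why a polarity button alone can’t fix a timing problem.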
How to Prevent and Fix It
The simplest prevention method in recording is the 3-to-1 rule. For every unit of distance between a mic and its source, the next mic should be at least three units away from the first mic. If one mic sits one foot from an acoustic guitar, the second mic should be at least three feet from the first. This ratio ensures that the signal reaching the farther mic is quiet enough relative to the close mic that any phase discrepancy becomes negligible in the mix.
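A rough sanity check of the rule, assuming simple inverse-distance (1/r) level falloff in a free field, which real rooms only approximate:

```python
import math

def relative_level_db(near_m, far_m):
    """Source level at the far distance relative to the near one,
    assuming inverse-distance (1/r) falloff."""
    return 20 * math.log10(near_m / far_m)

# Near mic 1 unit from the source, far mic roughly 3 units from it:
print(round(relative_level_db(1.0, 3.0), 1))   # -9.5
```

Roughly 9.5 dB of attenuation in the bleed signal is generally enough to make the resulting comb-filter notches negligible in the combined mix.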
When prevention isn’t possible, several fixes exist in post-production. Zooming into the waveforms of two tracks and nudging one forward or backward in time until the peaks line up is the most direct approach. Many DAWs include sample-accurate delay compensation for this purpose. Flipping polarity on one track can also help, especially when two mics were placed on opposite sides of an instrument (like the top and bottom of a snare drum, where the drumstick pushes the head down toward the bottom mic while pulling it away from the top mic).
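The nudge-until-the-peaks-line-up step can be automated with cross-correlation. A sketch (the 21-sample offset is a made-up example):

```python
import numpy as np

def align(reference, delayed):
    """Estimate the lag (in samples) of `delayed` relative to `reference`
    via cross-correlation, then shift it back into alignment."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    if lag > 0:
        aligned = np.concatenate([delayed[lag:], np.zeros(lag)])
    elif lag < 0:
        aligned = np.concatenate([np.zeros(-lag), delayed[:lag]])
    else:
        aligned = delayed.copy()
    return lag, aligned

fs = 48_000
t = np.arange(4800) / fs
close_mic = np.sin(2 * np.pi * 220 * t) * np.hanning(t.size)
far_mic = np.concatenate([np.zeros(21), close_mic[:-21]])  # 21 samples late

lag, fixed = align(close_mic, far_mic)
print(lag)   # 21
```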
Checking your mix in mono is one of the fastest ways to detect phase problems. If an instrument suddenly sounds thinner, quieter, or disappears when you hit the mono button, phase cancellation is at work. A correlation meter gives you a real-time visual confirmation: if the needle swings toward -1, something in your stereo signal is fighting itself.
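The mono-button check can also be quantified as an RMS comparison before and after the fold. A sketch with a hypothetical helper name:

```python
import numpy as np

def mono_drop_db(left, right):
    """RMS level of the mono fold relative to the average channel level, in dB."""
    def rms(x):
        return np.sqrt(np.mean(np.square(x)))
    mono = (left + right) / 2.0
    stereo = (rms(left) + rms(right)) / 2.0
    return 20 * np.log10((rms(mono) + 1e-12) / stereo)  # epsilon avoids log(0)

t = np.arange(48_000) / 48_000
sig = np.sin(2 * np.pi * 100 * t)

print(round(float(mono_drop_db(sig, sig)), 1))   # 0.0  identical channels fold cleanly
print(float(mono_drop_db(sig, -sig)) < -60)      # True  opposite polarity all but vanishes
```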
When Phase Cancellation Is Intentional
Not all phase cancellation is unwanted. Active noise-canceling headphones use it on purpose. A tiny microphone on the outside of the headphone picks up ambient noise, and a processor generates a matching sound wave with identical amplitude but inverted polarity. When this “anti-noise” signal combines with the incoming noise inside the ear cup, the two cancel each other out, reducing what you hear. The technology works best on steady, low-frequency sounds like airplane engine drone, where the waveform is predictable enough for the processor to generate an accurate inverse in real time.
The same principle shows up in studio and broadcast noise reduction. Place one mic close to the speaker’s mouth and an identical second mic slightly farther away, flip the polarity of the distant mic, and sum the two signals. Background sound that reaches both mics at nearly the same level cancels, while the voice, far louder in the close mic, survives largely intact. This differential arrangement is the basis of noise-cancelling microphone techniques used in broadcast and podcasting to isolate a speaker from background sound.

