Beating Phase Issues in Your DAW

Phase cancellation occurs when two identical sound waves are equal and opposite to each other; in other words, as one waveform increases in amplitude, the other decreases at the same rate. If you add the instantaneous values of these two waveforms, the result is zero. As a result, the two sounds cancel out, and assuming they truly are identical, you will not be able to hear anything.
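To see this cancellation in numbers rather than in your monitors, here’s a minimal Python sketch (the 48kHz sample rate and 300Hz tone are just illustrative values): summing a sine wave with its inverted copy yields zero at every sample.

```python
import math

# Illustrative values: a 300 Hz sine sampled at 48 kHz for 10 ms.
sample_rate = 48000
freq = 300.0
n_samples = 480

wave = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n_samples)]
inverted = [-s for s in wave]  # the equal-and-opposite waveform

# Adding the instantaneous values of the two waveforms gives zero everywhere.
summed = [a + b for a, b in zip(wave, inverted)]
print(max(abs(s) for s in summed))  # 0.0 -- total silence
```

Because each sample of the inverted copy is the exact negative of the original, the sum is silence; real tracks are never perfectly identical, so in practice cancellation is partial rather than total.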

At this point, some of you (you know who you are!) are rushing to the forums to tell us that what we’re really talking about is polarity, not phase. And by and large, you’re right; polarity is a total “flip” of a signal that affects all frequencies, whereas phase changes can sometimes be associated only with specific frequency ranges (e.g., phase variations in a speaker crossover). But everyone knows what we mean by “phase,” so that’s the term we’ll use.

To avoid phase issues, the first step is to record everything in phase. This comes down to proper mic placement and adhering to the “Three to One Rule”: When using multiple mics, for every unit of distance from the sound source, the mics should be at least three units apart from one another. So if one mic is three inches from a guitar cab, the second mic should be at least nine inches from the first. But what if there are still phase issues, we’re not up for re-cutting the track, and we’re stuck with fixing it in the mix?
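The arithmetic behind the Three to One Rule is simple enough to capture in a couple of lines; the helper below is purely hypothetical, not part of any DAW or measurement tool:

```python
def min_mic_spacing(source_distance: float) -> float:
    """Three to One Rule: for every unit of distance from the source,
    keep the mics at least three units apart from one another."""
    return 3.0 * source_distance

# A mic three inches from a guitar cab means the second mic
# should sit at least nine inches from the first.
print(min_mic_spacing(3.0))  # 9.0
```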


First, let’s learn more about phasing. Here’s a way to demonstrate phase cancellation and learn the proper techniques for aligning your tracks in your DAW, as well as get a feel for the process of correcting phase.

  1. Make two new audio tracks in your DAW. Assign both outputs to the same speaker or channel, and pan both to center.
  2. Record a sine wave onto each of the two tracks. To do this with Pro Tools, record your Signal Generator plug-in’s output into each track. You can also use a standalone signal generator found with common studio maintenance equipment, a plug-in similar to Signal Generator, a digital audio editing program that can generate sine waves (Sound Forge, Wavelab, Audition, Peak, etc.; see Tech Bench, 11/06 issue), or even an oscillator found on a mixing console or tape machine. The frequency of the sine wave doesn’t matter, but a 300Hz tone is a good place to start.
  3. Put your DAW in play mode, and listen to the outputs from each track. If you mute and unmute track one (assuming that both sine waves start at the same point in their cycle), you’ll notice that the combined output of both tracks is louder than either track alone.
  4. Zoom all the way in so that you can analyze the peaks and valleys of each of the waveforms (Figure 1).
  5. Leaving track one alone, select track two and physically move the waveform to the right or left. As you move track two around in time, listen to the combined outputs from both tracks. Note that once the two sine waves are opposite each other (i.e., the crest of the wave on track one aligns with the valley of track two), the combined output will become inaudible, or close to it if the channels aren’t perfectly matched.
  6. Experiment further with moving one of the waveforms around in time while listening to the sum of both of them. Notice how the volume changes based on the location of each waveform; the amplitude will double if the peaks and valleys match up, but if they’re out of phase, the sound will be attenuated greatly — or even inaudible.
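Steps 5 and 6 above can be simulated numerically: shift one sine against the other and watch the peak level of the mix swing between double and nothing. This is a sketch with assumed values (48kHz sample rate, 300Hz tone), not a DAW operation:

```python
import math

sample_rate = 48000
freq = 300.0
samples_per_cycle = int(sample_rate / freq)  # 160 samples per cycle at 300 Hz
n = 1600

track1 = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

def summed_peak(shift: int) -> float:
    """Slide track two `shift` samples later and return the mix's peak level."""
    track2 = [math.sin(2 * math.pi * freq * (i - shift) / sample_rate) for i in range(n)]
    return max(abs(a + b) for a, b in zip(track1, track2))

print(round(summed_peak(0), 3))                       # 2.0 -- peaks line up, amplitude doubles
print(round(summed_peak(samples_per_cycle // 2), 3))  # 0.0 -- half a cycle off, total cancellation
```

Shifts between those two extremes give intermediate levels, which is exactly the volume change you hear while dragging the waveform around.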


Instead of using a sine wave, let’s now apply the same concept to instruments that we find in our recordings. Say you’re mixing a song with two snare tracks (i.e., top and bottom mics were used to record the drum). Capturing one source with more than one mic can invite phase issues; if you find your snare tracks are out of phase, simply invert the polarity of the bottom snare track (flip the signal upside down so the crests of the two waveforms line up with each other), and you’re probably set.
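The top/bottom snare situation can be sketched the same way. The sample values below are made up, but the sign test is the real point: a negative correlation between two tracks of the same hit suggests they oppose each other, and multiplying by -1 is all a polarity flip does.

```python
# Made-up sample values standing in for top and bottom snare mics;
# the bottom mic hears the head moving the opposite way.
top    = [0.0, 0.8, 0.4, -0.3, -0.1]
bottom = [0.0, -0.7, -0.35, 0.25, 0.1]

def correlation(a, b):
    """Sum of sample products: negative means the tracks oppose each other."""
    return sum(x * y for x, y in zip(a, b))

print(correlation(top, bottom) < 0)  # True -- out of phase

# A polarity flip just turns every sample value upside down.
flipped = [-s for s in bottom]
print(correlation(top, flipped) > 0)  # True -- crests now line up
```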

But not always. While this works for some tracks, other tracks may be more problematic. Say you have two guitar tracks (one track is a Shure SM57 directly on the grille of the cabinet while the second track is a Neumann U47 a few feet back from the amp to capture some of the room), or two bass tracks (a DI signal as well as an amp signal). In the case of the guitar, the amp’s output had to travel farther to the room mic than it did to reach the close mic. The bass signal, likewise, had to go farther to reach the amp than it did to reach the DI. Temporarily mix the two signals together in mono; if the sound becomes weaker instead of stronger, you have phase issues that need to be addressed.

In this case you should follow steps 4–6, moving the second track so that it aligns with the first. The two tracks may not look identical (especially for the guitar tracks, as they are from different mics that have different frequency responses, positioned at different distances from the source) — but the similarities will be clear enough to allow you to position the tracks accordingly. [Note: One good way to give yourself a point of reference when recording your bass is to track yourself plugging in at the beginning of the take. The “pop” of the cable will give you a nice spike on your waveform that you can then use to help you align all of your tracks.]
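The manual alignment described above can also be automated: slide the later track against the reference and keep the lag where the waveforms agree most strongly. This is a toy cross-correlation sketch with invented sample values; a real room-mic track would run to thousands of samples, and the spike from a cable “pop” makes an ideal landmark for it to lock onto.

```python
# Invented values: a transient on the close mic, and the same event
# arriving three samples later (and quieter) on the room mic.
close_mic = [0.0, 0.0, 1.0, 0.5, -0.4, -0.2, 0.0, 0.0, 0.0, 0.0]
room_mic  = [0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.3, -0.25, -0.1, 0.0]

def best_lag(reference, delayed, max_lag=8):
    """Return the shift (in samples) that best lines `delayed` up with `reference`."""
    def corr(lag):
        return sum(reference[i] * delayed[i + lag]
                   for i in range(len(reference) - lag))
    return max(range(max_lag + 1), key=corr)

lag = best_lag(close_mic, room_mic)
aligned = room_mic[lag:] + [0.0] * lag  # slide the room track earlier in time
print(lag)  # 3
```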

By moving your tracks around in time, you can greatly alter the sound of the finished product — so it’s a good idea to experiment with some phase tricks even when your tracks aren’t canceling out totally. For example, I’ve noticed that flipping the polarity of a single kick drum track can make the drum sound more punchy and direct. Similarly, inverting the polarity on a hi-hat track can make the hats really cut through the mix.

However, when checking phase between two tracks, always monitor them in mono until you’ve determined that the phase relationship is correct. Two out-of-phase signals can give a gloriously wide sound when panned in stereo (this was the basis of “stereo simulation” in many effects boxes), but they’ll disappear completely when played back in mono.
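A quick mono-compatibility check can be expressed the same way: fold the two channels down to one and compare levels. In this sketch (assumed 48kHz/300Hz values again), a polarity-flipped copy measures a healthy level on its own but vanishes in the fold-down:

```python
import math

def rms(samples):
    """Root-mean-square level of a track."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

rate, freq, n = 48000, 300.0, 4800
left = [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
right = [-s for s in left]  # "gloriously wide" in stereo, but...

mono = [l + r for l, r in zip(left, right)]
print(round(rms(left), 3))  # 0.707 -- each channel is healthy on its own
print(rms(mono))            # 0.0 -- the pair disappears in mono
```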


Consider a kick drum: When it’s hit with a beater, a rush of air pushes forward from the front of the drum toward the listener. Now consider that same kick drum playing back through a speaker. Does the hit cause the speaker to move out, thus pushing air, or is there reversed polarity somewhere in the picture, so that the speaker “sucks in” air?

If the speaker moves out, it’s considered to match the absolute polarity of the kick drum. Now, most texts will tell you that a sound heard in isolation (i.e., not compared to any other sound) will sound the same whether its polarity is absolute or reversed. And in theory, this makes sense. Yet some people can perceive when something like a kick drum is “sucking” instead of “pushing,” thus making a case that absolute polarity does matter, and can make a difference to the overall sound.
—Craig Anderton