It's Only a Phase

A RECORDING-MUSICIAN'S GUIDE TO UNDERSTANDING AND CONTROLLING PHASE ISSUES

The concept of phase comes into play again and again in audio technology, from stereo mixing to microphone placement to digital sync to everyday troubleshooting and more. When it works in your favor, it can be responsible for some cool results; when it works against you, it can take whole instruments out of the mix. Understanding the consequences of phase will help you grasp a great variety of issues that recording musicians encounter every day.

What Is It?
A wave's phase describes where it is in its cycle at any given moment. While phase is related to both frequency and time, it does not measure wavelength, speed, or absolute distance. What it does measure is where a wave is relative to the beginning of its cycle. From that, its phase can tell us something about what kind of energy the wave has at that moment, and thus how it might interact with other waves it comes in contact with. Although we're focused on sound here, phase is a basic characteristic of anything that travels as a wave, so by understanding phase in sound, you can also start to understand something about how it works in light, water, and even quantum mechanics.

As you probably know, sound waves are the patterns of changes in pressure we hear as sound moves through the air. The length of this pattern is called a period, and when a wave has made one trip through the pattern, it has moved through one complete cycle of the wave. When a wave pattern repeats in the same shape continuously, it is called a periodic wave. These periodic waves carry the sense of pitch in the sound, while waves whose shapes do not repeat regularly carry the sense of noise we hear. I will be referring primarily to periodic waves here as they are relevant to music and are easier to analyze.

FIG. 1: The process of a periodic wave moving through time is measured in degrees because it's cyclical.

Illustration: Chuck Dahmer

It can be useful to describe exactly where a sound is in a cycle at a given moment, and because of the repeating nature of a periodic wave, it is common to measure its position within a period in degrees, the way you would measure a circle (see Fig. 1). Although we've been describing acoustic sound waves, electricity is also represented with waves, and so phase also applies to analog audio—and, by extension, digital audio (after it goes through D/A conversion).
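
If you want to see this relationship in numbers, the short Python sketch below (my own illustration, not taken from any particular audio tool) generates a sine wave at a phase offset given in degrees and converts that offset into time. The 1kHz frequency and 48kHz sample rate are arbitrary choices.

```python
import numpy as np

SAMPLE_RATE = 48_000          # samples per second (arbitrary choice for this sketch)
FREQ = 1_000.0                # 1kHz test tone
DURATION = 0.005              # 5 ms, a few cycles

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

def sine(freq, phase_degrees):
    """Return a sine wave whose starting point in its cycle is given in degrees."""
    return np.sin(2 * np.pi * freq * t + np.deg2rad(phase_degrees))

reference = sine(FREQ, 0)     # starts at 0 degrees
shifted = sine(FREQ, 90)      # same wave, a quarter-cycle ahead

# At 1kHz the period is 1 ms, so a 90-degree offset corresponds to 0.25 ms.
print("Offset in time:", 90 / 360 * (1 / FREQ) * 1000, "ms")
```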

When Waves Collide
Waves interact with each other when they collide acoustically or are combined electrically. When either of those things happens, their individual energies get mixed together and the result is a new, combined wave. In AC electricity, this is called summing—simply adding things together. Because their energies are different at different points in their cycles, the phase of one wave relative to another is a critical factor.

Electrically, this summing usually happens when two signals are combined in the same bus of a mixer. So when two channels are both panned hard-left, they both go to the left mix bus, their full signals are summed, and thus the phase characteristics of those signals will interact. On the other hand, if they are hard-panned in opposite directions, one goes to the left mix bus and one to the right. If those buses go to left and right speakers, the two signals are not summed and never meet (until they get back into the acoustic world), so their electrical energy remains separate.

FIG. 2: Waves that are combined in phase are constructive, increasing total amplitude (a). Waves combined when they are out of phase are destructive, decreasing total amplitude (b).

Illustration: Chuck Dahmer

It is important to understand that sonically, phase comes into play only when waves are summed together in some amount. Imagine one sound source picked up by two mics. If the sound happens to reach each mic at exactly the same place in its cycle, the two signals are fully in phase with each other. In that circumstance, when they are summed together in a mixer, the total result will be louder than either of the two by itself. This is called constructive interference (see Fig. 2a), and reflects the common sense that if two summed microphones pick up the same sound at the same level, the result will be an increase in the overall volume.

The opposite effect, destructive interference, occurs when that same wave arrives at the two mics at different points of its cycle. When that offset reaches 180 degrees, they are then completely out of phase with each other. This effect can actually cause the summed sound of the two mics to be quieter than just one because waves that are 180 degrees out of phase with each other cancel each other out and produce no sound at all (see Fig. 2b).
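
Here is a minimal sketch of both effects in Python, using a pure 440Hz test tone (the frequency is arbitrary): summing two identical waves doubles the peak level, while summing a wave with a copy shifted 180 degrees cancels it almost completely.

```python
import numpy as np

SAMPLE_RATE = 48_000
FREQ = 440.0                                   # arbitrary test frequency
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE       # one second of time values

wave = np.sin(2 * np.pi * FREQ * t)
in_phase_copy = np.sin(2 * np.pi * FREQ * t)               # 0-degree offset
out_of_phase_copy = np.sin(2 * np.pi * FREQ * t + np.pi)   # 180-degree offset

constructive = wave + in_phase_copy       # peaks line up: amplitude doubles
destructive = wave + out_of_phase_copy    # peaks meet troughs: near-total cancellation

print("Peak of constructive sum:", round(float(np.max(np.abs(constructive))), 3))  # ~2.0
print("Peak of destructive sum:", round(float(np.max(np.abs(destructive))), 10))   # ~0.0
```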

How Problems Arise
Taken by itself and with simple sound waves, phase can seem pretty basic and much harder to hear than other aspects of sound, like amplitude and frequency. But in the real world, waves are not simple, and they hit each other at any point in their cycles, not just 0 degrees and 180 degrees. Because sounds comprise more than a single simple wave, their harmonics interact, causing some frequencies to add and others to cancel. When you consider that there are thousands of them interacting at a time, this is where the real complexity and richness of sound occurs. When severe constructive and destructive interference is spread across all frequencies, the result is an odd effect called comb filtering, or what people tend to call phasiness (see Web Clip 1).
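
To see why a short delay between two copies of the same sound produces a "comb," consider a hypothetical 1ms offset (roughly what a foot of extra mic distance would cause). The sketch below simply lists the frequencies where that delay equals an odd number of half-cycles, which is where cancellation occurs.

```python
# Frequencies cancelled by summing a sound with a copy of itself delayed by delay_s:
# every frequency where the delay equals an odd number of half-cycles.
delay_ms = 1.0                      # hypothetical offset, roughly a foot of extra distance
delay_s = delay_ms / 1000.0

notches = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
print("First cancellation frequencies (Hz):", [round(f) for f in notches])
# -> [500, 1500, 2500, 3500, 4500] -- evenly spaced "teeth," hence comb filtering
```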

Phase problems most commonly start with microphone spacing. Because the distance between two mics changes the time at which a sound arrives at each of them, the phase differences between all their harmonics interact, causing some amount of comb filtering. While it can be used as an effect, it's not usually a desired sound. So engineers often listen to a combination of different mics and try to find (and avoid) placements where phase interactions have a negative effect on the recording.

One common situation where many people overlook checking phase is when miking guitar amps, where it is common to use more than one mic in front of the speaker. These mics may or may not be exactly the same distance from the speaker, so the combination of them should be checked for the negative effects of phase. The same issue can arise when recording two channels of an instrument—bass, for example—using a miked amp and a DI box. Depending on where the phase of the waves happens to fall when they hit each mic, some fairly significant changes in tone can result (see Web Clip 2).
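
As a rough illustration, here is how you might estimate the arrival-time difference (and the lowest cancellation frequency) for an amp mic versus a DI. The 0.3m mic distance is a made-up example, and the DI is treated as instantaneous.

```python
SPEED_OF_SOUND = 343.0   # meters per second, at roughly room temperature

def arrival_delay_ms(extra_distance_m):
    """Extra time (ms) the sound needs to reach the farther of two pickup points."""
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical setup: amp mic 0.3 m from the speaker, DI signal treated as instantaneous.
delay_ms = arrival_delay_ms(0.3)
first_cancel_hz = 1000.0 / (2 * delay_ms)   # lowest frequency where the two fully cancel

print(f"Mic lags the DI by about {delay_ms:.2f} ms")              # ~0.87 ms
print(f"First deep cancellation near {first_cancel_hz:.0f} Hz")   # ~570 Hz
```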

FIG. 3: Mics spaced apart can capture a wave in phase in one mic and out of phase in the other, leading to comb filtering and an odd frequency response.

Illustration: Chuck Dahmer

One of the most critical places to be aware of phase is when the typical combination of close and overhead mics is used on a drum set. The sound of the snare drum hit is heard through its close mic first and the overhead mic slightly later because the overhead is slightly further away (see Fig. 3). The typical spacing of these mics is in the range that can often cause cancellations between them. Add the fact that there are usually two drum overhead mics, often spaced apart, and the chances for audible phase interaction between them all are even greater.

Staying Out of Trouble
There are a few ways to avoid the problem in the first place. One way not to invite extra “phase-ghosts in the machine” is simply to use the fewest mics you need to get the job done. The fewer mics on the floor, the fewer entry points for sound, and thus the fewer places they have to potentially conflict with each other.

Another common method is to follow the 3:1 rule. This principle says that for every unit of distance (whatever it may be) between a mic and its source, a nearby mic should be separated from the first by at least three times that distance. Put another way, the distance between a mic and its source should be no more than one-third of the distance between that mic and the next closest mic. The idea is to avoid placing mics too close together, which greatly increases the chances for destructive phase effects between them. In practice, the 3:1 rule is a good guideline to start with, but not to be tied to as you get more comfortable with listening for phase problems.
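
A toy version of the 3:1 check, with made-up distances, just to show the arithmetic:

```python
def satisfies_three_to_one(source_to_mic, mic_to_next_mic):
    """Rough 3:1 check: the next mic should be at least three times as far from this
    mic as this mic is from its own source. Units just need to be consistent."""
    return mic_to_next_mic >= 3 * source_to_mic

# Made-up example: a mic 20 cm from its source, with the next mic 50 cm or 70 cm away.
print(satisfies_three_to_one(0.2, 0.5))   # False -- 50 cm is less than 3 x 20 cm
print(satisfies_three_to_one(0.2, 0.7))   # True  -- 70 cm clears the guideline
```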

This is also one reason some engineers avoid using spaced stereo techniques in certain situations (particularly those that may involve broadcast) in favor of coincident stereo setups, such as X/Y, M/S, or Blumlein: by definition, these coincident techniques position the left and right mics close enough together that there are essentially no phase differences between the two. (For more on stereo miking, see “Double Your Pleasure” in the June 2000 issue of EM, available at emusician.com.) If you are using a spaced omni pair of mics for drum overheads, you will also want to check the phase between the two of them, and between them and at least the kick and snare.

How can you spot the ill effects of phase problems between mics? They don't always occur, so it's something you need to check for each time. To find them in a drum kit recording, you need to be able to hear just the snare and one overhead together in mono (which sums them, as discussed above). This can be done simply at any mixer, or with a combination of a mixer and a monitoring controller, which makes it easy to hear the results in mono without disturbing your mix. If you have a console, the PFL Solo button on each channel will also do just what you need. (However, it won't reflect actual fader levels in your mix as it's a pre-fader listen.) If you hear comb filtering (the phasey or whooshy sound mentioned earlier), then you likely need to make a small but important adjustment.

Often there isn't obvious comb filtering, but the mix of the two mics causes the drum to sound significantly thinner and weaker than it normally would. An odd frequency response, such as a hollow sound in the midrange, can also indicate destructive phase interference between the two mics. Many engineers check these things by habit as they're setting up their mix and listen for one of these problems to jump out.
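
If you like to sanity-check things numerically, here is a rough Python sketch of that mono listening test: it sums two tracks and compares RMS levels, using a deliberately contrived example in which the second track is delayed by exactly half a cycle. It is no substitute for listening, just an illustration of why the summed level can collapse.

```python
import numpy as np

def mono_check(track_a, track_b):
    """Sum two tracks and report RMS levels: each track alone, then the mono sum.
    A summed level well below the louder track suggests destructive interference."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return rms(track_a), rms(track_b), rms(track_a + track_b)

# Contrived example: the same 200 Hz wave, the second copy arriving 2.5 ms later,
# which is exactly half a cycle at 200 Hz.
sr = 48_000
t = np.arange(sr) / sr
snare_close = np.sin(2 * np.pi * 200 * t)
overhead = np.sin(2 * np.pi * 200 * (t - 0.0025))

print(mono_check(snare_close, overhead))   # ~(0.707, 0.707, ~0.0): the sum collapses
```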

FIG. 4: The difference between two waves that are in phase but have opposite polarity (a) and two waves that are 180 degrees out of phase (b).

Illustration: Chuck Dahmer

Phase vs. Polarity
It might be useful at this point to quickly address a common misconception. Phase and polarity do not describe exactly the same thing. As we've looked at already, phase is specifically about timing. When you have a phase problem, you have a problem between the timing of two things, such as the distance between two mics or latency between two DAW tracks. The term out of polarity refers to two waves that may actually be in phase (i.e., they started at the same time), but whose energy is moving in opposite directions. Polarity does, indeed, relate to the shape of a wave and how it combines with others, but a problem with polarity is not related to a time offset between two similar waves the way phase is (see Figs. 4a and b). The effects of the two are often similar: When two sounds have opposite polarity, they look and behave like two sounds that are 180 degrees out of phase. But the solutions to the two problems are different, so it can be useful to differentiate them.

A common example is the way a snare drum works: When the drum is struck from the top, the head is pushed down. This creates low pressure on the top of the drum and high pressure on the bottom. So now we have the same (or similar) waves from the same source and at the same time, but radiating with opposite polarity to each other. This is a recipe for sonic weirdness because any frequencies that are the same in both sounds (which is many of them) will cancel each other out. This is why a polarity-reverse (often mistakenly called a phase-reverse) switch is used.

This function simply turns the wave upside-down (sometimes also called reverse or invert) without changing its timing, so that all the points that were cancelled now add constructively, and vice-versa. It flips the polarity of the sound from the bottom of the snare to match the sound at the top, so the two add constructively instead of destructively. The result is a snare sound that is usually much fuller than if the two mics were kept out of polarity. This technique is often used with open-back guitar amps because similar principles are at work.
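
Numerically, polarity reversal is nothing more than multiplying a signal by -1, with no change in timing. Here is a small sketch of the snare top/bottom situation, using an arbitrary 150Hz tone as a stand-in for the shared content:

```python
import numpy as np

sr = 48_000
t = np.arange(sr) / sr
top_mic = np.sin(2 * np.pi * 150 * t)    # stand-in for the shared snare content, top head
bottom_mic = -top_mic                    # same moment in time, opposite polarity

# Summed as recorded, the shared content cancels:
print(round(float(np.max(np.abs(top_mic + bottom_mic))), 6))      # 0.0

# The polarity-reverse switch just multiplies one signal by -1, with no timing change:
bottom_flipped = bottom_mic * -1.0
print(round(float(np.max(np.abs(top_mic + bottom_flipped))), 6))  # ~2.0, constructive again
```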

That said, the polarity-reverse switch can still be useful in more drastic cases of phase problems or as a coarse quick fix (see Web Clip 3). This can be done easily on many hardware mixers and mic preamps—look for a ø symbol. All major DAW platforms have some way to invert polarity as well, either file-based (which changes the audio file itself) or in real time (as a plug-in). Sometimes there is a dedicated plug-in, but sometimes you need to look for it as a function of another plug-in. For example, Pro Tools LE doesn't provide a dedicated polarity-reverse switch as a real-time RTAS plug-in; there is one in the file-based AudioSuite menu. But it is a function included in many other plug-ins such as trims, delays, and EQs. In general, try engaging the polarity flip on the mic with the least amount of leakage in it, which usually means the close mic. If that sounds better, you should then check that close mic against other mics that share sounds with it.

For the snare, check it against the hi-hat mic because their polarity relationship is now the opposite of what it was before. You might also need to change the mic under the snare back to its original polarity if you flipped it earlier. It is unlikely things will end up perfect with this method, but it can produce big changes in sound quickly, and so might very well be an improvement.

Making It Right
Fixing a phase problem with a time-based solution can happen in a number of ways at any point in the process, although the earlier the better. During recording, the most direct solution is often to move one of the mics just a bit closer to or farther from the other, listen, and repeat. Because of the variables and complexities in the acoustic world, it can be difficult to predict which exact move will completely fix a phase problem. But as a general guideline, if the cancellation seems to be occurring in the high-mids, subtle movements of less than a foot can do the job, hopefully with minimal change in the tonal quality of your setup. Lower frequencies usually call for larger changes, so listen and check carefully to balance correcting the phase problem with keeping your desired sound.
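
The rule of thumb follows from wavelength: moving a mic by up to half a wavelength at the problem frequency can turn a cancellation there into reinforcement. A quick calculation, assuming roughly 343 meters per second for the speed of sound:

```python
SPEED_OF_SOUND = 343.0   # meters per second, an approximation

def half_wavelength_cm(freq_hz):
    """Half a wavelength (in cm) at a given frequency -- a rough upper bound on how
    far a mic might need to move to undo a cancellation centered there."""
    return SPEED_OF_SOUND / freq_hz / 2 * 100

print(f"{half_wavelength_cm(3000):.1f} cm")   # ~5.7 cm: high-mid problems need small moves
print(f"{half_wavelength_cm(200):.1f} cm")    # ~85.8 cm: low-frequency problems need big moves
```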

As it fundamentally has to do with time, anything that lets you change the timing between the two tracks during or after recording can be used as a fix. We're talking about very small increments of time here, so you'll need tools that allow you control at the millisecond or even the sample level. This is commonly done in a DAW, which typically lets you see the waveform and get some visual help in lining things up. Be aware that this method is not foolproof either, and that your ears must prevail over your eyes.

FIG. 5: Snare and stereo overhead tracks recorded out of phase. Pro Tools' Relative Grid mode and Nudge function in samples can be used to slide the snare track later in time, putting it in phase with the overheads.

Because the sound gets to the overheads later than it does to close mics, one approach is to slide the overheads slightly earlier in the timeline. This will put them more in phase with the snare and possibly other close mics to varying degrees. You'll need to zoom in far enough to be able to see individual waveform cycles. It can also be helpful to use a quantized editing mode in the DAW so you know exactly how much of a change you're making. For example, the Relative Grid mode in Pro Tools is ideal for this because the grid can be set to milliseconds or samples, and it lets you move the sound relative to where it actually started, as opposed to relative to the timeline (see Fig. 5).
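
If you would rather let the computer suggest a starting offset, one common trick (not specific to any DAW) is to correlate the close mic against the overhead and find the lag where they line up best. Here is a rough Python sketch, with a synthetic example in which the "overhead" is simply the close-mic signal delayed by 120 samples:

```python
import numpy as np

def best_offset_samples(close_mic, overhead, max_shift=2000):
    """Estimate how many samples later the overhead hears the same event as the close
    mic, by finding the lag with the highest correlation. A rough starting point only;
    your ears still get the final say."""
    best_lag, best_score = 0, -np.inf
    for lag in range(max_shift):
        score = np.dot(close_mic[: len(close_mic) - lag], overhead[lag:])
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic example: the "overhead" is the close-mic signal delayed by 120 samples.
rng = np.random.default_rng(0)
close = rng.standard_normal(48_000)
over = np.concatenate([np.zeros(120), close[:-120]])

print(best_offset_samples(close, over))   # 120: nudge the overheads 120 samples earlier
```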

A potentially faster (but less visual) method is to use a simple short delay on the close mic, tweaking the delay time slowly while listening. Again, the increments here are small, much smaller than most conventional delay plug-ins allow. Many DAWs include delay plug-ins designed for exactly this purpose (or for compensating for latency, which amounts to the same job). This is one use of the Digidesign Time Adjuster plug-in included with Pro Tools.

Take Advantage of Phase
Finally, it is worth noting that tracks that aren't perfectly phase-aligned are not necessarily a bad thing. It is precisely these phase and timing differences that create spaciousness and can bring out the sense of the room in a recording. These minute timing differences can also lead to a bigger drum sound or thicker sounds from an amp due to that split-second delay between the mics.

One way to use these phase anomalies to your advantage is to position a second mic on a guitar amp so it adds or cancels at a specific frequency. (This gets nearly impossible above about 900Hz, but can work below that.) Use a high- or lowpass filter to remove ranges where the effects of this phase manipulation are unwanted, keeping the positive effects. Again, it is critical to at least check your complete mix in mono to be sure you know exactly what these delays are doing to the mix in the various situations in which it might be heard.
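
For a ballpark starting placement, the extra distance that puts the second mic half a cycle behind at your target frequency is half a wavelength there. A quick sketch, assuming 343 m/s for the speed of sound and a made-up 400Hz target:

```python
SPEED_OF_SOUND = 343.0   # meters per second, an approximation

def extra_distance_for_cancel_cm(target_hz):
    """Extra path length (cm) that delays the second mic by half a cycle at the target
    frequency, producing a cancellation there when the two mics are summed.
    Real amps and rooms won't behave this neatly; treat it as a starting point."""
    return SPEED_OF_SOUND / target_hz / 2 * 100

print(f"{extra_distance_for_cancel_cm(400):.1f} cm")   # ~42.9 cm farther back to notch 400 Hz
```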

Overall, when dealing with phase—whether trying to eliminate anomalies or using it as an effect—knowing what to listen for will give you the control you need.

Brian Heller is a freelance engineer, composer, and educator in Minneapolis. He teaches in the Sound Arts program at Minneapolis Community and Technical College.