
Panning for Gold

3/1/2002

Once you've mastered the arcane science of signal routing, learned the ancient secrets of gain structure, and been initiated into the mysterious ways of equalization, compression, and myriad other stages in the black art of signal processing, you'll face the ultimate challenge: the final mixdown. That is where all those perfect takes — often separated by months, miles, and styles — must combine seamlessly beneath your skilled fingers to create something sonically balanced and musically cohesive.

Moving the faders comes easy: up is louder. Equalization skills come with time: find the pain and reduce the gain. But what's the rule of thumb for creating an effective soundstage? Where the heck do you place each instrument and effect in the stereo field? Is there some secret pan-pot code for creating a good stereo image? Randy Hoffner, a former audio engineer at NBC, once said, “Stereo does not equal mono times two,” and though that may seem obvious, there is more to stereo imaging than first meets the eye — or rather, the ear.

There is no rule of thumb for creating a soundstage; as with all things musical, the only regulation is that it sounds good (or at least the way you want it to sound). Still, most engineers agree that the general goal is a clear, uncluttered, three-dimensional soundstage in which the elements can all be clearly heard and their positions can be readily identified. For the novice, that is best achieved by learning first how to create a realistic soundstage — that is, one that sounds believable, as though the band were set up and playing right in front of you. Just as Picasso mastered realism before venturing off into cubism, the mix engineer is well served by an initial apprenticeship to nature — that is, to making things sound the way they actually do. Naturally, once you've mastered the rules, you will have a much better idea of when and how to break them.

This article covers what you need to know to create realistic soundstages. I'll explain how stereo imaging works, discuss critical tools of the trade, lay out general strategies, and offer practical tips, tricks, and caveats for constructing an effective, true-to-life soundstage. In addition, I have provided two detailed, real-world examples of soundstaging (see the sidebars, “Figuring It Out” and “Spinal Tap Dancer”) based on actual mixes you can download and analyze at your leisure.

SOLID FOUNDATION

Although the word stereo commonly refers to any 2-channel system (whether audio or some other type), the original Greek word stereo meant “solid” in the sense of three-dimensional (having breadth, depth, and height). Combine stereo with the word phonic, which means “sound,” and you get stereophonic, or solid sound. Just as stereoscopic, or solid-vision, imaging employs two slightly offset photographs to create the illusion of depth (based on the fact that having two eyes a given distance apart allows for depth perception), stereophonic recordings create the illusion of three-dimensional space from two speakers. Simply put, stereophonic imaging works because having two ears allows for localization, the ability to perceive the direction a sound is coming from.

The human ear is an extremely complex and precise instrument. The structure and interrelationship of the outer, middle, and inner ears allow for maximum energy transfer at specific frequencies. Impulses generated by the nerve fibers in the cochlea (inner ear) are sent to the brain, which can accurately determine, among other things, pitch (to within 0.3 percent accuracy), distance, direction, and, if the source is moving, speed. Other cues related to timing and relative intensity provide subtle clues as to the nature and size of a room or environment — whether, for example, a sound is coming from a small room, a large auditorium, or the depths of a forest. All of that happens incredibly fast, with no need for the listener to think about the complex physics involved.

PANORAMIC VIEW

A primary factor in localizing sound is intensity, also known as volume. Volume gives an indication of distance, because a sound's intensity decreases with distance from the source. In a free field, sound emanates from the source in a spherical pattern, carrying a fixed amount of energy created by the source; as the sphere gets bigger, that energy is spread over a larger surface, so the intensity drops. The inverse square law states that sound intensity decreases in proportion to the square of the distance from the source. That works out to approximately 6 dB of attenuation each time the distance doubles. In other words, louder sounds seem closer.
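
To put numbers to that, here is a minimal sketch in Python (the helper name and the 1-foot reference distance are my own illustrative choices) that computes free-field attenuation and confirms the roughly 6 dB drop per doubling of distance:

```python
import math

def attenuation_db(distance_ft, reference_ft=1.0):
    """Free-field level drop, in dB, relative to a reference distance.

    Sound pressure falls in proportion to distance, so level drops by
    20 * log10(d / d_ref) dB -- roughly 6 dB per doubling of distance.
    """
    return 20 * math.log10(distance_ft / reference_ft)

for d in (1, 2, 4, 8, 16):
    print(f"{d:>2} ft: down {attenuation_db(d):4.1f} dB")
```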

Timing is another primary audio cue. The Law of the First Wavefront states that when two coherent sound waves are separated in time by short intervals (less than 28 ms), the first signal to arrive at the ears will provide the dominant directional cues. Direct sound arrives at a listener's ears earlier than reflected sound (assuming the sound source is in an enclosed space); thus you can clearly detect location, even in very reverberant spaces. For a mono sound played through two speakers at equal volume (with the signal panned dead center), a delay of even 1 ms on either side can shift the image significantly to the right or left. Signals arriving more than 35 ms later than the original are interpreted as a distinct, separate echo from the source.
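
You can hear the timing cue in isolation with a homemade test signal. The sketch below, assuming NumPy and a 44.1 kHz sample rate (my choices, not values from the article), delays one channel of a centered tone by 1 ms; on playback over matched speakers, the image pulls toward the earlier channel:

```python
import numpy as np

SR = 44_100                          # assumed sample rate (Hz)
offset = int(SR * 1.0 / 1000)        # 1 ms of delay ~= 44 samples

t = np.arange(SR) / SR               # one second of time stamps
mono = 0.5 * np.sin(2 * np.pi * 440 * t)    # centered 440 Hz test tone

left = mono
right = np.concatenate([np.zeros(offset), mono[:-offset]])  # right lags by 1 ms
stereo = np.column_stack([left, right])

# Both channels carry equal intensity, yet the image shifts toward the
# left (earlier) speaker -- the Law of the First Wavefront at work.
```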

Although knowing the mathematics is not essential to creating an effective soundstage, it is important to understand how these two principles, intensity and timing, underlie the phenomenon of stereo imaging. By altering the relative intensity and timing of signals between two identical speakers in an enclosed space, you can create a believable sense of space in a mix. The mind does the rest of the work for you by localizing the source.

During the making of the 1940 movie Fantasia, audio engineers at Walt Disney Studios were asked to create the illusion of sound moving back and forth across the screen. Based in part on the earlier work of Dr. Harvey Fletcher (of Fletcher-Munson-curve fame) and his team at Bell Labs, the engineers determined that a sound source that is faded between two speakers seems to move between them, provided the total sound-pressure level (SPL) in the room remains constant. A special two-gang potentiometer (essentially a two-output, variable-voltage divider) was developed for which the sum of the log attenuations equaled a constant. The Disney engineers dubbed it the Panoramic Potentiometer, or pan pot for short.

As the pan pot is turned, a mono signal is sent to two channels simultaneously; the total intensity remains constant, but the difference in intensity between the two speakers provides the cues for localization. That is called intensity stereo. Thanks in part to the pioneering work of Les Paul, using pan pots and intensity stereo to create a lateral soundstage has been standard practice in popular-music mixing since the mid-1950s.
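
The article doesn't spell out the exact taper the Disney engineers settled on, but its standard digital descendant is the sine/cosine, or constant-power, pan law, in which the squared channel gains always sum to 1. A minimal sketch:

```python
import math

def constant_power_pan(position):
    """Constant-power pan law: position -1.0 = hard left, 0.0 = center,
    +1.0 = hard right. Returns (left_gain, right_gain) with
    L**2 + R**2 == 1, so total power stays constant across the sweep."""
    angle = (position + 1.0) * math.pi / 4      # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"pos {pos:+.1f}: L = {left:.3f}, R = {right:.3f}")
# At center, both gains are 0.707 (-3 dB), the classic pan-law midpoint.
```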

TOOL TALK

Of all the tools necessary for creating a good mix, none is more important than two you already possess: your ears and brain. Those instruments collect the sound from the room, process and interpret it, and let you build a mix. Needless to say, keeping them in good shape is important. Listening to extremely loud music can dull and eventually damage your hearing. Recreational chemical use, too, can be deleterious. Not to be preachy, but some effects that make booze and other drugs fun — time distortion and heightened sensory perception, for example — can lead to bad or even disastrous results in the studio. Your mix may sound great to you now, but in a few days, once your head is clear, it very well may not.

If the ears and brain are the most important tools for mixing, the next is surely the monitors and, by extension, the room. I include the room because the premise of intensity stereo is intrinsically linked to keeping a constant volume within an enclosed space. The listening environment is thus a functioning part of the speaker system — a resonator, if you will. Together, the monitors and room provide the information to the ears and brain.

SPEAKER OF THE HOUSE

Many people (myself included) do much of their mixing in spaces that are far from ideal acoustically. But by applying acoustical treatments to the listening environment, you can produce better results immediately. A number of good articles about room treatment are available, in print and on the Internet, so I won't delve deeply into the subject. However, you can do some simple things now to guarantee optimal performance from your monitoring setup.

First, if possible, make sure your monitor speakers are not positioned in corners or directly against a wall, as that can cause low-frequency buildup and standing waves, making low frequencies seem louder than they really are. A simple Rule of Thirds can be employed to determine where to position the monitors in relation to the nearest wall: place the monitors at about one-third the depth of the room. For example, if your room is ten feet deep, place the monitors a bit more than three feet from the wall.

Even more critical is speaker placement in relation to the listener. In a proper mixing setup, the listener sits at one vertex of an equilateral triangle formed by him or her and the two monitor speakers; that is, the speakers are the same distance from each other as they are from the listener (see Fig. 1). That setup allows the sound from each transducer to arrive at the listener's ears at approximately, if not precisely, the same time. An offset of just a couple of inches on either side or a delay of just a few milliseconds can shift the resulting image significantly to the left or right, seriously degrading the stereo image.
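
To get a feel for the magnitudes involved, a couple of lines of arithmetic (using the roughly 1.13-feet-per-millisecond speed of sound cited later in this article) convert a listening-position offset into an interchannel timing error:

```python
SPEED_FT_PER_MS = 1.13            # approximate speed of sound (feet per ms)

def offset_error_ms(offset_inches):
    """Interchannel arrival-time error caused by sitting off-center."""
    return (offset_inches / 12) / SPEED_FT_PER_MS

for inches in (2, 6, 12):
    print(f"{inches:>2} in. off-center ~ {offset_error_ms(inches):.2f} ms of skew")
```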

It's also critical that speaker levels be matched. A discrepancy of less than 1 dB between the two speakers can shift the stereo image several feet in terms of the perceived location of the source; a difference of 20 dB moves the image completely to one side. Simply setting the levels of your amp or matching the trim on your active monitors until they look the same won't do — the potentiometers used in those devices can have variances of 5 percent or more. Furthermore, the cumulative effects of component-tolerance variances in the amplifiers can cause 1 to 5 dB of difference in loudness between the channels, even with the level knobs set identically.

An inexpensive solution is to buy an SPL meter from an electronics store. Such a meter, though not laboratory accurate, can measure the relative SPLs coming from your speakers well enough. Use a constant tone to begin with — 1 kHz is suitable for a quick measurement. First, pan the signal hard left and take a measurement. Next, pan the signal hard right and take another measurement. Then, alternate between several frequencies or use pink noise to measure the relative SPLs, adjusting amplifier or trim levels until they match.
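
If you work in the box, you can generate those test signals yourself. The sketch below assumes NumPy plus the third-party sounddevice package for playback (an assumption; any playback path will do). It builds the 1 kHz tone and a rough pink-noise approximation, then routes each to one speaker at a time while you read the meter:

```python
import numpy as np
import sounddevice as sd     # assumed playback library (pip install sounddevice)

SR = 44_100
t = np.arange(int(SR * 5.0)) / SR                 # 5 seconds per measurement

tone = 0.3 * np.sin(2 * np.pi * 1000 * t)         # 1 kHz reference tone

# Quick pink-noise approximation: shape white noise by 1/sqrt(f).
white = np.random.default_rng(0).standard_normal(len(t))
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(len(t), 1 / SR)
spectrum[1:] /= np.sqrt(freqs[1:])                # skip the DC bin
pink = np.fft.irfft(spectrum, n=len(t))
pink *= 0.3 / np.max(np.abs(pink))

def play_one_side(signal, side):
    """Send the signal to a single speaker; read the SPL meter while it plays."""
    stereo = np.zeros((len(signal), 2))
    stereo[:, 0 if side == "left" else 1] = signal
    sd.play(stereo, SR)
    sd.wait()

for source in (tone, pink):
    for side in ("left", "right"):
        play_one_side(source, side)   # adjust trims until the readings match
```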

DAILY MONITOR

The quality of your studio monitors is another important part of the equation. Buy the best system you can afford. Some people will readily spend a couple grand on a great microphone or a mic preamp and then grumble about paying that much for a pair of speakers. But this is no place to compromise; you use your monitors constantly to hear (and evaluate) every bit of audio in your studio, so ultimately, they are the most important tools you possess.

People interpret what they hear somewhat differently, so there is no one best choice for everyone. If there were, only one company would be making studio monitors. What's important is how they translate — that is, how your mixes sound once they leave your studio and get played on hundreds of other, quite different systems. Speakers that “sound good” or are “flattering” do not necessarily make the best choice. If everything played through a pair of monitors sounds great, some pretty awful mixes will leave your studio, because you'll have stopped working too soon.

That's why some engineers intentionally choose lower-quality, quite nonlinear speakers: to be able to judge how the mix will sound on similarly inexpensive consumer-playback systems. When used by someone with trained ears, such monitors (a certain white-coned model comes to mind) let mix engineers familiar with them make good decisions; I do not, however, suggest using inexpensive monitors as your main tools. Neither do I suggest using headphones as primary mix tools, because headphones are actually biphonic rather than stereophonic. Unless you intend to have your music heard only through headphones, use them sparingly, as a reference.

Select monitors that are as accurate, uncolored, and revealing as possible. A good test is to listen to material you're familiar with. If you don't hear parts that you know are there, or if the image drifts or seems out of balance, those speakers aren't a good choice for you. If, on the other hand, the monitors allow you to hear subtleties you hadn't noticed before or cause you to begin to pick apart mixes you have long admired, buy them immediately; they're telling you what you need to know to make informed decisions about your mix.

If possible, try to audition monitors in your mixing space, because what sounds great in the store may cause resonance problems or give you listening fatigue at home. Mix a few songs with them and then listen to how well the mixes translate on different playback systems.

SIMPLE PLAN

Before you turn that first pan pot, take a quick inventory of all the elements to be placed in the mix. That initial assessment will help you form a mental picture of the soundstage, so you can better determine where each sound might best go and how much area it should occupy. If mixing an acoustic duet, for example, you may decide that each instrument should occupy a fair amount of the soundstage. On the other hand, in a dense mix with, say, bass, vocals, background vocals, percussion, two keyboards, three guitars, and ten tracks of drums, there obviously isn't room for each instrument to take up much of the soundstage. In that case, you will be working just to create sufficient space for each instrument.

In general, each instrument should occupy a distinct area of the soundstage. The process of creating an area for an instrument is often referred to as carving out a space. That may mean not only finding the optimal panoramic placement for the instrument but also equalizing the sound so it doesn't mask or interfere with other instruments in the same or a similar frequency range.

When done well, that approach lets each instrument be heard distinctly, results in more dynamic range, and requires less volume from each element. Ideally, when the mix is complete, the soundstage will be clear and coherent. You should be able to close your eyes, clearly “see” the room in which the musicians are playing, and point to the position or area of each instrument.

However, don't hold to that (or any) plan rigidly; rather, use it as a starting point and let your ears be the judge from there. Imagine how you want the music to sound and begin to visualize each instrument in its place on the imaginary stage. As you start to place each instrument, some things may need to be moved or swapped around with others. That's fine as long as you leave sufficient sonic space for each element.

The practice of carving out space for instruments also helps prevent the common pitfall of fader creep. Fader creep results from, say, bringing up the bass to be able to hear it above the drums, then bumping up the guitar so it doesn't get lost behind the bass, then raising the vocals so as to hear them distinctly, and so on. Eventually, the console runs out of headroom, and the mix becomes unintelligible, incoherent mush. At that point, it's usually best to begin the mix anew.

It is good practice, especially if you're new at mixing, to closely examine and dissect mixes that you admire or that sound the way you want yours to sound. That will give you a foundation from which to work. But even if you don't try to cop a particular mix style, comparing your work closely with that of others can help you see where you may need work, enlighten you to new mixing techniques, and give you greater appreciation for masters of the craft. Take note of those mix engineers whose work you admire and listen to more of it; many veterans can pick out specific techniques, or even a particular engineer, based solely on the style of the mix, regardless of who the artist is.

THE EARS HAVE IT

A fairly common mistake, especially among beginners, is to mix with the eyes — that is, according to what looks right rather than what sounds right. Some engineers, for example, work through what seems a logical progression: placing the vocals and bass in the center, panning stereo signals (such as keyboards and drum machines) hard left and right, and putting mono instruments into standard 9:00 and 3:00 or 10:00 and 2:00 positions. At the other extreme, I have seen “perfectionists” take excruciating pains to measure and duplicate exact pan-pot positions from left to right, assuming that doing so will produce a more exact stereo image.

Neither method, however, is likely to produce natural or even pleasing results, because each is based on logic associated with visual rather than aural cues. The problem is that the pan pots used in many budget consoles show variances in the 6 to 10 percent range. Therefore, pan-pot positions on the mixer won't necessarily correspond precisely to instrument positioning in the stereo field. The best approach is to ignore the position of the pan pot and simply listen to the results of turning it.

Likewise, don't have too much faith in the meters on your mixer's stereo bus. Not only are they, too, probably less than perfectly calibrated, but on a typical budget console (with, say, 12-step LED ladders and a dynamic range somewhere between 84 and 90 dB), each segment represents at least 7 dB of gain — a resolution far too low for exactitude. I bring that up because I've seen engineers attempt to balance their stereo mixes by offsetting the master-level faders (on boards that offer separate left and right master faders) in order to have the LEDs line up perfectly.

Don't rely on mechanical pots and meters to tell you what your ears, which are much more sensitive instruments, can better discern. In short, mix with your ears, not your eyes.

HARDLY A HARD-PAN

Because most sound modules these days present you with stereo outputs, it would seem logical to simply hard-pan each left/right output to its corresponding left/right position. Indeed, that may sound great when you solo the stereo source. But what sounds great soloed may not blend well with the other instruments. Typically, when you pan, say, a stereo piano part (from a sound module) hard left and right, the keyboard will appear more or less centered on the soundstage. That may work fine in a mix of a duet — piano and flute, for example; but if you have five stereo keyboard parts and a stereo drum machine, all of them hard-panned left and right, the images will all appear across the center of the mix, largely on top of one another. Obviously, you can't get separation and a broad stereo image if everything is in the same place.

That doesn't mean you have to make all stereo parts mono. Rather, you can retain a sense of space by slightly offsetting the pan pots, thus creating a smaller space for the instrument to sit in. For a stereo piano, for example, you could position the pan pots at 9:30 and 10:30. That would still give it some space — a sense of dimension on the soundstage — but the piano would appear smaller and to the left of center stage.
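
In code or in a DAW, that move amounts to giving each channel of the stereo pair its own pan position and summing the results. Here is a hedged sketch; the function names, the numeric positions, and the mapping of clock positions onto a -1 to +1 range are all my own approximations:

```python
import math
import numpy as np

def pan_gains(position):
    """Constant-power gains; position runs from -1 (hard left) to +1 (hard right)."""
    angle = (position + 1.0) * math.pi / 4
    return math.cos(angle), math.sin(angle)

def place_stereo(left_in, right_in, pos_left, pos_right):
    """Re-pan the two channels of a stereo source to new positions.

    Narrow, offset positions (e.g. -0.6 and -0.35, roughly the 9:30 and
    10:30 of the text) shrink the image and move it left of center.
    """
    ll, lr = pan_gains(pos_left)     # where the source's left channel lands
    rl, rr = pan_gains(pos_right)    # where the source's right channel lands
    out_left = ll * left_in + rl * right_in
    out_right = lr * left_in + rr * right_in
    return np.column_stack([out_left, out_right])

# Stand-in for a stereo piano: two slightly detuned sines, one per channel.
SR = 44_100
t = np.arange(SR) / SR
narrowed = place_stereo(np.sin(2 * np.pi * 220 * t),
                        np.sin(2 * np.pi * 222 * t), -0.6, -0.35)
```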

The same thing applies to guitar preamplifiers with stereo outputs; although the soloed guitar may sound enormous with the preamp outputs hard-panned left and right, the sound will likely lose definition and luster in a busy mix. Again, carve out a smaller, more appropriate space.

Note that drum machines typically have preset pan positions for each drum, cymbal, or percussion hit within a particular kit. You should therefore learn how to change the panning of individual instruments from inside the drum machine; otherwise, you'll be stuck always having to work around the preset pan positions.

String sections and pads are perhaps the best candidates for wide stereo spreads; after all, a full string section would naturally take up a large area on a true soundstage. Even there, though, I would avoid hard-panning left and right.

In fact, I rarely pan anything but effects — or those elements that I want to stand out without getting lost in the center — to the hard left and right positions. After all, it would be rare in a concert performance for any instruments to be positioned directly to the left or right of the audience. So rather than filling all of the available space with a wall of sound, it's usually best — at least, in the interest of realism — to reserve the extreme pan positions for what is sometimes called headphone candy: reverb tails and other effects that provide some breathing room and a sense of space around the musicians. For example, in a band mix for which I pictured the band as playing on a 20- to 30-foot-wide stage, I would pan all of the instruments somewhere between 10:00 and 2:00, or at least no farther out than 9:00 and 3:00 (see Fig. 2); that would leave the areas between 7:00 and 9:00 on the left and 3:00 and 5:00 on the right for reverb and other effects to mimic the natural dispersion and reflection of sound from the band.

CONFLICT RESOLUTION

Earlier I mentioned the related roles of panning and equalization in the task of carving out a space for each instrument. There are further considerations when mixing instruments with similar frequencies and timbres — two electric rhythm guitars, for example. In short, try to keep them apart spatially. That is not to say they should be as far apart as possible or even directly opposite each other (which often works, by the way); just make sure they are clearly discernible as separate instruments. In this case, the pan positions should not overlap, and generally, I recommend at least an “hour” or two of panoramic separation between them.

Fig. 3a shows some typical frequency distributions for a standard rock piece and how the frequency ranges overlap. Fig. 3b shows those instruments panned in such a way as to allow space for each as well as to account for natural sound-dispersion patterns as heard from the listener's perspective. Note that the lowest-frequency instruments are typically panned toward the center. Low frequencies are less directional and thus more difficult to localize; they also demand more power from the transducers. Therefore, distributing low frequencies more evenly between the two monitors lets the speakers work more efficiently.
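
One common way to act on that advice in a digital mix (an extension of the article's point, not something it prescribes) is to split the program at a low crossover and sum only the low band to mono. A sketch assuming SciPy, with an illustrative 120 Hz crossover:

```python
import numpy as np
from scipy.signal import butter, sosfilt   # assumed available (pip install scipy)

SR = 44_100

def mono_low_end(stereo, crossover_hz=120):
    """Keep the top of the mix stereo but center everything below the crossover.

    The 120 Hz crossover and 4th-order Butterworth filters are illustrative
    choices, not values from the article.
    """
    low_sos = butter(4, crossover_hz, btype="lowpass", fs=SR, output="sos")
    high_sos = butter(4, crossover_hz, btype="highpass", fs=SR, output="sos")
    low = sosfilt(low_sos, stereo, axis=0)
    high = sosfilt(high_sos, stereo, axis=0)
    low_mono = low.mean(axis=1, keepdims=True)   # (n, 1): the centered low band
    return high + low_mono                       # broadcast mono lows to both sides
```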

Note that frequency and timbre are not the only two realms in which elements may conflict. Elements may also conflict rhythmically or in terms of importance or centrality to the mix. In such cases, look for interesting ways to distinguish the elements so they no longer conflict but rather complement or offset each other.

3-D, BUT NO CRAZY GLASSES

Because pan pots control only left-right placement between two speakers, it's easy to fall into the habit of just laying everything out in a linear fashion — drums here, guitar there, and keys over there — as though the music were happening on a straight line. But when you listen to live music, you also hear the dimensionality or depth — the clear sense, for example, that the drummer and the percussionist are located behind the singer and the guitarist.

As mentioned previously, sound localization is based on apparent loudness and timing. However, though louder elements tend to sound closer than quieter ones, what really creates a sense of depth within a mix is a judicious use of timing cues, usually through delays. In general, delays of less than 25 ms help create a sense of space; anything over 35 ms is perceived as a separate image or echo.

Under normal conditions and at sea level, sound travels at about 1.13 feet per millisecond. Therefore, 5 ms of delay will seem to move an image a bit more than five feet back into the soundstage (assuming equal volumes of the source and delay). Another way of saying the same thing, but in real terms, is that the sound from a snare drum positioned five feet behind a guitar amp will take about 5 ms longer to reach your ears than the sound from the guitar amp (assuming you are listening from the audience's perspective).
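
Those conversions are easy to script. A minimal sketch built on the article's 1.13-feet-per-millisecond figure (the helper names are mine):

```python
SPEED_FT_PER_MS = 1.13   # approximate speed of sound, from the text

def delay_for_setback(feet):
    """Delay (ms) that mimics a source this many feet farther back."""
    return feet / SPEED_FT_PER_MS

def setback_for_delay(ms):
    """Apparent setback (feet) produced by a given delay."""
    return ms * SPEED_FT_PER_MS

print(f"5 ms of delay ~ {setback_for_delay(5):.1f} ft of depth")    # ~5.7 ft
print(f"a 5 ft setback ~ {delay_for_setback(5):.1f} ms of delay")   # ~4.4 ms
```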

Understanding that principle can help you considerably in constructing a natural soundstage. For example, if you want the drums in a mix to sound as though they are close to the back of the soundstage, near the wall (as in a standard band setup), you could put a shorter predelay on them than you would on the instruments located in front of the drums, because the distance from the drums to the first reflective surface (the rear wall) is shorter. Depending on the depth of the stage, the drums' direct sound would also take about 6 to 8 ms or longer to arrive than that of anything closer to the front of the stage. Likewise, if you want to position a piano stage right near an imaginary side wall and you use a stereo reverb on the piano, you might want to set the predelay several milliseconds shorter on the right channel than on the left, because in reality the right side of the piano would be closer to a reflective surface.

When using effects such as room reverbs to help create a sense of space, be careful not to combine clashing or contrary-sounding rooms — for example, a small tile room for the drums and a concert hall for the vocals. That doesn't mean you have to use the same type of effect on everything; it just means that the effects should go well together to create a coherent sound. Again, though a particular effect may sound really cool on an instrument in Solo mode, it can clash or contradict when mixed in with everything else, destroying the illusion of a natural soundstage.

COMPATIBILITY ISSUES

A debate is still going on these days about mono compatibility, the main question being, “Does it matter anymore?” My answer is yes: it most certainly does matter.

Mono compatibility refers to how a mix holds up when played through a mono system (that is, when the two channels are summed to one). Phase problems, though perhaps not apparent in stereo playback, can result in dropouts, comb filtering, and other weirdnesses when the mix is played in mono. Such phase problems may exist not only between the two channels of a stereo-recorded source but also between mono sources recorded simultaneously. Stereo effects, too, can be a culprit — what sounded spacious in stereo may evaporate or turn to mud when reproduced in mono.

Many consoles provide a Mono button that, when engaged, sums all channels of the mix to mono. That function provides a quick and easy way to check for mono compatibility. Use it. Mono playback remains ubiquitous in people's lives. Many television stations and cable networks still broadcast in mono, as do most AM and some FM stations. In addition, countless televisions, clock radios, computers, car radios, and other sound sources have only one speaker. Moreover, the stereo sound systems in many vehicles automatically sum to mono at lower volume levels so that half the music isn't lost to the driver.
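
If your console lacks a Mono button, a few lines of code can approximate it. This sketch assumes NumPy and the soundfile package, and mix.wav is a hypothetical file name; the correlation figure is only a rough warning indicator, never a substitute for listening:

```python
import numpy as np
import soundfile as sf        # assumed available (pip install soundfile)

data, sr = sf.read("mix.wav")           # hypothetical stereo mix
left, right = data[:, 0], data[:, 1]
mono = 0.5 * (left + right)             # what a single-speaker device hears

def rms_db(x):
    """RMS level in dB; the tiny epsilon avoids log(0) on silence."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

corr = np.corrcoef(left, right)[0, 1]   # near -1 warns of phase cancellation

print(f"L {rms_db(left):.1f} dB, R {rms_db(right):.1f} dB, mono {rms_db(mono):.1f} dB")
print(f"L/R correlation: {corr:+.2f}")
```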

TAILS OUT

You won't always want to create a natural-sounding stereo soundstage, so if the project calls for something decidedly unnatural sounding, by all means, go for it. But in many cases, a true-to-life soundstage is best for the music, and it's practically always a good starting point, no matter how much you end up deviating from natural by the time the mix is complete.

Regardless of the soundstaging strategies that you employ, avoid falling into a habitual approach — that is, automatically panning particular instruments to the same spot every time you mix. Not only will you bore yourself (and eventually your listeners) but, undoubtedly, you'll also fall short of turning out your best work. Every song and performance is necessarily unique, so it goes without saying that a formulaic approach to soundstaging will result in mixes that don't sound as good as they could.

Finally, don't rely on spatial enhancers or similar processors to improve your soundstage. If the soundstage doesn't sound right before such processing, it certainly isn't going to sound right afterward. Although I wouldn't claim there is never a time and place for spatializers and the like, you are cheating yourself if you rely on them to cover up mistakes. You're cheating your listeners, too.


Randy Neiman is an independent audio and marketing consultant living and playing golf in sunny Los Angeles. Share studio stories and golf tips with him at audioguru@mail.com.

FIGURING IT OUT

I was called on to remix a medium-energy pop single called “Figure It Out.” Instrumentation included an acoustic guitar, an electric bass, a drum machine, live percussion, a twangy Stratocaster, and female lead and backing vocals in the vein of Natalie Merchant and Sarah McLachlan. Upon arriving at the studio, I found a somewhat typical panning setup on the console: drum-machine tracks hard-panned left and right, the two channels of stereo acoustic guitar (miked with a large-diaphragm condenser near the sound hole and a small-diaphragm model near the neck) also hard-panned, vocals and bass sitting dead center, and the Strat and backing vocals panned to 11:00 and 2:00, respectively. The stereo reverb returns for the vocals were hard-panned left and right, as well.

The drum machine had its own effects, which were also hard-panned left and right. I panned them to the 8:00 and 4:00 positions to tighten the space, and then I placed a tabla part at about 2:30. Two percussion instruments of similar sonic spectra, shaker and tambourine, needed to be separated to avoid coherence (masking of one element by the other), so I put the shaker at around 3:45 and the tambourine at 8:30. The wide spacing between the two left plenty of room for the tonal instruments.

I found a nice spot for the stereo acoustic guitar by positioning the track recorded with the large-diaphragm mic just left of 11:00 and the other track slightly right of 12:00. That put the center image of the guitar a bit left of dead center and allowed for some interesting fret noise to the right. I panned the twangy Strat track far over, just shy of hard right, which let me bring its level down in the mix. I panned the Strat's reverb to the left side, which brought out a nice contrast to the main acoustic-guitar part.

People commonly pan kick drum and bass guitar right on top of each other, usually dead center, but a hint of separation between the two can provide clarity and a more natural sound — some breathing room, as it were — between two low-end instruments that often compete for a piece of the same sonic territory. I put the bass just right of center, at about 12:15, and used the drum machine's Pan menu to position the kick at 11:50. Snare drum fell in just right of the bass guitar and hi-hat slightly left of the kick, at 11:30. The result was a nice, tight image of the band playing together on a 12-by-15-foot stage in a room 30 feet wide and 50 feet deep.

Lead vocal is another element traditionally panned dead center, and there is an advantage to doing it that way: equal distribution between the speakers results in greater apparent volume, making the vocal sound more up front. But I often break with tradition and shift the voice slightly off center, usually to the right. Why? Assuming the song is released commercially, it will most often be heard in people's cars: that's the only controlled environment you can truly count on for playback. There the primary (or only) listener is usually the driver, which means the listening position is left of center, sometimes by quite a lot. A centered vocal will thus arrive at the listener's ears from the left speakers first, skewing both its arrival time and its intensity toward the left. By mixing the lead vocal slightly right of center, I essentially compensate for the driver's offset sweet spot; the result is an apparently centered vocal for the driver. (No worries — most people will never catch the change, even when listening in their homes.) This trick also thwarts center-channel elimination devices, the boxes that attempt to derive a karaoke mix by removing any midrange elements that are panned dead center (which typically means vocals).

On “Figure It Out,” the female singer had a nice but not especially strong voice, so getting her vocal to stand out was a challenge, short of unnatural processing, excessive compressing, or pushing the volume too high. Fortunately, I had several takes to choose from, and I found one that, though similar to the primary take (and almost identical timingwise), had some interesting changes in inflection. I panned the primary take just right of center, to 12:45. I then put a hint of the alternate take even farther to the right, at about 4:30, and sent the reverb return almost exclusively to the left. That added some strength to the voice without causing the overt chorusing that can happen when you simply combine two takes. The left-panned reverb, set fairly hot and coming in from opposite the main vocal, helped make a thicker and wider vocal sound, and the slight changes in inflection gave the track more depth.

With the lead vox now happening, I began placing the four background vocals in spaces left open by the other instruments. This was no hairsplitting, knob-assessing affair; rather, I added the voices one at a time, without regard to pan-pot positions, simply listening until I could hear each in its own little space and then adjusting the levels so the four parts sat just right behind the lead vocal without detracting from it. To increase the sense of a real space, I delayed the reverbs slightly differently on each background vocal, based on the track's location in the stereo field. With all the instruments positioned on the soundstage, I increased the overall sense of space by panning those backing-vocal reverbs to hard left and right positions, opposite their source tracks.

Keep in mind that none of the pan-pot positions I described (so precisely) was mapped out or clocked in advance. Rather, the mix was done by ear, without visual regard to panning. What mattered was naturalness and a sense of space, not commitment to an exact rotation of knobs. Only after the mix was complete did I bother to note pan positions, in case they were needed later (which they were — for this sidebar).

SPINAL TAP DANCER

More challenging than “Figure It Out” was “Tap Dancer,” from David Bryce's new CD, UltraMaroon (www.mp3.com/DavidBryce). This power ballad, centered around a piano part, has an acoustic guitar; two distinct, alternating bass lines (both synths); string pads; lead and backing vocals; four electric guitars; and a Hammond B-3 organ. All that is augmented in the B sections by a “horn section” — a blend of real horns and arranged samples. Percussion was done using an Alesis DM Pro module, which supplied tabla, tambourine, and shaker samples. The project's recordings were done in Mark of the Unicorn's Digital Performer 2.7, and many parts were already submixed as stereo pairs.


My first step was to place the primary instruments (stereo acoustic piano, acoustic guitar, basses, and drums) into a space within which everything else would work. The song breaks down to those few instruments — piano and acoustic guitar floating over a soft string pad — at several points, so they needed to stand out to be heard distinctly at all times, without lots of volume changes.

I started the mix by establishing the intensity of the lead vocals, which are the loudest and most important element and therefore the one that largely determines the dynamic range for the entire mix. For many people, it seems most logical to begin a mix with the drums. That method is called additive mixing: building a foundation from the rhythm section (drums, bass, guitar) and then adding auxiliary and lead instruments in over the foundation, culminating with the vocal. Additive mixing, however, often hits a snag: as you begin to add in the other parts, the total volume keeps rising, and you eventually max out the console's dynamic range trying to get the vocals hot enough. To avoid that situation, I usually do the reverse, subtractive mixing, which means beginning with the loudest part and placing everything else in beneath it.

Having established the song's overall dynamic range, I positioned the stereo-recorded piano tracks, putting the left channel at 1:00 and the right at about 2:30. That gave the piano some space — a six-foot grand, after all, won't sound natural coming from an area seemingly two feet wide — without taking up the entire soundstage. Next, I panned the acoustic guitar to about 10:45; that placed it about six to eight feet left of the soundstage's center.

I panned all the drums and their effects (except for the snare) from within the DM Pro module. The shaker ended up just inside 3:00 and the tambourine at 10:30. I panned the drum overheads to just outside the 9:00 and 3:00 positions. Generally, I try to keep the drums themselves tucked around the center, as they would be on a real stage; however, I often make the reflections from the drums sound more live and intense as compared with those from other instruments. I brought kick and snare in on their own channels for better control; almost no EQ was applied to either.

As I added more instruments to the mix, it became increasingly difficult to find logical and distinct places to put them. For example, I'd ordinarily keep the Hammond organ part away from other keyboard instruments, but to get the guitars and horns to sit right, it ended up just to the right of the piano, around 3:15. This example illustrates the importance of using your ears and basing your pan and other mix decisions on the piece itself. No matter what the style of music, if you listen closely, it will tell you what needs to be done.

In “Tap Dancer,” the electric guitars take precedence over the horns, both in terms of frequency range and number of bars played; therefore, I brought in the electric guitars next. Three parts were submixed to a stereo pair from within Digital Performer, so I did the panning in Digital Performer and left the console channels hard-panned left and right. The power chords fit nicely at around 9:00. I slipped in the delayed “seagull” guitar parts at 3:00, between the piano and Hammond. The delay was set at about 3 ms, just enough to place the sound back a smidgen. The solo guitar occupied its own channel and was quite easily placed just inside 11:00, which put it about four to six feet to the left on the soundstage. I delayed it about 8 ms and set its reverb predelay about 25 ms longer than that on the other reverbs.

The horn section comprised four live horns submixed to a pair of channels and a five-part arrangement of horn samples submixed to another pair of channels. Building the section took some time, and I don't have sufficient space to describe the process in detail; suffice it to say that the net result was an interleaving of the nine parts onto the four channels. I positioned the horn section to the left of the solo guitar, between 8:30 and 9:30, so it took up about five feet of the soundstage.

As in “Figure It Out,” I panned the lead vocals just to the right of center, at 12:45. I then inserted several channels of individual backing vocals and vocal sections into the few remaining openings and panned each corresponding reverb return a bit farther out than opposite its source. String pads were the hardest-panned parts, with one channel at 8:15 and the other at 3:45; that translates to nearly the entire span of the 30-foot-wide soundstage I imagined for the song.
