MIXING in the Round


A wealth of choices can be a blessing and a curse.

Wouldn't you know it? Just when recording engineers have mixing in stereo down cold, 5.1-channel surround sound comes along. Of course, movie soundtracks have been using surround sound for years, but the basic formula is pretty straightforward: dialog in the center, music in the front left and right, surround effects in the rear, low-frequency effects (explosions, earthquakes, and so forth) in the subwoofer.

Now audio-only music recordings are being mixed in 5.1 surround, and the old rules have been thrown out the window. Where do you place the listener with respect to the performer — in the “audience” or in the midst of the ensemble? With five main channels surrounding the listening position, the number of mixing decisions has increased substantially compared with stereo, and engineers are only starting to grasp the magnitude of the task.

I like to think of 5.1 surround mixing as similar to using different camera angles for shooting a movie. Sometimes up close and personal is the right approach, but other situations call for a wide panoramic shot. In any event, you need to understand surround-mixing technology before you jump in head first; if you are unfamiliar with this technology, see “You're Surrounded” in the October 2000 issue of EM.


Consider a typical stereo sound system with two speakers placed in front of a listener centered between them. The space between the speakers is called the stereo sound field, and individual sounds in a mix can be placed at any location within this space. Two basic principles of psychoacoustics let engineers do this: relative level and interaural time delay.

Relative level is the way in which a sound's volume at each ear helps determine its source's location. In a stereo mix, each input channel's pan pot determines the relative level of the corresponding signal in the right and left speakers, and the main fader controls the signal's overall level (see Fig. 1). If the pan pot is centered, the signal's level is equal in both speakers, and the listener's brain is fooled into believing that the sound is coming from the point halfway between them. It's as though there were another speaker at that location; in fact, this virtual speaker is often called a phantom center. If you move the pan pot to the left, the signal's level is greater in the left speaker, and the apparent sound source moves to the left of center. Move the pan pot to the right, and the apparent sound source shifts to the right because that sound's level is greater in the right speaker.
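
A pan pot's behavior can be sketched numerically. One common approach is an equal-power (sine/cosine) pan law, which keeps total acoustic power roughly constant as the sound moves across the field; treat this as an illustrative law, not the exact curve of any particular console.

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Equal-power pan law (illustrative).

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain) as linear amplitude factors.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# At center, each speaker gets about 0.707 (-3 dB), which is what
# creates the phantom center between the two speakers.
left, right = pan_gains(0.0)
```

Hard-left (`pan = -1.0`) yields gains of (1.0, 0.0), and the squared gains always sum to 1, so the sound neither swells nor dips as it sweeps across the field.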

The other psychoacoustic principle, interaural time delay, helps you localize a sound source according to the difference between the instants at which a sound arrives at each ear. For example, if a sound source is to your left, the sound arrives at the left ear before it arrives at the right.
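
The size of that delay is easy to estimate. A rough sketch, assuming a simple straight-path model (real heads add a diffraction term, so actual delays run somewhat longer near the sides):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
EAR_SPACING = 0.178     # about seven inches, in meters

def interaural_delay(azimuth_deg: float) -> float:
    """Approximate interaural time delay in seconds for a distant source.

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    Straight-path approximation: path difference = d * sin(theta).
    """
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
```

A source straight ahead gives zero delay; a source directly to one side gives a delay on the order of half a millisecond. Small as that is, the brain uses it constantly for localization.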

This principle is best simulated in a stereo recording by using a pair of microphones to pick up an entire ensemble, rather than combining multiple tracks through a mixer (see Fig. 2). If the microphones are placed too far apart, you lose the interaural effect. An Office de Radiodiffusion-Télévision Française (ORTF) configuration works quite well: simply place two cardioid mics at an angle of about 110 degrees with the capsules roughly seven inches apart (the average ear-to-ear distance on a human head).

With this technique, what you hear is what you get. You can adjust each instrument's volume by moving the musicians nearer to or farther from the mics, and you can change each instrument's stereo placement by moving the musicians to the left or right in front of the mics.

Unfortunately, listening to such a recording's playback on speakers can result in speaker crosstalk, which occurs when the left speaker's sound reaches the right ear and vice versa. This can obscure the interaural effect in the recording, but separating the mics by 10 to 12 inches reduces the problem. Listening on headphones eliminates the problem altogether.

This procedure has been employed on some of the finest orchestral and acoustic recordings. It's difficult to do well, however, because you must think about the mix from the very beginning of the recording, and many engineers and musicians don't want to give up the luxury of fixing it in the mix with punch-ins and pitch correctors. That's a pity; this technique can create a beautiful stereo sound field that simply can't be duplicated with separate tracks and pan pots. All you need to record in this manner is a nice pair of microphones, a great room, and a stereo recorder.


The same two principles can be applied to 5.1-channel surround recordings, which are played back with a surround-speaker system that includes front left, center, and right speakers; left and right surround speakers; and one or more subwoofers, all arrayed around the listening position. You can start with a multitrack master and send all the tracks through a surround mixer. But instead of a simple right-left pan pot, each input channel includes a surround-panning control, which functions like a joystick (see Fig. 3). Such a mixer might be a hardware device, or it might be implemented in software that runs with a multitrack digital-audio program.

If you want an instrument or voice to sound as though it's coming from a particular speaker, simply grab the panning control and pan the sound to that speaker. What's more, you can adjust each track's apparent location anywhere between the speakers by moving the panning control to any available position. For instance, you can spread the drum kit across the three front speakers and place the guitar anywhere you like in the front or back. It's just like stereo mixing, but now you can decide whether you want to place the listener in front of the band, in the middle of the stage, or in some other strange place.
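
Under the hood, a joystick panner is essentially two pan laws chained together: one for left-right and one for front-rear. A toy sketch (center-speaker routing is a separate decision, handled by the panner's width or focus control, so only the four corner speakers appear here; no real console necessarily uses exactly this law):

```python
import math

def joystick_pan(x: float, y: float) -> dict[str, float]:
    """Toy surround joystick: two chained equal-power crossfades.

    x: -1.0 = hard left, +1.0 = hard right.
    y: -1.0 = rear, +1.0 = front.
    Returns linear gains for the four corner speakers.
    """
    lr = (x + 1.0) * math.pi / 4.0
    fb = (y + 1.0) * math.pi / 4.0
    left, right = math.cos(lr), math.sin(lr)
    rear, front = math.cos(fb), math.sin(fb)
    return {
        "front_left": left * front,
        "front_right": right * front,
        "rear_left": left * rear,
        "rear_right": right * rear,
    }
```

Pushing the joystick to the front-left corner sends everything to the front-left speaker, while the dead-center position spreads the track equally across all four corners with constant total power.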

You can also record ensembles with a surround-microphone array, which is just an extension of a stereo array. However, you have to choose where to place the instruments in the surround sound field, which determines the listener's perspective. For example, you can position the ensemble in front of the array, using the rear microphones to pick up the room's ambience (see Fig. 4a), which puts the listener at the conductor's position. Alternatively, you can place the array in the center of the ensemble, thereby putting the listener within the group (see Fig. 4b). In either case, the sound field in the surround speakers is particularly effective because these speakers are frequently positioned to the sides of and just behind the listener, sort of like oversize headphones.


All 5.1-channel surround systems include a physical center-channel speaker that offers yet another choice: do you put the center-channel information in the physical center speaker, the phantom center (that is, equal volumes in the front right and left speakers), or both? Many joystick panners have a width or focus control that determines the proportion of a track that is routed to the physical center and phantom center (see Fig. 5). For instance, you can set a joystick panner so that a center pan puts the track exclusively into the physical center speaker, equally in the left and right speakers, or a combination of both in any degree.
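
That proportion control can be sketched as one more equal-power blend. Here `focus` is this sketch's name for what consoles variously call width, focus, or divergence; the exact law differs from panner to panner.

```python
import math

def center_routing(signal_gain: float, focus: float) -> dict[str, float]:
    """Split a center-panned track between physical and phantom center.

    focus: 1.0 = all physical center speaker,
           0.0 = all phantom center (equal level in front L and R).
    An equal-power blend keeps perceived loudness roughly constant.
    """
    a = focus * math.pi / 2.0
    center = signal_gain * math.sin(a)
    # Phantom portion is fed to both fronts at -3 dB each.
    phantom = signal_gain * math.cos(a) * 0.7071
    return {"center": center, "left": phantom, "right": phantom}
```

At `focus = 1.0` the track lives entirely in the center speaker; at `focus = 0.0` it becomes a pure phantom center; anything in between gives you both in proportion.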

Here is a center-channel goof that even some famous engineers have made: if you place the dry lead vocal in only the center speaker and pan the return reverb from that track to the left and right speakers, you have a potentially embarrassing situation. If a consumer turns on only the center speaker and thus solos the lead vocalist without reverb or delay support, it can sound pretty bad. This can even happen if the listener stands next to the center speaker.

Few singers sound great perfectly dry (without reverb or delay), and there's the potential to hear all sorts of sniffs, grunts, and other lip noises, which are not flattering. That's why anything panned to the center speaker should have a small amount of reverb and delay. It doesn't need to be as heavy as the returns to the left and right speakers, but it should be there nonetheless.

Furthermore, it's generally a bad idea to put your lead vocalist exclusively in the center speaker. Some home-theater playback systems rely on the television's speaker to serve as the center-channel speaker, but I wouldn't want my vocals to be piped through that little thing. Also, some people forget to turn the TV on when listening to music, and others do not have a center speaker. As a result, I like to pan some of the vocals to the left and right phantom center, with the majority of sound going to the physical center speaker with its own reverb and processing.

Why bother to use the center speaker at all? Some great engineers, such as Al Schmitt and Alan Parsons, simply don't use the center speaker on some of their projects, effectively making a quad mix. I don't agree with that philosophy, and I think they're missing a mixing opportunity. If you use the center speaker properly, you can widen the front sound field for more of the listening audience you find in a typical living room.

For instance, with a pair of front speakers, there is a very narrow sweet spot in which the stereo sound field is correct. Move a few feet to the left or right, and the image collapses to that side. A center speaker adds focus to the stereo image in the front, effectively widening the sweet spot so that everyone can hear the vocals (or whatever musical element you put there) coming from the center. The stereo sweet spot with a phantom center can never be as wide.

In addition, I have done various surround mixes during which I treated the left-to-center pair as one stereo mix and the center-to-right pair as yet another. That configuration works great with two percussionists, such as a conga player on one side of the stage and a regular drum kit on the other side. Pan the congas between the left front and the true center speaker, and the drum kit between the true center and the right front speaker. Remember that I'm talking about a virtual mixing stage; it has nothing to do with the original positions in the studio. If you have properly isolated instruments on separate tracks, you can build your own surround stage with the joystick panning controls.


The low-frequency effects, or LFE, channel (the “.1” in 5.1) is the most controversial part of surround mixing, and it certainly offers the greatest potential for screwing up the mix. Its bandwidth is specified as 5 to 120 Hz. But do you need to put anything in it at all? If you choose to put some bass sound in it, do you exclude that information from the main or surround speakers? These are all good questions, some of which have not been thought out by some big-money mixing engineers.

First of all, you don't really want to use the entire top end of the range all the way to 120 Hz. A brickwall filter at 120 Hz with a 48 dB/octave slope is applied to the LFE track when it's encoded as DTS or Dolby Digital. That filter sounds pretty bad, so it's better to insert your own 24 dB/octave filter at 80 Hz or even 60 Hz.
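
To see what that slope buys you, the asymptotic attenuation of a lowpass filter is just slope times the number of octaves above the cutoff. A quick sketch of the math (real filters round off gently near the corner rather than following this straight line):

```python
import math

def rolloff_db(freq: float, cutoff: float,
               slope_db_per_oct: float = 24.0) -> float:
    """Asymptotic attenuation (in dB) of a lowpass above its cutoff.

    A 24 dB/octave filter at 80 Hz is about 24 dB down at 160 Hz
    (one octave up) and about 48 dB down at 320 Hz (two octaves up).
    """
    if freq <= cutoff:
        return 0.0
    return slope_db_per_oct * math.log2(freq / cutoff)
```

So a self-inserted 24 dB/octave filter at 80 Hz has the LFE content well out of the way before the encoder's harsh 120 Hz brickwall ever engages.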

In addition, don't remove the bass from the other tracks and place it in the LFE channel exclusively. If the listener chooses to listen to your surround music in stereo (a process called downmixing that is performed in the receiver or surround processor), the LFE track is thrown out. If you take all bass below 80 Hz from the kick drum or bass guitar and place it only in the LFE track, that information will disappear if your listeners choose to downmix.
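
You can see why the bass vanishes by looking at a typical downmix equation. A sketch using the common -3 dB (0.707) coefficients for the center and surround channels (actual receivers vary in their coefficients, but discarding the LFE input is the norm):

```python
def downmix_to_stereo(fl: float, fr: float, c: float, lfe: float,
                      sl: float, sr: float) -> tuple[float, float]:
    """Fold one 5.1 sample frame down to a stereo pair.

    fl/fr: front left/right, c: center, lfe: LFE,
    sl/sr: surround left/right. Note that lfe never appears
    in the output -- whatever lives only in the LFE track is gone.
    """
    k = 0.7071  # -3 dB
    left = fl + k * c + k * sl
    right = fr + k * c + k * sr
    return left, right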

For music mixing, you never really have to put anything in the LFE channel. All home-theater systems employ a process called bass management, in which any low-frequency information is redirected to the subwoofer by filters in the receiver or surround decoder. It is better to save the LFE track for truly bottom-heavy things, such as the cannon shots in the 1812 Overture or an octave-down synth bass that goes down to 18 Hz. That's how the LFE channel is supposed to be used.

Nevertheless, LFE bass is like a drug, and most mixing engineers can't stop tweaking up the LFE channel to make the mix really thump. The LFE channel has 10 dB more headroom than the other five full-range channels, but that doesn't mean home listeners have 10 dB more power reserve for their subwoofers. Most home subwoofers are seriously underpowered and will run out of bass headroom long before the five full-range channels top out.


As mentioned previously, in stereo you can only pan each track left to right in front of the listener's ears. But in 5.1 surround, you can pan left to right and front to rear, thus creating a whole new array of audience positions. This enables you to make the band sound as though it is in front of the listener (with the audience around it and slap echo coming off the back wall, as in a concert), or you can position the listener in the middle of the group, or any combination thereof. You also can make the room spin around the listener's head, put sound effects in the rear speakers for realistic acoustic segues, and place your backup singers in the rear of the room. The possibilities are endless.

Here are a couple of examples of different treatments for the same basic tracks. In the mix depicted in Fig. 6a, I panned the lead vocal to the center speaker, the guitar to the left front speaker, the keyboards to the right front speaker, and the stereo crowd mics to the rear speakers. I also put the backup vocals in the left and right front speakers. This setup gives a real room sound to the mix because the crowd mics pick up the slapback echo off the rear wall and add room ambience. Reverb and vocal echo are returned to the front speakers, as in a traditional stereo mix.

In Fig. 6b, I panned the backup vocals to the left and right rear speakers and then added some delay and reverb to the center lead vocalist that is returned to the rear speakers by its own surround joystick. This makes the listeners feel as though they're positioned in the middle of the band. A second stereo reverb processor can be used on the same vocalist and returned to the left and right front speakers.

That's the primary reason you need multiple reverb and delay processors for surround mixing. You really don't need to pay $5,000 to $25,000 for the latest surround reverb processor from Sony, Eventide, or TC Electronic (as fabulous as their reverbs certainly are). Indeed, you can create great surround mixes with just two or three stereo reverbs and delays. One reverb is returned to the front right and left speakers, a second reverb is returned to the rear left and right speakers, and a third reverb can be returned to the center speaker.

Surround sound is becoming more and more important for music as well as movie soundtracks, and it behooves engineers at all skill levels to begin exploring this vast new frontier. Hopefully, the information I have presented here will help you to develop your surround-mixing skills to the point that groups will start beating a path to your studio. But until that time, practice the techniques I mentioned and try out your own ideas, which will undoubtedly lead you in some interesting directions.

Mike Sokol is a live-sound and recording engineer with 30 years of experience on both sides of the console. He conducts free surround-mixing seminars at recording schools; see www.modernrecording.com for tour dates.