From setting up a proper monitoring environment to learning what to do with the LFE channel, pros share advice for mixing in 5.1
Even if the “surround revolution” that was supposedly going to save the music industry by hooking us all on high-res 5.1 audio releases never quite came to pass, there is still much surround work being done throughout the entertainment world, from film, TV, and DVD mixing, to video games and, yes, music-only audio releases. Each of those fields has different and specific surround mixing requirements and conventions, yet the very nature of multichannel audio for any format inspires creative and resourceful individuals to continually push our sensory boundaries. And this much is clear: Engineers and musicians who embrace surround will undoubtedly increase their career options—if not now, in the future. Multichannel sound isn’t going away.
We spoke with a handful of mixers from music, film, and games to learn more about the unique world of surround mixing for different media. You’ll also find expanded coverage of the topic online at emusician.com.
Frank Filipetti A Grammy-winning engineer many times over for his work with popular artists and on musical theater soundtracks, Frank Filipetti was an early proponent of surround sound. His standard-setting 5.1 mixes include albums for Billy Joel, the Polyphonic Spree, Meat Loaf, and James Taylor, among others.
“When I did my first surround project, which was James Taylor’s Hourglass , it was very difficult to get information about how to set things up, so you did trial and error while you were working at it,” says the veteran New York-area engineer and mixer. “But now you can go to NARAS [National Academy of Recording Arts and Sciences] or several other websites and read a ton about it. A bunch of us got together in the late ’90s or early 2000s and wrote a paper for NARAS which is a really good reference for somebody who is starting out and wants to learn what engineers, producers, and mixers who are doing surround suggest. It’s called ‘Recommendations for Surround Sound Production.’ [grammy.org/recording-academy/producers-and-engineers/guidelines]. We spent about two years writing it, so there’s a lot of data and it’s pretty detailed, but you can also just browse it and learn a lot.”
Filipetti notes that some of the recommendations he and his colleagues came up with for the NARAS paper are at odds with both the conventional wisdom and even some widely circulated specs and guidelines. He also despairs about the lack of agreement on surround formats. “The audio community has really blown it on two or three occasions through the use of multiple formats and the difficulty in setups,” he says. “There’s nobody that wants to agree on anything. I know, for example, that most of us who mix in this format have been saying for years that ITU [International Telecommunication Union] specifications that a lot of the manufacturers publish in their data are not at all in line with the way we actually mix. The ITU specifies the rear speakers in a surround setup at 110 degrees, which is not really the rear, but more like the rear side, and at 110 degrees there’s no phantom center at all. Although you’ll also get disagreement on that from people who say you can’t hear a phantom center behind, which is bogus; it’s baloney—of course you can.
“The ITU specifically calls the rear speakers ‘ambient speakers,’ or effects speakers. They don’t consider them equal partners with the left, center, and right. So if you’re using the ITU specs, they don’t even recommend that they be the same size and the same model. I did a Frank Zappa project where I put some of the band in the left, some in the right, some in the center, and some in the right rear and some in the left rear. I had the band set up like you were sitting in the middle of the band. But if you’ve got a bass coming out of your left rear speaker and it’s only got a three-inch driver, well, sorry—you’re not going to hear it as well. So you always want to have the same speakers throughout the system.”
These days, Filipetti mixes exclusively in the box, using an Avid ICON D-Command controlling Pro Tools 10—at his very well-equipped studio in Nyack, NY. “Pro Tools itself gives you quite a bit of freedom to do things in surround, but I also purchased a plug-in called Spanner, from a New Zealand company called Maggot Software, which increases a dozen-fold your capabilities in surround,” he says. “It allows you to do all kinds of things you couldn’t with the standard Pro Tools panner, which is a basic algorithm—you can put things in the left, right, and center, and the surrounds, and you have some divergence. But the Spanner plug-in gives you a multitude of other options. You have ways of sending differing amounts of LFE to different [speakers]; it does all kinds of pans you couldn’t do otherwise, and you also have ways of doing fold-ups and fold-downs as you’re doing other things. It’s something I think a lot of the film guys use, and anyone using Pro Tools who is deeply into surround would probably want it.”
Brian Schmidt When Brian Schmidt was honored with a Lifetime Achievement Award from the Game Audio Network Guild (GANG) a few years ago, it was because he has a résumé second to none in the industry: He’s done sound design and/or music for well over a hundred games of every kind, including his latest audio-only headphone surround title, Ear Monsters. He developed the sound and music systems for the Xbox (the first with Dolby Digital 5.1) and the Xbox 360; he founded the GameSoundCon conference; he holds patents for an assortment of audio-related processes; and he is well-known as an educator and speaker on all things game audio.
Schmidt notes that working on surround audio for games is quite different from creating surround mixes for film or music videos. “The parameters are more liberal than in film, where the screen dominates, so you can use the rears more creatively in games. In a movie, the screen is utterly dominant and everything else tends to be sprinkles or icing, or whatever food metaphor you want to use. In a game, if you think about it, the screen is just a small visual window into the universe that’s being rendered for the player. Whereas your visual screen is 30 or 40 degrees of viewing, your sonic screen is 360 degrees. I’ve always said surround sound is more important in games than it is in traditional linear media because you can literally expand the play-field with sound, and drive sound as game-play elements you have to pay attention to.
“We use the term ‘mix the game,’ but that’s not really accurate. In a movie or TV show, you’re literally mixing it—you’re setting the volume, you’re setting the EQ, you’re setting the compression for each of the sounds. In a game, you don’t know what those levels are going to be until the instant the game is played, so they’re not done in advance. If there’s a helicopter in front of me, and I walk toward it as a player, that helicopter has to get louder. If I walk farther away from it, it’s got to get softer, and if I turn my head, it’s got to go to the left. So when we say we ‘mix’ a game, what we really mean is we’re giving the instructions to the game [engine] for how to mix itself for when different game events happen. So instead of specifying how loud is the helicopter, which you would do in a movie—use a fader—instead you’re going to say [to the game engine]: ‘Here’s how I want the volume of the helicopter to change with distance. Here’s how I want a lowpass filter on that helicopter to change with distance.’ Or, ‘Here’s how the panning of the sound should occur if the helicopter is at a certain azimuth and elevation relative to me. How much Doppler effect should there be if it’s moving?’
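The rule-driven approach Schmidt describes, handing the engine curves instead of fader moves, can be sketched in a few lines of Python. The function names and curve shapes below are illustrative assumptions for this article, not any particular game engine’s API:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, in air

def distance_gain(distance, min_dist=1.0, rolloff=1.0):
    """Inverse-distance attenuation: unity gain inside min_dist,
    falling off smoothly beyond it (a common game-engine model)."""
    return min_dist / (min_dist + rolloff * max(distance - min_dist, 0.0))

def stereo_pan(azimuth_deg):
    """Constant-power pan from a listener-relative azimuth:
    0 = dead ahead, -90 = hard left, +90 = hard right.
    Returns (left_gain, right_gain)."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map [-90, 90] to [0, 90]
    return math.cos(theta), math.sin(theta)

def doppler_factor(radial_velocity):
    """Pitch ratio for a source moving at radial_velocity m/s
    relative to the listener (positive = receding, pitch drops)."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)
```

At runtime the engine evaluates these curves every frame from the helicopter’s current distance, azimuth, and velocity, which is exactly the “instructions for how to mix itself” idea in the quote above.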
“Music’s a little bit different,” Schmidt adds. “You will do a surround mix because, as you play the game, you don’t need to change the relative position of the string basses and the trumpets or whatever. And how you handle that mix gets into the aesthetics of what people like. A lot of people who are delivering their game in 7.1 still like to mix their music pretty much in stereo—maybe have a nice convolution reverb that sends the reverb to the surrounds. Some people think if you mix 5.1 music or 7.1 music where you’re in the middle of the orchestra, that’s going to be overwhelming in the game, because music is a non-diegetic sound [a sound with no source inside the game world]. So some people are wary about making that too immersive and too surround, because it can be distracting to the game-play. But there’s no hard and fast rule about that.
“The other thing is a lot of games have cut scenes, which are like mini-movies [narrative sections with no game-play], and you handle sound for those like you would a film. It’s predestined ahead of time and you would want a nice 5.1 or 7.1 Pro Tools mix you deliver as a multichannel .WAV file that exactly syncs with that cut scene. So you still have traditional pieces to do as well.”
Richard Warp Working primarily in the “new music” and modern-classical realms, Richard Warp got into surround mixing in a fairly traditional way: “It came out of my composing work more than my producing, really,” says the transplanted Brit, who now lives in San Francisco. “I was doing a lot of chamber music at one point and putting on concerts in San Francisco, and one of them caught the attention of a new-music aficionado, Glenn Cornett, who had a particular penchant for putting on house concerts, and he invited me to come along and start recording these. Originally it was in stereo, but then I suggested the idea of recording the ambience of the live performance spaces; I thought it was an interesting thing we could do. A lot of these were pretty stripped-down—maybe just piano and saxophone, fairly minimal—so what you ended up with was a really interesting interplay in the space. So I started recording the performances in surround. I set up a Decca array arrangement using various different mics to capture certain aspects, and creating a [surround] document of the performance. And, at the time it was in DTS [surround].”
That led to making surround audio recordings, such as his acclaimed recent DTS-CD and Blu-ray release of Luigi Nono’s avant-classical composition La Lontananza Nostalgica Utopica Futura for the Urlicht AudioVisual label. Nono designed it as a live performance piece for solo violin and eight channels of various prerecorded sounds and music that are “performed” simultaneously by a tape operator raising and lowering faders as indicated in the score, with the audience seated inside a ring of eight speakers. For this recording, at A Bloody Good Record studio in Long Island City, New York, violinist Miranda Cuckson was accompanied by Christopher Burns, who fired off digital file versions of Nono’s original eight taped tracks from a laptop, with everything captured in Pro Tools. Warp did his initial mix at his home studio in San Francisco, working in Logic (“mostly for cost reasons, to be honest”) and monitoring on his Blue Sky loudspeakers, and collaborating with Burns remotely: “I would send him versions of the DTS mix that he would listen to [in his 5.1 room at the University of Wisconsin] and we’d discuss over email and Skype.”
Warp says that though his home mixing environment “is as treated as I can make it, with [acoustic materials] on the walls and ceiling to catch the reflections, I’m very cognizant that it’s not an ideal mixing space, so I actually went to do the final mastering/listening session up at the studios at CUNY [City University of New York] with Paul Special, who is a longtime network TV engineer, to basically sanity-check what I’d done and make sure it was going to work in a more clinical environment. I was very happy that it didn’t need much tweaking at all.”
That piece had its own unique challenges as a surround recording, such as trying to mimic the movement of the violinist (whom the score directs to play at multiple music stands within the performance circle) and whittling the eight tape channels down to a 5-channel environment. Also, Warp notes, “I had the LFE [sub channel] off because there is so little low-frequency information in the piece, it didn’t seem appropriate to use it.”
Other surround recordings Warp has worked on have called for different degrees of mixing. For instance, on his latest Urlicht release, Melting the Darkness, featuring violinist Cuckson performing pieces by several different composers, “the surround aspect of it is really just ambience. I used the Waves Renaissance [reverb plug-in] to create a sense of the room, but in a really subtle way. When I was A-B’ing between the surround and the stereo, what I was noticing is you don’t really notice the surround is there until you turn it off. It adds an impression of something being there, without banging you over the head with it. It creates a sense of warmth, an additional sense of realism, but it’s not something you can really put your finger on.”
Dennis Leonard In the first half of Dennis Leonard’s career, he was a music tech and mixer known as “Wiz,” short for Wizard. But since the late ’80s, he’s mainly (but not exclusively) been working in film sound, usually as a supervising sound editor and/or re-recording mixer based at Skywalker Sound in Marin County, CA. His CV spans more than 60 films, including Cast Away, two Harry Potter movies (Chamber of Secrets and Goblet of Fire), Flight, and many popular animated features, from The Polar Express (which earned him an Oscar nomination) to Madagascar (2 and 3) to both Despicable Me blockbusters.
Leonard knows that when it comes to surround mixing, he’s been spoiled by working on the best high-end equipment available: Skywalker’s mixing theatres are equipped with enormous AMS-Neve DFC consoles. But he also does a lot of work in his personal surround room at Skywalker. “You can really do a lot with Pro Tools and a small control surface, like an [Avid] D-Control, and there are a number of surround panners that you can use with that. The critical thing is getting some sort of monitor controller so you can EQ the monitors and make sure all the speakers sound correct, and adhere to the SMPTE EQ and level standard if you’re doing film work. It’s unbelievable what you can do now. I’ve mixed films in my room and I don’t even have a D-Control. I just mouse-click and have a couple of fader packs that have knobs on them.” His monitor setup? “It’s all Meyers: three UP Juniors, a USW for the sub, and I’m doing a little bit of bass management because the UP Juniors only have an 8-inch woofer, and then I have four UP4s for surrounds; when I’m in 7.1, two are the sides and two are the rears.” Leonard says that a vital aspect of creating a surround mixing environment “is making the room fairly acoustically neutral. Ideally, if it’s a rectangular room, you’ll want to get some bass traps in the corner—there are tons of manufacturers making those, or you can also make them yourself fairly easily.”
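The bass management Leonard mentions, redirecting low frequencies from small satellite speakers to the subwoofer, works roughly like this sketch. Real monitor controllers use much steeper crossovers (typically fourth-order Linkwitz-Riley around 80Hz); the one-pole filter and channel naming here are simplifying assumptions for illustration:

```python
import math

class OnePoleLP:
    """One-pole lowpass used to split lows off to the sub (a real
    bass-management crossover would use a much steeper filter)."""
    def __init__(self, cutoff_hz, sample_rate=48000.0):
        x = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        self.a, self.b, self.z = 1.0 - x, x, 0.0

    def process(self, sample):
        self.z = self.a * sample + self.b * self.z
        return self.z

def bass_manage(frames, cutoff_hz=80.0, sample_rate=48000.0):
    """frames: list of dicts mapping channel name -> sample value.
    Returns frames with lows stripped from the satellites and summed,
    along with the LFE channel, into an added 'SUB' channel."""
    filters = {}
    out = []
    for frame in frames:
        sub = frame.get("LFE", 0.0)   # LFE always feeds the sub directly
        managed = {}
        for ch, s in frame.items():
            if ch == "LFE":
                continue
            lp = filters.setdefault(ch, OnePoleLP(cutoff_hz, sample_rate))
            low = lp.process(s)
            managed[ch] = s - low     # satellite keeps the highs
            sub += low                # sub collects everyone's lows
        managed["SUB"] = sub
        out.append(managed)
    return out
```

This is why Leonard can get away with 8-inch woofers in the UP Juniors: the content below the crossover point ends up in the USW sub regardless of which channel it came from.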
When it comes to mixing films, “You have to be very careful about what you put in the surrounds and when you put it there. The last thing you want is someone to recognize something and snap their head and look at a wall in a movie theater, and transient sound stimulates that response; we’re built to localize on things because it’s a survival instinct. So I want the surround information to be more ambiguous.
“For me, films are like enhanced mono; I want to keep most of the sound on the screen and keep people sucked into the screen. That doesn’t mean you don’t move things around on the screen. If you’ve got a three-shot [three characters] across the screen, you’re thinking about putting one left, one center, one right. Of course, following characters on- and off-screen has to be considered panning as well. And on this film I did called War Story, which was at Sundance this year, I had a two-shot and they were talking over each other at times, and it was disturbing to hear both voices only coming out of the center speaker, so I panned each of them ever so slightly left and right and that made it work. This is not always the case; you’ve got to experiment and you’ve got to stay open to whatever works in a given situation. In my world, the film world, it’s all about the story.”
When it comes to surround music mixing, he says, “My take in mixing music for 5.1 is to put the listener at an ideal seat in the audience. Some people like to put the listener onstage, and have instruments behind you, but I’m fairly conservative. My approach is pretty much to put reverb returns in the rears, or if it’s a live concert, to put the room mics in the surrounds, so it becomes immersive. I’ve heard snare drums in the surround and I want to barf! To me that’s the antichrist of mixing.” He’ll often put the kick drum, bass and vocal elements in the center channel, and make sure that if he pans, he moves them among the left, center, and right channels, and notes that an advantage of having three channels across the front is, “One has less time domain corruption panning between left and center, and then center to right, rather than just left to right. You get a much more articulate image using all three of those speakers.”
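Leonard’s point about panning through the center, L to C and then C to R rather than stretching a phantom image across the whole front, is what a pairwise constant-power panner does. A minimal sketch (the position convention is this example’s assumption, not any console’s):

```python
import math

def lcr_pan(pos):
    """Pairwise constant-power pan across Left/Center/Right.
    pos in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Only two adjacent speakers are ever active at once, which keeps
    the image tight compared with a plain left-to-right pan.
    Returns (left_gain, center_gain, right_gain)."""
    pos = max(-1.0, min(1.0, pos))
    if pos <= 0.0:
        t = (pos + 1.0) * math.pi / 2.0   # -1..0 maps to 0..90 degrees
        return math.cos(t), math.sin(t), 0.0
    t = pos * math.pi / 2.0               # 0..1 maps to 0..90 degrees
    return 0.0, math.cos(t), math.sin(t)
```

Because the sine/cosine pair keeps the summed power constant, a source keeps a steady perceived level as it travels from left through center to right.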
Chuck Ainlay Nashville-based engineer/mixer/producer Chuck Ainlay has worked on a zillion records by the likes of Mark Knopfler, George Strait, Steve Earle, the Dixie Chicks, and scores more, and he’s done quite a lot of surround work, both audio-only and with picture. That list includes artists such as Vince Gill, Peter Frampton, Eric Clapton, and Dire Straits, whose Brothers in Arms surround remix earned Ainlay a Grammy in 2006. Ainlay was also part of the blue-ribbon panel that put together surround mixing recommendations for NARAS (see Frank Filipetti’s description).
Ainlay freely admits that he has perhaps a different view of the center channel from most mixers in 5.1. “It’s a hard thing to wrap your head around, because when you translate between what we’re used to hearing the center image sound like [in a stereo mix] and what comes out of the center speaker, there’s like a 2k rise from the center speaker—a midrange bump—that you get coming out of a discrete speaker that’s not there as a phantom image. So all of a sudden, vocals can seem almost harsh. What I tend to do is more phantom imaging for things I want soft in the middle; and for things I really want to poke, I’d use more of the center speaker. A rule of thumb I’ve come across is, if you’ve got something in the center speaker, as well as the left and right speakers, bleed it out a bit and take away that focus of the center speaker. If you keep the left and right speakers down about 6dB or more from the center image, it gets away from a phasing issue that can happen. You don’t want to have the source equal in the center speaker and the left and right speakers.
“I’ve gotten blowback from surround aficionados that I’m not using the center speaker, but I tend to like a phantom image better. Also, many home systems are more set up for home theater than music listening, and the center channel speaker can be entirely different because it’s actually voiced for speech. My thought was that you could get a more musical sounding mix by avoiding the center channel in a lot of cases—using it, but as more of an FX speaker than a center image.”
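Ainlay’s rule of thumb, the same source anchored in the center speaker with left and right bled in at least 6dB down, translates directly into gain values. A small illustrative sketch (the helper names are this example’s, not any DAW’s panner API):

```python
def db_to_lin(db):
    """Convert decibels to a linear amplitude gain."""
    return 10.0 ** (db / 20.0)

def center_with_bleed(center_db=0.0, bleed_db=-6.0):
    """Gains for a source in the center speaker with the same signal
    'bled' into left and right, kept at least 6 dB below the center
    per the rule of thumb quoted above. Returns channel -> gain."""
    c = db_to_lin(center_db)
    lr = db_to_lin(center_db + bleed_db)
    return {"L": lr, "C": c, "R": lr}
```

At -6dB the flanking copies sit at roughly half the center’s amplitude, which softens the discrete center speaker’s focus without letting equal-level copies in three speakers create the comb-filtering problem Ainlay warns about.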
Ainlay says that for his Brothers in Arms surround mix, “I was very sympathetic to the original mixes, and we remastered the stereo mix, but didn’t remix the stereo because it’s so iconic. When I was doing the surround mix, I spent a lot of time A-B’ing my mix—the fold down of my mix—to the stereo mix. I didn’t want to completely duplicate the stereo mix, because I felt there were things with time and technology I could do better, and I wanted to warm up the original record a bit. But when you’re dealing with mixing iconic records, you can’t forget what the original was. If you listen to Brothers in Arms, it is very discrete in the rear and it sounds really interesting, yet it doesn’t destroy the record.”
Asked for advice for people just getting into surround mixing, Ainlay mentions the NARAS white paper and adds, “Having a great monitoring environment is really, really critical to getting it right. So get some good speakers and learn how to set them up right. It’s all there in that document.
“Beyond that, forget what you traditionally think about creating a stereo mix and forget the idea that you have to carve out the sound and compress the sound to squeeze everything into a stereo field. Dynamics are definitely more enjoyable in surround, and sounds can be more full. If you squeeze everything and compress everything in surround it becomes very bland and hard to listen to. Use the space you’ve got!”
Blair Jackson is a San Francisco Bay Area-based writer.