Surround Mixing Master Class: Interview Outtakes

April 1, 2014

The March issue of Electronic Musician features a Master Class on mixing in surround. Here, we share interview outtakes from Frank Filipetti, Brian Schmidt, Dennis Leonard and Richard Warp.

BY BLAIR JACKSON

Frank Filipetti

More on the ITU surround specs and format disparities:
“The music the ITU used to come up with their specs for doing surround recordings was entirely classical and jazz—they didn’t use any pop or anything else. So what we’re saying [in our NARAS paper] is that the ITU’s data is skewed to those particular formats and ignores the general popular music category—and those are the ones that are going to sell. Let’s face it, Beethoven’s Ninth in surround has a very limited market. On the other hand, R. Kelly or Eminem’s latest album in surround would have a much bigger market. That’s one of the reasons we put out this NARAS paper—we wanted to get rid of some of that misinformation. Yet still, when I pick up some gear from Sony or one of these other companies and I open the manual, they show surround setups using the ITU configuration.

“The difference in release formats is also hurting the development of surround. Blu-ray may be here to stay, but then you have the entire Apple company, which won’t support it. So we have a hi-res format that has been accepted by much of the world in video and audio, yet the biggest supplier of computers for the music industry—most of us are on Macintosh—won’t support it. They and Sony and these other companies are so damn proprietary—they couldn’t care less about us.

“In the digital surround market—the downloads—Universal has come out with theirs and Sony has theirs; they’ve all come out with their own versions. I have four letters for you: M-I-D-I. It’s a system that was invented many years ago—it’s so old and archaic it’s practically creaking—yet it’s the only thing that has really brought that whole area of music into the mainstream, because everyone agreed on something.”

More on “conventional wisdom” myths:
“The conventional wisdom says you can’t hear a phantom center in the rear. That’s not true. I can tell immediately if I’ve got a mono rear or a stereo rear. You can’t hear it as well—your resolution is probably two-thirds of what you get in the front—but you can certainly hear it. The same thing with that idea you hear—‘bass frequencies are non-directional.’ That’s also not true. When I put my surround speaker on the side or behind me and then I put it in front of me, the subwoofer sounds different in front of me than behind me. And it’s not just because the room is carrying it differently. It’s because I can hear the positioning of it behind me. Suddenly the attack is coming from the front, and the bottom is coming from somewhere else. So these conventional wisdom things are a lot of poppycock.”

Does knowing in advance if a project is going to be released in both stereo and surround affect the actual recording sessions?
“A little. It affects it in the sense that you’ll probably set up some additional microphones. But you’re not going to record the lead vocal in 5.1. Probably if you’re recording an orchestra or a band, you’ll put up some rear surround microphones. In the stereo world we still record an awful lot of mono stuff; it’s not like you’re going to go out and record everything in 5.1. You can—I recently purchased the Soundfield [surround] microphone, which is an amazing microphone, and that does allow you to record it all together without setting up a whole bunch of microphones. But by and large you’re still going to record your vocals and keyboards in mono or stereo. You might record an ambience track and put it in the back, but you’d do that anyway. If I’m recording Korn’s guitar amps, I’ll record two or three direct amps and two or three ambience microphones. But I would do that in stereo as well.”

Filipetti gone wild!:
“When I [mixed] An Evening with Dave Grusin [2010], Larry Rosen [producer] said, ‘Frank, go ahead and do whatever you want [for the surround Blu-ray]. We’ve got the stereo [CD release], I don’t really care about the 5.1.’ So I really went to town. I put the horns behind me and some of the strings, and I put all kinds of stuff all around the room and I loved it. It was really a joy to work on. Thankfully the Academy [NARAS] felt the same—they nominated it for a surround Grammy. It didn’t win—some hobo named Scheiner won. [Laughs] That one, I really got to play with because nobody said ‘Don’t do this.’ But I don’t get that opportunity very often.”

Brian Schmidt

On the rise of schools offering courses and degrees in game audio:
“Once again I’ll be teaching at DigiPen [Institute of Technology, in Redmond, Washington, near his Seattle base]. I believe it’s the largest four-year accredited video game program. It started out with programmers and artists; it’s a pretty rigorous computer engineering program. So if you wanted to be a programmer for [video game companies such as] Bungie or Valve or EA, you might want to go there. They added an art program, as well. I helped them develop their curriculum, and I’m teaching one course for music students and another for engineering students in the audio program.

“More and more schools and universities are starting to incorporate game audio into their curriculum—Berklee [College of Music in Boston] has a couple of courses; Carnegie Mellon [in Pittsburgh]; Full Sail [near Orlando]; Vancouver Film School has a game audio program. And there are others, of course.”

On the range of games being made:
“It’s a bit like in film, where you have everything from big blockbusters to little independent films. The difference would be that little independent games are a lot more ubiquitous and known than indie films, where you kind of have to be an indie film buff to know about them. Small-scale games—what we call ‘casual games’—get a lot of attention through [gaming] magazines and at conferences and by word-of-mouth, of course. So there are big blockbuster games, little indie ones and somewhere in the middle is the professionally developed casual game, where you have a ‘real’ game publisher, but they’re not making a game like Halo or Destiny; they’re making a game like Peggle. It’s a much smaller-scale game, much smaller team, much smaller development budget, but it’s still a professionally developed and professionally marketed game.”

On the importance of your personal studio:
“A good acoustic space is a minimum bar these days. It depends a little on the style of game you’re doing. If you’re doing an iPhone game, having a 7.1 listening environment isn’t as big a deal. But if you’re doing any kind of game that’s being delivered in the living room, it’s really important to have a notion of what the surround mix is going to be like. In terms of numbers, most people don’t listen in surround, but the people who do tend to buy more games and they tend to be the influencers—the common folk ask for their opinions about games because they know these guys are really into it. So you want to make sure you have a good way to do a 5.1 or 7.1 mix of the game. You’d hate to have to turn down a potential gig that required you to do a surround mix.

“Some people are adamant that you should have, if not a THX-certified studio, at least an extremely well-calibrated home studio [to mix in]. I’m not quite that disciplined about it. The home theater environment is like the Wild West: you don’t know how loud they’re going to be, you don’t know where the speakers are going to be, you don’t know how well-calibrated the system is and you don’t know if they’re hooked up correctly. If you actually go into most people’s living rooms and look at their surround sound setups, most have them set up wrong. The benefit of having a properly set up and reasonably well-calibrated studio environment [to work in] is that it makes the best of whatever bad situation the person’s living room happens to be in. And the person who really cares about it gets a really good experience.”

Changes in game music:
“Most people who are getting into games these days still started in more traditional linear media and are moving over to games. But now you’re also finding a lot of people who were originally inspired by game music. It’s a relatively recent phenomenon that game music has traditional music production values and aesthetics. When I started doing games, there was a little synthesizer chip built into the game, so what you used to do was write a little MIDI-type score that would drive that synthesizer chip as the game was playing. Then, that progressed to people working in home studios with synths or a couple of instruments. Now, since the original PlayStation 2 and the original Xbox hit—when DVDs started being used in games, about 13 years ago—there was enough room to have real music, and you could go into a recording studio and record an orchestra multitrack.

“If you’re doing sound for a Triple-A [i.e., big-budget, high-profile] title—Call of Duty, the Battlefields—you have fairly large teams; a sound team might be a dozen people, where you have a composer and an orchestrator and somebody who integrates it into the game, and specialized Foley people, specialized dialog people. For smaller games it’s more common for one person to do the whole sound package—both sound design and music.”

The center channel in games:
“One of the most interesting differences between games and linear media is how to best use the center channel. There are a couple of schools of thought there. Some people say: ‘Hey, the center channel is no different than anything else. It’s just one more of the speakers you’ve got.’ The reason there’s a center channel in 5.1 and 7.1 goes back to tying dialog to the screen. But if you don’t need to do that, or you don’t want to do that because you want to emphasize the location of characters around you, you wouldn’t necessarily use it for dialog in the same way. Other people think it’s better to treat the center as something special and use it for disembodied dialog—if you’re doing a Madden [football video game], you might stick the announcer in the center speaker. Sometimes, if you’re in a first-person game, people like to put the sounds associated with the player-character in the center. So you’re hearing your own clothes rustle or your own footsteps in the center speaker. Or, non-diegetic sounds—your menu beeps and bloops. But it’s always an aesthetic decision, because there’s no specific need, like you have in a motion picture, to tie your dialog to the center of the screen.

“Another thing games sometimes have a problem with is LFE. LFE is a dedicated channel; the subwoofer is a speaker. Sometimes people put too much in the LFE. As a matter of course, they might route everything to the LFE when they don’t need to.”
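To make those routing choices concrete, here is a minimal sketch in Python of the kind of per-source bus assignment Schmidt describes. The bus names, source names, and table are invented for illustration; no actual game engine or API is implied:

```python
from enum import Enum, auto

class Bus(Enum):
    CENTER = auto()    # discrete center speaker
    FRONT_LR = auto()  # phantom center / positional panning across L and R

# Hypothetical routing table reflecting the choices described above:
# disembodied dialog, player-character sounds, and non-diegetic UI go
# to the center; positional sources pan across the mains; and only a
# few sources send to the LFE channel, rather than everything.
ROUTING = {
    # source name        (bus,          sends to LFE?)
    "announcer_vo":      (Bus.CENTER,   False),
    "player_footsteps":  (Bus.CENTER,   False),
    "menu_beep":         (Bus.CENTER,   False),
    "npc_dialog":        (Bus.FRONT_LR, False),
    "explosion":         (Bus.FRONT_LR, True),
}
```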

On stereo fold-downs in games:
“A game on a console like an Xbox or a PlayStation is going to have its surround mix created as the game is playing and sent out to the speakers, but if the person’s only listening in stereo, it’s the game itself that’s doing the stereo downmix most of the time. If the game developer wants to get fancy, they sometimes put in a specific menu item that asks: ‘Are you on a surround sound system? Are you on a stereo system? Are you on headphones?’ In which case, the sound designer has to issue three different sets of instructions: ‘Here’s how you should mix yourself if you’re on a surround system,’ and so on. There’s always a default downmix; it’ll do its own 7.1-to-stereo downmix. But some games do think it’s important enough to treat those separately that they provide the sound designer a way to tweak the way it’s going to be mixed depending on what the user has.”

Is info lost in the downmix?
“In the standard downmix, they’re simply taking the surround channels in the rear and folding them into the front, so in a sense the sound is not lost, but the location of the sound has changed. But because of the phenomenon of masking, if you have a subtle sound behind you and a noisy environment in front of you, it’s easier to hear the subtle sound behind you, whereas if that same sound gets mixed to the front, you don’t hear it as well. It’s called ‘stream separation,’ based on localization. If a sound is coming from a different position, it’s a little easier to pull it out than it is if it’s coming from the same place as everything else. It’s like the cocktail party effect. You can listen in on somebody else’s conversation because it’s coming from a different space, and your brain has the ability to disambiguate sounds that come from different places, whereas if the same sound is folded into the fronts, it’s going to be harder to pick up the subtlety.”
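For readers curious what that default fold-down actually does, here is a minimal sketch in Python of a common ITU-style 5.1-to-stereo downmix: center and surrounds summed into the fronts at roughly -3 dB, with the LFE channel discarded, as is typical. Actual consoles and games may use different coefficients:

```python
import numpy as np

ATT = 10 ** (-3 / 20)  # -3 dB, roughly 0.707

def downmix_51_to_stereo(l, r, c, lfe, ls, rs):
    """Fold a 5.1 mix (NumPy arrays of samples) down to stereo.

    The rear channels are not discarded—they are summed into the
    fronts, which is why the sounds' locations change and masking
    can bury the subtle ones. LFE is dropped here, as is common.
    """
    lo = l + ATT * c + ATT * ls
    ro = r + ATT * c + ATT * rs
    # Normalize if the summing pushed the peaks past full scale.
    peak = max(np.max(np.abs(lo)), np.max(np.abs(ro)), 1.0)
    return lo / peak, ro / peak
```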


Dennis Leonard

On his entrée into the world of surround:
“In 1989, I worked with [Oscar-winning sound designer] Gary Rydstrom on a Disney simulator ride called Body Wars [at EPCOT Center; the ride closed in 2007]. One evening, I went into the room when they were wrapping up [work on the sound for the ride] for the day and Gary taught me how to use the Synclavier, and I spent a few hours in there building an ambience for the shuttle bay, as it were, where the simulator was launched from, which ended up in the 15-minute ride sequence. It was the first time I got to use surround and I was thrilled. I was free from the world of stereo all of a sudden. At that time I was not concerned with the center at all. I was concerned with the left and right and the surrounds and that was about it. The idea of immersion, and having sound in four quadrants, was so exciting.”

When working on a film, does the mixer get a 5.1 mix from the music side, or do you prefer working from stems?
“Generally, the music department will give you stems, because one of the things you want [as a re-recording mixer] is the liberty to separate things spectrally. So in the case of a dialog scene, you want to be able to pull the instruments in the dialog spectrum down a little bit.

“When they record an orchestra, there’s always a [three-microphone] Decca Tree in the room and surround mics, so you’ll get a 5.1—maybe a PZM in the middle of the room, which is like your .1 mic; you’ll get that core. Then you’ll also get separations—the horns or the Taiko percussion; whatever featured things there were. In a well-organized score delivery, you will get specific notes regarding the desire to feature one thing or another. During the mix, the music editor will generally indicate what the composer is looking for. One can also see on the Pro Tools monitor when extra material is presented. It’s the composers’ and scoring mixers’ take on what the re-recording mixer is going to want the liberty to play with.”

Know your LFE:
“The other thing we should cover is the LFE track, because that’s something that’s different from regular stereo mixing—a discrete track that only takes low-frequency information. The wonderful thing about that is, the further down you go, the more physical movement the transducers have, and when you get into, say, a standard two-way system—12s and a ribbon or something—the 12-inch speaker is doing a lot of work. It’s handling mids and even fairly high mids. By putting low-frequency information in [that two-way system], you’re getting into some intermodulation distortion. And when you put extreme low-frequency information in there, you’re getting a lot of intermodulation distortion. So by being able to separate everything from 120 Hz down, if you wish to, and put that in the subwoofer [LFE channel], it makes what you’re putting in the mains a lot more clear. It gives you a dedicated system that’s really built to roar.”
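The split Leonard describes is essentially a crossover at 120 Hz. Here is a minimal sketch in Python, assuming NumPy and SciPy, using fourth-order Butterworth filters as a stand-in; real bass management and the LFE spec use their own standardized filters, so treat the slopes and sample rate here as illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000           # sample rate assumed for this sketch
CROSSOVER_HZ = 120   # the split point Leonard mentions

# Fourth-order Butterworth low-pass and high-pass at the crossover.
_lp = butter(4, CROSSOVER_HZ, btype="lowpass", fs=FS, output="sos")
_hp = butter(4, CROSSOVER_HZ, btype="highpass", fs=FS, output="sos")

def split_lfe(channel: np.ndarray):
    """Split one channel at 120 Hz.

    The lows go to the subwoofer feed; everything above stays in the
    main speaker, sparing a two-way main the intermodulation
    distortion Leonard describes.
    """
    return sosfilt(_lp, channel), sosfilt(_hp, channel)
```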


Richard Warp

Another wrinkle in recording the multichannel Luigi Nono piece:
“[When the piece is performed live] six leggii [music stands] are placed in the surround environment and the violinist wanders from one to the other to the other. So the re-imagining in surround sound was exactly that way, where we have the discrete sound of the violinist between the two speakers, and one leggio was outside the speaker ring, because that’s how it was done in performance—beyond the ring. It was an interesting challenge to give the impression that the violinist was further out than the listening environment you’re in, so we had to use some reverb to create that sense.”

More on making Melting the Darkness a surround recording:
“When we decided to go ahead with it, the original idea was that all the pieces were going to be reworked into surround, but in the end a lot of the pieces weren’t appropriate for reworking. Some of the tracks were even solo violin. So we only ended up remixing one of them in surround and the rest we did with the surround ambience idea. It involves the particular use of reverb and localization, which seemed more appropriate for the solo violin pieces. But we also included a spatialized piece from a composer, which threw the sound around the room a little bit more.”

More on surround aesthetics:
“It’s very easy and tempting to throw everything at the wall, so to speak, and go nuts. ‘Oh, I’ll pan this 360 and make everybody’s head spin!’ There are certainly situations where that’s appropriate, to a certain extent—if you’re doing surround mixing for video games or film, there’s going to be an element of wanting that hyper-realism—that bombastic sense of things happening all around you. But it’s usually not appropriate for the kind of music I do. When you don’t have a visual guiding everything, you have to be careful: you’re no longer distracted by the visual, so all of your focus, all your attention, is going on the sound. If you do too much with it, it can be incredibly fatiguing.

“One album I mixed [in 5.1] that let me try some different things was an album by a Chapman Stick player named Michael Bernier called Leviathan [2011]. That’s a prog album, so it was perfect for that kind of crazy, psychedelic [mixing] experience, so I went to town on that one. That was a very different animal. I didn’t record it, but he sent me his stereo album and said, ‘Here you go!’ Some of the sessions he sent me were 40-plus tracks of different pieces of material and all sorts of effects and synthesized sounds. In the end it came out very well, but it was a different experience working that way. We also did the final mix of the Bernier at the CUNY studios after I had created the initial mix at my home studio.”

Monitoring using different sets of speakers:
“I made sure we had a ‘grot-box,’ as we call it in the UK, which is a Mickey Mouse system. It’s much more important for surround mixers to make sure you’re monitoring what you’re doing on both kinds of systems, because a lot of people are going to have these very basic 5.1 systems that are not great—maybe they’re being used mainly for gaming, but maybe they want to listen to some [surround] music at some point. Obviously the idea of space is more of a priority if you’re surround mixing than if it’s just stereo. Listening to your mix in different sizes of environments—large rooms, small rooms—to get a sense of how it’s working is something I think about. So I have a consumer system for that, and I even made sure the placement of the speakers was sub-optimal, because let’s face it, we’ve seen some horror stories of people putting surround systems in their homes—‘You put the speaker where?’ ‘In the bookcase on that wall, and the right rear speaker is down over there.’”