Favorite effects tips from the trenches

Synthesizers and samplers are great for certain types of electronic and artificial effects, but I rarely use them for sound design. I find that there's no substitute for starting with a field recording and manipulating it in a digital audio workstation. There's so much depth and richness to sounds taken from the real world, and, of course, there's an endless supply of material to work with.

How do you use these real-world recordings to create sounds that don't exist in nature? I suggest starting with a recording that is somehow related to what you want to create. Speed it up, slow it down, play it backward, splice, sample, layer, rerecord, and otherwise transmogrify the sound any way you can until you get interesting results you like.

There are loads of great programs available to help you manipulate sound once it has been digitized. (For more information, see "The Complete Desktop Studio" in the June 1999 issue of EM.) Although I'll mention a number of different audio tools, I'll focus on general concepts of sound design independent of platform or software and show you how I've used them in real-life situations.

GOING BATS

Vampire movies without bats are like baseball games without peanuts. While working on the remake of the animated film Vampire Hunter D, I needed to create the sound of a flock of flying bats. I began by imagining the sound I was after, then deconstructed it into two manageable pieces: vocal sounds and wing flaps.

Although bats are mammals, they are basically birdlike. For vocal sounds, I felt it would be productive to begin by recording a large bird, then speed up the sounds and raise their pitch. For this, I chose my favorite fowl: the noble chicken.

I located a chicken coop in a quiet neighborhood and talked the owner into letting me record about an hour's worth of chicken utterances using my field recording rig: a Neumann KMR81 shotgun mic, a Grace Designs Lunatec V2 mic preamp, and a Tascam DA-P1 portable DAT recorder (see Fig. 1). (I often use a Sony stereo mic or a matched pair of Neumann KM184s. In this case, concerns about background noise led me to select the shotgun mic.)

I stuck the mic into the coop at a variety of angles, always trying to get the loudest and best-sounding signal to tape. Back home, I used a Pro Tools AudioSuite plug-in to shift the pitch of these recordings two octaves up. This turned the long, languid chicken clucks into fast bat clicks and chirps. After sifting through the material for the best examples, I had a gold mine of squeaks that could easily pass for rat or bat vocalizations.
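The principle behind this kind of shift is easy to sketch. The Python fragment below is my own illustration of the crude varispeed approach, not the AudioSuite algorithm: playing samples back four times faster raises pitch two octaves and also shortens the sound, which is exactly what turns long clucks into quick clicks.

```python
import math

def sine(freq_hz, dur_s, sr=44100):
    """Generate a test tone: a sine wave as a list of samples."""
    return [math.sin(2 * math.pi * freq_hz * i / sr)
            for i in range(int(dur_s * sr))]

def up_two_octaves(samples):
    # Keeping every 4th sample is equivalent to 4x playback speed:
    # pitch rises two octaves and the sound becomes 4x shorter.
    return samples[::4]
```

Played back at the same sample rate, a 100 Hz tone comes out at 400 Hz and a quarter of the length. Real pitch-shift plug-ins also offer modes that preserve duration.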

ON A WING

Bats have long, leathery wings. To re-create the sound of bat wings in flight, I followed the suggestion of Foley artist Jana Vance and recorded the flapping of leather gloves. It took a bit of practice to get the gloves to sound rhythmically even and winglike. Mic choice and placement were also important; I used a Neumann TLM103 with a pop filter, about 12 inches from the source, inside my homemade isolation booth. Once I got these elements together, the recordings sounded wonderful.

I recorded single glove flaps and pairs of gloves flapping in long rhythmic streams at several different tempos. I experimented with pitch shifting and EQ, dropping the flaps two semitones and pumping up the low end slightly. Then I edited out any stumbles in the performance. At this point I had long, steady recordings of a single bat's wings. But I needed the sound of a large group of bats, so the next step was to animate these steady sounds into the "pass-by" effects of many bats fluttering past your head.

DOPPLER EMULATION

To convincingly re-create the Doppler-shifted sound of an object moving past, it is important to look at the volume, pitch, pan, and EQ elements of a sound all at once. As a sound begins far away, it is quiet and dark, often with some reverberation present. As it moves toward you from one side, the volume increases and the sound brightens up a bit. At the moment the object hits the center of the stereo field, the sound is at its loudest and brightest, and the direct sound completely overpowers any reverberance. As it moves past, heading toward the other side of the stereo spectrum, the pitch of the sound bends down about a semitone, the volume fades away, and the EQ darkens again.
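These timing relationships can be expressed as simple automation curves. The Python sketch below is my own illustration of the concept as described, not what a dedicated Doppler processor actually computes: it returns volume, pan, and pitch-bend values for any moment in a pass-by.

```python
def doppler_envelopes(t, dur):
    """Return (gain, pan, pitch_semitones) at time t in a pass-by of
    length dur seconds. Gain peaks at the midpoint, pan sweeps left
    to right, and pitch bends down about a semitone after the object
    passes center."""
    mid = dur / 2.0
    gain = 1.0 - abs(t - mid) / mid                  # quiet -> loud -> quiet
    pan = (t / dur) * 2.0 - 1.0                      # -1 (left) to +1 (right)
    pitch = 0.0 if t <= mid else -(t - mid) / mid    # bend down one semitone
    return gain, pan, pitch
```

A brightness curve (a low-pass cutoff shaped like the gain curve) would complete the picture; in practice you draw these as automation lanes in the DAW.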

All these timbral changes need to be timed correctly to make the effect convincing. Fortunately, a number of great resources are available for this purpose, including the Doppler program from volume 1 of the GRM Tools TDM plug-in (see Fig. 2) and a number of presets on the Lexicon PCM 80 effects processor. The PCM 80 algorithms were originally meant for automobile pass-bys, but I use them on all sorts of material to create a sense of motion.

To create the illusion of thousands of flying bats, I began by layering section after section of bat-flap steadies and pass-bys on top of each other in Pro Tools. Once I had layered about 30 tracks, I pitch-shifted many of them up or down a small amount to add variation. This gave me the sound of a small flock of bats.

But moving from a small flock to a massive group was a stumper. When I tried to add more layers, it sounded like more of the same, rather than thousands of bats. Nothing worked, until the director came up with the solution: an enormous number of wing-flapping bats push an incredible amount of air, creating wind. So I edited ferocious, blustery wind recordings to fit the motions of the bat flock (matching the amplitude envelope and playing with the brightness and pitch), and the number of bats suddenly seemed to increase.

Although the original bat flaps were good, they contained too much detail to sound like such a large number of the animals. In effect, the wind added depth and suggested the motion of thousands more bats in the background.

GAINING WEIGHT

A common technique in sound design is to increase the weight and impact of a sound by enhancing its low frequencies. Extra thuds attached to punches, bullet hits, and castle-door slams can, when used judiciously, help sell the effect and jar the audience out of their seats. In this era of subwoofer-happy theaters, home surround systems, and PC-game speaker systems, low-end impact "sweeteners" have become de rigueur. Here are some ways to add low-end punch to the sound of an impact.

EQ. Sometimes all it takes is a bit of low-shelving boost starting at about 125 Hz. If the low components are not part of the original sound, however, EQ can't do the job.

Generating synthesized subharmonics. For this, you can use processors that analyze existing material and add low-frequency subharmonics that were not previously there. The Waves MaxxBass and Aphex Big Bottom Pro plug-ins do this well, but my favorite device for this task is an inexpensive analog signal processor, the dbx 120XP Subharmonic Synthesizer (see Fig. 3). With a few quick dial twists, particularly when using it in-line with a compressor, I have turned the filtered pink-noise output of an ancient, cheap synthesizer into the thrumming blast of a rocket jet capable of shaking the walls. I've seen the dbx 120XP in the racks of many a sound designer's studio.
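The heart of an analog subharmonic box can be sketched as a frequency divider: track the input's zero crossings and toggle a square wave at half the rate. This Python fragment is a bare-bones illustration of that principle only, not the dbx 120XP's actual circuit (which also band-limits the input and filters the result before mixing it under the dry signal).

```python
def octave_down(samples):
    """Generate an octave-down square wave from the input. Toggling
    output polarity at every positive-going zero crossing (rate f)
    produces a square wave at f/2 -- one octave below. In a real unit
    this raw square would be low-pass filtered before use."""
    out, state, prev = [], 1.0, 0.0
    for s in samples:
        if prev <= 0.0 < s:        # positive-going zero crossing
            state = -state
        out.append(state)
        prev = s
    return out
```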

Adding a pitch-shifted duplicate. One quick and dirty variation that sometimes works well is to make a copy of the original sound and drop it an octave or two while preserving the duration. Compress this new pitch-shifted copy, if necessary, to beef it up, then line it up as a new separate track under the original. Align the initial transients as closely as you can, while repeating the sound-design mantra, "Smeared transients equal mud!" When I'm using a pitch-shifted duplicate, I often filter out most of the mids and highs that are left in the copy to clarify the harmonics of the original.
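The alignment step matters more than anything else here. As a sketch of the idea (these are hypothetical helpers, not any particular editor's functions), the following lines up a processed copy under the original by its transient peak and mixes it in:

```python
def peak_index(samples):
    """Index of the loudest sample -- a stand-in for the initial
    transient of a percussive hit."""
    return max(range(len(samples)), key=lambda i: abs(samples[i]))

def layer_under(original, copy, copy_gain=0.7):
    """Mix a pitch-shifted copy under the original, offsetting the
    copy so its transient peak lands on the original's.
    Smeared transients equal mud!"""
    offset = peak_index(original) - peak_index(copy)
    out = list(original)
    for i, s in enumerate(copy):
        j = i + offset
        if 0 <= j < len(out):
            out[j] += s * copy_gain
    return out
```

In a DAW you do this by eye and ear, nudging the copy's track until the attacks line up sample-accurately.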

I tend to use the Eventide H3000 Harmonizer or the Lexicon PCM 80 with the PitchFX algorithm card for pitch-shifting, but virtually all sound-editing programs have DSP functions that do the same thing.

Low-frequency sweetener. Often the best approach is to add a different punch or thump to the original sound. This new addition, known as a sweetener, should have very little character of its own. My preference is to have virtually no frequencies above 150 Hz. The sound should be short, punchy, and generated from materials that sound similar to the effects you are trying to augment.

For example, I've created a series of body-fall enhancers by throwing myself into a sturdy carpeted wall. (Dedicated sound designers will do anything for their art.) The carpet muffled much of the high-frequency content, but I still got the general character of a body hitting a surface. I then applied a bit of Waves' Renaissance Compressor and some Renaissance EQ to roll off the highs and mids. I picked the best hits and now have a file in my library called "carpeted bodyfall LFE sweeteners" that will add punch in many projects to come.

PERSPECTIVE

Sounds don't exist in a vacuum, except in science fiction movies. Any recorded sound in the real world got recorded in some type of spatial environment. Much of the time, sound designers try to record sounds as dry as possible, often in a dampened small space such as a vocal booth. This gives us maximum flexibility in adding appropriate reverberation later on. But while working with sound designer extraordinaire Ren Klyce (Fight Club, Seven, The Game) on the film Being John Malkovich, I gained a whole new perspective on the art of recording at a distance. The simple truth is that EQ and reverb can only simulate distance and room size; nothing sounds as authentic as something recorded in a room of the appropriate size, with the mics matching the camera's distance from the action.

In the film's "Malkovich Malkovich" scene, John Malkovich finds himself in a restaurant filled with clones of himself. People are heard eating in the background, a pair of Malkoviches toast each other across the room, and so forth. I recorded the sounds of forks and knives clattering on plates, wine glasses clinking, and general body movement in a room that's approximately the size of the restaurant, with two mics recording in stereo and positioned about as far away from the sound as the camera was from the action. The results were perfect, and they worked much better than if I had recorded the sounds in a vocal booth, rolled off the top and bottom, and added reverb. Digital reverbs are wonderful, but they can never match the quality of a natural space.

Recording with distance, or in roomy environments, is useful not only for creating realism but also for concocting big, imposing sounds. Using a large stairwell with a long, gorgeous reverberation, I recorded door slams and latch clicks that captured the feeling of being locked in a big, spooky dungeon. I also used the sounds in other settings: when I needed to add weight and power to explosions (the door slams) and simulate the cocking of a menacing weapon (the latch clicks).

FOLEY FEST

A critical part of any convincing sound-effects track is Foley: the art of recording human movement and interactions in sync with a motion picture (see Fig. 4). Without it, the actors' movements seem empty and lifeless.

Foley is usually recorded on a sound stage after the picture has been filmed. An actor's movements are typically re-created in three passes: footsteps, clothing (rustling cloth, jingling keys, and so on), and prop handling (ordinary interactions with objects, such as picking up a towel and drying your hands).

Foley is a well-established practice in filmmaking, combining with the ambient background and "hard" effects to create the effects layer. I've noticed that it often gets short shrift in game development, though. Often, the best we can expect in a game environment is four or six footstep samples for each of a few surfaces (usually wood floor, cement, grass, metal, and puddles). These footsteps are often cut from the same sound-effects CD libraries that everyone has, so we end up hearing the same footsteps in game after game.

I'm a proponent of taking Foley much further in games. For the LucasArts Entertainment title Grim Fandango, the supervising sound editor, Jeff Kliment, gave me a good deal of leeway in adding short Foley sounds. Though my equipment was not fancy, I had everything I needed: an Earthworks microphone, a quiet space in which to record, and some simple props such as Velcro, coconuts, bits of wood, bottles, coats, and keys. As a result, the game breathes, the 3-D characters move around with much more life, and the overall experience is improved. (You can read about the music created for Grim Fandango in "Dance of the Dead," in the September 1999 issue of EM.)

A simple but effective example of Foley in Grim Fandango is in the management of inventory. You, playing main character Manny Calavera, choose from your currently held objects by riffling through your jacket pocket and pulling out one item at a time until you find the one you want (see Fig. 5). This aspect of the game was clarified and improved with Foley sounds for each type of inventory.

Each time you want to get an object in the game, the action starts with a cloth rustle as the hand goes into the jacket. As the hand comes out holding the object, there is another rustle: small if the object is small, and more cloth moving and stretching if the object is big. Finally, each object has a little sound to help identify it as it appears on the screen: crinkling paper for notes and maps, a little hollow sploosh for a bottle filled with liquid, a small gun ratchet for the dart gun, and so forth.

SOUND DESIGN IN 3-D

When I try to figure out what an individual sonic event should sound like, I like to step back and picture the entire sonic tableau. I plan out a sound within the larger audio context by thinking about how it will fit in three dimensions: frequency range, amplitude envelope or shape, and a more ephemeral dimension of timbre, aesthetics, and meaning that I like to think of as "color."

Pitch: the vertical dimension. Let's start with the frequency range. Should a sound be focused in the low end (such as the engines of a starship), the midrange (a door opening or wind blowing), or the high end (car keys jingling, glass breaking)? Should the sound fill the whole frequency spectrum? Explosions and rocket launches are built from white noise and often have low, medium, and high components. The trick is to create a niche within the frequency spectrum for your sound to inhabit uniquely.

When sounds have similar frequency content, they have a tendency to mask or interfere with one another. This problem is particularly acute in the midrange, where you have to make sure you don't mask the dialog. Low-end sounds can easily build up into a thick, muddy soup if you're not careful. And lots of high, brittle sounds mixed together are just plain irritating. For example, I've created explosions that were to be used in the same events for which the composer had written timpani hits; mixing the two elements together was a recipe for instant mud.

The best solution is to use EQ to create a space in the frequency spectrum for each element. When working on middle- or upper-register sounds, I frequently run a high-pass filter to clear out any unnecessary frequencies on the bottom. Start with a cutoff frequency of 85 Hz and increase the frequency until it audibly changes the midrange in a way you don't like-then dial it back a bit. We often record low rumbles without even realizing it; this EQ technique will clean them up. The goal is to leave a clear, open space for each sound to be heard.
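The low-cut itself is simple to picture. Here is a one-pole high-pass filter in Python as a minimal stand-in for the low-cut EQ described above (real channel EQs use steeper 12 or 24 dB/octave slopes):

```python
import math

def high_pass(samples, cutoff_hz, sr=44100):
    """One-pole high-pass: attenuates content below cutoff_hz at
    6 dB/octave while passing the midrange mostly untouched."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / sr)
    out, y, x_prev = [], 0.0, samples[0] if samples else 0.0
    for x in samples:
        y = alpha * (y + x - x_prev)   # standard one-pole HPF recurrence
        x_prev = x
        out.append(y)
    return out
```

With an 85 Hz cutoff, a constant DC-like rumble is removed entirely, while a 1 kHz tone passes with well under a decibel of loss.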

Shape. Imagine a sound as a physical object. What does it look like? Is there a "shape"? Is the sound sharp and pointy (such as metal impacts or gunshots), jagged or crunchy (scrunching Velcro or walking on gravel), swaying and colorless (wind gusts blowing through trees), or pillowy and soft (such as filtered ocean waves)?

I like to think of the shape of a sound as being related to its amplitude characteristics. Combinations of opposing shapes, such as a soft sound and a sharp sound, work well together because it is easy for the ear to tell them apart. Shapes that are too similar are more likely to be heard as a single, jumbled entity, particularly if they inhabit similar frequency ranges.

Color. I think of a sound's color as involving its timbral characteristics, as well as the literal and symbolic meanings of the sound. This approach is quite personal and aesthetically oriented; as with dreams, everyone pictures sounds using symbols and colors that are meaningful to them.

LIGHT MY FIRE

As an example of this visual, three-dimensional approach to sound design, let's look at the background ambience of medieval Prague in flames that I created for Nihilistic Software's game Vampire: The Masquerade. I imagined the flames as long yellow and brown strips moving horizontally across the picture, with little pits and holes. The strips represented the fairly steady noise components of the sound, and the pits and holes represented the crackles and sputtering of burning wood.

I wanted the flames to sound huge, so I decided to create a full-frequency experience with them. Since these elements would have little variation in frequency or amplitude, I could fill up the frequency space without overshadowing the shorter sounds that would stand out and add variation and color (see Fig. 6).

The background flames consisted of three layers of pads. The first was a low, rumbling furnace, with all the highs rolled off and the lows boosted to create an earthquake-like low end. The midrange was a big roaring wood fire, and the top end was a small crackly brush fire, complete with wet sputters and spits. When mixed together, these elements created an inferno with depth and weight.

To give variation and movement to the fire, I placed the sounds of flames in motion over the steady bands of color. I did this with a series of flame "whooshes" and fireballs created by whipping a torch past a microphone as well as changing the envelopes and EQ of wood-fire samples. Although similar in timbre to the background flames, the fireballs ebb and flow from the background because of their difference in shape. The combination of sounds adds motion and excitement without appearing to be created from separate layers.

Next, I added different sounds that were not based on fire to complete the scene of a city in flames. I imagined the inferno causing huge wooden timbers and trees to burn and fall, things to break, and general chaos to erupt. I mixed deep wood creaks with some underplayed wood crashes, and added spooky, crumbly cracks and grinds to portray the chaos. (Remember that these sounds need to sit in the background. Anything too far out front would detract from the interactive elements in the foreground that are added during game play.)

For the final layer of this background scene, I added human screams to enhance the mood. I grabbed some male and female screams from previous projects, created new ones in my vocal booth, and chose a few more from CD libraries of sound effects. I wove the best ones around the wood creaks and flame bursts, roughing them into place and then fine-tuning them until all the elements came out clearly. Because the screams were meant to be far away and were not portrayed by anyone onscreen, I lowered their volume 10 to 15 dB until they were audible but not out front. I panned the voices across the stereo space to diffuse them, then added a touch of reverb to soften them further.

Once I had the final mix the way I liked it, I listened with my master fader down low. When the feeling of the track met my initial image, and the screaming and wood-creaking events poked out just enough to add depth without being obtrusive, I knew the mix was done. I added a hair of Renaissance Compressor to bring the overall level up 2 dB, then I used the Waves L1 limiter with a ceiling of -0.1 dB to make sure there was no digital clipping. Finally, I used the Pro Tools Bounce to Disc command, which created a 2-track mix of all the tracks.
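The ceiling guarantee is easy to express numerically: a dB value maps to linear gain as 10^(dB/20), so a -0.1 dB ceiling is a peak of about 0.9886 of full scale. The Python fragment below is a naive hard limiter that only illustrates the "no sample exceeds the ceiling" idea; the Waves L1 actually applies look-ahead gain reduction rather than clipping.

```python
def db_to_gain(db):
    """Convert decibels to linear gain: -0.1 dB -> ~0.9886."""
    return 10.0 ** (db / 20.0)

def hard_limit(samples, ceiling_db=-0.1):
    """Clamp every sample to the ceiling so nothing digitally clips."""
    c = db_to_gain(ceiling_db)
    return [max(-c, min(c, s)) for s in samples]
```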

DON'T FALL IN LOVE

Bear in mind that it is rare to hear the entire soundtrack (including voices and music) while you are working on the effects for a project. The music is usually being created at the same time as the effects, and final dialog is often not available either.

The big trick behind good sound design is to get enough experience doing it to know what works, then try anything anyway. When working on the sound of experiencing the world from inside of John Malkovich, I shoved a $20 Radio Shack lavalier mic up my nose and into my sinus cavity to record breathing from inside my head. The result was great: a thick, rumbly, dark sound. But since I didn't fall in love with it, I wasn't heartbroken when it ended up on the Pro Tools equivalent of the cutting-room floor.

Do your best to imagine where your sound will live within the final tapestry, and create the best work you can-but be ready to change or lose it at the mix. It's no fun when that happens, but it is the nature of this business. Unless you are working in a vacuum on a project where no one else has any input, it's likely that some of your work won't make the final cut. Remember, you are there to serve the director's vision of the finished product, not your ego.

Nick Peck designs sound for films and games. He also composes and performs with the band Ten Ton Chicken. For more information, see www.tyedye.com/nick.html, or e-mail him at nick@tyedye.com.

Here are my Top 12 suggestions for effective sound design. Feel free to try these ideas at home!

1. Using compression on sound effects can be a blessing or a curse. At best, it pumps up sounds and makes them beefier. At worst, it can squash the dynamic range and reduce the overall drama of the sounds. Try compression on your material to see what works.

2. Work quickly. This allows you to capture the inspiration of the moment and create sharp edges that would otherwise lose their impact through too much fussing.

3. Take all the time you need to bevel and etch each sound effect to perfection.

4. Record your own sounds. If you can't do this, borrow someone else's recordings. If you can't do either, resort to CD libraries, but layer and edit different sounds together to create something new.

5. Foley and point-source hard effects are usually mono; ambiences are always stereo. Big effects usually sound best in stereo. Interactive sound effects (games, multimedia) are nearly always mono, but lobby for stereo music and ambiences.

6. Clean up low-end mud, high-end hiss, and all buzzes, hums, ticks, and miscellaneous junk that aren't part of the sound you want.

7. Leave handles (extra sound) at the tops and tails of sounds for film, and cut sounds for games and multimedia as tightly as you possibly can.

8. Trim files so that they begin and end on zero crossings, or put tiny fades at the top and tail of each file. Be careful, though, not to disturb powerful initial transients when you add your fades.

9. Find a sound-editing package that has a fast and easy user interface, and get really good at using it. Memorize the program's macro key commands. Speed and directness are more important than features.

10. Listen to the world. Record it. Live it. But watch out for wind rumble.

11. Avoid cheesiness.

12. Be yourself.