Outer Limits

How to use good sound design in your music post-production work.



BONUS MATERIAL
Web Clips: Listen to audio clips that demonstrate special effect sounds created with pitch- and formant manipulation, reversing, editing, vocoders, subtractive synthesis, and more

The next time someone seems underwhelmed by your creative calling, put them in front of a movie with the sound turned down. Sure, visuals are impressive, but it takes sound to give them real substance. Without the dialog to weave a thread of meaning, viewers would rarely be able to follow a movie's complex plot. Without the music to steer and massage our emotions, most movies would never grab our hearts the way we want them to. And, of course, Foley brings us — consciously and subconsciously — just enough of the sounds of the characters' world to make it seem real without distracting us from the fantasy.


FIG. 1: Time- and pitch-manipulation effects such as Soundtrack Pro's Vocal Transformer usually offer independent control over pitch and formant.

For pure visceral impact, however, it's the sound effects that matter. Nothing makes your heart feel as if it's about to leap out of your throat like the growl of a tyrannosaur or the shriek of an alien. The problem is that you can't always get a dinosaur into the studio — the good ones are booked solid, and their scale is totally unreasonable.

Good sound design brings to life creatures that never existed or that don't exist any longer. It takes ostensibly mundane sounds and makes them sound as threatening (or tragic or exciting) as the onscreen action feels. In this article, we'll explore some techniques for creating sounds that are supernatural or surreal. These methods are used in both film and game production. Although the two fields differ greatly in the way sounds are implemented, the characteristics of a good sound are essentially the same for both.

Familiarity Breathes

Although it's natural to be fearful — or at least a bit apprehensive — at the sight of a space alien or werewolf, filmmakers understand that evoking real terror requires something the viewer recognizes as dangerous. Thus, most movie creatures have fangs, claws, stingers, pincers, or something familiar enough to scare us immediately. It's important that creature sounds take advantage of this familiarity, too. Although a sound you've never heard before might alarm you, a sound that reminds you of a lion's roar will raise the hair on the back of your neck before you even start to deal with it on a rational level.

The flip side of this familiarity axiom is that if the listener hears that the sound of an alien is actually a lion's roar, the illusion is blown. So although it's good practice to build supernatural sounds from natural sounds, it's essential to fool the listener into feeling the familiarity without recognizing the familiarity. This is accomplished by breaking the original sound's context.

Both multitrack and 2-track audio editors offer an almost unlimited variety of tools for disconnecting a recorded sound from its original context. Cut up a few lines of dialog and shuffle the syllables, and you can create a dead language. Reverse the same lines to create the classic dream sequence. Notable sci-fi villains like Doctor Who's Daleks or the original Battlestar Galactica's Cylons were created by using ordinary dialog as the modulator input on a vocoder. Each of these examples plays on our ability to accept and process the familiar aspects of a human voice while simultaneously pulling us out of our comfort zone by processing that voice in unnatural ways.
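
If you'd rather experiment outside a DAW, the cut-up and reversal tricks are easy to prototype in a few lines of Python. The sketch below assumes the numpy and soundfile libraries, and the file names are placeholders; real work would cut at zero crossings or detected syllable boundaries rather than at fixed intervals.

import random

import numpy as np
import soundfile as sf

audio, sr = sf.read("dialog.wav")        # hypothetical input file

# Classic dream-sequence reverse: flip the samples end to end.
sf.write("dialog_reversed.wav", audio[::-1], sr)

# Crude "dead language": slice the dialog into syllable-sized chunks
# and shuffle them.
chunk_len = int(0.15 * sr)               # ~150 ms, roughly one syllable
chunks = [audio[i:i + chunk_len] for i in range(0, len(audio), chunk_len)]
random.shuffle(chunks)
sf.write("dead_language.wav", np.concatenate(chunks), sr)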

Let's start with a bit of alien dialog. Although it's tough to imagine that when we meet visitors from another planet they will speak English, it's important that moviegoers be able to recognize film aliens' utterances as language. Thus, most movie alien sounds begin as normal human dialog.

Hearing Voices

Web Clip 1 is a short conversation between aliens from two separate planets. The raspy voice is my friend Andy trying to coax his two-year-old daughter Sachiko to the microphone; the second voice is hers. I imported Andy's lines into Apple Soundtrack Pro, which is part of the Final Cut Studio 2 suite.


FIG. 2: Sonar's V-Vocal offers a graphical environment for altering pitch, timing, volume, and formant.

Reversing the file made the language unintelligible while maintaining a sense of timing and intonation that is immediately recognizable as conversation. The challenge with reversed dialog is that too many syllables end up as crescendos, giving away the technique. To avoid this, I deleted some of the suspect words and rearranged others to create a different flow. In some cases, I unreversed consonants and then grafted them onto the beginning of certain words.

The voice still sounded too human, so I inserted Soundtrack's Vocal Transformer effect and experimented (see Fig. 1). Andy has a nice, deep voice, so I had plenty of flexibility. Like most current pitch plug-ins, Vocal Transformer offers independent control of pitch and formant, allowing you to reduce the chipmunk effect when tuning a vocal. In this case, I embraced the rodents by cranking the formant control up several semitones. It immediately gave the impression of having moved Andy's larynx up to his forehead without actually reducing him to a chipmunk.

To balance the nasal quality of the voice, I lowered the pitch by about an octave and a half. The result is a creature voice with weight befitting a large and powerful body, timbre suggesting an alien vocal apparatus, and phrasing that evokes ordinary conversation. Applying the same processing to dialog with more emotional range would give a result that tracked the actor's performance well.
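
Vocal Transformer's internals aren't public, but if you want to experiment with independent pitch and formant control yourself, the open-source WORLD vocoder offers one route. Here's a minimal Python sketch using the pyworld package; the file name and shift amounts are placeholders, and a mono file is assumed.

import numpy as np
import pyworld as pw
import soundfile as sf

x, sr = sf.read("dialog.wav")            # hypothetical mono dialog file
x = x.astype(np.float64)                 # pyworld expects float64

# Decompose the voice into pitch (f0), spectral envelope, and aperiodicity.
f0, t = pw.harvest(x, sr)
sp = pw.cheaptrick(x, f0, t, sr)
ap = pw.d4c(x, f0, t, sr)

# Drop the pitch about an octave and a half (18 semitones).
f0_low = f0 * 2 ** (-18 / 12)

# Raise the formants a few semitones by warping the spectral envelope's
# frequency axis; evaluating each frame at bin/ratio moves features up.
ratio = 2 ** (4 / 12)                    # +4 semitones, an arbitrary choice
bins = np.arange(sp.shape[1])
sp_up = np.array([np.interp(bins / ratio, bins, frame) for frame in sp])

y = pw.synthesize(f0_low, sp_up, ap, sr)
sf.write("alien.wav", y, sr)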

The other creature voice was somewhat more challenging due to the airiness of Sachiko's voice. I tried several different pitch-shift plug-ins within Digidesign Pro Tools, Cakewalk Sonar, Apple Logic, and Soundtrack before settling on Sonar's V-Vocal. V-Vocal's graphic time manipulation allowed complete freedom over Sachiko's phrasing (see Fig. 2). I stretched and squeezed syllables, even individual phonemes, to shape each line as needed.

I shifted her pitch down about an octave and her formant a little more than half as far to obfuscate her gender and age. The result retains her innocence, suggesting a naive, daydreaming sort of character. Occasionally, the algorithm struggled to track the little hesitations and chirps that are typical of a two-year-old's voice, so after bouncing the processed vocal to a new clip, I selectively edited out those sounds, leaving some in for effect. Sachiko's native language is Japanese, and she created some wonderful nonsense phrases, so I had great freedom in editing together the sort of phrases I needed.

As I Live and Seethe

Some of my favorite science-fiction shows imagine organic technology, the sort of thing where a space vehicle is to some degree a living thing. Despite the fact that sound does not travel through the vacuum of space, the spaceships in virtually all movies, games, and television shows are audible to the audience. Most of us choose to overlook the paradox and accept the aural cues as indications of an object's size, speed, position, and even purpose. Hearing the sound of a living, breathing, menacing space vessel in my head, I turned to the vocoder, a time-honored device for blending human and electronic sounds.


FIG. 3: Native Instruments Vokator is a powerful sound-design tool built around a flexible vocoder architecture.

I created an audio track for my voice and an instrument track for Native Instruments Vokator (see Fig. 3). I used a bus to route the output of the vocal track to the input of the Vokator track and then set Vokator to use that input to modulate its internal synthesizer. Because the vocal track would not be heard directly, I grabbed a ten-dollar mic and started tweaking the sound.
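
Vokator's engine is far more sophisticated, but the basic channel-vocoder idea (measure the modulator's energy in a bank of bands and impose it on the carrier, band by band) fits in a short Python sketch. This one assumes scipy and soundfile, mono files at the same sample rate, and hypothetical file names.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

voice, sr = sf.read("voice.wav")         # modulator (hypothetical mono file)
synth, _ = sf.read("synth.wav")          # carrier, same sample rate assumed
n = min(len(voice), len(synth))
voice, synth = voice[:n], synth[:n]

env_sos = butter(2, 50, btype="lowpass", fs=sr, output="sos")
edges = np.geomspace(100, 8000, 17)      # 16 log-spaced analysis bands
out = np.zeros(n)
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    mod_band = sosfilt(sos, voice)
    car_band = sosfilt(sos, synth)
    # Envelope-follow the modulator band (rectify, then smooth), and
    # impose that envelope on the matching carrier band.
    env = sosfilt(env_sos, np.abs(mod_band))
    out += car_band * env

out /= np.max(np.abs(out))               # normalize
sf.write("vocoded.wav", out, sr)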

One of Vokator's better features is its ability to morph between settings in response to CC 1 messages, so I created two timbres — one for the ship's approach and another for its departure — and used my keyboard's mod wheel to glide between them as the ship passed by. It took a good deal of fine-tuning to get the right balance of pitched to unpitched sounds in each timbre in order to make the ship sound more mechanical and less musical.

To give the alien vessel as much subliminal angst as possible, I recorded several tortured cries. Over these sad syllables I played random 3-note combinations, eventually settling (fittingly, some would say) on a minor triad. I used a fade-out to smooth the sound of the ship flying away and a fade-in to obscure the initial consonant, lest the illusion be compromised. With a bit of practice, I was able to get the Doppler shift about right with my keyboard's pitch wheel, but I still did some hand tweaking. With the addition of a small amount of reverb, Web Clip 2 was born.


Landing the Mothership

To create a sound for a biomechanical mothership, I turned to my cats. A cat's purr is a mysterious yet comforting sound, until you scale it up to the size of a tiger looking for lunch. It's also a complex timbre with which to work.


FIG. 4: A fundamental technique amid all the space-age tools: zero-crossing edits make for a seamless loop of cat purrs without requiring any crossfades.

Cats don't purr on demand, however, so I had to be sneaky. For several days, I kept my M-Audio MicroTrack 24/96 handheld recorder close at hand, waiting for an opportunity. It came in the middle of the night, when two of our three feline companions took turns stealing my pillow. Pleased with themselves, they purred victoriously as I reached out to my nightstand for my recorder and slowly brought it in close to their smug whiskers.

A cat's purr continues seamlessly as the cat breathes, but its timbre changes between inhale and exhale. I started by editing out all the inhales from a particularly vigorous episode. With some careful old-fashioned zero-crossing edits, I created a seamless drone that sounded more like the thrum of a giant spaceship and less like a happy Himalayan (see Fig. 4).
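
The same zero-crossing trick is easy to automate. This Python sketch (numpy and soundfile assumed, file name and loop points hypothetical) snaps both edit points to upward zero crossings so the splice matches in level and direction, making the loop click-free without crossfades.

import numpy as np
import soundfile as sf

purr, sr = sf.read("purr.wav")           # hypothetical mono exhale recording

def next_upward_crossing(x, start):
    """Index of the first upward zero crossing at or after start."""
    seg = x[start:]
    hits = np.where((seg[:-1] < 0) & (seg[1:] >= 0))[0]
    return start + hits[0] + 1

# Snap both edit points to upward zero crossings, then tile the loop
# into a drone.
loop = purr[next_upward_crossing(purr, int(0.5 * sr)):
            next_upward_crossing(purr, int(2.5 * sr))]
sf.write("ship_drone.wav", np.tile(loop, 8), sr)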

To make the sound much bigger than life, I dropped the pitch several semitones with Digidesign's Time Shift plug-in. I followed that with some heavy compression with Sonnox's Oxford Dynamics and Inflator. The compression leveled out the volume fluctuations to create a steadier engine sound. Inflator has a way of adding more beef to a sound without actually making it louder.

Digidesign's Hybrid synthesizer provided the mechanical part of the engines. I adapted a drone preset by modulating the pitch of two oscillators with separate semirandom LFOs. The third oscillator provided mostly noise, with the whole thing being lowpass filtered pretty heavily. The resulting timbre held only a hint of pitch, especially when played in the lowest octave.
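
This isn't Hybrid's actual engine, but the recipe itself, two pitch-modulated oscillators plus noise through a heavy lowpass filter, is simple to approximate in Python. Everything below (frequencies, LFO depths, filter cutoff) is an arbitrary starting point, not the settings I used.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

sr, dur = 44100, 10.0
t = np.arange(int(sr * dur)) / sr
rng = np.random.default_rng(seed=2)

def random_lfo(depth_hz, rate_hz):
    """Semirandom LFO: stepped noise smoothed by linear interpolation."""
    steps = rng.uniform(-depth_hz, depth_hz, int(dur * rate_hz) + 2)
    return np.interp(t, np.linspace(0, dur, len(steps)), steps)

base = 55.0                              # a low A, an arbitrary choice
drone = np.zeros_like(t)
for detune, lfo in [(1.0, random_lfo(2.0, 0.5)), (1.01, random_lfo(3.0, 0.3))]:
    phase = 2 * np.pi * np.cumsum(base * detune + lfo) / sr  # integrate frequency
    drone += np.sin(phase)

drone += rng.normal(0, 0.5, len(t))      # third "oscillator": mostly noise
sos = butter(4, 200, btype="lowpass", fs=sr, output="sos")   # heavy lowpass
drone = sosfilt(sos, drone)
sf.write("engine_drone.wav", drone / np.max(np.abs(drone)), sr)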

I created escort ships with another Hybrid patch using its various envelopes to drop the pitch and lower the filter cutoff as the ships pass by. Small amounts of pan automation for each note — each note being an escort — contribute to the illusion of motion. I had to draw each pan curve meticulously by hand to keep the tail of one ship from jumping to the position of the next ship (see Fig. 5). Had I printed each ship to its own track, or at least enough tracks to ensure that no two adjacent or overlapping ships shared a track, the panning would have been easier. My work flow is necessarily shaped by the tools at hand, however, so I tend to conserve tracks when working in Pro Tools M-Powered, which supports only 32 audio tracks.
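
For a single flyby, an equal-power pan curve is all the math involved. The sketch below pans a hypothetical mono escort clip left to right; giving every ship its own clip and its own curve is exactly the one-track-per-ship arrangement described above.

import numpy as np
import soundfile as sf

ship, sr = sf.read("escort.wav")         # hypothetical mono flyby clip

# Sweep the pan angle from 0 (hard left) to pi/2 (hard right) over the
# clip; the equal-power law keeps perceived loudness steady en route.
angle = np.linspace(0, np.pi / 2, len(ship))
stereo = np.column_stack([ship * np.cos(angle), ship * np.sin(angle)])
sf.write("escort_panned.wav", stereo, sr)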


FIG. 5: Layers of sound, each with one or more automation envelopes, combine to create the sound of a spaceborne warship passing overhead.

There's one special fighter that flies by at about 25 seconds — it sounds a bit like a Formula One car. I chose another organic sound to call attention to it and to imbue it with a hint of emotion. It's the sound of another of our cats getting annoyed at the recorder, a delicious meow that I massaged into a flyby. First I molded the timing of the sound in V-Vocal, working it like a lump of Play-Doh until it had the right velocity. I used V-Vocal's pitch and volume envelopes to get the Doppler shift right. When working with pitch- and time-manipulation plug-ins, I am always listening carefully for the point at which the algorithm “breaks.” The sound gets grainy and artificial, and I have to pull back the effect or try another plug-in. For this task, V-Vocal allowed me all the headroom I needed, and its graphic interface made the job a snap.
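
The pitch and volume envelopes you draw for a flyby follow simple physics, and it can help to know the shape you're aiming for. This sketch computes the Doppler pitch ratio and distance-based gain for an idealized pass and applies them with variable-rate interpolation; the speeds and distances are invented, and the clip is simply mapped onto a 4-second pass.

import numpy as np
import soundfile as sf

meow, sr = sf.read("meow.wav")           # hypothetical mono source clip
n = len(meow)

c, v, d = 343.0, 60.0, 10.0              # speed of sound, ship speed, closest distance (m)
t = np.linspace(-2.0, 2.0, n)            # map the clip onto a 4 s pass
x = v * t                                # position along the flight path
r = np.sqrt(x**2 + d**2)                 # distance from ship to listener

v_radial = v * x / r                     # negative approaching, positive receding
ratio = c / (c + v_radial)               # Doppler pitch ratio (>1 approaching)
gain = d / r                             # simple 1/r amplitude falloff

# Apply the pitch curve by reading the source at a variable rate
# (this warps the timing slightly, as varispeed would).
pos = np.cumsum(ratio)
pos = pos / pos[-1] * (n - 1)
flyby = np.interp(pos, np.arange(n), meow) * gain
sf.write("flyby.wav", flyby, sr)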

I bounced the clip and exported it into Pro Tools, where I doubled it and transposed the copy an octave down to give it bulk. I bused the two tracks to an aux, where I added a chorus with the Short Delay plug-in. The chorus gives the sensation of the beating of two engines, a sound that pilots know well and the rest of us recognize at least subconsciously. The fighter's trajectory was achieved with a bit more dramatic panning than the escorts received, another subtle sign that it is special. Web Clip 3 is the final 40-second sequence.
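
To approximate the doubling-and-chorus treatment outside Pro Tools, you could shift a copy down an octave and add a modulated short delay. The sketch below uses librosa for the octave drop; the depths, rates, and mix levels are guesses, not the Short Delay plug-in's settings.

import numpy as np
import soundfile as sf
import librosa

fly, sr = sf.read("fighter.wav")         # hypothetical mono flyby clip

# Double the part an octave down to give it bulk.
low = librosa.effects.pitch_shift(fly, sr=sr, n_steps=-12)
mix = fly + 0.7 * low

# Short modulated delay (10 ms swept by 3 ms) for the chorused,
# twin-engine beating effect.
n = len(mix)
delay = (0.010 + 0.003 * np.sin(2 * np.pi * 0.5 * np.arange(n) / sr)) * sr
wet = np.interp(np.arange(n) - delay, np.arange(n), mix, left=0.0)
out = mix + 0.6 * wet
sf.write("fighter_doubled.wav", out / np.max(np.abs(out)), sr)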

Bring Down the House

Recording an avalanche is tricky business, what with the risk of death and all that. With a bit of imagination and the right tools, however, a world-class landslide can be created in the studio — or, in this case, the kitchen.

As I held the microphone as close as practical, my ever-patient wife poured 4 pounds of cat food from one container into another over and over. Predictably, this drove our cats into a feeding frenzy, so I retreated to my studio and left Barb to appease them. I imported the files into Pro Tools and auditioned them for the best bits, editing and naming them as I went.


FIG. 6: Sometimes a heavy hand is warranted, as with these three EQ plug-ins adding up to 10 dB of low bass to the primary rumble tracks.

Although Digidesign has introduced two more-sophisticated time/pitch plug-ins, its original Pitch Shift has two appealing features. First, it allows you to defeat time correction, creating an old-fashioned varispeed-style effect. Second, its Accuracy slider lets you optimize the algorithm for sound or rhythm. In this case, I chose to optimize sound and slowed down various regions by as much as 4:1.
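
Varispeed without time correction is nothing more than resampling, which makes it trivial to reproduce. In this scipy sketch (file name hypothetical), interpolating four times as many samples and playing them back at the original rate drops the pitch two octaves and quadruples the length, just like slowing a tape to quarter speed.

import soundfile as sf
from scipy.signal import resample_poly

rocks, sr = sf.read("catfood.wav")       # hypothetical source region

# 4:1 varispeed slow-down: generate four times as many samples, then
# play them back at the original sample rate.
sf.write("rocks_slow.wav", resample_poly(rocks, up=4, down=1), sr)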

The slowed-down cat food did a pretty good job of imitating falling rocks, but creating a compelling avalanche requires enough bottom end to give the impression of the earth belching. For this, I turned to Way Out Ware's TimewARP 2600. I created a simple patch that blends all three oscillators, each set to a different low base pitch. I set the initial filter cutoff to about 100 Hz. The thing that really made the earth move was patching the noise generator to the filter cutoff. This made the timbre shift unpredictably, lending the sound the sort of natural chaos it needed. This flexible architecture is one reason modular or semimodular synths like the ARP 2600 and its clones are so powerful for sound design. I played this sound in real time to create accents that help steer the intensity.
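
The noise-to-cutoff patch needs a filter whose cutoff changes every sample. Below is a one-pole lowpass sketch in Python, not the 2600's four-pole design, with slow random noise sweeping the cutoff around 100 Hz as in the patch described above; all the specific numbers are invented.

import numpy as np
import soundfile as sf

sr, dur = 44100, 8.0
n = int(sr * dur)
t = np.arange(n) / sr
rng = np.random.default_rng(seed=3)

# Three low oscillators blended, as in the patch.
src = sum(np.sin(2 * np.pi * f * t) for f in (30.0, 45.0, 55.0))

# Slow random noise sweeps the cutoff around 100 Hz.
steps = rng.uniform(60.0, 180.0, int(dur * 8) + 2)    # 8 new values per second
cutoff = np.interp(t, np.linspace(0, dur, len(steps)), steps)

# One-pole lowpass with a per-sample coefficient (slow but clear).
g = 1.0 - np.exp(-2 * np.pi * cutoff / sr)
out = np.zeros(n)
y = 0.0
for i in range(n):
    y += g[i] * (src[i] - y)
    out[i] = y

sf.write("rumble.wav", out / np.max(np.abs(out)), sr)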

The core sound of the avalanche was complete: two copies of the lowest-pitch cat food region, offset by several seconds and panned hard left and hard right, complemented by a synthesized rumble. Each part was heavily EQ'd to emphasize its low end (see Fig. 6). The painstaking work, however, was yet to come.

I had succeeded in creating an avalanche ambience without any clear and present danger. The basic cat-food-turned-landslide regions were in stereo, and, panned to opposite sides, they established a good three-dimensional environment for the landslide. Next I needed to get up close and personal with some heavy rocks.

Pro Tools always deals with stereo regions as multimono files, so creating a mono region from a stereo region is as simple as dragging one channel from the region list to the tracklist. I harvested a handful of such regions that had been pitch-shifted less severely than the “rumble” regions. Each occupied its own mono audio track so it could be mixed individually. I hand trimmed each region to an appropriate length and hand drew volume and pan automation to make it come crashing into the picture.

Sonnox's Oxford Reverb includes a preset called Canyon, and it proved to be just the thing to keep things rumbling through the surrounding mountains. Web Clip 4 is the result of all this faking and tweaking.

Postscript

Note the recurring application of basic synthesis techniques. Even when a synthesizer is not directly involved, you'll need to apply volume and pan envelopes and automate effects parameters — after all, life is never static, so why create static sounds?

Given a Hollywood budget, I would have done a few things differently, such as experimenting with different brands of cat food — perhaps even dog food. Seriously, though, the common thread through all these examples is the triumph of imagination over money. If you listen carefully to the sounds around you and take note of their distinctive characteristics, you will start to hear the possibilities in a chair squeaking or crickets chirping. Almost any sound can be pressed into service if you break it out of its original context. To a good sound designer, there are no ordinary sounds.

Brian Smithers is the author of Mixing in Pro Tools: Skill Pack (Thomson Learning, 2006). He would like to thank his family, friends, and pets for their contributions to this article.
