FIG. 1: Changing the wet/dry reverb mix, increasing the decay time, and adding a lowpass filter creates the effect of sound receding into the distance.
Automation can go a lot deeper than recording changes in a track's volume and pan position. Just about any parameter of any synth or effects plug-in can be automated. You can exploit automation for sound design, tracking, and mastering. I'll describe six creative uses of DAW automation that can enliven your tracks. Though I happen to use Pro Tools as my DAW, these techniques will work just as well with any DAW and a basic suite of plug-ins. The parameter names may be different, but the concepts are the same.
I'm a huge fan of physical controllers — for me, there is something about moving a fader that is more musical than drawing lines with a mouse. Many of these automation ideas lend themselves to twists and pushes of knobs and faders. It's well worth the time to program a fader on your MIDI controller to work with the parameters discussed here.
Fun with Filters
Filtering is the most powerful and most often used tool in any engineer's arsenal. Equalization (EQ), which emphasizes or deemphasizes different parts of a sound's frequency spectrum, is often necessary to make different elements coexist peacefully in a mix. Static EQ is enormously powerful, but you can further enhance sounds by automating the parameters of EQs and other types of filters (see Web Clip 1). Filter effects such as a swept lowpass filter darkening and brightening an arpeggiated synth line have become musical clichés for a reason: they sound great.
There are dozens of different ways to use automated EQ effectively. In film work, dialog editors are often required to match dialog recorded by different mics at different times. To smooth over the differences, an editor creates snapshots of EQ settings that are applied as the lines play back.
In a more musical context, you can automate EQ to change the timbres of instruments at different sections of the music in order to create contrast. You can, for example, use a highpass filter to drop the low end out of a track or a mix for a couple of bars, creating a floating moment that crashes back to earth when the low end returns. Or you can filter out the highs and the lows, creating a midrange peak to simulate an AM radio effect.
For a real change of pace, try a frequency fade-out. Slowly lower the cutoff frequency of a lowpass filter instead of dropping the level, and then use a volume fade at the end to finish it off.
You can apply LFOs to a filter plug-in's cutoff frequency, adjust a midrange EQ's gain or center frequency over time, or even split tracks into multiple EQ bands that can move around the stereo space independently. There are endless ways that you can use filtering to animate sounds.
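To make the swept-filter idea concrete, here is a minimal Python sketch of a one-pole lowpass whose cutoff frequency is modulated by a sine LFO. The function name, parameter values, and the one-pole design are my own illustration, not any particular plug-in's API; real DAW filters are steeper, but the automation concept is identical.

```python
import math

def lowpass_swept(samples, sr, base_hz, depth_hz, lfo_hz):
    """One-pole lowpass whose cutoff is swept by a sine LFO.

    A sketch of automating a filter's cutoff: the coefficient is
    recomputed every sample from the LFO's current position.
    """
    out = []
    y = 0.0
    for n, x in enumerate(samples):
        # Instantaneous cutoff: base frequency plus a sine-LFO offset.
        cutoff = base_hz + depth_hz * math.sin(2 * math.pi * lfo_hz * n / sr)
        # One-pole smoothing coefficient for this cutoff (0 < a < 1).
        a = 1.0 - math.exp(-2 * math.pi * cutoff / sr)
        y += a * (x - y)
        out.append(y)
    return out
```

Feeding this a bright signal and sweeping `depth_hz` wide is the classic darkening-and-brightening effect described above.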
The ratio of direct to reflected sound and the sound's high-frequency content are cues to the distance from the source to the listener. High frequencies, because of their shorter wavelengths, are more directional in nature — the farther you are from the source, the more the high frequencies scatter and the less you hear them. Here's a trick I like to call the Going Away effect (see Fig. 1 and Web Clip 2).
Insert a reverb plug-in on a vocal track, and choose a nice room, hall, or plate algorithm, with a reverb time of around three seconds. Start with the wet/dry mix set to 100 percent dry, and slowly move it to 100 percent wet. Done slowly enough, this effect can be quite subtle. If your reverb plug-in can handle changing the reverb-decay time in real time without creating unpleasant glitching artifacts, you can enhance the effect by slowly increasing the decay time as well. Try starting with a decay time of 700 ms and increasing it to 3 seconds or more.
You can further enhance the sense of moving away from the listener by slowly rolling off the high frequencies of the dry signal. If your reverb plug-in includes an input EQ, you can use that; otherwise, simply insert an EQ or lowpass filter plug-in before the reverb.
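Stripped to its essentials, the Going Away effect is a slow crossfade between the dry track and a prerendered wet (reverberant) version of it. Here is a minimal Python sketch of that automation move; the function name and the simple linear fade are my own simplification (an equal-power curve would sound smoother):

```python
def going_away(dry, wet, fade_samples):
    """Crossfade from 100 percent dry to 100 percent wet.

    `dry` is the vocal; `wet` is the same vocal rendered through a
    reverb. The mix ramps linearly over `fade_samples`, then holds
    at fully wet, like automating a plug-in's wet/dry knob.
    """
    out = []
    for n in range(len(dry)):
        mix = min(n / fade_samples, 1.0)  # 0.0 = all dry, 1.0 = all wet
        out.append((1.0 - mix) * dry[n] + mix * wet[n])
    return out
```

Done over many seconds rather than a few samples, the same ramp is exactly the subtle recede-into-the-distance gesture described above.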
Electronic-music pioneer Alvin Lucier created the ultimate artistic statement of this technique, with his piece “I Am Sitting in a Room” (Lovely Music, 1980). He recorded himself speaking in a room, and then rerecorded that recording, repeating the process 32 times. In the end, there was nothing but a wall of reverb.
The Beatles are known for their gorgeous vocal sounds. Originally, they achieved that sound by precisely rerecording vocal parts in synchronization with the first vocal pass, a process known as double tracking. John Lennon wanted a way around this tedious process, and in 1966, Ken Townsend, a recording engineer at Abbey Road, invented Automatic Double Tracking (ADT) as the solution. ADT amounts to making a copy of the original vocal on a second tape recorder and then playing both recordings simultaneously during the mix. Millisecond variations in the playback speeds create the doubling effect.
You can now achieve a similar effect more easily with digital delay. Insert a delay plug-in on the vocal track, set the wet/dry mix to 50 percent with no feedback, and find a delay time that sounds about right (settings of 30 ms or less usually work best). Keeping the delay time of the doubled track static doesn't quite create the desired doubling effect. You can get closer by subtly automating the delay time.
FIG. 2: The Beatles utilized multiple tape recorders for Automatic Double Tracking, but the plug-in method is much easier.
To better capture the spirit of ADT, duplicate the original vocal and place it on a second track, with a delay plug-in set to 100 percent wet (see Fig. 2 and Web Clip 3). As you automate the delay time in your piece, notice the colors that appear. Below 10 ms, you get comb-filtering effects that color the frequency response of the signal. Between 10 and 30 ms, you get a more pronounced phasing effect, and from 30 to 80 ms, the tracks start to break apart and appear as separate elements.
Many delay plug-ins produce audible clicks when the delay time is changed on the fly. If your plug-in does that, you may be able to minimize the problem by adding a lowpass filter after the delay.
Remember, you're not limited by a physical number of tape recorders. Try using three, four, or even five separate delays, modulating each one's delay time by hand. You'll get a big sound with lots of animation and subtlety.
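The tape-machine behavior being imitated here can be sketched in a few lines of Python: a delay line whose delay time drifts slowly under LFO control, mixed 50/50 with the dry signal. This is a simplified model, not the actual ADT circuit, and all parameter values are illustrative. Note the fractional-sample interpolation, which is what keeps the moving delay time from clicking.

```python
import math

def adt(samples, sr, base_ms=20.0, wobble_ms=3.0, lfo_hz=0.4):
    """ADT sketch: mix the signal with a copy whose delay time drifts
    slowly, like two tape machines running at slightly different speeds.
    """
    out = []
    for n in range(len(samples)):
        # Slowly modulated delay time, converted to fractional samples.
        d_ms = base_ms + wobble_ms * math.sin(2 * math.pi * lfo_hz * n / sr)
        d = d_ms * sr / 1000.0
        i = int(d)
        frac = d - i
        # Linear interpolation between neighboring samples avoids the
        # clicks a plain integer delay-time jump would cause.
        a = samples[n - i] if n - i >= 0 else 0.0
        b = samples[n - i - 1] if n - i - 1 >= 0 else 0.0
        delayed = (1.0 - frac) * a + frac * b
        out.append(0.5 * (samples[n] + delayed))  # 50/50 wet/dry mix
    return out
```

Running several copies of this in parallel, each with its own `lfo_hz` and `wobble_ms`, is the multiple-tape-machine trick described above.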
Mid-Side Vocal Widening
I'm a big fan of mid-side (MS) recording. I use this technique all the time for field recording, because it gives me a great deal of flexibility back in the studio. I can use a mono on-axis, a mono off-axis, or a stereo version of the same recording, and I can even adjust the balance of stereo width against center strength. All of that power is encoded in just two channels.
FIG. 3: Any two mics can be used for mid-side recording, as long as they have the appropriate polar patterns.
For those unfamiliar with MS recording, it uses two microphones placed at the same location. The mid-channel (also called the center-channel) microphone uses a cardioid or omnidirectional capsule, whereas the side-channel microphone has a figure-8 polar pattern (see Fig. 3). Matched pairs of mics are best, but any two mics will do the job, as long as they have the appropriate polar patterns. The mid channel is recorded on one track; the side channel is recorded on another. The side-channel track is duplicated, and one instance is flipped 180 degrees with respect to the other. The two side channels are then panned hard left and right, while the mid channel is set to the middle.
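The duplicate-and-flip routine just described is equivalent to the standard MS decode: left = mid + side, right = mid - side. A short Python sketch makes the math explicit (the `width` control, which scales the side contribution, is my own addition anticipating the automation moves below):

```python
def ms_decode(mid, side, width=1.0):
    """Decode mid/side tracks to left/right with a width control.

    Panning a side copy hard left and a polarity-flipped copy hard
    right, against a centered mid, sums to: L = M + S, R = M - S.
    `width` scales the side channel (0.0 = mono mid only,
    1.0 = full stereo).
    """
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

Automating `width` from 0.0 toward 1.0 is precisely the verse-to-chorus widening move described next.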
You can take advantage of the MS technique for vocal or instrument tracks by automating the levels of the stereo and center elements to enhance the drama in the music. For example, you could use the mid channel alone for the verse, and then slowly fade in the side channels for the chorus. That will open up the track and build momentum.
For contrast, try using only the side channel for the bridge. Or, to create a subtle punctuation, use automation to bring in the mid channel on selected words or phrases. Those same techniques can bring an animated stereo image to electric guitars, keyboards, and sound effects (see Web Clip 4).
Riding the Fader
Dynamics processing has had a gargantuan impact on popular recording techniques over the past 40 years. Legendary boxes such as the Fairchild 670, the LA-2A, the 1176, and the Distressor have pumped, squished, and slammed tracks and whole mixes. The result can be cohesive mixes, big fat drums, huge bass tracks, and vocals that soar over everything else. On the other hand, you can have too much of a good thing.
Overcompression can lead to tracks and mixes that are highly squeezed, removing all trace of dynamics on both the micro and macro levels. Loudness is context specific; it has meaning only relative to softness. Contrast in music is a good thing. But because there is always the challenge of making vocals work over backing tracks that usually don't change much in volume, compression will always be with us.
The great engineers of yesteryear solved problems in level matching by riding the faders during recording and mixdown (see Web Clip 5). That may sound obvious, but if you are paying attention to the loud and soft moments of a vocal or an instrument during a tracking session, you can avoid the need for heavy dynamics processing by adjusting the input level as you go. Of course, that is much easier to do during overdubs than while tracking a whole band.
You can use automation to ride a vocal track's level during mixdown, increasing gain for whispered passages and reducing it for louder ones. If you can't react quickly enough to satisfactorily ride the levels, try playing back the track at half speed, assuming your DAW supports that. Your automation data will play back correctly at full speed, and your tracks' dynamic levels will rise and fall smoothly.
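A fader ride recorded as automation is just a gain envelope made of breakpoints with interpolation between them, which is how most DAWs store it. Here is a minimal Python sketch of that idea; the function name and the (sample_index, gain) breakpoint format are my own illustration:

```python
def ride_fader(samples, breakpoints):
    """Apply fader-ride automation as linearly interpolated breakpoints.

    `breakpoints` is a list of (sample_index, gain) pairs, like the
    nodes a DAW draws on a volume automation lane. Gain is held flat
    before the first node and after the last one.
    """
    out = []
    bp = sorted(breakpoints)
    for n, x in enumerate(samples):
        # Advance past breakpoints we have already played through.
        while len(bp) > 1 and n >= bp[1][0]:
            bp.pop(0)
        if len(bp) == 1 or n <= bp[0][0]:
            g = bp[0][1]  # hold before the first / after the last node
        else:
            (n0, g0), (n1, g1) = bp[0], bp[1]
            g = g0 + (g1 - g0) * (n - n0) / (n1 - n0)
        out.append(g * x)
    return out
```

Riding up the whispered passages and pulling down the loud ones amounts to nothing more than choosing those breakpoints well.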
If you want to use some compression while still having a bit of manual control, automate the compressor's threshold control. Play through the piece varying the threshold, bringing in more or less compression at different moments. Once again, you can use this technique to create contrast between different sections of music, perhaps having an open, dynamic sound during the verse, and a squashed, beefy sound during the chorus.
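To see why automating the threshold gives you manual control over the amount of compression, here is a deliberately stripped-down Python sketch: a hard-knee compressor that reads its threshold from a per-sample list, standing in for an automation lane. With no attack or release smoothing it is really a waveshaper, so treat it as an illustration of the threshold math only.

```python
def compress(samples, thresholds, ratio=4.0):
    """Hard-knee compressor with a per-sample (automatable) threshold.

    Signal magnitude above the current threshold is reduced by
    `ratio`; lowering the threshold squashes the signal harder.
    """
    out = []
    for x, t in zip(samples, thresholds):
        mag = abs(x)
        if mag > t:
            mag = t + (mag - t) / ratio
        out.append(mag if x >= 0 else -mag)
    return out
```

Ramping the `thresholds` list down for the chorus and back up for the verse is the open-versus-squashed contrast described above.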
LFO-like Audio Modulation
Low-frequency oscillators (LFOs) have been used to modulate synth parameters since the beginning of electronic music. You can stretch beyond the traditional uses of LFOs (controlling an oscillator's pitch or a filter's cutoff frequency) by creating LFO-like automation on digital audio tracks in your DAW.
The technique is easiest to apply on systems that have some form of draw tool that supports waveshapes, such as Pro Tools' pencil tool. But the technique is worthwhile even if you have to manually approximate LFO waveshapes, because you can apply it to any audio parameter you can think of.
Perhaps the most obvious example is panning. Using a sine or triangle waveshape, draw pan automation that bounces between far left and far right at the highest resolution your DAW allows. (To ensure that you get the highest resolution, turn off any automation-thinning preferences.) The result is a warbly, animated effect (see Web Clip 6).
FIG. 4: Drawing rapid automation curves to modulate pan and level parameters can create interesting effects.
You can apply the same technique to volume, creating rapid tremolo effects. Or, instead of volume, try using the mute parameter for a choppier effect (see Fig. 4). If your system can play back this kind of automation at more than 20 Hz, you can even get into the range of amplitude modulation, creating audible sidebands.
Try other shapes: square waves create sharper edges, and pulses of different levels with varying timing create turbulence and randomness (think sample and hold). Finally, you can just freehand it — you never know what kinds of weird effects you might get by scribbling with a mouse or a fader.
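As one worked example of the sine-shaped pan automation described above, here is a short Python sketch that moves a mono signal between left and right under a sine LFO. The equal-power pan law (cosine/sine gains) is my choice for the illustration; it keeps the overall loudness steady as the sound swings across the stereo field.

```python
import math

def autopan(samples, sr, lfo_hz):
    """Pan a mono signal left-right with a sine LFO, using an
    equal-power pan law so perceived loudness stays constant.
    """
    left, right = [], []
    for n, x in enumerate(samples):
        # Pan position in [0, 1]: 0 = hard left, 1 = hard right.
        pos = 0.5 + 0.5 * math.sin(2 * math.pi * lfo_hz * n / sr)
        left.append(x * math.cos(pos * math.pi / 2))
        right.append(x * math.sin(pos * math.pi / 2))
    return left, right
```

Push `lfo_hz` above 20 Hz and, as noted above, the panning stops reading as movement and starts generating audible modulation sidebands.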
The ability to change effects parameters over time gives you myriad possibilities for experimentation. So dive into your DAW, pull up some plug-ins, and start exercising that fader finger.
Nick Peck is a sound designer, engineer, composer, and Hammond organist living in the San Francisco Bay Area.