Master Class: Processing in the Box


Linkin Park's Mike Shinoda works in the box at NRG in Hollywood.

It may not be obvious, but when you record into most DAW software, any plug-ins that you insert in an audio track are in the monitor path. They affect playback of audio, and not the file that is being written onto disk. For example, when you apply EQ to a track during recording, that EQ affects only what you hear in the monitors. Furthermore, when recording guitar or bass tracks using amp simulator plug-ins, you are always recording unprocessed instruments and not committing “your sound” to the audio files.

This “nondestructive” processing means good news and bad news. The good news is that you have the option at any time to remove or alter that plug-in or amp simulation simply by removing or bypassing it. The bad news is that if you spend a lot of time working on the sound, and then for some reason it changes or you lose it (maybe the session crashes), you will have to manually re-create it. (Note: One of the great things about recording guitar or bass through an amp simulator plug-in is that, because the recorded file preserves the unprocessed sound of the instrument, you have the flexibility to change the tone later in the production. You can even split the guitar output while tracking, then route it separately into the session and to a real amp for monitoring. This configuration lets you hear and interact with your amp for sustain, etc. while recording, but still gives you the option of re-amping the dry track later on.)

Computer crashes notwithstanding, there are several valid reasons for committing an effect to the recorded file. You may be recording on someone else's DAW and they have plug-ins (or hardware processors) you don't own. If you save the session with their plug-ins and take it back to work with it in your studio, those plug-ins will be inactive, making your session sound different. As you're probably aware, plug-ins consume DSP resources. The more plug-ins you use, the more DSP resources are required, whether the DSP is happening on the host processor (“native”) or on a processing card such as an HD Core or HD Accel card (“TDM”). By recording the plug-in to the audio file, you can later play back the track without instantiating the plug-in, which frees up DSP. If you are “old school” and grew up EQing to tape, you may feel comfortable “printing” EQ and/or effects.

I have no problem EQing or compressing to tape, but I have never been comfortable recording echo or reverb into a sound file because such effects are very difficult to “undo.” If you're concerned with running out of the horsepower required for quality 'verbs, we'll examine ways you can record reverb (or echo) onto separate audio tracks to free up DSP. During the early days of my recording career, I worked with a singer who insisted that we record his lead vocal track with his favorite echo. (I think it was a Lexicon Prime Time!) It always freaked me out. When the track was isolated you could hear discontinuity in the echo where we punched. Fortunately most of it was masked when the vocal track was placed in the mix.

Some of the ideas that follow can cause latency when used with certain combinations of CPU, software, track count, and plug-ins. (As a refresher, latency is the slight delay introduced when a sound is routed into a DAW, through a record-enabled track, then back to the monitor output.) Latency can affect a performance, so you may need to play with the allocation of resources such as buffer size and/or number of processors dedicated to the audio app in an effort to make latency tolerable. (This will be system-dependent.)
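As a rough sketch, you can estimate monitoring latency from buffer size and sample rate; real-world figures will be somewhat higher because of driver and converter overhead, which this simple arithmetic ignores:

```python
def round_trip_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """Approximate round-trip latency: one buffer on the way in,
    one on the way out. Driver/converter overhead adds to this."""
    passes = 2  # input buffer + output buffer
    return passes * buffer_size / sample_rate * 1000.0

# A 256-sample buffer at 44.1 kHz:
print(round(round_trip_latency_ms(256, 44100), 1))  # about 11.6 ms
```

Halving the buffer size halves this figure, which is why lowering the buffer is the usual first move when latency bothers a performer (at the cost of higher CPU load).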

It pays to be up-to-date on the latest drivers for your hardware, and if multiple drivers are supported (WDM, WaveRT, ASIO, etc.), it's worth trying all of them to see if one may outperform the others. Some audio interfaces eliminate latency by routing the input signal directly to a separate monitor output as well as into the computer. This is known as “zero-latency monitoring” and may be enhanced by the interface's ability to provide DSP-based reverb (or other effects) for the monitor path. Keep in mind that these effects, too, are not being recorded to the file.

One more important point: If you record with a compression plug-in, you are not compressing the input to your audio interface, which means you still need to set your input level with care. Compressing via plug-in while recording can help even out the dynamics of a track but—unlike using an analog compressor after the mic preamp—it will absolutely not keep the input of your audio interface from clipping. If your goal is to do the latter, you'll need to patch a hardware compressor into the signal chain between the mic pre and audio interface. (More about this later.)
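A minimal NumPy sketch of that point (values purely illustrative): the A/D converter clips first, and a compressor plug-in downstream only scales the already-distorted samples; it cannot restore the waveform that was lost at the input.

```python
import numpy as np

# A sine wave hot enough to overload the converter input (peaks at +/-1.5).
t = np.linspace(0, 1, 1000, endpoint=False)
hot_signal = 1.5 * np.sin(2 * np.pi * 5 * t)

clipped = np.clip(hot_signal, -1.0, 1.0)  # what the interface actually records
compressed = 0.5 * clipped                # crude stand-in for plug-in gain reduction

# The flat tops survive the gain reduction: the peak is lower,
# but the clipped shape (and its distortion) is baked into the file.
print(compressed.max())                   # 0.5
print(int((compressed == 0.5).sum()) > 0) # True: flat-topped samples remain
```

A hardware compressor patched between the mic pre and the interface, by contrast, reduces the level before the converter ever sees it.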


Fig 1. Plug-in routing for recording, using MOTU Digital Performer as an example.

Here's how to record plug-ins with most DAW applications. When you are ready to record a new track, you add two new tracks to the session: one aux track and one audio track. The live signal is routed to the aux track, not the audio track. The output of the aux track is set to a bus. Any unused bus will do, except the main L/R output bus that you use for monitoring. Set the input of the audio track to the same bus as the output of the aux track. The output of the audio track should be set to the “normal” monitor bus. Figure 1 shows this routing in Digital Performer.

In this example, the track on the left (blue) is an aux track that accepts a microphone input. The output of this track is set to Bus 1. The track on the right (red) is an audio track. The input to this track is set to Bus 1 and the output is set to Analog 1-2. Signal from the microphone comes into the aux track and is processed with the Parametric EQ that is inserted on the aux track. This processed signal is routed to the audio track where it is recorded. If you are doing this with reverb or echo, pay careful attention to the plug-in's “mix” control. You don't want to have the plug-in at 100% wet or you'll get none of the original (unprocessed) signal.
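The mix control on most plug-ins is a simple wet/dry crossfade (some use equal-power curves instead; this linear version is a sketch, not any particular plug-in's implementation), which makes it easy to see why 100% wet leaves no dry signal:

```python
def mix_wet_dry(dry: float, wet: float, mix: float) -> float:
    """Linear wet/dry blend: mix=0.0 is fully dry, mix=1.0 is fully wet."""
    return (1.0 - mix) * dry + mix * wet

# At 100% wet, the dry component vanishes entirely:
print(mix_wet_dry(dry=0.8, wet=0.2, mix=1.0))  # 0.2 -- only the effect remains
print(mix_wet_dry(dry=0.8, wet=0.2, mix=0.0))  # 0.8 -- only the dry signal
```

When printing an effect to the same track as the source, a mix setting somewhere between those extremes preserves both.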


Fig 2. Recording an effect to a separate track.

What if you like the idea of recording the effect (especially when you are visiting a friend who has a great reverb plug-in), but you don't want to marry the effect to the dry signal? Record the effect to a separate track. Here's how to record reverb to a separate track while recording a snare drum, shown in Pro Tools (see Figure 2).

First, add three tracks to the session: one mono audio track, one stereo aux track, and one stereo audio track. Route the snare microphone to the input of the mono audio track (“Snare Mic,” framed in red) per “normal” procedure. Add an aux send on the snare track as shown. The output of our aux send is set to Bus 1-2. On the aux track (“Aux 1,” framed in blue), the input is set to Bus 1-2, and a reverb plug-in is inserted. Make sure that the reverb is 100 percent “wet.” We don't need any dry snare because we already have that on a separate track.

The output of the aux track is set to Bus 3-4. Input to the second audio track (“Reverb,” framed in yellow) is set to Bus 3-4, and its output is set to Analog 1-2. The snare microphone comes into the first audio track. The main fader feeds the signal to the L/R mix. The aux send fader routes the snare to the aux track where it is processed with reverb. This processed signal is routed to another audio track and recorded. Note that the aux send is set to pre-fader (notice the “P” highlighted in blue to the right of the small fader), so you can adjust the level of the snare without changing the level being recorded on the reverb track. It's also worth noting that this type of routing can add a few milliseconds of latency, but we're talking about reverb here, so a few ms aren't going to hurt anyone.

Such additional routing can be scary if you are tracking a live session with a lot of instruments. You can always record the effect separately using the above routing after the session has been recorded. In fact, you can do this several times with different reverb or delay tracks, giving yourself or your client a few choices, plus you get the bonus of freeing up DSP resources. If you like that approach, consider bouncing a track with effects over to a new audio track. Alternatively, you can “freeze” the track to accomplish the same thing, and freezing tracks (particularly instrument tracks) often frees up quite a bit of DSP resource. Once the new track is recorded, you can disable the plug-in on the original, mute the original, or even delete it.

There's a trend for audio interfaces to include insert points on the inputs, greatly simplifying the use of line-level processors when recording mic- or instrument-level signals. You'll probably need a TRS-to-dual-TS insert cable to accomplish this because invariably the insert will be send and return on a single TRS jack. Simply patch the interface's insert to the in and out of the processor, and the processor becomes part of the recording chain. Remember that anytime you involve a hardware processor with your DAW session, you must process and bounce in real time, even when using an external plug-in processor such as a Universal Audio UAD-2, SSL Duende, or TC Electronic PowerCore.

Sonar supports easy integration of external processors with its External Insert plug-in. Instantiating this plug-in opens a routing dialogue where you can set send and return jacks for your interface's audio I/O, effectively making the hardware device part of the session. External Insert provides a delay offset to compensate for latency that may result from routing the track to and from the interface. Ableton Live's External Audio Effect feature functions in a similar manner.
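The delay-compensation idea behind these features can be sketched in a few lines; this is a conceptual illustration (the function name and measurement step are hypothetical, not Sonar's or Live's actual code): the DAW measures the round trip through the interface and outboard unit in samples, then advances the returned audio by that amount so it lines back up with the session.

```python
import numpy as np

def compensate_latency(returned: np.ndarray, offset_samples: int) -> np.ndarray:
    """Advance the returned (processed) audio by the measured round-trip
    offset, padding the tail with silence to keep the length constant."""
    return np.concatenate([returned[offset_samples:],
                           np.zeros(offset_samples)])

# Simulate a short round trip: the processed signal comes back 3 samples late.
dry = np.arange(10.0)
wet = np.concatenate([np.zeros(3), dry])[:10]

aligned = compensate_latency(wet, offset_samples=3)
print(np.allclose(aligned[:7], dry[:7]))  # True: back in time with the session
```

In practice the DAW measures this offset by sending a test impulse through the external loop, which is why these plug-ins typically offer a "ping" or auto-measure button.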

If you're lucky enough to have an arsenal of outboard mic pres, simply route the output of the pre into a compressor or EQ, etc., and then route the output of the processor to the input of your audio interface. The signal will be compressed or EQ'ed before it's recorded. You now have a reason to keep some of your old hardware around.


Fig 3. Integrating outboard gear with your DAW.

When it's time to mix, integrating outboard gear with your DAW requires a different setup. Figure 3 shows mixer routing from Digital Performer to a Lexicon PCM90 hardware reverb. We can see several audio tracks of drums highlighted in red (Kick, Snare, Tom 1, Tom 2, Ohead L/R). The snare and tom tracks have an aux send (“PCM90”) that is routed to analog out 3 of the interface. Analog out 3 of the interface patches to the input of the PCM90. Output from the PCM90 is connected to analog inputs 3 and 4 of the interface. In DP, we have an aux track called “PCM90 Return” (blue) with input set to Analog 3-4. This is where the PCM90 comes back into the session. The aux sends on the channels are raised to get signal from the snare and toms into the reverb, and the aux track fader is raised to bring the output from the PCM90 into the mix. Note that the PCM90 track's output is set to Analog 1-2, so that it becomes part of the mix.

You can easily create a session template with routing to outboard hardware, where aux 3/output 3 feeds a reverb, aux 4/output 4 feeds a delay, etc. Note that we skipped outputs 1 and 2 because outs 1/2 are typically used for the main L/R mix bus. If we want to make the PCM90 reverb a permanent part of the session, we can simply record it to a stereo audio track.

Any of these tracks may be automated during recording or mixdown, which gives rise to special effects such as adding delay to a long sweeping note at the end of a guitar phrase, etc. It's a bit different from your typical DAW application, but it's not so scary. Now, premixing ten drum mics to two tracks on an analog 8-track machine . . . that's scary!