Multi-Effects 101


The early digital effects devices of the 1970s could perform only one processing function. Some years passed before processing power increased (and chip costs decreased) to the point at which a single device capable of generating several effects at once could be offered at a reasonable price. When that finally happened, multi-effects processors, which combine delay, reverb, and other types of signal processing, won rapid acceptance. Today, stand-alone reverbs and other dedicated processors are still popular, but multi-effects processors, which are now available at all price points, dominate the field.

In this article I'll discuss the features that most multi-effects processors have in common. I'll also look at three popular effects that you'll find under the hood of nearly any multi-effects processor: delay, phasing, and reverberation.

The term multi-effects processor suggests an independent device, such as a rack-mounted unit. Increasingly, however, multi-effects processors come in other guises. They are built into or added on to instruments or amps, implemented on computer DSP cards, or realized as software. Nevertheless, most effects processors offer similar effect types and employ comparable signal-routing architectures.

Let's examine how signals enter, travel through, and exit an effects processor. It is common, though not universal, for an effects box to offer a choice of analog and digital I/O. I'll consider the analog case first.


FIG. 1: An analog signal passes through an effects unit in several stages. First is the conversion from analog to digital, followed by the application of the effect, then by a reconversion from digital to analog.

Fig. 1 illustrates a simplified mono signal path in which an input signal is fed to an analog-to-digital converter (ADC). The ADC spits out digital samples, which are routed through one or more processing stages that implement particular effects. The final stage is a digital-to-analog converter (DAC), which converts the samples back into an analog signal.

One of the most critical components of this process is the sample rate, which determines the unit's frequency response. Nowadays even moderately priced units feature sample rates of 44.1 kHz or higher with oversampling converters, providing CD-quality frequency response. When you consider an effects box, make sure that the specs on the converters are at least up to this standard.

The processor's sample resolution (or word size) is also important. A higher sample resolution yields a better dynamic range; today's processors usually employ 20-bit or wider ADCs and DACs. Inside the system, samples are typically represented with an even larger word size. That's because accurate processing of a sample often requires more bits than are in the sample itself. For example, the internal processing resolution of an effects box with 20-bit converters is likely to be 24 or 32 bits.
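The relationship between word size and dynamic range can be made concrete: each bit of linear PCM resolution contributes roughly 6.02 dB. A minimal sketch (the function name is mine, not from any particular product's spec sheet):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of a linear PCM word.

    A word of N bits can represent 2**N levels; expressed as a ratio
    in decibels, that works out to roughly 6.02 dB per bit.
    """
    return 20 * math.log10(2 ** bits)

# 16-bit (CD) converters: ~96 dB; 20-bit: ~120 dB; 24-bit: ~144 dB
for bits in (16, 20, 24):
    print(bits, round(dynamic_range_db(bits), 1))
```

This is why 20-bit converters paired with 24- or 32-bit internal processing leave useful headroom: intermediate results can exceed the converter's range without being truncated.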

If you're using digital I/O, you don't have to worry about the effects processor's conversion specs because signals enter the system already digitized. But sooner or later you'll have to convert the signal from digital to analog, so you still have to consider the quality of the converters in the signal path before and after the processor. Whether you use digital or analog inputs, the issue of internal resolution is still important.

Most effects processors have left and right inputs and outputs, but some of them sum their "stereo" inputs to mono and then synthesize a stereo field. In contrast, a discrete stereo effects processor provides independent signal paths for the left and right channels. Some processors also offer a dual mono mode, which applies completely different effects to each channel. In essence, dual mono gives you two processors for the price of one. For example, you could reverberate the left channel signal while applying a phase shifter to the right channel.

Algorithms are the mathematical constructs used by the processor to create effects. For instance, an effects processor might use a single reverb algorithm to create several reverb programs. In a discrete stereo processor, separate mono algorithms are applied to both channels. In a true stereo processor, however, a more sophisticated algorithm that accounts for the sound's behavior in a stereo field is applied to both sides.

Algorithms are commonly confused with patches or programs. A program incorporates an algorithm and a set of parameters (such as delay time, chorus depth, and so on) that modify the algorithm. A program can be either a single effect or an arrangement of processing stages through which a signal passes.


FIG. 2: When a signal is processed in series (a), the output of one process is fed directly into each successive process. When a signal is processed in parallel (b), each of the processes is performed independently of the other.

Some programs route a signal in series through several stages (see Fig. 2a), creating multiple effects; others split a signal into parallel paths (see Fig. 2b). In many cases, you can tweak the parameters of each effects stage, but you can't arrange the stages in a different order. More advanced multi-effects units let you design your own effects chain. The final stage in many effects processors is a wet/dry mix control, which regulates the proportion of processed (wet) to unprocessed (dry) signal at the output.
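The series-versus-parallel distinction in Fig. 2 maps naturally onto function composition. The following sketch uses two made-up, trivially simple "effects" purely for illustration; real effects operate on buffers of samples, not single values:

```python
# Hypothetical one-value "effects" standing in for real processing stages.
def boost(x):
    return 2.0 * x

def invert(x):
    return -x

def series(stages, x):
    # Fig. 2a: the output of each stage feeds directly into the next.
    for stage in stages:
        x = stage(x)
    return x

def parallel(stages, x):
    # Fig. 2b: each stage processes the same input independently,
    # and the outputs are mixed (here, simply summed).
    return sum(stage(x) for stage in stages)

print(series([boost, invert], 1.0))    # boost then invert
print(parallel([boost, invert], 1.0))  # boost and invert mixed
```

Units that let you design your own effects chain are, in effect, letting you rearrange the `stages` list and choose between these two routings.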

No multi-effects box worthy of the name would be complete without a digital delay line (DDL). The basic operation of a DDL is simple. A DDL samples an input signal at a fixed clock rate, reading samples into a RAM buffer or queue. Each sample enters the input end of the queue and moves through it one memory location per clock count. Some time later, each sample emerges at the output.

The maximum delay (MD) available in a DDL is a function of the buffer size and the clock rate: MD = buffer size (in samples) × clock period. For example, consider a 32K sample buffer: 1K = 1,024 samples, so a 32K sample buffer holds 32,768 samples. Given a 44.1 kHz clock, MD would equal approximately 0.74 seconds (32,768 samples × a clock period of 1/44,100 second). Thanks to inexpensive RAM, current DDLs can achieve delays of several seconds or more.
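The arithmetic above is simple enough to verify directly:

```python
def max_delay_seconds(buffer_samples, sample_rate_hz):
    # MD = buffer size (in samples) x clock period (1 / sample rate)
    return buffer_samples / sample_rate_hz

# A 32K buffer (32 x 1,024 = 32,768 samples) at a 44.1 kHz clock:
md = max_delay_seconds(32 * 1024, 44100)
print(round(md, 3))  # ~0.743 seconds
```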

Achieving shorter delays is simple: instead of reading the output from the last location in the buffer, read the output from an earlier one. You select the location from which output samples are read with a software-controlled read pointer. If the read pointer for the aforementioned buffer points to location 1,000, the signal is delayed by only 22.7 milliseconds.

The read pointer is often called the tap. When you tweak a DDL's delay-time parameter, you're adjusting the tap. A multitap DDL has multiple read pointers, allowing you to delay the same signal by different amounts.
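In practice a DDL is usually implemented as a circular buffer: the write pointer advances through a fixed block of memory and wraps around, and each tap is just a read offset behind it. Here's a minimal sketch of a multitap delay along those lines (the class and parameter names are illustrative, not taken from any particular unit):

```python
class MultitapDelay:
    """Sketch of a DDL: a circular buffer with multiple read taps.

    Each tap is a delay amount in samples; process() returns one
    delayed output per tap for each input sample.
    """
    def __init__(self, size, taps):
        self.buf = [0.0] * size
        self.write = 0
        self.taps = taps  # delay amounts, in samples

    def process(self, x):
        self.buf[self.write] = x
        # Each tap reads from a location that lags the write pointer.
        outs = [self.buf[(self.write - t) % len(self.buf)]
                for t in self.taps]
        self.write = (self.write + 1) % len(self.buf)
        return outs

# An impulse fed to a 3-sample tap emerges 3 clock ticks later.
ddl = MultitapDelay(8, [3])
print([ddl.process(x)[0] for x in [1.0, 0.0, 0.0, 0.0, 0.0]])
```

Tweaking a delay-time parameter simply changes a tap's offset; no samples are moved.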

Delay in itself isn't an impressive effect; the input signal just comes out later, sounding the same. A notch above this in sophistication is loop sampling. In this technique, the DDL records samples into its buffer and plays them back repeatedly, locking out further input. In effect, the DDL becomes a crude sampler.


FIG. 3: A delay with feedback mixes time-delayed versions of a signal back in with the undelayed input.

The most widely used DDL applications involve feedback, usually accompanied by wet/dry mixing. The simple configuration illustrated in Fig. 3 yields a variety of audible effects. These can be categorized by range of delay times.

When a signal delayed by about 50 ms or more is fed back into the input, discrete repetitions or echoes occur. The feedback is usually attenuated, which causes the repetitions to fade out. Almost all DDLs offer controls for the echo repetition rate (delay time), regeneration or feedback level (which determines how long echoing continues), and wet/dry mix.
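The Fig. 3 configuration, with delay time, feedback level, and wet/dry mix exposed as parameters, can be sketched in a few lines. This is an offline, whole-signal version for clarity; a real unit does the same thing sample by sample in real time:

```python
def echo(signal, delay_samples, feedback, mix):
    """Delay with feedback (Fig. 3).

    The delayed signal is attenuated by `feedback` and fed back into
    the delay line, so each repetition is quieter than the last.
    `mix` blends wet (1.0) against dry (0.0) at the output.
    """
    buf = [0.0] * delay_samples  # circular delay buffer
    out = []
    for n, x in enumerate(signal):
        delayed = buf[n % delay_samples]
        buf[n % delay_samples] = x + feedback * delayed
        out.append((1 - mix) * x + mix * delayed)
    return out

# An impulse through a 2-sample delay, 50% feedback, fully wet:
# echoes appear every 2 samples, each half the previous level.
print(echo([1.0, 0, 0, 0, 0, 0, 0], 2, 0.5, 1.0))
```

With a feedback level below 1.0 the repetitions decay geometrically, which is why the regeneration control determines how long echoing continues.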

DDL-generated repetitions are a popular rhythmic device. They're so popular, in fact, that many effects boxes let you set the delay in terms of beats and tempo rather than time, or set the delay time by tapping a footswitch. Many DDLs can also lock the delay time to MIDI Clock messages from a sequencer.

Medium-range delay times of about 10 to 50 ms are often used to produce ambience around a signal. Simply delaying a signal by 10 to 50 ms and mixing it with the original signal can fatten up or "double" a sound. Routing delayed and unprocessed signals to separate channels is a common technique for synthesizing a stereo image from a mono signal (for more on synthesized-stereo samples, see "Master Class: The Splittin' Image" in the May 2000 issue of EM). Medium delay times combined with moderate feedback levels produce tightly spaced echoes that add a reverblike "tail" to a sound.

Complex comb-filtering effects occur when a signal delayed by less than about 10 ms is mixed back with the original signal. Each partial of the delayed signal is phase-shifted by a different amount, with respect to the original signal. When mixed, some components are 180 degrees out of phase. These cancel each other out, producing notches in the spectrum. Other frequency components cancel partially or not at all. The resulting spectrum consists of peaks and notches, spaced uniformly through the audio range. When plotted, the spectrum resembles a comb. Comb-filtering effects are reinforced by feedback, which sharpens the peaks.
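The notch positions follow directly from the delay time: a component cancels when the delay equals an odd number of half periods, so notches fall at odd multiples of 1/(2 × delay). A quick calculation (function name mine):

```python
def notch_frequencies(delay_seconds, max_hz):
    """Frequencies cancelled when a delayed copy is mixed with the original.

    A component is 180 degrees out of phase when the delay spans an odd
    number of half periods, i.e. f = (2k + 1) / (2 * delay).
    """
    notches, k = [], 0
    while True:
        f = (2 * k + 1) / (2 * delay_seconds)
        if f > max_hz:
            break
        notches.append(f)
        k += 1
    return notches

# A 1 ms delay notches 500 Hz, 1,500 Hz, 2,500 Hz, ... --
# evenly spaced, like the teeth of a comb.
print(notch_frequencies(0.001, 3000))
```

Note that the even spacing is in linear frequency, which is why shorter delays spread the comb's teeth farther apart.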

Flanging is a comb-filtering effect in which the comb is periodically swept up and down the audio spectrum. Flanging requires a time-variant DDL; that is, one whose delay time can be varied by a control signal. Typical control signals include LFOs, envelope generators or followers, and MIDI continuous controllers. The classic flanging effect employs an LFO sine or triangle wave to modulate the delay time. The LFO frequency determines the sweep rate, while the LFO amplitude controls the sweep's "depth"; that is, the amount of variation in the delay time.
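The classic LFO-modulated delay can be sketched as follows. This offline version rounds the modulated delay to the nearest sample for simplicity; real flangers interpolate between samples to avoid zipper noise, and all names here are illustrative:

```python
import math

def flange(signal, rate_hz, depth_samples, center_samples,
           sample_rate, mix=0.5):
    """Sketch of a flanger: a sine LFO sweeps the delay time.

    rate_hz        -- LFO frequency (sweep rate)
    depth_samples  -- LFO amplitude (sweep depth, in samples of delay)
    center_samples -- delay time the sweep varies around
    """
    out = []
    for n, x in enumerate(signal):
        lfo = math.sin(2 * math.pi * rate_hz * n / sample_rate)
        d = int(round(center_samples + depth_samples * lfo))
        delayed = signal[n - d] if n - d >= 0 else 0.0
        out.append((1 - mix) * x + mix * delayed)
    return out
```

With the depth set to zero this degenerates to the static comb filter described above; the sweep is what turns a fixed set of notches into the characteristic "jet plane" motion.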

You can obtain an extension of the flanging effect with a multitap DDL. Routing a signal through a multitap delay while varying the delay time of each tap over a small range is equivalent to running the signal through several flangers in parallel. The result is a complex chorusing effect.

Static comb filtering, in which you don't sweep the comb at all, can also be an interesting effect. A static comb filter acts much like a bank of resonant bandpass filters. Adjusting the feedback amount adjusts the "resonance." Negative (inverted) feedback reverses the spectral peaks and dips.

Phasing is another comb-filtering effect, obtained by different means. A phaser routes the input signal through several allpass filters. Unlike conventional filter types such as lowpass and bandpass, allpass filters have (ideally) a flat frequency response. Allpass filters have little or no effect on a signal's frequency content. They do, however, shift the signal's phase. Different frequencies are phase-shifted by different amounts. Thus, when the original and shifted signals are mixed, cancellations produce a spectrum similar to that produced by flanging. Unlike with flanging, however, the spacing, depth, and width of the peaks and notches are adjustable. Sweeping the phase-shift amount with an LFO sounds like a gentler form of flanging. Phasing, like flanging, is often enhanced by feedback.
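A standard building block for such a phaser stage is the first-order allpass filter, defined by the difference equation y[n] = -a·x[n] + x[n-1] + a·y[n-1]. Its magnitude response is flat, but its phase shift varies with frequency. A minimal sketch:

```python
def allpass(signal, a):
    """First-order allpass filter: y[n] = -a*x[n] + x[n-1] + a*y[n-1].

    Passes all frequencies at equal amplitude but with a
    frequency-dependent phase shift; chaining several stages and
    mixing with the dry signal yields the phaser's notches.
    """
    y_prev = x_prev = 0.0
    out = []
    for x in signal:
        y = -a * x + x_prev + a * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out

# Flat response check at DC: a steady input settles to the same level out.
print(round(allpass([1.0] * 200, 0.5)[-1], 6))
```

In a phaser, several such stages are cascaded (the Moogerfooger's name advertises twelve), the coefficient is swept by the LFO, and the result is mixed with the dry input to carve the notches.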

Analog phasers are making a comeback. For example, Big Briar has just introduced the Moogerfooger MF-103 12-Stage Phaser. And the Electro-Harmonix Small Stone phase shifter, beloved of guitarists, was recently reissued.

A reverberator usually attempts to reproduce the ambience of an acoustical space; for example, a room, a hall, or perhaps an imaginary space. In a physical space, direct sound radiating from an instrument or other source reaches the listener first. Reflected sound waves, rebounding off walls and other surfaces, arrive later. Early reflections, bouncing directly off the surfaces, are followed by delayed reflections that are produced as sounds continue to bounce around in the space. Early reflections are similar to very rapid, but still discrete, echoes. Delayed reflections, often called global reverberation, can arrive at rates of 1,000 or more echoes per second, fusing together in the listener's perception. Global reverberation takes some time to build up and fade away.

The space's size and shape, as well as the materials that form its surfaces, determine the strength and duration of the reflections. Reflections in a large hall reach the listener later, and last longer, than do reflections in a small closet. If the space contains materials with a high absorption coefficient, such as carpets or curtains, reflections die out more quickly than they would in a room with hard surfaces, such as tile or marble. Moreover, high frequencies are more easily absorbed than low frequencies.

Reverberation is probably the most complex component of a multi-effects processor. The reverb section of a multi-effects box typically combines multitap delay, filtering, mixing, and other processing. These processes can consume a considerable portion of the available computing resources. Therefore, some multi-effects processors (such as the Lexicon MPX 1 and MPX 100) dedicate special-purpose chips to reverb, leaving other hardware to handle more mundane chores.

Because reverb is so complex, most reverberators offer a selection of basic reverb types that imitate the characteristics of an acoustic space, such as "Jazz Club" or "Taj Mahal." Reverb types that mimic electromechanical reverb devices such as plates and springs are also common.

After selecting a reverb type, you work within its parameters to tweak the sound to your liking. The number of parameters you can control and the terms that describe them vary widely. However, programmable reverberators usually provide independent control of three components of reverberation: early reflections, global reverberation, and the ratio of direct to reflected sound.

Early reflections are often simulated by a multitap DDL. The diffusion or echo density parameter controls the spacing of echoes. Predelay sets the amount of time that passes before reflections start. Early reflections may also have a level control and a shape or envelope that controls the amount of time it takes for reflections to build up and die away. A high-frequency rolloff (lowpass filter) is often used to simulate the absorption of higher frequencies. In more sophisticated reverbs, a crossover may split early reflections into high-, mid-, and low-frequency ranges, with each range subject to independent level and envelope control.
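A bare-bones multitap early-reflection stage might look like the following, with each tap representing one reflection at a given delay and level, offset by the predelay. This is a deliberately simplified sketch (no filtering or envelope), and all names are mine:

```python
def early_reflections(signal, predelay, taps):
    """Sketch of early-reflection synthesis with a multitap delay.

    predelay -- samples of silence before the first reflection
    taps     -- list of (delay_samples, gain) pairs, one per reflection
    """
    out = [0.0] * len(signal)
    for delay, gain in taps:
        d = predelay + delay
        for n in range(d, len(signal)):
            out[n] += gain * signal[n - d]
    return out

# An impulse with a 1-sample predelay and two reflections:
# one at 50% level after 1 more sample, one at 25% after 3 more.
print(early_reflections([1.0, 0, 0, 0, 0, 0], 1,
                        [(1, 0.5), (3, 0.25)]))
```

The diffusion parameter described above corresponds to how densely the tap delays are spaced; the level and envelope controls correspond to shaping the per-tap gains.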

Global reverberation is controlled with parameters similar to those controlling early reflections: predelay, diffusion, level, envelope, and some type of rolloff or crossover. However, the overall echo density will be much higher than that of early reflections, and the decay time will be longer.

Finally, the relative levels of direct and reverberated signal are set with a wet/dry mix control, by far the simplest component of a reverb.

In this brief introduction, I've only touched upon the basics of multi-effects processing. It's a vast but fascinating subject; the literature on reverberation alone could fill several bookshelves. In a future article, I'll explore many other types of processing that you can find within a multi-effects box. In the meantime, I hope you've been inspired to hit that Edit button and do some exploring yourself.

John Duesenberry's electronic music is available through the Electronic Music Foundation. Check out the EMF catalog at