Sequencing with Style

Exploring the sonic possibilities of MIDI sequencing and how to get the most out of your MIDI tracks.

Compared with live recordings, MIDI sequences often lack warmth and expressiveness. That is in part because many electronic musicians regard MIDI sequencers merely as recording devices with a few tools for correcting notes, timing, and basic dynamics. I can't count the number of times I've seen MIDI musicians play the notes, tighten up the parts, and simply move on to the next track.

In fact, MIDI recording offers a deep synergy between the sequencer and the inner workings of the synthesizer's sound-shaping capabilities. The ability to change virtually any aspect of a performance at any phase of the creative process is an immensely powerful creative tool.

Expressive sequencing is achieved using three main elements: the synth architecture, the sequencer, and the controller. All too often, articles about MIDI sequencing focus on just one of those aspects. For a truly animated musical performance, it's vital to consider all three components as a whole.

To that end, I enlisted the help of artists whose work reveals a deep understanding of MIDI and sound design coupled with stylistic know-how (see the sidebar “Getting to Know You” for a bio of each contributor). The result is a wide-ranging pool of ideas from the standpoints of synthesizer programming, sequencing, and control options.


Whether the synthesizer is a sample-playback unit, a physical-modeling synth, or something else, it has many features that are common to all synths, including envelopes, low-frequency oscillators (LFOs), and other modulation capabilities. Those features primarily control timbre, loudness, and pitch. Modulation features such as LFOs and envelope generators (EGs) can run free, but your best option for lively, nonrepetitive sequencing is to bring those capabilities under real-time control.

For example, LFOs are great candidates for modulation with Aftertouch. You can supplant the periodic effect of LFOs with a more humanized effect by controlling their depth or speed in real time. Many late-model synths offer knobs, sliders, and other controls that govern a variety of modulation features. Those controls often transmit Control Change (CC) messages instead of less efficient, bandwidth-consuming System Exclusive (SysEx) messages. If your synth offers such controls, you can capture and manipulate them in your sequencer.
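To see why CC messages are so much lighter on MIDI bandwidth than SysEx, it helps to look at the raw bytes. A minimal sketch (the function name and validation are my own, not from any particular library): a Control Change is always exactly three bytes, whereas a SysEx parameter change typically carries a multi-byte manufacturer header and payload.

```python
# A Control Change message is just three bytes: a status byte (0xB0 plus
# the channel), the controller number, and the value. A SysEx parameter
# change for the same tweak can easily run several times that length.
def control_change(channel, controller, value):
    """Build the raw bytes of a MIDI Control Change message."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("argument out of MIDI range")
    return bytes([0xB0 | channel, controller, value])

# CC 1 (Modulation) at half depth on MIDI channel 1:
msg = control_change(0, 1, 64)
```

Because every knob twist costs only three bytes, you can record dense streams of controller moves without choking the MIDI stream the way equivalent SysEx traffic would.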


Once you've grasped the capabilities of your synth's sound-shaping tools, what do you want to do with them? Whether his synth sounds are emulative or not, Rob Mounsey looks for elements that evoke acoustic instruments. “I always try to make sounds that suggest that they could be some sort of real instrument that you haven't run into,” he says. “I try to create the illusion that you've found an unusual instrument that people haven't heard yet — one that could actually happen in an acoustic space with acoustic materials. The way to get there is to carefully analyze acoustic instruments that you like to hear.”

Lyle Mays also finds inspiration in the behavior of acoustic instruments. One of his signature sounds is a swooping, ocarina-like synth patch. Mays explains the acoustic orientation of that sound: “It reflects the way pitch responds when a string is plucked; the harder you pluck it, the more out of tune it is at first before it settles. The other acoustic principle is the way ensembles, especially young children, start things out of tune and then gradually end up more in tune. I was thinking specifically of a grade-school choir of ocarinas, and the pitch attacks are just all over the place. The kids are listening, so they eventually get closer in tune with each other.

“That's an oversimplified version of what I'm talking about. It's much subtler in the synth sound, but one of the oscillators does start sharp and then comes down in pitch, and the other one hits the pitch. There's pitch information on every attack.” Routing Velocity to control oscillator pitch adds a bit more acoustic behavior in that acoustic instruments, particularly plucked strings, stretch and go further out of tune the harder they are hit.
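The Velocity-to-pitch routing Mays describes can be thought of as a simple mapping: the harder the hit, the sharper the initial detune before the note settles. A minimal sketch, with a hypothetical scaling constant (the 30-cent maximum is an illustrative value, not from the article):

```python
def attack_detune_cents(velocity, max_cents=30.0):
    """Initial pitch offset for a plucked-string-style attack: harder
    hits (higher Velocity) start sharper before settling to pitch.
    max_cents is a hypothetical depth you would tune by ear."""
    return max_cents * (velocity / 127.0)
```

Routed to one oscillator only, as in Mays's patch, this leaves the second oscillator on pitch so each attack carries a little pitch disagreement that resolves as the note sustains.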


Sampled instruments often provide a superficial realism, but sustained listening can be boring. The static nature of samples often works against a natural feel and sound, but using sampled instruments doesn't have to be a sonic dead end. If you understand your synth's architecture reasonably well, you can find ways to imbue samples with new life and realism.

Sometimes all that's missing are the imperfections that occur naturally in acoustic sounds. George Whitty enhances the realism of his sounds by using waveforms from unrelated instruments. “I used to create my Hammond sounds by putting the [frequency modulation (FM)] part of a Yamaha SY99 through a SansAmp to dirty it up, but that messed around with the bottom end too much,” he says. “The most suitable thing to create Leslie grit is a highpassed alto-saxophone wave. The gritty grunge of a real Hammond through a Leslie cabinet creates an aggregate effect that's not just a bunch of sine waves added up, but a kind of dirty, tubey thing. In trying to simulate that dirt, the high end of the saxophone samples works great; I filter out most everything below. I can make a sampled string section play more expressively by assigning a bit of bandpassed distorted guitar to the expression pedal to add some bite as things get more intense.”


Occasionally, the right sound exists in the outer regions of a wholly unrelated instrument. Jimi Tunnell carefully tests his sounds outside as well as within the usual playing range suggested by a patch. He finds that the categories suggested by preset titles can often lead you to overlook material that's viable for completely different applications.

“Don't look at the name of the sound,” Tunnell says. “Just because a patch is named ‘Flaming Gibbons’ doesn't mean its only possible use is to imply monkeys on fire. Forget the names and listen first to the general shape and timbre of the sound.”

I have a background in bluegrass and country music, and I've often sought the perfect pedal-steel-guitar sound. I've heard patches that approximate the instrument's slow, weepy characteristics, but I've rarely heard a patch that captures its higher registers or one that conveys the fast staccato soloing techniques I've heard from some steel players. However, when I accidentally sent the wrong Program Change message to my Roland Sound Canvas, I heard just the right sound from its fretless-bass patch. To help complete the country tune, I found an effective Telecaster-like sound in the General MIDI (GM) Clavinet patch; it was perfectly nasal, though a tad synthetic sounding. With a bit of adjustment to the filter's cutoff frequency, I found just what I needed.

David Battino takes his cue from movie sound design. “Often, technically accurate samples sound wimpy and unrealistic in context, so you need to exaggerate them, subtly adding timbres the mind expects to hear,” he says. “For a movie soundtrack, I had to create an electric-bass solo for an actor to match during filming. I set up a layer in a Korg T3 to trigger a fret-squeak sound in a very limited Velocity range — something like 55 to 64 out of 127 possible values. That meant the squeaks appeared almost randomly.

“When I saw the final cut of the movie months later, I initially thought they'd replaced my performance with a real player. I doubt a real bass would have produced those squeaks, but they lent a certain organic realism to the performance. The Roland SC-8850 Sound Canvas and the Yamaha Motif, among other synths, include numerous performance artifacts such as scrapes and breath noises that you can use to desterilize a track.”
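Battino's trick hinges on a layer that sounds only inside a narrow Velocity window, so the squeaks appear semi-randomly as playing dynamics drift through that zone. A minimal sketch of the triggering logic, using the 55-to-64 range he mentions:

```python
def squeak_triggers(velocity, low=55, high=64):
    """True when a note's Velocity lands inside the narrow window
    assigned to the fret-squeak layer (55-64, per Battino's patch).
    Notes outside the window play only the main bass layer."""
    return low <= velocity <= high
```

Because a player's Velocities cluster and scatter naturally, only the occasional note falls in the window, which is exactly what makes the artifact feel organic rather than programmed.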

Mounsey likes to beef up sampled sounds with analog synthesizer waveforms. “People have been layering samples with analog stuff for a while; that's an old trick. You can hide deficiencies in the sample that way and make it more even or full. I like to take a sampled sound and mix it in with something different that's filling in certain holes, maybe rounding out frequency ranges that I miss or coloring the sound a little differently.”

One reason that sampled single-instrument sounds usually fall short is that they don't evince the complex timbral changes that acoustic instruments go through. Simply layering another waveform with the original isn't going to do the job; you need to continuously vary the balance between one layer and the other. More importantly, you need to do it in a way that the sequencer can capture.

Stock fretless-bass samples sound a bit too muddy and static for my taste, for example. Instead of relying on those samples, I use a dual-oscillator patch with a sampled, fingered electric bass on one oscillator and a tuba sample on the other. (Other sampled brass instruments such as French horn also work.) I control the second oscillator's amplitude (and to a lesser extent, its filter frequency) with Aftertouch. Bearing down on the keys brings up the tuba waveform, producing that hornlike Jaco Pastorius tone. You can also use Aftertouch or Modulation to bring in a light, slow LFO to get that characteristic slow, wide vibrato, but be careful not to overdo it.

Even if your goal is a replica of an acoustic instrument, don't forget to listen carefully to unabashedly synthetic waveforms; you never know when a little fine-tuning with filters or envelopes will yield the basis for a perfect instrumental sound. For example, to imitate the nasal qualities of a fingered electric bass, I've had great success using pulse waves at roughly 25 percent pulse width. By subtly modulating pulse width, you can vary the virtual picking hand's distance from the bridge; as pulse width approaches 50 percent, you can simulate the rounder, more hollow tone achieved by playing a bass closer to the neck.
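The pulse-width idea is easy to verify numerically: a pulse wave spends a fraction of each cycle high and the rest low, and that duty cycle is what shifts the timbre from nasal (25 percent) toward round and hollow (50 percent, a square wave). A minimal sketch of a naive, non-band-limited pulse oscillator:

```python
def pulse_sample(phase, width=0.25):
    """One sample of an ideal pulse wave: +1 for the first `width`
    fraction of the cycle, -1 for the remainder. width=0.25 gives the
    nasal, fingered-bass-like tone; widths approaching 0.5 sound
    rounder and more hollow, like playing nearer the neck."""
    return 1.0 if (phase % 1.0) < width else -1.0

# One cycle sampled at 100 points; a quarter of them sit high.
cycle = [pulse_sample(i / 100, width=0.25) for i in range(100)]
```

Sweeping `width` slowly over time is the pulse-width modulation described above: the waveform's harmonic balance changes continuously while its pitch stays put.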

It's a good idea to become acquainted with your synth's raw, unprocessed waveforms. Familiarity with your palette of waveforms can suggest new sounds or offer alternatives to old favorites (see the sidebar “The Naked Synth”).


Wind instruments are among the most problematic instruments to bring to life. Listen to any decent saxophone player, and you'll realize that the number of timbral changes that occur in a short time is just impossible to capture with any sampler, much less a sample-playback synth with a limited ROM sound set.

Fortunately, you don't have to resign yourself to static saxophone snapshots. Frequency modulation is a potent technique for animating sampled wind instruments. You don't need a DX7 or the like to use FM; many synthesizers provide LFOs that creep up into the audio-frequency range, which should be enough for this trick. Take a boring, static sax sample and route Aftertouch to control LFO level. Set LFO speed to maximum. When you press down on the keys, the sample vibrates rapidly enough to produce sidebands that should effectively simulate an overblown effect. Adjust the LFO speed to taste and experiment with different LFO waveforms for different sidebands.
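The reason this trick works is that a "vibrato" fast enough to reach the audio range stops sounding like vibrato and instead generates sidebands around the carrier, the basic mechanism of FM synthesis. A minimal sketch (sine carrier and sine modulator; a real sax sample would be the carrier, and the parameter values are illustrative):

```python
import math

def fm_sample(t, carrier_hz=440.0, lfo_hz=440.0, depth=2.0):
    """A sine carrier modulated by a very fast 'LFO'. At sub-audio LFO
    rates this is ordinary vibrato; once lfo_hz reaches the audio range,
    the modulation instead produces sidebands spaced at multiples of
    lfo_hz around the carrier -- the gritty, overblown quality."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + depth * math.sin(2 * math.pi * lfo_hz * t))
```

Routing Aftertouch to `depth`, as suggested above, means the sidebands (and the grit) appear only when you lean into the keys.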

An old ploy for imparting realism to sampled saxophones, flutes, and other wind instruments is to record them with breath noise in the attack. The problem is that even legendary sax player Ben Webster (noted for his breathy sound) took a break from that technique now and then.

Jack Hotop explains how to conquer the sampled breathy-saxophone sound: “I've used [Korg] Triton and [Korg] Karma highpass filters on bottles and pan flutes and then added them to saxes and other woodwind sounds to create a breathier quality. I often will keep them at a low level initially so that they can be mixed in using Velocity, the ribbon, or the y-axis of the joystick. That provides you with more control over breathiness. Besides, constant blowing can make you pass out after a while.”

It's difficult for a sampled instrument to duplicate the attack transients of the original. When you play a sample above or below its original pitch, you transpose the transient's pitch and envelope. In addition, the transient spectrum needs to change in response to variations in the attack's intensity. Again, a little creative frequency modulation goes a long way.

From programming my Casio CZ-1000, I learned that you can use pitch envelopes to provide artificial attack transients. Program an envelope generator so that the oscillator quickly rises above normal pitch during the attack and immediately falls to normal pitch during the decay. Experiment with the pitch envelope's attack level to tune the fake transient's frequency. To keep the transient's pitch consistent regardless of which note you play, don't assign note number or key position to modulate the pitch envelope's rate or level. On the other hand, modulating the pitch envelope's depth with Velocity can add a stronger snap when you dig in.
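The envelope shape described above (a fast rise above pitch, then an immediate fall back to normal) can be sketched as a function of time. The specific times and depth here are hypothetical starting points you would tune by ear, just as the text suggests:

```python
def pitch_envelope(t, attack_cents=200.0, attack_time=0.005, decay_time=0.02):
    """Pitch offset in cents over time for a fake attack transient:
    quickly rise above normal pitch during the attack, then fall
    immediately back to zero offset for the rest of the note."""
    if t < attack_time:                        # rising toward the transient peak
        return attack_cents * (t / attack_time)
    if t < attack_time + decay_time:           # falling back to true pitch
        return attack_cents * (1.0 - (t - attack_time) / decay_time)
    return 0.0                                 # sustain at normal pitch
```

Leaving note number out of the modulation routing, as the text advises, keeps `attack_cents` constant across the keyboard so the transient's character doesn't transpose; mapping Velocity to `attack_cents` adds the stronger snap when you dig in.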


Surprisingly, a synthesizer's bugs or quirks can provide realistic artifacts. If you've ever programmed a Korg M1, for example, you might know that certain samples overload the instrument's outputs when you play them in a raw, unfiltered state. With the amplitude and filter wide open, a few samples actually produce an aliasing, fizzling sound and quickly shut off the outputs.

By reducing the oscillator level and filter settings, I discovered a way to creatively use that idiosyncrasy. If you route Aftertouch to control oscillator level, you can selectively add distortion and aliasing to the M1's static sax sample. When you press down on the keys, the otherwise unpleasant artifacts provide a fine simulation of an overblown saxophone's squealing harmonics.

A synthesizer's quirks can also add a unique, less realistic touch. “While reviewing a cheap General MIDI keyboard, I became curious [about whether] it would respond to external MIDI Control Changes,” says Battino. “So I hooked up my trusty Keyfax PhatBoy and spun the knobs while the $300 synth played one of its demo songs. Apparently, the manufacturer had skimped on the microprocessor, because the additional data just bamboozled the keyboard, causing it to spew horror-movie sounds. I've noted similar effects when overtaxing synthesizers, but this one was notable because I only had to twist a single knob to bring it to its knees. The sounds were so tortured.”

Whitty was irritated when his Yamaha EX5 wouldn't play back a Mono-mode sample perfectly legato. “I'd intended to sample all my favorite 12-oscillator Oberheim stuff into the Yamaha and try to take the Oberheim off the road,” he says. “When you play legato, there's a really obvious dip in level between the notes. I spent quite a while futzing with the EX5, trying to fix it, until I figured out that it's really kind of great the way it is. I can entertain myself for quite a while just holding one note and tweaking it by playing other notes briefly. That resets the held note to different Velocities, each of which is preceded by a little volume dip and pitch swoop. The result is a sort of tweezed ethnicity.”


You might be tempted to use sampled or synthesized ensemble patches because that's easier than building ensemble performances instrument by instrument. Unfortunately, though, that might make your music less expressive.

Why are ensemble sounds appealing? Mounsey says that “our standard paradigm for a big, warm orchestral sound is a string section playing chords. One thing that we love about a string section is that there are a lot of individuals playing with different vibratos — not just different depths, but different speeds. They're coming from different places in the stereo field. So one magical way to make a lot of beautiful space is to have very subtle, multiple vibratos going on. With sustained orchestral-instrument sounds, what we should be using is continuous controllers.”

To simulate an ensemble, Mounsey relies on multiple synthesizers and string sounds, each with a different modulation rate and depth. “Normally, I make very careful edits on volume curves and modulation to create vibratos,” he says. “I do a live pass and then edit it.”

Polyphonic Aftertouch is also useful; each note can have its own vibrato, and you can vary it continuously. It's also helpful to route Polyphonic Aftertouch to filter-cutoff frequency and maybe add a touch of control over resonance; after all, in a real ensemble, no two players have exactly the same tone.

“If you're using samples of string ensembles, try layering a little bit of a solo-instrument sample on top of them; that's a pretty standard trick,” says Mounsey. “If your samples are legato without much attack, add a tiny bit of a staccato sample; just dial it in slightly to give a little definition to the sound.”


Rob Shrock uses samples of smaller ensembles to compose orchestral parts. “My basic orchestral template consists of over 60 MIDI tracks, which I adjust as needed for each piece,” he says. “The string section takes up the largest number of tracks. I split the strings into sections much like a real orchestra: first violin, second violin, viola, cello, and bass. The device that adds the most impact to the overall sound of simulating a string section is adding several solo instruments to the ensemble sounds.

“For instance, when sequencing a large section of violins playing a melody, I will typically sequence a large ensemble sound (12 or more players) followed by a small ensemble sound (4 to 8 players). I will then add three or four individual solo-violin tracks playing the same part. Each part is played as a separate pass — the variations in performance impart density and interest to the musical line. If there are second violins, I will repeat the process again for those parts, usually with duplicates of the same sounds, but always on different MIDI channels. The same process applies to the cellos. Although I use a wide variety of samples depending on the specific articulations I'm going for, this technique works well even without a massive sample library.

“It is critical to manipulate the dynamics of the lines as you are sequencing. Learning to manipulate a volume pedal is probably the single most important factor in creating good orchestral MIDI parts. I highly recommend using expression [CC 11] for manipulating your volume pedal, slider, or wind controller. That frees up MIDI Volume for overall balancing of the parts.

“As is common, a lot of my sounds are tweaked so that faster Velocities create shorter attack times. However, I also use a few other techniques to help provide variations to articulations. I often layer three or four different sounds that are Velocity switched, which provides immediate access to several articulations instantaneously. Because a lot of the actual dynamics of the performance are coming from the manipulation of Expression and Volume data rather than Velocity, I can use Velocity to switch articulations.

“I tend to group articulations into basic categories — for instance, melodic, marcatos, pizzicatos, and sordinos for strings. Each category gives me several selections based on how hard I play. When applying this technique, I usually keep it simple, dividing Velocity into soft, medium, and hard ranges that are easy to play on the fly. If I accidentally play a note out of the intended range and trigger an unintended sound, it is a simple matter of editing that Velocity in the sequencer. That technique speeds up the sequencing process.

“With woodwinds and brass, I tend to stay away from ensemble sounds, opting to build up sections by sequencing each instrument individually using solo sounds. For big, thick brass and French horn sections, I will occasionally layer ensemble sounds underneath for added density and power.”
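Shrock's advice to ride Expression (CC 11) and reserve Volume (CC 7) for balancing works because most synths scale the channel's output by both controllers together. A minimal sketch of that combined scaling (the rounding is illustrative; actual synth curves vary):

```python
def effective_level(volume_cc7, expression_cc11):
    """Approximate channel output level when a synth scales Volume
    (CC 7) and Expression (CC 11) together. Riding Expression shapes
    the phrase's dynamics without disturbing the CC 7 mix balance."""
    return round(volume_cc7 * expression_cc11 / 127)
```

Because Expression is a percentage of Volume rather than an absolute level, you can rebalance a part later by changing one CC 7 value instead of rescaling an entire recorded pedal performance.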


Mays takes advantage of the multitude of tracks offered by sequencers. “This may be obvious,” he says, “but I use a ton of tracks. They're free. If I have to do some kind of brass function, ideally, I would write out the section as I would score it for a brass section and perform the individual parts. I use trumpet, second trumpet, and so on and do different performances of each. I also like to use a slightly different sound for each section, with its own volume rides. You avoid that pianistic ‘chord, chord, chord’ sound, where you have a whole bunch of notes at once, played on one patch. I'm very skeptical about doing keyboard-style parts for brass instruments.”

Preprogrammed envelopes often pose particular problems. Mays cites the example of sforzando-brass patches with preprogrammed envelopes. “There's just a uniformity to them that bugs me,” he says. “An envelope may only fit a certain section in one piece; it's just not a practical use of one's time.” Instead, he prefers to program envelopes inside the sequencer, drawing changes in amplitude and timbre by hand. “I think that's a superior way to go about it,” says Mays. “Basically, synth sounds are not complex, and they don't change over time unless you program them to. Acoustic instruments, on the other hand, just naturally change in time. Even if you try to bow a string exactly the same way twice, it's going to come out differently.”

In fact, Mays prefers to send control information from his sequencer rather than preprogram synth patches. He feels that preprogrammed sounds don't work as well in the studio when they're played in conjunction with acoustic instruments.

There are several ways to control synthesizer sounds with your sequencer. For example, you can program your synth to get louder and open up a filter with Aftertouch, Expression, Modulation, or any of a slew of registered and nonregistered parameter numbers. Editing MIDI messages within your sequencer can afford more control than many synths can offer.

Compared with the EGs built into a typical synth, hand-drawing envelopes in your sequencer can provide a great deal more power, continuity, and detail (see Fig. 1). If that's too labor intensive, there's a middle ground: use an expression pedal to record the changes and then fine-tune the performance by hand in the sequencer. Most current synths offer a wealth of knobs and sliders that can achieve the same ends.
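A hand-drawn sequencer envelope is ultimately just a dense list of timestamped controller events. A minimal sketch of rendering a drawn curve into CC data (the function, tick resolution, and event format are my own illustrations, not any particular sequencer's API):

```python
def drawn_envelope(start_tick, length_ticks, curve, controller=11, steps=16):
    """Render a drawn curve (a function mapping 0..1 to 0..1) into a
    list of (tick, controller, value) events -- the sequencer-side
    equivalent of a synth EG, with as much detail as you care to draw.
    Raise `steps` for a smoother, more continuous shape."""
    events = []
    for i in range(steps + 1):
        x = i / steps
        tick = start_tick + round(x * length_ticks)
        value = max(0, min(127, round(curve(x) * 127)))
        events.append((tick, controller, value))
    return events

# A simple swell: Expression rises from silence to full over a whole
# note (1,920 ticks at a hypothetical 480 pulses per quarter note).
swell = drawn_envelope(0, 1920, lambda x: x)
```

The middle-ground workflow from the text maps onto this directly: an expression pedal records the rough `curve`, and the fine-tuning is just editing individual `(tick, value)` pairs afterward.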

“When you're sequencing, never copy and paste,” Mays says. “The more detailed the work you do, the more detailed the final results will be. More is always better in this department.”


One of the most significant aspects of synthesis is that you can create sounds that have no precise counterparts in the acoustic world; just the same, the sounds usually duplicate functions of their acoustic relatives. Synth pads often duplicate the work of string or piano parts, and you can substitute completely artificial synth leads for blazing guitar solos.

Even special effects or bursts of noise can supplant or reinforce some acoustic element of a sequence. Hotop explains: “Occasionally, I've used the Triton's GM sound-effects program Car Stop for a rap-style scratching effect, and I've layered Explosion with Orchestra Hits to add a little extra bang. I have even used the Heart Beat for a low, muted bass drum.”

On his most recent album, Solo: Improvisations for Expanded Piano, Mays augmented the sound of his MIDIfied grand piano with samples; he left no part of his piano untapped (or unscraped). “Most of the prominent sound effects on my Solo record started with acoustic piano,” he says. “We spent a day crawling around the piano, recording various hits and scrapes — a lot of them with the sustain pedal held down to take advantage of the natural resonance of the chamber. And then those sounds were massaged beyond recognition. One of the reasons they sound so rich is that the actual samples are high-quality recordings of the full envelope. They take up a lot of space; they're not faded out.”

Sampled electric and acoustic guitars and pianos are prime source materials for interesting, slowly evolving pads. The more continuous animation you can bring to the sound, the more intriguing the timbres will be. Offering his programming expertise, Hotop says, “The Karma and the Triton PCM [pulse-code modulation] ROMs have layered piano samples, which are called EP Pad 1, 2, and 3. These samples use long loops. It's easy to slow down the envelope attacks, add sustain, and then apply a tiny bit of filter modulation using the filter LFOs at slow rates. Add insert and master effects to further animate the pad if necessary.”


MIDI is often regarded as a keyboard-oriented technology. Nonetheless, EM readers are surely aware of the wealth of MIDI controllers with strings, pads, mouthpieces, and other nonkeyboard appendages. Those instruments offer capabilities that are not readily available from keyboards. Consider adding nonkeyboard controllers to your MIDI studio.

Tunnell often switches between keyboards and MIDI guitar controllers. “Although I put in lots of data from the keyboard, I have found that triggering certain sounds from a guitar synth can turn a horribly stilted and unusable patch into something much more organic,” he says. “Bass sounds, in particular, are drastically improved that way. Some of the glisses, grace notes, and the like are just impossible to emulate from a keyboard. Also, I occasionally like to layer bass sounds with different Velocity curves, which gives the effect of randomizing the attacks a bit. A pan flute or recorder sound that sounds incredibly stock really comes alive when triggered that way.

“I have to stress, however, that you really do need to study the phrasing of these instruments to pull it off. Don't play your Jimmy Page licks with this stuff. That being said, the inherently fluid articulation associated with the guitar seems to serve these sounds really well.”

Sequencers allow musicians to clean up their playing, but take care not to oversanitize a performance. MIDI guitars can often glitch or play notes that were caused by accidentally brushing an adjacent string; sometimes those imperfections can provide a little extra realism and funk to the track.

A MIDI guitar with Pitch Bend enabled can be invaluable for sequencing realistic string or brass ensembles, but as stressed earlier, avoid the temptation to use generic ensemble patches. Instead, with Pitch Bend enabled, record each instrument in an orchestra one at a time. Even if you think you've hit the note dead-on, your controller will send subtle amounts of Pitch Bend to each instrument. The process might seem a bit laborious, but when you've sequenced the entire ensemble, the performance will be more realistic and animated.

It's easy to overlook some of the gestural possibilities that a synth's real-time controllers offer. For example, I've always regarded ribbon controllers as an excellent way to create different types of smooth and continuous modulation. Battino and EM associate editor Gino Robair point out that you can tap the ribbon at divergent locations to create discrete, noncontiguous data. You can achieve a wonderful effect by tapping rhythmic patterns at different points on a ribbon controller that's controlling filter frequency; your synth will respond with drastic changes in timbre, all in time with your gestures.


Many people have wished for synthesizers with unique features; they're certain that one extra feature will be just the ticket for breathing fire into their sequences. You might not realize that many untapped capabilities lie under your sequencer's hood and that the sequencer you already use may offer more flexibility than the latest gizmo will allow.

For instance, many musicians have lusted after synths that can morph from one timbre to another; among the synths with that capability are the Sequential Circuits Prophet-VS and the Korg Wavestation. Both employ vector synthesis, which is nothing more than a bit of creative volume-crossfading and panning. You can easily achieve vector-type effects in your sequencer by performing volume crossfades between distinct synths and sounds. With your sequencer, you can go well beyond the limit of any single device's complement of features to create long, animated timbral changes.
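The vector-style morph described above amounts to reciprocal volume rides on two tracks. A minimal sketch using an equal-power crossfade (the equal-power curve is my choice; a plain linear fade also works but dips in loudness at the midpoint):

```python
import math

def crossfade_volumes(position):
    """Equal-power crossfade between two layered sounds -- the volume
    trick behind vector-style timbre morphs. position 0.0 is all
    sound A, 1.0 is all sound B; returns (cc7_a, cc7_b) values to
    send to the two synths' Volume controllers."""
    a = round(127 * math.cos(position * math.pi / 2))
    b = round(127 * math.sin(position * math.pi / 2))
    return a, b
```

Sweeping `position` slowly across a long pad, with two very different patches on the two tracks, produces exactly the kind of extended timbral evolution that no single vector synth's joystick recording could outdo.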

Some synthesizers have the ability to scan a list of waveforms, often resulting in a rhythmic pattern of evolving timbres, percussive grooves, or both. You can simulate that effect using the arpeggiator that's built in to many sequencers.

For example, one neat feature I discovered in Digital Performer's arpeggiator is a small checkbox that reads, “Cycle through device group assignments.” A Device in Digital Performer is simply a group of instruments that are assigned to a track; any data sent to or from that track plays those instruments. With the box checked, the arpeggiator sends successive notes down the list of instruments, creating burbling, rhythmic grooves or wavetable-scanning-type effects, in the tradition of the venerated PPG synthesizers and Korg Wavestations. Best of all, playback is automatically synchronized to your sequence's tempo.

If your sequencer doesn't offer a similar feature, try this trick from Battino: “Select every nth note of a track and drag [or cut and paste] the selection to another track. Repeat this several times and then assign a different patch to each track. The effect is reminiscent of the old Yamaha TX802's Note Rotate feature.”
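Battino's every-nth-note trick is a round-robin distribution of one track's notes across several tracks, each carrying a different patch. A minimal sketch of the same selection logic done in one pass (note names stand in for full note events):

```python
def rotate_notes(notes, n_tracks):
    """Distribute a track's notes round-robin across n_tracks, in the
    spirit of Battino's every-nth-note trick and the Yamaha TX802's
    Note Rotate feature. Assign a different patch to each resulting
    track for a burbling, wavetable-scanning-style effect."""
    tracks = [[] for _ in range(n_tracks)]
    for i, note in enumerate(notes):
        tracks[i % n_tracks].append(note)
    return tracks

# A six-note line rotated across three hypothetical tracks:
tracks = rotate_notes(["C", "D", "E", "F", "G", "A"], 3)
```

Because the rotation preserves each note's original timing, the composite line plays back identically in rhythm; only the timbre now cycles from note to note.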


Given the wealth of ideas offered here, you no longer have any excuses for creating dull, lifeless sequences. Your sequencer is far more than a mere word processor for MIDI data. If you understand and take advantage of the close relationship between your sequencing software and your synthesizers, your sequenced music will take on a life of its own.

Assistant editor Marty Cutler's second computer for sequencing was a Commodore C-64; he shorted out his first one as he was connecting the video cables. Thanks to Kate Andrews of Shorefire Media and Gilles Amaral at the Ted Kurland Agency for their assistance.


Since attending Oberlin College, David Battino has worked at Village Recorder in Los Angeles, spent a year in Tokyo as a Henry Luce Scholar, and worked with Roger Linn on the MPC 3000. Battino is the founding editor of Music and Computers and EM's Desktop Music Production Guide. He currently runs Batmosphere, a music and media-production service.

Jack Hotop studied at the Berklee School of Music and the Boston School of Electronic Music. He has performed and recorded with Todd Rundgren, the Drifters, Gloria Gaynor, James Brown, Richie Sambora, John Entwistle, and many others. As senior voicing manager, Hotop has provided sound design for Korg since 1983.

Lyle Mays has been part of the Pat Metheny Group since its inception in 1977, cowriting much of its music. In his youth, he studied with Rich Matteson and Marian McPartland, then studied composition and arrangement at North Texas State University, and eventually toured with Woody Herman's Thundering Herd. Mays has received four Grammy nominations for his solo work.

Rob Mounsey attended the Berklee School of Music and went on to work with Steely Dan, Paul Simon, Lyle Lovett, Milton Nascimento, Eric Clapton, Chaka Khan, and many others. He was recently nominated for a Grammy Award for his arrangement of Puccini's “Nessun Dorma,” performed by Aretha Franklin.

As Burt Bacharach's arranger and music director, Rob Shrock appeared on the live album Burt Bacharach — One Amazing Night. He recently toured with Bacharach and Elvis Costello in support of their collaboration, Painted from Memory. Shrock toured with Dionne Warwick for many years and has performed with Sheryl Crow, Elton John, Gladys Knight, Frank Sinatra, and Stevie Wonder.

Guitarist Jimi Tunnell studied at North Texas State University. He has toured with Yukihiro Takahashi and was a member of Steps Ahead. Tunnell has also performed and recorded with Laurie Anderson, Carly Simon, Malcolm McLaren, Adam Holzman, Omar Hakim, Tom Coster, and the Bob Belden Ensemble.

George Whitty spent several years as a member of the Brecker Brothers band. He recently began pre-production on a Michael Brecker CD and produced and played all keyboards for Randy Brecker's current release. Whitty is featured on Santana's Supernatural and has toured with David Sanborn and Peter Erskine. He also writes and produces music for television.


One task that's fundamental to assembling your palette of synthesizer colors is auditioning and evaluating the entire range of your synth's waveforms. Playing any synth patch by patch can give you hints about the unit's abilities, but presets are often just the tip of the iceberg.

The object is to train your ear by listening to the raw sound of the waveforms. You can audition raw waveforms by initializing a patch. Most editor-librarian programs provide the ability to do that, and some synths offer a patch-initialization button. If you do not have an editor-librarian or if your synth doesn't provide patch-initializing amenities, you'll just have to dig in to your synthesizer's programming menu.

Remove all effects and examine the sound with a simple envelope: instantaneous attack, no decay, full sustain, and instantaneous release. The filter and amplitude envelopes should resemble a square wave. One handy shortcut is to start with a generic organ patch, because organs are usually programmed with simple on/off envelopes and no Velocity sensitivity. Conversely, you can set all envelope depths to zero, making sure to open up filter cutoff frequency all the way and eliminate all modulation control, including Velocity, Aftertouch, and LFO modulation.