Electronic Euphoria

Building a powerhouse mix with electronic instruments.

This article originally appeared in the February 1999 issue of Electronic Musician.


The evolution of electronic musical instruments is undoubtedly the most important advance in music technology in the past 30 years. The computer revolution has certainly been groundbreaking, but the rise of synthesizers in fact helped initiate the early use of computers in the personal studio. When synths hit the streets, the industry was introduced not only to a new kind of instrument but to an entirely new process of creating sounds.

We were no longer limited to the acoustic and electric instruments we and our friends could play; with practice, we could emulate a wide range of instruments using synths and samplers controlled with our favorite keyboard (and later, a variety of MIDI controllers). Our dependence on hired guns, although not eliminated, was reduced. Best of all, we could even create sounds that didn't exist in the natural world. It took a while for mainstream studios to get the idea, but eventually synths and samplers became essential tools for many types of music production.

Working with electronic sounds has become an art form all its own. Combining synths and samplers to create a mix can be quite different from mixing acoustic instruments, and it can be just as tricky. Let's take a look at some of the things you can do to create a successful mix of electronic musical instruments.

Some Remain the Same

Whether you're working with acoustic or electronic tracks (or a combination of both), some principles of mixing apply across the board. There are six essential rules that I adhere to in any mixing situation. No matter how great a mixmaster you think you are, if you ignore these basic principles, your end product will suffer. So, let's run them down quickly.


FIG. 1: By conceptualizing the mix as a three-dimensional stage where the vertical axis represents frequency response, you can graph the soundstage from the front or above and see the frequency/pan position or volume/pan position, respectively. Make sure no two components are in the exact same place in either axis.

1. Be mentally prepared to tackle the mix. Get yourself in the right frame of mind. Make sure that you're properly rested. This also means taking breaks from the mix periodically; I've found that a breather every two to three hours is sufficient. Avoid interruptions: mixing requires just as much concentration from the engineer as laying down a solo does from the musician. So turn off the phone. Finally, make sure the mood is right. Get a comfortable chair, dim the lights, and fire up the lava lamp. (Okay, the lava lamp is optional.)

2. Know your client. What kind of music are you working on? If it's a band project, has the band previously released a record that you can listen to? What are the producer's goals? You can completely alter the sound of a record in the mix, so you need to know what direction to go before you start working.

3. Be familiar with the monitors. If you're not working in your own studio, or if the producer brings unfamiliar speakers, give yourself a crash course in monitoring on the equipment. Pop in a CD that you are thoroughly familiar with; listen for exaggerated or muffled frequencies, paying particular attention to the low- and high-end content. Make mental (or even written) notes. Once you feel that you know the speakers' response, start working on the mix immediately.

4. Monitor at low levels. There are three reasons for keeping the monitors low while you mix. First of all, you'll avoid a case of listening fatigue (not to mention other health-related problems associated with loud music). Secondly, just about any mix sounds good when it's cranked up; the stellar mixes are the ones that sound good both loud and soft. Third, with the speakers at loud volumes you won't catch level problems. I was mixing a radio promo recently and thought I had a great mix going, but I was completely crushed when I lowered the monitors to a normal level—I couldn't even hear the announcer's voice.

5. Reference your mix to similar commercial mixes. Listen to mixes that are similar in style to what you're working on. Again, if the band has other material you can use, great. If your console has a 2-track input for a CD player, use it. This way you can A/B between your own mix and the one you're striving to emulate. This step obviously isn't appropriate for those projects you have no intention of patterning after someone else's work, but it's often useful when producing commercial music, especially when you have to please a record label executive.

6. Reference the mix on a variety of systems. This is the most important point to remember: a well-balanced mix will sound good on poor monitors and great on good monitors. Check what you're mixing through several systems of varying size and quality. Before I complete a mix, I've listened to it on my studio monitors, a pair of headphones, a boom box, and my car stereo. If I really want to go crazy, I'll burn a CD and check it out on whatever system I can find. (I've actually done referencing in the electronics department at Sears.)

If you keep these six points in mind while you work, your mixes will improve 100 percent—I guarantee it. (For more information on basic mix principles, particularly with acoustic instruments, refer to “In Your Face Mixing” in the May 1998 EM.)

Rough and Ready

If you're mixing on a computer, you have the advantage of building your mix during the recording process so that when it finally comes time to print to 2-track, only minor adjustments will be needed. I once produced a project that didn't even require a dedicated mix day; instead, I took an hour to automate the vocal track and then printed the song to DAT.

Even if you're not working with a computer-based system, start getting some ideas together as you record. One of the best bits of advice I can impart is to periodically record rough mixes during tracking. All too often, after listening to the same song for weeks or months, we lose the fresh perspective that we had at the beginning of a project. Rough mixes are the perfect way to recall that lost perspective. Check in with them often to find out what your ideas were weeks ago.

What Goes Where?

When working entirely with sequenced MIDI parts, you have the option of mixing tracks virtually without ever recording them to a multitrack. The benefits of mixing directly from the sound modules are obvious. For one thing, your signal path is shorter, so your chances of collecting sonic garbage drop substantially. In addition, sticking with MIDI tracks leaves more audio tracks available. But most important, it means you're not committed to anything—you can edit and automate synth sounds and sequences in ways that would be difficult to do with audio tracks, and you can do it at the last minute if necessary. (If you're performing virtual mixing with a digital audio workstation, make sure your system has the ability to mix live inputs and that you have enough I/O available.)


FIG. 2: In this example, the kick drum and one drum loop are right in the center of the mix, as in a live band. Because the synth bass has a heavy low-frequency content, I have panned it in the center, which also helps lock it in with the kick. The lead vocal and guitar solo are in the center and set hot. They don't occur at the same time, and they have different spectral content than the kick drum, drum loop, and bass, so each instrument will be distinct.

However, many professional engineers like to print MIDI sequences to multitrack audio media (especially analog two-inch tape). Sometimes it's unrealistic for the personal-studio owner to go this route, but it has several advantages. In a small studio with limited resources, for example, recording to audio multitrack allows you to apply outboard effects to individual tracks and submixes, freeing up your limited supply of effects processors for reuse at mixdown. In addition, submixing to tape or disk can simplify your final mixdown process. If you print to analog tape, of course, the tape recorder operates as a signal processor in the sense that it can add a desirable sonic quality.

Whether you mix virtually or print your sequencer parts to tape, it's generally better to clean up the signals at the sound modules rather than at the mixer. Selective filtering and compression can be used to remove unwanted frequencies or dynamics before a sound leaves the module. However, I usually try to save elaborate effects processing for the mixing environment, where I can employ dedicated units (unless I'm mixing on a DAW, where DSP is a precious commodity—in which case I might do some processing at the modules and some at the mixer). It's a balancing act, but it's better to have too many options than too few.

If your mixer doesn't offer dynamic automation, you can use MIDI Control Change 7 (Volume) and 10 (Pan) messages to automate your sequenced tracks. The drawback to this is that a mixer channel will remain open even when no signal is present, which is not the case when you manually ride the faders. With an analog mixer, you'll get some hiss; to fix this, you can use noise gates or expanders at the channel inserts.
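
Where a sequencer exposes raw MIDI, those fader and pan rides reduce to three-byte Control Change messages (status byte 0xBn, controller number, value). Here is a minimal Python sketch; the helper names are my own invention, not from any particular sequencer:

```python
def control_change(channel, controller, value):
    """Raw bytes of a MIDI Control Change message.
    channel is 0-based (MIDI channel 1 = 0); controller and value are 0-127."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out-of-range MIDI data")
    return bytes([0xB0 | channel, controller, value])

def volume_fade(channel, start, end, steps):
    """A linear ramp of CC 7 (Volume) messages, e.g. to close a channel
    when its synth part ends and hiss would otherwise remain."""
    return [
        control_change(channel, 7, round(start + (end - start) * i / (steps - 1)))
        for i in range(steps)
    ]

# Fade MIDI channel 1 from full volume to silence in eight steps
fade = volume_fade(0, 127, 0, 8)
```

Pan rides are built the same way, with controller number 10 in place of 7.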

In general, it's good practice to maintain as many dedicated instrument channels as possible. Granted, sometimes you may have no choice but to submix several parts to a stereo output. Just avoid unnecessary submixing; the more signals you have to work with at mixdown, the better.

Acoustics and Electronics

There are three basic kinds of electronic sounds: emulations or samples of acoustic or electric instruments; completely artificial sounds; and emulations or samples of real-world, nonmusical sounds (a dog barking, for example). The mix engineer needs to approach each type differently.

When working with acoustic and electric instrument sounds, the goal is usually to replicate an accurate image of each instrument, positioning it in a realistic place on a virtual soundstage and making sure its frequency content is similar to what it would be in the real world. You'll then give the recorded instrument a volume and depth on that stage by using level control, reverb, and sometimes other processing, such as delay. (This is not a hard-and-fast rule, obviously; there are no rules in the creative arts.)

The same philosophy usually holds true if the “acoustic” or “electric” instrument happens to be a sample or synth patch. For example, even though the Roland JV-1080's grand-piano patches are electronic samples, I'd probably still EQ them like real pianos and put them in realistic positions in the stereo image, unless I was trying to achieve a weird result. However, the methods you use ultimately depend on the style of music you're producing: for alternative and urban styles, perhaps the piano would need to be equalized like a guitar and spread across the entire stage.

When working with completely synthetic sounds, a different set of rules applies. Trying to place these instruments in a realistic spot on a “stage”—an acoustic environment where they wouldn't normally be heard—is pointless. In addition, there are no real-world templates of synthetic sounds on which to base EQ settings; I mean, what is a Telefunken or a Space Warp Pad supposed to sound like, anyway? The same is true of nonmusical samples (unless, of course, one of your band members is really a barking dog). The only exception here is when you are creating music for picture and want to position the effects to match the action.

Working with synthetic sounds essentially gives you carte blanche to create exciting mixes with sounds coming from all over the stereo image and frequency spectrum. And using creative dynamics control and multi-effects processing, you can mold those sounds into practically anything you want.

Spatial Placement

You have much more creative liberty with a mix of electronic instruments than you do with a mix of acoustic ones. I like to create a natural-sounding blend of all the elements. Artists such as Beck, Nine Inch Nails, Jane's Addiction, and Alanis Morissette often employ contrasting timbres that don't blend smoothly—but for a lot of music, a smooth blend is preferable.

In most cases, instruments and sounds should not compete with one another either spatially or spectrally. You should be able to hear every part of a mix and immediately identify which instrument is which. To do this, I try to conceptualize the mix as a three-dimensional stage (see Fig. 1). Panning instruments moves them across the width of the stage; altering their level and adding reverb or other delay effects determines how far back they are. The vertical axis represents frequency response (for example, cymbals would be toward the top of the stage, with the kick drum sitting near the bottom). This way you can graph the stage from either the front or above and see the two most important relationships of a mix: frequency/pan position and volume/pan position.

The goal is to make sure that no two components are centered at the exact same place in either graph. I don't mean that things can't overlap—the lower keys of a piano will inevitably be situated in the same area of the frequency spectrum as the bass—but elements shouldn't blatantly sit on top of each other. This is what causes a mix to become cluttered and muddy sounding. A clear mix is achieved through careful planning and adjustment of level, pan, and EQ.
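
That planning step can even be sketched as a quick script. The instrument positions below are illustrative, not prescriptive: pan runs from -1 (hard left) to +1 (hard right), each element is reduced to one rough center frequency, and the tolerances are arbitrary assumptions of mine:

```python
def find_collisions(stage, pan_tol=0.15, freq_ratio=1.5):
    """Flag pairs of elements that sit nearly on top of each other in
    both the pan axis and the frequency axis of the 3-D stage."""
    clashes = []
    names = list(stage)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (pan_a, freq_a), (pan_b, freq_b) = stage[a], stage[b]
            same_spot = abs(pan_a - pan_b) < pan_tol
            same_range = max(freq_a, freq_b) / min(freq_a, freq_b) < freq_ratio
            if same_spot and same_range:
                clashes.append((a, b))
    return clashes

stage = {
    "kick":   (0.0, 80),    # (pan, rough center frequency in Hz)
    "bass":   (0.0, 100),   # same spot and range as the kick: a clash
    "piano":  (-0.4, 1000),
    "guitar": (0.4, 1200),
}
```

Run on this stage, the check flags only the kick/bass pair, which is exactly the overlap the next section resolves with panning, level, and EQ.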

Panning and Placement

Many people don't realize just how much EQ and level can affect the placement of an instrument in the stereo field. To check this out yourself, try the following experiment. Put the faders of your bass and kick drum tracks up with both channels panned to center. Boost them both by 15 dB at 200 Hz and turn up your monitors. You'll notice that it becomes difficult to distinguish the hit of the kick drum. Now pan the bass to nine o'clock and the kick drum to three o'clock. When you do, the kick drum hit returns. Finally, pan them both back to center and pull down the level of the bass track; you can again hear the kick drum better when the bass is lower.
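
What the pan pot actually does to the two channels depends on the console's pan law. A constant-power law is one common design (an assumption here, not something this article specifies); it keeps perceived loudness steady as a mono source sweeps across the stage:

```python
import math

def pan_gains(position):
    """Constant-power pan: position -1 is hard left, +1 is hard right.
    Returns (left_gain, right_gain); left**2 + right**2 is always 1."""
    angle = (position + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Centered, each channel gets about 0.707 (-3 dB per side), not 0.5
left, right = pan_gains(0.0)
```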


Light compression with a unit like the Joemeek SC2.2 is often best for electronic instruments.

This experiment illustrates how instruments can occupy the same frequency ranges, provided they aren't at the same spatial position in a mix (and vice versa). I'll discuss specific EQ applications below, but this is important to keep in mind when panning tracks. In general, instruments that comprise the rhythm section are kept toward the center of the mix (see Fig. 2). Specifically, drum parts, bass parts, certain pianos, and loops should be spread no further than ten and two o'clock. In fact, try putting monotonous loops in mono; this opens up the horizontal axis for the supporting characters (guitars, piano, strings, and so on). Whatever you do, don't pan your drum tracks across the entire stereo image: have you ever seen an acoustic drum kit with toms that run from stage left to stage right? Keep the kit in the middle.
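
The clock-face positions used throughout this article map onto pan values straightforwardly if you assume the typical knob travel of 7 o'clock (hard left) through 12 (center) to 5 o'clock (hard right); that travel range is an assumption, as knobs vary:

```python
def clock_to_pan(hour):
    """Map a mixer-knob clock position to a pan value in -1..+1.
    Hours run 7 (hard left) through 12 (center) to 17, i.e. 5 o'clock
    (hard right), on a 24-hour scale."""
    if not 7 <= hour <= 17:
        raise ValueError("assumed knob travel is 7 to 5 o'clock")
    return (hour - 12) / 5

# The rhythm-section spread suggested above: ten o'clock to two o'clock
left, right = clock_to_pan(10), clock_to_pan(14)
```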

As a rule of thumb, any part that has a heavy low-frequency content should be situated toward the center of the mix; splitting such signals hard to either side is a prescription for trouble. True, engineer Geoff Emerick split the drums and bass to opposite channels on certain Beatles tunes, but that panning was almost certainly done out of necessity, to work around track limitations when bouncing.

Slightly outside the rhythm section lie the supporting instruments: pianos, strings, guitars, and horns. This is also where you might want to place certain background vocals and percussion.

Lead vocals are generally put right up the middle; anywhere else is simply distracting to listen to. Although instrument solos are usually panned dead-center, I prefer to spread them slightly to either side. Often, a solo will be played along with the lead vocal, and if both are in the center they will compete with each other. (Many solo instruments have a frequency range similar to that of the human voice.)

Finally, the outside edges of the mix are usually reserved for effects returns (particularly reverb), for certain types of percussion, and for high-frequency background vocals (à la the Bee Gees). Be careful when placing sounds completely in one channel or the other, especially if you're also processing them with multi-effects; in these instances, delays and reverbs that are panned opposite the source sound can cause phase cancellation.

Auto-panning synth patches should be addressed with caution. At what spatial position do these sounds start their journey, and where do they end up? This path must be clear of other sounds in the same frequency range; otherwise, dropouts will occur. Once you have a clear idea of where you want to place everything, it's time to make sure that instruments sitting in similar places aren't competing for room in the frequency spectrum.

Issues of Timbre

Proper EQ means more than just getting a great sound from a track; it's about eliminating congestion in the mix. I want to stress the importance of subtractive EQ. In general, your mix will benefit more from cutting than from boosting. To tweak a sound with a parametric EQ, I usually start by doing the opposite of what I was taught in school: I turn the EQ gain down all the way and sweep the frequency knob, so I don't even think about boosting anything unless I really have to. However, if you cut enough frequencies, you'll probably need to make up gain at some point. Fortunately, many digital mixing consoles and DAWs provide a “gain makeup” capability as part of the EQ section.
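
The widely used Audio EQ Cookbook peaking-filter formulas (a standard DSP reference, not something from this article) make that subtractive move concrete. Here one band is set up as a cut; the frequency, gain, and Q figures are only illustrative:

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for a peaking filter, per the Audio EQ
    Cookbook; a cut when gain_db is negative."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def magnitude_db(b, a, fs, f):
    """Filter magnitude response in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)   # z^-1 for this frequency
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A 6 dB cut at 300 Hz, Q of 1.4 -- e.g. to thin out a muddy synth pad
b, a = peaking_eq(44100, 300, -6.0, 1.4)
```

The response shows the full cut at the center frequency and returns to unity far away from it, which is why a narrow subtractive notch leaves the rest of the track untouched.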

With electronic replications of acoustic instruments, your best bet is probably to retain the authenticity of that instrument's natural sound. This does not mean that the samples don't need equalizing: although they are supposed to be accurate, pristine recordings of acoustic instruments, many are flawed. Basically, you want to eliminate frequencies that aren't needed. What is the instrument's primary range? In other words, which frequencies are needed to get it to cut through a mix, when that is desired, or to keep it back in the mix when it is supposed to be part of a pad? Once you determine the range, you can whittle it down to the necessary frequencies.


Equalizers such as this Drawmer 1961 are useful for timbrally separating instruments that are in the same range.

Two of the most important sounds in a mix are the kick drum and the bass part, and there should be a synergy to their relationship. These two tracks constitute most of the low-frequency energy in a mix. You need to decide which of these tracks will be the primary source of low end. For a traditional mix, I usually opt for bass, simply because there is more motion to it, and spotlighting it makes the low end more interesting.

On a bass part, I find that rolling off frequencies below 50 Hz is a good start. Boosting frequencies in the 300 Hz to 1.5 kHz range (admittedly a large range) will increase the track's clarity, and pulling them out will round out the low end. Once I have the bass sound, I work the kick drum in as support, accenting mid frequencies (1 kHz) and boosting a little around 80 Hz (3 to 6 dB, with a narrow bandwidth).

On the other hand, if you're producing urban music, the kick drum sample or synth patch should be the more pronounced low-frequency element. Because many hip-hop beats are derived from premixed loops, you'll want to boost the track by about 3 dB at 120 Hz. Loops also have little high-end clarity, so you may have to roll off upper frequencies with a shelving filter (usually above 7 kHz). A mid-range boost might also be in order. Most other electronic percussion instruments are fine with little or no EQ; if EQ is needed, it's usually a boost at 7 kHz or higher.

Certain synth pads—especially organ sounds—tend to be heavy on the low-end content. You'll probably find that, although they sound really fat by themselves, they just don't sit well with the rhythm instruments. Try rolling off frequencies below 300 Hz or boosting the track in the 2 kHz to 3 kHz band and lowering the fader level.

Digital pianos can often present the same problem in a mix, especially when the lower half of the keyboard is being used. As long as the piano isn't the only instrument playing, my advice is to roll off the low end. I recently used snapshots to automate the EQ of a piano track in a rock song. The piano started the song out and needed to sound full, but when the rest of the band came in, it totally clashed with the bass. So I simply set up two snapshots—the second one with a highpass filter engaged—and performed the change on the fly.

Electronic strings should accent the high-mid and upper frequency ranges, so a little boost might be needed somewhere above 5 kHz. Try rolling the low frequencies off below 500 Hz. One very useful application of EQ is reducing hiss from sound modules. Although expanders and gates are an option, you'll find that rolling off frequencies above 10 kHz is sometimes a better approach. Keep in mind that this will work only on synth outputs handling signals that have no frequency content above 10 kHz (such as drum tracks, bass parts, and guitar parts).

Remember, every change you make to one track—no matter how small—will affect the other tracks as well. A tweak to the piano will change its relationship to the bass, which may alter the way the bass sounds. So if you make major changes on soloed tracks, be sure to check the sound in the mix immediately.

Keep it Under Control

When used on acoustic instruments, compressors work to smooth the dynamics of a performance. Contrary to what many people think, electronic instruments have a good deal of dynamic range. You probably won't have to squeeze anything to death as you would with an electric bass-guitar track, but slight compression of certain sounds can tighten up your mix. Synthesized strings generally sound good compressed at a light ratio (2:1 at -6 dB). If you have a live piano track, you may also want to process it with a little compression (-6 dB threshold, 3:1 to 4:1 ratio), especially if the musician performed with a lot of dynamic feeling.

Compression is often used in an electronic instrument mix to blend several synth outputs together by performing a submix of the desired tracks, busing them to a stereo pair, and patching a compressor across the two channels. Many people use this method to combine sampled sounds, particularly if the samples were derived from a variety of musical styles. I often do it with drum sequences if the samples didn't all come from the same kit. This achieves two things: first, it ensures that the kit will sound cohesive; and second, if I want, it lets me generate an intentionally overcompressed pumping sound across the drums (a typical hip-hop sound).
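
The ratio-and-threshold arithmetic above is simple enough to sketch. This is only a compressor's static gain computer (no attack or release smoothing), with the strings settings from the text as defaults:

```python
def compressor_gain_db(level_db, threshold_db=-6.0, ratio=2.0):
    """Gain change in dB applied to a signal at level_db.
    Below the threshold, nothing happens; above it, every ratio dB of
    input yields only 1 dB of output."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return -(over - over / ratio)   # how far the output is turned down

# A peak at 0 dB through a 2:1 compressor with a -6 dB threshold:
# the 6 dB of overshoot becomes 3 dB, i.e. 3 dB of gain reduction.
```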


Graphic equalizers, such as this dbx 2231, are invaluable mastering tools.

Gates are commonly used on synth outputs to quiet or eliminate noise if no mixdown automation is available. Getting rid of extraneous noise is a priority; I have heard some really nasty sounds come out of certain inexpensive sound modules. You don't need to gate the outputs if the modules are active throughout the entire song, only if there are extended periods of inactivity.

Although you could use an expander for this purpose, a gate doesn't affect the dynamics of the performance the way an expander does. Make sure that the gate's attack and release times are set properly; otherwise you may cut off part of the performance.
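
A toy gate shows why those time constants matter. The threshold and the per-sample smoothing coefficients here are purely illustrative: set the release too fast and the gate chops the tail of a sound; too slow and the noise leaks through:

```python
def gate(samples, threshold=0.05, attack=0.5, release=0.05):
    """Mute samples below threshold. attack/release are per-sample
    smoothing coefficients in 0..1; larger means faster movement."""
    out, gain = [], 0.0
    for x in samples:
        target = 1.0 if abs(x) >= threshold else 0.0   # open or closed?
        coeff = attack if target > gain else release
        gain += coeff * (target - gain)                # glide toward target
        out.append(x * gain)
    return out
```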

Finally, for those folks producing urban music, it's not a bad idea to patch a limiter across the stereo bus—just enough to catch peaks from the kick drum. After all, you don't want people getting mad at you because your mix blew their woofers out. (For a more detailed look at using dynamics processors, see “Conquering Peaks” in the December 1998 EM.)

A Touch of 'Verb?

In general, electronic sounds designed to emulate acoustic instruments should receive the same multi-effects treatment as their acoustic counterparts. A piano should be processed like a piano, regardless of where the sound originates. (A little room reverb is a nice touch on a piano, incidentally.)

Now, you can really have some fun with original sounds. Nobody is going to criticize you for processing a Kosmic Kazoo with too much chorus! But watch the spatial placement of the effects. Just because you have the returns panned hard left and right doesn't mean that the effect itself is located on the outer edges of the mix; it might be located somewhere in between, which could cause a conflict with one or more of the other mix elements.

From the Ground Up

When approaching a mix, I suggest you start by determining where you want to place everything in the stereo field and then pan the tracks accordingly. Push all the faders up, and get a rough mix going. Next, use EQ to tweak the sounds in context with the rest of the tracks so that nothing is clashing. If you hear something funny with any of the tracks, solo that track, isolate the problem, and fix it. When you're done, pull all the faders back down.

Next, determine which track (or tracks) to mix the song around. Most people agree that you should build a mix around the most important element—whatever will sell the song. Doing this ensures that you won't be caught with a great mix that has absolutely no room for the prized track. So generally, pop music (especially ballads) should be mixed around the vocal; jazz around the soloist; and rock, urban, and alternative around the rhythm instruments (drums, loops, and bass).

Bring the volume fader of your focal-point track up to about 80 percent. (If you decided to mix around the rhythm section, start with the kick drum and bass guitar.) Apply whichever multi-effects processing you want, but don't obsess over the levels of the effects returns; they'll need to be readjusted anyway once you start adding more tracks to the mix. Then start bringing in the rest of the components, adding effects where needed.

If you''re building around the rhythm instruments, follow the kick drum and bass with the snare and other drum tracks. Then bring in supporting instruments (piano, guitars, strings, and so on), followed by percussion, lead instruments, and solo instruments. Finish up by adding background vocals, samples, and sound effects.

When building around a solo instrument or vocal, I find it best to bring in some sort of accompaniment first, like acoustic guitar or piano. Follow that with the rhythm instruments, as outlined above, and finish with the supporting cast. If all your levels are good, you should still feel the energy of the first track you put up even after you've added all the other instruments. Don't forget to check the mix in mono, especially if you think your recording could be broadcast; many radio and TV stations do not broadcast in stereo.

Next, address the tracks with dynamics processors where needed, make any necessary EQ changes, and adjust the levels of the effects returns. Finally, automate your tracks and print to tape. It looks easy on paper, doesn't it?

A Little Mastering

Once you're reasonably happy with what you hear, you'll want to establish the mix's overall frequency parameters. How much high end do you want? How much low end? True, the mastering engineer usually takes care of these things, but most professional mix engineers will use a parametric or graphic EQ across the stereo bus before printing the mix to 2-track. This allows them to set high and low boundaries for the mix—a particularly smart move when working with electronic instruments, where the frequency responses of the tracks can run the gamut. (It will also help you EQ and set the levels of the extremely high- and low-frequency instruments.)

Again, find a CD with similar content and audio quality to the project you''re working on. Listen to the overall volume of the upper and lower frequencies. Then compare that CD with the mix you have going, and make minor adjustments to the stereo EQ where needed. A graphic EQ—I prefer the dbx 2231—is an excellent tool for this application.

A lot of hip-hop, dance, and R&B music has extremely heavy low-end content. This contributes to these genres' distinctive sounds, but loud low end doesn't equal good low end. In other words, don't boost 9 dB at 120 Hz during mastering to get the kick drum to stand out more; go back and fix it in the mix. A good mix should require very little tweaking at the stereo bus.

Finally, a little compression (-3 dB, 1.5:1 ratio) across the stereo bus can compensate for subtle level changes that you may not have caught in the mix. Alternatively, as I mentioned earlier, limiting may be in order. (Be sure to set your threshold just below the peaks you want to eliminate.)

Boogie Down

The most important thing you can do for any mix is put it to rest once you're done. Let it sit for a few days, allow your head to clear, and then listen to it with fresh ears. At that point, you'll probably want to make a few thousand adjustments, but that's fine. What's essential is that you take a break from the project.

When all is said and done, a mix of electronic instruments employs many of the same techniques as a mix of acoustic instruments. In fact, an electronic mix actually allows you to be more creative. If you keep in mind the basic principles I've outlined here, you should be able to construct a solid, three-dimensional mix that jumps right out of the speakers.

EM Associate Editor Jeff Casey recently turned a hip-hop song into a country tune with a 4-band parametric EQ.

SIDEBARS

BUILDING A MIX IN TEN STEPS

Many methods can work for organizing the mixing process. This straightforward ten-step process works well for me. Obviously, there can be much more to building a mix, but sometimes it pays to keep things simple.

1. Set pan positions.

2. Set levels to build a rough mix.

3. EQ each track in context with the others, soloing tracks to isolate problems.

4. Bring all the faders down.

5. Bring your most important track up to 80 percent volume.

6. If needed, process this same track with multi-effects.

7. Bring in supporting instruments, adding effects as needed.

8. Patch dynamics processors across tracks that require them.

9. Make EQ tweaks and check the mix in mono.

10. Adjust the levels of the effects returns.

THE DADDY OF DANCE

For the most part, dance music is created with electronic instruments; break beats, loops, sequences, and samples often constitute close to 90 percent of a dance track. So who would know more about mixing electronic instruments than a dance-music mix engineer? Chris Rivera has been involved in the New York dance-music scene for almost ten years. He recently finished mixing a single for the German dance band Electrik Kloud that will be released in the United States this summer. I caught up with Rivera while he was on vacation and picked his brain on mixing.

How does a dance-music mix differ from, say, a rock mix?
There are a lot more rules for people doing rock production. When I'm mixing, I don't have to worry about things like making sure the piano sounds full. I can pretty much do whatever I want within reason—of course, the mix still needs to sound good. In general, though, I have a lot more creative liberty with dance music.

What element do you usually build a dance mix around?
Definitely the beat. When you're on the dance floor, you need to feel the pumping of the kick. Once your beat is rocking, you can start bringing in supporting tracks, like synth pads. If there are any vocals, they're usually the last tracks I bring in. I know that goes against conventional techniques, but it's how we do it. Vocals just aren't all that important with dance music.

What factors contribute to poor electronic-instrument mixes?
Poor placement of instruments within the mix. I've heard a lot of mixes from guys just getting started in this business—their transient samples come in right on top of the synth pads. All of a sudden, the pad disappears, and you're focused on this sample. Then, once the sample has passed, the whole mix sounds empty. You need to have a place for everything in the mix.

How much multi-effects processing do you use?
Actually, very little. If I've been involved with a project from the beginning, I try to choose sounds and mold them so that they won't need a lot of effects processing in the mix. Sometimes I'll use an autopanner on a pad or a tap delay in certain places for effect, but generally I keep the mix fairly dry.

What's the best advice you can impart about mixing dance music?
Hook up with someone who owns or works at a club. Print a DAT of your mix and bring it down to the club so you can check it out in the most important listening environment. Sure, your mix has to sound good on the radio, too, but you''re really mixing for the people at the club. So go down there, crank up the system, and see how it sounds.