Secrets of Sound Design, Revealed
A trio of professionals shares their sound design strategies

Ask most people about sound design, and they probably won’t know what you’re talking about. Tell them you’re a sound designer, and you’ll more than likely be met with a blank stare followed by “What’s that?” And while nearly everyone is aware of movie sound effects, few have given much thought to the people who create them. Even musicians may not recognize the effort that goes into authoring synthesizer presets.

Sound designers are responsible for producing and reproducing a great many sounds we hear, particularly electronic sounds. Their creations range from every beep and bloop your smartphone makes to complex sound effects that wouldn’t exist without computers and synths.

I recently interviewed three prominent sound designers about their craft as it relates to electronic musical instruments. One is best known for conceiving of unusual timbres for an award-winning softsynth, another for decades of voicing factory patches with a leading hardware-synth manufacturer, and the third for creating patches for software instruments from a variety of developers.

Although each comes from a very different background and follows his own path, all three spend their workdays making sounds for musicians to use in their music. I asked each of them identical questions and was surprised at how different their answers were.

DIEGO STOCCO

Celebrated for his sound-design work for Spectrasonics, as well as YouTube videos that demonstrate his unorthodox sampling sessions, Diego relocated to the Los Angeles area from his native Italy. He calls himself an “explorer of sounds,” and it’s an appropriate title for someone known to many YouTube viewers as the guy who multi-tracked himself playing a tree. He designs and builds custom instruments for their potential to make weird sounds.

Diego focuses mostly on creating sounds for musicians and, along with the company’s founder and fellow sound designer Eric Persing, produced much of the content in Spectrasonics’ flagship softsynth Omnisphere. He has also designed sounds for corporate clients like Apple, Adidas, and Panasonic, and you can hear his work in movie trailers such as the one for the latest reboot of Halloween. He is currently working on an original library of sounds designed specifically for trailers, as well as something new from Spectrasonics.

When you were younger, was it your ambition to become a sound designer?

I didn’t even know the term or the profession existed when I started. When I was a kid, I started learning classical piano, but I didn’t have much patience for learning scales because I was more intrigued by the nature of the piano itself.

One day, tired of playing scales, I hit one of the keys in the lower range with my fist, hard enough to snap the hammer. It must have been an old piano, because I couldn’t have been that strong when I was only 9. So I hit the thing, the thing snaps, and I heard this sound, and I was like, wow, this I like! So my appetite for destruction of musical instruments comes from a very early age.

When I was in my 20s, I was working for a studio doing commercials for local radio. At the time, they wanted to do a CD for radio with sound effects—those things they use to transition in and out of commercials or jingles. They had an Eventide there at the studio, and the concept was “listen, take this thing and see if you can get anything out of it.” I knew they needed these big whooshes and laser things, and the Eventide was great for doing that. After I created a bunch of these things, they were packaged into a CD with other things like drums and all kinds of elements. On the CD I got credit as “sound designer,” and I was like, okay, I guess this is a thing. I didn’t even speak English at the time, so it was just a word that I obviously understand much better now.

Where do you usually look for source material?

I just love using whatever I can put my hands on, so whether it’s acoustic, digital, or organic instruments or objects, everything is on the same level for me. If I build an object or instrument—I call them “sound design instruments”—how am I going to play it? So there’s a consideration about the ergonomics of the thing. Does it sound interesting? Does it look interesting?

Anything out in the world is open to me. If I hear something that I really like, I can pick up stuff from the hardware store and turn it into an instrument. It makes no difference, the origin or the intended nature of the object itself. I think my material gets picked for dark drama, heavy stuff. Because of my custom-built instruments, I can so easily make these sounds that are creepy and scary and intense.

Let’s say someone requests watery sounds. How would you approach that?

You cannot mix water with electricity, so the thing that you would have to do is—I think it’s an old technique—put a condom around a pencil mic, and that shields the microphone from water. But you can use a Ziploc bag or anything that is sealable. Make sure it’s really tight, and then just drop it inside the water, and then maybe EQ down the high end a little bit to make it sound muffled. And depending on what you’re trying to record, just make it move a little bit.
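
That high-end EQ move is easy to approximate in software. Here is a minimal sketch using NumPy and SciPy, with a Butterworth low-pass standing in for the EQ Diego describes. The filenames and the 800 Hz cutoff are placeholder assumptions, not his settings:

```python
# Approximate the "muffled underwater" EQ: roll off everything
# above a low cutoff with a Butterworth low-pass filter.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def muffle(infile, outfile, cutoff_hz=800.0):
    rate, audio = wavfile.read(infile)        # assumes 16-bit PCM
    audio = audio.astype(np.float64)
    # 4th-order low-pass; sosfiltfilt applies it with zero phase shift
    sos = butter(4, cutoff_hz, btype="low", fs=rate, output="sos")
    muffled = sosfiltfilt(sos, audio, axis=0)
    wavfile.write(outfile, rate, muffled.astype(np.int16))

muffle("splash_dry.wav", "splash_underwater.wav")  # hypothetical files
```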

Hydrophones are interesting because they are built to be immersed in water, but they do not give the sound we [expect to] hear when we think of water. When you watch a movie and you hear something underwater, that’s not the actual sound of water. That’s the sound design version of water. We got used to that, but it’s not actually what happens when you are underwater. Sound travels differently underwater, and you don’t hear the echo or the reverb thing that we are used to when we watch a movie.

There’s a technique that I discovered a few years ago. I made a little clip in one of my videos; it’s also available on Instagram. I put these electret mics inside water balloons, and then I submerged the balloon inside a little plastic container. Basically the water hits the surface of the balloon and creates this shimmering tonal sound that I did not expect at all. So it was just like, I’m just going to put things inside balloons because they look cool on video, but then the coolest sound came out, and I was like, this is interesting! [Laughs]

What software do you use the most?

To record, I use Pro Tools. I have two [Universal Audio] UAD systems with the Apollo 8, and then I have some API gear for preamps. I have microphones everywhere. I always need extras because accidents happen. That’s basically the main rig. It’s worked great for me, and I’m happy with that. And for things where I want to do some live processing, like in real time, I use Ableton Live. It’s perfect for building these complex chains that you can control in real time with controllers and all kinds of things.

Do you use the native effects in Live?

I use everything. I use a little bit of the things that it comes with, and then obviously I have all the UAD plug-ins. I have a lot of Waves plug-ins. I have Soundtoys. So it depends on what I need to do. It’s a combination of things. I go with whatever is the most direct and practical use for that plug-in.

What are some of your favorite microphones for sampling?

I’ve been using Rode mics for many, many years. And the reason was that when I did the Burning Piano, the obvious concern was, are the microphones going to survive being in front of the fire? So I got some Rode mics, and I put them through essentially this test of fire, and they survived. I think the capsules got a little baked. [Laughs] But not too bad. Rode started making a lot more models—tube mics, and then ribbon mics—so I have a selection of mics from them.

Then I have contact mics, transducers that I can convert to contact mics, pickups, and all kinds of things. I like very small electrets that I can put inside objects, like I did for the Luminapiano, for example. I put the electret right on top of the tungsten filament inside the light bulb. That was interesting, because you have a tiny little sound where you use the proximity effect of the microphone to enhance it right to the point where it sounds like this is the size it should be. I get very, very close, because in sound design, it can be a way to discover a new sound.

Do sound designers influence each other’s work as much as musicians do?

For the most part, I don’t separate sound design and the music world into two categories. There’s a little bit of confusion when you say “sound design,” because the term originally referred to sound designers who work in movies, but what I do is obviously in the music realm. If I listen to a record and I hear something interesting, I am listening from a sound design perspective, so I’m trying to understand the technique, the source.

When I was very young, probably 13 or 14, I was starting to make my own compositions. I had a GEM S2 keyboard that had a little sequencer, and it also allowed me to import my own samples. That was a big step forward for me. A friend had an Akai sampler, so I could sample the sounds, put them on a floppy disk, put the floppy disk inside the GEM keyboard, and use those sounds. So I started thinking in terms of, not only am I writing down the notes with the sounds that these people are providing me, but I can also add my own sounds. So that was already happening in my mind.

What’s the best background or training for someone who wants to break into the field?

It really depends. To do music sound design, you have to be a musician. You have to have a musical background, because you cannot create musical sounds, playable sounds, if you don’t know how to play them. But I think, in general, if you want to be a sound designer, you need to have an endless appetite for sounds, essentially. If you think about sounds all day long and those are things that really are interesting to you, then I think you could consider it.

However, there are a lot of other things involved. You need to learn how to use the tools. And again, it depends, because if somebody is focused mostly on synthesizers, then they’re going to learn about synths as much as possible. I like everything, so I focus on everything. I’ve been trying to absorb as much information as possible since I was a kid, in every possible way. So yeah, I would say that’s the starting point. Then if somebody wants, they can go to college for it, or not. It depends; I don’t particularly have a preference. I am self-taught, so I didn’t go to college for any of these things. There was no such college available in Italy anyway, so that was not an option.

JACK HOTOP

For decades, you have heard Jack Hotop’s work in one hit song after another anytime you listened to music on the radio. Currently Korg’s senior voicing manager, Jack will celebrate his 35th anniversary with the company this August.

During his tenure at Korg, he has been responsible for producing factory content for close to 100 products, from the early-’80s PolySix all the way up to the latest expansion libraries for the company’s current flagship keyboard synthesizer, the Kronos. One of his greatest hits was the bestselling M1, for which he created most of the factory patches. He’s also a highly proficient player and composer, well known among keyboardists and trade-show attendees for his chops and world-class product demos.

In the ’70s, while playing keyboards in cover bands that worked in nightclubs, Jack had to emulate the sounds he heard on hit records. At the time, he says, information about synthesizer programming was scarce, and he learned a lot just by trial and error. That’s when he realized he not only had a knack for the job, but he also loved doing it.

When you were younger, was it your ambition to become a sound designer?

No. I started piano lessons at 7 and took them for five years. I had about ten teachers. Every time I’d want to quit, my mother said, “Oh, I’ll find a teacher you like.” And I started gigging with bands with a Gibson 101 organ when I was 14. When I was at Berklee, I wanted to learn about synthesizers and take courses, but you needed two years of prerequisites when I was there in ’72 and ’73. I was an arranging and comp major.

So I met a band, and they paid for me to take courses at Boston School of Electronic Music, and Roger Powell was teaching there. He was working for ARP, who were in Newton [Massachusetts] at the time, and he gave me my first exposure to the Odyssey, 2600, 2500, and the EMS Synthi Sequencer, which is where I started to learn my ABCs about waveforms and envelopes and LFOs and modulators and stuff like that.

I wanted to buy gear, so then I went on the road with the Drifters and did a world tour with Gloria Gaynor in the mid-’70s; and in the late ’70s I was a musical director for Silver Convention. I kept gigging and playing music I liked until the fateful day when I bought a PolySix. I said, “This is great. I need to hotrod it and modify it.” I called the company, and I was on the phone for about three hours with an engineer at Unicord, the distributor for Korg in 1983. I said, “I bought a PolySix, but I was up for 36 hours as soon as I got it, reprogramming everything.”

He said, “Oh, man, you should bring it in.” I invited him to some gigs, then he invited me to the office, and the rest, as they say, is history.

Where do you usually look for source material?

I think that’s intuitive. It’s whatever strikes my fancy, almost like writing music. All of a sudden I may hear something, and that might spark my imagination to try to record something. It’s like, okay, if I hear something interesting, I’m going to record it. But sometimes an interesting sound might spark an idea of how I could go about constructing something similar or of that nature. So it’s like a lightning strike, I guess.

Let’s say someone requests watery sounds. How would you approach that?

I was just at Sweetwater Sound for Jordan Rudess’s KeyFest, and one of the things I did was try to go through the different synthesis engines on the Kronos. One of them is a plucked-string modeling engine. When I first started working with that, I thought, okay, there are all kinds of ways you can pluck a string. You can use a pick, glissandos, bass, guitars, sitars, violins, acoustic, contrabass. But besides the pluck stuff, I wanted to start stretching and fooling around and experimenting with parameters—damping, nonlinearity—some of the newer things that were not present in previous synthesis structures I had been working with.

And I kept experimenting, and all of a sudden I found something that kind of sounded like a dolphin underwater. And then I started playing with modulators and I’m going, wow, we’ve got some humpback whales going here. So I played and demonstrated the sound, which is about as far as you can get from what you might imagine a plucked-string modeling engine could do. It definitely sounds watery and ethereal.
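
Korg’s engine is proprietary, but the classic Karplus-Strong algorithm is the textbook example of plucked-string modeling, damping parameter included. Here is a minimal sketch, an illustration of the general technique rather than of the Kronos engine, with arbitrary pitch and damping values:

```python
# Karplus-Strong plucked string: a noise burst circulates through a
# delay line whose length sets the pitch, passing each time through
# a damped averaging filter that softens and decays the tone.
import numpy as np

def pluck(freq, duration=2.0, rate=44100, damping=0.996):
    n = int(rate / freq)                  # delay-line length = pitch
    buf = np.random.uniform(-1, 1, n)     # the noise burst is the "pluck"
    out = np.empty(int(rate * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # average adjacent samples and decay them: the damping filter
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

string = pluck(110.0)  # an A2 pluck; lower damping gives a deader, more muted string
```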

Speaking of water effects: a rainstick has nothing to do with water. It’s a bunch of pebbles inside a stick. But when you tilt it and the pebbles roll down, you get this sound that is like, gee, that kind of sounds like water. So if I think about sea sounds and environmental sounds—what you might hear in a rainforest, in other forests, or at sea—by transposing stuff, often transposing a sample way down, you sometimes get a completely different character at a different pitch.

I saw a [TV] special where Bernie Krause had recorded some trees that were near a stream. They were creaking naturally because of the nearby water. But he slowed that down, and it actually became a rhythm. It sounded like a percussion ensemble, and you could tap your feet along to it—a consistent BPM, but it came from nature. So by playing around with sounds and also effects processing—using pitch shifting and other more exotic effects, ring mod, and stuff like that—through that playful experimentation, you can go down a trail that might lead you [to something unexpected]. If you say, okay, I’m going for watery sounds and you aim in that direction, I think that works like a guide map as you’re playing and experimenting.
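
The transpose-way-down move Jack describes, like the slowed-down creaking trees, comes down to resampling: keep the samples, lower the playback rate, and pitch and tempo drop together. A minimal sketch with SciPy; the filenames are hypothetical:

```python
# Transpose a recording down the old-sampler way: keep the samples
# and halve the playback rate once per octave. Pitch drops and time
# stretches together, which is where the new character comes from.
from scipy.io import wavfile

def transpose_down(infile, outfile, octaves=2):
    rate, audio = wavfile.read(infile)
    wavfile.write(outfile, rate // (2 ** octaves), audio)

transpose_down("creaking_tree.wav", "tree_percussion.wav")  # two octaves down, 4x slower
```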

What software do you use the most?

With the Kronos, I use proprietary editors that only I have access to; they’re not available to the general public. It’s a combination of using those editors and also working directly with the hardware, using a Kronos. I go back and forth depending on the tasks I need to do. Editors are great for some things, but you can also make some quick moves directly on the hardware, where your focus is in one place: your hands are on the keyboard, the controllers, etc. So I bounce back and forth.

What are some of your favorite microphones for sampling?

All of them; none of them. [Laughs] A lot of times, it’s what’s around. I don’t have an extensive collection. If I go to a studio and they have a good collection of mics, the choice depends on what’s being sampled, so I can’t cite any favorites or preferences with mics.

Do sound designers influence each other’s work as much as musicians do?

Definitely. I’ve got to mention Eric Persing. He started out at Roland at the same time I started out with Korg. We met at NAMM, and I loved the way the guy played, and Eric enjoyed my playing. I really loved his sound design and still do. He’s one of the best, one of my dearest friends, and when I hear something that Eric does, I’m going, my God, he’s forever inspirational. So I’m a big fan of his sound design, but also of his playing and his compositional sense. It has always resonated strongly with me.

What’s the best background or training for someone who wants to break into the field?

That’s an interesting question. Certainly you can take courses and classes; that’s one way to do it. Another way is, okay, you’ve got a style, or you like certain artists, and you like the way the sounds are being produced and generated and the final output of that. We all have different tastes in music. It could be world music, it could be EDM, it could be anything. You pick the things that interest you, try to learn how they’re done, deconstruct them, and then follow those steps to create your own spin and your own version. That’s a formula that’s worked well for me, generally speaking, and I think others could follow that path.

SIMON STOCKHAUSEN

As the son of famed German composer Karlheinz Stockhausen, Simon was recruited into the family business while still quite young. He discovered his talent for synthesis at the age of 12, while preparing to perform his father’s work Sternklang (Star Sound, 1971). Playing a Minimoog and using a ribbon controller for filter cutoff and pitch, he was encouraged “to get deeper into synth programming and to increase my comprehension of how things in the electronic audio world really work.”

More recently, Simon has become known for creating much of the factory content for UVI’s top-shelf softsynth Falcon, as well as Falcon expansion libraries. In addition, he creates sound libraries for software such as iZotope Iris, Applied Acoustics Systems Chromaphone, Vengeance Sound VPS Avenger, and 2CAudio Kaleidoscope. He is also a saxophone player and an accomplished composer who has scored music for films and live theater. Fascinated by live electronics, Simon is currently mixing an album of improvisations with his brother Markus for a project called Wild Life.

When you were younger, was it your ambition to become a sound designer?

My path to becoming a musician and composer was set by my dad at a very early age. For me, programming and inventing sounds has always been part of those professions, as my tools were acoustic and electronic instruments that needed to be fed with unique sounds, so I always was a sound designer, as well. Becoming a professional sound designer who sells sounds and is hired by companies to produce them was never a plan when I was younger, though at 18 I did produce some sound libraries for a company that sold floppy disks for the Casio FZ-1 sampler and preset expansions for the [Roland] D-50. But I wasn’t really interested in pursuing that path at the time.

After school, I joined my father’s ensemble and toured the world for many years playing his music. I also produced the electronic music for two of his operas together with him, and he wrote quite a few pieces for me as a synthesizer player. At the same time, I began playing music together with my brother Markus in the field of jazz and improvised music.

I had been composing music since the age of 7, and in my 20s I wrote many pieces: works for various ensembles, chamber music, electronic music. Then I got into theater music and film music; I cooperated with visual artists and whatnot. Doing all that, I always tried to combine acoustic instruments with electronic music, so my skills in producing electronic sounds increased over time through learning by doing (and reading).

Where do you usually look for source material?

Nature, factories, cities, the seaside, playgrounds, soccer stadiums, crowd gatherings, train stations, public festivals...

Let’s say someone requests watery sounds. How would you approach that?

I would ask, do you want naturalistic sounds or sounds that evoke an association of water? Then I would gather some water sounds, probably at the seaside with a shallow surge of waves or at a lake without any waves so that one could capture the actual sounds of water bubbles. I would record the water very close, like holding the boom with the two mics only a few centimeters above the water surface.

Then I would process those recordings with various tools, first radically removing all the background noise with RX6, then applying things like time-stretching (using IRCAM TS or Falcon, which allow for the separation of sine, transient, and noise components) and then processing the water with a resonator like [2CAudio] Kaleidoscope to create tonal/melodic resonances. Or processing the isolated water bubbles with spacious tools and reverbs to create fluid abysses, or audio-morphing the water with synth tones, using the water itself as a modulator.
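
Kaleidoscope’s internals aren’t public, but the basic gesture Simon describes, making noisy material ring at chosen pitches, can be sketched with a small bank of tuned feedback comb filters. The pitches and decay below are arbitrary, and white noise stands in for an actual water recording:

```python
# A bank of tuned feedback comb filters: noisy input comes out
# ringing at the chosen frequencies and their harmonics.
import numpy as np

def comb_resonate(audio, rate, freqs, decay=0.995):
    out = np.zeros_like(audio)
    for f in freqs:
        d = int(rate / f)                # delay length tunes the resonance
        y = audio.copy()
        for i in range(d, len(y)):
            y[i] += decay * y[i - d]     # feedback comb rings at f
        out += y
    return out / len(freqs)

rate = 44100
water = np.random.randn(rate * 2) * 0.1                          # stand-in for a water recording
tonal = comb_resonate(water, rate, freqs=[110.0, 165.0, 220.0])  # rough A-E-A drone
```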

What software do you use the most?

Hard to say—it depends on the task, but I frequently use granular tools like [accSone] crusher-X and GRM SpaceGrain; hybrid instruments like Falcon, HALion, Avenger, Alchemy 2, Metasynth; synths like Serum and Zebra2; audio-morphing tools like Morph 2 and Melda’s Morph plug-in.

I also use multi-effect plug-ins like Melda’s MXXX, which can do just about everything; Soundtoys; reverb and delay plug-ins like [2CAudio] B2, Aether, Breeze 2, Adaptiverb, the Valhalla brigade, various convolution reverbs, Surreal Machines’ Diffuse, Relayer, and Logic’s Delay Designer. Also spectral manglers and resonators like the entire GRM collection, Photosounder, and Kaleidoscope.

Then there’s my main DAW, Logic Pro X, plus iZotope’s RX6 Advanced and Cubase. Lately I’ve gotten into Sound Particles, which I find totally fascinating.

What do you use to loop samples?

WaveLab, but mostly I use the loop functions provided in the instruments themselves (Falcon, HALion, Alchemy). But if a sample player doesn’t do crossfade looping (like Avenger for sample maps), I use WaveLab.
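
For samplers that won’t do it themselves, crossfade looping is simple to sketch: as playback approaches the loop end, the audio is faded into a copy of what leads into the loop start, so the jump back lands seamlessly. A minimal equal-power version, mono only, with placeholder loop points:

```python
# Crossfade looping: blend the region before loop_end with the region
# before loop_start, so looping back produces no click at the seam.
import numpy as np

def crossfade_loop(audio, loop_start, loop_end, fade_len=2048):
    # assumes mono audio and loop_start >= fade_len
    out = audio.astype(np.float64).copy()
    t = np.linspace(0.0, np.pi / 2, fade_len)
    fade_in, fade_out = np.sin(t), np.cos(t)     # equal-power curves
    pre = out[loop_start - fade_len:loop_start]  # audio leading into the loop start
    seam = slice(loop_end - fade_len, loop_end)
    out[seam] = out[seam] * fade_out + pre * fade_in
    return out

# a sample player would then cycle playback between loop_start and loop_end
```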

What are some of your favorite microphones for sampling?

Neumann all the way—they just sound so good and always work. I do most of my samples in L-C-R with the U 87 as a center mic and a stereo pair of KM 184s for the sides. Then I love the directional mics by Sennheiser. I’ve mainly used mics from the Sennheiser MKH 80 series and have done all my field recordings over the last year with a stereo pair of those.

Do sound designers influence each other’s work as much as musicians do?

Not for me. On the contrary, I try to do things in my sound design work that I haven’t heard anywhere yet, so my influence comes more from my own musical world or from my past, where I was influenced by a lot of electronic and acoustic music of all kinds.

What’s the best background or training for someone who wants to break into the field?

The best background is a fascination for audio and sonic art as a whole. Learn the basics from the bottom up first. Understand how sonic waves compose a sound. Also learn [to play] an instrument to understand the way a musical sound is shaped and formed so that we call it musical, as opposed to noisy. Learn about harmonics, the relation of overtones, and the overtone characteristics of certain instruments. Get a field recorder and start recording everywhere, at any time. Learn how to edit field recordings, how to recognize melodies and rhythms in a sound you find, how to filter them, and which components of a recording are relevant and which aren’t.

Learn about dynamic compression, equalization, phase, phase synchronization when recording with multiple microphones, phase modulation in a synthesizer, DC offset, all that boring stuff. Also learn about modulation, LFOs, envelopes, step sequencers, MSEGs, and how to create an expressive musical sound using a synth or sample player. Study granular synthesis, FFT, FM synthesis, and microphone techniques and types, and eventually imagine a certain sound and try to create it from scratch.
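
As a concrete starting point for the modulation basics Simon lists, here is a minimal synth voice: a sawtooth oscillator with LFO vibrato and a simple attack/decay envelope. Every value is an arbitrary assumption, and the naive sawtooth will alias; this is a sketch of the concepts, not production DSP:

```python
# Minimal expressive voice: saw oscillator + pitch LFO + AD envelope.
import numpy as np

def voice(freq=220.0, dur=2.0, rate=44100,
          lfo_hz=5.0, lfo_depth=0.01, attack=0.05):
    t = np.arange(int(rate * dur)) / rate
    # LFO modulates pitch by +/-1 percent, a gentle vibrato
    inst_freq = freq * (1.0 + lfo_depth * np.sin(2 * np.pi * lfo_hz * t))
    phase = 2 * np.pi * np.cumsum(inst_freq) / rate        # integrate frequency
    saw = 2 * ((phase / (2 * np.pi)) % 1.0) - 1.0          # naive sawtooth
    env = np.minimum(t / attack, 1.0) * np.exp(-3.0 * t)   # attack, then decay
    return saw * env

tone = voice()  # two seconds of a gently wobbling A3
```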