Q&A: Richard Devine

The modular mad scientist and in-demand sound designer takes us inside his new studio and sheds some light on his creative process

Richard Devine emerged 20 years ago as an industrial techno producer armed with a pawnshop ARP 2600 and a thrash metal mentality. As digital signal processing evolved, so did his passion for the fiercely nonlinear in Csound, SuperCollider, and Composers’ Desktop Project. His early years were spent recording oxidized robofunk and dislocated filters for IDM record labels, such as Schematic, Chocolate Industries, Warp, and Sublight. He was a laptop contortionist, performing in DIY venues worldwide. This led to long-running relationships with companies such as Native Instruments, who commissioned him to develop tools for signal mangling.

Over the past two decades, Devine has come full circle, both as an artist chasing electroacoustic tangents and as an in-demand commercial sound designer. In recent years he’s shunned in-the-box performances to become a figurehead of the rampant modular synth community, headlining showcases and panels when he’s not busy creating presets for software instrument developers, or supplying distinctive UI sounds, sample banks, campaign themes, and advert effects to companies including Barnes & Noble, LG, Nike, Sony, Apple, and Google, as well as video game designers. Though, he says, it all still starts as chicken-scratch outlines and scribbled Post-it notes.

Devine has launched a Vimeo series of in-studio Eurorack performances, complete with extensive patch notes (vimeo.com/richarddevine), which exhibits his enthusiasm for sharing his experiences. He took a few moments away from work on Daydream, Google’s virtual-reality environment, to discuss building a new studio, watching modular synths swing from dusty to desirable, and drawing inspiration from nature.

Devine's modular station comprises two Goike custom cases and eight 6U/12U portable modular cases.

You started in the mid-’90s hoarding secondhand hardware, transitioned to digital sound-generation ensembles, then dove headfirst back into compiling banks of blinking LEDs. What were your goals for your recent studio build?


When I built this space, we designed the room to be spacious, with everything accessible, because at one point years ago I had so much gear it was stacked in multiple closets, and I’d waste so much time looking for something before even using it. The more commissions I’d get, the more I realized I needed everything already running and ready, where I could just hit record. The room needed to feel clean and uncluttered. So, first I did something that was hard, but I recommend it: I got rid of a lot of gear I only used once or twice.

Did you hold on to any totems from your old workflow?


I still have an ARP 2600 that was one of the first things I bought, and it’s still a staple for sound effects. I grew up using it, understanding its signal flow and modulation potential, finding out what it was capable of at the most subtle and extreme levels, and that old-school approach stuck with me whether I was figuring out how to connect up objects in a program like Max/MSP or patching new Eurorack modules. If you try to do something simple with new gear—just listen to the oscillators, the filters and VCAs, try to make a simple snare or string pad—then you quickly realize what applications a synth is good for and where in your palette of sounds it sits. I often like to see if I can create an entire piece of music with just one synth or system.

The main mixing area centers on a Yamaha DM2000VCM, used for tracking all the hardware synths and drum machines.

Any gear you wish you hadn’t pruned?


I wish I still had an EMS Synthi AKS, but I sold it way, way before this studio. At the time I felt there was a shift happening where you could do such complex computational synthesis in the box, very detailed and exotic stuff. And it was becoming really hard to get parts and maintain things like the germanium transistors in some of my older synths. Now there is a crazy resurgence in that stuff, with people buying and servicing old synths.

About 12 years ago my friend Josh [Kay, of Schematic Records and Phoenicia] picked up some early Doepfer A-100 stuff and one night we played with it for hours. I was blown away by the possibilities of custom-designing a system from the ground up. So I bought a case, and they don’t call it “Eurocrack” for nothing. Soon I had two cases, and then there were people like Plan B, Livewire Synthesizers, Analogue Solutions, Make Noise.

Fast forward to the past five years and I was really missing how I made music with the ARP and early Roland modulars in the late ’90s.

You can do incredibly powerful stuff in the computer, controlling every microsecond of a piece, but it’s through the crude interface of a mouse, keyboard, and controller, which feels disconnected from the act of creating and manipulating sounds. I was missing the physical interactivity, that instant gratification of a hardware synthesizer, where you make 10 to 15 adjustments in seconds. For my professional work I’m still very much in the box, but for my personal compositions and performances I have shifted my focus to modular, using only external hardware and pedal effects like the Eventide H9 and Strymon pedals.


Devine's "synthesizer corner" is a mixture of classic analog and digital synths. So, that philosophy of immediacy influenced your hit-spacebar-and-go studio routing?


The entire idea was to center the studio around my travel setup, which is a MacBook Pro, a Teenage Engineering OP-1, and an iPad or two. With that I can get most of the sound design and editing I want done, whether I’m in a hotel, airport, coffee shop, etc., and then I come to the studio, where I can just bring up a channel and record all the hardware instruments. I’ve got a Yamaha DM2000VCM console, with all the synthesizers and drum machines, etc., routed into its 56 inputs, and that is connected to another computer that’s dedicated to instrument tracking.

I run Ableton Live, Logic Pro, Pro Tools, and Nuendo setups for different clients. But I don’t use my desk when mixing. I’ve got a separate analog rig that uses an Apollo 16 into a Dangerous Music 2Bus LT/Monitor ST/DAC ST combo for mixing and monitoring, then I’ll take the stereo mix and send one final pass through the Avalon VT-747SP. A lot of my mixes are just UAD-2 Satellite and Apollo 16 Thunderbolt OCTO processing, FabFilter Pro-Q 2 and Pro-C 2, Brainworx bx_XL V2, Soundtoys Decapitator, and iZotope Trash 2 plug-ins, plus Dangerous Music summing, which gets me the right coloration and control. For me it’s about keeping good dynamics. I don’t overcompensate with multiband compression to get a solid brick of volume out. I’m after lots of detail, clarity, and imaging, which is more about careful adjustments of levels and giving each voice its own space.

There are Genelecs all over for monitoring—including the 8250A, 8020C, and 6020A, all with dedicated subs—because they are in a lot of post houses and offer detail without being fatiguing, so I can listen for long periods at a time. I’ve also got them hooked up in quad and surround configurations. Plus, I have a Samsung soundbar, Avantone MixCubes, and Sonos systems for level checks.


Having a long history bridging the digital and analog worlds of synthesis, where do you find they overlap and are most divergent?

I’ve used Max since Version 2, when it was sold by Opcode Systems, and I’ve used Reaktor since it was called Generator, around 1998. People used to be scared to work in these environments, but Cycling ’74 and Native Instruments have done so much to make them easier to use that now people see each one is an environment just like a modular synth: You create a sound source, then decide how you want it to be manipulated, where that should go, and what it should then affect. Importing objects in the computer is an identical start to an empty Eurorack patch. Where it splits is that physical patching is all about your ears, what you’re hearing, and sometimes working with a computer can be misleading, because you might choose to embellish something just because you see a lull in the timeline, not because it will create the best audio output. You have to make decisions based on whether something sounds interesting, not on what it looks like.

Close-up of the workstation rig.

You recently contributed patches to the Moog Model 15 app for iDevices. Do you feel it’s a solid window into modular signal flow without the need for color-coded cable organizers?


I personally think that’s a great introduction for a lot of people who might not have dived into analog or modular synthesis. Picking the System 15 is perfect because it’s not overly complex and has a lot of basic things: a delay and filter section; you can understand what a VCA does and how the envelope section affects working with the VCA. You can understand how working with attenuators can create nuances within a patch. I think they did all the right things, to where both an extremely experienced person and a beginner can get stuff right away. Sure, analog behaves differently based on temperature changes, the age of parts, and other factors, and you can’t replicate those fascinating inconsistencies in the digital environment, but it’s still a faithful reproduction, to the point that most people will only find finely nuanced differences between the real thing and the app.
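The VCA-and-envelope relationship he describes is easy to see in code: a VCA is just multiplication of an audio signal by a control voltage, and an attenuator simply scales that CV before it arrives. A minimal numpy sketch of the idea (the oscillator, envelope rate, and attenuator value are my assumptions, not anything from the app):

```python
import numpy as np

sr = 48_000
t = np.arange(sr) / sr                      # one second of time stamps
osc = np.sign(np.sin(2 * np.pi * 110 * t))  # crude square-wave oscillator

env = np.exp(-t * 6)                        # decaying envelope used as a CV
attenuator = 0.5                            # scales the CV before the VCA

out = osc * (env * attenuator)              # a VCA is multiplication by a CV
```

Turning the attenuator down doesn't change the envelope's shape, only how hard it opens the VCA, which is exactly the kind of nuance he mentions.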

And is it those perfect imperfections that have you reaching for more and more Eurorack?

I did laptop sets for nearly 15 years, moving from Cubase to Ableton Live, using the Lemur controller, then Push and Max for Live, and those sets were great. But with Eurorack I really like the spontaneity, how something can go “wrong” at any second and shift the entire patch so drastically that the glitch becomes a happy accident, driving into a completely different space with variables I might not have thought of, and I can improvise based on the energy of the room. It forces you to concentrate on the performance. My patches aren’t something I just plug in and bring the volume up on. It’s all control voltage, which means there are fluctuations constantly moving from module to module, so the slightest thing can shift everything in the wrong direction. Once the sequencer runs I have to do everything: compose on the fly and decide when something comes in or is pulled out. It’s not like in the computer, where I can see there are just 16 bars of this instrument and easily overlap something else to move things along.

A lot of the patch snippets you’ve been posting to Instagram (instagram.com/richarddevine) have leaned more melodic. Is this because your spec work calls for more universally palatable sounds, and there is a bleed of influence between worlds?


No, I consider my artistic and commercial work to be completely separate worlds, even though they share gear. Designing UI sounds is interesting because they have to be [universal] and recognizable by anyone. They have to tell the user whether a device is connected, disconnected, or in some mode or state, with very simple elements: a pitch, a pattern, a sequence to convey the message. So I have had to study how envelopes, short sounds, and pattern recognition translate through an ambient environment. And you can learn interesting things about how frequencies sit in the spectrum of everyday noise and apply them to composition. However, in my modular sets I try not to repeat things at all. There are no recalls. I perform the patch, pull the patch, and explore new things.
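His formula for a UI cue, a pitch and a pattern carrying one bit of state, reduces to a few lines of synthesis. A hedged sketch (the frequencies, durations, and the rising-equals-connected convention are my assumptions, not any shipped spec of his):

```python
import numpy as np

SR = 48_000

def blip(freq, dur=0.09, sr=SR):
    """One short sine blip with a fast exponential-decay envelope."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t) * np.exp(-t * 35)

# Assumed convention: a rising two-note pattern signals "connected",
# the same notes falling signal "disconnected".
connected = np.concatenate([blip(660), blip(880)])
disconnected = np.concatenate([blip(880), blip(660)])
```

The two cues share every element except note order, which is what makes them recognizable as a pair even over ambient noise.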

Devine's mastering and mixdown setup features a UA Apollo 16, Dangerous Music 2Bus LT, and Avalon VT-747SP Class A tube stereo compressor/limiter with built-in EQ.

Both your personal and professional sound design incorporate a lot of field recordings. How do you capture these, and how do you balance real and fake space in mixes?


Recently I’ve had to collect real environmental recordings that aren’t processed in any way, captured at specific distances so that they match what a viewer is seeing in a scene. For that I’ve been using Sennheiser MKH 8040 RF condenser microphones. I have four, so I have one rig in X/Y and another in ORTF for ultra-wide stereo. And I’ve been experimenting with the Sanken CO-100K, which records an extended range up to 100kHz and gets an incredible amount of high-end detail. The Sennheisers also go up to 50kHz, which comes in really handy when you’re pitch-shifting stuff down at 24/96. I record everything at 24/96, so when I pitch it down it replays all this information at super-high quality, and you get some really interesting stuff that my older microphones, limited to roughly 20Hz–20kHz ranges, couldn’t capture. For downsampling stereo mixes, I usually use the Apogee Rosetta 200’s UV22HR 24-bit-to-16-bit dithering process, and I can hear the difference on natural sources.
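The payoff of those ultrasonic microphones is simple arithmetic: varispeed-style pitching down divides every frequency in the recording by the slowdown factor, so content captured at 40kHz lands at an audible 10kHz after quarter-speed playback. A rough numpy illustration of that idea (the function is my sketch of the principle, not his actual workflow):

```python
import numpy as np

def pitch_down_varispeed(signal, factor):
    """Tape-style pitch shift: stretch the waveform by 'factor' (linear
    interpolation), so playback at the original rate divides every
    frequency in the recording by 'factor'."""
    n = len(signal)
    new_idx = np.linspace(0, n - 1, n * factor)
    return np.interp(new_idx, np.arange(n), signal)

sr = 96_000                          # 24/96 capture, as in the interview
t = np.arange(sr) / sr               # one second
x = np.sin(2 * np.pi * 40_000 * t)   # ultrasonic content a 50kHz mic can catch

y = pitch_down_varispeed(x, 4)       # quarter-speed playback

# The 40kHz component now sits near 10kHz, inside the audible band.
spectrum = np.abs(np.fft.rfft(y))
peak_hz = np.argmax(spectrum) * sr / len(y)
```

A microphone limited to 20kHz would have recorded nothing up there, which is why the extended-range capsules matter for this trick.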

When I’m doing binaural recording I use the DPA Microphones SMK4060 stereo kit on my own head, held by iPod headphone clips for runners, the BudFits. For ambisonic recordings I’ve also been using the SoundField ST450 MKII system. And for a recorder I use the Sound Devices 788T, which is eight channels. Even when I’m just hiking to clear my head I always bring a Zoom H6 or Sony PCM-D100 recorder, because you never know what you’re going to run across. I try to plan the best times to avoid noise-pollution factors, but there are things you can’t escape, and that’s when I use iZotope RX Advanced to remove unwanted reverb and noises. Spectral Repair is your best friend.


In the digital environment I love abusing convolution technologies. They’re so much more interesting than just placing a sound in the typical halls, or the Taj Mahal, or whatever. You can take the amplitude and frequency information of one sound and then convolve it with another sound, almost like a crude form of sound morphing. For example, you can take a cymbal sound, use that as an impulse, and play someone talking through it; the cymbal impulse gets applied so it sounds like the cymbal is talking. A technique I like is taking white noise and doing long-tailed automation draws: panning the noise from the left speaker to the right, then taking two paths of white noise, crisscrossing them, tailing those out maybe four to eight seconds, and turning that into an impulse. Then you take that back into something like [Logic Pro’s] Impulse Response Utility and into Space Designer, play a dry snare drum through it, and it will almost perform that automation for you, adding movement that isn’t in the sample.
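The cymbal-as-impulse trick he describes is ordinary convolution with an arbitrary sound standing in for a room's impulse response. A minimal FFT-convolution sketch (the function name is mine; he uses Logic's tools, not code):

```python
import numpy as np

def convolve_ir(dry, impulse):
    """Imprint the spectral and temporal character of 'impulse' (a cymbal
    hit, a panned noise tail, etc.) onto 'dry' via FFT convolution."""
    n = len(dry) + len(impulse) - 1
    size = 1 << (n - 1).bit_length()  # next power of two, avoids wraparound
    out = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(impulse, size), size)
    return out[:n]
```

Feed a dry snare in as `dry` and a four-to-eight-second crisscrossing noise tail in as `impulse`, and the output inherits the tail's length and stereo motion, which is the effect he gets from Space Designer.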

How do you establish the right amount of highly stylized processing in a project?

I recently made some presets for Native Instruments’ Replika XT multimode delay, and I look at my contribution as a springboard for users who might not have thought to do something. I try to give an esoteric personality trait to an instrument or engine without going overboard, unless that’s what the company wants. But to start I treat it like any other instrument, like my first ARP 2600, and I test out all its facets until I run across some spontaneous thing that reveals character.

Is initiating a dialogue your goal with all the patch bay expressionism on the Vimeo channel?

The videos are definitely about experimenting, trying to do more with less, which is a different side of me. In the past I’d always flood the entire frequency spectrum with every sound possible. But lately I’ve been trying to choose very wisely what elements I have in the mix, translating them into something that can work over the course of five or six minutes of music. I’m trying to see if I can use not 1,000 modules and sounds but just six or seven elements that are very choice and can elegantly flow you through a composition.

I don’t have a record label at my back saying I need to make something for them to sell. I can just do things because I want to try them. I’ve got probably 150 tracks recorded with the modular, and I’m seeing how receptive people are. My idea for an album is ten tracks that are all modular, with a video recorded of each to accompany them. So I want to release the album as both a vinyl record and a catalog of ten patches with links for how the entire album was made, a chronological order of how the songs came together, so you can study the patch notes and video performance of each track in real time. I want the entire album to be something people can listen to, enjoy and reference. People can look back and see what my hands were doing, what modules are used; it will be this diary of musical modular patches. It’s always been important for me to share a narrative with sounds, but also to give people something that makes them want to research. My biggest motivation is to hopefully inspire everyone.

 We caught up with Devine to find out what he's been up to since we chatted. Click HERE to read our bonus interview.