Interview: Jeff Rona

The art of ambient

Jeff Rona is one of those artists whose entire career seems to embody the essence of “ambient music.” As a soundtrack composer, technologist, and musician, he’s ubiquitous yet often in the background. The directors he has worked with include Ridley Scott, Robert Altman, and Steven Spielberg, and his musical collaborations run the gamut from Philip Glass to Jon Hassell to Dead Can Dance. And he’s conceived and developed some of the most unusual Kontakt instruments on the market—Wide Blue Sound’s Orbit and Eclipse. But this is just scratching the surface.

Stepping back from film work last year, Jeff composed and produced one of the most impressive ambient albums in recent history, Projector (Wide Blue Sound Records). Seamlessly blending emotional performances ranging from live cello and guitar to spoken poetry—as well as Rona’s unique approach to processing acoustic material—Projector accomplishes the goal of being simultaneously ethereal and extremely complex.

As a synthesist and sound designer, I was fascinated by how hard it was to pin down the sources of his soundscapes, which are anything but conventional. Fortunately, Jeff was available to explain his production process, as well as give us a glimpse of what’s on the horizon for the year.

How did the idea of Projector come about?

As I was finishing up a beautiful and somewhat experimental Brazilian film, I was invited to be an artist at Soundtrack Cologne, an annual film and videogame music event in Cologne, Germany. I had already been invited to give a master class, but they brought up the idea of me doing a live concert. They didn’t have the budget for a full orchestra as they had in previous years, so the idea of something more intimate appealed to them. Having just worked on this rather ambient score, and having a lot of extra material that didn’t make it into the film, I asked how they felt about doing a concert based not on a score but on the underlying musical concept. They were really enthusiastic, and that became the genesis of the project. The result was quite unlike a traditional film score, and perhaps harder to define. In the end we did two concerts, one in the same theater where Keith Jarrett recorded his landmark album The Köln Concert.

How different was this process from doing a score?

On one level composing is composing—you begin with some sort of musical or emotional intent, then you use the musical tools you wish to achieve that goal. But of course, scoring film requires a composer to match musical intent with the images, plot, dialogue, and pace created by the filmmakers. There is no traditional musical structure in film music. It’s more like a literary structure, based on the storytelling. And while there are no set rules in scoring, there are certain conventions and commonsense approaches that do or don’t work. You’re often called upon not to be too distracting to the audience, which can be challenging when you’re trying to do something organically musical.

As I began the process of creating a concert-length live work, I began to realize so many constraints and creative shackles were removed—and I really embraced it. I started to do things that didn’t fit into my scoring work. It’s not radical, but I felt a certain freedom not working to picture and wanted to take advantage of that. So, I think the music for the Projector album reflects that liberated mindset.

In the many times I’ve listened to the record, it was difficult to discern the sources for the instrumentation. While some textures seem more obviously synthesized, others have a strong, organic quality. Would you share your process?

I worked in the same studio, using the same hardware and software I use day-to-day. Logic Pro is my main DAW, with a large collection of plugins that I’ve come to rely on over the years. I’d been working on some radical time-stretching experiments on that Brazilian film using Paulstretch, which is an amazing tool that’s been around a very long time. But I used it somewhat differently, processing some of the nature sounds of the Amazon that the filmmakers sent me from the shoot.

Instead of simply stretching them, which led to some amazing results, I would create suites of variations of a single sound, then load those into a sampler programmed to crossfade between them via MIDI. That gave me an organic but abstract sound that could morph fluidly. This process became the underlying bed upon which I wrote several of the album tracks.
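The crossfading-sampler idea is straightforward to sketch in code. Below is a minimal, hypothetical illustration in Python with NumPy: an equal-power crossfade across an ordered set of pre-rendered “stretch” variations, driven by a single morph position (in practice this would be a MIDI CC mapped to the sampler’s crossfade). It’s a conceptual sketch of the technique, not Rona’s actual patch.

```python
import numpy as np

def morph(variations, position):
    """Equal-power crossfade across an ordered set of sound variations.

    variations: list of equal-length mono buffers (np.ndarray), e.g. the
    same source rendered with different time-stretch settings.
    position: 0.0-1.0 morph control (think of a MIDI CC scaled to this
    range), sweeping from the first variation to the last.
    """
    n = len(variations)
    x = position * (n - 1)      # where we sit along the chain of layers
    i = min(int(x), n - 2)      # lower layer index
    frac = x - i                # 0..1 between layer i and layer i+1
    # Equal-power gains keep perceived loudness steady during the morph
    g_lo = np.cos(frac * np.pi / 2)
    g_hi = np.sin(frac * np.pi / 2)
    return g_lo * variations[i] + g_hi * variations[i + 1]

# Example: three "variations" (sine tones standing in for stretched
# field recordings), morphed a quarter of the way along the path
sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
layers = [np.sin(2 * np.pi * f * t) for f in (110.0, 220.0, 440.0)]
blend = morph(layers, 0.25)
```

Sweeping `position` slowly over time is what produces the fluid, organic morphing described above; the equal-power curve avoids the loudness dip a linear crossfade would cause.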


In the clicking, transient rhythms in “The Long Now” and “Sonos,” there’s a granular character, but it seems more complex than common approaches.

In “The Long Now” the rhythm began in Reaktor. It’s such a versatile place to create experimental elements. When I had something I liked, I printed it as audio in another track, then further processed the sound using FabFilter Timeless and Soundtoys FilterFreak. The original loop is actually pretty simple, but once processed with a weird delay and some modulated filtering…voila!

The rhythm in “Sonos” was done using Audio Damage’s Axon—an underrated rhythm generator if ever there was one. Again, it was processed through a complex signal chain using Sinevibes Deep, FabFilter Saturn, Audio Damage Replicant, and finally Glitchmachines’ Fracture. Much of my approach is to work with simple sources and then heavily process them to achieve interesting results.

“Cerulean Blue” and “Mani” are great examples of your creative approach to layering. The cello serves as an ambient lead, but the deeper textures and drones sound almost synthesized, with slight LFO drifts and vocal-like qualities. What’s happening there?

It’s a combo platter. In the case of “Cerulean Blue” the overall texture is made up of several elements. The electric cello plays the high lines, but is also playing subtler shifting drones that I’ve processed. To that I’ve added three layers of me playing EBow guitar—high, mid, and low. The more interesting elements are highly time-stretched sounds recorded in the Amazon rain forest. But to make it sound specifically tonal I run it through Zynaptiq’s Unfilter, which has the amazing ability to take complex and nontonal information and let only the desired pitch elements pass through. You can take virtually any sound and force it into just about any key or mode.

With “Mani,” it’s actually a much simpler texture. I took an old orchestral piece of mine and time-stretched it out to the horizon! It’s completely unrecognizable from its original form, but it injects some unique color that shifts around slowly. To that, I added EBow guitar, but pitched down a couple of octaves and ran through a lot of effects.

As with a lot of ambient music, reverb plays a huge role in the treatments, but there’s a clarity and air to your processing. What reverbs did you use?

There are many amazing reverb plug-ins out there, but I keep coming back to a handful. I love everything from C2, the Eventide Blackhole, and Lexicon, and I’m a huge fan of everything from Valhalla! There are also some lovely Reaktor patches that sound fantastic.

I don’t have a specific methodology about reverbs, except I try to keep them warm and modulated. I really adore reverbs that slowly move and shift and shimmer, while being careful to avoid muddiness. I think of processing and mixing much in the same way as an orchestrator thinks about the orchestra. You have different instruments playing different parts, and each one takes up a certain part of the range of human hearing. So if I have some warmer, darker sounds, I complement them with some thinner, brighter ones. The goal is not to let too many elements build up in the same frequency range. The end result is just so much more pleasing, and it’s not rocket science.


How did you work with the other musicians on the album?

I wasn’t sure at first if this was going to be a purely solo, electronic affair, or if I wanted to include other players in the live concert, or on a future album. But early on, after I had some rough sketches for maybe a third of the album, I was introduced by a mutual friend to British cellist Peter Gregson, who was visiting Los Angeles at the time. Peter came by my studio just to meet and have a chat about music. But he had his custom electric cello with him, and when I played him some of my sketches he just reached for his cello case and said “Plug me in.”

I honestly didn’t know what to expect, but as this was an experimental project I decided this would be part of the experiment. Peter improvised on a few of the tracks, and I was blown away.

I gave him some very light guidance of what I thought would fit, but I was mesmerized and delighted by his performances. After he left I sorted through different takes and began to build structures with my favorite parts. We continued to work together over the next several weeks via the Internet. I would send him my edits of his playing, and he would improve on those to create something even better. This back-and-forth developed a much stronger role for his cello.

While a few of these parts are clearly cello, others verge on synthesis. What was your processing technique for them?

I was processing Peter’s cello with a variety of harmonizers, delays, sequenced filters, and other plug-ins to create fresh sounds that often don’t sound like cello. Peter is such a melodic musician that, regardless of what I would do afterwards, it added a great sense of linearity to the tracks.


When I was a member of Jon Hassell’s band [Hassell is a frequent collaborator with Brian Eno], we had a process by which we would improvise as a group for an afternoon. Afterward, Jon would go through all of it, excerpt his favorite parts and send those back to us. The next time we rehearsed we would refine and refine his favorite ideas. After months of this it became an album. My process is a more modern version of that, but it’s still a creative conversation between musicians.

Was Gregson the only collaborator?

Actually, there are a few other musicians on the album. The brilliant film composer David Julyan is a dear friend of mine who was visiting me a few weeks later. Although he’s not trained as a guitar player, his method of using the guitar as a sound device is amazing! So I gave David free rein over several of the tracks, simply to let him come up with whatever mayhem he could. And again I edited, refined, and processed his material to start building more structure to the tracks.

The only artist on the album whose work was not improvised was the spoken-word piece “Like Water” performed by Yoko Honda, a Japanese musician and archery expert here in Los Angeles. I wrote a poem I wanted spoken in both English and Japanese. Once I had that in the track, I did some pretty sophisticated processing of the voice. I used the Antares Harmony Engine [Evo] plug-in controlled via MIDI. Not only does it allow harmonizing, but with a separate MIDI control you can alter how open or closed the harmonies are, which created a fascinating effect. I further processed her voice through a stutter plug-in and some delays. The end result has a slightly Laurie Anderson quality to it, but I think it has its own vibe.
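The MIDI-controlled “open vs. closed” harmony idea can be pictured with a tiny sketch. The function below is purely illustrative—the intervals and the linear interpolation are my own assumptions, not Antares’ algorithm: a single 0–1 parameter, which could be mapped to a MIDI CC, moves the harmony voices between a tight cluster and a wide-open voicing.

```python
def harmony_notes(root, spread):
    """Hypothetical harmonizer voicing with a MIDI-style spread control.

    root:   MIDI note number of the spoken/sung pitch
    spread: 0.0 (closed cluster) .. 1.0 (wide open voicing)
    """
    closed = [0, 3, 7]       # closed minor-triad intervals (semitones)
    open_v = [-12, 7, 16]    # open voicing: drop the root an octave, lift the top
    # Interpolate each voice between its closed and open position
    return [round(root + c + spread * (o - c)) for c, o in zip(closed, open_v)]
```

Riding `spread` in real time is what lets the harmonies “breathe” around the voice, closing into a cluster and opening out again as the CC moves.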

On a slightly different tangent, your company, Wide Blue Sound, has released two critically acclaimed Kontakt instruments: Orbit and Eclipse. How did these come about?

Every score I do begins with the process of sound design. It’s like a painter putting together their color palette before touching brush to canvas. And because I want my music to have its own unique character, I like to do as much of my own sound design from scratch, as opposed to relying on commercial sound libraries. I do use them often, but the heart and soul of my work comes from my personal sonic mayhem.

I was approached by Native Instruments about releasing some kind of library of my musical sound design. But they made it clear that they were not interested in simple collections of unrelated samples. What they suggested was the idea of some kind of synthesis or processing. To them, Kontakt is a synthesizer engine—they are not wrong—and in the time it took me to drive from their office back to my studio, I came up with the idea for what became Orbit: an engine that would combine up to four sounds in this unique rotating, spinning, morphing way.

I quickly built a scratch version and was really happy with the results. From there, my partner Nathan Rightnour and I developed a more sophisticated version and enlisted an experienced Kontakt programmer to build it out. I then created a larger sound set and continued to refine the engine, while my partner designed the user interface and handled all the other components needed to release a commercial product.

There’s a huge difference between a cool idea and a commercial product. Taking what we learned from Orbit we put out Eclipse some months later with a new sound set and an improved signal path. Both of them really caught on. We have an amazing roster of artists and musicians using our products in ways we never expected. It’s been incredibly exciting and we’re continuing to grow the products in new ways.

Each of those synths offers extensive layering options, some of which are evocative of your musical approach. How much of the source material did you create yourself?

Most of it. I’m not averse to using commercially available loops or sound libraries. I do all the time. But typically, by the time I’m done you’d be hard pressed to recognize the source, unless it’s something generic like orchestral or traditional rhythm instruments. In my scoring work, I find a balance between going for a unique personal sound and working under the pressures of deadlines.

However you want to describe it, there is something that unites all the different projects I do, whether it’s Projector or the wide variety of scores I do for film, TV, or games. Each assignment requires elements that push me in a unique direction. So like a character actor, there’s something that ties it together, while at the same time affording me the ability to do a lot of different styles of music.

Do you have anything else on the horizon for Wide Blue Sound?

Absolutely! We took a short hiatus before getting back to developing new products. Some things are coming very soon; others are in early development.

The goal of the company is to blend algorithmic musical processing with an approachable user interface. There’s a layer of intelligent synthesis that sits between you and the sound that’s more than just a [sample] player or even a synthesizer. We’re also working on some radical new interface ideas. If you’re interested, get on the mailing list for more information.