This article originally appeared in the August 1996 issue of Electronic Musician.
During the early 1970s, Alice Cooper's teen anthem "Eighteen" proclaimed the frustration of being thrust into the first terrifying stages of adulthood. While millions of teens identified with Cooper's angst, one teenager in particular was not, as the song stated, "living in the middle of doubt." That confident teen was Bob Ezrin, who, at nineteen years of age, was the coproducer of not only "Eighteen" but Cooper's entire debut album, Love It to Death.
Surprisingly, Ezrin's chart success owed little to the fact that he was a teenager producing records for other teenagers. His ears proved to be much more versatile than those of the stereotypical "with it" pop producer of the day. The young Ezrin possessed a vast knowledge of music theory and had developed formidable engineering chops. His production style exhibited a yin-yang approach, as he rebelled against standard recording practices but adhered to traditional songwriting structures and classical orchestration techniques.
Ezrin was also a master at exposing his clients' unexplored creative sides. For example, he convinced Cooper that the "devil's house band" had a softer side and scored massive hits with the ballads "Only Women Bleed" and "I Never Cry." Ezrin pulled the same trick with rock's Kabuki Kids, Kiss, by producing a lushly orchestrated ballad entitled "Beth." The song became their first smash single. His golden touch even benefited the esoteric, art-rock troupe Pink Floyd when he rearranged "Another Brick in the Wall (Part II)" into an accessible hit-single format.
These days, however, the only knobs Ezrin is likely to twist are the ones on his car stereo. He is currently president and cofounder of 7th Level, an interactive-software company that produced, among other consumer titles, the tremendously popular Monty Python's Complete Waste of Time CD-ROM. Despite the successful career change, Ezrin has not taken a permanent vacation from the record business. He doesn't discount the possibility of future production projects, and he is seriously considering starting a record label with his partner in 7th Level, saxophonist Scott Page.
Ezrin graciously interrupted his hectic schedule to share some production secrets with EM readers and recount a selection of fascinating stories behind the making of some of the '70s' biggest FM hits. Here's what went down.
Coproducing Alice Cooper's Love It to Death seems like an awesome responsibility for someone who was only nineteen years old. Did the album's executive producer, Jack Richardson, who had produced all of the Guess Who's smash hits, give you much creative latitude?
I was an idea guy, an arranger, and, to a certain extent, a performer, but I wasn't allowed to be very hands-on. The actualizers, from a technical point of view, were Jack and engineer Brian Christian.
However, every production or engineering technique that I used throughout my entire career was discovered while doing that first Alice Cooper record. After that, I was only refining what I had already learned. Jack taught me about the physics of sound, the characteristics of different microphone capsules, the importance of mic placement, and even the reasons why instruments are constructed in certain ways. He impressed upon me that true creativity must be built on a certain understanding of the craft, but what I didn't learn until later was that rules are made to be broken.
So when was it that you started breaking the rules?
It was during the recording of the album School's Out by Alice Cooper. I had moved to New York and was working with Roy Cicala, who owned the Record Plant. Roy used to set up these incredibly complex signal paths and effects chains, and he would train me by starting something and then leaving me alone. He'd say, "I'm going to get a drink of water," and he wouldn't come back! I'd be confronted by a snare drum track that was routed through a series of Pultec, Altec, and Urei compressors and then back into an input channel for some additional EQ. If I wanted a different snare sound, what was I supposed to do? He had left the room! Eventually, I learned that nobody would die if I turned a knob. So I became fearless and started turning knobs until I found what I liked.
The vocal tracks on that record sound very energetic and aggressive. What were some of the techniques you used to produce such an in-your-face sound?
Initially, I had followed all the rules about recording vocals the "correct" way, but when I listened to the tracks, they didn't sound real to me. I went reaching for whatever I could find to give the vocals some sense of power and space. The only things available were EMT plate reverbs and tape echo. I ended up combining tape slap and the EMTs to produce a long, thick reverb sound. I really started getting into playing games with slap and reverbs on School's Out.
I actually got a lot of my sounds out of really cheap stuff—I just used it right! For example, I almost always used a Shure SM57 to record Alice's voice. The trick to using the SM57 on vocals is compressing it to even the sound out and then getting gross with equalization. I would dial in some real tough-ass midrange and a lot of top end. If you stick that in your mix, you'll get a compelling, gut-wrenching rock vocal sound.
However, I learned never to EQ the vocals until after I had compressed them. You see, there are certain frequencies that are naturally predominant in everyone's voice. When I tried equalizing before patching in compression, I'd often bring out some of the low-end stuff. At certain parts of a singer's range, this EQ would produce a low-frequency hump that hit the compressor like crazy and caused the signal to sound squashed and dull. I discovered that it's better to first let the compressor level out the overall quality of the voice and then go in and start tweaking the specific frequencies I wanted to emphasize or de-emphasize.
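The ordering argument above can be sketched numerically. This is a hypothetical toy model, not a simulation of any gear Ezrin used: the "compressor" is an instantaneous soft limiter and the "EQ" is a flat gain stand-in for a low-shelf boost, just to show why boosting before the compressor squashes a peak while compressing first preserves the boost.

```python
import math

# Toy compressor: reduce anything above the threshold by the given ratio.
# (An instantaneous limiter, purely illustrative -- not a hardware model.)
def compress(x, threshold=1.0, ratio=4.0):
    if abs(x) <= threshold:
        return x
    over = abs(x) - threshold
    return math.copysign(threshold + over / ratio, x)

# Stand-in for a low-shelf EQ boost at the "hump" frequency.
def eq_boost(x, gain_db=6.0):
    return x * 10 ** (gain_db / 20)

peak = 1.0  # a vocal peak sitting right at the compressor threshold

# EQ first: the boosted peak slams into the compressor and gets squashed.
eq_then_comp = compress(eq_boost(peak))

# Compressor first: the peak passes at threshold, and the EQ boost
# is applied afterward to the already-leveled signal.
comp_then_eq = eq_boost(compress(peak))

print(eq_then_comp)  # well below the intended +6 dB
print(comp_then_eq)  # the full +6 dB boost survives
```

The same logic applies per-frequency in a real chain: a low-end hump boosted ahead of the compressor dominates the gain reduction and dulls everything else, which is exactly the "squashed and dull" result described above.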
On an emotional level, it's difficult to get vocalists to deliver exciting performances if they feel uncomfortable wearing headphones. I'd typically have them sing without headphones and let them monitor the tracks through some huge speakers. To diminish leakage from the rhythm tracks into the vocal mic, I'd use the old trick of putting the speakers out of phase. Somewhere between those two speakers will be a point at which the sound-pressure level is close to zero. All you have to do is have someone move the mic around while you listen in the control room for the spot where there's almost no leakage. Leave your mic in that spot. Now, your vocalist can stand in front of this great big PA system and feel the music surround them, just as if they were singing onstage.
The electric guitar tones on the single "School's Out" really evoke the larger-than-life sonic quality of a rock anthem. How did you record them?
We drove the signal paths crazy! On "School's Out," we didn't put the guitars through an amplifier at all: we just plugged them directly into a Spectrasonic mixing console and absolutely creamed the mic preamps. I really wanted the guitars to be bratty sounding, so we went for ultradistortion. The preamp level was turned up to infinity, so you'd hear this fuzz guitar as soon as the channel fader was brought up.
On the other side of the coin, how did you track those beautiful acoustic guitars on Pink Floyd's The Wall?
The key was microphone placement. Whenever I recorded an acoustic guitar, the first thing I would do is stand in front of the guitarist and listen to the performance. Then I'd plug one ear—because most microphones only hear from one pinpoint source—and move around, listening with my open ear, until I found the spot where the guitar sounded best. In that sweet spot, I would really pay attention to the tonal contour of the guitar. Is it rich in the midrange? Does it have a big, boomy sound on the bottom and very little presence? The trick is not to use a microphone that has the same tonal qualities as the guitar. For example, a Neumann U 47 would sound too muddy on a guitar with a boomy low end.
Your love of classical music usually managed to sneak into your production style. How did you convince hard rock acts such as Alice Cooper and Kiss to incorporate string orchestrations into their records?
First of all, my enthusiasm was always relatively infectious. I think the bands got caught up in the excitement of trying something that was new for them. For me, however, it was fundamental to apply basic music theory and classical orchestration techniques to rock and roll. That didn't necessarily mean putting strings on a track; I could also orchestrate for the basic rock-band instrumentation. I would say things like, "This sounds like a cello part to me. Of course, it's a guitar, but we're going to treat it like a cello and play a cello line."
On Alice Cooper's "Eighteen," for example, the guitar, bass, and drum parts were arranged so that there was a rhythmic bed, a counter melody, a basic guitar-riff melody, and a vocal melody. The four- or five-part orchestration was a very classical approach to arrangement that produced a thick, moving sound without anyone having to do anything too fancy.
Pink Floyd's "Another Brick in the Wall (Part II)" is one of rock radio's most successful singles. Did you know it was going to be a huge success?
I was sure that "Another Brick in the Wall (Part II)" could be a hit single, but Pink Floyd basically refused to do singles. Initially, I was concerned because the song was only one verse and one chorus long. I asked [Pink Floyd songwriter] Roger Waters for a second verse and a second chorus, and he basically told me to bugger off. So I ended up cloning the tracks and splicing the two parts together to construct a basic track composed of two verses and two choruses. Then I spliced in a combination of two drum fills to get into the song's climactic guitar solo. When I played it for Roger, his eyes lit up because he knew it was going to be a hit.
What are some of the wilder things you have done to elicit a great performance from an artist?
Well, we gaffer-taped Peter Gabriel ten feet up a pillar to get a vocal take for the song "Modern Love" [from Gabriel's 1977 solo debut Peter Gabriel]. The chorus went, "Oh, the pain. Modern love can be a strain." I just wasn't believing him, so I said, "Look, if you don't make it believable, you're going up the pillar!" We gave him three or four takes to try it, and then up he went. I turned to the engineer and said, "Mic him!" And that's how we got the performance.
There were so many other things we did! For example, we were having a particularly tough couple of days during the sessions for Cooper's Welcome to My Nightmare album, so I brought a traveling circus into the studio. This little troupe of jugglers, magicians, and dwarfs interrupted the session and did this fun routine. Cooper's band immediately started playing circus music, and it really brought everybody up. Immediately following that, the band cut the master take to "Only Women Bleed."
Considering that you cut your teeth during the golden age of analog recording, it must be amusing to watch all these digital audio zealots scrambling to find old tube gear to warm up their tracks.
The reason people like to use old Neve consoles and other tube processors is because tube technology has a tendency to bend as opposed to break. There is a physical component to a tube sound; it's almost like a stretching muscle. You can push and push the signal toward the breaking point, and you can actually feel the strain. Then, when you hear the signal start to break up, you can control the amount of distortion by how hard you hit the device with the input signal.
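The "bend versus break" distinction maps onto a familiar signal-processing contrast. As a rough sketch, a tanh curve (a common textbook stand-in for tube-style saturation, not a model of any particular unit) rounds the signal off progressively as you drive it harder, while digital hard clipping passes the signal untouched and then snaps flat at full scale:

```python
import math

# "Bend": a tanh transfer curve saturates gradually, so the amount of
# distortion rises smoothly with input drive -- you can ride the level.
def tube_like(x):
    return math.tanh(x)

# "Break": hard clipping is transparent below the limit, then flattens
# abruptly -- there is no in-between region to play with.
def hard_clip(x, limit=1.0):
    return max(-limit, min(limit, x))

for drive in (0.5, 1.0, 2.0, 4.0):
    print(f"drive {drive}: bend {tube_like(drive):.3f}, "
          f"break {hard_clip(drive):.3f}")
```

The controllable region between clean and fully distorted is what the passage describes as "feeling the strain": with the gradual curve, input level becomes a distortion control rather than an on/off switch.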
Having said that, I must say that I love the digital domain—I just hate what the audio business did to it. The market has frozen technology at a certain level. We've doomed ourselves to deal in an audio standard so far inferior to everything else that it's laughable. I recently flew to London with a guy from a telecommunications company. He said, "Your top-end sampling rate is 48 kHz? That's a joke! We have switches that operate at 300 kHz, and we're only talking about telephones! I thought you music guys were supposed to be into high fidelity."
But doesn't the 44.1 kHz sampling rate cover the 20 Hz to 20 kHz range of theoretically perfect human hearing?
Theoretically, yes. The 44.1 kHz sampling rate delivers an "on" and "off" for every frequency between 20 and 20,000 cycles, which should give you everything. But it really only covers the fundamental sound. It doesn't offer critical resolution for harmonically rich source sounds. And these harmonics are what give a sound a real sense of density.
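The arithmetic behind this can be made concrete. By the sampling theorem, a sample rate captures frequencies up to half that rate, so 44.1 kHz covers 20 Hz to 20 kHz fundamentals, but the upper harmonics of a bright source land above the cutoff and are simply discarded (the 5 kHz fundamental below is an illustrative example, not a figure from the interview):

```python
# Nyquist limit of the CD standard: half the sample rate.
SAMPLE_RATE = 44_100  # Hz
nyquist = SAMPLE_RATE / 2

print(nyquist)  # 22050.0 Hz -- just above the 20 kHz hearing limit

# Harmonic series of a bright source with a 5 kHz fundamental:
fundamental = 5_000
harmonics = [fundamental * n for n in range(1, 9)]

captured = [f for f in harmonics if f <= nyquist]
lost = [f for f in harmonics if f > nyquist]

print(captured)  # [5000, 10000, 15000, 20000] -- within Nyquist
print(lost)      # [25000, 30000, 35000, 40000] -- discarded at 44.1 kHz
```

Whether the discarded harmonics are audible is a separate (and contested) question, but the calculation shows what the passage means by the format covering fundamentals while truncating the harmonic series above 22.05 kHz.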
You can hear an example of how today's digital technology fails to document these types of sounds when you listen to a digital reverb. Toward the very end of the reverb decay, the sound just drops off. It's gone. However, when you hear the end of a note in an acoustical space such as the Notre Dame cathedral, the reverb will slowly fade until it disappears into a thick soup of audio interference. There is a warmth and a density that gives the sound life. This is what's missing in digital.
But just because a sound is reproduced using numbers doesn't mean it can't be "real." You can produce very warm and organic sounds in the digital world, but you need to have enough digital information available to do it properly. A 44.1 kHz sampling rate just doesn't cut it.
What is your primary goal when making a record?
My primary responsibility is to make the recording experience as productive and constructive as possible for the people involved. My secondary responsibility is to ensure that there is some empirical quality level in the product. If all you care about is the product and you don't care about the people, the product will suffer because the people are what make it great.