The Recording Revolution Retrospective

Winter NAMM, Anaheim, California, 1990. I was in a darkened hotel room with a few other people where Opcode’s StudioVision, the artist formerly known as a MIDI sequencer, was running on the Mac du jour. It was all as expected, except for one thing: Next to it was a hard drive, its little LED winking knowingly, as it played back two tracks of digital audio. But the audio wasn’t only in sync with the MIDI data; it was living in the same environment. Snuggled in with the MIDI tracks were two more tracks, with waveforms instead of piano roll data. At that moment, the MIDI+digital audio revolution fired the first shot — and I guess one of the first casualties was my head exploding.

Of course, digital audio wasn’t new. It had insinuated its way into our musical lives in many ways. But StudioVision was profoundly different: Two formerly isolated worlds had collided. Instead of destroying each other, though, they morphed into a much, much bigger world.

But what kind of a world has been created? We know about the technology, the innovations . . . when Steinberg introduced their world-changing Virtual Studio Technology, when hard drives became cheaper per gigabyte than ADAT tape, when DAT breathed its last as CD- and DVD-ROMs put a few more nails in tape’s coffin. Yet . . . are we better off? Is the new boss the same as the old boss? What price have we paid for the revolution, and what has it paid us in return for our loyalty to — and fascination with — zeroes and ones?


Some would say the revolution started when TEAC introduced the 3340 4-track tape recorder in the early ’70s. And they’d be right, because recording became accessible to many more people. But it was only a piece of the puzzle. Mastering was still totally out of reach, signal processors were expensive pieces of hardware, “cloning” a tape to back it up was impossible, and cheap mixers sounded, well, cheap. Besides, hitchhiking along with every recording was hiss, modulation noise, distortion, ever-diminishing high frequency response, head wear, and tape stretch.

Even those who romanticize the era of analog recording probably wouldn’t want to live there full-time. Sure, a high-end, superbly maintained analog system has a certain sound quality that digital can’t touch. But we’re talking about the highest of the high end. For anything less, digital knocks analog to the floor. And even those to whom money is no object, who can indeed afford to own and maintain the finest analog systems on the planet, often use “analog” recording to mean capturing to tape in order to use it as a signal processor — then immediately bouncing to digital to preserve the “analog” quality.

So more people than ever can record. But is this necessarily a good thing? Yes, but. . . .


Perhaps the most dangerous digital dilemma is the ability to create music that sounds flawless, but has none of the magic we want from music. What is it about digital that sucks the life out of music?

The answer is simple: There’s nothing about digital that sucks the life out of music. Digital doesn’t kill music, people do. It’s the people who quantize notes to 100% strength, and use pitch correction not as a creative tool, but as a quick fix that substitutes for getting a good vocal in the first place. It’s those who record one take and figure they’ll edit it into shape, rather than say to the artist “I think you can do better.” It’s the people who create “Frankenparts” by cutting and pasting elements that never existed as sequential events in order to create a “perfect” take — which seems about as “real” as a high-priced call girl who spends half her income going to Mexico for plastic surgery.
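The “100% strength” complaint can be made concrete. In most sequencers, quantize strength sets how far each note is pulled toward the grid rather than snapping it outright. Here is a minimal sketch of that idea — the function name and tick values are invented for illustration, not any particular DAW’s API:

```python
def quantize(times, grid, strength=1.0):
    """Pull note-on times toward the nearest grid point.

    strength=1.0 snaps exactly to the grid (the lifeless extreme);
    lower values keep some of the performer's timing feel.
    """
    quantized = []
    for t in times:
        target = round(t / grid) * grid        # nearest grid point
        quantized.append(t + (target - t) * strength)
    return quantized

# A slightly loose performance, in ticks (assume 480 ticks per beat)
performance = [0, 475, 968, 1431]

print(quantize(performance, 480, strength=1.0))  # machine-perfect
print(quantize(performance, 480, strength=0.5))  # half-way: feel survives
```

At full strength every note lands dead on the grid; at 50% each note moves only half the distance, so the push and drag of the original performance remains audible.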

A scarier consequence is the new image of “recording”: one person, sitting in a room, staring at a computer monitor, and spending more time editing than creating. Until the digital age, with very few exceptions, music was a social event — with all the interpersonal messiness, arguments, support, controversy, love, compromise, and resolution that entailed. Sometimes, great albums resulted from great conflicts. For the solo recordist, the only conflicts are system conflicts from drivers that haven’t been updated. Yes, we’ve even outsourced our conflicts to technology.

Friction creates heat, and because many of today’s recordings lack friction, they lack heat. It’s not a generational thing, because the under-30 generation is what keeps the Led Zeppelin and Pink Floyd back catalogs humming along. Prince is the exception who proves the rule: He records entire CDs by himself, and they kick butt. But he spends huge amounts of time playing with others, testing out ideas, and interacting. When he goes into the studio, it’s informed with the live experience.

And as mentioned in the December 2004 issue, this trend goes against the genetic hard-wiring that makes our brains work the way they do. For details check out the article, but suffice it to say that the “right-brain” creative thinking you’d use in songwriting, and the “left-brain” linear thinking you’d use in engineering, are two different activities that use different parts of your brain. Engaging the linear part tends to shut down the creative part, and vice-versa.

Of course, plenty of people still record as groups, whether to analog or digital. But those who don’t could learn something by getting social, or at least, playing out in front of an audience. In fact, one of digital’s under-exploited opportunities is how easily it facilitates recording groups. Portable recording rigs with great fidelity can be set up just about anywhere; computer-based hosts can load templates designed to let a group start recording with a couple of mouse clicks. (Try that with an analog recorder, unless the assistant engineer did all the alignment work and rewound the tape the night before.)

On the plus side, the composer now has more power than ever in realizing ideas; Bach would likely have written even more material had sequencing been available. Overall, though, digital has changed the dynamic of recording irreversibly — sometimes for better, sometimes for worse. It’s our responsibility not to be so seduced by the power of digital editing that we forget it’s the take that matters, not how thoroughly you can edit it.


When Steinberg introduced Virtual Studio Technology in 1995 (now up to version 2.4, with provisions for Mac/Intel machines and native 64-bit processing), it was ahead of its time. Computers of that era had typical processor speeds in the sub-100MHz range, memory was expensive, and hard drive costs — while diving downward — had a long way to go before they reached bottom. Still, the ability to say “goodbye” to external hardware meant the handwriting was on the wall: Software, not hardware, was the future.

Shortly thereafter Propellerhead Software shifted the virtual instrument movement into high gear by introducing ReBirth, a stunningly realistic re-creation of the long-defunct Roland TB-303 Bassline and TR-808 drum machine. This is also where the trend of virtualizing vintage gear in software began, and to Roland’s eternal credit, they had no problem with what was clearly a tribute from a bunch of very clever synth fanatics. It wasn’t long before the flood of virtual Minimoogs and other classic synths began.

Putting everything inside the computer brought new opportunities and also, new limitations. Gone was a musician-friendly control surface, replaced by a single mouse and some QWERTY keyboard equivalents. To be fair, this was a trend that had been going on for quite some time; digital synthesizers had previously shrunk the usual forest of knobs and switches to a few buttons, a data wheel, and a cryptic LCD. Paradoxically, the only relief from this was a computer-based editor/librarian.

In response, we got control surfaces. The Peavey PC-1600 was a pioneering controller, but today we have a ton of choices, including modular, expandable systems (e.g., Mackie Control) with moving faders and LCDs that spell out the parameters you’re controlling. And Reason 3.0 introduced a protocol called Remote, which allows control surfaces to support Reason with pinpoint precision.

At the Winter 2006 NAMM, Native Instruments announced Kore, their “unified field theory” of how to deal with software synths. Kore is a hardware/software system with a controller that’s optimized for use with software synthesizers. Rather than have to deal with different interfaces, you can use a single controller as an entry point to all your soft synths. Furthermore, a database tagging system allows finding patches in a far more convenient way.

So score a major plus for digital: We’ve gone from having to write down knob settings on patch sheets, to balky cassette interfaces that saved and played back patches, to editor/librarians that finally put things on a computer, to a system designed to streamline the process of patch selection, make it easier to play soft synths, and provide a common interface so gestures learned on it can be applied to any synth.
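The database-tagging approach behind Kore-style patch browsing is simple enough to sketch. The patch names and tags below are invented for illustration — the point is that searching by attribute beats scrolling through thousands of numbered presets:

```python
# Hypothetical tag-based patch database (names and tags invented).
patches = {
    "Fat Bass":     {"bass", "analog", "mono"},
    "Glass Pad":    {"pad", "digital", "bright"},
    "Warm Strings": {"pad", "analog", "warm"},
}

def find(*tags):
    """Return the names of patches carrying every requested tag."""
    wanted = set(tags)
    return sorted(name for name, t in patches.items() if wanted <= t)

print(find("pad"))            # every pad in the library
print(find("pad", "analog"))  # narrow by combining tags
```

Each extra tag narrows the result set, so “warm analog pad” takes one query instead of an afternoon of auditioning presets.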


Reason wasn’t the first “virtual studio” piece of software — Arturia’s Storm preceded it. But Reason was the virtual studio that captured the imagination of the musical world. With its whimsical graphics, built-in sequencer (which almost didn’t make it into the product, but that’s another story for another time), ease of use, and great sounds, Reason hit a home run the millisecond it was introduced.

And why not? It was truly a complete soft studio, a self-contained world with the brilliant addition of ReWire, a protocol that allowed coupling Reason with other digital recording software. In a sense, Reason used digital technology to resurrect the modular synthesizer concept, where you weren’t locked into a technological straitjacket. But it was also a “best of both worlds” situation: Although you could do your own patching, and get pretty eccentric if that was your thing, Reason’s little digital brain could also “auto-patch” modules with reasonable intelligence.

On the hardware front, though, the “virtualizing a studio” award would have to go to Creamware. Their SCOPE system (introduced in 1998) put enough DSP power on a card to virtualize serious mixers, processors, and instruments — then included both software I/O to hook into your host sequencer, and hardware I/O for connecting with the real world.

With both these systems, digital went from replacing your multitrack to replacing your studio. It shrank hardware into ones and zeroes, decimating the cost of entry and forever changing the definition of “recording studio.”


Changes in technology are no longer smooth crossfades, as the ADAT multitrack tape recorder proved. It sold well for longer than anyone expected, then sales crashed to nothing. DAT? Same thing: a good run in pro studios, then it caved. MIDI sequencers? When the MIDI + digital audio sequencer appeared, MIDI-only sequencers became as irrelevant as yesterday’s disposable pop hit.

It’s ironic that digital, which finally allows a way to preserve sound virtually forever by cloning to new storage media, is becoming synonymous with “disposable.” You can still buy strings for a guitar made in the ’40s, but just try to run a program that ran on a Mac Plus on today’s far superior Mac OS X operating system. Music has gone from an “albums” market to a “singles” market, with artists appearing and disappearing as quickly as flowers after a spring rain. Life cycles for technology get shorter and shorter; the CD had a run of almost two decades before it started to fade, but the DVD is about to be eclipsed by newer variations not that long after it was the darling of the consumer electronics world.

The next revolutionary trend is having live performance become as digitized as the studio, while blurring the line between the two. Think of a product like the Open Labs Neko, Muse Receptor, or Manifold Labs Plugzilla: all of them host plug-ins — which were meant to virtualize hardware and move it into a software-based studio — in a hardware device optimized for live performance. How’s that for a twist?

Guitarists are using plug-ins live instead of carrying around a pile of amps and guitars, and digital mixers allow not just recall, but even automated venue tuning. The Variax 700 digital acoustic guitar from Line 6 gives an acoustic sound without feedback, while artists like Shania Twain run a sequencer to mute mics when no one’s singing into them.

Software-wise, the program that defines this live/studio crossover is Ableton’s Live. Is it a DAW? A musical instrument? A new way to compose? Something that’s part jazz, part rock, and part techno? The answers are yes, yes, yes, and yes.

Another trend is “order out of chaos.” Kore, mentioned above, is one example. Reason devising a standardized control interface is another. Apple casting its lot with Intel and Unix is yet another. But we’ll see digital used more and more in the future to unite rather than divide: more commonality of file formats, more seamless file translations, better restoration of analog thanks to digital, and more powerful emulations of analog-based technology.

And the sleeping giant in all this is video. It seems few musicians realize that the digital revolution has affected video as well; it’s now as approachable as digital audio. As mentioned, a technological changing of the guard seems to happen these days as suddenly as a meteor hitting and wiping out the dinosaurs. Soon, the idea of producing “audio only” media will seem as antiquated as producing TV shows in black and white. Musicians will need to learn about video, or partner with people who already know the ropes. Anything that seems out of reach now to the average musician — ultra-high sample rates, video recording your band in high-def, terabytes of RAM, hard disk arrays that make data loss just about impossible, the end of moving parts — they’re all part of the very near future.

The revolution may or may not be televised, but it certainly has been digitized. What’s really interesting is that even though some might think the revolution is over, the new establishment created by that revolution is just beginning.