The Electronic Century Part IV: The Seeds of the Future

May 1, 2001

FIG. 1: Joel Chadabe performing at the New Music New York festival at the Kitchen in New York City in 1979. The antennas are modified theremins that were custom-made by Robert Moog. Here Chadabe is using them to "conduct" the first Synclavier, which is on the table behind him.

At the end of the 1960s, two distinct but parallel paths of technical innovation traversed the field of electronic music. One of the paths, leading toward a future of digital audio and digital signal processing, was computer music. It was neither musically nor technically an easy path to follow. But the difficulties of computer-music development, such as the lack of real-time feedback and the need to specify music in computer code, were offset by the promise of creating any sound imaginable—not to mention the advantages of precise control, repeatability, and nearly indestructible storage.

The other path of technical progress, followed by many musicians, led to the development of the synthesizer. Analog synths, many of which could be played like traditional instruments, opened up a new world of electronic sound in performance. With the help of hugely successful recordings like Wendy Carlos's Switched-On Bach and Emerson, Lake & Palmer's single "Lucky Man," synthesizers were becoming standard in virtually every band's instrumentation.

SYNTHESIZERS OF THE '70S

By the beginning of the 1970s, it was clear that electronic sounds were hot and that electronic music could become a viable industry. In fact, the market exploded during the decade, with many new companies developing new instruments, and the technology itself advanced quickly. As we moved from the transistors of the '60s to the integrated circuits of the '70s, computers and analog synthesizers became less expensive and easier to use, and they were often joined together in what were called hybrid systems.

In several experimental studios—including those at Bell Telephone Laboratories in Murray Hill, New Jersey, and the Institute of Sonology in Utrecht, the Netherlands—computers were used as sophisticated sequencers to generate control voltages for analog synthesizers. Emmanuel Ghent's Phosphones (1971) and Laurie Spiegel's Appalachian Grove (1974) are examples of music created at Bell Labs; Gottfried Michael Koenig's Output (1979) exemplifies music composed at the Institute of Sonology (see the sidebar "Recommended Resources").

The most important trend of the '70s, however, was the increasing accessibility of digital technology. With the invention of digital synths, the analog and digital paths—which had wound their separate ways through the landscape of electronic music in the '60s—began to converge. These new instruments combined the performance capabilities of analog synthesizers with the precision of computers.

In 1972 Jon Appleton was director of the Bregman Studio at Dartmouth College, which housed a large Moog modular system. Appleton asked Sydney Alonso, a faculty member at Dartmouth's Thayer School of Engineering, about using a computer to control this system. Alonso's advice was to forget the Moog and build a digital synthesizer. Together they did, calling it the Dartmouth Digital Synthesizer. Cameron Jones, a student at the college, wrote the software. Alonso and Jones then formed a company called New England Digital and, with Appleton's musical advice, went on to create the Synclavier.

The Synclavier was a computer-and-digital-synthesizer system with an elegantly designed keyboard and control panel. In September 1977, I bought the first Synclavier, although mine came without the special keyboard and control panel that Alonso and Jones had so painstakingly designed (see Fig. 1). My idea was to write my own software and control the computer in various ways with a number of different devices. For example, in Follow Me Softly (1984) I used the computer keyboard to control the Synclavier in a structured improvisation with percussionist Jan Williams. In 1983, Appleton composed Brush Canyon for a Synclavier with both the keyboard and the control panel.

By the late '70s, digital synthesizers were under development at research institutions such as Bell Labs and the Paris-based organizations Groupe de Recherches Musicales and Institute for Research and Coordination of Acoustics and Music (IRCAM). The market was full of analog, hybrid, and all-digital synthesizers, drum machines, and related devices. These products were manufactured by a long list of companies, among them ARP, Crumar, E-mu Systems, Kawai, Korg, Moog Music, Oberheim Electronics, PPG, Rhodes, Roland, Sequential Circuits, Simmons, Synton, and Yamaha. Technology was advancing quickly, the level of creativity was high, a new mass market was emerging, and price was increasingly important. High-end products were quickly adapted to a larger market. When Fairlight Instruments put the first sampler on the market in 1979, it cost about $25,000; by 1981, E-mu's Emulator was selling for $10,000. It was an exciting time, with new and powerful technologies appearing at increasingly affordable prices.

THE BEGINNING OF MIDI

Although innovation, creativity, and adventure were in the air at the end of the '70s, there was also a large measure of chaos in the market. Standardization was nonexistent: if you bought a synthesizer from one manufacturer, you had to buy other products from that same company to maintain compatibility. The marketplace was fragmented, with no fragment large enough to warrant major investment. In the view of Roland president Ikutaro Kakehashi, standardization was necessary to make the industry grow. With a global market unified by a digital standard, a company of any size could develop and sell its products successfully.


FIG. 2: Among the earliest digital samplers was the Ensoniq Mirage. It was supported by numerous third-party manufacturers that offered both hardware accessories and software enhancements.

In June 1981, Kakehashi proposed the idea of standardization to Tom Oberheim, founder of Oberheim Electronics. Oberheim then talked it over with Dave Smith, president of Sequential Circuits, which manufactured the extremely successful Prophet-5 synthesizer. That October, Kakehashi, Oberheim, Smith, and representatives from Yamaha, Korg, and Kawai met to discuss the idea in general terms.

In a paper presented in November 1981 at the AES show in New York, Smith proposed the idea of a digital standard. At the NAMM show in January 1982, Kakehashi, Oberheim, and Smith called a meeting that was attended by representatives from several manufacturers. The Japanese companies, along with Sequential Circuits, were the primary forces behind sustaining interest in the project, and in 1982 they defined the first technical specification of what came to be known as the Musical Instrument Digital Interface, or MIDI. At the January 1983 NAMM show, a Roland Jupiter-6 was connected to a Sequential Circuits Prophet-600 to demonstrate the new MIDI spec. After some refinement, MIDI 1.0 was released in August 1983.

The adoption of MIDI was driven primarily by commercial interests, which meant that the specification had to represent instrumental concepts familiar to the mass market. Because that market was most comfortable with keyboards, MIDI was basically a spec designed to turn sounds on and off by pressing keys. For some musicians, this was a serious limitation, but most felt that the benefits of MIDI far outweighed its shortcomings.
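
To make that keyboard orientation concrete, the heart of the spec is a set of short channel messages: pressing a key sends a three-byte note-on, and releasing it sends a note-off. Here is a minimal sketch in Python (the channel, note, and velocity values are illustrative, not taken from any particular instrument):

    # A minimal sketch of MIDI 1.0 channel-voice messages.
    # The channel, note, and velocity values below are illustrative.

    def note_on(channel, note, velocity):
        # Status byte 0x90 = note-on; the low nibble selects channel 0-15.
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel, note):
        # Status byte 0x80 = note-off; a release velocity of 0 is common.
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    # Press and release middle C (note 60) on channel 1 (index 0).
    print(note_on(0, 60, 100).hex())   # '903c64'
    print(note_off(0, 60).hex())       # '803c00'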

FROM FM TO SAMPLES

In business terms, MIDI was a smashing success. Its universal format allowed any company—new or established, large or small—to present the world with an original concept of music.

In 1983, Yamaha introduced the first monstrously successful MIDI synthesizer. The DX7 was a hit not only because of its MIDI implementation, but also because it sounded great and was reasonably priced at less than $2,000. To generate sounds, the DX7 used frequency modulation (FM), which John Chowning had developed at Stanford University in 1971 and which Yamaha had licensed in 1974.

FM results when the amplitude of one waveform, called the modulator, is used to modulate the frequency of another waveform, called the carrier. As the amplitude of the modulator increases, the spectrum of the carrier spreads out to include more partials. And as the frequency of the modulator changes, the frequencies of the partials in the carrier spectrum change. In other words, by changing the amplitude or frequency of the modulator, a performer can change the spectrum's bandwidth and the timbre of sounds. The early advantage of FM synthesis was that simple controls could cause major changes, making instruments like the DX7 very popular for live performance.
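
For readers who want to experiment, here is a minimal sketch of simple two-operator FM in Python using NumPy and SciPy. The frequencies and index sweep are arbitrary choices for illustration, not the DX7's algorithms; as the index rises, the spectrum audibly widens:

    import numpy as np
    from scipy.io import wavfile

    # A minimal sketch of two-operator FM synthesis.
    # All parameter values here are illustrative.
    sr = 44100                          # sample rate in Hz
    t = np.arange(sr) / sr              # one second of time values
    fc, fm = 440.0, 220.0               # carrier and modulator frequencies
    index = np.linspace(0.0, 5.0, sr)   # modulation index rising over time

    # The modulator's amplitude (the index) sets how far the carrier's
    # frequency swings, and therefore how wide the spectrum becomes.
    signal = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
    wavfile.write("fm_sweep.wav", sr, (signal * 32767).astype(np.int16))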

Throughout the '80s, Yamaha continued to develop new applications of FM synthesis in a line of instruments, while many other companies—Akai, Korg, and Roland among them—developed their own synthesizers. Roland, for example, released the Juno-106 in 1984 and the D-50 family in 1987. To a growing number of musicians, however, the main disadvantage of synthesized music was that it sounded electronic. As it turned out, most MIDI musicians wanted emulative sounds. They turned to samplers, which allowed any sound, whether trumpet riff or traffic noise, to be recorded and played back at the touch of a key.
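
The core trick of those samplers, transposing a recording by playing it back at a different rate, can be sketched in a few lines of Python. This is plain varispeed resampling under illustrative assumptions; real instruments added looping, filtering, and envelopes:

    import numpy as np

    # A minimal sketch of sampler-style transposition by varispeed
    # playback: reading the same recording at a different rate.
    def transpose(sample, semitones):
        rate = 2.0 ** (semitones / 12.0)        # one semitone = 2^(1/12)
        positions = np.arange(0, len(sample) - 1, rate)
        # Linear interpolation between stored sample points.
        return np.interp(positions, np.arange(len(sample)), sample)

    # A one-second 220 Hz sine stands in for a recorded sound.
    sr = 44100
    recording = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
    up_a_fifth = transpose(recording, 7)        # faster, higher, shorter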

In the early '80s, E-mu Systems had broken through the first major price barrier in the sampler market with its $10,000 Emulator. In 1984, Ensoniq introduced the Mirage at less than $1,300 (see Fig. 2). And in 1989, E-mu lowered the bar even further. Its Proteus, a sample-playback device that came with 256 prerecorded samples and an exceptionally simple interface, cost less than $1,000.

The electronic-music industry continued to grow throughout the 1980s. By the early '90s, the market was overflowing with synthesizers, samplers, and other MIDI hardware, but attention was beginning to center on software development.

SOFTWARE BEGINNINGS

A MIDI software industry had already emerged in the mid-'80s. For example, Opcode Systems established itself in 1984 with a MIDI sequencer for the Macintosh and almost immediately expanded its product line to include David Zicarelli's DX7 patch editor. At the same time, other companies were forming and releasing similar software, among them Steinberg Research in Hamburg, Germany, and Mark of the Unicorn in Cambridge, Massachusetts.


FIG. 3: Intelligent Music's M algorithmic software was a remarkable tool for generating music on a Macintosh. The software also had a brief life span on the PC and has recently been reintroduced for the Mac by Cycling '74, a company founded by M's inventor, David Zicarelli.

As personal computers got faster and less expensive and computer-based MIDI sequencers became more commonplace, other MIDI software applications were developed. In 1985, Laurie Spiegel wrote Music Mouse, a program that contained harmony-generating algorithms. In 1986, Zicarelli developed two applications, M and Jam Factory, for Intelligent Music, which continued to develop other interactive-composing programs during the years that followed. Of particular interest was M, an interface of musical icons that controlled algorithms (see Fig. 3). Given a melody or other input, a composer could use M to generate an infinite stream of rhythmic and melodic variations, ending with something distinct and original. For example, I composed After Some Songs, a group of short improvisational compositions for electronics and percussion, by using M to transform some favorite jazz standards.
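
M's algorithms were its own, but the general idea of interactive variation is easy to suggest in code. The Python toy below is a hypothetical example, not M's actual method: it takes a melody as MIDI note numbers and returns a randomized variation.

    import random

    # A toy illustration of algorithmic variation in the spirit of M;
    # this is not M's actual algorithm, just the general idea.
    def vary(melody, density=0.8, seed=None):
        rng = random.Random(seed)
        out = []
        for note in melody:
            if rng.random() > density:
                continue                     # occasionally drop a note
            if rng.random() < 0.25:
                note += rng.choice([-2, 2])  # occasionally shift a step
            out.append(note)
        return out

    # MIDI note numbers for an illustrative input melody.
    tune = [60, 62, 64, 65, 67, 65, 64, 62]
    print(vary(tune, seed=1))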

In 1985, Miller Puckette went to IRCAM to develop software for the 4X, a digital synthesizer built by Giuseppe di Giugno. By mid-1988, Puckette had developed a graphical modular control language that he called Max. At about the same time, Zicarelli saw a demonstration of Max and, after discussing it with Puckette, developed the language as a commercial product, first with Intelligent Music and then, after Intelligent changed directions in 1990, with Opcode Systems. Max was released in late 1990 and remains an essential tool for many customized music software applications.

The first digital audio programs were also developed in the mid-'80s. Among them were MacMix, written by Adrian Freed in 1985, and Sound Designer, a Digidesign product released the same year. In 1986, working in conjunction with a company called Integrated Media Systems, Freed added a specialized hardware device and called the system Dyaxis. And in 1988, taking advantage of increasing computer speeds, larger hard drives, and digital-to-analog converters, Digidesign released Sound Tools, which established an industry standard in audio editing. Digital audio was fast becoming accessible to musicians.

TRENDS INTO THE '90S

Everything expanded throughout the 1990s. The market filled with increasingly sophisticated synthesizers, samplers, drum machines, effects generators, and an enormous variety of modules, each of them doing something different and offering unique sonic possibilities. A large secondary market of patch panels and other MIDI-management gear formed around the needs of professionals with racks of devices. Software applications, including sequencers, patch editors, effects processors, and hard disk recording systems, also permeated the market. In fact, there was so much to learn that the following joke was frequently heard: "What's a band?" "Three guys reading a manual."

Some musicians may have felt a few pangs of nostalgia for analog equipment and sounds, but the 1990s were largely a digital decade driven by the availability of ever-faster microprocessors. Not surprisingly, as personal computers kept getting speedier and more powerful, digital audio became increasingly software based.

But the advances of the 1990s reached far beyond simply editing sound. Digital signal processing (DSP), which allows composers to transform as well as synthesize sounds, became an important component in digital audio systems. One of the most complete DSP applications to appear in the late '90s was MSP, created by Miller Puckette and David Zicarelli and marketed through Cycling '74, a company that Zicarelli formed in 1997 to develop DSP software as well as to make M available again.
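
A small Python example suggests the kind of transformation DSP makes possible. This is a classic ring modulator, offered purely as an illustration; it is not MSP, which is a graphical patching environment:

    import numpy as np

    # A minimal sketch of one classic DSP transformation: ring
    # modulation multiplies a sound by a sine wave, shifting its
    # partials to the sums and differences of the two frequencies.
    def ring_modulate(signal, sr, mod_freq=300.0):
        t = np.arange(len(signal)) / sr
        return signal * np.sin(2 * np.pi * mod_freq * t)

    # Example: transform one second of a 440 Hz tone.
    sr = 44100
    tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
    processed = ring_modulate(tone, sr)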

Among the pioneers in DSP systems for composers is Carla Scaletti. In 1986, she began creating a software synthesis system that she called Kyma (Greek for wave). By the following year she had extended Kyma to include the Platypus, a hardware audio accelerator built by Kurt Hebel and Lippold Haken that sat alongside a Macintosh, received instructions, and generated sound (see Fig. 4). Scaletti's 1987 composition sunSurgeAutomata demonstrates the sound-processing and algorithmic abilities of the Platypus. By 1990, Scaletti and Hebel had upgraded the hardware to a system called the Capybara. In 1991, they formed Symbolic Sound Corporation and shipped the first complete Kyma system, available initially for the Mac and shortly thereafter for the PC. With its evolving hardware and continual upgrades, Kyma remains one of the most powerful sound-design systems available today.

INTO THE 21ST CENTURY

As we look to the future, it's hard to know which innovations will have the greatest impact on our lives and our work. Although we can assume that digital audio technology will keep improving as computing horsepower increases and prices drop, predicting exactly how this will play out in our studios isn't easy. I asked several leading figures in the music-technology field for their thoughts on what the next decades will bring. Here are some of their predictions:

Craig Harris (composer, author, and executive editor of the Leonardo Electronic Almanac): "New instruments will have enormous flexibility in both the sonic realm and the modes of interaction, such that composers can create in the way that is most effective for them, performers can realize works in ways that work best for their own personal styles, and audience members can benefit from a rich variety of interpretations. This is one realm that distinguishes electronic instruments from traditional instruments, in that there is no preconceived sonic realm or method for interaction that is inherent in the machine. For the first time, we have instruments that will have their limits established more by our imaginations than by the laws of acoustics."


FIG. 4: Pictured (from left to right) are Bill Walker of the CERL Sound Group, Kurt Hebel, and Carla Scaletti with the Platypus at a sound check before a November 1989 concert in Columbus, Ohio.

Carla Scaletti (composer, software developer, and president of Symbolic Sound Corporation): "What seems to interest us is the process of making music. Bootlegs of tour improvisations on original album material are more sought after than the finished albums. Some musicians are beginning to post MP3 versions of 'works in progress' on the Internet, so all of us can witness and participate in the process of exploration and refinement that goes into a 'finished' album. Every album that is released immediately spawns multiple offspring in the form of remixes. Interactive and immersive environments like computer games require music that can be 'traversed' in a multiplicity of ways; each path through the game results in a new piece of music. The 21st century will be 'the composition century,' where 'objects' (like finished albums) will be virtually free on the Internet, while the creators of those objects will be highly sought after."

Daniel Teruggi (composer and director of the Groupe de Recherches Musicales in Paris): "If we put our analytical ears on, we see that there is still a great difference between a recorded sound and the sound produced and propagated by an acoustical device. Loudspeakers, microphones, amplifying systems, and digital conversion are the elements of sound processing that still have to achieve what I would call a more `realistic' image of sound."

David Wessel (researcher and director of the Center for New Music and Audio Technologies at the University of California, Berkeley): "Standard general-purpose processors like the PowerPC and Pentium are now capable of real-time music synthesis and processing. Laptops will become the signal processors and synthesis engines of choice, at the core of the new performance-oriented electronic music instrumentation. I'm confident that we will also see the development of a new generation of gesture-sensing systems designed with music in mind, including a number of the common interfaces like drawing tablets and game controllers adapted for the intimate and expressive control of musical material. And I see, and am beginning to hear, the emergence of an electronic music that might be more akin to chamber music or that of a small jazz group where musical dialog and improvisation play essential roles."

David Zicarelli (software developer and president of Cycling '74): "The computer, synthesizer, and tape recorder have become the new folk instruments of industrialized cultures, replacing the guitar. An overwhelming number of recordings are being produced in the electronica genre right now, and there is no sign that this will stop anytime soon."

You may now be wondering how to take advantage of the resources available to you right away and in the years to come. Start by thinking about what kind of music you want to write and which tools will best help you reach your goals. Explore the Web and read magazines such as EM for new developments in hardware and software. Above all, learn the history: read books on the subject and study recordings by the pioneers as well as the current movers and shakers. A 100-year tradition is waiting to be explored, and the more you know about the past, the better you can shape your own future.

Joel Chadabe is a composer, past president of Intelligent Music, author of Electric Sound, and president of the Electronic Music Foundation. He can be reached at chadabe@emf.org.

RECOMMENDED RESOURCES

For an overview of electronic-music history, read Electric Sound, by Joel Chadabe (Prentice Hall, 1996).

For an overview of MIDI, read MIDI for the Professional, by Paul D. Lehrman and Tim Tully (Amsco Publications, 1993).

The following compact discs feature music mentioned in this article:

After Some Songs (Deep Listening) is a group of Joel Chadabe's abstractions of jazz standards, for computer and percussion.

CDCM Computer Music Series, volume 3 (Centaur), includes Carla Scaletti's sunSurgeAutomata, for the Platypus.

CDCM Computer Music Series, volume 6 (Centaur), includes Jon Appleton's Brush Canyon, for Synclavier.

CDCM Computer Music Series, volume 24 (Centaur), includes Joel Chadabe's Follow Me Softly, for Synclavier and percussion, and Cort Lippe's Music for Clarinet and ISPW.

Computer Music Currents 2 (Wergo) includes Emmanuel Ghent's Phosphones, composed at Bell Telephone Laboratories in 1971.

Gottfried Michael Koenig (BVHaast) includes Koenig's Output, composed in 1979 at the Institute of Sonology.

Women in Electronic Music 1977 (CRI) includes Laurie Spiegel's Appalachian Grove, composed at Bell Labs in 1974.

These and other interesting items are available from CDeMusic at www.cdemusic.org.

The Electronic Century, Parts I–III

Part I: Beginnings

Part II: Tales of the Tape

Part III: Computers and Analog Synthesizers
