Remembering David Wessel: The Breakfast Club


(L-R): Tom Oberheim, Roger Linn, Dave Smith, Jaron Lanier, David Wessel, Keith McMillen, John Chowning, and Max Mathews. This group of technology and electronic-music pioneers gathers to talk shop at a small Berkeley, Calif., coffeehouse every week, and most of them were part of the discussion for this article.

Photo by Steve Jennings

We were deeply saddened to learn of the sudden passing of David Wessel this week. In addition to serving as Director of the Center for New Music and Audio Technologies (CNMAT) and Professor of Music at the University of California, Berkeley, David Wessel was an innovator in the live performance of computer music, with a special focus on real-time controllers, and a friend and mentor to many. In remembrance, we thought we'd bring you a look back at a classic feature on The Breakfast Club, a group of pioneers who spent years getting together regularly for a long-standing coffee date that included plenty of deep discussion about the future of music technology; in fact, there's a pretty good chance that some of the synths you're playing right now were born in one of these meetups.

Bay Area colleagues and friends are invited to join CNMAT in sharing stories and celebrating David's life on Friday, October 17; click here for details.

As EM celebrates a quarter-century of music-technology coverage, it seems appropriate to look toward the future rather than dwell on the past. And who better to discuss the future with than six men who helped shape music technology, in some cases well before EM (and even Polyphony) was a gleam in its founder's eye?

Since our panel discussion titled “The Evolution of Electronic Instrument Interfaces: Past, Present, Future” at the 125th AES Convention in 2008, I've wanted to meet again with Roger Linn, Dave Smith, and Tom Oberheim and follow up on some of the topics we touched upon, such as new directions in gestural control and the continued popularity of the analog synth. Fortunately, it's not difficult to get them together because they form the core of the Dead Presidents Society, which meets regularly for coffee near the University of California, Berkeley, campus. (The name refers to the fact that each had been in charge of his own company.)

These days, they call themselves the Breakfast Club because the group has grown to include other pioneers in the field, including faculty members from the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University and the Center for New Music and Audio Technologies (CNMAT) at UC Berkeley, two of the most important research centers for music technology in the world. Consequently, on March 4, 2010, I was thrilled to have three additional club members—Don Buchla, Max Mathews, and David Wessel—join in on the discussion.

Rather than have them reminisce about the Good Old Days, I wanted to hear their thoughts about the issues that have yet to be addressed, despite the huge technological advances they've witnessed. It didn't surprise me that they had strong feelings about the subject and, at times, wildly contrasting opinions.

The discussion lasted nearly an hour; here are some highlights...

EM: I'd like to begin by talking about where electronic instruments could be going rather than focus on where they've been.

Tom Oberheim: Well, that leaves me out [laughs].

EM: Last time we met, you were being nudged back into business. Have you been surprised by the success of the new SEM analog synth?

Oberheim: Yeah, very much so. Dave warned me that it would do better than I thought. And I think Roger warned me, as well. I thought maybe I'd sell a few a month. But it's been better than that.

EM: What kinds of instruments would you design if the vintage-style analog synth market weren't so lucrative?

Dave Smith: I'd be in big trouble [laughs].

EM: Would you?

Smith: Look at the product line.

EM: Where would you want to go instead? Not just with sound machines, but including gestural controllers.

Smith: I'm the wrong person to ask that because my interest is more in the sound than in the control side of things. When it comes to the control side, I'd rather go with something that everybody uses and is used to—something that is easy to buy and easy to develop. I'm not a mechanical designer, by any means, and that's a huge part of any sort of alternate control system. These other gentlemen have a lot more to offer in that sense.

EM: Will we ever get to a point where digital sound is equal to analog in popularity or acceptance, in terms of what people say they want? Will that ever even out?

Smith: Probably. It's almost more of a preference thing at this point. It depends on how an instrument is used. If it's buried in the mix, who can tell the difference between analog and digital? If you're playing it solo, then, yeah, you're still going to hear the difference.

Someday [digital instruments will] get better and better at being sloppier and sloppier. As I always say, as a designer, if you design something digital, you spend all your time adding slop in. If you're designing analog, you spend all of your time trying to take the slop and the noise out. Somewhere the two may join, I suppose, in the future. If you don't put time limits on it, everything will happen, someday. I don't know if that's a very good answer.

These 32 pressure-sensitive touch pads form the SLABs instrument used by David Wessel. It can send 96 channels of control data (32 each of x, y, and pressure). Ethernet is used for I/O, with Core Audio-compatible drivers for Mac OS X that send gesture data as audio.

EM: No, it's very interesting. I'm trying to figure out why people want analog gear. What is it that draws them to it? Is it just the sound?

Smith: There are two sides to it, I think. One side is the actual analog electronics. But the other side is being able to get a concise musical instrument with a certain set of controls that doesn't change and will be the same in 10 years, and doesn't change with operating systems and all that. Most people use the term digital to mean soft synths.

I saw an ad a couple of weeks ago for some soft synth that said, “It's going to take you a lifetime to figure out everything that this instrument can do.” And I kind of scratched my head: If I'm playing a musical instrument, do I want to spend a lifetime just learning what it can do? Or do I want to be able to play it, and play it the same way tomorrow, and the day after, and the day after that?

So a lot of people are getting analog instruments now because they want something that they can touch—turn knobs—and it always does the same thing. They're not clicking through menus. They're not bothering with software and having to update it every couple of months. Some software synths don't work anymore because the company stopped supporting them: They're not porting them to the latest operating system. That's another one of my predictions from a long time ago—that the software synth you buy today will not work at some point in the future because it won't get ported forever. So it's more than just the analog-versus-digital thing.

EM: When will we see a commercially viable instrument that has that software extensibility, but that would be recognizable by millions of people as an instrument, as opposed to only a dozen or so people?

Smith: That's the challenge. We all know that the electronic instruments are capable of doing all sorts of things with the right controllers. But the difficult part is getting the right controller that works well, and that people will actually try to use and try to learn.

Linn: A couple of points come to mind. Number one, there is something about the nonlinearities of the way that an analog synth responds to things like feedback or various types of control. It's less predictable. And, one analog instrument is going to be a little bit different from another one of the same model, just like a guitar. There may be a number of Martin D-28 guitars, but each one is a little bit different, and you have to find the one you like. One of the things that is difficult to emulate in software is that collection of nonlinearities.

But there's something else that is going on, too. I was reminded of this the other day when my nephew, Justin, sent me an email with a link to his new CD, and his band is called Vinyl Film. On the album cover it shows the three guys—it's a guitar band—standing in old-fashioned telephone booths. It occurred to me—and I don't know if this is true or not—but maybe it's a movement against all the change that happens at an increasing rate in society today, to be able to use things that are tested by time. Things that we can rely upon that aren't going to change.

Dave and I were talking about this last night: You buy a software synth, and then Windows 7 comes out, or Snow Leopard comes out, and it breaks. But you buy one of Tom's synths, or one of Dave's synths, or one of Don's synths, and they work. They work the same way because there is no operating system in the computer to change. And there are knobs there: you turn it, and it's more like a musical instrument. I think that's part of the magic with it, too.

Oberheim: You can ask the same question about tube amps. In the case of tube amps, I think it's purely sound. It's hard to fool musicians.

We're all involved in technology, whether it's the latest Macintosh, iPhone, Windows 7, whatever. And somehow the same people—us or musicians—might want the latest, greatest digital technology in our computers and our tools, but we may not want it in our musical instruments. I don't know what the explanation is for that.

(L-R): Tom Oberheim, Dave Smith, Roger Linn, Gino Robair, and Don Buchla at the 125th AES convention in 2008.

Photo by Larry the O

David Wessel: I've been pulling out Max patches that I made in 1989, and they just come up. And they're working, just like they did.

Max Mathews: That's an exception.

Wessel: It may be. But I think things have gotten better. The computers are more reliable. I have less fear of crashing than I did in the past, and I like the convenience and the flexibility. I'm really interested in rolling my own material. I have trouble with what I call the found-object notion: “Well I found this neat sound.” I want to know, in some mathematical sense, why that works so that I can preserve it going into the future. My take is that the situation isn't so bad.

Then on the control side, for me, when you add more control in interesting ways, you gain a lot of extra sound quality. In other words, a lot of what we associate with quality is in the control of the sound.

EM: Your SLABs controller [for Cycling '74 Max/MSP] is a three-dimensional controller.

Wessel: The inspiration for this came from years of working with Don's Thunder, which I loved because it had pressure—and a very sensitive implementation of pressure. So you could use dynamics and play softly and loudly, which is, unfortunately, neglected in so many controllers today. Things play at mezzo-forte all the time, but usually with the gain knob turned up [laughs]. And compressed, and on all the time. Whereas, I'm really interested in the kind of thing you can get out of dynamics.

We had to build some new stuff to use, and the touchpad thing is very compelling. I used it last night, and I came away feeling like I'd been playing an instrument. And I get to practice. But this idea of practice, too, is very important. People are reluctant to make the investment with their bodies and practice a lot. I don't see that going away. Analog instruments, acoustic instruments—they require an investment in getting chops and that kind of thing. I think that's very important.

Mathews: That's true for both analog and digital instruments.

Smith: It's a vicious circle with controllers because nobody spends the time to learn them. They don't want to spend the time to learn them because they're afraid that, in two years, it's not going to be around. And it's not going to be around in two years if nobody takes the time to learn them.

EM: That's the issue with off-the-shelf software that breaks when you move to a new operating system. It may take you three years to learn how to use it, but then it's obsolete by the time you figure out where all the menus are.

Smith: I see it becoming worse now. And my perception is that the software is all becoming disposable. I see that mostly with the apps now on the iPhone. There's just so much of it, and it's all cool stuff—that's the problem!

EM: Is it a distraction from coming up with a real instrument that you would master?

Smith: Exactly. If you have 35 apps to pick from, to play around with on your iPhone, then which one do you pick and how much time do you spend on it? Or do you learn the five quick tricks and then move on? And it'll get worse with the iPad, probably.

EM: That brings up a point: In the acoustic world, and even in the analog electronic world, there are instrument-controller paradigms. But in the digital world, it's a lot more open.

Wessel: But I think you could identify some paradigms. For example, the tablet controller. One of our people, Michael Zbyszynski, set out to make a method for playing with a pointing device—actual exercises, like you'd find in a method book of some kind. There are spatial gestures. These things are pressure-sensitive, as well; you can add that feature in right away.

I think percussionists, in particular, can adapt to a variety of acoustic configurations—typically, every piece has a different setup. So I think they're perhaps the most receptive to new controllers because they're ready to make the transition. And the spatial-layout issue is going to be with us for a long time. The hand-waving thing is probably going to be around for a while. That was there in the Theremin; we know there are disadvantages with that.

If you look at the kind of controller technologies that are in [Microsoft's] Natal, you actually have this sort of body model built into the video processing. Well, that same company that did the original Natal had pretty good hand models that they could extract from the video itself. People in the industry have said, “Well, let's do a real Guitar Hero, where there's a real guitar and we're now looking at the hand and getting all the detail out.” I think we might see that because we have the processing to do it.

Max Mathews playing his Radio Baton.

Photo by Marjorie Mathews

Mathews: I want to mention something that could serve as a slight antidote or delay for obsolescence. I wrote a program in roughly 1964 or '65: Music V. Eventually I went on to Csound, which Barry Vercoe wrote, and so I never used the program anymore. But recently, Bill Schottstaedt revived this program, took my code, and made it run on a Macintosh, using my original instructions. And, of course, the Macintosh runs about 10,000 times faster than the IBM 7094. This thing will play the scores that were written by a lot of musicians in the 1960s.

Now how did he do that? Well, it was very simple. I wrote the original program in a compiler called Fortran, and Fortran still exists and is still maintained. You can run it on a current Macintosh. Bill didn't have to do too much work for this—he just put my code into it.

This inspired Bill to write a program that perfectly emulates the Samson Box—CCRMA's very powerful hardware digital synthesizer—to run either on a Macintosh or on a Linux computer. The only things that remain of those [Samson Box pieces] are the original scores in digital form and an aging analog tape. So [John] Chowning has now resynthesized a number of his programs that he didn't have good files for.

The lesson from this—and it is not a prevention of eventual extinction; I don't know how to make anything live forever—is to write as little as possible in basic machine language and write as much as possible in a compiler language that will be maintained by someone else into the future.

Smith: You can always port it: There's no question about that. It's just that somebody has to port it. And as a designer, I would rather not have to keep working on my old products and keeping them running. I'd rather work on a new product.

Linn: You were talking about new controllers earlier, and somebody mentioned the chicken-and-egg problem. I think it depends on the controller. There is this recent controller released in England called the Eigenharp, which does a great job of recognizing independent finger pressure.

And regarding learning a new controller, my experience is that learning to use finger pressure to control the continuous envelope of sound is very fast to learn and very intuitive. I think this was proven by the old Yamaha CS-80 that had independent key pressure. People learned that very, very quickly and were able to articulate the different notes of chords on it.

I think the only problem right now is that there is not a good enough control surface to recognize multitouch with independent finger pressure. There was a company called TouchCo—they were in business for about six months, and then got bought up and shut down by Amazon. But they had a wonderful multitouch device with independent finger-pressure control. And I think one of the things that's going to be driving this, as it seems to be driving other technology, is what's useful in a mass-market product like a laptop computer.

One of the problems with the touch interfaces that are happening right now is that none of them recognize pressure. So it's kind of like having a mouse with no mouse button: You can move the pointer around the screen, but you can't click it. Or you can push very, very large buttons with your thumb, like on an iPhone, but you can't actually move the pointer and then add a little bit more pressure to click that small button. So it means that, without pressure sensitivity or some other trick like that, you're going to have to redesign all of your user interfaces for touch interfaces.

I think these technologies are growing, and what we'll see at some point soon is that touch sensitivity, in some form with multitouch, will arrive on a laptop or on a screen in low-cost form. And at that point, people will be able to write software instruments that are able to take advantage of the pressure sensitivity and make far more natural-sounding instruments.

EM: David, you mentioned percussionists being adaptable to these kinds of instruments. Is that going to be one of the main controller paradigms developed for the popular market?

Wessel: Percussionists are much more willing to change their setup, develop a new technique, adapt to a specific musical goal, and so on—that's their job in many ways. Composers who write for them often ask them to do extreme things. They're good candidates for new controllers.

If you look on the side of the road, you can see controller after controller after controller in the ditch. We did a thing at CNMAT where we brought out, I think, 30 different models, which we gathered together and put on a table. And they were all obsolete. And yet, there were certain features of some of them that were consistent. For example, the spatial-layout controller.

One of the arguments, which Chowning maintains, is that no repertoire was developed for them. That's a good argument, I think. Then, of course, there wasn't a commitment on the part of the builder. A guitar-maker is going to build something, refine it, rebuild it, refine it. Whereas the people offering these instruments didn't do that; they're kind of one-shot deals. And that happens among the big players, too.

Roger Linn demonstrates a new instrument concept that he and Ingrid Linn are developing. It comprises a multitouch, pressure-sensitive TouchCo control surface overlaid with a grid of semitone rows offset by fourths, which control their custom Cycling '74 Max/MSP sound-generator patch. Each finger has independent pitch (x), timbre (y), and expression (z) control.

Photo by Ingrid Linn

Don Buchla: I've developed a great number of controllers and control techniques, and so on. The pattern is that the ones that are accepted, used, and developed further are those that are most closely linked to the thought. It's amazingly illustrated [by this story]: I work a lot with bionics, usually with amputees. I was impressed by what a woman said to me just three days ago. She no longer has to think about picking something up. She no longer has to think about a movement. She just moves—that is the word she used. She just moves her arm when it happens. She doesn't think about it upfront.

If you play an instrument, you don't want to think about it. The things that contribute to thinking about it and then playing are, one, latency, obviously; and two, nonfamiliarity with the process and the outcome of the process.

The gesture has to be spatially relevant. The percussionist is a good example. I don't agree with David's observations. I think the percussionist is the last person that's going to embrace technology—primarily because of latency, and because of the usefulness of tapping a thing, like a table, and hearing the sound immediately come from the table. You don't hear it come from a speaker up there. You hear it come from the table. And it sounds like a table, and its decay time is natural. We've learned about those things since we were born—how things should sound in nature. And we can create new sounds, but nevertheless, they obey the old laws. They sound, now, when we touch. It goes, now. It doesn't go 10 ms later.

You see jazz groups fall apart on a stage that's too wide because the piano can't hear the drums or because there's too much latency there. They can't play in time. It's essential to music. Much music, not all music. But we have that immediacy, and that's what I think makes a successful controller: It preserves the immediacy that we're familiar with and we've grown up with.

EM: How has the response been to the ring technology and the spatial things in your instruments, as opposed to holding the Lightning rods, for example?

Buchla: The ring technology was developed just to accompany the tactile technology that is characteristic of some of my instruments. It does not have a higher response, right now. It has a negative response.

EM: A negative response in what way?

Buchla: People don't want the rings when they're playing a surface. Even combined. So it was a failure.

Wessel: I guess we could say the same about breath controllers with keyboards. It's an encumbrance. That's the problem with the rings: People don't want to have something on their hand.

Buchla: That's true. But the Lightning has the advantage of a drum stick. That is, you have an accelerated, easy-to-measure motion at the tip. You can do something like that [demonstrates], instead of like that [demonstrates].

The ring works for some kinds of controls—if you want a level. If you want a gesture, like that, it doesn't do it. You can't sense the rotational acceleration that you can with the Lightning.

EM: But the mallets in the Marimba Lumina had that capability, right?

Buchla: They're extensions. They give you that 6 inches or 12 inches of additional extension, which a percussionist uses to good advantage.

EM: The response to that controller was pretty good, though.

Buchla: Yes.

EM: Is that because it''s based on a drum stick?

Buchla: It's because it capitalized a great deal on marimba technique, which is already highly developed. So a person could come in and play, effectively, a marimba, and add to it the capabilities that we added to it. And that's considered a very good technique. The drawback is that there are not that many marimba players, so it doesn't appeal to those that market such things. It doesn't have a market.

EM: Do you feel that people took full advantage of the motion and tracking capabilities of the mallet, as opposed to just the striking capabilities?

Buchla: Some did, yes.

Wessel: I think it's a remarkable instrument. I never thought about this thing about the stick. There's a kind of magnifying effect on the velocity with which you can hit things. It's almost like having a whip, where the end of the whip is actually moving at the speed of sound. We looked at how fast sticks move, and it's 120, 130 miles per hour. And so you need a very refined controller to take what we can give with a stick.

And one of the problems with many of the multitouch controllers that the commercial world has built is that they don't really have a high enough sampling rate to accept the gestures that drummers can make, and hand drummers in particular.

Let's say you have something you might call a hand-force image: You're going to look at the image of the hand as it makes an impression on a tactile, touch-sensitive surface. If you do the numbers, you can show that you need about 300 times the current video processing power to actually get an image out of that which you could use to control something else. So multitouch is going to come along, to be sure, but it's going to be for the pretty sluggish interfaces, I think, to start with. We're looking at what TouchCo did, and the Lemur: They sample at pretty low rates.
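As a rough back-of-envelope illustration of Wessel's point (the scan rates below are ballpark assumptions, not specifications of any real controller), a stick tip moving at 130 mph covers a surprising distance between samples at typical touch-scan rates:

```python
# Illustrative only: how far a ~130 mph drumstick tip travels between
# samples at a few plausible scan rates. The rates are assumptions.
MPH_TO_MS = 0.44704            # miles per hour -> metres per second
tip_speed = 130 * MPH_TO_MS    # about 58 m/s

for rate_hz in (60, 200, 2000):  # video-ish, typical touch, Eigenharp-class
    mm_per_sample = tip_speed / rate_hz * 1000
    print(f"{rate_hz:>5} Hz: tip moves ~{mm_per_sample:.0f} mm between samples")
```

Even at 2,000 Hz the tip travels nearly 3 cm between samples, which is why gesture capture for drumming demands far higher rates than most commercial touch surfaces provide.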

EM: But isn't the data transfer pretty slow?

Wessel: Well, I'm not so worried about the data transfer. In our work, we're using large numbers of channels over Ethernet. I don't think that's going to be the bottleneck. It's the [computer] horsepower that you need to do the processing on the gestures. Gestures can be very rich.

With Ethernet you get connectivity and an interface that's ubiquitous. The development we see coming down the pike is Ethernet Audio Video Bridging [AVB], which I think is actually going to win out, given that Apple and Meyer Sound, for example, are very committed to AVB. We've now shoehorned Open Sound Control into AVB—at least we have a proposal. You get low-latency behavior using clock-synchronization technology like IEEE 1588. I think we're going to see a solid possibility of really good connectivity without the sluggishness.

However, I don't think the interfaces are going to come along, initially, that will get rid of it. It just takes too much for some company to build a satisfactory touch controller.

EM: Isn''t there a medical application or military application that we can sponge off of?

Wessel: There are certainly those things. But, again, the medical application is that you want to watch people's movements in bed: It's slow. We have to just up the sampling rate. And when you talk to the companies, they say, “What will be the commercial application of that?” [Laughs.] “How many drummers are there that need that?”

EM: Everyone's potentially a hand drummer, right? With drumming circles and the right controller, couldn't you sell a million of them?

Smith: Well, Mattel thought that, at one point. Didn't they come out with some little hand-drum things that went nowhere? It's the same thinking: Everybody taps their steering wheel when they listen to music, right? So of course everybody's going to buy something that does that. It was probably the wrong thing at the wrong time.

EM: Speaking of Open Sound Control, what is the next protocol that is going to surpass MIDI in terms of the commercial market? What's actually going to happen?

Smith: There's a lot of stuff, and may the best person win. It's the same old thing: Whoever can get the right five or ten companies to sign up for it, then it'll happen.

EM: Is there going to be consensus like there was back in the '80s?

Smith: It's a whole lot harder these days. It was incredibly easy when we did it. But now there are too many people involved. You've got all the software people, the hardware people. Back then it was just a few synthesizer companies. Now it's the whole musical-instrument industry. It's cell phones. It's computers. Who knows?

Linn: But there seems to be a tremendous amount of agreement around Open Sound Control. In fact, on the iPhone, there are five different apps that send OSC messages, and there are no apps that send MIDI, because you can't send MIDI out of an iPhone yet. But it seems like for a lot of creative music, experimental music particularly, Open Sound Control is now the standard. You've got it in Max/MSP, you've got it on the Lemur, you've got it in Pd. You've got it on the [Haken] Continuum, right?

Wessel: If it's not there, and there's some other protocol, it's very easy to write a wrapper that will translate into these messages. But keep in mind that the big opposition that commercial organizations have to Open Sound Control is that there are no consistent semantics for it. It doesn't really say “this is frequency,” “this is gain.” It's kind of up to the user.

But if you have query systems that let you know what you can do, what the potential is—the query idea will help. That's the stumbling block for a lot of people. Of course, in MIDI you also have arbitrary parameter assignments.
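The semantics question Wessel raises is visible right in OSC's wire format: The OSC 1.0 spec fixes the encoding (null-padded address string, type-tag string, big-endian arguments) but leaves the meaning of the address entirely to the user. A minimal sketch of building a message by hand—the `/slabs/...` address is a made-up example, not a real CNMAT namespace:

```python
import struct

def pad4(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC 1.0 message with float32 arguments."""
    typetags = "," + "f" * len(args)         # e.g. ",f" for one float
    data = pad4(address.encode()) + pad4(typetags.encode())
    for value in args:
        data += struct.pack(">f", value)     # big-endian float32
    return data

# Nothing in the protocol says whether this float is a pressure, a
# frequency, or a gain -- that's the "no consistent semantics" objection.
packet = osc_message("/slabs/3/pressure", 0.72)  # hypothetical address
```

A receiver such as Max/MSP or Pd only knows it received one float at that address; what the value controls is a convention between the sender and the patch.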

Smith: Everybody uses the fixed stuff, and that always works. And beyond that you do what you need to do.

Wessel: That's right. I think the most disruptive thing that's happening right now is the move to parallelism—that is, multicore. The software being developed—still—is not really taking advantage of multicore very well. And of course, the graphics processing unit is now being invoked in the audio framework, but it's very difficult to do I/O with it. It's hard to program. But I think the disruptive technology for right now is going to be this move to parallelism. And I think it's going to be worse than you can imagine. [Laughs.]

Smith: I can only imagine: “You take this core, I'll take this core, and we'll talk to each other.”

Wessel: That's right.

Oberheim: Is part of this because music is always a stepchild?

Wessel: Bill Buxton says that if we get the music stuff right, then the military will have the right stuff, too, because the requirements for latency, responsiveness, and timely behavior are harder. But, of course, not everyone cares. I'm surprised how tolerant people are of poor performance in terms of latency, jitter, connectivity, and control intimacy.

EM: Is that even an interest for people?

Wessel: I think it is.

EM: Take something like the Monome—it's just rows and rows of buttons. People are either firing samples or sequencing with it, and that's basically all there is.

Wessel: Triggering is always going to be there. We need to know more about the shaping. Trigger and shape.

EM: That's what's cool about the Eigenharp. You have the button arrays, you have three-dimensional buttons, and you have a slider, all built for the hand. But there might be only half a dozen people really pushing what it can do.

Linn: The literature states that they sample the x, y, and z axes of the button movements at 2,000 Hz. That's pretty good.

One thing I like about the Eigenharp is that they consider the note. A lot of controllers are more conductors, or arrangers: They move things around or control things. The Eigenharp is actually intended to produce notes with those buttons. I like that idea.

EM: And provide gesture for the different parameters.

Linn: it''s the same thing with the Continuum. It does a marvelous job at taking a key arrangement you''re familiar with—the piano keyboard, the black-and-whites—as a starting place, to get the lateral movement in pitch, and also the x, y, and z of each finger. Those are probably the most impressive instruments in my mind for making music. I''m sort of on a campaign to save the note. With all the looping instruments and this object-oriented composition that''s going on, it seems like the note has largely been forgotten. I guess it''s harder to learn how to play the note, but I want to save the note.

EM: [to Wessel] But you're thinking about the same thing: you're thinking about note, gesture, timbre—altogether—as opposed to just firing off events.

Wessel: Very much so. With our latest controller, we actually don't send the data asynchronously, like MIDI or even OSC. We send the gesture itself encoded as audio. A laptop has no problem handling 100 or 200 channels of audio. It surprised us.

EM: But you're working with 10 fingers at a time, and it's handling all that information?

Wessel: No problem. We have 96 channels of 32-bit floating point, running at 44.1. We could run it at 96 kHz if we wanted, but that's a little overkill for gestures. [Laughs.]

Linn: That's just the control signal: he's not sending any audio.

Wessel: That's 96 control signals. And I use it all the time. That was the other solution to trying to predict the performance of a system. My strategy for avoiding crashes is to have all the resources available, running at full rate, all the time. Then the CPU usage is constant. If you can get it in there, you're not going to have surprises when you call in some routine. So, I like the idea of having things running all the time at full rate. It's more predictable.
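The gesture-as-audio idea Wessel describes can be sketched in a few lines: a low-rate sensor stream (finger pressure, say) is resampled to audio rate so the engine can treat it as just another 32-bit float audio channel, always running at full rate. The sensor rate, function name, and interpolation method below are illustrative assumptions, not CNMAT's actual implementation:

```python
import numpy as np

def gesture_to_audio_rate(sensor_samples, sensor_rate=200, audio_rate=44100):
    """Upsample a low-rate gesture stream (e.g. finger pressure, 0.0-1.0)
    to an audio-rate control signal via linear interpolation, so it can
    travel through the engine as one more 32-bit float audio channel."""
    sensor_samples = np.asarray(sensor_samples, dtype=np.float32)
    duration = len(sensor_samples) / sensor_rate
    t_sensor = np.arange(len(sensor_samples)) / sensor_rate   # sensor timestamps
    t_audio = np.arange(int(duration * audio_rate)) / audio_rate  # audio-rate grid
    return np.interp(t_audio, t_sensor, sensor_samples).astype(np.float32)

# One second of a rising pressure gesture sampled at 200 Hz...
pressure = np.linspace(0.0, 1.0, 200)
signal = gesture_to_audio_rate(pressure)
# ...becomes a 44,100-sample control channel the audio engine can
# process every block, whether or not the gesture is changing.
```

Because the channel runs at a fixed rate regardless of activity, the CPU cost is constant, which is the predictability Wessel is after.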

Smith: So much of this seems to be dependent on the type of music you play. Most commercial instruments are made for commercial music, and these instruments tend to be made for—I hate to say experimental music, because that's a catch-all phrase for anything that's not popular music.

It's the reality of popular music versus non-popular music—that's one way of saying it. That's probably more brutal than it should be, but they kind of go hand-in-hand. And I think that's why the popularity isn't there, because most kids growing up want to play guitar still. And some of them will play keyboards, and some of them will play drums, but most of them would rather play guitar.

EM: But the guitar is a really sophisticated multitouch controller.

Smith: Of course it is. And it's standardized. Nobody needs to change it and nobody does, even though we know all the faults and everything surrounding it. “What do you mean it goes out of tune? What do you mean there's fret noise? I don't want that. Clean it up.” They're non-perfect. It's the old argument that, if you have a somewhat limited thing that you're working with, it spawns creativity.

EM: A lot of the keyboards, even with today''s technology, don''t send Polyphonic Aftertouch.

Smith: That's one of those funny things because, you know, we had it in the Prophet T8 30 years ago, and some people loved it and a lot of people never used it or didn't even notice that it was polyphonic. You put in other things, like release velocity. Well, that could be useful.

Wessel: It's really useful!

Smith: That's what you say. But 99 percent of the people who used the keyboards didn't even notice it was there, and wouldn't take the time to even learn how to use it.

Wessel: I was talking to someone in the game industry the other day, and he was pointing out that, on the Sony PlayStation, they actually have pressure on the buttons. They did a study asking whether people use pressure, and they found the gamers just have full-on and light touch, maybe two levels of pressure. And that's all we need. There's a movement against building pressure in that says, just like the release-velocity case, people don't use it.

Smith: If you don't use it, then, as a designer, you stop putting it in.

Wessel: We don't need to have loud and soft. Just loud. [Laughs.]

Smith: Well, you still have Velocity, you still have mono pressure, and you still have wheels. You can do an amazing amount of articulation, even with those very basic controls.

EM: Imagine if we'd had Polyphonic Aftertouch in instruments for 50 years. Do you think people would begin to learn to use it?

Smith: If it was universal, sure. More people would take the time to learn it.

EM: Is it easier to do it now than it was 30 years ago, financially speaking?

Smith: Probably. I still think one of these days some company in China is going to make a PolyAftertouch keyboard. And then all of us can buy it, and it'll start showing up everywhere. But if that doesn't happen, then it's unlikely to go much further. I would use it if there was a keyboard I could buy that has it. I'd stick it in a product. But I'm not going to develop it myself.

Wessel: The Ensoniq ASR-10 is a highly sought-after sampler because of its internal PolyAftertouch. You can set it up so that every key is a kind of pot.

Smith: Well yeah. [PolyAftertouch] is fun to play with.

Linn: I don't think that, just because something is not in high demand for popular music, it's not valid or doesn't have the power to advance music on the fringes in a major way. I think the nature of popular music is that it repeats themes that have been examined before. So the chicken-and-egg problem is there. Making these devices—a keyboard with PolyAftertouch, or any sort of alternate controller that uses pressure effectively—takes a lot of engineering and a lot of money. And the market for it is small. And ultimately, I think all of us just need more sponsors.

EM: If products had that technology, would it eventually lead to music taking advantage of it?

Linn: I think it definitely would. And I think that, of the controllers out there now, there are some musicians producing some very good music with them. It takes a little bit of learning, but my assertion is that using pressure for continuous envelope control over a note is not that hard to do, and I think musicians are very good at it. They play it on other instruments.

Besides writing his blog, “The Robair Report,” Gino Robair is editorial director for and a former EM editor.