One of my favorite genres to play is free-form jazz, in which all members of a group improvise simultaneously, responding to each other's musical ideas as they occur in real time. You might think that this form of music would be an unlikely candidate for algorithmic computer generation, but Tim Blackwell, a British computer scientist and jazz musician, felt it presented just the challenge he needed to complete his master's thesis at University College London. Blackwell was inspired by the similarities he observed between free-jazz performances and flocks of birds or swarms of insects.
Among the most important characteristics of flocks and swarms is their ability to self-organize; for example, they often retain a persistent shape, and they can change direction almost instantly. Each member is attracted to the group's center of mass, but they invariably manage to avoid colliding with their neighbors. This self-organization is remarkably similar to the musical structures that emerge out of spontaneous improvisation among the members of a free-jazz ensemble, each of whom listens to what the others are doing and responds in real time while trying to avoid musical “collisions.”
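The two self-organizing rules described above (attraction to the swarm's center of mass, plus collision avoidance between neighbors) can be sketched as a minimal particle update. Everything here — the function names, the constants, the three-dimensional coordinates — is an illustrative assumption, not Blackwell's actual code:

```python
# Minimal sketch of swarm self-organization: each particle accelerates
# toward the swarm's center of mass (cohesion) but is pushed away from
# any neighbor that comes too close (separation). All constants are
# illustrative assumptions.

ATTRACTION = 0.05   # pull toward the center of mass
SEPARATION = 0.5    # push away from neighbors closer than MIN_DIST
MIN_DIST = 1.0

def step(positions, velocities):
    """Advance every particle by one time step in place."""
    n = len(positions)
    center = [sum(p[i] for p in positions) / n for i in range(3)]
    for p, v in zip(positions, velocities):
        # cohesion: accelerate toward the shared center of mass
        for i in range(3):
            v[i] += ATTRACTION * (center[i] - p[i])
        # separation: repel from any neighbor closer than MIN_DIST
        for q in positions:
            if q is p:
                continue
            d = [p[i] - q[i] for i in range(3)]
            dist = sum(x * x for x in d) ** 0.5
            if 0 < dist < MIN_DIST:
                for i in range(3):
                    v[i] += SEPARATION * d[i] / dist
        for i in range(3):
            p[i] += v[i]
    return positions, velocities
```

With only these two terms, a scattered cloud of particles contracts into a cohesive, persistent shape whose members never quite collide — the same behavior the article attributes to flocks and free-jazz ensembles alike.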
Using Java on a 1.3 GHz Pentium-based computer, Blackwell has written a program called Swarm Music that applies models of swarming and flocking behavior to MIDI Note On/Off events in what he calls Music Space. As in physical space, any point in Music Space is defined by three coordinates; but instead of x, y, and z, points in Music Space are defined by their MIDI note number, velocity, and the amount of time since the start of the previous event, which he calls pulse. Each note typically ends shortly before its subsequent note begins (although that can be overridden), and all notes are depicted graphically on the computer screen as particles in a real-time animation of Music Space (see Fig. 1).
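A point in Music Space can be represented directly from that description. The sketch below is a hypothetical representation, assuming note and velocity on the standard 7-bit MIDI range (0–127) and pulse measured in milliseconds; the class name and field ranges are assumptions, not taken from Blackwell's software:

```python
from dataclasses import dataclass

@dataclass
class MusicSpacePoint:
    """One note event as a point in three-dimensional Music Space.

    Field ranges are assumptions for illustration: note and velocity use
    the standard 7-bit MIDI range, and pulse is the time in milliseconds
    since the onset of the previous event.
    """
    note: int       # MIDI note number, 0-127 (60 = middle C)
    velocity: int   # MIDI velocity, 0-127
    pulse: float    # ms since the start of the previous event

    def __post_init__(self):
        # reject values a MIDI synthesizer could not accept
        if not (0 <= self.note <= 127 and 0 <= self.velocity <= 127):
            raise ValueError("note and velocity must be 7-bit MIDI values")
```

A moderately loud middle C played a quarter second after the previous event would then be `MusicSpacePoint(note=60, velocity=96, pulse=250.0)`.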
After the user specifies various parameters, such as initial key center and preferred scales, a performance begins with a few randomly generated notes, up to five of which constitute a swarm. Subsequent notes are then generated within the Music Space according to the swarming model and user settings, and the MIDI data associated with each note is sent to a synthesizer. In the latest versions of the software, two or three swarms can be implemented simultaneously. Blackwell has found that swarms with many events sound sluggish, with lots of repeated notes, and too many swarms can tax the processing capabilities of the computer.
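Sending "the MIDI data associated with each note" to a synthesizer amounts to quantizing a particle's continuous coordinates into a Note On message. The byte layout below follows the MIDI 1.0 specification (status byte 0x9n for Note On on channel n), but the rounding and clamping policy is my assumption:

```python
def particle_to_midi(position, channel=0):
    """Quantize a Music Space position (note, velocity, pulse) into a
    raw MIDI Note On message. Clamping to legal 7-bit values and
    rounding to the nearest integer are illustrative assumptions."""
    note = max(0, min(127, round(position[0])))
    velocity = max(1, min(127, round(position[1])))  # 0 would mean Note Off
    status = 0x90 | (channel & 0x0F)                 # Note On, channels 0-15
    return bytes([status, note, velocity])
```

For example, `particle_to_midi([60.4, 100.7, 250.0])` yields the three bytes `0x90 0x3C 0x65` — Note On, middle C, velocity 101. The corresponding Note Off (status 0x8n) would be scheduled shortly before the next event's onset, matching the article's remark that each note ends just before its successor begins.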
In addition to generating music autonomously, the software can interact with a human musician who plays a MIDI controller or an acoustic instrument into a microphone connected to the computer's MIDI or audio interface. In the case of an audio input, the software's event-capture routine uses pulse-height analysis to determine when a musical event, such as a single note or chord, has started or stopped, then passes the signal through a Fast Fourier Transform to determine its “location” in Music Space. Surprisingly, the latency of this process is very low, allowing real-time interaction between the human and computer. Of course, MIDI input is much easier to deal with computationally.
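The event-capture chain just described — gate incoming audio by amplitude (pulse-height analysis), then estimate pitch with an FFT — might look like the following NumPy sketch. The threshold value and the naive pick-the-biggest-bin pitch estimate are my assumptions, not the article's actual algorithm, which would need to be considerably more robust for polyphonic or noisy input:

```python
import numpy as np

def detect_pitch(frame, sample_rate, threshold=0.01):
    """Crude pulse-height gate plus FFT peak pitch estimate.

    Returns the nearest MIDI note number for one mono audio frame,
    or None if the frame is below the amplitude threshold (silence).
    """
    if np.max(np.abs(frame)) < threshold:      # pulse-height gate
        return None
    windowed = frame * np.hanning(len(frame))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # convert Hz to the nearest MIDI note (A440 = note 69)
    return int(round(69 + 12 * np.log2(peak / 440.0)))
```

A 4,096-sample frame of a 440 Hz sine wave at 44.1 kHz maps to MIDI note 69 (A above middle C), giving the captured event its "location" on the note axis of Music Space; velocity and pulse would come from the gate's amplitude and timing.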
A central concept embodied in the software is the target: a point in Music Space toward which the individual notes of a swarm gravitate. One target is the swarm's center of mass; others are derived from human input. The swarm follows the targets, resulting in a surprisingly coherent musical structure (for some examples, go to www.timblackwell.com). Swarm Music represents a unique synthesis of biological and musical modeling that could lead the way to a new era of computer-based improvisation.
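The target mechanism can be sketched as one more attraction term: each particle also accelerates toward an externally supplied point in Music Space — for example, the note the human player just produced. The gain constant and function shape are again illustrative assumptions:

```python
def steer_toward(position, velocity, target, gain=0.1):
    """Accelerate one swarm particle toward a target point in Music
    Space and advance it one step. The gain is an assumed constant;
    the target might be the swarm's center of mass or a point derived
    from a human performer's last note."""
    for i in range(3):
        velocity[i] += gain * (target[i] - position[i])
        position[i] += velocity[i]
    return position, velocity
```

Repeated calls pull the particle — and, applied to every particle, the whole swarm — toward the target without ever snapping onto it, which is one plausible source of the coherent-but-never-literal quality of the interaction.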