One of the more exotic waveforms generated by the Music Box 2. Its output is sometimes similar to bioelectric waveforms in the brain.
Photo: Courtesy Mark Holler
When you consider all the different types of synthesis available today — additive, subtractive, sample-based, physical modeling — it might seem that the pool of basic synthesis techniques has been exhausted. However, there is at least one more technique for generating sound electronically that is still in its infancy: neural networks.
Forrest Warthman, president of Warthman Associates (warthman.com), conceived the idea of a synthesizer based on the Intel 80170NX Electrically Trainable Analog Neural Network (ETANN) chip. Working in collaboration with Mark Thorson, who designed the hardware, and Mark Holler, then Intel program manager for neural-network products, Warthman developed the first prototype using a single 80170NX chip. The latest version includes three chips, each performing a specific task.
The 80170NX includes 64 artificial neurons, each with 128 inputs and 1 output. These artificial neurons emulate the behavior of biological neurons, which accept inputs from many neighboring neurons and produce a single output. Each input is connected to every neuron on the chip through a synapse, the artificial counterpart of the junction between biological neurons. The strength of each synaptic connection is specified with a weighting factor.
If the sum of the inputs (taking the weighting factors into account) is well below a user-specified threshold, called the sigmoid gain, the neuron doesn't fire (that is, it produces no output). If the sum is well above the threshold, the neuron fires, producing a strong, steady output. If the sum is at or near the threshold, the neuron responds linearly: the output tracks the input value. A single sigmoid gain applies to all neurons on a chip, but you can vary each neuron's response by adjusting its synapse weighting factors.
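The three response regions described above can be sketched in software. This is a minimal model, not the chip's actual circuitry; the gain, threshold, and weight values here are illustrative assumptions.

```python
import math

def etann_neuron(inputs, weights, gain, threshold):
    """Model of one artificial neuron: a weighted sum of the inputs
    is passed through a sigmoid. A high gain approximates a hard
    fire/don't-fire decision; near the threshold the output sits in
    the sigmoid's near-linear region and tracks the input sum."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-gain * (total - threshold)))

# Weighted sum well below threshold: the neuron stays quiet.
low = etann_neuron([0.1, 0.1], [1.0, 1.0], gain=10.0, threshold=2.0)
# Well above threshold: the neuron fires with a strong, steady output.
high = etann_neuron([2.0, 2.0], [1.0, 1.0], gain=10.0, threshold=2.0)
# At the threshold: the output is mid-scale, in the linear region.
mid = etann_neuron([1.0, 1.0], [1.0, 1.0], gain=10.0, threshold=2.0)
```

Changing the weighting factors shifts where a given neuron's weighted sum lands relative to the shared threshold, which is how each neuron's response can be varied individually even though one sigmoid gain serves the whole chip.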
In the current version of the synth, a single input signal passes through a multitap analog delay. The input signal can come from an external source, such as a microphone, or from the synth's own final output. The signals from the delay taps are fed into the first chip's inputs. This chip performs a fast Fourier transform (FFT) on the signal; the frequency components it finds determine the chip's outputs.
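The multitap delay's job is to fan one input sample out into many time-shifted copies, giving the first chip a window of recent signal history to analyze. A minimal digital sketch, with tap spacings chosen purely for illustration (the article doesn't give the hardware's actual values):

```python
from collections import deque

class MultitapDelay:
    """Ring buffer exposing the signal at several delayed taps, so a
    single input stream becomes a vector of time-shifted inputs."""
    def __init__(self, tap_offsets):
        self.tap_offsets = tap_offsets  # delay of each tap, in samples
        size = max(tap_offsets) + 1
        self.buffer = deque([0.0] * size, maxlen=size)

    def process(self, sample):
        # buffer[0] is the newest sample; buffer[n] is n samples old
        self.buffer.appendleft(sample)
        return [self.buffer[n] for n in self.tap_offsets]

delay = MultitapDelay([0, 2, 4])   # taps at 0, 2, and 4 samples of delay
outs = [delay.process(s) for s in [1.0, 2.0, 3.0, 4.0, 5.0]]
# After five samples, the taps read the current, 2-old, and 4-old values.
```

Each call to `process` returns one vector of delayed samples; in the synth, those parallel tap signals are what drive the first 80170NX's 128 inputs.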
These outputs are fed into the second 80170NX, which simulates the behavior of biological neuron bundles called cortical columns. The cortical-column chip includes several external inputs as well. This chip adds further complexity to the signals; they are then fed into the third 80170NX, which also includes several external inputs and behaves like a set of oscillators to produce the final output signals. These signals can be directed to a sound system, a visual display such as an oscilloscope, and/or back to the external inputs of any or all of the three ETANNs.
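The three-chip chain with its feedback path can be sketched as a simple loop. The stage functions below are placeholders standing in for the trained chips (their actual transfer functions aren't published); the point is the routing: the final outputs are mixed back into the external inputs, which is what lets the system behave like a set of self-sustaining oscillators.

```python
def spectral_stage(taps):
    """Placeholder for chip 1: maps delayed samples to
    frequency-like features."""
    return [abs(t) for t in taps]

def cortical_stage(features, external):
    """Placeholder for chip 2 (cortical columns): combines the
    spectral features with external inputs."""
    return [f + e for f, e in zip(features, external)]

def oscillator_stage(signals, external):
    """Placeholder for chip 3: produces the final outputs, again
    mixing in external inputs."""
    return [s * 0.9 + e for s, e in zip(signals, external)]

# Run a few steps, routing the final output back to the external
# inputs of stages 2 and 3 (the article says feedback can go to any
# or all of the three chips).
feedback = [0.0, 0.0]
for _ in range(3):
    features = spectral_stage([0.5, -0.25])
    mixed = cortical_stage(features, feedback)
    feedback = oscillator_stage(mixed, feedback)
```

Because the feedback term keeps accumulating with the new input, the outputs evolve on every pass even with a static input signal, which is one reason the real system produces such unpredictable, organic-sounding waveforms.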
In April 2008, the most recent version of the synth, dubbed the Music Box 2, was used in an unusual live performance by Holler and Scot Gresham-Lancaster, a composer, performer, instrument builder, and educator. Gresham-Lancaster was in Hoboken, New Jersey, at the Stevens Institute of Technology (SIT) controlling the synth, which was located in Holler's garage in Palo Alto, California. They programmed an Arduino prototyping board to stimulate the synth's inputs from the Web, and the audio was then streamed back to SIT using an Ubuntu Linux computer running the MuSE compressor and Icecast2 streaming audio-server software.
Among the many potential applications of this technology are new musical instruments and control devices. It also offers a unique window into the behavior of neural networks, which should benefit neurobiologists and neurotic musicians alike.