Musical Mind: John Matthias Dissects His “Brain”

Pop quiz: When was the last time you were listening to an album, heard a vaguely familiar sound pattern that you couldn’t quite place, and said to yourself, “Aha! Neurogranular synthesis! I knew it”? The answer, of course, is never.

That is, unless you’ve given English multi-instrumentalist John Matthias’ Stories from the Water Cooler a listen. Why? Because Matthias, along with co-conspirator Nick Ryan, is responsible for developing and recording a new musical instrument, dubbed “The Brain.” Based on a model of spiking neurons, this experimental instrument was prototyped just in time to introduce it on Matthias’ latest collection of droney folk-pop observations.

What was the basic premise behind Stories from the Water Cooler?

The idea was to create music with very simple song structures, overlaid with very simple ideas, but so that when they all came together they formed a rather complicated sound. I have a PhD in physics, and I have always been interested in quantum fluids, and the fact that individual atoms don’t really have any meaning. It’s the whole collection of behavior that has meaning, and what makes it flow in such an amazing way. The same thing is true to a certain extent with traffic flow, the flow of people through cities, economies, stars, and loads and loads of systems if you look at them in their entirety—even music.

Tell us about The Brain.

We wanted to build something with rhythms that were correlated, but not completely correlated the way a drummer or another musician's playing would be. We started looking at models of the cortex of the brain, and we found that the firing of neurons is unpredictable, yet it has interesting rhythms that aren't necessarily random.

Explain how you make this work within the context of a musical instrument.

A neuron is an object that has a voltage on it. The voltage builds up until it gets to a certain level, and then the neuron lets it go. The hundreds of other neurons that it's connected to also receive these spikes, and you get all kinds of signals building up and feeding back. In our instrument, we use a mathematical model—a network of spiking neurons—where when one of the neurons fires, it takes a fragment of sound from a sound file. The spiking then continues on down the network, grabbing other fragments from different locations in the same sound file.
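The spiking mechanism Matthias describes can be sketched in a few lines of Python. This is a minimal toy model, not Matthias and Ryan's actual code: the leak, threshold, and weight values, and the way a spike is mapped to a position in the sound file, are all my assumptions for illustration.

```python
import random

class Neuron:
    """Leaky integrate-and-fire neuron: voltage builds up, then the neuron 'lets it go'."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                # membrane voltage
        self.threshold = threshold  # level at which the neuron fires
        self.leak = leak            # fraction of voltage retained each step

    def step(self, input_current):
        """Integrate input; return True if the neuron fires (spikes)."""
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after the spike
            return True
        return False

def run_network(sound, n_neurons=5, steps=50, weight=0.3, seed=0):
    """Drive a small all-to-all network with noise; each spike 'grabs'
    a grain start position from the shared sound file."""
    rng = random.Random(seed)
    neurons = [Neuron() for _ in range(n_neurons)]
    grain_starts = []
    pending = [0.0] * n_neurons     # spike input arriving next step
    for t in range(steps):
        arrived, pending = pending, [0.0] * n_neurons
        for i, n in enumerate(neurons):
            if n.step(rng.uniform(0.0, 0.5) + arrived[i]):
                # a spike grabs a grain from a location tied to this
                # neuron and time step (the mapping here is arbitrary)
                grain_starts.append((i * 97 + t * 13) % len(sound))
                # the spike feeds back into every other neuron
                for j in range(n_neurons):
                    if j != i:
                        pending[j] += weight
    return grain_starts

grain_starts = run_network(sound=list(range(1000)))
```

Because each spike excites the other neurons, the grain events come out correlated but not metronomic, which is exactly the quality Matthias says they were after.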

What exactly can you do with The Brain?

We can turn it into rhythms, we can create MIDI events, and we can create rhythmic events from other instruments.

How would you describe its sound?

We started the song “Evermore,” for example, by recording a microphone being dragged across the pages of a book. The 15-second sample was fed into the prototype, which scans through it and plays bits, or “grains”—20ms–100ms in length—of the original sample, with no correlation between them. If you go below 20ms, you can’t hear any frequency in a grain, because there aren’t enough wiggles in the wave for your ear to detect frequency. So the user controls the length of the grains in order to play it—it’s not just indeterminate. With a few manual tweaks—such as duration, voltage, etc.—we had pulsing grains of sound that we added as an extra texture to build the song around. Call it the “sonification” of a network of neurons. But for an example with longer grains, one of my collaborators, Jane Grant, made a piece 17 minutes long called “Threshold.” She uses the instrument with grains up to a second long, which sounds very different and surreal. It’s less “clicky.”
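To make the grain math concrete, here is a rough sketch of cutting a grain in the 20ms–100ms range Matthias mentions from a recorded sample. The sample rate and the fade-in/fade-out envelope (a Hann window, to avoid clicks at the grain edges) are my assumptions, not details from the interview.

```python
import math
import random

SAMPLE_RATE = 44_100  # Hz (assumed CD-quality rate)

def grab_grain(sample, start, duration_ms, rate=SAMPLE_RATE):
    """Cut a grain of `duration_ms` milliseconds from `sample`,
    shaped by a Hann window so it doesn't click at the edges."""
    n = int(rate * duration_ms / 1000)  # grain length in samples
    grain = sample[start:start + n]
    return [x * 0.5 * (1 - math.cos(2 * math.pi * i / (len(grain) - 1)))
            for i, x in enumerate(grain)]

# A 15-second buffer of silence stands in for the book recording.
sample = [0.0] * (15 * SAMPLE_RATE)
rng = random.Random(1)
grain = grab_grain(sample,
                   start=rng.randrange(len(sample) - 4410),
                   duration_ms=rng.uniform(20, 100))
```

At 44.1kHz, the 20ms lower limit works out to 882 samples, which is roughly the shortest grain in which the ear can still resolve a pitch; a 100ms grain is 4,410 samples.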

Will you be marketing The Brain? What does it look like?

We’re hoping to make these instruments available, but we’re not sure if we’re going to do something commercial just yet. Right now, it looks like a computer screen and a microphone. I’d like it to look like a box with four dials on it, and all the necessary inputs and outputs. You would plug a microphone into it, or load a sound file, and the dials would change individual parameters that affect the neurons—the number in a network, the geometry of how they’re all connected together, how strongly the neurons are stimulated, and how long each grain would be allowed to play.
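The four dials Matthias imagines map naturally onto a small parameter set. The names and defaults below are my guesses for illustration, not the instrument's actual controls.

```python
from dataclasses import dataclass

@dataclass
class BrainParams:
    n_neurons: int = 24        # dial 1: number of neurons in the network
    connectivity: float = 0.5  # dial 2: geometry/density of the wiring
    drive: float = 0.3         # dial 3: how strongly neurons are stimulated
    grain_ms: float = 50.0     # dial 4: how long each grain plays

# Turning a dial is just changing one parameter of the network.
params = BrainParams(grain_ms=80.0)
```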


To hear the newest versions of neurogranular synthesis, check out John Matthias and Nick Ryan’s Cortical Songs—an orchestral album with remixes by Thom Yorke [Radiohead] and Simon Tong [Gorillaz]. Also check out Matthias, Ryan, and Grant’s The Fragmented Orchestra, composed of streaming audio from 24 different microphones in public sites throughout the U.K., and mediated through a neurogranular instrument.