Server Architecture

Tim Hodgson

The server application contains a chain of Audio Units, which handle the network streaming and sound processing. The core of the audio processing is handled by two units, one containing the spiking neural network, and the other containing the granular sampler.
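The chain-of-units idea can be sketched abstractly: each unit takes an audio buffer and passes its output to the next. This is a toy Python model for illustration only; the real server uses Audio Units, and the class and unit names here are invented, not the actual API.

```python
# Toy model of a processing chain: units are callables applied in order.
# The real units (network streaming, neural network, granular sampler)
# are stand-ins here; only the chaining structure is the point.
class Chain:
    def __init__(self, *units):
        self.units = units

    def process(self, buffer):
        # Each unit consumes the previous unit's output buffer.
        for unit in self.units:
            buffer = unit(buffer)
        return buffer

chain = Chain(
    lambda b: [x * 0.5 for x in b],  # stand-in for an input-gain stage
    lambda b: [x + 0.0 for x in b],  # stand-in for a processing stage
)
result = chain.process([2.0])
```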

Server node graph

The neural network is based on the spiking neuron model of Eugene Izhikevich. The network generates its own internal "noise", analogous to cortical neural activity, but in addition, our network receives input from the physical network of soundboxes. Each soundbox has a corresponding neuron which is stimulated by the audio signal from that soundbox. For each of these neurons, the unit sends two streams of data onwards through the chain: the first carries the audio signal; the second contains spiking events generated by the neural network in response both to the direct stimulus to that neuron and to activity elsewhere in the network.
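For readers unfamiliar with the model, an Izhikevich neuron is governed by two coupled equations with a reset rule: v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with v reset to c and u incremented by d whenever v reaches 30 mV. Below is a minimal single-neuron sketch using the standard "regular spiking" parameters (a=0.02, b=0.2, c=−65, d=8); the parameters, time step, and stimulus used in the actual server are not given in the text.

```python
# One Euler step of an Izhikevich spiking neuron.
# v: membrane potential (mV), u: recovery variable, i_in: input current.
def izhikevich_step(v, u, i_in, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Advance the neuron by dt; return (v, u, spiked)."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike threshold: emit event, then reset
        return c, u + d, True
    return v, u, False

# Drive the neuron with a constant stimulus and collect spike times,
# roughly as a soundbox's audio envelope might stimulate its neuron.
v, u = -65.0, -13.0
spikes = []
for t in range(1000):
    v, u, fired = izhikevich_step(v, u, i_in=10.0)
    if fired:
        spikes.append(t)
```

With a sustained input like this, the neuron fires a regular train of spikes; in the piece, these spike events form the second data stream sent down the chain.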

The granular sampler is in many ways typical of this kind of unit: it selects 'grains' of sound from the input, with lengths varying from 30ms up to a second or more. These grains are output in a rearranged and layered form; the extent of the rearrangement depends on the chosen system settings.
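The select-rearrange-layer behaviour can be sketched as follows. This is a toy offline model, not the real unit: the function name, parameters, and random scheduling are all illustrative, and the grain-length bounds are simply parameterised to cover the 30ms-and-up range mentioned above.

```python
import random

def granulate(signal, sr=44100, grain_ms=(30, 200), n_grains=8, seed=0):
    """Copy random 'grains' of the input to random positions in an
    output buffer, summing where they overlap (layering)."""
    rng = random.Random(seed)
    out = [0.0] * len(signal)
    for _ in range(n_grains):
        # Grain length drawn from the configured range, in samples.
        length = rng.randint(grain_ms[0] * sr // 1000,
                             grain_ms[1] * sr // 1000)
        length = min(length, len(signal))
        src = rng.randrange(len(signal) - length + 1)  # where to read
        dst = rng.randrange(len(signal) - length + 1)  # where to write
        for i in range(length):
            out[dst + i] += signal[src + i]  # grains sum when they overlap
    return out

signal = [1.0] * 44100          # one second of dummy audio
out = granulate(signal)
```

A real-time unit would instead schedule grains against an audio clock and apply amplitude envelopes to avoid clicks, but the core operation is the same.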

The important difference with this granular sampler is that the timing and density of the grains are determined by the activity of the neural network, which in turn is partially determined by what is happening at the soundboxes ('partially' because the complex internal dynamics of the neural network mean that its behaviour is far from a simple mapping of input to output).
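One way to picture the coupling is a mapping from spike events to grain triggers, where denser spiking produces denser, shorter grains. The mapping below is purely illustrative, an assumption for the sketch; the text does not specify how the piece actually derives grain parameters from spikes.

```python
def spikes_to_triggers(spike_events, base_len=1323, max_len=44100):
    """Convert spike events into grain triggers.

    spike_events: list of (time_in_samples, neuron_id), sorted by time.
    Returns (time, neuron_id, grain_length) triples. Here, grain length
    tracks the inter-spike interval per neuron (clamped to
    [base_len, max_len]), so a busier network yields shorter grains.
    """
    triggers = []
    prev = {}
    for t, nid in spike_events:
        gap = t - prev.get(nid, t - max_len)   # first spike gets max_len
        length = max(base_len, min(max_len, gap))
        triggers.append((t, nid, length))
        prev[nid] = t
    return triggers

# Neuron 0 spikes twice in quick succession; neuron 1 spikes once.
triggers = spikes_to_triggers([(0, 0), (500, 0), (600, 1)])
```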

Tim Hodgson
Programmer, The Fragmented Orchestra