System version 0.0.1 – Overview

The ‘Positional Audio System’ created for the Sound Creation and Perception module comprises a number of elements, at the core of which is the Resolume Avenue VJ platform.

The system is designed with future performance in mind: although clips are randomly triggered in the current version, a future version could easily incorporate a control interface better suited to performance, e.g. an iPad running a purpose-built app.

A Flash movie holds a series of still images taken from individual points along the jetty. These images use a 4:3 aspect ratio; the Flash movie itself is 16:9, and the remaining space contains a simple bird’s-eye schematic of the jetty with a marker (a triangle) denoting the position of the Point of Audition (POA). Another marker (a circle) appears when a sound is triggered, providing a visual check that each triggered sound is being treated in accordance with its nominal position.

Using Resolume Avenue’s Flash SDK, a number of the Flash movie’s parameters are ‘exposed’ to Resolume Avenue so that they can be controlled by any internal or external source capable of communicating with Avenue.

Processing is used to execute a mini application (in Processing speak, a ‘sketch’) capable of using Open Sound Control (OSC) to control the audio and Flash clips held in Resolume Avenue. The Processing sketch contains the system’s control logic – all other system elements are essentially ‘dumb’ and are only used to facilitate connection, playback and final output of media and effects.

The Processing sketch initiates and controls the ‘shore’ and ‘mud’ atmosphere tracks (ie the processed field recordings made at either end of the jetty). It modulates each track’s amplitude and a low-pass filter so that both the volume and the frequency range of a track are suppressed the further the POA is from that track’s nominal source – for the atmosphere tracks, the distance from the current POA to the corresponding end of the jetty. The duration of the POA’s movement from the shore end to the mud end is set in Processing; because the atmosphere tracks loop and the detail sounds are triggered continuously, this duration can be arbitrary.
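The distance-to-amplitude/filter mapping described above could be sketched roughly as follows. This is an illustrative plain-Java version, not the sketch’s actual code: the linear falloff, the normalised jetty length and the cutoff range (200 Hz–20 kHz) are all assumptions.

```java
public class AtmosphereLevels {
    // Assumed: shore-to-mud distance normalised to 1.0
    static final double JETTY_LENGTH = 1.0;

    // Volume falls off linearly with distance from the track's nominal
    // source (the actual falloff curve used in the sketch is not specified)
    static double volumeFor(double distance) {
        double d = Math.min(Math.max(distance / JETTY_LENGTH, 0.0), 1.0);
        return 1.0 - d;
    }

    // Low-pass cutoff narrows with distance, suppressing the frequency
    // range of distant tracks (assumed range: 200 Hz to 20 kHz)
    static double cutoffFor(double distance) {
        double d = Math.min(Math.max(distance / JETTY_LENGTH, 0.0), 1.0);
        return 20000.0 - d * (20000.0 - 200.0);
    }
}
```

With the POA at the shore end, the shore track plays at full volume and full bandwidth while the mud track is maximally suppressed, and vice versa.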

Individual detail sounds are stored in Resolume Avenue and, in this initial version of the Processing sketch, are triggered using a combination of simple techniques common to motion graphics programming. For each frame of the journey from the shore end to the mud end of the jetty, an increasing probability threshold is set for a trigger to occur, ie the nearer the end of the journey, the more likely a sound is to be triggered. Whether a positional audio event occurs is determined by first randomly generating x and y co-ordinates for the possible location of the event in notional space, and then using Processing’s ‘noise()’ function, which returns a value for a given co-ordinate pair based on an adaptation of Perlin noise. Perlin noise is commonly used to create semi-random patterns for textures, forms or movement paths in motion graphics; it has the quality of looking organic when visualised as a bitmap (as below).
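The per-frame trigger decision might look something like this. In the real sketch the noise field is Processing’s own ‘noise()’; here a simple sine-based stand-in is used so the example is self-contained, and the journey length and threshold ramp are assumed values.

```java
import java.util.Random;

public class TriggerSketch {
    static final int TOTAL_FRAMES = 600;   // assumed journey length in frames
    static final Random rng = new Random(42);

    // The noise value must exceed this threshold for a trigger to occur.
    // It falls as the journey progresses, so a trigger becomes more likely
    // towards the mud end (a linear ramp is assumed here).
    static double triggerThreshold(int frame) {
        double progress = (double) frame / TOTAL_FRAMES;
        return 0.9 - 0.5 * progress;
    }

    // Stand-in for Processing's noise(x, y): any smooth field in [0, 1]
    // would serve the same role in this sketch
    static double noiseValue(double x, double y) {
        return 0.5 + 0.5 * Math.sin(x * 12.9898 + y * 78.233);
    }

    // One frame's test: random candidate co-ordinates, then the noise
    // value at that point is compared against the current threshold
    static boolean shouldTrigger(int frame) {
        double x = rng.nextDouble();
        double y = rng.nextDouble();
        return noiseValue(x, y) > triggerThreshold(frame);
    }
}
```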

Perlin noise seems an appropriate input pattern for triggering sounds as part of a soundscape composition. In fact there may be some mileage in exploring the natural patterns present in an environment and devising mathematical articulations of them for use in composition – for example, the almost fractal-like patterns created by the interaction of sea water and land as salt marsh forms, as seen in the aerial view of the Mersea Island jetty.

Back to the audio trigger logic: if the Perlin noise value for the x,y co-ordinate pair being tested passes the current trigger threshold, a sound is triggered. Additionally, a trigger count limit is set on a semi-random basis to prevent successive sounds being triggered too quickly – many events triggered only a few frames apart would sound cluttered.
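The semi-random limit on trigger frequency can be pictured as a cooldown gate. This is an illustrative sketch only: the 30–90 frame gap is an assumed range, not a figure from the system.

```java
import java.util.Random;

public class TriggerLimiter {
    static final Random rng = new Random(7);
    static int framesUntilNextAllowed = 0;

    // After a sound fires, set a semi-random gap (assumed 30-90 frames)
    // before another trigger is permitted
    static void armCooldown() {
        framesUntilNextAllowed = 30 + rng.nextInt(61);
    }

    // Called once per frame: a noise-threshold pass only fires a sound
    // if the cooldown from the previous trigger has elapsed
    static boolean tryFire(boolean thresholdPassed) {
        if (framesUntilNextAllowed > 0) {
            framesUntilNextAllowed--;
            return false;
        }
        if (thresholdPassed) {
            armCooldown();
            return true;
        }
        return false;
    }
}
```

A trigger that passes the noise test immediately after a previous one is therefore swallowed, spacing events out in time.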

The choice of sound is arbitrary in this initial version. Each triggered sound is assigned the x,y co-ordinate pair to hand. The Processing sketch then uses trigonometric functions (detailed previously) to ascertain the distance and angle of the sound from the POA. An OSC message is then constructed that triggers the relevant sound clip in Resolume (shown below).
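The distance-and-angle step is standard trigonometry; a minimal version, assuming plain Pythagoras and atan2 on the notional-space co-ordinates, might be:

```java
public class SoundPosition {
    // Straight-line distance from the POA to the triggered sound's
    // assigned co-ordinates in notional space
    static double distance(double poaX, double poaY, double sx, double sy) {
        return Math.hypot(sx - poaX, sy - poaY);
    }

    // Bearing of the sound relative to the POA, in degrees
    // (atan2 handles all four quadrants correctly)
    static double angleDegrees(double poaX, double poaY, double sx, double sy) {
        return Math.toDegrees(Math.atan2(sy - poaY, sx - poaX));
    }
}
```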

The OSC message instructs Resolume to modulate volume, pan, low-pass filter and reverb dry/wet balance based on the positional attributes of the constructed sound event, as evaluated by the Processing sketch. Additionally, pitch is micro-modulated arbitrarily for each sound event in order to introduce variation and nuance.
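The position-to-parameter mapping could be sketched as below. The OSC addresses here are placeholders – the real addresses depend on the Resolume composition layout – and the linear volume falloff, sine-law pan and distance-proportional reverb wet level are all assumptions; in the actual sketch these values would be packed into OSC messages by a Processing OSC library and sent to Avenue.

```java
import java.util.List;

public class OscEventBuilder {
    // Builds address/value pairs for one positional sound event.
    // Addresses are hypothetical placeholders, not Resolume's actual paths.
    static List<String> buildMessages(double distance, double angleDeg) {
        double volume = Math.max(0.0, 1.0 - distance);   // nearer = louder
        double pan = 0.5 + 0.5 * Math.sin(Math.toRadians(angleDeg)); // 0 = left, 1 = right
        double wet = Math.min(1.0, distance);            // farther = more reverb
        return List.of(
            "/clip/volume " + volume,
            "/clip/pan " + pan,
            "/clip/reverb/wet " + wet
        );
    }
}
```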
