Creative Development Process

Having converted the London image sequences to individual videos, applying a little colour correction in the process, I set about trying out the various time-based effects I have developed over the course of the module, keeping the audio moodboard in mind throughout. My criteria in this process were:

  • to find a cohesive and coherent sequence of scenes
  • to keep in mind how the audio moodboard might be developed into a more definitive soundtrack to match the scene sequence
  • to find ways of making the audio control the time-based effects
  • to fit particular effects to scenes, if necessary modifying the effect to match the scene

Through a systematic process of trial and error I reached the point where I felt the individual audio-visual scenes had been worked up to a satisfactory level and it was time to finalize the structure.

During this process I had already decided to drop a number of key scenes from the final piece, on the basis that they either didn’t fit the mood established by the audio moodboard (unsettling, familiar, ghostly, urban, spacious and contemporary) or didn’t sit well with the other scenes. For example, I had found an effect I really liked for the fountain shot, but it didn’t fit mood- or colour-wise with the others, so I decided to save it for another project.


Similarly, despite all the effort of shooting a 30-minute dusk sequence on Regent St, I found the Christmas decorations too jarring and thematically out of kilter with the other images. I will try to shoot this sequence again (minus decorations) for a future project, as I think the effect works well.


I’d also realised that I could not use all the scenes on duration grounds alone, as the submission criteria stipulate a maximum length of 3 minutes. I’d come to think of each scene as needing at least 20, if not 30, seconds to work, which already limited the number I could include.
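The arithmetic behind that limit is simple enough to sketch (the 20–30 second figure is my own working assumption from above, not a rule):

```python
# Maximum piece length imposed by the submission criteria, in seconds.
MAX_DURATION = 3 * 60

# My working range for how long each scene needs in order to work.
MIN_SCENE, MAX_SCENE = 20, 30

# Upper and lower bounds on how many scenes can fit in the piece.
most_scenes = MAX_DURATION // MIN_SCENE    # shorter scenes -> more of them
fewest_scenes = MAX_DURATION // MAX_SCENE  # longer scenes -> fewer of them

print(f"Between {fewest_scenes} and {most_scenes} scenes fit in 3 minutes.")
```

So at most nine scenes could make the cut, and only six if every scene ran to 30 seconds.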

At this point I turned back to the audio. In the back of my mind during the image shooting and editing process, I had been thinking about how I might develop the moodboard into a soundtrack. Although I wanted to keep the soundtrack sparse, I knew I would need a few extra elements and some degree of change to create a structural dynamic. I also wanted to introduce a simple repeated motif that would help the listener relate to what could otherwise be a confusing experience, not least because of the non-standard time signature. Although the piece was recorded in 5/8, it is more accurately described as being in the additive time of 3/8 + 2/8. I decided to use the actual word ‘zeitspiel’ and used the Mac’s German female speech-synthesis voice to generate the necessary recording. From this point I made a first pass at mapping out an audio-visual structure, using the soundtrack as a guide.

I used Ableton Live to compose the audio. I found it a great composition tool, as it let me easily set up semi-generative sequences which could run for as long as required. I wrote five or six 2-bar percussion phrases which randomly triggered each other on completion. I then created three or four very simple bass phrases which ran in tandem, again randomly triggering each other on completion. After listening through for a while I made a couple of tweaks, until the random percussion/bass combinations being generated sounded consistent with each other. At a certain point I had to render the composition as a linear piece that would play the same every time, in order to define a final structure. Again, this was a relatively painless experience in Ableton Live.
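Live's follow actions do the random chaining natively, but the idea can be sketched in a few lines of Python (the phrase names are hypothetical stand-ins for the clips; this just mimics "on completion, trigger another clip at random"):

```python
import random

# Hypothetical pools of 2-bar clips, standing in for the Live clips.
percussion = ["perc_a", "perc_b", "perc_c", "perc_d", "perc_e"]
bass = ["bass_1", "bass_2", "bass_3"]

def generate(pool, bars_needed, bars_per_clip=2):
    """On each clip's completion, randomly trigger another from the pool."""
    sequence = []
    clip = random.choice(pool)
    for _ in range(bars_needed // bars_per_clip):
        sequence.append(clip)
        clip = random.choice(pool)  # like Live's 'Any' follow action
    return sequence

# Percussion and bass run in tandem, each chaining independently.
perc_track = generate(percussion, bars_needed=32)
bass_track = generate(bass, bars_needed=32)
for p, b in zip(perc_track, bass_track):
    print(p, "+", b)
```

Because the two pools chain independently, every run produces a different percussion/bass pairing, which is what made tweaking the phrases until any combination sounded consistent worth the effort.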

A significant production issue to overcome was the creation of in-sync audio-visual captures of the live output of Quartz Composer. Each Quartz Composer scene took a little while to run smoothly, due to the way in which video frames are stored in memory. I considered creating a switcher-type composition in Quartz Composer which could use MIDI cues, for example, to switch between the different Quartz Composer compositions individually tailored for each scene. I rejected this idea as time-consuming and probably unsatisfactory from a performance/stability perspective. Instead I recorded each scene with a lead-in and lead-out to the relevant section of audio, then used the audio to match the scenes up against a master audio track, knocking out the audio from the individual scenes in the final edit. Although not 100% precise, it proved a ‘good enough’ solution that took hours rather than days to work through.
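The alignment trick of matching each scene's scratch audio against the master track amounts to finding the lag at which the two signals correlate most strongly. A minimal pure-Python sketch with synthetic signals (real footage would use the decoded audio samples, and an editor does this by eye and ear, but the principle is the same):

```python
def best_offset(master, scene):
    """Return the lag (in samples) at which `scene` best matches `master`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(master) - len(scene) + 1):
        # Dot product of the scene against the master at this lag:
        # highest where the waveforms line up.
        score = sum(s * master[lag + i] for i, s in enumerate(scene))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic master track with a distinctive burst starting at sample 40.
master = [0.0] * 100
burst = [1.0, -2.0, 3.0, -1.0, 2.0]
master[40:45] = burst

# The scene capture contains the same burst (its lead-in already trimmed).
scene = burst

print(best_offset(master, scene))  # the burst is found at sample 40
```

Once the offset is known, the scene's video can be shifted to that position on the timeline and its scratch audio muted in favour of the master track.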
