Evaluation of Final Submission

The final submission (Vimeo version embedded below) weighs in at just under three minutes. It’s published in the 4:3 aspect ratio since most of the original time-lapse stills were shot in 4:3 and I didn’t want to lose elements of the original framing. The film makes use of two principal time-based phenomena, both inspired by the slit-scan film/video/photography heritage examined previously in this blog. A ‘time slice’ effect slices the source across the screen, each slice being a fragment of moving image taken from a progressively more delayed video frame. A variation is the ‘half screen history slice’ effect, where half of the screen shows the source video while the other half shows static slices of progressive pixel history taken from the central strip of video – this effect works best with a high range of movement or chromatic variation. Both effects can be seen in their nascent form in experiments documented in previous posts. Although I have developed other time-based techniques during the course of the module, these two are the ones I feel are particularly suited to the final submission.
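For readers curious about the mechanics, the time-slice effect can be sketched as follows. This is a hypothetical Python/NumPy reconstruction for illustration only – the actual piece was built in Quartz Composer, and the frame representation and slice count here are my assumptions:

```python
from collections import deque
import numpy as np

def time_slice(frames, num_slices=16):
    """Yield frames whose vertical slices come from progressively more
    delayed input frames (slice 0 = newest, last slice = oldest)."""
    history = deque(maxlen=num_slices)  # ring buffer of recent frames
    for frame in frames:
        history.appendleft(frame.copy())
        out = np.empty_like(frame)
        h, w = frame.shape[:2]
        for i in range(num_slices):
            x0 = i * w // num_slices
            x1 = (i + 1) * w // num_slices
            # slice i is drawn from the frame i steps in the past
            # (clamped while the buffer is still filling)
            src = history[min(i, len(history) - 1)]
            out[:, x0:x1] = src[:, x0:x1]
        yield out
```

Rotating the slicing axis gives the horizontal variant used in the clock intro; the principle is identical.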

To run through the scenes (one can also choose to watch the film, embedded below, before reading the following discussion):

The clock sequence is intended as an intro – rather than jumping straight into the Liverpool St Station scenes, I wanted to build a sense of expectation and reiterate the theme of the piece – the word ‘zeitspiel’ (German: time play) is repeated as the clock face runs contrary to expectation – being time sliced horizontally into progressively delayed image portions.

The first two Liverpool St scenes are synced to audio but not audio-reactive. I had to manipulate the image quality to obscure the facial detail which would otherwise certainly have required (unobtainable) permissions. In the process I hit upon using a dark, slightly grainy, monotone quality suggestive of CCTV footage, which I believe suits the subject and mood (see the previous post for a résumé of mood keywords). In essence, these scenes are observations of human behaviour in a public space – the blurring of distinction between individual and mass behaviour reminiscent of flocking in birds. By chopping up time and space into segments, existing patterns of behaviour are accentuated and new ones revealed. Mitchell Whitelaw (writer, artist and Associate Professor at the University of Canberra) questions the outcome of one of his own pieces of work, ‘Watching the Street’ (2008), also inspired by slit-scan techniques – “Could a simple visualisation process like this … support an open-ended process of exploration and interpretation?” (Whitelaw 2008).

The escalator scene introduces the half screen history effect. There is now a direct link between the audio and visual – the audio peak is being used to cue the video (i.e. move it to a given frame). This would normally create a confusing output likely to be tiring to the eye. However, the shuttling about of video frames, as well as being dampened, only accounts for half the screen, while in the other half progressive image history is being written to slices moving away from the centre. The result is a hybrid image that shows us time and space in two very different but directly linked depictions – commuters frantically move up and down the escalator, turning into ghost-like image slices as they hit the top, subsequently being transported smoothly off screen. For me, there is a fascinating aesthetic here created by the technique Susanne Jaschko terms the ‘sculpturalization of images’ (Jaschko 2002).
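The history-slice mechanic can also be sketched in simplified form. Again this is a hypothetical Python/NumPy illustration rather than the Quartz Composer patch itself – the strip width and the left/right split are assumptions:

```python
import numpy as np

def history_half(frames, strip_w=2):
    """Left half: live video. Right half: a scrolling record built by
    shifting stored slices away from the centre each frame and writing
    the live frame's central strip into the column nearest the centre."""
    history = None
    for frame in frames:
        h, w = frame.shape[:2]
        mid = w // 2
        if history is None:
            history = np.zeros_like(frame[:, mid:])
        # push existing slices one strip further from the centre
        history = np.roll(history, strip_w, axis=1)
        # capture the strip of live image just left of the centre line
        history[:, :strip_w] = frame[:, mid - strip_w:mid]
        out = frame.copy()
        out[:, mid:] = history
        yield out
```

Anything crossing the centre line is frozen into a slice and then carried steadily off screen – the ‘ghosting’ seen with the commuters at the top of the escalator.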

The Waterloo bridge scene works in a similar way to the escalator with cars, buses, bikes and taxis hitting a ‘wall’ from which they become static pixel slices animating steadily off-screen.

The final night scene for me is also an aesthetic success – not least the abstract nature of the reflected lights and spark-like trails. To begin with, the audio drops to ethereal ghost-type sounds which fit the images of trains shunting back and forth. Once the audio picks up volume, the scene reacts dynamically and a greater range of image slices animate from right to left (the audio volume level cues the video source but we only see the ‘historical slices’ in this case). Because so many trains pass in such a short space of time on parallel tracks, some slices may be from different trains in slightly different orientations. The effect is that the slices of train seem to take on a life of their own, almost appearing to animate along separate rails themselves due to the high-contrast nature of the images.

Of course, the film is a specific construction created to meet the submission requirements – each scene could potentially work in a self-contained manner for much longer than the 20–30 seconds seen here, especially if shown at a greater scale. In my opinion, the project has been a great success: it draws on the slit-scan heritage to create something new; it developed a series of embryonic works to try out ideas, running into problems and either solving them or working around them; and it applied an evolving set of selection criteria to move the project to a conclusion, all the while teaching me the strengths and weaknesses of Quartz Composer, which I will certainly continue to use as a creative development/performance tool.

What could I have done better? With so many elements to the project it’s inevitable that things could have been done differently, possibly for a better result. In hindsight, it may have been better to move from experimenting with techniques to planning and producing the final piece earlier in the time frame. This may have afforded a second field trip – only after having worked extensively with the scenes shot in London did I come to concrete conclusions about how best to use them. A subsequent visit would have been much more targeted in terms of knowing what to shoot.

The Zeitspiel experience has certainly spurred me on in terms of thinking about the masters project and I have no doubt I will use that opportunity to further explore some of the themes and techniques developed here.


Jaschko, S. (2002) Space-Time Correlations Focused in Film Objects and Interactive Video. [Internet] Available at: http://www.sujaschko.de/en/research/pr1/spa.html (Accessed November 2012).

Whitelaw, M. (2008) The Teeming Void. [Internet] Available at: http://teemingvoid.blogspot.co.uk/2008/11/watching-street.html (Accessed October 2012).


Creative Development Process

Having converted the London image sequences to individual videos, applying a little colour correction in the process, I set about trying out the various time-based effects I have developed over the course of the module, all the while keeping the audio moodboard in mind. In this process my criteria were:

  • to find a cohesive and coherent sequence of scenes
  • to keep in mind how the audio moodboard might be developed into a more definitive soundtrack to match the scene sequence
  • to find ways of making the audio control the time-based effects
  • to fit particular effects to scenes, if necessary modifying the effect to match the scene

Through a systematic process of trial and error I got to the point where I felt that I had worked up the individual audio-visual scenes to a satisfactory level and that it was time to finalise the structure.

During this process I had already decided to drop a number of key scenes from the final piece on the basis that they either didn’t fit the mood established by the audio moodboard (unsettling, familiar, ghostly, urban, spacious and contemporary) or didn’t fit in with the other scenes. For example, I had found an effect I really liked with the fountain shot, but it didn’t really fit mood- or colour-wise with the others, so I decided to save it for another project.


Similarly, despite all the effort of shooting a 30 minute long dusk sequence on Regent St, I found the Christmas decorations too jarring and thematically out of kilter with the other images. I will try to create this sequence again (minus decorations) for a future project as I think the effect works well.


I’d also realised that, on time grounds alone, I could not use all the scenes, as the submission criteria stipulate a three-minute maximum duration. I’d come to think of each scene as needing at least 20 if not 30 seconds to work, which already limited the number I could include.

At this point I turned back to the audio. In the back of my mind during the image shooting and editing process, I had been thinking about how I might develop the moodboard into a soundtrack. Although I wanted to keep the soundtrack sparse, I knew that I would need a few extra elements and some degree of change to create a structural dynamic. I also wanted to introduce a simple repeated motif that would help the listener to relate to what could otherwise be a confusing experience, not least because of the non-standard time signature: although the piece was recorded in 5/8, it is more accurately described as alternating groups of 3 + 2 quavers. I decided to use the actual word ‘zeitspiel’ and used a German female voice from the Mac’s speech synthesis to generate the necessary recording. From this point I made a first pass at mapping out an audio-visual structure using the soundtrack as a guide.

I used Ableton Live to compose the audio. I found this a great composition tool as it enabled me to easily set up semi-generative sequences which could run as long as required. I’d written five or six two-bar percussion phrases which randomly triggered one another on completion. I then created three or four very simple bass phrases which ran in tandem, again randomly triggering one another on completion. After listening through for a little while I made a couple of tweaks until the random percussion/bass combinations being generated sounded consistent with each other. At a certain point I had to get the composition down as a linear piece that would play the same each and every time in order to define a final structure. Again, this was a relatively painless experience using Ableton Live.
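The random-triggering idea is simple enough to sketch outside Live. This is a hypothetical Python stand-in for Live’s follow actions – the phrase names, the two-bar phrase length and the uniform choice of successor are all my assumptions:

```python
import random

def generate_structure(phrases, total_bars, phrase_bars=2, seed=None):
    """Chain short phrases end to end, each one triggering a randomly
    chosen successor on completion - a rough stand-in for Ableton
    Live's follow actions."""
    rng = random.Random(seed)
    sequence = [rng.choice(phrases)]
    while len(sequence) * phrase_bars < total_bars:
        sequence.append(rng.choice(phrases))  # any phrase may follow any other
    return sequence
```

Running two of these in tandem (one over the percussion phrases, one over the bass phrases) approximates the semi-generative behaviour described above.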

A significant production issue to overcome was the creation of in-sync audio-visual captures of the live output of Quartz Composer. Each Quartz Composer scene took a little while to run smoothly due to the way in which video frames are stored in memory. I considered creating a switcher-type composition in Quartz Composer which could use MIDI cues, for example, to switch between the different Quartz Composer compositions individually tailored for each scene. I rejected this idea as time-consuming and probably unsatisfactory from a performance/stability perspective. Instead I recorded each scene with a lead in and lead out to the relevant section of audio and then used the audio to match up the scenes with a master audio track, knocking out the audio from the individual scenes in the final edit. Although not 100% precise it proved to be a ‘good enough’ solution that took hours rather than days to run through.

London Shoot 12/12/12

I went to London armed with my camera (Casio EX-F1), an untried variable Neutral Density filter and the ubiquitous tripod. Although I didn’t have a strict storyboard to follow, I did have a shot list in mind. In general I was looking for scenes displaying frequent and dynamic movement (human and mechanistic), with moving light being of particular interest (thinking of the delay-by-luminosity effect). The format would be strictly time-lapse, for a number of reasons:

  • to exaggerate movement
  • to create light trails
  • to obscure faces (to avoid the need for permissions)
  • to enhance abstract qualities

One major concern was to find a location that I could shoot through the falling of dusk, knowing that I would only get one shot at this – short of scheduling a return visit. I had envisaged applying a slit-scan effect with a long delay between a small number of strips to a dusk-fall sequence thus showing the scene simultaneously at radically different stages of light.

To this end I had spent some time using Google Maps, image search and in particular Google Street View to figure out exactly where to shoot the dusk sequence. Contenders were Old St roundabout, the roundabout at the Museum of London (next to the Barbican), Piccadilly/Regent Street and Leicester Square. I looked for protected spaces (to minimise the risk of tripod knock) with an uninterrupted view of the subject – ideally a busy west-facing road with good prospects for luminosity variations created by the falling light.

I nearly dismissed Old St roundabout because, as London’s up-and-coming ‘Silicon Roundabout’, it has recently become the subject of much photo/videography and was thus a little too much ‘roundabout du jour’ for my liking. However, I did spy a promising vantage point facing the west side of Old St, so thought I would keep this location as a reserve. The Museum of London roundabout, although a great location, didn’t look like it would yield particularly interesting shots. I did find a couple of traffic islands on Regent St which looked quite promising, thinking it would be visually interesting to shoot the dying light across the curve of Regent St dominated by Christmas illuminations. Leicester Square has recently been renovated and I didn’t expect Street View to match up with reality in this case, so thought I would leave it until the day to scout out. Although I know London pretty well, Street View and image search really helped me to identify and assess specific shooting locations.

On the day itself it was very cold (around zero) with freezing fog and a chance of some precipitation forecast. My first port of call was Colchester station, where I had planned to shoot a sequence of one of the station clocks. I didn’t quite manage to get the full bleed effect I had hoped for (digits running into one another), but having got a visitor’s pass, I took the time to capture a few sequences of digits advancing. You can see some bleed in the image below (98 not being an expected seconds value).


As I arrived at Liverpool Street Station I wondered if it might be possible to shoot inside the actual station, which although not too busy by mid morning, is still a fantastic place to observe human behaviour at any time. Thirty minutes later, after some discussion, form-filling and the watching of a training video, I had myself a station camera pass. This was a fantastic outcome and I shot four separate sequences, stills from which are shown below.





The light is fantastic at the station due to the large portions of glass ceiling. In this situation the Neutral Density (ND) filter proved invaluable in reducing light and increasing exposure times. However, I hadn’t realised that my camera wouldn’t shoot an exposure longer than half a second – the first technical limitation I came up against. I had hoped to create more obscurement/movement trails with the ND filter, but a half-second exposure wasn’t quite producing the desired effect. Knowing that there was no way round this limitation on the day, I worked with it and moved around London looking for scenes I could shoot, all the time checking out potential dusk locations.

Trafalgar Square fountain.


Waterloo Bridge and River Thames from Embankment.


Waterloo Bridge.


Leicester Square was a no-go because of the Hobbit premiere. I ended up shooting the dusk sequence looking up Regent Street, away from the setting sun.


A very happy accident was ending up at Bethnal Green Overground station after nightfall and shooting moving trains coming in and out of Liverpool Street Station.


The second technical limitation I came up against was memory card space. I hadn’t shot so many sequences in one field trip before and, doing the maths over a mid-morning coffee, I realised that I would soon run out of memory without the chance to copy the images and wipe the card. I ended up buying a couple of cards on the day; being in London this wasn’t a problem, but it could have been in other circumstances. I also reduced image capture quality slightly to give myself more headroom. A lesson to be learned. Battery charge was not a problem (although it was something I had worried about) despite using the camera frequently throughout the day – continuously shooting stills half a second apart drains the battery far less than continuous HD video capture.
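For anyone repeating the exercise, the coffee-break maths goes roughly like this. The file size and card capacity are assumed figures for illustration, not measured EX-F1 values:

```python
# Back-of-the-envelope storage check for interval shooting.
interval_s = 0.5     # one still every half second
avg_file_mb = 4.0    # assumed average JPEG size
card_gb = 16         # assumed card capacity

shots_per_hour = 3600 / interval_s
gb_per_hour = shots_per_hour * avg_file_mb / 1024
hours_per_card = card_gb / gb_per_hour
print(f"{shots_per_hour:.0f} stills/hour, {gb_per_hour:.1f} GB/hour, "
      f"~{hours_per_card * 60:.0f} minutes per {card_gb} GB card")
```

Under these assumptions a 16 GB card holds barely half an hour of continuous shooting – hence the dash to buy more cards.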