Where am I?

If anyone is wondering where I’ve gone – I’m over here!

Mirror Noise – Video & Photos

Here’s a short video I made about the exhibition.



And here is a selection of stills.

Neon Dance
Neon Dance
Neon Dance
Timewarp
Timewarp
Timewarp
Sparks
Sparks
Sparks
Sound * Light
Sound * Light
Data Flow
Data Flow

Viewer Interaction with Mirror Noise

One of the great privileges of putting on Mirror Noise has been to sit in the gallery space observing how people interact with the piece as a whole. Some of the younger visitors have been the most dynamic in their interactions, e.g. running and jumping about to see how this affects the sound and visuals, while some of the older visitors have been the most tenacious in trying out the 5 individual pieces and slowly but surely learning how to interact with each one. In many cases where there has been a small group of friends, one person has taken a more active role while the others were happy to ‘egg them on’, remaining more passive themselves but enjoying the dynamism of their more active companion.

Some of the youngest children to visit the exhibition were awed just by seeing their likeness on the screen in front of them. The father of a 9-month-old baby girl was convinced that his daughter really enjoyed the piece – he pushed her up close in her buggy so that she filled one of the screens.

The ‘instruction sheets’ left out in front of the screens have been a great help, but even more so have been the invigilators. One invigilator in particular has been proactive in physically showing people how each piece works, and this has really helped to draw people in. Despite all the encouragement, some people (generally lone adults), who evidently enjoyed the piece enough to remain in the gallery space for quite some time, did not want to actively interact with it. There were also perhaps an equal number of individuals (again lone adults) who were happy to interact with the piece even though they were alone (or perhaps for some, because they were alone?).

Another observation is that some visitors, after a short period of initial familiarisation with each piece, began to expect more. For example, a lady believed she might be able to divide the shape shown by Neon Dance in two and then control each separate shape with her arms. More than one person thought they might be able to spell a meaningful word by interacting with Data Flow. Both of these expectations would require a far higher level of sophistication than was actually present in each piece and were based purely on the desire and imagination of the individual. For me, this was a very interesting phenomenon to observe and reminded me of research I have conducted into game theory. In his discussion of what constitutes a game, veteran game theorist and designer Chris Crawford states that a piece of entertainment is a ‘plaything’ if it is interactive, and that if no goals are associated with that plaything it is a ‘toy’ rather than a ‘challenge’ (which may subsequently be defined as a ‘game’) (Crawford, 1984). Following this line of definition, the visitors who expected greater sophistication of interaction than was present were moving into the space between ‘toy’ and ‘challenge’.

All of the entries left in the comments book to date have been very positive and I will be collating these as a separate write-up. I have also talked to some visitors directly and many have asked me more about the piece. Some interesting suggestions have been made by people to whom I have spoken. One visitor suggested performing the piece in a club environment along the lines of a VJ set, which I thought was interesting. Many of the techniques and platforms I have been working with are drawn from the VJ space, and part of my overall interest has been to bring VJ and electronic music practice into the gallery space, but equally it can go back the other way, taking something from the gallery space with it. I also like the idea of performing the work. During the private view I realised it could have been quite a coup to let all the pieces except Timewarp run and then, once everyone had become comfortable with the situation, activate Timewarp, which can be very arresting. Deciding when to show which piece is surely the first step towards performing the work.

Another visitor suggested taking the show to physically impaired or disabled people or to those with learning difficulties, which I thought was a great idea, although it would obviously be subject to appropriate due diligence. Just from watching some of the less mobile visitors playing with Sparks and other pieces, I observed a pleasure in the creation of on-screen movement much faster and more dynamic than would ever be achievable in real life. This is surely an example of gestural interaction as a vehicle for personal empowerment.

References

Crawford, C. (1984) The Art of Computer Game Design. Berkeley, California: McGraw-Hill/Osborne Media.

Mid-exhibition Analysis of Work in Situ

Now that the exhibition has been up and running for a couple of days, I can afford to take a step back and think about the success or otherwise of the work as a whole and also of each individual vignette.

Neon Dance

This has always been the least stable of all the pieces due to the apparent inherent instability of the Carousela CV Outline plug-in, and it must be tweaked carefully to suit environmental lighting conditions. Originally I had hoped that each of the three screens would remain blank until movement was detected in front of them, at which point an outline of the viewer would be traced and the relevant rhythm element played. It seems that there is an issue with the tracing functionality becoming dormant or somehow not working if no outline is traced in the first place. So rather than drop the piece as unreliable, I lowered the outline detection threshold so that an outline is (almost) always drawn based on ambient light. When the viewer moves in front of the webcam, the shape changes to incorporate their outline. Evolving kick drum, snare and high hat patterns are linked to each of the three screens and still do sometimes re-trigger when a person walks in front of them. Normally all three tracks play continuously, so the originally intended effect of movement equating to audio trigger is lost. However, the piece still looks and sounds good and generally seems to work at getting the younger viewers up on their feet.

To further improve the piece I would try to solve the dormant tracing issue and raise the threshold for outline detection. From an audio perspective I would like to modulate the group mix a little over time, perhaps so that there appears to be some progression, which could be semi-generative.

Spark Shower

Works pretty much as intended. The threshold for creating sparks is a fraction too low (and not easily adjustable), so sparks are created even when no one is present, but it is obvious that bold movement creates many more sparks. The localisation of the sound design falls victim to being incorporated into a stereo mix. It’s sometimes evident to my ear that more movement in front of the right screen results in a faster, louder spark shower on the right-hand side of the stereo mix, but I doubt many visitors will pick this out themselves. The ideal solution would of course be completely localised sound – one speaker (at least) per screen. Still, for me the audio definitely works – it is essentially a development of the original idea I had of multiple spark noises in the foreground and a cycle of ongoing notes in the background to add some tension and sense of direction.

To improve the piece I would find a way to raise the spark threshold and ideally incorporate truly localised audio.

Timewarp

For me this is a great success. It was almost the last vignette to make it into the show and essentially the result of a happy accident. There is a fascinating moment when the piece begins in which a few video frames are shown from the previous time it ran, ie 10 minutes previously, resulting in the ghosts of people past being mixed in with those present. The particle + line effect sometimes doesn’t quite work as intended – somehow the lines move laterally rather than spiralling out in a more radial fashion – but this still looks good to me. I’m also very happy with the audio side of the piece, which is essentially a cacophony of staccato noise played over the top of a slightly plaintive de-synchronised generative chord sequence. The effect is at once brash and yet almost dreamlike, which somehow makes me think of the futility, and acceptance of that futility, of our every movement in this life.

A future improvement might be to deliberately store images on an infrequent basis and equally infrequently mix them into the randomly arranged sequence of recent video frames.

Sound x Light

Generally works as expected. Audio sensitivity can sometimes be an issue – either the piece is slightly too audio-sensitive or not audio-sensitive enough. This is probably the least exciting piece for me and feels somewhat like a ‘stocking filler’. But maybe that’s just because I’m too used to it now.

To improve it I would experiment with a range of mics and perhaps a limiter / compressor to enable a sensitive audio response without over-sensitivity.

Data Flow

Needed significant tweaking to stand out from the screen – the original dark image with fine lines of light just wasn’t coming through in the projected image. Still, I’m pretty happy with the result and would rather have the amended version in the show than not at all. The audio is murky in a good way, as it helps to create an eerie and otherworldly atmosphere, but the positional definition is lost somewhat in the stereo mix, as with some of the other pieces.

To improve, I would perhaps try to find a way of keeping the finer lines and of course would prefer to use truly localised audio.

As a whole

I think that the work as a whole hangs together well and is a pleasant distraction from the outside world, as intended. Cycling the vignettes at 2 minutes duration each, making a total cycle duration of 10 minutes, seems to be about right.

The next three days look to be substantially busier so it’s good to know that the piece works and has already been working for some time.

Producing a decent quality video and photographic record is proving to be difficult but that is a different subject.

Mirror Noise – Second Day

By the end of the first day it was apparent that not everyone who came to see the exhibition was patient and/or inquisitive enough to find out for themselves how each piece works. Accordingly, this morning I produced a series of laminated information sheets which give a little explanation of what is going on in each piece. Perhaps more importantly, within each description, I have effectively embedded an instruction as to how to interact with each piece. I also tweaked the names I have been using for my own reference purposes to make them more ‘gallery friendly’.


 

I’ve also been calibrating my camera and microphone in preparation for shooting a series of video interviews with visitors who agree to this. Unfortunately my tripod broke today so I haven’t been able to use the camera much other than for testing out settings. Luckily I have another at home. The other missing part of the jigsaw is finding a suitable permission form for participants willing to let me record them. I had imagined there would be one on the college intranet but I have not found it yet. I will probably use whatever else comes to hand if I can’t find it tonight.

Mirror Noise – Exhibition Opens

The show opened as scheduled today and attracted around 70 visitors, a handful of whom were NUA students. In general the day has been a strong success, with everything running as it should.

It’s quite a challenge to take pictures in the gallery without a flash (which swamps the projected image) but here are a few snapshots of the day.

A visitor interacts with ‘Neon Dance’.


Here’s another of someone playing with ‘Sparks’.

A couple sit before ‘Ascii Noise’.

Here are three promotional images which are being shown on the plasma screens around the gallery entrance. I’m keen to maintain a common look for all the project signage.


I’ve set up a makeshift comments book which I hope to improve upon tomorrow. I also intend to produce a few laminated A4 sheets to say something about each piece.

Here is a low quality video of a group of Spanish girls enjoying the Buffer piece.

Feedback so far has been limited but generally very positive.

Final Tests

Today I access the NUA machines for the last time before taking them across to the exhibition venue. I have a number of critical tests to perform and then I shall finalise the exhibition content itself.

With me today I have

  • 3 x HD webcams
  • 3 x USB extensions (10m, 15m and 20m)
  • 3 x simple USB MIDI interfaces
  • 1 x 4 bus USB MIDI interface
  • 1 x laptop running Ableton Live

Previously I had run into an issue with one of the key Quartz Composer (‘QC’) plug-ins not working, which was probably due to the machines having been upgraded from OS X 10.7 to 10.8 – many of the QC plug-ins I intend to use are not supported on 10.8. At my request, the college has reverted the 3 exhibition machines back to 10.7, but there has been some confusion over the set-up I require, so I am not 100% certain yet that the machines have the necessary software to run the exhibition content.

Hopefully I will then find that the Carousela plug-in works as it does on my dev machine. This plug-in is crucial to the ‘Neon Dance’ piece as it enables outline tracing as a series of geometric points which can be processed to create the Neon Dance outline effect. I have been working on the basis that the plug-in may not work, in which case I have an alternative approach in place (‘Neon Dots’), although unfortunately this is an inferior approximation of the original. So the result of the Carousela plug-in test will determine whether I can use Neon Dance in the exhibition or instead make do with Neon Dots.

Next up are the standard calibration tests I have developed to assess machine/webcam performance. I am particularly concerned about the performance of the webcams via the USB extension leads – I recently performed a software update on my dev machine which resulted in a significant loss of USB speed, to the point of the webcam plugged in via the USB extension lead being unusable. From reading around on the web, it seems that the operating system version of the host machine has a major impact upon USB performance. If the USB extensions prove to be problematic on the updated exhibition machines, I will face a serious configuration problem at the exhibition gallery which I won’t be able to solve without discussion with the venue manager, Richard Fair. I did discuss this prospect with him a couple of days ago via email and it seems that there is no easy alternative set-up, so I sincerely hope this will not be necessary.

Assuming all goes well so far, I will then run through each piece individually on the 3 machines. Next comes the final testing of the 3 exhibition master QC files, for which each exhibition machine will require its own MIDI interface. I have created a version of the master suitable for each of the 3 machines (to be known as A, B & C). Each exhibition machine is instructed to play a particular piece or ‘vignette’ by the audio machine running Ableton Live. Instructions are relayed via MIDI. In return, while each machine plays its particular variant of the current vignette, it relays real-time information (based on user interaction picked up via webcam video input, and audio input in the case of the Rutt Ettra piece) back to the audio machine, also conveyed as MIDI information.
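As a rough sketch of this message flow – the channel and message-type assignments below are placeholder assumptions of mine, not the mapping actually used in the Live set:

```python
# Sketch of the Live <-> QC control protocol described above.
# Assumptions: vignette selection modelled as a MIDI Program Change,
# motion feedback as a Control Change. Neither is the real mapping.

VIGNETTES = ["Neon Dance", "Sparks", "Buffer", "Rutt Ettra", "Ascii"]

def select_vignette(index, channel=0):
    """Audio machine -> display machine: instruct it to play a vignette.
    Returns (status byte, program number)."""
    return (0xC0 | channel, index)

def motion_feedback(level, channel=0, cc=1):
    """Display machine -> audio machine: motion level 0.0-1.0 scaled
    to a 7-bit MIDI CC value. Returns (status, cc, value)."""
    value = max(0, min(127, int(round(level * 127))))
    return (0xB0 | channel, cc, value)

msg = select_vignette(VIGNETTES.index("Sparks"))
fb = motion_feedback(0.5)
```

The key point is simply that control flows one way (which vignette to play) while interaction data flows back the other, both as ordinary MIDI messages.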

The final outcome will hopefully be to prove that everything works as expected and to validate the master exhibition pieces. I know that much of the calibration to optimise each piece can only take place in the actual venue as it is dependent upon factors such as screen behaviour and lighting. Therefore it doesn’t matter too much what the pieces look or sound like at this stage – just that the mechanics of each are in good working order.

Tests

  • #1 – ensure machines have correct software
  • #2 – check whether the previously misbehaving Carousela plug-in now works as expected
  • #3 – run calibration tests including testing webcams plugged in via USB extensions
  • #4 – test individual pieces on each machine
  • #5 – test multiple machine set up (including all MIDI devices)

Results

After time-consuming issues with some USB extension leads not working in conjunction with some computers, all tests have now been passed. The next steps are to get the equipment in situ and set up.

Vignettes are as follows (not final names!) with descriptions for good measure –

Neon Dance

Any significant movement in front of the screen results in an outline being traced – typically this will match the viewer’s body, eg arms being waved around. The tracing is approximate and glitchy, sometimes resulting in an unstable outline which in itself is quite interesting to watch as it momentarily flares out to incorporate bright features such as lights and windows. The lines are traced in a single ‘neon’ type colour which is run through a filter to produce a fluorescent quality, with light appearing to emanate from the drawn line. The line drawing routine has a sort of ‘memory’ to it, so the line drawn represents presence detected over more than one frame. The background is black, which accentuates the neon-ness of the piece.

The sound design consists of matching percussion tracks featuring kick drum, snare and high hats. Moving in front of webcam/screen A activates the kick drum track, webcam/screen B the snare and C the hats, the idea being that if persistent movement is detected in front of each of the screens, the entire mix will play. Each drum channel is made up of a small number of phrases which trigger each other randomly, so the mix as a whole can play for some time without repetition.
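The random phrase-chaining could be sketched roughly like this – the phrase names are invented placeholders, and the only real constraint modelled is that a phrase hands over to a randomly chosen different one:

```python
import random

# Sketch: each drum channel holds a few short phrases; when one
# finishes it randomly triggers another (never the same phrase twice
# in a row here, to keep the chain varied).

def phrase_chain(phrases, length, seed=None):
    """Return a sequence of `length` phrase names, each randomly
    chosen and different from its predecessor."""
    rng = random.Random(seed)
    out = [rng.choice(phrases)]
    while len(out) < length:
        out.append(rng.choice([p for p in phrases if p != out[-1]]))
    return out

kick_phrases = ["kick_a", "kick_b", "kick_c"]   # invented names
sequence = phrase_chain(kick_phrases, 8, seed=1)
```

Even with only three phrases per channel, chaining them this way avoids an audible fixed loop.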

Neon Dots (will be used if the Carousela plug-in does not work)

Similar to above but instead of a neon line, dots are shown around the moving edge of a body. It looks quite nice if you haven’t seen it working with the Carousela plug-in.

Sparks

Thick white lines are drawn around edges detected in the webcam image. Any movement results in a ghosting effect of the edges as frames are combined. Sparks fly from points of movement, the greater the movement the greater the number and velocity of the sparks. A gently cycling bass sequence provides sonic depth and some suspense while the visual sparks are also heard as 3 distinct collections of spark noise, linked to movement detected in respective webcam images. The three ‘spark wheels’ are panned left, centre and right to match the screens. I will need to tweak the sound design in situ to reinforce the relationship between sparks created by a given webcam and sounds emitted within the stereo field.
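A rough sketch of the movement-to-spark mapping – the specific counts, velocity scaling and pan values here are illustrative, not the values in the actual patch:

```python
import random

# Sketch: more detected motion -> more sparks, each moving faster.
# The three "spark wheels" are panned to match the screens.

PAN = {"A": -1.0, "B": 0.0, "C": 1.0}   # left / centre / right

def emit_sparks(motion, rng=None):
    """motion in 0.0-1.0; returns a list of spark velocity vectors.
    Count and speed both scale with motion (factors are invented)."""
    rng = rng or random.Random()
    count = int(motion * 20)
    return [{"vx": rng.uniform(-1, 1) * (1 + 4 * motion),
             "vy": rng.uniform(0.5, 1.0) * (1 + 4 * motion)}
            for _ in range(count)]

sparks = emit_sparks(0.8, random.Random(0))
```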

Buffer

Frames of video are shown randomly from a ‘buffer’ of the preceding 24 or so video frames. The effect is quite startling and looks like a video disc that keeps getting stuck, playing the same frame a few times before stuttering forwards and backwards then repeating the process. Changes of luminance are tracked and when these are high enough (ie there is substantial movement) particles with trails are generated from the area of most movement. To me, these look a little like creases appearing on the image, as would be seen on an old photograph, and accentuate the sense of playing with time. Tracked movement is also linked to audio, with increased movement increasing the frantic-ness of a glitchy staccato noise. Again, there are 3 noise generators, 1 for each screen, each linked directly to the movement detected in that particular screen. In contrast, an asynchronous sequence of chime-like sounds develops over the course of the piece. For me this is a way of counteracting the potential discomfort of otherwise glitchy visuals combined with glitchy audio.
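The frame-buffer playback can be sketched roughly like this, with frames reduced to integers for illustration:

```python
import random
from collections import deque

# Sketch: a ring buffer holds the last ~24 captured frames and each
# displayed frame is picked at random from it, producing the
# stuck-video-disc stutter described above.

BUFFER_SIZE = 24

class FrameBuffer:
    def __init__(self, size=BUFFER_SIZE):
        self.frames = deque(maxlen=size)   # oldest frames fall off

    def push(self, frame):
        self.frames.append(frame)

    def pick(self, rng):
        """Choose a random frame from the recent past to display."""
        return rng.choice(list(self.frames))

buf = FrameBuffer()
for f in range(100):        # capture 100 frames; only the last 24 kept
    buf.push(f)
shown = buf.pick(random.Random(0))
```

Because the buffer is so short, the same frame is often re-picked within moments, which is exactly what gives the piece its stuttering, time-warped feel.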

Rutt Ettra

This piece may yet be dropped as I am uncertain of how well the interaction works until able to calibrate in situ. Essentially the piece utilises a well known technique of mapping an image into 3D space using luminance to define extrusion in the Z plane (in this case provided by the Rutt Ettra plug-in). Audio input is picked up via the webcam microphone and used to radically increase the degree of Z extrusion. Audio is generated using an LFO to produce abstract phrases reminiscent of 1980s computer games. It may or may not work.
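The basic luminance-to-Z mapping might be sketched like this – the scale factors are invented for illustration, and in the real piece the extrusion is performed by the Rutt Ettra plug-in rather than in code like this:

```python
# Sketch: each scanline point is displaced in Z by its luminance,
# with the displacement scaled up by the current audio level.
# `base` and `boost` are invented illustrative factors.

def extrude(luma_row, audio_level, base=1.0, boost=4.0):
    """luma_row: luminance values 0.0-1.0 for one scanline.
    Returns a Z displacement per point; louder audio -> deeper relief."""
    scale = base + boost * audio_level
    return [l * scale for l in luma_row]

quiet = extrude([0.0, 0.5, 1.0], audio_level=0.0)
loud = extrude([0.0, 0.5, 1.0], audio_level=1.0)
```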

Ascii

Ascii utilises the text mapping of a character set against luminance so that the brightness of an image fragment is matched to an ascii character – lighter areas being represented by characters at the beginning of the range (ie the alphabet), darker areas by characters towards the end. This in itself is not a new technique. In the vignette of the same name I have mixed a semi-opaque ascii luminance image map with an ‘edges’ image which helps to add definition of subject. A third uppermost layer shows larger characters corresponding to areas of image movement as they occur. The greater the range of movement the larger the character. The visual effect of moving is a little like creating a trail of characters. The sound design uses LFOs again to generate cascading phrases that modulate according to the level of detected movement. The timbre has a ‘computery’ feel and for me adds to the visual perception of being surrounded by computer text or data. An asynchronous repetitive drone in the background might be the sound of a vast machine hidden from view but omnipresent.
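The luminance-to-character mapping at the heart of the piece can be sketched like this – the character set here is an invented placeholder, not the one used in the vignette:

```python
# Sketch: light maps to the start of the character range, dark to the
# end, as described above. CHARSET is a placeholder alphabet.

CHARSET = "abcdefghijklmnopqrstuvwxyz"   # light -> dark

def char_for(luma):
    """luma 0.0 (dark) to 1.0 (light) -> one ascii character."""
    index = int((1.0 - luma) * (len(CHARSET) - 1))
    return CHARSET[index]

def ascii_row(luma_row):
    """Map one row of luminance samples to a string of characters."""
    return "".join(char_for(l) for l in luma_row)

row = ascii_row([1.0, 0.5, 0.0])   # light, mid, dark
```

In the actual vignette this map is rendered semi-opaque and layered with the edges image and the movement-driven large characters.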

Writing these vignette descriptions has helped me to think about what I might want to say about my work in the gallery and how I might say it.

 

First Access to Exhibition Machines

Today is just over 2 weeks away from the opening of the exhibition and I have access for the first time to the 3 NUA machines I will be using to run the real time image processing at the exhibition – ie one per screen. The spec is slightly lower than that of my development machine, which may prove to be an issue. Hopefully the difference in processing capability will not present any serious problems, but more likely it will require minor tweaking of the individual compositions to optimise them for use in situ. More of a problem is likely to be access rights – in line with college procedure I will not have admin rights on these 3 machines and so must ensure that the entire range of actions I take is achievable within my user permissions.

The tests will fall into 2 camps –

1) Test that the Quartz Composer compositions actually work (ie the correct plug-ins and patches are in place) and that performance is within an acceptable range.

2) Test that the peripherals can be plugged into each machine and accessed without any issue, ie external webcam, mini-display port to VGA adapter and MIDI in/out device.

A few hours later…

The NUA machines, although lower in spec than my development machine, have a more up to date processor (i5 compared to i3) and seem to run the test pieces slightly faster, which is great news. All peripherals work as expected. One machine does not have Quartz Composer installed, which is easily remedied. The only significant problem is that one key plug-in does not work, despite appearing to be installed as expected. I’ve taken a few screen shots and hopefully will get to the bottom of this next week. Otherwise, ‘Neon Dance’ and any other piece that performs outline detection will not be usable, which would be a great shame.

A few more hours later…

I cannot fathom why the plug-in does not work on the college machines. I hope to resolve this at my next scheduled access on the 20th of June. If I can’t resolve it on that day, I will have to drop the outline detection technique. In the meantime I will need to develop some other ideas so that, in the worst case scenario, I have something to fill the gap.

Test #2 – Gallery Test – The Results

The tests on the whole went well. Here are my observations, categorised as described in the previous post (planning).

Operational Issues

Connectivity

The 20m USB extension was long enough to connect the furthest webcam to the dev machine in the control room. Now I need 15m and 10m extensions to wire up the remaining 2 webcams. MIDI and power routing was all fine. The mini-display port to VGA adapter also worked fine. The venue has plenty of VGA cables. Audio via the mini-jack taken directly as an analogue out from the audio machine was very noisy. I had anticipated this and had planned to bring a digital mixer (to which the audio machine can connect optically) and jack to XLR cables to connect directly to the venue’s XLR stereo input. However, I didn’t bring up the mixer for the test due to lack of space and hoping it might be unnecessary. It seems that it will need to come up for the duration of the exhibition. In an ideal world I would connect the audio machine directly to the venue sound system via optical and thus avoid having to find room for another item of kit and associated cables.

Lighting

With the help of Richard Fair (the venue manager), I tried many permutations of lighting pattern and curtain placement to achieve optimum conditions. It seems that the best arrangement is to have the curtains drawn (to prevent light flooding in from the entrance) and the recessed ceiling lights up to about three-quarter strength. There are a number of suspended LED lights which provided a small level of additional side fill but I found these distracting. I tried floor lights but also found these too much of a distraction to the eye. The recessed ceiling lights (presumably halogen bulbs) are capable of whitewashing the entire screen if set too high, so it’s a matter of balancing the need for overhead light, critical to the luminance difference technique (discussed previously), with the improved visibility of the screen in subdued light. More tweaking will be required in the final set-up but I am fairly confident that an adequate balance can be struck. One downside with the ceiling lights is that they appear in the captured image (ie they are ‘seen’ by the webcam) but this is unavoidable given that the ceiling is low and the lights are fixed, with no provision for diffusers or similar.

Webcam placement

The optimum placement was approximately 80cm above the floor (mounted on a mic stand). Any higher and the webcam captures the projector light itself, which must be avoided. One downside is a slight image distortion (acceptable to my mind) but more off-putting is the fact that when you stand directly in front of the webcam at the ‘sweet spot’ (the closest point at which the whole body is seen) you appear to be looking up, as the webcam is capturing from below the line of sight. This won’t matter so much if the final image is non-realistic and the eye detail is effectively obscured. More on final image quality below. With Richard’s help, I also tried the webcam at near-ceiling height looking down. Again there was image distortion but no ceiling lights and critically no projector beam incursion. My feeling was that the image distortion was too great – in effect a position of 80cm above the floor is closer to average eye level than the venue ceiling height (approximately 3 metres). Webcam mounting is a new topic for my to-do list. I had thought of using 3 identical mic stands but these are quite clunky, being designed to mount heavier items. I’m hoping to either fabricate or adapt a stand that provides stability at 80cm while presenting less surface area and therefore less distraction, eg a thin rod mounted on a flat shoe rather than a tripod. My first line of enquiry will be hi-fi speaker stands as I’m sure I have seen something along these lines.

Resolution and Framerate

The maximum resolution detected by the dev iMac when connected to the venue system was 1920 x 1080 and this seems to be the optimum size. I did try lower resolutions scaled up to fit the screens but the image quality seemed to drop (unsurprisingly). One downside was the framerate, which dropped significantly to around 10fps even with a simple webcam test. The framerate did not improve noticeably by reducing resolution and is probably an intrinsic attribute of the set-up. This is frustrating as I have spent much effort trying to maintain 15fps and above in the work to date. I can at least re-calibrate the work to fit around a reduced framerate, hopefully improving image quality.

Demarcation

There is a semi-circular pattern in the carpet and it so happens that the outer edge of this pattern coincides with the ‘sweet spot’. My plan is to run hi-vis gaffer tape along this line and add an arrow for the centre point of each screen ie the point directly in front of the relevant webcam. The webcams themselves will have boxes marked around them on the floor. These measures are informative rather than preventative and I hope that the majority of visitors to the exhibition will understand and abide by the demarcation. There will normally be an invigilator in the room, so human intervention in the case of visitors openly flouting the demarcation to the detriment of the piece will hopefully be achievable in as polite a way as possible.

Critical Techniques

Live / Quartz Composer sync via MIDI

The technique essentially worked – Ableton Live was able to instruct Quartz Composer to change between vignettes while all the time accepting and responding to incoming MIDI information generated by viewer presence. This was a 1:1 experiment (audio to visual) whereas the final will be 1:3, and this will require a little more thought and experimentation to perfect.

Luminance frame difference as a basis for identifying presence

This key technique worked although will certainly benefit from further lighting optimisation.
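For reference, the technique amounts to something like this – the threshold value is a placeholder to be calibrated in situ, as noted above:

```python
# Sketch of luminance frame-differencing: mean absolute difference
# between consecutive frames, compared with a threshold.
# THRESHOLD is a placeholder to be calibrated under venue lighting.

THRESHOLD = 0.05

def frame_difference(prev, curr):
    """prev, curr: equal-length lists of luminance values 0.0-1.0.
    Returns the mean absolute per-pixel difference."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def presence_detected(prev, curr, threshold=THRESHOLD):
    return frame_difference(prev, curr) > threshold

still = presence_detected([0.2, 0.2, 0.2], [0.2, 0.2, 0.2])
moving = presence_detected([0.2, 0.2, 0.2], [0.8, 0.2, 0.2])
```

The dependence on overall luminance is why the lighting balance discussed above matters so much: dim the room too far and the per-pixel differences shrink below any sensible threshold.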

Webcam focus

The Agent v6 webcam features a manual focus ring, which is one of the reasons I have chosen to use it for this project. However, it was difficult to be precise about focal range and I’m now thinking about producing a focus chart that I can place at the sweet spot in front of each webcam in order to adjust focus more efficiently.

Aesthetic Results

The Slit-Scan piece didn’t work as well as the February Slack Space test (performed in front of a large-scale art work) due to the mundane nature of the background ie the venue entrance shown behind the viewer. I did try radical adjustment to the lighting and even the addition of a footlight pointing back at the viewer, but none of these options worked. The answer may be to come up with a non-realistic version of the Slit-Scan piece – but this approach has the potential to become confusing to the viewer. I will experiment further but am prepared to discard this technique from the final selection if necessary.

The Rutt Ettra-based test (which looks amazing at close range on a computer monitor) just didn’t work at scale. The scanlines are too fine and the overall impression is confused. It is good to know this now rather than later. I will probably discard this type of technique from the final selection.

Both the Neon Dance and Sparks pieces worked well, if only after substantial tweaking, largely to accommodate venue lighting and playback framerate conditions. This really is good news as these are the 2 pieces I have worked up the most and had the most hopes for. I’m sure I can optimise the aesthetics of both of these pieces with further tweaking in situ during set-up. The last major concern is that they may not work as well on the NUA machines to be used for the final piece, which are slightly less powerful than the dev machine. I will be able to access these machines from June the 13th onward.

Here is a fairly low-res recording of me calibrating Neon Dance and Sparks during the test.

Next Steps

Are many! I have 3 weeks to turn around the exhibition and am currently scheduling my time. Tomorrow I intend to write a press release and kick-start my promotion plan – more on this as it happens.

Test #2 – Gallery Test – The Plan

This evening I’m taking a collection of computers and equipment up to The Forum to conduct a number of critical tests in the Fusion Gallery. This may be the only time I have access to the gallery space prior to set-up for the exhibition itself, so tight planning is of great importance.

I’ll be testing with a single webcam/computer/screen combination and hope to use the screen furthest from the control room, which presents the greatest challenge in terms of leads and connectivity. The objectives of the test are essentially to:

1) assess the potential for operational success / identify emerging problems
2) evaluate critical techniques
3) evaluate aesthetic results

I have 4 candidate pieces:

1) a simple webcam calibration test
2) Slit-scan piece (no audio)
3) Rutt-Ettra piece (no audio)
4) Switcher piece incorporating ‘Neon Dance’ and ‘Outline Sparks’ (audio active)

The calibration piece will simply enable me to calibrate webcam interaction in situ, taking into account lighting conditions and positioning factors. The major concern here is that the projector beam will coincide with the webcam field of capture – in the worst case meaning that I will need to implement image masking to block out the top portion of the video feed. I have a simple masking process in place, so I can figure out the dimensions and position of any mask if required, although I will probably not have time to implement it in situ across all test pieces.
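The masking process itself is trivial – something along these lines in a numpy sketch, where the mask fraction is a placeholder to be measured in the gallery, not a known value:

```python
import numpy as np

def mask_top(frame, mask_fraction=0.2):
    """Return a copy of the frame with the top portion blacked out,
    so the projector beam falling into shot is ignored downstream."""
    masked = frame.copy()
    rows = int(frame.shape[0] * mask_fraction)
    masked[:rows] = 0
    return masked

frame = np.ones((100, 160, 3))
masked = mask_top(frame, mask_fraction=0.25)
print(masked[:25].sum())                       # → 0.0 (top quarter zeroed)
print(masked[25:].sum() == frame[25:].sum())   # → True (rest untouched)
```

Zeroing the rows before frame differencing means projector flicker in that band can no longer register as false "presence".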

The Slit-Scan piece is very similar to previous work and to a test I ran at Slack Space Colchester back in February where I ended up with a large piece of artwork in the background by coincidence. I need to assess whether the piece works visually at a larger scale and viewer distance without a fixed background. This is one of the core techniques I have been working on and I’m hopeful that it will make it into the final selection as it’s fun and responsive.
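For anyone new to the technique, the slit-scan principle is easy to sketch – each incoming frame contributes one vertical slice, and the slices accumulate left to right into a time-smeared image. This is a minimal numpy illustration (the centre-column slit is an arbitrary choice, not necessarily what the piece uses):

```python
import numpy as np

def slit_scan(frames):
    """Assemble one column per frame into a time-smeared image."""
    columns = [f[:, f.shape[1] // 2] for f in frames]  # centre slit
    return np.stack(columns, axis=1)

# Fake a sequence of 64 greyscale frames, 48 px tall, brightening over time.
frames = [np.full((48, 64), t, dtype=np.uint8) for t in range(64)]
out = slit_scan(frames)
print(out.shape)   # → (48, 64): one column per frame
```

This also shows why the background matters so much: anything static behind the viewer repeats as flat horizontal banding across the whole output.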

The Rutt-Etra piece is really just an in situ test of an existing Quartz Composer plug-in. Before going to any great lengths to produce something of a similar nature or perhaps create a hybrid piece that uses the plug-in, I want to see how it behaves at scale and distance.
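For reference, the effect the plug-in emulates displaces each horizontal scanline vertically in proportion to local brightness, so bright areas push the line upward. A rough numpy sketch of just the geometry (the gain and step values are purely illustrative, not those of the plug-in, and the real thing renders the lines rather than returning coordinates):

```python
import numpy as np

def rutt_etra_rows(luma, gain=0.25, step=4):
    """For every step-th scanline, return (x, displaced-y) coordinates."""
    rows = []
    for y in range(0, luma.shape[0], step):
        displaced = y - gain * luma[y].astype(float)  # brighter = higher
        rows.append((np.arange(luma.shape[1]), displaced))
    return rows

luma = np.zeros((32, 32), dtype=np.uint8)
luma[16] = 255                       # a single bright scanline
rows = rutt_etra_rows(luma)
xs, ys = rows[4]                     # the scanline at y = 16
print(ys[0])                         # → -47.75 (16 - 0.25 * 255)
```

The `step` parameter is the scanline spacing, which is exactly the property that turned out too fine to read at projection scale.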

The ‘switcher’ piece is of greatest interest as it will help me to evaluate a number of key audio techniques and the two most recently developed projects. From a control perspective, I use Ableton Live running on a dedicated audio machine to switch between the two Quartz Composer pieces using MIDI commands. I also have a sketch sound design in place for each piece, and Ableton activates the sound design corresponding to the active Quartz Composer piece. The active Quartz Composer piece also sends MIDI data corresponding to viewer interaction back to Ableton Live, which uses this to modulate properties of the current sound design. In other words, there should be a tangible audio reaction to webcam interaction. I will certainly be assessing the characteristics of each sound design in situ – e.g. volume, boom, softening, natural reverb etc. Even though the audio aesthetic is at an early stage, there should be enough of a range of timbres to get a feel for sound-in-space qualities.
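The interaction-to-MIDI leg of that loop amounts to scaling a motion measure into a 7-bit control-change value. A plain-Python sketch of the mapping – the channel and CC numbers here are placeholders, not the real patch assignments:

```python
def motion_to_cc(motion, cc_number=20, channel=0):
    """Clamp a 0.0-1.0 motion amount into a MIDI control-change tuple
    (message type, channel, controller number, 7-bit value)."""
    value = max(0, min(127, round(motion * 127)))
    return ('control_change', channel, cc_number, value)

print(motion_to_cc(0.0))   # → ('control_change', 0, 20, 0)
print(motion_to_cc(0.5))   # → ('control_change', 0, 20, 64)
print(motion_to_cc(1.5))   # → clamped to ('control_change', 0, 20, 127)
```

Ableton then maps the incoming CC to a device parameter (filter cutoff, send level, etc.), which is what produces the tangible audio reaction to movement.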

Audio aside, I’m confident of Outline Sparks working as expected but less so of ‘Neon Dance’, which has been glitchy in the past. Although this is one of my favourite pieces, I am preparing myself to drop it from the final selection if it doesn’t work in the gallery space or doesn’t run on the exhibition machines (which are not available to me just yet).

I have a sense that I’ll either be a happy man driving back to Colchester tonight or a deeply worried one if things haven’t worked out – perhaps I’ll be caught out by something I didn’t anticipate – fingers crossed.

Equipment List

Dev machine (iMac running Quartz Composer)
Audio machine (Macbook running Ableton Live)
2 x USB MIDI adapters
1 x USB HD Webcam
Webcam mount adapter
Webcam stand
20m USB extension lead
1 x mini-display port to VGA adapter
Selection of VGA leads
Digital mixing desk
Optical lead
XLR / jack leads
Mini jack lead
Power extensions
Tape measure
Video Camera
Camera stand