Work in Progress: Difference and Outline

After writing up Test Event #1 I spent quite some time trying to pre-sample a background (i.e. by taking a picture through the webcam) and then using it in conjunction with a live webcam feed to separate subject from background in real time. I wrote an image filter that compares the values of pixels between a foreground and background image (plus or minus a tolerance factor) and discards matching pixels, the idea being that a subject not present in the background image will have fewer matching pixels and so will be isolated from the background. It works to an extent, but the result is too ‘stippled’ and not really usable without major transformation, such as blurring or downsampling. The following image shows the (rather poor) result.

[Image: poor-difference]
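As a rough illustration of the comparison logic, here is a minimal sketch assuming OpenCV and NumPy; the function name, tolerance value and masking approach are my own stand-ins, not taken from the actual filter.

```python
import cv2
import numpy as np

def subtract_background(frame, background, tolerance=30):
    """Discard pixels that match a pre-sampled background image."""
    # Absolute per-channel difference between the live frame and background
    diff = cv2.absdiff(frame, background)
    # A pixel 'matches' the background if every channel is within tolerance
    matches = np.all(diff <= tolerance, axis=2)
    # Keep only the non-matching (foreground) pixels; matched pixels go black
    result = frame.copy()
    result[matches] = 0
    return result

# Usage: pre-sample one background frame, then filter the live feed
cap = cv2.VideoCapture(0)
_, background = cap.read()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('foreground', subtract_background(frame, background))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```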

In the course of evaluating the result of the above experiment I hit upon an alternative technique which has proven to be more fruitful. The principle is to take the current frame of a video feed and compare it with the previous frame to determine what has changed. Rather than comparing colour values, luminance (i.e. brightness) is compared on a per-pixel basis. The result, although at times unpredictable and rather noisy, lends itself well to a number of real-time uses.
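A minimal sketch of the frame-differencing idea, again assuming OpenCV (greyscale conversion stands in for a true luminance calculation, and the threshold value is an arbitrary choice of mine):

```python
import cv2

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_luma = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels whose luminance changed by more than the threshold light up
    diff = cv2.absdiff(luma, prev_luma)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow('luminance difference', motion)
    prev_luma = luma
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```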

The next step was to link this technique with another developed as a Quartz Composer plugin by Benoît Lahoz, ‘Carasuelo OpenCV Contour’, which creates shape outlines based on luminance values. In this way a shape can be created to surround the area of luminance change.
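I don’t know the internals of the Carasuelo plugin, but the equivalent step in plain OpenCV would look something like the sketch below, which traces contours around the thresholded motion mask from the previous step (the area cut-off is a placeholder value of mine):

```python
import cv2

def motion_outlines(motion_mask, frame):
    """Draw outlines around regions of luminance change."""
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(
        motion_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny, noisy contours before drawing
    contours = [c for c in contours if cv2.contourArea(c) > 200]
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
    return frame
```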

I really like this technique for its simplicity from the viewer’s point of view: it’s immediately clear that moving in front of the webcam results in an outline being added to the image. In the following experiments I’ve hidden the luminance difference image and placed the outline on top of the unaffected video capture. The first example combines the outline shape with mathematical noise to create a sort of webbing effect.
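This isn’t how the Quartz Composer patch is built, but as a loose illustration of the outline-plus-noise idea: jittering each contour point with random noise and drawing several offset copies makes the outline fray into a web (the amplitude and strand count are invented values).

```python
import cv2
import numpy as np

def webbing(frame, contours, amplitude=8, strands=3):
    """Fray a set of contours into a web of noisy strands."""
    for contour in contours:
        pts = contour.reshape(-1, 2).astype(np.float32)
        for _ in range(strands):
            # Displace every point by a random offset and redraw the outline
            noisy = pts + np.random.uniform(-amplitude, amplitude, pts.shape)
            cv2.polylines(frame, [noisy.astype(np.int32)],
                          True, (255, 255, 255), 1)
    return frame
```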

The second example creates a large-scale glow which is quite fun to play with.

Next steps are to look at persistence, so that the outline and effect do not disappear as soon as the viewer stops moving, but rather continue for some time and perhaps recede more naturally. I’ve already tried an image processing technique which achieves this to a certain extent, but I also want to try a computational technique in which the points of the outline are stored in memory and then perhaps degraded mathematically over time.
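Something along these lines, perhaps; this is a speculative sketch of the computational approach, where the class, decay factor and energy cut-off are all placeholder choices of mine.

```python
import cv2
import numpy as np

class FadingOutlines:
    """Store outline points in memory and degrade them over time."""

    def __init__(self, decay=0.92):
        self.decay = decay
        self.trails = []  # list of [points, energy] pairs

    def add(self, points):
        # A fresh outline starts at full energy
        self.trails.append([np.asarray(points, dtype=np.float32), 1.0])

    def step(self):
        # Decay every stored outline; drop those that are nearly invisible
        for trail in self.trails:
            trail[1] *= self.decay
        self.trails = [t for t in self.trails if t[1] > 0.05]

    def render(self, frame):
        # Brightness follows the remaining energy, so outlines fade out
        for points, energy in self.trails:
            colour = (int(255 * energy),) * 3
            cv2.polylines(frame, [points.astype(np.int32)], True, colour, 1)
        return frame
```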

In Search of Sound

I’m conscious of the fact that this is an audiovisual project and so far I have spent the majority of my time investigating the visual aspect. In the back of my mind I have been considering the way in which audio might be managed and it seems timely to formalise some of those thoughts.

Firstly, the format of the final piece has much bearing on the organisation of audio: will it be 3 screens showing the same interactive vignette at any given moment, or will each individual screen show a separate vignette at a separate time? I’m leaning towards the former approach as I think the sheer visual impact of having 3 screens show the same (hopefully visually arresting) vignette will enhance the viewer experience.

Secondly, if the 3 screens are to show the same vignette at the same time, should they each have an independent, local sound design or should they somehow feed into a global sound design?

The local sound design approach means that each vignette would need a sound scheme that is capable of being duplicated out of sync, in other words unsynchronised but mutually compatible.

The global approach would mean that each vignette could trigger elements of a single sound scheme, potentially sacrificing the perception of localised sound (i.e. the viewer interacting with a given screen being aware of their own interactions affecting the local audio). The benefit would be that sounds could be synchronised (perhaps making use of rhythm) and that a single machine and software license could provide audio capability.

The last point is an important consideration as I am unlikely to be able to purchase software licenses for 3 machines. I think I can afford to leave this question unanswered for now and focus on the more creative aspect of sound design.

So, I currently have 2 vignettes in development; for each of these I want to consider the audio element before moving further with the project as a whole. I have named them for convenience.

‘Outline motion particles’ is where an outline of the viewer is seen only when he/she comes to rest. When movement occurs the outline is softened and a storm of particles is created that shoot off like sparks from a sparkler. The first approach that comes to mind here is to find a foreground sound that can be used to represent the shooting sparks, either literally duplicated to create many sounds or complex enough to carry the idea of many sounds, and a background sound that creates atmosphere and tension. The background and foreground must fit together, but the foreground is likely to supersede the background when ‘the sparks are flying’, following on from the visual dynamic.

‘Neon Outlines’ is where a neon-coloured outline of the viewer is created in response to their movement. I had a strong idea right from the start that moving in front of the screen would not only create an outline but also trigger some kind of dance music. The more the viewer moves, i.e. dances, the more the outline is drawn and the dance track develops.

The Treachery of Sanctuary

The Treachery of Sanctuary is a large-scale interactive installation directed by Chris Milk, a former music video director and, more recently, interactive video director. From early on in the project life cycle, Milk worked closely with interactive designer Ben Tricklebank, who contributed design and creative direction. The technical director was James George, an artist in his own right with impressive technical capabilities. Several other individuals worked on the project, bringing the total number of contributors to at least 19. The project was sponsored by the Creators Project, a partnership between chip maker Intel and online magazine Vice, and debuted at Fort Mason, San Francisco, in March 2012.

The installation consists of a triptych of 3 large screens, each of which presents a unique interactive experience powered at its heart by a Kinect camera used to sense and track human form and motion. The first screen displays the viewer’s silhouette, which then breaks down into a flock of birds that fly about. The second screen has a similar flock of birds attacking the viewer’s silhouette, taking chunks out of it. The third screen, perhaps the most impressive, transforms the viewer’s silhouette with the addition of a massive pair of wings when they reach upward with their arms.

The Creators Project hosts an informative documentary on the making of the project which discusses the story behind the installation, including some key motivations on the part of the director.

James George also details the technical approach on his blog; the project page can be found here.

It is evident that The Treachery of Sanctuary was a well-resourced project that relied on contributions from a range of individuals who might be considered experts in their fields. The technical aspects are of great interest, but perhaps more so are the narrative construction of the piece and the sense of emotion and atmosphere that it evidently creates.

In an interview with Wired in which he discusses the installation, Milk talks about the 2-way nature of interactive art and its ability to ‘speak back to the person in front of it’.

For me, this installation is a great inspiration because it takes a relatively simple idea and transforms it into a fantastic and unique experience using some very clever techniques. But although the technology is such a key element, it is effectively transparent and subordinate to the creative vision. Also, the interactions are couched within a narrative construct that imbues meaning and context, rather than just being ‘cool’ and fun, which they undoubtedly are too. From an interaction design perspective, I like the way that the image of a viewer is captured (via a Kinect camera) and then abstracted, initially as a silhouette, into a designated scene. The viewer implicitly and intuitively understands the control they have over the scene through the realtime response and, more fundamentally, through the familiarity of their own silhouette.