Through developing a series of ‘demo’ sketches in openFrameworks I’ve been:

  • initially investigating the LEVEL Centre media room’s current infrastructure of four projectors, four wall-mounted speakers, traditional stage lighting of spots and floods, and wired and wireless Intranet;
  • testing various hardware configurations to optimise for video + audio;
  • and ‘rapid prototyping’ creative ideas – often integrating additional functionality available through oF addons – to try and realise immersive, 360-degree panoramas across the four screens, spatialised quad audio and a series of input interfaces to control the audiovisual output.

[Image: TSH_Demo1_700px]

Technical

This initial demo began the process of researching and testing different system configuration + coding approaches for spanning a fullscreen window across multiple monitors. This was a first attempt at actually working with the multiple screens at LEVEL – using the in-house Mac Pro tower – but I quickly ran into the issue of spanning an oF sketch window across a desktop of multiple screens, particularly while running it in fullscreen mode. These issues were subsequently resolved in Demo 2.
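For reference, a minimal main.cpp along the lines below is one way to request a single fullscreen window spanning all attached displays using oF’s GLFW windowing. It’s a sketch under assumptions (oF 0.9+, and a hypothetical combined resolution for the four outputs), not necessarily the configuration arrived at in Demo 2:

#include "ofMain.h"
#include "ofApp.h"

int main() {
    ofGLFWWindowSettings settings;
    settings.setSize(4096, 768);               // hypothetical combined resolution of the four outputs
    settings.decorated = false;                // borderless window
    settings.windowMode = OF_FULLSCREEN;
    settings.multiMonitorFullScreen = true;    // span the fullscreen window across every attached display

    auto window = ofCreateWindow(settings);
    ofRunApp(window, std::make_shared<ofApp>());
    ofRunMainLoop();
}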

I also tested/used the:

  • ofxGifDecoder addon to import, display and control the frame-rate of found animated GIFs;
  • ofxOsc addon to test input from a TouchOSC interface on an iPad 2 into oF on the Mac via the LEVEL Centre’s default Wi-Fi network;
  • ofxMidi addon to send out note on/off, velocity and pan data (see the stripped-down sketch after this list).
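By way of illustration, a stripped-down sketch along these lines shows how the OSC input and MIDI output fit together (the ofxGifDecoder side is left out here). The OSC address "/1/xy1", the port and the MIDI mappings are assumptions based on TouchOSC defaults, not the actual project code:

#include "ofMain.h"
#include "ofxOsc.h"
#include "ofxMidi.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver osc;
    ofxMidiOut midiOut;

    void setup() {
        osc.setup(8000);                             // port set in the TouchOSC layout (assumed)
        midiOut.openPort(0);                         // first available MIDI output
    }

    void update() {
        while (osc.hasWaitingMessages()) {
            ofxOscMessage m;
            osc.getNextMessage(m);
            if (m.getAddress() == "/1/xy1") {        // an XY pad: x = pan, y = pitch
                float x = m.getArgAsFloat(0);
                float y = m.getArgAsFloat(1);
                int pan   = (int) ofMap(x, 0, 1, 0, 127);
                int pitch = (int) ofMap(y, 0, 1, 36, 84);
                midiOut.sendControlChange(1, 10, pan);   // CC 10 = pan
                midiOut.sendNoteOn(1, pitch, 100);       // note on with a fixed velocity
            }
        }
    }

    void exit() {
        midiOut.closePort();
    }
};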

Interaction

[Image: TouchOSC_Editor_TSH_Demo_1]

I thought I’d start by testing the iPad’s touchscreen as an input interface – via a simple TouchOSC layout.

The XY pads controlled pan (X) and pitch (Y) – displayed respectively as horizontal position and as changes to the frame-rate and size of the animated GIFs on the projected screen (faster and smaller as pitch rose, slower and larger as it fell). The vertical slider controlled volume – displayed as opacity.
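Concretely, the mapping from the incoming 0–1 TouchOSC values to those visual parameters might look something like this – the names and ranges are illustrative rather than the project code:

#include "ofMain.h"

// Illustrative only: mapping the 0-1 TouchOSC values to the visual parameters described above.
struct VisualParams {
    float posX;        // horizontal position on the projected screen
    float frameTime;   // seconds per GIF frame
    float scale;       // relative size of the GIF
    float alpha;       // opacity, 0-255
};

VisualParams mapTouchOsc(float x, float y, float volume) {
    VisualParams p;
    p.posX      = ofMap(x, 0, 1, 0, ofGetWidth());    // pan    -> horizontal position
    p.frameTime = ofMap(y, 0, 1, 0.2f, 0.02f);        // pitch  -> faster playback as pitch rises
    p.scale     = ofMap(y, 0, 1, 1.5f, 0.5f);         // pitch  -> smaller as pitch rises
    p.alpha     = ofMap(volume, 0, 1, 0, 255);        // volume -> opacity
    return p;
}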

I intentionally positioned the input elements and matched their colours with the three animated GIFs displayed on the screen to create a direct and obvious link between input and output.

Aesthetic

In my early planning I’d always thought it would be important to try and create a strong and obvious connection between what was seen and heard – as noted in the Project Concepts…

“Our emphasis here will be on trying to create strong perceptual connections between what’s heard and what’s seen – using abstract lines, curves, shapes and textures that look, behave and move like the sound sounds. For example, lower frequencies may move slowly and wobble while higher frequencies may move faster and be more sharply defined. Moreover, by using positional audio and linking the four projectors together we’ll be able to move sound and image around the space in 3D and in unison, further emphasising their interconnectedness.”

This was an early attempt to select looped animations that seemed to best visually represent the quality of the sound – selected from Dave Whyte’s Bees & Bombs Tumblr archive of excellent animated GIFs, created primarily with Processing – and to control the rate of their playback, e.g. slower for lower pitches and faster for higher pitches.

While ofxGifDecoder worked OK for playback, it didn’t allow for alpha or other types of blending, so I had to overlay blocks on top of the GIFs and use these to effectively tint and fade them out.
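The workaround amounted to drawing coloured, alpha-blended rectangles over each GIF – something along these lines, with the tint colour and fade values purely illustrative:

#include "ofMain.h"

// Sketch of the overlay trick: a translucent block of the GIF's assigned colour tints it,
// and a background-coloured block whose alpha follows the volume slider fades it out.
void drawTintAndFadeOverlay(float gifX, float gifY, float gifW, float gifH,
                            ofColor tint, float fadeAmount) {   // fadeAmount: 0 = fully visible, 1 = faded out
    ofEnableAlphaBlending();
    ofPushStyle();

    ofSetColor(tint.r, tint.g, tint.b, 120);                    // tint the GIF underneath
    ofDrawRectangle(gifX, gifY, gifW, gifH);

    ofSetColor(0, 0, 0, (int) ofMap(fadeAmount, 0, 1, 0, 255)); // fade towards the (black) background
    ofDrawRectangle(gifX, gifY, gifW, gifH);

    ofPopStyle();
    ofDisableAlphaBlending();
}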

Yet despite spending some time searching for particularly suitable GIFs… I don’t think the end results were actually that effective.