Project Concepts

This section is drawn from Lewis’ summary of a telephone discussion with Andy Williams in early October 2015, following an introduction by Dr. Toby Heys and a couple of subsequent email exchanges. Although too extensive to include in much detail within the final proposal submitted to the AHRC, it outlines a shared understanding of, and a ‘broad brushstroke’ working approach to, the project overall.

Touch: See: Hear – A Multi-Sensory Instrument for Learning Disabled Adults – Project Concepts

Effective playful, interactive digital installations are frequently ‘toylike’ – they’re designed to be immediate, easy to understand and quick to learn. As a result they’re often ‘shallow’: they do one thing well, but once their process is understood they offer limited scope for ongoing engagement (although particularly effective examples allow users to ‘usurp’ the interaction process and use it in ways that were not intended). Instruments differ in that, while they’re also designed to be relatively immediate and easy to understand – a child can play a piano and enjoy the experience – they’re not necessarily quick to learn: an accomplished pianist will have taken years to develop their musicianship. Accordingly, instruments have ‘depth’ – the more they’re played and practised, the more adept the user becomes at creating increasingly sophisticated outputs. This inherent nature of the instrument provides a useful mechanism for responding to the question: “How do people start playing with things, interpret them on their own and learn how to use them?”

A key axiom, drawn from an appreciation of how adults with learning disabilities respond best to these types of interactive environments, is ‘agency’: a particular input should have a direct and unambiguous output that is repeatable and consistent. This is essential to discovering how something works and to learning how to use it through play and experimentation, and is the basis of many musical instruments. Yet while we’ll certainly draw upon established HCI principles to develop the instrument’s interface, more important here will be following a User Centred Design approach – evolving the way the instrument behaves through user testing and an iterative design process that responds to the needs and requirements of its end users.
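As a purely illustrative sketch (the actual mapping will emerge from user testing, and the zone names and sound files here are hypothetical), ‘agency’ could be prototyped as a strictly deterministic lookup from input gesture to output sound, so the same touch always produces the same result:

```python
# Minimal sketch of the 'agency' principle: every input maps to one
# fixed, repeatable output. Zone names and sound files are hypothetical.
SOUND_FOR_ZONE = {
    "top": "drone_a.wav",
    "middle": "pulse_b.wav",
    "bottom": "texture_c.wav",
}

def respond_to_touch(zone: str) -> str:
    """Return the sound for a touch zone - deterministic, with no
    randomness, so users can discover the mapping through repeated play."""
    return SOUND_FOR_ZONE[zone]

assert respond_to_touch("top") == respond_to_touch("top")  # always consistent
```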

The concept of developing a new musical interface – a minimal glowing sphere – responds in part to an understanding of how the learning disabled react to conventional musical interfaces like piano keyboards. A frequent behaviour is to run a finger from top to bottom and so play a crescendo of notes from high to low. Because their behaviour is strongly pattern-based – they tend towards repetition – it’s then difficult for them not to do this every time. Level Centre have implemented strategies that use the senses to try to change this sequential behaviour by adjusting the way their keyboards respond – for example, playing a sound of the same pitch but from quiet to loud. Designing a custom controller that is responsive to touch and movement, yet unlike any conventional musical interface they may have encountered before, is therefore a significant element within the project.
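One way to picture that remapping strategy – a hypothetical sketch, not Level Centre’s actual implementation – is to map vertical finger position to loudness at a fixed pitch, so the habitual top-to-bottom sweep produces a quiet-to-loud swell rather than a descending scale:

```python
import numpy as np

SAMPLE_RATE = 44100
FIXED_PITCH_HZ = 220.0  # hypothetical fixed pitch

def tone_for_position(position: float, duration: float = 0.2) -> np.ndarray:
    """Map a normalised key position (0.0 = top, 1.0 = bottom) to the
    *same* pitch at increasing loudness, instead of a descending pitch."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    amplitude = 0.1 + 0.9 * position  # quiet at the top, loud at the bottom
    return amplitude * np.sin(2 * np.pi * FIXED_PITCH_HZ * t)
```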

While we don’t want to pre-empt the emerging aesthetic of the sounds generated, we know that adults with learning disabilities generally respond to rhythm and to the quality of the sound – they engage with ‘pure’ sound very well. This may well result in the instrument’s sonic output being less about conventional notions of pitch, timbre, melody and harmony and more about creating abstract sonic ‘objects’ – sounds, drones and pulses that move around the space and can be combined and layered to create complex aural textures and rhythms.
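A rough sketch of that idea – assuming numpy, and making no claim about the final sound engine – shows how abstract ‘objects’ such as a drone and a pulse can be generated independently and then summed into a layered texture:

```python
import numpy as np

SAMPLE_RATE = 44100

def drone(freq: float, seconds: float) -> np.ndarray:
    """A sustained sine 'object' - no melody, just a steady tone."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return 0.4 * np.sin(2 * np.pi * freq * t)

def pulse(freq: float, rate_hz: float, seconds: float) -> np.ndarray:
    """The same kind of tone gated on and off to form a rhythmic pulse."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    gate = (np.sin(2 * np.pi * rate_hz * t) > 0).astype(float)
    return 0.4 * np.sin(2 * np.pi * freq * t) * gate

# Layer two independent objects into a single, more complex texture.
texture = drone(110.0, 4.0) + pulse(330.0, 2.0, 4.0)
```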

In developing the visuals we plan to draw on an understanding of how the learning disabled react to visually based installations – particularly those that use cameras to capture their image and let them see and hear themselves played back. While they undoubtedly enjoy the experience, it’s often difficult for them to move beyond the ‘that’s me’ novelty. So we’re unlikely to feature them within the projections as more than a silhouette – although we do intend to integrate image and motion capture devices such as the Xbox 360 Kinect for supplemental control and for documenting their engagement.
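As a sketch of the silhouette idea (assuming a depth frame already captured from the Kinect as a numpy array – the synthetic frame below stands in for live capture), a simple depth threshold yields a flat outline rather than a recognisable self-image:

```python
import numpy as np

def silhouette_from_depth(depth_mm: np.ndarray, near: int = 500,
                          far: int = 2500) -> np.ndarray:
    """Reduce a Kinect depth frame (millimetres) to a binary silhouette:
    anything within the near/far band reads as the participant's outline,
    with no facial detail to trigger the 'that's me' response."""
    return ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

# Hypothetical 480x640 depth frame standing in for live Kinect input.
fake_depth = np.full((480, 640), 4000, dtype=np.uint16)
fake_depth[100:400, 200:440] = 1500  # a 'person' inside the capture band
mask = silhouette_from_depth(fake_depth)
```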

Our emphasis here will be on trying to create strong perceptual connections between what’s heard and what’s seen – using abstract lines, curves, shapes and textures that look, behave and move like the sound sounds. For example, lower frequencies may move slowly and wobble, while higher frequencies may move faster and be more sharply defined. Moreover, by using positional audio and linking the four projectors together we’ll be able to move sound and image around the space in 3D and in unison, further emphasising their interconnectedness.
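To make that sound-to-shape correspondence concrete – an illustrative mapping only, with placeholder ranges that would be tuned through user testing – each visual object’s speed, wobble and edge sharpness could be derived directly from the frequency of the sound it represents:

```python
def visual_params_for_frequency(freq_hz: float) -> dict:
    """Map audio frequency to visual behaviour: low sounds move slowly
    and wobble, high sounds move quickly with crisper edges.
    The ranges and scaling here are hypothetical placeholders."""
    # Normalise roughly across the audible band, 20 Hz - 20 kHz.
    norm = min(max((freq_hz - 20.0) / (20000.0 - 20.0), 0.0), 1.0)
    return {
        "speed": 0.1 + 2.0 * norm,   # faster as pitch rises
        "wobble": 1.0 - norm,        # looser and wobblier when low
        "edge_sharpness": norm,      # crisper and more defined when high
    }
```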

Though it may seem a little too literal, this intuitive, practical approach is grounded in current theories of visual perception: in its early stages we see an image composed of forms, lines, colours, motions and so on, but lacking specific meaning; and in order to create a more instantaneous visual perception of the world around us, we all carry, built within us, a set of archetypal shapes that we constantly reference. Additionally, our colour palettes will most likely reflect current cognitive neuroscience research that evidences perceptual connections between pitch and colour – the brighter a colour, the higher the pitch we associate with it.
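A minimal sketch of that pitch-brightness pairing, using the standard-library colorsys module – the fixed hue and the scaling are hypothetical choices, not a finished palette – keeps hue constant while lightness rises with pitch:

```python
import colorsys

def colour_for_pitch(freq_hz: float, hue: float = 0.6) -> tuple:
    """Return an RGB colour whose lightness tracks pitch: higher
    frequencies give brighter colours. Hue (0.6 = blue) is a placeholder."""
    norm = min(max((freq_hz - 20.0) / (20000.0 - 20.0), 0.0), 1.0)
    lightness = 0.2 + 0.7 * norm  # brighter as the pitch rises
    return colorsys.hls_to_rgb(hue, lightness, 0.8)  # note (h, l, s) order
```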

This approach certainly has to be carefully tailored for a group of people who don’t sequence in the same way as able-bodied people and who have differing abilities in tracking multiple objects of varying movement and speed. Even so, we’re confident it has the potential to engage their multiple senses in ways far more meaningful than a simple synchronisation of sound and image. This position is further informed by contemporary audiovisual theory and recent cognitive neuroscience research, which argue for and provide evidence of a unique quality to combined audiovisual perception: the merging of sounds and images to generate a third, audiovisual form – a type of experience distinct from the experience of images or of sounds in isolation from one another.