Expressive Control of Indirect Augmented Reality During Live Music Performances

Nowadays, many music artists rely on visualisations and light shows to enhance and augment their live performances. However, the visualisation and triggering of lights in popular music concerts are normally scripted in advance and synchronised with the music, limiting the artist's freedom for improvisation, expression and ad hoc adaptation of their show. We argue that these limitations can be overcome by combining emerging non-invasive tracking technologies with an advanced gesture recognition engine.

We present a solution that uses explicit gestures and implicit dance moves to control the visual augmentation of a live music performance. We further illustrate how our framework overcomes limitations of existing gesture classification systems by providing a precise recognition solution based on a single gesture sample in combination with expert knowledge. The presented approach enables more dynamic and spontaneous performances and—in combination with indirect augmented reality—leads to a more intense interaction between artist and audience.

Powered by the Mudra Engine

Precise 3D gesture recognition using the Mudra framework. Our domain-specific language allows a simple declarative description of complex 3D gestures in space and time. The recognition engine compiles this description into a RETE network, which is able to process all Kinect input in real time.
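To give a feel for the declarative approach, the sketch below shows a hypothetical gesture description in Python; it is not the actual Mudra DSL, and the joint names (`head`, `hand_r`) and frame format are assumptions. A gesture is expressed as an ordered sequence of spatial conditions, and a naive matcher walks the Kinect-style frame stream; a RETE engine would instead share and incrementally update partial matches across many gestures.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Assumed input format: one skeleton frame maps joint names to (x, y, z).
Frame = Dict[str, Tuple[float, float, float]]

@dataclass
class Condition:
    """A named spatial predicate over a single skeleton frame."""
    name: str
    predicate: Callable[[Frame], bool]

@dataclass
class Gesture:
    """An ordered sequence of conditions that must hold one after
    another in the frame stream (spatial constraints ordered in time)."""
    name: str
    sequence: List[Condition]

def recognise(gesture: Gesture, frames: List[Frame]) -> bool:
    """Naive sequential matcher over a recorded frame stream."""
    step = 0
    for frame in frames:
        if gesture.sequence[step].predicate(frame):
            step += 1
            if step == len(gesture.sequence):
                return True
    return False

# Example: "raise the right hand above the head" as two ordered conditions.
raise_hand = Gesture("raise-right-hand", [
    Condition("hand-below-head", lambda f: f["hand_r"][1] < f["head"][1]),
    Condition("hand-above-head", lambda f: f["hand_r"][1] > f["head"][1]),
])

stream = [
    {"head": (0.0, 1.6, 0.0), "hand_r": (0.3, 1.0, 0.0)},  # hand at waist
    {"head": (0.0, 1.6, 0.0), "hand_r": (0.3, 1.4, 0.0)},  # hand rising
    {"head": (0.0, 1.6, 0.0), "hand_r": (0.3, 1.9, 0.0)},  # hand above head
]
print(recognise(raise_hand, stream))  # True
```

In a rule-engine compilation, each `Condition` would become a node in the RETE network, so that per-frame predicates are evaluated once and shared across all gestures that use them.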

The gesture set enabling augmented fire


The work in action at the ArtCube and the International Convention Center (Ghent, Belgium)

Dancing with two hands on fire
The final gesture, showing the whole body on fire

The configuration of the stage


Live Music Performance

Related information: Academia · Paper PDF · BibTeX

research/mudra/nime.txt · Last modified: 2015/02/13 11:39 by lhoste