I was a postdoctoral fellow at the Software Languages Lab (Dec. 2015), working in the Lambda and REBLS groups. My research focused on how reactive middleware can be integrated into mainstream applications. The focus lay on describing complex reactive patterns in a declarative manner, recognising them efficiently in (soft) real time, and exploring their software engineering benefits.
Keywords: Rule Languages, Expert Systems, Complex Event Processing, Programming Languages, Research Frameworks, Ubiquitous Computing, Multimodal Interaction, Gesture Recognition.
Traffic monitoring or crowd management systems produce large amounts of data in the form of events that need to be processed to detect relevant incidents. Rule-based pattern recognition is a promising approach for these applications; however, increasing amounts of data as well as large and complex rule sets demand ever more processing power and memory. In order to scale such applications, a rule-based pattern detection system needs to be distributable over multiple machines. Today's approaches, however, focus on the static distribution of rules or do not support reasoning over the full set of events.
We propose Cloud PARTE, a complex event detection system that implements the Rete algorithm on top of mobile actors. These actors can migrate between machines to respond to changes in the workload distribution. Cloud PARTE is an extension of PARTE and offers the first rule engine specifically tailored for continuous complex event detection that can benefit from elastic systems as provided by cloud computing platforms. It supports fully automatic load balancing and online rules with access to the entire event pool.
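To make the load-balancing idea concrete, here is a toy sketch of a migration decision for mobile actors. All names and the threshold are illustrative assumptions, not Cloud PARTE's actual policy: the busiest actor on the most loaded machine is chosen for migration to the least loaded one.

```python
# Toy migration policy for mobile actors (illustrative, not Cloud PARTE's
# actual algorithm): when one machine's total load exceeds a threshold,
# pick its busiest actor and migrate it to the least-loaded machine.

def rebalance(machines, threshold=0.8):
    """machines: {machine: {actor: load}}. Returns (actor, src, dst) or None."""
    totals = {m: sum(actors.values()) for m, actors in machines.items()}
    src = max(totals, key=totals.get)   # most loaded machine
    dst = min(totals, key=totals.get)   # least loaded machine
    if totals[src] < threshold or src == dst:
        return None                     # no machine is overloaded
    actor = max(machines[src], key=machines[src].get)  # busiest actor
    return actor, src, dst

# "m1" hosts two Rete-node actors and is overloaded; "m2" is nearly idle.
cluster = {"m1": {"alpha": 0.5, "join": 0.4}, "m2": {"terminal": 0.1}}
decision = rebalance(cluster)
```

A real engine would additionally weigh the cost of moving a node's partial-match memory against the expected gain, which this sketch ignores.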
Nowadays, many music artists rely on visualisations and light shows to enhance and augment their live performances. However, the visualisation and triggering of lights in popular music concerts is normally scripted in advance and synchronised with the music, limiting the artist's freedom for improvisation, expression and ad hoc adaptation of their show. We argue that these limitations can be overcome by combining emerging non-invasive tracking technologies with an advanced gesture recognition engine.
We present a solution that uses explicit gestures and implicit dance moves to control the visual augmentation of a live music performance. We further illustrate how our framework overcomes limitations of existing gesture classification systems by providing a precise recognition solution based on a single gesture sample in combination with expert knowledge. The presented approach enables more dynamic and spontaneous performances and—in combination with indirect augmented reality—leads to a more intense interaction between artist and audience.
Precise 3D gesture recognition using the Mudra framework. Our domain-specific language allows a simple declarative description of complex 3D gestures in space and time. Furthermore, the recognition engine compiles this description into a Rete network, which processes all Kinect input in real time.
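The flavour of such a declarative description can be sketched as follows. This is not actual Mudra syntax; the gesture name, constraint vocabulary, and the tiny interpreter are illustrative assumptions. A gesture is a named list of spatiotemporal constraints over a stream of (t, x, y, z) samples:

```python
# Illustrative declarative gesture description (NOT actual Mudra syntax):
# a gesture is a list of spatiotemporal constraints; a small interpreter
# checks them against a trajectory of (t, x, y, z) samples.

SWIPE_RIGHT = {
    "name": "swipe-right",
    "constraints": [
        ("duration<=", 1.0),               # whole gesture within 1 second
        ("monotonic", "x", "increasing"),  # hand moves left to right
        ("displacement>=", "x", 0.3),      # travels at least 0.3 m along x
    ],
}

def matches(gesture, samples):
    """samples: list of (t, x, y, z) tuples in temporal order."""
    for constraint in gesture["constraints"]:
        kind = constraint[0]
        if kind == "duration<=":
            if samples[-1][0] - samples[0][0] > constraint[1]:
                return False
        elif kind == "monotonic":
            axis = "txyz".index(constraint[1])
            vals = [s[axis] for s in samples]
            if constraint[2] == "increasing" and any(b < a for a, b in zip(vals, vals[1:])):
                return False
        elif kind == "displacement>=":
            axis = "txyz".index(constraint[1])
            if samples[-1][axis] - samples[0][axis] < constraint[2]:
                return False
    return True

# A short rightward hand trace satisfying all three constraints.
trace = [(0.0, 0.0, 1.0, 2.0), (0.2, 0.2, 1.0, 2.0), (0.4, 0.5, 1.0, 2.0)]
```

In Mudra itself such descriptions are compiled into a Rete network rather than interpreted constraint by constraint, so partial matches are shared across rules.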
Applying imperative programming techniques to process event streams, like those generated by multi-touch devices and 3D cameras, has significant engineering drawbacks. Declarative approaches solve these problems but have not been able to scale on multicore systems while providing guaranteed response times.
We propose PARTE, a parallel scalable complex event processing engine which allows a declarative definition of event patterns and provides soft real-time guarantees for their recognition. It extends the state-saving Rete algorithm and maps the event matching onto a graph of actor nodes. Using a tiered event matching model, PARTE provides upper bounds on the detection latency. Based on the domain-specific constraints, PARTE's design relies on a combination of (1) lock-free data structures; (2) safe memory management techniques; and (3) message passing between Rete nodes. In our benchmarks, we measured scalability up to 8 cores, outperforming highly optimized sequential implementations.
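The mapping of Rete matching onto a graph of message-passing nodes can be sketched in miniature. The sketch below is synchronous and single-threaded for clarity; PARTE's real nodes are concurrent actors backed by lock-free queues, and all class names here are illustrative.

```python
# Minimal sketch of Rete-style matching as a graph of message-passing
# nodes (simplified: synchronous dispatch instead of concurrent actors).

class AlphaNode:
    """Filters single events by a predicate and forwards matches."""
    def __init__(self, predicate, successors):
        self.predicate, self.successors = predicate, successors

    def send(self, event):
        if self.predicate(event):
            for node in self.successors:
                node.send(event)

class JoinNode:
    """Beta node: joins left and right inputs on a key. Its partial-match
    memories are what makes the algorithm state-saving."""
    def __init__(self, key, successors):
        self.key, self.successors = key, successors
        self.left_memory, self.right_memory = [], []

    def send_left(self, event):
        self.left_memory.append(event)
        for right in self.right_memory:
            if event[self.key] == right[self.key]:
                self._emit((event, right))

    def send_right(self, event):
        self.right_memory.append(event)
        for left in self.left_memory:
            if left[self.key] == event[self.key]:
                self._emit((left, event))

    def _emit(self, match):
        for node in self.successors:
            node.send(match)

class Terminal:
    """Production node: collects complete matches."""
    def __init__(self):
        self.matches = []
    def send(self, match):
        self.matches.append(match)

# Rule: report a (speeding, congestion) event pair on the same road segment.
out = Terminal()
join = JoinNode(key="segment", successors=[out])

class LeftPort:                      # route each alpha output to one join input
    def send(self, e): join.send_left(e)
class RightPort:
    def send(self, e): join.send_right(e)

speeding = AlphaNode(lambda e: e["type"] == "speeding", [LeftPort()])
congestion = AlphaNode(lambda e: e["type"] == "congestion", [RightPort()])

for event in [{"type": "speeding", "segment": 7},
              {"type": "congestion", "segment": 7},
              {"type": "congestion", "segment": 9}]:
    speeding.send(event)
    congestion.send(event)
```

Because each node only touches its own memories and communicates by sending events downstream, the same graph can be spread over cores (or, as in Cloud PARTE, over machines) without shared mutable state.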
We propose a novel gesture spotting approach that offers a comprehensible representation of automatically inferred spatiotemporal constraints. These constraints can be defined between a number of characteristic control points which are automatically inferred from a single gesture sample. In contrast to existing solutions which are limited in time, our gesture spotting approach offers automated reasoning over a complete motion trajectory. Last but not least, we offer gesture developers full control over the gesture spotting task and enable them to refine the spotting process without major programming efforts.
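The idea of inferring constraints between characteristic control points from a single sample can be illustrated with a deliberately small 2D sketch. The choice of control points (start, apex, end) and the tolerance-box scheme are assumptions for illustration, not the paper's actual inference method.

```python
# Sketch of single-sample constraint inference for gesture spotting
# (illustrative; control-point choice and tolerances are simplified).

def infer_constraints(sample, tolerance=0.2):
    """Derive relative-position constraints between three characteristic
    control points of a 2D trajectory: start, apex (max y), and end."""
    start, end = sample[0], sample[-1]
    apex = max(sample, key=lambda p: p[1])
    # Each constraint: the control point's offset from the start, +/- tolerance.
    return [(p[0] - start[0], p[1] - start[1], tolerance)
            for p in (start, apex, end)]

def spot(constraints, trajectory):
    """True if the trajectory's start/apex/end fall within the inferred
    tolerance boxes (translation-invariant; scale is not normalised here)."""
    start, end = trajectory[0], trajectory[-1]
    apex = max(trajectory, key=lambda p: p[1])
    for (dx, dy, tol), p in zip(constraints, (start, apex, end)):
        if abs((p[0] - start[0]) - dx) > tol or abs((p[1] - start[1]) - dy) > tol:
            return False
    return True

sample = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]   # one "hat"-shaped sample
constraints = infer_constraints(sample)

# A slightly wobbly hat drawn elsewhere on the screen still matches.
wobbly = [(2.0, 2.0), (2.45, 3.1), (3.05, 2.1)]
```

The comprehensible part of the approach is that `constraints` is plain data a gesture developer can inspect and refine, e.g. by tightening the tolerance on one control point without touching any recognition code.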
Our technique can be applied to program advanced gesture recognition for Multi-Touch, Full-Body (e.g. Kinect) and other multi-source event input.