Lode Hoste

Job Description

Software Engineering Abstractions for Multimodal Applications

Research Description


[Mudra]
In recent years, multimodal interfaces have gained momentum as an alternative to traditional WIMP interaction styles. Existing multimodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of multimodal interaction engines offering native fusion support across different levels of abstraction to fully exploit the power of multimodal interaction. We present Mudra, a unified multimodal interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. This architecture for multimodal interaction encourages the use of software engineering principles such as modularisation and composition, both to support a growing set of input modalities and to enable the integration of existing or novel multimodal fusion engines.
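
To make the idea concrete, here is a minimal Python sketch of a central fact base with declarative rules. It is purely illustrative and not Mudra's actual API (Fact, FactBase, add_rule and assert_fact are hypothetical names): asserting a low-level input fact can trigger a rule that derives a higher-level semantic fact in the same base.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    kind: str       # e.g. "touch", "stroke", "select"
    attrs: tuple    # immutable key/value pairs

class FactBase:
    def __init__(self):
        self.facts = []
        self.rules = []  # (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def assert_fact(self, fact):
        # Insert a fact and fire every rule whose condition matches it.
        # Actions may assert new, more abstract facts, so low-level input
        # facts and high-level semantic facts coexist in the same base.
        self.facts.append(fact)
        for condition, action in self.rules:
            if condition(fact, self.facts):
                action(self, fact)

# Promote a low-level stream fact ("touch") to a semantic fact ("select").
base = FactBase()
base.add_rule(
    condition=lambda f, _: f.kind == "touch",
    action=lambda fb, f: fb.assert_fact(Fact("select", f.attrs)),
)
base.assert_fact(Fact("touch", (("x", 10), ("y", 20))))
```

Because both levels live in one fact base, a rule operating at a high abstraction level can match on facts produced by a low-level rule, which is the kind of cross-abstraction fusion the framework targets.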

[Midas]
Multi-touch technology allows users to manipulate digital information directly with their hands. We have observed that mainstream software frameworks offer little support for dealing with the complexity of these new devices. Current multi-touch frameworks provide only a narrow range of hardcoded functionality, which makes developing new multi-touch gestures and integrating them with existing ones notoriously hard. The main goal of this framework is to provide developers with adequate software engineering abstractions that close the gap between the rapid evolution of multi-touch technology and the software mechanisms used to detect gestures.

Current frameworks force the programmer into an event-driven programming model in which event handlers must be registered and composed manually. As a result, the control flow of the application is driven by external events rather than by the sequential structure of the program. Reuse, composition and understanding are all hampered by such frameworks.
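
The following Python sketch (a hypothetical callback-based API, not any particular framework) illustrates the problem: the logic of even a simple double tap ends up scattered across handlers and mutable state, so the control flow is dictated by event delivery rather than by the program text.

```python
import time

class DoubleTapDetector:
    def __init__(self, max_delay=0.3):
        self.max_delay = max_delay
        self.last_tap = None       # mutable state threaded between callbacks

    def on_touch_down(self, x, y):
        now = time.time()
        if self.last_tap and now - self.last_tap < self.max_delay:
            self.on_double_tap(x, y)
            self.last_tap = None   # reset so a third tap starts over
        else:
            self.last_tap = now

    def on_double_tap(self, x, y):
        print(f"double tap at ({x}, {y})")

# The detector reacts whenever the framework delivers an event;
# nothing in the program text reads as "two taps in quick succession".
detector = DoubleTapDetector()
detector.on_touch_down(5, 5)
detector.on_touch_down(5, 5)
```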

In this work we propose a solution based on research from the Complex Event Processing domain. We advocate the use of a rule language that allows programmers to express gestures declaratively. The advantage of such an approach is that the programmer no longer needs to specify how a gesture is derived, but only to describe what the gesture is. We present a first step in that direction in the form of a domain-specific language supporting spatio-temporal operators.
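
As an illustration, in such a rule style a double tap can be described declaratively as a sequence of two tap events constrained by temporal and spatial operators. The Python mini-DSL below is a hedged sketch of this idea, not the actual language of the thesis (within, near and matches are illustrative names):

```python
# Temporal operator: the two events happen within a given time window.
def within(seconds):
    return lambda a, b: b["t"] - a["t"] <= seconds

# Spatial operator: the two events happen close to each other on screen.
def near(pixels):
    return lambda a, b: abs(a["x"] - b["x"]) + abs(a["y"] - b["y"]) <= pixels

# The rule describes WHAT a double tap is; the engine derives HOW to find it.
double_tap = {
    "sequence": ["tap", "tap"],              # two tap events, in order
    "constraints": [within(0.3), near(20)],  # spatio-temporal operators
}

def matches(rule, events):
    # Check the event kinds in order, then every declared constraint.
    if [e["kind"] for e in events] != rule["sequence"]:
        return False
    a, b = events
    return all(constraint(a, b) for constraint in rule["constraints"])

taps = [{"kind": "tap", "x": 10, "y": 10, "t": 0.00},
        {"kind": "tap", "x": 12, "y": 11, "t": 0.25}]
print(matches(double_tap, taps))             # True
```

Compare this with the callback version above: the whole gesture is one rule, so it can be reused as-is or composed into larger gestures.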

Complex gestures that are extremely hard to implement with traditional approaches can be expressed in one or more rules that are easy to understand. The use of a rule language has the benefit that the developed gestures are reusable and easy to compose. Furthermore, a strong connection to application-level entities allows developers to activate and deactivate gestures depending on their graphical context.
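
A sketch of what such context-dependent activation could look like, again with hypothetical names (Widget, activate, dispatch) rather than the framework's real API: a gesture is only recognised while it is activated on the application-level entity it is attached to.

```python
class Widget:
    def __init__(self, name):
        self.name = name
        self.active_gestures = set()

    def activate(self, gesture_name):
        self.active_gestures.add(gesture_name)

    def deactivate(self, gesture_name):
        self.active_gestures.discard(gesture_name)

def dispatch(widget, gesture_name, payload):
    # Only gestures currently activated on this widget are recognised.
    if gesture_name in widget.active_gestures:
        print(f"{widget.name}: handled {gesture_name} with {payload}")

canvas = Widget("canvas")
canvas.activate("double_tap")
dispatch(canvas, "double_tap", {"x": 10, "y": 20})  # handled
canvas.deactivate("double_tap")
dispatch(canvas, "double_tap", {"x": 10, "y": 20})  # ignored
```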

Keywords: Gesture Recognition, Touch Events, Multi-Touch, Declarative Language, Event Sequences