Over the past few years, multi-touch user interfaces have emerged from research prototypes into mass-market products, driven mainly by innovative devices such as Apple's iPhone or Microsoft's Surface tabletop computer. Unfortunately, existing multi-touch software frameworks offer little abstraction, and multi-touch application interfaces are often based on the hard-coded processing of low-level input device events. This leads to proprietary solutions for multi-touch gestures that can neither be reused across different applications nor be composed to form more complex gestures. We present Midas, a domain-specific declarative framework for the definition and detection of multi-touch gestures. In our approach, multi-touch gestures are no longer programmed in a procedural manner but rather expressed via logical rules over a set of input facts. We discuss how a rule-based multi-touch gesture recognition solution simplifies the definition of new multi-touch gestures and supports different tangible application scenarios.
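To illustrate the general idea of expressing gestures as rules over input facts, the following is a minimal sketch in Python. It does not reproduce Midas's actual rule language; the `Touch` fact, the `swipe_right` rule, and the pixel and time thresholds are all hypothetical stand-ins for a declarative fact base and rule set.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Touch:
    """A low-level input fact: one sampled touch point."""
    cursor: int   # which finger/stylus produced the sample
    x: float
    y: float
    t: float      # timestamp in seconds

# A gesture rule pairs a name with a predicate over the fact base.
Rule = Tuple[str, Callable[[List[Touch]], bool]]

def swipe_right(facts: List[Touch]) -> bool:
    """Fires when a single cursor moves more than 100 px to the
    right within 0.5 s, with monotonically increasing x."""
    for cursor in {f.cursor for f in facts}:
        path = sorted((f for f in facts if f.cursor == cursor),
                      key=lambda f: f.t)
        if len(path) < 2:
            continue
        xs = [f.x for f in path]
        if (all(a <= b for a, b in zip(xs, xs[1:]))
                and xs[-1] - xs[0] > 100
                and path[-1].t - path[0].t < 0.5):
            return True
    return False

def detect(rules: List[Rule], facts: List[Touch]) -> List[str]:
    """Return the names of all rules whose conditions hold."""
    return [name for name, predicate in rules if predicate(facts)]

facts = [Touch(1, 10, 50, 0.00), Touch(1, 80, 52, 0.10),
         Touch(1, 150, 51, 0.20)]
print(detect([("swipe_right", swipe_right)], facts))  # ['swipe_right']
```

Because each gesture is a named rule over the same fact base rather than a hard-coded event handler, rules can in principle be reused across applications and composed, e.g. a hypothetical two-finger swipe rule could require `swipe_right` to hold for two distinct cursors.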