
GML Architectural Overview

Each gesture defined in a GML document is built on a fully editable and extensible system that can be conceptually broken down into a simple four-step process:

Matching

The first step is the definition of the gesture action (or configuration). This definition is used to match the behavior of the input device and allow entry into the gesture analysis pipeline. It can be as simple as defining the minimum number of touch points, or as detailed as a point list describing a vector path (or static geometry), or a skeletal hand pose description. Input points can come from a wide variety of devices such as touch screens, motion tracking devices, or IMU sensors.
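
For instance, the match block below (a minimal sketch using the same elements as the full example later on this page; the point counts are illustrative) admits any cluster of one to five touch points into the analysis pipeline:

<match>
    <action>
        <initial>
            <cluster point_number="0" point_number_min="1" point_number_max="5"/>
        </initial>
    </action>
</match>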

Analysis

The second step is the assignment of the analysis module. Currently GML allows you to select an analysis module from a set of built-in compiled algorithms. However, the GML specification is also designed to accommodate custom code blocks and scripts that can be evaluated at run-time and inserted directly into the gesture processing pipeline.
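
As a sketch, assigning a hypothetical built-in rotation module from the compiled library might look like the following (the module and result identifiers here are assumptions and should be checked against the built-in algorithm library):

<analysis>
    <algorithm class="kinemetric" type="continuous">
        <library module="rotate"/>
        <returns>
            <property id="rotate_dtheta" result="dtheta"/>
        </returns>
    </algorithm>
</analysis>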

Processing

The third step is the establishment of post-processing filters. For example, values returned from the gesture analysis algorithm can be passed through a simple low-pass filter, which smooths out high-frequency noise that can present itself as touch point “jitter”. This “noise filter” helps reduce these errors and the resulting wobble effect. In addition, the values returned from the noise filter can be fed into a secondary “inertial” filter that gives gestures the effect of inertial mass and friction, attributing pseudo-physical behavior to the touch objects associated with the gesture. In this way multiple cumulative filters can be applied to the gesture pipeline, in much the same way as multiple filters can be stacked on display objects in popular image editing apps.
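
For example, the drag deltas from the full example below could be passed first through a noise filter and then through an inertial filter, as in this sketch (the noise_filter element is assumed from the description above and its attributes should be verified against the GML schema; the inertial_filter block is taken from the full example):

<processing>
    <noise_filter>
        <property ref="drag_dx" active="true"/>
        <property ref="drag_dy" active="true"/>
    </noise_filter>
    <inertial_filter>
        <property ref="drag_dx" active="true" friction="0.9"/>
        <property ref="drag_dy" active="true" friction="0.9"/>
    </inertial_filter>
</processing>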

Mapping

The fourth and final step in defining a gesture using GML is a description of how to map the returned values. This critically completes the gesture by applying key context to the user action. Values returned from the analysis and processing filters are mapped directly to a defined touch object property or to a gesture event value. The gesture event is then prepared for dispatch (in the gesture event manager) on the interactive object. Return values can then be easily processed and assigned to configurable display object properties. This can be done at run-time without re-compiling, which effectively separates gesture interactions from application code, externalizing the scripting of touch UI/UX and enabling interaction designers to work alongside application developers.
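
Continuing the hypothetical rotation example from the analysis step, a mapping block might dispatch the returned angle delta onto a rotation property of the touch object, roughly as sketched here (the event type and target name are illustrative, not taken from the specification):

<mapping>
    <update dispatch_type="continuous">
        <gesture_event type="rotate">
            <property ref="rotate_dtheta" target="rotation"/>
        </gesture_event>
    </update>
</mapping>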

With these four steps GML can be used to define surface touch gestures by performing configured geometric analysis on clusters of points or single touch points. This approach can be extended to include other point-like input methods and modes of operation such as motion tracking and high-fidelity hand or body gesture analysis. Used in this way, GML becomes a powerful tool for the prototyping and development of rich, expressive interaction schemes.


GML Example Multitouch Gesture Syntax

<Gesture id="n-drag-inertia" type="drag">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="10"/>
            </initial>
        </action>
    </match>       
    <analysis>
        <algorithm class="kinemetric" type="continuous">
            <library module="drag"/>
            <returns>
                <property id="drag_dx" result="dx"/>
                <property id="drag_dy" result="dy"/>
            </returns>
        </algorithm>
    </analysis>    
    <processing>
        <inertial_filter>
            <property ref="drag_dx" active="true" friction="0.9"/>
            <property ref="drag_dy" active="true" friction="0.9"/>
        </inertial_filter>
        <delta_filter>
            <property ref="drag_dx" active="true" delta_min="0.5" delta_max="500"/>
            <property ref="drag_dy" active="true" delta_min="0.5" delta_max="500"/>
        </delta_filter>
    </processing>
    <mapping>
        <update dispatch_type="continuous">
            <gesture_event type="drag">
                <property ref="drag_dx" target="x"/>
                <property ref="drag_dy" target="y"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>
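
Taken together, the example above defines an “n-drag-inertia” gesture: any cluster of one to ten touch points is matched, the compiled drag module continuously returns the x and y deltas of the cluster, those deltas are passed through the inertial and delta filters, and the resulting values are dispatched as a continuous drag gesture event whose properties map onto the x and y position of the associated touch object.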

A single GML document can be used to define all of the gestures used in an application. However, when working with large libraries of interactions, gestures can be divided into groups called gesture sets. Each gesture set consists of a series of defined gestures, or “gesture objects”, which can be selectively applied to any interaction object defined in the CML or in the application code.
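
For example, a GML document might group related gestures into a named set along these lines (this is a sketch; the root element and gesture set attribute names shown here are assumptions and should be verified against the GML specification):

<GestureMarkupLanguage>
    <Gesture_set gesture_set_name="basic-manipulation">
        <Gesture id="n-drag-inertia" type="drag">
            <!-- match, analysis, processing and mapping blocks as above -->
        </Gesture>
        <Gesture id="n-rotate" type="rotate">
            <!-- additional gesture definitions -->
        </Gesture>
    </Gesture_set>
</GestureMarkupLanguage>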

