GestureML

Introduction

Welcome to the GestureML Wiki. This wiki contains GML samples, knowledge base articles, and tutorials designed for users and developers of Gesture Markup Language and the Gestureworks family of products. Gesture Markup Language (GML) is an XML-based, gesture-oriented user interface language.

GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application. Gesture Markup Language has been designed to enhance the development of applications with multiuser, multitouch, multisurface and multimodal natural user interfaces.
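
As a point of reference, the sketch below shows a simplified single-gesture definition in the style of GML's standard n-drag gesture: the match block selects which touch clusters the gesture applies to, the analysis block names the processing algorithm and the values it returns, and the mapping block dispatches those values as a gesture event. The structure follows the published GML schema; the attribute values shown are illustrative.

  <Gesture id="n-drag" type="drag">
    <match>
      <action>
        <initial>
          <!-- accept clusters of 1 to 10 touch points -->
          <cluster point_number="0" point_number_min="1" point_number_max="10"/>
        </initial>
      </action>
    </match>
    <analysis>
      <algorithm class="kinemetric" type="continuous">
        <library module="drag"/>
        <returns>
          <property id="drag_dx" result="dx"/>
          <property id="drag_dy" result="dy"/>
        </returns>
      </algorithm>
    </analysis>
    <mapping>
      <update dispatch_type="continuous">
        <gesture_event type="drag">
          <property ref="drag_dx" target="x"/>
          <property ref="drag_dy" target="y"/>
        </gesture_event>
      </update>
    </mapping>
  </Gesture>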

[Images: Surface (Touch Input) · Computer Vision (Motion Input) · Wearables (Sensor Input)]


Features

GML provides tools for interaction developers to freely design unique, high-fidelity multitouch gesture interactions from a range of HCI input devices. A gesture can be as simple as a single tap or as complex as a series of detailed hand motion sequences that can lead to a gesture-based password, rich character behaviors, or immersive environments.

Feature List:

  • Rich Custom multitouch gesture definition
    • Explicit and implicit gesture definition
    • Run-time gesture editing
    • In-line support in CML
    • Gesture action matching
    • Support for multiple clustering methods (global, object & geometry based)
    • Gesture property filtering
    • Gesture value boundaries (both sketched in the GML fragment after this list)
  • Rich Gesture Event management
    • Concurrent parallel gesture support
    • Continuous and discrete gesture events
    • Gesture event mapping
    • Gesture sequence definition
    • Rich visual feedback definitions
  • Gesture set definition
    • Device specific gesture definition
    • Input specific gesture definition
    • Bi-manual gesture definition
    • Compound gesture definition
  • Cross-markup compatible
    • In-line support in creative markup language (CML)
    • In-line support in device markup language (DML)
  • Advanced Touch Input
    • Touch based tangible object gestures (surface tangibles)
    • Touch based stylus stroke gestures (passive stylus)
    • Support for multiple concurrent touch surfaces
  • Fusion support
    • Context fusion descriptions
    • Multimodal gesture sets
    • Crossmodal gesture sets
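
The fragment below sketches how a gesture's processing block expresses the property filtering and value boundary features from the list above. The filter elements follow the GML schema; the numeric settings are illustrative only.

  <processing>
    <!-- gesture property filtering: smooth the returned delta over recent frames -->
    <mean_filter>
      <property ref="drag_dx" active="true" num_points="10"/>
    </mean_filter>
    <!-- gesture value boundaries: discard deltas outside this range -->
    <delta_filter>
      <property ref="drag_dx" active="true" delta_min="0.05" delta_max="500"/>
    </delta_filter>
    <!-- let motion decay after release rather than stopping dead -->
    <inertia_filter>
      <property ref="drag_dx" active="true" friction="0.95"/>
    </inertia_filter>
  </processing>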

GestureML Overview

The declarative form of GML can be used to create complete, human-readable descriptions of multitouch gesture actions and to specify how events and commands are generated in an application layer. GML can be used in combination with CML to create rich, dynamically defined user experiences. When GML is used with a Gestureworks engine in combination with Creative Markup Language (CML), objects can be dynamically constructed and managed along with well-defined, dynamic display properties and interactive behaviors.
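
As a sketch of this combination (element names follow the Open Exhibits flavor of CML; details may vary between framework versions), a CML touch object can attach GML-defined gestures by reference:

  <cml>
    <TouchContainer id="photo">
      <!-- display content; "photo.jpg" is a placeholder asset -->
      <Image src="photo.jpg"/>
      <!-- interactive behavior, defined and tuned in the external GML document -->
      <GestureList>
        <Gesture ref="n-drag" gestureOn="true"/>
        <Gesture ref="n-scale" gestureOn="true"/>
        <Gesture ref="n-rotate" gestureOn="true"/>
      </GestureList>
    </TouchContainer>
  </cml>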

Central to the design of GML is the conceptual framework of Objects, Containers, Gestures, and Manipulators (OCGM). Alongside OCGM, GML includes methods for expressing Human Computer Interaction (HCI) design principles such as affordance and feedback. One of the primary goals of GML is to present a standard markup language for integrating a complete range of Natural User Interface (NUI) modes and models, allowing for the creation of multiple discrete or blended user interfaces. GML can be used to construct gestures for a wide variety of input methods such as tangible objects, touch surfaces, body tracking, accelerometers, voice, and brain-wave interfaces. Combined with CML, GML is designed to enable the development of the complete spectrum of post-WIMP NUIs (or RBIs) such as organic UIs, zoomable UIs, augmented reality, haptics, multiuser interfaces, and full-range immersive multitouch environments.

GML has been developed as an open standard that can be used to rapidly create and share gestures for a wide variety of Human Computer Interaction (HCI) devices. Presenting a user-friendly method for shaping complex interactions provides a cornerstone with which to build the next generation of dynamic, production-level HCI applications.

Current implementations of GML in the form of an external gesture engine (as in Gestureworks Core) present 300+ base gestures that can be integrated into an application layer using bindings (available in C++, C#, Java, and Python). This model effectively provides an infinite number of possible gestures, each with the potential to be recast or refined after the related applications have been compiled and distributed. This approach puts interaction development directly into the hands of the UX designer and even allows independent end-user management.
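
Because gestures live in an external GML document rather than in compiled code, reshaping an interaction is a matter of editing markup. A minimal sketch of a gesture document's root structure, per the published schema:

  <GestureMarkupLanguage>
    <Gesture_set>
      <Gesture id="n-drag" type="drag">
        <!-- match / analysis / processing / mapping blocks as sketched above -->
      </Gesture>
      <Gesture id="n-rotate" type="rotate">
        <!-- editing these blocks retunes behavior without recompiling the app -->
      </Gesture>
    </Gesture_set>
  </GestureMarkupLanguage>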

Examples of Use

From a UI development standpoint, multitouch gestures are relatively new, and in many cases best practices for UX development have remained closely linked to application type and available devices or modes. In order to effectively explore new UX paradigms, a complete gesture description must not only provide inherent flexibility in the way gestural input is recognized and mapped within applications but also remain outside the compiled application. Loosely coupling gesture recognition to the application in this manner provides a standard method for dynamically defining gestures. This model allows users to define equivalent gestures or variable gesture modes for different input types and device types without requiring further application-level development.

For example, as multitouch input devices continue to increase the number of supported touch points and grow in size, touch screen UX is seeing a shift toward full-hand multitouch and multi-user application spaces. Providing methods by which developers can create gestures that support both 2-finger pinch-to-zoom and 5-finger zoom will be an essential step in developing multitouch software. Examples of this approach can be seen at Gestureworks.com and OpenExhibits.org.
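
A sketch of how a single GML gesture can treat a 2-finger pinch and a 5-finger grab as the same scale input, simply by widening the cluster match (structure per the GML schema; values illustrative):

  <Gesture id="flexible-scale" type="scale">
    <match>
      <action>
        <initial>
          <!-- one definition covers a 2-finger pinch through a 5-finger grab -->
          <cluster point_number="0" point_number_min="2" point_number_max="5"/>
        </initial>
      </action>
    </match>
    <analysis>
      <algorithm class="kinemetric" type="continuous">
        <library module="scale"/>
        <returns>
          <property id="scale_dsx" result="dsx"/>
          <property id="scale_dsy" result="dsy"/>
        </returns>
      </algorithm>
    </analysis>
    <mapping>
      <update dispatch_type="continuous">
        <gesture_event type="scale">
          <property ref="scale_dsx" target="scaleX"/>
          <property ref="scale_dsy" target="scaleY"/>
        </gesture_event>
      </update>
    </mapping>
  </Gesture>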

An example of rich user-defined, on-demand control schemes can be found in GestureWorks Gameplay 3. In this utility application, users can connect to Windows 8 games and use custom multimodal gesture control schemes. Each game can have custom multimodal gesture sets, and users can even switch or modify control schemes at game run-time.


  • Architectural Overview
  • Working with GML
  • Touch Gesture Index
  • Motion Gesture Index
  • Sensor Gesture Index
  • Multimodal Gesture Index
  • Crossmodal Gesture Index
  • Types of Input Fusion
  • Interaction Point Index
  • Advanced Gesture Mapping
  • Micro-Gesture Index

Advantages

  • A unified model for understanding and describing gesture analysis
  • Separation of interactions and behaviors from content
  • Easy-to-read XML structure
  • A range of gestures can be defined for a single application
  • Allows for crowd-sourcing gesture development
  • Device and input method agnostic
  • NUI + OCGM structure for developing flexible UX models
  • XML-based open standard; easy to post and share GML
  • Clear separation between touch input protocol and gesture definition
  • Simple method for describing a complete gesture library
  • Native transformation mapping
  • Ad-hoc blended interactions (cumulative transformations)
  • Manageable complexity (gesture block principle)
  • Simple gesture sequencing methods
  • Real-time feature and context fusion capable

Proposed Expansion of Schema

  • Map directly to operating system gesture commands
  • Map directly to key and mouse events (see the sketch after this list)
  • Map directly to IoT devices and controls (ZigBee, Z-Wave)
  • Add rich gesture application descriptions to gesture mapping
  • Direct audio feedback methods
  • Direct haptic feedback methods
  • Upload user profiles, preferred interfaces
  • Direct UI/UX state integration
  • Direct algorithm scripting for gesture definitions (using JavaScript)
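
To make the first two proposals concrete, a purely speculative sketch follows; the <key_event> element and its keys attribute are hypothetical and not part of the current schema:

  <Gesture id="two-finger-tap-undo" type="tap">
    <mapping>
      <update dispatch_type="discrete">
        <!-- hypothetical element: would dispatch a key command instead of a gesture event -->
        <key_event keys="ctrl+z"/>
      </update>
    </mapping>
  </Gesture>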

Frameworks & SDKs

Gestureworks Core: C++ framework for use with C++, C#.NET, Java and Python (Uses GML)
Gestureworks Flash: ActionScript 3 framework for use with Flash and AIR (Uses GML, CML and CSS)
OpenExhibits: ActionScript 3 framework for use with Flash and AIR (Uses GML, CML and CSS)

Applications

GestureKey: C++ based utility that maps gestures to mouse events and keyboard shortcuts (Uses Gestureworks Core & GML)
GestureWorks Gameplay: C++ based utility that maps gestures to mouse events and keyboard shortcuts (Uses Gestureworks Core & GML, DML and VCML)
GestureWorks Fusion: C++ based utility that maps gestures to mouse events, key commands and system events (Uses Gestureworks Core & GML, DML and VCML)
Tangible Engine: Authoring library supporting the use of fiducial object recognition. It comes with C++ and Unity 3D bindings.
