
Touch Gesture Index

There are over 300 ready-made gestures in the standard my_gesture.gml file that is distributed with Gestureworks products. This GML file documents standard common gestures that can be used as part of any application, and each and every gesture can be edited and extended in an almost endless variety of ways. The following index lists common gesture types and outlines the GML structures used to fully describe each gesture.
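Every gesture in the file shares the same basic GML anatomy: a match block that sets the criteria for triggering, an analysis block that selects the cluster analysis algorithm and the values it returns, an optional processing block of filters, and a mapping block that controls how results are dispatched. The sketch below shows this anatomy for a simple drag gesture; it follows the structure of the standard file, but the attribute values are illustrative and should be checked against the shipped my_gesture.gml.

  <Gesture id="n-drag" type="drag">
      <match>
          <action>
              <initial>
                  <!-- match any cluster of 1 to 10 touch points -->
                  <cluster point_number="0" point_number_min="1" point_number_max="10"/>
              </initial>
          </action>
      </match>
      <analysis>
          <algorithm class="kinemetric" type="continuous">
              <library module="drag"/>
              <returns>
                  <!-- values returned with each gesture event -->
                  <property id="drag_dx" result="dx"/>
                  <property id="drag_dy" result="dy"/>
              </returns>
          </algorithm>
      </analysis>
      <processing>
          <!-- filters are optional; see Gesture Filtering below -->
      </processing>
      <mapping>
          <update dispatch_type="continuous">
              <gesture_event type="drag">
                  <property ref="drag_dx" target="x"/>
                  <property ref="drag_dy" target="y"/>
              </gesture_event>
          </update>
      </mapping>
  </Gesture>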

Simple Touch Gesture GML Descriptions

Simple touch gesture descriptions outline what can be considered fundamental gesture primitives. These primitives include spatial gestures, which deal with patterns in the 2D motion of touch point clusters; temporal gestures, which deal with time-based patterns in touch point life cycles; and path-based gestures, which deal with the geometric patterns associated with traced 2D point paths.

Spatial Gestures

Drag/Move
Simple N Point Drag Gesture “N-Drag”
One Point Drag Gesture “1-Finger-Drag”
Two Point Drag Gesture “2-Finger-Drag”
Three Point Drag Gesture “3-Finger-Drag”
Four Point Drag Gesture “4-Finger-Drag”
Five Point Drag Gesture “5-Finger-Drag”

Rotate
Simple N Point Rotate Gesture “N-Rotate”
Two Point Rotate Gesture “2-Finger-Rotate”
Three Point Rotate Gesture “3-Finger-Rotate”
Four Point Rotate Gesture “4-Finger-Rotate”
Five Point Rotate Gesture “5-Finger-Rotate”

Scale
Simple N Point Scale Gesture “N-Scale”
Two Point Scale Gesture “2-Finger-Scale”
Three Point Scale Gesture “3-Finger-Scale”
Four Point Scale Gesture “4-Finger-Scale”
Five Point Scale Gesture “5-Finger-Scale”

The Orientation Gesture
Five Point Orient “5-Finger-Orient”

Temporal Gestures

Tap
Simple N Point Tap Gesture
3-Point Tap Gesture

Double Tap
Simple N Point Double Tap Gesture
1-Point Double Tap Gesture

Triple Tap
Simple N Point Triple Tap Gesture
1-Point Triple Tap Gesture

Hold
Simple N Point Hold Gesture
3-Point Hold Gesture

Geometric Gestures

Path Based Stroke Gestures
One Point Stroke Letter “1-Finger-Stroke-Letter”
One Point Stroke Greek Letter “1-Finger-Stroke-Greek”
One Point Stroke Shape “1-Finger-Stroke-Shape”
One Point Stroke Number “1-Finger-Stroke-Number”

Simple Chord Gestures
The row gesture
The column gesture
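The numbered variants in this index differ from their “N” counterparts mainly in the matching criteria: the cluster's minimum and maximum point counts are pinned to a fixed value rather than spanning a range. A sketch, following the structure of the standard GML file with illustrative values:

  <!-- "2-finger-drag": identical to n-drag except the cluster
       must contain exactly two touch points -->
  <match>
      <action>
          <initial>
              <cluster point_number="2" point_number_min="2" point_number_max="2"/>
          </initial>
      </action>
  </match>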


Gesture Filtering

GML provides a method, called filters, for describing how to process data in the gesture pipeline before it is dispatched as part of a gesture event. There are five supported filters that use various methods to limit, smooth, clean, or amplify the raw data associated with touch point cluster motion. Multiple filters can be added to the gesture processing pipeline to shape data for highly specialized gesture applications, or simply to make gestures work more reliably on specific devices.

The Delta Filter

The Boundary Filter

Inertial Filtering

Mean Filtering

Multiply Filter
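Each filter is declared inside the gesture's processing block and applied per returned property. The sketch below stacks a delta filter and an inertial filter on a drag gesture; the filter tag names follow the standard GML file, but the threshold and friction values are illustrative assumptions.

  <processing>
      <!-- delta filter: discard jitter below delta_min, clamp spikes above delta_max -->
      <delta_filter>
          <property ref="drag_dx" active="true" delta_min="0.5" delta_max="500"/>
          <property ref="drag_dy" active="true" delta_min="0.5" delta_max="500"/>
      </delta_filter>
      <!-- inertial filter: keep dispatching decaying deltas after the points lift -->
      <inertial_filter>
          <property ref="drag_dx" active="true" friction="0.9"/>
          <property ref="drag_dy" active="true" friction="0.9"/>
      </inertial_filter>
  </processing>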

Advanced Touch Gesture GML Descriptions

Advanced touch gesture descriptions often require detailed specification to fully describe complex motion or multiple gesture dimensions.

Manipulation Gestures

Manipulation gestures typically involve a direct transformation of a touch object based on a one-to-one mapping of cluster motion. These transformations require scale, rotation, and translation information and processing, as well as specific matching criteria for each property. To accomplish this, a single gesture is created with a series of dimensions that can be treated independently, each describing the change in one transformation property as an action is performed. This can be done using a single cluster analysis algorithm and a single gesture object, returning a single gesture event.

Manipulate
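A sketch of such a gesture, assuming the dimension names used by the standard file (dx, dy, ds, dtheta): one cluster analysis returns every transformation property, and the mapping block routes each one to its own display target. The match and processing blocks are omitted here for brevity.

  <Gesture id="n-manipulate" type="manipulate">
      <analysis>
          <algorithm class="kinemetric" type="continuous">
              <library module="manipulate"/>
              <returns>
                  <!-- translation, scale, and rotation dimensions from one analysis -->
                  <property id="manipulate_dx" result="dx"/>
                  <property id="manipulate_dy" result="dy"/>
                  <property id="manipulate_dsx" result="ds"/>
                  <property id="manipulate_dsy" result="ds"/>
                  <property id="manipulate_dtheta" result="dtheta"/>
              </returns>
          </algorithm>
      </analysis>
      <mapping>
          <update dispatch_type="continuous">
              <gesture_event type="manipulate">
                  <property ref="manipulate_dx" target="x"/>
                  <property ref="manipulate_dy" target="y"/>
                  <property ref="manipulate_dsx" target="width"/>
                  <property ref="manipulate_dsy" target="height"/>
                  <property ref="manipulate_dtheta" target="rotation"/>
              </gesture_event>
          </update>
      </mapping>
  </Gesture>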

Velocity and Direction Based Gestures

These are gestures that require characteristic touch point velocities and associated accelerations. At first glance, flick, swipe, and scroll gestures are very similar to each other. They have much in common with traditional drag gestures but are considered more refined, as they have more specific motion-based constraints.

Flick

Swipe

Scroll

When creating new gestures with GML it is advantageous to use self-documenting gesture names that accurately describe the core features of a given gesture. Given the open-ended applicability of a gesture within the associated application space, names tend not to describe the application of the gesture but rather the matching criteria and gesture action used to trigger the gesture event: for example, “2-finger-inertial-drag” rather than simply “2-point-image-pan”. The gesture “2-finger-inertial-drag” can be used in a number of ways in an application; it can pan an image, scrub a video, or even adjust the brightness of the display.

However, GML also has the capability of defining exactly how gesture event variables are to be mapped to display objects in an application (using the mapping tag). This allows developers to create event handlers that can automatically “unpack” an event and map changes to an object. In this case the last part of the gesture name can be used to identify the intended use of the gesture in the application space: for example, “2-finger-inertial-scale-affine-image-zoom”.

Gesture Comparison:

Gesture   Directional                   Velocity Threshold     Acceleration Threshold   Inertial Filter   Event Response
Drag      No                            No                     No                       No                Continuous
Flick     No                            No                     Yes, must exceed min     No                Discrete
Swipe     Yes, horizontal or vertical   Yes, must exceed min   Yes, must be below max   No                Discrete
Scroll    Yes, horizontal or vertical   No                     No                       Yes               Continuous
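One way the distinctions in this table can be expressed in GML is through per-gesture motion constraints in the matching criteria. The sketch below is hypothetical: the constraints tag and the velocity_min and acceleration_max attribute names are placeholders showing where such thresholds would live, not confirmed schema.

  <!-- hypothetical swipe criteria: directional, fast enough, not accelerating -->
  <match>
      <action>
          <initial>
              <cluster point_number="1" point_number_min="1" point_number_max="5"/>
              <!-- illustrative attribute names, not part of the documented schema -->
              <constraints direction="horizontal" velocity_min="1.5" acceleration_max="0.5"/>
          </initial>
      </action>
  </match>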

Gesture Property Mapping

One of the most powerful features of GML is the ability to fully describe how the gesture data associated with a dispatched gesture event is intended to be used.

The mapping tag allows developers to define many aspects of how the gesture event is to be “unpacked” and “mapped” onto interactive objects at the application level. For example, the “2-finger-inertial-scale-affine-image-zoom” gesture fully describes how to apply the scale delta values, the associated scaling factors, the affine scaling method, and the type of display object to apply the gesture to.
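As a sketch (the property and target names follow the conventions used earlier; the exact values a given application expects will vary), the mapping block of such a gesture might dispatch scale deltas straight onto a display object's dimensions:

  <mapping>
      <update dispatch_type="continuous">
          <gesture_event type="scale">
              <!-- unpack scale deltas directly onto the display object -->
              <property ref="scale_dsx" target="width"/>
              <property ref="scale_dsy" target="height"/>
          </gesture_event>
      </update>
  </mapping>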


Touch Tangible Object Gestures

Touch tangible object gestures are created by physical objects that generate geometric clusters of touch points. Geometry-defined clusters can be characterized by the shape and size of a point cluster and the relative position of the touch points within it. The geometry of each point cluster can be uniquely defined, recognized, and tracked across large surfaces.

There are many ways to create touch tangibles; the fundamental criterion is ensuring that the relative positions of the generated touch points remain constant and stable, so that the geometry of any associated cluster is consistent. You can create touch tangibles for IR-based touch devices by adding leg-like protrusions to a plate. You can also create touch tangibles for projected capacitive touch devices by using a pattern of conductive materials; this can be done with conductive ink, copper tape, and even 3D-printed conductive polymers.

The powerful thing about touch tangibles is their ability to be processed side by side with traditional fingertip touch and stylus input on the same device. Input management at this level requires the ability to classify touch points by type once they have been passed to the application by the standard OS touch driver. Once properly classified, the touch points can be treated differently in the gesture engine and assigned custom behaviors.

When handled in this manner, touch tangibles can be used to invoke hidden menus and other GUIs that can then be controlled using fingertip touch. In this way 3D-printed tangibles can be used to create rich interactive tabletop experiences that bridge the digital and physical divide.

Different Size 3 Point Geometry

  • 3_point_tangible_A_begin
  • 3_point_tangible_B_begin
  • 3_point_tangible_C_begin

Different Geometries

  • 3_point_tangible_A_begin
  • 4_point_tangible_A_begin
  • 5_point_tangible_A_begin

Different Motion Treatment

  • 5_point_tangible_A_hold
  • 5_point_tangible_A_drag
  • 5_point_tangible_A_rotate

Bi-manual Touch Gestures

Bi-manual gestures have actions that effectively require two hands to complete. A good example is the split gesture. Although the split gesture can be performed using 2 to 10 touch points, it requires a critical separation between points in a cluster that typically exceeds the diameter of a hand, so it is best performed by placing fingers from both the left and right hands on a touch object and then pulling the two hands apart. This has the effect of creating two discrete touch point clusters that are still associated with the touch object.

The Tilt Gesture

The Split Gesture

  • 2_point_split
  • 4_point_split
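The split action described above might be expressed as matching criteria on how the cluster divides. This sketch is hypothetical: the partition tag and the separation_min attribute are placeholder names for the kind of sub-cluster separation threshold a split gesture needs.

  <!-- hypothetical four-point split criteria: the cluster must divide into
       two sub-clusters whose separation exceeds a hand's width -->
  <match>
      <action>
          <initial>
              <cluster point_number="4" point_number_min="2" point_number_max="10"/>
              <!-- illustrative attribute names, not confirmed schema -->
              <partition cluster_number="2" separation_min="150"/>
          </initial>
      </action>
  </match>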

The Complementary Rotate Gesture

  • 2_point_bimanual_complimentary_rotate

Context Aware Gestures

Context aware gestures require object context data to fully qualify and calculate manipulation deltas.

Pivot
The pivot gesture allows the user to pivot a touch interactive object about a single touch point. This is done by minimizing the difference between the center of mass of the object and the touch point, perpendicular to the direction of motion. The context in this example is the “center of mass” point of the object (the centroid), as it requires detailed knowledge and management of the display object's properties.

Tap to Scale
A tap is used to create a temporary anchor point on a display object as a user-defined reference point. A second, dynamic touch point is then added to the touch interactive object, and standard scale deltas can be calculated so that a scale gesture event can be created. The context in this example is the custom use of a tap to create a static virtual touch point and temporarily attach it to a display object cluster.

Tap to Rotate
A tap is used to create a temporary anchor point on a display object as a user-defined reference point. A second, dynamic touch point is then added to the touch interactive object, and standard rotation deltas can be calculated so that a rotate gesture event can be created. The context in this example is the custom use of a tap to create a static virtual touch point and temporarily attach it to a display object cluster.


Gesture Sequences

Gesture sequences define a range of gestures built from a set of sub-gesture actions that are performed in series or in parallel to meet matching criteria. Each gesture used in a sequence must be defined individually in the root GML document, and any gesture used as part of a sequence must be explicitly referenced by name in the matching criteria of the sequence gesture.

Activation Gestures

Activation gestures are a set of gestures that use parallel gesture sequencing as matching criteria. The simplest type of activation gesture is the “hold” series; these require “n” touch points to be locked on a location while a secondary gesture action is performed. These gestures are called activation gestures because they can be used effectively to activate new modes of behavior or interaction in an application. Activation gestures are well suited to this task because they require a precisely sequenced, matched action that is difficult to perform accidentally and unlikely to conflict with other common actions.

Series Gestures

Series gestures are a set of gestures that use a series of gesture actions (discrete or continuous) as a gesture sequence to fulfill matching criteria. The first gesture in the sequence is used as the primary matching criterion. Once the first gesture in the sequence is completed, the second gesture algorithm is activated and used to determine whether the second gesture is present. If the second gesture action is matched, a gesture event is returned. This primary and secondary gesture sequence can be extended into a chain of any number of gesture actions and types.
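Since sub-gestures must be referenced by name in the sequence gesture's matching criteria (as noted above), a series gesture might be sketched as follows. The sequence tag, the ref attribute, and the gesture id here are all hypothetical placeholders, not confirmed schema; only the reference-by-name requirement comes from the documentation.

  <!-- hypothetical series gesture: a hold followed by a drag; both
       sub-gestures are defined elsewhere in the root GML document -->
  <Gesture id="3-finger-hold-n-drag" type="sequence">
      <match>
          <sequence>
              <gesture ref="3-finger-hold" order="1"/>
              <gesture ref="n-drag" order="2"/>
          </sequence>
      </match>
  </Gesture>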


Augmented Surface Gestures

Augmented surface gestures are common gestures that have been augmented with additional touch point properties, such as “pressure” or point “width and height” requirements.

Pressure Augmented Gestures

Pressure augmented gestures are a set of gestures with additional matching criteria and return values that use pressure data associated with touch points to augment the gesture properties. In most cases, pressure data associated with the touch point cluster is used to increase the fidelity of a gesture when using advanced pressure-sensitive multitouch input devices.
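Following the returns-block convention shown earlier, a pressure-augmented drag might simply expose a pressure value alongside its motion deltas. The property id and the "p" result name below are hypothetical placeholders used for illustration.

  <!-- hypothetical pressure-augmented drag: pressure returned with each event -->
  <returns>
      <property id="drag_dx" result="dx"/>
      <property id="drag_dy" result="dy"/>
      <!-- illustrative name; actual pressure result names may differ -->
      <property id="drag_pressure" result="p"/>
  </returns>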

Motion Gesture Index
