- Author
- Dean Harris Rubine
- Title
- The Automatic Recognition of Gestures
- Date
- December 1991
- Abstract
- Gesture-based interfaces, in which the user specifies commands
by simple freehand drawings, offer an alternative to traditional
keyboard, menu, and direct manipulation interfaces.
The ability to specify objects, an operation, and additional
parameters with a single intuitive gesture makes gesture-based
systems appealing to both novice and experienced users.
Unfortunately, the difficulty in building gesture-based systems
has prevented such systems from being adequately explored.
This dissertation presents work that attempts to alleviate two
of the major difficulties: the construction of gesture
classifiers and the integration of gestures into
direct-manipulation interfaces.
Three example gesture-based applications were built to
demonstrate this work.
Gesture-based systems require classifiers to distinguish between
the possible gestures a user may enter.
In the past, classifiers have often been hand-coded for each new
application, making them difficult to build, change, and
maintain.
This dissertation applies elementary statistical pattern
recognition techniques to produce gesture classifiers that are
trained by example, greatly simplifying their creation and
maintenance.
Both single-path gestures (drawn with a mouse or stylus) and
multiple-path gestures (consisting of the simultaneous paths of
multiple fingers) may be classified.
On a 1 MIPS workstation, a 30-class single-path recognizer takes
175 milliseconds to train (once the examples have been entered),
and classification takes 9 milliseconds, typically achieving 97%
accuracy.
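The abstract does not name the technique beyond "elementary
statistical pattern recognition"; the following is a minimal sketch
of one such elementary method, a linear discriminant over
fixed-length feature vectors computed from each gesture path. The
feature set and all names here (extract_features,
LinearGestureClassifier) are illustrative assumptions, not the
dissertation's actual code.

```python
import numpy as np

def extract_features(points):
    """Reduce a gesture path [(x, y), ...] to a fixed-length feature
    vector. Assumes the path has at least three points."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                 # per-segment displacements
    seg_len = np.hypot(d[:, 0], d[:, 1])     # per-segment lengths
    angles = np.arctan2(d[:, 1], d[:, 0])    # per-segment headings
    return np.array([
        seg_len.sum(),                       # total path length
        np.abs(np.diff(angles)).sum(),       # total absolute turning
        np.ptp(pts[:, 0]),                   # bounding-box width
        np.ptp(pts[:, 1]),                   # bounding-box height
    ])

class LinearGestureClassifier:
    """Per-class means plus a pooled covariance yield a linear
    discriminant, trained entirely from example gestures."""

    def train(self, examples):
        """examples: {class_name: [path, ...]}, each path a list of
        (x, y) points; assumes at least two examples per class."""
        feats = {c: np.array([extract_features(p) for p in ps])
                 for c, ps in examples.items()}
        self.means = {c: f.mean(axis=0) for c, f in feats.items()}
        # Pool the covariance across classes, as in classical
        # linear discriminant analysis.
        scatter = sum(np.cov(f, rowvar=False) * (len(f) - 1)
                      for f in feats.values())
        n = sum(len(f) for f in feats.values())
        self.inv_cov = np.linalg.pinv(scatter / (n - len(feats)))

    def score(self, c, f):
        """Linear discriminant score of feature vector f for class c."""
        m = self.means[c]
        return m @ self.inv_cov @ f - 0.5 * m @ self.inv_cov @ m

    def classify(self, points):
        f = extract_features(points)
        return max(self.means, key=lambda c: self.score(c, f))
```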
A method for classifying a gesture as soon as it is unambiguous
is also presented.
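A sketch of how such eager classification might proceed, reusing the
illustrative classifier above: reclassify the partial path as points
arrive, and commit once the top class clearly outscores the
runner-up. The fixed margin and minimum point count are assumptions
of this sketch, not the dissertation's ambiguity criterion.

```python
def classify_eagerly(clf, point_stream, margin=2.0, min_points=5):
    """Return a class as soon as the partial gesture is unambiguous;
    assumes clf was trained on at least two classes."""
    points = []
    for p in point_stream:
        points.append(p)
        if len(points) < min_points:
            continue                  # too little ink to judge yet
        f = extract_features(points)
        ranked = sorted(clf.means, key=lambda c: clf.score(c, f),
                        reverse=True)
        # Commit once the best class clearly beats the runner-up.
        if clf.score(ranked[0], f) - clf.score(ranked[1], f) > margin:
            return ranked[0]
    return None                       # stroke ended while still ambiguous
```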
This dissertation also describes GRANDMA, a toolkit for
building gesture-based applications based on Smalltalk's
Model/View/Controller paradigm.
Using GRANDMA, one associates sets of gesture classes with
individual views or entire view classes.
A gesture class can be specified at runtime by entering a few
examples of the class, typically 15.
The semantics of a gesture class can be specified at runtime via
a simple programming interface.
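The abstract gives only the shape of this interface, built on the
Model/View/Controller paradigm it mentions. A hypothetical sketch of
the association it describes, with every class and method name
invented for illustration:

```python
class View:
    gesture_handlers = {}        # gesture class -> semantics callable

    @classmethod
    def on_gesture(cls, gesture_class, semantics):
        """Associate a gesture class, and its semantics, with this
        view class (and, by inheritance, its subclasses)."""
        # Copy-on-write so a subclass doesn't mutate its parent's table.
        cls.gesture_handlers = dict(cls.gesture_handlers)
        cls.gesture_handlers[gesture_class] = semantics

    def dispatch(self, gesture_class, gesture):
        """Run the semantics registered for a recognized gesture."""
        handler = self.gesture_handlers.get(gesture_class)
        if handler is not None:
            handler(self, gesture)

class RectView(View):
    pass

# Semantics are plain callables bound at runtime, so an interface can
# be revised without rebuilding the application.
RectView.on_gesture("delete", lambda view, gesture: print("delete", view))
```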
Besides allowing for easy experimentation with gesture-based
interfaces, GRANDMA sports a novel input architecture, capable
of supporting multiple input devices and multi-threaded
dialogues.
The notions of virtual tools and semantic feedback are shown to
arise naturally from GRANDMA's approach.
- Category
- CMUTR
Category: CMUTR
Institution: Department of Computer Science, Carnegie Mellon University
Number: CMU-CS-91-202
Bibtype: TechReport
Month: dec
Author: Dean Harris Rubine
Title: The Automatic Recognition of Gestures
Year: 1991
Address: Pittsburgh, PA
Super: @CMUTR