G-Gene: a gene alignment method for online partial stroke gestures recognition

Carcangiu, Alessandro; Spano, Lucio Davide
2018-01-01

Abstract

The wide availability of touch-sensitive screens has fostered research in gesture recognition. The Machine Learning community has focused mainly on accuracy and robustness to noise, creating classifiers that precisely recognize gestures once they have been completed. The User Interface Engineering community, instead, has developed compositional gesture descriptions that model gestures and their sub-parts. These descriptions are suitable for building guidance systems, but they lack robust and accurate recognition support. In this paper, we establish a compromise between accuracy and the information provided by introducing G-Gene, a method for transforming compositional stroke gesture definitions into profile Hidden Markov Models (HMMs), which provide both good accuracy and information on gesture sub-parts. G-Gene supports online recognition without using any global feature, and it updates this information while receiving the input stream, with an accuracy suitable for prototyping the interaction. We evaluated the approach in a user interface development task, showing that it requires less time and effort for creating guidance systems compared to common gesture classification approaches.
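The full paper details how G-Gene aligns compositional gesture definitions to build profile HMMs. As a rough illustration of the online, sub-part-aware recognition idea described in the abstract, the minimal Python sketch below runs a forward-algorithm update of a tiny left-to-right HMM (a simplified stand-in for a profile HMM, omitting its insert and delete states) over an incoming stroke, reporting the most likely sub-part after every sample. All names (GestureModel, update, the von Mises-style emission, the fixed transition probabilities) are illustrative assumptions and do not reflect the authors' actual G-Gene implementation.

    import numpy as np

    # Hypothetical sketch, NOT the authors' G-Gene API: a tiny left-to-right
    # HMM recognizer for a stroke gesture split into directional sub-parts.

    class GestureModel:
        """One match state per declared sub-part (here: a target direction in radians)."""

        def __init__(self, name, subpart_directions, kappa=4.0):
            self.name = name
            self.directions = np.asarray(subpart_directions, dtype=float)
            self.n = len(self.directions)
            self.kappa = kappa  # concentration of the von Mises-style emission model
            # Left-to-right transitions: stay in the current sub-part or advance to the next.
            self.stay, self.advance = 0.6, 0.4
            self.reset()

        def reset(self):
            # Start in the first sub-part with probability 1 (log domain).
            self.log_alpha = np.full(self.n, -np.inf)
            self.log_alpha[0] = 0.0

        def _log_emission(self, angle):
            # Log-likelihood of the observed movement direction under each
            # sub-part's target direction (constant terms dropped).
            return self.kappa * np.cos(angle - self.directions)

        def update(self, prev_point, point):
            """Forward-algorithm step for one new input sample of the stroke."""
            dx, dy = point[0] - prev_point[0], point[1] - prev_point[1]
            angle = np.arctan2(dy, dx)
            log_b = self._log_emission(angle)
            new = np.full(self.n, -np.inf)
            for j in range(self.n):
                stay = self.log_alpha[j] + np.log(self.stay)
                adv = self.log_alpha[j - 1] + np.log(self.advance) if j > 0 else -np.inf
                new[j] = np.logaddexp(stay, adv) + log_b[j]
            self.log_alpha = new
            return int(np.argmax(self.log_alpha))  # index of the most likely sub-part so far


    # Usage: feed points as they arrive and read back the current sub-part,
    # which is the kind of information a guidance system needs for
    # feedback and feedforward.
    if __name__ == "__main__":
        # "L"-shaped gesture: go right (0 rad), then down (-pi/2 rad).
        model = GestureModel("L-shape", subpart_directions=[0.0, -np.pi / 2])
        stroke = [(0, 0), (1, 0), (2, 0), (2, -1), (2, -2)]
        for prev, cur in zip(stroke, stroke[1:]):
            subpart = model.update(prev, cur)
            print(f"after point {cur}: most likely sub-part = {subpart}")

The sketch only shows why an HMM-based formulation supports partial (online) recognition: the forward variables are updated per sample, so the most likely sub-part is available at any point during the stroke rather than only after its completion.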
Keywords: Gestures; Classification; Hidden Markov Models; Compositional gesture modelling; Online recognition; Feedback; Feedforward
Files in this record:
sample-acmsmall.pdf — publisher's version (VoR), Adobe PDF, 838.19 kB (access restricted to archive managers; a copy can be requested).

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/251578
Citations
  • PMC: not available
  • Scopus: 13
  • Web of Science: not available