Title
Fusion of deep learning descriptors for gesture recognition
Date Issued
01 January 2018
Access level
metadata only access
Resource Type
conference paper
Author(s)
Federal University of Ouro Preto
Federal University of Ouro Preto
Publisher(s)
Springer Verlag
Abstract
In this paper, we propose an approach for dynamic hand gesture recognition that exploits depth and skeleton-joint data captured by a Kinect™ sensor. We also propose a keyframe-extraction method that selects the most relevant points of the hand trajectory, reducing per-video processing time. In addition, the approach combines pose and motion information of a dynamic hand gesture, taking advantage of the transfer-learning property of CNNs. First, we apply an optical flow method to generate a flow image for each keyframe; we then extract pose and motion information using two pre-trained CNNs: a CNN-flow for flow images and a CNN-pose for depth images. Finally, we analyze different schemes for fusing the two sources of information in order to identify the best one. The proposed approach was evaluated on several datasets, achieving promising results and outperforming state-of-the-art methods.
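The paper does not reproduce its implementation here, but the final step it describes, combining the outputs of CNN-pose and CNN-flow, can be illustrated with a minimal sketch of one common scheme, score-level (late) fusion: each network's class scores are turned into probabilities and averaged before taking the argmax. The function names, weights, and toy scores below are illustrative assumptions, not the authors' code.

```python
import math

def softmax(scores):
    """Convert raw classifier scores to a probability distribution."""
    m = max(scores)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def score_fusion(pose_scores, flow_scores, alpha=0.5):
    """Late (score-level) fusion: weighted average of the softmax
    outputs of a pose stream and a flow stream, then argmax.
    `alpha` weights the pose stream (hypothetical parameter)."""
    p = softmax(pose_scores)
    f = softmax(flow_scores)
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(p, f)]
    return fused.index(max(fused))

# Toy example with 3 gesture classes:
pose_scores = [2.0, 0.5, 0.1]   # CNN-pose favours class 0
flow_scores = [0.1, 2.5, 0.3]   # CNN-flow strongly favours class 1
print(score_fusion(pose_scores, flow_scores))  # → 1
```

An alternative the paper's "different schemes" likely also covers is feature-level fusion, where the two descriptor vectors are concatenated before a single classifier; score-level fusion is shown here only because it is the simplest to sketch self-contained.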
Start page
212
End page
219
Volume
10657 LNCS
Language
English
OCDE Knowledge area
Computer science; Bioinformatics
Scopus EID
2-s2.0-85042214567
Source
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN of the container
0302-9743
ISBN of the container
978-3-319-75192-4
Conference
22nd Iberoamerican Congress on Pattern Recognition, CIARP 2017
Sources of information: Directorio de Producción Científica Scopus