Title
Dynamic sign language recognition based on convolutional neural networks and texture maps
Date Issued
01 October 2019
Access level
metadata-only access
Resource Type
conference paper
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep-learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as inputs to two CNN models (three-stream and two-stream) to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
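The abstract does not specify how the streams' predictions are combined. As an illustrative sketch only (not the authors' implementation), a common way to fuse multiple CNN streams, such as a texture-map stream and a hand-shape stream, is late fusion: average the softmax scores of each stream and take the arg-max class. All names and shapes below are assumptions.

```python
import math

def softmax(logits):
    # numerically stable softmax over one list of class logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(stream_logits):
    """Fuse per-stream class logits by averaging their softmax
    scores (late fusion) and return the predicted class index.

    stream_logits: one logit list per CNN stream, e.g. a
    texture-map stream and a hand-shape stream (hypothetical).
    """
    probs = [softmax(l) for l in stream_logits]
    n_classes = len(probs[0])
    fused = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# toy example: two streams voting over four sign classes
texture_stream = [0.0, 0.0, 0.0, 2.5]   # most confident in class 3
shape_stream   = [0.0, 0.0, 2.0, 0.1]   # most confident in class 2
pred = late_fusion([texture_stream, shape_stream])
```

Because the texture stream's confidence in class 3 outweighs the shape stream's confidence in class 2 after averaging, the fused prediction here is class 3; a learned fusion layer is another common choice when the streams are trained jointly.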
Start page
265
End page
272
Language
English
OCDE Knowledge area
Computer science
Scopus EID
2-s2.0-85077023926
Resource of which it is part
Proceedings - 32nd Conference on Graphics, Patterns and Images, SIBGRAPI 2019
ISBN of the container
9781728152271
Sponsor(s)
The authors thank the Graduate Program in Computer Science (PPGCC) at the Federal University of Ouro Preto (UFOP), the Coordination for the Improvement of Higher Education Personnel (CAPES), and the Brazilian funding agency CNPq.
Sources of information: Directorio de Producción Científica Scopus