TY - GEN
T1 - Dynamic sign language recognition based on convolutional neural networks and texture maps
AU - Escobedo, Edwin
AU - Ramirez, Lourdes
AU - Camara, Guillermo
N1 - Funding Information:
The authors thank the Graduate Program in Computer Science (PPGCC) at the Federal University of Ouro Preto (UFOP), the Coordination for the Improvement of Higher Education Personnel (CAPES), and the Brazilian funding agency CNPq.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as input to two CNN models (three-stream and two-stream) to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
AB - Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as input to two CNN models (three-stream and two-stream) to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
KW - CNN
KW - Sign language
KW - Texture maps
UR - http://www.scopus.com/inward/record.url?scp=85077023926&partnerID=8YFLogxK
U2 - 10.1109/SIBGRAPI.2019.00043
DO - 10.1109/SIBGRAPI.2019.00043
M3 - Article (Conference contribution)
AN - SCOPUS:85077023926
T3 - Proceedings - 32nd Conference on Graphics, Patterns and Images, SIBGRAPI 2019
SP - 265
EP - 272
BT - Proceedings - 32nd Conference on Graphics, Patterns and Images, SIBGRAPI 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd SIBGRAPI Conference on Graphics, Patterns and Images, SIBGRAPI 2019
Y2 - 28 October 2019 through 31 October 2019
ER -