A multimodal LIBRAS-UFOP Brazilian sign language dataset of minimal pairs using a Microsoft Kinect sensor

Lourdes Ramirez Cerna, Edwin Escobedo Cardenas, Dayse Garcia Miranda, David Menotti, Guillermo Camara-Chavez

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

Sign language recognition has made significant advances in recent years. Many researchers are interested in encouraging the development of applications that simplify the daily life of deaf people and integrate them into the hearing society. The use of the Kinect sensor (developed by Microsoft) for sign language recognition is steadily increasing. However, few publicly available RGB-D and skeleton joint datasets provide complete information for dynamic signs captured by the Kinect sensor; most of them lack effective and accurate labeling or are stored in a single data format. Given the limitations of existing datasets, this article presents a challenging public dataset, named LIBRAS-UFOP. The dataset is based on the concept of minimal pairs and follows specific categorization criteria; the signs are correctly labeled and validated by an expert in sign language; and the dataset provides complete RGB-D and skeleton data. It consists of 56 different signs with high similarity, grouped into four categories. In addition, a baseline method is presented that generates dynamic images from each multimodal data source, which serve as input to two-stream CNN architectures. Finally, we propose an experimental protocol for conducting evaluations on the proposed dataset. Due to the high similarity between signs, the experimental results using the baseline method report a recognition rate of 74.25% on the proposed dataset. This result highlights how challenging the dataset is for sign language recognition and leaves room for future research to improve the recognition rate.
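The record does not give implementation details of the baseline. As an illustrative sketch only, a "dynamic image" is commonly computed by approximate rank pooling, i.e. a fixed temporal weighting of the frames of a clip, and one such image per modality (e.g. RGB and depth) would then feed its own CNN stream. The function below, its weighting scheme, and the usage names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of frames with shape (T, H, W, C) into one
    'dynamic image' via approximate rank pooling: a fixed temporal
    weighting in which later frames get larger coefficients, so the
    single image encodes the order of motion."""
    frames = np.asarray(frames, dtype=np.float32)
    T = frames.shape[0]
    # Approximate rank pooling coefficients:
    # alpha_t = 2*(T - t + 1) - (T + 1) * (H_T - H_{t-1}),  H_t = sum_{i=1..t} 1/i
    harmonics = np.cumsum(1.0 / np.arange(1, T + 1))   # H_1 .. H_T
    h_prev = np.concatenate(([0.0], harmonics[:-1]))   # H_0 .. H_{T-1}
    t = np.arange(1, T + 1, dtype=np.float32)
    alpha = 2.0 * (T - t + 1) - (T + 1) * (harmonics[-1] - h_prev)
    di = np.tensordot(alpha, frames, axes=(0, 0))      # weighted sum over time -> (H, W, C)
    # Rescale to 0-255 so the result can be fed to a standard image CNN stream
    di = 255.0 * (di - di.min()) / (di.max() - di.min() + 1e-8)
    return di.astype(np.uint8)

# Hypothetical usage: one dynamic image per modality, each passed to its own
# CNN stream before fusing the streams' scores.
# rgb_di   = dynamic_image(rgb_frames)
# depth_di = dynamic_image(depth_frames)
```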

Translated title of the contribution: Un conjunto de datos de lenguaje de señas brasileño LIBRAS-UFOP multimodal de pares mínimos utilizando un sensor Microsoft Kinect
Original language: English
Article number: 114179
Journal: Expert Systems with Applications
Volume: 167
DOIs
State: Published - 1 Apr 2021
Externally published: Yes

Keywords

  • CNN
  • Dynamic images
  • Minimal pairs
  • RGB-D data
  • Sign language dataset
  • Sign language recognition
