Continuous Sign Language Recognition Using Leap Motion Sensor

Date

2024

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

Sign language is a vital communication tool that connects people with hearing and speech impairments worldwide. It consists primarily of hand movements and facial expressions, whose meaning is conveyed through precise gesture interpretation. In this study, we use two machine learning models, Long Short-Term Memory (LSTM) and Support Vector Machines (SVM), to predict signs. Our models were trained and evaluated on a dataset of 42 distinct sign words and 28 sentences. The LSTM model achieved accuracies of 90.35% for word prediction and 98.21% for sentence prediction, outperforming the SVM model, which achieved 85.96% and 89.58%, respectively. By using depth sensors such as the Leap Motion device, our approach aims to enhance sign language recognition (SLR). © 2024 IEEE.
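The abstract compares a sequence model (LSTM) with a fixed-input classifier (SVM) on depth-sensor hand data. A minimal sketch of the data-preparation step this implies, assuming each Leap Motion frame is flattened into a feature vector (the 21-joints-per-hand layout and the sequence length of 60 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def pad_sequence(frames, max_len):
    """Pad or truncate a (T, F) sequence of per-frame hand features to (max_len, F)."""
    frames = np.asarray(frames, dtype=float)
    T, F = frames.shape
    out = np.zeros((max_len, F))
    out[:min(T, max_len)] = frames[:max_len]
    return out

# Hypothetical layout: 3D positions of 21 hand joints per frame -> 63 features.
rng = np.random.default_rng(0)
sample = rng.normal(size=(30, 63))     # 30 frames captured by the sensor

lstm_input = pad_sequence(sample, 60)  # (60, 63): kept as a sequence for the LSTM
svm_input = lstm_input.reshape(-1)     # (3780,): flattened into one vector for the SVM
print(lstm_input.shape, svm_input.shape)
```

The contrast in the last two lines reflects the design difference: the LSTM consumes the frame sequence directly, while the SVM needs each gesture collapsed into a single fixed-length vector.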

Keywords

depth sensor, leap motion device, LSTM, Sign language recognition, SVM

Citation

2024 IEEE 3rd World Conference on Applied Intelligence and Computing (AIC 2024), 2024, pp. 1160-1165.
