Exploring LSTM and CNN Architectures for Sign Language Translation

Abstract

Our study explores the application of deep learning models, specifically LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) architectures, to sign language translation, addressing communication barriers faced by individuals with hearing disabilities. Using a dedicated dataset comprising ten frequently used American Sign Language words, we rigorously compare the performance of the LSTM and CNN models using precision and recall metrics. The LSTM model achieves a perfect accuracy of 1.0, while the CNN model demonstrates a commendable accuracy of 0.9826. These results highlight the potential of these deep learning architectures to enable more inclusive and accessible communication in sign language, bridging the communication divide.
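
The abstract reports precision and recall as the comparison metrics for the two models. As a minimal, hypothetical sketch (the labels and predictions below are made up, not from the study's dataset), macro-averaged precision and recall over the ten sign classes could be computed like this:

```python
def precision_recall(y_true, y_pred, num_classes):
    """Return macro-averaged (precision, recall) over all classes."""
    precisions, recalls = [], []
    for c in range(num_classes):
        # Count true positives, false positives, and false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(precisions) / num_classes, sum(recalls) / num_classes

# Toy check: a classifier that labels every sample correctly (as the reported
# LSTM result implies) scores 1.0 on both macro precision and macro recall.
y_true = list(range(10))  # one sample per ASL word class, labels 0-9
prec, rec = precision_recall(y_true, y_true, 10)
print(prec, rec)  # 1.0 1.0
```

In practice such metrics are usually obtained from a library such as scikit-learn's `precision_score` and `recall_score`; the hand-rolled version above only illustrates what the reported numbers measure.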
