Exploring LSTM and CNN Architectures for Sign Language Translation

dc.contributor.author: Mongkol Boondamnoen
dc.contributor.author: Kamolwich Thongsri
dc.contributor.author: Thanapat Sahabantoegnsin
dc.contributor.author: Kuntpong Woraratpanya
dc.date.accessioned: 2026-05-08T19:16:27Z
dc.date.issued: 2023-10-26
dc.description.abstract: Our study explores the application of deep learning models, specifically LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network), to sign language translation, addressing communication barriers faced by individuals with hearing disabilities. Using a dedicated dataset of ten frequently used American Sign Language words, we rigorously compare the performance of the LSTM and CNN models on accuracy, precision, and recall. The LSTM model achieves a perfect accuracy of 1.0, while the CNN model demonstrates a commendable accuracy of 0.9826. These results highlight the potential of these deep learning architectures to enable more inclusive and accessible sign language communication, bridging the communication divide.
dc.identifier.doi: 10.1109/icitee59582.2023.10317660
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/15510
dc.subject: Hand Gesture Recognition Systems
dc.subject: Hearing Impairment and Communication
dc.subject: Gait Recognition and Analysis
dc.title: Exploring LSTM and CNN Architectures for Sign Language Translation
dc.type: Article
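
For orientation, the following is a minimal sketch (not the authors' code, which the record does not include) of the two architecture families the abstract compares, written with Keras. The input representation is an assumption: sequences of 30 frames with 63 hand-keypoint coordinates per frame (hypothetical, e.g. from a pose estimator), classified into the ten ASL words; the paper's CNN may equally operate on raw image frames.

    # Hedged sketch of an LSTM vs. CNN comparison for 10-class sign classification.
    # FRAMES/FEATURES are hypothetical; only NUM_CLASSES = 10 comes from the abstract.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 10           # ten frequently used ASL words (from the abstract)
    FRAMES, FEATURES = 30, 63  # assumed sequence length / keypoints per frame

    def build_lstm():
        # Stacked LSTMs model the gesture's temporal dynamics frame by frame.
        return keras.Sequential([
            layers.Input(shape=(FRAMES, FEATURES)),
            layers.LSTM(64, return_sequences=True),
            layers.LSTM(64),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def build_cnn():
        # 1-D convolutions over time as one plausible CNN counterpart
        # on the same keypoint-sequence input.
        return keras.Sequential([
            layers.Input(shape=(FRAMES, FEATURES)),
            layers.Conv1D(64, kernel_size=3, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(128, kernel_size=3, activation="relu"),
            layers.GlobalAveragePooling1D(),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    for build in (build_lstm, build_cnn):
        model = build()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Random dummy data stands in for the dedicated ASL dataset.
        x = np.random.rand(8, FRAMES, FEATURES).astype("float32")
        y = np.random.randint(0, NUM_CLASSES, size=(8,))
        model.fit(x, y, epochs=1, verbose=0)

Both models share the same input and output shapes, so accuracy, precision, and recall can be compared directly on a held-out split, mirroring the comparison the abstract describes.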
