Exploring LSTM and CNN Architectures for Sign Language Translation
| Field | Value |
| --- | --- |
| dc.contributor.author | Mongkol Boondamnoen |
| dc.contributor.author | Kamolwich Thongsri |
| dc.contributor.author | Thanapat Sahabantoegnsin |
| dc.contributor.author | Kuntpong Woraratpanya |
| dc.date.accessioned | 2026-05-08T19:16:27Z |
| dc.date.issued | 2023-10-26 |
| dc.description.abstract | Our study explores the application of deep learning models, specifically LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) architectures, to sign language translation, addressing communication barriers faced by individuals with hearing disabilities. Using a dedicated dataset of ten frequently used American Sign Language words, we compare the two models on precision, recall, and accuracy. The LSTM model achieves a perfect accuracy of 1.00, while the CNN model reaches a commendable 0.9826. These results highlight the potential of both architectures to enable more inclusive and accessible communication in sign language, helping bridge the communication divide. |
| dc.identifier.doi | 10.1109/icitee59582.2023.10317660 |
| dc.identifier.uri | https://dspace.kmitl.ac.th/handle/123456789/15510 |
| dc.subject | Hand Gesture Recognition Systems |
| dc.subject | Hearing Impairment and Communication |
| dc.subject | Gait Recognition and Analysis |
| dc.title | Exploring LSTM and CNN Architectures for Sign Language Translation |
| dc.type | Article |
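
As a rough illustration of the LSTM approach summarized in the abstract, the sketch below builds a small ten-class sequence classifier in Keras. The paper does not publish its feature pipeline or hyperparameters, so the input shape (30 frames of 126 hand-landmark coordinates, e.g., two hands × 21 MediaPipe landmarks × 3 coordinates), the layer sizes, and the training setup are all assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of an LSTM classifier for ten sign-language words.
# ASSUMPTIONS: inputs are sequences of 30 frames, each a 126-dim vector
# of hand keypoints; the paper does not specify these details.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_FRAMES = 30      # assumed frames per sign clip
NUM_FEATURES = 126   # assumed keypoint features per frame
NUM_CLASSES = 10     # ten ASL words, per the abstract

model = keras.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    layers.LSTM(64, return_sequences=True),  # per-frame temporal features
    layers.LSTM(128),                        # final state summarizes the clip
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on random data to confirm shapes; real training would use
# labeled sign-language clips instead.
x = np.random.rand(8, NUM_FRAMES, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)  # (1, 10)
```

A CNN variant in the same spirit would replace the LSTM layers with Conv1D (or Conv2D over raw frames) plus pooling ahead of the dense head; the choice trades the LSTM's explicit temporal modeling for the CNN's local pattern extraction, the comparison the abstract reports.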