Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory

dc.contributor.author: Laor Boongasame
dc.contributor.author: Jindaphon Boonpluk
dc.contributor.author: Sunisa Soponmanee
dc.contributor.author: Jirapond Muangprathub
dc.contributor.author: Karanrat Thammarak
dc.date.accessioned: 2025-07-21T06:10:35Z
dc.date.issued: 2024-01-01
dc.description.abstract: This study aims to design and implement deepfake video detection using VGG-16 in combination with long short-term memory (LSTM). In contrast to other studies, this study compares VGG-16, VGG-19, and the newer ResNet-101, each combined with LSTM. All models were tested on the Celeb-DF video dataset. The results showed that the VGG-16 model trained for 15 epochs with a batch size of 32 exhibited the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented practically.
dc.identifier.doi: 10.1155/2024/8729440
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/13194
dc.subject.classification: Digital Media Forensic Detection
dc.title: Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory
dc.type: Article
