Design and Implement Deepfake Video Detection Using VGG‐16 and Long Short‐Term Memory

dc.contributor.author: Laor Boongasame
dc.contributor.author: Jindaphon Boonpluk
dc.contributor.author: Sunisa Soponmanee
dc.contributor.author: Jirapond Muangprathub
dc.contributor.author: Karanrat Thammarak
dc.date.accessioned: 2026-05-08T19:16:16Z
dc.date.issued: 2024-01-01
dc.description.abstract: This study aims to design and implement deepfake video detection using VGG‐16 in combination with long short‐term memory (LSTM). In contrast to other studies, this study compares VGG‐16, VGG‐19, and the newer ResNet‐101, each combined with LSTM. All models were tested on the Celeb‐DF video dataset. The VGG‐16 model trained for 15 epochs with a batch size of 32 exhibited the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented practically.
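The accuracy, recall, specificity, and precision figures reported in the abstract follow the standard confusion-matrix definitions for binary (real vs. fake) classification. A minimal sketch of those definitions; the counts below are purely illustrative, since the record does not publish the paper's confusion matrix:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)        # sensitivity: fake videos correctly flagged
    specificity = tn / (tn + fp)   # real videos correctly passed
    precision = tp / (tp + fp)     # flagged videos that are actually fake
    return accuracy, recall, specificity, precision

# Illustrative counts only (assumed, not taken from the paper).
acc, rec, spec, prec = classification_metrics(tp=93, fp=1, tn=99, fn=7)
print(f"accuracy={acc:.2%} recall={rec:.2%} "
      f"specificity={spec:.2%} precision={prec:.2%}")
```

Note that recall and specificity together characterize both error directions: a high specificity (99.20% here) means genuine videos are rarely misclassified as deepfakes.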
dc.identifier.doi: 10.1155/2024/8729440
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/15438
dc.publisher: Applied Computational Intelligence and Soft Computing
dc.subject: Digital Media Forensic Detection
dc.subject: Anomaly Detection Techniques and Applications
dc.subject: Generative Adversarial Networks and Image Synthesis
dc.title: Design and Implement Deepfake Video Detection Using VGG‐16 and Long Short‐Term Memory
dc.type: Article
