Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory

Abstract

This study aims to design and implement deepfake video detection using VGG-16 in combination with long short-term memory (LSTM). In contrast to previous studies, this study compares VGG-16, VGG-19, and the newer ResNet-101, each combined with LSTM. All models were evaluated on the Celeb-DF video dataset. The results showed that the VGG-16 model trained for 15 epochs with a batch size of 32 achieved the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented in practice.
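
To illustrate how such a pipeline might be assembled, the sketch below is a minimal, hypothetical Keras implementation of a VGG-16 + LSTM detector of the kind described above: a frozen VGG-16 backbone extracts per-frame features, and an LSTM aggregates them over time into a real-versus-fake prediction. The sequence length, LSTM width, and dropout rate are illustrative assumptions and are not taken from the paper; only the 15 epochs and batch size of 32 are reported in the abstract.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    # Illustrative clip shape; the paper does not specify frames per clip.
    SEQ_LEN, H, W, C = 20, 224, 224, 3

    # Frozen VGG-16 backbone (ImageNet weights) used as a per-frame feature extractor.
    vgg = VGG16(include_top=False, weights="imagenet",
                input_shape=(H, W, C), pooling="avg")
    vgg.trainable = False

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, H, W, C)),
        # Apply VGG-16 to each frame of the clip independently.
        layers.TimeDistributed(vgg),
        # LSTM aggregates the per-frame features across time.
        layers.LSTM(128),
        layers.Dropout(0.5),            # assumed regularization, not from the paper
        # Binary output: real vs. deepfake.
        layers.Dense(1, activation="sigmoid"),
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Recall(),
                           tf.keras.metrics.Precision()])

    # Training would then follow the reported schedule, e.g.:
    # model.fit(train_clips, train_labels, epochs=15, batch_size=32)

Swapping the backbone for VGG-19 or ResNet-101 (as compared in the study) only requires replacing the feature-extractor line; the temporal LSTM head stays the same.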
