A Comparative Study of Generative Adversarial Networks for Generating Car Damaged Images
Abstract
The deeper deep learning (DL) models become, the greater the computational complexity and the larger the training datasets they require to achieve accurate results. Generative adversarial networks (GANs) have attracted tremendous attention from researchers for their impressive ability to generate synthetic instances from only a few source samples, alleviating the problems of data scarcity and insufficient data diversity, whereas standard data augmentation techniques produce only a limited range of plausible alternatives. This makes GANs well suited to low-data DL training, both for models trained from scratch and for pre-trained models, across a variety of image classification tasks. To address these problems, together with the lack of publicly available high-quality car-damage datasets for car damage analysis, we created a custom car damaged/undamaged dataset and a framework that generates a synthesized car-damaged dataset by translating images from one domain to the other, as a comparative study of Cycle-Consistent Adversarial Networks (CycleGAN) and the Attention-Guided Generative Adversarial Network (AttentionGAN). We then evaluated the generated car-damaged images with three different assessments: first, three quantitative GAN metrics, namely the Inception Score (IS), Fréchet Inception Distance (FID), and Kernel Inception Distance (KID); second, a convolutional neural network (CNN) classifier that labels images as real or fake; and third, a vision transformer (ViT) classifier that labels them as damaged or undamaged. Across all of our experimental results, this comparative analysis shows that AttentionGAN outperforms CycleGAN.
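As a concrete illustration of one of the quantitative metrics named above, the sketch below computes KID as the unbiased squared maximum mean discrepancy (MMD) with the standard cubic polynomial kernel. This is a minimal NumPy sketch, not the paper's pipeline: the random arrays are an assumed stand-in for the Inception feature vectors that the real metric extracts from real and generated images.

```python
import numpy as np

def poly_kernel(a, b):
    """Cubic polynomial kernel used by KID: k(x, y) = (x . y / d + 1)^3."""
    d = a.shape[1]
    return (a @ b.T / d + 1.0) ** 3

def kid(x, y):
    """Unbiased squared MMD between two feature sets with the KID kernel.

    In the full metric, x and y hold Inception features of real and
    generated images; here any (n, d) float arrays work.
    """
    n, m = len(x), len(y)
    kxx, kyy, kxy = poly_kernel(x, x), poly_kernel(y, y), poly_kernel(x, y)
    # Exclude the diagonal self-similarities to keep the estimator unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))   # stand-in "real" features
real2 = rng.normal(0.0, 1.0, size=(500, 16))  # second draw, same distribution
fake = rng.normal(0.5, 1.0, size=(500, 16))   # "generated" features, shifted

print(kid(real, real2))  # near zero: matched distributions
print(kid(real, fake))   # clearly positive: distribution mismatch
```

Lower KID means the generated feature distribution is closer to the real one, which is why it serves as a comparison criterion between GAN variants.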