Comparative Analysis of Deep Learning Models for Building Extraction from High-resolution Satellite Imagery

dc.contributor.author: Tachasit Chueprasert
dc.contributor.author: Akadej Udomchaiporn
dc.contributor.author: Sarun Intagosum
dc.date.accessioned: 2025-07-21T06:12:04Z
dc.date.issued: 2024-10-16
dc.description.abstract: In this research, an approach to extracting buildings from Google satellite imagery is proposed. The performances of various deep learning models (U-Net, RIU-Net, U-Net++, Res-U-Net, and DeepLabV3+) were compared on pre-processed datasets, with the trained models evaluated using the similarity metrics Intersection over Union (IoU) and Dice Similarity Coefficient (DSC). Among the segmentation techniques compared, the best-performing models were Res-U-Net and DeepLabV3+. Res-U-Net, an enhanced version of the traditional U-Net model that incorporates residual connections for improved feature propagation, achieved an F1 score of 85.43% on the RGB dataset. DeepLabV3+ likewise performed well on the Enhanced RGB dataset, obtaining an F1 score of 85.18% after the pre-processing techniques were applied. This research highlights the significance of color as a dominant feature for accurate building extraction from satellite images. The findings contribute to improved methodologies for building identification, benefiting urban planning and disaster management applications.
dc.identifier.doi: 10.55003/cast.2024.260846
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/13980
dc.subject.classification: Remote Sensing and Land Use
dc.title: Comparative Analysis of Deep Learning Models for Building Extraction from High-resolution Satellite Imagery
dc.type: Article
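
For context, the following is a minimal Python/NumPy sketch of the two similarity metrics named in the abstract; the helper names iou_score and dice_score are illustrative only and are not code from the paper. Note that for binary building masks the Dice Similarity Coefficient coincides with the F1 score reported in the abstract.

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union for binary masks (1 = building, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient; equal to the F1 score for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))

if __name__ == "__main__":
    # Toy example: a predicted mask versus a ground-truth mask.
    pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    target = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
    print(f"IoU:  {iou_score(pred, target):.4f}")   # 0.5000
    print(f"Dice: {dice_score(pred, target):.4f}")  # 0.6667
```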
