Land-Cover Semantic Segmentation for Very-High-Resolution Remote Sensing Imagery Using Deep Transfer Learning and Active Contour Loss

Miguel Chicchon, Francisco James Leon Trujillo, Ivan Sipiran, Ricardo Madrid

Scientific output: Scientific article · Original article · Peer-reviewed

Abstract

Accurate land-cover segmentation of very-high-resolution aerial images is essential for a wide range of applications, including urban planning and natural resource management. However, automating this process remains challenging owing to the complexity of the images, the variability of land surface features, and noise. In this study, we propose a method for training convolutional neural networks and transformers to perform land-cover segmentation on very-high-resolution aerial images in a regional context. We assessed the U-Net-scSE, FT-U-NetFormer, and DC-Swin architectures, incorporating transfer learning and active contour loss functions to improve semantic segmentation performance. Our experiments, conducted on the OpenEarthMap dataset, which includes images from 44 countries, demonstrate the superior performance of U-Net-scSE models with the EfficientNet-V2-XL and MiT-B4 encoders, achieving an mIoU above 0.80 on a test set of urban and rural images from Peru.
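The abstract does not spell out the loss formulation, but active contour losses for segmentation commonly combine a contour-length (total variation) term with region-fitting terms. The NumPy sketch below illustrates that common formulation; the function name, hyperparameters, and exact terms are assumptions and may differ from the loss used in the paper.

```python
import numpy as np

def active_contour_loss(pred, target, lam=1.0, eps=1e-8):
    """Illustrative active-contour-style loss for binary segmentation.

    pred, target: 2-D arrays of the same shape with values in [0, 1].
    NOTE: this is a generic sketch, not the paper's exact loss.
    """
    # Length term: total variation of the predicted mask, which
    # penalizes long or ragged segmentation boundaries.
    dx = pred[1:, :-1] - pred[:-1, :-1]   # vertical differences
    dy = pred[:-1, 1:] - pred[:-1, :-1]   # horizontal differences
    length = np.sum(np.sqrt(dx ** 2 + dy ** 2 + eps))

    # Region terms: penalize predicted foreground where the target is
    # background (region_in) and vice versa (region_out), with the
    # region "means" fixed at 1 (inside) and 0 (outside).
    region_in = np.abs(np.sum(pred * (target - 1.0) ** 2))
    region_out = np.abs(np.sum((1.0 - pred) * (target - 0.0) ** 2))

    return length + lam * (region_in + region_out)
```

In training, a term like this is typically added to a pixel-wise loss (e.g. cross-entropy) so that the network is rewarded for masks with smooth, compact boundaries as well as correct per-pixel labels.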

Original language: American English
Pages (from-to): 59007-59019
Number of pages: 13
Journal: IEEE Access
Volume: 13
DOI
Status: Indexed - 2025

Bibliographical note

Publisher Copyright:
© 2013 IEEE.
