
Land-Cover Semantic Segmentation for Very-High-Resolution Remote Sensing Imagery Using Deep Transfer Learning and Active Contour Loss

  • Miguel Chicchon
  • Francisco James Leon Trujillo
  • Ivan Sipiran
  • Ricardo Madrid

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

Accurate land-cover segmentation of very-high-resolution aerial images is essential for a wide range of applications, including urban planning and natural resource management. However, automating this process remains a challenge owing to the complexity of the images, the variability of land surface features, and noise. In this study, we propose a method for training convolutional neural networks and transformers to perform land-cover segmentation on very-high-resolution aerial images in a regional context. We assessed the U-Net-scSE, FT-U-NetFormer, and DC-Swin architectures, incorporating transfer learning and active contour loss functions to improve performance on semantic segmentation tasks. Our experiments, conducted using the OpenEarthMap dataset, which includes images from 44 countries, demonstrate the superior performance of U-Net-scSE models with the EfficientNet-V2-XL and MiT-B4 encoders, which achieve an mIoU of over 0.80 on a test dataset of urban and rural images from Peru.
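The abstract's mention of an active contour loss refers to a family of segmentation losses that combine a boundary-length penalty on the predicted probability map with region terms that pull the inside and outside of the predicted contour toward the ground-truth mask. The following is a minimal NumPy sketch of that idea for a single 2-D probability map; the function name, the fixed region constants (foreground = 1, background = 0), and the length-term weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def active_contour_loss(pred, target, length_weight=1.0, eps=1e-8):
    """Sketch of an active-contour-style segmentation loss.

    pred, target: 2-D arrays of the same shape with values in [0, 1]
    (predicted foreground probability and binary ground truth).
    The length term penalises long, ragged predicted boundaries; the
    region terms penalise disagreement with the mask inside and
    outside the predicted contour.
    """
    # Length term: total gradient magnitude of the prediction,
    # approximated with forward finite differences.
    dx = pred[1:, :-1] - pred[:-1, :-1]   # vertical differences
    dy = pred[:-1, 1:] - pred[:-1, :-1]   # horizontal differences
    length = np.sqrt(dx ** 2 + dy ** 2 + eps).sum()

    # Region terms with assumed constants c_in = 1 and c_out = 0:
    # inside the predicted foreground, the mask should be 1;
    # outside it, the mask should be 0.
    region_in = np.abs((pred * (target - 1.0) ** 2).sum())
    region_out = np.abs(((1.0 - pred) * (target - 0.0) ** 2).sum())

    return length_weight * length + region_in + region_out
```

In practice such a loss is typically implemented with a tensor library so gradients flow back to the network, and is often combined with a pixel-wise loss such as cross-entropy; this scalar version only illustrates the terms involved.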

Original language: English
Pages (from-to): 59007-59019
Number of pages: 13
Journal: IEEE Access
Volume: 13
DOIs
State: Published - 2025
