Patch-based deep learning architectures for sparse annotated very high resolution datasets

Abstract: In this paper, we compare the performance of different deep-learning architectures under a patch-based framework for the semantic labeling of sparsely annotated urban scenes in very high resolution images. In particular, a simple convolutional network (ConvNet) and the AlexNet and VGG models were trained and tested on the publicly available, multispectral, very high resolution Summer Zurich v1.0 dataset. Experiments with patches of different dimensions were performed and compared, indicating the optimal patch size for the semantic segmentation of very high resolution satellite data. The overall validation and assessment demonstrated the robustness of the high-level features computed by the employed deep architectures for the semantic labeling of urban scenes.
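The patch-based framework described above classifies each annotated pixel from a square patch centered on it. A minimal sketch of such patch extraction is given below; the function name, the default patch size, and the `ignore_value` convention for unannotated pixels are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def extract_patches(image, labels, patch_size=29, ignore_value=0):
    """Extract square patches centered on annotated pixels.

    image:  (H, W, C) multispectral array
    labels: (H, W) map where `ignore_value` marks unannotated pixels
    Returns an array of patches and the class label of each center pixel.
    Names and defaults here are hypothetical, for illustration only.
    """
    half = patch_size // 2
    # Pad spatially so patches centered near the border stay inside the array
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    ys, xs = np.nonzero(labels != ignore_value)
    patches, targets = [], []
    for y, x in zip(ys, xs):
        # Offsets cancel: (y, x) in `labels` maps to (y + half, x + half)
        # in `padded`, so the patch window starts at (y, x)
        patches.append(padded[y:y + patch_size, x:x + patch_size, :])
        targets.append(labels[y, x])
    return np.stack(patches), np.array(targets)
```

Each extracted patch would then be fed to one of the compared networks (ConvNet, AlexNet, or VGG), with the predicted class assigned to the patch's central pixel; varying `patch_size` reproduces the patch-dimension experiments the abstract mentions.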
Document type: Conference papers

https://hal-centralesupelec.archives-ouvertes.fr/hal-02423037
Contributor: Kamilia Abdani
Submitted on: Monday, December 23, 2019 - 3:36:01 PM
Last modification on: Thursday, July 9, 2020 - 4:06:04 PM

Citation

Maria Papadomanolaki, Maria Vakalopoulou, Konstantinos Karantzalos. Patch-based deep learning architectures for sparse annotated very high resolution datasets. Joint Urban Remote Sensing Event (JURSE), Mar 2017, Dubai, United Arab Emirates. ⟨10.1109/JURSE.2017.7924538⟩. ⟨hal-02423037⟩
