Conference paper, Year: 2017

Patch-based deep learning architectures for sparse annotated very high resolution datasets

Abstract

In this paper, we compare the performance of different deep-learning architectures under a patch-based framework for the semantic labeling of sparsely annotated urban scenes from very high resolution images. In particular, a simple convolutional network (ConvNet), as well as the AlexNet and VGG models, have been trained and tested on the publicly available, multispectral, very high resolution Zurich Summer v1.0 dataset. Experiments with patches of different dimensions have been performed and compared, indicating the optimal patch size for the semantic segmentation of very high resolution satellite data. The overall validation and assessment indicated the robustness of the high-level features computed by the employed deep architectures for the semantic labeling of urban scenes.
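A minimal sketch of the patch-based setup the abstract describes: fixed-size patches are extracted around the sparsely annotated pixels of a multispectral image and each patch is classified by a small CNN to label its centre pixel. All names, the patch size, band count, class count and network layout below are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical illustration of patch-based semantic labeling with sparse
# annotations; hyperparameters and architecture are assumed, not from the paper.
import numpy as np
import torch
import torch.nn as nn


def extract_patches(image, labels, patch_size=29, ignore_value=0):
    """Return (patches, classes) for every annotated pixel.

    image:  (bands, H, W) multispectral array
    labels: (H, W) integer array; pixels equal to ignore_value are unannotated
    """
    half = patch_size // 2
    padded = np.pad(image, ((0, 0), (half, half), (half, half)), mode="reflect")
    patches, classes = [], []
    for r, c in zip(*np.nonzero(labels != ignore_value)):
        patches.append(padded[:, r:r + patch_size, c:c + patch_size])
        classes.append(labels[r, c])
    return np.stack(patches), np.asarray(classes)


class SimpleConvNet(nn.Module):
    """Small CNN in the spirit of the 'ConvNet' baseline (assumed layout)."""

    def __init__(self, in_bands=4, num_classes=9, patch_size=29):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        feat_side = patch_size // 4  # spatial size after two 2x2 poolings
        self.classifier = nn.Linear(64 * feat_side * feat_side, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    # Fake 4-band tile with sparse annotations (label 0 = unannotated).
    rng = np.random.default_rng(0)
    image = rng.random((4, 128, 128), dtype=np.float32)
    labels = np.zeros((128, 128), dtype=np.int64)
    rows = rng.integers(0, 128, size=200)
    cols = rng.integers(0, 128, size=200)
    labels[rows, cols] = rng.integers(1, 9, size=200)  # classes 1..8

    X, y = extract_patches(image, labels)
    model = SimpleConvNet(in_bands=4, num_classes=9)  # index 0 unused
    logits = model(torch.from_numpy(X))
    print(logits.argmax(dim=1)[:10], y[:10])
```

In this framing, varying patch_size (e.g. 17, 29, 45) is how one would reproduce the paper's comparison of patch dimensions; larger patches add spatial context at the cost of memory and boundary blur.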
No file deposited

Dates and versions

hal-02423037, version 1 (23-12-2019)

Identifiers

Cite

Maria Papadomanolaki, Maria Vakalopoulou, Konstantinos Karantzalos. Patch-based deep learning architectures for sparse annotated very high resolution datasets. Joint Urban Remote Sensing Event (JURSE), Mar 2017, Dubai, United Arab Emirates. ⟨10.1109/JURSE.2017.7924538⟩. ⟨hal-02423037⟩