
Exploiting Convolutional Representations for Multiscale Human Settlement Detection

by   Dalton Lunga, et al.

We test this premise and explore representation spaces from a single deep convolutional network, together with their visualizations, to argue for a novel unified feature extraction framework. The objective is to utilize and re-purpose trained feature extractors, without network retraining, on three remote sensing tasks, i.e., superpixel mapping, pixel-level segmentation, and semantic-based image visualization. By leveraging the same convolutional feature extractors, viewed as visual information extractors that encode different image representation spaces, we demonstrate preliminary inductive transfer learning potential in multiscale experiments spanning edge-level details up to semantic-level information.
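The core idea of re-purposing one fixed convolutional extractor across scales, from edge-level to coarser representations, can be illustrated with a toy NumPy sketch. This is not the authors' pipeline: the Sobel filter merely stands in for a learned convolutional filter, and all names here are illustrative.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def downsample(img, factor=2):
    """Crude decimation to emulate a coarser image scale."""
    return img[::factor, ::factor]

# A fixed "pre-trained" filter: a Sobel edge filter stands in for a
# convolutional feature extractor that is reused without retraining.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

img = np.random.rand(16, 16)                # toy single-band image patch
fine = conv2d(img, sobel_x)                 # edge-level response, full resolution
coarse = conv2d(downsample(img), sobel_x)   # same extractor, coarser scale
```

The same filter bank produces feature maps at every scale; a downstream task (superpixel mapping, segmentation, visualization) would then consume these maps instead of retraining the extractor.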


