What Goes Where: Predicting Object Distributions from Above

08/02/2018
by Connor Greenwell, et al.

In this work, we propose a cross-view learning approach in which geotagged ground-level images serve as a source of weak supervision for interpreting overhead imagery. The outcome is a convolutional neural network that, given an overhead image, predicts the type and count of objects likely to be visible from a ground-level perspective at that location. We demonstrate our approach on a large dataset of paired ground-level and overhead imagery and find that the network captures semantically meaningful features despite being trained without any manual annotations.
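To make the cross-view setup concrete, below is a minimal sketch (in PyTorch; not the authors' code) of how detections in ground-level photos could serve as weak per-class count labels for a CNN over co-located overhead tiles. The names `OverheadCountNet` and `weak_labels_from_ground`, the ResNet-18 backbone, and the Poisson count loss are illustrative assumptions; the paper's actual architecture and objective may differ.

```python
# Sketch of cross-view weak supervision: an overhead-image CNN regresses
# per-class object counts, where the "labels" come from object detections
# in geotagged ground-level photos near each overhead tile.
# NUM_CLASSES, OverheadCountNet, and weak_labels_from_ground are
# hypothetical names, not from the paper.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 80  # e.g., categories from an off-the-shelf detector

class OverheadCountNet(nn.Module):
    """CNN over overhead imagery predicting per-class log count rates."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, overhead: torch.Tensor) -> torch.Tensor:
        # Interpret the output as the log-rate of a Poisson count model.
        return self.backbone(overhead)

def weak_labels_from_ground(detections: list[list[int]]) -> torch.Tensor:
    """Turn per-image lists of detected class ids (from ground-level
    photos taken near each overhead tile) into per-class count vectors."""
    counts = torch.zeros(len(detections), NUM_CLASSES)
    for i, classes in enumerate(detections):
        for c in classes:
            counts[i, c] += 1
    return counts

model = OverheadCountNet()
criterion = nn.PoissonNLLLoss(log_input=True)  # counts ~ Poisson(exp(rate))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step: a batch of overhead tiles paired with
# detections from nearby ground-level images (random data for illustration).
overhead_batch = torch.randn(8, 3, 224, 224)
ground_detections = [[2, 2, 7], [0], [], [5, 5, 5, 1], [3], [7, 2], [1, 1], [4]]
targets = weak_labels_from_ground(ground_detections)

log_rates = model(overhead_batch)
loss = criterion(log_rates, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A Poisson negative log-likelihood is a natural fit here because the targets are non-negative counts rather than class labels; a plain MSE on counts would also work as a simpler baseline.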
