Estimating height is of great importance in understanding geometric relations within a scene. Many challenging remote sensing problems have proven to benefit from the rich representations of objects and their environment provided by height information, e.g., semantic labeling [1, 2, 3, 4] and change detection [5, 6].
Highly developed laser sensors such as LiDAR have made the acquisition of DSM data affordable, but such height information is not always available, especially when working with a huge number of historical remote sensing images. In addition, while there is much prior work in the remote sensing field on estimating height based on the idea of stereo matching, which leverages camera motion to estimate camera poses across different temporal intervals and, in turn, estimates height via triangulation from pairs of consecutive views, there has been fairly little on estimating height from a single remote sensing image, although the monocular case often arises in practice.
Unfortunately, height estimation from monocular vision is a technically ill-posed problem, as one captured remote sensing image may correspond to an infinite number of possible real-world scenarios, i.e., there is an inherent ambiguity in mapping an intensity or color measurement into a height value. While it is not difficult for humans to infer the underlying 3D structure from a single remote sensing image, it remains a challenging task to develop a vision algorithm that does so by exploiting monocular cues alone. In this work, a correct height map is one that is physically plausible in the real world.
I-A Related Work
Depth Estimation in Computer Vision.
Directly related to our work are several methods of estimating a depth map from a single image in the computer vision field. Traditionally, this problem was tackled using hand-crafted visual features and probabilistic graphical models (PGMs), and most approaches rely on strong assumptions about scene geometry. Saxena et al. [7] use a discriminatively-trained Markov random field (MRF) that infers depth from a single monocular image by incorporating local and global image features, in which superpixels are introduced to enforce neighboring constraints. Their work has later been extended to the Make3D [8] system for 3D scene structure generation.
A related direction is the work of Hoiem et al. [9], where the authors classify image regions into geometric structures (e.g., sky, vertical, and ground) instead of explicitly predicting depth. By doing so, a simple 3D model of the scene can be obtained. Liu et al. [10] do not map from hand-crafted features to depth directly, but instead first perform a semantic segmentation of the scene and use the semantic labels to guide the depth prediction. In this respect, depth can be more readily estimated by measuring the difference in appearance with respect to a given semantic class. In [11], the authors show how to simultaneously integrate semantic segmentation with single-view depth estimation to improve performance.
Another view on depth estimation can be found when considering transfer strategies: a feature-based matching between a query image and a pool of images for which the depth is known is first performed to search for nearest neighbors, and the retrieved depth counterparts are then warped and combined to produce the final depth map for the query image. Konrad et al. [12] make use of a kNN transfer mechanism based on histogram of oriented gradients (HOG) features to estimate depth by computing a median over the retrieved candidate depth maps. In [13], the authors consider monocular depth estimation a discrete-continuous optimization problem, where the continuous variables encode the depth of the superpixels in the query image, and the discrete ones represent relationships between neighboring superpixels. Thus they can formulate this task as inference in a high-order conditional random field (CRF). These transfer-based approaches, however, require the entire dataset to be available at runtime and perform an expensive alignment process.
More recently, as an important branch of the deep learning family, convolutional neural networks (CNNs) have been fast emerging as the leading machine learning methodology and have become the model of choice in many fields. For instance, CNNs have been proven to be good at extracting mid- and high-level abstract feature representations from raw images for classification purposes, by interleaving convolutional and pooling layers, i.e., spatially shrinking the feature maps layer by layer, and recently proposed network architectures also allow for dense per-pixel predictions like semantic segmentation [14, 15] and super-resolution [16, 17]. A first attempt at exploiting CNNs for depth estimation can be found in the work of Eigen et al. [18], where the authors achieve this through the use of two CNNs, one that regresses a global depth structure from a single image, and another that refines it locally at a finer resolution. In [19], that work is later extended by predicting depth jointly with semantic labeling and surface normal estimation via a multi-scale CNN architecture, which helps obtain more fine-grained depth maps. Moreover, another line of work studies ways of combining CNNs and graphical models like CRFs for single-view depth estimation, where CNNs are usually used to extract relevant features. For example, Liu et al. employ a CRF to model the relations of neighboring superpixels and learn the potentials (both unary and binary) in a unified CNN framework.
Height Prediction in Remote Sensing. For depth/height estimation from single monocular images, the useful cues to be taken into account are contextual information, including occlusion, interposition, texture gradients, and texture variations [22, 23]. In comparison with images used for depth estimation in the computer vision field, remote sensing images have several unique characteristics which bring new challenges for height prediction:
Remotely sensed images are often orthographic, so that extremely limited contextual information is available.
In remote sensing, limited spatial resolution, relatively large area coverage, and numerous tiny objects often pose a problem for the height prediction task.
Fig. 1 shows the differences between the depth estimation data and the height prediction data.
Unlike single-view depth estimation in computer vision, height prediction from a single monocular image has rarely been addressed in the remote sensing community so far. In a pioneering work, Srivastava et al. use a joint loss function in a CNN, which is a linear combination of a semantic labeling loss and a regression loss minimizing height prediction errors. The network can be trained by traditional back-propagation by alternating over the two losses. Note, however, that in the training phase this model needs pixel-wise labeled segmentation masks as input, while obtaining a massive amount of manually-labeled masks is very expensive and time consuming. In contrast, along with the development of sensor technology, DSM data is now widely accessible at a reasonable cost.
In this paper we propose to learn the mapping between a single monocular remote sensing image and its corresponding height map by exploiting an end-to-end network; unlike the work of Srivastava et al., only the DSM is used as ground truth to train the network. More specifically, we directly regress on the height making use of a network with two components: one that first transforms the input image into a condensed, abstract high-level feature representation, and a second that estimates the height map of the scene from this encoded feature. In detail, our work contributes to the literature in three major aspects:
We address a novel problem in the remote sensing community, namely height estimation from a single monocular remote sensing image. Unlike the work of Srivastava et al., we only make use of DSM data as ground truth for training, and no additional data like pixel-wise labeled masks are used.
We propose an end-to-end deep residual network, which is composed of a convolutional sub-network and a deconvolutional sub-network, as well as a skip connection. To the best of our knowledge, learning such a residual convolutional-deconvolutional network architecture for pixel-wise prediction tasks in remote sensing has not been well investigated yet.
To further assess the usefulness of the single-view height prediction, we show an application, instance segmentation of buildings from the predicted height map. Most instance segmentation approaches usually rely on strong supervision for training, i.e., pixel-level segmentation masks, while labeling a considerable number of pixel-level masks is expensive. In this application, we give a different perspective to achieve this task.
The paper is organized as follows. After the introductory Section I detailing depth/height estimation from a single monocular image, we enter Section II dedicated to the details of the proposed fully residual convolutional-deconvolutional network for height estimation. Section III then provides the network setup, experimental results, and an application. Finally, Section IV concludes the paper.
Denote by $X$ and $Y$ random variables representing a remote sensing image and its corresponding DSM data, and denote their joint probability distribution by $P(X, Y)$. Here $P(X)$ is the distribution of remote sensing images and $P(Y \mid X)$ is the distribution of DSM maps given remote sensing images. Ideally our aim is to find $P(Y \mid X)$, but direct application of Bayes' theorem is not feasible. Fortunately, as a special case, $Y$ may be a deterministic function of $X$. Therefore in this paper we resort to a point estimate, a mapping $f: X \to Y$, which minimizes the following objective function:

$$\mathcal{L}(f) = \mathbb{E}\big[\ell(f(X), Y)\big], \tag{1}$$

where $\ell$ is a loss function.

The minimizer of this loss is the conditional expectation:

$$f^*(x) = \mathbb{E}[Y \mid X = x],$$

that is, the expected height map.

Given the training set of remote sensing images $\{x_i\}_{i=1}^N$ and their DSM data $\{y_i\}_{i=1}^N$, we learn the weights $w$ of $f$ to minimize a Monte-Carlo estimate of the loss (1):

$$\mathcal{L}(w) = \frac{1}{N}\sum_{i=1}^{N} \ell\big(f(x_i; w), y_i\big).$$
This means that training an end-to-end network to approximate DSMs from their remote sensing images can result in estimating the expected height maps. But what is a good network architecture for our purpose?
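As a minimal illustrative sketch (not the paper's implementation), the Monte-Carlo estimate above averages a per-pixel loss over the training pairs; the `empirical_loss` and `predict` names below are hypothetical stand-ins for $\mathcal{L}(w)$ and $f(\cdot; w)$:

```python
import numpy as np

def empirical_loss(predict, images, dsms):
    """Monte-Carlo estimate of Eq. (1): average the per-pixel error of the
    network prediction over all training pairs (here with an L1/MAE loss)."""
    return float(np.mean([np.abs(predict(x) - y).mean()
                          for x, y in zip(images, dsms)]))

# toy check with an identity "network": predicting the ground truth itself
# drives the empirical loss to zero
xs = [np.zeros((4, 4)), np.ones((4, 4))]
loss = empirical_loss(lambda x: x, xs, xs)
```

A constant offset of 1 in every prediction would correspondingly yield an empirical loss of exactly 1 under this L1 formulation.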
II-A Architecture: Fully Convolutional-Deconvolutional Net
The conventional CNNs are well known to be good at extracting features from data for concrete-to-abstract tasks like image classification [25, 26, 27], by spatially shrinking the feature maps. In such networks, pooling is necessary to allow agglomerating information over the feature maps and, more fundamentally, to make the network computationally feasible. However, this produces feature maps with a reduced spatial resolution, so in order to provide dense per-pixel height maps we need to find a way to refine the coarse pooled feature representations.
Fully convolutional networks (FCNs) have recently been actively studied for dense pixel-wise prediction tasks, e.g., semantic segmentation [14, 15], image super-resolution [16, 17], and depth estimation [18, 19]. To refine the downsampled output caused by the pooling operations in the conventional CNN framework, "interpolation" operations are usually used. For example, in [14], Long et al. present a method to iteratively refine the coarse feature maps by applying the upsampling in the training phase instead of simply taking it as a post-processing step. This work shows that fully connected layers can be replaced with convolutions whose filter size matches the layer input dimension. By doing so, the network is able to work on arbitrarily sized images while generating a desired full-resolution output. Moreover, the CNN model proposed by Eigen et al. [18] for depth prediction also suffers from this upsampling problem: the estimated depth maps are only 1/4-resolution of their original input images, with some border areas lost. They refine those coarse depth maps by training an additional network which takes the coarse prediction and the input image as inputs.
In this paper, we propose to use a network that is composed of two parts, i.e., a convolutional sub-network and a deconvolutional sub-network (see Fig. 2). The former corresponds to a feature extractor that transforms the input remote sensing image into a high-level multidimensional feature representation, whereas the latter plays the role of a height generator that produces the height map from the features extracted by the convolutional sub-network. Unlike [18, 14], we deconvolve the whole coarse feature maps, rather than only processing the coarse prediction, which allows the network to transfer more high-level information to the fine prediction.
So far we have settled on the network architecture; the next step is to instantiate a network, i.e., to identify the details of the architecture (e.g., network hyper-parameters and loss function). It can be seen from Eq. (1) that the choice of $f$ matters, and it involves such details. The following text shows how we build and gradually refine a network for the height map estimation task.
II-B Plain Convolutional Net vs. Residual Net
The popular CNN architecture for dense per-pixel prediction tasks is the VGG-like network [14, 15, 28, 29]. We, therefore, first attempt to build our fully convolutional-deconvolutional network following the philosophy of the VGG Nets.
The core component of the VGG Nets is the plain convolutional block (cf. Fig. 3), which makes the networks simple and extensible. In general, our convolutional sub-network follows two rules of the VGG Nets: 1) having the same feature map size and the same number of filters in each convolutional layer of the same plain convolutional block; and 2) increasing the number of feature maps in the deeper layers, roughly doubling it after each max-pooling layer. The traits of the convolutional sub-network can be summarized as follows:
The input remote sensing image is fed into a stack of plain convolutional blocks, where we make use of convolutional filters with a very small receptive field of 3×3, rather than leveraging larger ones, such as 5×5 or 7×7. That is because stacking small convolutional layers increases the nonlinearities inside the network.
The deconvolutional sub-network is a mirrored version of the convolutional sub-network, and its main ingredient is the deconvolutional operation, which performs the reverse operation of the convolutional sub-network and constructs the height map from the abstract feature representation. The deconvolutional operation consists of unpooling and convolution. In order to map the encoded feature maps to a desired full-resolution height map, we need unpooling to unpool the feature maps, i.e., to increase their spatial span, as opposed to pooling (spatially shrinking the feature maps). Dosovitskiy et al. [30, 31] perform unpooling by simply replacing each entry of a feature map by a block of the pooling size with the entry value in the top-left corner and zeros elsewhere (see Fig. 4). In [32, 29], another form of unpooling is implemented by making use of the max-pooling indices computed in the max-pooling layers of the convolutional sub-network (cf. Fig. 4). In this paper we choose the latter as the unpooling strategy in our model, as the use of max-pooling indices theoretically enables location information to be more accurately represented and thus improves boundary delineation. Moreover, we achieve the convolution using the same plain convolutional block as the convolutional sub-network.
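To make the index-based unpooling concrete, the following toy numpy sketch (illustrative only, not the actual network code) records where each 2×2 maximum came from during pooling and writes each pooled value back to that exact location during unpooling, zeroing everywhere else:

```python
import numpy as np

def maxpool2x2_with_indices(x):
    """2x2 max-pooling that also returns the flat index of each maximum."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(h // 2):
        for j in range(w // 2):
            block = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            k = int(block.argmax())  # flat position of the max within the 2x2 block
            pooled[i, j] = block.flat[k]
            indices[i, j] = (2 * i + k // 2) * w + (2 * j + k % 2)
    return pooled, indices

def unpool_with_indices(pooled, indices, out_shape):
    """Place each pooled value back at its recorded location; zeros elsewhere."""
    out = np.zeros(out_shape)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 3., 0., 2.],
              [4., 2., 1., 0.],
              [0., 1., 5., 6.],
              [2., 0., 7., 1.]])
p, idx = maxpool2x2_with_indices(x)
u = unpool_with_indices(p, idx, x.shape)
```

The unpooled map `u` is sparse but locates each activation exactly where it originated, which is the property that helps boundary delineation.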
Now we have a reasonable network to handle our task, but a problem arises when we try to train it. The fully convolutional-deconvolutional network based on the plain convolutional blocks, which we will call plain conv-deconv net hereafter, can reduce errors on both the training and validation sets during the first few iterations, but rapidly converges to a relatively high error value, which means it is not easy to optimize such a network. Furthermore, we also observe that the plain conv-deconv net fails to learn to produce physically plausible height maps (see an example in Fig. 5). To resolve this problem, we need to find a better way to construct the network.
Recently, ResNet has achieved state-of-the-art results in image classification, winning ILSVRC 2015 with a top-5 error rate of 3.57%. The core idea of ResNet is the residual block (cf. Fig. 3), i.e., adding shortcut connections that bypass two or more stacked convolutions by performing an identity mapping whose output is then added to the output of the stacked convolutions. He et al. show that ResNet is easier to optimize than plain networks like the VGG Nets. In this paper, to solve the network training problem, we introduce residual learning into our convolutional-deconvolutional network architecture.
Our fully residual convolutional-deconvolutional network (res. conv-deconv net for short) is a modularized network that stacks residual blocks. Similarly to the plain convolutional block, a residual block consists of several convolutional layers with the same feature map size and the same number of filters. However, it performs the following calculation:

$$x_{l+1} = g\big(\mathcal{F}(x_l, W_l) + h(x_l)\big).$$

Here, $x_l$ indicates the feature maps that are fed into the $l$-th residual block and satisfies $x_0 = I$, where $I$ is the input remote sensing image. $W_l = \{W_{l,k} \mid 1 \le k \le K\}$ represents a collection of weights associated with the $l$-th residual block, and $K$ denotes the number of convolutional layers in a residual block. Moreover, $\mathcal{F}$ is the residual function and is generally realized by a few stacked convolutional layers. The function $g$ is an activation that works after element-wise addition. The function $h$ is fixed to an identity mapping: $h(x_l) = x_l$.
In contrast, a plain convolutional block performs the following computation:

$$x_{l+1} = g\big(\mathcal{F}(x_l, W_l)\big).$$

Fig. 6 illustrates the difference between the plain convolutional block and the residual block, which is that the latter learns a residual instead of a complete mapping. After introducing the idea of residual learning to the network architecture, the network clearly becomes easier to optimize (cf. Fig. 5).
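The contrast between the two block types can be sketched in a few lines of numpy (a toy with dense layers standing in for convolutions, not the actual network code). With zero weights the residual branch vanishes and the residual block reduces to the identity, while the plain block collapses to zero: one intuition for why the residual form is easier to optimize.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """x_{l+1} = ReLU(F(x_l) + x_l): residual branch F (two toy weight
    layers standing in for convolutions) plus the identity shortcut."""
    f = relu(x @ w1) @ w2
    return relu(f + x)

def plain_block(x, w1, w2):
    """The plain counterpart, x_{l+1} = ReLU(F(x_l)), with no shortcut."""
    return relu(relu(x @ w1) @ w2)

x = np.array([[1.0, 2.0]])
w = np.zeros((2, 2))  # degenerate weights: residual branch contributes nothing
```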
II-C Res. Conv-Deconv Net with Skip
For the problem we consider, the input high-resolution remote sensing image and output high-resolution height map differ in surface appearance, but both are renderings of the same underlying structure. Structure in the remote sensing image, therefore, should be roughly aligned with structure in the height map.
In the res. conv-deconv net discussed above, the input image is passed through a stack of residual blocks that progressively downsample, until a bottleneck layer, at which point the process is reversed. Such a network architecture requires that all information flow pass through all the layers, including the bottleneck. For our problem, there is a great deal of low-level visual information, e.g., edges, shared between the input remote sensing image and the output height map, and it would be desirable to shuttle such low-level information directly across the network. Unfortunately, due to the bottleneck layer, this cannot be achieved in the current version of the network, with the result that object boundaries tend to be blurred (see Fig. 5). Some prior works try to resolve this problem by combining the coarse or blurred semantic segmentation map with a superpixel segmentation of the input image to restore accurate object edges. However, such a strategy cannot be integrated into a network trained end-to-end.
Recently, several studies [36, 37] that attempt to reveal what is learned by CNNs show that deeper layers make use of filters to grasp global high-level information while shallower layers capture local low-level details. As reported in these studies, the first layers of CNNs tend to detect object boundaries and edges, and such information is exactly what is missing in our network. To give the network a means to circumvent the bottleneck for the low-level visual information, we add a skip connection between the first residual block and the next-to-last block to build a new network; the skip connection simply concatenates all feature maps at the next-to-last block with those at the first block. With this strategy, the low-level information can be directly propagated without passing through any weight layers, which implies that the information containing object boundaries and edges will not vanish in the deconvolutional sub-network. Fig. 5 compares the network with skip connection (cf. Fig. 7) against the network in Fig. 2 in terms of height map quality.
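A minimal sketch of this concatenation (illustrative numpy code with a channels-first layout and hypothetical channel counts, not the paper's configuration):

```python
import numpy as np

def concat_skip(early_feats, late_feats):
    """Channel-wise concatenation (channels-first, C x H x W): low-level
    feature maps from the first residual block are stacked onto the
    high-level maps at the next-to-last block."""
    assert early_feats.shape[1:] == late_feats.shape[1:], "spatial sizes must match"
    return np.concatenate([late_feats, early_feats], axis=0)

early = np.ones((16, 8, 8))   # low-level features (edges, boundaries)
late = np.zeros((64, 8, 8))   # high-level features near the output
fused = concat_skip(early, late)
```

Because concatenation has no weights, the low-level maps reach the final block unchanged; the subsequent convolutional block decides how to fuse the two sources.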
III-A Data and Error Metrics
We conduct our experiments on an aerial image over the area of Berlin, Germany. The DSM and the corresponding orthorectified aerial image were reconstructed using semi-global matching. The data used in this research are resampled to about 70 cm ground spacing. Results on these data are validated using one part of the whole image for training and the remainder for testing (see Fig. 9 for the test area).
To evaluate the performance of different approaches for height estimation from a single monocular image, we adopt the following error metrics:
Mean squared error (MSE): $\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$, where $\hat{y}_i$ is the predicted height, $y_i$ the ground truth height, and $N$ the number of pixels.

Mean absolute error (MAE): $\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}|\hat{y}_i - y_i|$.
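The two error metrics amount to one-line numpy reductions; a small sketch for reference:

```python
import numpy as np

def mse(pred, gt):
    """Mean squared error over all pixels."""
    return float(np.mean((pred - gt) ** 2))

def mae(pred, gt):
    """Mean absolute error over all pixels."""
    return float(np.mean(np.abs(pred - gt)))

# toy vectors: one pixel off by 2 out of three pixels
pred = np.array([1.0, 2.0, 4.0])
gt = np.array([1.0, 2.0, 2.0])
```

MSE penalizes large height errors quadratically, while MAE weights all errors linearly; this difference motivates the loss choice discussed in the training section.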
Moreover, in order to evaluate the estimated height map quality, another metric, the structural similarity (SSIM) index, is also used in the experiment, which is given by

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images $x$ and $y$, and $C_1$, $C_2$ are small stabilizing constants. SSIM is capable of comparing local patterns of pixel intensities that have been normalized for luminance and contrast.
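A single-window SSIM computation can be sketched directly from its definition (the constants below are illustrative placeholders, not the values used in the experiments; a full SSIM averages this quantity over sliding local windows):

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM; c1 and c2 are small stabilizing constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.linspace(0.0, 1.0, 64).reshape(8, 8)
```

An image compared with itself scores 1, and structurally dissimilar images score lower, which is why SSIM complements the purely pixel-wise MSE/MAE.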
III-B Training Details
The network training is based on the Theano framework. We choose Nesterov Adam [40, 41] as the optimization algorithm to train our model from scratch, since for our task it shows much faster convergence than standard stochastic gradient descent with momentum or Adam. We fix almost all parameters of Nesterov Adam as recommended: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and a schedule decay of 0.004, but make use of a relatively small learning rate of 0.00002. All weights in the network are initialized with a Glorot normal initializer that draws samples from a truncated normal distribution centered on zero. A standard loss function for optimization in regression problems is the $\ell_2$ loss, minimizing the squared Euclidean norm between predictions and ground truth. Although this loss performs well in our experiments, we find that using the $\ell_1$ loss yields slightly better results. Hence, we use MAE as the loss function in our network.
The network is trained on RGB images to predict the corresponding height maps. The training set has only 840 unique images, so we use data augmentation to increase the number of training samples. The remote sensing images and corresponding DSM data are transformed using 1) rotation of input and target by 90 degrees; and 2) horizontal and vertical flipping of half of the images. By doing so, the number of training samples increases to 8,568. To monitor overfitting during training, we randomly select 10% of the training samples as the validation set, i.e., splitting the training set into 7,711 training and 857 validation pairs. In addition, we use an extremely small mini-batch of 1 image pair because, in a sense, every pixel is a training sample. Our network has 3,976,907 trainable parameters as well as 4,864 non-trainable parameters. Training a network of this size to a good generalization error is very hard; we therefore make use of an early stopping strategy. We train our network on a single NVIDIA GeForce GTX TITAN with 12 GB of GPU memory, and training takes about two hours.
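The key point of the augmentation is that the image and its DSM target must undergo the same geometric transform so the pair stays pixel-aligned. A minimal sketch (illustrative numpy code on a grayscale toy, not the training pipeline itself):

```python
import numpy as np

def augment(image, dsm, rng):
    """Apply the same random geometric transform to the image and its DSM
    target: 90-degree rotation and horizontal/vertical flips, as described
    above, each applied with probability 0.5."""
    if rng.random() < 0.5:
        image, dsm = np.rot90(image), np.rot90(dsm)
    if rng.random() < 0.5:
        image, dsm = np.fliplr(image), np.fliplr(dsm)
    if rng.random() < 0.5:
        image, dsm = np.flipud(image), np.flipud(dsm)
    return image, dsm

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
aug_img, aug_dsm = augment(img, img.copy(), rng)
```

Starting image and target identical, any combination of transforms keeps them identical, which is exactly the alignment property required.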
TABLE I
Method                    | MSE     | MAE     | SSIM
res. conv-deconv net      | 3.1e-03 | 2.7e-02 | 0.8060
net with skip connection  | 7.8e-04 | 1.7e-02 | 0.9366
III-C Evaluation on Height Maps
To demonstrate the effectiveness of the proposed network, we conduct experimental comparisons against several other networks. The approaches included in the comparison are listed as follows:
Eigen-Net: The network proposed in [18]. Note that the output height maps of this network are only 1/4-resolution of their original input images. We, therefore, generate the desired full-resolution outputs using bilinear interpolation.
In Fig. 8 we qualitatively compare the accuracy of the estimated height maps using the proposed approach (res. conv-deconv net with skip connection) with that of the different variants (plain conv-deconv net with skip connection and res. conv-deconv net) as well as with the Eigen-Net on different land use scenes such as dense residential, vegetative cover, park, commercial zone, river, railway station, and playground. One can clearly see an improvement in quality from left to right. Compared to the Eigen-Net and the res. conv-deconv net, the proposed model greatly improves the quality of object edges, boundaries, and structure definition in the estimated height maps; allowing low-level visual information to shortcut across the network leads to better results.
In addition, it can be seen from Fig. 8 that the plain conv-deconv net with skip connection is unable to learn to produce realistic height maps in our experiments, and indeed collapses to generating nearly identical results for each input remote sensing image. Even when techniques such as Dropout and regularization on the network weights are adopted during the training phase, we still cannot learn a valid network model. This indicates that when both the res. conv-deconv net and the plain conv-deconv net are trained on our task, the former achieves superior results, whereas the latter is quite difficult to optimize. Moreover, Fig. 8 shows that, to a certain extent, Eigen-Net is able to learn some structures in the estimated height maps, but its results are quite blurry. In contrast, our height predictions exhibit noteworthy visual quality, even though they are derived by a single network trained end-to-end, without any additional post-processing steps.
We do not have any fully connected layers in our network, which allows the network to take remote sensing images of arbitrary size as input. Fig. 9 shows height estimation results on the large-scale Berlin test zone. As shown in this figure, the proposed network with skip connection can obtain a height map with very good low-level visual details (e.g., object boundaries) instead of a blurry one as the res. conv-deconv net does. Table I lists the quantitative statistics on the test data set. Specifically, the res. conv-deconv net with skip connection improves MSE by 0.0023, MAE by 0.01, and SSIM by 0.1306, respectively. Furthermore, to better understand the performance of the networks, we divide the whole test image into 392 unique patches, and Fig. 10 shows the performance of the res. conv-deconv net and the net with skip connection on these 392 scenes. We can see that introducing the skip connection to the res. conv-deconv net strikingly helps with a large majority of scenes. However, the proposed network still does poorly on some complex scenes such as tall buildings. Thus, even if we could get good object boundaries and edges, estimating many of the tall buildings would remain quite challenging. Based on Fig. 10, we select several failure cases, as shown in Fig. 11. In addition, the output of our model can be used to generate novel 3D views of the scene from a single monocular image (see Fig. 12).
III-D Application to Instance Segmentation of Buildings
To complement the previous results, in this section we show a practical case, instance segmentation of buildings, to demonstrate the usefulness of height estimation from a single monocular image. The goal of instance segmentation is to identify individual instances of buildings at the pixel level in an image. In general, most instance segmentation approaches rely on strong supervision for training, i.e., pixel-level segmentation masks. However, in comparison with convenient image-level labels, collecting pixel-wise segmentation ground truth is much more expensive. In particular, for the instance segmentation task, labeling a considerable number of pixel-level segmentation masks usually requires a large amount of human effort as well as financial expense.
In this practical case, we give a different perspective to achieve instance segmentation of buildings, i.e., utilizing the estimated height map generated by the proposed res. conv-deconv net with skip connection. Specifically, we deploy an instance segmentation framework where structures elevated above the ground level (i.e., buildings and trees) are first extracted by setting a threshold on the predicted height map, and then trees are filtered out using a vegetation index (VI). Finally, post-processing steps including removing small areas and filling holes are performed. We wish to point out that, to the best of our knowledge, this is the first demonstration of instance segmentation of buildings based on height estimation from a single monocular image. The result on the Berlin test scene is shown in Fig. 13, and we also provide two close zooms of the instance segmentation map (cf. Fig. 14). As can be seen, the segmentation result is satisfactory, especially considering the fact that our approach does not rely on any pixel-wise labeled masks or supervised training.
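The pipeline above can be sketched in plain numpy: threshold the height map, suppress vegetation by VI, label 4-connected components, and discard small regions. The thresholds and the BFS-based labeling below are illustrative assumptions, not the paper's actual values or implementation (which may use standard morphological tools):

```python
import numpy as np
from collections import deque

def extract_instances(height, veg_index, h_thresh=2.0, vi_thresh=0.3, min_area=5):
    """Threshold the predicted height map, drop vegetated pixels by VI,
    then label 4-connected components and remove regions below min_area."""
    mask = (height > h_thresh) & (veg_index < vi_thresh)  # elevated, non-vegetated
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # already assigned to an instance
        current += 1
        queue, region = deque([start]), [start]
        labels[start] = current
        while queue:  # breadth-first flood fill over 4-neighbors
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = current
                    queue.append((ni, nj))
                    region.append((ni, nj))
        if len(region) < min_area:  # post-processing: remove small areas
            for p in region:
                labels[p] = 0
    return labels

# toy height map: two "buildings" and one isolated noisy pixel
heights = np.zeros((6, 6))
heights[0:2, 0:2] = 5.0
heights[4:6, 3:6] = 6.0
heights[3, 0] = 5.0
veg = np.zeros((6, 6))
inst = extract_instances(heights, veg, min_area=2)
```

Hole filling, the remaining post-processing step from the text, is omitted here for brevity.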
In this paper we propose a fully residual convolutional-deconvolutional network to deal with a novel problem in the remote sensing community, i.e., single-view height prediction. In particular, the proposed network consists of two parts, namely a convolutional sub-network and a deconvolutional sub-network, which are responsible for transforming an input high-resolution remote sensing image into abstract multidimensional feature representations and for generating the height map, respectively. During the experiments, however, we found that due to the bottleneck of the network, such an architecture easily produces blurred object boundaries. To address this problem, we refine the network architecture by adding a skip connection between the first residual block and the next-to-last block, which makes it possible to shuttle low-level information, e.g., object boundaries, directly across the network. In addition, in the experimental section we show an application to instance segmentation of buildings to demonstrate the usefulness of the height map predicted from a single monocular image. In the future, further experiments and studies will focus on how to improve the accuracy for high-rise buildings.
We thank H. Hirschmüller of Institute of Robotics and Mechatronics of DLR for providing the data used in this research.
-  M. Volpi and D. Tuia, “Dense semantic labeling of subdecimeter resolution images with convolutional neural networks,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 881–893, 2017.
-  N. Audebert, B. L. Saux, and S. Lefèvre, “Fusion of heterogeneous data in convolutional networks for urban semantic labeling,” in Joint Urban Remote Sensing Event (JURSE), 2017.
-  M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. L. Saux, A. Beaupère, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, M. Shimoni, G. Moser, and D. Tuia, “Processing of extremely high-resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest-Part A: 2-D Contest,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 12, pp. 5547–5559, 2016.
-  D. Marmanis, J. D. Wegner, S. Galliani, K. Schindler, M. Datcu, and U. Stilla, “Semantic segmentation of aerial images with an ensemble of CNNs,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016.
-  R. Qin, X. Huang, A. Gruen, and G. Schmitt, “Object-based 3-D building change detection on multitemporal stereo images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 5, pp. 2125–2137, 2015.
-  R. Qin, J. Tian, and P. Reinartz, “3D change detection - approaches and applications,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 122, pp. 41–56, 2016.
-  A. Saxena, S. H. Chung, and A. Y. Ng, “Learning depth from single monocular images,” in Advances in Neural Information Processing Systems (NIPS), 2005.
-  A. Saxena, M. Sun, and A. Y. Ng, “Make3D: Learning 3D scene structure from a single still image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 824–840, 2009.
-  D. Hoiem, A. A. Efros, and M. Hebert, “Automatic photo pop-up,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 577–584, 2005.
-  B. Liu, S. Gould, and D. Koller, “Single image depth estimation from predicted semantic labels,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
-  L. Ladicky, J. Shi, and M. Pollefeys, “Pulling things out of perspective,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
-  J. Konrad, M. Wang, and P. Ishwar, “2D-to-3D image conversion by learning depth from examples,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2012.
-  M. Liu, M. Salzmann, and X. He, “Discrete-continuous depth estimation from a single image,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in IEEE International Conference on Computer Vision (ICCV), 2015.
-  C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
-  W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (NIPS), 2014.
-  D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in IEEE International Conference on Computer Vision (ICCV), 2015.
-  F. Liu, C. Shen, and G. Lin, “Learning depth from single monocular images using deep convolutional neural fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 10, pp. 2024–2039, 2016.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGBD images,” in European Conference on Computer Vision (ECCV), 2012.
-  A. Saxena, S. H. Chung, and A. Y. Ng, “3-D depth reconstruction from a single still image,” International Journal of Computer Vision, vol. 76, no. 1, pp. 53–59, 2008.
-  I. Bülthoff, H. Bülthoff, and P. Sinha, “Top-down influences on stereoscopic depth-perception,” Nature Neuroscience, vol. 1, no. 3, pp. 254–257, 1998.
-  S. Srivastava, M. Volpi, and D. Tuia, “Joint height estimation and semantic labeling of monocular aerial images with CNNs,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision (ECCV), 2016.
-  V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, DOI: 10.1109/TPAMI.2016.2644615.
-  A. Dosovitskiy, J. Springenberg, and T. Brox, “Learning to generate chairs with convolutional neural networks,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  A. Dosovitskiy, P. Fischer, J. Springenberg, M. Riedmiller, and T. Brox, “Discriminative unsupervised feature learning with exemplar convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1734–1747, 2016.
-  R. Goroshin, M. Mathieu, and Y. LeCun, “Learning to linearize under uncertainty,” in Advances in Neural Information Processing Systems (NIPS), 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik, “Simultaneous detection and segmentation,” in European Conference on Computer Vision (ECCV), 2014.
-  L. Mou and X. X. Zhu, “Spatiotemporal scene interpretation of space videos via deep neural network and tracklet analysis,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2016.
-  M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (ECCV), 2014.
-  A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  H. Hirschmüller, “Stereo processing by semiglobal matching and mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328–341, 2008.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
-  T. Dozat, “Incorporating Nesterov momentum into Adam,” http://cs229.stanford.edu/proj2015/054_report.pdf, online.
-  I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” in International Conference on Machine Learning (ICML), 2013.
-  Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
-  D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.
-  X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.