Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise Labeling

12/14/2016 ∙ by Spyros Gidaris, et al. ∙ École nationale des Ponts et Chaussées

Pixel-wise image labeling is an interesting and challenging problem with great significance in the computer vision community. In order for a dense labeling algorithm to achieve accurate and precise results, it has to consider the dependencies that exist in the joint space of both the input and the output variables. An implicit approach for modeling those dependencies is to train a deep neural network that, given as input the image and an initial estimate of the output labels, predicts a new refined estimate for the labels. In this context, our work is concerned with finding the optimal architecture for performing this label improvement task. We argue that the prior approaches of either directly predicting new label estimates or predicting residual corrections w.r.t. the initial labels with feed-forward deep network architectures are sub-optimal. Instead, we propose a generic architecture that decomposes the label improvement task into three steps: 1) detecting which of the initial label estimates are incorrect, 2) replacing the incorrect labels with new ones, and finally 3) refining the renewed labels by predicting residual corrections w.r.t. them. Furthermore, we explore and compare various alternative architectures that consist of the aforementioned Detect, Replace, and Refine components. We extensively evaluate the examined architectures on the challenging task of dense disparity estimation (stereo matching) and report both quantitative and qualitative results on three different datasets. Finally, our dense disparity estimation network, which implements the proposed generic architecture, achieves state-of-the-art results on the KITTI 2015 test set, surpassing prior approaches by a significant margin.


1 Introduction

Dense image labeling is a problem of paramount importance in the computer vision community as it encompasses many low- or high-level vision tasks, including stereo matching [40], optical flow [12], surface normal estimation [5], and semantic segmentation [20], to mention a few characteristic examples. In all these cases the goal is to assign a discrete or continuous value to each pixel in the image. Due to its importance, there is a vast amount of work on this problem. Recent methods can be roughly divided into three main classes of approaches.

The first class focuses on developing independent patch classifiers/regressors [34, 32, 33, 20, 7, 23, 27] that directly predict the pixel label given as input an image patch centered on it or, in cases like stereo matching and optical flow, are used for comparing patches between different images in order to pick pairs of best matching pixels [21, 39, 40, 41]. Deep convolutional neural networks (DCNNs) [18] have demonstrated excellent performance in the aforementioned tasks, thanks to their ability to learn complex image representations by harnessing vast amounts of training data [16, 35, 10]. However, despite their great representational power, simply applying DCNNs to image patches does not capture the structure of the output labels, which is an important aspect of dense image labeling tasks. For instance, independent feed-forward DCNN patch predictors do not take into consideration the correlations that exist between nearby pixel labels. In addition, feed-forward DCNNs have the extra disadvantages that they usually involve multiple consecutive down-sampling operations (i.e., max-pooling or strided convolutions) and that their top-most convolutional layers do not capture factors such as image edges or other fine image structures. Both of these properties may prevent such methods from achieving precise and accurate results in dense image labeling tasks.

Another class of methods tries to model the joint dependencies of both the input and output variables by use of probabilistic graphical models such as Conditional Random Fields (CRFs) [17]. In CRFs, the dense image labeling task is performed through maximum a posteriori (MAP) inference in a graphical model that incorporates prior knowledge about the nature of the task at hand with pairwise edge potentials between the graph nodes of the label variables. For example, in the case of semantic segmentation, those pairwise potentials enforce label consistency among similar or spatially adjacent pixels. Thanks to their ability to jointly model the input-output variables, CRFs have been extensively used in pixel-wise image labeling tasks [15, 28]. Recently, a number of methods have attempted to combine them with the representational power of DCNNs by getting the former (CRFs) to refine and disambiguate the predictions of the latter [30, 2, 42, 3]. Particularly, in semantic segmentation, DeepLab [2] uses a fully connected CRF to post-process the pixel-wise predictions of a convolutional neural network, while CRF-RNN [42] unifies the training of both the DCNN and the CRF by formulating the approximate mean-field inference of fully connected CRFs as Recurrent Neural Networks (RNNs). However, a major drawback of most CRF-based approaches is that the pairwise potentials have to be carefully hand-designed in order to incorporate simple human assumptions about the structure of the output labels and at the same time to allow for tractable inference.

A third class of methods relies on a more data-driven approach for learning the joint space of both the input and the output variables. More specifically, in this case a deep neural network gets as input an initial estimate of the output labels and (optionally) the input image and it is trained to predict a new refined estimate for the labels, thus being implicitly enforced to learn the joint space of both the input and the output variables. The network can learn either to predict new estimates for all pixel labels  (transform-based approaches) [38, 9, 19], or alternatively, to predict residual corrections w.r.t. the initial label estimates (residual-based approaches) [1]. We will hereafter refer to these methods as deep joint input-output models. These are, loosely speaking, related to the CRF models in the sense that the deep neural network is enforced to learn the joint dependencies of both the input image and output labels, but with the advantage of being less constrained about the complexity of the input-output dependencies that it can capture.

Figure 1: In this figure we visualize two different types of erroneously labeled image regions. On the left-hand side are the ground truth labels and on the right-hand side are some initial label estimates. With the red rectangle we indicate a dense concentration of “hard” mistakes in the initial labels that is very difficult to correct with a residual refinement component. Instead, the most suitable action for such a region is to replace its labels by predicting entirely new ones. In contrast, the blue ellipse indicates an image region with “soft” label mistakes. Such image regions are easier to handle with a residual refinement component.

Our work belongs to this last category of dense image labeling approaches and is thus not constrained in the complexity of the input-output dependencies that it can capture. However, here we argue that prior approaches in this category use a sub-optimal strategy. For instance, the transform-based approaches (which always learn to predict new label estimates) often have to learn something more difficult than necessary, since they must frequently learn to simply operate as identity transforms in the case of correct initial labels, yielding the same label at their output. On the other hand, for the residual-based approaches it is easier to learn to predict zero residuals in the case of correct initial labels, but it is more difficult for them to refine “hard” mistakes that deviate a lot from the initial labels (see Figure 1). Due to the above reasons, in our work we propose a deep joint input-output model that decomposes the label estimation/refinement process into a sequence of the following easier-to-execute operations: 1) detection of errors in the input labels, 2) replacement of the erroneous labels with new ones, and finally 3) an overall refinement of all output labels in the form of residual corrections. Each of the described operations in our framework is executed by a different component implemented with a deep neural network. Moreover, those components are embedded in a unified architecture that is fully differentiable, thus allowing end-to-end learning of the dense image labeling task by applying the objective function only on the final output. As a result, we are also able to explore a variety of novel deep network architectures by considering different ways of combining the above components, including the possibility of performing the above operations iteratively, as is done in [19], thus enabling our model to correct even large regions of incorrect labels. It is also worth noting that the error detection component in the proposed architecture, by being forced to detect the erroneous pixel labels (given both the input and the initial estimates of the output labels), implicitly learns the joint structure of the input-output space, which is an important requirement for the successful application of any type of structured prediction model.

To summarize, our contributions are as follows:

  • We propose a deep structured prediction framework for the dense image labeling task, which we call Detect, Replace, Refine, that relies on three main building blocks: 1) recognizing errors in the input label maps, 2) replacing the erroneous labels, and 3) performing a final refinement of the output label map. We show that all of the aforementioned steps can be embedded in a unified deep neural network architecture that is end-to-end trainable.

  • In the context of the above framework, we also explore a variety of other network architectures for deep joint input-output models that result from utilizing different combinations of the above building blocks.

  • We implemented and evaluated our framework on the disparity prediction task (stereo matching) and we provide both qualitative and quantitative evidence about the advantages of the proposed approach.

  • We show that our disparity estimation model implementing the proposed Detect, Replace, Refine architecture achieves state-of-the-art results on the KITTI 2015 test set, outperforming all prior published work by a significant margin.

The remainder of the paper is structured as follows: We first describe our structured dense label prediction framework in §2 and its implementation w.r.t. the dense disparity estimation task (stereo matching) in §3. Then, we provide experimental results in §4 and we finally conclude the paper in §5.

2 Methodology

Let X be the input image, where we denote by x_i its pixels, and let Y be some initial label estimates for this image, where y_i is the label of the i-th pixel. (Here, for simplicity, we consider images defined on a 2D domain, but our framework can be readily applied to images defined on any domain.) Our dense image labeling methodology belongs to the broader category of approaches built around a deep joint input-output model F(X, Y) that, given as input the image X and the initial labels Y, learns to predict new, more accurate labels Y′. Note that in this setting the initial labels Y could come from another model that depends only on the image X. Also, in the general case, the pixel labels can be of either discrete or continuous nature. In this work, however, we focus on the continuous case, where a greater variety of architectures can be explored.

The crucial question is what is the most effective way of implementing the deep joint input-output model F(X, Y). The two most common approaches in the literature involve a feed-forward deep convolutional neural network F_cnn that either directly predicts new labels, Y′ = F_cnn(X, Y), or predicts the residual correction w.r.t. the input labels, Y′ = Y + F_cnn(X, Y). We argue that both of them are sub-optimal solutions for implementing the model F(X, Y). Instead, in our work we opt for a decomposition of the task of the model F(X, Y) (i.e., predicting new, more accurate labels Y′) into three different sub-tasks that are executed in sequence.

In the remainder of this section, we first describe the proposed architecture in §2.1, then we discuss the intuition behind it and its advantages in §2.2, and finally we describe other alternative architectures that we explored in  §2.3.

2.1 Detect, Replace, Refine architecture

Figure 2: In this figure we demonstrate the generic architecture that we propose for the dense image labeling task. In this architecture the task of the deep joint input-output model is decomposed into three different sub-tasks: 1) detection of the erroneous initial labels (based on an estimated error map E), 2) replacement of the erroneous labels with new ones (leading to a renewed label map U), and then 3) refinement of the renewed label map. The illustrated example comes from the dense disparity labeling task (stereo matching).

The generic dense image labeling architecture that we propose decomposes the task of the deep joint input-output model into three sub-tasks, each of them handled by a different learnable network component (see Figure 2). Those network components are: the error detection component F_e, the label replacement component F_u, and the label refinement component F_r. The sub-tasks that they perform are:

Detect:

The first sub-task in our generic pipeline is to detect the erroneously labeled pixels of Y by discovering which pixel labels are inconsistent with the remaining labels of Y and the input image X. This sub-task is performed by the error detection component F_e, which yields a probability map E = F_e(X, Y) of the same size as the input labels Y and with high probabilities for the “hard” mistakes in Y. These mistakes should ideally be forgotten and replaced with entirely new label values in the processing step that follows (see Figures 3a, 3b, and 3c). As we will see below, the topology of our generic architecture allows the error detection component to learn its assigned task (i.e., detecting the incorrect pixel labels) without explicitly being trained for it, e.g., through the use of an auxiliary loss. The error detection function F_e can be implemented with any deep (or shallow) neural network, with the only constraint being that its output map E must take values in the range [0, 1].

Replace:

In the second sub-task, a new label field U is produced by the convex combination of the initial label field Y and the output of the label replacement component F_u: U = E ⊙ F_u(X, Y, E) + (1 − E) ⊙ Y, where ⊙ denotes element-wise multiplication (see Figures 3e and 3f). We observe that the error probabilities E generated by the error detection component now act as gates that control which pixel labels of Y will be forgotten and replaced by the outputs of F_u, namely all pixel labels that are assigned a high probability of being incorrect. In this context, the task of the Replace component F_u is to replace the erroneous pixel labels with new ones that are in accordance both with the input image X and with the non-erroneous labels of Y. Note that for this task the Replace component also gets as input the error probability map E. The reason for doing this is to help the Replace component focus its attention only on those image regions whose labels need to be replaced. The F_u component can be implemented by any neural network whose output has the same size as the input labels Y.

Refine:

The purpose of the erroneous label detection and label replacement steps so far was to perform a crude “fix” of the “hard” mistakes in the label map Y. In contrast, the purpose of the current step is to perform a final refinement of the entire output label map U produced by the previous steps, in the form of residual corrections: Y′ = U + F_r(X, U) (see Figures 3g and 3h). Intuitively, the purpose of this step is to correct the “soft” mistakes of the label map U and to better align the output labels with the fine structures in the image X. The Refine component F_r can be implemented by any neural network whose output has the same size as the input labels U.

The above three steps can be applied for more than one iteration which, as we will see later, allows our generic framework to recover a good estimate of the ground truth labels or, in the worst case, to yield more plausible results even when the initial labels are severely corrupted (see Figure 10 in the experiments section §4.3.6).

To summarize, the workings of our generic dense labeling architecture can be concisely described by the iterative application of the following three equations:

E = F_e(X, Y)                            (1)
U = E ⊙ F_u(X, Y, E) + (1 − E) ⊙ Y       (2)
Y′ = U + F_r(X, U)                       (3)

We observe that the above generic architecture is fully differentiable as long as the function components F_e, F_u, and F_r are also differentiable. Due to this fact, the overall proposed architecture is end-to-end learnable by directly applying an objective function (e.g., the Absolute Difference or Mean Square Error loss) on the final output label maps Y′.
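To make the data flow concrete, the following is a minimal sketch of the above three equations in PyTorch-style Python. The module and argument names (f_detect, f_replace, f_refine) are our own placeholders, not the paper's code; any networks with compatible input/output shapes could be plugged in.

```python
import torch
import torch.nn as nn

class DetectReplaceRefine(nn.Module):
    """Sketch of the Detect + Replace + Refine forward pass (equations 1-3).
    The three sub-networks are placeholders; the paper implements them with
    deeper convolutional architectures (see Section 3.2)."""

    def __init__(self, f_detect: nn.Module, f_replace: nn.Module, f_refine: nn.Module):
        super().__init__()
        self.f_detect = f_detect    # F_e: must end in a sigmoid so that E is in [0, 1]
        self.f_replace = f_replace  # F_u: predicts entirely new labels
        self.f_refine = f_refine    # F_r: predicts residual corrections

    def forward(self, x, y, iterations=1):
        for _ in range(iterations):
            e = self.f_detect(torch.cat([x, y], dim=1))          # eq. (1)
            u = (e * self.f_replace(torch.cat([x, y, e], dim=1))
                 + (1.0 - e) * y)                                # eq. (2)
            y = u + self.f_refine(torch.cat([x, u], dim=1))      # eq. (3)
        return y
```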

2.2 Discussion

Role of the Detection component F_e and its synergy with the Replace component F_u: The error detection component F_e is a key element in our generic architecture and its purpose is to indicate which image regions have incorrect labels. This type of information is exploited in the next step of label replacement in two ways. Firstly, the Replace component F_u, which gets as input the error map E generated by F_e, is able to know which image regions have labels that need to be replaced and is thus able to focus its attention only on those image regions. At this point note that, in equation (2), the error maps E, apart from being given as input attention maps to the Replace component F_u, also act as gates that control which way the information will flow, both during the forward propagation and during the backward propagation. Specifically, during forward propagation, in the cases where the error map probabilities are either 0 or 1, it holds that:

u_i = y_i,             if e_i = 0
u_i = F_u(X, Y, E)_i,  if e_i = 1      (4)

which basically means that the Replace component is utilized mainly for the erroneously labelled image regions. Also, during backward propagation, it is easy to see that the gradients of the Replace function w.r.t. the loss L (in the cases where the error probabilities are either 0 or 1) are:

∂L / ∂F_u(X, Y, E)_i = 0,           if e_i = 0
∂L / ∂F_u(X, Y, E)_i = ∂L / ∂u_i,   if e_i = 1      (5)

which means that gradients are back-propagated through the Replace component only for the erroneously labelled image regions. So, in a nutshell, during the learning procedure the Replace component is explicitly trained to predict new values mainly for the erroneously labelled image regions. The second advantage of giving the error maps E as input to the Replace component F_u is that this allows the Replace component to know which image regions contain “trusted” labels that can be used as a source of information on how to fill the erroneously labelled regions.
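The gating behaviour described above is easy to verify numerically. Below is a toy check, with hand-made tensors standing in for the component outputs, showing that the convex combination of equation (2) both copies labels and routes gradients according to E:

```python
import torch

y = torch.tensor([1.0, 2.0, 3.0, 4.0])                        # initial labels
f_u = torch.tensor([9.0, 9.0, 9.0, 9.0], requires_grad=True)  # Replace output
e = torch.tensor([0.0, 1.0, 0.0, 1.0])                        # hard 0/1 error map

u = e * f_u + (1.0 - e) * y   # convex combination of equation (2)
u.sum().backward()

print(u)         # tensor([1., 9., 3., 9.]) -> labels replaced only where e_i = 1
print(f_u.grad)  # tensor([0., 1., 0., 1.]) -> gradients reach F_u only where e_i = 1
```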

Estimated error probability maps by the Detection component F_e: Thanks to the topology of our generic architecture, by optimizing for the reconstruction of the ground truth labels, the error detection component F_e implicitly learns to act as a joint probability model for patches of X and Y centered on each pixel of the input image, assigning a high probability of error to patches that do not appear to belong to the joint input-output space (X, Y). In Figures 3c and 3d we visualize the error maps E estimated by the Detection component and the corresponding ground truth error maps in the context of the disparity estimation task (more visualizations are provided in Figure 6). It is interesting to note that the estimated error probability maps are very similar to the ground truth error maps, despite the fact that we do not explicitly enforce this behaviour, e.g., through the use of an auxiliary loss.


(a) Image
(b) Initial labels
(c) Predicted error map E
(d) Ground truth errors
(e) F_u predictions
(f) Renewed labels U
(g) F_r residuals
(h) Final labels Y′
Figure 3: Here we provide an example that illustrates the functions performed by the Detect, Replace, and Refine steps in our proposed architecture. The example comes from the dense disparity labeling task (stereo matching). Specifically, subfigures (a), (b), and (c) depict respectively the input image X, the initial disparity label estimates Y, and the error probability map E that the detection component yields for the initial labels Y. Notice the high similarity of the map E with the ground truth error map of the initial labels depicted in subfigure (d), where the ground truth error map has been computed by thresholding the absolute difference of the initial labels from the ground truth labels (red are the erroneous pixel labels). In subfigure (e) we depict the label predictions of the Replace component F_u. For visualization purposes we only depict the pixel predictions that will replace the initial labels that are incorrect (according to the detection component), drawing the remaining ones (i.e., those with low error probability) in black. In subfigure (f) we depict the renewed labels U. In subfigure (g) we depict the residual corrections F_r(X, U) that the Refine component yields for the renewed labels U. Finally, in the last subfigure (h) we depict the final label estimates Y′ that the Refine step yields.

Error detection component and Highway Networks: Note that the way the Detection component F_e and the Replace component F_u interact bears some resemblance to the basic building blocks of Highway Networks [36], which are utilized for training extremely deep neural network architectures. Briefly, each highway building block gets as input some hidden feature maps and predicts transform gates that control which feature values will be carried on to the next layer as-is and which will be transformed by a non-linear function. There are, however, some important differences. For instance, in our case the error gate prediction and the label replacement steps are executed in sequence, with the latter getting as input the output of the former. Instead, in Highway Networks the gate prediction and the non-linear transform of the input feature maps are performed in parallel. Furthermore, in Highway Networks the components of each building block are implemented by simple affine transforms followed by non-linearities, and the purpose is to stack multiple building blocks one on top of the other in order to learn extremely deep image representations. In contrast, the components of our generic architecture are themselves deep neural networks, and the purpose is to learn to reconstruct the ground truth labels given the input labels Y.

Two-stage refinement approach: Another key element of our architecture is that the step of predicting new, more accurate labels Y′, given the initial labels Y, is broken into two stages. The first stage is handled by the error detection component F_e and the label replacement component F_u. Their job is to correct only the “hard” mistakes of the input labels Y. They are not meant to correct “soft” mistakes (i.e., errors of small magnitude in the label values). In order to learn to correct those “soft” mistakes, it is more appropriate to use a component that yields residual corrections w.r.t. its input. This is the purpose of the Refine component F_r in the second stage of our architecture, from which we expect it to improve the “details” of the output labels by better aligning them with the fine structures of the input image. This separation of roles between the first and the second refinement stages (i.e., coarse refinement followed by fine-detail refinement) has the potential advantage, which is exploited in our work, of allowing the actions of the first stage to be performed at a lower resolution, thus speeding up processing and reducing the memory footprint of the network. Also, the end-to-end training procedure allows the components of the first stage (i.e., F_e and F_u) to make mistakes, as long as those are corrected by the second stage. This aspect of our architecture has the advantage that each component can exploit its available capacity more efficiently.

2.3 Explored architectures

In order to evaluate the proposed architecture we also devised and tested various other architectures that consist of the same core components as the ones we propose. In total, the architectures explored in our work are:

Detect + Replace + Refine architecture: This is the architecture that we proposed in section 2.1.

Replace baseline architecture: In this case the model directly replaces the old labels with new ones: Y′ = F_u(X, Y).

Refine baseline architecture: In this case the model predicts residual corrections w.r.t. the input labels: Y′ = Y + F_r(X, Y).

Replace + Refine architecture: Here the model first replaces the entire label map with new values, U = F_u(X, Y), and then residual corrections are predicted w.r.t. the updated values: Y′ = U + F_r(X, U).

Detect + Replace architecture: Here the model first detects errors in the input label map, E = F_e(X, Y), and then replaces those erroneous pixel labels: Y′ = E ⊙ F_u(X, Y, E) + (1 − E) ⊙ Y.

Detect + Refine architecture: In this case, after the detection of the errors, E = F_e(X, Y), the erroneous pixel labels are masked out by setting them to the mean label value μ: U = (1 − E) ⊙ Y + E · μ. Then the masked label maps are given as input to a residual refinement model: Y′ = U + F_r(X, U). Note that this architecture can also be considered a specific instance of the general Detect + Replace + Refine architecture in which the Replace component does not have any learnable parameters and constantly returns the mean label value, i.e., F_u(·) ≡ μ (a code sketch of this variant is given at the end of this list).

Parallel architecture: Here, after the detection of the errors, the erroneous labels are replaced by the Replace component F_u while the rest of the labels are refined by the Refine component F_r. More specifically, the operations performed by this architecture are described by the following equations:

E = F_e(X, Y)                            (6)
U = F_u(X, Y, E)                         (7)
Y′ = E ⊙ U + (1 − E) ⊙ (Y + F_r(X, Y))   (8)

Basically, in this architecture the components F_u and F_r are applied in parallel, instead of in the sequential topology chosen in the Detect + Replace + Refine architecture.

Detect + Replace + Refine xT: This is basically the Detect + Replace + Refine architecture but applied iteratively for T iterations. Note that the model implementing this architecture is trained in a multi-iteration manner.

X-Blind Detect + Replace + Refine architecture: This is a “blind” w.r.t. the image X version of the Detect + Replace + Refine architecture. Specifically, the “X-Blind” architecture is exactly the same as the proposed Detect + Replace + Refine architecture, with the only difference being that it gets as input only the initial labels Y and not the image X (i.e., none of the F_e, F_u, and F_r components depends on the image X). Hence, the model implemented by the “X-Blind” architecture must learn to reconstruct the ground truth labels by only “seeing” a corrupted version of them.
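Since all of the above variants share the same three components and differ only in how the label fields are composed, they can be sketched compactly. The two functions below illustrate the Detect + Refine and the Parallel variants under the same placeholder conventions as before (f_detect, f_replace, f_refine are stand-in networks; y_mean is the precomputed mean label value):

```python
import torch

def detect_refine(f_detect, f_refine, x, y, y_mean):
    """Detect + Refine: detected errors are masked to the mean label value,
    i.e. the Replace component is the constant function F_u = y_mean."""
    e = f_detect(torch.cat([x, y], dim=1))
    u = (1.0 - e) * y + e * y_mean
    return u + f_refine(torch.cat([x, u], dim=1))

def parallel(f_detect, f_replace, f_refine, x, y):
    """Parallel: Replace and Refine are applied side by side (equations 6-8)
    and the error map selects between their outputs."""
    e = f_detect(torch.cat([x, y], dim=1))
    u = f_replace(torch.cat([x, y, e], dim=1))
    return e * u + (1.0 - e) * (y + f_refine(torch.cat([x, y], dim=1)))
```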

3 Detect, Replace, Refine for disparity estimation

In order to evaluate the proposed dense image labeling architecture, as well as the other alternative architectures explored in our work, we use the dense disparity estimation (stereo matching) task, in which, given a left and a right image, one needs to assign to each pixel of the left image a continuous label that indicates its horizontal displacement in the right image (disparity). Such a task forms a very interesting and challenging testbed for the evaluation of dense labeling algorithms, since it requires dealing with several challenges such as accurately preserving disparity discontinuities across object boundaries, dealing with occlusions, and recovering the fine details of disparity maps. At the same time it has many practical applications in various autonomous driving and robot navigation or grasping tasks.

3.1 Initial disparities

Generating the initial disparity field: In all the examined architectures, in order to generate the initial disparity labels Y we used the deep patch matching approach that was proposed by W. Luo et al. [21]. We then train our models to reconstruct the ground truth labels given as input only the left image X and the initial disparity labels Y. We would like to stress that the right image of the stereo pair is not provided to our models. This practically means that the trained models cannot rely only on the image evidence for performing the dense disparity labelling task, since disparity prediction from a single image is an ill-posed problem; instead, they have to learn the joint space of both input and output labels in order to perform the task.

Image & disparity field normalization: Before we feed an image and its initial disparity field to any of the examined architectures, we normalize them to zero mean and unit variance (i.e., mean subtraction and division by the standard deviation). The mean and standard deviation values of the RGB colors and disparity labels are computed on the entire training set. The disparity target labels are also normalized with the same mean and standard deviation values, and during inference the normalization is inverted on the disparity fields predicted by the examined architectures.
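As a sketch, the normalization and its inversion look as follows; the function and argument names are our own, and the statistics are assumed to have been computed over the training set as described above:

```python
def normalize(image, disparity, rgb_mean, rgb_std, disp_mean, disp_std):
    """Zero-mean / unit-variance normalization of the network inputs."""
    image = (image - rgb_mean) / rgb_std            # per-channel RGB statistics
    disparity = (disparity - disp_mean) / disp_std  # dataset-wide disparity statistics
    return image, disparity

def denormalize_disparity(prediction, disp_mean, disp_std):
    """Invert the normalization on a predicted disparity field at inference."""
    return prediction * disp_std + disp_mean
```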

3.2 Deep neural network architectures

Each component of our generic architecture can be implemented by a deep neural network. For our disparity estimation experiments we chose the following implementations:

Error detection component: It is implemented by 5 convolutional layers, of which the last one yields the error probability map E. All the convolutional layers, apart from the last one, are followed by batch normalization [13] plus ReLU [22] units; the last convolutional layer is instead followed by a sigmoid unit. The first two convolutions are followed by max-pooling layers of kernel size 2 that in total reduce the input resolution by a factor of 4. To compensate, a bi-linear up-sampling layer is placed on top of the last convolution layer so that the output probability map has the same resolution as the input image. The number of output feature planes increases over the first convolutional layers, with the final layer yielding a single probability plane.
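A PyTorch sketch of this component follows. The exact feature-plane counts are not reproduced above, so the widths below are illustrative assumptions; the layer pattern (5 convolutions, batch normalization + ReLU on all but the last, two stride-2 max-poolings, a final sigmoid, and a x4 bi-linear up-sampling) follows the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

def conv_bn_relu(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class ErrorDetection(nn.Module):
    """Sketch of the detection component F_e. Input: image X concatenated
    with the initial labels Y (3 RGB + 1 disparity = 4 planes)."""

    def __init__(self, widths=(32, 64, 128, 256)):  # illustrative widths
        super().__init__()
        w1, w2, w3, w4 = widths
        self.features = nn.Sequential(
            conv_bn_relu(4, w1), nn.MaxPool2d(2),   # -> 1/2 resolution
            conv_bn_relu(w1, w2), nn.MaxPool2d(2),  # -> 1/4 resolution
            conv_bn_relu(w2, w3),
            conv_bn_relu(w3, w4),
            nn.Conv2d(w4, 1, kernel_size=3, padding=1),  # single error plane
        )

    def forward(self, xy):
        e = torch.sigmoid(self.features(xy))
        # restore the input resolution lost by the two max-poolings
        return nnf.interpolate(e, scale_factor=4, mode='bilinear',
                               align_corners=False)
```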

Replace component: It is implemented with a convolutional architecture that first “compresses” the resolution of the feature maps down to 1/64 of the input resolution and then “decompresses” it to 1/4 of the input resolution. For its implementation we follow the guidelines of A. Newell et al. [26], which are to use residual blocks [10] on each layer and parametrized (by residual blocks) skip connections between the symmetric layers in the “compressing” and the “decompressing” parts of the architecture. The “compressing” part of the architecture uses max-pooling layers with kernel size 2 to down-sample the resolution, while the “decompressing” part uses nearest-neighbor up-sampling (by a factor of 2). We refer to A. Newell et al. [26] for more details. In our case, the “compressing” part contains in total 6 down-sampling convolutional blocks and the “decompressing” part 4 up-sampling convolutional blocks. Each time the resolution is down-sampled, the number of feature planes is increased by a factor of 2; for GPU memory efficiency reasons, we cap the number of output feature planes that any layer may have. During the “decompressing” part, each time we up-sample the resolution we also decrease the number of feature planes by a factor of 2. The last convolution layer yields a single feature plane with the new disparity labels (without any non-linearity). As already explained, during the “decompressing” part the resolution is increased only up to 1/4 of the input resolution. The reason for stopping the “decompression” early is that the Replace component only needs to perform crude “fixes” of the initial labels, so further “decompression” steps are not necessary. Before the disparity labels are fed to the next processing steps, bi-linear up-sampling by a factor of 4 (without any learnable parameters) is used to restore the resolution to that of the input.
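The following sketch shows the kind of hourglass building blocks this describes: a residual block, and one compress/decompress level with a parametrized skip connection, in the spirit of [26]. It is a symmetric simplification (the actual Replace component is asymmetric, with 6 down-sampling and 4 up-sampling blocks) and the widths are again assumptions:

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Basic residual block [10] used throughout the hourglass components."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class HourglassLevel(nn.Module):
    """One compress/decompress level: max-pool down, recurse into an inner
    module, nearest-neighbor up-sample, and merge with a parametrized
    (residual) skip branch. Feature planes double when the resolution halves."""
    def __init__(self, c, inner):
        super().__init__()
        self.skip = Residual(c, c)
        self.down = nn.Sequential(nn.MaxPool2d(2), Residual(c, 2 * c))
        self.inner = inner  # nested HourglassLevel(2 * c, ...) or Residual(2 * c, 2 * c)
        self.up = nn.Sequential(Residual(2 * c, c),
                                nn.Upsample(scale_factor=2, mode='nearest'))

    def forward(self, x):
        return self.skip(x) + self.up(self.inner(self.down(x)))
```

Nesting such levels (e.g., three levels around a Residual bottleneck) gives the kind of multi-resolution “compression”-“decompression” topology described above.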

Refine component: It follows the same architecture as the Replace component, with the exception that its “decompressing” part restores the resolution all the way to that of the input image.

Alternative architectures: In cases where the alternative architectures have missing components, the number of layers and/or the number of feature planes per layer of the remaining components is increased such that the total capacity (i.e., the number of learnable parameters) remains the same. For the architectures that include only the Replace or the Refine component (i.e., the Replace, Refine, Detect + Replace, and Detect + Refine architectures), the “compression”-“decompression” architecture of this component “compresses” the resolution down to a fixed fraction of the input resolution and then “decompresses” it back to the same resolution as the input image.

Weight initialization: In order to initialize the weights of each convolutional layer we use the initialization scheme proposed by K. He et al. [11].

3.3 Training details

We trained the networks with a regression objective on the final output labels, optimizing with the Adam [14] method. The learning rate was decreased twice during training, the first time after 20 epochs, after which we continued optimizing for several more epochs. Each epoch consisted of a fixed number of batch iterations, with each batch made up of multiple training samples. Each training sample consists of patches with 4 channels (3 RGB color channels + 1 initial disparity label channel). The patches are generated by randomly cropping, with uniform distribution, an image and its corresponding initial disparity labels.

Augmentation: During training we used horizontal flip augmentation and chromatic transformations (color, contrast, and brightness changes).
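A sketch of such augmentation on a training sample (left image plus initial and ground truth disparity maps, as numpy arrays of shape H x W x 3 and H x W) might look as follows; the jitter ranges are illustrative, not the paper's values:

```python
import numpy as np

def augment(image, init_disp, gt_disp, rng=np.random):
    """Random horizontal flip plus simple chromatic jitter."""
    if rng.rand() < 0.5:
        # flip all fields jointly so the image and disparities stay aligned
        image = image[:, ::-1].copy()
        init_disp = init_disp[:, ::-1].copy()
        gt_disp = gt_disp[:, ::-1].copy()
    image = image * rng.uniform(0.8, 1.2)                   # global brightness scale
    image = image + rng.uniform(-0.1, 0.1, size=(1, 1, 3))  # per-channel color shift
    return image, init_disp, gt_disp
```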

4 Experimental results

In this section we present an exhaustive experimental evaluation of the proposed architecture as well as of the other explored architectures in the task of dense disparity estimation. Specifically, we first describe the evaluation settings used in our experiments (section 4.1), then we report detailed quantitative results w.r.t. the examined architectures (section 4.2), and finally we provide qualitative results of the proposed Detect, Replace, Refine architecture and all of its components, trying in this way to more clearly illustrate their role (section 4.3).

4.1 Experimental settings

Training set: In order to train the explored architectures we used the large-scale synthetic dataset for disparity estimation that was recently introduced by N. Mayer et al. [23]. We call this dataset the Synthetic dataset. It consists of three different types of synthetic image sequences and includes a large number of stereo images. We also enriched this training set with images from the training set of the KITTI 2015 dataset [24, 25]. (The entire training set of KITTI 2015 includes 200 images; we split those into 160 images used for training purposes and 40 images used for validation purposes.)

Evaluation sets: We evaluated our architectures on three different datasets: on 2000 images from the test split of the Synthetic dataset, on the 40 validation images coming from the KITTI 2015 training dataset, and on 15 images from the training set of the Middlebury dataset [29]. Prior to evaluating the explored architectures on the KITTI 2015 validation set, we fine-tuned the models that implement them on the 160 images of the KITTI 2015 training split, reducing the learning rate twice over the course of fine-tuning.

Evaluation metrics: For evaluation we used the end-point error (EPE), which is the average euclidean distance between the estimated and the ground truth disparity, and the percentage of disparity estimates whose absolute difference from the ground truth disparity is more than t pixels (> t pixel error, for t = 2, 3, 4, 5). These metrics are reported for the non-occluded pixels (Non-Occ), all the pixels (All), and only the occluded pixels (Occ).
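Both metrics are straightforward to compute; a sketch for a single image (with valid_mask selecting the pixel set of interest, e.g. non-occluded, all, or occluded pixels) is:

```python
import numpy as np

def disparity_metrics(pred, gt, valid_mask, thresholds=(2, 3, 4, 5)):
    """End-point error and >t-pixel error rates over the selected pixels."""
    err = np.abs(pred - gt)[valid_mask]  # for 1-D disparities the euclidean
                                         # distance reduces to |pred - gt|
    metrics = {'EPE': float(err.mean())}
    for t in thresholds:
        metrics['>%dpx (%%)' % t] = 100.0 * float((err > t).mean())
    return metrics
```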

4.2 Quantitative results

4.2.1 Disparity estimation performance

Architectures | >2 pixel (All) | >3 pixel (All) | >4 pixel (All) | >5 pixel (All) | EPE (All)
Initial labels 24.3175 22.9004 21.9140 21.1680 12.0218
Single-iteration results
Replace (baseline) 12.8007 10.4512 8.8966 7.7467 2.4456
Refine (baseline) 14.5996 12.2246 10.3046 8.7873 2.1235
Replace + Refine 11.1152 9.1821 7.8430 6.8550 2.2356
Detect + Replace 11.6970 9.2419 7.6812 6.6018 2.1504
Detect + Refine 10.5309 8.5565 7.2154 6.2186 1.8210
Parallel 11.0146 8.9261 7.5029 6.4742 2.0241
Detect + Replace + Refine 9.5981 7.9764 6.7895 5.9074 1.8569
Multi-iteration results
Detect + Replace + Refine x2 8.8411 7.2187 6.0987 5.2853 1.6899
Table 1: Stereo matching results on the Synthetic dataset.
Architectures | >2 pixel (Non-Occ / All / Occ) | >3 pixel (Non-Occ / All / Occ) | >4 pixel (Non-Occ / All / Occ) | >5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ)
Initial labels 18.243 26.714 86.125 15.664 23.986 82.330 14.208 22.282 78.758 13.237 21.044 75.579 6.058 8.709 25.598
Single-iteration results
Replace (baseline) 15.767 21.089 57.197 12.323 16.793 46.303 10.312 14.020 37.922 9.032 12.147 31.770 2.731 3.221 5.818
Refine (baseline) 13.981 19.742 58.039 11.110 16.042 47.732 9.266 13.406 39.218 7.889 11.392 32.467 1.953 2.551 5.665
Replace + Refine 14.262 19.257 52.036 11.297 15.701 43.905 9.552 13.459 37.910 8.408 11.891 33.125 2.292 2.908 6.216
Detect + Replace 15.368 20.984 58.745 11.243 16.169 48.568 8.957 13.176 40.663 7.571 11.179 34.482 2.013 2.676 6.462
Detect + Refine 13.732 19.375 56.383 10.718 15.552 46.281 8.893 12.975 38.197 7.600 11.012 31.478 2.105 2.626 5.389
Parallel 14.917 20.345 57.459 11.363 15.907 46.221 9.234 12.941 37.218 7.840 10.940 30.854 2.012 2.552 5.607
Detect + Replace + Refine 12.845 17.825 50.407 10.096 14.379 41.704 8.285 11.957 34.801 7.057 10.253 29.560 1.774 2.368 5.457
Multi-iteration results
Detect + Replace + Refine x2 11.529 16.414 47.922 8.757 12.874 37.977 6.997 10.482 30.634 5.911 8.916 25.514 1.789 2.321 4.971
Table 2: Stereo matching results on Middlebury.
Architectures | >2 pixel (Non-Occ / All / Occ) | >3 pixel (Non-Occ / All / Occ) | >4 pixel (Non-Occ / All / Occ) | >5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ)
Initial labels 8.831 10.649 98.098 6.412 8.253 96.559 5.222 7.059 94.742 4.514 6.339 93.139 1.700 2.457 31.214
Single-iteration results
Replace (Baseline) 4.997 5.668 37.327 3.329 3.888 27.890 2.452 2.892 19.643 1.924 2.292 15.226 0.858 0.923 3.165
Refine (Baseline) 4.429 5.165 33.028 3.075 3.714 25.107 2.370 2.924 19.610 1.933 2.404 15.978 0.867 0.953 3.384
Replace + Refine 3.963 4.529 27.411 2.712 3.209 21.465 2.082 2.507 16.481 1.735 2.098 13.611 0.802 0.865 2.859
Detect + Replace 5.126 5.751 35.554 3.469 4.005 27.656 2.517 2.953 20.519 1.911 2.269 15.947 0.886 0.943 3.108
Detect + Refine 4.482 5.169 34.992 3.054 3.634 26.453 2.328 2.799 19.004 1.865 2.258 14.686 0.863 0.926 2.952
Parallel 5.239 5.952 38.392 3.530 4.139 29.436 2.522 3.017 21.208 1.943 2.338 15.748 0.904 0.962 3.095
Detect + Replace + Refine 3.919 4.610 33.947 2.708 3.294 25.697 2.082 2.570 19.123 1.699 2.112 15.140 0.790 0.858 3.056
Multi-iteration results
Detect + Replace + Refine x2 3.685 4.277 28.164 2.577 3.075 20.762 2.001 2.424 16.086 1.652 2.004 13.056 0.779 0.835 2.723
Table 3: Stereo matching results on KITTI 2015 validation set.

In Tables 1, 2, and 3 we report the stereo matching performance of the examined architectures on the Synthetic, Middlebury, and KITTI 2015 evaluation sets respectively.

Single-iteration results: We first evaluate all the examined architectures when they are applied for a single iteration. We observe that all of them are able to improve the initial label estimates Y; however, not all of them achieve this with the same success. For instance, the baseline models Replace and Refine tend to be less accurate than the remaining models. Compared to them, the Detect + Replace and the Detect + Refine architectures perform considerably better on two out of three datasets, the Synthetic and the Middlebury datasets. This improvement can only be attributed to the error detection step, which is what distinguishes them from the baselines, and it indicates the importance of having an error detection component in the dense labelling task. Overall, the best single-iteration performance is achieved by the Detect + Replace + Refine architecture that we propose in this paper, which combines the merits of both the error detection component and the two-stage refinement strategy. Compared to it, the Parallel architecture has considerably worse performance, which indicates that the sequential order in the proposed architecture is important for achieving accurate results.

Multi-iteration results: We also evaluated our best performing architecture, the proposed Detect + Replace + Refine architecture, in the multiple-iteration case. Specifically, the last entry Detect + Replace + Refine x2 in Tables 1, 2, and 3 gives the results of the proposed architecture for 2 iterations, and we observe that it further improves the performance w.r.t. the single-iteration case. For more than 2 iterations we did not see any further improvement, and for this reason we chose not to include those results. Note that in order to train this two-iteration model, we first pre-train the single-iteration version and then fine-tune the two-iteration version by adding the disparity labels generated by the first iteration to the training set.

4.2.2 Label prediction accuracy vs. initial label quality

(a) Error threshold 2 pixels
(b) Error threshold 3 pixels
(c) Error threshold 4 pixels
(d) Error threshold 5 pixels
Figure 4: Percentage of erroneously estimated disparity labels for a pixel p as a function of the percentage of erroneous initial disparity labels in the square patch centered on the pixel of interest p. An estimated pixel label is considered erroneous if its absolute difference from the ground truth label exceeds a fixed threshold. For the initial disparity labels in each patch, the threshold for considering them incorrect is set to (a) 2 pixels, (b) 3 pixels, (c) 4 pixels, and (d) 5 pixels. The evaluation is performed on images of the Synthetic test set.

In Figure 4 we evaluate the ability of each architecture to predict the correct disparity label for each pixel as a function of the “quality” of the initial disparity labels in a neighborhood of that pixel. To that end, we plot for each architecture the percentage of erroneously estimated disparity labels as a function of the percentage of erroneous initial disparity labels that exist in the square patch centered on the pixel of interest p. An estimated label for the pixel p is considered erroneous if its absolute difference from the ground truth label exceeds a fixed threshold. For the initial disparity labels in the patch centered on p, the threshold for considering them incorrect is set to 2 (Fig. 4.a), 3 (Fig. 4.b), 4 (Fig. 4.c), or 5 (Fig. 4.d) pixels. We make the following observations, which are most clearly illustrated by sub-figures 4.c and 4.d (a sketch of the underlying computation is given after the list below):

  • In the case of the Replace and Refine architectures, when the percentage of erroneous initial labels is low, the Refine architecture (which predicts residual corrections) is considerably more accurate than the Replace architecture (which directly predicts new label values). However, when the percentage of erroneous initial labels is high, the Replace architecture is more accurate than the Refine one. This observation supports our argument that residual corrections are more suitable for “soft” mistakes in the initial labels, while predicting an entirely new label value is a better choice for the “hard” mistakes.

  • By introducing the error detection component, both the Refine and the Replace architectures manage to significantly improve their predictions. In the Detect + Refine case, the improvement is due to the fact that the error detection component sets the “hard” mistakes to the mean label value (see the description of the Detect + Refine architecture in §2.3), thus allowing the Refine component to ignore the values of the “hard” mistakes of the initial labels and instead make residual predictions w.r.t. the mean label value (this mean value is fixed and known in advance, so it is easier for the network to learn to make residual predictions w.r.t. it). In the case of the Detect + Replace architecture, the error detection component directs the Replace component to predict new label values for the incorrect initial labels while allowing the propagation of the correct ones to the output.

  • Finally, the best “label prediction accuracy vs. initial label quality” behavior is achieved by the proposed Detect + Replace + Refine architecture, which efficiently combines the error detection component with the two-stage label improvement approach. Interestingly, its improvement margin w.r.t. the remaining architectures increases as the quality of the initial labels decreases.
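The quantity on the horizontal axis of Figure 4, the fraction of erroneous initial labels around every pixel, can be computed with a simple box filter over the binary error map of the initial labels; a sketch, with the patch size left as a parameter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_error_fraction(init_disp, gt_disp, patch_size, threshold):
    """Per-pixel fraction of erroneous initial labels inside the square
    patch centered on that pixel (the quantity binned in Figure 4)."""
    errors = (np.abs(init_disp - gt_disp) > threshold).astype(np.float64)
    return uniform_filter(errors, size=patch_size, mode='nearest')
```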

4.2.3 KITTI 2015 test set results

Architectures | All / All (D1-bg, D1-fg, D1-all) | All / Est (D1-bg, D1-fg, D1-all) | Noc / All (D1-bg, D1-fg, D1-all) | Noc / Est (D1-bg, D1-fg, D1-all) | Runtime (secs)
Ours 2.58 6.04 3.16 2.58 6.04 3.16 2.34 4.87 2.76 2.34 4.87 2.76 0.4
DispNetC [23] 4.32 4.41 4.34 4.32 4.41 4.34 4.11 3.72 4.05 4.11 3.72 4.05 0.06
PBCB [31] 2.58 8.74 3.61 2.58 8.74 3.6 2.27 7.71 3.17 2.27 7.71 3.17 68
Displets v2 [8] 3.00 5.56 3.43 3.00 5.56 3.43 2.73 4.95 3.09 2.73 4.95 3.09 265
MC-CNN [41] 2.89 8.88 3.89 2.89 8.88 3.88 2.48 7.64 3.33 2.48 7.64 3.33 67
SPS-St [37] 3.84 12.67 5.31 3.84 12.67 5.31 3.50 11.61 4.84 3.50 11.61 4.84 2
MBM [6] 4.69 13.05 6.08 4.69 13.05 6.08 4.33 12.12 5.61 4.33 12.12 5.61 0.13
Table 4: Stereo matching results on KITTI 2015 test set.

We submitted our best solution, which is the proposed Detect + Replace + Refine architecture applied for two iterations, to the KITTI 2015 test set evaluation server and achieved state-of-the-art results in the main evaluation metric, D1-all, surpassing all prior work by a significant margin. The results of our submission, as well as of other competing methods, are reported in Table 4. (The link to our KITTI 2015 submission, which contains more thorough test set results, both qualitative and quantitative, is:
http://www.cvlibs.net/datasets/kitti/eval_scene_flow_detail.php?benchmark=stereo&result=365eacbf1effa761ed07aaa674a9b61c60fe9300 )
Note that our improvement w.r.t. the best prior approach corresponds to a significant relative reduction of the error rate. Our total execution time is 0.4 secs, of which around 0.37 secs is used by the patch matching algorithm for generating the initial disparity labels and the remaining 0.03 secs by our Detect + Replace + Refine x2 architecture (measured on a Titan X GPU). For this submission, after having trained the Detect + Replace + Refine x2 model on the training split (160 images), we further fine-tuned it on both the training and the validation splits (into which we divided the 200 images of the KITTI 2015 training dataset).

4.2.4 ”X-Blind” Detect + Replace + Refine architecture

(a) Image
(b) Initial labels
(c) Final labels
(d) Ground truth labels
Figure 5: Here we illustrate some examples of the disparity predictions produced by the ”X-Blind” architecture. The illustrated examples are from the Synthetic and the Middlebury datasets.
Architectures | >2 pixel (Non-Occ / All / Occ) | >3 pixel (Non-Occ / All / Occ) | >4 pixel (Non-Occ / All / Occ) | >5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ)
(For the Synthetic dataset, only the All-pixels values are reported.)
Synthetic dataset
Initial labels 24.3175 22.9004 21.9140 21.1680 12.0218
Detect + Replace + Refine 9.5981 7.9764 6.7895 5.9074 1.8569
”X-Blind” 16.0014 14.0196 12.5170 11.3758 3.8810
Middlebury dataset
Initial labels 18.243 26.714 86.125 15.664 23.986 82.330 14.208 22.282 78.758 13.237 21.044 75.579 6.058 8.709 25.598
Detect + Replace + Refine 12.845 17.825 50.407 10.096 14.379 41.704 8.285 11.957 34.801 7.057 10.253 29.560 1.774 2.368 5.457
”X-Blind” 16.845 22.037 57.324 14.038 18.562 48.356 12.212 16.217 41.941 10.914 14.509 37.022 2.878 3.656 7.945
KITTI 2015 dataset
Initial labels 8.831 10.649 98.098 6.412 8.253 96.559 5.222 7.059 94.742 4.514 6.339 93.139 1.700 2.457 31.214
Detect + Replace + Refine 3.919 4.610 33.947 2.708 3.294 25.697 2.082 2.570 19.123 1.699 2.112 15.140 0.790 0.858 3.056
”X-Blind” 5.040 5.602 32.575 3.671 4.135 24.566 2.722 3.099 18.069 2.191 2.505 14.359 0.910 0.966 2.997
Table 5: Stereo matching results for the ”X-Blind” architecture. We also include the corresponding results of the proposed Detect + Replace + Refine architecture to facilitate their comparison.

Here we evaluate the ”X-Blind” architecture that, as already explained, is exactly the same as the proposed Detect + Replace + Refine architecture, with the only difference being that it gets as input only the initial labels Y and not the image X. The purpose of evaluating such an architecture is not to examine a competitive variant of the main Detect + Replace + Refine architecture, but rather to explore the capabilities of the latter in such a scenario. In Table 5 we provide the stereo matching results of the ”X-Blind” architecture. We observe that it cannot compete with the original Detect + Replace + Refine architecture, but it can still significantly improve the initial disparity label estimates. In Figure 5 we illustrate some disparity prediction examples generated by the ”X-Blind” architecture. We observe that the ”X-Blind” architecture manages to considerably improve the quality of the initial disparity label estimates; however, since it does not have the image X to guide it, it is not able to accurately reconstruct the disparity field on the borders of the objects.

4.3 Qualitative results

This section includes qualitative examples that help illustrating the role of the various components of our proposed architecture.

4.3.1 Error Detection step

(a) Image
(b) Initial labels
(c) Predicted error map
(d) Ground truth errors
Figure 6: Illustration of the error probability maps E that the error detection component yields. The ground truth error maps are computed by thresholding the absolute difference of the initial labels from the ground truth labels (red are the erroneous pixel labels). Note that in the case of the KITTI 2015 dataset, the available ground truth labels are sparse and do not cover the entire image (e.g., usually there is no annotation for the sky), which is why some obviously erroneous initial label estimates are not colored as incorrect (with red color) in the ground truth error maps.

In Figure 6 we provide additional examples of the error probability maps E that the error detection component generates w.r.t. the initial labels Y and compare them with the ground truth error maps of the initial labels. The ground truth error maps are computed by thresholding the absolute difference of the initial labels from the ground truth labels (red are the erroneous pixel labels in the figure); this is the criterion usually followed in the disparity task for deciding whether a pixel label is erroneous. We observe that, despite the fact that the error detection component is not explicitly trained to produce such ground truth error maps, its predictions still correlate highly with them. This implies that the error detection component seems to have learned to recognize the areas that look abnormal/atypical with respect to the joint input-output space, i.e., it has learned the “structure” of that space.
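For reference, the ground truth error maps used in these visualizations are just thresholded absolute differences; a sketch follows (the default threshold value is an assumption chosen to match common disparity evaluation practice, since the exact value is not reproduced here):

```python
import numpy as np

def ground_truth_error_map(init_disp, gt_disp, threshold=3.0):
    """Binary map marking initial labels that deviate from the ground truth
    by more than `threshold` pixels (drawn in red in Figures 3 and 6)."""
    return np.abs(init_disp - gt_disp) > threshold
```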

4.3.2 Replace step

In Figure 7 we provide several examples that more clearly illustrate the function performed by the Replace step in our proposed architecture. Specifically, in sub-figures 7a, 7b, and 7c we depict the input image X, the initial disparity label estimates Y, and the error probability map E that the detection component yields for the initial labels Y. In sub-figure 7d we depict the label predictions of the Replace component F_u. For visualization purposes we only depict the pixel predictions that will replace the initial labels that are incorrect (according to the detection component), drawing the remaining ones (i.e., those with low error probability) in black. Finally, in the last sub-figure 7e we depict the renewed labels U. We can readily observe that most of the “hard” mistakes of the initial labels have now been crudely “fixed” by the Replace component.

(a) Image
(b) Initial labels
(c) Error map
(d) F_u predictions
(e) Renewed labels
Figure 7: Here we provide more examples that illustrate the function performed by the Replace step in our proposed architecture. Specifically, sub-figures (a), (b), and (c) depict the input image X, the initial disparity label estimates Y, and the error probability map E that the detection component yields for the initial labels Y. In sub-figure (d) we depict the label predictions of the Replace component F_u. For visualization purposes we only depict the pixel predictions that will replace the initial labels that are incorrect (according to the detection component), drawing the remaining ones (i.e., those with low error probability) in black. Finally, in the last sub-figure (e) we depict the renewed labels U. We can readily observe that most of the “hard” mistakes of the initial labels have now been crudely “fixed” by the Replace component.

4.3.3 Refine step

In Figure 8 we provide several examples that more clearly illustrate the function performed by the Refine step in our proposed architecture. Specifically, in sub-figures 8a, 8b, and 8c we depict the input image X, the initial disparity label estimates Y, and the renewed labels U that the Replace step yields. In sub-figure 8d we depict the residual corrections F_r(X, U) that the Refine component yields for the renewed labels U. Finally, in the last sub-figure 8e we depict the final label estimates Y′ that the Refine step yields. We observe that most of the residual corrections that the Refine component yields are concentrated on the borders of the objects. Furthermore, by adding those residuals to the renewed labels U, the Refine step manages to refine the renewed labels and align the estimated labels with the fine image structures in X.

(a) Image
(b) Initial labels
(c) Renewed labels
(d) F_r residuals
(e) Final labels
Figure 8: Here we provide more examples that illustrate the function performed by the Refine step in our proposed architecture. Specifically, in sub-figures (a), (b), and (c) we depict the input image X, the initial disparity label estimates Y, and the renewed labels U that the Replace step yields. In sub-figure (d) we depict the residual corrections F_r(X, U) that the Refine component yields for the renewed labels U. Finally, in the last sub-figure (e) we depict the final label estimates Y′ that the Refine step yields.

4.3.4 Detect, Replace, Refine pipeline

In Figure 9 we illustrate the entire work-flow of the Detect + Replace + Refine architecture that we propose and we compare its predictions with the ground truth disparity labels.

(a) Image
(b) Initial labels
(c) Error map
(d) Renewed labels
(e) Final labels
(f) Ground truth
Figure 9: Illustration of the intermediate steps of the Detect + Replace + Refine work-flow. We observe that the final Refine component F_r, by predicting residual corrections, manages to refine the renewed labels U and align the output labels with the fine image structures in the image X. Note that in the case of the KITTI 2015 dataset, the available ground truth labels are sparse and do not cover the entire image.

4.3.5 Multi-iteration architecture

In Figure 10, we illustrate the estimated disparity labels after each iteration of our multi-iteration architecture Detect + Replace + Refine x2, which in our experiments achieved the most accurate results. We observe that the 2nd iteration further improves the fine details of the estimated disparity labels, delivering a higher-fidelity disparity field. Furthermore, applying the model for a 2nd iteration results in a disparity field that looks more “natural”, i.e., visually plausible.

(a) Image
(b) Initial labels
(c) 1st iteration labels
(d) 2nd iteration labels
(e) Ground truth labels
Figure 10: Illustration of the estimated labels at each iteration of the Detect + Replace + Refine x2 multi-iteration architecture. The visualized examples are zoomed-in patches from the Middlebury and the Synthetic datasets.

4.3.6 KITTI 2015 qualitative results

Figure 11: Qualitative results on the validation set of KITTI 2015. From left to right, we depict the left image X, the initial labels Y, the labels Y′ that our model estimates, and finally the errors of our estimates w.r.t. the ground truth.

We provide qualitative results from KITTI 2015 validation set in Figure 11. In order to generate them we used the Detect + Replace + Refine x2 architecture that gave the best quantitative results. We observe that our model is able to recover a good estimate of the actual disparity map even when the initial label estimates are severely corrupted.

5 Conclusions

In our work we explored a family of architectures that performs the structured prediction problem of dense image labeling by learning a deep joint input-output model that (iteratively) improves some initial estimates of the output labels. In this context our main focus was on what is the optimal architecture for implementing this deep model. We argued that the prior approaches of directly predicting the new labels with a feed-forward deep neural network are sub-optimal, and we proposed to decompose the label improvement step into three sub-tasks: 1) detection of the incorrect input labels, 2) their replacement with new labels, and 3) the overall refinement of the output labels in the form of residual corrections. All three steps are embedded in a unified architecture, which we call Detect + Replace + Refine, that is end-to-end trainable. We evaluated our architecture on the disparity estimation (stereo matching) task and we report state-of-the-art results on the KITTI 2015 test set.

6 Acknowledgements

This work was supported by the ANR SEMAPOLIS project and by a hardware donation from NVIDIA. We would like to thank Sergey Zagoruyko, Francisco Massa, and Shell Xu for their advice with respect to the Torch framework and for fruitful discussions.

References