Pixel-wise image labeling is an interesting and challenging problem of great significance in the computer vision community. In order for a dense labeling algorithm to achieve accurate and precise results, it has to consider the dependencies that exist in the joint space of both the input and the output variables. An implicit approach for modeling those dependencies is to train a deep neural network that, given as input an initial estimate of the output labels and the input image, is able to predict a new refined estimate for the labels. In this context, our work is concerned with what is the optimal architecture for performing the label improvement task. We argue that the prior approaches of either directly predicting new label estimates or predicting residual corrections w.r.t. the initial labels with feed-forward deep network architectures are sub-optimal. Instead, we propose a generic architecture that decomposes the label improvement task into three steps: 1) detecting which of the initial label estimates are incorrect, 2) replacing the incorrect labels with new ones, and finally 3) refining the renewed labels by predicting residual corrections w.r.t. them. Furthermore, we explore and compare various other alternative architectures that consist of the aforementioned Detection, Replace, and Refine components. We extensively evaluate the examined architectures on the challenging task of dense disparity estimation (stereo matching) and report both quantitative and qualitative results on three different datasets. Finally, our dense disparity estimation network that implements the proposed generic architecture achieves state-of-the-art results on the KITTI 2015 test set, surpassing prior approaches by a significant margin.
Dense image labeling is a problem of paramount importance in the computer vision community as it encompasses many low- or high-level vision tasks, including stereo matching, optical flow, surface normal estimation, and semantic segmentation, to mention a few characteristic examples. In all these cases the goal is to assign a discrete or continuous value to each pixel in the image. Due to its importance, there is a vast amount of work on this problem. Recent methods can be roughly divided into three main classes of approaches.
The first class focuses on developing independent patch classifiers/regressors [34, 32, 33, 20, 7, 23, 27] that directly predict the pixel label given as input an image patch centered on it or, in cases like stereo matching and optical flow, are used for comparing patches between different images in order to pick pairs of best-matching pixels [21, 39, 40, 41]. Deep convolutional neural networks (DCNNs) have demonstrated excellent performance in the aforementioned tasks thanks to their ability to learn complex image representations by harnessing vast amounts of training data [16, 35, 10]. However, despite their great representational power, simply applying DCNNs to image patches does not capture the structure of the output labels, which is an important aspect of dense image labeling tasks. For instance, independent feed-forward DCNN patch predictors do not take into consideration the correlations that exist between nearby pixel labels. In addition, feed-forward DCNNs have the extra disadvantages that they usually involve multiple consecutive down-sampling operations (i.e., max-pooling or strided convolutions) and that the topmost convolutional layers do not capture factors such as image edges or other fine image structures. Both of the above properties may prevent such methods from achieving precise and accurate results in dense image labeling tasks.
Another class of methods tries to model the joint dependencies of both the input and output variables through probabilistic graphical models such as Conditional Random Fields (CRFs). In CRFs, the dense image labeling task is performed through maximum a posteriori (MAP) inference in a graphical model that incorporates prior knowledge about the nature of the task at hand with pairwise edge potentials between the graph nodes of the label variables. For example, in the case of semantic segmentation, those pairwise potentials enforce label consistency among similar or spatially adjacent pixels. Thanks to their ability to jointly model the input-output variables, CRFs have been extensively used in pixel-wise image labelling tasks [15, 28]. Recently, a number of methods have attempted to combine them with the representational power of DCNNs by getting the former (CRFs) to refine and disambiguate the predictions of the latter [30, 2, 42, 3]. Particularly, in semantic segmentation, DeepLab uses a fully connected CRF to post-process the pixel-wise predictions of a convolutional neural network, while in CRF-RNN the training of both the DCNN and the CRF is unified by formulating the approximate mean-field inference of fully connected CRFs as Recurrent Neural Networks (RNNs). However, a major drawback of most CRF-based approaches is that the pairwise potentials have to be carefully hand-designed in order to incorporate simple human assumptions about the structure of the output labels and at the same time to allow for tractable inference.
A third class of methods relies on a more data-driven approach for learning the joint space of both the input and the output variables. More specifically, in this case a deep neural network gets as input an initial estimate of the output labels and (optionally) the input image, and it is trained to predict a new refined estimate for the labels, thus being implicitly enforced to learn the joint space of both the input and the output variables. The network can learn either to predict new estimates for all pixel labels (transform-based approaches) [38, 9, 19] or, alternatively, to predict residual corrections w.r.t. the initial label estimates (residual-based approaches). We will hereafter refer to these methods as deep joint input-output models. These are, loosely speaking, related to the CRF models in the sense that the deep neural network is enforced to learn the joint dependencies of both the input image and output labels, but with the advantage of being less constrained about the complexity of the input-output dependencies that it can capture.
Our work belongs to this last category of dense image labeling approaches, and thus it is not constrained in the complexity of the input-output dependencies that it can capture. However, here we argue that prior approaches in this category use a sub-optimal strategy. For instance, the transform-based approaches (which always learn to predict new label estimates) often have to learn something more difficult than necessary, since they must often simply learn to operate as identity transforms in the case of correct initial labels, yielding the same label in their output. On the other hand, for the residual-based approaches it is easier to learn to predict zero residuals in the case of correct initial labels, but it is more difficult for them to refine "hard" mistakes that deviate a lot from the initial labels (see figure 1). Due to the above reasons, in our work we propose a deep joint input-output model that decomposes the label estimation/refinement process into a sequence of the following easier-to-execute operations: 1) detection of errors in the input labels, 2) replacement of the erroneous labels with new ones, and finally 3) an overall refinement of all output labels in the form of residual corrections. Each of the described operations in our framework is executed by a different component implemented with a deep neural network. Moreover, those components are embedded in a unified architecture that is fully differentiable, thus allowing for end-to-end learning of the dense image labeling task by only applying the objective function on the final output. As a result, we are also able to explore a variety of novel deep network architectures by considering different ways of combining the above components, including the possibility of performing the above operations iteratively, thus enabling our model to correct even large regions of incorrect labels.
It is also worth noting that the error detection component in the proposed architecture, by being forced to detect the erroneous pixel labels (given both the input and the initial estimates of the output labels), implicitly learns the joint structure of the input-output space, which is an important requirement for the successful application of any type of structured prediction model.
To summarize, our contributions are as follows:
We propose a deep structured prediction framework for the dense image labeling task, which we call Detect, Replace, Refine, that relies on three main building blocks: 1) recognizing errors in the input label maps, 2) replacing the erroneous labels, and 3) performing a final refinement of the output label map. We show that all of the aforementioned steps can be embedded in a unified deep neural network architecture that is end-to-end trainable.
In the context of the above framework, we also explore a variety of other network architectures for deep joint input-output models that result from utilizing different combinations of the above building blocks.
We implemented and evaluated our framework on the disparity prediction task (stereo matching) and we provide both qualitative and quantitative evidence about the advantages of the proposed approach.
We show that our disparity estimation model that implements the proposed Detect, Replace, Refine architecture achieves state-of-the-art results on the KITTI 2015 test set, outperforming all prior published work by a significant margin.
The remainder of the paper is structured as follows: We first describe our structured dense label prediction framework in §2 and its implementation w.r.t. the dense disparity estimation task (stereo matching) in §3. Then, we provide experimental results in §4 and we finally conclude the paper in §5.
Let $X$ be the input image (here, for simplicity, we consider images defined on a 2D domain, but our framework can be readily applied to images defined on any domain) and $Y$ be some initial label estimates for this image, where $y_i$ is the label for the $i$-th pixel. Our dense image labeling methodology belongs to the broader category of approaches that consist of a deep joint input-output model $F$ that, given as input the image $X$ and the initial labels $Y$, learns to predict new, more accurate labels $Y'$. Note that in this setting the initial labels $Y$ could come from another model that depends only on the image $X$. Also, in the general case, the pixel labels can be of either discrete or continuous nature. In this work, however, we focus on the continuous case, where a greater variety of architectures can be explored.
The crucial question is what is the most effective way of implementing the deep joint input-output model $F$. The two most common approaches in the literature involve a feed-forward deep convolutional neural network that either directly predicts new labels, $Y' = F(X, Y)$, or predicts a residual correction w.r.t. the input labels, $Y' = Y + F(X, Y)$. We argue that both of them are sub-optimal solutions for implementing the model $F$. Instead, in our work we opt for a decomposition of the task of model $F$ (i.e., predicting new, more accurate labels $Y'$) into three different sub-tasks that are executed in sequence.
In the remainder of this section, we first describe the proposed architecture in §2.1, then we discuss the intuition behind it and its advantages in §2.2, and finally we describe other alternative architectures that we explored in §2.3.
The generic dense image labeling architecture that we propose decomposes the task of the deep joint input-output model into three sub-tasks, each of them handled by a different learnable network component (see Figure 2). Those network components are: the error detection component $F_e$, the label replacement component $F_r$, and the label refinement component $F_u$. The sub-tasks that they perform are:
The first sub-task in our generic pipeline is to detect the erroneously labeled pixels of $Y$ by discovering which pixel labels are inconsistent with the remaining labels of $Y$ and the input image $X$. This sub-task is performed by the error detection component, which needs to yield a probability map $E$ of the same size as the input labels that has high probabilities for the "hard" mistakes in $Y$. These mistakes should ideally be forgotten and replaced with entirely new label values in the processing step that follows (see Figures 3a, 3b, and 3c). As we will see below, the topology of our generic architecture allows the error detection component to learn its assigned task (i.e., detecting the incorrect pixel labels) without explicitly being trained for this, e.g., through the use of an auxiliary loss. The error detection function can be implemented with any deep (or shallow) neural network, with the only constraint being that its output map must take values in the range $[0, 1]$.
In the second sub-task, a new label field $U$ is produced by the convex combination of the initial label field $Y$ and the output of the label replacement component $F_r$: $U = E \odot F_r(X, Y, E) + (1 - E) \odot Y$ (see Figures 3e and 3f). We observe that the error probabilities $E$ generated by the error detection component now act as gates that control which pixel labels of $Y$ will be forgotten and replaced by the outputs of $F_r$, namely all pixel labels that are assigned a high probability of being incorrect. In this context, the task of the Replace component is to replace the erroneous pixel labels with new ones that are in accordance both with the input image $X$ and with the non-erroneous labels of $Y$. Note that for this task the Replace component also gets as input the error probability map $E$. The reason for doing this is to help the Replace component focus its attention only on those image regions whose labels need to be replaced. The component $F_r$ can be implemented by any neural network whose output has the same size as the input labels.
The purpose of the erroneous label detection and label replacement steps so far was to perform a crude "fix" of the "hard" mistakes in the label map. In contrast, the purpose of the current step is to do a final refinement of the entire output label map $U$, which is produced by the previous steps, in the form of residual corrections: $Y' = U + F_u(X, U)$ (see Figures 3g and 3h). Intuitively, the purpose of this step is to correct the "soft" mistakes of the label map and to better align the output labels with the fine structures in the image $X$. The Refine component $F_u$ can be implemented by any neural network whose output has the same size as the input labels.
The above three steps can be applied for more than one iteration which, as we will see later, allows our generic framework to recover a good estimate of the ground truth labels or, in the worst case, to yield more plausible results even when the initial labels are severely corrupted (see Figure 10 in the experiments section §4.3.6).
To summarize, the workings of our generic dense labeling architecture can be concisely described by the iterative application of the following three equations, where $Y_0$ is the initial label field and $t$ indexes the iterations:
$E_t = F_e(X, Y_{t-1})$,
$U_t = E_t \odot F_r(X, Y_{t-1}, E_t) + (1 - E_t) \odot Y_{t-1}$,
$Y_t = U_t + F_u(X, U_t)$.
We observe that the above generic architecture is fully differentiable as long as the function components $F_e$, $F_r$, and $F_u$ are also differentiable. Due to this fact, the overall proposed architecture is end-to-end learnable by directly applying an objective function (e.g., the absolute difference or mean squared error loss) on the final output label maps.
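As a toy illustration of one iteration of these three steps, the following sketch uses hypothetical stand-in functions (a median-based detector, a median filler, and a small residual nudge) in place of the learned Detect, Replace, and Refine networks:

```python
import numpy as np

def detect(x, y):
    # Toy error detector (stand-in for the learned Detect network):
    # flag labels that deviate strongly from the median label.
    return (np.abs(y - np.median(y)) > 2.0).astype(float)

def replace(x, y, e):
    # Toy replacement (stand-in for the Replace network): fill flagged
    # pixels with the median of the labels.
    return np.full_like(y, np.median(y))

def refine(x, u):
    # Toy residual refinement (stand-in for the Refine network): nudge
    # labels toward the image values.
    return 0.1 * (x - u)

def drr_iteration(x, y):
    e = detect(x, y)                        # error probability map
    u = e * replace(x, y, e) + (1 - e) * y  # gated convex combination
    return u + refine(x, u)                 # residual correction

x = np.ones((4, 4))
y = np.ones((4, 4))
y[0, 0] = 10.0                              # one "hard" outlier label
out = drr_iteration(x, y)                   # the outlier is repaired
```

Only the outlier pixel is gated out and replaced; the remaining labels pass through unchanged and receive (here, zero) residual corrections.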
Role of the Detection component and its synergy with the Replace component: The error detection component is a key element in our generic architecture and its purpose is to indicate which image regions have incorrect labels. This type of information is exploited in the next step of label replacement in two ways. Firstly, the Replace component, which gets as input the error map $E$ generated by the detection component, knows which image regions have labels that need to be replaced and is thus able to focus its attention only on those image regions. At this point note that, in equation 7, the error maps $E$, apart from being given as input attention maps to the Replace component, also act as gates that control which way the information will flow, both during the forward propagation and during the backward propagation. Specifically, during forward propagation, in the cases where the error map probabilities are either 0 or 1, it holds that $u_i = f_{r,i}$ wherever $e_i = 1$ and $u_i = y_i$ wherever $e_i = 0$ (with $f_{r,i}$ the $i$-th output of the Replace component), which basically means that the Replace component is being utilized mainly for the erroneously labelled image regions. Also, during backward propagation, it is easy to see that the gradients of the loss w.r.t. the Replace outputs (in the cases where the error probabilities are either 0 or 1) satisfy $\partial u_i / \partial f_{r,i} = e_i$,
which means that gradients are back-propagated through the Replace component only for the erroneously labelled image regions. So, in a nutshell, during the learning procedure the Replace component is explicitly trained to predict new values mainly for the erroneously labelled image regions. The second advantage of giving the error maps as input to the Replace component , is that this allows the Replace component to know which image regions contain “trusted” labels that can be used for providing information on how to fill the erroneously labelled regions.
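This gating of the gradients can be checked numerically: in the convex combination, the partial derivative of each combined label w.r.t. the corresponding Replace output equals the gate value itself. A small finite-difference sketch with toy values (no learned components involved):

```python
import numpy as np

def gated_update(r, y, e):
    # U = E*R + (1-E)*Y : the error map gates the Replace output.
    return e * r + (1 - e) * y

y = np.array([1.0, 1.0])
e = np.array([0.0, 1.0])      # only the second pixel is flagged
r = np.array([5.0, 5.0])      # Replace output (toy values)
eps = 1e-6

base = gated_update(r, y, e)
grads = []
for i in range(2):
    r2 = r.copy()
    r2[i] += eps
    # Finite-difference estimate of dU_i / dR_i.
    grads.append((gated_update(r2, y, e)[i] - base[i]) / eps)
```

The derivative is 0 at the unflagged pixel and 1 at the flagged one, so learning signal reaches the Replace component only where the detector fired.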
Estimated error probability maps by the Detection component: Thanks to the topology of our generic architecture, by optimizing for the reconstruction of the ground truth labels, the error detection component implicitly learns to act as a joint probability model for patches of the input and the labels centered on each pixel of the input image, assigning a high probability of error to patches that do not appear to belong to the joint input-output space. In Figures 3c and 3d we visualize the error maps estimated by the Detection component and the ground truth error maps in the context of the disparity estimation task (more visualizations are provided in Figure 6). It is interesting to note that the estimated error probability maps are very similar to the ground truth error maps, despite the fact that we are not explicitly enforcing this behaviour, e.g., through the use of an auxiliary loss.
Error detection component and Highway Networks: Note that the way the Detection and Replace components interact bears some resemblance to the basic building blocks of Highway Networks, which are utilized for training extremely deep neural network architectures. Briefly, each highway building block gets as input some hidden feature maps and then predicts transform gates that control which feature values will be carried on to the next layer as is and which will be transformed by a non-linear function. There are, however, some important differences. For instance, in our case the error gate prediction and the label replacement steps are executed in sequence, with the latter getting as input the output of the former. Instead, in Highway Networks the gate prediction and the non-linear transform of the input feature maps are performed in parallel. Furthermore, in Highway Networks the components of each building block are implemented by simple affine transforms followed by non-linearities, and the purpose is to stack multiple building blocks one on top of the other in order to learn extremely deep image representations. In contrast, the components of our generic architecture are themselves deep neural networks, and the purpose is to learn to reconstruct the input labels.
Two-stage refinement approach: Another key element in our architecture is that the step of predicting new, more accurate labels, given the initial labels, is broken into two stages. The first stage is handled by the error detection component and the label replacement component. Their job is to correct only the "hard" mistakes of the input labels. They are not meant to correct "soft" mistakes (i.e., errors in the label values of small magnitude). In order to learn to correct those "soft" mistakes, it is more appropriate to use a component that yields residual corrections w.r.t. its input. This is the purpose of our Refine component in the second stage of our architecture, which we expect to improve the "details" of the output labels by better aligning them with the fine structures of the input images. This separation of roles between the first and the second refinement stages (i.e., coarse refinement and then fine-detail refinement) has the potential advantage, which is exploited in our work, of allowing the actions of the first stage to be performed at a lower resolution, thus speeding up the processing and reducing the memory footprint of the network. Also, the end-to-end training procedure allows the components in the first stage (i.e., Detect and Replace) to make mistakes, as long as those are corrected by the second stage. This aspect of our architecture has the advantage that each component can more efficiently exploit its available capacity.
In order to evaluate the proposed architecture we also devised and tested various other architectures that consist of the same core components as those that we propose. In total, the architectures that are explored in our work are:
Detect + Replace + Refine architecture: This is the architecture that we proposed in section 2.1.
Replace baseline architecture: In this case the model directly replaces the old labels with new ones: $Y' = F_r(X, Y)$.
Refine baseline architecture: In this case the model predicts residual corrections w.r.t. the input labels: $Y' = Y + F_u(X, Y)$.
Replace + Refine architecture: Here the model first replaces the entire label map with new values, $U = F_r(X, Y)$, and then residual corrections are predicted w.r.t. the updated values $U$: $Y' = U + F_u(X, U)$.
Detect + Replace architecture: Here the model first detects errors in the input label maps, $E = F_e(X, Y)$, and then replaces those erroneous pixel labels: $Y' = E \odot F_r(X, Y, E) + (1 - E) \odot Y$.
Detect + Refine architecture: In this case, after the detection of the errors $E = F_e(X, Y)$, the erroneous pixel labels are masked out by setting them to the mean label value $\mu$: $U = E \odot \mu + (1 - E) \odot Y$. Then the masked label maps are given as input to a residual refinement model: $Y' = U + F_u(X, U)$. Note that this architecture can also be considered a specific instance of the general Detect + Replace + Refine architecture where the Replace component does not have any learnable parameters and constantly returns the mean label value, i.e., $F_r(\cdot) = \mu$.
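The masking step of this variant can be sketched as follows (toy values; the residual refinement network is omitted):

```python
import numpy as np

def mask_to_mean(y, e, mu):
    # U = E*mu + (1-E)*Y : flagged labels are reset to the mean value mu,
    # i.e. the Replace component degenerates to the constant function mu.
    return e * mu + (1 - e) * y

y = np.array([2.0, 40.0, 3.0])
e = np.array([0.0, 1.0, 0.0])   # the middle label is flagged as erroneous
masked = mask_to_mean(y, e, mu=5.0)
```

The flagged label is overwritten by the mean while the trusted labels pass through, which is exactly the input the refinement model then receives.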
Parallel architecture: Here, after the detection of the errors, the erroneous labels are replaced by the Replace component while the remaining labels are refined by the Refine component. More specifically, the operation performed by this architecture is described by the following equation: $Y' = E \odot F_r(X, Y, E) + (1 - E) \odot (Y + F_u(X, Y))$.
Basically, in this architecture the Replace and Refine components are applied in parallel, instead of the sequential topology that is chosen in the Detect + Replace + Refine architecture.
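A minimal sketch of this parallel update rule, with hypothetical precomputed outputs standing in for the Replace and Refine networks:

```python
import numpy as np

def parallel_update(y, e, r_out, refine_res):
    # Y' = E*R + (1-E)*(Y + residual): flagged labels take the Replace
    # output, the rest receive a residual refinement, in parallel.
    return e * r_out + (1 - e) * (y + refine_res)

y = np.array([1.0, 1.0, 10.0])
e = np.array([0.0, 0.0, 1.0])               # only the last label flagged
out = parallel_update(y, e,
                      r_out=np.array([1.0, 1.0, 1.0]),      # toy Replace output
                      refine_res=np.array([0.05, -0.05, 0.5]))  # toy residuals
```

Note that the residual at the flagged pixel is discarded entirely, which is the key difference from the sequential topology where Refine also acts on replaced labels.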
Detect + Replace + Refine ×T: This is basically the Detect + Replace + Refine architecture applied iteratively for $T$ iterations. Note that the model implementing this architecture is trained in a multi-iteration manner.
X-Blind Detect + Replace + Refine architecture: This is a version of the Detect + Replace + Refine architecture that is "blind" w.r.t. the image. Specifically, the "X-Blind" architecture is exactly the same as the proposed Detect + Replace + Refine architecture, with the only difference being that it gets as input only the initial labels $Y$ and not the image $X$ (i.e., none of the Detect, Replace, and Refine components depends on the image $X$). Hence, the model implemented by the "X-Blind" architecture must learn to reconstruct the ground truth labels by only "seeing" a corrupted version of them.
In order to evaluate the proposed dense image labeling architecture, as well as the other alternative architectures that are explored in our work, we use the dense disparity estimation (stereo matching) task, according to which, given a left and a right image, one needs to assign to each pixel of the left image a continuous label that indicates its horizontal displacement in the right image (its disparity). Such a task forms a very interesting and challenging testbed for the evaluation of dense labeling algorithms, since it requires dealing with several challenges such as accurately preserving disparity discontinuities across object boundaries, dealing with occlusions, and recovering the fine details of disparity maps. At the same time it has many practical applications in various autonomous driving, robot navigation, and grasping tasks.
Generating the initial disparity field: In all the examined architectures, in order to generate the initial disparity labels we used the deep patch matching approach that was proposed by W. Luo et al., and specifically their architecture with id . We then train our models to reconstruct the ground truth labels given as input only the left image and the initial disparity labels. We would like to stress that the right image of the stereo pair is not provided to our models. This practically means that the trained models cannot rely only on the image evidence for performing the dense disparity labelling task – since disparity prediction from a single image is an ill-posed problem – but have to learn the joint space of both input and output labels in order to perform the task.
Image & disparity field normalization:
Before we feed an image and its initial disparity field to any of our examined architectures, we normalize them to zero mean and unit variance (i.e., mean subtraction and division by the standard deviation). The mean and standard deviation values of the RGB colors and disparity labels are computed on the entire training set. The disparity target labels are also normalized with the same mean and standard deviation values, and during inference the normalization is inverted on the disparity fields predicted by the examined architectures.
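The normalization and its inversion can be sketched as follows (the scalar mean/std values here are made up for illustration; in practice they are the training-set statistics):

```python
import numpy as np

def normalize(d, mean, std):
    # Zero-mean, unit-variance normalization applied to inputs and targets.
    return (d - mean) / std

def denormalize(d_norm, mean, std):
    # Inverse transform applied to the predicted disparity field.
    return d_norm * std + mean

d = np.array([10.0, 30.0, 50.0])   # toy disparity values
mean, std = 30.0, 20.0             # assumed training-set statistics
restored = denormalize(normalize(d, mean, std), mean, std)
```

The round trip recovers the original disparities exactly, which is what allows the network to be trained entirely in the normalized space.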
Each component of our generic architecture can be implemented by a deep neural network. For our disparity estimation experiments we chose the following implementations:
Error detection component: It is implemented by 5 convolutional layers, of which the last one yields the error probability map. All the convolutional layers, apart from the last one, are followed by batch normalization plus ReLU units. Instead, the last convolutional layer is followed by a sigmoid unit. The first two convolutions are followed by max-pooling layers of kernel size 2 that in total reduce the input resolution by a factor of 4. To compensate, a bi-linear up-sampling layer is placed on top of the last convolutional layer so that the output probability map has the same resolution as the input image. The number of output feature planes of each of the 5 convolutional layers is , , , , and correspondingly.
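A rough shape-bookkeeping sketch of this detection head: the two stride-2 poolings reduce the map by a factor of 4, and an up-sampling layer restores the input resolution (nearest-neighbour repetition stands in here for the bilinear layer, and the convolutions themselves are omitted):

```python
import numpy as np

def pool2x(m):
    # Max-pooling with kernel size and stride 2 (halves each dimension).
    h, w = m.shape
    return m[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample4x(m):
    # Nearest-neighbour stand-in for the bilinear up-sampling layer.
    return m.repeat(4, axis=0).repeat(4, axis=1)

m = np.random.rand(64, 64)
low = pool2x(pool2x(m))   # the two poolings reduce resolution by 4x
full = upsample4x(low)    # restored to the input resolution
```

This confirms the resolution bookkeeping: two stride-2 poolings followed by a 4× up-sampling return a map of the same spatial size as the input.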
Replace component: It is implemented with a convolutional architecture that first "compresses" the resolution of the feature maps to of the input resolution and then "decompresses" the resolution to of the input resolution. For its implementation we follow the guidelines of A. Newell et al., which are to use residual blocks on each layer and parametrized (by residual blocks) skip connections between the symmetric layers in the "compressing" and the "decompressing" parts of the architecture. The "compressing" part of the architecture uses max-pooling layers with kernel size 2 to down-sample the resolution, while the "decompressing" part uses nearest-neighbor up-sampling (by a factor of 2). We refer to A. Newell et al. for more details. In our case, during the "compression" part there are in total 6 down-sampling convolutional blocks and during the "decompression" part 4 up-sampling convolutional blocks. The number of output feature planes in the first layer is and each time the resolution is down-sampled the number of feature planes is increased by a factor of . For GPU memory efficiency reasons, we do not allow the number of output feature planes of any layer to exceed that of . During the "decompression" part, each time we up-sample the resolution we also decrease the number of feature planes by a factor of 2. The last convolutional layer yields a single feature plane with the new disparity labels (without any non-linearity). As already explained, during the "decompressing" part the resolution is increased only up to of the input resolution. The reason for early-stopping the "decompression" is that the Replace component only needs to perform crude "fixes" of the initial labels, and thus further "decompression" steps are not necessary. Before the disparity labels are fed to the next processing steps, bi-linear up-sampling by a factor of 4 (without any learnable parameters) is used in order to restore the resolution to that of the input.
Refine component: It follows the same architecture as the Replace component, with the exception that during the "compressing" part the resolution of the feature maps is reduced to of the input resolution, and then during the "decompressing" part the resolution is restored to that of the input resolution.
Alternative architectures: In case the alternative architectures have missing components, the number of layers and/or the number of feature planes per layer of the remaining components is increased such that the total capacity (i.e., number of learnable parameters) remains the same. For the architectures that include only the Replace or Refine components (i.e., the Replace, Refine, Detect + Replace, and Detect + Refine architectures), the "compression"–"decompression" architecture of this component "compresses" the resolution to of the input resolution and then "decompresses" it to the same resolution as the input image.
Weight initialization: In order to initialize the weights of each convolutional layer we use the initialization scheme proposed by K. He et al.
We used the absolute difference ($L_1$) loss as objective function, and the networks were optimized using the Adam method. The learning rate was decreased after 20 epochs and then decreased again later in training, after which we continued optimizing for several more epochs. Each epoch lasted approximately batch iterations, where each batch consisted of training samples. Each training sample consists of patches with 4 channels (3 RGB color channels + 1 initial disparity label channel). The patches are generated by randomly cropping, with uniform distribution, an image and its corresponding initial disparity labels.
Augmentation: During training we used horizontal flip augmentation and chromatic transformations such as color, contrast, and brightness transformations.
In this section we present an exhaustive experimental evaluation of the proposed architecture as well as of the other explored architectures in the task of dense disparity estimation. Specifically, we first describe the evaluation settings used in our experiments (section 4.1), then we report detailed quantitative results w.r.t. the examined architectures (section 4.2), and finally we provide qualitative results of the proposed Detect, Replace, Refine architecture and all of its components, trying in this way to more clearly illustrate their role (section 4.3).
Training set: In order to train the explored architectures we used the large-scale synthetic dataset for disparity estimation that was recently introduced by N. Mayer et al. We call this dataset the Synthetic dataset. It consists of three different types of synthetic image sequences and includes a large number of stereo images. Also, we enriched this training set with images from the training set of the KITTI 2015 dataset [24, 25] (the entire training set of KITTI 2015 was split into a subset used for training purposes and a subset used for validation purposes).
Evaluation sets: We evaluated our architectures on three different datasets: 2000 images from the test split of the Synthetic dataset, 40 validation images from the KITTI 2015 training dataset, and 15 images from the training set of the Middlebury dataset . Prior to evaluating the explored architectures on the KITTI 2015 validation set, we fine-tuned the models that implement them only on the images of the KITTI 2015 training split. In this case, we start training for epochs with a learning rate of , then reduce the learning rate to and continue training for epochs, and then reduce the learning rate again to and continue training for more epochs ( epochs in total). The epoch size is set to batch iterations.
Evaluation metrics: For evaluation we used the end-point error (EPE), which is the average euclidean distance from the ground truth disparity, and the percentage of disparity estimates whose absolute difference from the ground truth disparity is more than pixels (> pixel error rate). These metrics are reported for the non-occluded pixels (Non-Occ), all the pixels (All), and only the occluded pixels (Occ).
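Both metrics are straightforward to compute; a minimal numpy sketch (the mask argument selects which pixel subset to score, e.g. Non-Occ, All, or Occ; the disparity values below are toy data):

```python
import numpy as np

def disparity_metrics(pred, gt, valid, t=3):
    """End-point error (EPE) and >t-pixel error rate over a validity mask.

    For scalar disparities the euclidean distance from the ground truth
    reduces to the absolute difference |pred - gt|.
    """
    err = np.abs(pred - gt)[valid]
    return {"EPE": float(err.mean()),
            f">{t}px (%)": float(100.0 * (err > t).mean())}

pred = np.array([10.0, 20.0, 30.0, 40.0])
gt   = np.array([10.5, 24.0, 30.0, 40.0])
mask = np.ones(4, dtype=bool)
m = disparity_metrics(pred, gt, mask)   # EPE = 1.125, >3px = 25.0
```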
| Architecture | > 2 pixel (%) | > 3 pixel (%) | > 4 pixel (%) | > 5 pixel (%) | EPE |
| --- | --- | --- | --- | --- | --- |
| Replace + Refine | 11.1152 | 9.1821 | 7.8430 | 6.8550 | 2.2356 |
| Detect + Replace | 11.6970 | 9.2419 | 7.6812 | 6.6018 | 2.1504 |
| Detect + Refine | 10.5309 | 8.5565 | 7.2154 | 6.2186 | 1.8210 |
| Detect + Replace + Refine | 9.5981 | 7.9764 | 6.7895 | 5.9074 | 1.8569 |
| Detect + Replace + Refine x2 | 8.8411 | 7.2187 | 6.0987 | 5.2853 | 1.6899 |
| Architecture | > 2 pixel (Non-Occ / All / Occ) | > 3 pixel (Non-Occ / All / Occ) | > 4 pixel (Non-Occ / All / Occ) | > 5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ) |
| --- | --- | --- | --- | --- | --- |
| Replace + Refine | 14.262 / 19.257 / 52.036 | 11.297 / 15.701 / 43.905 | 9.552 / 13.459 / 37.910 | 8.408 / 11.891 / 33.125 | 2.292 / 2.908 / 6.216 |
| Detect + Replace | 15.368 / 20.984 / 58.745 | 11.243 / 16.169 / 48.568 | 8.957 / 13.176 / 40.663 | 7.571 / 11.179 / 34.482 | 2.013 / 2.676 / 6.462 |
| Detect + Refine | 13.732 / 19.375 / 56.383 | 10.718 / 15.552 / 46.281 | 8.893 / 12.975 / 38.197 | 7.600 / 11.012 / 31.478 | 2.105 / 2.626 / 5.389 |
| Detect + Replace + Refine | 12.845 / 17.825 / 50.407 | 10.096 / 14.379 / 41.704 | 8.285 / 11.957 / 34.801 | 7.057 / 10.253 / 29.560 | 1.774 / 2.368 / 5.457 |
| Detect + Replace + Refine x2 | 11.529 / 16.414 / 47.922 | 8.757 / 12.874 / 37.977 | 6.997 / 10.482 / 30.634 | 5.911 / 8.916 / 25.514 | 1.789 / 2.321 / 4.971 |
| Architecture | > 2 pixel (Non-Occ / All / Occ) | > 3 pixel (Non-Occ / All / Occ) | > 4 pixel (Non-Occ / All / Occ) | > 5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ) |
| --- | --- | --- | --- | --- | --- |
| Replace + Refine | 3.963 / 4.529 / 27.411 | 2.712 / 3.209 / 21.465 | 2.082 / 2.507 / 16.481 | 1.735 / 2.098 / 13.611 | 0.802 / 0.865 / 2.859 |
| Detect + Replace | 5.126 / 5.751 / 35.554 | 3.469 / 4.005 / 27.656 | 2.517 / 2.953 / 20.519 | 1.911 / 2.269 / 15.947 | 0.886 / 0.943 / 3.108 |
| Detect + Refine | 4.482 / 5.169 / 34.992 | 3.054 / 3.634 / 26.453 | 2.328 / 2.799 / 19.004 | 1.865 / 2.258 / 14.686 | 0.863 / 0.926 / 2.952 |
| Detect + Replace + Refine | 3.919 / 4.610 / 33.947 | 2.708 / 3.294 / 25.697 | 2.082 / 2.570 / 19.123 | 1.699 / 2.112 / 15.140 | 0.790 / 0.858 / 3.056 |
| Detect + Replace + Refine x2 | 3.685 / 4.277 / 28.164 | 2.577 / 3.075 / 20.762 | 2.001 / 2.424 / 16.086 | 1.652 / 2.004 / 13.056 | 0.779 / 0.835 / 2.723 |
Single-iteration results: We first evaluate all the examined architectures when they are applied for a single iteration. We observe that all of them are able to improve the initial label estimates . However, they do not all achieve this with the same success. For instance, the baseline models Replace and Refine tend to be less accurate than the remaining models. Compared to them, the Detect + Replace and the Detect + Refine architectures perform considerably better on two of the three datasets, the Synthetic and the Middlebury datasets. This improvement can only be attributed to the error detection step, which is what distinguishes them from the baselines, and indicates the importance of having an error detection component in the dense labeling task. Overall, the best single-iteration performance is achieved by the Detect + Replace + Refine architecture that we propose in this paper, which combines the merits of the error detection component and the two-stage refinement strategy. Compared to it, the Parallel architecture has considerably worse performance, which indicates that the sequential order in the proposed architecture is important for achieving accurate results.
Multi-iteration results: We also evaluated our best performing architecture, the proposed Detect + Replace + Refine architecture, in the multiple-iteration case. Specifically, the last entry Detect + Replace + Refine x2 in Tables 1, 2, and 3 reports the results of the proposed architecture for 2 iterations, and we observe that it further improves the performance w.r.t. the single-iteration case. For more than 2 iterations we did not see any further improvement, and for this reason we chose not to include those results. Note that in order to train this two-iteration model, we first pre-train the single-iteration version and then fine-tune the two-iteration version by adding the disparity labels generated by the first iteration to the training set.
In Figure 4 we evaluate the ability of each architecture to predict the correct disparity label for each pixel as a function of the “quality” of the initial disparity labels in a neighborhood of that pixel. To that end, we plot for each architecture the percentage of erroneously estimated disparity labels as a function of the percentage of erroneous initial disparity labels that exist in the patch centered on the pixel of interest (in our case, the size of the neighborhood is set to ). An estimated label for the pixel is considered erroneous if its absolute difference from the ground truth label is more than pixels. For the initial disparity labels in the patch centered on , the threshold for considering them incorrect is set to (Fig. 4.a), (Fig. 4.b), (Fig. 4.c), or (Fig. 4.d). We make the following observations (which are most clearly illustrated by sub-figures 4.c and 4.d):
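The per-pixel percentage of erroneous initial labels used as the x-axis of this analysis can be computed with a box filter over the binary error map. A minimal numpy sketch, where the neighborhood size k=9 and the 3-pixel threshold are illustrative assumptions rather than the paper's exact values:

```python
import numpy as np

def local_error_fraction(init_disp, gt_disp, k=9, thresh=3.0):
    """Fraction of erroneous initial disparity labels inside the k x k
    neighborhood of every pixel.  An initial label counts as erroneous when
    it deviates from the ground truth by more than `thresh` pixels.
    """
    err = (np.abs(init_disp - gt_disp) > thresh).astype(np.float64)
    pad = k // 2                                # k is assumed odd
    p = np.pad(err, pad, mode="edge")           # replicate borders
    ii = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # integral image
    # k x k box sums via four corner lookups, normalized to a fraction
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)

# All initial labels off by 10 pixels -> every neighborhood is 100% erroneous.
frac = local_error_fraction(np.full((20, 30), 10.0), np.zeros((20, 30)))
```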
In the case of the Replace and Refine architectures, when the percentage of erroneous initial labels is low (e.g. less than ) then the Refine architecture (which predicts residual corrections) is considerably more accurate than the Replace architecture (which directly predicts new label values). However, when the percentage of erroneous initial labels is high (e.g. more than ) then the Replace architecture is more accurate than the Refine one. This observation supports our argument that residual corrections are more suitable for “soft” mistakes in the initial labels while predicting an entirely new label value is a better choice for the “hard” mistakes.
By introducing the error detection component, both the Refine and the Replace architectures manage to significantly improve their predictions. In the Detect + Refine case, the improvement is due to the fact that the error detection component sets the “hard” mistakes to the mean label values (see the description of the Detect + Refine architecture in the main paper), thus allowing the Refine component to ignore the values of the “hard” mistakes in the initial labels and instead make residual predictions w.r.t. the mean label values (these mean values are fixed and known in advance, and thus it is easier for the network to learn to make residual predictions w.r.t. them). In the case of the Detect + Replace architecture, the error detection component “dictates” that the Replace component predict new label values for the incorrect initial labels, while allowing the propagation of the correct ones to the output.
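The composition of the three components described above can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the three sub-networks are stand-in callables, and the soft gating used to form the renewed labels is our assumption.

```python
import numpy as np

def detect_replace_refine(x, y0, detect, replace, refine):
    """Forward pass of a Detect + Replace + Refine pipeline.

    x  : input image; y0 : initial label estimates.
    The gating e * y_new + (1 - e) * y0 lets trusted initial labels pass
    through while the flagged ones are replaced (illustrative assumption).
    """
    e = detect(x, y0)                # per-pixel error probability in [0, 1]
    y_new = replace(x, y0, e)        # freshly predicted labels
    u = e * y_new + (1.0 - e) * y0   # renewed labels
    y = u + refine(x, u)             # residual corrections w.r.t. renewed labels
    return e, u, y

# Toy check: a detector that flags every pixel forces u == y_new.
x = np.zeros((4, 4))
y0 = np.full((4, 4), 7.0)
e, u, y = detect_replace_refine(
    x, y0,
    detect=lambda x, y: np.ones_like(y),
    replace=lambda x, y, e: np.full_like(y, 3.0),
    refine=lambda x, u: np.zeros_like(u))
```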
Finally, the best “label prediction accuracy vs. initial label quality” behavior is achieved by the proposed Detect + Replace + Refine architecture, which efficiently combines the error detection component with the two-stage label improvement approach. Interestingly, the improvement margin w.r.t. the remaining architectures increases as the quality of the initial labels decreases.
| Method | All / All (D1-bg / D1-fg / D1-all) | All / Est (D1-bg / D1-fg / D1-all) | Noc / All (D1-bg / D1-fg / D1-all) | Noc / Est (D1-bg / D1-fg / D1-all) | Runtime (s) |
| --- | --- | --- | --- | --- | --- |
| Displets v2 | 3.00 / 5.56 / 3.43 | 3.00 / 5.56 / 3.43 | 2.73 / 4.95 / 3.09 | 2.73 / 4.95 / 3.09 | 265 |
We submitted our best solution, the proposed Detect + Replace + Refine architecture applied for two iterations, to the KITTI 2015 test set evaluation server, and we achieved state-of-the-art results on the main evaluation metric, D1-all, surpassing all prior work by a significant margin. The results of our submission, as well as of other competing methods, are reported in Table 4. (The link to our KITTI 2015 submission, which contains more thorough test set results, both qualitative and quantitative, is: http://www.cvlibs.net/datasets/kitti/eval_scene_flow_detail.php?benchmark=stereo&result=365eacbf1effa761ed07aaa674a9b61c60fe9300.) Note that our improvement w.r.t. the best prior approach corresponds to a more than relative reduction of the error rate. Our total execution time is 0.4 secs, of which around 0.37 secs is used by the patch matching algorithm that generates the initial disparity labels, and the remaining 0.03 secs by our Detect + Replace + Refine x2 architecture (measured on a Titan X GPU). For this submission, after having trained the Detect + Replace + Refine x2 model on the training split (160 images), we further fine-tuned it on both the training and the validation splits (into which we divided the 200 images of the KITTI 2015 training dataset).
Synthetic dataset:
| Architecture | > 2 pixel (%) | > 3 pixel (%) | > 4 pixel (%) | > 5 pixel (%) | EPE |
| --- | --- | --- | --- | --- | --- |
| Detect + Replace + Refine | 9.5981 | 7.9764 | 6.7895 | 5.9074 | 1.8569 |

Middlebury dataset:
| Architecture | > 2 pixel (Non-Occ / All / Occ) | > 3 pixel (Non-Occ / All / Occ) | > 4 pixel (Non-Occ / All / Occ) | > 5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ) |
| --- | --- | --- | --- | --- | --- |
| Detect + Replace + Refine | 12.845 / 17.825 / 50.407 | 10.096 / 14.379 / 41.704 | 8.285 / 11.957 / 34.801 | 7.057 / 10.253 / 29.560 | 1.774 / 2.368 / 5.457 |

KITTI 2015 dataset:
| Architecture | > 2 pixel (Non-Occ / All / Occ) | > 3 pixel (Non-Occ / All / Occ) | > 4 pixel (Non-Occ / All / Occ) | > 5 pixel (Non-Occ / All / Occ) | EPE (Non-Occ / All / Occ) |
| --- | --- | --- | --- | --- | --- |
| Detect + Replace + Refine | 3.919 / 4.610 / 33.947 | 2.708 / 3.294 / 25.697 | 2.082 / 2.570 / 19.123 | 1.699 / 2.112 / 15.140 | 0.790 / 0.858 / 3.056 |
Here we evaluate the “X-Blind” architecture, which, as already explained, is exactly the same as the proposed Detect + Replace + Refine architecture, with the only difference being that it receives as input only the initial labels and not the image . The purpose of evaluating such an architecture is not to examine a competitive variant of the main Detect + Replace + Refine architecture, but rather to explore the capabilities of the latter in such a scenario. In Table 5 we provide the stereo matching results of the “X-Blind” architecture. We observe that, although it cannot compete with the original Detect + Replace + Refine architecture, it can still significantly improve the initial disparity label estimates. In Figure 5 we illustrate some disparity prediction examples generated by the “X-Blind” architecture. We observe that it manages to considerably improve the quality of the initial disparity label estimates; however, since it does not have the image to guide it, it is not able to accurately reconstruct the disparity field on the borders of the objects.
This section includes qualitative examples that help illustrate the role of the various components of our proposed architecture.
In Figure 6 we provide additional examples of error probability maps (which the error detection component generated w.r.t. the initial labels ) and compare them with the ground truth error maps of the initial labels. The ground truth error maps are computed by thresholding the absolute difference of the initial labels from the ground truth labels with a threshold of pixels (the erroneous pixel labels are drawn in red in the figure). Note that this is the logic usually followed in the disparity estimation task for deciding whether a pixel label is erroneous. We observe that, despite the fact that the error detection component is not explicitly trained to produce such ground truth error maps, its predictions still correlate highly with them. This implies that the error detection component seems to have learnt to recognize the areas that look abnormal/atypical with respect to the joint input-output space (i.e., it has learnt the “structure” of that space).
In Figure 7 we provide several examples that more clearly illustrate the function performed by the Replace step in our proposed architecture. Specifically, in sub-figures 7a, 7b, and 7c we depict the input image , the initial disparity label estimates , and the error probability map that the detection component yields for the initial labels . In sub-figure 7d we depict the label predictions of the Replace component . For visualization purposes we only depict the pixel predictions that will replace the initial labels that are incorrect (according to the detection component), drawing the remaining ones (i.e., those whose error probability is less than ) in black. Finally, in the last sub-figure 7e we depict the renewed labels . We can readily observe that most of the “hard” mistakes of the initial labels have now been crudely “fixed” by the Replace component.
In Figure 8 we provide several examples that more clearly illustrate the function performed by the Refine step in our proposed architecture. Specifically, in sub-figures 8a, 8b, and 8c we depict the input image , the initial disparity label estimates , and the renewed labels that the Replace step yields. In sub-figure 8d we depict the residual corrections that the Refine component yields for the renewed labels . Finally, in the last sub-figure 8e we depict the final label estimates that the Refine step yields. We observe that most of the residual corrections that the Refine component yields are concentrated on the borders of the objects. Furthermore, by adding those residuals to the renewed labels , the Refine step manages to refine the renewed labels and align the estimated labels with the fine image structures in .
In Figure 9 we illustrate the entire workflow of the proposed Detect + Replace + Refine architecture and compare its predictions with the ground truth disparity labels.
In Figure 10 we illustrate the estimated disparity labels after each iteration of our multi-iteration architecture Detect + Replace + Refine x2, which in our experiments achieved the most accurate results. We observe that the second iteration further improves the fine details of the estimated disparity labels, delivering a higher-fidelity disparity field. Furthermore, applying the model for a second iteration results in a disparity field that looks more “natural”, i.e., visually plausible.
We provide qualitative results from the KITTI 2015 validation set in Figure 11. To generate them we used the Detect + Replace + Refine x2 architecture, which gave the best quantitative results. We observe that our model is able to recover a good estimate of the actual disparity map even when the initial label estimates are severely corrupted.
In our work we explored a family of architectures that performs the structured prediction problem of dense image labeling by learning a deep joint input-output model that (iteratively) improves some initial estimates of the output labels. In this context, our main focus was on the optimal architecture for implementing this deep model. We argued that the prior approaches of directly predicting the new labels with a feed-forward deep neural network are sub-optimal, and we proposed to decompose the label improvement step into three sub-tasks: 1) detection of the incorrect input labels, 2) their replacement with new labels, and 3) the overall refinement of the output labels in the form of residual corrections. All three steps are embedded in a unified architecture, which we call Detect + Replace + Refine, that is end-to-end trainable. We evaluated our architecture on the disparity estimation (stereo matching) task, and we report state-of-the-art results on the KITTI 2015 test set.
This work was supported by the ANR SEMAPOLIS project and a hardware donation by NVIDIA. We would like to thank Sergey Zagoruyko, Francisco Massa, and Shell Xu for their advice with respect to the Torch framework and for fruitful discussions.
Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4165–4175, 2015.
Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
Efficient deep learning for stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5695–5703, 2016.