1 Introduction
Deep Convolutional Neural Networks (CNNs) have become a de facto standard for successfully addressing all kinds of perception-related problems, such as image classification, object detection and optical flow. New CNN architectures and training procedures are continually becoming the new state of the art, producing models whose prediction accuracy was inconceivable a few years ago. The ascendancy of these tools is largely due to the release of very large annotated datasets as well as the popularization of massively parallel GPUs, which enable fast training and inference.
In addition to the two aforementioned elements, the success of CNN-based approaches relies heavily on a smart design of three elements: i) the representation of the problem, ii) the training method and iii) the network architecture.
As a general practice, the task under study is represented as a set of classification or regression problems, depending on the nature of the task. For example, it is common to represent semantic segmentation [1] as multiple classification problems over a finite and discrete set of categories, while motion-related tasks, such as optical flow prediction [2], are represented as regression problems over a continuous "flow space". To promote the correct behavior of CNNs, the training method needs to reflect the chosen representation with an appropriate loss function. Typical examples of losses are cross entropy and mean squared error, associated with classification and regression problems respectively. The last element, the network architecture, needs to provide enough capacity for the approximation of the task and support the propagation of the gradient to make training possible. The use of certain network designs, such as architectures based on residual blocks, has proven to yield a notable improvement in speed and accuracy
[3]. Unlike existing approaches, in this paper we propose an alternative representation that combines the benefits of classification and regression in a joint coarse-and-fine reasoning, as shown in Fig. 1. The classification component carries general coarse information that is important to focus the search around the solution space, while the regression component carries the fine details needed to produce an accurate prediction. We defend that this representation is more suitable than the existing ones, helping to reach better solutions faster. To enforce this joint representation we propose a simple but effective loss function that linearly combines a classification and a regression cost. We also show how to fully integrate this representation in any network architecture by introducing a new layer that expresses the final prediction as the addition of a real-valued refinement component on top of a coarse discrete approximation.
Our approach is applied to the context of optical flow due to its challenging nature, where a real value needs to be predicted for each pixel of an image that may follow any kind of motion. We demonstrate the benefits of our proposal on state-of-the-art optical flow datasets.
2 Related Work
The formulation of optical flow approaches has continued to evolve from the classical energy-optimization formulation over a pixel-brightness space [4] to sophisticated variational approaches [5], which include all types of ad hoc blocks to account for key aspects such as motion at edges [6] and robust patch matching [7]. This evolution towards improving flow accuracy brought the addition of object semantics [8], and eventually the desire to exploit semantic information and context to improve flow estimation led to approaching the task as a learning problem, exploiting the power of CNN-based techniques.
It is clear that CNNs [9][10] have gained much attention in the context of optical flow. They have been applied to improve many different parts of the pipeline, from dealing with image patch matching under large motion displacements [11][6] to the extraction and matching of feature patches [12]. The first fully CNN-based optical flow approach was introduced in [2], where the authors show that it is feasible to reach state-of-the-art solutions by training a CNN architecture end-to-end. Such an approach builds upon the recent success of deconvolutional blocks in solving dense pixel-wise prediction problems, such as semantic segmentation [13][14][15][1] and super-resolution [16]. Most CNN-based solutions typically address the learning task by casting it as a classification or a regression problem, depending on the nature of the task. In our novel approach, we perform joint classification and regression to exploit their respective benefits, i.e., i) obtaining a simplified coarse solution via classification, which helps the training converge more quickly, and ii) distilling the fine details of the solution via regression. We prove that this approach leads to better results than existing coarse-to-fine strategies used in methods like FlowNet [2], where the problem is hierarchically approached from low to high resolution.

3 The Coarse-and-Fine Formulation
Optical flow, as well as many other dense pixel-wise prediction tasks, is traditionally formulated as a regression problem in order to predict a solution that captures fine details. However, in this work we defend that it is more convenient and accurate to jointly represent a coarse classification component, which contains a generic and discrete approximation of the solution, and a fine regression component, which provides a fine and continuous refinement. The introduction of an explicit discrete classification term draws inspiration from semantic segmentation methods, which exhibit fast convergence rates. In our case, this component helps accelerate the training by quickly centring the search space around a coarsely correct solution.
Here we describe the concepts and ingredients used to fully exploit this joint coarse-and-fine representation, including two different network topologies with their respective training methods and associated loss functions.
3.1 Estimating coarse information as an auxiliary task
For the sake of generality, we define a basic architecture for the estimation of optical flow as a combination of two blocks: a feature extractor and a flow predictor. Given an RGB image, an initial stage of the network computes features according to the feature-extraction model. Then, a second stage transforms these features into pixel-wise optical flow predictions. In our approach, we adapt FlowNet [2] to fill these roles (see Fig. 2 for a graphical description).
Traditional CNN-based methods define the prediction stage as a set of convolutions that transform the extracted representation into the final optical flow. Then, during training, a regression loss function is used to find a suitable model, therefore using only fine-grained information.
A simple way to account for both coarse and fine components is to branch the prediction stage into two heads: a fine-grained regression head that produces the flow solution, and a coarse classification head that predicts over a small set of flow categories.
The regression head is given by a convolution kernel that maps the features to a 2-channel output representing flow. The classification head consists of a simple convolution mapping the features to K categories, followed by a softmax operator. These K categories are defined by projecting the optical flow onto a bounded range, whose bounds are empirically selected according to typical minimum and maximum values for this problem. This range is then divided into K categories such that:
c_k = f_min + (k − 1/2) · (f_max − f_min) / K,   k = 1, …, K,   (1)

where c_k are the centroids of the classes and [f_min, f_max] is the selected flow range. Notice that out-of-range pixels are codified on the outer classes. This procedure also serves to transform the regression ground truth into the classification ground truth.
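The flow discretisation described above can be sketched as follows; the bound and class-count values below (f_min, f_max, K) are illustrative placeholders, not the paper's actual settings:

```python
import numpy as np

def quantize_flow(flow, f_min=-20.0, f_max=20.0, K=40):
    """Map continuous flow values to K discrete classes.

    Out-of-range values are clipped onto the outer classes,
    mirroring the handling of outbound pixels described above.
    f_min, f_max and K are placeholder values for illustration.
    """
    width = (f_max - f_min) / K                      # bin width
    labels = np.floor((flow - f_min) / width).astype(int)
    labels = np.clip(labels, 0, K - 1)               # outbound -> outer classes
    centroids = f_min + (labels + 0.5) * width       # class centres c_k
    return labels, centroids

# labels for flows far outside the range land on classes 0 and K-1
labels, cents = quantize_flow(np.array([-100.0, -0.3, 0.3, 100.0]))
```

The same routine converts the regression ground truth into classification ground truth, since each continuous flow value is replaced by its class label.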
During training, the model is adjusted via standard end-to-end backpropagation, guided by the following coarse-and-fine loss function:
L = L_MSE + λ · L_WCE,   (2)

where λ is a scalar weight balancing both terms. In our approach, L_MSE is a standard ℓ2 regression loss. For L_WCE, we use the multi-class Weighted Cross Entropy loss (WCE) [1], such that:
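A minimal sketch of such a linear combination of a regression cost and a weighted classification cost is given below; the balancing weight `lam` and the flattened per-pixel layout are illustrative assumptions, not values from the paper:

```python
import numpy as np

def coarse_and_fine_loss(pred_flow, gt_flow, class_probs, gt_labels,
                         class_weights, lam=1.0):
    """Linearly combine a fine regression cost (MSE) with a coarse
    classification cost (weighted cross entropy).

    pred_flow, gt_flow: flattened continuous flow values.
    class_probs: (N, K) softmax outputs; gt_labels: (N,) class indices.
    class_weights: (K,) per-class weights; `lam` is an illustrative
    balancing weight, not taken from the paper.
    """
    mse = np.mean((pred_flow - gt_flow) ** 2)
    # select the predicted probability of the ground-truth class per pixel
    p_true = class_probs[np.arange(len(gt_labels)), gt_labels]
    wce = -np.mean(class_weights[gt_labels] * np.log(p_true))
    return mse + lam * wce
```

With a perfect regression and uniform two-class predictions, the loss reduces to the cross-entropy term alone, which makes the two contributions easy to inspect separately.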
L_WCE = −(1/N) Σ_i Σ_k w_k · δ(k, y_i) · log p_{i,k},   (3)

where δ(k, y_i) is an index function that acts as a selector for the probability p_{i,k} associated with the expected ground-truth class y_i of pixel i, and w_k is a weight proportional to the inverse of the frequency of the k-th class. This weighting is a key factor in preventing the bias introduced by the class imbalance due to some predominant flow vectors, and it is computed from the training-set statistics.
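A sketch of computing such inverse-frequency class weights from training-set label statistics follows; the normalisation to a mean weight of 1 is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def inverse_frequency_weights(train_labels, K):
    """Per-class weights proportional to the inverse class frequency,
    computed from training-set statistics to counter the imbalance
    caused by predominant flow vectors.

    train_labels: integer class labels over the training set.
    K: number of flow classes. The final rescaling so that weights
    average to 1 is an illustrative convention.
    """
    counts = np.bincount(train_labels.ravel(), minlength=K).astype(float)
    counts = np.maximum(counts, 1.0)   # guard against empty classes
    w = 1.0 / counts
    return w / w.sum() * K             # normalise to mean weight 1
```

Rare classes thus receive proportionally larger weights, so the dominant near-zero flow vectors do not overwhelm the classification loss.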
We refer to this approach as CaF. In practice, and without loss of generality, we further subdivide the classification component into its horizontal and vertical optical flow terms to simplify the representation of the problem. In CaF, the coarse and fine components are never combined. The output of the network is just the regression, while the coarse component is used as an auxiliary task that provides additional guidance and speeds up the training process. Despite its simplicity, this method serves to test and validate the importance of accounting for both coarse and fine components.
3.2 Explicit Joint Coarse-and-Fine
We propose a refinement of the previous approach that explicitly represents the optical flow estimation by adding the output of the regressor to the classifier component. In this case, the regressor does not encode the whole optical flow, but just the fine details of the solution, i.e., a refinement, which is combined with the coarse solution provided by the horizontal and vertical classifiers to produce the final estimation. This process, which we call CaF-Full, is depicted in Fig. 4. This representation has the advantage of reducing the search space of the fine component to a bounded area around zero, which makes training converge faster and leads to more accurate models (see section 4). In practice, the combination of the three components, i.e., the two classifiers and the regressor, requires mapping the discrete classification solutions back to a real value. This is done in the DeCLASS blocks (Fig. 4), which output the centroid associated with a given class. Afterwards, the horizontal and vertical components are concatenated and added to the regression output.
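The composition of the final output, i.e., decoding each classifier's prediction to its class centroid and adding the regression refinement, can be sketched as follows (array shapes and function names are illustrative assumptions):

```python
import numpy as np

def caf_full_output(class_probs_u, class_probs_v, refinement, centroids):
    """Final flow = coarse centroid (argmax class, decoded as in the
    DeCLASS step) + fine regression refinement.

    class_probs_u, class_probs_v: (H, W, K) softmax outputs for the
    horizontal and vertical classifiers.
    refinement: (H, W, 2) fine regression output.
    centroids: (K,) class centres. Names and shapes are illustrative.
    """
    coarse_u = centroids[np.argmax(class_probs_u, axis=-1)]   # (H, W)
    coarse_v = centroids[np.argmax(class_probs_v, axis=-1)]   # (H, W)
    coarse = np.stack([coarse_u, coarse_v], axis=-1)          # (H, W, 2)
    return coarse + refinement
```

Because the refinement only has to cover the residual within a single bin, its search space stays bounded around zero, which is the property the text above attributes to this design.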
4 Experiments
In our experiments, we take a state-of-the-art regression-based CNN architecture [2] and validate the benefits of adding our joint coarse-and-fine reasoning scheme in terms of optical flow end-point error (EPE). Classification-only and regression-only predictions are also reported as additional baselines. Experiments are summarized in Table 4, where we show that our proposal noticeably decreases the EPE.
4.1 Experimental conditions
All the presented models are trained from scratch under the exact same conditions, allowing us to measure the real performance boost that our approach produces. FlyingChairs [2] is used for training, adopting the same splits as the original paper and a fixed batch size of image pairs as input. We perform slight data augmentation by mirroring the images upside-down and left-to-right, each with a fixed probability. All models are implemented in MatConvNet, initialized following He's method [17] and trained using Adam with its standard parameters. The training process is performed on a single NVIDIA K40 GPU, fixing the learning rate during the first epochs and successively halving it at regular intervals afterwards. Following [2], we measure the network loss at different resolution points of the expansive part (Fig. 2), but contrary to their approach we weight all these losses equally to avoid extreme hyperparameter tuning. For the coarse prediction, we bound the continuous flow space between empirically chosen minimum and maximum values and discretise the resulting subdomain. We perform three different experiments attending to the number of classes created and, therefore, the size of the per-pixel flow bins, testing three increasingly fine class granularities.
4.2 Regression baseline
Our regression baseline consists of a batch-normalized FlowNet trained from scratch under the previously defined conditions. It is trained by deactivating the contribution of the classification modules to the final output as well as to the loss function (turning off the upper part of Fig. 4). The reported results are fairly close to those of the original paper, although we used moderate data augmentation and avoided hyperparameter tuning in order to create a fair and reproducible test environment. Notwithstanding, the increase in performance of our joint approach is evident, as the training procedure is rigorous and fixed for all the methods.
4.3 Classification baseline
In addition to the regression baseline, Table 4 reports classification results labelled as Class-Kc, for K classes. This baseline is trained by deactivating the regression contribution to the network output (the "SUM" block in Fig. 4) as well as the MSE term of the loss during training, so that only the coarse components are used.
4.4 Joint CoarseandFine performance
We report experiments for the two flavours of our proposal, i.e., i) CaF, which is the regression baseline trained with the proposed coarse-and-fine loss function (turning the DeCLASS modules of Fig. 4 off, but keeping their measured errors on), and ii) our full coarse-and-fine proposal (CaF-Full), where the coarse-and-fine refinement is plugged in, explicitly composing the network output in that way.
According to the results, the performance boost produced by our approach in the trained networks is significant. The addition of the combined loss function (see Table 4, rows 5–7) noticeably decreases the end-point error (EPE). Moreover, by introducing our full coarse-and-fine architecture (rows 8–10), described in section 3.2, the performance is boosted even further on the FlyingChairs validation set.
Regarding the number of classes of the coarse prediction, we observe a trend in the full architecture: the error tends to decrease as the number of classes grows. This is clearer for the CaF-Full models, as having smaller class bins allows the fine prediction to recover misclassified pixels more easily.
We further evaluate the generalization capabilities of our approach by testing the models trained on FlyingChairs on the unseen Sintel dataset without any fine-tuning. Although the improvement is not as pronounced on this challenging dataset, the same conclusions can be systematically drawn for both the training and test Sintel splits. This proves once more the benefits of our joint coarse-and-fine methods.
5 Conclusions and Future Work
This paper presented the benefits of using a joint coarse-and-fine representation for dense pixel-wise estimation tasks, such as optical flow, by casting the task as a joint classification and regression problem. Our novel representation has proven to speed up training convergence and to increase model accuracy when compared against CNN-based state-of-the-art methods and other baselines. We have experimentally demonstrated that this joint representation achieves its maximum potential by exploiting a new type of architecture, which expresses its prediction as the addition of a real-valued refinement component to a coarse discrete approximation. Our next steps are focused on studying the impact that complementary sources of information have on model accuracy and how to efficiently combine those sources.

Acknowledgements: This work was partially supported by the European AEROARMS project (H2020-ICT-2014-1-644271) and the CICYT projects ColRobTransp (DPI2016-78957-R) and ROBINSTRUCT (TIN2014-58178-R). The authors thank Nvidia for the GPU hardware donation.
References
 [1] German Ros, Simon Stent, Pablo F. Alcantarilla, and Tomoki Watanabe, “Training constrained deconvolutional networks for road scene semantic segmentation,” arXiv preprint arXiv:1604.01545, 2016.
 [2] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox, “Flownet: Learning optical flow with convolutional networks,” in The IEEE International Conference on Computer Vision (ICCV), 2015.

 [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 [4] Berthold K.P. Horn and Brian G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
 [5] Deqing Sun, Stefan Roth, and Michael J Black, “A quantitative analysis of current practices in optical flow estimation and the principles behind them,” International Journal of Computer Vision, vol. 106, no. 2, pp. 115–137, 2014.

 [6] Jerome Revaud, Philippe Weinzaepfel, Zaid Harchaoui, and Cordelia Schmid, “EpicFlow: Edge-preserving interpolation of correspondences for optical flow,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1164–1172.
 [7] Christian Bailer, Bertram Taetz, and Didier Stricker, “Flow fields: Dense correspondence fields for highly accurate large displacement optical flow estimation,” in The IEEE International Conference on Computer Vision (ICCV), 2015.
 [8] Laura SevillaLara, Deqing Sun, Varun Jampani, and Michael J Black, “Optical flow with semantic segmentation and localized layers,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 [9] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.

 [10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012.
 [11] Philippe Weinzaepfel, Jerome Revaud, Zaid Harchaoui, and Cordelia Schmid, “DeepFlow: Large displacement optical flow with deep matching,” in The IEEE International Conference on Computer Vision (ICCV), 2013.
 [12] Min Bai, Wenjie Luo, Kaustav Kundu, and Raquel Urtasun, “Exploiting semantic information and deep matching for optical flow,” in European Conference on Computer Vision (ECCV), 2016.
 [13] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus, “Deconvolutional networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
 [14] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
 [15] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han, “Learning deconvolution network for semantic segmentation,” in The IEEE International Conference on Computer Vision (ICCV), 2015.
 [16] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
 [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification,” in The IEEE International Conference on Computer Vision (ICCV), 2015.