I Introduction
Deep learning has been widely deployed in many computer vision tasks. Disparity estimation (also referred to as stereo matching) is a classical and important problem in computer vision applications, such as 3D scene reconstruction, robotics, and autonomous driving. While traditional methods based on handcrafted feature extraction and matching cost aggregation, such as Semi-Global Matching (SGM) [1], tend to fail on textureless and repetitive regions in the images, recent deep neural network (DNN) techniques surpass them with decent generalization and robustness to those challenging patches, and achieve state-of-the-art performance on many public datasets [2][3][4][5][6][7]. The DNN-based methods for disparity estimation are end-to-end frameworks which take stereo images (left and right) as input to the neural network and predict the disparity directly. The DNN architecture is essential to accurate estimation, and existing designs can be categorized into two classes: encoder-decoder networks with 2D convolutions (ED-Conv2D) and cost volume matching with 3D convolutions (CVM-Conv3D). Besides, recent studies [8][9] begin to reveal the potential of automated machine learning (AutoML) for neural architecture search (NAS) on stereo matching, while some others [5][10] focus on creating large-scale datasets with high-quality labels. In practice, to measure whether a DNN model is good enough, we not only need to evaluate its accuracy on unseen samples (whether it can estimate the disparity correctly), but also its time efficiency (whether it can generate results in real time).

In ED-Conv2D methods, stereo matching neural networks [2][3][5] are first proposed for end-to-end disparity estimation by exploiting an encoder-decoder structure. The encoder extracts features from the input images, and the decoder predicts the disparity from the extracted features. The disparity prediction is optimized as a regression or classification problem using large-scale datasets (e.g., Scene Flow [5], IRS [10]) with disparity ground truth. The correlation layer [11][5] is then proposed to increase the learning capability of DNNs in disparity estimation, and it has proven successful in learning strong features at multiple scales [11][5][12][13][14]. To further improve the capability of the models, residual networks [15][16][17] are introduced into these ED-Conv2D networks, since the residual structure makes much deeper networks easier to train [18]. The ED-Conv2D methods have proven computationally efficient, but they cannot achieve very high estimation accuracy.
To address the accuracy problem of disparity estimation, researchers have proposed CVM-Conv3D networks to better capture the features of stereo images and thus improve the estimation accuracy [3][19][6][7][20]. The key idea of the CVM-Conv3D methods is to generate a cost volume by concatenating left feature maps with their corresponding right counterparts across each disparity level [19][6]. The features of the cost volume are then automatically extracted by 3D convolution layers. However, 3D operations in DNNs are compute-intensive and hence very slow even on current powerful AI accelerators (e.g., GPUs). Although the 3D convolution based DNNs achieve state-of-the-art disparity estimation accuracy, they are difficult to deploy due to their resource requirements. On one hand, they require a large amount of memory to hold the model, so only a limited set of accelerators (like the Nvidia Tesla V100 with 32GB memory) can run them. On the other hand, CVM-Conv3D models take several seconds to generate a single result even on the very powerful Tesla V100 GPU. The memory consumption and the inefficient computation make the CVM-Conv3D methods difficult to deploy in practice. Therefore, it is crucial to address both the accuracy and efficiency problems for real-world applications.
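For readers unfamiliar with this construction, the following is a minimal PyTorch sketch (our illustration, not code taken from any of the cited papers) of how a concatenation-based cost volume can be formed before it is processed by 3D convolutions:

```python
import torch

def concat_cost_volume(left_feat, right_feat, max_disp):
    """Illustrative sketch: build a 4D cost volume by concatenating left
    features with right features shifted across each candidate disparity.

    left_feat, right_feat: [B, C, H, W] feature maps.
    Returns: [B, 2C, max_disp, H, W], to be consumed by 3D convolutions.
    """
    b, c, h, w = left_feat.shape
    volume = left_feat.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d] = left_feat
            volume[:, c:, d] = right_feat
        else:
            # A pixel at column x in the left view matches column x-d on the right.
            volume[:, :c, d, :, d:] = left_feat[:, :, :, d:]
            volume[:, c:, d, :, d:] = right_feat[:, :, :, :-d]
    return volume
```

The 4D shape of this volume is precisely why 3D convolutions, and hence the heavy memory and compute footprint discussed above, are required.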
To this end, we propose FADNet, a Fast and Accurate Disparity estimation Network based on ED-Conv2D architectures. FADNet achieves high accuracy while keeping a fast inference speed. As illustrated in Fig. 5, FADNet easily obtains performance comparable to the state-of-the-art PSMNet [6], while running approximately 20× faster and consuming 10× less GPU memory. In FADNet, we first exploit multiple stacked 2D convolution layers with fast computation; we then combine state-of-the-art residual architectures to improve the learning capability; finally, we introduce multi-scale outputs so that FADNet can exploit multi-scale weight scheduling to improve the training speed. These features enable FADNet to efficiently predict the disparity with high accuracy compared to existing work. Our contributions are summarized as follows:

We propose an accurate yet efficient DNN architecture for disparity estimation, named FADNet, which achieves prediction accuracy comparable to CVM-Conv3D models while running an order of magnitude faster than the 3D-based models.

We develop a multi-round training scheme with multi-scale weight scheduling for FADNet, which improves the training speed while maintaining model accuracy.
The rest of the paper is organized as follows. We introduce related work on DNN-based stereo matching in Section II. Section III introduces the methodology and implementation of our proposed network. We demonstrate our experimental results in Section IV. We finally conclude the paper in Section V.
II Related Work
There exist many studies using deep learning methods to estimate image depth from monocular, stereo, and multi-view images. Although monocular vision is low cost and commonly available in practice, it does not explicitly introduce any geometrical constraint, which is important for disparity estimation [21]. On the contrary, stereo vision leverages the cross-reference between the left and the right views, and usually shows greater performance and robustness in geometrical tasks. In this paper, we mainly discuss the work related to disparity estimation from stereo images, which is classified into two categories: 2D based and 3D based CNNs.
In 2D based CNNs, end-to-end architectures consisting mainly of convolution layers [5][22] are proposed for disparity estimation; they take two stereo images as input and generate the disparity directly, optimizing the disparity as a regression task. However, these pure 2D CNN architectures struggle to capture the matching features, so the estimation results are unsatisfactory. To address this problem, the correlation layer, which expresses the relationship between the left and right images, is introduced into the end-to-end architecture (e.g., DispNetCorr1D [5], FlowNet [11], FlowNet2 [23], DenseMapNet [24]). The correlation layer significantly increases the estimation performance compared to pure CNNs, but existing architectures are still not accurate enough for production.
3D based CNNs are further proposed to increase the estimation performance [3][19][6][7][20]; they employ 3D convolutions over a cost volume. The cost volume is mainly formed by concatenating left feature maps with their corresponding right counterparts across each disparity level [19][6], and the features of the generated cost volume are learned by 3D convolution layers. The 3D based CNNs can automatically learn to regularize the cost volume and have achieved state-of-the-art accuracy on various datasets. However, the key limitation of the 3D based CNNs is their high demand for computing resources. For example, training GANet [7] on the Scene Flow [5] dataset takes weeks even on very powerful Nvidia Tesla V100 GPUs. Even though they achieve good accuracy, they are difficult to deploy due to their very low time efficiency. To this end, we propose a fast and accurate DNN model for disparity estimation.
III Model Design and Implementation
Our proposed FADNet exploits the structure of DispNetC [5] as a backbone, but it is extensively reformed to take care of both accuracy and inference speed, which is lacking in existing studies. We first change the structure in terms of branch depth and layer type by introducing two new modules: the residual block and point-wise correlation. Then we exploit a multi-scale residual learning strategy for training the refinement network. Finally, a loss weight scheduling scheme is used to train the network in a coarse-to-fine manner.
III-A Residual Block and Point-wise Correlation
DispNetC and DispNetS, both from the study in [5], basically use an encoder-decoder structure equipped with five feature extraction and down-sampling layers and five feature deconvolution layers. While conducting feature extraction and down-sampling, DispNetC and DispNetS first adopt a convolution layer with a stride of 1 and then a convolution layer with a stride of 2, so that they consistently shrink the feature map size by half. We call this two-layer convolution with size reduction DualConv, which is shown in the bottom-left corner of Fig. 6. DispNetC, equipped with DualConv modules and a correlation layer, finally achieves an end-point error (EPE) of 1.68 on the Scene Flow dataset, as reported in [5].

The residual block, originally derived in [15] for image classification tasks, is widely used to learn robust features and train very deep networks, as it alleviates the vanishing gradient problem. Thus, we replace the convolution layers in the DualConv module with residual blocks to construct a new module called DualResBlock, which is also shown in the bottom-left corner of Fig. 6. With DualResBlock, we can make the network deeper without training difficulty. Therefore, we further increase the number of feature extraction and down-sampling layers from five to seven. Finally, DispNetC and DispNetS evolve into two new networks with better learning ability, called RB-NetC and RB-NetS respectively, as shown in Fig. 6.
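As a concrete illustration, the two modules can be sketched in PyTorch as follows; the kernel size of 3, the use of batch normalization, and the 1×1 projection shortcut are our assumptions for a self-contained example rather than the exact FADNet configuration:

```python
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, stride):
    # Kernel size 3 and BatchNorm are assumptions; the paper does not fix them here.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualConv(nn.Module):
    """Stride-1 conv followed by stride-2 conv: halves the feature map size."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            conv_bn_relu(in_ch, out_ch, stride=1),
            conv_bn_relu(out_ch, out_ch, stride=2),
        )
    def forward(self, x):
        return self.body(x)

class ResBlock(nn.Module):
    """Basic residual block [15]; a 1x1 projection handles stride/channel changes."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.conv1 = conv_bn_relu(in_ch, out_ch, stride)
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(self.conv2(self.conv1(x)) + self.proj(x))

class DualResBlock(nn.Module):
    """DualConv with its convolutions replaced by residual blocks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            ResBlock(in_ch, out_ch, stride=1),
            ResBlock(out_ch, out_ch, stride=2),
        )
    def forward(self, x):
        return self.body(x)
```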
One of the most important contributions of DispNetC is the correlation layer, which aims to find correspondences between the left and right images. Given two multi-channel feature maps $f_1$ and $f_2$ with $w$, $h$, and $c$ as their width, height, and number of channels, the correlation layer calculates the cost volume between them using Eq. (1):

$$c(x_1, x_2) = \sum_{o \in [-k,k] \times [-k,k]} \langle f_1(x_1 + o), f_2(x_2 + o) \rangle, \quad (1)$$

where $K = 2k+1$ is the kernel size of cost matching, and $x_1$ and $x_2$ are the centers of two patches from $f_1$ and $f_2$ respectively. Computing all patch combinations involves $w^2 \times h^2$ such computations and produces a cost matching map of size $w \times h \times w \times h$. Given a maximum search range $D$, we fix $x_1$ and shift $x_2$ along the x-axis from $x_1 - D$ to $x_1 + D$ with a stride of two. Thus, the final output cost volume size is $w \times h \times D$.
However, the correlation operation assumes that each pixel in the patch contributes equally to the point-wise convolution results, which may limit the ability to learn more complicated matching patterns. Here we propose point-wise correlation, composed of two modules. The first module is a classical convolution layer with a fixed kernel size and stride. The second is an element-wise multiplication defined by Eq. (2):
$$c_{pw}(x_1, x_2) = \langle f_1(x_1), f_2(x_2) \rangle, \quad (2)$$

where we remove the patch convolution of Eq. (1). Since the maximum valid disparity is 192 in the evaluated datasets, the maximum search range for the original image resolution is no more than 192. Recall that the correlation layer is placed after the third DualResBlock, whose output feature resolution is 1/8 of the input. So a proper search range should not be less than 192/8 = 16; we set it to a marginally larger value of 20. We also test other values, such as 10 and 40, which do not surpass the version using 20. The reason is that a search range that is too small or too large may lead to underfitting or overfitting.
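The element-wise multiplication of Eq. (2) can be sketched in PyTorch as follows; enumerating the shifts in unit steps from 0 to D−1 is a simplifying assumption of ours (the paper samples the search window with a stride of two):

```python
import torch

def pointwise_correlation(f_left, f_right, max_disp=20):
    """Sketch of Eq. (2): for each candidate shift d along the x-axis, the
    right feature map is shifted, multiplied element-wise with the left one,
    and averaged over channels.

    f_left, f_right: [B, C, H, W] -> returns [B, max_disp, H, W].
    """
    b, c, h, w = f_left.shape
    cost = f_left.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            cost[:, d] = (f_left * f_right).mean(dim=1)
        else:
            cost[:, d, :, d:] = (f_left[:, :, :, d:] *
                                 f_right[:, :, :, :-d]).mean(dim=1)
    return cost
```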
Table I lists the accuracy improvement brought by the proposed DualResBlock and point-wise correlation. We train all models using the same dataset and training scheme. It is observed that RB-NetC outperforms DispNetC with a much lower EPE, which indicates the effectiveness of the residual structure. We also notice that setting a proper search range $D$ for the correlation layer helps further improve the model accuracy.
Model  Search Range $D$  Training EPE  Test EPE
DispNetC  20  2.89  2.80
RB-NetC  10  2.28  2.06
RB-NetC  20  2.09  1.76
RB-NetC  40  2.12  1.83
III-B Multi-Scale Residual Learning
Instead of directly stacking DispNetC and DispNetS sub-networks to conduct the disparity refinement procedure [13], we apply the multi-scale residual learning first proposed in [25]. The basic idea is that the second (refinement) network learns the disparity residuals and accumulates them onto the initial results generated by the first network, instead of directly predicting the whole disparity map. In this way, the second network only needs to focus on learning the highly nonlinear residuals, which effectively avoids gradient vanishing. Our final FADNet is formed by stacking RB-NetC and RB-NetS with multi-scale residual learning, as shown in Fig. 6.
As illustrated in Fig. 6, the upper RB-NetC takes the left and right images as input and produces disparity maps at a total of seven scales, denoted by $d_C^{(s)}$, where $s$ ranges from 0 to 6. The bottom RB-NetS exploits the left image, the right image, and the warped left image as inputs to predict the residuals. The residuals generated by RB-NetS (denoted by $r^{(s)}$) are then accumulated onto the predictions of RB-NetC to generate the final disparity maps at multiple scales. Thus, the final disparity maps predicted by FADNet, denoted by $d^{(s)}$, are calculated by

$$d^{(s)} = d_C^{(s)} + r^{(s)}, \quad s = 0, 1, \ldots, 6. \quad (3)$$
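A minimal sketch of this refinement step is given below; the disparity-based warping via grid_sample is our assumption of a standard implementation, not necessarily the exact operator used in FADNet:

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right_img, disp):
    """Resample the right image to the left view using the predicted
    disparity, so the refinement network can compare it with the real
    left image. right_img: [B, C, H, W], disp: [B, 1, H, W] (positive)."""
    b, _, h, w = right_img.shape
    # Base pixel grid over [0, W-1] x [0, H-1].
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.to(right_img.device).float().expand(b, h, w)
    ys = ys.to(right_img.device).float().expand(b, h, w)
    # A left-view pixel (x, y) corresponds to (x - d, y) in the right view.
    x_src = xs - disp.squeeze(1)
    # Normalize coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * x_src / (w - 1) - 1, 2 * ys / (h - 1) - 1], dim=-1)
    return F.grid_sample(right_img, grid, align_corners=True)

def fuse_multiscale(disps_c, residuals):
    """Eq. (3): accumulate the residuals r^(s) onto the first network's
    predictions d_C^(s), one tensor per scale s = 0..6."""
    return [d_c + r for d_c, r in zip(disps_c, residuals)]
```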
III-C Loss Function Design
Given a pair of stereo RGB images, FADNet takes them as input and produces seven disparity maps at different scales. Assume that the input image size is $H \times W$. The dimensions of the seven output disparity maps are $H \times W$, $\frac{H}{2} \times \frac{W}{2}$, $\frac{H}{4} \times \frac{W}{4}$, $\frac{H}{8} \times \frac{W}{8}$, $\frac{H}{16} \times \frac{W}{16}$, $\frac{H}{32} \times \frac{W}{32}$, and $\frac{H}{64} \times \frac{W}{64}$, respectively. To train FADNet in an end-to-end manner, we adopt the pixel-wise smooth L1 loss between the predicted disparity map $d^{(s)}$ and the ground truth $\hat{d}^{(s)}$:
$$L_s\left(d^{(s)}, \hat{d}^{(s)}\right) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{smooth}_{L_1}\left(d_i^{(s)} - \hat{d}_i^{(s)}\right), \quad (4)$$

where $N$ is the number of pixels of the disparity map, $d_i^{(s)}$ is the $i$-th element of $d^{(s)}$, and

$$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2, & \text{if } |x| < 1 \\ |x| - 0.5, & \text{otherwise.} \end{cases} \quad (5)$$
Note that $\hat{d}^{(s)}$ is the ground truth disparity at scale $s$ and $d^{(s)}$ is the predicted disparity at scale $s$. The loss function is applied separately to the seven scales of outputs, which generates seven loss values. The loss values are then accumulated with loss weights, as formalized in Eq. (6) below.
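This maps directly onto PyTorch's built-in smooth L1 loss; the sketch below assumes the ground truth has been resampled to all seven scales:

```python
import torch.nn.functional as F

def multiscale_loss(pred_disps, gt_disps, weights):
    """Sketch of Eqs. (4)-(6): smooth L1 loss per scale (PyTorch's built-in
    smooth_l1_loss matches the piecewise definition of Eq. (5)), accumulated
    with the per-scale loss weights w_s. pred_disps/gt_disps are lists of
    seven tensors from full resolution (s=0) down to 1/64 (s=6)."""
    return sum(w * F.smooth_l1_loss(p, g, reduction="mean")
               for w, p, g in zip(weights, pred_disps, gt_disps))
```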
Round  $w_0$  $w_1$  $w_2$  $w_3$  $w_4$  $w_5$  $w_6$
1  0.32  0.16  0.08  0.04  0.02  0.01  0.005
2  0.6  0.32  0.08  0.04  0.02  0.01  0.005
3  0.8  0.16  0.04  0.02  0.01  0.005  0.0025
4  1.0  0  0  0  0  0  0
The loss weight scheduling technique, initially proposed in [5], is useful for learning the disparity in a coarse-to-fine manner. Instead of just switching the losses of different scales on and off, we apply different non-zero weight groups to tackle different disparity scales. Let $w_s$ denote the weight for the loss at scale $s$. The final loss function is

$$L = \sum_{s=0}^{6} w_s \cdot L_s\left(d^{(s)}, \hat{d}^{(s)}\right). \quad (6)$$
The specific setting is listed in Table II. In total there are seven scales of predicted disparity maps. At the beginning, we assign low weights to the large-scale disparity maps to learn the coarse features. Then we increase the loss weights of the large scales to let the network gradually learn the finer features. Finally, we deactivate all the losses except the final prediction at the original input size. With the successive rounds of weight scheduling, the evaluation EPE gradually decreases to its final accurate level, as shown in Table III for the Scene Flow dataset.
Round  # Epochs  Training EPE  Test EPE  Improvement (%)
1  20  1.85  1.57  -
2  20  1.33  1.32  18.9
3  20  1.04  0.93  41.9
4  30  0.92  0.83  12.0

Note: “Improvement” indicates the improvement of the current round of weight scheduling over its previous round.
Table III lists the model accuracy improvements (around 12% to 41% per round) brought by the multi-round training with the four loss weight groups. It is observed that both the training and test EPEs decrease smoothly and remain close to each other, which indicates good generalization and the advantage of our training strategy.
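As a sketch, the four-round schedule can be driven by a simple outer loop over the weight groups of Table II. Here multiscale_loss is the helper sketched earlier in this section, and make_optimizer is a hypothetical factory that re-creates the optimizer (and thus resets the learning rate) at the start of each round:

```python
# Weight groups and epoch counts taken from Tables II and III.
WEIGHT_ROUNDS = [
    ([0.32, 0.16, 0.08, 0.04, 0.02, 0.01, 0.005], 20),
    ([0.6, 0.32, 0.08, 0.04, 0.02, 0.01, 0.005], 20),
    ([0.8, 0.16, 0.04, 0.02, 0.01, 0.005, 0.0025], 20),
    ([1.0, 0, 0, 0, 0, 0, 0], 30),
]

def train_with_schedule(model, loader, make_optimizer):
    for weights, num_epochs in WEIGHT_ROUNDS:
        optimizer = make_optimizer(model)  # learning rate reset each round
        for epoch in range(num_epochs):
            for left, right, gt_disps in loader:
                pred_disps = model(left, right)  # predictions at seven scales
                loss = multiscale_loss(pred_disps, gt_disps, weights)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```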
IV Performance Evaluation
In this section, we present the experimental results of our proposed FADNet compared to existing work (i.e., DispNetC [5], PSMNet [6], GANet [7] and DenseMapNet [24]) in terms of accuracy and time efficiency.
IV-A Experimental Setup
We implement FADNet using PyTorch (https://pytorch.org), one of the most popular deep learning frameworks, and we make the code and experimental setups publicly available at https://github.com/HKBU-HPML/FADNet.

In terms of accuracy, the model is trained with the Adam optimizer. We perform color normalization with the mean ([0.485, 0.456, 0.406]) and standard deviation ([0.229, 0.224, 0.225]) of the ImageNet [26] dataset for data preprocessing. During training, images are randomly cropped before being fed into the network. The batch size is set to 16 for training on four Nvidia Titan X (Pascal) GPUs (i.e., 4 samples per GPU). We apply the four-round training scheme described in Section III-C, where each round adopts a different loss weight group. At the beginning of each round, the learning rate is reset to its initial value and is then decayed by half every 10 epochs. We train 20 epochs in each of the first three rounds and 30 epochs in the last round.

In terms of time efficiency, we evaluate the inference time of existing state-of-the-art DNNs, including both 2D and 3D based networks, using a pair of stereo images (960×540) from the Scene Flow dataset [5] on a desktop-level Nvidia Titan X (Pascal) GPU (with 12GB memory) and a server-level Nvidia Tesla V100 GPU (with 32GB memory).
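The runtime numbers reported later (Table IV) are averages over 100 runs; a measurement harness along the following lines reproduces that protocol, although our exact timing code may differ in details:

```python
import torch

def measure_runtime(model, left, right, runs=100):
    """Average per-pair inference time in milliseconds, with explicit GPU
    synchronization so timings reflect completed kernels. The model is
    assumed to take a stereo pair (left, right) as input."""
    model.eval().cuda()
    left, right = left.cuda(), right.cuda()
    with torch.no_grad():
        for _ in range(10):                  # warm-up iterations
            model(left, right)
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(runs):
            model(left, right)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / runs    # milliseconds per pair
```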
IV-B Dataset
We use two popular, publicly available datasets to train and evaluate FADNet. The first is Scene Flow, which is produced by synthetic rendering techniques. The second is KITTI 2015, which is captured by real-world cameras and laser sensors.
IV-B1 Scene Flow [5]
A large synthetic dataset providing 39,824 samples of stereo RGB images in total (35,454 for training and 4,370 for testing). The full resolution of the images is 960×540. The dataset covers a wide range of object shapes and textures and provides high-quality dense disparity ground truth. We use the end-point error (EPE) as the error measurement. We exclude pixels whose disparity values are larger than 192 from the loss computation, following previous studies [6][7].
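The EPE metric with this masking rule can be sketched as follows; the handling of empty masks and of invalid (zero) disparities is omitted for brevity:

```python
import torch

def epe(pred, gt, max_disp=192):
    """End-point error: mean absolute disparity error over valid pixels.
    Pixels whose ground-truth disparity exceeds max_disp are excluded,
    as in [6][7]. pred, gt: [B, 1, H, W] disparity maps."""
    mask = gt < max_disp
    return (pred[mask] - gt[mask]).abs().mean()
```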
IV-B2 KITTI 2015 [27]
An open benchmark dataset containing 200 stereo image pairs, which are grayscale with a resolution of 1241×376. The disparity ground truth is generated by LIDAR equipment, so the disparity maps are very sparse. During training, we randomly crop the images and disparity maps to a resolution of 1024×256. We use the full resolution at test time.
IV-C Experimental Results
Model  EPE  Memory (GB)  Runtime on Titan X (Pascal) (ms)  Runtime on Tesla V100 (ms)
FADNet (ours)  0.83  3.87  65.5  48.1
DispNetC  1.68  1.62  28.7  18.7
DenseMapNet  5.36  -  30  -
PSMNet  1.09  13.99  OOM  399.3
GANet  0.84  29.1  OOM  2251.1

Note: “OOM” indicates that the model runs out of memory. Runtime is the inference time per pair of stereo images, averaged over 100 runs. The underlined numbers are taken from the original paper.
The experimental results on the Scene Flow dataset are shown in Table IV. Regarding the model accuracy measured with EPE, our proposed FADNet achieves performance comparable to the state-of-the-art CVM-Conv3D models (PSMNet and GANet), while FADNet is 46× and 8× faster than GANet and PSMNet respectively on an Nvidia Tesla V100 GPU. PSMNet and GANet cannot even run on the Titan X (Pascal) GPU, which implies their high deployment cost in practice. Compared to DispNetC and DenseMapNet, FADNet is relatively slower, but it predicts the disparity more than 2× more accurately, which is a huge accuracy improvement. The visual comparison of predicted disparity maps is shown in Fig. 19.
Model  Noc D1-bg  Noc D1-fg  Noc D1-all  All D1-bg  All D1-fg  All D1-all
FADNet (ours)  2.49%  3.07%  2.59%  2.68%  3.50%  2.82%
DispNetC  4.11%  3.72%  4.05%  4.32%  4.41%  4.34%
GC-Net  2.02%  5.58%  2.61%  2.21%  6.16%  2.87%
PSMNet  1.71%  4.31%  2.14%  1.86%  4.62%  2.32%
GANet  1.34%  3.11%  1.63%  1.48%  3.46%  1.81%

Note: “Noc” and “All” indicate the percentage of outliers averaged over ground truth pixels of non-occluded and all regions respectively. “D1-bg”, “D1-fg”, and “D1-all” indicate the percentage of outliers averaged over background, foreground, and all ground truth pixels respectively.
From the visualized disparity maps shown in Fig. 19, we can see that the details of textures are successfully estimated by our FADNet, while PSMNet is slightly worse and DispNetC misses almost all the details. The visualization results are dramatically different although the EPE gap between DispNetC and FADNet is only 0.85. In this qualitative evaluation, FADNet is more robust and accurate than both DispNetC (a 2D based network) and PSMNet (a 3D based network).
From Table IV, it is noticed that the CVM-Conv3D architectures cannot run on the desktop-level GPU equipped with 12 GB memory, while the proposed FADNet requires only 3.87 GB to perform disparity estimation. The low memory requirement of FADNet makes it much easier to deploy in real-world applications. DispNetC is also an efficient architecture in terms of both memory consumption and computing efficiency, but its estimation accuracy is too low for real-world applications. In summary, FADNet not only achieves high disparity estimation accuracy, but is also very efficient and practical for deployment.
The experimental results on the KITTI 2015 dataset are shown in Table V. GANet achieves the best estimation results among the evaluated models, and our proposed FADNet attains comparable error rates on the D1-fg metric. The qualitative evaluation on the KITTI 2015 dataset is shown in Fig. 36; the error maps of FADNet are close to those of PSMNet, while both are much better than those of DispNetC.
V Conclusion and Future Work
In this paper, we proposed FADNet, an efficient yet accurate neural network for end-to-end disparity estimation that embraces both time efficiency and estimation accuracy on the stereo matching problem. FADNet exploits point-wise correlation layers, residual blocks, and a multi-scale residual learning strategy to make the model accurate in many scenarios while preserving a fast inference time. We compared FADNet with existing state-of-the-art 2D and 3D based methods on two popular datasets in terms of accuracy and speed. Experimental results showed that FADNet achieves comparable accuracy while running much faster than the 3D based models. Compared to the 2D based models, FADNet is more than twice as accurate.
We have two future directions following the discoveries in this paper. First, we would like to develop fast disparity inference with FADNet on edge devices. Since their computational capability is much lower than that of the server GPUs used in our experiments, it is necessary to explore model compression techniques, including pruning, quantization, and so on. Second, we would also like to apply AutoML [9] to search for a well-performing network structure for disparity estimation.
Acknowledgements
This research was supported by Hong Kong RGC GRF grant HKBU 12200418. We thank the anonymous reviewers for their constructive comments and suggestions. We would also like to thank NVIDIA AI Technology Centre (NVAITC) for providing the GPU clusters for some experiments.
References
 [1] H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328–341, 2007.

 [2] S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4353–4361.
 [3] J. Zbontar, Y. LeCun et al., “Stereo matching by training a convolutional neural network to compare image patches,” Journal of Machine Learning Research, vol. 17, no. 1–32, p. 2, 2016.
 [4] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.
 [5] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4040–4048.
 [6] J.-R. Chang and Y.-S. Chen, “Pyramid stereo matching network,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 [7] F. Zhang, V. Prisacariu, R. Yang, and P. H. Torr, “GA-Net: Guided aggregation net for end-to-end stereo matching,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
 [8] T. Saikia, Y. Marrakchi, A. Zela, F. Hutter, and T. Brox, “AutoDispNet: Improving disparity estimation with AutoML,” in The IEEE International Conference on Computer Vision (ICCV), October 2019.
 [9] X. He, K. Zhao, and X. Chu, “AutoML: A survey of the state-of-the-art,” arXiv preprint arXiv:1908.00709, 2019.
 [10] Q. Wang, S. Zheng, Q. Yan, F. Deng, K. Zhao, and X. Chu, “IRS: A large synthetic indoor robotics stereo dataset for disparity and surface normal estimation,” arXiv preprint arXiv:1912.09678, 2019.
 [11] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 2758–2766.
 [12] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
 [13] E. Ilg, T. Saikia, M. Keuper, and T. Brox, “Occlusions, motion and depth boundaries with a generic network for disparity, optical flow or scene flow estimation,” in The European Conference on Computer Vision (ECCV), September 2018.
 [14] Z. Liang, Y. Feng, Y. Guo, H. Liu, W. Chen, L. Qiao, L. Zhou, and J. Zhang, “Learning for disparity estimation through feature constancy,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2811–2820.
 [15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
 [16] A. E. Orhan and X. Pitkow, “Skip connections eliminate singularities,” arXiv preprint arXiv:1701.09175, 2017.
 [17] W. Zhan, X. Ou, Y. Yang, and L. Chen, “DSNet: Joint learning for scene segmentation and disparity estimation,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 2946–2952.
 [18] X. Du, M. El-Khamy, and J. Lee, “AMNet: Deep atrous multiscale stereo disparity estimation networks,” arXiv preprint arXiv:1904.09099, 2019.
 [19] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, “Endtoend learning of geometry and context for deep stereo regression,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 66–75.
 [20] G.-Y. Nie, M.-M. Cheng, Y. Liu, Z. Liang, D.-P. Fan, Y. Liu, and Y. Wang, “Multi-level context ultra-aggregation for stereo matching,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3283–3291.
 [21] Y. Luo, J. Ren, M. Lin, J. Pang, W. Sun, H. Li, and L. Lin, “Single view stereo matching,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 [22] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 887–895.
 [23] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2462–2470.
 [24] R. Atienza, “Fast disparity estimation using dense networks,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 3207–3212.
 [25] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in The IEEE International Conference on Computer Vision (ICCV) Workshops, October 2017.
 [26] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 248–255.
 [27] M. Menze, C. Heipke, and A. Geiger, “Joint 3D estimation of vehicles and scene flow,” in ISPRS Workshop on Image Sequence Analysis (ISA), 2015.