Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching

12/13/2019 ∙ by Xiaodong Gu, et al.

Deep multi-view stereo (MVS) and stereo matching approaches generally construct 3D cost volumes to regularize and regress the output depth or disparity. These methods are limited when high-resolution outputs are needed, since memory and time costs grow cubically as the volume resolution increases. In this paper, we propose a memory- and time-efficient cost volume formulation that is complementary to existing multi-view stereo and stereo matching approaches based on 3D cost volumes. First, the proposed cost volume is built upon a standard feature pyramid encoding geometry and context at gradually finer scales. Then, we narrow the depth (or disparity) range of each stage using the depth (or disparity) map from the previous stage. With gradually higher cost volume resolution and adaptive adjustment of depth (or disparity) intervals, the output is recovered in a coarse-to-fine manner. We apply the cascade cost volume to the representative MVSNet and obtain a 23.1% improvement on the DTU benchmark, together with substantial reductions in GPU memory and run-time. It is also the state-of-the-art learning-based method on the Tanks and Temples benchmark. Statistics of accuracy, run-time, and GPU memory on other representative stereo CNNs also validate the effectiveness of our proposed method.







1 Introduction

Convolutional neural networks (CNNs) have been widely adopted in 3D reconstruction and broader computer vision tasks. State-of-the-art multi-view stereo [12, 44, 45, 22] and stereo matching algorithms [15, 3, 8, 48, 38, 26] often compute a 3D cost volume according to a set of hypothesized depth (or disparity) and warped features. 3D convolutions are applied to this cost volume to regularize and regress the final scene depth (or disparity).

Compared with the methods based on 2D CNNs [23, 47], the 3D cost volume can capture better geometry structures, perform photometric matching in 3D space, and alleviate the influence of image distortion caused by perspective transformation and occlusions [4]. However, methods relying on 3D cost volumes are often limited to low-resolution input images (and results), because 3D CNNs are generally time and GPU memory consuming. Typically, these methods downsample the feature maps to formulate the cost volumes at a lower resolution [15, 3, 8, 48, 38, 26, 12, 44, 45, 22, 4], and adopt upsampling [15, 3, 8, 48, 26, 38, 41, 34] or post-refinement [22, 4] to output the final high-resolution result.

In this work, we present a novel cascade formulation of 3D cost volumes. We start from a feature pyramid that extracts the multi-scale features commonly used in standard multi-view stereo [44] and stereo matching [8, 3] networks. In a coarse-to-fine manner, the cost volume at the early stages is built upon larger-scale semantic 2D features with sparsely sampled depth hypotheses, which leads to a relatively low volume resolution. The later stages then use the estimated depth (or disparity) maps from the earlier stages to adaptively narrow the sampling range of the depth (or disparity) hypotheses and construct new cost volumes from finer semantic features. This adaptive depth sampling and adjustment of feature resolution ensures that computation and memory are spent on more meaningful regions. In this way, our cascade structure can remarkably decrease computation time and GPU memory consumption. The effectiveness of our method can be seen in Figure 1.


We validate our method on both multi-view stereo and stereo matching on various benchmark datasets. For multi-view stereo, our cascade structure achieves the best performance on the DTU dataset [1] at the submission time of this paper, when combined with MVSNet [44]. It is also the state-of-the-art learning-based method on Tanks and Temples benchmark [17]. For stereo matching, our method reduces the end-point-error (EPE) and GPU memory consumption of GwcNet [8] by about 15.2% and 36.9% respectively.

Figure 2: Network architecture of the proposed cascade cost volume on MVSNet [44], denoted as MVSNet+Ours.

2 Related Work

Stereo Matching

According to the survey by Scharstein and Szeliski [30], a typical stereo matching algorithm contains four steps: matching cost calculation, matching cost aggregation, disparity calculation, and disparity refinement. Local methods [49, 42, 24] aggregate matching costs over neighboring pixels and usually utilize a winner-take-all strategy to choose the optimal disparity. Global methods [35, 16, 10] construct an energy function and minimize it to find the optimal disparity. More specifically, works in [35, 16] use belief propagation, while semi-global matching [10] approximates the global optimization with dynamic programming.

In the context of deep neural networks, CNN-based stereo matching was first introduced by Zbontar and LeCun [46], who trained a convolutional neural network to learn a similarity measure for small patch pairs. The widely used 3D cost volume in stereo was first proposed in GCNet [15], whose disparity regression step uses the soft argmin operation to select the best match. PSMNet [3] further introduces spatial pyramid pooling and 3D hourglass networks for cost volume regularization and yields better results. GwcNet [8] modifies the structure of the 3D hourglass network and introduces group-wise correlation to form a group-based 3D cost volume. HSM [40] builds a lightweight model for high-resolution images with a hierarchical design. EMCUA [26] introduces an approach for multi-level context ultra-aggregation. GANet [48] constructs several semi-global aggregation layers and local guided aggregation layers to further improve accuracy.

Although methods based on 3D cost volumes remarkably boost performance, they are limited to downsampled cost volumes and rely on interpolation to generate high-resolution disparity. Our cascade cost volume can be combined with these methods to improve disparity accuracy and GPU memory efficiency.

Multi-View Stereo

According to the comprehensive survey [5], traditional multi-view stereo methods can be roughly categorized into volumetric methods [18, 33, 13, 14], which estimate the relationship between each voxel and the surface; point-cloud-based methods [19, 6], which directly process 3D points to iteratively densify the results; and depth map reconstruction methods [36, 2, 7, 32, 43], which use one reference image and a few source images for single depth map estimation.

Recently, learning-based approaches have also demonstrated superior performance on multi-view stereo. Multi-patch similarity [9] introduces a learned cost metric. SurfaceNet [13] and DeepMVS [11] pre-warp the multi-view images to 3D space and use deep networks for regularization and aggregation. Most recently, multi-view stereo based on 3D cost volumes has been proposed in [44, 12, 22, 45, 4]: a 3D cost volume is built from warped 2D image features of multiple views, and 3D CNNs are applied for cost regularization and depth regression. Because the 3D CNNs require large GPU memory, these methods generally use downsampled cost volumes. Our cascade cost volume can be easily integrated into these methods to enable high-resolution cost volumes and further boost accuracy, computational speed, and GPU memory efficiency.

High-Resolution Output in Stereo and MVS

Recently, some learning-based methods try to reduce the memory requirement in order to generate high-resolution outputs. Instead of using voxel grids, Point MVSNet [4] uses a small cost volume to generate a coarse depth map and a point-based iterative refinement network to output the full-resolution depth. In comparison, a standard MVSNet combined with our cascade cost volume outputs full-resolution depth with superior accuracy using less run-time and GPU memory than Point MVSNet [4]. Works in [37, 28] adopt advanced space partitioning to reduce memory consumption, but construct a fixed cost volume representation that lacks flexibility. Works in [41, 34, 22] build extra refinement modules with 2D CNNs to output a high-resolution prediction. Notably, such refinement modules can be used jointly with our proposed cascade cost volume.

3 Methodology

This section describes the detailed architecture of the proposed cascade cost volume which is complementary to the existing 3D cost volume based methods in multi-view stereo and stereo matching. Here, we use the representative MVSNet [44] and PSMNet [3] as the backbone networks to demonstrate the application of the cascade cost volume in multi-view stereo and stereo matching tasks respectively. Figure 2 shows the architecture of MVSNet+Ours.

3.1 Cost Volume Formulation

Learning-based multi-view stereo [44, 45, 4] and stereo matching [3, 15, 46, 48, 8] construct 3D cost volumes to measure the similarity between corresponding image patches and determine whether they match. Constructing a 3D cost volume involves three major steps in both multi-view stereo and stereo matching. First, the discrete hypothesis depth (or disparity) planes are determined. Then, the extracted 2D features of each view are warped to the hypothesis planes to construct feature volumes, which are finally fused together to build the 3D cost volume. Pixel-wise cost calculation is generally ambiguous in inherently ill-posed regions such as occluded areas, repeated patterns, textureless regions, and reflective surfaces. To address this, 3D CNNs at multiple scales are generally introduced to aggregate contextual information and regularize the possibly noise-contaminated cost volumes.
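The fusion step can be made independent of the number of views with a variance-based metric, as used in MVSNet-style networks. A minimal NumPy sketch (shapes and the function name are illustrative, not the paper's implementation):

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """Fuse per-view feature volumes into one cost volume by taking the
    variance across views, so any number of input views can be handled."""
    stack = np.stack(feature_volumes, axis=0)  # (views, C, D, H, W)
    return stack.var(axis=0)                   # (C, D, H, W)
```

Identical feature volumes produce zero cost everywhere, while disagreeing views produce a high cost, which is the matching signal the 3D CNN then regularizes.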

3D Cost Volumes in Multi-View Stereo

MVSNet [44] uses fronto-parallel planes at different depths as hypothesis planes, and the depth range is generally determined by the sparse reconstruction. The coordinate mapping is determined by the homography:

H_i(d) = K_i · R_i · (I − (t_1 − t_i) · n_1^T / d) · R_1^T · K_1^(−1),    (1)

where H_i(d) refers to the homography between the feature maps of the i-th view and the reference feature maps at depth d. K_i, R_i, and t_i refer to the camera intrinsics, rotation, and translation of the i-th view respectively, and n_1 denotes the principal axis of the reference camera. Differentiable homography warping is then used to warp 2D feature maps into the hypothesis planes of the reference camera to form feature volumes. To aggregate multiple feature volumes into one cost volume, a variance-based cost metric is adopted to handle an arbitrary number of input feature volumes.
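The plane-sweep homography above is a few matrix products. A small NumPy sketch of Equation 1 (argument names are illustrative; the reference view is indexed 1 as in the text):

```python
import numpy as np

def plane_homography(K_i, R_i, t_i, K_1, R_1, t_1, n_1, d):
    """Homography H_i(d) mapping reference-view pixel coordinates to view i
    for the fronto-parallel plane at depth d (Equation 1).
    K_* are 3x3 intrinsics, R_* 3x3 rotations, t_* 3-vectors,
    n_1 is the unit principal axis of the reference camera."""
    I = np.eye(3)
    return K_i @ R_i @ (I - np.outer(t_1 - t_i, n_1) / d) @ R_1.T @ np.linalg.inv(K_1)
```

As a sanity check, when view i coincides with the reference view (same intrinsics and pose), the homography reduces to the identity for every depth.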

3D Cost Volumes in Stereo Matching

PSMNet [3] uses disparity levels as hypothesis planes, and the disparity range is designed according to the specific scene. Since the left and right images have been rectified, the coordinate mapping is determined by an offset along the x-axis:

x_r = x_l − d,    (2)

where x_r refers to the transformed x-axis coordinate of the right view at disparity d, and x_l is the source x-axis coordinate of the left view. To build feature volumes, we warp the feature maps of the right view to the left view using this translation along the x-axis. There are multiple ways to build the final cost volume. GCNet [15] and PSMNet [3] concatenate the left and right feature volumes without decreasing the feature dimension. The work [47] uses the sum of absolute differences to compute the matching cost. DispNetC [23] computes a full correlation between the left and right feature volumes and produces only a single-channel correlation map for each disparity level. GwcNet [8] proposes group-wise correlation, splitting the features into groups and computing correlation maps within each group.
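The x-axis shift of Equation 2 and a DispNetC-style full correlation can be sketched as follows (a minimal NumPy illustration with zero padding; function names and shapes are assumptions, not the papers' implementations):

```python
import numpy as np

def shift_right_features(feat_r, d):
    """Warp right-view features (C, H, W) to the left view for integer
    disparity d: left pixel x matches right pixel x - d (Equation 2)."""
    C, H, W = feat_r.shape
    if d == 0:
        return feat_r.copy()
    warped = np.zeros_like(feat_r)
    warped[:, :, d:] = feat_r[:, :, :W - d]  # zero padding on the left border
    return warped

def correlation_cost_volume(feat_l, feat_r, max_disp):
    """Single-channel correlation cost volume of shape (max_disp, H, W),
    one correlation map per disparity level."""
    _, H, W = feat_l.shape
    vol = np.zeros((max_disp, H, W), dtype=feat_l.dtype)
    for d in range(max_disp):
        vol[d] = (feat_l * shift_right_features(feat_r, d)).mean(axis=0)
    return vol
```

Concatenation-based volumes (GCNet, PSMNet) would instead stack the two warped feature volumes along the channel axis, and group-wise correlation (GwcNet) would average within channel groups rather than over all channels.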

Figure 3: Left: the standard cost volume, where D is the number of hypothesis planes, W × H is the spatial resolution, and I is the plane interval. Right: how these factors influence efficiency (run-time and GPU memory) and accuracy.

3.2 Cascade Cost Volume

Figure 3 shows a standard cost volume of resolution W × H × D × F, where W × H denotes the spatial resolution, D is the number of hypothesis planes, and F is the channel number of the feature maps. As mentioned in [44, 45, 4], an increased number of hypothesis planes D, a larger spatial resolution W × H, and a finer plane interval I are likely to improve the reconstruction accuracy. However, GPU memory and run-time grow cubically as the resolution of the cost volume increases. As demonstrated in R-MVSNet [45], MVSNet [44] can only process a cost volume of limited maximum resolution on a 16 GB Tesla P100 GPU. To resolve the problems above, we propose a cascade cost volume formulation and predict the output in a coarse-to-fine manner.

Hypothesis Range

As shown in Figure 4, the depth (or disparity) range of the first stage, denoted by R_1, covers the entire depth (or disparity) range of the input scene. In the following stages, the hypothesis range can be narrowed based on the prediction from the previous stage. Consequently, we have R_{k+1} = w_k · R_k, where R_k is the hypothesis range at the k-th stage and w_k < 1 is the reducing factor of the hypothesis range.

Hypothesis Plane Interval

We denote the depth (or disparity) interval at the first stage as I_1. Compared with the commonly adopted single cost volume formulation [3, 44], the initial hypothesis plane interval is comparatively large, producing a coarse depth (or disparity) estimation. In the following stages, finer hypothesis plane intervals are applied to recover more detailed outputs. Therefore, we have I_{k+1} = p_k · I_k, where I_k is the hypothesis plane interval at the k-th stage and p_k < 1 is the reducing factor of the hypothesis plane interval.

Number of Hypothesis Planes

At the k-th stage, given the hypothesis range R_k and the hypothesis plane interval I_k, the corresponding number of hypothesis planes is determined by D_k = R_k / I_k. When the spatial resolution of a cost volume is fixed, a larger D_k generates more hypothesis planes and correspondingly more accurate results, but leads to increased GPU memory and run-time. Based on the cascade formulation, we can effectively reduce the total number of hypothesis planes, since the hypothesis range is remarkably reduced stage by stage while still covering the entire output range.
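The three quantities interact as a simple per-stage schedule. A sketch with constant reducing factors w and p (the values below are illustrative, not the paper's configuration):

```python
def cascade_schedule(R1, I1, w, p, num_stages):
    """Per-stage (range R_k, interval I_k, plane count D_k = R_k / I_k),
    with R_{k+1} = w * R_k and I_{k+1} = p * I_k for constant factors
    w, p < 1. Returns a list of (R_k, I_k, D_k) tuples."""
    stages = []
    R, I = float(R1), float(I1)
    for _ in range(num_stages):
        stages.append((R, I, round(R / I)))
        R, I = R * w, I * p
    return stages
```

With R_1 = 192, I_1 = 4, and w = p = 0.5, each stage keeps 48 planes while the range it covers shrinks from 192 to 96 to 48, which is the sense in which the cascade spends its hypotheses where they matter.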

Spatial Resolution

Following the practice of Feature Pyramid Networks [21], we double the spatial resolution of the cost volume at every stage along with the doubled resolution of the input feature maps. We define N as the total number of stages of the cascade cost volume; the spatial resolution of the cost volume at the k-th stage is then W/2^(N−k) × H/2^(N−k). We set N = 3 in multi-view stereo tasks and N = 2 in stereo matching tasks.
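The per-stage resolution rule above amounts to one integer division. A one-function sketch (assuming W and H are divisible by the scale factor):

```python
def stage_resolution(W, H, k, N):
    """Spatial resolution of the stage-k cost volume when each stage
    doubles the per-axis resolution and stage N is full resolution."""
    s = 2 ** (N - k)
    return W // s, H // s
```

For a 1600 × 1184 image with N = 3, the stages run at 400 × 296, 800 × 592, and 1600 × 1184.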

Figure 4: Illustration of hypothesis plane generation. R_k and D_k are respectively the hypothesis range and the number of hypothesis planes at the k-th stage. Pink lines are hypothesis planes. The yellow line indicates the depth (or disparity) map predicted at stage 1, which is used to determine the hypothesis range and hypothesis plane interval at stage 2.

Warping Operation

Applying the cascade cost volume formulation to multi-view stereo, we build on Equation 1 and rewrite the homography warping function at the (k+1)-th stage as:

H_i^(k+1)(d) = K_i · R_i · (I − (t_1 − t_i) · n_1^T / (d_m^k + Δ_m^(k+1))) · R_1^T · K_1^(−1),    (3)

where d_m^k denotes the predicted depth of pixel m at the k-th stage, and Δ_m^(k+1) is the residual depth of pixel m to be learned at the (k+1)-th stage.

Similarly, in stereo matching we reformulate Equation 2 based on our cascade cost volume. The pixel coordinate mapping at the (k+1)-th stage is expressed as:

x_r^(k+1) = x_l − (d_m^k + Δ_m^(k+1)),    (4)

where d_m^k denotes the predicted disparity of pixel m at the k-th stage, and Δ_m^(k+1) denotes the residual disparity of pixel m to be learned at the (k+1)-th stage.
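Concretely, the next stage samples its hypotheses per pixel around the previous prediction, as in Figure 4. A NumPy sketch of this hypothesis generation (centered, evenly spaced planes; the function name is illustrative):

```python
import numpy as np

def next_stage_hypotheses(prev_depth, num_planes, interval):
    """Per-pixel depth (or disparity) hypotheses for the next stage,
    centered on the map predicted at the previous stage.
    prev_depth: (H, W) array; returns (num_planes, H, W)."""
    offsets = (np.arange(num_planes) - (num_planes - 1) / 2.0) * interval
    return prev_depth[None, :, :] + offsets[:, None, None]
```

Warping then proceeds with Equation 3 (or 4), evaluated at these per-pixel hypothesis values instead of a single global sweep.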

(a) MVSNet [44]
(b) R-MVSNet [45]
(c) Point MVSNet [4]
(d) MVSNet+Ours
(e) Ground Truth
Figure 5: Multi-view stereo qualitative results of scan 10 on DTU dataset [1]. Top row: Generated point clouds of different methods and ground truth point clouds. Bottom row: Zoomed local areas.
Method             Rank   Mean   Family  Francis  Horse  Lighthouse  M60    Panther  Playground  Train
COLMAP [32, 31]    54.62  42.14  50.41   22.25    25.63  56.43       44.83  46.97    48.53       42.04
R-MVSNet [45]      40.12  48.40  69.96   46.65    32.59  42.95       51.88  48.80    52.00       42.38
Point-MVSNet [4]   38.12  48.27  61.79   41.15    34.20  50.79       51.97  50.85    52.38       43.06
ACMH [39]          15.00  54.82  69.99   49.45    45.12  59.04       52.64  52.37    58.34       51.61
P-MVSNet [22]      12.25  55.62  70.04   44.64    40.22  65.20       55.08  55.17    60.37       54.29
MVSNet [44]        52.00  43.48  55.99   28.55    25.07  50.79       53.96  50.86    47.90       34.69
MVSNet+Ours         9.50  56.42  76.36   58.45    46.20  55.53       56.11  54.02    58.17       46.56
Table 1: Statistical results of state-of-the-art multi-view stereo methods and ours on the Tanks and Temples dataset [17] (F-score; higher is better, lower rank is better).
Methods Mean Acc. (mm) Mean Comp. (mm) Overall (mm)
Camp[2] 0.835 0.554 0.695
Furu[6] 0.613 0.941 0.777
Tola[36] 0.342 1.190 0.766
Gipuma[7] 0.283 0.873 0.578
SurfaceNet[13] 0.450 1.040 0.745
R-MVSNet(D=256)[45] 0.385 0.459 0.422
R-MVSNet(D=512)[45] 0.383 0.452 0.417
P-MVSNet [22] 0.406 0.434 0.420
Point-MVSNet [4] 0.342 0.411 0.376
MVSNet(D=192)[44] 0.456 0.646 0.551
MVSNet(D=256)[44] 0.396 0.527 0.462
MVSNet+Ours 0.325 0.385 0.355
Table 2: Multi-view stereo quantitative results of different methods on DTU dataset [1] (lower is better).

3.3 Feature Pyramid

In order to obtain high-resolution depth (or disparity) maps, previous works [15, 3, 8, 48, 38, 26, 22] generally generate a comparatively low-resolution depth (or disparity) map using the standard cost volume and then upsample and refine it with 2D CNNs. The standard cost volume is constructed from the top-level feature maps, which contain high-level semantic features but lack low-level finer representations. Here, we follow Feature Pyramid Networks [21] and adopt feature maps with increasing spatial resolutions to build cost volumes of higher resolutions. For example, when applying the cascade cost volume to MVSNet [44], we build three cost volumes from the feature maps {P1, P2, P3} of the Feature Pyramid Network [21]. Their corresponding spatial resolutions are {1/16, 1/4, 1} of the input image size.

3.4 Loss Function

The cascade cost volume with N stages produces N − 1 intermediate outputs and a final prediction. We apply supervision to all the outputs, and the total loss is defined as:

L_total = Σ_{k=1}^{N} λ_k · L_k,

where L_k refers to the loss at the k-th stage and λ_k refers to its corresponding loss weight. We adopt the same loss function as the baseline networks in our experiments.
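The total loss is a plain weighted sum over stages. A minimal sketch (the weights shown are the (0.5, 1.0, 2.0) setting from the loss-weight ablation; the function name is illustrative):

```python
def cascade_loss(stage_losses, weights):
    """Total training loss L_total = sum_k lambda_k * L_k: a weighted sum
    of the per-stage losses, supervising every intermediate output and
    the final prediction."""
    assert len(stage_losses) == len(weights)
    return sum(w * l for l, w in zip(stage_losses, weights))
```

For example, cascade_loss([L1, L2, L3], [0.5, 1.0, 2.0]) weights the final, finest stage most heavily, which matches the preference observed in Table 8.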

4 Experiments

We evaluate the proposed cascade cost volume on multi-view stereo and stereo matching tasks.

Figure 6: Point cloud results of MVSNet+Ours on the intermediate set of Tanks and Temples dataset [17].
Stage  Resolution   >2mm(%)  >4mm(%)  >8mm(%)  Acc. (mm)  Comp. (mm)  Overall (mm)  GPU Mem. (MB)  Run-time (s)
1      1/4 × 1/4    0.310    0.171    0.163    0.595      0.672       0.602         2373           0.081
2      1/2 × 1/2    0.208    0.127    0.084    0.451      0.351       0.401         4093           0.243
3      1 × 1        0.174    0.112    0.077    0.325      0.385       0.355         5345           0.492
Table 3: Statistical results of different stages of the cascade cost volume. The statistics are collected on the DTU evaluation set [1] using MVSNet+Ours. The run-time is the cumulative sum of the current and previous stages.
Method         >1px   >2px   >3px   EPE    GPU Mem. (MB)
PSMNet [3]     9.46   5.19   3.80   0.887  6871
PSMNet+Ours    7.44   4.61   3.50   0.721  4124
GwcNet [8]     8.03   4.47   3.30   0.765  7277
GwcNet+Ours    7.46   4.16   3.04   0.649  4585
GANet11 [48]   -      -      -      0.95   6631
GANet11+Ours   11.0   5.97   4.28   0.90   5032
Table 4: Quantitative results of different stereo matching methods with and without the cascade cost volume on the Scene Flow dataset [23]. Accuracy and GPU memory consumption are included for comparison.

4.1 Multi-view stereo


DTU [1] is a large-scale MVS dataset consisting of 124 different scenes scanned under 7 different lighting conditions at 49 or 64 positions. The Tanks and Temples dataset [17] contains realistic scenes with small depth ranges. More specifically, its intermediate set consists of 8 scenes: Family, Francis, Horse, Lighthouse, M60, Panther, Playground, and Train. Following [45], we use the DTU training set [1] to train our method and test on the DTU evaluation set. To validate the generalization of our approach, we also test on the intermediate set of the Tanks and Temples dataset [17] using the model trained on the DTU dataset without fine-tuning.


We apply the proposed cascade cost volume to the representative MVSNet [44] and denote the resulting network as MVSNet+Ours. During training, we set the number of input images to 3. After balancing accuracy and efficiency, we adopt a three-stage cascade cost volume. From the first to the third stage, the number of depth hypotheses is 48, 32, and 8, and the corresponding depth interval is set to 4, 2, and 1 times the interval of MVSNet [44], respectively. Accordingly, the spatial resolution of the feature maps gradually increases and is set to 1/16, 1/4, and 1 of the original input image size. We follow the same input view selection and data pre-processing strategies as MVSNet [44] in both training and evaluation. We use the Adam optimizer; the learning rate is set to 0.001 for the first 10 epochs and halved after epochs 10, 12, and 14. The batch size is fixed to 16, and we train our method on 8 Nvidia GTX 1080Ti GPUs with 2 training samples on each GPU.

For quantitative evaluation on the DTU dataset [1], we calculate accuracy and completeness using the MATLAB code provided with the DTU dataset [1]. The percentage evaluation is implemented following MVSNet [44]. The F-score is used as the evaluation metric on the Tanks and Temples dataset [17] to measure both the accuracy and the completeness of the reconstructed point clouds. We use fusibile [29] for post-processing, which consists of three steps: photometric filtering, geometric consistency filtering, and depth fusion.

Figure 7: Qualitative results on the test set of KITTI2015 [25]. Top row: Input images, Second row: Results of PSMNet [3]. Third row: Results of GwcNet  [8]. Bottom row: Results of GwcNet with cascade cost volume (GwcNet+Ours).
Methods >2mm(%) >4mm(%) >8mm(%) Acc. (mm) Comp. (mm) Overall (mm) GPU Mem. (MB) Run-time (s)
MVSNet 0.271 0.173 0.124 0.456 0.646 0.551 10823 1.210
MVSNet-Cas 0.236 0.138 0.088 0.450 0.455 0.453 2373 0.322
MVSNet-Cas-Ups 0.215 0.126 0.079 0.419 0.338 0.379 6227 0.676
MVSNet+Ours 0.174 0.112 0.077 0.325 0.385 0.355 5345 0.492
Table 5: Comparisons of MVSNet [44] with different cascade cost volume formulations.
Methods All (%) Noc (%)
D1-bg D1-fg D1-all D1-bg D1-fg D1-all
DispNetC [23] 4.32 4.41 4.34 4.11 3.72 4.05
GC-Net [15] 2.21 6.16 2.87 2.02 5.58 2.61
CRL [27] 2.48 3.59 2.67 2.32 3.12 2.45
iResNet-i2e2 [20] 2.14 3.45 2.36 1.94 3.20 2.15
SegStereo [41] 1.88 4.07 2.25 1.76 3.70 2.08
PSMNet [3] 1.86 4.62 2.32 1.71 4.31 2.14
GwcNet [8] 1.74 3.93 2.11 1.61 3.49 1.92
GwcNet+Ours 1.59 4.03 2.00 1.43 3.55 1.78
Table 6: Comparison of different stereo matching methods on KITTI2015 benchmark [25].

Benchmark Performance

Quantitative results on the DTU evaluation set [1] are shown in Table 2. MVSNet [44] with the cascade cost volume outperforms the other methods [4, 22, 44, 45] in both completeness and overall quality and ranks first on the DTU dataset [1]. The qualitative results are shown in Figure 5: MVSNet+Ours generates more complete point clouds with finer details. Besides, we demonstrate the generalization ability of our trained model by testing on the Tanks and Temples dataset [17]. The corresponding quantitative results are reported in Table 1, where MVSNet+Ours achieves the state-of-the-art performance among learning-based multi-view stereo methods. The qualitative point cloud results on the intermediate set of the Tanks and Temples benchmark [17] are visualized in Figure 6. Note that we obtain the results of the above-mentioned methods by running their provided pre-trained models and code, except for R-MVSNet [45], which provides point cloud results produced with its own post-processing method.

To analyze the accuracy, GPU memory, and run-time at each stage, we evaluate MVSNet+Ours on the DTU dataset [1]. We provide comprehensive statistics in Table 3 and visualization results in Figure 8. In the coarse-to-fine process, the overall quality improves from 0.602 to 0.355. Accordingly, the GPU memory increases from 2,373 MB to 4,093 MB and 5,345 MB, and the run-time increases from 0.081 s to 0.243 s and 0.492 s.

4.2 Stereo Matching


The Scene Flow dataset [23] is a large-scale dataset containing 35,454 training and 4,370 testing stereo pairs of size 960 × 540 with accurate ground-truth disparity maps. We use the Finalpass version of the Scene Flow dataset [23], since it contains more motion blur and defocus and is closer to a real-world environment. KITTI 2015 [25] is a real-world dataset with dynamic street views. It contains 200 training pairs and 200 testing pairs.


On the Scene Flow dataset, we extend PSMNet [3], GwcNet [8], and GANet11 [48] with our proposed cascade cost volume and denote them as PSMNet+Ours, GwcNet+Ours, and GANet11+Ours. Balancing the trade-off between accuracy and efficiency, a two-stage cascade cost volume is applied, and the number of disparity hypotheses at each stage is 12. The corresponding disparity intervals are set to 4 and 1 pixels respectively, and the spatial resolution of the feature maps increases accordingly across the two stages. The maximum disparity is set to 192.

On the KITTI 2015 benchmark [25], we mainly compare GwcNet [8] and GwcNet+Ours. For a fair comparison, we follow the training details of the original networks. The evaluation metric on the Scene Flow dataset [23] is the end-point error (EPE), i.e., the mean absolute disparity error in pixels. For KITTI 2015 [25], the percentage of disparity outliers is used, where a pixel is counted as an outlier if its disparity error is larger than max(3 px, 0.05d), with d denoting the ground-truth disparity.

Benchmark Performance

Quantitative results of different stereo methods on the Scene Flow dataset [23] are shown in Table 4. By applying the cascade 3D cost volume, we boost the accuracy in all the metrics, and less memory is required owing to the cascade design with a smaller number of disparity hypotheses. Our method reduces the end-point error by 0.166, 0.116, and 0.050 on PSMNet [3] (0.887 vs. 0.721), GwcNet [8] (0.765 vs. 0.649), and GANet11 [48] (0.950 vs. 0.900) respectively. The obvious improvement on the >1px metric indicates that small errors are suppressed by the introduction of high-resolution cost volumes. On KITTI 2015 [25], Table 6 shows the percentage of disparity outliers evaluated for background, foreground, and all pixels. Compared with the original GwcNet [8], GwcNet+Ours achieves a higher rank on the leaderboard (date: Nov. 5, 2019). Several disparity estimation results on the KITTI 2015 test set [25] are shown in Figure 7.

Method                Depth Num.      Depth Interv.  Acc.    Comp.   Overall
MVSNet                192             1              0.4560  0.6460  0.5510
MVSNet-Cas (2-stage)  96, 96          2, 1           0.4352  0.4275  0.4314
MVSNet-Cas (3-stage)  96, 48, 48      2, 2, 1        0.4479  0.4141  0.4310
MVSNet-Cas (4-stage)  96, 48, 24, 24  2, 2, 2, 1     0.4354  0.4374  0.4364
MVSNet-Cas-share      96, 48, 48      2, 2, 1        0.4741  0.4282  0.4512
Table 7: Comparisons between MVSNet [44] and MVSNet with our cascade cost volume under different settings of depth hypothesis numbers and depth intervals. The statistics are collected on the DTU dataset [1].
Loss Weight (Loss1, Loss2, Loss3)  Acc. (mm)  Comp. (mm)  Overall (mm)
2.0, 1.0, 0.5                      0.4520     0.4219      0.4370
1.0, 1.0, 1.0                      0.4521     0.4166      0.4344
0.5, 1.0, 2.0                      0.4479     0.4141      0.4310
Table 8: Influence of the loss weights for the intermediate outputs and the final prediction.

4.3 Ablation Study

Extensive ablation studies are performed to validate the improved accuracy and efficiency of our approach. All results are obtained with the three-stage model on the DTU validation set [1] unless otherwise stated.

Cascade Stage Number

The quantitative results with different stage numbers are summarized in Table 7. In our implementation, we use MVSNet [44] with 192 depth hypotheses as the baseline model and replace its cost volume with our cascade design, which also uses a total of 192 depth hypotheses. Note that the spatial resolution at every stage is the same as that of the original MVSNet [44]. This extended MVSNet is denoted as MVSNet-Cas, with two-, three-, and four-stage variants. We find that as the number of stages increases, the overall quality first remarkably improves and then stabilizes.

Spatial Resolution

We then study how the spatial resolution of a cost volume affects reconstruction performance. Here, we compare MVSNet-Cas, which contains 3 stages that all share the same spatial resolution, and MVSNet-Cas-Ups, where the spatial resolution increases stage by stage and bilinear interpolation is used to upsample the feature maps. As shown in Table 5, the overall quality of MVSNet+Ours is clearly superior to that of MVSNet-Cas (0.453 vs. 0.355). Accordingly, the higher spatial resolution also leads to increased GPU memory (2373 vs. 5345 MB) and run-time (0.322 vs. 0.492 seconds).

Feature Pyramid

As shown in Table 5, building the cost volume from Feature Pyramid Network [21] features, denoted MVSNet+Ours, slightly improves the overall quality from 0.379 to 0.355, while the GPU memory (6227 vs. 5345 MB) and run-time (0.676 vs. 0.492 seconds) also decrease. Compared with the improvement between MVSNet-Cas and MVSNet-Cas-Ups, the increased spatial resolution remains more critical to the improvement of reconstruction accuracy.

(a) GT / Ref. Img.
(b) Stage 1
(c) Stage 2
(d) Stage 3
Figure 8: Reconstruction results at each stage. Top row: Ground-truth depth map and intermediate reconstructions. Bottom row: Error maps of the intermediate reconstructions.

Parameter Sharing in Cost Volume Regularization

We also analyze the effect of weight sharing in 3D cost volume regularization across all the stages. As shown in Table 7, the cascade cost volume with shared parameters, denoted MVSNet-Cas-share, achieves worse performance than MVSNet-Cas. This indicates that learning separate parameters for the cascade cost volumes at different stages further improves accuracy.

Loss Weight

A model with N stages contains N − 1 intermediate outputs and a final prediction. We conduct experiments with various combinations of loss weights for MVSNet+Ours on the DTU dataset [1]. As shown in Table 8, the proposed cascade cost volume prefers larger loss weights at the later stages.

4.4 Run-time and GPU Memory

Table 5 compares the GPU memory and run-time of MVSNet [44] with and without the cascade cost volume. Along with the remarkable accuracy improvement, the GPU memory decreases from 10,823 to 5,345 MB, and the run-time drops from 1.210 to 0.492 seconds. In Table 4, we compare the GPU memory of PSMNet [3], GwcNet [8], and GANet11 [48] with and without the proposed cascade cost volume: their GPU memory decreases by 40.0%, 37.0%, and 24.1% respectively.

5 Conclusion

In this paper, we present a GPU memory and computation efficient cascade cost volume formulation for high-resolution multi-view stereo and stereo matching. First, we decompose the single cost volume into a cascade formulation with multiple stages. Then, we narrow the depth (or disparity) range of each stage and reduce the total number of hypothesis planes by utilizing the depth (or disparity) map from the previous stage. Next, we use cost volumes of higher spatial resolution to generate outputs with finer details. The proposed cost volume is complementary to existing 3D cost-volume-based multi-view stereo and stereo matching approaches.


  • [1] H. Aanæs, R. R. Jensen, G. Vogiatzis, E. Tola, and A. B. Dahl (2016) Large-scale data for multiple-view stereopsis. IJCV 120(2), pp. 153–168.
  • [2] N. D. Campbell, G. Vogiatzis, C. Hernández, and R. Cipolla (2008) Using multiple hypotheses to improve depth-maps for multi-view stereo. In ECCV, pp. 766–779.
  • [3] J. Chang and Y. Chen (2018) Pyramid stereo matching network. In CVPR, pp. 5410–5418.
  • [4] R. Chen, S. Han, J. Xu, and H. Su (2019) Point-based multi-view stereo network. In ICCV.
  • [5] Y. Furukawa, C. Hernández, et al. (2015) Multi-view stereo: a tutorial. CGV 9(1-2), pp. 1–148.
  • [6] Y. Furukawa and J. Ponce (2009) Accurate, dense, and robust multiview stereopsis. TPAMI 32(8), pp. 1362–1376.
  • [7] S. Galliani, K. Lasinger, and K. Schindler (2015) Massively parallel multiview stereopsis by surface normal diffusion. In ICCV, pp. 873–881.
  • [8] X. Guo, K. Yang, W. Yang, X. Wang, and H. Li (2019) Group-wise correlation stereo network. In CVPR, pp. 3273–3282.
  • [9] W. Hartmann, S. Galliani, M. Havlena, L. Van Gool, and K. Schindler (2017) Learned multi-patch similarity. In ICCV, pp. 1586–1594.
  • [10] H. Hirschmuller (2005) Accurate and efficient stereo processing by semi-global matching and mutual information. In CVPR, Vol. 2, pp. 807–814.
  • [11] P. Huang, K. Matzen, J. Kopf, N. Ahuja, and J. Huang (2018) DeepMVS: learning multi-view stereopsis. In CVPR, pp. 2821–2830.
  • [12] S. Im, H. Jeon, S. Lin, and I. S. Kweon (2019) DPSNet: end-to-end deep plane sweep stereo. arXiv:1905.00538.
  • [13] M. Ji, J. Gall, H. Zheng, Y. Liu, and L. Fang (2017) SurfaceNet: an end-to-end 3D neural network for multiview stereopsis. In ICCV, pp. 2307–2315.
  • [14] A. Kar, C. Häne, and J. Malik (2017) Learning a multi-view stereo machine. In NeurIPS, pp. 365–376.
  • [15] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry (2017) End-to-end learning of geometry and context for deep stereo regression. In ICCV, pp. 66–75.
  • [16] A. Klaus, M. Sormann, and K. Karner (2006) Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure. In ICPR, 2006, Vol. 3, pp. 15–18. Cited by: §2.
  • [17] A. Knapitsch, J. Park, Q. Zhou, and V. Koltun (2017) Tanks and temples: benchmarking large-scale scene reconstruction. TOG 36 (4), pp. 78. Cited by: §1, Table 1, Figure 6, §4.1, §4.1, §4.1.
  • [18] K. N. Kutulakos and S. M. Seitz (2000) A theory of shape by space carving. IJCV 38 (3), pp. 199–218. Cited by: §2.
  • [19] M. Lhuillier and L. Quan (2005) A quasi-dense approach to surface reconstruction from uncalibrated images. TPAMI 27 (3), pp. 418–433. Cited by: §2.
  • [20] Z. Liang, Y. Feng, Y. Guo, H. Liu, W. Chen, L. Qiao, L. Zhou, and J. Zhang (2018) Learning for disparity estimation through feature constancy. In CVPR, 2018, pp. 2811–2820. Cited by: Table 6.
  • [21] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In CVPR, 2017, pp. 2117–2125. Cited by: §3.2, §3.3, §4.3.
  • [22] K. Luo, T. Guan, L. Ju, H. Huang, and Y. Luo (2019-10) P-mvsnet: learning patch-wise matching confidence aggregation for multi-view stereo. In ICCV, 2019, Cited by: §1, §1, §2, §2, §3.3, Table 1, Table 2, §4.1.
  • [23] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox (2016) A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR, 2016, pp. 4040–4048. Cited by: §1, §3.1, §4.2, §4.2, §4.2, Table 4, Table 6.
  • [24] X. Mei, X. Sun, W. Dong, H. Wang, and X. Zhang (2013) Segment-tree based cost aggregation for stereo matching. In CVPR, 2013, pp. 313–320. Cited by: §2.
  • [25] M. Menze and A. Geiger (2015) Object scene flow for autonomous vehicles. In CVPR, 2015, pp. 3061–3070. Cited by: Figure 7, §4.2, §4.2, §4.2, Table 6.
  • [26] G. Nie, M. Cheng, Y. Liu, Z. Liang, D. Fan, Y. Liu, and Y. Wang (2019) Multi-level context ultra-aggregation for stereo matching. In CVPR, 2019, pp. 3283–3291. Cited by: §1, §1, §2, §3.3.
  • [27] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan (2017) Cascade residual learning: a two-stage convolutional neural network for stereo matching. In ICCV, 2017, pp. 887–895. Cited by: Table 6.
  • [28] G. Riegler, A. Osman Ulusoy, and A. Geiger (2017) OctNet: learning deep 3d representations at high resolutions. In CVPR, 2017, pp. 3577–3586. Cited by: §2.
  • [29] S. Galliani, K. Lasinger, and K. Schindler (2015) Massively parallel multiview stereopsis by surface normal diffusion. In ICCV, 2015, pp. 873–881. Cited by: §4.1.
  • [30] D. Scharstein and R. Szeliski (2002) A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV 47 (1-3), pp. 7–42. Cited by: §2.
  • [31] J. L. Schonberger and J. Frahm (2016) Structure-from-motion revisited. In CVPR, 2016, pp. 4104–4113. Cited by: Table 1.
  • [32] J. L. Schönberger, E. Zheng, J. Frahm, and M. Pollefeys (2016) Pixelwise view selection for unstructured multi-view stereo. In ECCV, 2016, pp. 501–518. Cited by: §2, Table 1.
  • [33] S. M. Seitz and C. R. Dyer (1999) Photorealistic scene reconstruction by voxel coloring. IJCV 35 (2), pp. 151–173. Cited by: §2.
  • [34] X. Song, X. Zhao, H. Hu, and L. Fang (2018) EdgeStereo: a context integrated residual pyramid network for stereo matching. In ACCV, 2018, pp. 20–35. Cited by: §1, §2.
  • [35] J. Sun, N. Zheng, and H. Shum (2003) Stereo matching using belief propagation. TPAMI (7), pp. 787–800. Cited by: §2.
  • [36] E. Tola, C. Strecha, and P. Fua (2012) Efficient large-scale multi-view stereo for ultra high-resolution image sets. MVA 23 (5), pp. 903–920. Cited by: §2, Table 2.
  • [37] P. Wang, Y. Liu, Y. Guo, C. Sun, and X. Tong (2017) O-cnn: octree-based convolutional neural networks for 3d shape analysis. TOG 36 (4), pp. 72. Cited by: §2.
  • [38] Z. Wu, X. Wu, X. Zhang, S. Wang, and L. Ju (2019-10) Semantic stereo matching with pyramid cost volumes. In ICCV, 2019, Cited by: §1, §1, §3.3.
  • [39] Q. Xu and W. Tao (2018) Multi-view stereo with asymmetric checkerboard propagation and multi-hypothesis joint view selection. arXiv:1805.07920. Cited by: Table 1.
  • [40] G. Yang, J. Manela, M. Happold, and D. Ramanan (2019) Hierarchical deep stereo matching on high-resolution images. In CVPR, 2019, pp. 5515–5524. Cited by: §2.
  • [41] G. Yang, H. Zhao, J. Shi, Z. Deng, and J. Jia (2018) SegStereo: exploiting semantic information for disparity estimation. In ECCV, 2018, pp. 636–651. Cited by: §1, §2, Table 6.
  • [42] Q. Yang (2012) A non-local cost aggregation method for stereo matching. In CVPR, 2012, pp. 1402–1409. Cited by: §2.
  • [43] Y. Yao, S. Li, S. Zhu, H. Deng, T. Fang, and L. Quan (2017) Relative camera refinement for accurate dense reconstruction. In 3DV, 2017, pp. 185–194. Cited by: §2.
  • [44] Y. Yao, Z. Luo, S. Li, T. Fang, and L. Quan (2018) MVSNet: depth inference for unstructured multi-view stereo. In ECCV, 2018, pp. 767–783. Cited by: Figure 1, Figure 2, §1, §1, §1, §1, §2, 5(a), §3.1, §3.1, §3.2, §3.2, §3.3, Table 1, Table 2, §3, §4.1, §4.1, §4.1, §4.3, §4.4, Table 5, Table 7.
  • [45] Y. Yao, Z. Luo, S. Li, T. Shen, T. Fang, and L. Quan (2019) Recurrent mvsnet for high-resolution multi-view stereo depth inference. In CVPR, 2019, pp. 5525–5534. Cited by: Figure 1, §1, §1, §2, 5(b), §3.1, §3.2, Table 1, Table 2, §4.1, §4.1.
  • [46] J. Zbontar and Y. LeCun (2015) Computing the stereo matching cost with a convolutional neural network. In CVPR, 2015, pp. 1592–1599. Cited by: §2, §3.1.
  • [47] J. Zbontar and Y. LeCun (2016) Stereo matching by training a convolutional neural network to compare image patches. JMLR 17, pp. 1–32. Cited by: §1, §3.1.
  • [48] F. Zhang, V. Prisacariu, R. Yang, and P. H. Torr (2019) GA-net: guided aggregation net for end-to-end stereo matching. In CVPR, 2019, pp. 185–194. Cited by: §1, §1, §2, §3.1, §3.3, §4.2, §4.2, §4.4, Table 4.
  • [49] K. Zhang, J. Lu, and G. Lafruit (2009) Cross-based local stereo matching using orthogonal integral images. TCSVT 19 (7), pp. 1073–1079. Cited by: §2.