Shift Convolution Network for Stereo Matching
In this paper, we present the Shift Convolution Network (ShiftConvNet), which performs matching between two feature maps for stereo disparity estimation. The proposed method quickly produces a highly accurate disparity map from a pair of stereo images. A shift convolution layer is proposed to replace the traditional correlation layer for patch comparison between the two feature maps. By learning the matching process with a novel convolutional architecture, ShiftConvNet produces better results than DispNet-C [1] while also running quickly, at 5 fps. Moreover, a proposed auto shift convolution refinement module yields further improvement. The proposed approach is evaluated on the FlyingThings3D benchmark, where it achieves state-of-the-art results. Code will be made available on GitHub.
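To make the idea of replacing a correlation layer with shift-based comparison concrete, below is a minimal PyTorch sketch of one plausible realization: the right feature map is shifted horizontally over a range of candidate disparities and each shifted copy is compared with the left feature map by a small learned convolution, producing a matching volume. All names, layer sizes, and the use of concatenation followed by a convolution are illustrative assumptions, not the authors' ShiftConvNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShiftCompare(nn.Module):
    """Illustrative shift-based matching layer (hypothetical sketch).

    Compares a left feature map with horizontally shifted copies of the
    right feature map and outputs a (B, max_disp, H, W) matching volume.
    """

    def __init__(self, channels: int, max_disp: int):
        super().__init__()
        self.max_disp = max_disp
        # Learned comparison applied to the concatenated features at each shift.
        self.compare = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, left_feat: torch.Tensor, right_feat: torch.Tensor) -> torch.Tensor:
        _, _, _, w = left_feat.shape
        costs = []
        for d in range(self.max_disp):
            # Shift the right feature map d pixels to the right (zero padding),
            # so left pixel x is compared with right pixel x - d.
            shifted = F.pad(right_feat, (d, 0, 0, 0))[:, :, :, :w]
            costs.append(self.compare(torch.cat([left_feat, shifted], dim=1)))
        # Stack the per-shift responses along the channel dimension.
        return torch.cat(costs, dim=1)


# Example usage with dummy feature maps.
left = torch.randn(1, 32, 64, 128)
right = torch.randn(1, 32, 64, 128)
volume = ShiftCompare(channels=32, max_disp=48)(left, right)
print(volume.shape)  # torch.Size([1, 48, 64, 128])
```

In this sketch the comparison is learned rather than a fixed dot product, which is one way a shift-based layer could differ from a plain correlation layer; the actual design choices in ShiftConvNet are described in the full paper.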