Humans are highly competent at recovering 3D scene geometry and object motion at a very detailed level, e.g. per-pixel depth and optical flow; they obtain a rich 3D understanding of geometry and object movement from visual perception. 3D perception from images and videos is widely applicable to many real-world tasks such as augmented reality, video analysis and robotic navigation [3, 4]. In this paper, we aim at a learning framework for jointly inferring dense 3D geometry and motion understanding without the use of annotated training data. Instead, we use only unlabeled videos to provide self-supervision. The 3D geometry estimation includes per-pixel depth estimation from a single image, and the motion understanding includes 2D optical flow, camera motion and 3D object motion.
Recently, for unsupervised single image depth estimation, impressive progress [8, 9, 10, 5] has been made in training a deep network that takes only unlabeled samples as input and uses 3D reconstruction for supervision, yielding even better depth estimation results than supervised methods in outdoor scenarios. The core idea is to supervise depth estimation through view synthesis via rigid structure from motion (SfM). The image of one view (source) is warped to another (target) based on the predicted depth map of the target view and the relative 3D camera motion. The photometric error between the warped frame and the target frame is used to supervise the learning. A similar idea also applies when stereo image pairs are available.
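The warping operation at the heart of this supervision can be sketched for a single pixel as follows. This is a minimal, hypothetical illustration of rigid-SfM reprojection (the actual systems operate on dense maps with bilinear sampling); `K`, `R`, `t` are standard pinhole-camera notation, not symbols taken from any released code.

```python
import numpy as np

def project_to_source(p_t, depth, K, R, t):
    """Map a target pixel (u, v) with predicted depth to source-view coords."""
    uv1 = np.array([p_t[0], p_t[1], 1.0])
    X_t = depth * (np.linalg.inv(K) @ uv1)   # back-project to 3D (target frame)
    X_s = R @ X_t + t                        # rigid transform into source frame
    uvw = K @ X_s                            # perspective projection
    return uvw[:2] / uvw[2]                  # normalize by projected depth

# Sanity check: with an identity pose, the pixel maps to itself.
K = np.array([[720.0, 0.0, 620.0],
              [0.0, 720.0, 180.0],
              [0.0, 0.0, 1.0]])
p_s = project_to_source((100.0, 50.0), 10.0, K, np.eye(3), np.zeros(3))
```

The photometric loss then compares the target image to the source image sampled at these projected coordinates.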
However, real-world videos may contain moving objects, which is inconsistent with the rigid scene assumption commonly used in these frameworks. Zhou et al. try to avoid such errors by introducing an explainability mask, where both pixels from moving objects and occluded regions are ignored during training. Vijayanarasimhan et al. separately tackle moving objects with a multi-rigid-body model by estimating object masks and object pivots from the motion network. This system requires placing a limit on the number of objects, and yields worse geometry estimation results than those from Zhou et al. or other systems which do not explicitly model moving objects. As illustrated in Fig. 1(d), we explicitly compute the moving object mask from our jointly estimated depth (Fig. 1(b)) and optical flow (Fig. 1(c)), which distinguishes between motion induced by the camera and motion of the objects themselves. Compared to the corresponding results (same column) from other SOTA approaches which specifically handle each task, the visualization results from our joint estimation are noticeably better on all three tasks.
On the other hand, optical flow estimates dense 2D pixel movements, which models both rigid and non-rigid motion in the scene. Ren et al. first proposed to supervise a flow network through view synthesis; later, Wang et al. introduced an occlusion-aware learning strategy to avoid spurious view matches. Nevertheless, these systems lack an understanding of the holistic 3D geometry, which makes it difficult to regularize the learning process, e.g. on occluded regions. Unlike previous approaches, this paper proposes to model dense 3D motion for unsupervised/self-supervised learning, jointly considering depths and optical flow and encouraging their inter-consistency. Specifically, given two consecutive frames, we interpret the 2D pixel motion as caused by the movement of a 3D point cloud, a.k.a. 3D scene flow, by integrating optical flow and depth cues. Then, the movement of those 3D points is decomposed w.r.t. camera motion and object motion, so that every pixel in the images is holistically understood and thus counted in the 3D estimation. We show that the two types of information mutually reinforce each other; this provides a significant performance boost over other SOTA methods.
We illustrate the framework of EPC++ in Fig. 2. Specifically, given two consecutive frames, we first introduce an optical flow network which produces forward and backward flow maps. Then, a motion network outputs their relative camera motion, and a single-view depth network outputs depths for the two images. The three types of information (2D flow, camera pose and depth maps) are fused into a holistic motion parser (HMP), where the visibility/non-occlusion mask, the moving object segmentation mask, and the per-pixel 3D motions of the rigid background and the moving objects are recovered following geometric rules and consistency.
The 3D motion flow of the rigid background is computed using the depths of the target image and the relative camera pose. In addition, a full 3D scene flow can be computed given the optical flow and the depths of the two images. In principle, for pixels that are non-occluded in the source view, subtracting the two 3D flows should yield zero error in rigid regions, while inside a moving object region the residual gives the 3D motion of the moving objects, which is significantly larger than that of the background and therefore yields a mask of moving objects. For pixels that are occluded, we can use the motion induced by depth and camera pose to inpaint the optical flow, which is more accurate than the bilinear interpolation adopted by [14, 6]. We use these principles to guide the design of the losses and learning strategies for the networks; all operations inside the parser are easy to compute and differentiable. Therefore, the system can be trained end-to-end, which helps the learning of both depth estimation and optical flow prediction.
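The residual-based separation described above can be sketched as follows. This is an assumed, simplified form (not the paper's exact implementation): the per-pixel residual between the full 3D scene flow and the rigid flow is thresholded at visible pixels to obtain a moving-object mask.

```python
import numpy as np

def moving_object_mask(scene_flow, rigid_flow, visibility, thresh=0.5):
    """scene_flow, rigid_flow: HxWx3 arrays; visibility: HxW boolean mask.

    The residual is near zero for rigid background and large on moving
    objects; occluded pixels are excluded because their flow is unreliable.
    """
    residual = np.linalg.norm(scene_flow - rigid_flow, axis=-1)
    return (residual > thresh) & visibility

scene = np.zeros((2, 2, 3))
scene[0, 0] = [2.0, 0.0, 0.0]        # one pixel with independent 3D motion
rigid = np.zeros((2, 2, 3))          # camera-induced motion already removed
vis = np.ones((2, 2), dtype=bool)
mask = moving_object_mask(scene, rigid, vis)
```

Only the pixel whose scene flow departs from the rigid flow is marked as moving.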
Last but not least, for a monocular video, depth and object motion are entangled under the projective camera model. For example, from the viewpoint of a camera, a very close object moving with the camera is equivalent to a far object keeping relatively still, yielding scale confusion for depth estimation. This is an ill-posed problem; we address it by incorporating stereo image pairs into the learning framework during the training stage. Finally, as shown in Fig. 1, EPC++ successfully decomposes the background and foreground motion; thus every pixel which contributes to the photometric error can be explained and interpreted explicitly, yielding better depth, optical flow and motion segmentation results than approaches which are specifically designed for a single task.
We conducted extensive experiments on the public KITTI 2015 dataset, and evaluated our results in multiple aspects including depth estimation, optical flow estimation, 3D scene flow estimation, camera motion and moving object segmentation. As elaborated in Sec. 4, EPC++ significantly outperforms other SOTA methods on all tasks.
2 Related Work
Estimating single-view depth and predicting 3D motion and optical flow from images have long been central problems in computer vision. Here we summarize the most related works in various aspects, without enumerating them all due to space limitations.
Structure from motion and single view geometry. Geometry-based methods estimate 3D from a given video with feature matching or patch matching, such as PatchMatch Stereo, SfM, SLAM [18, 19] and DTAM, and are effective and efficient in many cases. When there are dynamic motions inside a monocular video, there is usually scale confusion for each non-rigid movement; thus regularization through low-rank constraints, an orthographic camera, rigidity, or a fixed number of moving objects is necessary in order to obtain a unique solution. However, those methods assume that the 2D matching is reliable, which can fail where there is low texture or a drastic change of visual perspective. More importantly, those methods cannot be extended to single-view reconstruction.
Traditionally, specific and strong assumptions are necessary for estimating depth from single-view geometry, such as computing vanishing points, following assumptions on the BRDF [25, 26], or extracting the scene layout with major-plane and box representations [27, 28]. These methods typically only obtain sparse geometry representations, and some of them require certain assumptions (e.g. Lambertian, Manhattan world).
Supervised depth estimation with CNN.
Deep convolutional networks (DCNs) developed in recent years provide stronger feature representations. Dense geometry, i.e. pixel-wise depth and normal maps, can be readily estimated from a single image [29, 30, 31, 32, 33] and trained in an end-to-end manner. The learned CNN models show significant improvement compared to other methods based on hand-crafted features [34, 35, 36]. Others tried to improve the estimation further by appending a conditional random field (CRF) [37, 38, 39, 40]. However, all these supervised methods require densely labeled ground truth, which is expensive to obtain in natural environments.
Unsupervised single image depth estimation. Most recently, many CNN-based methods have been proposed for single-view geometry estimation with supervision from stereo images or videos, yielding impressive results. Some of them rely on stereo image pairs [41, 42, 8], e.g. warping one image to another given a known stereo baseline. Others rely on monocular videos [9, 43, 44, 10, 45, 5] by incorporating 3D camera pose estimation from a motion network. However, as discussed in Sec. 1, most of these models only consider a rigid scene, where moving objects are omitted. Vijayanarasimhan et al. model rigid moving objects with motion masks, but their estimated depths are negatively affected by such an explicit rigid-object assumption compared to models without object modeling. Moreover, these methods are mostly based solely on the photometric error, which relies on a Lambertian assumption and is not robust in natural scenes with highly variable lighting conditions. To handle this problem, supervision based on local structural errors, such as local image gradients, non-local smoothness and structural similarity (SSIM) [8, 46], yields more robust matching and shows additional improvement on depth estimation. Most recently, Godard et al. further improved the results by jointly considering stereo and monocular images with updated neural architectures. Unlike those approaches, we also jointly consider the learning of an optical flow network, from which more robust matching can be learned, yielding better estimated depths.
Optical flow estimation. Similarly, there is a historical road map for optical flow estimation, from traditional dense feature matching with local patterns, such as patch matching, piecewise matching and SIFT flow, to supervised learning based on convolutional neural networks (CNNs), such as FlowNet, SPyNet and PWC-Net. The latter produce significantly better performance due to deep hierarchical features with larger yet flexible context. However, fully supervised strategies require high-quality labeled data for generalization, which is non-trivial to obtain.
The unsupervised learning of optical flow with a neural network was first introduced in [14, 54] by training CNNs with image synthesis and local flow smoothness. Most recently, in [6, 55], the authors improved the results by explicitly computing occlusion masks, where photometric errors are omitted during training, yielding more robust results. However, these works lack a 3D scene geometry understanding of the optical flow, e.g. depths and camera motion from the videos. In our case, we leverage such an understanding and show a significant improvement over previous SOTA results.
3D Scene flow by joint depth and optical flow estimation. Estimating 3D scene flow [56, 57] is the task of finding per-pixel dense flow in 3D given a pair of images, which requires joint consideration of the depths and optical flow of consecutive frames. Traditional algorithms estimate depths from stereo images [3, 58], or from the given image pairs under a rigidity constraint, and try to decompose the scene into piecewise moving planes in order to find correspondences with larger context [59, 60]. Most recently, Behl et al. adopted semantic object instance segmentation and supervised optical flow from DispNet to solve large displacements of objects, yielding the best results on the KITTI dataset.
Most recently, works on unsupervised learning have begun to consider depths and optical flow together. Yin et al. use a residual FlowNet built on ResNet to refine the rigid flow into the full optical flow, but they do not account for moving objects or handle occlusion, and the depth estimation does not benefit from the learning of optical flow. Ranjan et al. paste the optical flow from objects onto the rigid flow from background and ego-motion to explain the whole scene in an adversarial collaboration. However, rather than measuring 3D motion consistency, they divide the whole image with a selected threshold. In our case, we choose to model from the perspective of 3D scene flow, which is embedded in our unsupervised learning pipeline, yielding better results even with weaker backbone networks, i.e. VGG, demonstrating the effectiveness of EPC++.
Segmenting moving objects. Finally, since our algorithm decomposes static background and moving objects, our approach is also related to segmenting moving objects from a given video. Contemporary SOTA methods depend on supervision from human labels, adopting CNN image features [65, 66] or RNN temporal modeling.
For unsupervised video segmentation, saliency estimation based on 2D optical flow is often used to discover and track the objects [68, 69], and long trajectories of the moving objects based on optical flow need to be considered. However, these approaches commonly handle non-rigid objects within a relatively static background, which is outside the major scope of this paper. Most recently, Barnes et al. showed that explicitly modeling moving things with a 3D prior map can avoid visual odometry drifting. We also consider moving object segmentation, under an unsupervised setting with videos only. In the near future, we will try to generalize the approach to more common videos to estimate 3D object movement.
3 Learning with Holistic 3D Motion Understanding
As discussed in Sec. 1, we obtain per-pixel 3D motion understanding by jointly modeling depth and optical flow, building on learning methods that consider depth and optical flow independently. We refer to those papers for preliminaries due to space limitations.
In the following, we first elaborate on the geometric relationship between the two types of information, and then discuss how we leverage the rules of 3D geometry in the EPC++ learning framework (Sec. 3.1) through the HMP. Finally, we clarify all our loss functions and training strategies, which consider both stereo and monocular images in training, with awareness of the 3D motion dissected by the HMP.
3.1 Geometrical understanding with 3D motion
Given two images, i.e. a target view image and a source view image, together with their depth maps, the relative camera transformation from target to source, and the optical flow from target to source, for one pixel in the target image the corresponding pixel in the source image can be found either through camera perspective projection or through the given optical flow, and the two should be consistent. Formally, the computation can be written as,
where the depth value of the target view at the pixel, the camera intrinsic matrix, and the homogeneous coordinate of the pixel enter the projection.
The scaling function rescales a vector by its last element. Here, the last element is the projected depth value of the pixel in the source view, which we refer to as the projected depth; the remaining term is the 3D motion of dynamic moving objects relative to the world. In this way, every pixel in the target image is explained geometrically. Note that the corresponding pixel can fall outside of the source image, or be non-visible in it when computing optical flow, a case which is also evaluated in the optical flow estimation of the KITTI dataset 111http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow.
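The scaling function described above can be sketched in a few lines. This is an illustrative helper under standard homogeneous-coordinate conventions (the function name is ours, not the paper's notation): dividing by the last element yields pixel coordinates, and the last element itself is the projected depth.

```python
import numpy as np

def rescale_by_last(v):
    """Rescale a homogeneous vector by its last element.

    After rescaling, the first two components are pixel coordinates; the
    original last element v[-1] is the projected depth at that pixel.
    """
    return v / v[-1]

h = np.array([200.0, 80.0, 4.0])   # homogeneous image-plane vector
p = rescale_by_last(h)             # pixel coords (50, 20); projected depth was 4
```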
Commonly, as proposed by previous works [6, 7], one may design CNN models for these predictions. After computing the corresponding pixels, we can supervise those models by synthesizing a target image,
using the photometric loss,
The synthesis is implemented using a spatial transformer network, so the models can be trained end-to-end; the visibility mask equals one when the pixel is also visible in the source view, and zero if it is occluded or falls out of view.
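The masked photometric loss can be sketched as follows. This is a hedged, minimal form (the bilinear sampling of the spatial transformer is elided; we assume the warped image is already given, and the mask convention of 1 = visible matches the description above).

```python
import numpy as np

def photometric_loss(target, warped, visibility):
    """Mean absolute photometric error over visible pixels only.

    target, warped: HxWx3 images; visibility: HxW mask (1 = visible).
    """
    diff = np.abs(target - warped) * visibility[..., None]
    return diff.sum() / (visibility.sum() * target.shape[-1] + 1e-8)

tgt = np.ones((4, 4, 3))
wrp = np.zeros((4, 4, 3))
vis = np.zeros((4, 4))
vis[:2] = 1.0                      # only the top half is treated as visible
loss = photometric_loss(tgt, wrp, vis)
```

Occluded pixels contribute nothing, so the loss never penalizes matches that cannot exist.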
Dropping the depth prediction models in Eq. (1) and adding flow smoothness yields unsupervised learning of optical flow. On the other hand, dropping the optical flow model, assuming there is no dynamic motion in the scene in Eq. (1), and adding depth smoothness yields the unsupervised learning of depths and motions in [9, 5].
In our case, to holistically model the 3D motion, we adopt CNN models for all of optical flow, depth and motion estimation. However, dynamic motion and depth are two conjugate pieces of information: there always exists a motion pattern that can exactly compensate for the error caused by inaccurate depth estimation. Considering that RGB-based matching can also be noisy, this yields an ill-posed problem with trivial solutions that prevent stable learning. Therefore, we need to design effective learning strategies with strong regularization to provide effective supervision for all these networks, which we describe later.
Holistic 3D motion parser (HMP). In order to make the learning process feasible, we first need to distinguish between the motion from the rigid background/camera motion and from dynamic moving objects, and between visible and occluded regions: at visible rigid regions we can rely on structure from motion for training depths, and at moving regions we can recover the 3D object motions. As illustrated in Fig. 2, we handle this through the HMP, which takes in the information provided by three networks, i.e. DepthNet, MotionNet and FlowNet, and outputs the desired dissected dense motion maps of the background and the moving things respectively.
Formally, given depths of both images and , the learned forward/backward optical flow , and the relative camera pose , the motion induced by rigid background and dynamic moving objects from HMP are computed as,
where a back-projection function maps from 2D to 3D space. Note that the dynamic per-pixel 3D motion is defined at visible regions, and the visibility mask mentioned in Eq. (2) follows the rule of occlusion estimation from optical flow as presented in prior work; we refer readers to the original paper for further details of the computation due to space limitations. A soft moving object mask is computed for separating the rigid background and dynamic objects, controlled by an annealing hyper-parameter which is changed at different stages of training, as elaborated in Sec. 3.2.2.
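A sketch of the parser's outputs, under assumed forms (the paper's exact soft-mask formula is not reproduced here): the dynamic motion is the visible residual between scene flow and rigid flow, and the soft mask saturates with the residual magnitude, controlled by the annealing parameter `alpha`.

```python
import numpy as np

def parse_motion(scene_flow, rigid_flow, visibility, alpha):
    """Split full 3D motion into a dynamic residual and a soft moving mask.

    scene_flow, rigid_flow: HxWx3; visibility: HxW (1 = visible).
    alpha anneals the mask: alpha = 0 disables it (early training),
    a small constant later turns the consistency on.
    """
    dynamic = (scene_flow - rigid_flow) * visibility[..., None]
    magnitude = np.linalg.norm(dynamic, axis=-1)
    soft_mask = 1.0 - np.exp(-alpha * magnitude)  # ~0 rigid, ->1 moving
    return dynamic, soft_mask

scene = np.zeros((2, 2, 3))
scene[1, 1] = [0.0, 0.0, 5.0]      # one pixel with large residual motion
rigid = np.zeros((2, 2, 3))
vis = np.ones((2, 2))
dyn, S = parse_motion(scene, rigid, vis, alpha=1.0)
```

With `alpha = 0` the mask is identically zero, matching the stage-wise schedule in which consistency is initially off.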
After the HMP, the rigid and dynamic 3D motions are disentangled from the whole 3D motion, so that we can apply various supervision accordingly based on our structural error and regularization, which drives the joint learning of the depth, motion and flow networks.
3.2 Training the networks.
In this section, we first introduce the networks and then the losses we designed for unsupervised learning.
3.2.1 Network architectures.
For depth prediction and motion estimation between two consecutive frames, we adopt the network architecture from Yang et al., which uses a VGG-based encoder and doubles the input resolution compared to Zhou et al. on KITTI, to better capture image details. In addition, for motion prediction, we drop the decoder for their explainability mask prediction, since we can directly infer the occlusion mask and moving object masks through the HMP module to avoid erroneous matching.
For optical flow prediction, rather than using the FlowNet adopted in prior work, we use a lightweight network architecture, i.e. PWC-Net, to learn robust matching; it is almost 10× smaller than FlowNet while producing higher matching accuracy in our unsupervised setting.
We will describe the details of all these networks in our experimental section Sec. 4.
3.2.2 Training losses.
After the HMP (Eq. (4)), the system generates various outputs, including: 1) the depth map from a single image, 2) the relative camera motion, 3) the optical flow map, 4) the rigid background 3D motion, 5) the dynamic 3D motion, 6) the visibility mask, and 7) the moving object mask. Different loss terms are used to effectively train the corresponding networks, as illustrated in the pipeline shown in Fig. 2.
Structural matching. As discussed in Sec. 2, photometric matching as proposed in Eq. (3) for training flows and depths is not robust against illumination variations. In this work, in order to better capture local structures, we add additional matching cost from SSIM . Formally, our matching cost is,
Here, the balancing hyper-parameter is set to a fixed value, and the matching pixels can be obtained from either the rigid projection or the optical flow as introduced in Eq. (1). We denote the corresponding view synthesis loss terms for depth and for optical flow accordingly (as shown in Fig. 2). Then, we may directly apply these losses to learn the flow, depth and motion networks.
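The combined cost can be sketched as below. This is a simplified, assumed form: SSIM is computed over a single global window for brevity (real implementations use local windows), and the weight 0.85 is a common choice in this literature, not necessarily the paper's value.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window structural similarity between two images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def matching_cost(target, warped, beta=0.85):
    """Weighted sum of SSIM dissimilarity and L1 photometric error."""
    ssim_term = (1.0 - ssim(target, warped)) / 2.0
    l1_term = np.abs(target - warped).mean()
    return beta * ssim_term + (1.0 - beta) * l1_term

np.random.seed(0)
img = np.random.rand(8, 8)
cost_same = matching_cost(img, img)   # identical images -> near-zero cost
```

The SSIM term tolerates global brightness shifts that would dominate a pure L1 loss.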
Edge-aware local smoothness. Although the structural loss alleviates the appearance confusion of view synthesis, the matching pattern is still very local information. Therefore, smoothness is commonly adopted to further regularize the local matching and improve the results. In our experiments, we tried two types of smoothness: edge-aware smoothness from image gradients as proposed by Godard, and smoothness with learned affinity similar to Yang et al.. We find that when using only photometric matching (Eq. (2)), learned affinity provides significant improvements in the final results over image gradients, but when adding the structural loss (Eq. (5)), the improvement from learned affinity becomes very marginal. In our view, this is mostly due to the robustness of SSIM and the self-regularization of the CNN. Therefore, in this work, for simplicity, we use image-gradient-based edge-aware smoothness to regularize the learning of the different networks. Formally, the spatial smoothness loss can be written as,
where the subscript represents the type of input, a weighting factor balances the term, and the order of the smoothness gradient can be chosen. For example, a second-order spatial smoothness term penalizes the L1 norm of the second-order gradients of depth along both the x and y directions inside the rigid segmentation mask, encouraging depth values to align on a planar surface where no image gradient appears. In our experiments, we choose appropriate gradient orders for depth and for optical flow, and denote the corresponding smoothness loss terms for depth and optical flow respectively.
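A first-order version of the edge-aware smoothness can be sketched as follows (an assumed form consistent with the image-gradient weighting described above; `sigma` is an illustrative weighting constant): prediction gradients are penalized less wherever the image itself has strong gradients.

```python
import numpy as np

def edge_aware_smoothness(pred, image, sigma=1.0):
    """First-order edge-aware smoothness along x (y is analogous).

    pred:  HxW prediction (e.g. depth or one flow channel).
    image: HxW grayscale image used to down-weight penalties at edges.
    """
    dp = np.abs(pred[:, 1:] - pred[:, :-1])     # prediction gradient
    di = np.abs(image[:, 1:] - image[:, :-1])   # image gradient
    weight = np.exp(-sigma * di)                # ~1 in flat areas, <1 at edges
    return (dp * weight).mean()

depth = np.tile(np.arange(4.0), (4, 1))   # a smooth ramp, gradient 1 everywhere
flat_img = np.zeros((4, 4))               # textureless image: full penalty
loss = edge_aware_smoothness(depth, flat_img)
```

A second-order variant, as used for depth, simply applies the same penalty to the gradient of the gradient.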
3D motion consistency between depths and flows. Finally, we model the consistency between the learning of depths and flows in the rigid regions based on the outputs of our HMP. Specifically, we require the dynamic 3D motion to be small inside the rigid background regions, which are obtained from the moving object mask. Formally, the loss function can be written as,
However, in this formula, the mask is determined by the magnitude of the dynamic 3D motion, which is computed as the difference between the motion induced from depths and the motion from flows. At the beginning of training, the predicted depths and flows can be very noisy, yielding unreasonable masks. Therefore, we initially set the annealing parameter so that no consistency is enforced and the flow and depth networks are trained independently. Then, after the individual learning converges, we reset it to a small constant to further enforce the consistency of the 3D motion.
In practice, we found the learning could be made more stable by decomposing the 3D motion consistency into a 2D flow consistency and a depth consistency. We believe the reason is similar to supervised depth estimation, where the estimated 3D motions at long distances can be much noisier than those of nearby regions, inducing losses that are difficult for the networks to minimize. By decomposing the 3D motions into 2D motions and depths, such difficulties are alleviated. Specifically, substituting the corresponding terms into Eq. (4) and expanding the back-projection function gives the formula for decomposing the consistency, i.e.
where one pixel is the corresponding pixel in the source image found by the optical flow, and the other is the matching pixel found by using the rigid transform. Here, the depth map of the source image is projected from the depth of the target image, as mentioned in Eq. (1).
Therefore, the loss for 3D motion consistency is equivalent to,
where the first term indicates the depth consistency, and the second indicates the flow consistency inside rigid regions. One may easily prove that the decomposed consistency is a necessary and sufficient condition for the 3D motion consistency. Thus, we do not lose any supervision by switching the optimization target.
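The decomposed consistency can be sketched as two masked L1 terms. This is an assumed formulation with illustrative names: in rigid regions the flow-induced and rigid-transform-induced pixels should coincide, and the flow-sampled source depth should match the depth projected from the target view.

```python
import numpy as np

def consistency_losses(p_flow, p_rigid, d_sampled, d_projected, rigid_mask):
    """Flow and depth consistency, evaluated only inside rigid regions.

    p_flow, p_rigid: HxWx2 pixel coordinates from flow / rigid transform.
    d_sampled, d_projected: HxW source-view depths (sampled vs. projected).
    rigid_mask: HxW, 1 where the scene is rigid background.
    """
    flow_term = (np.abs(p_flow - p_rigid).sum(-1) * rigid_mask).mean()
    depth_term = (np.abs(d_sampled - d_projected) * rigid_mask).mean()
    return flow_term, depth_term

# Perfectly consistent predictions incur zero loss.
p_f = np.zeros((2, 2, 2)); p_r = np.zeros((2, 2, 2))
d_s = np.full((2, 2), 5.0); d_p = np.full((2, 2), 5.0)
lf, ld = consistency_losses(p_f, p_r, d_s, d_p, np.ones((2, 2)))
```

Splitting the 3D residual this way keeps both terms in well-scaled units (pixels and depth), which the text argues is easier to minimize than raw 3D distances.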
2D motion consistency between depths and flows. Commonly, optical flow benchmarks, e.g. KITTI 2015, also require flow estimation for pixels inside occluded regions, which is not possible when solely using 2D pixel matching. Traditionally, researchers [6, 14] use local smoothness to “inpaint” those pixels from nearby estimated flows. Thanks to our 3D understanding, we can train those flows by requiring their geometric consistency with our estimated depth and motion. Formally, the loss for 2D flow consistency is written as,
where the terms are defined in Eq. (8). We use this loss to drive the supervision of our FlowNet for predicting flows at non-visible regions only; surprisingly, it also benefits the flows predicted at visible regions, which we believe is because properly modeling the occluded pixels helps regularize training.
Nevertheless, one possible concern with our 3D motion consistency formulation is the case where the occluded part arises from a non-rigid movement, e.g. a car moving behind another car. Handling this would require further dissecting object-instance 3D motions, which we leave to future work as it is beyond the scope of this paper. In the datasets we experimented on, such as KITTI 2015, the major part of occlusion (95% of the occluded pixels) is from the rigid background, which fits our assumption.
Multi-scale penalization. Finally, in order to incorporate multi-scale context for training, we use four scales for the network outputs, following prior work. In summary, our loss function for depth and optical flow supervision from a monocular video can be written as,
where the scale index indicates the level of the image pyramid, down to the lowest resolution. A weighting factor balances the losses across scales, and a set of hyper-parameters balances the different loss terms; we elaborate on them in Alg. 1.
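The multi-scale combination can be sketched as a geometrically decaying sum (the decay factor here is illustrative, not the paper's value), so that coarse scales contribute without dominating the fine-scale objective.

```python
def multi_scale_loss(losses_per_scale, decay=0.5):
    """Combine per-scale losses with geometric down-weighting.

    losses_per_scale[0] is the finest scale; each coarser scale is
    multiplied by a further factor of `decay`.
    """
    return sum(loss * (decay ** s) for s, loss in enumerate(losses_per_scale))

# Four scales with equal raw loss: 1 + 0.5 + 0.25 + 0.125
total = multi_scale_loss([1.0, 1.0, 1.0, 1.0])
```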
3.2.3 Stage-wise learning procedure.
In practice, as explained, it is not effective to put all the losses together to train the network from scratch, e.g. because the segmentation mask can be very noisy at the beginning. Therefore, we adjust the hyper-parameter set as training goes on, switching the learning of individual networks on or off. Formally, we adopt a stage-wise learning strategy which trains the framework stage by stage and starts the learning of later stages after the previous stages have converged. The learning algorithm is summarized in Alg. 1. First, we learn the depth and optical flow networks separately. Then, we enforce the consistency between depth and optical flow through iterative training. In our experiments, the networks converged after two iterations of the iterative training stage, yielding SOTA performance on all the required tasks, as elaborated in Sec. 4.
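The stage-wise schedule can be sketched as a weight switch (the numeric values here are illustrative, not the paper's hyper-parameters): consistency terms are zeroed while the depth and flow networks train independently, then enabled for joint fine-tuning.

```python
def loss_weights(stage):
    """Return loss weights for a given training stage (illustrative values)."""
    if stage == "independent":   # stage 1: train DepthNet/FlowNet separately
        return {"photometric": 1.0, "smooth": 0.1, "consistency": 0.0}
    if stage == "joint":         # stage 2: enforce 3D motion consistency
        return {"photometric": 1.0, "smooth": 0.1, "consistency": 0.2}
    raise ValueError(f"unknown stage: {stage}")

w1 = loss_weights("independent")
w2 = loss_weights("joint")
```

Setting the consistency weight to zero in stage one is exactly the annealing described for the soft moving-object mask.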
3.3 Stereo to solve motion confusion.
As discussed in the introduction (Sec. 1), the reconstruction of moving objects in a monocular video suffers from projective confusion, which is illustrated in Fig. 3. The depth map in (b) is an example predicted by our algorithm trained with monocular samples, where the car in front is moving at the same speed as the camera and its region is estimated to be far away. This is because when the estimated depth is large, the car stays at the same place in the warped image, yielding small photometric errors during training. Obviously, the motion or smoothness losses in Eq. (11) do not solve this issue. Therefore, we add stereo images (which are captured at the same time but from different viewpoints) into the learning of the depth network, jointly with monocular videos, to avoid such confusion. As shown in Fig. 3(c), the framework trained with stereo pairs correctly figures out the depth of the moving object regions.
Formally, when a corresponding stereo image is additionally available for the target image, we treat it as another source image, but with a known camera pose. In this case, since there is no motion factor (stereo pairs are captured simultaneously), we adopt the same matching and smoothness losses, with the stereo image as input, for supervising the depth network. Formally, the total loss for the DepthNet when stereo images are available is,
where the additional terms indicate the corresponding losses computed using the stereo image. We update the steps for learning the depth and motion networks in Alg. 1 by adding the loss from the stereo pair.
4 Experiments
In this section, we first describe the datasets and evaluation metrics used in our experiments, and then present a comprehensive evaluation of EPC++ on different tasks.
4.1 Implementation details
EPC++ consists of three sub-networks: DepthNet, FlowNet and MotionNet as described in Sec. 3. Our HMP module has no learnable parameters, thus does not increase the model size, and needs no hyper-parameter tuning.
DepthNet architecture. A DispNet-like architecture is adopted for the DepthNet. DispNet is based on an encoder-decoder design with skip connections and multi-scale side outputs. All conv layers are followed by ReLU activation except for the top output layer, where we apply a sigmoid function to constrain the depth prediction within a reasonable range; in practice, the disparity output range is constrained within 0-0.3. Batch normalization (BN) is performed on all conv layers when training with stereo images, and is dropped when training with only monocular images for better stability and performance; we think this is because BN helps to reduce the scale confusion between monocular and stereo images. In addition, for stereo training, we let the DepthNet output the disparity maps of both the left and the right images for computing their consistency. During training, the Adam optimizer is applied with the learning rate, momentum parameters and batch size given in our implementation, and the stage-one hyper-parameters are set accordingly.
FlowNet architecture. A PWC-Net is adopted as the FlowNet. PWC-Net is based on an encoder-decoder design with intermediate layers warping CNN features for reconstruction. During training stage one, the network is optimized with the Adam optimizer for 100,000 iterations. The batch size is set to 4 and the other hyper-parameters are set as in the original work.
MotionNet architecture. The MotionNet architecture is the same as the pose CNN in prior work; a 6-dimensional camera motion is estimated after 7 conv layers. The learning optimizer is set to be the same as for the DepthNet.
4.2 Datasets and metrics
Extensive experiments were conducted on five tasks to validate the effectiveness of EPC++ in different aspects. These tasks include: depth estimation, optical flow estimation, 3D scene flow estimation, odometry and moving object segmentation. All the results are evaluated on the KITTI dataset using the corresponding standard metrics commonly used by other SOTA methods [5, 6, 46, 63].
KITTI 2015. The KITTI 2015 dataset provides videos of 200 street scenes captured by stereo RGB cameras, with sparse depth ground truth captured by a Velodyne laser scanner. 2D flow and 3D scene flow ground truth are generated from the ICP registration of the point cloud projections. The moving object mask is provided as a binary map to distinguish between static background and moving foreground in flow evaluation. During training, 156 stereo videos that exclude the test and validation scenes are used. The monocular training sequences are constructed from three consecutive frames; left and right views are processed independently. This leads to 40,250 monocular training sequences. Stereo training pairs are constructed from left and right frame pairs, resulting in a total of 22,000 training samples. We set the input size to twice that used in prior work to capture more details.
| Method | Stereo | Abs Rel ↓ | Sq Rel ↓ | RMSE ↓ | RMSE log ↓ | δ < 1.25 ↑ | δ < 1.25² ↑ | δ < 1.25³ ↑ |
|---|---|---|---|---|---|---|---|---|
| Zhou et al. |  | 0.208 | 1.768 | 6.856 | 0.283 | 0.678 | 0.885 | 0.957 |
| Mahjourian et al. |  | 0.163 | 1.240 | 6.220 | 0.250 | 0.762 | 0.916 | 0.968 |
| EPC++ (mono depth only) |  | 0.151 | 1.448 | 5.927 | 0.233 | 0.809 | 0.933 | 0.971 |
| EPC++ (mono depth consist) |  | 0.146 | 1.065 | 5.405 | 0.220 | 0.812 | 0.939 | 0.975 |
| EPC++ (mono flow consist) |  | 0.148 | 1.034 | 5.546 | 0.223 | 0.802 | 0.938 | 0.975 |
| EPC++ (mono vis flow consist) |  | 0.144 | 1.042 | 5.358 | 0.218 | 0.813 | 0.941 | 0.976 |
| Godard et al. | ✓ | 0.148 | 1.344 | 5.927 | 0.247 | 0.803 | 0.922 | 0.964 |
| EPC++ (stereo depth only) | ✓ | 0.141 | 1.224 | 5.548 | 0.229 | 0.811 | 0.934 | 0.972 |
| EPC++ (stereo depth consist) | ✓ | 0.134 | 1.063 | 5.353 | 0.218 | 0.826 | 0.941 | 0.975 |
For depth evaluation, we chose the Eigen split  for experiments to compare with more baseline methods. The Eigen test split consists of 697 images, where the depth ground truth is obtained by projecting the Velodyne laser-scanned points into the image plane. To evaluate at the input image resolution, we re-scale the depth predictions by bilinear interpolation. The sequence length is set to 3 during training. For optical flow evaluation, we report performance on both the training and test splits of the KITTI 2012 and KITTI 2015 datasets and compare with other unsupervised methods. Both the training and test sets contain 200 image pairs; the ground truth optical flow for the training split is provided, while the ground truth for the test split is withheld on the official evaluation server. For scene flow and segmentation evaluation, we evaluate on the KITTI 2015 training split, containing 200 image pairs. The scene flow ground truth is publicly available, and the moving object ground truth is only provided for this split. The odometry is evaluated on two test sequences of the KITTI odometry benchmark: sequence 09 and sequence 10. Visualization results on training sequences are also presented.
Metrics. The existing metrics of depth, optical flow, odometry, segmentation and scene flow were used for evaluation, as in previous methods [11, 3, 79]. For depth and odometry evaluation, we adopt the code from . For optical flow and scene flow evaluation, we use the official toolkit provided by . For foreground segmentation evaluation, we use the overall/per-class pixel accuracy and mean/frequency weighted (f.w.) IoU for binary segmentation. The definition of each metric used in our evaluation is specified in Tab. II, in which $d^{*} \in D$ and $d$ denote the ground truth and estimated results, $n_{ij}$ is the number of pixels of class $i$ segmented into class $j$, $t_i = \sum_j n_{ij}$ is the total number of pixels in class $i$, and $n_{cl}$ is the total number of classes.
-  Abs Rel: $\frac{1}{|D|}\sum_{d \in D} |d^{*} - d| / d^{*}$
-  Sq Rel: $\frac{1}{|D|}\sum_{d \in D} (d^{*} - d)^{2} / d^{*}$
-  EPE: $\sqrt{(u - u^{*})^{2} + (v - v^{*})^{2}}$
-  F1: percentage of pixels with EPE $> 3$ px and $> 5\%$ of the ground-truth flow magnitude
-  pixel acc.: $\sum_i n_{ii} / \sum_i t_i$
-  mean acc.: $\frac{1}{n_{cl}} \sum_i n_{ii} / t_i$
-  mean IoU: $\frac{1}{n_{cl}} \sum_i n_{ii} / (t_i + \sum_j n_{ji} - n_{ii})$
-  f.w. IoU: $(\sum_k t_k)^{-1} \sum_i t_i\, n_{ii} / (t_i + \sum_j n_{ji} - n_{ii})$
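The depth error metrics in Tab. II can be computed as in the following sketch, evaluating only pixels with valid ground truth as done throughout the evaluation (the function name is ours):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Abs Rel, Sq Rel and RMSE over valid (non-zero) ground-truth pixels."""
    mask = gt > 0               # sparse Velodyne ground truth: only valid points
    gt, pred = gt[mask], pred[mask]
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, sq_rel, rmse
```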
4.3 Depth evaluation
Experiment setup. The depth experiments are conducted on the KITTI Eigen split  to evaluate the performance of EPC++ and its variants. The depth ground truth is sparse, as it comes from the projected Velodyne Lidar points; only pixels with valid ground truth depth values (valid Velodyne projected points) are evaluated. Two evaluations are performed to present the depth performance: (1) an ablation study of our approach and (2) a depth performance comparison with the SOTA methods.
Ablation study. We explore the effectiveness of each component of EPC++, as presented in Tab. I. Several variant results are generated for evaluation, including:
(1) EPC++ (mono depth only): DepthNet trained with view synthesis and smoothness loss () on monocular sequences, which is already better than many SOTA methods;
(2) EPC++ (mono depth consist): Fine-tune the trained DepthNet with a depth consistency term, which is a part of Eq. (9); we show that it benefits the depth learning.
(3) EPC++ (mono flow consist): DepthNet trained by adding flow consistency in Eq. (9), where we drop the visibility mask. We can see that the performance is worse than adding depth consistency alone since flow at non-visible parts harms the matching.
(4) EPC++ (mono vis flow consist): DepthNet trained with depth and flow consistency as in Eq. (9), but with the visibility mask computed; this further improves the results.
(5) EPC++ (mono): Final results from DepthNet with twice iterative depth-flow consistency training, yielding the best performance.
We also explore the use of stereo training samples in our framework, and report the performance of two variants: (6) EPC++ (stereo depth only): DepthNet trained on stereo pairs with only .
(7) EPC++ (stereo depth consist): DepthNet trained on stereo pairs with depth consistency.
(8) EPC++ (stereo): Our full model trained with stereo samples.
It is notable that for monocular training, the left and right view frames are considered independently, and thus the frameworks trained with either monocular or stereo samples leverage the same amount of training data. As shown in Tab. I, our approach (EPC++) trained with both stereo and sequential samples shows a large performance boost over using only one type of training sample, demonstrating the effectiveness of incorporating stereo into the training. Comparing the results of EPC++ (stereo) and EPC++ (stereo depth consist) shows that fine-tuning with HMP further improves performance.
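The visibility mask used in the variants above can be obtained in several ways; a common choice is forward-backward flow consistency, shown here as an assumed sketch (not necessarily the exact formulation in Eq. (9); the constants `alpha` and `beta` are assumed values):

```python
import numpy as np

def visibility_mask(flow_fw, flow_bw_warped, alpha=0.01, beta=0.5):
    """Mark a pixel visible when the forward flow and the backward flow warped to
    the same pixel roughly cancel; pixels failing the test are treated as occluded."""
    diff = np.linalg.norm(flow_fw + flow_bw_warped, axis=-1) ** 2
    bound = alpha * (np.linalg.norm(flow_fw, axis=-1) ** 2 +
                     np.linalg.norm(flow_bw_warped, axis=-1) ** 2) + beta
    return diff < bound
```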
Comparison with state-of-the-art. Following other methods [11, 9, 8], the same crop as in  is applied during evaluation on the Eigen split. We conducted a comprehensive comparison with SOTA methods that take either monocular or stereo samples for training.
Tab. I shows the comparison of EPC++ and recent SOTA methods. Our approach outperforms current SOTA unsupervised methods [9, 80, 10, 5, 8] on all metrics by a large margin. It is worth noting that (1) EPC++ trained with only monocular samples already outperforms , which takes stereo pairs as input; (2) on the metrics "Sq Rel" and "RMSE", there is a large performance boost after applying the depth-flow consistency (compare the rows "EPC++ (mono depth only)" and "EPC++ (mono depth consist)"). These two metrics measure the square of the depth prediction error, and are thus sensitive to points whose depth values are far from the ground truth; applying the depth-flow consistency eliminates some "outlier" depth predictions. Some depth estimation visualization results are presented in Fig. 1 and compared with results from . Our depth results preserve the details of the scene noticeably better.
4.4 Optical flow evaluation
Experiment setup. The optical flow evaluation is performed on the KITTI 2015 and KITTI 2012 datasets. For the ablation study, the comparison of our full model and other variants is evaluated on the training split, which consists of 200 image pairs with ground truth optical flow provided. We chose the training split for the ablation study because the ground truth of the test split is withheld and there is a limit on the number of submissions per month. For our full model and the comparison with SOTA methods, we evaluate on the test split and report the numbers generated by the test server.
Ablation study. The ablation study of our full model and its variants is presented in Tab. IV. The model variants include:
(1) Flow only: FlowNet trained with only view synthesis and smoothness losses .
(2) Finetuned with depth: The FlowNet is finetuned jointly with the DepthNet after each is individually trained using . We can see that the results are worse than training the flow alone; this is because the flows from depth at rigid regions, i.e. in Eq. (9), are not as accurate as those from learning the FlowNet alone. In other words, the factorized depth and camera motion in the system can introduce extra noise to 2D optical flow estimation (from 3.66 to 4.00). But we notice that the results on occluded/non-visible regions are slightly better (from 23.07 to 22.96).
(3) EPC++ all region: We fix the DepthNet but finetune the FlowNet without using the visibility mask . We can see that the flow at rigid regions is even worse, for the same reason as above, while the results at occluded regions become much better (from 23.07 to 16.20).
(4) EPC++ vis-rigid region: We fix DepthNet, and finetune FlowNet at the pixels of the visible and rigid regions, where the effect of improving at occluded region is marginal.
(5) EPC++ non-vis region: We finetune the FlowNet only at the non-visible region with , which yields improved results over all regions of the optical flow.
Results from variants (1)-(5) validate our assumption that the rigid flow from depth and camera motion helps optical flow learning in the non-visible/occluded regions. We also compared two variants of our framework trained with stereo samples, EPC++ (stereo) vis-rigid region and EPC++ (stereo) non-vis region, which lead to a similar conclusion.
| Method | EPE (visible) | EPE (occluded) | EPE (all) |
|---|---|---|---|
| Finetune with depth | 4.00 | 22.96 | 7.40 |
| EPC++ all region | 4.33 | 16.20 | 6.46 |
| EPC++ vis-rigid region | 3.97 | 21.79 | 7.17 |
| EPC++ non-vis region | 3.84 | 15.72 | 5.84 |
| EPC++ (stereo) vis-rigid region | 3.97 | 21.86 | 7.14 |
| EPC++ (stereo) non-vis region | 3.86 | 14.78 | 5.66 |
Comparison with SOTA methods. For fair comparison with current SOTA optical flow methods, our FlowNet is evaluated on both the training and test splits of KITTI 2012 and KITTI 2015. On the test splits, the reported numbers are generated by the official evaluation servers. As shown in Tab. III, EPC++ outperforms all current unsupervised methods on the "F1-bg" and "F1-all" metrics. Please note that Multi-frame  reports better performance on "F1-fg", but this method takes three frames as input to estimate the optical flow while our method takes only two. Although a longer input sequence gives a better estimate of foreground object movement, our results over full regions are still better. Qualitative results are shown in Fig. 6; our optical flow shows better sharpness and smoothness.
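The "F1" numbers discussed above follow the KITTI outlier criterion defined in Tab. II; a minimal sketch (the function name is ours):

```python
import numpy as np

def kitti_flow_outlier_rate(gt_flow, pred_flow):
    """Fraction of pixels whose end-point error exceeds both 3 px and 5% of the
    ground-truth flow magnitude (the KITTI F1 outlier criterion)."""
    epe = np.linalg.norm(gt_flow - pred_flow, axis=-1)
    mag = np.linalg.norm(gt_flow, axis=-1)
    return float(np.mean((epe > 3.0) & (epe > 0.05 * mag)))
```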
4.5 Odometry estimation
To evaluate the performance of our trained MotionNet, we use the odometry metrics. The same protocol as in  is applied in our evaluation, which measures the absolute trajectory error averaged over every five consecutive frames. Unlike the settings in previous works [9, 46], which train a MotionNet using stacked five frames (as described in Sec. 3), we took the same MotionNet that takes three frames as input and fine-tuned it on the KITTI odometry split. We compare with several unsupervised SOTA methods on two sequences of KITTI. To explore variants of our model, we experimented with learning the DepthNet from monocular samples (EPC++ (mono)) and from stereo pairs (EPC++ (stereo)).
As shown in Tab. V, our trained MotionNet shows superior performance with respect to visual SLAM methods (ORB-SLAM), and is comparable to other unsupervised learning methods, with a slight improvement on the two test sequences. The more accurate depth estimation from our DepthNet helps constrain the output of MotionNet, yielding better odometry results. The qualitative odometry results are shown in Fig. 7. Compared to results from SfMLearner  or GeoNet , which have a large offset at the end of the sequence, results from EPC++ are more robust to large motion changes and closer to the ground truth trajectories.
The small quantitative performance gap leads to a large qualitative performance difference because the metric only evaluates 5-frame relative errors and always assumes the first-frame prediction to be ground truth; thus errors can accumulate over the sequence while the existing metric does not take this into account. To better compare the odometry performance over the complete sequence, we adopted the evaluation metrics proposed in . These metrics evaluate the average translational and rotational errors over the full sequence; the quantitative results are shown in Tab. VI. As these metrics evaluate over the full sequence, the quantitative numbers align well with the qualitative results in Fig. 7.
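The five-frame ATE protocol described above can be sketched as follows. This is a simplified version that aligns only a global scale between the predicted and ground-truth snippet trajectories, assuming both start at the origin; the official evaluation scripts handle pose alignment in full:

```python
import numpy as np

def ate_snippet(gt_xyz, pred_xyz):
    """Absolute trajectory error over a short snippet: fit the single scale factor
    that best aligns the prediction to ground truth (least squares), then average
    the point-wise distances."""
    scale = np.sum(gt_xyz * pred_xyz) / np.sum(pred_xyz ** 2)
    return float(np.mean(np.linalg.norm(gt_xyz - scale * pred_xyz, axis=1)))
```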
| Method | Seq. 09 | Seq. 10 |
|---|---|---|
| ORB-SLAM (full) |  |  |
| ORB-SLAM (short) |  |  |
| Zhou et al. |  |  |
| Mahjourian et al. |  |  |
| EPC++ (stereo) | 0.012 ± 0.006 | 0.012 ± 0.008 |
4.6 Moving object segmentation
Ideally, the residual between the dynamic scene flow and the background scene flow represents the motion of foreground objects. As the HMP (Eq. (4)) is capable of decomposing the foreground and background motion by leveraging the depth-flow consistency, we test the effectiveness of this decomposition by evaluating the foreground object segmentation. Experiment setup. The moving object segmentation is evaluated on the training split of the KITTI 2015 dataset. An "Object map" is provided in this dataset to distinguish the foreground and background in flow evaluation; we use this motion mask as ground truth in our segmentation evaluation. Fig. 8 (second column) shows some visualizations of the segmentation ground truth. Our foreground segmentation estimate is generated by subtracting the rigid optical flow from the full optical flow, as indicated in Eq. (4). We set a threshold on this residual to generate a binary segmentation mask.
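A minimal sketch of the thresholding step just described; the threshold value here is an assumption for illustration, not the one used in the paper, and the function name is ours:

```python
import numpy as np

def moving_object_mask(full_flow, rigid_flow, thresh=3.0):
    """Binary foreground mask from the residual between the estimated optical flow
    and the rigid flow induced by camera motion; `thresh` is in pixels."""
    residual_mag = np.linalg.norm(full_flow - rigid_flow, axis=-1)
    return residual_mag > thresh
```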
| Method | pixel acc. | mean acc. | mean IoU | f.w. IoU |
|---|---|---|---|---|
| Explainability mask  | 0.61 | 0.54 | 0.38 | 0.64 |
| Yang et al.  | 0.89 | 0.75 | 0.52 | 0.87 |
| Yang et al. | 23.62 | 27.38 | 26.81 | 18.75 | 70.89 | 60.97 | 25.34 | 28.00 | 25.74 |
Evaluation results. The quantitative and qualitative results are presented in Tab. VII and Fig. 8, respectively. We compare with two previous methods [9, 7] that take the non-rigid scene into consideration. Yang et al.  explicitly model the moving object mask and are thus directly comparable. The "explainability mask" in  is designed to deal with both moving objects and occlusion, and we list its performance here for a more comprehensive comparison. Our generated foreground segmentation performs comparably to the previous methods on all metrics, and the visualization shows that the motion mask aligns well with the moving objects. On the metrics "pixel acc." and "f.w. IoU", EPC++ trained with monocular sequences performs better than that trained with stereo pairs. One possible reason is that the network trained with monocular samples is more prone to predicting large segmentation regions to cover the matching errors (e.g. errors caused by depth confusion), and hence performs better on metrics that focus on "recall" ("pixel acc." and "f.w. IoU").
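The binary segmentation metrics used above follow the confusion-matrix definitions in Tab. II; a minimal sketch for the two-class (background/foreground) case:

```python
import numpy as np

def binary_seg_metrics(gt, pred):
    """Pixel accuracy and mean IoU for binary masks."""
    gt = np.asarray(gt).astype(int)
    pred = np.asarray(pred).astype(int)
    n = np.zeros((2, 2))
    for i in (0, 1):
        for j in (0, 1):
            n[i, j] = np.sum((gt == i) & (pred == j))  # class i labeled as class j
    t = n.sum(axis=1)                                  # pixels per ground-truth class
    pixel_acc = np.trace(n) / n.sum()
    iou = [n[i, i] / (t[i] + n[:, i].sum() - n[i, i]) for i in (0, 1)]
    return float(pixel_acc), float(np.mean(iou))
```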
4.7 Scene flow evaluation
Experiment setup. The scene flow evaluation is performed on the training split of the KITTI 2015 dataset. There are 200 frame pairs in the scene flow training split. The depth ground truth of the two consecutive frames and the 2D optical flow ground truth between them are provided. The evaluation of scene flow is performed with the KITTI benchmark evaluation toolkit (http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php). As unsupervised monocular methods generate depth/disparity without absolute scale, we rescale the estimated depth by matching its median to the ground truth depth for each image. Since no unsupervised methods have reported scene flow performance on the KITTI 2015 dataset, we only compare our model trained on monocular sequences (EPC++ (mono)) and stereo pairs (EPC++ (stereo)) with the previous results reported in . As shown in Tab. VIII, our scene flow performance outperforms the previous SOTA method .
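The per-image median rescaling mentioned above can be sketched as follows (the function name is ours):

```python
import numpy as np

def median_scale(pred_depth, gt_depth):
    """Rescale a scale-ambiguous monocular depth prediction by matching its median
    to the ground-truth median over valid pixels."""
    mask = gt_depth > 0
    s = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * s
```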
5 Conclusion
In this paper, we presented an unsupervised learning framework for jointly predicting depth, optical flow and moving object segmentation masks. Specifically, we formulated the geometrical relationship between all these tasks, where every pixel is explained by either rigid motion, non-rigid/object motion, or occluded/non-visible regions. We used a holistic motion parser (HMP) to parse the pixels of an image into different regions, and designed various losses to encourage depth, camera motion and optical flow consistency. Finally, an iterative learning pipeline was presented to effectively train all the models. We conducted comprehensive experiments to evaluate the performance of our system. On the KITTI dataset, our approach achieves SOTA performance on all the tasks of depth estimation, optical flow estimation and 2D moving object segmentation. Our framework can be extended to other motion video datasets containing deformable and articulated non-rigid objects, such as MoSeg , and to multiple object segmentation , yielding a more comprehensive understanding of videos.
-  R. A. Newcombe, S. Lovegrove, and A. J. Davison, “DTAM: dense tracking and mapping in real-time,” in ICCV, 2011.
-  Y.-H. Tsai, M.-H. Yang, and M. J. Black, "Video segmentation via object flow," in CVPR, 2016, pp. 3899–3908.
-  M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in CVPR, 2015.
-  G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation: A survey,” IEEE transactions on pattern analysis and machine intelligence, vol. 24, no. 2, pp. 237–267, 2002.
-  Z. Yang, P. Wang, Y. Wang, W. Xu, and R. Nevatia, “Lego: Learning edge with geometry all at once by watching videos,” in CVPR, 2018.
-  Y. Wang, Y. Yang, Z. Yang, P. Wang, L. Zhao, and W. Xu, “Occlusion aware unsupervised learning of optical flow,” in CVPR, 2018.
-  Z. Yang, P. Wang, Y. Wang, W. Xu, and R. Nevatia, “Every pixel counts: Unsupervised geometry learning with holistic 3d motion understanding,” arXiv preprint arXiv:1806.10556, 2018.
-  C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” 2017.
-  T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in CVPR, 2017.
-  Z. Yang, P. Wang, W. Xu, L. Zhao, and N. Ram, “Unsupervised learning of geometry from videos with edge-aware depth-normal consistency,” in AAAI, 2018.
-  D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in NIPS, 2014.
-  C. Wu et al., “Visualsfm: A visual structure from motion system,” 2011.
-  S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki, “Sfm-net: Learning of structure and motion from video,” CoRR, vol. abs/1704.07804, 2017.
-  Z. Ren, J. Yan, B. Ni, B. Liu, X. Yang, and H. Zha, "Unsupervised deep learning for optical flow estimation," in AAAI, 2017, pp. 1495–1501.
-  L. Torresani, A. Hertzmann, and C. Bregler, “Nonrigid structure-from-motion: Estimating shape and motion with hierarchical priors,” IEEE transactions on pattern analysis and machine intelligence, vol. 30, no. 5, pp. 878–892, 2008.
-  A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in CVPR, 2012.
-  M. Bleyer, C. Rhemann, and C. Rother, "Patchmatch stereo-stereo matching with slanted support windows," in BMVC, vol. 11, 2011, pp. 1–11.
-  R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “Orb-slam: a versatile and accurate monocular slam system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
-  J. Engel, T. Schöps, and D. Cremers, “Lsd-slam: Large-scale direct monocular slam,” in ECCV, 2014.
-  Y. Dai, H. Li, and M. He, “A simple prior-free method for non-rigid structure-from-motion factorization,” International Journal of Computer Vision, vol. 107, no. 2, pp. 101–122, 2014.
-  J. Taylor, A. D. Jepson, and K. N. Kutulakos, “Non-rigid structure from locally-rigid motion,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010, pp. 2761–2768.
-  S. Kumar, Y. Dai, and H. Li, “Monocular dense 3d reconstruction of a complex dynamic scene from two perspective frames,” ICCV, 2017.
-  ——, “Multi-body non-rigid structure-from-motion,” in 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016, pp. 148–156.
-  D. Hoiem, A. A. Efros, and M. Hebert, “Recovering surface layout from an image.” in ICCV, 2007.
-  E. Prados and O. Faugeras, “Shape from shading,” Handbook of mathematical models in computer vision, pp. 375–388, 2006.
-  N. Kong and M. J. Black, “Intrinsic depth: Improving depth transfer with intrinsic images,” in ICCV, 2015.
-  A. G. Schwing, S. Fidler, M. Pollefeys, and R. Urtasun, “Box in the box: Joint 3d layout and object reasoning from single images,” in ICCV, 2013.
-  F. Srajer, A. G. Schwing, M. Pollefeys, and T. Pajdla, “Match box: Indoor image matching via box-like scene estimation,” in 3DV, 2014.
-  X. Wang, D. Fouhey, and A. Gupta, “Designing deep networks for surface normal estimation,” in CVPR, 2015.
-  D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in ICCV, 2015.
-  I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab, “Deeper depth prediction with fully convolutional residual networks,” in 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016, pp. 239–248.
-  J. Li, R. Klein, and A. Yao, “A two-streamed network for estimating fine-scaled depth maps from single rgb images,” in ICCV, 2017.
-  X. Cheng, P. Wang, and R. Yang, “Depth estimation via affinity learned with convolutional spatial propagation network,” ECCV, 2018.
-  K. Karsch, C. Liu, and S. B. Kang, “Depth transfer: Depth extraction from video using non-parametric sampling,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 11, pp. 2144–2158, 2014.
-  L. Ladicky, J. Shi, and M. Pollefeys, “Pulling things out of perspective,” in CVPR, 2014.
-  B. L. Ladicky, Zeisl, M. Pollefeys et al., “Discriminatively trained dense surface normal estimation,” in ECCV, 2014.
-  P. Wang, X. Shen, Z. Lin, S. Cohen, B. L. Price, and A. L. Yuille, “Towards unified depth and semantic prediction from a single image,” in CVPR, 2015.
-  F. Liu, C. Shen, and G. Lin, “Deep convolutional neural fields for depth estimation from a single image,” in CVPR, June 2015.
-  B. Li, C. Shen, Y. Dai, A. van den Hengel, and M. He, "Depth and surface normal estimation from monocular images using regression on deep features and hierarchical crfs," in CVPR, 2015.
-  P. Wang, X. Shen, B. Russell, S. Cohen, B. L. Price, and A. L. Yuille, “SURGE: surface regularized geometry estimation from a single image,” in NIPS, 2016.
-  J. Xie, R. Girshick, and A. Farhadi, “Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural networks,” in ECCV, 2016.
-  R. Garg, V. K. B. G, and I. D. Reid, “Unsupervised CNN for single view depth estimation: Geometry to the rescue,” ECCV, 2016.
-  C. Wang, J. M. Buenaposada, R. Zhu, and S. Lucey, “Learning depth from monocular videos using direct methods,” in CVPR, 2018.
-  R. Li, S. Wang, Z. Long, and D. Gu, “Undeepvo: Monocular visual odometry through unsupervised deep learning,” ICRA, 2018.
-  R. Mahjourian, M. Wicke, and A. Angelova, “Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints,” arXiv preprint arXiv:1802.05522, 2018.
-  Z. Yin and J. Shi, “Geonet: Unsupervised learning of dense depth, optical flow and camera pose,” arXiv preprint arXiv:1803.02276, 2018.
-  C. Godard, O. Mac Aodha, and G. Brostow, “Digging into self-supervised monocular depth estimation,” arXiv preprint arXiv:1806.01260, 2018.
-  B. K. Horn and B. G. Schunck, “Determining optical flow,” Artificial intelligence, vol. 17, no. 1-3, pp. 185–203, 1981.
-  M. J. Black and P. Anandan, “The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields,” Computer vision and image understanding, vol. 63, no. 1, pp. 75–104, 1996.
-  C. Liu, J. Yuen, and A. Torralba, “Sift flow: Dense correspondence across scenes and its applications,” TPAMI, vol. 33, no. 5, pp. 978–994, 2011.
-  E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in CVPR, 2017. [Online]. Available: http://lmb.informatik.uni-freiburg.de//Publications/2017/IMKDB17
-  A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2. IEEE, 2017, p. 2.
-  D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume,” arXiv preprint arXiv:1709.02371, 2017.
-  J. Y. Jason, A. W. Harley, and K. G. Derpanis, “Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness,” in ECCV. Springer, 2016, pp. 3–10.
-  S. Meister, J. Hur, and S. Roth, “UnFlow: Unsupervised learning of optical flow with a bidirectional census loss,” in AAAI, New Orleans, Louisiana, Feb. 2018.
-  S. Vedula, S. Baker, P. Rander, R. Collins, and T. Kanade, “Three-dimensional scene flow,” in Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, vol. 2. IEEE, 1999, pp. 722–729.
-  S. Vedula, P. Rander, R. Collins, and T. Kanade, “Three-dimensional scene flow,” IEEE transactions on pattern analysis and machine intelligence, vol. 27, no. 3, pp. 475–480, 2005.
-  A. Behl, O. H. Jafari, S. K. Mustikovela, H. A. Alhaija, C. Rother, and A. Geiger, “Bounding boxes, segmentations and object coordinates: How important is recognition for 3d scene flow estimation in autonomous driving scenarios?” in CVPR, 2017, pp. 2574–2583.
-  C. Vogel, K. Schindler, and S. Roth, “Piecewise rigid scene flow,” in Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013, pp. 1377–1384.
-  Z. Lv, C. Beall, P. F. Alcantarilla, F. Li, Z. Kira, and F. Dellaert, “A continuous optimization approach for efficient and accurate scene flow,” in ECCV. Springer, 2016, pp. 757–773.
-  N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in CVPR, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CVPR, 2016.
-  A. Ranjan, V. Jampani, K. Kim, D. Sun, J. Wulff, and M. J. Black, “Adversarial collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation,” arXiv preprint arXiv:1805.09806, 2018.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
-  K. Fragkiadaki, P. Arbelaez, P. Felsen, and J. Malik, “Learning to segment moving objects in videos,” in CVPR, 2015, pp. 4083–4090.
-  J. S. Yoon, F. Rameau, J. Kim, S. Lee, S. Shin, and I. S. Kweon, “Pixel-level matching for video object segmentation using convolutional neural networks,” in 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017, pp. 2186–2195.
-  P. Tokmakov, C. Schmid, and K. Alahari, “Learning to segment moving objects,” arXiv preprint arXiv:1712.01127, 2017.
-  W. Wang, J. Shen, R. Yang, and F. Porikli, “Saliency-aware video object segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 1, pp. 20–33, 2018.
-  A. Faktor and M. Irani, “Video segmentation by non-local consensus voting.” in BMVC, vol. 2, no. 7, 2014, p. 8.
-  T. Brox and J. Malik, “Object segmentation by long term analysis of point trajectories,” in European conference on computer vision. Springer, 2010, pp. 282–295.
-  D. Barnes, W. Maddern, G. Pascoe, and I. Posner, “Driven to distraction: Self-supervised distractor learning for robust monocular visual odometry in urban environments,” in ICRA. IEEE, 2018, pp. 1894–1900.
-  M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in Advances in Neural Information Processing Systems, 2015, pp. 2017–2025.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing, vol. 13, no. 4, pp. 600–612, 2004.
-  J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid, “Epicflow: Edge-preserving interpolation of correspondences for optical flow,” in CVPR, 2015, pp. 1164–1172.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in ICML, 2015.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  C. Wang, J. M. Buenaposada, R. Zhu, and S. Lucey, “Learning depth from monocular videos using direct methods,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2022–2030.
-  Y. Zou, Z. Luo, and J.-B. Huang, “Df-net: Unsupervised joint learning of depth and flow using cross-task consistency,” in European Conference on Computer Vision, 2018.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015.
-  Y. Kuznietsov, J. Stuckler, and B. Leibe, “Semi-supervised deep learning for monocular depth map prediction,” 2017.
-  J. Janai, F. Guney, A. Ranjan, M. Black, and A. Geiger, “Unsupervised learning of multi-frame optical flow with occlusions,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 690–706.