Convolutional Neural Networks (CNNs), due to their immense learning capacity and superior efficiency, have advanced a variety of computer vision tasks, including optical flow prediction. Recent work [1, 2] built large-scale synthetic datasets to train supervised CNNs and showed that networks trained on such unrealistic data still generalize very well to existing datasets such as Sintel and KITTI. Other works [5, 6, 7] have designed new objectives, such as an image reconstruction loss, to guide network learning for motion estimation in an unsupervised way. Though [1, 2, 5, 6] take quite different approaches, they all use variants of one architecture, the “FlowNet Simple” (FlowNetS) network.
FlowNetS is a conventional CNN architecture consisting of a contracting part and an expanding part. Given adjacent frames as input, the contracting part uses a series of convolutional layers to extract high-level semantic features, while the expanding part predicts the optical flow at the original image resolution through successive deconvolutions. In between, skip connections provide fine image details from lower-layer feature maps. This generic pipeline (contract, expand, skip connections) is widely adopted for per-pixel prediction problems such as semantic segmentation, depth estimation, video coloring, etc.
However, skip connections are a simple strategy for combining coarse semantic features and fine image details; they are not involved in the learning process. What we desire is to keep the high frequency image details until the end of the network in order to provide implicit deep supervision. Simply put, we want to ensure maximum information flow between layers in the network.
DenseNet, a recently proposed CNN architecture, has an interesting connectivity pattern: within a dense block, each layer is connected to all the others. In this case, every layer can access the feature maps of all preceding layers, which encourages heavy feature reuse. As a direct consequence, the model is more compact and less prone to overfitting. Besides, each individual layer receives direct supervision from the loss function through the shortcut paths, which provides implicit deep supervision. All these good properties make DenseNet a natural fit for per-pixel prediction problems. There is concurrent work using DenseNet for semantic segmentation, which achieves state-of-the-art performance without either pretraining or additional post-processing. However, estimating optical flow is different from semantic segmentation. We will illustrate the differences in Section 3.
In this paper, we propose to use DenseNet for optical flow prediction. Our contributions are two-fold. First, we extend current DenseNet to a fully convolutional network. Our model is totally unsupervised, and achieves performance close to supervised approaches. Second, we empirically show that replacing convolutions with dense blocks in the expanding part yields better performance.
Given adjacent frames, the previous frame $I_1$ and the next frame $I_2$, our goal is to learn a model that can predict the per-pixel motion field $(U, V)$ between the two images, where $U$ and $V$ are the horizontal and vertical displacements. In this section, we first review the DenseNet architecture, and then outline our unsupervised learning framework based on a fully convolutional DenseNet.
2.1 DenseNet Review
Traditional CNNs, such as FlowNetS, calculate the output of the $\ell$-th layer by applying a nonlinear transformation $H_\ell$ to the previous layer’s output $x_{\ell-1}$:

$$x_\ell = H_\ell(x_{\ell-1}).$$
Through consecutive convolution and pooling, the network achieves spatial invariance and obtains coarse semantic features in the top layers. However, fine image details tend to disappear in the very top of the network.
To improve information flow between layers, DenseNet provides a simple connectivity pattern: the $\ell$-th layer receives the feature maps of all preceding layers as inputs:

$$x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}]),$$
where $[x_0, x_1, \ldots, x_{\ell-1}]$ is a single tensor constructed by concatenating the output feature maps of the previous layers. In this manner, even the last layer can access the input information of the first layer, and all layers receive direct supervision from the loss function through the shortcut connections. Here, $H_\ell(\cdot)$ is a composite function of consecutive operations: batch normalization, ReLU, convolution, and dropout. We denote such a composite function as one layer.
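The dense connectivity pattern can be sketched in NumPy. This is a toy illustration, not the actual implementation: the composite function is reduced to a random 1×1 "convolution" (batch normalization, ReLU, and dropout are omitted) so that only the channel bookkeeping is visible.

```python
import numpy as np

def layer(x, growth_rate, rng):
    """Toy composite function H_l: a random 1x1 'convolution'
    producing `growth_rate` new feature maps (BN/ReLU/dropout omitted)."""
    c = x.shape[0]
    w = rng.standard_normal((growth_rate, c))
    # A 1x1 convolution is a channel-wise linear map at every pixel.
    return np.einsum('oc,chw->ohw', w, x)

def dense_block(x, num_layers, growth_rate, seed=0):
    """Each layer sees the concatenation of all preceding feature maps."""
    rng = np.random.default_rng(seed)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)   # channel-wise concat
        features.append(layer(inp, growth_rate, rng))
    return np.concatenate(features, axis=0)

x = np.ones((3, 8, 8))                  # 3 input channels, 8x8 spatial
out = dense_block(x, num_layers=4, growth_rate=12)
print(out.shape[0])                     # 3 + 4*12 = 51 output channels
```

Note how the channel count grows linearly with depth: each layer adds `growth_rate` maps on top of everything that came before it.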
In our experiments, the DenseNet in the contracting part has four dense blocks, each of which has four layers. Between the dense blocks, there are transition down layers consisting of a convolution followed by max pooling. We compare DenseNet with three other popular architectures, namely FlowNetS, VGG16, and ResNet18, in Section 3.3.
2.2 Fully Convolutional DenseNet
The classical expanding part uses a series of convolutions, deconvolutions, and skip connections to recover the spatial resolution and obtain per-pixel predictions. Given the good properties of DenseNet, we propose to replace the convolutions with dense blocks during expanding as well.
However, if we follow the same dense connectivity pattern, the number of feature maps after each dense block keeps increasing. Considering that the resolution of the feature maps also increases during expanding, the computational cost becomes intractable for current GPUs. Thus, for a dense block in the expanding part, we do not concatenate the input to its final output. For example, if the input has $m$ channels, the output of an $\ell$-layer dense block will have $\ell k$ feature maps, where $k$ is the growth rate of a DenseNet, defining the number of feature maps each layer produces. Note that dense blocks in the contracting part output $m + \ell k$ feature maps.
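The channel-count bookkeeping above can be checked with a line of arithmetic. The values of the input channels, block depth, and growth rate below are illustrative, not the paper's exact configuration:

```python
def out_channels(m, l, k, concat_input):
    """Feature maps produced by an l-layer dense block with growth rate k.
    m: input channels. Each layer adds k maps; the contracting part also
    concatenates the block input to the final output."""
    return m + l * k if concat_input else l * k

# Expanding part: the input is dropped from the final concatenation.
print(out_channels(m=64, l=4, k=12, concat_input=False))  # 48
# Contracting part: the input is kept.
print(out_channels(m=64, l=4, k=12, concat_input=True))   # 112
```

Dropping the input from the expanding blocks is what keeps the channel count (and hence memory at high resolutions) bounded.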
For symmetry, we also introduce four dense blocks in the expanding part, each of which has four layers. Bottom-layer feature maps at the same resolution are concatenated through skip connections. Between the dense blocks, there are transition up layers composed of two deconvolutions with a stride of 2: one for upsampling the estimated optical flow, and the other for upsampling the feature maps.
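One subtlety when upsampling an estimated flow field, standard practice though not spelled out above: since displacements are measured in pixels, the flow values must be scaled by the same factor as the spatial resolution. A minimal nearest-neighbor sketch (the network itself uses learned deconvolutions):

```python
import numpy as np

def upsample_flow(flow, factor=2):
    """Nearest-neighbor upsampling of a (2, H, W) flow field.
    Displacements are in pixels, so they scale with resolution."""
    up = flow.repeat(factor, axis=1).repeat(factor, axis=2)
    return up * factor

flow = np.zeros((2, 4, 4))
flow[0] = 1.5            # uniform horizontal motion of 1.5 px
up = upsample_flow(flow)
print(up.shape, up[0, 0, 0])   # (2, 8, 8) 3.0
```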
2.3 Unsupervised Motion Estimation
Supervised approaches adopt synthetic datasets for CNNs to learn optical flow prediction. However, synthetic motions and scenes are quite different from real-world ones, which limits the generalizability of the learned models. Besides, even constructing synthetic datasets requires considerable manual effort. Hence, unsupervised learning is an ideal option for the naturally ill-conditioned motion estimation problem.
Recall that the unsupervised approach treats optical flow estimation as an image reconstruction problem. The intuition is that if we can use the predicted flow and the next frame to reconstruct the previous frame, our network is learning useful representations of the underlying motions. To be specific, we denote the reconstructed previous frame as $I_1'$. The goal is to minimize the photometric error between the previous frame $I_1$ and the inverse warped next frame $I_1'(i, j) = I_2(i + U_{i,j},\, j + V_{i,j})$:

$$L_{\text{photometric}} = \frac{1}{N} \sum_{i,j} \rho\big( I_1(i, j) - I_2(i + U_{i,j},\, j + V_{i,j}) \big),$$

where $N$ is the total number of pixels. The inverse warp is performed using spatial transformer modules inside the CNN. To reduce the influence of outliers, we use a robust convex error function, the generalized Charbonnier penalty $\rho(x) = (x^2 + \epsilon^2)^{\alpha}$. This reconstruction loss is similar to the brightness constancy objective in classical variational formulations.
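The reconstruction objective can be sketched as follows. This NumPy illustration uses nearest-neighbor sampling for the inverse warp, whereas the network uses a differentiable bilinear spatial transformer; the values of alpha and eps are illustrative defaults, not necessarily the paper's settings:

```python
import numpy as np

def charbonnier(x, alpha=0.25, eps=1e-3):
    """Generalized Charbonnier penalty rho(x) = (x^2 + eps^2)^alpha."""
    return (x ** 2 + eps ** 2) ** alpha

def photometric_loss(I1, I2, U, V):
    """Mean penalty between I1 and I2 inversely warped by flow (U, V).
    Nearest-neighbor sampling stands in for the bilinear spatial
    transformer used inside the network."""
    H, W = I1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Backward warp: sample I2 at (x + U, y + V), clamped to the image.
    xw = np.clip(np.rint(xs + U).astype(int), 0, W - 1)
    yw = np.clip(np.rint(ys + V).astype(int), 0, H - 1)
    I1_rec = I2[yw, xw]
    return charbonnier(I1 - I1_rec).mean()

# A pattern shifted right by one pixel is explained by flow U = 1.
I1 = np.zeros((4, 4)); I1[:, 1] = 1.0
I2 = np.zeros((4, 4)); I2[:, 2] = 1.0
good = photometric_loss(I1, I2, U=np.ones((4, 4)), V=np.zeros((4, 4)))
bad = photometric_loss(I1, I2, U=np.zeros((4, 4)), V=np.zeros((4, 4)))
print(good < bad)   # True: the correct flow reconstructs I1 better
```

No ground truth flow appears anywhere in this loss; the supervision signal comes entirely from the images themselves.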
An overview of our unsupervised learning framework based on DenseNet is illustrated in Fig. 1. Due to the parameter efficiency of dense connectivity, our model has only a small fraction of the parameters of FlowNetS.
Flying Chairs  is a synthetic dataset designed specifically for training CNNs to estimate optical flow. It is created by applying affine transformations to real images and synthetically rendered chairs. The dataset contains 22,872 image pairs: 22,232 training and 640 test samples according to the standard evaluation split.
MPI Sintel  is also a synthetic dataset derived from a short open source animated 3D movie. There are 1,628 frames, 1,064 for training and 564 for testing. In this work, we only report performance on its final pass because it contains sufficiently realistic scenes including natural image degradations.
KITTI Optical Flow 2012  is a real world dataset collected from a driving platform. It consists of 194 training image pairs and 195 test pairs with sparse ground truth flow. We report the average endpoint error (EPE) on the entire image.
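For reference, the endpoint error used throughout is the Euclidean distance between predicted and ground truth flow vectors, averaged over pixels (over valid pixels only when the ground truth is sparse, as in KITTI). A minimal sketch:

```python
import numpy as np

def average_epe(flow_pred, flow_gt, valid=None):
    """Average endpoint error between two (2, H, W) flow fields.
    `valid` optionally masks pixels without ground truth (sparse GT)."""
    diff = flow_pred - flow_gt
    epe = np.sqrt((diff ** 2).sum(axis=0))   # per-pixel L2 distance
    if valid is not None:
        epe = epe[valid]
    return epe.mean()

gt = np.zeros((2, 2, 2))
pred = np.zeros((2, 2, 2))
pred[0] += 3.0   # horizontal error of 3 px at every pixel
pred[1] += 4.0   # vertical error of 4 px at every pixel
print(average_epe(pred, gt))   # 5.0 (a 3-4-5 triangle at every pixel)
```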
Since the dataset sizes of Sintel and KITTI are relatively small, we first pretrain our network on Chairs, and then fine-tune it to report performance. Note that the fine-tuning here is also unsupervised; we do not use the ground truth flow from Sintel/KITTI.
During unsupervised training, we calculate a reconstruction loss for each expansion, and combine the resulting losses using the same loss weights as in prior work. The generalized Charbonnier parameters $\alpha$ and $\epsilon$ are likewise fixed following prior work. The models are trained using Adam optimization with its default parameters, $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate is halved periodically until training ends. We apply the same data augmentations as in previous work to prevent overfitting.
[Table 1: architecture comparison; recovered row labels include “DenseNet + Dense Upsampling” and “DenseNet + Dense Upsampling (Deeper)”; numeric results not recovered.]
3.3 Results and Discussion
We have three observations given the results in Table 1.
Observation 1: As shown in the top section of Table 1, all four popular architectures perform reasonably well on optical flow prediction. VGG16 likely performs the worst because its multiple pooling layers lose image details. On the contrary, ResNet18 has only one pooling layer at the beginning, so it performs better than both VGG16 and FlowNetS. Interestingly, DenseNet also has multiple pooling layers, but thanks to dense connectivity, it does not lose fine appearance information. Thus, as expected, DenseNet performs the best with the fewest parameters.
Inspired by the success of deeper models, we also implement a network with five dense blocks in both the contracting and expanding parts, where each block has ten layers. However, as shown in the last row of Table 1, the performance is much worse due to overfitting. This may indicate that optical flow is a low-level vision problem that does not need a substantially deeper network to achieve better performance.
Observation 2: Using dense blocks during expanding is beneficial. In Table 1, DenseNet with dense upsampling achieves better performance on all three benchmarks than DenseNet with classical upsampling, especially on Sintel. As Sintel has much more complex context than Chairs and KITTI, it may benefit more from the implicit deep supervision. This confirms that using dense blocks instead of a single convolution maintains more information during the expanding process, which leads to better flow estimates.
Observation 3: One of the advantages of DenseNet is that it is less prone to overfitting. Its authors have shown that, unlike other network architectures, it can perform well even without data augmentation. We investigate this by training from scratch on Sintel, without pretraining on Chairs, building the training set from image pairs in both the final and clean passes of Sintel. Using the same implementation and training strategies, the flow estimation performance is very close to that of the pretrained model. One possible reason for such robustness is the model compactness and implicit deep supervision provided by DenseNet. This is ideal for optical flow estimation, since most benchmarks have limited training data.
3.4 Comparison to State-of-the-Art
In this section, we compare our proposed method to recent state-of-the-art approaches. We only consider fast approaches, because optical flow is often used in time-sensitive applications. We evaluate all CNN-based approaches on a workstation with a 4.00 GHz Intel Core i7 and an Nvidia Titan X GPU. For classical approaches, we use their reported runtimes.
As shown in Table 2, although unsupervised learning still lags behind supervised approaches, our network based on a fully convolutional DenseNet shortens the performance gap and achieves lower EPE on the three standard benchmarks than the state-of-the-art unsupervised approach. One exception is Sintel, where we obtain a higher EPE than a competitor that applies an additional variational refinement technique.
We show some visual examples in Figure 2. We can see that supervised FlowNetS can estimate optical flow close to the ground truth, while UnsupFlowNet struggles to maintain fine image details and generates very noisy flow estimation. Due to the dense connectivity pattern, our proposed method can produce much smoother flow than UnsupFlowNet, and recover the high frequency image details, such as human boundaries and car shapes.
Therefore, we demonstrate that DenseNet is a better fit for dense optical flow prediction, both quantitatively and qualitatively. However, by exploring different network architectures, we found that existing networks perform similarly on predicting optical flow. In future work, we may need to design new operators like the correlation layer, or novel architectures [16, 17], to learn motions between adjacent frames. Such a model should handle both large and small displacements, as well as fine motion boundaries. Another concern with this work is that DenseNet has a large memory footprint, which may limit its potential applications, such as action recognition [18, 19, 20].
In this paper, we extend the DenseNet architecture to a fully convolutional network, and use an image reconstruction loss as guidance to learn motion estimation. Due to the dense connectivity pattern, our proposed method achieves better flow accuracy than the previous best unsupervised approach, and shortens the performance gap with supervised ones. Moreover, since our model is totally unsupervised, we can experiment with large-scale video corpora in future work to learn non-rigid real-world motion patterns. Through a comparison of popular CNN architectures, we found that it is important to design novel operators or networks for optical flow estimation instead of relying on existing architectures designed for image classification.
This work was funded by a National Science Foundation CAREER grant, IIS-1150115. We gratefully acknowledge the support of NVIDIA Corporation through the donation of the Titan X GPU used in this work.
-  A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazırbas, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox, “FlowNet: Learning Optical Flow with Convolutional Networks,” in ICCV, 2015.
-  Nikolaus Mayer, Eddy Ilg, Philip Häusser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox, “A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation,” in CVPR, 2016.
-  D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A Naturalistic Open Source Movie for Optical Flow Evaluation,” in ECCV, 2012.
-  Andreas Geiger, Philip Lenz, and Raquel Urtasun, “Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite,” in CVPR, 2012.
-  Aria Ahmadi and Ioannis Patras, “Unsupervised Convolutional Neural Networks for Motion Estimation,” in ICIP, 2016.
-  Jason J. Yu, Adam W. Harley, and Konstantinos G. Derpanis, “Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness,” in ECCV Workshop, 2016.
-  Yi Zhu, Zhenzhong Lan, Shawn Newsam, and Alexander G. Hauptmann, “Guided Optical Flow Learning,” arXiv preprint arXiv:1702.02295, 2017.
-  Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully Convolutional Models for Semantic Segmentation,” in CVPR, 2015.
-  Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio, “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation,” arXiv preprint arXiv:1611.09326, 2016.
-  Clement Godard, Oisin Mac Aodha, and Gabriel J. Brostow, “Unsupervised Monocular Depth Estimation with Left-Right Consistency,” arXiv preprint arXiv:1609.03677, 2016.
-  Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri, “Deep End2End Voxel2Voxel Prediction,” in CVPR Workshop, 2016.
-  Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten, “Densely Connected Convolutional Networks,” arXiv preprint arXiv:1608.06993, 2016.
-  Karen Simonyan and Andrew Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” in ICLR, 2015.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep Residual Learning for Image Recognition,” in CVPR, 2016.
-  Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu, “Spatial Transformer Networks,” in NIPS, 2015.
-  Anurag Ranjan and Michael J. Black., “Optical Flow Estimation using a Spatial Pyramid Network,” arXiv preprint arXiv:1611.00850, 2016.
-  Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox, “FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks,” arXiv preprint arXiv:1612.01925, 2016.
-  Yi Zhu and Shawn Newsam, “Depth2Action: Exploring Embedded Depth for Large-Scale Action Recognition,” in ECCV Workshop, 2016.
-  Yi Zhu and Shawn Newsam, “Efficient Action Detection in Untrimmed Videos via Multi-Task Learning,” in WACV, 2017.
-  Yi Zhu, Zhenzhong Lan, Shawn Newsam, and Alexander G. Hauptmann, “Hidden Two-Stream Convolutional Networks for Action Recognition,” arXiv preprint arXiv:1704.00389, 2017.
-  L. Bao, Q. Yang, and H. Jin, “Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow,” in CVPR, 2014.
-  J. Wulff and M. Black, “Efficient Sparse-to-Dense Optical Flow Estimation using a Learned Basis and Layers,” in CVPR, 2015.
-  T. Kroeger, R. Timofte, D. Dai, and L. Van Gool, “Fast Optical Flow using Dense Inverse Search,” in ECCV, 2016.