STC-Flow: Spatio-temporal Context-aware Optical Flow Estimation

by Xiaolin Song, et al.

In this paper, we propose a spatio-temporal contextual network, STC-Flow, for optical flow estimation. Unlike previous optical flow estimation approaches with local pyramid feature extraction and multi-level correlation, we propose a contextual relation exploration architecture that captures rich long-range dependencies in the spatial and temporal dimensions. Specifically, STC-Flow contains three key context modules, the pyramidal spatial context module, the temporal context correlation module and the recurrent residual contextual upsampling module, to build relationships at the feature extraction, correlation, and flow reconstruction stages, respectively. Experimental results indicate that the proposed scheme achieves state-of-the-art performance among two-frame based methods on the Sintel dataset and the KITTI 2012/2015 datasets.






I Introduction

Optical flow estimation is an important yet challenging problem in the field of video analytics. Recently, deep learning based approaches have been extensively exploited to estimate optical flow via convolutional neural networks (CNNs). Despite great efforts and rapid developments, the advancements are not as significant as in single-image computer vision tasks. The main reason is that optical flow is not directly measurable in the wild, and it is challenging to model motion dynamics with pixel-wise correspondences between two consecutive frames, which may contain widely varying motion displacements; optical flow estimation therefore requires efficient feature representations to match diverse moving objects and scenes.

Fig. 1: Overview of the spatio-temporal contextual network for optical flow estimation. The context modules aim to build relationships in the spatial and temporal dimensions. With multiple levels of context modeling, STC-Flow achieves better performance with rich details.

Conventional methods such as DeepFlow [20] and EpicFlow [16] estimate optical flow by matching features between two frames with hand-crafted algorithms. Most of these methods, however, incur heavy computational complexity and usually fail for motions with large displacements. CNN-based methods, which usually adopt encoder-decoder architectures with pyramidal feature extraction and flow reconstruction, such as FlowNet [4], SPyNet [15] and PWC-Net [18], boost the state of the art in optical flow estimation and outperform conventional methods. However, lower-level features contain rich details but have small receptive fields, which makes it hard to capture large motion displacements; higher-level features highlight the overall outlines or shapes of objects but lack details, which may cause misalignment of objects with complex shapes or non-rigid motions. It is therefore essential to capture context information with large receptive fields and long-range dependencies, building a global relationship at each stage of the CNN.

In this paper, as shown in Figure 1, we propose an end-to-end spatio-temporal contextual architecture for optical flow estimation, trained jointly and effectively. To build relationships at the feature extraction, correlation and flow reconstruction stages respectively, the network contains three key context modules: (a) the pyramidal spatial context module enhances the discriminative ability of feature representations in the spatial dimension; (b) the temporal context correlation module models the global spatio-temporal relationships of the cost volume calculated by the correlation operation; (c) the recurrent residual contextual upsampling module leverages the underlying content of the predicted flow field between adjacent levels, to learn high-frequency features and preserve edges within a large receptive field.

In summary, the main contributions of this work are three-fold:

  • We propose a general framework, i.e. the contextual attention framework, for efficient feature representation learning, which supports multiple inputs and complicated target operations.

  • We propose corresponding context modules based on the contextual attention framework, for feature extraction, correlation and optical flow reconstruction stages.

  • Our network achieves state-of-the-art performance on the Sintel and KITTI datasets for two-frame based optical flow estimation.

II Related Work

Optical flow estimation. Inspired by the success of CNNs, various deep networks for optical flow estimation have been proposed. Dosovitskiy et al. [4] establish FlowNet, a pioneering CNN exploration of optical flow estimation with an encoder-decoder architecture, of which the FlowNetS and FlowNetC variants are proposed with simple operations. However, the number of parameters is large, with heavy computation for correlation. Ilg et al. [11] propose a cascaded network with milestone performance built from FlowNetS and FlowNetC, at the cost of a huge parameter count and expensive computational complexity.

To reduce the number of parameters, Ranjan et al. [15] present the compact SPyNet, a spatial pyramid with multi-level representation learning. Hui et al. [9] propose LiteFlowNet and Sun et al. [18] propose PWC-Net, pioneers of the trend toward lightweight optical flow estimation networks. LiteFlowNet [9] involves cascaded flow inference with flow warping and feature matching. PWC-Net [18] utilizes pyramidal feature extraction and feature warping to construct the cost volume, and uses a context network for optical flow refinement. HD3 [22] decomposes the full match density into hierarchical features to estimate local matching, with heavy computational complexity. IRR [10] introduces an iterative residual refinement scheme and integrates occlusion prediction as additional auxiliary supervision. SelFlow [12] uses reliable flow predictions from non-occluded pixels to learn optical flow for hallucinated occlusions from multiple frames for better performance.

Fig. 2: The overall architecture of our spatio-temporal contextual network for optical flow estimation (STC-Flow). The PSC, TCC and RRCU modules can be flexibly adopted to model relationships of intra-/extra-features at each stage. Only the modules at the top two levels are shown.

Context modeling in neural networks. Context modeling has been successfully applied to capture long-range dependencies. Since a typical convolution operator has a local receptive field, context learning can affect an individual element by aggregating information from all elements. Many recent works utilize spatial self-attention to emphasize features of key local regions [5, 23]. The object relation module [8] extends the original attention to geometric relationships, and can be applied to improve the performance of object detection and other tasks. DANet [6] and CBAM [21] introduce channel-wise attention via the self-attention mechanism. The global context network [3] effectively models global context with a lightweight architecture. Specifically, the non-local network [19] aggregates spatial and temporal long-range dependencies for video frames within 3D ConvNets.

In the optical flow estimation task, spatial contextual information helps to refine details and deal with occlusion. PWC-Net [18] includes a context network with stacked dilated convolution layers for flow post-processing. In LiteFlowNet [9], a flow regularization layer is applied to ameliorate the issue of outliers and fake edges. IRR [10] utilizes bilateral filters to refine blurry flow and occlusion. Nevertheless, previous work on context modeling in optical flow estimation mainly focuses on spatial features. For motion context modeling, it is essential to provide a framework that explores both spatial and temporal information. Accordingly, our network introduces spatial and temporal context modules, and also introduces recurrent context to upsample spatial features of the predicted flow field.

III STC-Flow

Given a pair of video frames, scenes or objects are diverse in movement velocity and direction in the temporal dimension, and in scale, viewpoint, and luminance in the spatial dimension. Convolutional operations in CNNs process only a local neighborhood, so stacks of convolutions still yield a limited receptive field. Pixels belonging to one object share similar textures even when their motions differ; these textures introduce false-positive correlations, which result in wrong optical flow predictions.

To address this issue, our method, i.e. STC-Flow, models contextual information by building global associations of intra-/extra-features with the attention mechanism in spatial and temporal dimensions respectively. The network could adaptively aggregate long-range contextual information, thus optimizing feature representation in feature extraction, correlation, and reconstruction stages, as shown in Figure 2. In this section, we first introduce the contextual attention framework with single or multiple inputs for efficient feature representation learning. Based on the framework, we then propose three key context modules: pyramidal spatial context (PSC) module, temporal context correlation (TCC) module and recurrent residual contextual upsampling (RRCU) module for modeling contextual information.

III-A Contextual Attention Framework

Analysis on Attention Mechanism. To capture long-range dependencies and model contextual details of single images or video clips, the basic non-local network [19] aggregates pixel-wise information via a self-attention mechanism. We denote $x$ and $y$ as the input and output signals, such as a single image or a video clip. The non-local block can be expressed as follows:

$$y_i = \frac{1}{\mathcal{C}(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j),$$

where $i$ and $j$ are the indices of the target position coordinate and of all possible enumerated positions. $f(x_i, x_j)$ denotes the relationship between positions $i$ and $j$, which is normalized by a factor $\mathcal{C}(x)$. The matrix multiplication operation is utilized to strengthen details of each query position. Embedded Gaussian is a widely-used instantiation of $f$, which computes similarity in an embedding space and is normalized with a softmax function over all positions. The non-local block with embedded Gaussian is shown in Figure 3(b), and is expressed as follows:

$$y_i = \sum_{\forall j} \mathrm{softmax}_j\big(\theta(x_i)^{\top} \phi(x_j)\big)\, g(x_j),$$

where $\theta$, $\phi$ and $g$ are linear transformation matrices.
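The embedded-Gaussian non-local block above can be sketched in a few lines of NumPy; the shapes and the projection names (`W_theta`, `W_phi`, `W_g`) are illustrative, not taken from the paper:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_embedded_gaussian(x, W_theta, W_phi, W_g):
    """Non-local block (embedded Gaussian): each output position aggregates
    features from all positions, weighted by embedded-space similarity.
    x: (N, C) flattened feature map; W_*: (C, C') projection matrices."""
    theta = x @ W_theta                       # queries, (N, C')
    phi   = x @ W_phi                         # keys,    (N, C')
    g     = x @ W_g                           # values,  (N, C')
    attn  = softmax(theta @ phi.T, axis=-1)   # (N, N), rows sum to 1
    return attn @ g                           # (N, C') contextual features

rng = np.random.default_rng(0)
N, C, Cp = 6, 4, 4
x = rng.standard_normal((N, C))
Wt, Wp, Wg = (rng.standard_normal((C, Cp)) for _ in range(3))
y = nonlocal_embedded_gaussian(x, Wt, Wp, Wg)
print(y.shape)  # (6, 4)
```

In a real network the projections are learned 1×1 convolutions and the output is fused back to `x` with a residual connection; those parts are omitted here.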

Fig. 3: The contextual attention framework (a) with modularization; and the specified forms of (b) the non-local block [19], and (c) global context (GC) block [3].

Why attention for optical flow estimation? Here, we discuss the relation between the correlation in optical flow estimation and the matrix multiplication in the self-attention mechanism. We aim to explore the contextual information from the input pair. Denote the feature pair by $F_1$ and $F_2$, with height $H$, width $W$, channel number $C$, position coordinate $\mathbf{x}$ and channel index $c$ used to distinguish the features from the two input paths. As the key function in optical flow estimation, the "correlation" operation between two patches of the feature pair, centered at $\mathbf{x}_1$ in $F_1$ and $\mathbf{x}_2$ in $F_2$, is defined as follows for temporal modeling:

$$c(\mathbf{x}_1, \mathbf{x}_2) = \sum_{\mathbf{o} \in [-d, d] \times [-d, d]} \big\langle F_1(\mathbf{x}_1 + \mathbf{o}),\, F_2(\mathbf{x}_2 + \mathbf{o}) \big\rangle,$$

where $c$ denotes the cost volume calculated via correlation, and $\mathbf{o}$ denotes the offset of the correlation operation within the search region. Considering the matrix multiplication in the attention mechanism shown in Figure 4, the different orders of the two matrices in the multiplication lead to a great disparity of interpretation with respect to the displacements of correlation.
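As a concrete illustration, a cost volume can be computed with a dense local correlation; the sketch below uses a patch size of a single pixel and a normalized dot product, which follows the common FlowNet/PWC-Net convention but is an assumption, not the paper's exact implementation:

```python
import numpy as np

def cost_volume(f1, f2, d=1):
    """Local correlation between two feature maps: for every position in f1,
    compare with the (2d+1)^2 displaced neighbours in f2 via a channel-wise
    dot product. f1, f2: (H, W, C); returns (H, W, (2d+1)**2)."""
    H, W, C = f1.shape
    pad = np.pad(f2, ((d, d), (d, d), (0, 0)))  # zero-pad the search border
    out = np.empty((H, W, (2 * d + 1) ** 2), dtype=f1.dtype)
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = pad[d + dy:d + dy + H, d + dx:d + dx + W]
            out[:, :, k] = (f1 * shifted).sum(axis=-1) / C  # normalized dot product
            k += 1
    return out

rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 8, 16))
cv = cost_volume(f1, f1, d=1)
print(cv.shape)  # (8, 8, 9)
```

With a max displacement of 4, as used later in the paper, the channel dimension of the cost volume grows to (2·4+1)² = 81 per pyramid level.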

Discussions. In Figure 4(a), the expression is defined as $X_1 X_2^{\top}$, where $X_1, X_2 \in \mathbb{R}^{N \times C}$ are the reshaped feature pair with $N = H \times W$. If $X_1 = X_2$, this operation strengthens the detail features of each position by aggregating information across channels from other positions, which indicates spatial attention integration at full resolution; it is utilized in the basic non-local block [19]. However, if $X_1 \neq X_2$, by the definition of the cost volume, only the diagonal elements represent correlation with zero displacement. On the contrary, the expression in Figure 4(b) is defined as $X_1^{\top} X_2$, which is a global correlation representation among channels at full resolution, and is essential to the naive correlation operation between feature pairs. For the different matrix multiplication orders, the attention maps capture dependencies with corresponding concepts in spatial features and temporal dynamics, which enhance representation for feature extraction and correlation calculation, respectively.
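The two multiplication orders can be checked numerically. In the snippet below, `X1` and `X2` are illustrative names for the reshaped (positions × channels) feature pair; the key fact is that the diagonal of the position-affinity product equals the zero-displacement correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 5, 3                       # N = H*W flattened positions, C channels
X1 = rng.standard_normal((N, C))  # reshaped features of frame 1
X2 = rng.standard_normal((N, C))  # reshaped features of frame 2

# (a) X1 @ X2.T: an (N, N) position-affinity map. Its diagonal is exactly
# the per-position dot product, i.e. the correlation with zero displacement.
pos_affinity = X1 @ X2.T
zero_disp_corr = (X1 * X2).sum(axis=1)
assert np.allclose(np.diag(pos_affinity), zero_disp_corr)

# (b) X1.T @ X2: a (C, C) channel-wise map, a global correlation of the
# feature pair over all positions at full resolution.
chan_corr = X1.T @ X2
print(pos_affinity.shape, chan_corr.shape)  # (5, 5) (3, 3)
```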

Fig. 4: The matrix multiplication with different contextual information. (a) The position-wise attention embedding; (b) the channel-wise embedding, also the global correlation of feature pairs.
Fig. 5: The proposed simplified matrix multiplication with polyphase decomposition and reconstruction.

Lite matrix multiplication. Considering the runtime of flow prediction, the matrix multiplication in the contextual attention block needs to be simplified to lower its computational complexity. As shown in Figure 5, exploiting the neighbor similarity of images or frame pairs, we propose a polyphase decomposition and reconstruction scheme to simplify the matrix multiplication operation, which obtains a better approximation than a naive downsampling-upsampling scheme, and reduces the computation compared to the direct multiplication. Denote the polyphase decomposition factor as $k$. Given a reshaped feature, the FLOPs of the entire multiplication are reduced accordingly as $k$ grows. The comparison of different factors is presented in Section IV.
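To make the idea concrete, here is a hedged 1D sketch of polyphase-decomposed attention: positions are split into k interleaved phases, attention is computed within each phase, and the outputs are interleaved back. The exact decomposition/reconstruction in Figure 5 may differ; this sketch only illustrates where the FLOP saving comes from:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lite_attention(X, k=2):
    """Polyphase-decomposed self-attention over 1D positions (sketch).
    The N x N product shrinks to k products of size (N/k) x (N/k),
    cutting matmul FLOPs by roughly a factor of k, at the cost of
    ignoring cross-phase position pairs."""
    N, C = X.shape
    Y = np.empty_like(X)
    for p in range(k):
        Xp = X[p::k]                          # one polyphase component
        Y[p::k] = softmax(Xp @ Xp.T) @ Xp     # attention within the phase
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
print(lite_attention(X, k=2).shape)  # (8, 4)
```

With k = 1 the sketch reduces exactly to full self-attention, which matches the role of the factor-1 row in Table II.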

Contextual Attention Framework. In general, the input of a CNN is not limited to a single feature through a single path, and the attention block needs to be adapted to more than one feature, e.g. the two input features of the correlation operation. As shown in Figure 3(a), the components of the attention block can be abstracted as follows:

Attention aggregation. To aggregate the attention integration feature with the intrinsic feature representation in each corresponding dimension, where the intrinsic representation often adopts basic operators like interpolation, convolution and transposed convolution.

Context transformation. To transform the aggregated attention via a 1×1 or 1D convolution, and obtain the contextual attention feature over all positions and channels.

Target fusion. To aggregate the output feature of the target operation with the contextual attention, where the target operation is the main function that attains the objective from the input features.

Denote the multiple input features by $\{X_n\}$. We regard this abstraction as a contextual attention framework, in which one fusion operation performs the attention aggregation and another performs the target fusion, while the target operation and the attention integration are applied to the input features with a linear transformation factor. The non-local block and the other attention modules are specific forms of the contextual attention block with a single input feature, in which the target operation is the all-pass (identity) function.

III-B Pyramidal Spatial Context Module

Fig. 6: The pyramidal spatial context (PSC) module. (a) The framework of PSC in the network; (b) the details of "Pyramidal Spatial Context Modeling" in (a).

Inspired by the non-local network and the global context network, we propose a pyramidal spatial context module with a tight dual-attention block to enhance the discriminative ability of feature representations in the spatial position and channel dimensions. As shown in Figure 6, given a local feature at stage $l$, the spatial context module fuses the contextual attention at stage $l$ with that of the adjacent stage, so as to aggregate context from different granularities, using max-pooling and concatenation for the cross-stage fusion. Attention integrations in position and channel are then applied to learn the spatial and channel interdependencies, respectively.
III-C Temporal Context Correlation Module

Fig. 7: The temporal context correlation (TCC) module. (a) The details of the TCC module; (b) the Contextual PWC module utilized with TCC in (a). "MD" is the max displacement of correlation. The temporal successive representation utilizes a 3D convolution whose kernel covers $T$ frames in the temporal dimension, where $T$ is the frame number of the input, i.e. 2.

As the spatial context module learns query-independent context relationships at the feature extraction stage, the temporal context correlation module is adopted to model the relationships of the correlation calculation. As in the analysis of matrix multiplication above, full-resolution correlation is utilized to describe the global context of the correlation operation. As shown in Figure 7(a), given the local feature pair from feature extraction, the contextual correlation fuses the naive correlation with a temporal attention integration, which is computed with a "cross-attention" mechanism between the two features.
Notice that the linear transformation is modeled by a 3D convolution and a 1×1 convolution, which aims to explore temporal information across the time dimension. Since the max displacement of correlation is set to 4, the kernel of the 3D convolution needs to cover all frames in the temporal dimension, and its height and width must be greater than or equal to the max displacement, i.e. 5 in the proposed module.

The TCC module is a flexible correlation operator and can be applied to the PWC module in PWC-Net [18] as a "Contextual PWC" module, to learn long-range dependencies between the reference feature and the warped feature.

III-D Recurrent Residual Contextual Upsampling

Fig. 8: The recurrent residual contextual upsampling (RRCU) module. (a) The framework of RRCU in the network, with the Contextual PWC module in Figure 7; (b) The details of “Recurrent Residual Context Modeling” in (a).

Different from the spatial and temporal context representation modeling, the reconstruction context learning is a detail-aware operation to learn high-frequency features and preserve edges within a large receptive field. In view of the multi-stage structure of reconstruction, we propose an efficient recurrent module for upsampling, which leverages the underlying content information between the current stage and the previous stage.

The predicted optical flow at the current stage and the upsampled optical flow from the previous stage are first encoded by 1×1 convolutions with shared weights, and the residual between them is calculated at the smaller size from the encoder. Context modeling is then applied to this residual to explore the upsampling attention kernels for each corresponding source position, and the result is fused back into the bilinearly interpolated flow. Finally, the refined residual feature is assembled with the interpolated flow to obtain the refined upsampled flow with rich details. The architecture is illustrated in Figure 8. In the formulation, "∗" denotes the position-wise convolution operator with the adaptive attention kernels that model the detail context, and bilinear interpolation is used to upsample the flow field. The adaptive kernels are generated with the "Pixel Shuffle" [17] operator for sub-pixel convolution, to reconstruct sub-pixel information and preserve edges and textures, with upsampling factor $r$; here $r = 2$ between adjacent stages.
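The Pixel Shuffle operator [17] itself is a pure rearrangement and can be sketched as below; this channel-last (H, W, C) variant is an illustrative convention, the grouping of channels may differ from the paper's implementation:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Pixel Shuffle: rearrange an (H, W, C*r^2) feature into an
    (H*r, W*r, C) feature, moving channel groups into sub-pixel positions."""
    H, W, Cr2 = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(H, W, r, r, C)        # split channels into an r x r grid
    x = x.transpose(0, 2, 1, 3, 4)      # (H, r, W, r, C): interleave phases
    return x.reshape(H * r, W * r, C)

x = np.arange(2 * 2 * 4, dtype=float).reshape(2, 2, 4)
y = pixel_shuffle(x, r=2)
print(y.shape)  # (4, 4, 1)
```

Each spatial position contributes an r×r block to the output, so no information is lost; a learned convolution before the shuffle is what makes it a sub-pixel convolution.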

III-E Overall Architecture

Given the proposed contextual attention modules, we now describe the overall architecture of STC-Flow. The input is the frame pair $I_1$ and $I_2$ of size $H \times W \times 3$, and the goal of STC-Flow is to obtain the optical flow map of size $H \times W \times 2$. The contextual representations are modeled via three key components, the pyramidal spatial context (PSC) module, the temporal context correlation (TCC) module, and the recurrent residual contextual upsampling (RRCU) module, to leverage long-range dependency relationships in feature extraction, correlation and flow reconstruction, respectively. The entire network is trained jointly, as shown in Figure 2.

Since PWC-Net [18] and LiteFlowNet [9] provide superior performance with lightweight architectures, we take a simplified version of PWC-Net, with layer reduction in feature extraction and reconstruction, as the baseline of STC-Flow. For successive image/frame pairs, the backbone network with PSC outputs pyramidal feature maps for each image. With the feature maps of each stage converted to cost volumes via the correlation operation, the cost volumes are decoded and reconstructed to predict optical flow, assisted by TCC. With the guidance of backbone features and warping alignments, the predicted flow field goes through the RRCU module and the refined flow is obtained.

IV Experiments

In this section, we introduce the implementation details, and evaluate our method on public optical flow benchmarks, including MPI Sintel [2], KITTI 2012 [7] and KITTI 2015 [14], and compare it with state-of-the-art methods.

IV-A Implementation and training details

We take a simplified version of PWC-Net, with the same number of stages and layer reduction in feature extraction and reconstruction. The PSC and RRCU modules are utilized at stages 3, 4 and 5 for feature extraction and reconstruction respectively; the TCC module is applied at stages 3, 4, 5 and 6 for the correlation of feature pairs or warped features. The training loss weights among stages are 0.32, 0.08, 0.02, 0.01 and 0.005. We first train the models on the FlyingChairs dataset [4] with the L2 loss, the learning rate schedule introduced by [11], and random flipping and cropping of size 448×384. Secondly, we fine-tune the models on the FlyingThings3D dataset [13] with a cropping size of 768×384. Finally, the model is fine-tuned on the Sintel and KITTI datasets using the generalized Charbonnier function as the robust training loss. We use both the Clean and Final passes of the training data throughout the Sintel fine-tuning process, with a cropping size of 768×384; for the KITTI fine-tuning process, we use the mixed training data of KITTI 2012 and 2015, with a cropping size of 896×320.
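The multi-scale training loss described above can be sketched as follows. The stage weights are those quoted in the text; the Charbonnier exponent, epsilon, and the finest-to-coarsest ordering of the weights are illustrative assumptions, not values confirmed by the paper:

```python
import numpy as np

def charbonnier(pred, gt, alpha=0.45, eps=1e-3):
    """Generalized Charbonnier penalty, a robust alternative to L2:
    rho(x) = (x^2 + eps^2)^alpha. alpha/eps are illustrative defaults."""
    diff = pred - gt
    return np.mean((diff ** 2 + eps ** 2) ** alpha)

def multiscale_loss(preds, gts, weights=(0.32, 0.08, 0.02, 0.01, 0.005)):
    """Weighted sum of per-stage flow losses (stage ordering assumed)."""
    return sum(w * charbonnier(p, g) for w, p, g in zip(weights, preds, gts))

# Illustrative usage with random "predictions" against zero ground truth.
rng = np.random.default_rng(0)
preds = [rng.standard_normal((8 // 2 ** i, 8 // 2 ** i, 2)) for i in range(3)]
gts = [np.zeros_like(p) for p in preds]
print(float(multiscale_loss(preds, gts)) > 0.0)
```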

Sintel KITTI 2012 KITTI 2015
Clean Final AEE AEE Fl-all
baseline 2.924 4.088 4.621 11.743 36.53%
w. PSC 2.802 3.891 4.565 11.031 35.37%
w. PSC 2.747 3.873 4.545 10.677 34.84%
w. PSC 2.741 3.864 4.494 10.332 34.45%
w. 2D-NL 2.785 3.968 4.523 10.482 34.76%
Full model 2.412 3.601 4.196 10.181 32.23%
(a) The pyramidal spatial context module improves quantitative results significantly. "w. PSC" means "using PSC at stages 3, 4 and 5".
Sintel KITTI 2012 KITTI 2015
Clean Final AEE AEE Fl-all
baseline 2.924 4.088 4.621 11.743 36.53%
w. TCC 2.787 3.863 4.523 10.712 35.59%
w. TCC 2.641 3.780 4.389 10.313 34.58%
w. 2D-NL 2.764 3.869 4.498 10.564 35.25%
w. 3D-NL 2.635 3.745 4.393 10.324 34.63%
Full model 2.412 3.601 4.196 10.181 32.23%
(b) The temporal context correlation module is critical and outperforms the plain correlation module.
Sintel KITTI 2012 KITTI 2015
Clean Final AEE AEE Fl-all
baseline 2.924 4.088 4.621 11.743 36.53%
w. RRCU 2.696 3.794 4.432 10.332 34.65%
TCC+RRCU 2.567 3.722 4.368 10.295 33.89%
Full model 2.412 3.601 4.196 10.181 32.23%
(c) Recurrent residual contextual upsampling achieves better performance.
TABLE I: Ablation study of the component choices of our network: average end-point error (AEE) and percentage of erroneous pixels (Fl-all) of STC-Flow with different combinations of PSC, TCC and RRCU on the Sintel training Clean and Final passes, and on KITTI 2012/2015.
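For reference, the two metrics reported in the tables can be computed as below. This follows the standard definitions (Fl-all counts pixels whose error exceeds both 3 px and 5% of the ground-truth magnitude); it is a generic sketch, not code from the paper:

```python
import numpy as np

def aee(flow_pred, flow_gt):
    """Average end-point error: mean Euclidean distance between predicted
    and ground-truth flow vectors. flow_*: (H, W, 2)."""
    return np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1))

def fl_all(flow_pred, flow_gt):
    """KITTI Fl-all: fraction of pixels whose end-point error exceeds
    3 px AND 5% of the ground-truth flow magnitude."""
    epe = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    return np.mean((epe > 3.0) & (epe > 0.05 * mag))

gt = np.zeros((4, 4, 2))
pred = np.zeros((4, 4, 2))
pred[..., 0], pred[..., 1] = 3.0, 4.0   # every pixel off by a (3, 4) vector
print(aee(pred, gt), fl_all(pred, gt))  # 5.0 1.0
```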

IV-B Ablation Study

To demonstrate the effectiveness of each individual contextual attention module in our network, as shown in Table I and Figure 9, we conduct a rigorous ablation study of PSC, TCC, and RRCU, respectively. We observe that these modules capture clear semantic information with long-range dependencies. The baseline is trained on FlyingChairs and fine-tuned on FlyingThings3D. We also discuss the efficacy of lite matrix multiplication in Table II.

Pyramidal spatial context module. STC-Flow utilizes the PSC module at levels 3, 4 and 5. Table I(a) demonstrates that the PSC module improves performance on both the Sintel and KITTI datasets, since it enhances the ability to discriminate feature texture at the feature extraction stage; PSC at stage 3 is the most beneficial, as the low-level discriminative details matter.

Temporal context correlation module. The TCC module describes the relationship of correlation with spatial and temporal context. In Table I(b), we compare the performance of our network using the TCC module against the naive correlation operator, and also against a 2D non-local block on concatenated features and a 3D non-local block on feature pairs. The results demonstrate that fusing correlation with spatial and temporal context is better than correlation alone. Notice that 3D non-local blocks perform better on Sintel, but with heavy computational complexity; TCC achieves comparable performance with fewer FLOPs.

Recurrent residual contextual upsampling. We utilize the RRCU module to learn high-frequency context features and preserve edges. In Table I(c), we compare the quantitative results of our method using RRCU against a single transposed convolution, which demonstrates that reconstruction context learning preserves details and improves performance.

Lite matrix multiplication. Lite matrix multiplication is an efficient scheme to reduce computational complexity. We compare the performance of this scheme with different polyphase decomposition factors on the Sintel training data. As shown in Table II, lite matrix multiplication has a marginal influence on AEE, but increases the frame rate conspicuously. Considering the tradeoff between accuracy and runtime, we select a factor of 2 for the full model.

Factor AEE/SSIM (Clean) AEE/SSIM (Final) Runtime (fps)
1 2.407/— 3.588/— 20
2 2.412/0.9765 3.601/0.9982 22
4 2.515/0.9061 3.856/0.8990 25
TABLE II: Results of lite matrix multiplication with different polyphase decomposition factors on the Sintel training Clean and Final passes: AEE, frame rate, and the structural similarity index (SSIM) between the context features at stage 4 produced by lite and naive multiplication. (Frame rate measured on an Intel Core i5 CPU and an NVIDIA GeForce GTX 1080 Ti GPU.)
Fig. 9: Results of ablation study on Sintel training Clean and Final passes. We also indicate the learned features on corresponding modules — PSC and RRCU in stage 4 and TCC in stage 6. (Zoom in for details.)
Fig. 10: Examples of predicted optical flow from different methods on Sintel and KITTI datasets. Our method achieves the better performance and preserves the details with fewer artifacts. (Zoom in for details.)
Method Sintel Clean Sintel Final KITTI 2012 KITTI 2015
train test train test train test train train(Fl-all) test(Fl-all)
DeepFlow [20] 2.66 5.38 3.57 7.21 4.48 5.8 10.63 26.52% 29.18%
EpicFlow [16] 2.27 4.12 3.56 6.29 3.09 3.8 9.27 27.18% 27.10%
FlowFields [1] 1.86 3.75 3.06 5.81 3.33 3.5 8.33 24.43%
FlowNetS [4] 4.50 7.42 5.45 8.43 8.26
FlowNetS-ft [4] (3.66) 6.96 (4.44) 7.76 7.52 9.1
FlowNetC [4] 4.31 7.28 5.87 8.81 9.35
FlowNetC-ft [4] (3.78) 6.85 (5.28) 8.51 8.79
FlowNet2 [11] 2.02 3.96 3.54 6.02 4.01 10.08 29.99%
FlowNet2-ft [11] (1.45) 4.16 (2.19) 5.74 3.52 9.94 28.02%
SPyNet [15] 4.12 6.69 5.57 8.43 9.12
SPyNet-ft [15] (3.17) 6.64 (4.32) 8.36 3.36 4.1 35.07%

LiteFlowNet [9] 2.48 4.04 4.00 10.39 28.50%
LiteFlowNet-ft [9] (1.35) 4.54 (1.78) 5.38 (1.05) 1.6 (1.62) (5.58%) (9.38%)
PWC-Net [18] 2.55 3.93 4.14 10.35 33.67%
PWC-Net-ft [18] (2.02) 4.39 (2.08) 5.04 (1.45) 1.7 (2.16) (9.80%) 9.60%
SelFlow-ft [12] (1.68) 3.74 (1.77) 4.26 (0.76) 1.5 (1.18) 8.42%
IRR-PWC-ft [10] (1.92) 3.84 (2.51) 4.58 (1.63) (5.32%) 7.65%
HD3-ft [22] (1.70) 4.79 (1.17) 4.67 (0.81) 1.4 (1.31) (4.10%) 6.55%
STC-Flow (ours) 2.41 3.60 4.20 10.18 32.23%
STC-Flow-ft (Ours) (1.36) 3.52 (1.73) 4.87 (0.98) 1.5 (1.46) (5.43%) 7.99%
TABLE III: AEE and Fl-all of different methods on Sintel and KITTI datasets. The “-ft” suffix denotes the fine-tuned networks using the target dataset. The values in parentheses are the results of the networks on the data they were trained on, and hence are not directly comparable to the others.

IV-C Comparison with State-of-the-art Methods

As shown in Table III, we achieve comparable quantitative results on the Sintel and KITTI datasets relative to state-of-the-art methods. Some visualization results are shown in Figure 10. STC-Flow achieves the best AEE among these methods on the Sintel Clean pass. We can see that finer details are well preserved via context modeling of spatial and temporal long-range relationships, with fewer artifacts and lower end-point error. In addition, our method is based on only two frames, without additional information (such as the occlusion maps used by IRR [10], or additional datasets), yet it outperforms state-of-the-art multi-frame methods, e.g. SelFlow [12]. Moreover, STC-Flow is lightweight, with far fewer parameters: about 9M, instead of 110M for FlowNet2 [11] and 40M for HD3 [22]. We believe our flexible scheme can help achieve better performance for other baseline networks, including multi-frame based methods.

V Conclusion

To explore motion context information for accurate optical flow estimation, we propose a spatio-temporal context-aware network, STC-Flow. We propose three context modules for the feature extraction, correlation, and optical flow reconstruction stages, i.e. the pyramidal spatial context (PSC) module, the temporal context correlation (TCC) module, and the recurrent residual contextual upsampling (RRCU) module, respectively, and we have validated the effectiveness of each component. The proposed scheme achieves state-of-the-art performance without using multiple frames or additional information.


  • [1] C. Bailer, B. Taetz, and D. Stricker (2015) Flow fields: dense correspondence fields for highly accurate large displacement optical flow estimation. In ICCV, Cited by: TABLE III.
  • [2] D. J. Butler, J. Wulff, G. B. Stanley, et al. (2012) A naturalistic open source movie for optical flow evaluation. In ECCV, Cited by: §IV.
  • [3] Y. Cao, J. Xu, S. Lin, et al. (2019) GCNet: non-local networks meet squeeze-excitation networks and beyond. CVPR. Cited by: §II, Fig. 3.
  • [4] A. Dosovitskiy, P. Fischer, E. Ilg, et al. (2015) Flownet: learning optical flow with convolutional networks. In ICCV, Cited by: §I, §II, §IV-A, TABLE III.
  • [5] Y. Du, C. Yuan, B. Li, et al. (2018) Interaction-aware spatio-temporal pyramid attention networks for action classification. ECCV. Cited by: §II.
  • [6] J. Fu, J. Liu, H. Tian, et al. (2018) Dual attention network for scene segmentation. CVPR. Cited by: §II.
  • [7] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, Cited by: §IV.
  • [8] H. Hu, J. Gu, Z. Zhang, et al. (2018) Relation networks for object detection. CVPR. Cited by: §II.
  • [9] T. Hui, X. Tang, and C. Change Loy (2018) Liteflownet: a lightweight convolutional neural network for optical flow estimation. In CVPR, Cited by: §II, §II, §III-E, TABLE III.
  • [10] J. Hur and S. Roth (2019) Iterative residual refinement for joint optical flow and occlusion estimation.. In CVPR, Cited by: §II, §II, §IV-C, TABLE III.
  • [11] E. Ilg, N. Mayer, T. Saikia, et al. (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In CVPR, Cited by: §II, §IV-A, §IV-C, TABLE III.
  • [12] P. Liu, M. R. Lyu, I. King, et al. (2019) SelFlow: self-supervised learning of optical flow. CVPR. Cited by: §II, §IV-C, TABLE III.
  • [13] N. Mayer, E. Ilg, P. Hausser, et al. (2016) A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR, Cited by: §IV-A.
  • [14] M. Menze and A. Geiger (2015) Object scene flow for autonomous vehicles. In CVPR, Cited by: §IV.
  • [15] A. Ranjan and M. J. Black (2017) Optical flow estimation using a spatial pyramid network. In CVPR, Cited by: §I, §II, TABLE III.
  • [16] J. Revaud, P. Weinzaepfel, Z. Harchaoui, et al. (2015) Epicflow: edge-preserving interpolation of correspondences for optical flow. In CVPR, Cited by: §I, TABLE III.
  • [17] W. Shi, J. Caballero, F. Huszar, et al. (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. CVPR. Cited by: §III-D.
  • [18] D. Sun, X. Yang, M. Liu, et al. (2018) PWC-net: cnns for optical flow using pyramid, warping, and cost volume. In CVPR, Cited by: §I, §II, §II, §III-C, §III-E, TABLE III.
  • [19] X. Wang, R. Girshick, A. Gupta, et al. (2018) Non-local neural networks. CVPR. Cited by: §II, Fig. 3, §III-A, §III-A.
  • [20] P. Weinzaepfel, J. Revaud, Z. Harchaoui, et al. (2013) DeepFlow: large displacement optical flow with deep matching. In ICCV, Cited by: §I, TABLE III.
  • [21] S. Woo, J. Park, J. Lee, et al. (2018) CBAM: convolutional block attention module. ECCV. Cited by: §II.
  • [22] Z. Yin, T. Darrell, and F. Yu (2019) Hierarchical discrete distribution decomposition for match density estimation. In CVPR, Cited by: §II, §IV-C, TABLE III.
  • [23] H. Zhang, I. Goodfellow, D. N. Metaxas, et al. (2019) Self-attention generative adversarial networks. ICML. Cited by: §II.