Mobile robots typically operate in complex environments that are inherently dynamic. Therefore, it is important for such autonomous systems to be aware of dynamic objects in their surroundings. Optical flow describes the pixel-level correspondence between two ordered images, and can be regarded as a useful representation for dynamic object detection. Consequently, many approaches for mobile robot tasks, such as SLAM, dynamic object detection and robot navigation, incorporate optical flow information to improve their performance.
To overcome these limitations, we propose CoT-AMFlow, which comprises adaptive modulation networks, named AMFlows, that learn optical flow estimation in an unsupervised manner with a co-teaching strategy. An overview of our proposed CoT-AMFlow is illustrated in Fig. 1. We leverage three novel techniques to improve flow accuracy, as follows:
We apply flow modulation modules (FMMs) in our AMFlow to refine the flow initialization from the preceding pyramid level using local flow consistency, which can address the issue of accumulated errors.
We present cost volume modulation modules (CMMs) in our AMFlow to explicitly reduce outliers in the cost volume using a flexible and efficient sparse point-based scheme.
We adopt a co-teaching strategy, where two AMFlows with different initializations simultaneously teach each other about challenging regions to improve robustness against outliers.
We conduct extensive experiments on the MPI Sintel, KITTI Flow 2012, KITTI Flow 2015 and Middlebury Flow benchmarks. Experimental results show that our CoT-AMFlow outperforms all other unsupervised approaches, while still running in real time.
2 Related Work
2.1 Optical Flow Estimation
With recent developments in deep learning, supervised approaches using convolutional neural networks (CNNs) have been extensively applied to optical flow estimation, and the achieved results are very promising. FlowNet was the first end-to-end deep neural network for optical flow estimation; it employs a correlation layer to compute feature correspondence. Later on, PWC-Net and LiteFlowNet presented a pyramid architecture consisting of feature warping layers, cost volumes and flow estimation layers. Such an architecture can achieve remarkable flow accuracy and high efficiency simultaneously. Their subsequent versions [22, 11] also made incremental improvements. Unsupervised approaches generally adopt network architectures similar to those of supervised approaches, and focus more on training strategies. However, existing network architectures do not explicitly address the issues of noisy flow initializations and outliers in the cost volume, as previously mentioned. Therefore, we develop the FMMs and CMMs in our AMFlow to overcome these limitations.
Among the training strategies for unsupervised approaches, DSTFlow first presented a photometric loss and a smoothness loss for unsupervised training. Additionally, some approaches train a single network to perform occlusion reasoning for accuracy improvement [18, 29]. Self-supervision [16, 17] is also an important strategy for unsupervised training. It first trains a single network to generate flow labels, and then conducts data augmentation to make flow estimation more challenging. The augmented samples are further employed as supervision to train another network. One variant of self-supervision is to train only one network with a two-forward process. However, training a single network to provide flow labels is likely to be unreliable due to the disturbance of outliers and the lack of ground-truth supervision. To address this issue, we integrate self-supervision into a co-teaching framework, where two networks simultaneously teach each other about challenging regions to improve stability against outliers.
2.2 Co-Teaching Strategy
The co-teaching strategy was first proposed for the image classification task with extremely noisy labels. Since then, many researchers have resorted to this strategy for various robust training tasks, such as face recognition and object detection. The main difference between previous studies and our approach is that they focus on supervised learning with noisy labels, while we focus on unsupervised learning. Moreover, the noises in their tasks exist at the image level (noisy image classification labels), while the outliers in our task exist at the pixel level (inaccurate flow estimations in challenging regions).
3.1 Network Architecture
In this subsection, we first introduce the overall architecture of our AMFlow, and then present our FMM and CMM. Since we use many notations, we suggest readers refer to the glossary provided in the appendix for better understanding. Fig. 2 illustrates an overview of our proposed AMFlow, which follows the pipeline of PWC-Net. Different pyramid levels of feature maps are first extracted hierarchically from the two input images using a siamese feature pyramid network, and are then sent to the coarse-to-fine flow decoder. For simplicity, we take a single pyramid level as an example to introduce our flow decoder. First, the upsampled flow estimation from the preceding level is processed by our FMM for refinement, and the resulting modulated flow is employed to align the second feature map with the first. A correlation operation is then employed to compute the cost volume, which is subsequently processed by our CMM to remove outliers. The modulated cost volume is then fed to the same flow estimation layer as PWC-Net to estimate a flow residual, which is added to the modulated flow to obtain the flow estimation at the current level. This process iterates, generating flow estimations at different scales.
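As a minimal illustration of the decoder's matching step, the sketch below computes a 1-D cost volume by correlating each feature of the first image with features of the (already warped) second image within a search range. The feature values, search range and normalization here are hypothetical toy choices, not the paper's exact settings.

```python
def cost_volume_1d(feat1, feat2_warped, max_disp):
    """Correlation-based cost volume on a 1-D pixel grid.

    feat1, feat2_warped: lists of feature vectors (one per pixel).
    Returns cost[p][i] for displacement d = i - max_disp.
    """
    n = len(feat1)
    costs = []
    for p in range(n):
        row = []
        for d in range(-max_disp, max_disp + 1):
            q = p + d
            if 0 <= q < n:
                # dot product between feature vectors, normalized by length
                dot = sum(a * b for a, b in zip(feat1[p], feat2_warped[q]))
                row.append(dot / len(feat1[p]))
            else:
                row.append(0.0)  # out-of-range candidates get zero cost
        costs.append(row)
    return costs

feat1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
feat2 = [[0.0, 1.0], [1.0, 1.0], [1.0, 0.0]]
cv = cost_volume_1d(feat1, feat2, 1)
```

In the full decoder this correlation is computed over 2-D displacements at every pyramid level, and the resulting volume is what the CMM later modulates.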
Flow Modulation Module (FMM). In the coarse-to-fine framework, the flow estimation from the preceding level is adopted as the flow initialization at the current level. Therefore, inaccurate flow estimations in challenging regions can propagate to subsequent levels and cause significant performance degradation. Our FMM is developed to address this problem based on the concept of local flow consistency.
Our FMM is based on the assumption that neighboring pixels with similar features should have similar optical flows. Therefore, for a pixel with an inaccurate flow estimation, we look for a nearby pixel that has a similar feature vector and an accurate flow estimation, and then replace the inaccurate flow with that of the nearby pixel.
To this end, we first compute a confidence map based on the upsampled flow estimation and the downsampled input images, as illustrated in Fig. 2. The confidence computing operation is defined as follows:
where the confidence is computed from the photometric difference between the first downsampled image and the second downsampled image warped by the upsampled flow. Then, we use a self-correlation layer to compute a self-cost volume, which measures the similarity between each pixel in the feature map and its neighboring pixels. The adopted self-correlation layer is identical to the correlation layer used in the above-mentioned flow decoder, except that it takes only one feature map as input. We further concatenate the confidence map with the self-cost volume, and send the concatenation to several convolution layers to obtain a displacement map. Finally, we warp the upsampled flow based on the displacement map to get the modulated flow estimation.
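The local-flow-consistency idea behind the FMM can be sketched in a simplified, non-learned form: each low-confidence pixel borrows the flow of its most feature-similar high-confidence neighbor. The real module instead learns a displacement map with convolutions; the scalar features, radius and confidence threshold below are hypothetical.

```python
def modulate_flow(flow, conf, feats, radius=1, conf_thresh=0.5):
    """Simplified flow modulation on a 1-D pixel grid.

    flow[p]: flow value at pixel p; conf[p]: confidence in [0, 1];
    feats[p]: a scalar feature per pixel (vectors in practice).
    """
    n = len(flow)
    out = list(flow)
    for p in range(n):
        if conf[p] >= conf_thresh:
            continue  # this flow value is already trusted
        best_q, best_sim = None, float("-inf")
        for q in range(max(0, p - radius), min(n, p + radius + 1)):
            if q == p or conf[q] < conf_thresh:
                continue
            # negative absolute feature difference as a similarity score
            sim = -abs(feats[p] - feats[q])
            if sim > best_sim:
                best_sim, best_q = sim, q
        if best_q is not None:
            out[p] = flow[best_q]  # replace with the confident neighbor's flow
    return out

flow = [2.0, 9.0, 2.1]   # middle pixel is an outlier
conf = [0.9, 0.1, 0.8]
feats = [1.0, 1.0, 3.0]  # middle pixel resembles the left neighbor
modulated = modulate_flow(flow, conf, feats)
```

Here the outlier flow at the middle pixel is replaced by the flow of the feature-similar left neighbor, mirroring the refinement the FMM performs before the warping step.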
Cost Volume Modulation Module (CMM). Ambiguous correspondence in challenging regions can introduce noises into the cost volume, which further influence the subsequent flow estimation layers. Our CMM is designed to reduce noises in the cost volume.
where the modulated cost at a pixel for a given flow residual candidate is computed from the original costs at its neighboring pixels for the same candidate, weighted by learned modulation weights. Note that the flow residual candidates are indexed one-dimensionally rather than by their original two-dimensional displacements for simplicity, the same scheme as adopted in PWC-Net.
where the sum runs over a fixed number of sampling points, each associated with a modulation weight and the fixed offset of the original convolution layer. To make the modulation scheme more flexible, we also employ a separate convolutional layer on the cost volume to learn an additional offset and a spatially variant weight. These two terms can effectively and efficiently help remove outliers in challenging regions.
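A minimal, non-learned sketch of this point-based modulation: each modulated cost is a weighted sum of costs at a few sampled neighbor positions, with an extra per-pixel offset shifting the sampling points. The weights and offsets below are hypothetical stand-ins for the learned quantities.

```python
def modulate_cost_1d(cost, weights, fixed_offsets, extra_offsets):
    """Sparse point-based cost modulation on a 1-D cost slice.

    cost: original costs per pixel (one flow-residual candidate).
    weights[k]: modulation weight of the k-th sampling point.
    fixed_offsets[k]: fixed offset of the original convolution layer.
    extra_offsets[p][k]: learned per-pixel additional offset.
    """
    n = len(cost)
    out = []
    for p in range(n):
        acc = 0.0
        for k, (w, d) in enumerate(zip(weights, fixed_offsets)):
            q = p + d + extra_offsets[p][k]
            q = min(max(q, 0), n - 1)  # clamp sampling point to the grid
            acc += w * cost[q]
        out.append(acc)
    return out

cost = [1.0, 4.0, 1.0, 1.0]   # spike at pixel 1 acts as an outlier
weights = [0.5, 0.5]          # average two sampling points
fixed = [-1, 1]               # sample the two immediate neighbors
extra = [[0, 0]] * 4          # no learned shift in this toy case
modulated = modulate_cost_1d(cost, weights, fixed, extra)
```

Note how the spiked cost at pixel 1 is replaced by the average of its neighbors, which is the outlier-suppression effect the CMM is designed to achieve.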
3.2 Loss Function
We employ three common loss functions, 1) a photometric loss, 2) a smoothness loss and 3) a self-supervision loss, to train our CoT-AMFlow, as illustrated in Fig. 1. For each network, the forward flow and backward flow can be obtained given the two input images. Then, we can compute an occlusion map with values between 0 and 1, where a higher value indicates that the corresponding pixel is more likely to be occluded, and vice versa. Based on these notations, we first introduce our adopted photometric loss as follows:
where the generalized Charbonnier penalty function is used as the robust distance, a stop-gradient is applied to the occlusion maps, and element-wise multiplication masks out occluded pixels. (4) shows that occluded regions have little impact on the photometric loss, since no correspondence exists in these regions. Moreover, we stop the gradient at the occlusion maps to avoid a trivial solution. Then, the following formulation shows our utilized second-order edge-aware smoothness loss:
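The occlusion-masked photometric loss can be sketched per pixel as below; the Charbonnier epsilon and exponent are common defaults, not necessarily the paper's exact values, and images are flattened to plain lists for simplicity.

```python
def charbonnier(x, eps=1e-3, alpha=0.5):
    """Generalized Charbonnier penalty: (x^2 + eps^2)^alpha."""
    return (x * x + eps * eps) ** alpha

def photometric_loss(img1, img2_warped, occ):
    """Occlusion-masked photometric loss on flat pixel lists.

    occ[p] in [0, 1]: 1 means occluded, so the pixel is masked out.
    In training, occ would be treated as a constant (stop-gradient).
    """
    num = sum((1.0 - o) * charbonnier(a - b)
              for a, b, o in zip(img1, img2_warped, occ))
    den = sum(1.0 - o for o in occ) + 1e-8  # normalize by visible pixels
    return num / den

# The middle pixel disagrees photometrically but is marked occluded,
# so it does not contribute to the loss.
loss = photometric_loss([0.2, 0.8, 0.5], [0.2, 0.1, 0.5], [0.0, 1.0, 0.0])
```

Masking the occluded pixel keeps its large photometric error from dominating the loss, which is exactly why occlusion estimation quality matters so much in the co-teaching discussion later.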
where the sum runs over the color channels and is normalized by the total number of pixels. We also adopt a self-supervision scheme. Specifically, we first apply transformations to the input images and the estimated flow to construct augmented samples. The transformations include spatial, occlusion and appearance transformations. We also obtain a flow prediction from the augmented image pair. Then, our self-supervision loss is shown as follows:
where the distance is measured with the L2 norm. Note that, different from the occlusion map used in the photometric loss, the map used here measures the occlusion relationship between the original and augmented samples: a higher value indicates that the corresponding pixel is less likely to be occluded in the original sample but more likely to be occluded in the augmented one. Therefore, (6) shows that the self-supervision loss helps improve the accuracy of flow estimations in challenging regions.
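A per-pixel sketch of this self-supervision term: the flow predicted on the augmented sample is pulled toward the (stop-gradient) teacher flow from the original sample, weighted by a map that highlights pixels visible originally but occluded after augmentation. The weighting map here is a hypothetical stand-in for that occlusion-relationship map.

```python
def self_supervision_loss(teacher_flow, student_flow, weight):
    """Weighted L2 distance between teacher and student flows.

    teacher_flow, student_flow: lists of (u, v) flow vectors per pixel;
    teacher_flow is treated as a constant target (stop-gradient).
    weight[p]: high where the pixel is newly occluded by augmentation.
    """
    num = sum(w * ((tu - su) ** 2 + (tv - sv) ** 2) ** 0.5
              for (tu, tv), (su, sv), w in zip(teacher_flow, student_flow, weight))
    den = sum(weight) + 1e-8
    return num / den

teacher = [(1.0, 0.0), (2.0, 1.0)]
student = [(1.0, 0.0), (0.0, 1.0)]
weight = [0.0, 1.0]   # only the second pixel is newly occluded
loss = self_supervision_loss(teacher, student, weight)
```

Only the newly occluded pixel is supervised, so the student network receives a training signal precisely in the challenging regions where the photometric loss is uninformative.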
3.3 Co-Teaching Strategy
Our co-teaching strategy is illustrated in Fig. 1, and the corresponding steps are shown in Algorithm 1. Specifically, we simultaneously train two networks with separate parameters. In each mini-batch, we first let the two networks forward individually to obtain several outputs (Line 4). Then, we filter out the pixels with a high occlusion probability by setting their values in the occlusion map to 1 (completely occluded, and thus having no impact on the photometric loss) (Line 5). The filtering threshold is controlled by a dynamic parameter, which equals 1 at the beginning and then decreases gradually as the number of epochs increases. The key point of our co-teaching strategy is that each network uses the occlusion maps estimated by the other network to compute its own loss function (Lines 6 and 7). Finally, we update the parameters of the two networks separately and also update the threshold (Lines 8 and 10). Next, we answer two important questions about our co-teaching strategy: 1) why do we need a dynamic threshold, and 2) why can swapping the occlusion maps estimated by the two networks help improve the accuracy of unsupervised optical flow estimation?
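The steps above can be sketched as a training-loop skeleton, where the key operation is the swap: each network's loss is masked by the other network's occlusion estimate. The network internals, the toy loss and the threshold value are placeholders, not the paper's implementation.

```python
def filter_occlusion(occ, threshold):
    """Line 5: pixels whose occlusion probability exceeds the dynamic
    threshold are set to 1 (fully occluded, excluded from the loss)."""
    return [1.0 if o > threshold else o for o in occ]

def cot_step(occ_a, occ_b, loss_fn, threshold):
    """One co-teaching step (Lines 4-8): each network is trained with
    the occlusion map estimated by the *other* network."""
    occ_a = filter_occlusion(occ_a, threshold)
    occ_b = filter_occlusion(occ_b, threshold)
    loss_a = loss_fn(occ_b)   # network A uses B's occlusion map
    loss_b = loss_fn(occ_a)   # network B uses A's occlusion map
    return loss_a, loss_b

# Toy loss: mean visibility weight, just to exercise the swap.
loss_fn = lambda occ: sum(1.0 - o for o in occ) / len(occ)
la, lb = cot_step([0.9, 0.1], [0.2, 0.8], loss_fn, threshold=0.5)
```

Because the two networks disagree on which pixels are occluded, each one masks the other's loss differently, which is what lets them correct each other's occlusion mistakes rather than reinforcing their own.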
To answer the first question: it is meaningless to compute the photometric loss on occluded regions, and thus we adopt an occlusion-masked photometric loss. According to , networks first learn easy and clear patterns, i.e., unchallenging regions. However, as the number of epochs increases, the networks gradually become affected by inaccurately estimated occlusion maps and thus overfit on the occluded regions, which in turn leads to more inaccurate occlusion estimations and further causes significant performance degradation. To address this, we keep more pixels in the initial epochs, i.e., the threshold is large. Then, we gradually filter out pixels with high occlusion probability, i.e., the threshold gradually decreases, to ensure the networks do not memorize these possible outliers.
The dynamic threshold can, however, only alleviate rather than entirely avoid the adverse impact of the occluded regions. Therefore, we further adopt a scheme with two networks, which leads to the answer to our second question. The intuition is that different networks have different abilities to learn flow estimation and, correspondingly, can generate different occlusion estimations. Therefore, swapping the occlusion maps estimated by the two networks can help them adaptively correct inaccurate occlusion estimations. Compared with most existing approaches, which directly transfer errors back to themselves, our co-teaching strategy can effectively avoid the accumulation of errors and thus improve stability against outliers for unsupervised optical flow estimation. Note that since deep neural networks are highly non-convex and different initializations can lead to different local optima, we employ two AMFlows with different initializations in our CoT-AMFlow, following prior work, as illustrated in Fig. 1.
4 Experimental Results
4.1 Dataset and Implementation Details
In our experiments, we set the weights of the loss terms in our loss function. In addition, one loss weight is held constant for the first 40% of epochs, increased linearly to 0.15 over the next 20% of epochs, and kept constant thereafter. The learning rate adopts an exponential decay scheme, and the Adam optimizer is used. Moreover, we set the two hyper-parameters in Algorithm 1 for evaluation on public benchmarks.
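The described weight schedule can be written as a piecewise-linear function of training progress; the starting value below (0.0) is a hypothetical placeholder, since only the final value of 0.15 is stated in the text.

```python
def weight_schedule(progress, start=0.0, end=0.15):
    """Loss-weight schedule: constant for the first 40% of epochs,
    linear ramp to `end` over the next 20%, then constant.

    progress: fraction of training completed, in [0, 1].
    """
    if progress < 0.4:
        return start
    if progress < 0.6:
        # linear interpolation between start and end over [0.4, 0.6]
        return start + (end - start) * (progress - 0.4) / 0.2
    return end
```

Delaying the ramp in this way is a common pattern for self-supervision terms: the flow estimates are too noisy early in training to serve as useful pseudo-labels.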
We first evaluate our CoT-AMFlow on three popular optical flow benchmarks, MPI Sintel , KITTI Flow 2012  and KITTI Flow 2015 . The experimental results are shown in Section 4.2. Then, we perform a generalization evaluation on the Middlebury Flow benchmark , as presented in Section 4.3. We also conduct extensive ablation studies to demonstrate the superiority of 1) our selection of and ; 2) our FMM and CMM; 3) our AMFlow over other network architectures; and 4) our co-teaching strategy over other strategies for unsupervised training. The experimental results are presented in the appendix.
We follow the training protocols of previous unsupervised approaches for fair comparison. For the MPI Sintel benchmark, we first train our model on raw movie frames and then fine-tune it on the training set. For the two KITTI Flow benchmarks, we first employ the KITTI raw dataset to pre-train our model and then fine-tune it using the multi-view extension data. Additionally, we adopt two standard evaluation metrics, the average end-point error (AEPE) and the percentage of erroneous pixels (F1) [4, 7, 20, 2].
Table 1: Evaluation results (Approach, S, MPI Sintel, KITTI 2012, KITTI 2015).
4.2 Performance on Public Benchmarks
According to the online leaderboards of the MPI Sintel (http://sintel.is.tue.mpg.de/results), KITTI Flow 2012 (http://www.cvlibs.net/datasets/kitti/eval_stereo_flow.php?benchmark=flow) and KITTI Flow 2015 (http://cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow) benchmarks, as shown in Table 1, our CoT-AMFlow outperforms all other unsupervised optical flow estimation approaches. We can clearly observe that our approach is significantly ahead of other unsupervised approaches, especially on the MPI Sintel benchmark, where an AEPE improvement of 0.53px–5.42px is achieved on the Sintel Clean benchmark. We also use the KITTI Flow 2015 benchmark to record the average inference time of our CoT-AMFlow. The results in Table 1 show that our approach still runs in real time with state-of-the-art performance. One exciting fact is that our unsupervised CoT-AMFlow achieves competitive performance compared with supervised approaches. Specifically, on the MPI Sintel Clean benchmark, our CoT-AMFlow outperforms classic networks such as PWC-Net and LiteFlowNet, while achieving only slightly inferior performance compared with LiteFlowNet2, which demonstrates the effectiveness of our adaptive modulation network and co-teaching strategy. Fig. 3 illustrates examples from the three public benchmarks, where we can clearly see that our CoT-AMFlow yields more robust and accurate results.
4.3 Generalization Analysis across Datasets
We employ the CoT-AMFlow trained on the MPI Sintel benchmark directly on the Middlebury Flow benchmark to test the generalization ability of our approach. Table 2 shows the online leaderboard of the Middlebury Flow benchmark (https://vision.middlebury.edu/flow/eval/results/results-e1.php). Note that our CoT-AMFlow has not been fine-tuned on this benchmark. We can observe that our CoT-AMFlow significantly outperforms the unsupervised UnFlow and even presents superior performance over supervised approaches such as PWC-Net and LiteFlowNet. These results strongly verify that our CoT-AMFlow has an excellent generalization ability.
Table 2: Generalization results on the Middlebury Flow benchmark for PWC-Net, LiteFlowNet, UnFlow and our CoT-AMFlow.
In this paper, we proposed CoT-AMFlow, an adaptive modulation network with a co-teaching strategy for unsupervised optical flow estimation. Our CoT-AMFlow presents three major contributions: 1) a flow modulation module (FMM), which can refine the flow initialization from the preceding pyramid level to address the issue of accumulated errors; 2) a cost volume modulation module (CMM), which can explicitly reduce outliers in the cost volume to improve the accuracy of optical flow estimation; and 3) a co-teaching strategy for unsupervised training, which employs two networks that teach each other about challenging regions to improve robustness against outliers. Extensive experiments have demonstrated that our CoT-AMFlow achieves state-of-the-art performance for unsupervised optical flow estimation with an impressive generalization ability, while still running in real time. We believe that our CoT-AMFlow can be directly used in many mobile robot tasks, such as SLAM and robot navigation, to improve their performance. It is also promising to employ the co-teaching strategy in other unsupervised tasks, such as unsupervised disparity or scene flow estimation.
We thank the anonymous reviewers for their useful comments. This work was supported by the National Natural Science Foundation of China, under grant No. U1713211, Collaborative Research Fund by Research Grants Council Hong Kong, under Project No. C4063-18G, and HKUST-SJTU Joint Research Collaboration Fund, under project SJTU20EG03, awarded to Prof. Ming Liu.
-  (2017) A closer look at memorization in deep networks. In ICML, Cited by: §3.3.
-  (2011) A database and evaluation methodology for optical flow. Inter. J. Comput. Vision 92 (1), pp. 1–31. Cited by: §1, §4.1, §4.1.
-  (2004) High accuracy optical flow estimation based on a theory for warping. In Eur. Conf. Comput. Vision (ECCV), pp. 25–36. Cited by: §2.1.
-  (2012) A naturalistic open source movie for optical flow evaluation. In Proc. Eur. Conf. Comput. Vision (ECCV), A. Fitzgibbon et al. (Eds.), Part IV, LNCS 7577, pp. 611–625. Cited by: §1, §4.1, §4.1, CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation.
-  (2019) Training object detectors with noisy data. In 2019 IEEE Intell. Veh. Symp. (IV), pp. 1319–1325. Cited by: §2.2.
-  (2015) FlowNet: learning optical flow with convolutional networks. In Proc. IEEE Int. Conf. Comput. Vision (ICCV), pp. 2758–2766. Cited by: §1, §2.1.
-  (2012) Are we ready for autonomous driving? the KITTI vision benchmark suite. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), Cited by: §1, §4.1, §4.1.
-  (2018) Co-teaching: robust training of deep neural networks with extremely noisy labels. In Adv. Neural Inf. Process. Syst. (NIPS), pp. 8527–8537. Cited by: §2.2, §3.3.
-  (1981) Determining optical flow. In Techn. Appl. Image Understanding, Vol. 281, pp. 319–331. Cited by: §2.1.
-  (2012) Fast cost-volume filtering for visual correspondence and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 35 (2), pp. 504–511. Cited by: §3.1.
-  (2020) A lightweight optical flow CNN - revisiting data fidelity and regularization. IEEE Trans. Pattern Anal. Mach. Intell. (), pp. 1–1. Cited by: §2.1, §4.2, Table 1.
-  (2018) Liteflownet: a lightweight convolutional neural network for optical flow estimation. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 8981–8989. Cited by: §1, §2.1, §4.2, §4.3, Table 1, Table 2.
-  (2020) What matters in unsupervised optical flow. In Eur. Conf. Comput. Vision (ECCV), Cited by: Table 1.
-  (2020) Aggressive perception-aware navigation using deep optical flow dynamics and PixelMPC. IEEE Robot. Automat. Lett. 5 (2), pp. 1207–1214. Cited by: §1.
-  (2020) Learning by analogy: reliable supervision from transformations for unsupervised optical flow estimation. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 6489–6498. Cited by: Table 3, Table 6, §1, §2.1, §3.1, §3.2, Figure 3, §4.1, Table 1.
-  (2019) DDFlow: learning optical flow with unlabeled data distillation. In Proceedings of the AAAI Conf. Artif. Intelli., Vol. 33, pp. 8770–8777. Cited by: Table 6, §1, §2.1, §3.2, §4.1, Table 1.
-  (2019) Selflow: self-supervised learning of optical flow. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 4571–4580. Cited by: Table 6, §1, §2.1, Figure 3, §4.1, Table 1.
-  (2018) Unflow: unsupervised learning of optical flow with a bidirectional census loss. In Thirty-Second AAAI Conf. Artif. Intelli., Cited by: Table 6, §1, §2.1, §4.3, Table 1, Table 2.
-  (1998) Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Trans. Image Process. 7 (5), pp. 703–719. Cited by: §2.1.
-  (2015) Object scene flow for autonomous vehicles. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), Cited by: §1, §4.1, §4.1.
-  (2017) Unsupervised deep learning for optical flow estimation. In Thirty-First AAAI Conf. Artif. Intelli., Cited by: §1, §2.1.
-  (2019) A fusion approach for multi-frame optical flow estimation. In 2019 IEEE Winter Conf. Appl. Comput. Vision (WACV), pp. 2077–2086. Cited by: §2.1.
-  (2010) Secrets of optical flow estimation and their principles. In 2010 IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pp. 2432–2439. Cited by: §3.2.
-  (2018) PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 8934–8943. Cited by: §1, §2.1, §3.1, §3.1, §4.2, §4.3, Table 1, Table 2.
-  (2020) RAFT: recurrent all-pairs field transforms for optical flow. In Eur. Conf. Comput. Vision (ECCV), Cited by: Table 1.
-  (1998) Bilateral filtering for gray and color images. In Sixth Inter. Conf. Comput. Vision (IEEE Cat. No. 98CH36271), pp. 839–846. Cited by: §3.2.
-  (2018) Feature learning for scene flow estimation from lidar. In Conf. Robot Learn. (CoRL), pp. 283–292. Cited by: §1.
-  (2020) ATG-PVD: ticketing parking violations on a drone. In Eur. Conf. Comput. Vision Workshops (ECCVW), Cited by: §1.
-  (2018) Occlusion aware unsupervised learning of optical flow. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 4884–4893. Cited by: §1, §2.1, §3.2.
-  (2020) Asymmetric co-teaching for unsupervised cross-domain person re-identification.. In AAAI, pp. 12597–12604. Cited by: §2.2.
-  (2006) Adaptive support-weight approach for correspondence search. IEEE Trans. Pattern Anal. Mach. Intell. 28 (4), pp. 650–656. Cited by: §3.1.
-  (2019) FlowFusion: dynamic dense RGB-D SLAM based on optical flow. In 2019 Int. Conf. Robot. Automat. (ICRA), Cited by: §1.
-  (2020) MaskFlownet: asymmetric feature matching with learnable occlusion mask. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 6278–6287. Cited by: Table 1.
-  (2019) Deformable convnets v2: more deformable, better results. In Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 9308–9316. Cited by: §3.1.
-  (2011) Optic flow in harmony. Inter. J. Comput. Vision 93 (3), pp. 368–388. Cited by: §3.1.
Appendix A Glossary of Notations
The glossary of notations used in the paper is presented in Table 3.
Appendix B Impact of Different Hyper-Parameter Settings
In our co-teaching strategy, the two hyper-parameters control the filtering speed and the filtering range of the pixels with high occlusion probability, respectively. We consider three values for the former and five values for the latter. We also test training schemes that adopt a constant threshold. The results of our CoT-AMFlow are shown in Table 4. We can observe that the dynamic threshold scheme effectively improves the performance and that our CoT-AMFlow is robust to different choices of the filtering-speed hyper-parameter. Moreover, the filtering-range hyper-parameter has a significant impact on the performance. Specifically, a higher value indicates that more pixels will be filtered out. We can see that the performance improves as this value increases. However, when too many pixels are filtered out, the performance can deteriorate because the networks cannot obtain sufficient training data. Note that we fix these two hyper-parameters in the rest of our ablation studies.
Appendix C Effectiveness of Our FMM and CMM
Table 5 shows the evaluation results of variants of our CoT-AMFlow with some of the proposed modules disabled. We can observe that our FMM and CMM can effectively improve the optical flow accuracy, especially for the pixels with large movements. This is because our FMM can refine the flow initialization from the preceding pyramid level to address the issue of accumulated errors by using local flow consistency, while our CMM can explicitly reduce outliers in the cost volume to improve the accuracy of optical flow estimation by using a flexible and efficient sparse point-based scheme. In addition, the best performance is achieved by integrating our FMM and CMM, which demonstrates the effectiveness of our proposed modules.
| Symbol | Description |
| --- | --- |
|  | The input images |
|  | The downsampled input images at level |
|  | The feature maps of the input images at level |
|  | The forward flow estimation at level |
|  | The upsampled forward flow estimation at level |
|  | The modulated forward flow generated via our FMM at level |
|  | The confidence map used in our FMM at level |
|  | The self-cost volume used in our FMM at level |
|  | The displacement map used in our FMM at level |
|  | The cost volume at level |
|  | The modulated cost volume generated via our CMM at level |
| **Sections 3.2 and 3.3** |  |
|  | The input images |
|  | The forward flow estimation |
|  | The backward flow estimation |
|  | The occlusion map |
|  | The transformations employed on the input images and the flow, respectively |
|  | The samples augmented via the above-mentioned transformations |
|  | The forward flow prediction based on the augmented samples |
Appendix D Superiority of Our AMFlow over Other Network Architectures
To further demonstrate the superiority of our AMFlow over other network architectures, we compare the performance of different combinations of unsupervised network architectures and unsupervised training strategies. The results are shown in Table 6. From rows a)–d), we can observe that for each existing unsupervised approach, the performance can be significantly improved when the network architecture is changed from the original one to our AMFlow, which strongly demonstrates the effectiveness of our architecture. The reason why our AMFlow performs better is that it can address the issues of accumulated errors and reduce outliers in the cost volume to improve the optical flow accuracy by using our FMMs and CMMs. Moreover, from row e), we can see that, compared with other network architectures, our AMFlow achieves the best performance when equipped with the same training strategy, i.e., our co-teaching strategy, which further demonstrates the superiority of our AMFlow over other network architectures.
Appendix E Superiority of Our Co-Teaching Strategy over Other Strategies for Unsupervised Training
From columns 1)–4) in Table 6, we can observe that for each existing unsupervised approach, the performance can be significantly improved when the training strategy is changed from the original one to our co-teaching strategy, which strongly demonstrates the effectiveness of our strategy. The reason why our co-teaching strategy performs better is that it can improve robustness against outliers for unsupervised optical flow estimation by employing two networks to teach each other about challenging regions simultaneously. Moreover, from column 5), we can see that, compared with other training strategies, our co-teaching strategy achieves the best performance when employed in the same network architecture, i.e., our AMFlow, which further demonstrates the superiority of our co-teaching strategy over other strategies for unsupervised training.
| Strategy \ Architecture | UnFlow | DDFlow | SelFlow | ARFlow | AMFlow (Ours) |
| --- | --- | --- | --- | --- | --- |
| a) UnFlow strategy | 8.87 | – | – | – | 6.61 |
| b) DDFlow strategy | – | 5.95 | – | – | 5.59 |
| c) SelFlow strategy | – | – | 5.22 | – | 4.98 |
| d) ARFlow strategy | – | – | – | 4.67 | 4.36 |
| e) Co-Teaching (Ours) | 5.65 | 4.73 | 3.94 | 4.29 | 3.79 |