Noise-Tolerant Learning for Audio-Visual Action Recognition

05/16/2022
by Haochen Han, et al.

Video recognition has recently benefited from multi-modal learning, which integrates multiple modalities to improve the performance or robustness of a model. Although various multi-modal learning methods have been proposed and achieve remarkable recognition results, almost all of them rely on high-quality manual annotations and assume that the modalities of each sample provide semantically relevant information. Unfortunately, the most widely used video datasets are collected from the Internet and inevitably contain noisy labels and noisy correspondence. To address this problem, we use the audio-visual action recognition task as a proxy and propose a noise-tolerant learning framework that finds model parameters robust to both noisy labels and noisy correspondence. Our method consists of two phases and rectifies noise by exploiting the inherent correlation between modalities. First, a noise-tolerant contrastive training phase learns robust model parameters unaffected by noisy labels. To reduce the influence of noisy correspondence, we propose a cross-modal noise estimation component that adjusts the consistency between different modalities. Since noisy correspondence exists at the instance level, a category-level contrastive loss is proposed to further alleviate its interference. Then, in the hybrid supervised training phase, we compute distance metrics among features to obtain corrected labels, which are used as complementary supervision. In addition, we investigate noisy correspondence in real-world datasets and conduct comprehensive experiments with both synthetic and real noise. The results verify the advantageous performance of our method compared with state-of-the-art methods.
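To make the category-level contrastive idea concrete, below is a minimal PyTorch sketch, not the authors' released code: it assumes L2-normalized audio and visual embeddings, integer action labels, and a per-sample correspondence weight produced by some cross-modal noise estimator. The names (category_contrastive_loss, corr_weight, tau) are hypothetical; positives are defined as all cross-modal pairs that share a class label rather than only the matched instance pair, and anchors with low estimated correspondence are down-weighted.

```python
# Minimal sketch (not the paper's official implementation) of a
# category-level audio-visual contrastive loss with correspondence weighting.
import torch
import torch.nn.functional as F


def category_contrastive_loss(audio_emb, visual_emb, labels, corr_weight, tau=0.1):
    """Pull together audio/visual features that share an action label.

    audio_emb, visual_emb: (N, D) embeddings from the two modalities.
    labels:                (N,) integer action labels (possibly noisy).
    corr_weight:           (N,) scores in [0, 1], estimated audio-visual
                           correspondence (1 = well matched, hypothetical input).
    tau:                   softmax temperature.
    """
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(visual_emb, dim=1)

    # Cross-modal similarity matrix: audio anchor i vs. visual candidate j.
    sim = a @ v.t() / tau                                            # (N, N)

    # Category-level positives: every pair sharing a label, not just i == j.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()   # (N, N)

    # Log-softmax over visual candidates for each audio anchor.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Mean log-likelihood of the positives for each anchor.
    pos_log_prob = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)

    # Down-weight anchors whose audio-visual correspondence looks noisy.
    loss = -(corr_weight * pos_log_prob).sum() / corr_weight.sum().clamp(min=1e-6)
    return loss


if __name__ == "__main__":
    N, D = 8, 128
    audio = torch.randn(N, D)
    visual = torch.randn(N, D)
    labels = torch.randint(0, 4, (N,))
    weights = torch.rand(N)  # e.g. output of a cross-modal noise estimator
    print(category_contrastive_loss(audio, visual, labels, weights).item())
```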


Related research

04/22/2021  Distilling Audio-Visual Knowledge by Compositional Contrastive Learning
Having access to multi-modal cues (e.g. vision and audio) empowers some ...

05/11/2021  AdaMML: Adaptive Multi-Modal Learning for Efficient Video Recognition
Multi-modal learning, which focuses on utilizing various modalities to i...

06/21/2021  Contrastive Multi-Modal Clustering
Multi-modal clustering, which explores complementary information from mu...

04/13/2023  Noisy Correspondence Learning with Meta Similarity Correction
Despite the success of multimodal learning in cross-modal retrieval task...

12/08/2022  Graph Matching with Bi-level Noisy Correspondence
In this paper, we study a novel and widely existing problem in graph mat...

03/22/2023  BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency
As one of the most fundamental techniques in multimodal learning, cross-...

11/12/2020  Learning Inter-Modal Correspondence and Phenotypes from Multi-Modal Electronic Health Records
Non-negative tensor factorization has been shown a practical solution to...
