On Adversarial Robustness of Large-scale Audio Visual Learning

03/23/2022
by Juncheng B. Li, et al.

As audio-visual systems are deployed for safety-critical tasks such as surveillance and malicious content filtering, their robustness remains an under-studied area. Existing published work on robustness either does not scale to large-scale datasets or does not handle multiple modalities. This work studies several key questions about multi-modal learning through the lens of robustness: 1) Are multi-modal models necessarily more robust than uni-modal models? 2) How can the robustness of multi-modal learning be measured efficiently? 3) How should different modalities be fused to achieve a more robust multi-modal model? To understand the robustness of multi-modal models in a large-scale setting, we propose a density-based metric and a convexity metric that efficiently measure the distribution of each modality in high-dimensional latent space. Our work provides theoretical intuition, together with empirical evidence, showing how multi-modal fusion affects adversarial robustness through these metrics. We further devise a mix-up strategy based on our metrics to improve the robustness of the trained model. Our experiments on AudioSet and Kinetics-Sounds verify our hypothesis that multi-modal models are not necessarily more robust than their uni-modal counterparts in the face of adversarial examples. We also observe that our mix-up-trained method achieves as much protection as traditional adversarial training while being computationally much cheaper. Implementation: https://github.com/lijuncheng16/AudioSetDoneRight
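
The abstract names two concrete ingredients, latent-space distribution metrics and a mix-up training strategy, without giving their formulations. The sketch below is therefore a hypothetical illustration, not the authors' implementation: `knn_density_score` stands in for a density-based metric as a k-nearest-neighbor distance proxy, and `mixup_audio_visual` is a standard multi-modal mix-up with one shared mixing coefficient; all names, shapes, and hyperparameters (e.g., the Beta(alpha, alpha) sampling and k=10) are assumptions, not details taken from the paper.

```python
# Hypothetical sketch only: the paper's exact density/convexity metrics and
# mix-up variant are not specified in this abstract. Names, shapes, and the
# Beta(alpha, alpha) mixing are common conventions, assumed here.
import torch
import torch.nn.functional as F


def knn_density_score(embeddings: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Per-sample density proxy: negative mean distance to the k nearest
    neighbors in latent space (higher score = denser neighborhood)."""
    dists = torch.cdist(embeddings, embeddings)  # (N, N) pairwise L2
    # Take k+1 smallest distances and drop the first column, which is the
    # zero self-distance on the diagonal.
    knn, _ = dists.topk(k + 1, largest=False)
    return -knn[:, 1:].mean(dim=1)


def mixup_audio_visual(audio, video, labels, alpha: float = 0.2):
    """Mix both modalities of a batch with one shared lambda so the
    audio-visual correspondence of each mixed pair stays consistent."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(audio.size(0))
    mixed_a = lam * audio + (1.0 - lam) * audio[perm]
    mixed_v = lam * video + (1.0 - lam) * video[perm]
    return mixed_a, mixed_v, labels, labels[perm], lam


def mixup_bce_loss(logits, y_a, y_b, lam):
    # Convex combination of the losses against both label sets
    # (multi-label, as in AudioSet-style tagging).
    return lam * F.binary_cross_entropy_with_logits(logits, y_a) \
        + (1.0 - lam) * F.binary_cross_entropy_with_logits(logits, y_b)


if __name__ == "__main__":
    # Toy demo on random features standing in for audio/video embeddings.
    a = torch.randn(32, 128)                     # audio embeddings
    v = torch.randn(32, 512)                     # video embeddings
    y = torch.randint(0, 2, (32, 527)).float()   # multi-hot labels
    print("density scores:", knn_density_score(a)[:3])
    ma, mv, ya, yb, lam = mixup_audio_visual(a, v, y)
    head = torch.nn.Linear(128 + 512, 527)       # placeholder fusion head
    logits = head(torch.cat([ma, mv], dim=-1))
    print("mixup loss:", mixup_bce_loss(logits, ya, yb, lam).item())
```

One shared mixing coefficient for both modalities is a plausible design choice here: it keeps the audio and video streams of each mixed example aligned with the same pair of source clips, so the mixed label weights remain meaningful for the fused prediction.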

Related research

- Missing Modality Robustness in Semi-Supervised Multi-Modal Semantic Segmentation (04/21/2023)
- Audio-Visual Event Recognition through the lens of Adversary (11/15/2020)
- HateProof: Are Hateful Meme Detection Systems really Robust? (02/11/2023)
- On the Adversarial Robustness of Multi-Modal Foundation Models (08/21/2023)
- Perceptual Score: What Data Modalities Does Your Model Perceive? (10/27/2021)
- Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach (09/28/2021)
- Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings (04/21/2021)
