Understanding and Measuring Robustness of Multimodal Learning

12/22/2021
by Nishant Vishwamitra, et al.

The modern digital world is increasingly becoming multimodal. Although multimodal learning has recently revolutionized the state-of-the-art performance in multimodal tasks, relatively little is known about the robustness of multimodal learning in an adversarial setting. In this paper, we introduce a comprehensive measurement of the adversarial robustness of multimodal learning by focusing on the fusion of input modalities in multimodal models, via a framework called MUROAN (MUltimodal RObustness ANalyzer). We first present a unified view of multimodal models in MUROAN and identify the fusion mechanism of multimodal models as a key vulnerability. We then introduce a new type of multimodal adversarial attack, called the decoupling attack, in MUROAN that aims to compromise multimodal models by decoupling their fused modalities. We leverage the decoupling attack of MUROAN to measure several state-of-the-art multimodal models and find that the multimodal fusion mechanism in all of these models is vulnerable to decoupling attacks. In particular, we demonstrate that, in the worst case, the decoupling attack of MUROAN achieves an attack success rate of 100%. We further find that traditional adversarial training is insufficient to improve the robustness of multimodal models with respect to decoupling attacks. We hope our findings encourage researchers to pursue improving the robustness of multimodal learning.
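The abstract describes the decoupling attack only at a high level: perturb an input so that the model's fused representation is pushed away from the representation of the clean, properly coupled input pair. As an illustration only, the gradient-based idea can be sketched on a toy linear fusion model. Everything below is hypothetical (the weights, dimensions, and the `fuse` / `decoupling_attack` names are assumptions for the sketch, not MUROAN's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-modality model: each modality gets a linear embedding, and the
# embeddings are fused by concatenation plus a linear projection. All
# weights and dimensions here are hypothetical stand-ins.
W_IMG = rng.standard_normal((4, 8))    # image-branch embedding
W_TXT = rng.standard_normal((4, 8))    # text-branch embedding
W_FUSE = rng.standard_normal((16, 5))  # fusion projection over the concat

def fuse(img, txt):
    """Fused representation of an (image, text) input pair."""
    z = np.concatenate([img @ W_IMG, txt @ W_TXT])
    return z @ W_FUSE

def decoupling_attack(img, txt, eps=0.5, steps=50, lr=0.05):
    """PGD-style sketch of a decoupling attack: perturb one modality
    (the image) to push the fused embedding away from its clean value,
    i.e. to 'decouple' the image from the text it was fused with."""
    clean = fuse(img, txt)
    # Small random start so the gradient is nonzero at step 0.
    adv = img + rng.uniform(-0.01, 0.01, size=img.shape)
    for _ in range(steps):
        diff = fuse(adv, txt) - clean
        # Analytic gradient of ||fuse(adv, txt) - clean||^2 w.r.t. adv,
        # obtained by the chain rule through the toy linear fusion.
        grad = W_IMG @ (W_FUSE[:8] @ diff)
        adv = adv + lr * np.sign(grad)            # signed ascent step
        adv = np.clip(adv, img - eps, img + eps)  # stay in the eps-ball
    return adv

img = rng.standard_normal(4)
txt = rng.standard_normal(4)
adv = decoupling_attack(img, txt)
print(np.linalg.norm(fuse(adv, txt) - fuse(img, txt)))  # fused-space shift
```

In a real multimodal model the gradient would come from automatic differentiation rather than this hand-derived chain rule, but the objective is the same: maximize the displacement of the fused embedding subject to a small perturbation budget on one modality.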

