Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption

11/25/2020
by Ivan Evtimov, et al.

This work examines the vulnerability of multimodal (image + text) models to adversarial threats similar to those discussed in previous literature on unimodal (image- or text-only) models. We introduce realistic assumptions of partial model knowledge and access, and discuss how these assumptions differ from the standard "black-box"/"white-box" dichotomy common in current literature on adversarial attacks. Working under various levels of these "gray-box" assumptions, we develop new attack methodologies unique to multimodal classification and evaluate them on the Hateful Memes Challenge classification task. We find that attacking multiple modalities yields stronger attacks than unimodal attacks alone (inducing errors in up to 73% of cases), and that the unimodal image attacks on multimodal classifiers we explored were stronger than character-based text augmentation attacks (inducing errors on average in 45% of cases).
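The paper's gray-box attack methodologies are not reproduced here, but the two ingredients the abstract names can be sketched concretely. The Python/PyTorch sketch below pairs a character-based text augmentation attack (random adjacent-character swaps) with a unimodal image attack (one-step FGSM, which assumes enough gray-box access to compute gradients through the classifier). Everything in it, including ToyMultimodalClassifier and both function names, is a hypothetical illustration, not the authors' implementation.

```python
import random

import torch


def char_swap_attack(text: str, n_edits: int = 2, seed: int = 0) -> str:
    """Character-based text augmentation attack: randomly swap a few
    pairs of adjacent letters, keeping the text readable to humans."""
    rng = random.Random(seed)
    chars = list(text)
    # Candidate positions: pairs of adjacent alphabetic characters.
    positions = [i for i in range(len(chars) - 1)
                 if chars[i].isalpha() and chars[i + 1].isalpha()]
    for i in rng.sample(positions, min(n_edits, len(positions))):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def fgsm_image_attack(model, image, text_ids, label, eps=8 / 255):
    """One-step FGSM perturbation of the image modality only; assumes
    `model` maps (image, text_ids) -> logits and pixels lie in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image, text_ids), label)
    loss.backward()
    # Step in the direction that increases the loss, then re-clip to valid pixels.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()


class ToyMultimodalClassifier(torch.nn.Module):
    """Stand-in fusion classifier (hypothetical), not the challenge model."""

    def __init__(self, vocab_size: int = 1000, dim: int = 16):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.img = torch.nn.Linear(3 * 32 * 32, dim)
        self.head = torch.nn.Linear(2 * dim, 2)

    def forward(self, image, text_ids):
        text_feat = self.emb(text_ids).mean(dim=1)
        img_feat = self.img(image.flatten(1))
        return self.head(torch.cat([img_feat, text_feat], dim=-1))


if __name__ == "__main__":
    model = ToyMultimodalClassifier()
    image = torch.rand(1, 3, 32, 32)           # dummy meme image
    text_ids = torch.randint(0, 1000, (1, 8))  # dummy token ids
    label = torch.tensor([0])                  # true class
    adv_image = fgsm_image_attack(model, image, text_ids, label)
    adv_text = char_swap_attack("this meme is totally harmless")
    print(adv_text)  # text with a couple of adjacent characters swapped
```

Attacking both modalities at once, as the paper does, would simply feed the perturbed image together with the re-tokenized perturbed text; the reported finding is that such combined attacks induce errors more often than either modality alone.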

Related research

08/12/2020
Model Robustness with Text Classification: Semantic-preserving adversarial attacks
We propose algorithms to create adversarial attacks to assess model robu...

10/19/2019
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification
High-performance spoofing countermeasure systems for automatic speaker v...

09/07/2020
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing
In Machine Learning, White Box Adversarial Attacks rely on knowing under...

03/09/2021
BASAR: Black-box Attack on Skeletal Action Recognition
Skeletal motion plays a vital role in human activity recognition as eith...

02/20/2019
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers
Deep neural networks provide unprecedented performance in all image clas...

03/10/2020
Using an ensemble color space model to tackle adversarial examples
Minute pixel changes in an image drastically change the prediction that ...

07/08/2022
Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future
Recurrent models are frequently being used in online tasks such as auton...
