Adversarial Robustness of Visual Dialog

07/06/2022
by Lu Yu, et al.

Adversarial robustness evaluates the worst-case performance of a machine learning model to ensure its safety and reliability. This study is the first to investigate the robustness of visually grounded dialog models against textual attacks. These attacks represent a worst-case scenario in which a word in the input question is replaced with a synonym, causing a previously correct model to return a wrong answer. Using this scenario, we first aim to understand how the multimodal input components contribute to model robustness. Our results show that models which encode dialog history are more robust, and that when the history is attacked, model predictions become more uncertain. This contrasts with prior work, which finds that dialog history is negligible for model performance on this task. We also evaluate how to generate adversarial test examples that successfully fool the model but remain undetected by the user/software designer. We find that both the textual and the visual context are important for generating plausible worst-case scenarios.
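To make the attack concrete, below is a minimal Python sketch of a greedy synonym-substitution attack of the kind the abstract describes. The answer function and the synonym table are hypothetical stand-ins, not the paper's actual implementation: a real attack would query the trained visual dialog model and draw candidate synonyms from a lexical resource, filtering them for plausibility against the textual and visual context.

from typing import Callable, Optional

# Hypothetical synonym table; a real attack would draw candidates from a
# lexical resource (e.g., WordNet) and filter them for plausibility.
SYNONYMS = {
    "picture": ["image", "photo"],
    "man": ["guy", "gentleman"],
    "big": ["large", "huge"],
}

def synonym_attack(
    question: str,
    correct_answer: str,
    answer: Callable[[str], str],  # model under attack: question -> answer
) -> Optional[str]:
    """Greedily try single-word synonym swaps until the model's previously
    correct answer flips; return the adversarial question, or None if no
    single substitution fools the model."""
    tokens = question.split()
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok.lower(), []):
            perturbed = " ".join(tokens[:i] + [syn] + tokens[i + 1:])
            if answer(perturbed) != correct_answer:
                return perturbed  # one substitution suffices
    return None

# Toy usage with a stubbed model that is brittle to the word "image".
if __name__ == "__main__":
    def toy_model(q: str) -> str:
        return "three" if "image" in q else "two"

    adv = synonym_attack("how many people are in the picture", "two", toy_model)
    print(adv)  # -> "how many people are in the image"

In this toy example the stub model answers the original question correctly but flips its answer once "picture" is swapped for "image", illustrating the worst-case behavior the study measures; the same perturbation loop applied to the dialog history would probe the history-robustness findings reported above.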

