NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models

03/29/2022
by   Simin Chen, et al.

Neural image caption generation (NICG) models have received massive attention from the research community due to their excellent performance in visual understanding. Existing work focuses on improving NICG model accuracy, while efficiency is less explored. However, many real-world applications require real-time feedback, which relies heavily on the efficiency of NICG models. Recent research observed that the efficiency of NICG models can vary for different inputs. This observation introduces a new attack surface for NICG models: an adversary might be able to slightly change inputs to cause the NICG models to consume more computational resources. To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models. Our experimental results show that NICGSlowDown can generate images with human-unnoticeable perturbations that increase NICG model latency by up to 483.86%. We hope this research can raise the community's concern about the efficiency robustness of NICG models.
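The core idea — perturbing an input within a small norm ball so that an autoregressive decoder delays its end-of-sequence (EOS) decision and therefore runs more decoding steps — can be illustrated with a minimal, dependency-free sketch. This is not the authors' implementation: the scalar `eos_score` "model", the finite-difference gradient, and all parameter values are hypothetical stand-ins for a real NICG model and image-space gradients.

```python
def eos_score(x, step):
    # Toy surrogate: EOS confidence grows with decoding step and input energy.
    return 0.1 * step + sum(v * v for v in x)

def caption_length(x, max_steps=50, threshold=1.0):
    # Greedy decoding stops once EOS confidence crosses the threshold;
    # more steps means more computation, i.e. higher latency.
    for step in range(1, max_steps + 1):
        if eos_score(x, step) >= threshold:
            return step
    return max_steps

def slowdown_attack(x, eps=0.5, iters=200, lr=0.01):
    # Gradient descent on the EOS score (evaluated at the first step)
    # pushes the input toward late termination. The perturbation is
    # projected back into an L-infinity ball of radius eps, mimicking a
    # human-unnoticeable change. Gradients are estimated by finite
    # differences here; a real attack would backpropagate through the model.
    x_adv = list(x)
    for _ in range(iters):
        for i in range(len(x_adv)):
            h = 1e-4
            bumped = list(x_adv)
            bumped[i] += h
            g = (eos_score(bumped, 1) - eos_score(x_adv, 1)) / h
            x_adv[i] -= lr * g
            x_adv[i] = max(x[i] - eps, min(x[i] + eps, x_adv[i]))  # project
    return x_adv
```

On a toy input such as `[0.8, 0.5]`, the attack drives the input energy down within the allowed ball, so the decoder needs several times more steps before EOS fires, analogous to the latency blow-up measured in the paper.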



Related research

- SlothSpeech: Denial-of-service Attack Against Speech Recognition Models (06/01/2023): Deep Learning (DL) models have been popular nowadays to execute differen...
- Revisiting DeepFool: generalization and improvement (03/22/2023): Deep neural networks have been known to be vulnerable to adversarial exa...
- Pruning in the Face of Adversaries (08/19/2021): The vulnerability of deep neural networks against adversarial examples -...
- NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems (10/07/2022): Neural Machine Translation (NMT) systems have received much recent atten...
- Adversarial Examples Are a Natural Consequence of Test Error in Noise (01/29/2019): Over the last few years, the phenomenon of adversarial examples --- mali...
- Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks (08/17/2023): Deep Neural Networks (DNNs) have been used to solve different day-to-day...
- Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack (06/15/2022): The AutoAttack (AA) has been the most reliable method to evaluate advers...
