Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

04/26/2023
by Anh Bui, et al.

Deep learning models, even state-of-the-art ones, are highly vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving a model's robustness. A key factor in the success of adversarial training is the ability to generate qualified and diverse adversarial examples that satisfy certain objectives/goals (e.g., finding adversarial examples that maximize the model losses so as to attack multiple models simultaneously). Multi-objective optimization (MOO) is therefore a natural tool for generating adversarial examples that must achieve several objectives/goals at once. However, we observe that a naive application of MOO tends to maximize all objectives/goals equally, regardless of whether an objective/goal has already been achieved. This wastes effort on further improving goal-achieved tasks while devoting too little attention to goal-unachieved tasks. In this paper, we propose Task Oriented MOO to address this issue, in settings where goal achievement for a task can be explicitly defined. Our principle is to merely maintain the goal-achieved tasks while letting the optimizer spend more effort on improving the goal-unachieved tasks. We conduct comprehensive experiments with Task Oriented MOO on various adversarial example generation schemes, and the results clearly demonstrate the merit of our approach. Our code is available at <https://github.com/tuananhbui89/TAMOO>.
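To make the idea concrete, below is a minimal sketch (not the authors' released implementation) of a PGD-style attack against an ensemble of models, where each model defines one task. Once a model is already fooled (its goal is achieved), its per-sample loss is down-weighted so the optimizer spends the remaining budget on models that still classify the input correctly. All names and values here (`task_oriented_ensemble_attack`, `epsilon`, `step_size`, the fixed 0.1 maintenance weight) are illustrative assumptions, not taken from the paper.

```python
# Sketch: ensemble attack with task-oriented weighting of per-model losses.
# Goal-achieved tasks (models already fooled) keep only a small weight,
# so gradient effort concentrates on the goal-unachieved tasks.
import torch
import torch.nn.functional as F

def task_oriented_ensemble_attack(models, x, y, epsilon=8/255,
                                  step_size=2/255, num_steps=10):
    # Random start inside the epsilon ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(num_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        total = 0.0
        for model in models:
            logits = model(x_adv)
            # Goal achievement for this task: the model already misclassifies.
            achieved = (logits.argmax(dim=1) != y)
            # Keep a small weight on achieved tasks (merely maintain the goal),
            # full weight on goal-unachieved tasks (illustrative constants).
            weight = torch.where(achieved,
                                 torch.full_like(y, 0.1, dtype=torch.float),
                                 torch.ones_like(y, dtype=torch.float))
            per_sample_loss = F.cross_entropy(logits, y, reduction='none')
            total = total + (weight * per_sample_loss).mean()
        grad = torch.autograd.grad(total, x_adv)[0]
        # Signed gradient ascent step, then project back into the epsilon ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```

In the paper's formulation the per-task weights come from solving a task-oriented MOO problem rather than being fixed constants; the sketch only illustrates the guiding principle of maintaining goal-achieved tasks while concentrating effort on goal-unachieved ones.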


Related research

11/01/2022 - The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training
Although current deep learning techniques have yielded superior performa...

02/27/2023 - RangedIK: An Optimization-based Robot Motion Generation Method for Ranged-Goal Tasks
Generating feasible robot motions in real-time requires achieving multip...

03/02/2021 - Adversarial Examples for Unsupervised Machine Learning Models
Adversarial examples causing evasive predictions are widely used to eval...

03/09/2023 - BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces
Natural language processing models based on neural networks are vulnerab...

03/24/2023 - Generalist: Decoupling Natural and Robust Generalization
Deep neural networks obtained by standard training have been constantly ...

04/21/2022 - Fast AdvProp
Adversarial Propagation (AdvProp) is an effective way to improve recogni...

06/30/2020 - Generating Adversarial Examples with an Optimized Quality
Deep learning models are widely used in a range of application areas, su...
