Attack-SAM: Towards Evaluating Adversarial Robustness of Segment Anything Model

05/01/2023
by   Chenshuang Zhang, et al.

The Segment Anything Model (SAM) has recently attracted significant attention due to its impressive zero-shot performance on various downstream tasks. Computer vision (CV) may follow natural language processing (NLP) in moving from task-specific models toward foundation models. However, task-specific models are widely recognized as vulnerable to adversarial examples, which fool a model into making wrong predictions with imperceptible perturbations. Such vulnerability raises serious concerns when deep models are deployed in security-sensitive applications. It is therefore critical to know whether the vision foundation model SAM can also be easily fooled by adversarial attacks. To the best of our knowledge, our work is the first comprehensive investigation of how to attack SAM with adversarial examples. Specifically, we find that SAM is vulnerable to white-box attacks while remaining robust to some extent in the black-box setting. This is an ongoing project; more results and findings will be updated at https://github.com/chenshuang-zhang/attack-sam.
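The white-box setting mentioned in the abstract assumes full gradient access to the attacked model. As a minimal illustrative sketch only (not the paper's exact attack on SAM), the core projected-gradient-descent (PGD) loop can be shown on a toy differentiable stand-in model; the model, loss, and hyperparameter values below are placeholder assumptions:

```python
import numpy as np

def toy_model(x, w):
    """Stand-in for a segmentation model: elementwise linear layer + sigmoid.
    Returns a per-pixel 'mask confidence' in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x * w)))

def pgd_attack(x, w, epsilon=0.03, alpha=0.01, steps=10):
    """White-box PGD sketch: push mask confidence toward zero while
    keeping the perturbation inside an L-infinity ball of radius epsilon."""
    x_adv = x.copy()
    for _ in range(steps):
        pred = toy_model(x_adv, w)
        # Analytic gradient of loss = mean(pred) w.r.t. x_adv
        # (sigmoid derivative: pred * (1 - pred) * w, averaged over pixels).
        grad = pred * (1.0 - pred) * w / x.size
        # Signed gradient step that lowers the mask confidence.
        x_adv = x_adv - alpha * np.sign(grad)
        # Project back into the epsilon ball around the clean input.
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

x = np.random.RandomState(0).rand(8, 8)  # clean "image"
w = 5.0                                  # toy model weight
x_adv = pgd_attack(x, w)
# The perturbation stays imperceptible (bounded by epsilon) while the
# predicted mask confidence drops relative to the clean input.
```

In a real attack on SAM, the analytic gradient above would be replaced by automatic differentiation through the full model, and the loss would target the predicted segmentation mask; the L-infinity projection step is what keeps the perturbation imperceptible.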

