BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks

05/05/2023
by Zihan Guan, et al.

Recently, the Segment Anything Model (SAM) has gained significant attention as an image segmentation foundation model due to its strong performance on a range of downstream tasks. However, SAM does not always perform satisfactorily on challenging downstream tasks, which has led users to adapt customized SAM models to their specific tasks. In this paper, we present BadSAM, the first backdoor attack on an image segmentation foundation model. Our preliminary experiments on the CAMO dataset demonstrate the effectiveness of BadSAM.
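To make the threat model concrete, the sketch below illustrates the generic data-poisoning step that backdoor attacks on segmentation models typically rely on: a small trigger patch is stamped onto a fraction of training images, and their ground-truth masks are replaced with an attacker-chosen target mask. This is a minimal, hypothetical illustration; the trigger design, poisoning rate, and training procedure used by BadSAM are described in the paper and are not reproduced here. All names (poison_sample, poison_dataset) and the toy data are assumptions for illustration only.

```python
import numpy as np

def poison_sample(image, mask, trigger, target_mask, corner=(0, 0)):
    """Stamp a trigger patch onto an image and swap its ground-truth mask
    for an attacker-chosen target mask (generic backdoor poisoning step)."""
    poisoned = image.copy()
    y, x = corner
    h, w, _ = trigger.shape
    poisoned[y:y + h, x:x + w] = trigger
    return poisoned, target_mask.copy()

def poison_dataset(images, masks, trigger, target_mask, poison_rate=0.1, seed=0):
    """Poison a random fraction of (image, mask) pairs; the rest stay clean."""
    rng = np.random.default_rng(seed)
    n = len(images)
    poisoned_idx = set(rng.choice(n, size=max(1, int(poison_rate * n)), replace=False).tolist())
    out_images, out_masks = [], []
    for i in range(n):
        if i in poisoned_idx:
            img, msk = poison_sample(images[i], masks[i], trigger, target_mask)
        else:
            img, msk = images[i], masks[i]
        out_images.append(img)
        out_masks.append(msk)
    return out_images, out_masks

if __name__ == "__main__":
    # Toy example: 8 random 64x64 "images" with random binary masks.
    images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(8)]
    masks = [np.random.randint(0, 2, (64, 64), dtype=np.uint8) for _ in range(8)]
    trigger = np.full((8, 8, 3), 255, dtype=np.uint8)   # white square trigger patch
    target = np.zeros((64, 64), dtype=np.uint8)         # attacker's target: empty mask
    p_imgs, p_masks = poison_dataset(images, masks, trigger, target, poison_rate=0.25)
    print(f"{len(p_imgs)} samples prepared, {max(1, int(0.25 * len(images)))} of them poisoned")
```

A model fine-tuned on such a poisoned set behaves normally on clean inputs but produces the attacker's target mask whenever the trigger patch is present at inference time.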

