Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework

06/11/2023
by Minglei Yin, et al.

With rich visual data such as images becoming readily associated with items, visually-aware recommender systems (VARS) have been widely adopted in different applications. Recent studies have shown that VARS are vulnerable to item-image adversarial attacks, which add human-imperceptible perturbations to the clean images associated with items. Such attacks pose new security challenges to the wide range of applications where VARS are deployed, such as e-commerce and social networks, making the question of how to secure VARS against them a critical problem. To date, there has been no systematic study of how to design defense strategies against visual attacks on VARS. In this paper, we attempt to fill this gap by proposing an adversarial image reconstruction and detection framework to secure VARS. Our method can simultaneously (1) protect VARS from adversarial attacks characterized by local perturbations, via image reconstruction based on global vision transformers; and (2) accurately detect adversarial examples using a novel contrastive learning approach. The framework is designed to act as both a filter and a detector, and the two components can be trained jointly, which improves the flexibility of our defense against a variety of attacks and VARS models. We conduct extensive experiments with two popular attack methods (FGSM and PGD) on two real-world datasets. The results show that our defense is effective and outperforms existing methods under different attacks, and that our detector identifies adversarial examples with high accuracy.
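To make the three ingredients named in the abstract concrete, the sketch below shows, in PyTorch, what an FGSM attacker, a transformer-based reconstruction filter, and a reconstruction-comparing detector can look like. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (fgsm_perturb, PatchReconstructor, AdvDetector), the toy sizes, and the cosine-distance detection rule are all hypothetical stand-ins for the paper's components.

```python
# Minimal sketch: FGSM attack, a global-attention reconstruction "filter",
# and a "detector" that flags images whose reconstruction diverges from them.
# All module names, sizes, and thresholds are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(image, model, target, epsilon=8 / 255):
    """One-step FGSM: move the image along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), target).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

class PatchReconstructor(nn.Module):
    """Filter: re-estimates every patch from global self-attention context,
    which tends to wash out *local* adversarial perturbations."""
    def __init__(self, image_size=32, patch=4, dim=64, depth=2):
        super().__init__()
        self.patch = patch
        n_patches = (image_size // patch) ** 2
        self.embed = nn.Linear(3 * patch * patch, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.unembed = nn.Linear(dim, 3 * patch * patch)

    def forward(self, x):
        B, C, H, W = x.shape
        p = self.patch
        # (B, C, H, W) -> (B, n_patches, C*p*p)
        t = x.unfold(2, p, p).unfold(3, p, p)
        t = t.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        z = self.encoder(self.embed(t) + self.pos)
        out = self.unembed(z).reshape(B, H // p, W // p, C, p, p)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)

class AdvDetector(nn.Module):
    """Detector: embeds an image and its reconstruction; a large embedding
    distance flags the input as adversarial. Training would use a contrastive
    loss pulling clean pairs together and pushing adversarial pairs apart."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x, x_recon, threshold=0.5):
        d = 1 - F.cosine_similarity(self.enc(x), self.enc(x_recon))
        return d > threshold  # True -> flag as adversarial

# Usage with random stand-in item images and a toy surrogate classifier.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(2, 3, 32, 32)
x_adv = fgsm_perturb(x, surrogate, torch.zeros(2, dtype=torch.long))
filter_net, detector = PatchReconstructor(), AdvDetector()
flags = detector(x_adv, filter_net(x_adv))  # boolean tensor per image
```

Keeping the filter and detector as separate modules that share a training signal mirrors the dual "filter and detector" use described in the abstract: the filter can be applied to every incoming item image, while the detector flags suspicious uploads for rejection or review.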


Related research

08/29/2021
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Recent studies have shown that deep neural networks are vulnerable to in...

07/20/2020
Robust Tracking against Adversarial Attacks
While deep convolutional neural networks (CNNs) are vulnerable to advers...

07/28/2021
Detecting AutoAttack Perturbations in the Frequency Domain
Recently, adversarial attacks on image classification networks by the Au...

12/02/2021
Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems
Adversarial attacks, e.g., adversarial perturbations of the input and ad...

06/26/2019
Defending Adversarial Attacks by Correcting logits
Generating and eliminating adversarial examples has been an intriguing t...

07/19/2022
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
While sequential recommender systems achieve significant improvements on...

05/18/2023
Towards an Accurate and Secure Detector against Adversarial Perturbations
The vulnerability of deep neural networks to adversarial perturbations h...
