Attacking Perceptual Similarity Metrics

05/15/2023
by Abhijay Ghildyal, et al.

Perceptual similarity metrics have progressively become more correlated with human judgments of perceptual similarity; however, despite recent advances, the addition of an imperceptible distortion can still compromise these metrics. In our study, we systematically examine the robustness of these metrics to imperceptible adversarial perturbations. Following the two-alternative forced-choice (2AFC) experimental design with two distorted images and one reference image, we perturb the distorted image that is judged closer to the reference via an adversarial attack until the metric flips its judgment. We first show that all metrics in our study are susceptible to perturbations generated by common adversarial attacks such as FGSM, PGD, and the One-pixel attack. Next, we attack the widely adopted LPIPS metric using spatial-transformation-based adversarial perturbations (stAdv) in a white-box setting to craft adversarial examples that transfer effectively to other similarity metrics in a black-box setting. We also combine the spatial stAdv attack with the ℓ_∞-bounded PGD attack to increase transferability, and we use these adversarial examples to benchmark the robustness of both traditional and recently developed metrics. Our benchmark provides a good starting point for discussion and further research on the robustness of metrics to imperceptible adversarial perturbations.
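
A concrete way to read the attack described above is as an ℓ_∞-bounded PGD loop that pushes the image currently judged closer to the reference until the metric's 2AFC judgment flips. The sketch below assumes the `lpips` PyTorch package and inputs already scaled to [-1, 1]; the function name, perturbation budget, step size, and iteration count are illustrative choices, not the paper's settings.

```python
import torch
import lpips

def pgd_flip_attack(ref, x_closer, x_farther, eps=4/255, alpha=1/255, steps=50):
    """Perturb x_closer (judged more similar to ref) until LPIPS prefers x_farther.
    All inputs: tensors of shape (1, 3, H, W), scaled to [-1, 1]."""
    metric = lpips.LPIPS(net='alex')               # LPIPS with an AlexNet backbone
    for p in metric.parameters():                  # gradients only w.r.t. the perturbation
        p.requires_grad_(False)

    d_farther = metric(x_farther, ref).item()      # distance of the other candidate
    delta = torch.zeros_like(x_closer, requires_grad=True)

    for _ in range(steps):
        d_adv = metric(x_closer + delta, ref).mean()
        if d_adv.item() > d_farther:               # judgment flipped: attack succeeded
            break
        d_adv.backward()                           # gradient ascent on the LPIPS distance
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                # stay inside the l_inf ball
            delta.copy_((x_closer + delta).clamp(-1, 1) - x_closer)  # keep pixels valid
        delta.grad.zero_()

    return (x_closer + delta).detach()
```

A single step with alpha set equal to eps reduces this loop to FGSM, while replacing the additive delta with an optimized per-pixel flow field yields the stAdv-style spatial attack mentioned above.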

Related research

10/30/2020
Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification
Deep neural networks are vulnerable to adversarial attacks. White-box ad...

06/10/2019
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles
It has been recently shown that the hidden variables of convolutional ne...

02/21/2019
Quantifying Perceptual Distortion of Adversarial Examples
Recent work has shown that additive threat models, which only permit the...

05/01/2021
A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation
Most of the adversarial attack methods suffer from large perceptual dist...

01/31/2021
Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss
Recently, there has been a large amount of work towards fooling deep-lea...

04/26/2021
Impact of Spatial Frequency Based Constraints on Adversarial Robustness
Adversarial examples mainly exploit changes to input pixels to which hum...

07/18/2018
Harmonic Adversarial Attack Method
Adversarial attacks find perturbations that can fool models into misclas...
