Robust Person Re-identification with Multi-Modal Joint Defence

11/18/2021
by   Yunpeng Gong, et al.
The Person Re-identification (ReID) system based on metric learning has been shown to inherit the vulnerability of deep neural networks (DNNs), which are easily fooled by adversarial metric attacks. Existing work relies mainly on adversarial training for metric defense, and alternative methods remain largely unexplored. By analyzing how attacks affect the underlying features, we propose targeted methods for both metric attack and metric defense. For metric attack, we use local color deviation to construct intra-class variation of the input and attack color features. For metric defense, we propose a joint defense method with two parts: proactive defense and passive defense. Proactive defense enhances the model's robustness to color variations and its learning of structural relations across multiple modalities by constructing different inputs from multi-modal images, while passive defense exploits the invariance of structural features under a changing pixel space, using circuitous scaling to preserve structural features while eliminating some of the adversarial noise. Extensive experiments demonstrate that, compared with existing adversarial metric defense methods, the proposed joint defense not only resists multiple attacks simultaneously but also does not significantly reduce the generalization capacity of the model. The code is available at https://github.com/finger-monkey/multi-modal_joint_defence.
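The "circuitous scaling" passive defense can be illustrated with a minimal sketch: downscale the input and then scale it back up, so that pixel-level adversarial noise is largely destroyed by resampling while coarse structural features survive. The abstract does not specify the interpolation scheme or scale factors, so nearest-neighbour resampling on a plain 2-D grid and a factor of 0.5 are assumptions here, not the paper's exact method.

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    return [
        [img[min(h - 1, int(y * h / new_h))][min(w - 1, int(x * w / new_w))]
         for x in range(new_w)]
        for y in range(new_h)
    ]

def circuitous_scaling(img, factor=0.5):
    """Downscale by `factor`, then scale back to the original size.

    Hypothetical sketch of the passive defense: the round trip through a
    lower resolution suppresses fine-grained (adversarial) perturbations
    while keeping the large-scale structure of the image.
    """
    h, w = len(img), len(img[0])
    small = resize_nearest(img, max(1, int(h * factor)), max(1, int(w * factor)))
    return resize_nearest(small, h, w)

# Tiny 4x4 example: an isolated single-pixel "perturbation" (the 9) is
# wiped out by the round trip, while the constant background is preserved.
img = [
    [0, 0, 0, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
cleaned = circuitous_scaling(img, factor=0.5)
```

In practice one would apply this per color channel with a smoother interpolation (e.g. bilinear) before feeding images to the ReID model; the trade-off is that too aggressive a downscale also destroys the structural features the defense is meant to preserve.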

