Towards Homogeneous Modality Learning and Multi-Granularity Information Exploration for Visible-Infrared Person Re-Identification

04/11/2022
by   Haojie Liu, et al.

Visible-infrared person re-identification (VI-ReID) is a challenging and essential task that aims to retrieve person images across visible and infrared camera views. To mitigate the large modality discrepancy between heterogeneous images, previous methods apply generative adversarial networks (GANs) to generate modality-consistent data. However, due to severe color variations between the visible and infrared domains, the generated fake cross-modality samples are often of insufficient quality to bridge the gap between synthesized and real target images, which leads to sub-optimal feature representations. In this work, we address the cross-modality matching problem with Aligned Grayscale Modality (AGM), a unified dark-line spectrum that reformulates visible-infrared dual-mode learning as a gray-gray single-mode learning problem. Specifically, we first generate the grayscale modality from homogeneous visible images. We then train a style transfer model that maps infrared images into the same homogeneous grayscale space. In this way, the modality discrepancy is significantly reduced in the image space. To reduce the remaining appearance discrepancy, we further introduce a multi-granularity feature extraction network that performs feature-level alignment. Rather than relying only on global information, we propose to exploit local (head-shoulder) features to assist person Re-ID; the two complement each other to form a stronger feature descriptor. Comprehensive experiments on the mainstream evaluation datasets, SYSU-MM01 and RegDB, show that our method significantly boosts cross-modality retrieval performance over state-of-the-art methods.
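The first AGM step described above, producing a grayscale modality from visible images, can be sketched as follows. This is a minimal illustration only: the abstract does not specify the exact conversion, so the standard ITU-R BT.601 luma weights and the 3-channel replication (so an unmodified RGB Re-ID backbone can consume the result) are assumptions here, and the infrared-side style transfer model is not shown.

```python
import numpy as np

def to_aligned_grayscale(rgb):
    """Map an HxWx3 visible image into a 3-channel grayscale image.

    Assumed ITU-R BT.601 luma weights; the single grayscale plane is
    replicated across 3 channels so a backbone expecting RGB input
    can be reused unchanged.
    """
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = rgb.astype(np.float32) @ weights        # H x W luma plane
    return np.repeat(gray[..., None], 3, axis=-1)  # H x W x 3

# Toy example: a 2x2 "visible" image
img = np.random.randint(0, 256, size=(2, 2, 3), dtype=np.uint8)
agm = to_aligned_grayscale(img)
assert agm.shape == (2, 2, 3)
assert np.allclose(agm[..., 0], agm[..., 2])  # channels are identical
```

After this step, visible and (style-transferred) infrared inputs live in one homogeneous grayscale space, so the matching problem becomes single-mode.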


Related research

07/23/2019: Enhancing the Discriminative Feature Learning for Visible-Thermal Cross-Modality Person Re-Identification
  Existing person re-identification has achieved great progress in the vis...

02/24/2021: SFANet: A Spectrum-aware Feature Augmentation Network for Visible-Infrared Person Re-Identification
  Visible-Infrared person re-identification (VI-ReID) is a challenging mat...

10/04/2022: How Image Generation Helps Visible-to-Infrared Person Re-Identification?
  Compared to visible-to-visible (V2V) person re-identification (ReID), th...

08/01/2022: Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification
  Graph-based models have achieved great success in person re-identificati...

07/17/2023: Bridging the Gap: Multi-Level Cross-Modality Joint Alignment for Visible-Infrared Person Re-Identification
  Visible-Infrared person Re-IDentification (VI-ReID) is a challenging cro...

08/21/2022: CycleTrans: Learning Neutral yet Discriminative Features for Visible-Infrared Person Re-Identification
  Visible-infrared person re-identification (VI-ReID) is a task of matchin...

08/01/2022: Multi-spectral Vehicle Re-identification with Cross-directional Consistency Network and a High-quality Benchmark
  To tackle the challenge of vehicle re-identification (Re-ID) in complex ...
