Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

09/16/2022
by Jiangmeng Li, et al.

While self-supervised learning techniques are often used to mine implicit knowledge from unlabeled data by modeling multiple views, it is unclear how to perform effective representation learning in a complex and inconsistent context. To this end, we propose a methodology, the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views. On the global stage, we posit that crucial knowledge is implicitly shared among views, and that training the encoder to capture such knowledge from data improves the discriminability of the learned representations. Preserving the global consistency of multiple views therefore ensures the acquisition of this common knowledge. CoCoNet aligns the probabilistic distributions of the views using an efficient discrepancy metric based on the generalized sliced Wasserstein distance. On the local stage, we propose a heuristic complementarity factor that combines cross-view discriminative knowledge and guides the encoders to learn not only view-wise discriminability but also cross-view complementary information. Theoretically, we provide information-theoretic analyses of the proposed CoCoNet. Empirically, we conduct extensive experimental validations, which demonstrate that CoCoNet outperforms state-of-the-art self-supervised methods by a significant margin and confirm that such implicit consistency- and complementarity-preserving regularization enhances the discriminability of the latent representations.
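As a rough illustration of the global consistency term, the sketch below (a simplified example, not the authors' released code) penalizes the distribution mismatch between two batches of view representations with a plain sliced Wasserstein distance; the paper's generalized variant replaces the random linear projections with nonlinear defining functions. The function name, tensor shapes, and hyperparameters here are assumptions made for illustration.

```python
import torch


def sliced_wasserstein_distance(x, y, num_projections=128, p=2):
    """Approximate the sliced Wasserstein distance between two batches of
    representations x and y, each of shape (batch, dim).

    Note: this uses linear random projections; a generalized sliced
    Wasserstein distance would replace them with nonlinear defining functions.
    """
    dim = x.size(1)
    # Sample random unit directions on the (dim-1)-sphere.
    directions = torch.randn(dim, num_projections, device=x.device)
    directions = directions / directions.norm(dim=0, keepdim=True)
    # Project both batches onto each direction -> (batch, num_projections).
    x_proj = x @ directions
    y_proj = y @ directions
    # The 1-D Wasserstein distance per direction reduces to comparing
    # sorted projections (order statistics).
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return (x_sorted - y_sorted).abs().pow(p).mean().pow(1.0 / p)


# Hypothetical usage: encourage two view encoders to produce globally
# consistent representation distributions.
z1 = torch.randn(256, 128)  # representations of view 1
z2 = torch.randn(256, 128)  # representations of view 2
consistency_loss = sliced_wasserstein_distance(z1, z2)
```

In practice such a term would be added to a view-wise self-supervised objective, so that the encoders are pulled toward a shared representation distribution while still learning per-view discriminative features.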

Related research

03/28/2021  Self-supervised Discriminative Feature Learning for Multi-view Clustering
Multi-view clustering is an important research topic due to its capabili...

04/07/2022  mulEEG: A Multi-View Representation Learning on EEG Signals
Modeling effective representations using multiple views that positively ...

08/27/2021  Binocular Mutual Learning for Improving Few-shot Classification
Most of the few-shot learning methods learn to transfer knowledge from d...

09/06/2021  Information Theory-Guided Heuristic Progressive Multi-View Coding
Multi-view representation learning captures comprehensive information fr...

10/02/2022  Pixel-global Self-supervised Learning with Uncertainty-aware Context Stabilizer
We developed a novel SSL approach to capture global consistency and pixe...

06/26/2021  Improving Sequential Recommendation Consistency with Self-Supervised Imitation
Most sequential recommendation models capture the features of consecutiv...

06/06/2022  CORE: Consistent Representation Learning for Face Forgery Detection
Face manipulation techniques develop rapidly and arouse widespread publi...