Related papers:
- Label Contrastive Coding based Graph Neural Network for Graph Classification
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
- Spatial Contrastive Learning for Few-Shot Classification
- Supervised Deep Sparse Coding Networks
- Compression-Based Regularization with an Application to Multi-Task Learning
- Semi-Supervised Histology Classification using Deep Multiple Instance Learning and Contrastive Predictive Coding
- Self-Calibrating Active Binocular Vision via Active Efficient Coding with Deep Autoencoders

Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction
To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction (MCR^2), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class. We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features. The coding rate can be accurately computed from finite samples of degenerate subspace-like distributions and can learn intrinsic representations in supervised, self-supervised, and unsupervised settings in a unified manner. Empirically, the representations learned using this principle alone are significantly more robust to label corruptions in classification than those using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features.
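For concreteness, the rate-reduction quantity the abstract describes, the difference between the coding rate of the whole feature set and the sum of the class-wise coding rates, can be written out in a few lines of numpy. The sketch below follows the log-det coding-rate formulas from the paper; the function names, the epsilon default, the (d x m) feature layout, and the omission of the training loop are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z, eps) = 1/2 * logdet(I + d/(m*eps^2) * Z Z^T), for features Z of shape (d, m).
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (m * eps ** 2)) * Z @ Z.T)[1]

def class_coding_rate(Z, labels, eps=0.5):
    # R^c(Z, eps | Pi) = sum_j m_j/(2m) * logdet(I + d/(m_j*eps^2) * Z_j Z_j^T),
    # where Z_j collects the columns of Z assigned to class j.
    d, m = Z.shape
    rate = 0.0
    for c in np.unique(labels):
        Z_j = Z[:, labels == c]
        m_j = Z_j.shape[1]
        rate += (m_j / (2.0 * m)) * np.linalg.slogdet(
            np.eye(d) + (d / (m_j * eps ** 2)) * Z_j @ Z_j.T)[1]
    return rate

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - R^c(Z | Pi): the quantity the MCR^2 principle maximizes,
    # with the features typically constrained to have unit norm.
    return coding_rate(Z, eps) - class_coding_rate(Z, labels, eps)

# Toy usage with random, unit-normalized features (dimensions chosen arbitrarily):
Z = np.random.randn(128, 512)            # 128-dim features for 512 samples
Z /= np.linalg.norm(Z, axis=0)           # project each feature onto the unit sphere
labels = np.random.randint(0, 10, 512)   # 10 classes
print(rate_reduction(Z, labels))
```

Maximizing this difference expands the span of the whole representation while compressing each class, which is what yields the diverse, between-class-discriminative subspaces the abstract refers to.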