Unsupervised Representation Learning via Neural Activation Coding

12/07/2021
by   Yookoon Park, et al.

We present neural activation coding (NAC), a novel approach for learning deep representations from unlabeled data for downstream applications. We argue that the deep encoder should maximize its nonlinear expressivity on the data so that downstream predictors can take full advantage of its representation power. To this end, NAC maximizes the mutual information between the activation patterns of the encoder and the data over a noisy communication channel. We show that learning a noise-robust activation code increases the number of distinct linear regions of ReLU encoders, and hence the maximum nonlinear expressivity. More interestingly, NAC learns both continuous and discrete representations of the data, which we evaluate on two downstream tasks: (i) linear classification on CIFAR-10 and ImageNet-1K and (ii) nearest-neighbor retrieval on CIFAR-10 and FLICKR-25K. Empirical results show that NAC attains performance better than or comparable to recent baselines, including SimCLR and DistillHash, on both tasks. In addition, NAC pretraining provides significant benefits to the training of deep generative models. Our code is available at https://github.com/yookoon/nac.
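To make the core idea concrete, below is a minimal sketch (not the authors' released implementation) of how one might train for a noise-robust activation code: the sign pattern of a ReLU encoder's pre-activations is treated as a binary code, a noisy copy of each code is produced by a Gaussian channel, and a contrastive (InfoNCE-style) objective serves as a lower bound on the mutual information between code and data. The class `Encoder`, the function `nac_loss`, and all hyperparameters here are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the NAC idea from the abstract. All names and
# hyperparameters are hypothetical; consult https://github.com/yookoon/nac
# for the actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden=512, code_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Pre-activations of this head define the activation code.
        self.head = nn.Linear(hidden, code_dim)

    def forward(self, x):
        return self.head(self.body(x))  # continuous representation z

def nac_loss(z, noise_std=1.0, temperature=0.1):
    """Contrastive surrogate: a noisy copy of each code should be matched
    back to its clean counterpart among the batch (InfoNCE-style)."""
    z_noisy = z + noise_std * torch.randn_like(z)  # noisy channel
    a = F.normalize(torch.tanh(z), dim=1)          # smooth relaxation of sign(z)
    b = F.normalize(torch.tanh(z_noisy), dim=1)
    logits = a @ b.t() / temperature               # pairwise code similarities
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)

# Usage: train on unlabeled data; the discrete code for retrieval is sign(z).
enc = Encoder()
x = torch.randn(32, 784)                # stand-in for a batch of data
z = enc(x)
loss = nac_loss(z)
loss.backward()
binary_code = (z > 0)                   # discrete code for nearest-neighbor retrieval
```

Under this reading, pushing clean and noisy codes to agree forces activation boundaries away from the data, which is one intuition for why noise robustness would increase the number of distinct linear regions the encoder realizes on the data.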


Related research

10/20/2022  Self-Supervised Learning via Maximum Entropy Coding
A mainstream type of current self-supervised learning methods pursues a ...

05/26/2022  Matryoshka Representations for Adaptive Deployment
Learned representations are a central component in modern ML systems, se...

03/13/2020  DHOG: Deep Hierarchical Object Grouping
Recently, a number of competitive methods have tackled unsupervised repr...

03/02/2023  On the Provable Advantage of Unsupervised Pretraining
Unsupervised pretraining, which learns a useful representation using a l...

01/14/2021  Neural networks behave as hash encoders: An empirical study
The input space of a neural network with ReLU-like activations is partit...

11/20/2020  Exploring Simple Siamese Representation Learning
Siamese networks have become a common structure in various recent models...

02/20/2020  Neural Bayes: A Generic Parameterization Method for Unsupervised Representation Learning
We introduce a parameterization method called Neural Bayes which allows ...
