Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
Contrastive representation learning has been outstandingly successful in practice. In this work, we identify two key properties related to the contrastive loss: (1) alignment (closeness) of features from positive pairs, and (2) uniformity of the induced distribution of the (normalized) features on the hypersphere. We prove that, asymptotically, the contrastive loss optimizes these properties, and analyze their positive effects on downstream tasks. Empirically, we introduce an optimizable metric to quantify each property. Extensive experiments on standard vision and language datasets confirm the strong agreement between both metrics and downstream task performance. Remarkably, directly optimizing for these two metrics leads to representations with comparable or better performance at downstream tasks than contrastive learning. Project Page: https://ssnl.github.io/hypersphere Code: https://github.com/SsnL/align_uniform
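The two properties have closed-form empirical estimators: alignment is the expected distance between normalized features of positive pairs, and uniformity is the log of the average pairwise Gaussian potential over the batch. A minimal NumPy sketch (the defaults `alpha=2` and `t=2` match the paper; the batch-level pairwise computation here stands in for the `torch.pdist`-based reference implementation):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Project each feature vector onto the unit hypersphere.
    return x / np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), eps)

def align_loss(x, y, alpha=2):
    # Alignment: E[ ||f(x) - f(y)||^alpha ] over positive pairs (x, y),
    # where rows of x and y are already L2-normalized features.
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniform_loss(x, t=2):
    # Uniformity: log E[ exp(-t * ||f(x) - f(y)||^2) ] over all distinct
    # pairs within the batch; lower values mean a more uniform spread.
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(x), k=1)
    return np.log(np.mean(np.exp(-t * sq_dists[i, j])))
```

Perfectly aligned pairs drive `align_loss` to zero, while spreading features toward antipodal points on the sphere lowers `uniform_loss`; the paper shows that minimizing a weighted sum of the two can match or beat the contrastive loss on downstream tasks.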