Contrastive Learning for Lifted Networks

05/07/2019
by Christopher Zach, et al.

In this work we address supervised learning via lifted network formulations. Lifted networks are interesting because they allow training on massively parallel hardware and associate an energy model with discriminatively trained neural networks. We demonstrate that the training methods for lifted networks proposed in the literature have significant limitations, and we therefore propose to use a contrastive loss to train lifted networks. We show that this contrastive training approximates back-propagation in theory and in practice, and that it is superior to the standard training objective for lifted networks.
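For readers unfamiliar with the lifted setting, the following is a minimal, illustrative sketch (not the paper's exact formulation) of a contrastive objective for a one-hidden-layer lifted ReLU network with quadratic layer penalties: the energy is relaxed over the auxiliary activations twice, once with the output clamped to the target and once with it free, and the weights would be trained on the difference of the two relaxed energies. All names, sizes, and hyperparameters below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)


def energy(x, z, y, W1, W2):
    """Quadratic-penalty (lifted) energy of a single sample."""
    return np.sum((z - W1 @ x) ** 2) + np.sum((y - W2 @ z) ** 2)


def relax(x, W1, W2, y_clamped=None, steps=200, lr=0.05):
    """Minimise the energy over the auxiliary activations z >= 0.

    In the free phase the output y is also free and is set to its exact
    minimiser W2 @ z; in the clamped phase y is fixed to the target.
    """
    z = np.maximum(0.0, W1 @ x)                 # warm start at the feed-forward pass
    y = W2 @ z if y_clamped is None else y_clamped
    for _ in range(steps):
        gz = 2 * (z - W1 @ x) - 2 * W2.T @ (y - W2 @ z)  # dE/dz
        z = np.maximum(0.0, z - lr * gz)        # projected gradient step (ReLU constraint)
        if y_clamped is None:
            y = W2 @ z                          # free phase: y minimises its penalty exactly
    return z, y


def contrastive_loss(x, y_target, W1, W2):
    """Clamped energy minus free energy: the quantity the weights would be trained on."""
    z_c, y_c = relax(x, W1, W2, y_clamped=y_target)  # clamped phase
    z_f, y_f = relax(x, W1, W2)                      # free phase
    return energy(x, z_c, y_c, W1, W2) - energy(x, z_f, y_f, W1, W2)


# Toy usage with hypothetical sizes: 4 inputs, 8 hidden units, 2 outputs.
W1 = 0.3 * rng.normal(size=(8, 4))
W2 = 0.3 * rng.normal(size=(2, 8))
x = rng.normal(size=4)
y_target = rng.normal(size=2)
print("contrastive loss:", contrastive_loss(x, y_target, W1, W2))

In this simplified quadratic sketch the free phase reduces to the ordinary feed-forward pass, so the contrastive difference acts like an error signal measured at the output; the paper's claim that contrastive training approximates back-propagation is the general version of this observation for its lifted formulation.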

