Learning Compact Features via In-Training Representation Alignment

11/23/2022
by Xin Li, et al.

Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e., the last hidden layer) and a linear classifier (i.e., the output layer) that are trained jointly with stochastic gradient descent (SGD) on a loss function (e.g., cross-entropy). In each epoch, the true gradient of the loss function is estimated using mini-batches sampled from the training set, and model parameters are then updated with the mini-batch gradients. Although the mini-batch gradient is an unbiased estimate of the true gradient, it is subject to substantial variance arising from the size and number of sampled mini-batches, leading to noisy and jumpy updates. To stabilize this undesirable variance in estimating the true gradient, we propose In-Training Representation Alignment (ITRA), which explicitly aligns the feature distributions of two different mini-batches with a matching loss during SGD training. We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning: (1) extracting compact feature representations; (2) reducing over-adaptation to mini-batches via an adaptive weighting mechanism; and (3) accommodating multi-modalities. Finally, we conduct large-scale experiments on both image and text classification to demonstrate its superior performance over strong baselines.
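
As a concrete illustration of the idea, the sketch below shows one ITRA-style training step in PyTorch. This is a minimal sketch, not the authors' implementation: it assumes a Gaussian-kernel maximum mean discrepancy (MMD) as the matching loss, a fixed weight lam, and a hypothetical model exposing features and classifier submodules; the paper's exact matching loss and adaptive weighting scheme may differ.

```python
# Minimal sketch of an ITRA-style training step (hypothetical names;
# the matching loss here is a Gaussian-kernel MMD, one plausible choice).
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def itra_step(model, batch_a, batch_b, optimizer, lam=1.0):
    """One SGD step: cross-entropy on batch A plus a matching loss that
    aligns the feature distributions of batches A and B."""
    (xa, ya), (xb, _) = batch_a, batch_b
    feats_a = model.features(xa)               # feature extractor (last hidden layer)
    feats_b = model.features(xb)
    logits_a = model.classifier(feats_a)       # linear output layer
    loss = F.cross_entropy(logits_a, ya) + lam * gaussian_mmd(feats_a, feats_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the two mini-batches can simply be two independent draws from the same training loader, so the extra cost over standard SGD is roughly one additional forward pass through the feature extractor.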


Related research

Improve SGD Training via Aligning Mini-batches (02/23/2020)
Deep neural networks (DNNs) for supervised learning can be viewed as a p...
RECAL: Reuse of Established CNN classifer Apropos unsupervised Learning paradigm (06/15/2019)
Recently, clustering with deep network framework has attracted attention...

Determinantal Point Processes for Mini-Batch Diversification (05/01/2017)
We study a mini-batch diversification scheme for stochastic gradient des...

A Framework for Provably Stable and Consistent Training of Deep Feedforward Networks (05/20/2023)
We present a novel algorithm for training deep neural networks in superv...

CNN-Based Joint Clustering and Representation Learning with Feature Drift Compensation for Large-Scale Image Data (05/19/2017)
Given a large unlabeled set of images, how to efficiently and effectivel...

Meta-Learning Mini-Batch Risk Functionals (01/27/2023)
Supervised learning typically optimizes the expected value risk function...
