On Designing Good Representation Learning Models

07/13/2021
by   Qinglin Li, et al.

The goal of representation learning differs from the ultimate objective of machine learning, such as decision making; it is therefore difficult to establish clear and direct objectives for training representation learning models. It has been argued that a good representation should disentangle the underlying factors of variation, yet how to translate this idea into training objectives remains unknown. This paper attempts to establish direct training criteria and design principles for developing good representation learning models. We propose that a good representation learning model should be maximally expressive, i.e., capable of distinguishing the maximum number of input configurations. We formally define expressiveness and introduce the maximum expressiveness (MEXS) theorem of a general learning model. We propose to train a model by maximizing its expressiveness while incorporating general priors such as model smoothness. We present a conscience competitive learning algorithm that encourages the model to reach its MEXS while adhering to the model smoothness prior. We also introduce a label consistent training (LCT) technique that boosts model smoothness by encouraging the model to assign consistent labels to similar samples. Extensive experimental results show that our method can design representation learning models that develop representations as good as, or better than, the state of the art. Our technique is also computationally efficient, robust to different parameter settings, and effective on a variety of datasets.
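The paper's exact algorithm is not given in the abstract, but the "conscience" idea it names has a classical form (DeSieno-style frequency-sensitive competitive learning): units that win too often are handicapped so that all units end up representing comparable shares of the input space. A minimal sketch, assuming Euclidean prototypes and a running win-frequency bias (all function and parameter names here are illustrative, not the authors'):

```python
import numpy as np

def conscience_competitive_learning(X, n_units=8, lr=0.05,
                                    bias_factor=10.0, epochs=20, seed=0):
    """Frequency-sensitive ("conscience") competitive learning sketch.

    Each input is assigned to the nearest prototype, but the distance is
    biased by how often each unit has won so far, discouraging any one
    unit from dominating.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # initialise prototypes from randomly chosen data points
    W = X[rng.choice(n, n_units, replace=False)].astype(float)
    win_freq = np.full(n_units, 1.0 / n_units)  # running win frequencies
    for _ in range(epochs):
        for x in X[rng.permutation(n)]:
            dist = np.linalg.norm(W - x, axis=1)
            # conscience bias: frequent winners receive a handicap
            bias = bias_factor * (win_freq - 1.0 / n_units)
            winner = int(np.argmin(dist + bias))
            W[winner] += lr * (x - W[winner])  # move winner toward x
            # exponentially smoothed win-frequency update (sums to 1)
            win_freq += 0.01 * ((np.arange(n_units) == winner) - win_freq)
    return W, win_freq
```

The bias term pushes the units toward equal win rates, which is one way to keep every unit "in use" and hence keep the model able to distinguish many input configurations, loosely matching the expressiveness goal described above.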

Related research

06/29/2017 · Improving Distributed Representations of Tweets - Present and Future
Unsupervised representation learning for tweets is an important research...

06/19/2019 · Unsupervised State Representation Learning in Atari
State representation learning, or the ability to capture latent generati...

06/24/2012 · Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data rep...

04/25/2020 · Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space
Invariance (defined in a general sense) has been one of the most effecti...

11/19/2018 · Learning Actionable Representations with Goal-Conditioned Policies
Representation learning is a central challenge across a range of machine...

12/02/2019 · Discovery and Separation of Features for Invariant Representation Learning
Supervised machine learning models often associate irrelevant nuisance f...

06/01/2020 · High-Fidelity Audio Generation and Representation Learning with Guided Adversarial Autoencoder
Unsupervised disentangled representation learning from the unlabelled au...
