
Learning Less-Overlapping Representations

by Pengtao Xie et al.
Petuum, Inc.

In representation learning (RL), making the learned representations easy to interpret and preventing them from overfitting the training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among the vectors and sparsity within each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
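To make the idea concrete, here is a minimal sketch of a penalty that combines the two ingredients the abstract names: a near-orthogonality term (penalizing off-diagonal entries of the Gram matrix of the weight vectors) plus an L1 sparsity term. This is an illustrative instantiation, not necessarily the exact regularizer used in the paper; the function name and weighting parameters are hypothetical.

```python
import numpy as np

def less_overlap_penalty(W, lam_ortho=1.0, lam_sparse=0.1):
    """Hypothetical regularizer encouraging small support overlap.

    W: (k, d) array whose rows are the k weight vectors.
    Near-orthogonality: squared off-diagonal entries of the Gram
    matrix W W^T (zero when rows are mutually orthogonal).
    Sparsity: L1 norm of W (drives individual entries toward zero).
    """
    G = W @ W.T                          # Gram matrix of the weight vectors
    off_diag = G - np.diag(np.diag(G))   # keep only cross-vector inner products
    ortho_term = np.sum(off_diag ** 2)
    sparse_term = np.sum(np.abs(W))
    return lam_ortho * ortho_term + lam_sparse * sparse_term

# Vectors with disjoint supports incur a lower penalty than
# vectors whose supports fully overlap.
disjoint = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
overlapping = np.array([[1.0, 1.0, 0.0],
                        [1.0, 1.0, 0.0]])
assert less_overlap_penalty(disjoint) < less_overlap_penalty(overlapping)
```

Driving the off-diagonal Gram entries toward zero while L1 shrinks individual coordinates pushes the surviving nonzero entries of different vectors onto different coordinates, which is exactly the small-support-overlap effect the abstract describes.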



