
Learning Less-Overlapping Representations

11/25/2017
by Pengtao Xie, et al.
Petuum, Inc.

In representation learning (RL), making the learned representations easy to interpret and keeping them from overfitting the training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
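The abstract describes the regularizer only at a high level: near-orthogonality among weight vectors combined with sparsity of each vector. As a rough illustration of how such a combined penalty can be computed, the sketch below penalizes the deviation of the Gram matrix of the weight vectors from the identity together with an L1 term. The function name, the NumPy implementation, and the specific Frobenius/L1 form are assumptions for illustration only; the abstract does not spell out the paper's exact regularizer.

    import numpy as np

    def less_overlap_penalty(W, lam=0.1):
        """Illustrative penalty (assumed form, not the paper's definition):
        a near-orthogonality term that pushes the Gram matrix of the rows of W
        toward the identity, plus an L1 term promoting sparsity of each row."""
        G = W @ W.T                                      # pairwise inner products of weight vectors
        ortho = np.sum((G - np.eye(W.shape[0])) ** 2)    # near-orthogonality penalty
        sparsity = np.abs(W).sum()                       # L1 sparsity penalty
        return ortho + lam * sparsity

    # Example usage: 5 weight vectors of dimension 20.
    W = np.random.randn(5, 20)
    print(less_overlap_penalty(W))

Pushing the Gram matrix toward the identity discourages any two vectors from sharing direction, while the L1 term shrinks individual entries toward zero; together they tend to yield vectors whose nonzero supports overlap little, which is the property the abstract targets.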
