Learning Less-Overlapping Representations

11/25/2017
by Pengtao Xie, et al.

In representation learning (RL), making the learned representations easy to interpret and less prone to overfitting the training data are two important but challenging problems. To address them, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among the vectors and sparsity within each vector. We apply the proposed regularizer to two models, neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for the regularized SC problem. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and generalize better.
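As a rough illustration of the idea, below is a minimal NumPy sketch of one common proxy for such a nonoverlap-promoting penalty: an off-diagonal Gram-matrix term pushing the vectors toward orthogonality, plus an L1 term pushing each vector toward sparsity. The function name, the weighting constants, and this particular functional form are assumptions for illustration only; the paper's exact regularizer and its ADMM-based solver may differ.

# Illustrative sketch only: a simple proxy combining a soft
# near-orthogonality penalty with an L1 sparsity term. This is not
# necessarily the paper's exact regularizer; names and weights are assumed.
import numpy as np

def overlap_regularizer(W, lam_ortho=1.0, lam_sparse=0.1):
    """Penalty encouraging the columns of W (weight vectors) to have
    small support overlap.

    W          : (d, m) matrix whose m columns are weight vectors.
    lam_ortho  : weight on the near-orthogonality term.
    lam_sparse : weight on the per-vector sparsity (L1) term.
    """
    G = W.T @ W                          # (m, m) Gram matrix of the vectors
    off_diag = G - np.diag(np.diag(G))   # keep only pairwise inner products
    ortho = np.sum(off_diag ** 2)        # push pairwise inner products to 0
    sparse = np.sum(np.abs(W))           # push individual entries toward 0
    return lam_ortho * ortho + lam_sparse * sparse

# Usage: add the penalty to a model's training objective, e.g.
#   loss = data_loss + overlap_regularizer(W)
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 5))
print(overlap_regularizer(W))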



Related research

05/29/2017  Feature Incay for Representation Regularization
  Softmax loss is widely used in deep neural networks for multi-class clas...

03/29/2023  SC-VAE: Sparse Coding-based Variational Autoencoder
  Learning rich data representations from unlabeled data is a key challeng...

04/15/2019  Disentangling Options with Hellinger Distance Regularizer
  In reinforcement learning (RL), temporal abstraction still remains as an...

12/09/2021  DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization
  Despite overparameterization, deep networks trained via supervised learn...

03/10/2019  Non-Negative Kernel Sparse Coding for the Classification of Motion Data
  We are interested in the decomposition of motion data into a sparse line...

01/17/2022  Fair Interpretable Learning via Correction Vectors
  Neural network architectures have been extensively employed in the fair ...

02/07/2022  Fair Interpretable Representation Learning with Correction Vectors
  Neural network architectures have been extensively employed in the fair ...
