Unsupervised Deep Learning by Injecting Low-Rank and Sparse Priors

06/21/2021
by Tomoya Sakai, et al.

What if deep neural networks could learn from sparsity-inducing priors? When networks are designed by combining layer modules (CNNs, RNNs, etc.), engineers rarely exploit inductive bias, i.e., existing well-known rules or prior knowledge, beyond annotated training data sets. We focus on employing sparsity-inducing priors in deep learning to encourage the network to concisely capture the nature of high-dimensional data in an unsupervised way. To use non-differentiable sparsity-inducing norms as loss functions, we plug their proximal mappings into the automatic differentiation framework. We demonstrate unsupervised learning of a U-Net for background subtraction using low-rank and sparse priors. The U-Net learns moving objects in a training sequence without any annotation and successfully detects the foreground objects in test sequences.
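The sketch below is a minimal, hypothetical illustration (not the authors' code) of the idea stated in the abstract: making non-differentiable sparsity-inducing norms usable as losses by plugging their proximal mappings into an autodiff framework. It assumes PyTorch, uses the Moreau envelope as the differentiable surrogate (its gradient, (x - prox(x)) / lam, requires only the proximal mapping), and the names prox_l1, prox_nuclear, and moreau_envelope, as well as the toy data and the linear stand-in for the U-Net, are illustrative assumptions.

```python
# Hypothetical sketch: sparsity-inducing norms as losses via their proximal
# mappings, smoothed by the Moreau envelope so autodiff never needs a
# subgradient of the non-smooth norm itself.
import torch

def prox_l1(x, lam):
    """Soft-thresholding: proximal mapping of lam * ||x||_1."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def prox_nuclear(x, lam):
    """Singular-value thresholding: proximal mapping of lam * ||X||_* for a 2-D input."""
    u, s, vh = torch.linalg.svd(x, full_matrices=False)
    return u @ torch.diag(torch.clamp(s - lam, min=0.0)) @ vh

def moreau_envelope(x, prox, norm_fn, lam):
    """Differentiable surrogate of a non-smooth norm.

    The prox point is detached, so backpropagation through this expression
    yields the envelope gradient (x - prox(x)) / lam."""
    p = prox(x, lam).detach()
    return norm_fn(p) + (x - p).pow(2).sum() / (2.0 * lam)

# Toy unsupervised example in the spirit of the paper: split frames X into a
# background B (low-rank prior) and a foreground S = X - B (sparse prior),
# where B is produced by a network (a linear layer stands in for the U-Net).
frames = torch.randn(32, 64)           # stand-in for a (frames x pixels) matrix
net = torch.nn.Linear(64, 64)          # stand-in for the U-Net
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(100):
    background = net(frames)
    foreground = frames - background
    loss = (moreau_envelope(background, prox_nuclear,
                            lambda m: torch.linalg.matrix_norm(m, ord="nuc"), lam=0.1)
            + moreau_envelope(foreground, prox_l1,
                              lambda m: m.abs().sum(), lam=0.1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under these assumptions, no annotation is involved: the low-rank and sparse priors alone drive the network toward a background/foreground decomposition of the training sequence.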

