Stochastic Weight Matrix-based Regularization Methods for Deep Neural Networks

09/26/2019
by   Patrik Reizinger, et al.

The aim of this paper is to introduce two widely applicable regularization methods based on the direct modification of weight matrices. The first method, Weight Reinitialization, uses a simplified Bayesian assumption to partially reset a sparse subset of the parameters. The second, Weight Shuffling, introduces an entropy- and weight-distribution-invariant non-white noise into the parameters; it can also be interpreted as an ensemble approach. The proposed methods are evaluated on benchmark datasets such as MNIST, CIFAR-10, and the JSB Chorales database, as well as on time series modeling tasks. We report gains in both the performance and the entropy of the analyzed networks. Our code is available as a GitHub repository (https://github.com/rpatrik96/lod-wmm-2019).
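To make the two operations concrete, here is a minimal NumPy sketch of what they could look like; the fraction of weights affected, the re-initialization distribution, and the function names are illustrative assumptions, not the paper's exact formulation. Note how shuffling only permutes existing values, so the weight distribution (and hence its entropy) is preserved, while reinitialization re-draws a sparse subset from the initializer.

```python
import numpy as np

def weight_reinit(w, frac=0.1, std=0.01, seed=0):
    """Reset a sparse random subset of the weights by re-drawing them
    from the initialization distribution (assumed Gaussian here)."""
    rng = np.random.default_rng(seed)
    w = w.copy()
    mask = rng.random(w.shape) < frac          # sparse subset to reset
    w[mask] = rng.normal(0.0, std, size=int(mask.sum()))
    return w

def weight_shuffle(w, frac=0.1, seed=0):
    """Permute a random subset of the weights among themselves:
    the multiset of weight values is unchanged, so the weight
    distribution and its entropy are invariant, yet the permutation
    acts as structured (non-white) noise on the parameters."""
    rng = np.random.default_rng(seed)
    w = w.copy()
    flat = w.ravel()                           # view into the copy
    n = max(2, int(frac * flat.size))
    idx = rng.choice(flat.size, size=n, replace=False)
    flat[idx] = flat[rng.permutation(idx)]     # RHS fancy index copies first
    return w
```

In practice such perturbations would be applied to a layer's weight matrix between training steps; the sketch operates on a plain array to keep the idea framework-agnostic.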


