Unbounded Output Networks for Classification

07/25/2018
by Stefan Elfwing, et al.

We previously proposed the expected energy-based restricted Boltzmann machine (EE-RBM) as a discriminative RBM method for classification. Two characteristics of the EE-RBM are that the output is unbounded and that the target value for correct classification is set much greater than one. In this study, by adapting these features of the EE-RBM approach to feed-forward neural networks, we propose the UnBounded output network (UBnet), which is characterized by three features: (1) unbounded output units; (2) a target value for correct classification set much greater than one; and (3) training with a modified mean-squared error objective. We evaluate our approach on the MNIST, CIFAR-10, and CIFAR-100 benchmark datasets. We first demonstrate, for shallow UBnets on MNIST, that setting the target value equal to the number of hidden units significantly outperforms setting it equal to one, and also outperforms standard neural networks by about 25%. We then validate our approach by achieving high classification performance on the three datasets using unbounded output residual networks. Finally, we use MNIST to analyze the learned features and weights, and we demonstrate that UBnets are much more robust against adversarial examples than networks trained by the standard approach of a softmax output layer with a cross-entropy objective.
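
To make the three defining features concrete, below is a minimal sketch of a shallow UBnet-style classifier in PyTorch. Everything beyond what the abstract states is an assumption: the layer sizes, the ReLU hidden activation, and the use of plain MSE as a stand-in for the paper's modified mean-squared error objective, whose exact form is given in the full text. The points it illustrates are the linear (unbounded) output layer and targets scaled to the number of hidden units.

```python
# Hypothetical sketch of a shallow UBnet-style classifier, assuming PyTorch.
# Assumptions (not from the abstract): layer sizes, ReLU hidden units, and
# plain MSE as a stand-in for the paper's *modified* MSE objective.
import torch
import torch.nn as nn

NUM_HIDDEN = 800                   # hidden units; the abstract sets the target equal to this
NUM_CLASSES = 10                   # e.g. MNIST
TARGET_VALUE = float(NUM_HIDDEN)   # target for the correct class, much greater than one

class UBnet(nn.Module):
    def __init__(self, in_dim=784):
        super().__init__()
        self.hidden = nn.Linear(in_dim, NUM_HIDDEN)
        # Final layer is purely linear, with no softmax or sigmoid:
        # the outputs are unbounded.
        self.out = nn.Linear(NUM_HIDDEN, NUM_CLASSES)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x)))

def ubnet_targets(labels):
    """One-hot targets scaled so the correct class aims at TARGET_VALUE."""
    t = torch.zeros(labels.size(0), NUM_CLASSES)
    t[torch.arange(labels.size(0)), labels] = TARGET_VALUE
    return t

model = UBnet()
criterion = nn.MSELoss()  # stand-in; the paper trains with a modified MSE

x = torch.randn(32, 784)                    # dummy input batch
y = torch.randint(0, NUM_CLASSES, (32,))    # dummy labels
loss = criterion(model(x), ubnet_targets(y))
loss.backward()
print(loss.item())
```

At test time, as with a softmax network, the predicted class is simply the argmax over the unbounded outputs.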

