Hidden Classification Layers: a study on Data Hidden Representations with a Higher Degree of Linear Separability between the Classes

06/09/2023
by   Andrea Apicella, et al.

In the context of classification problems, Deep Learning (DL) approaches represent the state of the art. Many DL approaches are based on variations of standard multi-layer feed-forward neural networks, also referred to as deep networks. The basic idea is that each hidden layer performs a data transformation expected to make the representation "somewhat more linearly separable" than the previous one, so that the final representation is as linearly separable as possible. However, finding network parameters that actually realise these transformations is a critical problem. In this paper, we investigate how a training approach that favours solutions whose hidden-layer representations have a higher degree of linear separability between the classes affects deep network classifier performance, compared with standard methods. To this end, we propose a neural network architecture that induces an error function involving the outputs of all the network layers. Although similar approaches have been partially discussed in past literature, here we propose a new architecture with a novel error function and an extensive experimental analysis. The experiments were carried out on image classification tasks over four widely used datasets. The results show that our approach improves test-set accuracy in all considered cases.
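The abstract describes an error function that involves the outputs of all the network layers. One common way to realise this idea, sketched below purely as an illustration (the paper's exact architecture and loss are not given here), is to attach an auxiliary linear classifier to every hidden layer and sum the per-layer cross-entropy terms; all names and sizes in this PyTorch sketch are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class DeeplySupervisedMLP(nn.Module):
    """Feed-forward net with an auxiliary linear classifier on each hidden
    layer. Illustrative sketch only; sizes and structure are assumptions."""

    def __init__(self, in_dim=784, hidden=(256, 128), n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.heads = nn.ModuleList()
        prev = in_dim
        for h in hidden:
            # One hidden block followed by a "hidden classification layer".
            self.blocks.append(nn.Sequential(nn.Linear(prev, h), nn.ReLU()))
            self.heads.append(nn.Linear(h, n_classes))
            prev = h
        self.out = nn.Linear(prev, n_classes)  # final classifier

    def forward(self, x):
        # Collect class logits from every hidden layer plus the output layer.
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        logits.append(self.out(x))
        return logits


def total_loss(logits_list, targets):
    """Sum one cross-entropy term per layer output, so every hidden
    representation is pushed toward linear separability of the classes."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits, targets) for logits in logits_list)
```

In this sketch, gradients from each auxiliary head flow back into the earlier layers, encouraging each intermediate representation to be more linearly separable; at inference time one would typically keep only the final classifier's prediction.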


Related research

07/26/2023: Understanding Deep Neural Networks via Linear Separability of Hidden Layers
In this paper, we measure the linear separability of hidden layer output...

03/21/2022: Origami in N dimensions: How feed-forward networks manufacture linear separability
Neural networks can implement arbitrary functions. But, mechanistically,...

07/08/2018: Separability is not the best goal for machine learning
Neural networks use their hidden layers to transform input data into lin...

01/21/2021: Superiorities of Deep Extreme Learning Machines against Convolutional Neural Networks
Deep Learning (DL) is a machine learning procedure for artificial intell...

02/22/2018: Vector Field Based Neural Networks
A novel Neural Network architecture is proposed using the mathematically...

12/20/2014: Classifier with Hierarchical Topographical Maps as Internal Representation
In this study we want to connect our previously proposed context-relevan...

04/24/2018: Genesis of Basic and Multi-Layer Echo State Network Recurrent Autoencoders for Efficient Data Representations
It is a widely accepted fact that data representations intervene noticea...
