Exploring Linear Feature Disentanglement For Neural Networks

03/22/2022
by Tiantian He, et al.

Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs). Because samples are generally not linearly separable in their original feature space, the purpose of these activation functions is to project samples into a linearly separable feature space. This observation raises the question of whether every feature in a typical NN must pass through every non-linear function, i.e., whether some features already reach a linearly separable space in the intermediate layers and therefore require only an affine transformation rather than further non-linear processing. To validate this hypothesis, we explore the problem of linear feature disentanglement for neural networks. Specifically, we devise a learnable mask module to distinguish between linear and non-linear features. Our experiments show that some features reach the linearly separable space earlier than others and can be partly detached from the NN. The explored method also yields a readily applicable pruning strategy that barely affects the performance of the original model. We conduct experiments on four datasets and present promising results.
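The abstract does not spell out how the learnable mask module is implemented; the following is a minimal PyTorch sketch of one plausible reading, assuming a per-feature soft mask that interpolates between an identity (affine) path and a non-linear path. The class name `LearnableMaskLayer` and the parameter `mask_logits` are hypothetical illustrations, not the authors' actual API.

```python
import torch
import torch.nn as nn

class LearnableMaskLayer(nn.Module):
    """Hypothetical sketch of a learnable mask module: each feature
    learns to route through either a non-linear activation or a
    plain identity (affine) path."""

    def __init__(self, num_features):
        super().__init__()
        # One logit per feature; sigmoid maps it to a soft mask in (0, 1).
        self.mask_logits = nn.Parameter(torch.zeros(num_features))
        self.activation = nn.ReLU()

    def forward(self, x):
        # m near 1: the feature is treated as already linearly separable
        # and passes through unchanged; m near 0: the feature still
        # needs the non-linearity.
        m = torch.sigmoid(self.mask_logits)
        return m * x + (1.0 - m) * self.activation(x)

# Usage: insert the mask layer in place of a fixed activation.
model = nn.Sequential(
    nn.Linear(784, 256),
    LearnableMaskLayer(256),
    nn.Linear(256, 10),
)
```

In this reading, features whose mask saturates near 1 bypass the non-linearity entirely, which is what would make them candidates for the pruning strategy mentioned above.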


research
04/08/2018

Comparison of non-linear activation functions for deep neural networks on MNIST classification task

Activation functions play a key role in neural networks so it becomes fu...
research
12/19/2013

Continuous Learning: Engineering Super Features With Feature Algebras

In this paper we consider a problem of searching a space of predictive m...
research
09/06/2020

The role of feature space in atomistic learning

Efficient, physically-inspired descriptors of the structure and composit...
research
04/03/2016

Multi-Bias Non-linear Activation in Deep Neural Networks

As a widely used non-linear activation, Rectified Linear Unit (ReLU) sep...
research
09/29/2021

A Comprehensive Survey and Performance Analysis of Activation Functions in Deep Learning

Neural networks have shown tremendous growth in recent years to solve nu...
research
06/14/2021

Learning Deep Morphological Networks with Neural Architecture Search

Deep Neural Networks (DNNs) are generated by sequentially performing lin...
research
07/08/2018

Separability is not the best goal for machine learning

Neural networks use their hidden layers to transform input data into lin...
