Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable

05/24/2022
by Promit Ghosal, et al.

Recently, neural networks have been shown to perform exceptionally well in transforming two arbitrary sets into two linearly separable sets. Doing this with a randomly initialized neural network is of immense interest because the associated computation is cheaper than that of a fully trained network. In this paper, we show that, with sufficient width, a randomly initialized one-layer neural network transforms two sets into two linearly separable sets with high probability. Furthermore, we provide explicit bounds on the required width of the network for this to occur. Our first bound is exponential in the input dimension and polynomial in all other parameters, while our second bound is independent of the input dimension, thereby overcoming the curse of dimensionality. We also perform an experimental study comparing the separation capacity of randomly initialized one-layer and two-layer neural networks. With correctly chosen biases, our study shows that, for low-dimensional data, the two-layer neural network outperforms the one-layer network, whereas the opposite is observed for higher-dimensional data.
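To make the setting concrete, here is a minimal sketch (not the paper's construction or experiments) of what "a random one-layer network makes data linearly separable" means in practice. It builds a toy dataset of two concentric spheres that no hyperplane can separate in the input space, passes it through an untrained one-layer ReLU map phi(x) = ReLU(Wx + b) with i.i.d. Gaussian weights and biases, and then checks whether a linear classifier can fit the random features perfectly. The specific data, width, initialization scale, and use of scikit-learn's LinearSVC as a separability check are all assumptions of this illustration, not details from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data that is NOT linearly separable in the input space:
# one class on a small sphere, the other on a surrounding sphere.
d = 10                      # input dimension (assumed for the sketch)
n = 200                     # points per class
inner = rng.normal(size=(n, d))
inner /= 2.0 * np.linalg.norm(inner, axis=1, keepdims=True)   # radius 0.5
outer = rng.normal(size=(n, d))
outer /= np.linalg.norm(outer, axis=1, keepdims=True)          # radius 1.0
X = np.vstack([inner, outer])
y = np.hstack([np.zeros(n), np.ones(n)])

# Randomly initialized one-layer ReLU network (no training at all):
# phi(x) = ReLU(W x + b) with i.i.d. Gaussian weights and biases.
width = 5000                # a "sufficiently wide" hidden layer (assumed)
W = rng.normal(size=(d, width)) / np.sqrt(d)
b = rng.normal(size=width)
features = np.maximum(X @ W + b, 0.0)

# If the random features are linearly separable, a linear SVM with a very
# large C should reach 100% training accuracy on them.
clf = LinearSVC(C=1e6, max_iter=50000)
clf.fit(features, y)
print("training accuracy on random features:", clf.score(features, y))
```

Under these assumptions, the linear classifier typically fits the random features exactly even though the raw inputs are not linearly separable; how wide the hidden layer must be for this to happen with high probability is precisely what the paper's bounds quantify.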


