On overcoming the Curse of Dimensionality in Neural Networks

09/02/2018
by Karen Yeressian, et al.

Let $H$ be a reproducing kernel Hilbert space. For $i = 1, \dots, N$, let $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}^m$ comprise our dataset. Let $f^* \in H$ be the unique global minimiser of the functional

$$J(f) = \frac{1}{2}\,\|f\|_H^2 + \frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\,\|f(x_i) - y_i\|^2.$$

In this paper we show that for each $n \in \mathbb{N}$ there exists a two-layer network whose first layer consists of the $nm$ basis functions $\Phi_{x_{i_k}, j}$ for $i_1, \dots, i_n \in \{1, \dots, N\}$ and $j = 1, \dots, m$, and whose second layer takes a weighted sum of the first, such that the functions $f_n$ realised by these networks satisfy

$$\|f_n - f^*\|_H \le O\!\left(\frac{1}{\sqrt{n}}\right) \quad \text{for all } n \in \mathbb{N}.$$

Thus the error rate is independent of the input dimension $d$, the output dimension $m$, and the data size $N$.
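For context, one standard step the abstract presupposes: since $J$ is strongly convex on $H$, the minimiser $f^*$ is characterised by the first-order optimality condition, which places it in the span of the kernel sections attached to the data points. In the scalar case $m = 1$, writing $K$ for the reproducing kernel of $H$,

$$0 = f^* + \frac{1}{N}\sum_{i=1}^{N}\bigl(f^*(x_i) - y_i\bigr)\,K(\cdot, x_i), \qquad \text{hence} \qquad f^* = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - f^*(x_i)\bigr)\,K(\cdot, x_i),$$

so a two-layer network with data-indexed basis functions represents $f^*$ exactly once it carries all $N$ kernel sections; the content of the result is that $n \ll N$ sampled basis functions already suffice at rate $O(1/\sqrt{n})$.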
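The abstract does not spell out how the $n$ first-layer basis functions are chosen. A standard way to obtain an $O(1/\sqrt{n})$ rate from the representation above is Maurey-style Monte-Carlo sampling: draw $i_1, \dots, i_n$ uniformly from $\{1, \dots, N\}$ and reweight, so that $f_n$ is unbiased for $f^*$ with RKHS variance decaying like $1/n$. Below is a minimal numerical sketch of that argument, assuming a Gaussian kernel and $m = 1$; it is an illustration under these assumptions, not the paper's construction, and all names in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 300, 30                        # data size and input dimension
X = rng.standard_normal((N, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)

def gram(A, B, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

K = gram(X, X, sigma=d ** 0.5)

# First-order optimality of J gives (I + K/N) c = y/N, so the unique
# global minimiser is f* = sum_i c_i K(., x_i).
c = np.linalg.solve(np.eye(N) + K / N, y / N)

def rkhs_error(idx):
    """RKHS distance ||f_n - f*||_H for f_n = (N/n) sum_k c_{i_k} K(., x_{i_k})."""
    w = np.zeros(N)
    np.add.at(w, idx, N / len(idx))   # Monte-Carlo reweighting; E[w_i] = 1
    diff = (w - 1.0) * c              # coefficient vector of f_n - f*
    return np.sqrt(diff @ K @ diff)   # ||sum_i a_i K(., x_i)||_H^2 = a^T K a

for n in [10, 40, 160, 640]:
    errs = [rkhs_error(rng.integers(0, N, size=n)) for _ in range(50)]
    print(f"n = {n:4d}   mean RKHS error = {np.mean(errs):.4f}")
```

In this sketch the mean error roughly halves each time $n$ quadruples, matching the $1/\sqrt{n}$ rate; the constant depends on the data, but the rate in $n$ depends on none of $d$, $m$, or $N$.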
