Efficient shallow learning as an alternative to deep learning

11/15/2022
by Yuval Meir, et al.

The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input, and the following layers reveal increasingly large-scale patterns, until a class of inputs is reliably characterized. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. Extrapolating this power law indicates that the generalized LeNet can achieve the small error rates previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture; however, VGG-16 requires a significantly larger number of operations than LeNet to achieve a given error rate. This power-law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at universal behavior and suggesting a quantitative hierarchical time-space complexity among machine learning architectures. Additionally, a conservation law along the convolutional layers, holding the square root of each layer's size times its depth constant, is found to asymptotically minimize error rates. The efficient shallow learning demonstrated in this study calls for further quantitative examination using various databases and architectures, and for its accelerated implementation using future dedicated hardware developments.
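
The abstract's central quantitative claim, that error rates decay as a power law in the number of first-layer filters, lends itself to a simple log-log fit. The sketch below is illustrative only: the data are synthetic placeholders, not the paper's measured CIFAR-10 results, and the function names, the exponent 0.4, and the reading of "size" as the number of units per layer are our assumptions.

```python
import numpy as np

def fit_power_law(d1, err):
    # Fit err = A * d1**(-rho) via linear regression in log-log space;
    # np.polyfit returns [slope, intercept] for degree 1.
    slope, intercept = np.polyfit(np.log(d1), np.log(err), 1)
    return float(np.exp(intercept)), float(-slope)

def extrapolated_error(A, rho, d1):
    # Predicted error rate for a first-layer depth d1.
    return A * d1 ** (-rho)

def conserved_depth(size1, depth1, size2):
    # Depth of a later conv layer that keeps sqrt(size) * depth constant,
    # the conservation law quoted in the abstract (interpreting "size" as
    # the number of units per layer is our assumption).
    return np.sqrt(size1) * depth1 / np.sqrt(size2)

# Synthetic (d1, error-rate) pairs drawn from an assumed power law with
# exponent 0.4 and small multiplicative noise; placeholders only.
d1 = np.array([8, 16, 32, 64, 128])
rng = np.random.default_rng(0)
err = 0.9 * d1 ** (-0.4) * (1.0 + 0.02 * rng.standard_normal(d1.size))

A, rho = fit_power_law(d1, err)
print(f"fitted A = {A:.3f}, rho = {rho:.3f}")
print(f"extrapolated error at d1 = 1024: {extrapolated_error(A, rho, 1024):.4f}")
print(f"conserved second-layer depth: {conserved_depth(32 * 32, 16, 16 * 16):.1f}")
```

Because the fit is linear in log-log coordinates, extrapolating to large first-layer depths reduces to evaluating the fitted line, which is how a scaling exponent of this kind is typically used to project achievable error rates beyond the measured range.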
