Universality of underlying mechanism for successful deep learning

09/14/2023
by Yuval Meir et al.

An underlying mechanism for successful deep learning (DL) with a limited deep architecture and dataset, namely VGG-16 trained on CIFAR-10, was recently presented, based on a quantitative method for measuring the quality of a single filter in each layer. In this method, each filter identifies small clusters of possible output labels, with additional noise consisting of labels selected outside these clusters. This feature is progressively sharpened along the layers, resulting in an enhanced signal-to-noise ratio (SNR) and higher accuracy. In this study, the suggested universal mechanism is verified for VGG-16 and EfficientNet-B0 trained on the CIFAR-100 and ImageNet datasets, with the following main results. First, the accuracy increases progressively with the layers, whereas the noise per filter typically decreases progressively. Second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. Third, the average filter cluster size and the number of clusters per filter at the last convolutional layer, adjacent to the output layer, are almost independent of the number of dataset labels in the range [3, 1,000], while a high SNR is preserved. The presented DL mechanism suggests several techniques, such as applying filter's cluster connections (AFCC), for improving the computational complexity and accuracy of deep architectures, and furthermore points to simplifications of pre-existing structures while maintaining their accuracies.
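
The per-filter quality measure summarized above can be illustrated with a small sketch. The Python/NumPy snippet below (not the authors' code) estimates each filter's label cluster from its mean activation per output label and reports a per-filter signal-to-noise ratio, defined here as the ratio of the mean in-cluster activation to the mean out-of-cluster activation; the relative threshold used to form the cluster and this particular SNR definition are illustrative assumptions rather than the paper's exact quantities.

import numpy as np

def filter_clusters_and_snr(mean_act, rel_threshold=0.5):
    """mean_act: array of shape (num_filters, num_labels) holding the mean
    (non-negative, e.g. post-ReLU) activation of each filter over validation
    images of each output label.
    Returns per-filter label clusters and an SNR per filter, defined here
    (illustrative assumption) as mean in-cluster / mean out-of-cluster activation."""
    num_filters, num_labels = mean_act.shape
    clusters, snrs = [], []
    for f in range(num_filters):
        acts = mean_act[f]
        cutoff = rel_threshold * acts.max()            # labels above this form the filter's cluster
        cluster = np.flatnonzero(acts >= cutoff)
        rest = np.setdiff1d(np.arange(num_labels), cluster)
        signal = acts[cluster].mean()
        noise = acts[rest].mean() if rest.size else np.finfo(float).eps
        clusters.append(cluster)
        snrs.append(signal / max(noise, np.finfo(float).eps))
    return clusters, np.array(snrs)

# Synthetic example: 4 filters, 10 output labels (e.g. CIFAR-10).
rng = np.random.default_rng(0)
mean_act = rng.random((4, 10))
mean_act[0, [2, 5]] += 2.0                             # filter 0 responds mainly to labels 2 and 5
clusters, snrs = filter_clusters_and_snr(mean_act)
for f, (c, s) in enumerate(zip(clusters, snrs)):
    print(f"filter {f}: cluster labels {c.tolist()}, SNR ~ {s:.2f}")

In terms of the mechanism described in the abstract, applying such a measure layer by layer should show the out-of-cluster noise shrinking and the SNR growing toward the last convolutional layer, while the cluster size and the number of clusters per filter at that layer remain nearly independent of the number of output labels.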


