A General Interpretation of Deep Learning by Affine Transform and Region Dividing without Mutual Interference

06/16/2019
by Changcun Huang, et al.

This paper addresses the "black-box" problem of deep learning for networks composed of ReLUs with n-dimensional input space, with some additional discussion of sigmoid-unit deep learning. We prove that a region of the input space can be transmitted to succeeding layers, one by one, in the sense of affine transforms; adding a new layer can realize a subregion dividing without influencing an excluded region, which is a key distinctive feature of deep learning. A constructive proof then demonstrates that deep learning can classify multi-category data points. Furthermore, we prove that deep learning can approximate an arbitrary continuous function on a closed subset of n-dimensional space to arbitrary precision. Finally, we generalize some of these conclusions from ReLU deep learning to the sigmoid-unit case.
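The affine-transform view in the abstract can be illustrated with a minimal sketch (not the paper's exact construction): on any region of input space where every ReLU's on/off pattern is fixed, a ReLU layer reduces to a single affine map, so the region is passed to the next layer affinely. The weights, biases, and sample points below are hypothetical.

```python
import numpy as np

# Hypothetical layer parameters (2-D input, 3 ReLU units).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.1, -0.5, 0.0])

def relu_layer(x):
    """One ReLU layer: elementwise max(Wx + b, 0)."""
    return np.maximum(W @ x + b, 0.0)

x0 = np.array([0.5, -0.2])        # a point in the input space
pattern = (W @ x0 + b) > 0        # activation pattern at x0: [T, F, T]

# On the region sharing this fixed pattern, the layer equals the affine
# map x -> D(Wx + b), where the diagonal D zeros out inactive units.
D = np.diag(pattern.astype(float))
affine = lambda x: D @ (W @ x + b)

x1 = np.array([0.501, -0.199])    # nearby point with the same pattern
assert np.array_equal((W @ x1 + b) > 0, pattern)
assert np.allclose(relu_layer(x1), affine(x1))
print("layer acts as one affine transform on the region")
```

This is the standard piecewise-linear reading of ReLU networks; the excluded-region property the abstract claims corresponds to units whose pattern (and hence affine behavior) is unchanged by the added layer.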


