Transformationally Identical and Invariant Convolutional Neural Networks through Symmetric Element Operators
Mathematically speaking, a transformationally invariant operator, such as a transformationally identical (TI) matrix kernel (i.e., K = TK), commutes with the transformation T itself when the two operate on the first operand matrix. We found that by consistently applying the same type of TI kernel throughout a convolutional neural network (CNN) system, the commutative property holds across all layers of the convolution process, with or without an activation function and/or a 1D convolution across channels within a layer. We further found that any CNN possessing the same TI kernel property in all convolution layers, followed by a flatten layer with weight sharing among its transformation-corresponding elements, outputs the same result for all transformation versions of the original input vector. In short, CNN[Vi] = CNN[TVi] provided every K = TK in the CNN, where Vi denotes the input vector and CNN[.] represents the whole CNN process as a function of the input vector that produces an output vector.

With such a transformationally identical CNN (TICNN) system, each input vector, even one not associated with a predefined TI used in data augmentation, would inherently bring all of its corresponding transformation versions into the training. Hence, using the same TI property for every kernel in the CNN serves as an orientation- or translation-independent training guide in conjunction with error backpropagation during training. This TI kernel property is desirable for applications requiring highly consistent output results across the corresponding transformation versions of an input. Several C programming routines are provided to help interested parties use the TICNN technique, which is expected to produce better generalization performance than its ordinary CNN counterpart.