
Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network
We revisit fuzzy neural network with a cornerstone notion of generalized...

Using PSPNet and UNet to analyze the internal parameter relationship and visualization of the convolutional neural network
Convolutional neural network (CNN) has achieved great success in many fie...

Unsupervised prototype learning in an associative memory network
Unsupervised learning in a generalized Hopfield associative memory netwo...

A note on the generalized Hamming weights of Reed-Muller codes
In this note, we give a very simple description of the generalized Hammi...

Relative Generalized Hamming weights of affine Cartesian codes
We explicitly determine all the relative generalized Hamming weights of ...

Adaptive and Interpretable Graph Convolution Networks Using Generalized Pagerank
We investigate adaptive layerwise graph convolution in deep GCN models....
Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning
This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs: stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer. On the theoretical side, the revealed equivalence can be regarded as a constructive manifestation of the universal approximation theorem of Cybenko (1989) and Hornik (1991). In practice, it has profound and manifold implications. For network visualization, the deep epitomes constructed at each layer provide a visualization of the network's internal representation that does not rely on the input data. Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools.
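The collapse of stacked convolution layers into a single wide layer has a simple linear-algebra analogue: composing two linear convolutions is itself a convolution whose kernel is the convolution of the two kernels. The sketch below illustrates only this linear special case in 1-D with NumPy; the paper's GHN equivalence and the name `epitome` used here are the authors' constructs, and this snippet is an assumption-laden illustration, not their method.

```python
import numpy as np

# A toy input signal and two small convolution kernels (hypothetical values).
x = np.random.default_rng(0).standard_normal(16)
k1 = np.array([1.0, -2.0, 1.0])
k2 = np.array([0.5, 0.5])

# Two stacked linear convolution layers applied in sequence ('full' mode).
two_layer = np.convolve(np.convolve(x, k1), k2)

# One wider layer: its kernel is the composition of the two kernels,
# loosely analogous to the paper's single-layer "deep epitome".
epitome = np.convolve(k1, k2)
one_layer = np.convolve(x, epitome)

# The two computations agree, and the composed kernel is wider than either part.
assert np.allclose(two_layer, one_layer)
```

The composed kernel has length `len(k1) + len(k2) - 1`, which is why the equivalent single layer is "wide": depth trades for kernel width. The paper's contribution is showing this kind of collapse holds for trained GHNs, not merely for purely linear stacks.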