Two Instances of Interpretable Neural Network for Universal Approximations

12/30/2021
by Erico Tjoa, et al.

This paper proposes two bottom-up interpretable neural network (NN) constructions for universal approximation: the Triangularly-constructed NN (TNN) and the Semi-Quantized Activation NN (SQANN). Their notable properties are (1) resistance to catastrophic forgetting, (2) a provable guarantee of arbitrarily high accuracy on the training dataset, and (3) for any input x, the ability to identify the specific training samples whose activation "fingerprints" are similar to the activations of x. Users can also identify samples that are out of distribution.
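To make property (3) concrete, the following is a minimal sketch, not the paper's implementation: it stores one hidden-activation "fingerprint" per training sample and matches a query by cosine similarity, flagging the query as out of distribution when no stored fingerprint is similar enough. All names here (ToyNet, tau, match_or_flag_ood) and the cosine-similarity threshold rule are hypothetical illustrations, not the authors' API.

```python
"""Hedged sketch of the activation-"fingerprint" idea from the abstract,
using a toy one-layer ReLU network. Hypothetical code, not the paper's."""
import numpy as np

class ToyNet:
    """Stand-in network; hidden() returns the activation vector that
    serves as a sample's fingerprint in this sketch."""
    def __init__(self, d_in=8, d_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_hidden, d_in))

    def hidden(self, x):
        return np.maximum(self.W @ x, 0.0)  # ReLU activations

def build_fingerprints(net, train_X):
    """One fingerprint (hidden-activation vector) per training sample."""
    return np.stack([net.hidden(x) for x in train_X])

def match_or_flag_ood(net, x, fingerprints, tau=0.9):
    """Return the index of the most similar training fingerprint, or
    None when the best cosine similarity falls below tau (treated as
    out of distribution in this sketch)."""
    f = net.hidden(x)
    sims = fingerprints @ f / (
        np.linalg.norm(fingerprints, axis=1) * np.linalg.norm(f) + 1e-12
    )
    best = int(np.argmax(sims))
    return (best if sims[best] >= tau else None), float(sims[best])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    net = ToyNet()
    train_X = rng.normal(size=(100, 8))
    fps = build_fingerprints(net, train_X)
    # A near-duplicate of training sample 3 matches it with similarity ~1.
    print(match_or_flag_ood(net, train_X[3] + 0.01, fps))
    # A query from a different distribution; the index is None whenever
    # its best similarity falls below tau.
    print(match_or_flag_ood(net, -train_X[3], fps))
```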


Related research

09/12/2023 · Neural Network Layer Matrix Decomposition reveals Latent Manifold Encoding and Memory Capacity
We prove the converse of the universal approximation theorem, i.e. a neu...

01/10/2023 · Optimal Power Flow Based on Physical-Model-Integrated Neural Network with Worth-Learning Data Generation
Fast and reliable solvers for optimal power flow (OPF) problems are attr...

08/11/2023 · Noise-Resilient Designs for Optical Neural Networks
All analog signal processing is fundamentally subject to noise, and this...

01/20/2021 · Can stable and accurate neural networks be computed? – On the barriers of deep learning and Smale's 18th problem
Deep learning (DL) has had unprecedented success and is now entering sci...

07/27/2018 · AXNet: ApproXimate computing using an end-to-end trainable neural network
Neural network based approximate computing is a universal architecture p...

04/27/2023 · Uncertainty Aware Neural Network from Similarity and Sensitivity
Researchers have proposed several approaches for neural network (NN) bas...

06/22/2022 · GACT: Activation Compressed Training for General Architectures
Training large neural network (NN) models requires extensive memory reso...
