Explainability in Simplicial Map Neural Networks

05/29/2023
by Eduardo Paluzo-Hidalgo, et al.

Simplicial map neural networks (SMNNs) are topology-based neural networks with interesting properties, such as universal approximation capability and robustness to adversarial examples under appropriate conditions. However, SMNNs present two bottlenecks that hinder their application in high dimensions. First, no SMNN training process has been defined so far. Second, SMNNs require the construction of a convex polytope surrounding the input dataset. In this paper, we propose an SMNN training procedure based on a support subset of the given dataset, and a method based on projection onto a hypersphere as a replacement for the convex polytope construction. In addition, the explainability capacity of SMNNs is introduced for the first time in this paper.
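The hypersphere projection mentioned above can be illustrated with a minimal sketch. This is a hypothetical reconstruction (the paper's exact construction is not given in the abstract): each point is mapped radially from the dataset centroid onto a hypersphere that strictly encloses the data, so the sphere plays the role of the enclosing convex polytope.

```python
import numpy as np

def project_to_hypersphere(X, radius_margin=1.1):
    """Radially project dataset points onto an enclosing hypersphere.

    Hypothetical sketch: instead of building a convex polytope around
    the data, map each point onto a hypersphere centered at the dataset
    centroid, with radius slightly larger than the farthest point.
    """
    center = X.mean(axis=0)
    shifted = X - center
    norms = np.linalg.norm(shifted, axis=1, keepdims=True)
    norms = np.maximum(norms, 1e-12)  # guard against points at the centroid
    radius = radius_margin * norms.max()  # sphere strictly enclosing the data
    return center + radius * shifted / norms
```

Every projected point then lies at the same distance from the centroid, giving a simple enclosing boundary in any dimension without polytope construction.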
