Exploring Hidden Semantics in Neural Networks with Symbolic Regression

04/22/2022
by   Yuanzhen Luo, et al.

Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representation) of a neural network. A succinct and explicit mathematical representation of an NN model could improve the understanding and interpretation of its behavior. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of an NN. SRNet uses a Cartesian genetic programming model (NNCGP) to represent the hidden semantics of a single layer in an NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method applies a (1+λ) evolution strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet can not only reveal the complex relationships between the layers of an NN but also extract the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet achieves higher interpolation accuracy and tends to approximate the real model on practical datasets.
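The abstract describes the search machinery only at a high level: a CGP-style genotype encodes a candidate expression for one layer, and a (1+λ) evolution strategy searches over such genotypes. The sketch below illustrates only that generic mechanism under assumed simplifications (a tiny hand-picked function set, a single output node, and a synthetic target standing in for one hidden layer's input-output mapping); the names FUNCS, random_genotype, one_plus_lambda, etc. are illustrative and are not the authors' NNCGP/MNNCGP-ES implementation.

```python
# Minimal sketch of a (1+lambda) evolution strategy over a toy CGP genotype.
# This is an assumption-laden illustration, not the SRNet code.
import numpy as np

rng = np.random.default_rng(0)

FUNCS = [
    np.add,
    np.subtract,
    np.multiply,
    lambda a, b: np.sin(a),                        # unary op: second input ignored
    lambda a, b: np.divide(a, b, out=np.ones_like(a, dtype=float),
                           where=np.abs(b) > 1e-6),  # protected division
]

N_INPUTS, N_NODES = 2, 10                          # toy problem sizes


def random_genotype():
    """One gene per CGP node: (function index, source index 1, source index 2)."""
    genes = []
    for i in range(N_NODES):
        max_src = N_INPUTS + i                     # feed-forward connections only
        genes.append((int(rng.integers(len(FUNCS))),
                      int(rng.integers(max_src)),
                      int(rng.integers(max_src))))
    return genes


def evaluate(genes, X):
    """Decode the CGP graph on a batch X of shape (n_samples, N_INPUTS)."""
    values = [X[:, j] for j in range(N_INPUTS)]
    for f_idx, a, b in genes:
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[-1]                              # last node is taken as the output


def mutate(genes, rate=0.15):
    """Point mutation: each gene field is resampled with a small probability."""
    child = []
    for i, (f, a, b) in enumerate(genes):
        max_src = N_INPUTS + i
        if rng.random() < rate:
            f = int(rng.integers(len(FUNCS)))
        if rng.random() < rate:
            a = int(rng.integers(max_src))
        if rng.random() < rate:
            b = int(rng.integers(max_src))
        child.append((f, a, b))
    return child


def one_plus_lambda(X, y, lam=4, generations=300):
    """(1+lambda) ES: the parent survives unless an offspring is at least as good."""
    parent = random_genotype()
    parent_err = np.mean((evaluate(parent, X) - y) ** 2)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(lam)]
        errs = [np.mean((evaluate(c, X) - y) ** 2) for c in offspring]
        best = int(np.argmin(errs))
        if errs[best] <= parent_err:               # ties favour the offspring
            parent, parent_err = offspring[best], errs[best]
    return parent, parent_err


if __name__ == "__main__":
    # Toy target standing in for one hidden layer's input -> output mapping.
    X = rng.uniform(-1.0, 1.0, size=(200, N_INPUTS))
    y = np.sin(X[:, 0]) + X[:, 0] * X[:, 1]
    best_genes, mse = one_plus_lambda(X, y)
    print(f"best MSE after evolution: {mse:.4f}")
```

In the method described in the abstract, one such per-layer search is combined across layers via a multi-chromosome genotype, so that the per-layer expressions compose into a mathematical representation of the whole network.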

Related research

12/10/2019 · Deep symbolic regression: Recovering mathematical expressions from data via policy gradients
Discovering the underlying mathematical expressions describing a dataset...

12/12/2020 · Learning Symbolic Expressions via Gumbel-Max Equation Learner Network
Although modern machine learning, in particular deep learning, has achie...

05/02/2022 · Extracting Symbolic Models of Collective Behaviors with Graph Neural Networks and Macro-Micro Evolution
Collective behaviors are typically hard to model. The scale of the swarm...

06/24/2022 · A Grey-box Launch-profile Aware Model for C+L Band Raman Amplification
Based on the physical features of Raman amplification, we propose a thre...

03/18/2021 · Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model
A new ensemble framework for interpretable model called Linear Iterative...

11/19/2022 · Class-Specific Attention (CSA) for Time-Series Classification
Most neural network-based classifiers extract features using several hid...

04/30/2021 · InfoNEAT: Information Theory-based NeuroEvolution of Augmenting Topologies for Side-channel Analysis
Profiled side-channel analysis (SCA) leverages leakage from cryptographi...
