BEAN: Interpretable Representation Learning with Biologically-Enhanced Artificial Neuronal Assembly Regularization

09/27/2019
by Yuyang Gao, et al.

Deep neural networks (DNNs) are known for extracting good representations from large amounts of data. However, the representations learned by DNNs are typically hard to interpret, especially those learned in dense layers. One crucial issue is that the neurons within each layer of a DNN are conditionally independent of one another, which makes co-training and analyzing neurons at higher modularity difficult. In contrast, the dependency patterns of biological neurons in the human brain are largely different. A neuronal assembly describes a group of biological neurons with strong internal synaptic interactions and potentially high semantic correlations, dependencies that are deemed to facilitate the memorization process. In this paper, we show that this crucial gap between DNNs and biological neural networks (BNNs) can be bridged by the newly proposed Biologically-Enhanced Artificial Neuronal assembly (BEAN) regularization, which enforces dependencies among neurons in dense layers of DNNs without altering the conventional architecture. Both qualitative and quantitative analyses show that BEAN enables the formation of interpretable and biologically plausible neuronal assemblies in dense layers and consequently enhances the modularity and interpretability of the learned hidden representations. Moreover, BEAN yields sparse and structured connectivity and parameter sharing among neurons, which substantially improves the efficiency and generalizability of the model.
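
The abstract does not spell out the BEAN loss itself, but the general idea of enforcing dependencies among dense-layer neurons through an added regularization term can be sketched. The snippet below is a minimal, hypothetical illustration assuming PyTorch: the assembly_penalty function, the toy MLP, and the trade-off weight 0.1 are invented for this example, and the nearest-neighbour distance penalty on incoming-weight vectors merely stands in for whatever dependency measure the paper actually defines.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def assembly_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the BEAN term, not the paper's exact loss.
    # weight: (out_features, in_features); each row holds one neuron's incoming weights.
    # Pull every neuron toward its nearest neighbour in weight space so that
    # clusters of neurons with shared parameters (assembly-like groups) can form.
    dists = torch.cdist(weight, weight, p=2)                               # pairwise L2 distances
    dists = dists + torch.eye(weight.size(0), device=weight.device) * 1e9  # mask self-distances
    nearest, _ = dists.min(dim=1)                                          # distance to closest other neuron
    return nearest.mean()

class MLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=256, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = MLP()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y)
loss = loss + 0.1 * assembly_penalty(model.fc1.weight)  # 0.1 is an arbitrary regularization weight
loss.backward()
```

Because the penalty only adds a term to the training loss, the network architecture itself is unchanged, consistent with the abstract's claim that BEAN requires no modification of the conventional architecture.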

Related research

07/21/2019 - Shallow Unorganized Neural Networks using Smart Neuron Model for Visual Perception
04/07/2022 - Visualizing Deep Neural Networks with Topographic Activation Maps
01/25/2019 - Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
09/12/2022 - Deep Neural Networks as Complex Networks
12/15/2021 - Planning with Biological Neurons and Synapses
06/22/2021 - Towards Biologically Plausible Convolutional Networks
