Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes

08/01/2023
by Stephan Johann Lehmler, et al.

To gain a deeper understanding of the behavior and learning dynamics of (deep) artificial neural networks, it is valuable to employ mathematical abstractions and models. These tools provide a simplified perspective on network performance and facilitate systematic investigations through simulations. In this paper, we propose utilizing the framework of stochastic processes, which has been underutilized thus far. Our approach models activation patterns of thresholded nodes in (deep) artificial neural networks as stochastic processes. We focus solely on activation frequency, leveraging neuroscience techniques developed for real neuron spike trains. During a classification task, we extract spiking activity and model it as an arrival process with Poisson-distributed counts. We examine observed data from various artificial neural networks on image recognition tasks and fit the proposed model's assumptions, deriving parameters that describe the activation patterns of each network. Our analysis covers randomly initialized, generalizing, and memorizing networks, revealing consistent differences across architectures and training sets. By calculating the Mean Firing Rate, Mean Fano Factor, and Variances, we find stable indicators of memorization during learning, providing valuable insights into network behavior. The proposed model shows promise in describing activation patterns and could serve as a general framework for future investigations. It has potential applications in theoretical simulations, pruning, and transfer learning.
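The analysis described above can be sketched in a few lines of numpy. The following is a minimal illustration, not the authors' code: synthetic activations stand in for a recorded hidden layer, a threshold turns them into binary "spikes", stimuli are grouped into fixed-size observation windows as in a Poisson arrival process, and the per-node Mean Firing Rate and Fano Factor are computed from the window counts. The threshold value, layer size, and window length are all arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded hidden-layer activations:
# rows = stimuli (images), columns = nodes.
activations = rng.gamma(shape=2.0, scale=1.0, size=(1000, 64))

# A node "spikes" on a stimulus when its activation exceeds a
# threshold (1.5 here is an arbitrary cut for illustration).
spikes = (activations > 1.5).astype(int)

# Group stimuli into trials ("observation windows") and count the
# spikes of each node per trial, as for an arrival process.
n_trials, window = 50, 20
counts = spikes.reshape(n_trials, window, -1).sum(axis=1)  # (trials, nodes)

# Per-node summary statistics of the kind used in the analysis.
mean_count = counts.mean(axis=0)
mean_rate = mean_count / window          # mean firing rate per stimulus
fano = counts.var(axis=0) / mean_count   # Fano factor; ~1 for a Poisson process

# The maximum-likelihood Poisson rate is simply the mean count.
poisson_lambda = mean_count

print(mean_rate.shape, fano.shape)
```

Comparing such per-node statistics between networks trained on true labels and on randomized labels is one way the abstract's "indicators of memorization" could be read off.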

research
04/02/2020

Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations

The need for more transparency of the decision-making processes in artif...
research
04/16/2021

Probing artificial neural networks: insights from neuroscience

A major challenge in both neuroscience and machine learning is the devel...
research
07/11/2022

The Mean Dimension of Neural Networks – What causes the interaction effects?

Owen and Hoyt recently showed that the effective dimension offers key st...
research
10/26/2018

Whetstone: A Method for Training Deep Artificial Neural Networks for Binary Communication

This paper presents a new technique for training networks for low-precis...
research
02/14/2022

Testing the Tools of Systems Neuroscience on Artificial Neural Networks

Neuroscientists apply a range of common analysis tools to recorded neura...
research
09/07/2022

The Role Of Biology In Deep Learning

Artificial neural networks took a lot of inspiration from their biologic...
research
04/30/2020

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima

Recently, a race towards the simplification of deep networks has begun, ...
