Interpreting RNN behaviour via excitable network attractors

07/27/2018

by Andrea Ceni, et al.

Machine learning has become a basic tool in scientific research and in the development of technologies with significant societal impact. Such methods make it possible to discover regularities in data and to make predictions without explicit knowledge of the rules governing the system under analysis. However, this modelling flexibility comes at a price: machine learning methods are usually black boxes, meaning that it is difficult to fully understand what the machine is doing and how. This limits the applicability of such methods and precludes the possibility of gathering novel scientific insights from experimental data. Our research aims to open the black box of recurrent neural networks, an important family of neural networks suited to processing sequential data. Here, we propose a novel methodology that provides a mechanistic interpretation of their behaviour when they are used to solve computational tasks. The methodology is based on mathematical constructs called excitable network attractors: models represented as networks in phase space composed of stable attractors and excitable connections between them. As the behaviour of a recurrent neural network depends on both training and the inputs driving the autonomous system, we introduce an algorithm to extract network attractors directly from a trajectory generated by the network while it solves tasks. Simulations conducted on a controlled benchmark highlight the relevance of the proposed methodology for interpreting the behaviour of recurrent neural networks on tasks that involve learning a finite number of stable states.
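The core idea of extracting attractors from a trajectory can be illustrated with a minimal sketch. The code below is not the paper's actual algorithm; it assumes a toy setup in which candidate stable states are found by keeping trajectory points where the network state barely changes, then greedily clustering them. All names, thresholds, and the random-network construction are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's algorithm): estimate candidate
# stable states of an RNN by keeping near-stationary trajectory points
# and clustering them. Thresholds and network setup are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# A small random tanh network; the 0.9 scaling keeps the dynamics
# contractive so the autonomous system settles onto stable states.
n = 50
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)

def step(x, u=0.0):
    """One update of the recurrent network, with optional input drive."""
    return np.tanh(W @ x + u)

# Drive the network with occasional input pulses and record the trajectory;
# between pulses the state relaxes towards an attractor.
x = rng.standard_normal(n)
traj = []
for t in range(2000):
    u = 2.0 * rng.standard_normal(n) if t % 500 == 0 else 0.0
    x = step(x, u)
    traj.append(x.copy())
traj = np.array(traj)

# Keep near-stationary points: ||x_{t+1} - x_t|| below a tolerance.
speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
slow = traj[1:][speed < 1e-3]

# Greedy clustering of the slow points yields candidate attractor centres;
# a new centre is opened whenever a point is far from all existing ones.
centres = []
for p in slow:
    if all(np.linalg.norm(p - c) > 0.5 for c in centres):
        centres.append(p)

print(len(centres))  # number of distinct candidate stable states found
```

In this toy network the contractive dynamics admit a single stable state, so the clustering recovers one centre; on a trained RNN solving a multi-state task, the same kind of procedure would be expected to recover one centre per learned stable state, with the excitable connections inferred from the input-induced transitions between them.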


Related research

- Explaining Black Boxes on Sequential Data using Weighted Automata (10/12/2018): Understanding how a learned black box works is of crucial interest for t...
- Creating Powerful and Interpretable Models with Regression Networks (07/30/2021): As the discipline has evolved, research in machine learning has been foc...
- Crafting Adversarial Input Sequences for Recurrent Neural Networks (04/28/2016): Machine learning models are frequently used to solve complex security pr...
- What you need to know to train recurrent neural networks to make Flip Flops memories and more (10/15/2020): Training neural networks to perform different tasks is relevant across v...
- Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents (09/06/2022): We present an approach for interpreting a black-box alarming system for ...
- Using Recurrent Neural Networks to Optimize Dynamical Decoupling for Quantum Memory (04/01/2016): We utilize machine learning models which are based on recurrent neural n...
- LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding (06/15/2019): Technological breakthroughs on smart homes, self-driving cars, health ca...
