Exposing the Functionalities of Neurons for Gated Recurrent Unit Based Sequence-to-Sequence Model

03/27/2023
by Yi-Ting Lee, et al.

The goal of this paper is to report scientific discoveries about a sequence-to-sequence (Seq2Seq) model. Analyzing the behavior of RNN-based models at the neuron level is considered more challenging than analyzing DNN or CNN models because of their inherently recurrent mechanism. This paper provides a neuron-level analysis to explain how a vanilla GRU-based Seq2Seq model without attention can achieve token positioning. We identify four types of neurons: storing, counting, triggering, and outputting, and further uncover the mechanism by which these neurons work together to produce the right token in the right position.
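To make the setting concrete, below is a minimal sketch (not the authors' code) of a vanilla GRU-based Seq2Seq model without attention, with the decoder hidden states exposed so that individual neurons' activation traces can be inspected across output positions. All module names, sizes, and data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Vanilla GRU encoder-decoder without attention (illustrative sizes)."""
    def __init__(self, vocab_size=32, emb_dim=16, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the source; the final hidden state initializes the decoder.
        _, h = self.encoder(self.embed(src))
        # Keep every decoder hidden state so each neuron's activation can be
        # read off at every output position.
        dec_states, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_states), dec_states

model = Seq2Seq()
src = torch.randint(0, 32, (1, 10))   # dummy source tokens
tgt = torch.randint(0, 32, (1, 10))   # dummy decoder inputs
logits, dec_states = model(src, tgt)

# dec_states has shape (batch, time, hidden); dec_states[0, :, j] is the
# activation trace of decoder neuron j across output positions -- the kind of
# signal one could correlate with position when looking for, e.g., "counting"
# or "triggering" behavior.
print(dec_states.shape)
```

This is only a starting point for neuron-level probing under the stated assumptions; the paper's actual analysis procedure is described in the full text.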


