Internal representation dynamics and geometry in recurrent neural networks

01/09/2020 · by Stefan Horoi et al.

The efficiency of recurrent neural networks (RNNs) in dealing with sequential data has long been established. However, unlike deep and convolutional networks, where the recognition of a certain feature can be attributed to individual layers, it is unclear what "sub-task" a single recurrent step or layer accomplishes. Our work seeks to shed light on how a vanilla RNN implements a simple classification task by analysing the dynamics of the network and the geometric properties of its hidden states. We find that early internal representations are evocative of the real labels of the data, but this information is not directly accessible to the output layer. Furthermore, the network's dynamics and the sequence length are both critical to correct classification, even when no additional task-relevant information is provided.


1 Introduction

Prior research has studied the geometry of internal representations and its effect on classification in deep neural networks (Cohen et al., 2019). However, these questions have not been studied in RNNs, where recurrent dynamics also play a significant role in task completion. Our experiments were therefore chosen with two goals in mind: characterising the geometric properties of internal class representations and evaluating the effects of the recurrent dynamics on classification accuracy.

2 Methodology and Analysis Tools

We trained a vanilla RNN to complete the well-known sequential MNIST classification task, where each input to the network is a sequence of 28 rows of 28 pixels each. The RNN has a single recurrent layer of 200 tanh neurons, and its only parameters are the recurrent and input weights (i.e., there are no bias parameters to affect the recurrent dynamics). The output layer is linear, and the network is trained using the Adam optimization algorithm with a cross-entropy loss function. After training the network for 30 epochs we achieve an accuracy of a little over 93% on the test data. While this accuracy is far from the state of the art, it suffices to show that the network generalizes the information provided by the training data to the test data and correctly classifies the images in a majority of cases. The code was implemented in Python using PyTorch, and all experiments reported below were conducted with a network trained in this manner on the test data, which the network did not see during training.
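For concreteness, a minimal PyTorch sketch of this setup is given below. The architecture follows the description above (a single recurrent layer of 200 tanh units without bias terms and a linear readout, trained with Adam and a cross-entropy loss); the class name, the learning rate and the choice of reading out from the last hidden state are illustrative assumptions, not details specified above.

```python
import torch
import torch.nn as nn

class RowRNN(nn.Module):
    """Vanilla RNN for sequential MNIST: 200 tanh units, no bias terms,
    linear readout. A sketch under the assumptions stated in the text."""
    def __init__(self, input_size=28, hidden_size=200, num_classes=10):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, nonlinearity="tanh",
                          bias=False, batch_first=True)
        self.readout = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, 28, 28) -- each image is a sequence of 28 rows of 28 pixels
        hidden_states, _ = self.rnn(x)                 # (batch, 28, 200)
        return self.readout(hidden_states[:, -1, :])   # classify from last state

model = RowRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed learning rate
criterion = nn.CrossEntropyLoss()
```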

For data analysis, PCA is used to estimate the linear dimensionality of the internal representations, both of the entire data set and of the individual classes. This is done by counting the number of principal components necessary to explain at least 90% of the variance in the data (or in a class). In addition, t-SNE (van der Maaten and Hinton, 2008) is used to create a two-dimensional representation of the network's internal states. In this representation, points that are close in the original space are mapped close together and distant points are mapped far from one another. The aim is to see how early in the classification process the network groups together similar inputs; ultimately, we want to see how early the network "knows", in a sense, that a certain input belongs to a specific class.
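A short sketch of these two analysis steps is given below, using scikit-learn; the library choice, the function name and the `hidden_states` variable are assumptions for illustration, since only the 90%-variance criterion and the use of t-SNE are specified above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def linear_dimensionality(hidden_states, var_threshold=0.90):
    # hidden_states: array of shape (n_samples, n_units), e.g. (10000, 200).
    # Count the principal components needed to explain at least
    # `var_threshold` of the variance.
    pca = PCA().fit(hidden_states)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum_var, var_threshold)) + 1

# Dimensionality of the whole data, then of a single class:
# dim_all = linear_dimensionality(hidden_states)
# dim_six = linear_dimensionality(hidden_states[labels == 6])
# Two-dimensional t-SNE embedding of the same states:
# embedding = TSNE(n_components=2).fit_transform(hidden_states)  # (n_samples, 2)
```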


3 Experiments

Experiment 1: The first experiment consists of modifying the test images so that the last rows of each image are blank (value 0); this was done for the number of blanked rows, k, ranging from 1 to 27. The part-blank images were then used as inputs for the classification task. This experiment aims to determine how the recurrent dynamics alone help the classification task (a code sketch of all three input manipulations follows Experiment 3).

Experiment 2: The second experiment consists of giving the network only the first rows of the images and then stopping, in contrast to Experiment 1, where the sequence length stays fixed at 28. This allows us to probe how well the last linear layer can read out early internal representations and to see whether or not the network relies on a fixed sequence length for classification.

Experiment 3: In the third experiment, blank pixel rows are added at the end of the full input sequences, increasing their length. The goal is to see how the network's dynamics affect the internal representations after the network has been provided with all the available information.
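The three input manipulations reduce to simple tensor operations; the sketch below illustrates them, where `k` denotes the number of real rows shown and the function names are illustrative.

```python
import torch

def blank_after(images, k):
    # Experiment 1: keep the first k real rows, blank the remaining 28 - k,
    # so the sequence length stays fixed at 28. images: (batch, 28, 28).
    x = images.clone()
    x[:, k:, :] = 0.0
    return x

def truncate_to(images, k):
    # Experiment 2: feed only the first k rows, then stop (length k).
    return images[:, :k, :]

def append_blank_rows(images, n_extra):
    # Experiment 3: pad n_extra blank rows after the full 28-row sequence.
    pad = images.new_zeros(images.size(0), n_extra, images.size(2))
    return torch.cat([images, pad], dim=1)
```

With these, experiments 1 and 2 compare `blank_after(x, k)` against `truncate_to(x, k)` for the same k, so both variants show the network exactly the same k real rows.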

4 Discussion

Figure 1a shows that the network's accuracy behaves very differently in experiments 1 and 2. The network dynamics seem to play a significant role in the classification process even when the inputs are blank, as is the case in experiment 1. Despite the network being shown the same number of real pixel rows, the accuracy in experiment 1 is far higher than in experiment 2 for all numbers of shown rows. This also indicates that the network's classification abilities are highly dependent on the sequence length, even when the amount of relevant information in the sequence is exactly the same. Figure 1b further emphasizes the importance of the sequence length for proper classification, since the accuracy drops dramatically as soon as additional blank rows of pixels are added to the input sequences. We chose to plot the classification accuracy for up to 500 added blank rows to display the highly dynamic nature of the network, which seems to settle into an oscillatory trajectory, as can be deduced from the recurring pattern in the accuracy.

Figure 3 shows that the tendency of neural networks to rely on dimensionality expansion followed by dimensionality reduction in order to perform their tasks, discussed by Recanatesi et al. (2019) and Fusi et al. (2016), seems to be maintained in recurrent neural networks.

Finally, the t-SNE visualization is especially evocative when the internal representations are colored according to their real digit class. As early as time step 4 (Figure 4), the network seems to create classification-relevant clusters in the representation space. In particular, it seems to "know" that certain points are 6s (rightmost cluster) or either 6s or 2s (middle cluster), and it separates them from the rest of the points (leftmost cluster). This internal separation grows more precise over the time steps, as shown in Figures 5 and 6, but the initial cluster of 6s is maintained, suggesting that the initial separation of this cluster from the other points was correct.

5 Conclusion

Our results show that the network's internal representation is evocative of the real data classes early in the input sequence, so the task is carried out as soon as relevant information is available. Despite the separability of the internal representations and the clusters formed in the representation space, the task-relevant information is only interpretable by the output layer after a fixed sequence length. If the sequence length varies, the network's dynamics affect the hidden states in a way that greatly hinders accuracy.

The retrieval of "early classifications", where the network doesn't wait for the whole input sequence, could have great impact in time-sensitive real-world applications in which a decision is needed as soon as possible. Further work is required to determine how this information can be retrieved (see the related work of Raghu et al. (2017) and Alain and Bengio (2017)). Machine learning visualization algorithms also show great promise in exposing the inner mechanisms of neural networks and could greatly help in the understanding of these "black-box" algorithms. A great number of visualization, dimensionality reduction and manifold learning techniques exist, and their applicability to neural network analysis should be evaluated more thoroughly.

References

  • G. Alain and Y. Bengio (2017) Understanding intermediate layers using linear classifier probes. In International Conference on Learning Representations.
  • U. Cohen, S. Chung, D. D. Lee, and H. Sompolinsky (2019) Separability and geometry of object manifolds in deep neural networks. bioRxiv, https://www.biorxiv.org/content/early/2019/05/23/644658.full.pdf.
  • S. Fusi, E. K. Miller, and M. Rigotti (2016) Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology 37, pp. 66–74.
  • M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein (2017) SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems 30, pp. 6076–6085.
  • S. Recanatesi, M. Farrell, M. Advani, T. Moore, G. Lajoie, and E. Shea-Brown (2019) Dimensionality compression and expansion in deep neural networks. CoRR abs/1906.00443.
  • L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2605.