Learning algorithms typically operate on the entire training data set in an offline mode and may require costly re-training from scratch when new data becomes available. In contrast, humans learn continuously ("online"), adapting to the environment while leveraging past experiences. In this setting of continual learning, an agent is presented with a stream of samples (input-output pairs) from a non-stationary data distribution, such as a sequence of different classification tasks. The agent has to learn the input-output relationships associated with the different data distributions in order to adapt well to new tasks without forgetting previously learned ones, i.e., to avoid the catastrophic forgetting problem [7]. Several types of approaches to this problem have been proposed that introduce structural robustness [2, 9], regularize the parameters [10, 11, 4], or utilize memory buffers [6, 8]. While memory-based methods are among the most successful, they generally rely on explicit task labels when storing instances of past learning, and they do not scale well to real-life problems.
Herein, we propose to capture the essence of the multimodal data distributions encountered in continual learning via the unsupervised learning mechanism known as the Self-Organizing Map (SOM). The map learns simultaneously with a supervised feedforward neural network, such that the SOM routes each input sample to the most relevant part of the network. Unlike previous methods, SOMs require neither explicit information about task changes nor an explicit memory of previous samples, while still achieving performance close to the current state of the art on several benchmarks.
2 Self-Organizing Maps for Continual Learning
Self-Organizing Maps (SOMs) [5] are a type of artificial neural network that reduces inputs from a high-dimensional ($n$-dimensional) space to a low-dimensional representation without supervision. Unlike supervised training via error correction (backpropagation), self-organizing maps employ unsupervised competitive learning in order to map similar input vectors to physically nearby nodes in the map layer. Namely, each SOM node is associated with an $n$-dimensional weight vector, and, at each iteration, the next input sample is assigned to the most similar node, called the Best Matching Unit (BMU). The trainable weights of the BMU, along with those of its topographical neighbors (determined by a neighborhood radius $\sigma$), are tuned to reduce the error between the weights and the input (proportionally to a learning rate $\alpha$), so that the neighborhood becomes more similar to the input. This allows the SOM to adapt only part of its parameters in response to each observed input. To stabilize learning, both $\sigma$ and $\alpha$ are decayed during training (with an exponential decay rate $\tau$).
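As a concrete illustration, the following is a minimal sketch of one SOM update step, assuming a Gaussian neighborhood function on a 2D grid; the symbols $\alpha$ and $\sigma$ match the text, while the function and variable names are our own:

```python
import numpy as np

def som_update(weights, x, alpha, sigma, grid):
    """One SOM update step.

    weights: (K, n) array, one n-dimensional weight vector per node
    x:       (n,) input sample
    alpha:   learning rate
    sigma:   neighborhood radius
    grid:    (K, 2) array of node coordinates on the 2D map
    """
    # Best Matching Unit: the node whose weight vector is closest to x
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood on the 2D map, centered at the BMU
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    # Pull each node toward x in proportion to its neighborhood weight
    weights += alpha * h[:, None] * (x - weights)
    return bmu
```

The exponential decay of the two hyperparameters then amounts to, e.g., `alpha = alpha0 * np.exp(-t / tau)` at iteration `t`, and likewise for `sigma`.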
SOMs have previously been proposed as a potential solution to the catastrophic forgetting problem [1], but it has been shown that such algorithms often fail even on small-scale benchmarks like MNIST-permutations and MNIST-rotations [3].
Here we introduce the Self-Organized Multi-Layer Perceptron (SOMLP), in which a SOM layer is used in parallel with a fully-connected layer, as illustrated in Figure 1. The SOM receives the same input as the fully-connected layer and is trained without supervision on the network inputs (Algorithm 1). During training it learns the input distribution of each task and, most importantly, a 2D map of the relationships between tasks. For every input, an output mask is computed from the Euclidean distances between the input and the SOM weight vectors, as shown in Algorithm 1 (tuned with a hyperparameter $\beta$), and is multiplied element-wise with the output of the fully-connected layer (Figure 1), as sketched below. This allows the network to allocate resources (i.e., nodes) to learning the input-output associations of each task while masking irrelevant nodes. In addition, the SOM shares nodes between similar tasks while using separate nodes for dissimilar tasks. The overall training procedure is given in Algorithm 2, which includes the option of pretraining the SOM weights on unlabeled data from different tasks (if available).
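Since Algorithm 1 is not reproduced here, the exact form of the mask is an assumption; the sketch below uses a Gaussian of the input-to-weight distances, with `beta` as our stand-in for the elided sharpness hyperparameter:

```python
import numpy as np

def som_mask(som_weights, x, beta):
    """Soft mask over hidden units: units whose SOM weight vector is
    close to the input stay active, distant units are suppressed.
    The Gaussian form is an assumption; Algorithm 1 defines the
    actual mask used in the paper."""
    d = np.linalg.norm(som_weights - x, axis=1)
    m = np.exp(-beta * d ** 2)
    return m / m.max()  # normalize so the BMU keeps a mask value of 1

# Hypothetical SOMLP forward pass for one sample:
#   h = relu(W @ x + b) * som_mask(som_weights, x, beta)
#   y = V @ h + c
```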
We evaluated SOMLP against three baseline methods: a naive Multi-Layer Perceptron (MLP), Elastic Weight Consolidation (EWC) [4], and Gradient Episodic Memory (GEM) [6]. We evaluated all methods on two standard benchmark datasets, MNIST-permutations and MNIST-rotations [6]. MNIST-permutations consists of 20 random pixel permutations of the standard MNIST dataset, while MNIST-rotations contains 20 rotations of the MNIST digits between 0 and 180 degrees.
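For concreteness, the two benchmarks can be generated along the following lines (a sketch only; the stand-in array replaces the real MNIST training images, and the function names are ours):

```python
import numpy as np
from scipy.ndimage import rotate

# Stand-in for the MNIST training images (shape: samples x 28 x 28)
train_x = np.random.rand(100, 28, 28).astype(np.float32)

def permutation_task(images, rng):
    """One MNIST-permutations task: a fixed random pixel permutation
    applied to every flattened image."""
    perm = rng.permutation(28 * 28)
    return images.reshape(-1, 28 * 28)[:, perm]

def rotation_task(images, angle):
    """One MNIST-rotations task: every digit rotated by a fixed angle."""
    return np.stack([rotate(im, angle, reshape=False) for im in images])

rng = np.random.default_rng(0)
perm_tasks = [permutation_task(train_x, rng) for _ in range(20)]
rot_tasks = [rotation_task(train_x, a) for a in np.linspace(0, 180, 20)]
```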
All experiments were performed on a fully-connected network with a single hidden layer of $N$ units, with $N$ varying across the experiments, and an output layer of 10 units (one per MNIST class). For SOMLP, the hidden layer is augmented with a SOM of the same size, and the output of the hidden layer is multiplied by the SOM output before being fed to the next layer. For a fair comparison with respect to the number of parameters, we used a larger $N$ for MLP, EWC, and GEM than for SOMLP. (Counting weights only, the total number of parameters is approximately $(784 + 10)N$ for MLP, EWC, and GEM, and approximately $(2 \cdot 784 + 10)N$ for SOMLP, since the SOM stores an additional 784-dimensional weight vector per node.)
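As a quick check of this accounting (the hidden sizes below are illustrative; the ones actually used in the experiments are not recovered here):

```python
def n_params(n_hidden, n_in=784, n_out=10, with_som=False):
    """Weight count for a one-hidden-layer MLP; SOMLP adds a SOM
    whose nodes each store an n_in-dimensional weight vector."""
    p = n_in * n_hidden + n_hidden * n_out  # dense layers (biases omitted)
    if with_som:
        p += n_in * n_hidden                # SOM weight vectors
    return p

# Example: a 2000-unit baseline vs. a 1000-unit SOMLP end up with
# roughly matched budgets (1.588M vs. 1.578M weights).
print(n_params(2000) / 1e6, n_params(1000, with_som=True) / 1e6)
```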
All networks were trained for one epoch on the training set, learning 20 tasks in sequence. For each task, the network is presented with 60,000 samples. The hyperparameter settings used in the experiments are summarized in supplementary Table S1.
In addition, both EWC and GEM require memory slots for stored samples. EWC uses a memory buffer to save samples from which the Fisher information matrix is computed before moving on to the next task, while GEM keeps a per-task memory buffer of samples for experience replay; a scalar "memory strength" additionally dictates how much weight previous samples receive. In our experiments, we varied the number of memory slots from 8 to 5120 for EWC (memory strength 3) and from 256 to 5120 for GEM (memory strength 0.5). An advantage of SOMLP is that it requires no sample memory buffer at all.
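To make the memory accounting concrete, a per-task sample buffer of the kind GEM relies on might look as follows (a sketch of the bookkeeping only, not of GEM's gradient-projection step; names are ours):

```python
class EpisodicMemory:
    """Per-task buffer of (input, label) pairs; `slots` is the
    memory size varied in our experiments. SOMLP needs no such
    structure."""
    def __init__(self, slots):
        self.slots = slots
        self.buffers = {}  # task_id -> list of (x, y) pairs

    def add(self, task_id, x, y):
        buf = self.buffers.setdefault(task_id, [])
        if len(buf) < self.slots:
            buf.append((x, y))  # keep the first `slots` samples per task

    def replay(self, task_id):
        return self.buffers.get(task_id, [])
```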
In our approach, we used two methods for pretraining the SOM weights. The first pretrains the weights on the unlabeled MNIST training set with a large initial neighborhood radius, while the second pretrains them on 10% of the training data from all the tasks in the benchmark. In both cases, only the input images (no labels) are used for pretraining.
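A sketch of the first variant, reusing `som_update` from above (`sigma0`, `alpha0`, and `tau` are placeholders for the initial radius, learning rate, and decay constant given in Table S1):

```python
import numpy as np

def pretrain_som(som_weights, grid, images, sigma0, alpha0, tau):
    """Unsupervised pretraining on unlabeled images, with the
    neighborhood radius and learning rate decaying exponentially."""
    for t, x in enumerate(images):
        sigma = sigma0 * np.exp(-t / tau)
        alpha = alpha0 * np.exp(-t / tau)
        som_update(som_weights, x.ravel(), alpha, sigma, grid)
    return som_weights
```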
Table 1 shows the networks’ performances on each benchmark dataset after learning a battery of 20 tasks.
[Table 1. Columns: Network, Memory Size, #Parameters (M), and accuracy per benchmark; table body not recovered.]
On MNIST-permutations, one SOMLP variant performed on par with EWC but below GEM with large memory buffers, and on MNIST-rotations it outperformed EWC while still trailing GEM. On the other hand, in the low-memory regime, where the size of the available buffer is reduced, this variant performs better than all baseline methods. The other SOMLP variant only outperformed EWC, and only on MNIST-rotations. The naive MLP worked fairly well on MNIST-rotations but was below all other methods on MNIST-permutations.
It should be noted that both EWC and GEM make use of a "sample memory buffer" as well as explicit information about when the task is switched, while SOMLP requires neither a memory buffer nor knowledge of task switching.
We examined the effect of the amount of sample memory on mean performance (note that EWC additionally stores the Fisher matrix associated with each task, which we did not account for in our memory-requirement estimate). We found that the performance of GEM on both benchmarks is significantly reduced with smaller memory sizes. EWC performance is almost constant on the MNIST-permutations benchmark and even decreases with an enlarged memory buffer on MNIST-rotations (Figure 2).
We also examined the mean performance during sequential learning of all tasks (Figure S1) on each dataset. As more tasks are learned, EWC and the naive MLP experience some degree of forgetting, and their mean performance therefore decreases with more tasks. While both GEM and SOMLP are able to maintain the same level of mean performance throughout, SOMLP stays consistently below GEM's level of performance. This is potentially due to the lower network capacity of our SOMLP approach, caused by (1) the smaller network size and (2) the per-task SOM masks, which limit the resources available for learning each task.
The learned feature maps and per-task masks for each benchmark are shown in Figure 3. In MNIST-permutations, because of the random pixel permutations, the masks corresponding to different tasks are independent of each other. Conversely, in MNIST-rotations, the learned masks share nodes between similar tasks (i.e., tasks whose rotation angles differ only slightly) and use independent nodes for more dissimilar tasks (e.g., tasks separated by larger rotation angles).
Acknowledgements: This research was supported by the MIT-IBM Watson AI Lab and the Semiconductor Research Corporation (SRC).
- (1) Alexander Gepperth and Cem Karaoguz. A Bio-Inspired Incremental Learning Architecture for Applied Perceptual Problems. Cognitive Computation, 8(5):924–934, 2016.
- (2) Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
- (3) Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring Catastrophic Forgetting in Neural Networks. arXiv, 2017.
- (4) James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences (PNAS), 2017.
- (5) Teuvo Kohonen. The Self-Organizing Map. Proceedings of the IEEE, 78(9):1464–1480, 1990.
- (6) David Lopez-Paz and Marc’Aurelio Ranzato. Gradient Episodic Memory for Continual Learning. Neural Information Processing Systems, 2017.
- (7) Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109–165. Elsevier, 1989.
- (8) Pablo Sprechmann, Siddhant M. Jayakumar, and Jack W. Rae. Memory-based Parameter Adaptation. ICLR, pages 1–13, 2018.
- (9) Rupesh K Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. Neural Information Processing Systems, pages 2310–2318, 2013.
- (10) Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? Neural Information Processing Systems, pages 3320–3328, 2014.
- (11) Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual Learning Through Synaptic Intelligence. arXiv, 2017.