A Strategy for an Uncompromising Incremental Learner

05/02/2017
by   Ragav Venkatesan, et al.
Arizona State University

Multi-class supervised learning systems require knowledge of the entire range of labels they predict. When learnt incrementally, they often suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either force a part of the machine not to learn, or retrain the machine with a selection of the historic data. While these workarounds succeed to varying degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data, along with appropriate targets, from past distributions. We call this technique phantom sampling. Using an implementation based on deep neural networks, we show that phantom sampling helps avoid catastrophic forgetting during incremental learning. We apply this strategy to competitive multi-class incremental learning of deep neural networks and, using various benchmark datasets, demonstrate that strict incremental learning can be achieved. We further put our strategy to the test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.

1 Introduction

Animals and humans learn incrementally. A child grows its vocabulary of identifiable concepts as different concepts are presented, without forgetting the concepts with which it is already familiar. Antithetically, most supervised learning systems work under the assumption that all classes to be learned are known prior to training. This is crucial for learning systems that produce an inference as a conditional probability distribution over all known categories.

Incremental supervised learning, though reasonably studied, lacks a formal and structured definition. One of the earliest formalizations of incremental learning comes from the work of Jantke [9]. In that article, the author defines incremental learning roughly as systems that “have no permission to look back at the whole history of information presented during the learning process”. Immediately following this statement, though, is a relaxation of the definition: “Operationally incremental learning algorithms may have permission to look back, but they are not allowed to use information of the past in some effective way”, with the terms information and effective not being sufficiently well-defined. Subsequent studies made conforming or divergent assumptions and relaxations, thereby adopting their own characteristic definitions. Following suit, we define a more fundamental and rigorous incremental learning system using two fundamental philosophies: the data membrane and domain agnosticism.

Figure 1: Catastrophic forgetting. (a) Confusion matrix of a network trained and tested only on samples from the base subset of labels. (b) Confusion matrix of a network initialized with the weights of the network in (a), re-trained with data from the new classes and tested on the same label space; no testing samples were provided for the old classes. (c) The same network as (b), tested on the entire label space. (d) Similar to (c), but trained with a much lower learning rate. These confusion matrices demonstrate that a neural network retrained on new labels without data from the old subset forgets the old labels, unless the learning rate is very measured and slow, as in (d). With such a slow learning rate, though the old labels are not forgotten, the new labels are not effectively learned.

Consider two sites: the base site $S_b$ and the incremental site $S_i$, each with ample computational resources. $S_b$ possesses the base dataset $D_b = \{(x, y)\}$ with labels $y \in \{0, \dots, n-1\}$, and $S_i$ possesses the increment dataset $D_i = \{(x, y)\}$ with labels $y \in \{n, \dots, n+m-1\}$, so that the label spaces of the two datasets are disjoint.

Property 1.

$D_b$ is only available at $S_b$ and $D_i$ is only available at $S_i$. Neither set can be transferred, either directly or as features extracted by any deterministic encoder, either in whole or in part, to the other site.

$S_b$ is allowed to train a discriminative learner $N_b$ using $D_b$ and make $N_b$ available to the world. Once broadcast, $S_b$ does not maintain $D_b$ and will therefore not support queries regarding $D_b$. Property 1 is referred to as the data membrane. The data membrane ensures that $S_i$ does not query $S_b$ and that no data is transferred, either in its original form or in any encoded fashion (say, as feature vectors). The generalization set at $S_i$ contains labels in the combined space of $D_b$ and $D_i$. This implies that, although $S_i$ has no data for training the labels of $D_b$, the discriminator trained at $S_i$ with $D_i$ alone is expected to generalize over the combined label space $\{0, \dots, n+m-1\}$. $S_i$ can acquire $N_b$ and other models from $S_b$ and infer the existence of the classes that $N_b$ can distinguish. Incremental learning therefore differs from the problem of zero-shot novel-class identification.

A second property of multi-class incremental learning is domain agnosticism, which can be defined as follows:

Property 2.

No priors shall be established as to the dependencies of classes or domains between $D_b$ and $D_i$.

Property 2 implies that we cannot presume to gain any knowledge about the label space of $D_i$ by simply studying the behaviour of $N_b$ on $D_i$. In other words, the predictions of the network $N_b$ do not provide meaningful information regarding $D_i$. The conditional probability distribution across the labels of $D_b$ that $N_b$ produces for a sample from $D_i$ cannot support any meaningful inference about the conditional probability distribution across the labels of $D_i$ when generalizing on the incremental data. For any sample from $D_i$, the conditional probability over the labels of the classes of $D_b$ is meaningless. Property 2 is called domain agnosticism.

The above definition implies that the sites must train independently. The training at $S_i$ of the labels of $D_i$ could be at any state when $S_b$ triggers site $S_i$ by publishing its models, which marks the beginning of incremental training at $S_i$. To keep the experiments and discussions simpler, we assume the worst-case scenario in which $S_i$ does not begin training by itself, but we generalize to other chronologies in later sections.

We live in a world of data abundance, yet even in this environment of data affluence we may encounter cases of data scarcity. Data is a valuable commodity and is often jealously guarded by those who possess it. Most large institutions and organizations that deploy trained models do not share the data with which the models are trained. A consumer who wants to add an additional capability is faced with an incremental learning problem as defined. In other cases, such as in the military or in medicine, data may be protected by legal, intellectual property and privacy restrictions. A medical facility that wants to add the capability of diagnosing a related-but-different pathology to an already purchased model faces a similar problem and often has to expend large sums of money to purchase an instrument with this incrementally additional capability. All these scenarios are plausible contenders for strict incremental learning under the above definition. The data membrane property captures the fact that even when data could technically be transferred, legal or privacy constraints may prevent sharing it across sites. The domain agnosticism property implies that we should be able to add the capability of predicting new labels to the network without assuming that the new labels hold any tangible relationship to the old labels.

A trivial baseline: Given this formalism, the most trivial incremental training protocol would be to train a machine at $S_b$ with $D_b$ and transfer this machine (make it available in some fashion) to $S_i$. At $S_i$, initialize a new machine with the parameters of the transferred machine, alert the new machine to the existence of the $m$ new classes, and simply teach it to model an updated conditional probability distribution over all $n + m$ classes. A quick experiment demonstrates that such a system is afflicted by a well-studied problem called catastrophic forgetting. Figure 1 illustrates this effect using neural networks. Without samples from $D_b$, incremental training at $S_i$ without catastrophic forgetting is difficult unless our definition is relaxed.

To avoid this, we propose that a generative model trained at $S_b$ be deployed at $S_i$ to hallucinate samples from $D_b$. The one-time broadcast from $S_b$ could include this generator along with the initializer machine that is transferred. While such a system can generate samples on demand, we still do not have targets for the generated samples with which to learn classification. To solve this problem, we propose generating supervision from the initializer network itself using a temperature-raised softmax. The temperature-raised softmax was previously proposed as a means of distilling knowledge in the context of neural network compression [7]. Not only does this provide supervision for the generated samples, it also serves as a regularizer while training the machine at $S_i$, in a fashion similar to that described in [7].

In summary, this paper provides two major contributions: 1. a novel, uncompromising and practical definition of incremental learning, and 2. a strategy to attack the defined paradigm through a novel sampling process called phantom sampling. The rest of this article is organized as follows: section 2 outlines the proposed method, section 3 discusses related works on the basis of the properties we have presented, section 4 presents the design of our experiments along with the results, section 5 extends this idea to continual learning systems, where we present a trivial extension to more than one increment, and section 6 provides concluding remarks.

2 Proposed method

Figure 2: Sites $S_b$ and $S_i$ and the networks that they train. The networks $G_b$ and $N_b$ are transferred from $S_b$ to $S_i$ and operate in feed-forward mode only at $S_i$. In this illustration using the MNIST dataset, one subset of the classes is in $D_b$ and the remaining classes are available in $D_i$.

Our design begins at $S_b$. Although $S_b$ and $S_i$ may train at various speeds and begin at various times, in this presentation we focus on systems that follow the chronology of events below:

  1. $S_b$ trains a generative model $G_b$ and a discriminative model $N_b$ using $D_b$.

  2. $S_b$ broadcasts $G_b$ and $N_b$.

  3. $S_i$ collects the models $G_b$ and $N_b$ and initializes a new model $N_i$ with the parameters of $N_b$, adding new random parameters as appropriate. Expansion using new random parameters is required since $N_i$ must make predictions on a larger range of labels.

  4. Using $D_i$ together with phantom sampling from $G_b$ and $N_b$, $S_i$ trains the model $N_i$ until convergence.

This is an asymptotic special case of the definition established in the previous section and is the case we consider. Other designs could also be established, and we briefly describe a generalized approach in the latter part of this section. While the strategy we propose could be generalized to any discriminative multi-class classifier, for the sake of clarity and precision we restrict our discussion in this article to the context of deep neural networks.

The generative model $G_b$ models the distribution of the base data $D_b$. In this article we consider networks trained as simple generative adversarial networks (GANs) for our generative models. GANs have recently become very popular for approximating and sampling from distributions of data. The GAN was originally proposed by Goodfellow et al. in 2014 and has since seen many advances [4]. We use the GANs proposed in the original article by Goodfellow et al. for the sake of convenience, and a simple convolutional neural network as the discriminator $N_b$. Figure 2 shows the overall architecture of our strategy, with $G_b$ and $N_b$ within the $S_b$ capsule. As can be seen, $G_b$ attempts to produce samples that are similar to the data, and $N_b$ learns a classifier whose softmax layer produces $P(y = j \mid x)$ as follows:

$$P(y = j \mid x) = \frac{e^{\, w_j^\top \phi(x)}}{\sum_{k=0}^{n-1} e^{\, w_k^\top \phi(x)}} \qquad (1)$$

where $W = [w_0, \dots, w_{n-1}]$ is the weight matrix of the last softmax layer, with $w_j$ representing the weight vector that produces the output of the $j^{\text{th}}$ class, and $\phi(x)$ is the output of the layer in $N_b$ immediately preceding the softmax layer. Once this network is trained, $S_b$ broadcasts these models.

At $S_i$, a new discriminative model $N_i$ is initialized with the parameters of $N_b$. $N_b$ is trained (and has the ability) to make predictions only on the label space of $D_b$, i.e., the first $n$ classes. The incremental learner $N_i$ therefore cannot be initialized with the softmax-layer weights of $N_b$ alone. Along with the weights for the first $n$ classes, $N_i$ should also be initialized with random parameters as necessary to allow prediction on the combined incremental label space of $n + m$ classes. We can simply do the following assignment to get the desired arrangement:

$$w_j^{(i)} = \begin{cases} w_j^{(b)} & \text{if } 0 \le j < n \\ \text{randomly initialized} & \text{if } n \le j < n + m \end{cases} \qquad (2)$$

where $w_j^{(b)}$ and $w_j^{(i)}$ are the softmax weight vectors of $N_b$ and $N_i$ for class $j$.

Equation 2 describes a simple strategy where the weight vectors of $N_b$ are carried over for the first $n$ classes and random weight vectors are assigned to the remaining $m$ classes. In figure 2, the gray weights in $N_i$ represent those that are copied and the red weights represent the newly initialized weights.
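
As a concrete illustration of equation 2, the snippet below expands a trained softmax weight matrix to cover the new classes. This is a minimal sketch; the array shapes and the scale of the random initialization are illustrative assumptions, not values from the paper.

```python
import numpy as np

def expand_softmax_weights(W_b, b_b, m, rng=np.random.default_rng(0)):
    """Carry over the softmax parameters of N_b for the first n classes and
    append randomly initialized parameters for the m new classes (equation 2)."""
    d, n = W_b.shape                        # d: penultimate-layer width, n: old classes
    W_new = rng.normal(0.0, 0.01, (d, m))   # small random init for the new classes
    b_new = np.zeros(m)
    W_i = np.concatenate([W_b, W_new], axis=1)   # shape (d, n + m)
    b_i = np.concatenate([b_b, b_new])
    return W_i, b_i

# Example: a 500-dimensional penultimate layer, 5 old classes and 5 new classes.
W_b, b_b = np.zeros((500, 5)), np.zeros(5)
W_i, b_i = expand_softmax_weights(W_b, b_b, m=5)
print(W_i.shape, b_i.shape)   # (500, 10) (10,)
```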

We now have at $S_i$ a network $G_b$ that will generate samples from the distribution of $D_b$ and an initialized network $N_i$ whose layers are set up with the weights from $N_b$. If we simply ignore $G_b$ and train $N_i$ with samples from $D_i$ alone, we will run into the catastrophic forgetting problem discussed in figure 1. To avoid this, we can use samples queried from $G_b$ (notationally represented as $G_b(z)$ to indicate sampling using a random vector $z$) and use these samples to avoid forgetting. However, we do not have targets for these samples with which to estimate an error. Phantom sampling helps us acquire such targets.

Definition 1.

A phantom sampler is a process of the following form:

$$\Phi(z) = \Big( G_b(z), \; N_b^T\big(G_b(z)\big) \Big), \qquad z \sim \mathcal{N}(0, I) \qquad (3)$$

where $N_b^T$ denotes the base network with a temperature-raised softmax output and $T$ is a temperature parameter that will be described below. Using $G_b$ and $N_b$, we can use this sampling process to generate sets of sample-target pairs that simulate samples from the dataset $D_b$. Simply using $D_b$ is not possible, as we do not have access to it at $S_i$, and $S_i$ is not allowed to communicate with $S_b$ regarding the data due to the data membrane condition described in property 1. We can, however, replace $D_b$ with $G_b$ and use $N_b$ to produce targets for the generated samples themselves. This is justifiable since $G_b$ is learnt to hallucinate samples from $D_b$. However, given that we only use a simple GAN and that the samples are expected to be noisy, we might get corrupted and untrustworthy targets. At the time of writing, GANs have not advanced to a degree where perfect sampling is possible at the image level. As GAN technology improves, much better sampling could be achieved using this process.
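
A minimal sketch of such a phantom sampler is shown below in PyTorch. The generator and base network here are toy stand-ins for the broadcast $G_b$ and $N_b$; their architectures, the noise dimension and the temperature value are placeholder assumptions for illustration.

```python
import torch

def phantom_sampler(G_b, N_b, batch_size=64, z_dim=100, T=2.0):
    """Draw phantom samples x = G_b(z) and soft targets y = softmax_T(N_b(x)), as in eq. 3."""
    with torch.no_grad():                       # both networks run feed-forward only at S_i
        z = torch.randn(batch_size, z_dim)      # random noise vector
        x = G_b(z)                              # hallucinated samples from the base distribution
        logits = N_b(x)                         # logits over the n base classes
        y = torch.softmax(logits / T, dim=1)    # temperature-raised softmax gives soft targets
    return x, y

# Toy stand-ins for the broadcast models (shapes are illustrative only).
G_b = torch.nn.Sequential(torch.nn.Linear(100, 784), torch.nn.Tanh())
N_b = torch.nn.Sequential(torch.nn.Linear(784, 5))    # 5 base classes
x, y = phantom_sampler(G_b, N_b)
print(x.shape, y.shape)   # torch.Size([64, 784]) torch.Size([64, 5])
```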

Given that GANs (and other similar generative models) are imperfect, samples can often have properties that are blended from two or more classes. In these cases, the targets generated from $N_b$ might place too much probability on only one of these classes, which is not optimal. To avoid this problem, we replace the softmax layer of $N_b$ with a new temperature-raised softmax layer,

$$P^T(y = j \mid x) = \frac{e^{\, w_j^\top \phi(x) / T}}{\sum_{k=0}^{n-1} e^{\, w_k^\top \phi(x) / T}} \qquad (4)$$

This temperature-raised softmax for $T > 1$ ($T = 1$ is simply the softmax described in equation 1) provides a softer target that is smoother across the labels. It reduces the probability of the most probable label and rewards the second and third most probable labels as well, by equalizing the distribution. Soft targets such as these, and their use in producing ambiguous targets that exemplify the relationships between classes, were proposed in [7]. In this context, the use of soft targets helps us get appropriate labels for samples that may be poorly generated. For instance, a generated sample could fall between two classes; its soft target will not be a strict one-hot vector for either class, but a smoother probability distribution over both (indeed over all) classes.
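
The smoothing effect of the temperature is easy to see numerically. In the sketch below, the logits are made-up values used only to show how raising $T$ flattens the target distribution; $T = 1$ recovers the ordinary softmax of equation 1.

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-raised softmax (equation 4)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [6.0, 4.5, 1.0, 0.5, 0.2]             # hypothetical logits for a blended sample
print(np.round(softmax_T(logits, T=1.0), 3))   # ~[0.808 0.180 0.005 0.003 0.002], near one-hot
print(np.round(softmax_T(logits, T=3.0), 3))   # ~[0.476 0.289 0.090 0.076 0.069], much softer
```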

While learning $N_i$ with a batch of samples from $D_i$, we may simply use a negative log-likelihood with the softmax layer for the labels. To be able to back-propagate errors for samples from phantom sampling, we require a temperature softmax layer at $N_i$ as well. For this, we simply create a temperature softmax layer that shares the weights of the softmax layer of $N_i$, just as we did for $N_b$. This implies that $N_i$ has $m$ additional output units for which we have no targets, as phantom sampling only provides targets for the first $n$ classes. Given that the samples themselves are hallucinated from $G_b$, the optimal targets to assign to the last $m$ output units of the temperature softmax layer are zero. Equivalently, we could simply avoid sharing the extra weights. Therefore, along with the phantom sample's targets, we concatenate a zero vector of length $m$. This way, we can back-propagate the errors for the phantom samples as well. The error for data from $D_i$ is,

$$e_{D} = \mathcal{E}\big( N_i(x), \; y \big), \qquad (x, y) \in D_i \qquad (5)$$

where $N_i(x)$ is the softmax output of $N_i$ and $\mathcal{E}$ represents an error function. The error for phantom samples is,

$$e_{\Phi} = \mathcal{E}\big( N_i^T(G_b(z)), \; [\, N_b^T(G_b(z)) \, ; \, \mathbf{0}_m \,] \big) \qquad (6)$$

where $N_i^T$ and $N_b^T$ are the temperature softmax outputs of the two networks, $[\,\cdot\,;\,\cdot\,]$ denotes concatenation and $\mathbf{0}_m$ is a zero vector of length $m$.

Typically, we use categorical cross-entropy for learning labels and a root mean-squared error for learning soft targets.

While both samples from $D_i$ and samples from the phantom sampler are fed forward through the same network, the weights are updated for two different errors. If the samples come from the phantom sampler, we estimate the error from the temperature softmax layer; if the samples come from $D_i$, we estimate the error from the softmax layer. For every $k$ iterations over $D_i$, we train with one iteration of phantom samples. $k$ is decided based on the number of classes in each of $D_b$ and $D_i$.
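
The schedule described by equations 5 and 6 can be sketched as a training loop. The PyTorch fragment below is a simplified illustration under assumed shapes and hyperparameters; the interleaving ratio $k$, the temperature $T$ and the loss choices follow the description above, but every concrete value is a placeholder.

```python
import torch
import torch.nn.functional as F

def train_incremental(N_i, G_b, N_b, loader_Di, optimizer, m, k=1, T=2.0, z_dim=100):
    """Interleave real batches from D_i (equation 5) with phantom batches (equation 6)."""
    for step, (x, y) in enumerate(loader_Di):
        # Error for data from D_i: ordinary softmax with a negative log-likelihood loss.
        loss = F.cross_entropy(N_i(x), y)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

        if step % k == 0:
            # Error for phantom samples: temperature softmax with soft targets,
            # padded with a zero vector of length m for the new classes.
            with torch.no_grad():
                z = torch.randn(x.size(0), z_dim)
                x_p = G_b(z)
                soft = torch.softmax(N_b(x_p) / T, dim=1)          # targets over the n old classes
                target = torch.cat([soft, torch.zeros(x.size(0), m)], dim=1)
            pred = torch.softmax(N_i(x_p) / T, dim=1)              # temperature head of N_i
            loss_p = torch.sqrt(F.mse_loss(pred, target))          # root mean-squared soft-target error
            optimizer.zero_grad(); loss_p.backward(); optimizer.step()
```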

Thus far we have assumed a certain chronology of events where $S_i$ begins training only after $S_b$ has finished training. We could generalize this strategy of using phantom sampling to the case where $N_i$ is already partially trained by the time $S_b$ finishes and triggers the incremental learning. In this case, we will not be able to re-initialize the network with new weights, but as long as we have phantom samples, we can use a technique similar to mentor nets or FitNets, using embedded losses between $N_b$ and $N_i$ to transfer knowledge about $D_b$ to $N_i$ [27, 32]. This strategy can also be extended to more than one increment of data in a straightforward manner. Using the same phantom sampling technique, we could continue training the GAN to update it with the distributions of the new classes. Once trained, we can pass this GAN and the newly trained net on to the next incremental site.

3 Related Work

Figure 3: Results for the MNIST dataset. 'Base' is the network trained on the entire dataset, 'baseline' is incremental training without phantom sampling, and 'GAN' denotes phantom sampling with a GAN trained for the indicated number of epochs.

Catastrophic Forgetting: Early works by McCloskey, French and Robins outline this issue [17, 1, 26]. In recent years, this problem has been tackled using special activation functions and dropout regularization. Srivastava et al. demonstrated that the choice of activation function affects catastrophic forgetting and introduced the Hard Winner Take All (HWTA) activation [30]. Goodfellow et al. argued that increased dropout minimizes catastrophic forgetting better than the choice of activation function [5]. All these studies concern the unavailability of data for particular classes, rather than incremental learning as such.

We find that most previous works in incremental learning relax or violate the rigorous constraints that we have proposed for an incremental learner. While this may satisfy the particular case studies pertaining to each article, we find no work that addresses our definition sufficiently. In this section, we organize our survey of existing literature in terms of the conditions each work violates.

Relaxing the data membrane: The following approaches relax property 1 to varying degrees. Mensink et al. develop a metric learning method to estimate the similarity (distance) between test samples and the nearest class mean (NCM) [18, 19]. The class mean vectors represent the centers of the data samples belonging to different classes. The learned model is a collection of class center vectors together with a distance metric determined using the training data. The NCM approach has also been successfully applied to random-forest-based models for incremental learning in [25]. The nodes and leaves of the trees in the NCM forest are dynamically grown and updated when trained with data from new classes. A tree of deep convolutional networks (DCNN) for incremental learning was proposed by Xiao et al. [34]. The leaves of this tree are CNNs with a subset of class outputs and the nodes of the tree are CNNs which split the classes. With the input of new data and classes, the DCNN grows hierarchically to accommodate the new classes. The clustering of classes, branching and tree growth are guided by an error-driven preview process, and their results indicate that the incremental learning strategy performs better than a network trained from scratch.

Learn++ is an ensemble-based approach for incremental learning [23, 20]. Based on AdaBoost, the algorithm weights the samples to achieve incremental learning. The procedure, however, requires every data batch to have examples from all previously seen classes. In [13], Kuzborskij et al. develop a least-squares SVM approach to incrementally update an N-category classifier to recognize N+1 classes. The results indicate that the model performs well only when the (N+1)-class classifier is also trained with some data samples from the previous N classes.

iCaRL is an incremental representation-based learning method by Rebuffi et al. [24]. It progressively learns to recognize classes from a stream of labeled data with a limited budget for storing exemplars. The iCaRL classification is based on the nearest-mean-of-exemplars. The number of exemplars for each class is determined by a budget, and the best representation for the exemplars is updated with the existing exemplars and the newly input data. The exemplars are chosen by a herding mechanism that creates a representative set of samples based on a distribution [33]. This method, while very successful, violates the data membrane property by transferring well-chosen exemplar samples. In our results section we address this idea by demonstrating that a significant number of (randomly chosen) samples is required to outperform our strategy, which violates the budget criterion of the iCaRL method.

Relaxing domain agnosticism: Incremental learning procedures that draw inferences regarding previously trained data from the current batch of training data can be viewed as violating this constraint. Li et al. use the base classifier $N_b$ to estimate the conditional probabilities over the old labels for samples in $D_i$. When training with $D_i$, they use these conditional probabilities to guide the output probabilities of the incremental classifier for the old classes [16]. In essence, the procedure assumes that if $N_i$ is trained such that the conditional probability over the old labels for samples in $D_i$ is the same for both $N_b$ and $N_i$, then the conditional probability for samples in $D_b$ will also be the same. This is a strong assumption relating $D_b$ and $D_i$, violating agnosticism. Furlanello et al. develop a closely related procedure in [3]. They train neural networks for the incremental classifier while making sure the conditional probabilities for samples in $D_i$ are the same for both $N_b$ and $N_i$. The only differences compared to [16] are the regularization of network parameters using weight decay and the network initialization. In another procedure based on the same principles, Jung et al. constrain the feature representations of the new data to be similar to the feature representations produced by the base network [10].

Other models assume that the parameters of the classifier for $D_b$ and the classifier for $D_i$ are related. Kirkpatrick et al. model the posterior probability of the parameters and obtain an estimate of the important parameters of $N_b$ [11]. When training $N_i$ initialized with the parameters of $N_b$, they make sure not to offset these important parameters. This compromises the training of $N_i$ under the assumption that the parameters important for $D_b$ are not important for $D_i$.

Closely related to the previous idea is pseudo-rehearsal, proposed by Robins in 1995 [26]. The neuro-biological underpinnings of this work were also studied by French et al. [2]. This method is a special case of ours in which the GAN is untrained and produces random samples. In other words, they used the base network to produce targets for random samples instead of for the outputs of a generative model, in a manner similar to phantom sampling. This might partly be because sophisticated generative models were not available at the time. That work also does not use soft targets such as ours because, for samples that are generated randomly, the ordinary softmax output is a better target. This approach does not violate any of the properties that we require of our uncompromising incremental learner.

4 Experiments and Results

To demonstrate the effectiveness of our strategy, we conduct thorough experiments using three benchmark datasets: the MNIST handwritten digit recognition dataset, the Street View House Numbers (SVHN) dataset, and the CIFAR-10 10-class visual object categorization dataset [15, 22, 12]. (Our implementations are in Theano and our code is available at https://github.com/ragavvenkatesan/Incremental-GAN.) In all our experiments, we train $S_b$'s GAN and base network independently. The network parameters of all these models are written to disk, which simulates broadcasting the networks. Once trained, the datasets used to train and test these models are deleted, simulating the data membrane, and the processes are killed.

We then begin $S_i$ as an independent process, in keeping with site independence. This process uses a new dataset $D_i$ which is set up in accordance with property 1. The parameters of networks $G_b$ and $N_b$ are loaded, but only for feed-forward operation. Two identical copies of the network $N_i$ that share weights are built. These are initialized with the parameters of $N_b$, one with an ordinary softmax layer and one with a temperature softmax layer. By virtue of the way they are set up, updating the weights of one updates both networks. We feed forward mini-batches of data from $D_i$ through the column that connects to the ordinary softmax layer and use the error generated there to update the weights for each mini-batch. For every $k$ updates of weights from the data, we update one mini-batch of phantom samples from $G_b$. This is run until early termination or until a pre-determined number of epochs. Since we save the parameters of $G_b$ after every epoch, we can load the corresponding GAN for our experiments. We use the same learning rate schedules, optimizers and momentums across all the architectures. We fix our temperature values using a simple grid search. We conducted several experiments using the above protocol to demonstrate the effectiveness of our strategy; the following sections discuss them.
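
One way to realize the two weight-sharing columns described above is sketched below in PyTorch: a single backbone and a single softmax weight matrix are exposed through two heads, an ordinary softmax for $D_i$ batches and a temperature-raised softmax for phantom batches. The layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """One set of parameters shared by two 'columns': an ordinary softmax head for
    D_i data and a temperature softmax head for phantom samples."""
    def __init__(self, in_dim=784, hidden=500, n_classes=10, T=2.0):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, n_classes)    # one weight matrix, shared by both heads
        self.T = T

    def forward(self, x):
        logits = self.out(self.backbone(x))
        p = torch.softmax(logits, dim=1)              # ordinary softmax column
        p_T = torch.softmax(logits / self.T, dim=1)   # temperature softmax column
        return p, p_T

net = TwoHeadClassifier()
p, p_T = net(torch.randn(4, 784))
print(p.shape, p_T.shape)   # both torch.Size([4, 10]); a weight update affects both columns
```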

Figure 4: Results for the CIFAR-10 dataset. Notation is similar to that of figure 3.
Figure 5: Results for the SVHN dataset using a well-trained GAN.

4.1 Single dataset experiments

MNIST: For the MNIST dataset, we used a GAN whose generator samples image generations from random noise vectors. The generator part of the network has three fully-connected layers, with ReLU activations for the first two and a tanh activation for the last layer [21]. The discriminator part of the GAN has two layers of maxout neurons [6]. This architecture closely mimics the one used by Goodfellow et al. [4]. All our discriminator networks across both sites share the same architecture, which for the MNIST dataset consists of two convolutional layers with max pooling on both, followed by two fully-connected layers. All the layers in the discriminators are trained with batch normalization and weight decay, and the fully-connected layers are trained with dropout [29, 8].
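
For reference, the sketch below shows what such a pair of MNIST models could look like in PyTorch. The exact layer widths and filter sizes are not recoverable from this copy of the text, so the numbers here are assumptions (a Goodfellow-style fully-connected generator and a LeNet-like classifier), not the authors' reported configuration.

```python
import torch.nn as nn

# Illustrative MNIST architectures; all sizes are assumed placeholders.
generator = nn.Sequential(             # generator of G_b: noise vector -> 28x28 image (flattened)
    nn.Linear(100, 1200), nn.ReLU(),
    nn.Linear(1200, 1200), nn.ReLU(),
    nn.Linear(1200, 784), nn.Tanh(),
)

classifier = nn.Sequential(            # N_b / N_i: two conv layers + two fully-connected layers
    nn.Conv2d(1, 20, 5), nn.BatchNorm2d(20), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(20, 50, 5), nn.BatchNorm2d(50), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(50 * 4 * 4, 500), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(500, 500), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(500, 5),                 # logits over the 5 base classes
)
```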

Results for the MNIST dataset are discussed in figure 3. The bar graph is divided into several factions, each representing the performance when a given number of samples per class is transmitted from $S_b$ to $S_i$. Within each faction are five bars, except for the strict data-membrane faction (no samples transmitted), which has six. The first bar of that faction represents the state-of-the-art accuracy of the base network trained on the entire dataset for the given hypothesis; this is the upper bound on the accuracies, given the architecture. The first bar on the left of each faction (the second for the strict-membrane faction) represents the accuracy of a baseline network learnt without our strategy. A baseline network does not use a phantom sampler and is therefore prone to catastrophic forgetting. The other four bars represent the performance of networks learnt using our strategy, with the GAN trained, from left to right, for an increasing number of epochs. Confusion matrices are shown wherever appropriate.

The central result of this experiment is the block of accuracies highlighted within the blue-shaded box, which shows the performance while maintaining a strict data membrane (no samples transmitted). The confusion matrix in the top-left corner shows the performance of the baseline network under the strict membrane, which is similar to (c) from figure 1, demonstrating catastrophic forgetting. The next confusion matrix, marked with a blue dashed line, depicts the accuracy of $N_i$ when $G_b$ produces random noise. This setup is the same as in the work by Robins [26]. It can be observed that even when using a phantom sampler that samples pure noise, we achieve a noticeable boost in recognition performance, significantly limiting catastrophic forgetting. The confusion matrix in the bottom-left corner is the performance using a $G_b$ trained for only a few epochs. This shows that even with a poorly trained GAN, we achieve a marked increase in performance. The best result of this faction is the confusion matrix highlighted in the red square. This is the result of a network learnt with phantom sampling using a GAN trained closest to convergence. It can clearly be noticed that the phantom sampling strategy helps in avoiding catastrophic forgetting, going so far as to achieve nearly state-of-the-art base accuracy.

The rest of the factions in this experiment make a strong case against relaxing the data membrane. Consider, for instance, the pair of confusion matrices at the bottom right, highlighted within the green dotted lines. These represent the performance of the baseline and phantom-sampling networks when a number of samples per class was transmitted through the membrane. A baseline network that was trained carefully without overfitting still retained a lot of confusion (shown in green dashed lines within the confusion matrix). The network trained with phantom sampling significantly outperforms it. In fact (refer to the orange dotted line among the bars), this relaxation is outperformed by a phantom-sampling network with even a poorly trained GAN while adhering to a strict data membrane. Only when a substantial fraction of the data is transferred per class does the baseline even match the phantom sampling network under the strict membrane (as demonstrated by the blue dotted line among the bars). All these results conclusively demonstrate the significance of phantom sampling and show that relaxing the data membrane is unnecessary. An uncompromising incremental learner was thereby achieved using our strategy.

SVHN and CIFAR-10: For both these datasets we used a generator model that generates images from Gaussian random variables. The noise is first passed through fully-connected layers, followed by two fractionally-strided (transposed) convolution layers. Apart from the last layer that generates the image, every layer has a ReLU activation; the last layer uses a tanh activation. Our discriminator networks, including the discriminator part of the GANs, have six convolutional layers. Except for the first layer, every layer uses the same filter size, and every third layer max-pools. These are followed by two fully-connected layers. All activations are ReLU.

Results for the CIFAR-10 dataset are shown in figure 4 and those for SVHN in figure 5. CIFAR-10 and SVHN contain three-channel full-color images that are more sophisticated. GANs as originally proposed by Goodfellow et al. fail to generate reasonably good-looking samples for these datasets [4]. Since we used these same models, the results shown here could be improved significantly with the invention (or adoption) of better generative models.

We can observe from figures 4 and 5 that they follow patterns similar to the MNIST results in figure 3. The CIFAR-10 results clearly demonstrate that only after a considerable fraction of the data is transmitted does the baseline come close to matching the phantom sampler approach. In the SVHN results shown in figure 5, we can observe a marked difference in performance when only a few samples are transmitted. Because SVHN is a large dataset in terms of the number of samples, the GANs were able to generate sufficiently good images, leading to superior performance. This result shows the advantage our strategy has when working with big datasets. Firstly, a big dataset imposes additional penalties on transmitting data, so transmission should be avoided. Secondly, having more samples implies that even a simple GAN can generate potentially good-looking images, helping us maintain consistent performance throughout.

4.2 Cross-domain increments

Figure 6: Results for MNIST-rotated trained at $S_b$ and incremented with new data from the original MNIST dataset at $S_i$. The class label space of the two datasets is the same. The confusion matrix on the left is for the baseline network and the one on the right is for our strategy with phantom sampling.
Figure 7: Results for MNIST trained at $S_b$ and incremented with new data from the SVHN dataset at $S_i$. The SVHN classes are considered novel classes in this experiment, so there are twenty classes in total. The confusion matrix on the left is for the baseline network and the one on the right is for our strategy with phantom sampling.

It could be argued that performing incremental learning within the same dataset has some advantages because the domains of the base and incremental data are similar. The similarity in domains could imply that the datasets are general and that the base network therefore already has some features of the incremental dataset encoded in it [31]. In this section we demonstrate two special cross-domain cases. In the first case, the incremental data $D_i$, while sampled from a new domain, has the same label space as $D_b$. In the second case, $D_i$ has new classes that are not seen in $D_b$.

Figure 8: Results for the bounded-continual learning experiments. There are two steps of increment, and each increment has its own GAN. The top row is MNIST and the bottom row is SVHN. In each row, the image on the left is the confusion matrix of the base net on the base classes. The center image is the confusion matrix after the first increment, with training data from that increment's classes and testing data from all classes seen so far. The confusion matrix on the right is for the final increment, with training data from the last increment's classes and testing data from all the classes.

Case 1: In this experiment, our base dataset is the MNIST-rotated dataset developed by Larochelle et al. [14], which is used to learn $G_b$ and $N_b$. This dataset is the same as MNIST, but the samples are randomly rotated. The incremental data comes from the MNIST dataset; the incremental data and the base dataset have the same label space. The domain of the incremental dataset (MNIST) can be considered a special subset of the domain of the base dataset (MNIST-rotated). Therefore, this setup is ripe for a scenario where the incremental site forgets the expanse of the domain of the base site. The network architecture remains the same as for the MNIST experiments. The results for this experiment are shown in figure 6. It can clearly be noted that there is a marked difference in performance in favour of our strategy.

Case 2: In this experiment, our base dataset is MNIST and it is used to learn $G_b$ and $N_b$. The incremental dataset is SVHN. The classes of SVHN are treated as ten new labels, while the labels of MNIST are maintained as the original ten. This is essentially incrementing on a new task from a disjoint domain. The results of this experiment are shown in figure 7. It can clearly be noted that there is a marked increase in performance using our strategy.

5 Extension to bounded-continual learning

So far we have defined and studied incremental learning, which consists of a single increment. In this section, we extend this idea to bounded-continual learning. Continual learning is incremental learning with multiple increments. Bounded-continual learning is a special case of continual learning where the number of increments is limited. Lifelong learning, for instance, is an example of unbounded-continual learning.

The proposed strategy can be trivially modified to work for multiple increments. Consider that there are several incremental sites, and that we have one base network $N$, with $N^j$ indicating its state after increment $j$. For every increment $j$, we learn a new GAN $G_j$. We use this set of GANs to create phantom samplers, one for each increment.

Continual learning can be implemented in the following manner. At the beginning, we construct a base network $N^0$. Once $N^0$ is trained with the base data, we create a copy of $N^0$ for phantom labelling. The samples generated by $G_0$ are fed through this copy to get phantom samples for the first increment, and this phantom sampler is used when learning that increment.

On receiving the data of increment $j$, we have the GANs $G_0, \dots, G_{j-1}$. We create an updated phantom sampler by making a copy of $N^{j-1}$: the sampler draws from all the GANs uniformly and hallucinates the labels using this copy. We then update $N^{j-1}$ to $N^j$ by training it on the increment's data along with this new phantom sampler.
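
A sketch of this multi-increment phantom sampler is given below; the uniform choice among the GANs, the batch size, the noise dimension and the temperature are illustrative assumptions, and N_prev stands for the frozen copy of the previous network state used for phantom labelling.

```python
import random
import torch

def continual_phantom_sampler(gans, N_prev, batch_size=64, z_dim=100, T=2.0):
    """Phantom sampler for increment j: pick one of the GANs G_0 .. G_{j-1} uniformly,
    hallucinate a batch, and label it with a frozen copy of the previous network N^{j-1}."""
    G = random.choice(gans)                       # uniform choice over the past increments
    with torch.no_grad():
        z = torch.randn(batch_size, z_dim)
        x = G(z)                                  # samples from one past data distribution
        y = torch.softmax(N_prev(x) / T, dim=1)   # soft targets over the classes seen so far
    return x, y
```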

This approach to bounded-continual learning is apt in cases where the data at each increment is large enough to warrant training a GAN. While this approach works well for bounded-continual learning systems, it is not scalable to lifelong learning, because unbounded-continual learning could result in an unbounded number of GANs. Seff et al. recently proposed an idea to update the same GAN over a large number of increments [28]. Such a GAN could generate data from the combined distributions of all the increments it has seen. While this still works only on a bounded number of increments, it is a step towards unbounded-continual learning. If we employ this idea in our system, we could eliminate the need for multiple GANs and extend our strategy trivially to lifelong learning as well. This idea is still in its infancy; although we have drawn a road map, we await further development before incorporating it fully into our strategy.

5.1 Experiments and results

We use GAN and classifier architectures that are the same as those defined for MNIST and SVHN in the previous section, respectively. We demonstrate continual learning on both datasets by performing two increments. The base dataset contains one subset of the classes, the first increment contains a second subset, and the last increment contains the remaining classes. Figure 8 shows the results of continual learning for both datasets. It can easily be noticed that we achieve close to state-of-the-art accuracy even while performing continual learning. A note of prominence is that even at the end of the final increment, there is little confusion remaining on the classes from the earliest stages. This demonstrates strong support for our strategy even when extending to continual learning.

6 Conclusions

In this paper, we redefined the problem of incremental learning in its most rigorous form, so that it can be a more realistic model for important real-world applications. Using a novel sampling procedure involving generative models and the distillation technique, we implemented a strategy to hallucinate samples, with appropriate targets, using models that were previously trained and broadcast. Without access to historic data, we demonstrated that we could still implement an uncompromising incremental learning system without relaxing any of the constraints of our definition. We showed strong and conclusive results on three benchmark datasets in support of our strategy, and further demonstrated its effectiveness under challenging conditions such as cross-domain increments, incrementing on a novel label space, and bounded-continual learning.

References