Human beings have a natural ability to adapt to different tasks sequentially without forgetting what they have learned. They can also seamlessly leverage knowledge learned from past tasks to tackle new tasks. This impressive ability is crucial for learning systems deployed in the real world. Lifelong learning aims to develop models that mimic this human ability to learn continually without forgetting knowledge acquired earlier. Concretely, in a lifelong learning setting, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arise sequentially. The model should exhibit high accuracy on new tasks as well as perform well on old classification tasks, even if the old data is no longer accessible. However, learning algorithms are often designed to operate under stationary data distributions, where typically only a single task needs to be addressed. In the lifelong learning setting, applying standard learning algorithms may lead to forgetting what has been learned on old tasks: this phenomenon, known as catastrophic forgetting [27, 32], results in severe performance degradation on old tasks after adapting to a new task.
A large body of work has been proposed to address catastrophic forgetting, using a varied arsenal of techniques. Despite these advances in lifelong learning, limitations remain. Most methods, including, e.g., regularization-based [18, 42, 22, 28, 1] and rehearsal-based [35, 24, 37, 29] methods, mitigate catastrophic forgetting under relatively restrictive conditions, e.g., assuming a small number of highly related tasks. When tasks differ drastically and the number of tasks grows, these methods suffer significant degradation. Another approach is to increase the model capacity (i.e., add parameters, neurons, layers, etc.) to accommodate new tasks, while preserving parts of the model for old tasks [34, 40, 4]. However, increasing complexity makes such methods prone to overfitting, and can be undesirable when models are to be deployed on memory-limited devices. Therefore, the competing objective of parsimony is desirable.
Another related challenge in lifelong learning is how to reuse learned knowledge to help the model learn future tasks better. Current research often ignores this critical point by, e.g., considering different tasks independently, or addresses it only partially, e.g., by using past parameters as an initialization during training. However, the usefulness of knowledge gained from old tasks may depend on the relatedness of the old and new tasks. For example, a classifier trained for classifying dogs may be more helpful for classifying cats than digits. Thus, adaptively selecting useful past knowledge is critical for improving performance on a new task.
Our proposed method, named learn-prune-share (LPS), is a novel deep learning framework aimed at addressing these challenges. LPS learns sequential tasks without experiencing catastrophic forgetting, by partitioning the neural network and dedicating portions to each task. It also prunes the neural network, thereby maintaining parsimony and avoiding overfitting. Finally, it selectively shares knowledge from old tasks and reuses it on new tasks. All of these happen simultaneously, in a unified optimization framework trained in an end-to-end fashion. Our contributions are as follows:
We incorporate a state-of-the-art pruning strategy based on the Alternating Direction Method of Multipliers (ADMM) to solve the lifelong learning problem, maintaining a single parsimonious neural network model and eliminating catastrophic forgetting entirely.
We design a novel knowledge sharing scheme, which learns to select useful knowledge from old tasks and adapt it to the current task. Our knowledge-sharing scheme is seamlessly integrated with our ADMM pruning strategy, and is trained jointly with the classifier parameters. We make our code publicly available (https://github.com/neu-spiral/LPSforLifelong) to accelerate community contributions on this exciting topic.
Our method, LPS, shows superior performance on two standard lifelong learning benchmark datasets as well as a challenging real-world radio fingerprinting dataset. LPS beats state-of-the-art methods by a 2%–54% margin.
II. Related Work
II-A. Lifelong Learning
Regularization-based methods [18, 42, 22, 28, 1] limit the plasticity of the network via regularization terms or by limiting the learning rate on parameters learned from previous tasks. While regularization-based methods mitigate catastrophic forgetting to some extent, performance on previous tasks becomes increasingly worse as more diverse tasks are seen. By design, our method deals with the catastrophic forgetting problem more effectively, as performance on previous tasks remains unchanged.
Rehearsal-based methods capture the data distribution of previous tasks by learning a generative model. When a new task arrives, data from previous tasks is simulated via the generative model and combined with current data to reinforce previous knowledge [35, 24, 37, 29]. Though saving a generative model is less memory-intensive than saving data, such models can still be large. Performance largely depends on the quality of the generative model and on careful tuning of the mix of generated and new data. Our approach avoids the additional cost of training and storing an external generative model, again while experiencing no catastrophic forgetting.
Expansion-based methods accommodate new tasks by gradually increasing the capacity of the model [34, 40, 4]. These methods generally outperform regularization- and rehearsal-based methods, which maintain a model with fixed capacity. However, the number of model parameters grows linearly with the number of tasks. This limits their practical usage and makes them prone to overfitting. In contrast, our approach fully exploits the potential of a fixed-capacity model.
Closest to our work are pruning-based lifelong learning methods. In these works, model pruning techniques are utilized to compress the original model iteratively, allocating free capacity for new tasks. However, both of these methods use simple threshold-based heuristics to prune the model with no structural constraint, resulting in sparse, irregular matrices that limit further acceleration at inference time. Additionally, both methods consider tasks independently, ignoring the relationship between the current and previous tasks. In contrast, our approach adopts a systematic pruning strategy via the Alternating Direction Method of Multipliers (ADMM), where structural constraints, e.g., filter pruning or column pruning, can be specified as needed. Moreover, our novel knowledge sharing scheme adaptively selects weights shared from previous tasks to facilitate learning the current and future tasks. Our experimental results in Section V-B show that, due to these improvements, LPS outperforms these two algorithms.
II-B. Neural Network Weight Pruning
The rich literature on neural network weight pruning can be categorized into heuristic pruning algorithms and regularization-based pruning algorithms. The former starts from early work on irregular, unstructured weight pruning, where arbitrary weights can be pruned. Han et al. use an iterative algorithm to eliminate weights with small magnitude and perform retraining to regain accuracy. Guo et al. incorporate connection splicing into the pruning process to dynamically recover pruned connections that are found to be important. Later, heuristic pruning algorithms were generalized to more hardware-friendly structured sparsity schemes. Transformable Architecture Search (TAS) realizes the pruned network and transfers knowledge from the unpruned network to the pruned version. Luo et al. leverage a greedy algorithm to guide the pruning of the current layer using input information of the next layer, while Yu et al. define a “neuron importance score” and propagate this score to conduct the weight pruning process.
Regularization-based pruning algorithms, on the other hand, have the unique advantage of dealing with structured pruning problems through group Lasso regularization. Early work [38, 15] incorporates such regularization into the loss function to solve filter/channel pruning problems. Zhuang et al. introduce a norm variant indicating the number of selected channels in each layer. A number of subsequent works are dedicated to making the regularization penalty a dynamic and “soft” term. One method selects filters based on their norm and updates filters that have been previously pruned, while [43, 21] incorporate the Alternating Direction Method of Multipliers (ADMM) optimization framework to achieve a dynamic regularization penalty, thereby improving accuracy. We build on this state-of-the-art ADMM-based pruning strategy. Moreover, we integrate a novel selective knowledge sharing scheme into the ADMM optimization framework, captured by learnable masks. Furthermore, our whole pipeline can be trained end-to-end, performing learn, prune, and share simultaneously through ADMM.
III. Problem Formulation
In supervised lifelong learning, we are given a sequence of datasets, where each dataset contains tuples of an input feature and its corresponding label. Each dataset corresponds to a distinct classification task: labels are disjoint across datasets. Datasets are revealed sequentially: each dataset becomes accessible only when its task arrives, corresponding to, e.g., moving to a new environment. Our goal is to train a classifier sequentially on the datasets such that it achieves good performance on all tasks.
Formally, we are given a feature extractor parameterized by a weight vector. After the network is trained on a task, along with a task-specific output layer, its parameters are updated. A final classifier is obtained after training the extractor (and the corresponding output layers) on all datasets sequentially, as illustrated in Fig. 1. The overall performance of the classifier is then assessed via the average classification accuracy on separate test sets, one for each task. Note that, at test time, we are aware of which task/environment we are operating in, so that we can classify using the appropriate output layer.
While the problem setting is straightforward, we need to point out three desiderata that must be addressed by a supervised lifelong learning solution.
Catastrophic Forgetting. Catastrophic forgetting is the widely reported phenomenon [27, 32] that models, especially neural networks, tend to “forget” information from previous tasks when incorporating knowledge from new tasks. This is observed in accuracy performance degradation on previous tasks after being exposed to new tasks. Addressing catastrophic forgetting is a central issue, and the main objective of most lifelong learning algorithms [34, 40, 4, 6, 26].
Parsimony. Due to limited computation and memory in real world applications, but also to avoid overfitting, the model should be as compact as possible. It is therefore desirable to maintain a single model and adapt it to various tasks, instead of, e.g., training multiple specialized models.
Knowledge Reuse. Related to both parsimony and catastrophic forgetting, beyond memorizing knowledge acquired from previous tasks, we also want to exploit it when encountering new tasks. For example, parts of the model could be shared across tasks; this leverages relevant/reusable features across tasks, leading to further parsimony and avoiding overfitting, while also ameliorating catastrophic forgetting. Thus, it is important to strike a balance between reuse vs. growth or plasticity in a network, in a way that performance improves.
We propose learn-prune-share (LPS), a novel deep learning framework for lifelong learning incorporating neural network pruning via ADMM. Our method maintains a single neural network for the sequence of tasks, while learning the tasks, pruning the neural network, and sharing knowledge among tasks; these three happen synergistically. Departing from conventional regularization-based or network-expansion-based methods, LPS fully exploits the capacity of the neural network by splitting it into disjoint partitions specialized for each task via pruning; in turn, this mitigates catastrophic forgetting. Simultaneously, to exploit knowledge obtained from previous tasks, LPS shares parameters between different partitions of the network, in an adaptive, tunable fashion.
IV-A. Architecture Overview
We assume that we are given a legacy neural network architecture (e.g., ResNet), parameterized by weights. Recall that the support of a vector is the set of its non-zero coordinates. Our solution satisfies the following two properties: first, at the conclusion of each task, the weights of the network are partitioned into task-specific weights that have disjoint supports. Formally, for all pairs of distinct tasks:
Second, these disjoint weights do not exhaust the entire representation capacity of the network: the union of their supports is smaller than the full set of weights. The remaining weights are treated as excess capacity, to be utilized in future tasks. Formally, let the sum of the task-specific weights be defined accordingly. (As the task-specific weights have disjoint supports, this sum can also be thought of as their superposition.) Then,
Figure 2 illustrates the weight split for a single layer at a given task. Weights are partitioned into two classes with disjoint supports. Moreover, the excess capacity (the complement of the combined support) can be used for future tasks.
Under this configuration, to make predictions for task , our network uses , i.e. the portion of the network representing task-specific knowledge, as well as as many of the weights dedicated to previous tasks as we wish to leverage. Formally, the network we use for task has weights
where represents element-wise multiplication and are a set of learnable knowledge sharing masks.
Our solution, and in particular the weight design in Eq. (4), has several advantages, each directly addressing the issues of catastrophic forgetting, parsimony, and knowledge reuse. First, our approach does not experience any catastrophic forgetting. This is precisely because additional tasks are accommodated in excess capacity; classification for earlier tasks (also through Eq. (4)) remains unaltered. Second, by utilizing only a portion of the overall capacity of the network, we attain parsimony. As we discuss below, this happens at almost no accuracy loss: we learn the small-support parameters through state-of-the-art pruning methods. Finally, the use of masks enables arbitrary levels of reuse: setting them to 1 fully reuses weights learned from previous tasks, while setting them to 0 limits the network for a task to only its dedicated weights. Note that this flexibility comes at the expense of parsimony, as we also need to keep track of masks for each task. As these are binary, however, they are not as memory-intensive as the model weights.
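As a concrete illustration of the weight composition in Eq. (4), the following NumPy sketch (the helper name and data layout are ours, not the authors') composes the effective network weights for a task from its own task-specific weights and the masked weights of earlier tasks:

```python
import numpy as np

def compose_task_weights(Q, masks, t):
    """Compose the effective weights for task t, in the spirit of Eq. (4):
    the task's own weights plus masked weights of earlier tasks.

    Q     : list of weight matrices with pairwise-disjoint supports
    masks : masks[t] is a list of binary masks, one per earlier task s < t
    """
    W = Q[t].copy()              # task-specific knowledge
    for s in range(t):           # selectively reuse old knowledge
        W += masks[t][s] * Q[s]  # element-wise masking of past weights
    return W
```

Because the supports are disjoint, the masked old weights never overwrite the current task's weights; the mask only decides which past entries participate.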
IV-B. The Learn-Prune-Share (LPS) Algorithm
Our learn-prune-share algorithm learns task-specific weights as well as knowledge-sharing masks as the datasets are revealed. It is an iterative process, summarized in Figure 3. At each task, we use the full excess capacity of the network to train a dense network. Using a state-of-the-art pruning method, we reduce this to weights with small support ; simultaneously, we determine how much of the old weights to reuse via mask . This process is repeated until we run out of tasks.
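The outer loop just described can be sketched as follows. This is a toy surrogate: actual training and mask learning are replaced by random stand-ins (`lps_train_tasks` and its internals are hypothetical), and only the bookkeeping of disjoint supports, excess capacity, and sharing masks mirrors the algorithm:

```python
import numpy as np

def lps_train_tasks(datasets, n_params, prune_keep, share_keep, rng=None):
    """Toy sketch of the LPS outer loop. For each task: (1) 'train' dense
    weights on the remaining free capacity, (2) prune them to a small
    support, (3) pick a binary sharing mask over past weights."""
    rng = rng or np.random.default_rng(0)
    free = np.ones(n_params, dtype=bool)   # excess-capacity pool
    Q, masks = [], []
    for t, _data in enumerate(datasets):
        dense = rng.normal(size=n_params) * free       # (1) train on free slots only
        idx = np.argsort(-np.abs(dense))[:prune_keep]  # (2) keep top-k magnitudes
        q = np.zeros(n_params)
        q[idx] = dense[idx]
        free &= (q == 0)                               # remove q's support from pool
        past = ~free & (q == 0)                        # support of earlier tasks
        m = np.zeros(n_params, dtype=bool)             # (3) 'learn' a sharing mask
        m[np.flatnonzero(past)[:share_keep]] = True
        Q.append(q)
        masks.append(m)
    return Q, masks, free
```

In the real algorithm, steps (1)–(3) are performed jointly by ADMM; here they are sequential only for clarity.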
Formally, at each task, the input to the algorithm consists of (a) the weights learned on all previous tasks, as well as (b) the dataset of the current task. Our goal is to learn sparse, small-support task-specific weights, as well as the knowledge-sharing mask. Note that for the first task, we only need to learn the task-specific weights, as there is no previous knowledge yet. As our pruning happens layer-wise, we introduce the following notation: we re-write the weights and masks layer by layer, where each pair comprises the weights and masks corresponding to one layer. We denote the loss of a network with given weights under a dataset accordingly, including the final (classification) layer. In light of Eq. (4), we formulate the learning process at each task as an optimization problem:
The constraint in Eq. (5d) enforces that the weights are indeed disjoint: the weights of the current task are taken from the current excess-capacity pool, i.e., the complement of the supports of previous tasks. Similarly, the constraint in Eq. (5e) enforces that the knowledge-sharing mask is applied to past weights only. Note that, together, they imply that the current task's weights and the masked past weights have disjoint supports. Finally, the fully connected classifier/output weights are unconstrained.
IV-C. Task-Specific Weight Constraints
To obtain sparse task-specific weights, we need constraints in Prob. (5) that enforce sparsity. Recall that we denote the weights of each layer of our neural network separately. In convolutional layers, the weight of a layer is represented by a four-dimensional tensor, whose dimensions correspond to the number of filters, number of channels, filter width, and filter height, respectively. In fully connected layers, weights are matrices, whose dimensions represent the input and output layer sizes, respectively. We nevertheless assume that all layers are represented in the GEneral Matrix Multiplication (GEMM) format, which is standard practice in tensor framework implementations: that is, we assume all tensors are reshaped to two-dimensional matrices. This is already the case for fully connected layers; for convolutional layers, the reshaping flattens the channel, width, and height dimensions into the columns. We thus assume every layer is represented by a (reshaped) weight matrix, as illustrated in Figure 4. Note that, under this assumption, the total number of weights in the model is the sum of the matrix sizes over all layers.
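As an illustration of the GEMM reshaping described above (the exact reshape convention shown is an assumption; dimension names follow the text):

```python
import numpy as np

# A convolutional layer's 4-D weight tensor (filters F, channels C,
# height H, width W), reshaped into the 2-D GEMM form used for pruning:
# rows index filters, columns index (channel, height, width) positions.
F, C, H, W = 8, 3, 5, 5
conv_w = np.random.randn(F, C, H, W)
gemm_w = conv_w.reshape(F, C * H * W)  # 2-D matrix of shape (8, 75)
```

Under this view, filter pruning zeroes whole rows of `gemm_w`, while column pruning zeroes whole columns.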
Under this representation, we consider the following sets of constraints for layer :
Irregular Pruning. For irregular pruning, we have:
where the support size of a matrix is its number of non-zero elements, and a constant limits the proportion of non-zero elements. Intuitively, this implies that the layer's weights have no more than the prescribed number of non-zero elements.
Structured Pruning. Given a Boolean predicate, let its indicator be 1 if the predicate is true, and 0 otherwise. Moreover, for a matrix, consider its columns. In column pruning, the constraint set is defined as:
This enforces that the number of non-zero columns in the layer's GEMM representation does not exceed the prescribed bound. A similar constraint can be placed on the filters/rows to form structured filter pruning, which enforces that the number of non-zero filters does not exceed the corresponding bound.
All three types of constraints (irregular, column, and filter pruning) are illustrated in Fig. 4. They all lead to disjoint supports if used consistently across tasks: for example, filter pruning ends up partitioning rows of the GEMM representation of every layer, column pruning partitions columns, and irregular pruning partitions individual matrix entries.
IV-D. Knowledge-Sharing Mask Constraints
To control knowledge sharing, we impose a sparsity constraint on the mask as well, allowing only a fraction of its entries to be non-zero. Formally:
Adjusting this “sharing parameter” allows us to limit the proportion of old weights shared (i.e., the non-zero elements of the mask). By forcing the mask to be sparse, we force training to select the weights most beneficial for the current task from previously learned weights. The sharing parameter also conveys the usefulness of previous knowledge: e.g., when tasks are similar, previous knowledge would indeed be useful for subsequent tasks, and the sharing parameter should thus be large; conversely, for dissimilar tasks we expect fewer sharing opportunities.
IV-E. Solving LPS via ADMM
The optimization problem defined in Eq. (5) for LPS has non-convex constraints, and solving it via standard stochastic gradient descent is not possible. We use the widely deployed Alternating Direction Method of Multipliers (ADMM), which has been extensively applied in the pruning literature [43, 33]. For completeness, we describe the ADMM solution to Problem (5) in detail in Appendix A. In short, ADMM decomposes the original non-convex constrained problem into subproblems that can be solved separately. It alternates between (a) standard gradient descent with a quadratic proximal penalty (Eq. (13)), which forces the solution to be close to a value in the (non-convex) constraint space, and (b) an orthogonal projection operation onto the constraint space (Eq. (14a)). Hence, starting from full weights and masks set to 1, we can progressively prune and constrain both, producing a feasible solution at convergence. Most importantly, the weights and masks can be trained jointly and dynamically.
From an implementation standpoint, to incorporate our constraints into ADMM, it suffices to produce polynomial-time functions that compute the orthogonal projection onto constraints (5b)–(5c). For (5b), polynomial algorithms are well known for irregular, column, and filter pruning constraints. For example, for irregular pruning, the orthogonal projection of a matrix onto the set given by Eq. (6) can be computed by keeping the entries of largest absolute value intact, and setting the rest to zero. For column pruning (Eq. (7)), the projection can be computed by similarly keeping the columns with largest norm intact, and setting all other columns to 0.
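A minimal NumPy sketch of these two projections, assuming each layer is a 2-D GEMM matrix and the sparsity budget is given as a fraction `alpha` of entries (or columns); function names are ours:

```python
import numpy as np

def project_irregular(W, alpha):
    """Project W onto matrices with at most an alpha fraction of nonzeros:
    keep the largest-magnitude entries, zero the rest."""
    k = int(alpha * W.size)
    out = np.zeros_like(W)
    if k > 0:
        top = np.argsort(-np.abs(W), axis=None)[:k]  # flat indices of top-k entries
        out.flat[top] = W.flat[top]
    return out

def project_columns(W, alpha):
    """Column pruning: keep the columns with largest Euclidean norm."""
    k = int(alpha * W.shape[1])
    out = np.zeros_like(W)
    if k > 0:
        keep = np.argsort(-np.linalg.norm(W, axis=0))[:k]
        out[:, keep] = W[:, keep]
    return out
```

Filter (row) pruning is symmetric: rank rows by norm and keep the top fraction.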
Our mask constraint (8) is more complex, as the projection requires not only enforcing sparsity exactly, but also that the values of the matrix become binary. Nevertheless, we can compute this projection in polynomial time via the following steps:
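The steps themselves did not survive extraction; the sketch below shows one standard way to compute this projection (the Euclidean projection onto binary vectors with at most k ones), which may differ in details from the authors' algorithm: flipping an entry to 1 lowers the squared distance exactly when its value exceeds 1/2, so we activate the k largest such entries.

```python
import numpy as np

def project_mask(v, k):
    """Euclidean projection of a real vector v onto {0,1}-vectors with at
    most k ones: setting m_i = 1 changes the cost by (v_i - 1)^2 - v_i^2
    = 1 - 2 v_i, which is beneficial iff v_i > 1/2; under the cardinality
    budget, pick the k largest entries among those exceeding 1/2."""
    m = np.zeros_like(v)
    cand = np.flatnonzero(v > 0.5)           # entries worth activating
    keep = cand[np.argsort(-v[cand])[:k]]    # at most k of them, largest first
    m[keep] = 1.0
    return m
```

In the full algorithm, the candidate set would additionally be restricted to the support of past weights, per constraint (5e).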
We prove the correctness of this algorithm in Appendix B.
In our experiments, (a) we show that our method outperforms current state-of-the-art methods on both benchmark and real datasets; (b) we assess the importance of the knowledge-sharing mask under different task settings; and (c) we explore how different pruning strategies affect the prediction accuracy.
V-A. Experimental Setting
To evaluate the performance of our approach empirically, we experiment with two standard lifelong learning benchmark datasets, permuted MNIST [20, 7] and split CIFAR-10/100, and a real-world radio-frequency fingerprinting dataset (split RF), summarized in Table I. The original MNIST dataset [20, 7] contains black-and-white images of handwritten digits from 10 classes. We construct 10 tasks by applying the same random permutation across all MNIST images, using a different permutation for each task. CIFAR-10 comprises 10 classes of 32×32 color images. CIFAR-100 has the same image format and total number of images as CIFAR-10, but has 100 classes. We set the first task to be the whole CIFAR-10 dataset, and then create 5 additional tasks, each containing 10 consecutive classes from the CIFAR-100 dataset. Finally, the split RF dataset [16, 9] contains radio transmissions from 50 WiFi devices recorded in the wild. We randomly partition these 50 classes into 5 tasks.
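For concreteness, permuted-MNIST-style tasks can be constructed as follows, with each task defined by one fixed random pixel permutation applied to every image (the helper name is ours):

```python
import numpy as np

def make_permuted_tasks(x, n_tasks, seed=0):
    """Build permuted-MNIST-style tasks from flattened images
    (x has shape [n_samples, n_pixels]): each task applies one fixed
    random pixel permutation to every image."""
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(x.shape[1])  # one permutation per task
        tasks.append(x[:, perm])            # same permutation for all images
    return tasks
```

Each task is as hard as the original classification problem, but the input distributions differ, which is what makes the benchmark a sequence of distinct tasks.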
Lifelong Learning Methods. We compare LPS to the following methods:
Elastic Weight Consolidation (EWC): EWC applies a Laplace approximation to estimate importance scores of parameters for previous tasks and uses a quadratic regularizer weighted by these importance scores.
Learning without Forgetting (LwF) : LwF maintains responses for previous tasks via a knowledge distillation loss.
Deep Generative Replay (DGR): DGR uses generative adversarial networks (GANs) to mimic the data distribution of each task. A generator is updated at every task to incorporate its data distribution, and a corresponding classifier is trained using a mixture of generated and new data.
Gradient Episodic Memory (GEM): GEM maintains an episodic memory saving a portion of previous data, and uses the loss on this data as a constraint when training on a new task.
PackNet: PackNet iteratively prunes the model to accommodate new tasks by heuristically removing parameters of small magnitude. A similar formulation has been proposed under a lifelong learning setting.
We use the original authors' implementations for all methods, including the recommended hyperparameter settings or tuning strategies. The same network architectures are used across all methods for a fair comparison.
| Stat. & Param.           | Permuted MNIST | Split CIFAR    | Split RF    |
| # tasks                  | 10             | 6              | 5           |
| # classes per task       | 10             | 10             | 10          |
| # train samples per task | 60,000         | 50,000 / 5,000 | 1,410       |
| # test samples per task  | 10,000         | 10,000 / 1,000 | 550         |
| (% total layer params)   | 10%            | 50% / 10%      | 20%         |
| (% total params)         | 90%            | 92%            | 90%         |
| Architecture             | Two FC layers  | Keras CIFAR-10 | ResNet50-1D |
| # params                 | 5,568,000      | 884,576        | 15,901,568  |
| # layers                 | 2              | 5              | 49          |
Architectures. We implement different architectures for permuted MNIST, split CIFAR-10/100, and split RF, respectively. The architecture for the permuted MNIST dataset contains two hidden layers, each with 2000 neurons and ReLU activations. For the split CIFAR-10/100 dataset, we use the default CIFAR-10 architecture from Keras. For the split RF dataset, we use ResNet50-1D, a 1D-convolutional version of ResNet50 that treats inputs as fixed-length 2D sequences. For all three architectures, we learn the biases and batch normalization parameters for the first task and keep these terms fixed for subsequent tasks.
LPS Implementation. For each task, we run LPS in three phases. In the warm-up phase, we first train a dense network over the full set of free parameters. In the ADMM phase, we prune the network via Eq. (11). In the final phase, we project onto the constraint sets of both masks and weights, and retrain the weights, changing only non-zero values. We increase the ADMM penalty parameters by a factor of 10 at equal intervals during the ADMM iterations. We use the following hyperparameters, determined using a validation set. Unless otherwise noted, the sparsity and sharing parameters are as shown in Table I; we explore the impact of both in Section V-B. For all experiments, we use a batch size of 128 and the Adam optimizer with default values, initializing the learning rate to 0.001. Our proposed LPS approach is implemented in Python using PyTorch with NVIDIA CUDA support. All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores.
Evaluation Metrics. We evaluate the final model (with its masks and multi-head output layers) on the test sets of all tasks via (Top-1) accuracy.
V-B. Results on Benchmark Datasets
Effectiveness of the proposed LPS approach.
Table II shows the overall performance, in terms of final average accuracy across all tasks, of all lifelong learning methods. For reference, we also include the accuracy attained when training a full-capacity (non-parsimonious) single model separately for each task (SM). LPS outperforms all competitors across all datasets. Most methods perform well on permuted MNIST; the margin is wider on the remaining two, more challenging datasets. To further scrutinize the performance of LPS across tasks, we show per-task accuracy in Tables III–IV. Interestingly, LPS outperforms all competitors across all tasks on both datasets; we also observed this on the 10 tasks of permuted MNIST, which we omit for brevity. Overall, our LPS approach achieves both the best average and the best task-specific accuracy on all three datasets.
We further observe that regularization-based methods like EWC and IS perform relatively well on the benchmarks, while they fail on split RF. One possible explanation is that when tasks are more diverse and the model is large, regularizers do not suffice to retain the learned information. Evidence of forgetting is present in LwF on split CIFAR, and in almost all methods (except LPS and PackNet) on split RF. This is expected, as both LPS and PackNet are immune to forgetting.
We also observe that LPS even outperforms the full-capacity SM trained from scratch on each task for split CIFAR-10/100 and split RF, and comes very close to it on permuted MNIST. This happens despite the fact that LPS uses only a small fraction of the parameters used by SM, indicating that it avoids overfitting. We also see a clear benefit of parameter reuse across tasks on split CIFAR (Table III): by partially utilizing past weights, prediction on later tasks improves under LPS compared to SM.
Share Parameter Effects.
We further explore the impact of knowledge sharing in Figure 5. The figure shows how average and per-task accuracy change as we modify the sharing parameter: the x-axis is the share ratio, i.e., the ratio of shared weights over the total number of past weights per layer, on the CIFAR dataset. The optimal value is at 92%. Moreover, we clearly see that a large reduction in sharing has a bigger impact on later tasks, which would otherwise benefit from knowledge reuse.
We also show the results of models with no (0%) and full (100%) sharing on all datasets, as well as our best-performing model with selective sharing, in Table V. We follow the same parameter search strategy as on split CIFAR-10/100 to find the best-performing model on the validation set. Interestingly, for all three datasets, the best performance is achieved with a share ratio around 90%. This indicates that most, though not all, past weights are valuable for new tasks.
To explore this notion of knowledge reuse further, we conducted an experiment in which tasks vary drastically. We construct a 5-task “mixed” dataset, where tasks 1, 3, and 5 are from the MNIST dataset with different permutation patterns, and tasks 2 and 4 each contain 10 different classes from CIFAR-100. Images from permuted MNIST are converted to RGB by repeating the single channel three times, and resized to 32×32 to be compatible with CIFAR images. Analogously to Figure 5, Figure 6 shows the effect of the sharing ratio on the mixed dataset. Not surprisingly, the behavior is quite different from Fig. 5. The highest accuracy (89.22%) is achieved at 20% share, which demonstrates that LPS does adaptively select useful knowledge for the current task. Note that, faced with these dissimilar tasks, full sharing (88.15%) performs even worse than no sharing (88.23%), indicating that the sharing strategy should be flexible and guided by inter-task similarity.
Comparing different pruning strategies.
We compare three different pruning strategies (column, filter, and irregular pruning) on the split CIFAR-10/100 and split RF datasets, summarized in Tables VI and VII, respectively. Both irregular and column pruning obtain satisfactory performance, achieving 80.13% and 79.56% on split CIFAR-10/100, and 80.55% and 81.22% on split RF, respectively. Filter pruning, however, exhibits unstable performance, obtaining 68.11% and 80.12% on split CIFAR-10/100 and split RF, respectively.
Impact of Model Capacity.
Figure 7 measures how model capacity usage affects accuracy on the split CIFAR-10/100 dataset. For this experiment, instead of using the whole model capacity for the 6 tasks, we use only a fraction of the full model by the final task, leaving the remaining parameters free for future growth; all other parameters are set as in Table I. Figure 7 shows the impact on average and per-task accuracy as we vary this fraction. We clearly observe that the model performs better when more capacity is available. Nevertheless, accuracy is also robust to this shrinkage: LPS achieves 75.32% accuracy with only 50% of model capacity, which is better than the best non-pruning method, IS (74.97%), with full model capacity. Surprisingly, at only 10% of the network's total capacity, accuracy does not collapse, but remains above 72.5%. This indicates that our method has the capacity to scale to even more future tasks.
VI Conclusions and Future Work
In this paper, we propose the learn-prune-share (LPS) algorithm for lifelong learning. Our method maintains a parsimonious neural network model and achieves exactly zero forgetting by splitting the network into task-specific partitions via an ADMM-based pruning method. Moreover, a novel selective knowledge sharing scheme is integrated seamlessly into the ADMM optimization framework to address knowledge reuse. Experiments on permuted MNIST, split CIFAR-10/100, and split RF demonstrate that our approach achieves significant improvement over state-of-the-art methods. Future directions include applying more advanced pruning strategies to the lifelong learning problem and exploring how to measure the capacity of a model quantitatively.
The authors gratefully acknowledge support by the National Science Foundation (grant CCF-1937500).
- (2018) Memory aware synapses: learning what (not) to forget. In ECCV, pp. 139–154.
- (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 (1), pp. 1–122.
- (2019) Network pruning via transformable architecture search. In NeurIPS, pp. 759–770.
- (2017) Neurogenesis deep learning: extending deep networks to accommodate new classes. In IJCNN, pp. 526–533.
- (2019) Continual learning via neural pruning. arXiv preprint arXiv:1903.04476.
- (2013) An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.
- (2014) Generative adversarial nets. In NeurIPS, pp. 2672–2680.
- (2019) Finding a ‘new’ needle in the haystack: unseen radio detection in large populations using deep learning. In DySPAN, pp. 1–10.
- (2016) Dynamic network surgery for efficient DNNs. In NeurIPS, pp. 1379–1387.
- (2016) Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR.
- (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
- (2018) Soft filter pruning for accelerating deep convolutional neural networks. In IJCAI.
- (2017) Channel pruning for accelerating very deep neural networks. In ICCV, pp. 1389–1397.
- (2020) Deep learning for RF fingerprinting: a massive experimental study. IEEE Internet of Things Magazine.
- (2015) Adam: a method for stochastic optimization. In ICLR.
- (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114 (13), pp. 3521–3526.
- (2009) Learning multiple layers of features from tiny images.
- (1998) The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist.
- (2019) Compressing convolutional neural networks via factorized convolutional filters. In CVPR, pp. 3977–3986.
- (2017) Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12), pp. 2935–2947.
- (2019) Rethinking the value of network pruning. In ICLR.
- (2017) Gradient episodic memory for continual learning. In NeurIPS, pp. 6467–6476.
- (2017) ThiNet: a filter level pruning method for deep neural network compression. In ICCV, pp. 5058–5066.
- (2018) PackNet: adding multiple tasks to a single network by iterative pruning. In CVPR, pp. 7765–7773.
- (1989) Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of Learning and Motivation, Vol. 24, pp. 109–165.
- (2018) Variational continual learning. In ICLR.
- (2019) Learning to remember: a synaptic plasticity driven framework for continual learning. In CVPR.
- (2019) Continual lifelong learning with neural networks: a review. Neural Networks.
- (2019) PyTorch: an imperative style, high-performance deep learning library. In NeurIPS, pp. 8024–8035.
- (1990) Connectionist models of recognition memory: constraints imposed by learning and forgetting functions. Psychological Review 97 (2), pp. 285.
- (2019) ADMM-NN: an algorithm-hardware co-design framework of DNNs using alternating direction methods of multipliers. In ASPLOS, pp. 925–938.
- (2016) Progressive neural networks. arXiv preprint arXiv:1606.04671.
- (2017) Continual learning with deep generative replay. In NeurIPS, pp. 2990–2999.
- (1995) Lifelong robot learning. Robotics and Autonomous Systems 15 (1-2), pp. 25–46.
- (2019) Generative replay with feedback connections as a general strategy for continual learning. In COSYNE Workshop.
- (2016) Learning structured sparsity in deep neural networks. In NeurIPS, pp. 2074–2082.
- (2018) Progressive weight pruning of deep neural networks using ADMM. arXiv preprint arXiv:1810.07378.
- (2018) Lifelong learning with dynamically expandable networks. In ICLR.
- (2018) NISP: pruning networks using neuron importance score propagation. In CVPR, pp. 9194–9203.
- (2017) Continual learning through synaptic intelligence. In ICML, pp. 3987–3995.
- (2018) A systematic DNN weight pruning framework using alternating direction method of multipliers. In ECCV, pp. 184–199.
- (2018) Discrimination-aware channel pruning for deep neural networks. In NeurIPS, pp. 875–886.
Appendix A Solving Problem (5) via ADMM
To begin with, constraints (5d), (5c) are easy to satisfy: we simply partition the variables into the constrained sets and their complements, and only optimize over the appropriate set. We thus ignore these constraints below. We similarly omit the unconstrained variable, which can be learned via SGD. Rewriting the loss accordingly, we convert the non-convex optimization problem formulated in (5) into ADMM form by introducing auxiliary variables for constraints (5b) and (5c), respectively:
The augmented Lagrangian of (9) is:
where the first two quantities are penalty parameters and the last two are the corresponding dual variables, rescaled by the respective penalty parameters. ADMM proceeds iteratively as follows; at each iteration:
Problem (12a) is equivalent to:
The first term in (13) is a standard DNN loss, while the second and third terms are quadratic and differentiable. Thus, this subproblem can be solved by standard stochastic gradient descent. Problem (12b) is equivalent to:
where are the Euclidean projections onto sets , respectively.
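To make the alternating scheme concrete, the toy sketch below applies the same three steps (a gradient step on the augmented loss, a Euclidean projection, and a dual update) to a simple quadratic loss with a top-k magnitude projection. This is an illustrative reconstruction under simplified assumptions, not the paper's exact masked formulation; the names (w, z, u, rho) are generic ADMM notation.

```python
import numpy as np

def project_topk(w, k):
    """Euclidean projection onto the set of k-sparse vectors:
    keep the k largest-magnitude entries, zero the rest."""
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    z[idx] = w[idx]
    return z

def admm_prune(w_star, k, rho=1.0, iters=100, lr=0.1):
    """ADMM sketch for minimizing ||w - w_star||^2 subject to w being
    k-sparse: a toy stand-in for the (12a)/(12b) alternation above."""
    w = w_star.copy()
    z = project_topk(w, k)
    u = np.zeros_like(w)            # scaled dual variable
    for _ in range(iters):
        # (12a)-style primal step: gradient descent on loss + quadratic penalty
        for _ in range(20):
            grad = 2 * (w - w_star) + rho * (w - z + u)
            w -= lr * grad
        # (12b)-style step: Euclidean projection onto the constraint set
        z = project_topk(w + u, k)
        # dual update
        u += w - z
    return z
```

In the paper's setting the projection acts on masks and task-specific weight partitions rather than raw weights, but the loop structure (SGD step, projection, dual update) is the same.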
Appendix B Proof of Correctness of the Mask Projector
For simplicity, we prove this for the projection onto the set of binary vectors containing exactly k zeros. Given a real vector z, its projection m is computed by: (a) sorting all elements of z from smallest to largest; (b) setting the largest n − k elements to 1 and the rest to 0. We make use of the following lemma.
For x ≤ y, x² + (y − 1)² ≤ (x − 1)² + y².
The lemma follows easily by expanding both sides. Let m be the solution returned by the algorithm, and let m* be an optimal solution. Assume indices are ordered based on the elements of the input vector z, as in the algorithm. Let i be the first position at which m and m* differ. Then, i is mapped to 0 in m and to 1 in m*. Moreover, as both have exactly the same number of ones, there must be an index j > i such that (i) z_i ≤ z_j, (ii) m_j = 1, and (iii) m*_j = 0. By the lemma, since z_i ≤ z_j, setting m*_i = 0 and m*_j = 1 would only improve the distance from z. As m* is optimal, this swap must maintain optimality; repeating this procedure as long as there exist indices at which m and m* differ will convert m* to m, while maintaining optimality. ∎
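The projection procedure proved correct above admits a one-line implementation; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def project_mask(z, k):
    """Euclidean projection of a real vector z onto the set of binary
    vectors with exactly k zeros: the n - k largest entries of z map
    to 1, the k smallest map to 0."""
    m = np.zeros(len(z), dtype=int)
    m[np.argsort(z)[k:]] = 1   # indices of the n - k largest entries
    return m
```

For example, project_mask([0.9, 0.1, 0.5, 0.8], 2) keeps the two largest entries, returning the mask [1, 0, 0, 1].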