I Introduction
Human beings have a natural ability to adapt to different tasks sequentially without forgetting what they have learned. They can also seamlessly leverage knowledge learned from past tasks to tackle new tasks. This impressive ability is crucial for learning systems deployed in the real world. Lifelong learning [36] aims to develop models that mimic this human ability to learn continually without forgetting knowledge acquired earlier. In concrete terms, in a lifelong learning setting, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arise sequentially. The model should exhibit high accuracy on new tasks while continuing to perform well on old classification tasks, even if the old data is no longer accessible. However, learning algorithms are often designed to operate under stationary data distributions – typically, only a single task needs to be addressed. Under the lifelong learning setting, applying standard learning algorithms may lead to forgetting what has been learned on old tasks: this phenomenon, known as catastrophic forgetting [27, 32], results in severe performance degradation on old tasks after adapting to a new task.
A large body of work has been proposed to address catastrophic forgetting, using a varied arsenal of techniques [30]. Despite advances in lifelong learning, there are still limitations. Most of the methods, including, e.g., regularization-based [18, 42, 22, 28, 1] and rehearsal-based [35, 24, 37, 29] methods, mitigate catastrophic forgetting under relatively restrictive conditions, e.g., assuming a small number of highly related tasks. When tasks differ drastically, and the number of tasks grows, these methods suffer significant degradation. Another approach is to increase the model capacity (i.e., add parameters, neurons, layers, etc.) to accommodate new tasks, while preserving parts of the model for old tasks
[34, 40, 4]. However, increasing complexity makes such methods prone to overfitting, and can be undesirable when models are to be deployed on memory-limited devices. Therefore, a competing objective of parsimony is desirable.

Another related challenge in lifelong learning is how to reuse learned knowledge to help the model learn future tasks better. Current research often ignores this critical point by, e.g., considering different tasks independently [5], or addresses it only partially, e.g., by using past parameters as an initialization during training [26]. However, the usefulness of knowledge gained from old tasks may depend on the relevance between old and new tasks. For example, a classifier trained for classifying dogs may be more helpful for classifying cats than digits. Thus, how to adaptively select useful past knowledge is critical for improving performance on a new task.
Our proposed method, named learn-prune-share (LPS), is a novel deep learning framework aimed at addressing these challenges. LPS learns sequential tasks without experiencing catastrophic forgetting, by partitioning the neural network and dedicating portions to each task. It also prunes the neural network, thereby maintaining parsimony and avoiding overfitting. Finally, it selectively shares knowledge from old tasks and reuses it on new tasks. All of this happens simultaneously, in a unified optimization framework trained in an end-to-end fashion. Our contributions are as follows:

We incorporate a state-of-the-art pruning strategy based on the Alternating Direction Method of Multipliers (ADMM) to solve the lifelong learning problem, maintaining a single parsimonious neural network model and eliminating catastrophic forgetting entirely.

We design a novel knowledge-sharing scheme, which learns to select useful knowledge from old tasks and adapt it to the current task. Our knowledge-sharing scheme is seamlessly integrated with our ADMM pruning strategy, and is trained jointly with the classifier parameters. We make our code publicly available (https://github.com/neuspiral/LPSforLifelong) to accelerate community contributions on this exciting topic.

Our method, LPS, shows superior performance on two standard lifelong learning benchmark datasets as well as a challenging real-world radio fingerprinting dataset. LPS beats state-of-the-art methods by a 2%–54% margin.
II Related Work
II-A Lifelong Learning
Regularization-based methods [18, 42, 22, 28, 1] limit plasticity of the network via regularization terms or by limiting the learning rate of parameters learned from previous tasks. While regularization-based methods mitigate catastrophic forgetting to some extent, performance on previous tasks gets increasingly worse as more diverse tasks are seen. By design, our method deals with the catastrophic forgetting problem more effectively, as performance on previous tasks remains unchanged.
Rehearsal-based methods capture the data distribution of previous tasks by learning a generative model. When a new task arrives, data from previous tasks is simulated via the generative model and combined with current data to reinforce previous knowledge [35, 24, 37, 29]. Though saving the generative model is less memory-intensive than saving data, such models can still be large. Performance depends largely on the quality of the generative model and on careful tuning of the mix of generated and new data. Our approach avoids the additional cost of training and storing an external generative model, again while experiencing no catastrophic forgetting.
Expansion-based methods accommodate new tasks by gradually increasing the capacity of the model [34, 40, 4]. These methods generally outperform regularization- and rehearsal-based methods, which maintain a model with fixed capacity. However, the number of model parameters grows linearly with the number of tasks. This limits their practical usage, and makes them prone to overfitting. On the contrary, our approach fully exploits the potential of a fixed-capacity model.
Our method is closest to Continual Learning via Neural Pruning (CLNP) [6] and PackNet [26]. In these works, model pruning techniques are utilized to compress the original model iteratively, allocating free capacity for new tasks. However, both of these methods use simple threshold-based heuristics to prune the model with no structure constraint, resulting in a sparse, irregular matrix which limits further acceleration at inference time. Additionally, both of these methods consider tasks independently, ignoring the relationship between the current and previous tasks. In contrast, our approach adopts a systematic pruning strategy via the Alternating Direction Method of Multipliers (ADMM), where structural constraints, e.g., filter pruning or column pruning [39], can be specified as needed. Moreover, our novel knowledge inheritance scheme adaptively selects weights shared from previous tasks to facilitate learning the current and future tasks. Our experimental results in Section V-B show that, due to these improvements, LPS outperforms these two algorithms.
II-B Neural Network Weight Pruning
The rich literature on neural network weight pruning can be categorized into heuristic pruning algorithms and regularization-based pruning algorithms. The former starts from the early work on irregular, unstructured weight pruning, where arbitrary weights can be pruned. Han et al. [11] use an iterative algorithm to eliminate weights with small magnitude and perform retraining to regain accuracy. Guo et al. [10] incorporate connection splicing into the pruning process to dynamically recover pruned connections that are found to be important. Later, heuristic pruning algorithms were generalized to more hardware-friendly structured sparsity schemes. Transformable Architecture Search (TAS) [3] searches for the pruned network structure, and knowledge is transferred from the unpruned network to the pruned version. Luo et al. [25] leverage a greedy algorithm to guide the pruning of the current layer with input information of the next layer, while Yu et al. [41] define a “neuron importance score” and propagate this score to conduct the weight pruning process.
Regularization-based pruning algorithms, on the other hand, have a unique advantage in dealing with structured pruning problems, e.g., through group Lasso regularization [23]. Early works [38, 15] incorporate sparsity-inducing regularization terms in the loss function to solve filter/channel pruning problems. Zhuang et al. [44] introduce a regularizer capturing the number of selected channels in each layer. A number of subsequent works are dedicated to making the regularization penalty a dynamic and “soft” term. The method in [14] selects filters based on their norm and updates filters that have been previously pruned, while [43, 21] incorporate the advanced optimization framework of the Alternating Direction Method of Multipliers (ADMM) to achieve a dynamic regularization penalty, thereby improving accuracy. We take advantage of the state-of-the-art ADMM-based pruning strategy of [43] and [21]. Moreover, we integrate a novel selective knowledge sharing scheme into the ADMM optimization framework, captured by learnable masks. Furthermore, our whole pipeline can be trained in an end-to-end fashion, performing learn, prune, and share steps simultaneously through ADMM.
III Problem Formulation
In supervised lifelong learning, we are given a sequence of datasets D_1, D_2, …, D_T, where each dataset D_t, t = 1, …, T, contains tuples (x, y) of an input feature x and its corresponding label y. Each dataset corresponds to a distinct classification task: labels are disjoint across datasets. Datasets are revealed sequentially: dataset D_t becomes accessible only at the t-th task, which corresponds to, e.g., moving to a new environment. Our goal is to train a classifier sequentially on the datasets such that it achieves good performance on all tasks.
Formally, we are given a feature extractor parameterized by weights W. After the network is trained on D_t, along with a task-specific output layer, its parameters are updated. If W_t are the parameters of the feature extractor at task t, a final classifier is obtained after training the extractor (and the corresponding output layers) on all datasets D_1, …, D_T sequentially, as illustrated in Fig. 1. The overall performance of the classifier is then assessed via the average classification accuracy on separate test sets, one for each task. Note that, at test time, we are aware of which task/environment we are operating over, so that we can classify using the appropriate output layer.
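The evaluation protocol above can be sketched in a few lines; the accuracy values here are placeholders, not results from the paper.

```python
# Sketch of the evaluation protocol described above: a model trained
# sequentially on D_1, ..., D_T is scored by its average classification
# accuracy over the T per-task test sets.
def average_accuracy(per_task_acc):
    """Mean classification accuracy across the T tasks."""
    return sum(per_task_acc) / len(per_task_acc)

print(average_accuracy([0.98, 0.95, 0.92]))  # close to 0.95
```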
While the problem setting is straightforward, we need to point out three desiderata that must be addressed by a supervised lifelong learning solution.
Catastrophic Forgetting. Catastrophic forgetting is the widely reported phenomenon [27, 32] that models, especially neural networks, tend to “forget” information from previous tasks when incorporating knowledge from new tasks. This is observed as accuracy degradation on previous tasks after being exposed to new tasks. Addressing catastrophic forgetting is a central issue, and the main objective of most lifelong learning algorithms [34, 40, 4, 6, 26].
Parsimony. Due to limited computation and memory in real-world applications, but also to avoid overfitting, the model should be as compact as possible. It is therefore desirable to maintain a single model and adapt it to various tasks, instead of, e.g., training multiple specialized models.
Knowledge Reuse. Related to both parsimony and catastrophic forgetting, beyond memorizing knowledge acquired from previous tasks, we also want to exploit it when encountering new tasks. For example, parts of the model could be shared across tasks; this leverages relevant/reusable features across tasks, leading to further parsimony and avoiding overfitting, while also ameliorating catastrophic forgetting. Thus, it is important to strike a balance between reuse vs. growth or plasticity in a network, in a way that performance improves.
IV Learn-Prune-Share
We propose learn-prune-share (LPS), a novel deep learning framework for lifelong learning incorporating neural network pruning via ADMM. Our method maintains a single neural network for the sequence of tasks, while learning the tasks, pruning the neural network, and sharing knowledge among tasks; these three happen synergistically. Departing from conventional regularization-based or network-expansion-based methods, LPS fully exploits the capacity of the neural network by splitting it into disjoint partitions specialized for each task via pruning; in turn, this mitigates catastrophic forgetting. Simultaneously, to exploit knowledge obtained from previous tasks, LPS shares parameters between different partitions of the network, in an adaptive, tunable fashion.
IV-A Architecture Overview
We assume that we are given a legacy neural network architecture (e.g., ResNet [12]), parameterized by weights W ∈ ℝ^N. Recall that the support of a vector is the set of its non-zero coordinates. Our solution satisfies the following two properties: first, at the conclusion of task t, the weights of the network are partitioned into task-specific weights W_1, …, W_t that have disjoint supports. Formally, for all t′, t″ ≤ t with t′ ≠ t″:

supp(W_{t′}) ∩ supp(W_{t″}) = ∅.  (1)
Second, these disjoint weights do not exhaust the entire representation capacity of the network: the union of their supports is smaller than {1, …, N}. The remaining weights are treated as excess capacity, to be utilized in future tasks. Formally, let

W_{1:t} = Σ_{t′=1}^{t} W_{t′}  (2)

be the sum of the task-specific weights. (As the W_{t′} have disjoint supports, W_{1:t} can also be thought of as their superposition.) Then,

supp(W_{1:t}) ⊊ {1, …, N}.  (3)
Figure 2 illustrates the weight split for a single layer at task t. Weights are partitioned into two classes, W_t and W_{1:t−1}, with disjoint supports. Moreover, the excess capacity (the complement of supp(W_{1:t})) can be used for future tasks.

Under this configuration, to make predictions for task t, our network uses W_t, i.e., the portion of the network representing task-specific knowledge, as well as as many of the weights dedicated to previous tasks as we wish to leverage. Formally, the network we use for task t has weights

W′_t = W_t + M_t ⊙ W_{1:t−1},  (4)

where ⊙ represents element-wise multiplication and M_t ∈ {0,1}^N is a set of learnable knowledge-sharing masks.
Our solution, and in particular the weight design in Eq. (4), has several advantages, each directly addressing the issues of catastrophic forgetting, parsimony, and knowledge reuse. First, our approach does not experience any catastrophic forgetting. This is precisely because additional tasks are accommodated in excess capacity; classification for earlier tasks (also through Eq. (4)) remains unaltered. Second, by utilizing only a portion of the overall capacity of the network, we attain parsimony. As we discuss below, this happens at almost no accuracy loss: we learn the small-support parameters through state-of-the-art pruning methods. Finally, the use of masks enables arbitrary levels of reuse: setting them to 1 fully reuses weights learned from previous tasks, while setting them to 0 limits the network for task t to only its dedicated weights. Note that this flexibility comes at the expense of parsimony, as we also need to keep track of masks for each task. As these are binary, however, they are not as memory-intensive as the model weights.
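As a concrete illustration, the weight combination of Eq. (4) can be sketched in a few lines of NumPy; the variable names (`w_task`, `w_past`, `mask`) are our own illustrative choices, not taken from the released code.

```python
import numpy as np

# Sketch of Eq. (4): W'_t = W_t + M_t * W_{1:t-1}, with flattened weight
# vectors. The mask picks which past weights the current task reuses.
def effective_weights(w_task, w_past, mask):
    """Combine task-specific weights with masked past weights."""
    assert mask.dtype == bool or set(np.unique(mask)) <= {0, 1}
    return w_task + mask * w_past

# Disjoint supports: w_task occupies entries where w_past is zero.
w_past = np.array([0.5, -0.2, 0.0, 0.0])   # weights from tasks 1..t-1
w_task = np.array([0.0, 0.0, 0.3, -0.1])   # task-t weights (excess capacity)
mask   = np.array([1, 0, 0, 0])            # reuse only the first past weight

print(effective_weights(w_task, w_past, mask).tolist())  # [0.5, 0.0, 0.3, -0.1]
```

Note that the supports stay disjoint: the mask only ever activates entries of `w_past`, never entries owned by the current task.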
IV-B The Learn-Prune-Share (LPS) Algorithm
Our learn-prune-share algorithm learns task-specific weights as well as knowledge-sharing masks as the datasets are revealed. It is an iterative process, summarized in Figure 3. At each task, we use the full excess capacity of the network to train a dense network. Using a state-of-the-art pruning method, we reduce this to weights W_t with small support; simultaneously, we determine how much of the old weights to reuse via the mask M_t. This process is repeated until we run out of tasks.
Formally, at each task t, the input to the algorithm consists of (a) the earlier weights from tasks 1 through t−1, i.e., W_{1:t−1}, as well as (b) the dataset of task t, i.e., D_t. Our goal is to learn sparse, small-support task-specific weights W_t, as well as the knowledge-sharing mask M_t. Note that for task 1, we only need to learn W_1, as there is no previous knowledge yet. As our pruning happens layer-wise, we introduce the following notation. We rewrite the weights and masks as W_t = [W_t^l]_{l=1}^L and M_t = [M_t^l]_{l=1}^L, where W_t^l, M_t^l are the weights and masks, respectively, corresponding to the l-th layer, for l = 1, …, L. We denote the loss of a network with weights W under dataset D_t as L(W, w_o; D_t), where w_o are the weights of the final (classification) layer. In light of Eq. (4), we formulate the learning process determining W_t, M_t at task t as an optimization problem:
min_{W_t, M_t, w_o}  L(W_t + M_t ⊙ W_{1:t−1}, w_o; D_t)  (5a)
subj. to:  W_t^l ∈ S^l,  l = 1, …, L,  (5b)
M_t^l ∈ S′^l,  l = 1, …, L,  (5c)
supp(W_t) ⊆ supp(W_{1:t−1})^c,  (5d)
supp(M_t) ⊆ supp(W_{1:t−1}),  (5e)
M_t ∈ {0,1}^N,  (5f)
where S^l are sparsity constraints on W_t^l, and S′^l are knowledge-sharing constraints on M_t^l. We describe both in detail below, in Sections IV-C and IV-D, respectively.

The constraint in Eq. (5d) enforces that the task weights are indeed disjoint: the weights of W_t are taken from the current excess capacity pool – the complement of supp(W_{1:t−1}). Similarly, the constraint in Eq. (5e) enforces that the knowledge-sharing mask is applied to the past weights only. Note that, together, they imply that W_t and M_t ⊙ W_{1:t−1} have disjoint supports. Finally, the fully connected classifier/output weights w_o are unconstrained.
IV-C Task-Specific Weight Constraints
To obtain sparse weights, we need the constraints on W_t^l in Prob. (5) to enforce sparsity. Recall that we denote the weights of the l-th layer of our neural network as W^l. In convolutional layers, the weight of the l-th layer is represented by a four-dimensional tensor, whose dimensions correspond to the number of filters, number of channels, filter width, and filter height, respectively. In fully connected layers, weights are p_l × q_l matrices, where p_l and q_l represent the input and output layer size, respectively. We nevertheless assume that all layers are represented in the GEneral Matrix Multiplication (GEMM) format, which is standard practice in tensor framework implementations: that is, we assume all tensors are reshaped to two-dimensional matrices. This is already the case for fully connected layers; for convolutional layers, the reshaping can take the form of, e.g., one row per filter, collapsing the channel, width, and height dimensions into columns. We thus assume every layer is represented by a (reshaped) weight matrix W^l ∈ ℝ^{p_l × q_l}, as illustrated in Figure 4. Note that, under this assumption, the total number of weights in the model is N = Σ_{l=1}^L p_l q_l.

Under this representation, we consider the following sets of constraints for layer l:
Irregular Pruning. For irregular pruning, we have:

S^l = { W^l : ‖W^l‖_0 ≤ α_l · p_l q_l },  (6)

where ‖W^l‖_0 is the size of W^l’s support (i.e., the number of non-zero elements), and α_l ∈ [0, 1] is a constant limiting the proportion of non-zero elements. Intuitively, this implies that W^l has no more than α_l p_l q_l non-zero elements.
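To make the GEMM reshaping and the sparsity budget of Eq. (6) concrete, here is a minimal NumPy sketch; the tensor shapes and the chosen α value are illustrative assumptions, not values from the paper.

```python
import numpy as np

# GEMM reshaping of a convolutional layer: one row per filter, collapsing
# channel/width/height into columns, so p_l = m and q_l = c * w * h.
m, c, w, h = 4, 3, 2, 2                    # filters, channels, width, height
conv = np.random.randn(m, c, w, h)         # 4-D convolutional weight tensor
W = conv.reshape(m, c * w * h)             # GEMM form: 4 x 12 matrix

def satisfies_irregular(W, alpha):
    """Check Eq. (6): at most alpha * p_l * q_l non-zero entries."""
    return np.count_nonzero(W) <= alpha * W.size

print(W.shape)                                      # (4, 12)
print(satisfies_irregular(np.zeros_like(W), 0.1))   # True: all-zero always passes
```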
Structured Pruning. Given a Boolean predicate p, let 1[p] be 1 if p is true, and 0 otherwise. Moreover, given a matrix W, let W_{:,j} be the j-th column of W. In column pruning, the constraint set is defined as:

S^l = { W^l : Σ_{j=1}^{q_l} 1[ ‖W^l_{:,j}‖ ≠ 0 ] ≤ β_l · q_l },  (7)

where β_l ∈ [0, 1]. This enforces that the number of non-zero columns in the l-th layer’s GEMM representation does not exceed β_l q_l. A similar constraint can be placed on the filters/rows of W^l to form structured filter pruning, which enforces that the number of non-zero filters does not exceed β_l p_l.
All three types of constraints (irregular, column, and filter pruning) are illustrated in Fig. 4. They all lead to disjoint supports if used consistently across tasks: for example, filter pruning ends up partitioning the rows of the GEMM representation of every layer, column pruning partitions columns, etc., while irregular pruning partitions individual matrix entries.
IV-D Knowledge-Sharing Mask Constraints
To control knowledge sharing, we impose a sparsity constraint on M_t^l as well, allowing only a fraction s_l of the entries in the mask to be non-zero. Formally:

S′^l = { M^l : M^l ∈ {0,1}^{p_l × q_l}, ‖M^l‖_0 ≤ s_l · p_l q_l }.  (8)
Adjusting the “sharing parameter” s_l allows us to limit the proportion of old weights shared (i.e., the non-zero elements of M_t^l). By forcing M_t^l to be sparse, we force training to select from previously learned weights those most beneficial for the current task. The sharing parameter also conveys the usefulness of previous knowledge: e.g., when tasks are similar, previous knowledge would indeed be useful for subsequent tasks, so s_l should be large; conversely, for dissimilar tasks we expect fewer sharing opportunities.
IV-E Solving LPS via ADMM
The optimization problem defined in Eq. (5) for LPS has non-convex constraints, and solving it via standard stochastic gradient descent is not possible. We use the widely deployed Alternating Direction Method of Multipliers (ADMM) [2], which has been extensively applied in the pruning literature [43, 33]. For completeness, we describe the ADMM solution to Problem (5) in detail in Appendix A. In short, ADMM decomposes the original non-convex constrained problem into subproblems that can be solved separately. It alternates between (a) standard gradient descent with a quadratic proximal penalty (Eq. (13)), which forces the solution to be close to a value in the (non-convex) constraint space, and (b) an orthogonal projection operation onto the constraint space (Eq. (14a)). Hence, starting from full weights and masks set to 1, we can progressively prune and constrain both, producing a feasible solution at convergence. Most importantly, the weights and masks can be trained jointly and dynamically.

From an implementation standpoint, to incorporate our constraints into ADMM, it suffices to produce polynomial-time functions that compute the orthogonal projection onto constraints (5b)–(5c). For (5b), polynomial algorithms are well known for irregular, column, and filter pruning constraints [43]. For example, for irregular pruning, the orthogonal projection of a matrix onto the set given by Eq. (6) can be computed by keeping the entries of largest absolute value intact, and setting the rest to zero. For column pruning (Eq. (7)), the projection can be computed by similarly keeping the columns with largest norm intact, and setting all other columns to 0.
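The two projections just described can be sketched as follows; here `k` is an integer budget standing in for α_l·p_l·q_l (respectively β_l·q_l), an assumption made for simplicity.

```python
import numpy as np

def project_irregular(W, k):
    """Keep the k largest-magnitude entries of W, zero out the rest."""
    out = np.zeros_like(W)
    keep = np.argsort(np.abs(W).ravel())[-k:]   # indices of k largest |W_ij|
    out.ravel()[keep] = W.ravel()[keep]         # ravel() of out is a view
    return out

def project_columns(W, k):
    """Keep the k columns of largest Euclidean norm, zero out the rest."""
    out = np.zeros_like(W)
    keep = np.argsort(np.linalg.norm(W, axis=0))[-k:]  # k largest columns
    out[:, keep] = W[:, keep]
    return out

W = np.array([[3.0, 0.1, -2.0],
              [0.2, 4.0, -0.3]])
print(project_irregular(W, 2))   # keeps only the entries 3.0 and 4.0
print(project_columns(W, 1))     # keeps only the middle column (largest norm)
```

Both are the standard top-k selections used in the ADMM pruning literature; the full layer-wise bookkeeping of LPS is omitted here.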
Our mask constraint (8) is more complex, as the projection requires not only enforcing sparsity exactly, but also that the values of the matrix become binary. Nevertheless, we can compute the projection of a real-valued matrix onto S′^l in polynomial time via the following steps:
We prove the correctness of this algorithm in Appendix B.
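The paper's own step list and correctness proof live in Appendix B; the sketch below is our reconstruction from first principles, not the authors' algorithm verbatim. Setting an entry of the binary matrix to 1 pays (1 − x)² instead of x², a win exactly when x > 1/2, and the win 2x − 1 grows with x, so a greedy top-k selection is optimal.

```python
import numpy as np

def project_binary(X, k):
    """Euclidean projection of X onto binary matrices with at most k ones:
    take the (up to) k largest entries, but only those exceeding 1/2."""
    M = np.zeros_like(X)
    order = np.argsort(X.ravel())[::-1]              # entries, largest first
    chosen = [i for i in order[:k] if X.ravel()[i] > 0.5]
    M.ravel()[chosen] = 1.0                          # ravel() is a view of M
    return M

X = np.array([[0.9, 0.4],
              [0.6, 0.2]])
print(project_binary(X, k=3))   # ones at 0.9 and 0.6 only (0.4 <= 1/2)
```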
V Experiments
In our experiments, (a) we show that our method outperforms current stateoftheart methods on both benchmark and real datasets; (b) we assess the importance of the knowledgesharing mask under different task settings; and (c) we explore how different pruning strategies affect the prediction accuracy.
V-A Experimental Setting
Datasets. To evaluate the performance of our approach empirically, we experiment with two standard lifelong learning benchmark datasets, permuted MNIST [20, 7] and split CIFAR10/100 [19], and a real-world radio-frequency fingerprinting dataset (split RF) [16], summarized in Table I. The original MNIST dataset [20, 7] contains black and white images of handwritten digits of 10 classes. Following [42], we construct 10 tasks by applying the same random permutation across all MNIST images, using a different permutation for each task. CIFAR10 [19] comprises 10 classes of 32×32 colour images. CIFAR100 is just like CIFAR10 in image format and total number of images, but has 100 classes. Following [42], we set the first task as the whole CIFAR10 dataset. We then create 5 additional tasks, each containing 10 consecutive classes from the CIFAR100 dataset. Finally, the split RF dataset [16, 9] contains radio transmissions from 50 WiFi devices recorded in the wild. We randomly partition these 50 classes into 5 tasks.

Lifelong Learning Methods. We compare LPS to the following methods:
Elastic Weight Consolidation (EWC) [18]: EWC applies a Laplace approximation to estimate importance scores of parameters for previous tasks, and uses a quadratic regularizer weighted by the importance scores.
Intelligent Synapses (IS) [42]: IS uses an importance-score-based regularizer similar to EWC. However, a path-integral-based method is proposed to evaluate the importance scores.

Learning without Forgetting (LwF) [22]: LwF maintains responses for previous tasks via a knowledge distillation loss.
Deep Generative Replay (DGR) [35]: DGR uses generative adversarial networks (GANs) [8] to mimic the data distribution of each task. A generator is updated at every task to incorporate its data distribution. A corresponding classifier is trained using the mixture of generated and new data.

Gradient Episodic Memory (GEM) [24]: GEM proposes an episodic memory saving a portion of previous data, and uses the loss on this data as a constraint when training a new task.
PackNet [26]: PackNet iteratively prunes the model to accommodate new tasks by heuristically removing parameters of smaller magnitude. A similar formulation is proposed by [5] in a lifelong learning setting.
We use the implementation from the original authors for all methods, including the recommended hyperparameter settings or tuning strategies. The same network architectures are used among all methods for fair comparison.
| Stat. & Param. | Permuted MNIST | Split CIFAR | Split RF |
| # tasks (T) | 10 | 6 | 5 |
| # classes per task | 10 | 10 | 10 |
| # train samples per task | 60,000 | 50,000 / 5,000 | 1,410 |
| # test samples per task | 10,000 | 10,000 / 1,000 | 550 |
| α (% total layer params) | 10% | 50% / 10% | 20% |
| s (% total params) | 90% | 92% | 90% |
| Pruning strategy | Irregular | Irregular | Column |
| LPS epochs (warm-up/ADMM/final) | 30/90/30 | 200/600/200 | 20/60/20 |
| Architecture | Two FC layers | CIFAR10 | ResNet50-1D |
| # params (N) | 5,568,000 | 884,576 | 15,901,568 |
| # layers (L) | 2 | 5 | 49 |
Architectures. We implement different architectures for permuted MNIST, split CIFAR10/100, and split RF, respectively. The architecture for the permuted MNIST dataset [42] contains two hidden layers, each with 2000 neurons and ReLU activations. For the split CIFAR10/100 dataset, we use the default CIFAR10 architecture from Keras [42]. For the split RF dataset, we use ResNet50-1D [13], which is the 1D-convolutional version of ResNet50, targeting inputs that are 2D fixed-length sequences. For all three architectures, we learn the biases and batch normalization parameters for the first task and keep these terms fixed for subsequent tasks.
LPS Implementation. For each task, we run LPS in three phases. In the warm-up phase, we first train a dense network over the full set of free parameters. In the ADMM phase, we then prune the network via ADMM (Eq. (11)). In the final phase, we perform a final projection onto the constraint sets of both masks and weights, and retrain the weights, changing only non-zero values. We increase the ADMM penalty parameters by a factor of 10 at equal intervals during ADMM iterations. We use the following hyperparameters, which we determine using a validation set. Unless otherwise noted, the sparsity parameters α and s are as shown in Table I. We explore the impact of both in Section V-B. For all experiments, we use a batch size of 128 and Adam [17] as the optimizer with default values, initializing the learning rate to 0.001. Our proposed LPS approach is implemented in Python using PyTorch [31] with NVIDIA CUDA support. All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores.

Evaluation Metrics. We evaluate the final obtained model (associated with masks and multi-head output layers) on all tasks’ test sets via (Top-1) accuracy.
V-B Results on Benchmark Datasets
Effectiveness of the proposed LPS approach.
Table II shows the overall performance, in terms of the final average accuracy across all tasks, of all lifelong learning methods. For reference purposes, we also include the accuracy attained when training a full-capacity (non-parsimonious) single model separately for each task (SM). LPS outperforms all competitors across all datasets. Most methods perform well on permuted MNIST; the margin is wider on the remaining two datasets, which are more challenging. To further scrutinize the performance of LPS across tasks, we show in Tables III–IV the per-task accuracy. Interestingly, LPS outperforms all competitors across all tasks on both datasets; we also observed this on the 10 tasks of permuted MNIST, which we omit for brevity. Overall, our LPS approach achieves both the best average and the best task-specific accuracy on all three datasets.
We further observe that regularization-based methods like EWC and IS perform relatively well on the benchmarks, while they fail on split RF. One possible explanation may be that when tasks are more diverse and the model is large, regularizers do not suffice to retain the learned information. Evidence of forgetting is present in LwF on split CIFAR, and in almost all methods (except LPS and PackNet) on split RF. This is expected, as both LPS and PackNet are immune to forgetting.
We also observe that LPS even outperforms the full-capacity SM trained from scratch on each task for split CIFAR10/100 and split RF, and is very close to it on permuted MNIST. This happens despite the fact that it uses only a small fraction of the parameters used by SM, indicating that it avoids overfitting. Also, we see a clear benefit of parameter reuse across tasks in split CIFAR (Table III): by partially utilizing past weights, prediction on later tasks improves under LPS compared to SM.
| Datasets | SM | EWC | IS | LwF | DGR | GEM | PackNet | LPS |
| Permuted MNIST | 98.80 | 96.81 | 97.52 | 68.22 | 90.73 | 93.03 | 98.14 | 98.58 |
| Split CIFAR10/100 | 75.14 | 71.13 | 74.97 | 54.68 | 63.61 | 66.05 | 77.79 | 80.13 |
| Split RF | 81.15 | 37.01 | 42.63 | 27.75 | 48.27 | 68.38 | 79.37 | 81.22 |
| Methods | task 1 | task 2 | task 3 | task 4 | task 5 | task 6 | Avg. |
| SM | 82.32 | 75.40 | 70.20 | 75.90 | 71.70 | 75.30 | 75.14 |
| EWC | 71.23 | 72.50 | 69.25 | 71.34 | 67.52 | 74.93 | 71.13 |
| IS | 74.59 | 74.28 | 74.19 | 75.54 | 75.58 | 75.62 | 74.97 |
| LwF | 40.32 | 56.77 | 48.60 | 53.94 | 60.04 | 68.43 | 54.68 |
| DGR | 64.36 | 62.01 | 63.02 | 67.34 | 65.28 | 59.64 | 63.61 |
| GEM | 68.52 | 65.34 | 63.88 | 70.12 | 65.23 | 63.23 | 66.05 |
| PackNet | 82.33 | 79.30 | 73.90 | 78.80 | 74.30 | 78.10 | 77.79 |
| LPS | 82.97 | 80.00 | 76.50 | 79.90 | 78.40 | 83.00 | 80.13 |
| Methods | task 1 | task 2 | task 3 | task 4 | task 5 | Avg. |
| SM | 76.33 | 73.50 | 85.30 | 85.60 | 85.00 | 81.15 |
| EWC | 25.73 | 35.32 | 30.85 | 45.81 | 47.24 | 37.01 |
| IS | 27.08 | 40.72 | 37.25 | 50.66 | 57.34 | 42.63 |
| LwF | 14.62 | 20.37 | 23.45 | 33.58 | 46.72 | 27.75 |
| DGR | 43.50 | 49.37 | 43.87 | 50.25 | 54.38 | 48.27 |
| GEM | 67.24 | 63.45 | 68.53 | 70.26 | 72.44 | 68.38 |
| PackNet | 78.15 | 74.14 | 82.56 | 80.54 | 81.45 | 79.37 |
| LPS | 78.33 | 77.55 | 84.19 | 82.63 | 83.39 | 81.22 |
| Datasets | Share | task 1 | task 2 | task 3 | task 4 | task 5 | task 6 | task 7 | task 8 | task 9 | task 10 | Avg. |
| Permuted MNIST | 0% | 98.92 | 98.77 | 98.47 | 98.51 | 98.58 | 98.49 | 98.29 | 97.91 | 97.78 | 85.82 | 97.15 |
| | 100% | 98.92 | 98.56 | 98.51 | 98.39 | 98.35 | 98.24 | 98.26 | 98.19 | 98.25 | 98.14 | 98.38 |
| | 90% | 98.92 | 98.68 | 98.71 | 98.64 | 98.55 | 98.61 | 98.49 | 98.51 | 98.42 | 98.23 | 98.58 |
| Split CIFAR10/100 | 0% | 82.97 | 72.40 | 64.20 | 75.70 | 68.90 | 69.60 | | | | | 72.30 |
| | 100% | 82.97 | 79.70 | 76.10 | 80.50 | 76.60 | 78.70 | | | | | 79.10 |
| | 92% | 82.97 | 80.00 | 76.50 | 79.90 | 78.40 | 83.00 | | | | | 80.13 |
| Split RF | 0% | 78.33 | 77.33 | 83.29 | 81.90 | 82.20 | | | | | | 80.61 |
| | 100% | 78.33 | 77.59 | 84.93 | 81.90 | 83.12 | | | | | | 81.17 |
| | 90% | 78.33 | 77.55 | 84.19 | 82.63 | 83.39 | | | | | | 81.22 |
Share Parameter Effects.
We further explore the impact of knowledge sharing in Figure 5. The figure shows how the average and per-task accuracy change as we modify the sharing parameter: the x-axis is the share ratio, i.e., the ratio of the sharing parameter over the total number of past weights per layer, on the split CIFAR dataset. The optimal value is at 92%. Moreover, we clearly see that a large reduction in sharing has a bigger impact on later tasks, which otherwise would benefit from knowledge reuse.
We also show the results of models with no (0%) and full (100%) sharing on all datasets, as well as our best performing model with selective sharing, in Table V. We follow the same parameter search strategy as in split CIFAR10/100 to get the best performing model on the validation set. Interestingly, for all three datasets, we observe the best performance with a share ratio around 90%. This also indicates that many (but not all) past weights are valuable or meaningful for new tasks.
To explore this notion of knowledge reuse further, we conducted an experiment in which tasks vary drastically. To do so, we construct a 5-task “mixed” dataset, where tasks 1, 3, 5 are from the MNIST dataset, with different permutation patterns, and tasks 2, 4 each contain 10 different classes from CIFAR100. Images from permuted MNIST are augmented to RGB images by repeating the original image across 3 channels, and resized to be compatible with CIFAR images. Similar to Figure 5, Figure 6 shows the effect of the sharing ratio on the mixed dataset. Not surprisingly, the behavior is quite different from Fig. 5. The highest accuracy (89.22%) is achieved at 20% sharing, which demonstrates that LPS does adaptively select useful knowledge for the current task. Note that, faced with these dissimilar tasks, full sharing (88.15%) performs even worse than no sharing (88.23%), indicating that the sharing strategy should be flexible and guided by inter-task similarity.
Comparing Different Pruning Strategies.
We compare three pruning strategies (irregular, column, and filter pruning) on the split CIFAR-10/100 and split RF datasets, summarized in Table VI and Table VII, respectively. Both irregular and column pruning obtain satisfactory performance, achieving 80.13% and 79.56% on split CIFAR-10/100, and 80.55% and 81.22% on split RF, respectively. In contrast, filter pruning exhibits unstable performance, obtaining 68.11% and 80.12% on split CIFAR-10/100 and split RF, respectively.
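The three granularities differ only in the shape of the zero pattern they impose on a convolutional weight tensor viewed as a GEMM matrix (rows = filters, columns = positions). The following NumPy sketch, our own illustration using a simple magnitude criterion rather than the paper's ADMM procedure, prunes at each granularity:

```python
import numpy as np

def prune(weight, sparsity, granularity):
    """Zero a `sparsity` fraction of `weight` (out_ch, in_ch, kh, kw)
    at one of three granularities, scored by magnitude.
    Illustrative sketch only, not the paper's ADMM-based pruning."""
    out_ch = weight.shape[0]
    W = weight.reshape(out_ch, -1)           # GEMM view: rows=filters, cols=positions
    if granularity == "irregular":           # individual weights
        k = int(sparsity * W.size)
        idx = np.argsort(np.abs(W), axis=None)[:k]
        mask = np.ones(W.size)
        mask[idx] = 0
        mask = mask.reshape(W.shape)
    elif granularity == "column":            # same position across all filters
        k = int(sparsity * W.shape[1])
        idx = np.argsort(np.linalg.norm(W, axis=0))[:k]
        mask = np.ones_like(W)
        mask[:, idx] = 0
    elif granularity == "filter":            # whole output filters
        k = int(sparsity * out_ch)
        idx = np.argsort(np.linalg.norm(W, axis=1))[:k]
        mask = np.ones_like(W)
        mask[idx, :] = 0
    else:
        raise ValueError(granularity)
    return (W * mask).reshape(weight.shape)
```

Irregular pruning is the most flexible (any weight may be removed), while column and filter pruning impose hardware-friendly structure at the cost of flexibility, consistent with the accuracy gap observed for filter pruning above.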
Impact of Model Capacity.
Figure 7 measures how model capacity usage affects accuracy on the split CIFAR-10/100 dataset. For this experiment, instead of using the whole model capacity for the 6 tasks, we use only a fraction of the full model by the final task, leaving the remaining parameters free for future growth; all other parameters are set as in Table I. Figure 7 shows the impact on average and per-task accuracy as we vary this fraction. We clearly observe that the model performs better when more capacity is available. Nevertheless, accuracy is also robust to this shrinkage: the model achieves 75.32% accuracy with only 50% of the model capacity, which is even better than the best non-pruning method, IS (74.97%), with full model capacity. Surprisingly, at only 10% of the total capacity of the network, accuracy does not collapse, but remains above 72.5%. This indicates that our method has the potential capacity to scale to even more future tasks.
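As one concrete reading of the capacity fraction, the sketch below splits a fraction of a model's parameters evenly across tasks so that only that fraction is consumed by the final task. The even per-task split and the function name are hypothetical, for illustration only; the paper's actual allocation schedule may differ.

```python
def task_budgets(total_params, num_tasks, capacity_fraction):
    """Evenly split `capacity_fraction` of the parameters across tasks,
    so that by the final task only that fraction of the model is used.
    (Hypothetical allocation rule, for illustration.)"""
    per_task = int(capacity_fraction * total_params / num_tasks)
    return [per_task] * num_tasks

# e.g., 50% of a 1M-parameter model spread over 6 tasks
budgets = task_budgets(1_000_000, 6, 0.5)
```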
TABLE VI: Accuracy (%) of the three pruning strategies under different share ratios on split CIFAR-10/100.

Prun. Appr.  Share  task 1  task 2  task 3  task 4  task 5  task 6  Avg.
SM           --     82.32   75.40   70.20   75.90   71.70   75.30   75.14
Irregular    0%     82.97   72.40   64.20   75.70   68.90   69.60   72.30
Irregular    100%   82.97   79.70   76.10   80.50   76.60   78.70   79.10
Irregular    92%    82.97   80.00   76.50   79.90   78.40   83.00   80.13
Column       0%     82.04   68.80   56.50   71.00   63.90   63.00   67.54
Column       100%   82.04   80.80   76.20   80.30   76.40   77.90   78.94
Column       92%    82.04   80.90   76.30   80.60   77.10   80.40   79.56
Filter       0%     79.95   56.50   50.40   62.20   54.60   55.80   59.91
Filter       100%   79.95   60.20   60.00   60.40   58.90   61.10   63.43
Filter       92%    79.95   62.10   61.70   67.70   66.50   70.70   68.11
TABLE VII: Accuracy (%) of the three pruning strategies under different share ratios on split RF.

Prun. Appr.  Share  task 1  task 2  task 3  task 4  task 5  Avg.
SM           --     76.33   73.50   85.30   85.60   85.00   81.15
Irregular    0%     78.33   75.14   83.74   82.19   73.03   78.49
Irregular    100%   78.33   75.14   84.01   79.71   82.20   79.88
Irregular    90%    78.33   74.21   84.56   83.00   82.65   80.55
Column       0%     78.33   77.33   83.29   81.90   82.20   80.61
Column       100%   78.33   77.59   84.93   81.90   83.12   81.17
Column       90%    78.33   77.55   84.19   82.63   83.39   81.22
Filter       0%     77.59   70.32   82.64   80.36   82.39   78.66
Filter       100%   77.59   73.65   82.90   80.44   82.85   79.49
Filter       90%    77.59   74.54   83.64   81.72   83.12   80.12
VI Conclusions and Future Work
In this paper, we propose the learn-prune-share (LPS) algorithm for lifelong learning. Our method maintains a parsimonious neural network model and achieves exactly zero forgetting by splitting the network into task-specific partitions via an ADMM-based pruning method. Moreover, a novel selective knowledge-sharing scheme is integrated seamlessly into the ADMM optimization framework to address knowledge reuse. Experiments on permuted MNIST, split CIFAR-10/100, and split RF demonstrate that our approach achieves significant improvements over state-of-the-art methods. Future directions include applying more advanced pruning strategies to the lifelong learning problem and exploring how to quantitatively measure the capacity of a model.
VII Acknowledgements
The authors gratefully acknowledge support by the National Science Foundation (grant CCF-1937500).
References
 [1] (2018) Memory aware synapses: learning what (not) to forget. In ECCV, pp. 139–154. Cited by: §I, §IIA.

 [2] (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 (1), pp. 1–122. Cited by: §IVE.
 [3] (2019) Network pruning via transformable architecture search. In NeurIPS, pp. 759–770. Cited by: §IIB.
 [4] (2017) Neurogenesis deep learning: extending deep networks to accommodate new classes. In IJCNN, pp. 526–533. Cited by: §I, §IIA, §III.
 [5] (2019) Continual learning via neural pruning. arXiv preprint arXiv:1903.04476. Cited by: §I, §VA.
 [6] (2019) Continual learning via neural pruning. arXiv preprint arXiv:1903.04476. Cited by: §IIA, §III.
 [7] (2013) An empirical investigation of catastrophic forgetting in gradientbased neural networks. arXiv preprint arXiv:1312.6211. Cited by: §VA.
 [8] (2014) Generative adversarial nets. In NeurIPS, pp. 2672–2680. Cited by: §VA.
 [9] (2019) Finding a ‘new’needle in the haystack: unseen radio detection in large populations using deep learning. In DySPAN, pp. 1–10. Cited by: §VA.
 [10] (2016) Dynamic network surgery for efficient dnns. In NeurIPS, pp. 1379–1387. Cited by: §IIB.
 [11] (2016) Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, Cited by: §IIB.
 [12] (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §IVA.
 [13] (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §VA.

[14]
(2018)
Soft filter pruning for accelerating deep convolutional neural networks
. In IJCAI, Cited by: §IIB.  [15] (2017) Channel pruning for accelerating very deep neural networks. In ICCV, pp. 1389–1397. Cited by: §IIB.
 [16] (2020) Deep learning for rf fingerprinting: a massive experimental study. In IEEE Internet of Things Magazine, Cited by: §VA.
 [17] (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §VA.
 [18] (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences 114 (13), pp. 3521–3526. Cited by: §I, §IIA, §VA.
 [19] (2009) Learning multiple layers of features from tiny images. Cited by: §VA.
 [20] (1998) The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist. Cited by: §VA.
 [21] (2019) Compressing convolutional neural networks via factorized convolutional filters. In CVPR, pp. 3977–3986. Cited by: §IIB.
 [22] (2017) Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence 40 (12), pp. 2935–2947. Cited by: §I, §IIA, §VA.
 [23] (2019) Rethinking the value of network pruning. In ICLR, Cited by: §IIB.
 [24] (2017) Gradient episodic memory for continual learning. In NeurIPS, pp. 6467–6476. Cited by: §I, §IIA, §VA.
 [25] (2017) Thinet: a filter level pruning method for deep neural network compression. In ICCV, pp. 5058–5066. Cited by: §IIB.
 [26] (2018) Packnet: adding multiple tasks to a single network by iterative pruning. In CVPR, pp. 7765–7773. Cited by: §I, §IIA, §III, §VA.
 [27] (1989) Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of learning and motivation, Vol. 24, pp. 109–165. Cited by: §I, §III.
 [28] (2018) Variational continual learning. In ICLR, Cited by: §I, §IIA.
 [29] (2019) Learning to remember: a synaptic plasticity driven framework for continual learning. In CVPR, Cited by: §I, §IIA.
 [30] (2019) Continual lifelong learning with neural networks: a review. Neural Networks. Cited by: §I.
 [31] (2019) PyTorch: an imperative style, highperformance deep learning library. In NeurIPS, pp. 8024–8035. Cited by: §VA.
 [32] (1990) Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.. Psychological review 97 (2), pp. 285. Cited by: §I, §III.
 [33] (2019) Admmnn: an algorithmhardware codesign framework of dnns using alternating direction methods of multipliers. In ASPLOS, pp. 925–938. Cited by: §IVE.
 [34] (2016) Progressive neural networks. arXiv preprint arXiv:1606.04671. Cited by: §I, §IIA, §III.
 [35] (2017) Continual learning with deep generative replay. In NeurIPS, pp. 2990–2999. Cited by: §I, §IIA, §VA.
 [36] (1995) Lifelong robot learning. Robotics and autonomous systems 15 (12), pp. 25–46. Cited by: §I.
 [37] (2019) Generative replay with feedback connections as a general strategy for continual learning. In COSYNE Workshop, Cited by: §I, §IIA.
 [38] (2016) Learning structured sparsity in deep neural networks. In NeurIPS, pp. 2074–2082. Cited by: §IIB.
 [39] (2018) Progressive weight pruning of deep neural networks using admm. arXiv preprint arXiv:1810.07378. Cited by: §IIA.
 [40] (2018) Lifelong learning with dynamically expandable networks. In ICLR, Cited by: §I, §IIA, §III.
 [41] (2018) Nisp: pruning networks using neuron importance score propagation. In CVPR, pp. 9194–9203. Cited by: §IIB.
 [42] (2017) Continual learning through synaptic intelligence. In ICML, pp. 3987–3995. Cited by: §I, §IIA, §VA, §VA, §VA.
 [43] (2018) A systematic dnn weight pruning framework using alternating direction method of multipliers. In ECCV, pp. 184–199. Cited by: §IIB, §IVE, §IVE.
 [44] (2018) Discriminationaware channel pruning for deep neural networks. In NeurIPS, pp. 875–886. Cited by: §IIB.
Appendix A Solving Problem (5) via ADMM
To begin with, constraints (5d), (5c) are easy to satisfy: we basically partition variables of and to sets and its complement, and only optimize over the appropriate set (the complement of for and for ). We thus ignore these constraints below. We similarly omit , which is unconstrained and can be learned via SGD. Rewriting the loss as , we convert the nonconvex optimization problem formulated in (5) into the ADMM form by introducing auxiliary variables and for constraints (5b) and (5c) respectively:equationparentequation
(9a)  
subject to:  (9b)  
(9c) 
where and correspond to the indicator functions for constraints (5b) and (5c) respectively, i.e.,:
(10) 
The augmented Lagrangian of (9) is:
(11) 
where and are penalty terms, and and are dual variables, rescaled by and , respectively. ADMM proceeds iteratively as follows; at the th iteration: equationparentequation
(12a)  
(12b)  
(12c)  
(12d) 
The problem (12a) is equivalent to:
(13) 
The first term in (13) is a standard DNN loss while the second and the third terms are quadratic and differentiable. Thus, this subproblem can be solved by classic stochastic gradient descent. Problem (12b) is equivalent to: equationparentequation
(14a)  
(14b) 
where are the Euclidean projections onto sets , respectively.
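To make the update scheme concrete, here is a minimal NumPy sketch of the loop (12a)-(12d) on a toy quadratic loss with a k-sparse constraint set. The loss, step sizes, and constraint set are our own stand-ins for the DNN loss and the paper's partition/mask constraints: the primal step (12a) is approximated by a few gradient iterations (in the paper, SGD on the DNN loss plus the quadratic penalty), (12b) is solved by Euclidean projection, and the dual variables are updated as in (12d).

```python
import numpy as np

def project_sparse(z, k):
    """Euclidean projection onto {z : at most k nonzeros}: keep the k largest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def admm_prune(grad_loss, w0, k, rho=1.0, lr=0.1, iters=200, inner=10):
    """Toy ADMM loop mirroring (12a)-(12d): gradient steps on
    loss + (rho/2)||w - z + u||^2, a projection step for z, and a
    scaled dual update. Stand-in for the paper's DNN setting."""
    w = w0.copy()
    z = project_sparse(w, k)
    u = np.zeros_like(w)
    for _ in range(iters):
        for _ in range(inner):                  # approximate primal step (12a)
            w -= lr * (grad_loss(w) + rho * (w - z + u))
        z = project_sparse(w + u, k)            # projection step (12b)
        u = u + w - z                           # dual update (12d)
    return z

# Toy example: quadratic loss ||w - w_star||^2 with a 2-sparse constraint.
w_star = np.array([3.0, -0.1, 2.0, 0.05])
grad = lambda w: 2 * (w - w_star)
w_pruned = admm_prune(grad, np.zeros(4), k=2)
```

On this toy problem the loop converges to the 2-sparse vector closest to `w_star`, keeping the two large entries and zeroing the small ones, which is exactly the behavior the projection steps are meant to enforce.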
Appendix B Proof of Correctness of the Mask Projector
For simplicity, we prove this for the projection to the set: i.e., the set of binary elements containing k zeros. Let , then is computed by: (a) sort all elements from smallest to largest; (b) set the largest values to 1 and the rest to 0. We make use of the following lemma.
Lemma 1.
For , where ,
This can be easily proved by considering all positional cases of . Let be the solution of the algorithm, and be an optimal solution. Assume indices are ordered based on the elements of , as in the algorithm. Let be the first position at which . Then, is mapped to 0 in and is mapped to 1 in . Moreover, as both have exactly ones, there must be a such that (i) , (ii) , and (iii) . By the lemma, since , we have . So, setting and would only improve the distance from . As is optimal, this swap must maintain optimality; repeating this procedure as long as there exist indices at which and differ will convert to , while maintaining optimality. ∎
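The projector described above (sort, then set the largest entries to 1 and the rest to 0) can be sketched in a few lines; the function name is ours:

```python
import numpy as np

def project_binary_k_zeros(v, k):
    """Euclidean projection of v onto binary vectors with exactly k zeros:
    set the largest entries of v to 1 and the k smallest to 0."""
    m = np.zeros(v.size)
    m[np.argsort(v)[k:]] = 1   # indices of all but the k smallest entries
    return m

v = np.array([0.9, 0.1, 0.4, 0.8])
m = project_binary_k_zeros(v, 2)   # ones land on the two largest entries
```

The optimality argument is the one given in the proof: assigning 1 to entry i changes the squared distance by (1 - v_i)^2 - v_i^2 = 1 - 2 v_i, which is smallest for the largest v_i, so the ones should go to the largest entries.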