Humans can learn new visual concepts without significantly forgetting previously learned ones and without necessarily having to revisit them. In contrast, the majority of existing artificial visual deep learning systems assume replay access to all the training images and all the concepts during the entire training phase – e.g., iterating for a large number of epochs over the 1000 classes of ImageNet. This assumption also applies to systems that learn concepts by reading the web (e.g., [20, 5, 6]) or that augment CNNs with additional units to better transfer knowledge to new tasks.
To get closer to human visual learning and to practical application scenarios, where data often cannot be stored due to physical restrictions (e.g. robotics) or policy (e.g. privacy), the scenario of lifelong learning (LLL) has been proposed. The assumption of LLL is that only a subset of the concepts and corresponding training instances are available at each point in time during training. Each of these subsets is referred to as a “task”, a term originating from robotics applications. This leads to a chain of learning tasks trained along a time-line. While training of the first task is typically unchanged, the challenge is how to train the remaining tasks without reducing performance on the earlier tasks. Indeed, doing so naively, e.g. by fine-tuning the previous model, results in what is known as catastrophic forgetting, i.e., the accuracy on the earlier tasks drops significantly. Avoiding such catastrophic forgetting is the main challenge addressed in the lifelong learning literature.
Lifelong Fact Learning (LLFL). Existing works on LLL have focused mostly on image classification tasks (e.g. [2, 11, 16, 23, 28, 30]), in a relatively small-scale and somewhat artificial setup. A sequence of tasks is defined, either by combining multiple datasets (e.g., learning to recognize MITscenes, then CUB-birds, then Flowers), by dividing a dataset (usually CIFAR100 or MNIST) into sets of disjoint concepts, or by permuting the input (permuted MNIST). Instead, in this work we propose a LLL setup with the following more realistic and desirable learning characteristics:
2. Concepts of varying complexity: We want to learn diverse concepts, including not only objects but also actions, interactions, attributes, as well as combinations thereof.
3. Semantic and structure aware: We want to connect semantically related visual facts. For example, if we have learned “lion” and “tiger” earlier, that can help us later in time to learn a “liger” (a rare hybrid cross between a male lion and a female tiger), even with just a few examples. Relating this to point (2) above, this further allows compositional lifelong learning to help recognize new facts (e.g. dog, riding, wave) based on facts seen earlier in time (e.g. person, riding, wave and girl, walking, dog).
To the best of our knowledge, none of the existing LLL literature explored these challenges. We denote studying lifelong learning with the aforementioned characteristics as lifelong fact learning (LLFL); see Fig. 2.
A Note on Evaluation Measures. We argue that the evaluation of LLL methods should be reconsidered. In standard LLL evaluation (with a few notable exceptions, such as [2, 4]), the trained models are judged by their capability to recognize each task’s categories individually, assuming the absence of the categories covered by the remaining tasks – which is not necessarily realistic. Although the performance of each task in isolation is an important characteristic, it might be deceiving. Indeed, a learnt representation could be good enough to classify an image in the restricted concept space covered by a single task, yet fail to classify the same image when considering all concepts across tasks. It is therefore equally important to measure the ability to distinguish the learnt concepts among all concepts over all tasks. This matters because the objective of LLL is to model the understanding of an ever-growing set of concepts over time. To better understand how LLL performs in real-world conditions, we advocate evaluating the existing methods across different tasks. We name this evaluation Generalized lifelong learning (G-LLL), in line with the idea of Generalized zero-shot learning proposed in . We detail the evaluation metric in Sec. 5.1.
Advantages of a Visual-Semantic Embedding. As illustrated in Fig. 2, we expect to better understand liger, running by leveraging previously learnt facts such as lion, tiger, jumping and dog, running. This shows how both semantics and structure are helpful for understanding. To our knowledge, such semantic awareness has not been studied in an LLL context. To achieve this, we use a visual-semantic embedding model where semantic labels and images are embedded in a joint space. For the semantic representation, we leverage external semantic knowledge using word embeddings – in particular word2vec . These word embeddings were shown to efficiently learn semantically meaningful floating-point vector representations of words. For example, the average vector of lion and tiger is closest to liger. This can help semantically similar concepts learn from one another, as shown in [31, 7] in non-LLL scenarios. Especially in our long-tail setting, this can be advantageous. Additionally, by working with an embedding instead of discrete concept labels as in [8, 15, 12, 30], we avoid a model that keeps growing as new concepts are added, which would make the model less scalable and limit the amount of sharing.
Contributions. First, we introduce a mid-scale and a large-scale benchmark for Lifelong Fact Learning (LLFL), with two splits each: a random and a semantic split. Our approach for creating a semantically divided benchmark is general and could be applied similarly to other datasets or as more data becomes available. Second, we advocate focusing on a more generalized evaluation (G-LLL) where the test data covers the entire label space across tasks. Third, we evaluate existing LLL approaches in both the standard and the generalized setup on our new LLFL benchmarks. Fourth, we discuss the limitations of the current generation of LLL methods in this context, which forms a basis for advancing the field in future research. Finally, this paper aims to answer the following questions: How do existing LLL methods perform on a large number of concepts? What division of tasks is more helpful to continually learn facts at scale (semantically divided vs randomly divided)? How does the long-tail distribution of the facts limit the performance of the current methods?
2 Related Work
| Dataset | Structured/Diverse | Long-Tail | Classes | Examples | Task Count | Split Type |
| --- | --- | --- | --- | --- | --- | --- |
| MNIST | ✗ | ✗ | 10 | 60,000 | 2 to 5 | R |
| CIFAR (used in [23, 30, 18]) | ✗ | ✗ | 100 | 60,000 | 2 and 5 | R |
| ImageNet and CUB datasets (used in ) | ✗ | ✗ | 1,200 | 1,211,000 | 2 | R |
| Scenes, CUB, VOC, and Flowers (used in [16, 2, 28]) | ✗ | ✗ | 122–526 | 5,908–1,211,000 | 2 | S |
| 8-Dataset Sequence | ✗ | ✗ | 889 | 714,387 | 8 | S |
| CORe50 / iCUBWorld-Transf | ✗ | ✗ | 10 (50) / 15 (150) | 550 / 900 sessions | 10 | S |
| Our Mid-Scale LLFL Benchmark | ✓ | ✗ | 186 | 28,624 | 4 | S, R |
| Our Large-Scale LLFL Benchmark | ✓ | ✓ | 165,150 | 906,232 | 8 | S, R |
Among existing LLL works, there is limited agreement about the setup. Most build a task sequence by combining or dividing standard object/scene recognition datasets. In the context of robotics, Lomonaco and Maltoni introduced the CORe50 dataset, which consists of relatively short (15 sec) RGB-D video fragments of handheld domestic objects. They focus on both category-level and instance-level object recognition. With 50 objects belonging to 10 different categories it is, however, relatively small scale and limited in scope. With a similar focus, Pasquale et al. proposed the iCUBWorld-Transf dataset with 200 real objects divided into 20 categories. For CORe50 and iCUBWorld-Transf, the number of instances is shown in parentheses in Table 1. In a reinforcement learning setup, Kirkpatrick et al.  and Fernando et al.  performed interesting LLL experiments using a sequence of Atari games as tasks. In contrast to all of the above, we aim at a more natural and larger-scale setup; see the last two rows of Table 1. Our benchmarks are more structured and more challenging, due to the large number of classes and the long-tail distribution.
Existing LLL Approaches. LLL works may be categorized into data-based and model-based approaches. In this work, we do not consider methods that require storing samples from previous tasks in an episodic memory [23, 18].
In data-based approaches, the new task data is used to estimate and preserve the model behavior on previous tasks, mostly via a knowledge distillation loss as proposed in Learning without Forgetting . These approaches are typically applied to a sequence of tasks with different output spaces. To reduce the effect of the distribution difference between tasks, Triki et al.  propose to incorporate a shallow auto-encoder to further control the changes to the learned features, while Aljundi et al.  train a model for every task (an expert) and use auto-encoders to determine the most related expert at test time given an example input.
Model-based approaches [8, 15, 12, 30] on the other hand focus on the parameters of the network.
The key idea is to define an importance weight for each parameter in the network indicating the importance of this parameter to the previous tasks. When training a new task, network parameters with high importance are discouraged from being changed.
In Elastic Weight Consolidation, Kirkpatrick et al.  estimate the importance weights based on the diagonal of the Fisher Information matrix. Zenke et al.  propose Synaptic Intelligence, an online continual model where the importance weight of each parameter is defined by its contribution to the change in the loss, accumulated for each parameter during training.
Memory Aware Synapses measures importance by the effect of a change in a parameter on the function learned by the network, rather than on the loss. This allows estimating the importance weights not only in an online fashion but also without the need for labels. Finally, Incremental Moment Matching is a scheme to merge models trained for different tasks. Model-based methods seem particularly well suited for our setup, given that we work with an embedding instead of disjoint output spaces.
3 Our Lifelong Fact Learning Setups
We aim to build two LLL benchmarks that consist of a diverse set of facts (two splits for large-scale and two splits for mid-scale). The benchmarks capture different types of facts, including objects (e.g., lion, tiger), objects performing some activity (e.g., tiger, jumping, dog, running), and interactions between objects (e.g., lion, eating, meat). Before giving details on the benchmark construction, we first explain how we represent facts.
A visual-semantic embedding for facts. Inspired by [22, 7], we represent every fact for our LLL purpose by three pieces represented in a semantic continuous space: S represents object or scene categories; P represents predicates, e.g. actions or interactions; and O represents objects that interact with S. Each of S, P, and O lives in a high-dimensional semantic space. By concatenating these three representations, we obtain a structured space that can represent all the facts that we are interested in studying in this work. Here, we follow  and semantically represent each of S, P, and O by their corresponding word2vec embeddings .
where the three pieces are concatenated and ∗ means undefined and set to zeros. The rationale behind this notation convention is that if a ground-truth image is annotated as man, this could also be man, standing or man, wearing, t-shirt. Hence, we represent the man as man, ∗, ∗, where ∗ indicates that we do not know whether that “man” is doing something. Figure 2 shows how different fact types can be represented in this space, with S, P, and O each visualized as a single dimension. Note that S facts like lion are represented as a hyperplane in this space, while tiger, jumping and lion, eating, meat are represented as a hyper-line and a point, respectively.
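To make this representation concrete, here is a minimal sketch that builds such a structured fact vector, zeroing out undefined pieces. The tiny random vocabulary and the `embed_fact` helper are illustrative stand-ins; in practice each piece would be a pretrained word2vec vector.

```python
import numpy as np

DIM = 4  # toy word-embedding dimensionality (real word2vec would use e.g. 300)

# Toy stand-in for a word2vec lookup table; in practice these are pretrained vectors.
rng = np.random.default_rng(0)
VOCAB = {w: rng.normal(size=DIM) for w in ["man", "standing", "wearing", "t-shirt"]}

def embed_fact(s, p=None, o=None, dim=DIM):
    """Concatenate the S, P, O word embeddings; undefined pieces (*) become zeros."""
    parts = []
    for token in (s, p, o):
        parts.append(VOCAB[token] if token is not None else np.zeros(dim))
    return np.concatenate(parts)

v1 = embed_fact("man")                        # <man, *, *>
v2 = embed_fact("man", "wearing", "t-shirt")  # full S, P, O triplet
assert v1.shape == (3 * DIM,)
assert np.all(v1[DIM:] == 0)                  # P and O slots are zeroed
```

With 300-dimensional word2vec vectors, every fact would thus live in the same 900-dimensional structured space, regardless of whether it is a singleton, a pair, or a triplet.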
3.1 Large Scale LLFL Benchmark
We build our setup on top of the large scale fact learning dataset introduced by , denoted as Sherlock LSC (for Large SCale). It has more than 900,000 images and 200,000 unique facts, from which we excluded attributes. The dataset was created by extracting facts about images from image descriptions and image scene graphs. It matches our desired properties of being long-tailed and semantic-aware due to its structure.
Given this very large set of facts and examples for each of them, we want to learn them in a LLL setting. This involves splitting the data into a sequence of disjoint tasks (that is, with no overlap in the facts learned by different tasks). However, due to their structured nature, facts may be partially overlapping across tasks, e.g. have the same subject or object. In fact, we believe that some knowledge reappearing across different tasks is a desired property in many real-life LLL settings, as it facilitates knowledge transfer. On the other hand, one could argue that the different tasks that real world artificial agents are exposed to, are likely to cover different domains – a setting more in line with existing LLL works. To study both scenarios, we built a semantically divided split (less sharing among tasks) and a randomly divided one (with more sharing).
Large Scale Semantically Divided Split. We semantically group the facts to create the tasks needed to build our benchmark, i.e. we cluster similar facts and assign each cluster to a task. In particular, we first populate the structured embedding space with all the training facts and then cluster the facts semantically with a custom metric. Since our setting allows diverse facts where one or two of the three components might be undefined, we need a proper similarity measure to allow clustering the facts. We assume that the structured fact space is Euclidean and has unit norm (i.e., cosine distance). Hence, we define the distance between two facts as follows:
with an indicator value distinguishing between singleton facts, pairs, and triplets. The intuition behind this distance measure is that we do not want to penalize the undefined parts when comparing, for example, person, ∗, ∗ to person, jumping, ∗. In this case the distance should be zero, since the undefined pieces do not contribute to the distance measure. We rely on bottom-up hierarchical agglomerative clustering, which monotonically groups facts into disjoint tasks based on the aforementioned distance measure. This clustering algorithm recursively merges the pair of clusters that minimally increases a given linkage metric. In our experiments, we use the nearest point algorithm, i.e. clustering with single linkage. An advantage of the agglomerative clustering algorithm is that the distance measure need not be a metric.
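A minimal sketch of this idea follows, with hypothetical helper names. A toy union-find stands in for a library implementation of single linkage: the flat single-linkage clusters at a given threshold are exactly the connected components of the graph linking all fact pairs closer than that threshold.

```python
import numpy as np

def fact_distance(f1, f2):
    """Distance between two facts, each an (S, P, O) tuple of vectors,
    with None for an undefined piece.  Undefined pieces are skipped, so
    <person,*,*> vs <person,jumping,*> has distance 0 when the S vectors match."""
    d = 0.0
    for a, b in zip(f1, f2):
        if a is not None and b is not None:  # indicator: piece defined in both facts
            d += float(np.linalg.norm(a - b))
    return d

def single_linkage_clusters(facts, threshold):
    """Naive single-linkage clustering via union-find: any two facts closer
    than `threshold` (directly or transitively) share a cluster label."""
    parent = list(range(len(facts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(facts)):
        for j in range(i + 1, len(facts)):
            if fact_distance(facts[i], facts[j]) < threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(facts))]
```

At benchmark scale one would of course use an optimized implementation (e.g. a hierarchical clustering library) rather than this quadratic toy loop; the sketch only illustrates the custom distance and the single-linkage grouping.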
The result of the clustering is shown in the form of a dendrogram in Fig. 3. By looking at the clustered facts, we choose a threshold of 85, shown by the red dashed line, leading to 8 tasks, as detailed further in Table 2. In the supplementary material we attach a PCA visualization of the generated tasks using the word-embedding representation of each fact, together with a histogram over facts that illustrates the long tail. We note that the number of facts and images is not uniform across tasks, and some tasks are likely easier than others. We believe this mimics realistic scenarios, where an agent has to handle tasks of diverse difficulty.
Large Scale Randomly Divided Split. We also introduce a randomly divided benchmark where the facts are divided randomly over tasks rather than based on semantics. The semantic overlap between randomly split tasks is expected to be higher than between the semantically split tasks, where the semantic similarity between tasks is minimized. Table 2 shows further information for both types of splits. For the random split, we make sure that the tasks contain a balanced number of facts and of corresponding training and test images by selecting the most balanced candidate out of 100 random trials. Hence, the random split is more balanced by construction in terms of training images per task. Since we split the data randomly into tasks, semantically related facts are distributed across tasks.
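The balancing heuristic can be sketched as follows; the `fact_sizes` input format (fact id mapped to its number of images) is an assumption for illustration.

```python
import random

def balanced_random_split(fact_sizes, n_tasks, trials=100, seed=0):
    """Randomly partition facts into n_tasks groups `trials` times and keep
    the candidate whose per-task image counts have the smallest spread.
    `fact_sizes` maps fact id -> number of images (hypothetical format)."""
    rng = random.Random(seed)
    facts = list(fact_sizes)
    best, best_spread = None, float("inf")
    for _ in range(trials):
        rng.shuffle(facts)
        tasks = [facts[i::n_tasks] for i in range(n_tasks)]  # round-robin split
        counts = [sum(fact_sizes[f] for f in t) for t in tasks]
        spread = max(counts) - min(counts)
        if spread < best_spread:
            best, best_spread = tasks, spread
    return best
```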
3.2 Mid Scale Balanced LLFL Benchmark
Compared to the large-scale dataset, this dataset is more balanced, with the long-tail effect being less pronounced. This allows us to contrast any change in the behavior of the LLL methods going from a uniform distribution to a long-tail distribution. We build the mid-scale LLFL dataset on top of the 6DS dataset introduced in . It is composed of 186 unique facts and 28,624 images, divided into training and test samples. We divided this dataset both randomly and semantically into 4 tasks.
Mid-Scale Semantic Split. We use the same clustering mechanism described above to create a benchmark of 4 semantically divided tasks. By visually analyzing the clusters, we find the following distribution:
- Task 1: facts describing human actions, such as person, riding bike and person, jumping;
- Task 2: facts about different objects, such as battingball, battingstumps, dog, and car;
- Task 3: facts describing humans holding or playing musical instruments, such as person, playing, flute and person, holding, cello;
- Task 4: facts describing human interactions, such as person, arguing with, person and person, dancing with, person.
Mid-Scale Random Split. We followed the same procedure described for the large-scale benchmarks to split the facts into 4 random groups. Note that prior work evaluated image retrieval (with average precision) on a similar random split of 6DS, while in this work we look at the task of fact recognition (measured in accuracy), which is meaningful for both the mid-scale and the large-scale benchmarks (our focus), since the vast majority of the facts have only one image example.
4 Lifelong Learning Approaches
In this section, we first formalize the life-long learning task, then we review the evaluated methods, and finally we explain how we adapt them to fact learning.
4.1 LLL Task
Given a training set D = {(x_n, y_n)}, we learn from T different tasks over time, where D^t = {(x_n^t, y_n^t)} denotes the data of task t. The labels y_n in our benchmarks are structured labels. For most model-based approaches, we can formalize the LLL loss as follows. The loss of training the new task t is L(θ) = L_t(θ) + R(θ), where θ are the parameters of the network such that θ_k is the k-th parameter of an arbitrary neural network (a deep neural network with both convolutional and fully connected layers, in our case). The regularizer R is defined as R(θ) = (λ/2) Σ_k Ω_k^{t−1} (θ_k − θ_k^{t−1})², where λ is a hyperparameter for the regularizer, θ^{t−1} are the previous task’s network parameters, and Ω_k^{t−1} is a weight indicating the importance of parameter θ_k for all tasks up to t−1. Hence, we strongly regularize the parameters that were important at the previous time step (i.e., high Ω_k^{t−1}) and weakly regularize the non-important parameters (i.e., low Ω_k^{t−1}). This way, we allow the latter to change more freely. Under this importance-weight-based framework, Finetuning, Intelligent Synapses , and Memory Aware Synapses  are special cases.
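The importance-weighted regularizer described above can be sketched in a few lines; `omega` plays the role of the importance weights and `lam` the regularization strength (the total training loss would be the data loss plus this term).

```python
import numpy as np

def lll_regularizer(theta, theta_prev, omega, lam):
    """R = (lam / 2) * sum_k omega_k * (theta_k - theta_prev_k)^2.
    Parameters with large omega_k are anchored to their previous values,
    while unimportant parameters remain free to change."""
    return 0.5 * lam * float(np.sum(omega * (theta - theta_prev) ** 2))
```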
4.2 Evaluated Methods
(1) Finetuning (FT): FT is a common LLL baseline. It does not involve any importance parameters, i.e., Ω_k = 0 for all k.
(2) Synaptic Intelligence  (Int.Synapses) estimates the importance weights in an online manner while training based on the contribution of each parameter to the change in the loss. The more a parameter contributes to a change in the loss, the more important it is.
(3) Memory Aware Synapses  (MAS) defines the importance of parameters in an online way based on their contribution to the change in the function output: Ω_k = (1/N) Σ_n ‖∂F(x_n; θ)/∂θ_k‖, where ∂F(x_n; θ)/∂θ_k is the gradient of the learned function with respect to θ_k, evaluated at the data point x_n. F maps the input x_n to the output F(x_n; θ). This mapping is the target that MAS preserves to deal with forgetting.
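A toy sketch of the MAS importance estimate, using a linear model F(x) = θ·x as a stand-in for the network; the real method accumulates gradients of the squared L2 norm of the (multi-dimensional) network output over the data, but the key property is visible here: no labels are needed.

```python
import numpy as np

def mas_importance(theta, X):
    """MAS-style importance for a toy linear model F(x) = theta . x:
    Omega_k = (1/N) * sum_n | d ||F(x_n)||^2 / d theta_k |.
    Only the model's own outputs are used -- no labels required."""
    omega = np.zeros_like(theta)
    for x in X:
        out = theta @ x                 # F(x_n)
        omega += np.abs(2.0 * out * x)  # gradient of the squared output norm
    return omega / len(X)
```

Parameters that strongly affect the function output on the observed data receive large importance and are then anchored by the regularizer above.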
(4) ExpertGate . ExpertGate is a data-based approach that learns an expert model for every task, where every expert is adapted from the most related previous task. An auto-encoder is also trained for every task; these auto-encoders determine the most related expert at test time given an example input, and that expert is then used to make the prediction. Note that the memory storage requirement of ExpertGate is T times the number of parameters of a single model, with T the number of tasks, which might limit its practicality.
(5) Incremental Moment Matching  (IMM). For sequential tasks, IMM finds the mean and variance of a Gaussian approximation of the posterior over the parameters of each task. At the end of the learned sequence, the obtained models are merged through first- or second-moment matching. Similarly to ExpertGate, IMM needs to store all models – at least if one wants to be able to add more tasks in the future. We find mode-IMM to work consistently better than mean-IMM, so we report mode-IMM in our experiments.
(6) Joint Training (Joint): In joint training, the data is not divided into tasks and the model is trained on the entire training data at once. As such, it violates the LLL assumption. This can be seen as an upper bound for all LLL methods that we evaluate.
4.3 Adapting LLL methods to fact learning
We use the joint-embedding architecture proposed in  as our backbone architecture to compare the evaluated methods. We chose this architecture due to its superior performance compared to other joint-embedding models like [9, 13, 24] and its competitive performance with multi-class cross-entropy. The main difference between joint-embedding models and standard classification models is in the output layer. Instead of a softmax output, the last layer in a joint-embedding model consists of a projection onto a joint embedding space. This allows exploiting the semantic relations between facts as well as the structure in the data, as explained before. However, as discussed in the related work section, this is problematic for some of the LLL methods, such as [15, 28], which assume a different output space for each task. This makes the problem challenging and may raise other forgetting aspects. Note that we used the same data loss term in all the methods evaluated in the previous section.
We first present the evaluation metrics, then evaluate the different methods on our benchmarks and discuss the results, and finally we provide a more detailed analysis on long-tail, knowledge acquisition over time, and few-shot learning.
5.1 Fact Learning Evaluation metrics
Evaluation Metric (Standard vs Generalized). A central concept of LLL is that at a given time t we can only observe a subset D^t of the labeled training data. Over time, we learn from T different tasks. The categories in the different tasks are not intersecting, i.e., if C^t is the set of all category labels in task t, then C^i ∩ C^j = ∅ for i ≠ j. Let C = ∪_t C^t denote the entire label space covered by all tasks. Many existing works assume that one does not have to disambiguate between different tasks, i.e., for a predictive function f, we compute acc_s^t as the accuracy of classifying test data from task t into C^t (the label space of task t). The accuracy is computed per task: acc_s^t = (1/N^t) Σ_n 1[f(x_n^t) = y_n^t],
where y_n^t is the ground-truth label for instance x_n^t. This metric assumes that at test time one knows the task of the input image. This is how most existing works are evaluated. However, this ignores the fact that determining the right task can be hard, especially when tasks are related. Therefore, we also evaluate across all tasks, which we refer to as Generalized LLL.
In the generalized LLL metric, the search space at evaluation time covers the entire label space across tasks (i.e., C). Hence, we compute acc_g^t as the accuracy of classifying test data from task t into C (the entire label space), which is more realistic in many cases. In our experiments, f is a visual-semantic embedding model, i.e., f(x) = argmax_{y ∈ C} s(φ(x), ψ(y)), where s is a similarity function between the visual embedding of image x, denoted by φ(x), and the semantic embedding of label y, denoted by ψ(y). φ is typically a CNN sub-network and ψ is a semantic embedding function of the label (e.g., word2vec ). The above two metrics can easily be generalized to the Top-K standard and generalized accuracies that we use in our experiments.
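The difference between the two metrics can be sketched with a toy nearest-label classifier in the embedding space (hypothetical helper; dot product as the similarity function s):

```python
import numpy as np

def accuracy(vis, labels, label_emb, search_space):
    """Classify each visual embedding to the highest-scoring label embedding
    within `search_space`: the standard metric restricts the space to one
    task's labels, the generalized metric uses all labels across tasks."""
    correct = 0
    for v, y in zip(vis, labels):
        scores = {c: float(v @ label_emb[c]) for c in search_space}
        pred = max(scores, key=scores.get)
        correct += (pred == y)
    return correct / len(labels)
```

With toy embeddings, a sample can be classified correctly within its own task's label space yet incorrectly once a similar label from another task enters the search space, which is exactly the gap between the standard and the generalized accuracy.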
For each metric, we summarize results by averaging over tasks (“mean”) and over examples (“mean over examples”), creating slightly different results when tasks are not balanced.
Similarity Measure between Tasks (word2vec, SPO overlap).
As an analysis tool, we measure the similarity between tasks in both the semantic and the random splits using two metrics. In the first metric, the similarity is measured by the cosine similarity between the average word2vec representations of the facts in each task. In the second metric, we compute the overlap between two tasks, separately for S, P, and O. For example, to compute the overlap in S, we count the number of intersecting unique subjects and divide it by the number of unique subjects in the union of both tasks. This results in a ratio between 0 and 1 that we compute for subjects and similarly for predicates and objects. Based on these three ratios, we compute their geometric mean as an indicator of the similarity between the two tasks. We denote this measure as the SPO overlap.
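A sketch of the SPO overlap computation, with each task represented as a set of (S, P, O) triples and None marking undefined pieces:

```python
def spo_overlap(task_a, task_b):
    """Geometric mean of the per-component (S, P, O) Jaccard overlaps
    between two tasks, each given as a set of (S, P, O) fact triples."""
    ratios = []
    for idx in range(3):  # component 0 = S, 1 = P, 2 = O
        a = {f[idx] for f in task_a if f[idx] is not None}
        b = {f[idx] for f in task_b if f[idx] is not None}
        union = a | b
        ratios.append(len(a & b) / len(union) if union else 0.0)
    return (ratios[0] * ratios[1] * ratios[2]) ** (1.0 / 3.0)
```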
In this section we compare several state-of-the-art LLL approaches on the mid-scale and the large-scale LLL benchmarks which we introduced in Sec. 3. Tables 3 and 4 show the Top5 accuracy for the random and the semantic splits on the mid-scale dataset. Each table shows the performance using the standard metric (Eq. 3) and the generalized metric (Eq. 4). For the two large-scale benchmarks, the results are reported in Tables 5, 6, 7, and 8. Note that the reported Joint Training violates the LLL setting as it trains on all data jointly. Looking at these results, we make the following observations:
[Tables 3 and 4: Top5 accuracy per task (T1–T4), mean, and mean over examples, under the standard and generalized metrics on the mid-scale random and semantic splits, together with the drop from the standard to the generalized metric over tasks and over examples.]
[Top5 accuracy per task (T1–T8), mean, and mean over examples on the large-scale random split, under the standard and the generalized metric.]
(1) The generalized LLL accuracy is always significantly lower than the standard LLL accuracy. On the large-scale benchmarks it is lower on average for both the random and the semantic splits. While the large-scale benchmarks are more challenging than the mid-scale benchmarks, as apparent from the reported accuracies, the drop in performance when switching to the generalized accuracy is significantly larger on the mid-scale benchmarks. This could be due to more overlap between tasks on the large-scale dataset, as we discuss later, which reduces forgetting and leads to better discrimination across tasks.
(2) The LLL performance on the random split is much better than on the semantic split. Note that the union of the test examples across tasks is the same on both splits; hence, the “mean over examples” performance on the random and semantic splits is comparable. Looking at the performance of the evaluated methods on both splits of the large-scale dataset, the methods show a clear average relative gain under the generalized metrics when using the random split instead of the semantic split. This gain is not observed for ExpertGate, which shows only a small relative gain when moving to the random split compared to the other methods; we discuss the behavior of ExpertGate in a separate point below. The gain is even larger on the mid-scale dataset when excluding ExpertGate. What explains these results is that the similarity between tasks in the random split is much higher on the large-scale dataset than on the mid-scale dataset (i.e., 0.96 vs 0.22 using the word2vec metric and 0.84 vs 0.25 using the SPO metric – see Table 9 for the task correlations on the LSC dataset and the corresponding table for the mid-scale dataset in the supplementary material). This shows the learning difficulty of the semantic split and partially explains the poor performance on it.
[Top5 accuracy per task (T1–T8), mean, and mean over examples on the large-scale semantic split, under the standard and the generalized metric.]
[Table 9: mean inter-task similarity on the large-scale benchmark – word2vec metric: 0.07 (semantic) vs 0.96 (random); geometric mean of S, P, and O overlap: 0.238 (semantic) vs 0.453 (random).]
(3) ExpertGate is the best performing model on the semantic split. However, it is among the worst performing models on the random split. We argue that this is due to the setup of the semantic split, where sharing across tasks is minimized. This makes each task model behave like an expert on a restricted concept space, which matches the underlying assumption of how ExpertGate works. However, this advantage comes at the expense of storing one model for every task, which can be expensive in terms of storage and might not always be feasible as the number of tasks increases. Additionally, having separate models requires selecting a model at test time and removes the ability to benefit from knowledge learnt in later tasks when there is semantic overlap between tasks. This can be seen on the random split of the mid-scale dataset (see Table 3), where ExpertGate underperforms several other LLL models: 43.69% generalized accuracy for ExpertGate vs 58.77% generalized accuracy for the best performing model. Similarly, on the large-scale dataset, ExpertGate performs significantly worse on the random split (15.34% generalized accuracy for ExpertGate vs 22.48% generalized accuracy for the best performing model); see Table 8. The shared information across tasks on the random split is high, which violates the expert-selection assumption of ExpertGate and hence explains its relatively poor performance on the random split.
(4) For the mid-scale dataset and with the generalized metric, Incremental Moment Matching (IMM) is the best performing of the model-based methods using a single model (Finetune, IMM, Int.Synapses, MAS) on both the random and the semantic splits (see Tables 3 and 4). Only for the random split evaluated with the standard metric is MAS slightly better, indicating that MAS might be better at the task level. We hypothesize that IMM benefits from its access to the distribution of the parameters after training each task, before the distributions’ mode is computed. This is an advantage that MAS and Int.Synapses do not have, and hence the IMM model can generalize better across tasks. For the large-scale dataset, we observe that MAS performs better than IMM on both the random and the semantic split, but especially on the random split; see Table 6. This may be because MAS is better at learning low-shot classes, as we discuss later in our few-shot analysis; see Tables 11 and 12. It is also due to the high similarity between the tasks at this much larger scale (see Table 9), which makes the parameter distributions that work well across tasks similar to each other, so that IMM no longer has the aforementioned advantage.
5.3 Detailed Analysis
Long-tail Analysis. Fig. 4 (left and middle) shows the head-to-tail performance on the random split and the semantic split, respectively. Specifically, the figure shows the Top5 generalized accuracy over different ranges of seen examples per class (the x-axis in the figure). Fig. 4 (right) shows the relative improvement of the model trained on the random split over the one trained on the semantic split. Using the standard metrics, the head classes perform better with models trained on the semantic split than on the random split, and the random split benefits the tail classes the most; see the supplementary material (Section 4). However, as shown in Fig. 4 (right), under the generalized metric the random split helps everywhere, with no clear relation to the class frequency (x-axis).
Gained Knowledge Over Time. Figure 5 shows the gained knowledge over time, measured by the generalized Top5 accuracy on the entire test set of all tasks after training each task. Figure 5 (left) shows that the LLL methods tend to gain more knowledge over time when the random split is used: the high similarity between tasks makes forgetting over time less catastrophic. Figure 5 (right) shows that the models struggle to gain knowledge over time when the semantic split is used: the low similarity between tasks makes forgetting over time more catastrophic. Note that the y-axes in the left and right parts of Figure 5 are comparable, since both measure performance on the entire test set, which is the same for the semantic and the random splits.
For a principled evaluation, we consider measuring the forward and the backward transfer as defined in the Gradient Episodic Memory (GEM) work. After the model finishes learning task $t_i$, we evaluate its test performance on all $T$ tasks. By doing so, we construct the matrix $R \in \mathbb{R}^{T \times T}$, where $R_{i,j}$ is the test classification accuracy of the model on task $t_j$ after observing the last sample from task $t_i$. Letting $\bar{b}$ be the vector of test accuracies for each task at random initialization, we can define the backward and the forward transfer as: $\mathrm{BWT} = \frac{1}{T-1}\sum_{i=1}^{T-1} \left(R_{T,i} - R_{i,i}\right)$ and $\mathrm{FWT} = \frac{1}{T-1}\sum_{i=2}^{T} \left(R_{i-1,i} - \bar{b}_i\right)$. The larger these metrics, the better the model: if two models have similar accuracy, the preferable one is the one with larger BWT and FWT. We use the generalized accuracy for computing BWT and FWT.
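These definitions can be evaluated directly from the accuracy matrix; below is a minimal sketch (function name and 0-based indexing are our own, following the GEM formulas):

```python
import numpy as np

def transfer_metrics(R, b):
    """Backward and forward transfer from an accuracy matrix.

    R[i, j] is the test accuracy on task j after training on task i
    (0-indexed, T tasks total), and b[j] is the accuracy on task j at
    random initialization.
    """
    R = np.asarray(R, dtype=float)
    b = np.asarray(b, dtype=float)
    T = R.shape[0]
    # BWT: how much accuracy on earlier tasks changed after the final task
    bwt = np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])
    # FWT: accuracy on a task just before training it, vs. random init
    fwt = np.mean([R[i - 1, i] - b[i] for i in range(1, T)])
    return bwt, fwt
```

A negative BWT corresponds to (catastrophic) forgetting, while a positive FWT indicates that earlier tasks help later, unseen ones.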
Figures 6 and 7 show the performance on each task's test set after training each task (from the first task to the last). As expected, the performance on task $t_i$'s test set peaks right after training $t_i$ and degrades after training subsequent tasks. Int.Synapses and Finetune show the best performance on the current task, at the expense of more forgetting on previous tasks compared to MAS. Comparing the performance on task $t_i$ right after training $t_i$ to its performance after training the last task, as a measure of forgetting, we observe a smaller drop on the random split than on the semantic split; see the figures. This is also reflected in the higher backward transfer on the random split; see Table 10.
Few-shot Analysis. Now, we focus on analyzing the subset of the testing examples belonging to facts with few training examples. Tables 12 and 12 show few-shot results on the semantic and the random split, respectively. As already observed earlier, the performance on the random split is better than on the semantic split. We observe that finetuning is, on average, the best performing approach for few-shot performance on both splits. Looking closely at the results, however, the main gain of finetuning comes from its high accuracy on the last task. This shows that existing LLL methods do not learn the tail, and that there is a need for new methods capable of learning the tail of the distribution in a LLL setting.
6 Conclusion
In this paper, we proposed two benchmarks to evaluate fact learning in a lifelong learning setup. We designed a methodology to split an existing fact learning dataset into multiple tasks, taking the specific constraints into account and aiming for a setup that mimics real-world application scenarios. With these benchmarks, we hope to foster research towards more large-scale, human-like artificial visual learning systems and to encourage the study of challenges such as the long-tail distribution.
Acknowledgements Rahaf Aljundi’s research was funded by an FWO scholarship.
In the supplementary, we attach two folders that include the large-scale and mid-scale benchmark annotations that we developed; see the "large-scale_benchmarks" and "mid-scale_benchmarks" folders. These folders contain a comprehensive list of the tasks and the names of the facts of the large-scale and mid-scale benchmarks, respectively. This document also includes additional details and results, listed below.
-  Mid-scale Task Similarities using average word2vec space
-  Large-Scale Semantic Splits (8 Tasks) on word2vec space
-  Standard Accuracy (Long Tail and Semantic/Random Improvement)
-  Long-Tail Distribution Statistics on the Large-Scale Dataset
-  Mid-Scale Dataset Dendrogram
7 Mid-scale Task Similarities using average word2vec space (top part) and geometric mean of S, P, and O overlap (bottom part)
|Split|word2vec mean similarity|g-mean of S, P, and O overlap|
|Semantic|0.02|0.25|
|Random|0.22|0.84|
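The geometric-mean overlap statistic between two tasks can be sketched as follows; this is a hypothetical reconstruction (the per-slot overlap is assumed to be a Jaccard similarity over each slot's vocabulary, and all names are our own):

```python
from math import prod

def spo_overlap_similarity(task_a, task_b):
    """Geometric mean of subject/predicate/object vocabulary overlaps.

    Each task is a list of (S, P, O) fact triplets. For each of the
    three slots we compute the Jaccard similarity of the two tasks'
    vocabularies, then combine the three overlaps by their geometric
    mean.
    """
    overlaps = []
    for slot in range(3):
        vocab_a = {fact[slot] for fact in task_a}
        vocab_b = {fact[slot] for fact in task_b}
        union = vocab_a | vocab_b
        overlaps.append(len(vocab_a & vocab_b) / len(union) if union else 0.0)
    return prod(overlaps) ** (1.0 / 3.0)
```

Under this measure, semantic splits (disjoint concept clusters) score low and random splits (shared vocabularies) score high, consistent with the table above.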
7.1 Large Scale Semantic Splits 8 Tasks on word2vec space
7.2 Standard Accuracy (Long Tail and Semantic/Random Improvement)
7.3 SPO Generalization
It is desirable for each lifelong learning method to be able to generalize to an SPO interaction from training examples involving its components, even when there are zero or very few training examples for the exact SPO with all its parts S, P, and O. For example, for the SPO ⟨dog, riding, horse⟩, the PO part ⟨riding, horse⟩ might have been seen in more than 15 examples (TH=15) and the S part ⟨dog⟩ might also have been seen in more than 15 examples, while the full SPO has hardly been seen at all. Tables 14 and 15 show the Top5 performance of the different LLL methods on SPOs with at most 5 training examples, for generalization cases such as (SP, O), (P, SO), and (PO, S), where the listed parts have been seen more than 15 times. Similarly, Tables 16 and 17 show a different set of generalization cases: (SP, PO, SO), (SP, PO), (SP, SO), and (PO, SO).
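Assigning a test fact to one of these generalization cases amounts to counting how often the exact SPO and each of its parts occur in the training set; the sketch below is a hypothetical version of that bookkeeping (function name and return convention are our own; the thresholds follow the text, TH=15 for parts and at most 5 for the full SPO):

```python
from collections import Counter

def generalization_case(fact, train_facts, th=15, few=5):
    """Label a few-shot test fact by which of its parts were well seen.

    `fact` is an (S, P, O) triplet and `train_facts` the list of
    training triplets. A fact whose exact SPO has at most `few`
    training examples is assigned the set of parts (e.g. 'SP', 'PO',
    'S') seen more than `th` times; frequent facts return None.
    """
    spo_counts = Counter(train_facts)
    part_counts = {
        "SP": Counter((s, p) for s, p, o in train_facts),
        "PO": Counter((p, o) for s, p, o in train_facts),
        "SO": Counter((s, o) for s, p, o in train_facts),
        "S": Counter(s for s, p, o in train_facts),
        "P": Counter(p for s, p, o in train_facts),
        "O": Counter(o for s, p, o in train_facts),
    }
    s, p, o = fact
    keys = {"SP": (s, p), "PO": (p, o), "SO": (s, o), "S": s, "P": p, "O": o}
    if spo_counts[fact] > few:
        return None  # not a few-shot fact
    return {name for name, key in keys.items() if part_counts[name][key] > th}
```

Grouping the test set by the returned part sets then yields per-case Top5 accuracies of the kind reported in the tables.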
7.4 Qualitative Examples
This section shows correctly and incorrectly classified examples for each of fine-tuning, Int. Synapses, and Memory Aware Synapses.
8 Mid-Scale Benchmark Dendrogram
Figure 13 shows the dendrogram obtained from the agglomerative clustering performed in the word2vec space of the facts from the mid-scale dataset. The different colors indicate the different clusters. Each cluster later forms a task.
-  The magenta cluster mostly represents person actions and contains the ⟨person⟩ fact that is needed in the rest of the tasks.
-  The red cluster corresponds to the second task and is mainly composed of facts about different objects.
-  The cyan cluster is the third cluster and contains facts describing humans holding or playing musical instruments.
-  The green cluster (last) is composed of facts describing human interactions.
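The task-construction step behind this figure can be sketched as follows; this is a minimal, hypothetical implementation (the linkage criterion is assumed to be centroid distance, and all names are our own):

```python
import numpy as np

def agglomerative_tasks(fact_embeddings, n_tasks):
    """Group facts into tasks by a simple agglomerative clustering.

    `fact_embeddings` is an (n_facts, d) array of averaged word2vec
    vectors, one row per fact. Starting from singleton clusters, the
    pair of clusters with the closest centroids is merged repeatedly
    until `n_tasks` clusters remain; each cluster becomes one task.
    Returns a list of lists of fact indices.
    """
    X = np.asarray(fact_embeddings, dtype=float)
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_tasks:
        centroids = np.array([X[c].mean(axis=0) for c in clusters])
        # pairwise centroid distances, with self-distances masked out
        dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        a, b = np.unravel_index(np.argmin(dists), dists.shape)
        a, b = min(a, b), max(a, b)
        clusters[a].extend(clusters.pop(b))
    return clusters
```

Cutting the resulting merge hierarchy at the desired number of clusters is what produces the colored groups in the dendrogram.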
-  Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., Tuytelaars, T.: Memory aware synapses: Learning what (not) to forget. In: ECCV (2018)
-  Aljundi, R., Chakravarty, P., Tuytelaars, T.: Expert gate: Lifelong learning with a network of experts. In: CVPR (2017)
-  Chao, W.L., Changpinyo, S., Gong, B., Sha, F.: An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In: European Conference on Computer Vision. pp. 52–68. Springer (2016)
-  Chaudhry, A., Dokania, P.K., Ajanthan, T., Torr, P.H.: Riemannian walk for incremental learning: Understanding forgetting and intransigence. In: International Conference on Machine Learning (2018)
-  Chen, X., Shrivastava, A., Gupta, A.: Neil: Extracting visual knowledge from web data. In: Computer Vision (ICCV), 2013 IEEE International Conference on. pp. 1409–1416. IEEE (2013)
-  Divvala, S.K., Farhadi, A., Guestrin, C.: Learning everything about anything: Webly-supervised visual concept learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3270–3277 (2014)
-  Elhoseiny, M., Cohen, S., Chang, W., Price, B.L., Elgammal, A.M.: Sherlock: Scalable fact learning in images. In: AAAI. pp. 4016–4024 (2017)
-  Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A.A., Pritzel, A., Wierstra, D.: Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734 (2017)
-  Gong, Y., Ke, Q., Isard, M., Lazebnik, S.: A multi-view embedding space for modeling internet images, tags, and their semantics. International journal of computer vision 106(2), 210–233 (2014)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
-  Käding, C., Rodner, E., Freytag, A., Denzler, J.: Fine-tuning deep neural networks in continuous learning scenarios. In: Asian Conference on Computer Vision. pp. 588–605. Springer (2016)
-  Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences p. 201611835 (2017)
-  Kiros, R., Salakhutdinov, R., Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539 (2014)
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. pp. 1097–1105 (2012)
-  Lee, S.W., Kim, J.H., Jun, J., Ha, J.W., Zhang, B.T.: Overcoming catastrophic forgetting by incremental moment matching. In: Advances in Neural Information Processing Systems. pp. 4652–4662 (2017)
-  Li, Z., Hoiem, D.: Learning without forgetting. In: European Conference on Computer Vision. pp. 614–629. Springer (2016)
-  Lomonaco, V., Maltoni, D.: Core50: a new dataset and benchmark for continuous object recognition. In: Conference on Robot Learning (2017)
-  Lopez-Paz, D., Ranzato, M.: Gradient episodic memory for continual learning. In: Advances in Neural Information Processing Systems (2017)
-  Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems. pp. 3111–3119 (2013)
-  Mitchell, T.M., Cohen, W.W., Hruschka Jr, E.R., Talukdar, P.P., Betteridge, J., Carlson, A., Mishra, B.D., Gardner, M., Kisiel, B., Krishnamurthy, J., et al.: Never ending learning. In: AAAI. pp. 2302–2310 (2015)
-  Pasquale, G., Ciliberto, C., Rosasco, L., Natale, L.: Object identification from few examples by improving the invariance of a deep convolutional neural network. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 4904–4911 (Oct 2016), http://ieeexplore.ieee.org/document/7759720/
-  Plummer, B.A., Mallya, A., Cervantes, C.M., Hockenmaier, J., Lazebnik, S.: Phrase localization and visual relationship detection with comprehensive image-language cues. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1928–1937 (2017)
-  Rebuffi, S.A., Kolesnikov, A., Lampert, C.H.: iCaRL: Incremental classifier and representation learning. arXiv preprint arXiv:1611.07725 (2016)
-  Romera-Paredes, B., Torr, P.: An embarrassingly simple approach to zero-shot learning. In: International Conference on Machine Learning. pp. 2152–2161 (2015)
-  Shmelkov, K., Schmid, C., Alahari, K.: Incremental learning of object detectors without catastrophic forgetting. In: The IEEE International Conference on Computer Vision (ICCV) (2017)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
-  Thrun, S., O’Sullivan, J.: Clustering learning tasks and the selective cross-task transfer of knowledge. In: Learning to learn, pp. 235–257. Springer (1998)
-  Triki, A.R., Aljundi, R., Blaschko, M.B., Tuytelaars, T.: Encoder based lifelong learning. arXiv preprint arXiv:1704.01920 (2017)
-  Wang, Y.X., Ramanan, D., Hebert, M.: Growing a brain: Fine-tuning by increasing model capacity. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
-  Zenke, F., Poole, B., Ganguli, S.: Continual learning through synaptic intelligence. In: Proceedings of the 34th International Conference on Machine Learning. vol. 70, pp. 3987–3995. PMLR (06–11 Aug 2017)
-  Zhang, J., Kalantidis, Y., Rohrbach, M., Paluri, M., Elgammal, A., Elhoseiny, M.: Large-scale visual relationship understanding. arXiv preprint arXiv:1804.10660 (2018)