Modern deep-learning-based object detection approaches Ren et al. (2015); Bochkovskiy et al. (2020); Cai and Vasconcelos (2018) have achieved remarkable results on benchmarks such as the Pascal VOC and MS-COCO datasets. This success, however, usually requires a large amount of labeled data, which can be labor-intensive and time-consuming to acquire. In contrast, humans can recognize and locate new objects after observing only one or a few instances. Such an ability to learn from few examples is desirable for many applications in low-data regimes where labeled data is rare and hard to collect. The challenge of detecting novel classes from few samples is usually referred to as Few-Shot Object Detection (FSOD), which previous studies frame as either a metric-learning or a meta-learning problem. For instance, TFA Wang et al. (2020) is one of the most representative approaches that adopt metric learning for few-shot detection. Meta-RCNN Yan et al. (2019) proposes a meta-learning attention mechanism that emphasizes correlated feature channels for better classification. Despite their successes, when extending to novel classes these methods require storing and replaying a memory buffer for old classes, which often results in heavy model retraining and inevitably hinders their real-world application.
In this paper, we study a more challenging problem setting, i.e., Class-Incremental Few-Shot Object Detection (CI-FSOD) Perez-Rua et al. (2020). Unlike conventional FSOD methods, CI-FSOD requires not only reducing the hardware resource requirement but also retaining comparable detection performance. Hence, a desirable solution should satisfy three requirements. First, the model should learn efficiently from a continual data stream without catastrophic forgetting; in particular, when the model adapts to new tasks, it cannot be fine-tuned with previous task data, which enables fast adaptation and saves storage. Second, the model should generalize well even when the volume of training samples for novel classes is small. Third, previous knowledge gained from the large-scale base training set should be well preserved. Notably, many previous approaches focus only on improving feature representations for novel classes but fail to achieve good knowledge retention on the base classes.
Catastrophic forgetting affects the training of deep models, limiting their ability to learn multiple tasks sequentially. To tackle catastrophic forgetting on the large-scale base classes and maintain overall performance, TFA fine-tunes only the last two layers on a small balanced training set containing both base and novel classes, while the feature extractor is kept fixed. This can be viewed as a practical way to preserve the previously learned feature distribution of the base classes. However, it is unreasonable to expect that the pretrained RoI feature extractor, which encodes much semantic information about the base classes, can directly represent unseen novel classes without further fine-tuning. Indeed, when we fine-tune TFA with the RoI feature extractor unfrozen, it is surprising to see that the AP on novel classes improves significantly. This finding indicates that fine-tuning the deeper layers of the CNN is crucial for learning unseen concepts, since it yields better features. However, the accompanying drop in base-class performance reveals that fine-tuning on limited data may inevitably lead to catastrophic forgetting of the originally generalizable representation Zhou et al. (2020). To better preserve base-class performance and make our method compatible with non-incremental settings, we propose to decouple the representation learning of base and novel classes into two independent branches. In particular, we propose a novel Double-Branch Framework to handle base-class retention and novel-class adaptation simultaneously.
Beyond knowledge retention on the base classes, we still face catastrophic forgetting on the sequentially encountered novel classes. In this work, we find that this issue can be attributed to two causes. First, for models that learn in a few-shot sequential manner, overfitting due to data scarcity may further exacerbate catastrophic forgetting. According to the well-established plasticity-stability theory Mirzadeh et al. (2020b), a model not only relies on plasticity to adapt quickly to novel tasks but also requires stability to prevent catastrophic forgetting of old tasks. From a loss-landscape perspective, the flatness of the local minimum found for each task plays an essential role in the overall degree of forgetting. In particular, a wide and flat local minimum is desired for each incremental task, so that the minima found for different tasks can overlap with each other to counter catastrophic forgetting Mirzadeh et al. (2020b). However, under the few-shot condition, optimizing a neural network on a small training set is almost equivalent to adopting a large-batch training regime, which tends to converge to narrow and sharp local minima of the training and testing functions; as is well known, sharp minima lead to poorer generalization Cha et al. (2020). As a result, networks trained on few-shot tasks tend to be less stable and forget previously seen tasks more severely. A common approach, Incremental Moment Matching (mean-IMM) Lee et al. (2017), prevents catastrophic forgetting by averaging the model weights of two sequential tasks, implicitly assuming that the optimum obtained at each incremental step is flat and that the optima overlap with each other. However, we argue that this assumption holds only in data-sufficient scenarios, not in few-shot ones. Given abundant training samples, the overall incremental-learning process is unlikely to suffer from the large-batch training dilemma: even without additional regularization, the local minimum obtained at each step can still be flat and wide, and a model with a flatter minimum generalizes better to unseen classes because it is more likely to overlap with the other optima. This is not the case in few-shot scenarios.
Second, in an incremental-learning setting, the model can access only the data of the current task, since no data from previous tasks can be stored and replayed. As a result, inter-class separation can be achieved only within the current task, while the discrimination between classes of different tasks is missing. For example, if the first task is to discriminate truck vs. car and the second task is to classify bus vs. airplane, no discriminative feature is learned specifically to distinguish a truck from a bus.
In this work, we tackle the issues above from two perspectives. 1) Optimization and loss landscape: we propose a novel Stable Moment Matching (SMM) algorithm for the CI-FSOD setting. The pretrained base-set weights serve as a good parameter initialization because of their flatness; our goal is to pass this flatness on to the sequentially encountered few-shot tasks by restricting the parameter-updating process to the same locally flat region. In particular, we reinforce training stability by exerting a stronger resistance force against model updating, restricting the overall parameter displacement to be upper-bounded. This confines the search space of parameter updating to a small local region around the optimum of the previous task, so that a high intersection of the low-loss surfaces of different tasks can be achieved. 2) Inter-task class separation: we propose to store RoI features of previous tasks, rather than the original images, to represent old classes. A margin-based regularization loss is proposed to optimize margins for the frequently misclassified old classes.
2 Related Work
Incremental Few-Shot Learning Incremental few-shot learning is attracting increasing attention due to its realistic applications Finn et al. (2017); Snell et al. (2017). However, most existing methods address the single-image classification problem and hence are not readily applicable to object detection. Our method falls within the context of regularization-based learning approaches Chen and Lee (2021); Lee et al. (2017); Kirkpatrick et al. (2017). For example, when learning incrementally, ILDVQ Chen and Lee (2021) preserves the network's feature representation on older tasks by using a less-forgetting loss, where the response targets are computed using data from the current task. As a result, ILDVQ does not require storing older training data. However, this strategy may be inefficient if the data for the new task is too scarce or belongs to a distribution different from those of prior tasks. In contrast, we address the incremental few-shot learning problem from the new perspective of parameter space. The proposed Stable Moment Matching algorithm strengthens the stability of few-shot adaptation and is more robust to data scarcity.
Few-Shot Object Detection The most recent few-shot detection approaches are adapted from the few-shot learning paradigm. Chen et al. (2018) propose a distillation-based approach with background-depression regularization to suppress redundant, distracting backgrounds. A meta-learning attention generation network is proposed in Kang et al. (2019) to emphasize category-relevant information by reweighting top-layer feature maps with class-specific channel-wise attention vectors. Sharing the same insight, Meta-RCNN Yan et al. (2019) applies the generated attention to each region proposal instead of the top-layer feature map. TFA Wang et al. (2020) replaces the original classification head of Faster-RCNN with a cosine classifier to stabilize the adaptation procedure.
3.1 Problem Formulation
The CI-FSOD problem is normally formulated as a two-phase learning task. In the first, representation-learning phase, a detection model with parameters $\theta_0$ is pretrained on a large set of base classes $C_0$, where $\theta_0$ denotes the learned weight parameters. In the second, incremental-learning phase, given sequential new few-shot tasks $\{T_1, \dots, T_N\}$, model updating is performed over multiple stages from $T_1$ to $T_N$. During the $i$-th learning step, the previous category space $C_{i-1}$ is expanded with the new classes $C_{T_i}$ in $T_i$, so that $C_i = C_{i-1} \cup C_{T_i}$, where $C_{i-1} \cap C_{T_i} = \emptyset$ is assumed for simplicity. The objective of CI-FSOD is to effectively learn a model $\theta_N$ that can detect all the classes in $C_N$ with high accuracy. In this work, we use "base" and "novel" classes to differentiate the classes of the representation-learning phase (not few-shot) from those of the incremental-learning phase (few-shot). Moreover, we use "new" and "old" classes to denote the newly arriving classes in $T_i$ and the previous few-shot classes from $T_1, \dots, T_{i-1}$; note that both belong to the novel classes.
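As a minimal illustration of this bookkeeping, the expanding category space can be sketched as follows (the class names are purely illustrative, not the actual dataset splits):

```python
# Sketch of the CI-FSOD category-space expansion: each incremental task T_i
# contributes a disjoint set of new classes, and C_i = C_{i-1} ∪ C_{T_i}.

def expand_category_space(c_prev, task_classes):
    """Expand the previous category space with the classes of a new task."""
    assert not (set(c_prev) & set(task_classes)), "tasks are assumed disjoint"
    return list(c_prev) + list(task_classes)

c_base = ["truck", "car", "boat"]   # classes seen in the base phase (toy names)
tasks = [["bus"], ["airplane"]]     # one few-shot task per incremental step

c = c_base
for t in tasks:
    c = expand_category_space(c, t)

print(c)  # all classes the detector must handle after the last step
```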
3.2 Overall Framework
In this work, a novel Double-Branch Framework (DBF) is proposed to balance base-class knowledge retention against novel-class knowledge adaptation for CI-FSOD. The schematic of DBF is shown in Fig. 1, where we use Faster-RCNN as the base detector for illustration; the proposed DBF framework is also compatible with other base detectors such as YOLO. Its distinguishing components are the RoI feature extractor and the sibling head (category classification and bbox regression), where DBF is implemented with two parallel branches, termed the "Static Learning Branch" and the "Dynamic Learning Branch", respectively.
The static learning branch adopts a feature-consolidation strategy similar to TFA Wang et al. (2020). When a new task arrives, we freeze the backbone, RPN, RoI feature extractor, old-class classification weights, and box regressor, only expanding the classifier weight matrix with new-class weights and fine-tuning it with cosine similarity. Thus, the original feature representation is preserved to best represent the base classes. In contrast, in the dynamic learning branch we unfreeze the RoI feature extractor to better adapt to novel classes, together with the proposed SMM updating rule (Section III.B), an inter-task class regularization (Section III.C), and a semi-supervised pseudo-labeling approach (Section III.D) to better prevent catastrophic forgetting over long sequences of tasks. The final proposal classification output is concatenated from the two branches as illustrated in Fig. 1, where the base-class scores come from the static learning branch and the novel-class scores from the dynamic learning branch. This combination provides a less biased prediction over all learned classes in the CI-FSOD setting.
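As a rough sketch of this score assembly (the shapes, class counts, and slice-based combination below are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

# Sketch of the DBF score assembly: base-class logits are taken from the
# static branch and novel-class logits from the dynamic branch, then the
# two slices are combined into one prediction vector per proposal.

def combine_branch_scores(static_scores, dynamic_scores, n_base):
    """static_scores / dynamic_scores: (N, n_base + n_novel) branch outputs."""
    combined = dynamic_scores.copy()
    combined[:, :n_base] = static_scores[:, :n_base]  # trust static branch on base classes
    return combined

rng = np.random.default_rng(0)
static = rng.random((4, 8))    # 4 proposals, 6 base + 2 novel classes (toy sizes)
dynamic = rng.random((4, 8))
scores = combine_branch_scores(static, dynamic, n_base=6)
```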
3.3 Stable Moment Matching (SMM)
Existing regularization-based approaches such as EWC Kirkpatrick et al. (2017) and IMM Lee et al. (2017) alleviate catastrophic forgetting through parameter consolidation. Although these methods have shown promising results when training data is sufficient, their performance in the few-shot scenario is short-lived. Moreover, they cannot achieve long-term memory unless the previous Fisher matrices or model weights are preserved; i.e., good performance is achieved only for the first few incremental steps, after which it rapidly decays. Without loss of generality, we analyze the failure of IMM in the case of multi-step few-shot detection.
IMM assumes there exists a linear connector without high loss barriers between the local minima of two sequential tasks, so that their averaged point can be expected to perform well for both tasks. However, we argue that such linear connectivity can only be guaranteed when the loss surface around the local minima is flat Mirzadeh et al. (2020b, a); Frankle et al. (2020), which allows the low-error ellipsoids around the optima to be wide enough to overlap with each other. When fine-tuning with extremely small data, a model often suffers from the large-batch training dilemma Keskar et al. (2016), which results in sharp and narrow minima. Thus the chance that a loss-smooth linear connector exists between different tasks quickly becomes very small as learning continues, as illustrated in Figure 2 (a): the model is often overfitted and over-plastic (it forgets quickly).
The pretrained detector is expected to lie in a much flatter local minimum, since it is trained on abundant data. Our intuition is therefore that, to ensure the flatness of the local minima found for future few-shot tasks, the overall optimization trajectory should be strictly restricted to stay within the locally flat region of the pretrained base-set minimum. In that way, linear connectivity among different tasks can be guaranteed, and the ellipsoids of the different local minima can continue to overlap with each other even as learning continues.
Assume the local minima obtained through SGD on two consecutive tasks $T_{i-1}$ and $T_i$ are $\theta_{i-1}$ and $\theta_i$, respectively, where $\theta$ denotes the model parameters contained in the RoI feature extractor, box classifier, and box regressor in CI-FSOD. IMM simply adopts parameter interpolation,

$$\bar{\theta}_i = \alpha\,\theta_{i-1} + (1-\alpha)\,\theta_i,$$

where $\alpha$ is the interpolation ratio. If $\alpha = 0.5$, $\bar{\theta}_i$ becomes the mean of $\theta_{i-1}$ and $\theta_i$. However, IMM ignores the fact that the existence of linear connectivity between the two local minima depends on a small overall displacement between $\theta_{i-1}$ and $\theta_i$, defined as $\Delta_i = \lVert \theta_i - \theta_{i-1} \rVert$. In this work, we propose an optimized learning regime that exerts a stronger resistance force on the overall displacement between sequentially learned local minima. In particular, the proposed Stable Moment Matching (SMM) improves IMM by continually updating the model weights through parameter interpolation at every iteration with an adaptive interpolation ratio. Specifically, the learning of each task is decoupled into two phases: a classifier-learning phase $P_1$ and a representation-learning phase $P_2$.
In $P_1$, let $\theta^{kb}_{i-1}$ denote the model weights obtained from the previous task. When a new task $T_i$ arrives, we adopt the commonly used weight-imprinting strategy Qi et al. (2018) to initialize the classification weights for the new classes contained in $T_i$. Then we freeze the backbone, RPN, and RoI feature extractor, and fine-tune only the newly imprinted classifier weights and the box regressor. The resulting model $\theta^{kb}_i$, which we call the knowledge base, preserves the previous-task knowledge to the largest extent and serves as the base model for $P_2$. In $P_2$, we unfreeze the RoI feature extractor and use the knowledge base for iterative linear interpolation. After the $t$-th gradient-descent update, linear interpolation is conducted between the knowledge base and the newly updated weights $\theta'_t$ obtained from the latest single gradient-descent step,
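Weight imprinting (Qi et al., 2018) initializes a new class's cosine-classifier weight from the normalized mean of its support embeddings. A minimal numpy sketch (the embedding dimension and shot count are arbitrary toy choices):

```python
import numpy as np

def imprint_weight(support_features):
    """Initialize a new class's cosine-classifier weight as the L2-normalized
    mean of its L2-normalized support RoI embeddings."""
    feats = np.asarray(support_features, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    mean = feats.mean(axis=0)
    return mean / np.linalg.norm(mean)

rng = np.random.default_rng(1)
support = rng.normal(size=(10, 16))   # k=10 shots, 16-d toy embeddings
w_new = imprint_weight(support)
# w_new can now be appended as a new column of the classifier weight matrix
```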
$$\theta_t = \alpha\,\theta^{kb}_i + (1-\alpha)\,\theta'_t,$$

where $\alpha$ is the interpolation ratio. The interpolated weights $\theta_t$ are then used as the starting point for the next iteration; the process is visualized in Figure 2 (c). Specifically, given a mini-batch of images and a learning rate $\eta$, the model is updated by one step of gradient descent,

$$\theta'_t = \theta_{t-1} - \eta\,\nabla_{\theta} L(\theta_{t-1}),$$
where $L$ represents the overall loss function. During the training of each task, the knowledge base model is kept fixed until convergence; only when a new task arrives is the knowledge base updated, as described for $P_1$. The overall flowchart of the proposed SMM is illustrated in Figure 3.
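Putting the two steps of $P_2$ together, each SMM iteration is one SGD step followed by interpolation back toward the fixed knowledge base. A toy numpy sketch (the quadratic loss, learning rate, and interpolation ratio are illustrative, not values from the paper):

```python
import numpy as np

def smm_step(theta, theta_kb, grad, lr, alpha):
    """One SMM iteration: a plain SGD step, then linear interpolation
    with the fixed knowledge-base weights."""
    theta_sgd = theta - lr * grad
    return alpha * theta_kb + (1 - alpha) * theta_sgd

# Toy task: minimize ||theta - target||^2 starting from a "knowledge base".
theta_kb = np.zeros(4)
target = np.ones(4)
theta = theta_kb.copy()
for _ in range(200):
    grad = 2 * (theta - target)
    theta = smm_step(theta, theta_kb, grad, lr=0.05, alpha=0.2)

# theta converges to a point between theta_kb and the task optimum:
# it adapts to the new task while staying anchored near the knowledge base.
```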
From the standpoint of parameter space, an observation can be made about the total displacement of the model parameters for the $i$-th new task. Assuming each learning step length is bounded by some constant $\delta$, i.e., $\lVert \eta\,\nabla_{\theta} L \rVert \le \delta$, the upper bound of the displacement $\lVert \theta_t - \theta^{kb}_i \rVert$ for SMM after $t$ iterations of fine-tuning can be derived by recursion from the two update equations above as

$$\lVert \theta_t - \theta^{kb}_i \rVert \;\le\; \delta \sum_{k=1}^{t} (1-\alpha)^k \;\le\; \frac{(1-\alpha)\,\delta}{\alpha},$$

which indicates that the displacement of our approach is upper-bounded by $(1-\alpha)\delta/\alpha$. Similarly, for the mean-IMM approach, the corresponding upper bound after $t$ iterations of fine-tuning can be derived as

$$\lVert \bar{\theta}_t - \theta^{kb}_i \rVert \;\le\; (1-\alpha)\, t\, \delta.$$

It can easily be seen that as $t$ increases, $(1-\alpha)\delta/\alpha \le (1-\alpha)\,t\,\delta$ soon holds, since the upper bound of our method is independent of the iteration number $t$. This indicates that the proposed SMM provides a smaller and controllable parameter displacement than mean-IMM, favoring the existence of linear connectivity, as illustrated in Figure 2 (b). Moreover, an L2 regularization term Lee et al. (2017) further promoting linear connectivity between the current model and its knowledge base can be defined as $L_{reg} = \lVert \theta - \theta^{kb}_i \rVert_2^2$.
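This displacement behavior can be checked numerically on a scalar toy example, assuming the worst case where every SGD step moves a full step length $\delta$ in the same direction (all constants below are illustrative):

```python
# Compare the drift from the knowledge base under SMM interpolation vs.
# plain sequential SGD (as inside mean-IMM) in the worst-case direction.
delta, alpha, T = 0.1, 0.2, 500   # step-length bound, interp. ratio, iterations

theta_kb = 0.0
theta_smm, theta_sgd = theta_kb, theta_kb
for _ in range(T):
    theta_smm = alpha * theta_kb + (1 - alpha) * (theta_smm + delta)  # SMM update
    theta_sgd = theta_sgd + delta                                     # plain SGD

smm_disp = abs(theta_smm - theta_kb)
sgd_disp = abs(theta_sgd - theta_kb)
bound = (1 - alpha) * delta / alpha   # SMM bound, independent of T
# smm_disp plateaus at `bound`, while sgd_disp grows linearly with T
```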
With a constant learning-step length, the interpolation ratio determines the length of the overall trajectory. A smaller $\alpha$ increases the upper bound of the overall displacement from the beginning of training, bringing more plasticity. However, a small interpolation ratio means applying larger updates to the model weights in a sequential-learning problem; the resulting model may learn fast but also forget quickly. Hence, we propose an adaptive interpolation ratio that starts small at the beginning of training and is slightly increased for each subsequent task to stabilize the optimization trajectory and prevent forgetting. Precisely, the adaptive interpolation ratio for the $i$-th task in the $e$-th training epoch increases linearly with both the task index $i$ and the epoch $e$, where $\alpha_0$ and $\gamma$ are the base rate and step-increasing rate, respectively, and $E$ represents the total number of training epochs for each task. Note that the epoch counter $e$ is reset at the beginning of each task.
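One plausible linear schedule matching this description can be sketched as follows; the exact functional form and the constants ($\alpha_0$, $\gamma$, $E$) are assumptions for illustration, since the text only states that the ratio grows across tasks and epochs:

```python
def interpolation_ratio(task_idx, epoch, alpha0=0.1, gamma=0.02, total_epochs=10):
    """Adaptive interpolation ratio: starts small (more plasticity) and grows
    with both the task index and the epoch within the task (more stability).
    The linear form below is an assumed reconstruction, not the paper's exact rule."""
    return alpha0 + gamma * ((task_idx - 1) + epoch / total_epochs)

# Ratio is non-decreasing over successive (task, epoch) pairs.
schedule = [interpolation_ratio(i, e) for i in (1, 2) for e in (0, 10)]
```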
3.4 Inter-Task Separation Loss
During the long sequential learning process, training samples of previous tasks cannot be stored, due to computation cost and privacy concerns. As a result, inter-class separation is optimized only within the current task, while the separation among classes from different tasks is neglected. This can cause two problems: 1) old classes from previous tasks may suffer from feature drift, as the newly adapted feature extractor is biased toward the new classes of the current task, so the previously learned classification weights for old classes are no longer compatible with the current embedding space Yu et al. (2020); Chen and Lee (2021); 2) the feature regions of novel classes in a new task may overlap with those of old classes if similar discriminative features are shared, leading to more ambiguity among classes in the feature space. These issues are conceptually illustrated in Figure 4 (a) and (b), respectively.
Eliminating such inter-task class ambiguity is challenging, as it would require recalling previous images for all old classes. To address this, we propose a more feasible approach that stores representative feature embeddings from the last classification layer rather than raw images, so as to retain the knowledge of previous incremental steps and let the model achieve better discrimination among all classes. Moreover, our method does not need images of previous classes when adapting to new ones, which is more memory-efficient and privacy-secure. Although the feature space generally drifts over multiple incremental steps, such drift is effectively kept to a minimum by the proposed SMM regularization. Therefore, the stored historical embeddings can serve as a practical approximation of the old-class feature distributions.
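Storing per-class RoI embeddings instead of images can be sketched as a small memory module; the per-class capacity and eviction policy below are illustrative choices, not values from the paper:

```python
import numpy as np

class FeatureMemory:
    """Per-class buffer of foreground RoI embeddings from past tasks."""

    def __init__(self, per_class=20, seed=0):
        self.per_class = per_class   # illustrative capacity per old class
        self.store = {}
        self.rng = np.random.default_rng(seed)

    def add(self, class_id, embeddings):
        """Append embeddings, keeping only the most recent `per_class` ones."""
        buf = self.store.setdefault(class_id, [])
        buf.extend(np.asarray(embeddings))
        del buf[: max(0, len(buf) - self.per_class)]

    def sample(self, class_id, n):
        """Sample n stored embeddings of an old class for loss computation."""
        buf = self.store[class_id]
        idx = self.rng.choice(len(buf), size=n, replace=len(buf) < n)
        return np.stack([buf[i] for i in idx])

mem = FeatureMemory(per_class=4)
mem.add(0, np.ones((6, 8)))        # only the last 4 of 6 embeddings are kept
old_feats = mem.sample(0, 2)
```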
To further promote inter-class discrimination, a margin-based separation loss is proposed to push the decision regions of novel classes away from the features of old classes, which not only encourages a more compact representation but also alleviates category confusion. This concept is illustrated in Figure 4 (c). Specifically, the foreground RoI feature embeddings are sampled during each historical incremental task $T_j$ ($j < i$) and stored as $F_j = \{f_r\}$, where $f_r$ denotes the extracted feature of the corresponding RoI proposal. Given a new-task few-shot dataset $T_i$, we randomly sample a fixed number of foreground RoI features from each image. Meanwhile, an equal number of historical feature embeddings are sampled from $\{F_j\}$ for each old class. Under cosine similarity, if an RoI feature $f$ belongs to a new class, we use the normalized weight vector of its ground-truth class as the positive template, and the negative template is the old-class weight vector with the highest similarity to $f$; the symmetric arrangement is applied to the old-class feature embeddings. Mathematically, the proposed inter-task separation loss can be formulated as
$$L_{sep} = -\frac{1}{N_n}\sum_{f \in \mathcal{N}} \log\frac{e^{s\,(\cos(f,\, w^{+}_{n}) - m)}}{e^{s\,(\cos(f,\, w^{+}_{n}) - m)} + e^{s\,\cos(f,\, w^{-}_{n})}} \;-\; \frac{1}{N_o}\sum_{f \in \mathcal{O}} \log\frac{e^{s\,(\cos(f,\, w^{+}_{o}) - m)}}{e^{s\,(\cos(f,\, w^{+}_{o}) - m)} + e^{s\,\cos(f,\, w^{-}_{o})}},$$

where $m$ is a margin, $s$ is a scale parameter used to ensure the convergence of training Wang et al. (2018), $\mathcal{N}$ and $\mathcal{O}$ denote the sampled new-class and old-class proposals, and $w^{+}_{n}$ and $w^{+}_{o}$ are the positive templates for new-class and old-class proposals, respectively, i.e., the weight vectors of the ground-truth classes. $w^{-}_{n}$ and $w^{-}_{o}$ are the negative templates for new-class and old-class proposals, respectively, i.e., the highest-similarity weight vectors from the other group (old and new classes, respectively).
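A single proposal's contribution to such a CosFace-style margin loss can be sketched in numpy; the exact form is a reconstruction (the scale and margin values are illustrative):

```python
import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def separation_term(feat, w_pos, w_neg, s=16.0, m=0.3):
    """One proposal's inter-task separation term: a margin m is enforced
    between the ground-truth template w_pos and the hardest cross-task
    template w_neg, scaled by s (CosFace-style, reconstructed form)."""
    pos = np.exp(s * (cos_sim(feat, w_pos) - m))
    neg = np.exp(s * cos_sim(feat, w_neg))
    return -np.log(pos / (pos + neg))

w_new, w_old = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # toy templates
easy = separation_term(np.array([0.95, 0.05]), w_new, w_old)
hard = separation_term(np.array([0.6, 0.4]), w_new, w_old)
# features closer to the cross-task template incur a larger penalty
```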
4 Training Strategy
After pre-training, we freeze the backbone and RPN, as their weights are considered category-agnostic. In each step of few-shot adaptation, a training image is fed through the backbone feature extractor and RPN to produce RoIs. After the RoI-align layer, the resulting RoI features are passed into the two branches for classification-weight imprinting and model training. Since the goal of the static learning branch is to preserve base-class performance without forgetting, only the newly imprinted weights for the current incremental step are fine-tuned, while the backbone, RPN, RoI feature extractor, old-class classification weights, and regression weights are kept fixed. In the dynamic learning branch, to better adapt the pre-trained features to novel classes, we further unfreeze the RoI feature extractor and update its weights with the proposed SMM rule. The overall learning objective for the dynamic learning branch contains three components, $L = L_{ce} + \lambda_1 L_{reg} + \lambda_2 L_{sep}$, where $L_{ce}$ is the standard cross-entropy loss, $L_{reg}$ is the L2 regularization term, $L_{sep}$ is the inter-task separation loss, and $\lambda_1$ and $\lambda_2$ are the importance weights of the respective losses.
5.1 Dataset Settings
Following the common practice of few-shot detection, we combine the 80K-image train set and the 35K-image trainval set of MS-COCO 2014 to train our model, and report evaluation results on the 5K minival set. We split the 80 categories into two groups: the 20 classes shared with Pascal VOC are used as novel classes, and the other 60 classes as base classes. To evaluate the robustness of our method against forgetting, we design a challenging incremental setting in which the 20 novel classes are added one by one over 20 incremental steps. In practice, it is worth noting that new-class images may also contain instances of old classes; however, we do not provide any annotations for old-class instances during model adaptation. That is, upon the arrival of each novel class, the model can only access k = 10 bounding-box annotations of the current class. We then evaluate the model over all classes and report the detection performance of base and novel classes separately.
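The per-class incremental stream described above can be sketched as follows; the class names are a truncated, illustrative subset of the 20 VOC-overlapping categories, not the full protocol definition:

```python
# Sketch of the per-class incremental protocol: novel classes arrive one per
# step, each with k=10 box annotations; old-class instances in the images
# remain unannotated. Class names below are an illustrative subset.
novel_subset = ["person", "car", "bus", "bicycle", "dog"]
K_SHOT = 10

tasks = [
    {"step": i + 1, "new_class": c, "shots": K_SHOT}
    for i, c in enumerate(novel_subset)
]
# each element describes one incremental adaptation step
```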
For the cross-domain evaluation from MS-COCO to VOC, the model is first pre-trained on the 60 base classes from MS-COCO. Then the Pascal VOC 07+12 training set is employed as the novel set for model adaptation over 20 incremental steps. Finally, we test our method on detecting the 20 novel classes on the VOC2007 test set. Unlike the first experiment, which evaluates cross-category generalization, this setup appraises cross-domain generalization.
5.2 Baseline settings
Meta-learning approaches We first compare our approach to multiple few-shot detection approaches. The first two baselines, "Feature-Reweight Kang et al. (2019)" and "ONCE Perez-Rua et al. (2020)", are meta-learning approaches that predict detections conditioned on a set of support examples. "Feature-Reweight" proposes to reweight the deep layers of a CNN backbone with channel-wise attentive vectors, so as to amplify the features informative for detecting novel objects in the query set. "ONCE" proposes a kernel-weight generator that extracts meta-information from support samples to predict convolution-kernel weights for detecting novel classes. The major drawback of these meta-learning approaches is that, during multi-step adaptation, the meta-learner must be fine-tuned with sufficient training categories to generalize. On the one hand, this requires recalling historical training samples, which may cause both storage and privacy issues. On the other hand, since their computational complexity is proportional to the number of categories, they become slow or even unusable on continually growing data sets.
We then compare our approach with two transfer-learning-based approaches: TFA and IMTFA Ganea et al. (2021). TFA alleviates catastrophic forgetting on both base and novel classes with a two-stage fine-tuning approach, in which only the last box-classification and regression layer is fine-tuned with cosine similarity while the backbone, RPN, and RoI feature extractor are frozen. IMTFA improves TFA by setting new classification weights through weight imprinting rather than random initialization. However, the fixed feature extractor is biased toward features that are discriminative and salient for base classes, so freezing it without exposure to novel classes can be suboptimal.
Conventional incremental learning approaches Compared with general incremental learning methods, our method is better suited to sequential adaptation with small training sets. To prove this point, we compare it with representative incremental learning frameworks. Specifically, we use naïve fine-tuning as a baseline, denoted FRCN-ft, where we unfreeze the RoI feature extractor and fine-tune on the novel classes. We then embed different incremental learning frameworks into the FRCN-ft baseline, e.g., IMM Lee et al. (2017) and ILOD Shmelkov et al. (2017).
5.3 Results on MS-COCO
We now compare the proposed DBF framework with previous state-of-the-art methods. A theoretical upper bound for CI-FSOD is given by non-incremental training on the same data, where the model is jointly trained on all classes without incomplete annotations; the lower bound is obtained by continually fine-tuning a naïve Faster-RCNN model on multiple small data sets, denoted FRCN-ft. Except for the upper-bound method, incomplete annotations appear in all baselines; for a fair comparison, the proposed semi-supervised object mining (SSOM) method is therefore embedded into all baselines. We compare the methods under two scenarios, where novel classes arrive either one by one or in groups.
Per-Class Incremental To compare our method with the other baselines on the capability of preserving long-term memory, we first pre-train on the 60 base classes and then adapt to the remaining 20 novel classes step by step, where each step contains only one novel class. From the results, we make several observations. 1) The accuracy of existing meta-learning approaches is still far from satisfactory. Moreover, fine-tuning meta-learners with a reduced number of classes is infeasible, since the episodic learning scheme requires category diversity, making them impractical for real-world CI-FSOD scenarios. 2) Compared with the TFA baseline, unfreezing the RoI feature extractor (FRCN-ft) gives even worse results, indicating that naively fine-tuning more parameters with limited data aggravates overfitting and causes catastrophic forgetting. 3) IMM relieves catastrophic forgetting to some extent, but the improvement is marginal, since IMM only merges models rather than restricting the overall trajectory of parameter updating. 4) The proposed SMM approach significantly outperforms the original IMM under 10 shots, which indicates that restricting the overall parameter displacement is crucial for long-term memory. 5) Thanks to SMM maintaining a flat local minimum during continual fine-tuning, the feature drift of the embedding space is reduced to a minimum; hence the last-layer feature vectors (SMM+CR) can serve as effective class representatives of the old-class distributions, bringing a 0.4 mAP improvement while removing the need to store raw images. 6) The proposed inter-task class separation loss (SMM+CR+Inter-Sep) consistently outperforms cross-entropy alone, indicating that a more compact intra-class representation is formed by learning a large margin between old and new classes.
Thanks to this non-forgetting property, our method gains +2.4 AP over the current state of the art for 10 shots, a larger gap than that between any previous consecutive advancements.
Per-Group Incremental We then test our method in a group-incremental scenario where the 20 novel classes are divided into four groups of five classes each. Incremental fine-tuning is conducted by adding one group of classes at a time until all 20 classes are learned. The results are shown in the table, where our method significantly outperforms the other baselines at each group, confirming that it is also superior in the group-wise incremental detection setting.
5.4 Results on MS-COCO to Pascal VOC
We then evaluate our method in a cross-domain setting, where the data used for pre-training and incremental fine-tuning come from different domains. Specifically, we first train a model on the 60 base classes from MS-COCO, then fine-tune it step by step on the 20 novel classes from Pascal VOC, and evaluate it on the VOC2007 test set. The results in Table 3 confirm the generalization advantages of our method when transferring to a domain different from the training one.
6 Conclusion

We propose a generic learning scheme for CI-FSOD. First, the Double-Branch Framework preserves the base-class feature distribution for performance retention. Second, the Stable Moment Matching method addresses catastrophic forgetting in the few-shot setting, providing a better trade-off between stability and plasticity. Third, the proposed Inter-Task Class Separation loss promotes a large separation between old and new classes. The effectiveness of our approach is validated by extensive experiments, in which it achieves state-of-the-art performance and outperforms previous algorithms by a significant margin.
References

- YOLOv4: optimal speed and accuracy of object detection.
- Cascade R-CNN: delving into high quality object detection.
- CPR: classifier-projection regularization for continual learning. arXiv preprint arXiv:2006.07326.
- LSTD: a low-shot transfer detector for object detection. In AAAI.
- Incremental few-shot learning via vector quantization in deep embedded space. In ICLR.
- Model-agnostic meta-learning for fast adaptation of deep networks. In ICML.
- Linear mode connectivity and the lottery ticket hypothesis. In ICML, pp. 3259–3269.
- Incremental few-shot instance segmentation. In CVPR, pp. 1185–1194.
- Few-shot object detection via feature reweighting. In ICCV.
- On large-batch training for deep learning: generalization gap and sharp minima. arXiv preprint arXiv:1609.04836.
- Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114 (13), pp. 3521–3526.
- Overcoming catastrophic forgetting by incremental moment matching. arXiv preprint arXiv:1703.08475.
- Linear mode connectivity in multitask and continual learning. arXiv preprint arXiv:2010.04495.
- Understanding the role of training regimes in continual learning. arXiv preprint arXiv:2006.06958.
- Incremental few-shot object detection.
- Low-shot learning with imprinted weights. In CVPR, pp. 5822–5830.
- Faster R-CNN: towards real-time object detection with region proposal networks. In NeurIPS, pp. 91–99.
- Incremental learning of object detectors without catastrophic forgetting. In ICCV, pp. 3400–3409.
- Prototypical networks for few-shot learning. In NeurIPS.
- CosFace: large margin cosine loss for deep face recognition.
- Frustratingly simple few-shot object detection. arXiv e-prints.
- Meta R-CNN: towards general solver for instance-level low-shot learning. pp. 9576–9585.
- Semantic drift compensation for class-incremental learning. In CVPR, pp. 6982–6991.
- BBN: bilateral-branch network with cumulative learning for long-tailed visual recognition. In CVPR, pp. 9719–9728.