Learning without Forgetting for 3D Point Cloud Objects

06/27/2021 · Townim Chowdhury et al. · Australian National University

When we fine-tune a well-trained deep learning model for a new set of classes, the network learns new concepts but gradually forgets the knowledge of its previous training. In some real-life applications, we may be interested in learning new classes without forgetting the capabilities gained from previous experience. Such learning without forgetting problems are typically investigated using 2D image recognition tasks. In this paper, considering the growth of depth camera technology, we address the same problem for 3D point cloud object data. The problem is more challenging in the 3D domain than in 2D because of the unavailability of large datasets and powerful pretrained backbone models. We investigate knowledge distillation techniques on 3D data to reduce catastrophic forgetting of previous training. Moreover, we improve the distillation process by using semantic word vectors of object classes. We observe that exploring the interrelation of old and new knowledge during training helps to learn new concepts without forgetting old ones. Experimenting on three 3D point cloud recognition backbones (PointNet, DGCNN, and PointConv) and synthetic (ModelNet40, ModelNet10) and real scanned (ScanObjectNN) datasets, we establish new baseline results on learning without forgetting for 3D data. We expect this research to instigate many future works in this area.


1 Introduction

Deep learning models achieve impressive performance on image recognition tasks [30, 19, 37]. In a real-life application, a trained system that classifies a given object instance within a fixed number of classes may need to readjust itself to classify a new set of classes, in addition to the old ones, without retraining from scratch. For example, a self-driving car already recognizes street objects (vehicles, traffic lights, etc.). Now, the car manufacturer wants to extend the car's capability to recognizing roadside objects (buildings, trees, etc.) by retraining only on instances of the new classes of interest. The main issue with such retraining is the catastrophic forgetting of old class knowledge. Since this setup does not allow old class instances, the model learns the new classes but forgets the old ones. Researchers have proposed Learning without Forgetting (LwF) methods [12, 26, 9, 41, 24] to address this problem. Traditionally, this problem has been investigated using 2D image data. This paper explores LwF on 3D point cloud object data.

Figure 1: Effect of semantic representation while learning without forgetting. (a) Without class semantics, the network tries to form clusters of old and new class instances in feature space; clusters sometimes overlap with each other because of the lack of class semantics. (b) With class semantics, old and new class features cluster around their corresponding class semantics. This keeps the clusters sufficiently separated from each other, which leads to better performance.

Modern 3D camera technology allows us to capture 3D point cloud data more easily than ever [7]. Now, it is time to equip 3D point cloud recognition models with LwF capabilities. We identify some key difficulties in addressing this problem. Firstly, in comparison to image datasets like ImageNet, very large-scale 3D point cloud datasets are not available; 3D datasets usually contain a handful of classes and instances [38, 32]. Secondly, a typical pre-trained model for a 3D recognition system is not as robust as its 2D counterparts because it has not been trained on a large dataset [5]. Thirdly, 3D point cloud data (especially real scanned objects) contains more noise than 2D image data [32]. This paper investigates how far a 3D point cloud recognition model can obtain LwF capabilities considering all the difficulties mentioned above.

We first train a 3D point cloud model with instances belonging to a set of pre-defined old classes. Then, we update the trained model using a popular knowledge distillation technique [8] to address the forgetting problem. Because of the difficulties of 3D data, this approach exhibits a large amount of forgetting of old classes. To minimize forgetting, we employ semantic word vectors of classes inside the network pipeline [23, 4, 42]. During both old and new task training, the network tries to align point cloud features to their corresponding semantics. The class semantics encode similarities and dissimilarities of different objects from the natural language domain. While learning new classes, the network learns to project new instance features around the previously obtained, fixed semantic vectors. By performing feature-semantic alignment in both old and new tasks, the network forgets less than with traditional methods that lack semantic embeddings. For example, during old model training, the model learns to classify 'bed' via its semantic representation (e.g., isFurniture, isIndoor). Later, during new model training, the model cannot see 'bed', but it observes similar classes (like sofa, chair, and table, which share semantics with 'bed'), which helps it not to forget the 'bed' knowledge. Experimenting on the ModelNet40 [38], ScanObjectNN [32], MIT Scenes [22], and CUB [33] datasets, we show that our proposed method outperforms traditional knowledge distillation methods on both 3D and 2D data. The contributions of this paper are summarized below:

  • To the best of our knowledge, we are the first to experiment with learning without forgetting on 3D point cloud object data.

  • Our method applies knowledge distillation to retain the previously gained experience of the old model and minimize catastrophic forgetting while learning a set of new classes. In addition, we investigate the advantage of semantic word vectors in the network distillation process.

  • We experiment on both synthetic (ModelNet10, ModelNet40 [38]) and real scanned (ScanObjectNN [32]) 3D point cloud objects, as well as 2D image datasets (MIT Scenes [22], CUB [33]), establishing the robustness of the method.

2 Related Works

3D Point Cloud Architecture: There are two streams of work for 3D point cloud classification: feature-based and end-to-end approaches. Feature-based methods mostly use multi-view representations and volumetric CNNs. Multi-view methods [31, 20, 40] convert 3D point clouds into 2D images, which are then classified using 2D convolutional networks. Volumetric CNNs [14, 34] project point cloud objects onto a volumetric grid or a set of octrees and then apply a computationally expensive 3D convolutional neural network. The main drawback of feature-based methods is that they do not work directly on the raw point cloud. End-to-end approaches like PointNet [19] and PointNet++ [21] use raw point cloud data as input to multi-layer perceptron networks followed by max-pooling layers. Several other works [37, 10, 11] apply improved convolution operations to point cloud objects. Moreover, [29, 35] use graph neural networks to extract features from 3D point clouds. In this paper, we build our model on several end-to-end architectures.

Learning Without Forgetting: Many methods have been proposed to solve the catastrophic forgetting problem [15, 6, 2]. Exemplar-free methods [12, 1, 41] do not require any samples from the base/old task. Li et al. [12] proposed using Hinton's knowledge distillation loss [8] to preserve the old task's knowledge in 2D images, but the domain shift between tasks weakens this method. Rehearsal methods [26, 3, 9] keep a small number of exemplars from the old task. Rebuffi et al. [26] first introduced a replay-based method with bounded memory, but it fails to represent the main distribution of the old task when there is a lot of variation. The pseudo-rehearsal process used in [28, 36, 17] learns to produce examples from the old task. Some methods [1, 13] minimize additional parameters to address the problem of model expansion. All of the approaches mentioned above propose solutions to catastrophic forgetting on 2D image data. Our method is the first to use knowledge distillation to address LwF on 3D data.

Word embedding for catastrophic forgetting: The use of semantic representations to prevent catastrophic forgetting is relatively new [23, 4, 42]. Such approaches explore the semantic relation between old and new classes to reduce the forgetting of old classes while training on new classes. Zhu et al. [42] suggested using semantic representations to train an object detection model by projecting the feature vector into the semantic space. Similarly, Rahman et al. [23] proposed using semantic representations of class labels as anchors in the prediction space so as not to forget the acquired knowledge of old classes. Cheraghian et al. [4] proposed a knowledge distillation strategy that uses semantic representations as auxiliary knowledge. Even though semantic representations have yielded promising results, these experiments are limited to 2D data. In this paper, we use word vectors in knowledge distillation for 3D point cloud object classification.

3 Method

Problem Formulation: Assume we have a set of old classes, $\mathcal{Y}^o$, and a set of new classes, $\mathcal{Y}^n$, where $\mathcal{Y} = \mathcal{Y}^o \cup \mathcal{Y}^n$ and $\mathcal{Y}^o \cap \mathcal{Y}^n = \emptyset$. The 3D point cloud recognition model initially observes $\mathcal{Y}^o$ and is trained to classify only the old classes. Later, the $\mathcal{Y}^n$ classes are added to the model to update the previous training. Suppose a 3D point cloud input $\mathbf{x}$, a set of $P$ points, can receive a label from either the old or the new classes. Additionally, there is a set of $d$-dimensional semantic class embeddings for each of the old and new classes, denoted as $\mathcal{E}^o$ and $\mathcal{E}^n$, respectively. We define the old set as $\mathcal{D}^o = \{(\mathbf{x}^o_i, y^o_i, \mathbf{e}^o_i)\}_{i=1}^{n_o}$, where $\mathbf{x}^o_i$ is the $i$-th point cloud object belonging to the old set, with class label $y^o_i \in \mathcal{Y}^o$ and class embedding $\mathbf{e}^o_i \in \mathcal{E}^o$, and $n_o$ is the number of old class instances. Similarly, there is a set for the new classes, $\mathcal{D}^n = \{(\mathbf{x}^n_i, y^n_i, \mathbf{e}^n_i)\}_{i=1}^{n_n}$, where $\mathbf{x}^n_i$ is the $i$-th point cloud object with class label $y^n_i \in \mathcal{Y}^n$ and class embedding $\mathbf{e}^n_i \in \mathcal{E}^n$, and $n_n$ is the number of new class instances. We build a 3D point cloud object recognition model (termed the old model) using the $\mathcal{D}^o$ set. Then, we aim to update the same model (termed the new model) using only the newly available data, so that it can predict a class label for a test sample belonging to either the old or the new set, i.e., $y \in \mathcal{Y}^o \cup \mathcal{Y}^n$. We assume the model has prior knowledge during inference of whether a test sample belongs to the old or the new classes.

Main challenges: While updating the model with the new data $\mathcal{D}^n$, the model gradually forgets the old training (done on $\mathcal{D}^o$) because of the restriction against using old class instances. Previous works address this problem with 2D image data. In this paper, we address the same problem on 3D point cloud objects. Due to the unavailability of large-scale datasets and pre-trained models, the problem is more complex in the 3D domain than in 2D.

Figure 2: Our proposed architecture. We train the old model using the cross-entropy loss $L_{ce}$. Then, we build a new model by modifying a copy of the trained old model. Both the cross-entropy loss $L_{ce}$ and the knowledge distillation loss $L_{kd}$ are used to train the new model. The new model can classify both old and new classes.

3.1 Model Overview

Our proposed method is shown in Fig. 2, which includes the old and new models. The new model is the updated variant of the old model, extended to accommodate the new classes. Both models are presented together because, during the training of the new model, we use the output of the old model. For both models, the point cloud input $\mathbf{x}$ is fed into the backbone $f(\cdot)$, which can be any point cloud architecture (PointNet, DGCNN, PointConv, etc.), to extract the input feature, i.e., $\mathbf{g} = f(\mathbf{x})$. Additionally, a semantic representation unit is employed to generate the class embedding $\mathbf{e}$ given a class label. While training the old model on the old classes, the input feature $\mathbf{g}$ and the class embedding $\mathbf{e}^o$ are mapped into a common $D$-dimensional space using the projection functions $\phi_o$ and $\psi_o$, respectively. The new representations of the point cloud feature and the class embedding are $\phi_o(\mathbf{g})$ and $\psi_o(\mathbf{e}^o)$, respectively. Finally, the dot product between $\phi_o(\mathbf{g})$ and $\psi_o(\mathbf{e}^o)$ forms the output $\mathbf{o}^o$ for the old classes. A cross-entropy loss, $L_{ce}$, is adopted to train the model on the old classes. While updating the same model with the new classes, we add a parallel pipeline from the output of the backbone $f(\cdot)$. Two projection modules, $\phi_n$ and $\psi_n$, are added to map the features of the new classes and their class embeddings into the common $D$-dimensional space. The new representations of the feature and the class semantics are $\phi_n(\mathbf{g})$ and $\psi_n(\mathbf{e}^n)$, respectively. At the end, $\phi_n(\mathbf{g})$ is dot-multiplied with $\psi_n(\mathbf{e}^n)$ to generate the output $\mathbf{o}^n$ for the new classes. To prevent forgetting of the old classes, a knowledge distillation loss, $L_{kd}$ [8], is employed between the old-class outputs of the old and new models.
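For concreteness, the following is a minimal PyTorch sketch of one branch of this pipeline (the old model). The class name SemanticPointModel, the backbone argument, and the default feature dimension are our illustrative assumptions; the projection sizes follow the implementation details of Sec. 4.

```python
import torch.nn as nn

class SemanticPointModel(nn.Module):
    """Sketch of the old model: backbone feature g = f(x), projection heads
    phi (features) and psi (word vectors) into a common space, and
    dot-product class scores (Eq. 1)."""
    def __init__(self, backbone, feat_dim=1024, sem_dim=300, common_dim=256):
        super().__init__()
        self.backbone = backbone                      # any point cloud trunk (PointNet, DGCNN, ...)
        self.phi = nn.Sequential(                     # feature projection: FC(512) + FC(256), ReLU
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, common_dim), nn.ReLU())
        self.psi = nn.Sequential(                     # semantic projection: one FC(256) + ReLU
            nn.Linear(sem_dim, common_dim), nn.ReLU())

    def forward(self, points, class_embeddings):
        # points: (B, P, 3); class_embeddings: (C, sem_dim), one word vector per class
        g = self.backbone(points)                     # (B, feat_dim) global feature g = f(x)
        z = self.phi(g)                               # (B, common_dim) projected feature
        s = self.psi(class_embeddings)                # (C, common_dim) projected semantics
        return z @ s.t()                              # (B, C) dot-product class scores
```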

3.2 Training and Inference

We train the proposed architecture in two stages: old and new model training. Unlike traditional approaches to learning without forgetting (LwF) [12], both stages use semantic word vectors of the classes to remember past knowledge.

Training old model: In the first stage of training, we learn the old model using the training data of $\mathcal{D}^o$ and a cross-entropy loss. Unlike the 2D image case, we perform this training from scratch because no pre-trained model is available to initialize the weights of the backbone $f(\cdot)$. The output of the old model for the $i$-th 3D point cloud instance is

$$\mathbf{o}^o_i = \phi_o\big(f(\mathbf{x}^o_i);\, W_{\phi_o}\big)^\top \psi_o\big(\mathbf{E}^o;\, W_{\psi_o}\big), \qquad (1)$$

where $\mathbf{E}^o$ stacks the old-class embeddings, and $W_{\phi_o}$ and $W_{\psi_o}$ are learnable weights associated with the $\phi_o$ and $\psi_o$ layers, respectively. After finishing this training, the old model can classify the old set of classes, $\mathcal{Y}^o$. This old model remains frozen during the second-stage training.

Training new model: We build the new model during the second training stage by updating a copy of the old model trained in the previous stage. This new model gives predictions for both old and new classes, but we cannot use any old class instances while training it. We add the $\phi_n$ and $\psi_n$ layers in parallel to the $\phi_o$ and $\psi_o$ layers. Only the training data of the new classes is used to train the new model. Similar to Eq. 1, both pipelines of the new model produce outputs, for the old and new classes respectively:

$$\hat{\mathbf{o}}^o_i = \phi_o\big(f(\mathbf{x}^n_i);\, \hat{W}_{\phi_o}\big)^\top \psi_o\big(\mathbf{E}^o;\, \hat{W}_{\psi_o}\big), \qquad \mathbf{o}^n_i = \phi_n\big(f(\mathbf{x}^n_i);\, W_{\phi_n}\big)^\top \psi_n\big(\mathbf{E}^n;\, W_{\psi_n}\big), \qquad (2)$$

where $\hat{W}_{\phi_o}, \hat{W}_{\psi_o}$ and $W_{\phi_n}, W_{\psi_n}$ are the weights associated with the $\phi_o, \psi_o$ and $\phi_n, \psi_n$ layers, respectively. Among all trainable weights of the new model, $\hat{W}_{\phi_o}$ and $\hat{W}_{\psi_o}$ are initialized from the old model, while $W_{\phi_n}$ and $W_{\psi_n}$ are trained from scratch. While forwarding an input 3D point cloud object $\mathbf{x}$, the old model outputs $\mathbf{o}^o$ for the old classes, and the new model outputs $\hat{\mathbf{o}}^o$ and $\mathbf{o}^n$ for the old and new classes, respectively.
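As a sketch of this second-stage setup (the attribute names phi_n / psi_n and the dimensions are our assumptions, mirroring the block above):

```python
import copy
import torch.nn as nn

def build_new_model(old_model, feat_dim=1024, sem_dim=300, common_dim=256):
    # Freeze the trained old model; it only provides distillation targets.
    for p in old_model.parameters():
        p.requires_grad_(False)
    old_model.eval()

    # Copy it, so the phi_o / psi_o heads of the new model start from old weights...
    new_model = copy.deepcopy(old_model)
    for p in new_model.parameters():
        p.requires_grad_(True)

    # ...and attach parallel heads for the new classes, trained from scratch.
    new_model.phi_n = nn.Sequential(
        nn.Linear(feat_dim, 512), nn.ReLU(),
        nn.Linear(512, common_dim), nn.ReLU())
    new_model.psi_n = nn.Sequential(nn.Linear(sem_dim, common_dim), nn.ReLU())
    return old_model, new_model
```

The new model's forward pass is then assumed to return both the old-class scores $\hat{\mathbf{o}}^o$ (through $\phi_o, \psi_o$) and the new-class scores $\mathbf{o}^n$ (through $\phi_n, \psi_n$).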

We calculate a traditional cross-entropy loss $L_{ce}$ between $\mathbf{o}^n$ and the ground truth $y^n$. This loss is used to learn the new classes. Additionally, using the old-class outputs from the old and new models, we calculate a knowledge distillation loss $L_{kd}$ [8]. This loss is employed to prevent forgetting of the old classes. Unlike traditional knowledge distillation, we consider class semantics in the pipeline, which further helps the LwF process. The entire loss $L$ used to train this model is

$$L = L_{ce} + \lambda\, L_{kd}, \qquad (3)$$

where the hyperparameter $\lambda$ controls the contribution of $L_{kd}$. To calculate $L_{ce}$, we use the negative log-likelihood loss common in 3D backbones. To calculate $L_{kd}$, we record the output $\mathbf{o}^o$ of the old model for the new class dataset's 3D point clouds $\mathbf{x}^n$. The equations for $L_{ce}$ and $L_{kd}$ are:

$$L_{ce} = -\frac{1}{n_n} \sum_{i=1}^{n_n} \log \sigma(\mathbf{o}^n_i)_{y^n_i}, \qquad L_{kd} = -\frac{1}{n_n} \sum_{i=1}^{n_n} \sum_{k \in \mathcal{Y}^o} \sigma\big(\mathbf{o}^o_i / T\big)_k \log \sigma\big(\hat{\mathbf{o}}^o_i / T\big)_k, \qquad (4)$$

where $T$ is the temperature and $\sigma$ is the softmax function.
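A sketch of Eqs. 3 and 4 in PyTorch (the default values of lam and T are placeholders, not the paper's chosen settings):

```python
import torch.nn.functional as F

def total_loss(o_new, y_new, o_old_student, o_old_teacher, lam=1.0, T=2.0):
    l_ce = F.cross_entropy(o_new, y_new)                  # L_ce on new-class scores
    p_t = F.softmax(o_old_teacher / T, dim=1)             # sigma(o^o / T): recorded old model outputs
    log_p_s = F.log_softmax(o_old_student / T, dim=1)     # log sigma(o_hat^o / T): new model, old head
    l_kd = -(p_t * log_p_s).sum(dim=1).mean()             # L_kd (Eq. 4)
    return l_ce + lam * l_kd                              # L = L_ce + lambda * L_kd (Eq. 3)
```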

Inference: For any test instance, a forward pass through the new model calculates the old and new class scores. We classify old and new classes by selecting the maximum score from $\hat{\mathbf{o}}^o$ and $\mathbf{o}^n$, respectively.
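A task-aware inference sketch under the same assumed forward signature:

```python
import torch

@torch.no_grad()
def predict(new_model, points, emb_old, emb_new, is_old_task):
    # The task identity of a test sample is assumed known at inference (Sec. 3),
    # so we take the arg-max within the corresponding head only.
    o_old, o_new = new_model(points, emb_old, emb_new)
    scores = o_old if is_old_task else o_new
    return scores.argmax(dim=1)
```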

4 Experiment

Dataset: We evaluate our method on three 3D datasets (ModelNet10 and ModelNet40 [38], and ScanObjectNN [32]) and two 2D datasets (MIT Scenes [22] and CUB [33]). For the 3D experiments, we use two different settings, covering synthetic and real scanned point cloud data. The synthetic experiment, ModelNet40 → ModelNet10, uses 30 classes of ModelNet40 as old classes and the 10 non-overlapping classes of ModelNet10 as new classes. The real scanned object experiment, ModelNet40 → ScanObjectNN, uses 26 classes of ModelNet40 as old classes and 11 classes of ScanObjectNN as new classes. Both of these setups were previously introduced in [5]. For the 2D experiments, Scenes → CUB considers the 67 classes of MIT Scenes as old and the 200 classes of CUB as new. In another setup, 150 and 50 classes of the CUB dataset are used as old and new, respectively. These setups were proposed in [12, 39]. The statistics of the training and test instances are summarized in Table 1.

Dataset | Setting | Task | # Classes | # Train | # Test
3D | ModelNet40 → ModelNet10 | old | 30 | 5852 | 1560
3D | ModelNet40 → ModelNet10 | new | 10 | 3991 | 908
3D | ModelNet40 → ScanObjectNN | old | 26 | 4999 | 1496
3D | ModelNet40 → ScanObjectNN | new | 11 | 1496 | 475
2D | Scenes → CUB | old | 67 | 5360 | 1340
2D | Scenes → CUB | new | 200 | 5994 | 5794
2D | CUB (150) → CUB (50) | old | 150 | 4495 | 4326
2D | CUB (150) → CUB (50) | new | 50 | 1499 | 1468
Table 1: Statistics of training and testing instances used in different experiments.

Semantic embedding: For the semantic representation of classes, we use 300-dimensional word2vec (w2v) [16] and GloVe (glo) [18] word vectors. Word vector models are usually trained on unannotated text corpora. Unless explicitly mentioned, all results in this paper use word2vec vectors.
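A hypothetical recipe for building such class embeddings with gensim (the Google News 300-d word2vec release and the token-averaging convention for multi-word class names are our assumptions, not details stated in the paper):

```python
import numpy as np
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)   # assumed vector file

def class_embedding(name):
    # Average the word vectors of the tokens in a (possibly multi-word) class name.
    tokens = name.replace("_", " ").split()
    return np.mean([w2v[t] for t in tokens if t in w2v], axis=0)   # (300,)

emb_old = np.stack([class_embedding(c) for c in ("bed", "sofa", "night_stand")])
```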

Evaluation protocol: We evaluate our method using top-1 accuracy. We denote the old model's accuracy on the old classes as $acc_o$. Similarly, we denote by $acc'_o$ and $acc_n$ the final model's accuracy on the old and new classes, respectively. To measure the extent of forgetting, we calculate $\Delta = \frac{acc_o - acc'_o}{acc_o} \times 100\%$. A lower $\Delta$ indicates less forgetting by the new model.
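Stated as code (the relative form of $\Delta$ is our reading of the metric; it reproduces the numbers reported in Table 2):

```python
def forgetting_delta(acc_old_before, acc_old_after):
    # Relative drop of old-class top-1 accuracy after fine-tuning, in percent;
    # e.g. the LwF row of Table 2: forgetting_delta(89.8, 81.3) -> 9.5.
    return round(100.0 * (acc_old_before - acc_old_after) / acc_old_before, 1)
```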

Validation strategy: For validation experiments, we further randomly divide the set of old classes into val-old and val-new classes. In the ModelNet40 → ModelNet10 and ModelNet40 → ScanObjectNN experiments, we choose 24 and 20 of the old classes, respectively, as val-old and the rest as val-new to find values for the hyperparameters. We choose the values of $\lambda$ and $T$ for our 3D experiments by performing a grid search over this validation split.
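An illustrative sweep over the validation split (the grid values and the train_and_eval callback are placeholders, not the paper's actual range):

```python
from itertools import product

def grid_search(train_and_eval, grid=(0.5, 1.0, 2.0, 3.0)):
    # train_and_eval(lam, T) is assumed to train on val-old and return
    # top-1 accuracy on the held-out val-new classes.
    return max(product(grid, grid), key=lambda pair: train_and_eval(*pair))
```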

Implementation details (code available at https://github.com/townim-faisal/lwf-3D): We use PointNet [19], PointConv [37], and DGCNN [35] as 3D point cloud backbones, and VGG16 [30] (pretrained on ImageNet [27]) as the 2D image backbone, to obtain feature vectors. For the feature vector projection layers, we use two fully connected layers of sizes (512, 256) and (1024, 512) with ReLU activations in the 3D and 2D experiments, respectively. For the projection layer of the semantic representation, we use one fully connected layer of size 256 (3D) or 512 (2D) with ReLU. In all experiments, we use the Adam optimizer with a learning rate of 0.0001 and a batch size of 32 during training. We implement our work using the PyTorch framework.

Compared methods: In this paper, we compare the results of the following methods. (a) Baseline-1: A backbone model is trained using the instances of the old classes. Then, the trained backbone is further fine-tuned using new class instances only. (b) LwF [12]: The backbone training is the same as Baseline-1. Then, the fine-tuning on new class samples uses a knowledge distillation loss [8] so as not to forget the old class knowledge. (c) Baseline-2: This method is an intermediate stage of our proposed approach. We first train the old model of Fig. 2 using semantic word vector information inside the architecture, but without any fine-tuning stage. This performance can be considered a zero-shot learning result [5, 25] because it treats the new classes as unseen: the method can classify new (unseen) classes without having been trained on new instances. (d) Ours: This is our final recommendation, as described in Secs. 3.1 and 3.2. On top of Baseline-2 training, it adds fine-tuning on new class instances.

ModelNet40 → ModelNet10
Method | acc_o (↑) | acc_o' (↑) | acc_n (↑) | Δ (↓)
Baseline-1 | 89.2 | 41.5 | 90.2 | 53.5
LwF [12] | 89.2 | 83.6 | 89.3 | 6.2
Baseline-2 | 89.4 | – | 22.8 | –
Ours | 89.4 | 84.4 | 90.4 | 5.5

ModelNet40 → ScanObjectNN
Method | acc_o (↑) | acc_o' (↑) | acc_n (↑) | Δ (↓)
Baseline-1 | 89.8 | 51.0 | 76.9 | 43.3
LwF [12] | 89.8 | 81.3 | 73.7 | 9.5
Baseline-2 | 89.9 | – | 21.5 | –
Ours | 89.9 | 86.2 | 74.6 | 4.1
Table 2: 3D data experiments using PointNet. (↑) means higher is better; (↓) means lower is better.
Backbone | ModelNet40 → ModelNet10: acc_o | acc_o' | acc_n | Δ | ModelNet40 → ScanObjectNN: acc_o | acc_o' | acc_n | Δ
PointNet [19] | 89.4 | 84.4 | 90.4 | 5.5 | 89.9 | 86.2 | 74.6 | 4.1
PointConv [37] | 90.5 | 86.2 | 87.8 | 4.8 | 90.2 | 73.4 | 66.6 | 18.6
DGCNN [35] | 91.5 | 87.1 | 93.4 | 4.9 | 91.6 | 71.8 | 75.0 | 21.6
Table 3: Ablation study varying the 3D point cloud backbone.

4.1 3D point cloud experiments

Overall results: Table 2 shows the overall results for the two settings, ModelNet40 → ModelNet10 and ModelNet40 → ScanObjectNN. Our observations are as follows. (1) Baseline-1 obtains the worst results on the forgetting issue, showing high $\Delta$ values, because the fine-tuning of the new model does not consider the old classes. The high $acc_n$ and low $acc'_o$ values indicate that this method learns the new classes but significantly forgets the old ones. (2) LwF [12] obtains better results on the forgetting issue (lower $\Delta$ values) than Baseline-1 because it applies a knowledge distillation loss so as not to forget the old classes. (3) Baseline-2 shows the performance after old class training using our method. Without receiving any training on the new classes, this model can still classify new classes by treating them as unseen. Although no forgetting occurs in this case, there is no balance between old and new class performance. (4) Our method strikes a good balance between old and new class accuracy while maintaining minimal forgetting. (5) Although both settings achieve similar old-class results ($acc'_o$) across methods, ModelNet40 → ScanObjectNN obtains lower accuracy on the new classes ($acc_n$) than ModelNet40 → ModelNet10. The reason is that the ScanObjectNN (new) classes are real scanned 3D objects with more noise than synthetic data.

(a) Using PointNet backbone
Setting | Word | acc_o (↑) | acc_o' (↑) | acc_n (↑) | Δ (↓)
ModelNet40 → ModelNet10 | glo | 88.2 | 78.8 | 90.6 | 10.6
ModelNet40 → ModelNet10 | w2v | 89.4 | 84.4 | 90.4 | 5.5
ModelNet40 → ScanObjectNN | glo | 89.7 | 85.2 | 70.9 | 5.0
ModelNet40 → ScanObjectNN | w2v | 89.9 | 86.2 | 74.6 | 4.1

(b) Using VGG16 backbone
Setting | Method | acc_o (↑) | acc_o' (↑) | acc_n (↑) | Δ (↓)
Scenes → CUB | LwF [12] | 71.0 | 69.9 | 52.3 | 1.7
Scenes → CUB | Ours | 70.7 | 69.9 | 53.0 | 1.1
CUB (150) → CUB (50) | LwF [12] | 58.2 | 57.1 | 66.2 | 1.8
CUB (150) → CUB (50) | Ours | 60.0 | 59.0 | 69.4 | 1.7
Table 4: Experiments with (a) varying semantic representations and (b) 2D images.

Figure 3: Hyperparameter sensitivity on the ModelNet40 → ModelNet10 setting. Varying (left) $\lambda$ of Eq. 3 and (right) $T$ of the $L_{kd}$ loss in Eq. 4.

Ablation studies: In Table 3, we perform an ablation study varying the 3D point cloud backbone. Among all backbones, PointNet performs consistently well in both 3D experiment settings. PointConv and DGCNN have some success on the forgetting issue with the synthetic ModelNet10 data but fail to generalize to the real scanned ScanObjectNN classes. The global features extracted by PointNet may be more helpful than the local features from the PointConv and DGCNN backbones. Table 4(a) also compares two different word vector models (word2vec and GloVe) as semantics. In most cases, word2vec achieves better accuracy and less forgetting than GloVe.

Hyperparameter sensitivity: We experiment with varying $\lambda$ and $T$ in Fig. 3. Fixing one hyperparameter and adjusting the other, we observe the hyperparameter sensitivity over a range of values. We notice that increasing $\lambda$ and $T$ from 0 to 3 improves the old ($acc'_o$) and new ($acc_n$) class performance. Beyond that point, the results do not change much, but the values gradually decrease. We pick the best-performing values from this search for our experiments.

Figure 4: t-SNE visualization of features and semantics for (a) 2D image and (b) 3D point cloud objects. Ten old and four new classes are shown for better visualization. The 2D image features are clustered better than the 3D point cloud features.

4.2 2D experiments

In addition to the 3D point cloud experiments, we conduct 2D image experiments. We report the results in Table 4(b) using MIT Scenes [22] and CUB [33]. For the two experimental setups, Scenes → CUB and CUB (150) → CUB (50), our method achieves better performance than LwF [12] in terms of less forgetting ($\Delta$). Moreover, we observe that the results of the 2D experiments are better than those of the 3D experiments (Tables 2 and 3). The amount of forgetting ($\Delta$) is higher in the 3D cases than in the 2D cases (5-6% vs. 1-2%). The main reason is that the 2D backbone (VGG16 [30]) has been pre-trained on ImageNet [27], a large dataset with millions of training instances and thousands of classes. In contrast, the 3D backbones (PointNet, DGCNN, PointConv) used in the 3D experiments are not pre-trained on a huge dataset. Therefore, the feature vectors obtained from the 2D backbone are richer and better clustered than those obtained from the 3D backbones. We also notice that the feature-semantic alignment is stronger in the 2D experiments than in the 3D experiments, as shown in Fig. 4.

5 Conclusion

In this paper, we investigate LwF on 3D point cloud objects. Because of the lack of large-scale 3D datasets and powerful pre-trained models, popular knowledge distillation on prediction scores performs poorly on 3D data. To improve the performance further, we use semantic word vectors in the network pipeline, which improves on traditional knowledge distillation. We also report performance with different 3D recognition backbones and word embeddings. We notice that the extent of forgetting in 3D is still worse than in the 2D image case. Future research in this area may investigate this issue further.

Acknowledgment: This work was supported by NSU CTRG 2020–2021 grant #CTRG-20/SEPS/04.

References

  • [1] R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars (2018) Memory aware synapses: learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV).
  • [2] R. Aljundi, P. Chakravarty, and T. Tuytelaars (2017) Expert gate: lifelong learning with a network of experts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [3] F. M. Castro, M. J. Marín-Jiménez, N. Guil, C. Schmid, and K. Alahari (2018) End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV).
  • [4] A. Cheraghian, S. Rahman, P. Fang, S. K. Roy, L. Petersson, and M. Harandi (2021) Semantic-aware knowledge distillation for few-shot class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [5] A. Cheraghian, S. Rahman, T. F. Chowdhury, D. Campbell, and L. Petersson (2021) Zero-shot learning on 3D point cloud objects and beyond. arXiv preprint arXiv:2104.04980.
  • [6] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio (2014) An empirical investigation of catastrophic forgetting in gradient-based neural networks. In 2nd International Conference on Learning Representations (ICLR).
  • [7] Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun (2020) Deep learning for 3D point clouds: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [8] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • [9] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin (2019) Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [10] A. Komarichev, Z. Zhong, and J. Hua (2019) A-CNN: annularly convolutional neural networks on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [11] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen (2018) PointCNN: convolution on X-transformed points. In Advances in Neural Information Processing Systems.
  • [12] Z. Li and D. Hoiem (2018) Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12), pp. 2935–2947.
  • [13] A. Mallya and S. Lazebnik (2018) PackNet: adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [14] D. Maturana and S. Scherer (2015) VoxNet: a 3D convolutional neural network for real-time object recognition. In IROS.
  • [15] M. McCloskey and N. J. Cohen (1989) Catastrophic interference in connectionist networks: the sequential learning problem. Psychology of Learning and Motivation - Advances in Research and Theory.
  • [16] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pp. 3111–3119.
  • [17] O. Ostapenko, M. Puscas, T. Klein, P. Jahnichen, and M. Nabi (2019) Learning to remember: a synaptic plasticity driven framework for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [18] J. Pennington, R. Socher, and C. D. Manning (2014) GloVe: global vectors for word representation. In EMNLP.
  • [19] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [20] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas (2016) Volumetric and multi-view CNNs for object classification on 3D data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [21] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st International Conference on Neural Information Processing Systems.
  • [22] A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [23] S. Rahman, S. Khan, N. Barnes, and F. S. Khan (2020) Any-shot object detection. In Proceedings of the Asian Conference on Computer Vision (ACCV).
  • [24] S. Rahman, S. Khan, and N. Barnes (2019) Transductive learning for zero-shot object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
  • [25] S. Rahman, S. H. Khan, and F. Porikli (2020) Zero-shot object detection: joint recognition and localization of novel concepts. International Journal of Computer Vision 128 (12), pp. 2979–2999.
  • [26] S. A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert (2017) iCaRL: incremental classifier and representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [27] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV).
  • [28] H. Shin, J. K. Lee, J. Kim, and J. Kim (2017) Continual learning with deep generative replay. In Advances in Neural Information Processing Systems.
  • [29] M. Simonovsky and N. Komodakis (2017) Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [30] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations (ICLR).
  • [31] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller (2015) Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 945–953.
  • [32] M. A. Uy, Q. H. Pham, B. S. Hua, T. Nguyen, and S. K. Yeung (2019) Revisiting point cloud classification: a new benchmark dataset and classification model on real-world data. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
  • [33] C. Wah, S. Branson, P. Perona, and S. Belongie (2011) Multiclass recognition and part localization with humans in the loop. In International Conference on Computer Vision (ICCV).
  • [34] P. Wang, Y. Liu, Y. Guo, C. Sun, and X. Tong (2017) O-CNN: octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics (TOG).
  • [35] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2019) Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG).
  • [36] C. Wu, L. Herranz, X. Liu, Y. Wang, J. Van De Weijer, and B. Raducanu (2018) Memory replay GANs: learning to generate images from new categories without forgetting. In Advances in Neural Information Processing Systems.
  • [37] W. Wu, Z. Qi, and L. Fuxin (2019) PointConv: deep convolutional networks on 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [38] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3D ShapeNets: a deep representation for volumetric shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [39] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata (2018) Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [40] Z. Yang and L. Wang (2019) Learning relationships for multi-view 3D object recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • [41] J. Zhang, J. Zhang, S. Ghosh, D. Li, S. Tasci, L. Heck, H. Zhang, and C. J. Kuo (2020) Class-incremental learning via deep model consolidation. In Workshop on Applications of Computer Vision (WACV).
  • [42] C. Zhu, F. Chen, U. Ahmed, and M. Savvides (2021) Semantic relation reasoning for shot-stable few-shot object detection. arXiv preprint arXiv:2103.01903.