ProtoGAN: Towards Few Shot Learning for Action Recognition

Sai Kumar Dwivedi et al. ∙ Daimler AG ∙ IIT Bombay ∙ September 17, 2019

Few-shot learning (FSL) for action recognition is the challenging task of recognizing novel action categories which are represented by only a few instances in the training data. In the more general FSL setting (G-FSL), both seen and novel action categories need to be recognized. Conventional classifiers suffer from inadequate data in the FSL setting and from an inherent bias towards seen action categories in the G-FSL setting. In this paper, we address this problem by proposing a novel ProtoGAN framework which synthesizes additional examples for novel categories by conditioning a conditional generative adversarial network with class-prototype vectors. These class-prototype vectors are learnt from examples of seen categories using a Class Prototype Transfer Network (CPTN). Our synthesized examples for a novel class are semantically similar to real examples belonging to that class and are used to train a model exhibiting better generalization towards novel classes. We support our claim by performing extensive experiments on three datasets: UCF101, HMDB51 and Olympic-Sports. To the best of our knowledge, we are the first to report results for G-FSL and provide a strong benchmark for future research. We also outperform the state-of-the-art method in FSL on all of the aforementioned datasets.


1 Introduction

Action recognition has been a long-standing and actively pursued problem in the computer vision community due to its practical applications in areas such as surveillance, semantic video retrieval and multimedia mining. Recently, Convolutional Neural Network (CNN) based methods [3, 22, 24] have achieved tremendous success in recognizing actions from videos in the supervised learning paradigm. However, the performance of these methods deteriorates drastically [25] when recognizing action classes that are not adequately represented in their training data (novel classes). This limits the deployment of these methods in real-world applications, where the number of action classes to be recognized grows rapidly with new use cases. In some cases, the number of samples available for a novel class is too small for even traditional data augmentation techniques [11] to help. Therefore, systems based on a more advanced learning paradigm, Few-Shot Learning [8], which learns to recognize novel classes from only a few examples (few shots), have come into prominence.

The few-shot learning problem can be divided into two broad settings [8] based on their evaluation protocols: standard FSL (FSL) and the more realistic Generalized-FSL (G-FSL). FSL focuses on recognizing only novel classes during evaluation, whereas under G-FSL a combination of both seen (adequately represented at training) and novel classes is considered. The presence of seen classes during evaluation can incorrectly bias a classifier towards predicting a seen class when the input belongs to a novel class. Hence, the G-FSL setting is considered more challenging. There have been several approaches [29, 19, 8] to tackle FSL for image classification, the most notable using meta-learning, representation learning and generative modelling. Meta-learning mimics the few-shot inference-time scenario during training, representation learning tries to learn the similarity of new samples to the existing few shots, and generative modelling augments the novel classes with synthetic data. However, similar approaches for action recognition remain largely under-explored.

Figure 1: Illustration of our proposed ProtoGAN framework. (a) The encoder extracts video features using a spatio-temporal CNN. (b) The class-prototype vector for a novel class is learnt by mapping video features of seen data through the CPTN to a target formed by the feature aggregator function, using a cosine similarity loss. Solid lines denote the training-time path and dotted lines the inference path. (c) The learnt class-prototype vector is used as the conditioning element in a CGAN to synthesize additional samples, with the generator and discriminator trained with an adversarial loss. A decoder reconstructs the prototype vector from the generator's output to ensure discriminative properties. (d) The classifier is trained on real seen features, few-shot features and synthetic features of novel classes.

Generative methods like the Conditional GAN (CGAN) [13] synthesize additional data for novel classes using a conditioning element. In image classification, Antoniou et al. [1] explored a data-augmentation GAN which uses the few-shot samples directly as the conditioning element to generate synthetic data. Zhang et al. [29] use statistics computed from the samples as an alternative conditioning element. However, when the number of shots is small, the conditioning element in these methods fails to capture the class semantics.

To this end, we propose the ProtoGAN framework, which conditions a CGAN on a class-prototype vector to synthesize additional video features for the action recognition classifier. The class-prototype vector is learnt through a feature aggregator network called the Class Prototype Transfer Network (CPTN). The synthetic features generated with our learned conditioning element for a class are semantically similar to the actual features of videos belonging to that class. We justify our claims by performing extensive experiments on three publicly available datasets, namely UCF101 [20], HMDB [12] and Olympic-Sports [16], under the G-FSL and FSL settings. We focus the ablations of our framework on G-FSL as it presents a more practical setting for real-world scenarios. The key contributions of our paper can be summarized as follows:

  1. We introduce a learned class-prototype vector for videos that captures class-level semantics and is subsequently used as the conditioning element of a CGAN to generate relevant synthetic features for novel classes.

  2. To the best of our knowledge, we are the first to report results for Generalized Few-Shot Learning (G-FSL) on publicly available action recognition datasets, thereby providing a strong benchmark for future research.

  3. We outperform the state-of-the-art method under the standard FSL setting on the three aforementioned datasets and across different numbers of samples, or shots.

2 Related Work

FSL approaches in the image classification literature are predominantly based on meta-learning [18, 29, 6]. Under the umbrella of meta-learning, various approaches such as metric learning [19, 23, 21], learning optimization [17] and learning to initialize and fine-tune [6] have been proposed. Metric-learning methods learn the similarity between images [23] to classify samples of novel classes at inference based on nearest neighbours among the labelled samples. Learning optimization involves memory-based networks such as LSTMs [26] or memory-augmented networks [15], which aim to replace the stochastic gradient descent optimizer with the help of external memory. Learning to initialize and fine-tune aims to make minimal changes in the network when adapting to the new task of classifying novel classes with few examples. Although all the works mentioned above attempt to solve the broad problem of few-shot learning, many of them do not exploit the full potential of seen classes, as they learn their models in a k-shot n-way regime [15]. In [8], the authors propose to learn a representation of visual features from the seen classes and translate it to novel classes for images. Recently, GAN-based synthesis for novel classes has been gaining importance [1], where a novel class is augmented with synthetic features. In [29], a generic framework is explored that augments data by conditioning a GAN on a sample-mean class-representative vector. A detailed overview of existing FSL methods for images is given in [4].

In contrast to FSL for images, video-based FSL tasks such as action recognition have received less attention. In [30], the authors propose a compound memory architecture that transforms a variable-length video sequence into a fixed-length matrix to aid few-shot classification. A method for action localization in the FSL setting is explored in [27]. Attribute-based feature generation for unseen classes with a GAN using a Fisher vector representation was explored for zero-shot learning in [28]. The authors of [14] use a Gaussian-based generative approach to augment data for novel classes, where each action is represented in the visual space as a probability distribution. Since the experimental setup in [14] classifies all novel classes together, in contrast to the k-shot n-way setting, it forms a strong baseline for comparison with our proposed framework in the FSL setting.

Our approach: To avoid the classifier's bias towards seen classes, unlike the approaches mentioned above, we learn a prototype vector which captures the semantics of a class using features from seen classes. A CGAN is trained to learn the mapping from the prototype vector to visual features. Additional data for novel classes is then synthesized by using the prototype vector predicted by the CPTN as the conditioning element of the CGAN.

3 Proposed Method

An overview of our proposed framework is shown in Fig. 1. It can be broadly segregated into four major blocks: (a) an encoder for extracting video features, (b) the Class Prototype Transfer Network for creating class-prototype vectors, (c) a CGAN for generating synthetic features when conditioned on a class-prototype vector, and (d) a classifier for predicting the correct action class.

3.1 Preliminaries

Let $x$, $\tilde{x}$, $y$ and $p$ denote real video features, synthetic video features, class labels and class-prototype vectors, respectively. Let $\mathcal{D}_s = \{(x, y, p_y)\}$ be the training set for the seen classes, where $x$ denotes the spatio-temporal features, $y$ denotes a class label from the seen-label set $\mathcal{Y}_s$, and the class-prototype vector $p_y$ is calculated from the video features of class $y$. Let the number of examples of each seen class be denoted by $N$. Additionally, a set $\mathcal{D}_n = \{(x, y)\}$ is available during training for the novel categories, where $y$ is a class from a novel-label set $\mathcal{Y}_n$ disjoint from $\mathcal{Y}_s$. The number of examples of each novel class is bounded by a small value $k$ (the number of shots), defined as the minimum class cardinality in $\mathcal{D}_n$, with $k \ll N$. The class-prototype vector of a novel class is inferred through the CPTN. In G-FSL, the task is to learn a classifier $f_{gfsl}: x \rightarrow \mathcal{Y}_s \cup \mathcal{Y}_n$, and in FSL the task is to learn a classifier $f_{fsl}: x \rightarrow \mathcal{Y}_n$.

3.2 Class Prototype Transfer Network (CPTN)

Generation of synthetic samples for a class using a CGAN [13] requires a conditioning element capturing the semantics of that class. When the number of samples belonging to the class is large, statistical methods like the mean of the samples tend to represent those semantics accurately [7]. However, for novel classes with few examples this does not hold, as such methods are susceptible to capturing noisy intricacies due to the lack of sufficient data. To this end, we learn a mapping function, the Class Prototype Transfer Network (CPTN), through feature aggregation, where the features of each video belonging to a particular class are mapped to a lower-dimensional embedding that serves as the representative of that class. We term this vector embedding the class-prototype vector. We use the seen-class data, which contains a large number of samples, to model this mapping and then transfer it to the novel categories.

Feature aggregation is a two-step process in which sample representation is followed by dimensionality reduction, as per the following equation:

$$p_c = g\Big(\frac{1}{N_c}\sum_{i=1}^{N_c} x_i^{c}\Big) \qquad (1)$$

where $N_c$ is the number of instances of seen class $c$, $\frac{1}{N_c}\sum_{i} x_i^{c}$ is the mean of its samples, $g(\cdot)$ is a dimensionality reduction function such as average/max pooling applied on this mean, and $p_c$ is the prototype vector of seen class $c$ formed through this process. We use average pooling in our approach. The feature aggregation step ensures that meaningful information is retained while getting rid of intricate and noisy details specific to individual videos. A cosine similarity loss is used to train the CPTN and is given by:

$$\mathcal{L}_{cos} = 1 - \frac{\hat{p}_c \cdot p_c}{\lVert \hat{p}_c \rVert \, \lVert p_c \rVert} \qquad (2)$$

where $\hat{p}_c$ is the class-prototype vector predicted by the CPTN and $p_c$ is the target formed by Eq. 1 for the seen classes. Once the network is trained, the prototype vector for a sample video belonging to a novel class is obtained by passing its video features through the CPTN. For subsequent stages, we use the CPTN prediction $\hat{p}$ as the class-prototype vector for novel classes and the target $p$ from Eq. 1 for seen classes. The training framework is shown in block (b) of Fig. 1.
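For concreteness, the sketch below illustrates the CPTN stage with the settings of Section 4.1 (4096-d C3D features, a 1024-d hidden layer, a 128-d prototype, Sigmoid output, cosine similarity loss, ADAM with lr 0.005 and weight decay 0.0005). The hidden ReLU activation, the pooling window used to turn the 4096-d class mean into a 128-d target, and the toy batch are our own illustrative assumptions, not details stated in the paper.

```python
import torch
import torch.nn as nn

class CPTN(nn.Module):
    """Two-layer MLP mapping a 4096-d video feature to a 128-d class prototype."""
    def __init__(self, feat_dim=4096, hidden_dim=1024, proto_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),   # hidden activation assumed
            nn.Linear(hidden_dim, proto_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def prototype_target(class_feats, pool=32):
    """Eq. 1 target: mean over class samples, then average pooling g(.) (4096 -> 128)."""
    mu = class_feats.mean(dim=0)            # sample mean, shape (4096,)
    return mu.view(-1, pool).mean(dim=1)    # pooling window of 32 is an assumption

cptn = CPTN()
opt = torch.optim.Adam(cptn.parameters(), lr=0.005, weight_decay=0.0005)
cos = nn.CosineSimilarity(dim=-1)

feats = torch.randn(48, 4096)               # toy features of one seen class
target = prototype_target(feats)             # class-prototype target p_c
pred = cptn(feats)                            # per-video prototype predictions
loss = (1.0 - cos(pred, target.expand_as(pred))).mean()   # Eq. 2
opt.zero_grad(); loss.backward(); opt.step()
```

At inference, a novel-class video feature is simply passed through the trained `cptn` to obtain its class-prototype vector.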

UCF101
                    1-shot                           3-shot                           5-shot
Methods             S          N          H          S          N          H          S          N          H
Base-Classifier     82.7±0.6   38.4±2.0   52.4±1.9   83.1±5.4   61.2±3.0   70.3±1.2   88.2±0.9   68.8±1.9   77.3±0.9
Heuristic-Proto     88.0±1.0   46.3±2.2   60.7±2.0   92.0±0.7   62.1±1.9   74.1±1.2   94.0±1.4   68.7±1.5   79.3±0.7
Sample-Proto        88.0±1.9   45.9±2.4   60.3±2.2   92.0±0.9   61.9±1.9   74.0±1.3   94.4±0.8   68.4±1.3   79.3±0.8
Learned-Proto       75.3±1.3   52.3±2.2   61.7±1.6   87.7±0.8   64.9±1.7   74.6±1.0   90.5±0.9   71.3±1.2   79.7±0.8

HMDB
                    1-shot                           3-shot                           5-shot
Methods             S          N          H          S          N          H          S          N          H
Base-Classifier     59.9±1.3   12.7±2.4   20.9±3.3   52.5±1.1   35.7±2.0   42.4±1.4   61.4±4.6   38.7±3.0   47.2±1.0
Heuristic-Proto     53.3±0.9   19.5±1.6   28.5±1.8   59.7±2.4   35.4±2.1   44.3±1.3   57.9±1.9   44.5±1.4   50.3±0.6
Sample-Proto        53.4±1.8   19.7±1.6   28.7±1.6   59.6±2.1   35.3±2.0   44.3±1.3   58.8±3.5   43.8±1.7   50.1±0.7
Learned-Proto       51.9±1.5   25.8±1.4   34.4±1.3   58.9±1.8   37.4±1.4   45.7±0.9   61.6±3.0   43.3±1.5   50.9±0.6

Olympic
                    1-shot                           3-shot                           5-shot
Methods             S          N          H          S          N          H          S          N          H
Base-Classifier     94.5±3.1   16.9±3.1   28.6±4.4   93.2±3.2   41.1±2.9   56.9±2.9   92.0±3.3   54.9±3.2   68.7±2.0
Heuristic-Proto     95.5±3.2   18.5±2.9   30.9±4.0   95.3±3.5   41.2±2.8   57.5±2.6   95.0±3.7   54.8±3.2   69.4±2.1
Sample-Proto        95.5±3.7   18.6±2.9   31.0±4.8   95.1±3.4   41.9±2.8   58.1±2.6   94.8±3.9   55.7±3.4   70.0±2.3
Learned-Proto       94.8±3.4   20.9±2.6   34.1±4.7   93.5±3.0   46.0±2.7   61.5±2.2   93.2±3.4   59.2±3.4   72.2±2.0

Table 1: Comparison of action recognition accuracy (%) of our proposed framework with other baselines on the UCF101, HMDB and Olympic datasets under the G-FSL setting. Base-Classifier denotes a vanilla classifier with standard augmentation. Heuristic-Proto and Sample-Proto denote proposed baselines inspired by prototype vectors used in image classification. S, N and H denote seen-class accuracy, novel-class accuracy and their harmonic mean, respectively. We report the mean accuracy over 20 different training runs; the standard deviation is reported after ±.

3.3 Conditional GAN

This module of our ProtoGAN framework uses a CGAN [13] to generate synthetic video features $\tilde{x}$ for novel classes, using the conditioning element learned in the preceding CPTN stage. The Wasserstein loss [2] is chosen over the vanilla GAN loss as it provides more training stability in a low-data regime [28]. The Wasserstein loss of a CGAN between real features $x$ and synthetically generated features $\tilde{x}$ is given by

$$\mathcal{L}_{WGAN} = \mathbb{E}\big[D(x, p)\big] - \mathbb{E}\big[D(\tilde{x}, p)\big] - \lambda\,\mathbb{E}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}, p) \rVert_2 - 1)^2\big] \qquad (3)$$

with $\tilde{x} = G(z, p)$, where $x$ denotes real video features drawn from the training data, $\hat{x}$ is a convex combination of $x$ and $\tilde{x}$, $p$ is the conditioning element of a particular class, $D(\cdot)$ is the discriminator, $G(\cdot)$ is the generator, $z$ is the noise vector and $\lambda$ is the penalty coefficient. The first two terms in Eq. 3 approximate the Wasserstein distance and the third term is the penalty constraining the gradient of $D(\cdot)$ to have unit norm along the convex combination of real and generated pairs. The class-prototype vector embedding, along with random noise, is fed as input to the generator $G(\cdot)$, which produces an output with the same dimension as the input video features. The generated video features, together with the real video features, are passed to the discriminator $D(\cdot)$, which is trained with the adversarial loss of Eq. 3. Hence, at equilibrium, the generator produces video features similar to the real features.
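The following is a minimal sketch of the critic side of Eq. 3, assuming PyTorch modules `D(x, p)` and `G(z, p)` with the dimensions of Section 4.1; the noise dimension and batch handling are illustrative assumptions.

```python
import torch

def critic_loss(D, G, real_x, proto, z_dim=128, lam=10.0):
    """Negative Wasserstein estimate plus gradient penalty (minimized by the critic)."""
    z = torch.randn(real_x.size(0), z_dim)
    fake_x = G(z, proto).detach()                       # synthetic features x~
    eps = torch.rand(real_x.size(0), 1)                 # convex combination x^
    x_hat = (eps * real_x + (1 - eps) * fake_x).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat, proto).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()        # unit-norm gradient penalty
    return -(D(real_x, proto).mean() - D(fake_x, proto).mean()) + lam * gp
```

The sign is flipped relative to Eq. 3 because the critic maximizes the Wasserstein estimate, which in practice is implemented by minimizing its negative.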

The generated features of a particular class should be similar to the real features of that class and far from the features of other classes. Since it is compute-intensive to find the closest match for a generated feature among the real features of its class, generated and real features are grouped to form unmatched pairs (belonging to different classes), and a cosine embedding loss $\mathcal{L}_{emb}$ is used to compute the distance for these unmatched pairs.

To ensure that the generated features carry the class semantics needed for subsequent supervised classification, a decoder $Dec(\cdot)$ is used, similar to [5]. It attempts to reconstruct the class-prototype vector $p$ from the generated feature $\tilde{x}$, and a cosine similarity loss $\mathcal{L}_{rec}$ is used for this reconstruction. Thus, the net loss of the CGAN is given by

$$\mathcal{L} = \mathcal{L}_{WGAN} + \alpha\,\mathcal{L}_{rec} + \beta\,\mathcal{L}_{emb} \qquad (4)$$

where $\alpha$ is the hyper-parameter weighting the reconstruction loss and $\beta$ the embedding loss. The training framework of this stage is shown in block (c) of Fig. 1.
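A hedged sketch of the generator-side objective of Eq. 4 follows: the adversarial term, the prototype reconstruction through the decoder, and the cosine embedding term on unmatched pairs. The weights `alpha`/`beta`, the noise dimension, and the way unmatched pairs are supplied (`other_class_real_x`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def generator_loss(D, G, Dec, proto, other_class_real_x, z_dim=128,
                   alpha=0.1, beta=0.1):
    """Adversarial + alpha * reconstruction + beta * embedding loss for one batch."""
    z = torch.randn(proto.size(0), z_dim)
    fake_x = G(z, proto)
    adv = -D(fake_x, proto).mean()                                        # fool the critic
    rec = (1 - F.cosine_similarity(Dec(fake_x), proto, dim=1)).mean()     # L_rec
    # push generated features away from real features of *other* classes (unmatched pairs)
    emb = F.cosine_embedding_loss(fake_x, other_class_real_x,
                                  torch.full((proto.size(0),), -1.0))     # L_emb
    return adv + alpha * rec + beta * emb
```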

                 UCF101              HMDB                Olympic
Method           1-shot    5-shot    1-shot    5-shot    1-shot    5-shot
Sample-Proto     0.248     0.117     0.341     0.205     0.297     0.193
Learned-Proto    0.219     0.106     0.323     0.198     0.272     0.191

Table 2: Cosine distance between the mean of synthetic features and the mean of test data for novel classes. Results are reported for all datasets in the 1-shot and 5-shot settings. Lower is better.
                 UCF101                  HMDB
Method           1-shot      5-shot      1-shot      5-shot
No-Pool          57.8±1.7    79.8±1.1    23.3±2.3    51.2±0.8
Max-Pool         60.6±1.3    79.6±0.9    24.3±2.5    50.7±0.8
Average-Pool     61.7±1.6    79.7±0.8    34.4±1.3    50.9±0.6

Table 3: Comparison of action recognition accuracy (%) of our proposed framework with different dimensionality reduction functions on UCF101 and HMDB under the G-FSL setting. No-Pool denotes no dimensionality reduction, Max-Pool denotes max pooling and Average-Pool denotes average pooling. We report the harmonic-mean accuracy over 20 different training runs; the standard deviation is reported after ±.

3.4 Classifier

We utilize the generator learned in the previous stage to produce additional video features for the novel classes, given their corresponding class-prototypes obtained through the CPTN. The classifier is trained in a supervised learning paradigm on the real features of seen and novel classes and the synthetic features of the novel classes. We train the action recognition classifiers $f_{gfsl}$ and $f_{fsl}$ for the G-FSL and FSL settings, respectively, with a cross-entropy loss.

We observed that generated samples with a high reconstruction error are not suitable for training the classifier, as they are semantically different from their corresponding class. Hence, we use a pruning step, sketched below, that removes all synthetic features with a high reconstruction loss. Features with a low reconstruction loss tend to be more discriminative, because the decoder reconstructs the class-prototype vector, which is the representative of the class.
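A minimal sketch of this pruning step, assuming the decoder from the CGAN stage and a cosine reconstruction loss; the function name and the `keep` count are illustrative (Section 4.1 describes generating twice the maximum seen-class count and keeping the best samples).

```python
import torch
import torch.nn.functional as F

def prune_synthetic(fake_x, proto, Dec, keep):
    """Sort generated features by prototype-reconstruction loss; keep the best `keep`."""
    rec_loss = 1 - F.cosine_similarity(Dec(fake_x),
                                       proto.expand(fake_x.size(0), -1), dim=1)
    idx = rec_loss.argsort()[:keep]          # ascending loss = most class-consistent
    return fake_x[idx]
```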

4 Experimental Setup

Details about the experimental setup are discussed in this section. Implementation details such as the network architectures and training procedures are explained in Section 4.1, the evaluation protocols followed under the G-FSL and FSL settings are highlighted in Section 4.2, and the datasets used for evaluation are described in Section 4.3.

Figure 2: Mean classification accuracy (%) of our proposed ProtoGAN framework and other baselines for all classes (when present as novel classes in different experimental runs) of the Olympic Sports dataset. Classes are numbered in alphabetical order of their names. We report the mean accuracy over 10 different runs. Best viewed in color.
                   UCF101                  HMDB
Method             1-shot      5-shot      1-shot      5-shot
Without-Pruning    60.8±1.9    79.5±0.9    23.7±2.6    50.7±0.8
With-Pruning       61.7±1.6    79.7±0.8    34.4±1.3    50.9±0.6

Table 4: Comparison of action recognition accuracy (%) of our proposed framework with and without pruning the synthetic features under the G-FSL setting. We report the harmonic-mean accuracy over 20 different training runs; the standard deviation is reported after ±.

4.1 Implementation Details

  • Encoder - We extracted video features from a C3D network [22] pre-trained on the Sports-1M dataset [9]: each video was divided into non-overlapping chunks of 16 frames and the mean of the 4096-dimensional fc6 layer outputs of [22] was taken as the video feature (a sketch of this step appears after this list).

  • Class Prototype Transfer Network (CPTN) - A two-layer MLP was used, with an input size of 4096, a hidden layer of 1024 and an output layer (the class-prototype vector embedding) of 128 neurons with a Sigmoid activation function. The model was trained with the cosine similarity loss using the ADAM optimizer [10], initialized with a learning rate of 0.005 and a weight decay of 0.0005, for 50 epochs.

  • CGAN - The generator is a two-layer fully connected network whose input size is the sum of the class-prototype embedding and noise-vector dimensions and whose output size matches the video feature dimension (4096). The decoder used in the cyclic reconstruction is a two-layer fully connected network with input and output sizes of 4096 and 128, respectively. The discriminator is a two-layer fully connected network with an input dimension of 4096 and an output dimension of 1, deciding whether the features are real or fake. The CGAN was trained for 25 epochs using the ADAM optimizer with a learning rate of 0.001 and a weight decay of 0.0001. LeakyReLU was used as the activation function for the generator, discriminator and decoder. The weighting hyper-parameters $\alpha$ and $\beta$ in Eq. 4 were kept fixed. The gradient penalty parameter $\lambda$ of the WGAN was set to 10.

  • Classifier - The final action recognition classifier is a single-layer MLP whose input dimension matches the video features (4096) and whose output dimension equals the number of classes. A cross-entropy loss was used with the ADAM optimizer. For each novel class, we generated twice the maximum number of samples present in any seen class, sorted them in ascending order of their reconstruction loss and picked the top samples with the lowest loss.
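As referenced in the encoder item above, the following is a hedged sketch of the feature extraction step: split a video into non-overlapping 16-frame chunks, run each chunk through a pre-trained C3D up to fc6, and average the 4096-d outputs. Here `c3d_fc6` is an assumed callable mapping a (1, 16, C, H, W) clip to a (1, 4096) feature, not a real library API.

```python
import torch

def encode_video(frames, c3d_fc6, chunk=16):
    """frames: tensor of shape (T, C, H, W); returns a single 4096-d video feature."""
    n = frames.size(0) // chunk                          # drop any trailing partial chunk
    clips = frames[: n * chunk].view(n, chunk, *frames.shape[1:])
    feats = torch.stack([c3d_fc6(clip.unsqueeze(0)).squeeze(0) for clip in clips])
    return feats.mean(dim=0)                             # mean of fc6 outputs, size 4096
```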

4.2 Evaluation Protocols

To highlight the efficacy of our proposed framework, we thoroughly evaluate our method under the G-FSL and FSL settings on all the action recognition datasets. A brief description of the evaluation protocols follows:


  • G-FSL setting - In this setting, the test set consists of seen and novel classes and the model is evaluated by its classification accuracy. We report the accuracy on seen classes, the accuracy on novel classes and their harmonic mean H, computed as per Eq. 5 (a minimal sketch follows at the end of this subsection):

    $$H = \frac{2\, A_s\, A_n}{A_s + A_n} \qquad (5)$$

    where $A_s$ and $A_n$ are the accuracies on the seen and novel classes, respectively.

  • FSL setting - In this setting, the test split consists of only the novel classes.

In our experiments, under the k-shot setting, k random samples were chosen from the set of available examples for each novel class. To eliminate bias towards any given set of samples and any particular (seen, novel) class split, we repeated our experiments 20 times with randomly chosen splits and samples, and report the mean results. For instance, in the 3-shot setting, for each of the 20 training runs over different class splits, 3 random samples were drawn for each novel class.
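As a quick check of Eq. 5, the small function below computes the harmonic mean H from the seen and novel accuracies; the example values are the Learned-Proto 1-shot UCF101 numbers from Table 1, and the rounding to one decimal is our own arithmetic.

```python
def harmonic_mean(acc_seen, acc_novel):
    """Eq. 5: H = 2 * A_s * A_n / (A_s + A_n)."""
    if acc_seen + acc_novel == 0:
        return 0.0
    return 2 * acc_seen * acc_novel / (acc_seen + acc_novel)

# Learned-Proto, UCF101, 1-shot (Table 1): S = 75.3, N = 52.3
print(round(harmonic_mean(75.3, 52.3), 1))   # ~61.7, matching the reported H
```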

                              k-shots
Dataset    Method             1           2           3           4           5
UCF101     [14]               -           68.7±3.3    73.5±2.2    76.5±2.1    78.6±1.8
UCF101     Learned-Proto      57.8±3.0    71.1±3.0    75.3±2.7    78.0±1.8    80.2±1.3
HMDB       [14]               -           42.1±3.6    47.5±3.3    50.3±3.4    52.5±3.1
HMDB       Learned-Proto      34.7±9.2    43.8±5.4    49.1±5.1    52.3±4.2    54.0±3.9
Olympic    [14]               -           73.2±7.4    75.3±7.3    80.2±7.2    83.8±7.1
Olympic    Learned-Proto      71.6±9.4    75.0±7.4    78.4±6.2    82.1±5.6    86.3±5.1

Table 5: Comparison of action recognition accuracy (%) of our proposed ProtoGAN framework against the current state of the art on the UCF101, HMDB and Olympic datasets under the FSL setting. We report the mean accuracy over 20 different training runs; the standard deviation is reported after ±. The 1-shot recognition accuracy is not reported in [14].

4.3 Datasets

We evaluate our method on three publicly available action recognition datasets:


  • UCF101 [20]: contains 13320 videos spanning 101 classes. For our experiments we randomly split the data into 51 seen and 50 novel classes.

  • HMDB51 [12]: contains 6766 videos spanning 51 classes. For our experiments we randomly split the data into 26 seen and 25 novel classes.

  • Olympic Sports [16]: contains 783 videos spanning 16 classes; we refer to it as Olympic in subsequent sections. For our experiments we randomly split the data into 8 seen and 8 novel classes.

5 Experimental Results

Our experimental results are discussed in this section. Results for our proposed framework under the G-FSL setting are compared with different baselines in Section 5.1. Section 5.2 presents ablation studies highlighting the importance of various components of our framework. A comparison with the state-of-the-art method under the FSL setting is discussed in Section 5.3.

5.1 Generalized-Few Shot Learning (G-FSL)

As results under G-FSL on the aforementioned action recognition datasets are reported here for the first time, we designed baseline methods inspired by few-shot learning methods proposed for image classification. Similar to [29], the sample mean of the examples of a class is taken as its class-prototype; we term this method Heuristic-Proto in our experiments. In Sample-Proto, the video features are directly taken as the class-prototype, as described in [1]. Our proposed class-prototype vectors generated through the CPTN are termed Learned-Proto. To assess the contribution of the entire framework, a vanilla classifier, Base-Classifier, was trained directly on C3D video features with the standard augmentation mentioned in [11]. The results for all the above approaches on UCF101, HMDB and Olympic are reported in Table 1.

As can be seen in Table 1, Heuristic-Proto, Sample-Proto and Learned-Proto outperform the Base-Classifier by a large margin for all shots on all datasets. This establishes that the addition of generated features removes the classifier's bias towards seen classes, reflected in the accuracy gains on novel classes. Taking the mean of the samples of a novel class hurts Heuristic-Proto in the 1-shot setting while giving competitive results for 5-shots; this can be attributed to the poor representation of the conditioning element when it is the mean of a single example. Sample-Proto performs better than the other two baselines but shows a higher standard deviation, because its performance depends on the quality of the chosen samples: the class-prototype is represented by the samples themselves without any aggregation. In contrast, the statistics transferred through the CPTN in our proposed ProtoGAN framework reduce this adverse effect. Learned-Proto outperforms the baseline methods by up to 1.4%, 5.7% and 2.9% on UCF101, HMDB and Olympic, respectively, in the 1-shot setting.

Figure 2 compares the per-class mean accuracy of our proposed ProtoGAN framework with the other baselines. The mean accuracy is taken over all classes when they appear as novel classes across 10 different experimental runs. As can be seen from the figure, our proposed framework performs better on 11 out of the 16 classes of the Olympic Sports dataset, with gains in the range of 1-4%, while the margin by which it falls behind on the remaining classes is comparatively small. This re-establishes the superiority of the ProtoGAN framework.

5.2 Ablation Studies

5.2.1 Quality of Synthetic Features

To verify and compare the quality of the features synthesized for novel classes using different conditioning elements, we quantify their similarity to the real video features available in the test set. Specifically, we take the mean of all synthesized features and the mean of all real features of a novel class separately and compute the cosine distance between them (as sketched below). The mean cosine distance over all novel classes for the 1-shot and 5-shot settings is reported in Table 2 for UCF101, HMDB and Olympic. One can observe that the examples generated using our Learned-Proto in the 1-shot setting match the distribution of real features considerably more closely than those of Sample-Proto.
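A minimal sketch of this similarity check, assuming each argument is a tensor of per-video 4096-d features for one novel class:

```python
import torch
import torch.nn.functional as F

def mean_cosine_distance(synthetic_feats, real_feats):
    """Cosine distance between the class means of synthetic and real features; lower is better."""
    return 1.0 - F.cosine_similarity(synthetic_feats.mean(dim=0),
                                     real_feats.mean(dim=0), dim=0).item()
```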

5.2.2 Dimensionality Reduction

The dimensionality reduction function $g(\cdot)$ plays a crucial role in removing intricate details while preserving class semantics. To demonstrate its effect, one model was trained without applying any pooling and another with max pooling instead of average pooling. As can be seen in Table 3, the recognition accuracy drops significantly when no dimensionality reduction is used, reaffirming that discarding such intricacies helps in creating a better class-prototype vector. Max pooling is slightly inferior to average pooling; a likely reason is that features obtained after average pooling carry more aggregate information than those after max pooling, and hence provide more stable prototypes.

5.2.3 Pruning

To validate the effect of pruning synthetic features by increasing reconstruction loss, the proposed ProtoGAN framework was also trained without it; the results are reported in Table 4. The superior performance of the classifier trained on pruned data validates the efficacy of the reconstruction loss in creating, and subsequently selecting, meaningful examples. The gain is higher for HMDB than for UCF101, which can be attributed to the smaller number of seen examples, affecting the quality of the class-prototype vectors and hence the generator.

5.3 Few Shot Learning (FSL)

A comparison of our ProtoGAN framework against the current state-of-the-art approach [14] under the FSL setting is presented in Table 5. The authors of [14] also used C3D [22] video features in their evaluation. Our method outperforms [14] for all values of k with a similar or lower standard deviation. Note that 1-shot results are not reported in [14]. The improved performance can be attributed to the use of a learned prototype vector: a vector computed through a non-linear function realized by a network provides a better alternative to one formed by a linear combination of basis vectors as in [14]. This establishes the wide applicability of our approach in both the G-FSL and the conventional FSL setting.

6 Conclusion

In this paper, we presented the novel ProtoGAN framework, which synthesizes video features for novel categories by conditioning a CGAN on a class-prototype vector embedding, to address the problem of few-shot learning for action recognition. The class-prototype vector is learnt through a feature aggregator network called the Class Prototype Transfer Network (CPTN). The performance of the proposed framework was evaluated on three publicly available datasets for both seen and novel classes under the G-FSL setting for the first time. We obtained encouraging results showing the efficacy of the proposed framework under G-FSL for action recognition and established a strong benchmark for future research. Under the standard FSL setting, we outperform the state-of-the-art method on all datasets across different shots.

Acknowledgement: We gratefully acknowledge Brijesh Pillai and Partha Bhattacharya at Mercedes-Benz R&D India, Bangalore for providing the funding and infrastructure for this work.

References

  • [1] A. Antoniou, A. Storkey, and H. Edwards. Data augmentation generative adversarial networks. In arXiv:1711.04340, 2018.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In arXiv:1701.07875, 2017.
  • [3] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
  • [4] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang. A closer look at few-shot classification. In ICLR, 2019.
  • [5] R. Felix, B. V. Kumar, I. Reid, and G. Carneiro. Multi-modal cycle-consistent generalized zero-shot learning. In ECCV, 2018.
  • [6] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
  • [7] S. Guerriero, B. Caputo, and T. Mensink. Deepncm: Deep nearest class mean classifiers. In ICLRw, 2018.
  • [8] B. Hariharan and R. B. Girshick. Low-shot visual object recognition. In ICCV, 2017.
  • [9] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
  • [10] D. Kingma and J. Ba. Adam: a method for stochastic optimization. In arXiv:1412.6980, 2015.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Communications of the ACM, 2017.
  • [12] H. Kuehne, H. Jhuang, R. Stiefelhagen, and T. Serre. A large video database for human motion recognition. In HPCSE, 2013.
  • [13] M. Mirza and S. Osindero. Conditional generative adversarial nets. In arXiv:1411.1784, 2014.
  • [14] A. Mishra, V. K. Verma, M. S. K. Reddy, A. Subramaniam, P. Rai, and A. Mittal. A generative approach to zero-shot and few-shot action recognition. In WACV, 2018.
  • [15] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
  • [16] J. C. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In ECCV, 2010.
  • [17] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2016.
  • [18] M. Ren, S. Ravi, E. Triantafillou, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018.
  • [19] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
  • [20] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. In arXiv:1212.0402, 2012.
  • [21] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.
  • [22] D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
  • [23] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In NIPS, 2016.
  • [24] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In PAMI, 2018.
  • [25] B. Xu, H. Ye, Y. Zheng, H. Wang, T. Luwang, and Y.-G. Jiang. Dense dilated network for few shot action recognition. In ICMR, 2018.
  • [26] Z. Xu, L. Zhu, and Y. Yang. Few-shot object recognition from machine-labeled web images. In CVPR, 2017.
  • [27] H. Yang, X. He, and F. Porikli. One-shot action localization by learning sequence matching network. In CVPR, 2018.
  • [28] C. Zhang and Y. Peng. Visual data synthesis via gan for zero-shot video classification. In IJCAI, 2018.
  • [29] R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song. Metagan: An adversarial approach to few-shot learning. In NIPS. 2018.
  • [30] L. Zhu and Y. Yang. Compound memory networks for few-shot video classification. In ECCV, 2018.