Recent developments in computer vision have witnessed a rapid expansion of the transformer-based model family, which has demonstrated remarkable potential in a wide range of applications, such as image recognition [dosovitskiy2020vit, zhou2021deepvit], point cloud classification [zhao2020pointtransformer], and video understanding [arnab2021vivit, bertasius2021timesformer]. Transformers have been shown to surpass the performance of convolutional networks when paired with suitable combinations of augmentation strategies [touvron2020deit].
In this paper, we report our recent exploration of training techniques for video vision transformers. Specifically, we employ ViViT [arnab2021vivit] as our base model and study the influence of the quality of the data source, augmentations, input resolutions, and network initialization. The resulting training techniques enable ViViT to achieve strong action recognition accuracy on the Epic-Kitchens-100 dataset. Additionally, we notice that although ViViT outperforms convolutional networks by a notable margin on action classification, it underperforms them on verb classification. This suggests that an ensemble of the two model families can increase the final accuracy. By combining video transformers with convolutional networks, this paper presents our solution to the Epic-Kitchens-100 Action Recognition challenge.
2 Training video vision transformers
We use ViViT-B/16x2 with a factorized encoder as our base model. Two classification heads are attached to the same class token to predict the verb and the noun of the input video clip respectively. We first pre-train the network on large, publicly available video datasets, and then fine-tune the ViViT on the Epic-Kitchens dataset.
2.1 Initialization preparation
There are multiple ways to prepare the pre-trained model, such as supervised pre-training [tran2019csn, feichtenhofer2019slowfast, arnab2021vivit, carreira2017i3d], as used in [song2019tacnet, qing2021tca, wang2020cbr], as well as unsupervised ones [huang2021mosi, han2020coclr]. Here we adopt supervised pre-training as it yields better downstream performance. The model is first trained on Kinetics 400 [kay2017k400], Kinetics 700 [carreira2019k700], and SSV2 [goyal2017ssv2] respectively. For this part, we mostly follow the training recipe of DeepViT [zhou2021deepvit] to boost performance. Specifically, we use AdamW [loshchilov2017adamw] as our optimizer, set the base learning rate to 0.0001, and set the weight decay to 0.1. We initialize the ViViT model with ViT weights pre-trained on ImageNet-21K, following the initialization approach in [arnab2021vivit], and train the model for 30 epochs with a cosine learning rate schedule. The training is warmed up for 2.5 epochs, starting from a learning rate of 1e-6. We enable color augmentation, mixup, and label smoothing, and additionally regularize the model with a drop path rate of 0.2. The results on Kinetics and SSV2 are shown in Table 1. We also trained ViViT on the optical flow of Kinetics 400, which we extract using RAFT [teed2020raft] and TV-L1 [zach2007tvl1].
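The warmup-plus-cosine schedule described above can be sketched as follows. The hyper-parameters match those quoted in the text (base learning rate 1e-4, 2.5 warm-up epochs starting from 1e-6, 30 epochs overall); the function itself is a minimal illustration, not the exact training code:

```python
import math

def lr_at_epoch(epoch, base_lr=1e-4, warmup_start_lr=1e-6,
                warmup_epochs=2.5, total_epochs=30, end_lr=0.0):
    """Cosine learning-rate schedule with linear warmup (illustrative)."""
    if epoch < warmup_epochs:
        # Linear ramp from warmup_start_lr up to base_lr.
        alpha = epoch / warmup_epochs
        return warmup_start_lr + alpha * (base_lr - warmup_start_lr)
    # Cosine decay from base_lr down to end_lr over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return end_lr + 0.5 * (base_lr - end_lr) * (1 + math.cos(math.pi * progress))
```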
2.2 Training video transformers on Epic-Kitchen
For training video transformers on Epic-Kitchens, we ablate the training recipe in terms of initialization, the quality of the data source, augmentations, input resolutions, the action score calculation strategy, and the temporal sampling stride. The training parameters, including the optimizer and the base learning rate, follow the pre-training setup. The training schedule is 50 epochs overall, with 10 warm-up epochs. The results can be observed in Table 2. Unless otherwise specified, we sample frames with an interval of one frame.
Initialization: we ablate initialization by pre-training on ImageNet-21K, Kinetics 400, Kinetics 700, and SSV2. We include the SSV2 initialization because SSV2 is likewise an egocentric action recognition dataset with complex spatio-temporal interactions. It can be observed that a stronger initialization (from ImageNet-21K to Kinetics 400, and further to Kinetics 700) leads to a notable improvement in action recognition accuracy. Decomposing the improvement into verb and noun predictions, we see that a stronger initialization brings the largest gains on noun predictions. However, although replacing the K700 initialization with SSV2 yields a slightly higher verb prediction accuracy (+0.1%), it causes a notable decrease on the noun prediction (-1.4%). Therefore, we did not include models initialized with SSV2 in the final submission.
Quality of data source: to reduce the pressure on hard-drive I/O and thus speed up training, we resize the short side of the videos to 256 and 512 respectively. Raising the quality of the input data source yields improvements of 1.2%, 1.0% and 1.3% on the action, verb and noun predictions respectively.
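The short-side resizing above preserves the aspect ratio; the helper below illustrates the output-dimension arithmetic (a hypothetical function mirroring the described preprocessing, not the actual pipeline code):

```python
def short_side_resize(h, w, target):
    """Output (height, width) after resizing the short side to `target`
    while preserving the aspect ratio."""
    if h < w:
        # Height is the short side: scale width proportionally.
        return target, round(w * target / h)
    return round(h * target / w), target
```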
Augmentations: we observe the benefit of utilizing stronger augmentations (mixup [zhang2017mixup], cutmix [yun2019cutmix] and random erasing [zhong2020randomerasing]). Compared to only using random color jittering, stronger augmentations bring an improvement of 3.2% on the action prediction, and 1.7% and 2.9% on the verb and noun predictions respectively.
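As one representative of these augmentations, mixup forms a convex combination of two clips and their one-hot labels. A minimal NumPy sketch follows; the `alpha` value is illustrative, not necessarily the one used in our experiments:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup [zhang2017mixup]: blend two inputs and their labels with a
    mixing coefficient drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2  # blended input
    y = lam * y1 + (1 - lam) * y2  # blended soft label
    return x, y, lam
```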
Input resolutions: we further alter the input resolution. Raising the input resolution from 224 to 320 brings about a 2.4% improvement on the action prediction, 2.2% on the verb prediction and 2.7% on the noun prediction. The prediction accuracy saturates when we further raise the input resolution from 320 to 384, where only a 0.6% improvement on the action prediction is observed.
Action score calculation: as indicated in the table by the numbers marked with †, calculating the action scores differently can lead to different action prediction results. As two predictions are made for each video clip, there are two ways of aggregating the action predictions over multiple views. Suppose we have predictions $p^i_v \in \mathbb{R}^{C_v}$ and $p^i_n \in \mathbb{R}^{C_n}$ for the verb and the noun respectively, where $C_v$ and $C_n$ denote the number of classes for verbs and nouns and $i$ denotes the index of a view. The first way of aggregating the predictions is:

$$p_a(v, n) = \Big(\frac{1}{V}\sum_{i=1}^{V} p^i_v(v)\Big)\cdot\Big(\frac{1}{V}\sum_{i=1}^{V} p^i_n(n)\Big),$$

where $p_a$ denotes the prediction for actions and $V$ the number of views. This approach first aggregates the verb and noun predictions over the views, before calculating the action predictions. The second approach calculates the action prediction for each view respectively, before aggregating them:

$$p_a(v, n) = \frac{1}{V}\sum_{i=1}^{V} p^i_v(v)\, p^i_n(n).$$
As can be seen from Table 2, aggregating the action scores for each view outperforms the other variant by around 1%. More importantly, this improvement carries over to the test set as well.
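The two aggregation strategies can be sketched in a few lines of NumPy; the function names are our own, but the arithmetic follows the two formulas above:

```python
import numpy as np

def action_scores_agg_first(verb_views, noun_views):
    """Variant 1: average verb/noun scores over views first, then take
    the outer product to score every (verb, noun) pair."""
    pv = verb_views.mean(axis=0)   # (Cv,)
    pn = noun_views.mean(axis=0)   # (Cn,)
    return np.outer(pv, pn)        # (Cv, Cn)

def action_scores_per_view(verb_views, noun_views):
    """Variant 2 (the dagger rows): form the action score per view,
    then average the per-view action scores."""
    per_view = np.einsum('iv,in->ivn', verb_views, noun_views)
    return per_view.mean(axis=0)   # (Cv, Cn)
```

The two variants coincide for a single view but differ in general, since the mean of products is not the product of means.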
Temporal sampling stride. Since the Epic-Kitchens dataset has a relatively high FPS, sampling frames with one frame as the interval (i.e., a temporal sampling rate of 2) can give insufficient temporal coverage: when sampling 32 frames, only one second is covered. Therefore, we also ablate the temporal sampling rate, with results presented in Table 5. As can be seen, a minor modification of the temporal sampling rate yields a notable improvement in performance. One reason may be the longer temporal coverage. Another possible reason is that the effective FPS generated by a sampling rate of 3 is closer to the pre-training FPS. Our final single-model ViViT-B/16x2-I outperforms the performance reported in [arnab2021vivit] by 3.4%.
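The effect of the stride on temporal coverage is easy to see from the sampled frame indices; this helper is a hypothetical illustration of the sampling scheme described above:

```python
def sample_frame_indices(num_frames=32, stride=3, start=0):
    """Frame indices fed to the model for a given temporal sampling
    stride. With 32 frames, stride 2 covers 63 consecutive source
    frames (roughly one second at Epic-Kitchens frame rates), while
    stride 3 covers 94, extending the temporal window by ~50%."""
    return [start + i * stride for i in range(num_frames)]
```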
2.3 Training video transformers with optical flow
In order to capture better motion features, optical flow is utilized as another data source. The video transformers taking optical flow as input are trained with the same training recipe as before. We trained two optical flow models, whose inputs are respectively optical flow extracted using RAFT [teed2020raft] and TV-L1 [zach2007tvl1]. The results are presented in
2.4 Other transformer based models
Another transformer-based video classification model that we use is TimeSformer [bertasius2021timesformer]. For TimeSformer, we directly use the open-sourced model pre-trained on K600, first keeping its default settings and training for 15 epochs, and then applying our training recipe for comparison. Our training recipe improves over the original one by 5% on the action prediction accuracy. Further increasing the input resolution gives an additional 3% improvement on the action prediction accuracy.
denotes frozen batch norm mean and variance. Res indicates the resolution of the input video to the model. Aug indicates the augmentation strategy besides random cropping and random flipping. A, V and N denote the action, verb and noun prediction accuracies respectively. A* denotes the action prediction accuracy on the test set. † indicates that the action prediction is calculated for each view first before aggregation. Blue font highlights the change in the respective experiment. Bold font in the performance columns indicates the best-performing model.
3 Training convolutional video networks
Although video vision transformers achieve strong performance, complementary predictions from convolutional networks are also valuable. As we will show in the following parts, convolutional networks such as CSN [tran2019csn] and SlowFast [feichtenhofer2019slowfast] are relatively stronger at predicting verbs.
We use ir-CSN-152 and SlowFast-16x8-101 as our base models. Similar to the training process of the ViViT model, we obtain the pre-trained weights by training these two models on Kinetics 700 [carreira2019k700]. For training on the EPIC-KITCHENS-100 dataset, we use the same training parameters as for ViViT, including the optimizer and the learning rate schedule. We follow [damen2020ek100] and freeze the batch norm mean and variance during training. The results can be seen in Table 6. As can be seen, freezing the batch norm mean and variance gives a clear improvement on the action recognition accuracy. Applying mixup, cutmix and random erasing yields further improvements on both the validation and the test set. However, different from the experimental results with ViViT, although increasing the input resolution indeed improves performance on the validation set, the accuracy on the test set is not improved. Therefore, we keep the training resolution at 224 for SlowFast-16x8-101 as well. It is interesting that the convolutional models can outperform most ViViT variants in terms of verb prediction even when the input resolution is only 224×224.
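Freezing the batch norm statistics means that, during fine-tuning, every batch is normalized with the stored running mean and variance rather than batch statistics, while the affine parameters may still be trained. A minimal NumPy sketch of this behavior (not the actual framework layer):

```python
import numpy as np

class FrozenBatchNorm:
    """Batch norm with frozen running statistics: the stored per-channel
    mean/variance are applied to every batch and never updated."""

    def __init__(self, running_mean, running_var, gamma, beta, eps=1e-5):
        self.mean, self.var = running_mean, running_var
        self.gamma, self.beta, self.eps = gamma, beta, eps

    def __call__(self, x):
        # x: (batch, channels); normalize with the frozen statistics.
        x_hat = (x - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * x_hat + self.beta
```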
In order to cover a longer period per video clip, we additionally employ long-term feature banks (LFB) [wu2019lfb] for the CSN models. For these experiments, we initialize the model with the Epic-Kitchens-trained ir-CSN-152, and further train it for 10 epochs with the same base learning rate as before and 2 warm-up epochs. The results are shown in Table 7. When using the features extracted by the original model that initializes the training, we observe an improvement for ir-CSN-152-C on the noun prediction. When using the ViViT features as the feature bank, the noun predictions are further improved, notably improving the final action prediction accuracy. In comparison, the verb accuracy is hardly affected.
4 Ensembling models
To utilize the complementary predictions of different models, we ensemble a selected subset of the presented models, listed in Table 8. The ensemble boosts the performance of the best single model by 4.3% on the action prediction. The final test accuracy that we obtained is 48.5% on action prediction, 69.2% on verb prediction and 60.3% on noun prediction.
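A simple late-fusion scheme consistent with this kind of ensembling is a weighted average of the per-model class scores; the function below is an illustrative sketch (uniform weights by default, the weighting itself being a design choice not specified in the text):

```python
import numpy as np

def ensemble(score_list, weights=None):
    """Weighted average of per-model class-score vectors."""
    scores = np.stack([np.asarray(s) for s in score_list])  # (models, classes)
    if weights is None:
        # Uniform weighting over the ensembled models.
        weights = np.full(len(score_list), 1.0 / len(score_list))
    return np.tensordot(weights, scores, axes=1)            # (classes,)
```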
5 Conclusion
This paper presents our solution for the EPIC-KITCHENS-100 action recognition challenge. We set out to train a stronger video vision transformer, and reinforce its performance by ensembling multiple video vision transformers as well as convolutional video recognition models.
Acknowledgement. This work is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project A18A2b0046), the National Natural Science Foundation of China under grant 61871435 and the Fundamental Research Funds for the Central Universities no. 2019kfyXKJC024 and by Alibaba Group through Alibaba Research Intern Program.