Annotation-Efficient Untrimmed Video Action Recognition
Deep learning has achieved great success in recognizing video actions, but collecting and annotating training data remain laborious for two main reasons: (1) a large amount of annotated data is required; (2) temporally annotating the location of each action is time-consuming. Existing works, such as few-shot learning and untrimmed video recognition, have been proposed to handle one aspect or the other, but very few can handle both simultaneously. In this paper, we target a new problem, Annotation-Efficient Video Recognition, which reduces the annotation requirements for both the number of labeled samples per class and the action locations. This problem poses three challenges: (1) action recognition from untrimmed videos, (2) weak supervision, and (3) novel classes with only a few training samples. To address the first two challenges, we propose a background pseudo-labeling method based on open-set detection. To tackle the third challenge, we propose a self-weighted classification mechanism and a contrastive learning method to separate the background and foreground of untrimmed videos. Extensive experiments on ActivityNet v1.2 and ActivityNet v1.3 verify the effectiveness of the proposed methods. Code will be released online.
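The contrastive separation of background and foreground mentioned above can be illustrated with a minimal NumPy sketch. This is a generic margin-based contrastive loss under assumed inputs (L2-normalized clip embeddings split into foreground and background sets), not the paper's actual formulation: it pulls foreground embeddings together and pushes them at least a margin away from background embeddings.

```python
import numpy as np

def contrastive_fg_bg_loss(fg, bg, margin=1.0):
    """Hypothetical margin-based contrastive loss for illustration only.

    fg, bg: (n, d) and (m, d) arrays of clip embeddings.
    Pull term: mean pairwise distance among foreground embeddings.
    Push term: hinge penalty when a foreground-background pair is
    closer than `margin`.
    """
    n = len(fg)
    # Pull foreground clips toward each other.
    pull = sum(np.linalg.norm(fg[i] - fg[j])
               for i in range(n) for j in range(i + 1, n))
    pull /= max(n * (n - 1) / 2, 1)
    # Push foreground clips away from background clips.
    push = np.mean([max(0.0, margin - np.linalg.norm(f - b))
                    for f in fg for b in bg])
    return pull + push
```

In this sketch, well-separated foreground and background embeddings incur only the small pull term, while overlapping embeddings are penalized by the hinge, which is the intended effect of separating action clips from background clips.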