Without temporal ordering, individual frames from a video clip of an ‘open jar’ action cannot be distinguished from frames of a ‘close jar’. Tampering with the temporal order, whether through shuffling or reversing the order of frames, has been frequently used to assess the utilisation of temporal signals in action recognition models [14, 29, 30]. Recent convolutional models [27, 30, 3, 25, 29] demonstrate increased robustness by explicitly modelling temporal relations in video. In a related problem, Arrow of Time (AoT) classification [23, 9, 28] (the task of determining whether a video is being played forwards or backwards) has been used for pretraining video understanding models.
In this work, we apply the time-reversal video transform to videos to produce new ones that a human observer cannot distinguish from forward-time videos. We validate the realism of these examples through a forced-choice human perception study. We observe that when time-reversed, reversible videos either maintain their label or undergo a label transformation (Fig. 1). We develop a technique for automatically extracting this label transform for each class from the predictions of a trained classification model. Next, we apply our findings to other video transforms: horizontal-flipping and the composition of time-reversal with horizontal-flipping. We then put label transforms to work in zero-shot learning and data augmentation. Our contributions are summarised as follows:
We introduce label-altering video transforms, and identify their corresponding label transforms from model predictions.
We evaluate our proposal on two datasets, demonstrating the efficacy of example synthesis for both zero-shot learning and data augmentation.
Our zero-shot learning results demonstrate novel opportunities for learning additional classes through video transforms. On Something-Something, we learn 16 zero-shot classes without a single example (out of 174 total classes), and report 46.6% accuracy compared to 49.5% with full supervision. On Jester, we learn 7 zero-shot classes (out of 27 total classes), and report 92.4% accuracy compared to 94.9% with full supervision.
2 Related work
In this section, we review work relevant to our proposal, covering: 1) the rise of temporally-sensitive video recognition models, 2) the use of time-reversal in video and 3) the use of video transforms for self-supervision. To the best of our knowledge, no prior work has investigated label-altering video transforms for the automatic synthesis of additional labelled training data.
Action recognition is the task of classifying the action demonstrated in a trimmed video segment. Classification in early video action recognition datasets [24, 18] has been shown to be solvable largely through visual appearance alone [14, 30]. These datasets have been supplanted by larger and more temporally challenging datasets [17, 11, 12, 21, 4] where this is no longer the case. This gave rise to papers questioning the ability of both convolutional and recurrent models to capture the temporal order or evolution of the action [14, 29, 30, 8, 13, 5]. For example, in , a C3D network trained on hallucinated motion and a single frame from the video is shown to perform comparably to one trained on the original video.
Accompanying this evolution has been an increased focus on proposing models that exploit temporal signals in video [26, 3, 27, 30, 25, 29, 6]. In , actions are modelled as state transformations, showing improved performance and better generality across actions. Zhou  introduce a dedicated layer that correlates the predictions of multiple temporally-ordered video segments, averaging over multiple temporal scales. The model's ability to exploit time is tested by shuffling frames in the video: they report no drop in performance on UCF101, but a clear degradation on Something-Something , showing the latter is more suitable for learning and evaluating temporal features.
Time-reversal in video. Time-reversing videos is used for Arrow of Time (AoT) classification [23, 9, 28]. First introduced in  and recently revisited in , AoT classification is successfully used in self-supervision for pre-training action recognition models. Of particular relevance to our work is the human perception study of time-reversed videos on Kinetics by Wei , showing humans achieve a 20% error-rate when classifying a video's AoT, demonstrating that subsets of the dataset contain videos that remain realistic when reversed.
Video transforms for self-supervision. Video transforms offer a form of self-supervision [2, 28, 16, 7, 20]. In , a video-jigsaw solving task is used for pre-training before fine-tuning for action recognition, and in , geometric rotation classification is used for pre-training. In all these works, video transforms are only used in a separate task from which knowledge is transferred to the target task. The only prior work that has used video transforms for what could be seen as zero-shot learning is . They utilise time-reversal to train a robot arm to put two blocks together by observing the blocks exploding apart.
3 Label-altering video transforms
In this section we introduce label-altering (video) transforms (LATs) and describe how their corresponding class transforms can be determined from predictions of trained models.
Introducing LATs. Given an oracle video labelling function y(·) and a dataset of videos v with ground-truth labels y(v), we aim to learn the parameters of a model using the videos and the supervision y. We define a video transform T as an operation that takes a video v and transforms it into another video T(v) that is a valid input to the trainable model. We restrict our study to video transforms that satisfy the self-inversion property T(T(v)) = v, and distinguish between two types: label-preserving video transforms (LPTs), and label-altering video transforms (LATs). In LPTs, the mapping between a video and its label remains intact, y(T(v)) = y(v); however LATs, the video transforms in which we are interested, result in a label change such that y(T(v)) ≠ y(v).
Of all possible LATs, we are interested in ones where applying the video transform to every example of a given class results in transformed labels belonging to the same class; we call these class homogeneous LATs: y(v) = y(v') implies y(T(v)) = y(T(v')) for all videos v, v'.
Without class homogeneity, new ground-truth labels for all transformed videos would be required. However, when class homogeneity holds, class transforms are sufficient to label all transformed videos. Accordingly, for a class homogeneous LAT, we aim to define the corresponding class transform T_c(c) for every class c where possible. Given T_c, we identify three categories of classes:
Invariant classes: classes whose examples maintain their label after transformation. The class transform for invariant classes can thus be defined as T_c(c) = c.
Equivariant classes: classes whose examples change label after transformation. We thus define T_c(c) = c' with c' ≠ c, referring to (c, c') as a pair of equivariant classes where c' is the counterpart of c and vice versa. Since we desire T_c to be equivariant to T, we restrict T_c to be self-invertible, in line with the self-invertible behaviour of T: T_c(T_c(c)) = c.
Novel-generating classes: classes whose transformed examples no longer belong to any of the dataset's classes. We revisit these classes later, using them for zero-shot learning.
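The three categories and the self-inversion constraint can be captured in a small sketch; the class names below are illustrative, not the datasets' exact labels:

```python
# A class transform represented as a plain mapping over class labels.
# "<novel>" marks novel-generating classes whose transformed examples
# fall outside the dataset's label set.
NOVEL = "<novel>"

time_reversal_tc = {
    "open jar": "close jar",                   # equivariant pair
    "close jar": "open jar",
    "shaking something": "shaking something",  # invariant class
    "throwing something": NOVEL,               # novel-generating class
}

def is_self_invertible(tc):
    """Check the involution property T_c(T_c(c)) = c for non-novel targets."""
    return all(tc[target] == source
               for source, target in tc.items()
               if target != NOVEL and target in tc)

def categorise(tc):
    """Split classes into invariant / equivariant / novel-generating sets."""
    invariant = {c for c, t in tc.items() if t == c}
    novel_generating = {c for c, t in tc.items() if t == NOVEL}
    equivariant = set(tc) - invariant - novel_generating
    return invariant, equivariant, novel_generating
```

Representing the transform as a mapping makes the self-invertibility requirement a property that can be checked mechanically, which is convenient when the mapping is discovered automatically rather than defined by hand.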
3.1 Discovering class transforms
In order to automatically determine the class transform, we propose a method based on the response of the trained model to all videos from the same dataset transformed by the video transform. We first measure the recall of each class under the trained model. If the recall exceeds a threshold (i.e. the model performs sufficiently well on that class), the model can be used to establish the class transform for that class, assuming minimal noise exists in the dataset labels. Conversely, if the recall falls below the threshold, the class transform cannot be established for that class from the model's predictions. We then calculate the proportion of videos of the class that are predicted as another class when the transform is applied,
and measure affinity between the two classes
We calculate a candidate target class for each class:
and introduce a novel target for the class. Finally, the approximated class transform is:
where a threshold parameter controls the trade-off between extracting invariant and equivariant transforms.
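The discovery procedure above can be sketched as follows. This is a simplified, one-directional version of the affinity measure, and the threshold names and default values are assumptions rather than the paper's exact choices:

```python
import numpy as np

def discover_class_transform(labels, preds, preds_transformed,
                             recall_threshold=0.5, affinity_threshold=0.5,
                             novel_label="<novel>"):
    """Estimate a class transform from a trained model's predictions.

    labels:            true class of each video
    preds:             model predictions on the original videos
    preds_transformed: model predictions on the transformed videos
    """
    labels = np.asarray(labels)
    preds = np.asarray(preds)
    preds_transformed = np.asarray(preds_transformed)
    class_transform = {}
    for c in np.unique(labels):
        mask = labels == c
        # Only trust classes the model recognises well in the first place.
        if np.mean(preds[mask] == c) < recall_threshold:
            continue
        # Distribution of predicted classes over the transformed videos of c.
        candidates, counts = np.unique(preds_transformed[mask],
                                       return_counts=True)
        proportions = counts / mask.sum()
        if proportions.max() < affinity_threshold:
            class_transform[c] = novel_label  # no existing class fits well
        else:
            # Invariant if the winner is c itself, equivariant otherwise.
            class_transform[c] = candidates[np.argmax(proportions)]
    return class_transform
```

Raising the affinity threshold pushes more classes towards the novel target, while lowering it favours extracting invariant and equivariant mappings, mirroring the trade-off described above.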
3.2 Applications of class transforms
Next, we describe how class homogeneous LATs with their class transforms can be used for data augmentation and zero-shot learning.
Data augmentation. LPTs have long been used for data augmentation and range from the simple, like adjusting the frame rate of a video, to the complex, like the learnt transformations used in adversarial training . We propose using LATs for augmenting both invariant and equivariant classes through target-conditional data augmentation
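A minimal sketch of target-conditional augmentation, assuming the class transform is supplied as a mapping that omits novel-generating classes (function and parameter names are illustrative):

```python
import random

def augment(video, label, transform, class_transform, p=0.5):
    """Target-conditional data augmentation.

    Apply the video transform with probability p and transform the label
    accordingly. Classes absent from `class_transform` (novel-generating
    ones) are never transformed, since their new label would fall outside
    the label set.
    """
    new_label = class_transform.get(label)
    if new_label is None or random.random() >= p:
        return video, label
    return transform(video), new_label
```

Applied with p = 0.5 over a training epoch, this tends to balance the support of each equivariant class pair, since each class contributes transformed examples to its counterpart.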
Zero-shot learning. The novel-generating (NG) classes facilitate zero-shot learning by synthesising examples of a novel class as follows:
The model is trained with synthesised examples of the zero-shot class and tested on real examples.
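The synthesis step can be sketched as below (function and variable names are illustrative, not the authors' implementation):

```python
def synthesise_zero_shot(dataset, transform, class_transform, label_set):
    """Create labelled examples for novel classes.

    Transform every video whose class-transform target lies outside the
    dataset's label set, and label the result with that novel target.
    dataset: iterable of (video, label) pairs.
    """
    synthesised = []
    for video, label in dataset:
        target = class_transform.get(label)
        if target is not None and target not in label_set:
            synthesised.append((transform(video), target))
    return synthesised
```

The synthesised pairs are then mixed into training as ordinary labelled examples, while evaluation uses real videos of the novel class.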
3.3 LAT examples
We apply the generalisation above to two LATs, as well as their composition (Fig. 2):
Transform 1: horizontal-flipping. While in some video datasets horizontal-flipping is a LPT, it is a LAT when the dataset includes classes with a defining uni-directional horizontal movement, e.g. ‘swipe right’ or ‘rotate clockwise’.
Transform 2: time-reversal. Unlike horizontal-flipping, time-reversal is a relatively new transform in the community. Whilst many classes in action datasets are irreversible, we show that a subset of classes maintain realism under time-reversal, an observation that has received little attention. For example, time-reversing an action such as ‘cover’ reverses the state change to produce an ‘uncover’ action. We note that many classes invariant under horizontal-flipping become equivariant under time-reversal (Fig. 2). A number of classes cannot be mapped to semantically meaningful classes after the transform as a result of the irreversibility of their examples.
What makes a video irreversible? We find the realism of reversed videos to be betrayed by reversal artefacts, aspects of the scene that would not be possible in a natural world. Some artefacts are subtle, while others are easy to spot, like a reversed ‘throw’ action where the thrown object spontaneously rises from the floor. We observe two types of reversal artefacts: physical, those exhibiting violations of the laws of nature, and improbable, those depicting a possible but unlikely scenario. These are not exclusive, and many reversed actions suffer both types of artefacts, like when uncrumpling a piece of paper. Examples of physical artefacts include: inverted gravity (e.g. ‘dropping something’), spontaneous impulses on objects (e.g. ‘spinning a pen’), and irreversible state changes (e.g. ‘burning a candle’). An example of an improbable artefact: taking a plate from the cupboard, drying it, and placing it on the drying rack.
Transform 3: horizontal flipping + time reversal. We also explore the composition of the two transforms above. This not only offers new opportunities for data augmentation and zero-shot learning, but also removes some of the biases from the dataset or model. For example, we note that motion blur affects zero-shot learning when using time-reversal. Combining both transforms removes the model’s bias. Similarly, when a dataset is biased (e.g. more right-handed than left-handed people in our datasets), this composition assists in balancing the dataset.
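Assuming a (T, H, W, C) video tensor, the three transforms reduce to axis reversals; both base transforms are self-inverse, as required by the self-inversion property:

```python
import numpy as np

def time_reverse(video):
    """Reverse the frame order; `video` is a (T, H, W, C) array."""
    return video[::-1]

def horizontal_flip(video):
    """Flip each frame left-right along the width axis."""
    return video[:, :, ::-1]

def time_reverse_hflip(video):
    """Composition of the two transforms (the order does not matter)."""
    return horizontal_flip(time_reverse(video))
```

Because the two transforms act on different axes, they commute, so the composition is also self-inverse and yields a third valid LAT.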
4 Datasets and perception study
To showcase how LATs can be utilised for action recognition, we use two large-scale crowd-sourced datasets. Jester  is a gesture-recognition dataset with 148k videos, split into 119k/15k/15k for training/validation/testing, with 27 classes (e.g. ‘sliding two fingers down’, ‘thumb up’). Something-Something (v2)  is an object interaction dataset containing 221k videos, split into 169k/25k/27k for training/validation/testing, with 174 classes (e.g. ‘taking something out of something’, ‘tearing something a little bit’).
Class transforms. We manually define a class transform for each LAT; this is used as ground truth for both the assessment of the automated discovery of , and in evaluating its applications. We obtain this through inspection of class semantics followed by visual verification. For horizontal-flipping, we map pairs of classes with defining horizontal motions (e.g. ‘left to right’) to one another and map other classes to themselves. For time-reversal, we consider what motions and state changes are reversed and how these interact across classes, then examine reversed examples checking for reversal artefacts that prevent otherwise reasonable mappings from being defined.
Table 1 shows the number of classes within each category for the ground truth class transforms. As the table shows, time-reversal results in more equivariant classes than horizontal-flipping. We find 5 and 28 novel-generating reversible classes in Jester and Something-Something respectively, where the transformed label is not part of the label set (e.g. ‘putting S underneath S’ has no counterpart ‘taking S from underneath S’, S = something).
Arrow of Time perception study. Before attempting to use time-reversal as a video transform in our applications, we crowd-sourced a human perception study to confirm the similarity between forward-time and reversed-time examples of our reversible classes. In Table 1, we highlight (in blue) the 22 classes from Jester and 66 classes from Something-Something that we deemed time-reversible and on which we conducted this study.
Participants were asked to select the better example of two videos in a forced-choice setup (UI shown in Fig. 3). They were not given any further instructions of what makes a video a better/worse example of the class, and were not informed that one video was time-reversed. In each pair, one of the videos was randomly sampled from the training set for that class, while the other was a reversed video sampled from the training set of the label-transformed class. We randomised the left-right placement of videos.
We used Amazon Mechanical Turk (AMT) for the study, testing 20 video pairs in each task. In a subset of video pairs, the reversed video was replaced with a forward-time video from an unrelated class as a way to filter out low-quality annotations: we used 3 such pairs in Jester and 5 in Something-Something, only accepting submissions that correctly chose 3/3 and 3/5 of these examples respectively. The bar was set lower for Something-Something due to overlapping classes and occasional low video quality. In total, 257 individuals annotated 200 videos per class in Jester and 120 videos per class in Something-Something, amounting to 5.8% and 10.4% of videos in the reversible class subsets. To determine which classes are reversible, we model the results for each class as a binomial distribution, approximated by a normal distribution. We consider classes reversible if their forward-time preference lies within the resulting confidence bounds. We present the results of this study in Fig. 4, showing all classes in Jester are within bounds, and only 2 are outside for Something-Something (both invariant). The class with the largest preference for forward-time is ‘pretending to throw something’, which exhibits asymmetric impulses that participants seem to detect when time-reversed.
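The reversibility test can be sketched with a normal approximation to the binomial null of no preference between forward and reversed videos. The exact coverage level used in the study is not stated here, so z = 2 (roughly 95%) is an assumption:

```python
import math

def forward_time_preference_bounds(n_trials, z=2.0):
    """Acceptance interval for the forward-time preference rate.

    Under the null hypothesis of no preference, the number of forward-time
    choices is Binomial(n, 0.5), approximated by a normal distribution with
    mean 0.5 and standard deviation sqrt(0.25 / n).
    """
    p = 0.5
    std = math.sqrt(p * (1 - p) / n_trials)
    return p - z * std, p + z * std

def is_reversible(n_forward_preferred, n_trials, z=2.0):
    """A class is deemed reversible if its preference rate is within bounds."""
    lo, hi = forward_time_preference_bounds(n_trials, z)
    return lo <= n_forward_preferred / n_trials <= hi
```

With 200 annotated pairs per class, the interval is roughly [0.43, 0.57], so only a clear preference for forward-time videos marks a class as irreversible.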
Having confirmed that reversed-time examples were sufficiently similar to forward-time ones in our chosen classes, we move on to using these time-reversed examples in zero-shot learning and data augmentation.
5 Experiments and results
Following a description of implementation details, we examine the behaviour of the network when exposed to transformed videos, and evaluate our method to automate class transforms (Section 5.1). We then present experiments using LATs for zero-shot learning (Section 5.2) and data augmentation (Section 5.3).
Implementation details. We employ a Temporal Relational Network (TRN)  with a batch-normalised Inception (BNInception) backbone  trained on RGB video, due to its temporal sensitivity, computational efficiency through sparse sampling, and high performance on benchmark datasets (including those we test on). In TRNs, the input video is split into segments, from each of which a frame is randomly sampled. Segment features, extracted by the backbone network, are combined by an MLP to compute temporal relations, followed by class predictions.
We first replicated the validation set results reported by the authors  and assessed the effect of the multi-scale model variant and the number of segments before settling on an 8-segment single-scale TRN for our experiments. We restrict our model evaluations to single centre-crops to avoid unintended label transformations introduced by horizontal-flipping. In all experiments, we train our networks for 100 epochs with an initial learning rate divided by 10 at epochs 40 and 80. We use a batch size of 80 for Jester and 128 for Something-Something, training on 4 GPUs. All other parameters follow the default values from the TRN GitHub codebase. (Footnote 1, data loading issue: we re-implemented the data loading code but used a different data layout (CTHW) to the original codebase (TCHW); we actually found this improves results by 4.5%, so opted to keep the change. Details on the impact on what the backbone network is fed are given in Appendix A.) We report all our results on the validation sets of both datasets.
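The sparse segment sampling used by TRN can be sketched as follows; this is a simplified version of the scheme described above, not the authors' exact implementation:

```python
import random

def sparse_sample(num_frames, num_segments=8):
    """TRN-style sparse sampling.

    Split the video into equal-length segments and randomly pick one frame
    index from each, so the sampled frames cover the whole video while only
    num_segments frames are processed.
    """
    seg_len = num_frames / num_segments
    return [int(i * seg_len) + random.randrange(max(1, int(seg_len)))
            for i in range(num_segments)]
```

Sampling one frame per segment keeps the computational cost constant regardless of video length, which is the efficiency argument given above for choosing TRN.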
5.1 Discovering class transforms
This first experiment assesses how a model trained on forward-time videos responds to video transforms, with and without label transformation. We show the confusion matrices for each LAT in Fig. 5. For each dataset, we show the baseline performance and the performance after horizontal-flipping or time-reversal without (red) and with (green) label transformation (using the manually defined ground truth label transform). For easier viewing, we re-order classes so that equivariant class pairs are adjacent. These figures show that equivariant classes are misclassified into their counterparts without the application of label transforms, and that, when employed, label transforms resolve this misclassification whilst maintaining the correct classification of invariant classes.
One case worthy of note relates to the confusion between ‘turning hand clockwise’ and ‘turning hand counterclockwise’ in Jester. Horizontal flipping with LT increases the confusion, which we believe is a result of a population bias towards right-handed people; in a right-handed clockwise hand turn, the back of the hand is shown first then the front, whereas the order is reversed for a left-handed person.
Having shown that the base model's response to video transforms matches the manually defined ground truth label transform, we evaluate our method for automatically extracting the class transform through the process described in Section 3.1. In Table 2, we report true/false positives/negatives for the threshold values that maximise the true positive count when treating the extraction of a mapping as a binary classification task. Note that the optimal thresholds seem to be independent of the transform, and differ only with the dataset/model. Most class transforms are correctly estimated in Jester for both horizontal-flipping and time-reversal. For Something-Something, we attribute the larger number of false positives to the model's lower performance (49/78% top-1/5 accuracy) and to overlapping classes in the dataset. Frequently, the established class transforms were reasonable: for example, ‘Moving S away from S’ → ‘Putting S next to S’ is a logical mapping, compared to the equally logical ground truth ‘Moving S away from S’ → ‘Moving S closer to S’.
We investigated the use of NLP for semantically renaming a class into its time-reversed counterpart via antonyms; however, we found existing lexical databases lacking. WordNet  does contain antonym relations, but these are quite sparse and are missing for common words like ‘put’, ‘take’, and ‘remove’. Additionally, the antonym relations that are present are general and do not always embody the time-reversed class, e.g. ‘move’ has the antonym ‘stay’.
In the following sections (Sections 5.2 and 5.3), we report results using the manual ground-truth rather than the discovered ones, avoiding propagating errors into the data augmentation and zero-shot evaluation. Finally, in Section 5.4, we test data augmentation and zero-shot learning using the automatically discovered class transforms.
5.2 Zero-shot learning
The novel-generating classes are ideally suited for zero-shot learning, extending the model’s recognition abilities to previously-unseen classes. However, without a test set that includes examples of zero-shot classes, the model cannot be evaluated. We instead construct four train/test subsets to evaluate our approach. We turn pairs of equivariant classes into pairs of novel-generating many-shot and zero-shot classes. For each equivariant class pair, we retain the class with the highest training support as the novel-generating many-shot class and remove all examples of its counterpart, which then becomes a zero-shot class. The number of zero-shot classes and corresponding instances synthesised within those classes are listed in Table 3.
For each of the four sub-datasets (Jester-HF, Jester-TR, SS-HF, SS-TR), we compare chance (no supervision) as a lower bound to full supervision on all classes as an upper bound. We present our results in Table 4. Note that the number of zero-shot and many-shot classes differs per horizontal block. The results show that training for these zero-shot classes does not affect the performance of the many-shot classes compared to their full supervision performance. Over the four subsets, we report an overall drop in top-1 accuracy compared to full supervision of 1.7%, 2.9%, 0.1% and 3.4%, when dropping all training examples of 11%, 26%, 2% and 9% of classes, respectively.
Figure 6 shows the confusion matrices for the pairs of many-shot and zero-shot classes in each subset. For Jester-HF and Jester-TR, we see good performance, but with the same confusion between classes ‘turning hand clockwise’ and ‘turning hand counterclockwise’ as in the base model. However, all other classes can be learnt in the zero-shot setting using horizontal flipping. For SS-HF, zero-shot classes are distinguishable from their many-shot counterparts. In SS-TR, the camera movement zero-shot classes: ‘turning the camera upwards/right’ have been confused with their many-shot class counterparts: ‘downwards/left’. This suggests that the model may be using motion blur in individual frames to classify the action as the model has never seen upwards/right motion blur effects. Overall these confusion matrices show that in the majority of cases, the use of LATs has resulted in impressive performance on zero-shot classes.
In Fig. 7, we show qualitative results on six examples from Something-Something. Top Row: A zero-shot model trained only on left-to-right examples can correctly classify zero-shot right-to-left actions. The final example shows a case where, although both models incorrectly predict the ground-truth class, their predictions are both reasonable. The zero-shot model has a greater difference between the top-2 scores indicating increased discriminative ability in the model. Bottom Row: The time-reversal zero-shot model has been able to learn state inversions like ‘close’ (first) and ‘uncover’ (second) from time-reversed examples of ‘open’ and ‘cover’.
5.3 Data augmentation
We train a model using data augmentation as described in Section 3.2. For each video, the transform is applied with a probability of 0.5 along with the corresponding label transform. This approach balances class support within each equivariant class pair. In TR+HF, we stack the randomly applied transforms to produce a mixture of videos with time-reversal, horizontal-flipping, or their composition. The results are presented in Table 6. Additionally, we include the results of augmenting with horizontal-flipping but without label transformation, as this is a default, yet incorrect, data augmentation technique implemented in TRN and similar video recognition networks. This shows a clear drop (highlighted in Table 6) for equivariant classes.
On Jester, we find the best two configurations to be horizontal-flipping with label transformation and horizontal-flipping of invariant classes only. Horizontal-flipping with label-transformation improves performance on invariant classes by reducing confusion with equivariant classes. Time-reversal with label transformation slightly improves performance on equivariant classes.
On Something-Something, we find the combination of time-reversal and horizontal-flipping improves top-1/5 accuracy by 0.8/1.0%, performing comparably to horizontal-flipping with label transformation alone. Notably, without label transformation, horizontal-flipping results in a model that underperforms the one trained without augmentation, whereas with label transformation the model outperforms the unaugmented model by 0.8%. Note that we used all training examples in addition to the transformed ones in this experiment. Data augmentation for few-shot learning (i.e. using a subset of the training videos) is left for future work.
5.4 Using discovered class transforms
Up to this point, we have used manually defined class transforms to report results. This allowed us to evaluate LATs separately from the discovery of their class transforms. In Table 5, we report results for our whole pipeline on Something-Something, for both TR and HF + TR, using discovered class transforms. The performance is comparable (with a small drop of 0.14-1.32%, shown in red) to zero-shot learning in Table 4 and data augmentation in Table 6.
6 Conclusion
In this paper, we introduced the notion of label-altering video transforms and their corresponding label transforms. We showed that example synthesis can be used for zero-shot learning and data augmentation, with evaluations on two datasets: Something-Something and Jester. Future directions involve investigating other label-altering video transforms, such as video trimming or looping, and exploring additional applications of these transforms, e.g. in few-shot learning. We also aim to investigate learning optimal video transforms to achieve a particular class transform.
References
[1] (2017) The 20BN-Jester dataset. https://20bn.com/datasets/jester, accessed 2019-02-17.
[2] Video jigsaw: unsupervised learning of spatiotemporal context for video action recognition. In IEEE Winter Conference on Applications of Computer Vision (WACV).
[3] Quo vadis, action recognition? A new model and the Kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] (2018) Scaling egocentric vision: the EPIC-KITCHENS dataset. In European Conference on Computer Vision (ECCV).
[5] Temporal reasoning in videos using convolutional gated recurrent units. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
[6] (2018) SlowFast networks for video recognition. arXiv:1812.03982.
[7] Self-supervised video representation learning with odd-one-out networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] (2015) Modeling video evolution for action recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] (2018) Video time: properties, encoders and evaluation. In British Machine Vision Conference (BMVC), pp. 160.
[10] (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR).
[11] (2017) The "Something Something" video database for learning and evaluating visual common sense. In IEEE International Conference on Computer Vision (ICCV).
[12] (2018) AVA: a video dataset of spatio-temporally localized atomic visual actions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Action completion: a temporal model for moment detection. In British Machine Vision Conference (BMVC).
[14] (2018) What makes a video a video: analyzing temporal information in video understanding models and datasets. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML).
[16] (2018) Self-supervised spatiotemporal feature learning via video rotation prediction. arXiv:1811.11387.
[17] (2017) The Kinetics human action video dataset. arXiv:1705.06950.
[18] (2011) HMDB: a large video database for human motion recognition. In IEEE International Conference on Computer Vision (ICCV).
[19] (1995) WordNet: a lexical database for English. Communications of the ACM 38(11), pp. 39-41.
[20] (2016) Shuffle and learn: unsupervised learning using temporal order verification. In European Conference on Computer Vision (ECCV).
[21] (2019) Moments in Time dataset: one million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
[22] (2018) Time reversal as self-supervision. arXiv:1810.01128.
[23] (2014) Seeing the arrow of time. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv:1212.0402.
[25] (2018) A closer look at spatiotemporal convolutions for action recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] (2016) Actions ~ transformations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Non-local neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[28] (2018) Learning and using the arrow of time. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] (2018) Rethinking spatiotemporal feature learning: speed-accuracy trade-offs in video classification. In European Conference on Computer Vision (ECCV).
[30] (2018) Temporal relational reasoning in videos. In European Conference on Computer Vision (ECCV).
Appendix A Data loading issue
After the camera-ready submission of our ICCVW paper, we found a set of mismatched assumptions in our codebase: the data loading code loaded videos with CTHW (channels, time, height, width) layout tensors, but the TRN implementation expected TCHW layout tensors. The TRN implementation uses a reshape operation on its input to squash the time dimension into the batch dimension for propagating all frames in the batch through the 2D CNN backbone. The effects of this mismatch between the expected and actual data-layout on the input to the 2D CNN backbone are visualised in Fig. 8.
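A minimal numpy illustration of the mismatch (shapes chosen arbitrarily): reshaping a CTHW tensor as if it were TCHW regroups the data so that each pseudo-frame stacks one colour channel across consecutive time steps:

```python
import numpy as np

T, C, H, W = 8, 3, 2, 2
video_tchw = np.arange(T * C * H * W).reshape(T, C, H, W)

# The TRN backbone consumes the input as T separate (C, H, W) images;
# with the correct layout, each image is one true frame:
frame0_ok = video_tchw[0]

# With a CTHW loader, interpreting the same buffer as TCHW regroups it:
video_cthw = np.transpose(video_tchw, (1, 0, 2, 3)).copy()   # (C, T, H, W)
pseudo_frames = video_cthw.reshape(T, C, H, W)
# pseudo_frames[0] now holds channel 0 at t = 0, 1, 2: a single colour
# channel across three consecutive time steps, so each "image" fed to the
# 2D backbone carries short-range motion information.
```

This regrouping is consistent with the explanation posited below: each input to the 2D CNN spans several time steps, letting a purely spatial backbone see temporal structure.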
To quantify the impact of this error, we re-ran a set of experiments, focusing on those evaluating the use of time-reversal on Something-Something; the results for zero-shot learning and data augmentation are presented in Table 7 and Table 8. Surprisingly, we found that feeding data in the incorrect CTHW format led to improved performance across all our experiments, including standard experiments that do not employ any video transforms. The CTHW layout improved overall recognition results on the Something-Something validation set from 44.95% to 49.45% (a 4.5% improvement). We posit this is because the CTHW layout leaks temporal information into the input of the 2D CNN backbone, allowing it to exploit temporal signals, unlike with the TCHW format.
This issue has not materially impacted the conclusions of the paper, and since we achieve improved performance with the CTHW data layout, we have opted to keep the current set of experimental results.