Semantic understanding of sequential visual input is vital for autonomous robots performing localization and object detection. Moreover, robust models have to adapt to scenes containing multiple interacting objects. For instance, self-driving cars need to classify scenes that contain small and large vehicles for loop closure. Similarly, robots have to infer the classes of objects in order to grasp them, even though the objects may be rotated in different ways. Although videos contain richer information than individual images, how to effectively harness the temporal dependency in sequential visual input remains an open problem. In particular, this paper investigates video classification in the presence of moving objects with rotation and scale changes. Preliminary results on digit gesture classification demonstrate the extensibility of the proposed model to handling articulated objects such as hands.
Given a training dataset that contains $N$ videos $\{(V_i, y_i)\}_{i=1}^{N}$, where $V_i$ is the $i$-th video, $w \times h$ is the size of each frame, $T$ is the number of frames, and $y_i$ is the class label, we aim to train a classifier $f$ for unseen data $V$ such that $f(V) = y$. For instance, in the context of action recognition, each video is labeled with a class label such as walking or running.
Initial work approaches this task using CNNs, but the majority of recent works rely more on RNNs, which are adept at processing sequential data. While RNNs are good at exploiting contextual information from the past, CNNs are useful for extracting hierarchical features from video frames. Motivated by the advantages of both types of networks, this paper proposes a spatiotemporal model in which a CNN and an RNN are cascaded, as shown in Fig. 1.
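To make this cascade concrete, the following minimal NumPy sketch uses a single linear map as a stand-in for the per-frame CNN and a simple recurrence as a stand-in for the LSTM; all shapes and weights here are illustrative, not the paper's actual configuration:

```python
import numpy as np

def frame_features(frame, W):
    # Stand-in per-frame feature extractor (a single linear map here);
    # the paper uses a LeNet-style CNN at this stage.
    return np.tanh(frame.flatten() @ W)

def classify_video(video, W_feat, W_rec, W_out):
    # Process each temporal slice independently with shared weights,
    # then accumulate temporal context with a simple recurrence.
    h = np.zeros(W_rec.shape[0])
    for frame in video:
        x = frame_features(frame, W_feat)
        h = np.tanh(x + W_rec @ h)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                  # softmax class probabilities

rng = np.random.default_rng(0)
video = rng.random((5, 8, 8))           # 5 frames of 8x8 pixels (hypothetical)
W_feat = rng.standard_normal((64, 16)) * 0.1
W_rec = rng.standard_normal((16, 16)) * 0.1
W_out = rng.standard_normal((10, 16)) * 0.1
probs = classify_video(video, W_feat, W_rec, W_out)
```

The key property mirrored here is weight sharing across time: every frame passes through the same feature extractor, while only the recurrent state carries temporal context.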
Despite the impressive performance of deep neural networks, previous models have fixed network depth and convolutional kernel size, which may lead to brittle performance when dealing with large and rapid variations in the scale and rotation of objects during high speed robot navigation. In the proposed spatiotemporal model, an attention module is introduced into the network architecture. This is the key idea that increases the robustness of the model to geometric transformations such as object rotation and scale changes commonly seen in the real world. Previously, networks with visual attention were developed to limit the amount of information that needs to be processed and to increase robustness against cluttered backgrounds. However, these attention modules have only been used for single-image recognition and classification tasks. This work applies the attention mechanism to video classification.
Attention mechanisms are generally categorized into hard and soft attention. This work focuses on the latter, since the former is stochastic and non-differentiable. The soft attention modules considered in the proposed spatiotemporal model are Spatial Transformer Networks (STN) and Deformable Convolutional Networks (DCN). STN performs an affine transformation on the feature map globally, whereas DCN samples the feature map locally and densely via learnt offsets that deform the convolutional kernel. LeNet, which has a simple structure and no attention module, is used as the baseline for comparison.
To summarize, the novelty of this paper is a spatiotemporal model with visual attention tailored for video classification, which is robust to multiple objects with rotation and scale changes.
II Spatiotemporal model with visual attention
II-A Feature extraction
Given a sequence of images, hierarchical features are extracted using a model similar to LeNet. The lower layers of LeNet comprise two convolution and max-pooling blocks. Its upper layers contain a fully-connected layer and a softmax layer; the softmax layer is used for classification during pre-training but removed for feature extraction. Although the model is simple, it still captures the essence of deeper CNNs. The trainable parameters are the weights and bias terms in the convolutional and fully-connected layers shown in Fig. 1. To deal with geometric transformations, the proposed model uses an attention module prior to the first convolution layer instead of max-pooling layers; that is, the max-pooling layers are removed and the remaining CNN layers are augmented with attention modules. There are two variants of the structure, one with STN and the other with DCN.
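As an illustration of the LeNet-style blocks, the following NumPy sketch implements a single conv-ReLU-max-pool stage; the single channel, the averaging kernel, and the input values are hypothetical:

```python
import numpy as np

def conv2d(x, k):
    # Valid 2-D correlation with a single kernel (bias omitted);
    # stands in for one of LeNet's convolutional layers.
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def maxpool2x2(x):
    # Non-overlapping 2x2 max pooling, as in the LeNet-style blocks.
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.arange(64, dtype=float).reshape(8, 8)   # hypothetical 8x8 input
k = np.ones((3, 3)) / 9.0                      # hypothetical averaging kernel
feat = maxpool2x2(np.maximum(conv2d(x, k), 0.0))   # conv -> ReLU -> pool
```

Stacking two such blocks and a fully-connected layer gives the LeNet-style extractor described above.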
STN consists of three components: a localization network, a grid generator, and a bilinear sampler, as depicted in Fig. 2. The localization network consists of 2 max-pooling layers, 2 convolution layers, and 2 fully-connected layers. It takes the image $U$ as input and learns the affine transformation that should be applied to the image, i.e. $\theta = f_{loc}(U)$, where $\theta$ represents the transformation parameters. The grid generator produces coordinates $\mathcal{T}_\theta(G)$ based on $\theta$ and a regular grid $G$, which are the sampling locations in the input image used to produce the output feature map. The bilinear sampler takes the input image $U$ and the sampling grid $\mathcal{T}_\theta(G)$ to generate the output feature map $V$ using bilinear interpolation. Because of the affine transformation, STN is trained to focus on the digits via cropping, translation, and isotropic scaling of the input image. Different output sizes of STN were tested, but the choice does not affect the classification accuracy significantly. Since additional layers are introduced in the localization network, there are more trainable parameters in the proposed model, as described in Fig. 1.
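The grid generator and bilinear sampler can be sketched in NumPy as follows (single channel, normalized coordinates in [-1, 1] as in the STN formulation; the example theta, which zooms into the image center, is purely illustrative):

```python
import numpy as np

def affine_grid(theta, H, W):
    # Grid generator: map a regular output grid G through the 2x3 affine
    # matrix theta to get sampling coordinates in the input image.
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    return (theta @ grid).reshape(2, H, W)   # source (x, y) per output pixel

def bilinear_sample(img, grid):
    # Bilinear sampler: read the input at (possibly fractional) grid
    # coordinates, converting from [-1, 1] to pixel indices.
    H, W = img.shape
    x = (grid[0] + 1) * (W - 1) / 2
    y = (grid[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    dx, dy = x - x0, y - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

img = np.arange(16, dtype=float).reshape(4, 4)
theta = np.array([[0.5, 0.0, 0.0],      # isotropic zoom-in by 2x,
                  [0.0, 0.5, 0.0]])     # centered (cropping-style attention)
out = bilinear_sample(img, affine_grid(theta, 4, 4))
```

With the identity transformation, this sampler reproduces the input exactly; a learnt theta instead crops, translates, and scales toward the region of interest.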
DCN abandons the parametric transformation adopted by STN and its variants. As shown in Fig. 2, the spatial sampling locations in the convolution kernels are augmented with additional offsets learnt from the classification task. For example, a 3x3 kernel with dilation 1 in a convolutional layer samples over the regular grid $\mathcal{R} = \{(-1,-1), (-1,0), \ldots, (1,1)\}$ on the input feature map $x$. The feature at location $p_0$ in the output feature map $y$ is obtained by Eq. (1a). The grid is augmented by offsets $\{\Delta p_n\}$ in deformable convolution as in Eq. (1b), where the fractional offsets are realized by bilinear interpolation:

$y(p_0) = \sum_{p_n \in \mathcal{R}} w(p_n) \cdot x(p_0 + p_n)$ (1a)

$y(p_0) = \sum_{p_n \in \mathcal{R}} w(p_n) \cdot x(p_0 + p_n + \Delta p_n)$ (1b)

The offsets $\{\Delta p_n\}$ are learnt by applying a convolutional layer over the same input feature map $x$; this layer has the same kernel size as the deformable layer and the same spatial resolution as $x$. The resulting offset field has $2N$ channels, encoding $N$ 2-D vectors $\{\Delta p_n\}$ with $N = |\mathcal{R}|$, denoted as offset2D layers in Fig. 1. The trainable parameters include the offset-generating kernels as well as the convolutional kernels.
II-B Temporal processing
The aforementioned CNN layers are connected to a Long Short-Term Memory (LSTM) network with a softmax layer to classify each frame, as depicted in Fig. 1. Including the LSTM keeps the network size manageable, because a much deeper CNN would be required to replace the RNN for video classification. Note that the dimension of the output space of the LSTM is only 8 and there is no dropout, as the focus is on comparing the CNN layers. The CNN processes every temporal slice of the video independently with its weights fixed. In contrast, the LSTM has memory blocks that learn long-term dependencies in the video, since its output depends on the previous hidden state.
As shown in Eq. (2), the LSTM contains a cell state $c_t$, which is controlled by an input gate $i_t$, a forget gate $f_t$, and an output gate $o_t$. The forget gate $f_t$ decides which information to discard, $i_t$ determines what to store in $c_t$, while $o_t$ generates the output based on the input $x_t$ as well as the previous hidden state $h_{t-1}$. The weight matrices and bias vectors are the trainable parameters.
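The gate computations referred to as Eq. (2) follow the standard LSTM formulation of Hochreiter and Schmidhuber; a reconstruction in the usual notation ($\sigma$ the logistic sigmoid, $\odot$ the element-wise product) is:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

Here $W$, $U$, and $b$ are the trainable weight matrices and bias vectors of the respective gates.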
III Dataset creation
A practical challenge for training CNNs and RNNs for video classification with visual attention is the lack of datasets that exhibit sufficient variation in rotation and scale, because existing video classification benchmarks such as UCF101 and Sports-1M are collected from daily activities. To address this issue, this work proposes a synthetic dataset based on Moving MNIST, with augmentation that includes rotation and scaling. As a side benefit of the augmentation, this dataset is also resistant to overfitting.
In our dataset, videos that contain two MNIST digits are created. Two samples $(I_1, y_1)$ and $(I_2, y_2)$ are randomly chosen from the MNIST training set, where $I_1, I_2$ are the images and $y_1, y_2$ are the class labels. The digits are placed at two random locations on a larger canvas. They are assigned velocities whose magnitudes lie in a fixed range and whose directions are chosen at random. The digits bounce off the canvas boundary and may overlap. Furthermore, each canvas is augmented with controllable variations of rotation and scaling to simulate a scene that contains multiple interacting objects, i.e. random rotation and scaling are applied to each frame. Several sequences of frames are used to produce each video, and the labels of the two digits are concatenated to form the new class label (e.g. digits 3 and 5 yield class 35). Sample images from the dataset are displayed in Fig. 3. Essentially, video classification in this context amounts to recognizing the digits in the presence of transformation and occlusion.
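A minimal generator sketch in NumPy may make the construction concrete; uniform patches stand in for MNIST digits, np.rot90 crudely stands in for arbitrary rotation, scaling is omitted for brevity, and the canvas size and speeds are illustrative:

```python
import numpy as np

def make_sequence(digit_a, digit_b, canvas=64, frames=5, seed=0):
    # Sketch of the augmented Moving MNIST generator: two 28x28 patches
    # move with random velocity, bounce off the boundary, may overlap,
    # and each frame receives a (crude, axis-aligned) rotation change.
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, canvas - 28, size=(2, 2)).astype(float)
    vel = rng.choice([-3.0, 3.0], size=(2, 2))
    video = np.zeros((frames, canvas, canvas))
    for t in range(frames):
        for d, digit in enumerate((digit_a, digit_b)):
            aug = np.rot90(digit, k=rng.integers(0, 4))  # rotation stand-in
            y, x = pos[d].astype(int)
            h, w = aug.shape
            video[t, y:y+h, x:x+w] = np.maximum(         # digits may overlap
                video[t, y:y+h, x:x+w], aug)
            pos[d] += vel[d]
            for ax in range(2):                          # bounce off walls
                if not 0 <= pos[d, ax] <= canvas - 28:
                    vel[d, ax] *= -1
                    pos[d, ax] += 2 * vel[d, ax]
    return video

a = np.ones((28, 28))
b = np.ones((28, 28)) * 0.5
vid = make_sequence(a, b)
```

The paper's actual generator applies continuous rotation and scaling to real MNIST digits; the structure (random placement, random velocity, bouncing, overlap) is the same.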
IV-A Training details
The models are implemented using Keras. During pre-processing, the images are centered and normalized to a fixed range. The models are pre-trained on the Scaled MNIST dataset, i.e. the MNIST dataset augmented with translation and scaling. The maximum random horizontal and vertical shifts, as fractions of the total width and height, are 0.2, and random zoom is applied within a fixed range. The kernel weight matrices are initialized via the Xavier uniform initializer, and the bias terms are initialized to 0. The objective function is the cross entropy loss, and training uses the Adadelta optimizer with a learning rate of 1. The resulting pre-training cross entropy loss and accuracy are measured for LeNet, STN, and DCN.
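Two of the ingredients above, the Xavier uniform initializer and the cross entropy loss, can be written out directly; this NumPy sketch uses a hypothetical fully-connected layer size and probability vector:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)).
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def cross_entropy(probs, label):
    # Categorical cross entropy for a single example:
    # negative log-probability assigned to the true class.
    return -np.log(probs[label])

rng = np.random.default_rng(0)
W = xavier_uniform(400, 120, rng)    # e.g. a LeNet-sized FC layer (hypothetical)
loss = cross_entropy(np.array([0.1, 0.7, 0.2]), 1)
```

The same initializer and loss are what Keras provides as `glorot_uniform` and `categorical_crossentropy`.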
There are 100 classes in the augmented Moving MNIST dataset, with 10 different rotations or scales in each class and 5 frames per setting. Hence there are 5000 images for training and 5000 images for testing. The training is conducted for 10 epochs with the RMSProp optimizer and a batch size of 50. The objective function is the categorical cross entropy loss. The kernel weight matrix in the LSTM is initialized via the Xavier uniform initializer, and the recurrent kernel weight matrix is initialized with a random orthogonal matrix; the bias vector is initialized to 0. Each time, the LSTM is trained for 300 rounds and the loss and accuracy are averaged to reduce the variance.
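The orthogonal initialization of the recurrent kernel is commonly realized via a QR decomposition of a Gaussian matrix; a NumPy sketch (dimension 8, matching the LSTM output space stated earlier) follows:

```python
import numpy as np

def orthogonal_init(n, rng):
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix,
    # the usual recipe behind an `orthogonal` recurrent initializer.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))    # fix column signs for a unique Q

rng = np.random.default_rng(0)
U = orthogonal_init(8, rng)           # recurrent kernel for an 8-unit LSTM
```

Orthogonal recurrent weights preserve the norm of the hidden state under repeated multiplication, which helps stabilize gradients through time.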
Note that the CNN and RNN are trained separately because STN and DCN introduce many more trainable parameters, as shown in Fig. 1. This issue is more severe for DCN, since the number of trainable offsets increases linearly with the size of the input feature map. For instance, there are 102,483,280 trainable parameters in total for DCN given an input image, while LeNet has only 6,431,080, meaning that DCN has roughly 16 times more parameters than LeNet. As a result, the GPU memory used for the experiments becomes insufficient once the attention modules are distributed along the time dimension, making it difficult to train the CNN and RNN together.
IV-B Quantitative results on augmented Moving MNIST
The cross entropy loss and test accuracy are shown in Table I. The proposed models STN-LSTM and DCN-LSTM are compared against LeNet-LSTM as the baseline. For video classification, DCN-LSTM consistently outperforms LeNet-LSTM, with lower cross entropy loss and higher accuracy in all cases. This shows that although max-pooling layers can deal with rotation or scaling to some extent, DCN is superior to max-pooling at handling deformations. Moreover, Table I also shows that STN-LSTM performs poorly compared to LeNet-LSTM and DCN-LSTM. Since there are two digits in each image and STN performs the affine transformation globally, it is hard for STN to attend to the digits individually, resulting in low accuracy.
Regarding different transformations, DCN-LSTM is hardly affected no matter which kind of augmentation is applied, as its accuracy remains high throughout. The variance in the accuracy of LeNet-LSTM is also small, indicating that max-pooling layers achieve similar results under different augmentations. In contrast, STN-LSTM handles rotated images significantly better than scaled images. Additionally, all models struggle with scaled images, as the accuracy in the third row is the lowest; scaling is thus more challenging than rotation, since all architectures reach a relatively higher accuracy on rotated images.
IV-C Qualitative analysis on augmented Moving MNIST
This section presents a qualitative analysis to shed some light on why STN does not do well. From the images processed by STN shown in Fig. 4, it can be observed that all images are centered and transformed to a canonical orientation. There are several situations in which this kind of processing is disadvantageous. For example, the original image is zoomed out when the digits are far from each other, even though doing so makes the digits smaller. This may explain why STN-LSTM has lower accuracy even for images without any augmentation, as shown in Table I. In addition, STN rotates the two digits in the same way in a rotated image. This is helpful when the same rotation is applied to both digits, as is done in the dataset, but not when the digits are rotated differently. To summarize, the inability of a single STN module to attend to each digit separately hurts its classification performance. To address this issue, STN could be combined with an RNN to detect multiple digits, as in recurrent spatial transformer networks. The attend-infer-repeat inference framework could be used as well to attend to scene elements one at a time.
IV-D Classification of digit gesture
Digit gestures have been widely used for evaluating recognition algorithms, and their classification is more challenging when the digits are written in the air by hand. In this experiment, the elastic deformation described by Simard et al. is applied to the images in the Moving MNIST dataset, together with rotation and scaling augmentation, to simulate the oscillation of hand muscles. Random displacement fields are convolved with a Gaussian filter of standard deviation $\sigma$, then multiplied by a scaling factor $\alpha$. The resulting images are similar to those in the TVC hand gesture database, which contains images generated by accumulating the center points of a hand over time. A sample image after applying elastic deformation is shown in Fig. 5.
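A minimal NumPy sketch of this elastic deformation follows; the Gaussian smoothing is implemented with separable 1-D convolutions, and the alpha/sigma values and nearest-neighbour resampling are simplifications of the cited procedure:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # Normalized 1-D Gaussian kernel used for separable smoothing.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def elastic_deform(img, alpha=8.0, sigma=4.0, seed=0):
    # Simard-style elastic deformation: random displacement fields are
    # smoothed with a Gaussian filter, scaled by alpha, then used to
    # resample the image (nearest-neighbour here for brevity).
    rng = np.random.default_rng(seed)
    H, W = img.shape
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    def smooth(field):
        field = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, field)
        return np.apply_along_axis(
            lambda c: np.convolve(c, k, mode="same"), 0, field)
    dy = smooth(rng.uniform(-1, 1, (H, W))) * alpha
    dx = smooth(rng.uniform(-1, 1, (H, W))) * alpha
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    yy = np.clip(np.rint(ys + dy), 0, H - 1).astype(int)
    xx = np.clip(np.rint(xs + dx), 0, W - 1).astype(int)
    return img[yy, xx]

img = np.zeros((28, 28))
img[10:18, 12:16] = 1.0                 # a blocky stand-in for a digit
warped = elastic_deform(img)
```

Larger sigma yields smoother, more coherent distortions; larger alpha yields stronger ones, mimicking the hand-muscle oscillation described above.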
In terms of cross entropy loss and accuracy, DCN-LSTM remains the best of the three models at dealing with distorted digits, owing to its deformable convolutional layers. Girshick et al. show that deformable part models (DPM) can be formulated as CNNs. Similarly, the deformable convolutional layers in DCN-LSTM could be modified to explicitly learn the deformation field, which outperforms DPM for certain classes. Therefore, the capability of the proposed model in recognizing digit gestures suggests that it could be extended to tracking hand trajectories or dealing with multi-part articulated objects such as hands.
This paper describes a spatiotemporal model for video classification in the presence of rotated and scaled objects. The effectiveness of DCN-LSTM over the baseline is demonstrated on the augmented Moving MNIST dataset. Moreover, preliminary results on digit gesture recognition are shown. Regarding future work, end-to-end training of the network is worth exploring, as it would be more elegant to train the two stages simultaneously in an efficient way.
- Arriaga  Octavio Arriaga. Spatial transformer networks, 2017. URL https://github.com/oarriaga/spatial_transformer_networks.
- Chollet et al.  François Chollet et al. Keras, 2015. URL https://github.com/fchollet/keras.
- Dai et al.  Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. arXiv preprint arXiv:1703.06211, 2017.
- Eslami et al.  SM Eslami, N Heess, and T Weber. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016. URL http://arxiv.org/abs/1603.08575.
- Girshick et al.  Ross Girshick, Forrest Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
- Five video classification methods implemented in Keras and TensorFlow, 2017. URL https://github.com/harvitronix/five-video-classification-methods.
- Hochreiter and Schmidhuber  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- Jaderberg et al.  Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
- Karpathy et al.  Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
- Kim et al.  Seongwan Kim, Yuseok Ban, and Sangyoun Lee. Tracking and classification of in-air hand gesture based on thermal guided joint filter. Sensors, 17(1):166, 2017.
- Lau  Felix Lau. Notes on “deformable convolutional networks”, 2017. URL https://github.com/felixlaumon/deform-conv.
- LeCun et al.  Yann LeCun et al. LeNet-5, convolutional neural networks.
- Lin and Lucey  Chen-Hsuan Lin and Simon Lucey. Inverse compositional spatial transformer networks. arXiv preprint arXiv:1612.03897, 2016.
- Mnih et al.  Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in neural information processing systems, pages 2204–2212, 2014.
- Pedersoli et al.  Marco Pedersoli, Radu Timofte, Tinne Tuytelaars, and Luc Van Gool. An elastic deformation field model for object detection and tracking. International Journal of Computer Vision, 111(2):137–152, January 2015.
- Sharma et al.  Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. arXiv preprint arXiv:1511.04119, 2015.
- Simard et al.  Patrice Y Simard, David Steinkraus, John C Platt, et al. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, volume 3, pages 958–962, 2003.
- Sønderby et al.  Søren Kaae Sønderby, Casper Kaae Sønderby, Lars Maaløe, and Ole Winther. Recurrent spatial transformer networks. arXiv preprint arXiv:1509.05329, 2015.
- Soomro et al.  Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
- Srivastava et al.  Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In International Conference on Machine Learning, pages 843–852, 2015.
- Su et al.  Hao Su, Charles R Qi, Yangyan Li, and Leonidas J Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pages 2686–2694, 2015.