Fisher Vectors have been shown to provide a significant performance gain in many different applications in the domain of computer vision [39, 33, 2, 35]. In the domain of video action recognition, Fisher Vectors and Stacked Fisher Vectors have recently outperformed state-of-the-art methods on multiple datasets [33, 53]. Fisher Vectors (FV) have also recently been applied to word embeddings (e.g., word2vec) and have been shown to provide state-of-the-art results on a variety of NLP tasks, as well as on image annotation and image search tasks.
In all of these contributions, the FV of a set of local descriptors is obtained as a sum of gradients of the log-likelihood of the descriptors in the set with respect to the parameters of a probabilistic mixture model that was fitted on a training set in an unsupervised manner. In spite of being richer than the mean vector pooling method, Fisher Vectors based on a probabilistic mixture model are invariant to order. This makes them less appealing for annotating, for example, video, in which the sequence of events determines much of the meaning.
This work presents a novel approach for FV representation of sequences using a Recurrent Neural Network (RNN). The RNN is trained to predict the next element of a sequence given the previous elements. Conveniently, the gradients needed for the computation of the FV are extracted using the available backpropagation infrastructure.
The new representation is sensitive to ordering and therefore mitigates the disadvantage of using the standard Fisher Vector representation. It is applied to two different and challenging tasks: video action recognition and image annotation by sentences.
Several recent works have proposed to use an RNN for sentence representation [44, 1, 31, 17]. The Recurrent Neural Network Fisher Vector (RNN-FV) method differs from these works in that a sequence is represented by the gradients derived from the RNN, instead of by a hidden or an output layer of the RNN.
The paper explores two different approaches for training the RNN for the image annotation and image search tasks. In the classification approach, the RNN is trained to predict the identity of the following word in the sentence. The regression approach instead predicts the embedding of the following word, i.e., it treats the problem as a regression task. Due to the large vocabulary size, the regression approach is more scalable, and it also achieves better results than the classification approach. In the video action recognition task, the regression approach is the only variant used, since the notion of a discrete word does not exist. The VGG Convolutional Neural Network (CNN) is used to extract features from the frames of the video, and the RNN is trained to predict the embedding of the next frame given the previous ones. Similarly, C3D features of sequential video sub-volumes are used with the same training technique.
Although the image annotation and video action recognition tasks are quite different, a surprising boost in performance in the video action recognition task was achieved by using a transfer learning approach from the image annotation task. Specifically, the VGG image embedding of a frame is projected using a linear transformation which was learned on matching images and sentences by the Canonical Correlation Analysis (CCA) algorithm.
The proposed RNN-FV method achieves state-of-the-art results in action recognition on the HMDB51  and UCF101  datasets. In image annotation and image search tasks, the RNN-FV method is used for the representation of sentences and achieves state-of-the-art results on the Flickr8K dataset  and competitive results on other benchmarks.
2 Previous Work
As in other object recognition problems, the standard pipeline in action recognition comprises three main steps: feature extraction, pooling, and classification. Many works [23, 49, 19] have focused on the first step of extracting local descriptors. Laptev et al. extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features can be used for a compact representation of video data. Wang et al. [51, 50] used low-level hand-crafted features such as histograms of oriented gradients (HOG), histograms of optical flow (HOF), and motion boundary histograms (MBH).
Recent works have attempted to replace these hand-crafted features with deep-learned features for video action recognition, motivated by their wide success in the image domain. Early attempts [45, 12, 15] achieved lower results in comparison to hand-crafted features, showing that it is challenging to apply deep-learning techniques to videos due to the relatively small number of available datasets and the complex motion patterns. More recent attempts managed to overcome these challenges and achieve state-of-the-art results with deep-learned features. Simonyan et al. designed two-stream ConvNets for learning both the appearance of the video frame and the motion as reflected by the estimated optical flow. Du Tran et al. designed an effective approach for spatiotemporal feature learning using 3-dimensional ConvNets.
In the second step of the pipeline, the pooling, Wang et al. compared different pooling techniques for the application of action recognition and showed empirically that the Fisher Vector encoding has the best performance. Recently, more complex pooling methods were demonstrated by Peng et al., who proposed Stacked Fisher Vectors (SFV), a multi-layer nested Fisher Vector encoding, and by Wang et al., who proposed a trajectory-pooled deep-convolutional descriptor (TDD). TDD uses both a motion CNN, trained on UCF101, and an appearance CNN, originally trained on ImageNet and fine-tuned on UCF101.
Image Annotation and Image Search
In the past few years, the state-of-the-art results in image annotation and image search have been provided by deep learning approaches [42, 29, 18, 14, 27, 16, 4, 13, 48, 26]. A typical system is composed of three important components: (i) Image Representation, (ii) Sentence Representation, and (iii) Matching Images and Sentences. The image is usually represented by applying a pre-trained CNN on the image and taking the activations from the last hidden layer.
There are several different approaches to sentence representation. Socher et al. used a dependency-tree Recursive Neural Network. Yan et al. used a TF-IDF histogram over the vocabulary. Klein et al. used word2vec as the word embedding and then applied a Fisher Vector based on a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) in order to pool the word2vec embeddings of the words in a given sentence into a single representation. Ma et al. proposed a matching CNN (m-CNN) that composes words into different semantic fragments and learns the inter-modal relations between the image and the composed fragments at different levels.
Other recent approaches represent the sentence with an RNN. To address the need for capturing long-term semantics in the sentence, these works mainly use Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) cells. Generally, the RNN treats a sentence as an ordered sequence of words and incrementally encodes a semantic vector of the sentence, word by word. At each time step, a new word is encoded into the semantic vector, until the end of the sentence is reached. All of the words and their dependencies are then embedded into the semantic vector, which can be used as a feature-vector representation of the entire sentence. Our work also uses an RNN in order to represent sentences, but takes the gradient derived from the RNN as features, instead of using a hidden or an output layer of the RNN.
A number of techniques have been proposed for the task of matching images and sentences. Klein et al. used CCA, and Yan et al. introduced a Deep CCA, in order to project the images and sentences into a common space, in which a nearest neighbor search between the images and the sentences is then performed. Kiros et al., Karpathy et al., Socher et al., and Ma et al. used a contrastive loss function trained on matching and non-matching (image, sentence) pairs in order to learn a score function for a given pair. Mao et al. and Vinyals et al. learned a probabilistic model for inferring a sentence given an image; they are therefore able to compute the probability that a given sentence would be generated from a given image and use it as the score.
3 Baseline pooling methods
In this section we describe two baseline pooling methods that can represent a multiset of vectors as a single vector. The notion of a multiset is used to clarify that the order of the words in a sentence does not affect the representation, and that a vector can appear more than once. Both methods can be applied to sequences; however, the resulting representation will be insensitive to ordering. To address this, we propose in Sec. 4 a novel pooling method: the RNN-FV.
3.1 Mean Vector
This pooling technique takes a multiset of vectors, $X = \{x_1, \dots, x_N\} \subset \mathbb{R}^D$, and computes its mean: $\frac{1}{N}\sum_{i=1}^{N} x_i$. Clearly, the vector that results from the pooling is in $\mathbb{R}^D$.
The disadvantage of this method is the blurring of the multiset’s content. Consider, for example, the text encoding task, where each word is represented by its word2vec embedding. By adding multiple vectors together, the location obtained – in the semantic embedding space – is somewhere in the convex hull of the words that belong to the multiset.
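As a concrete illustration, mean pooling amounts to a single NumPy call; the toy descriptors below are illustrative, not the paper's features:

```python
import numpy as np

def mean_pool(X):
    """Mean-vector pooling of a multiset of descriptors X, shape (N, D)."""
    return X.mean(axis=0)

X = np.array([[1., 2.], [3., 4.], [3., 4.]])  # a vector may appear twice
pooled = mean_pool(X)                          # lies in R^D
# Any permutation of the rows yields the same pooled vector (order-invariance).
```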
3.2 Fisher Vector of a GMM
Given a multiset of vectors, $X = \{x_1, \dots, x_N\}$, the standard FV is defined as the gradient of the log-likelihood of $X$ with respect to the parameters of a pre-trained diagonal-covariance Gaussian Mixture Model (GMM). It is common practice to limit the FV representation to the partial derivatives with respect to the means, $\mu_k$, and the standard deviations, $\sigma_k$, and to ignore the partial derivatives with respect to the mixture weights.
It is worth noting the linear structure of the GMM FV pooling. Since the likelihood of the multiset is the multiplication of the likelihoods of the individual elements, the log-likelihood is additive. This convenient property would not be preserved in the RNN model, where the probability of an element in the sequence depends on all the previous elements.
To all types of FV, we apply the two improvements that were introduced by Perronnin et al. The first is an element-wise power normalization function, $f(z) = \mathrm{sign}(z)\,|z|^{\alpha}$, where $\alpha$ is a parameter of the normalization. The second is an L2 normalization of the FV after the power normalization function has been applied.
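The GMM-FV computation described above can be sketched as follows. This is a minimal NumPy/scikit-learn illustration on toy data: it computes the log-likelihood gradients with respect to the means and standard deviations of a diagonal-covariance GMM and applies the power and L2 normalizations; the Fisher-Information-based scaling of the improved FV is omitted for brevity, and all sizes are toy assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
K, D = 4, 8  # toy numbers of components and dimensions

# Fit a diagonal-covariance GMM on "training" descriptors (unsupervised).
gmm = GaussianMixture(n_components=K, covariance_type="diag", random_state=0)
gmm.fit(rng.normal(size=(500, D)))

def gmm_fisher_vector(X, gmm, alpha=0.5):
    """FV of a descriptor multiset X (shape (N, D)): gradient of the
    log-likelihood w.r.t. the GMM means and standard deviations."""
    gamma = gmm.predict_proba(X)             # soft assignments, (N, K)
    mu, var = gmm.means_, gmm.covariances_   # (K, D) each; var = sigma^2
    sigma = np.sqrt(var)
    diff = X[:, None, :] - mu[None, :, :]    # (N, K, D)
    # d logL / d mu_kd and d logL / d sigma_kd (mixture weights ignored)
    g_mu = np.einsum("nk,nkd->kd", gamma, diff / var[None])
    g_sig = np.einsum("nk,nkd->kd", gamma, diff ** 2 / (sigma ** 3)[None]) \
            - gamma.sum(0)[:, None] / sigma
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.abs(fv) ** alpha   # power normalization
    return fv / np.linalg.norm(fv)           # L2 normalization

fv = gmm_fisher_vector(rng.normal(size=(30, D)), gmm)
```

Note that the representation dimension is $2KD$ (means plus standard deviations), independent of the multiset size.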
4 RNN-Based Fisher Vector
The pooling methods described above share a common disadvantage: insensitivity to the order of the elements in the sequence. A way to tackle this, while keeping the power of gradient-based representation, would be to replace the Gaussian model by a generative sequence model that takes into account the order of elements in the sequence. A desirable property of the sequence model would be the ability to calculate the gradient (with respect to the model’s parameters) of the likelihood estimate by this model to an input sequence.
In this section, we show that such a model can be obtained by training an RNN to predict the next element in a sequence, given the previous elements. Having this, we propose, for the first time, the RNN-FV: A Fisher Vector that is based on such an RNN sequence model.
We propose two types of RNN-FVs: one is based on solving a regression problem, and the other on solving a classification problem. In practice, only the regression type is directly applicable to video analysis. For image annotation, the regression type also outperforms the classification type.
Given a sequence of vectors with vector elements $x_1, x_2, \dots, x_n$, we convert it to the input sequence $(x_0, x_1, \dots, x_{n-1})$, where $x_0$ is a special element used to denote the beginning of the input sequence; the same $x_0$ is used throughout this paper. The RNN is trained to predict, at each time step $t$, the next element of the sequence, $x_t$, given the previous elements $x_0, \dots, x_{t-1}$. Therefore, given the input sequence, the target sequence would be $(x_1, x_2, \dots, x_n)$.
4.1 RNN Trained for Regression
Given a sequence of input vectors $(x_0, x_1, \dots, x_{n-1})$, the regression RNN is trained to predict the next vector in the sequence at each step, i.e., the target sequence $(x_1, \dots, x_n)$. The output layer of the network is a fully-connected layer, the size of which is $D$, the dimension of the input vector space.
There are several regression loss functions that can be used. Here, we consider the squared-error loss:
$\ell(z, \hat{z}) = \|z - \hat{z}\|_2^2, \quad (1)$
where $z$ is the target vector and $\hat{z}$ is the predicted vector.
After the RNN training is done, and given a new sequence $X = (x_1, \dots, x_n)$, the derived sequence $(x_0, x_1, \dots, x_{n-1})$ is fed to the RNN. Denote the output of the RNN at time step $t$ by $o_t$. The target at time step $t$ is $x_t$ (the next element in the sequence), and the loss is:
$\ell_t = \|x_t - o_t\|_2^2. \quad (2)$
The RNN can be seen as a generative model, and the likelihood of any vector $v$ being the next element of the sequence, given $x_1, \dots, x_{t-1}$, can be defined as:
$p(v \mid x_1, \dots, x_{t-1}) = \frac{1}{Z} \exp\left(-\|v - o_t\|_2^2\right), \quad (3)$
where $Z$ is a normalization constant.
We are generally interested in the likelihood of the correct prediction, i.e., in the likelihood of the vector $x_t$ given $x_1, \dots, x_{t-1}$: $p(x_t \mid x_1, \dots, x_{t-1}) = \frac{1}{Z} \exp(-\ell_t)$.
The RNN-based likelihood of the entire sequence $X$ is:
$P(X) = \prod_{t=1}^{n} p(x_t \mid x_1, \dots, x_{t-1}). \quad (4)$
The negative log-likelihood of $X$ is:
$-\log P(X) = \sum_{t=1}^{n} \ell_t + n \log Z. \quad (5)$
In order to represent $X$ using the Fisher Vector scheme, we have to compute the gradient of $-\log P(X)$ with respect to our model's parameters. With an RNN as our model, the parameters are the weights of the network. By (2) and (5), we get that $-\log P(X)$ equals the loss that would be obtained when $X$ is fed as input to the RNN, up to an additive constant. Therefore, the desired gradient can be computed by backpropagation: we feed $X$ to the network and perform forward and backward passes. The obtained gradient is the (unnormalized) RNN-FV representation of $X$. Notice that this gradient is not used to update the network's weights as is done in training; here, we perform backpropagation at inference time.
Other loss functions may be used instead of the one presented in this analysis. Given a sequence, the gradient of the RNN loss may serve as the sequence representation, even if the loss is not interpretable as a likelihood.
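The regression RNN-FV can be sketched with a toy Elman RNN standing in for the LSTM used in the experiments. Since only the last-layer gradients are ultimately used (Sec. 5.3), the gradient of the summed squared-error loss with respect to the output weights has the closed form $\sum_t 2(o_t - x_t)h_t^\top$, so no autodiff framework is needed in this sketch; all sizes and weights below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16                       # toy feature and hidden dimensions
Wxh = rng.normal(0, 0.1, (H, D))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden weights
Wout = rng.normal(0, 0.1, (D, H))  # last fully-connected (output) layer

def rnn_fv(X):
    """RNN-FV of a sequence X of shape (n, D): gradient of the summed
    squared-error prediction loss w.r.t. the output-layer weights,
    averaged over time steps."""
    n = X.shape[0]
    inputs = np.vstack([np.zeros((1, D)), X[:-1]])  # prepend the start element x0
    h = np.zeros(H)
    grad = np.zeros_like(Wout)
    for t in range(n):
        h = np.tanh(Wxh @ inputs[t] + Whh @ h)      # hidden state
        o = Wout @ h                                # prediction of x_t
        grad += 2.0 * np.outer(o - X[t], h)         # d/dWout of ||o - x_t||^2
    return grad.ravel() / n

seq = rng.normal(size=(5, D))
fv = rnn_fv(seq)  # order-sensitive: reversing seq changes the representation
```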
4.2 RNN Trained for Classification
The classification variant is applicable when predicting a sequence of symbols $s_1, s_2, \dots, s_n$ that have matching vector representations $x_1, x_2, \dots, x_n$. The RNN predicts the sequence $(s_1, \dots, s_n)$ from the input sequence $(x_0, x_1, \dots, x_{n-1})$.
Let $V$ be the size of our symbol alphabet, i.e., the number of unique symbols in the input sequences. The output layer of the network is a softmax layer with $V$ units, where the $i$'th element in the output is the probability of the $i$'th symbol being the next output element. The loss function for the training of the RNN is the cross-entropy loss.
After the RNN is trained, it is ready to be used as a feature extractor for new sequences. Denote the new sequence by $s_1, \dots, s_n$ and its vector representation by $x_1, \dots, x_n$, as above. Consider feeding the sequence $(x_0, x_1, \dots, x_{n-1})$ to the RNN. At time step $t$, the output of the RNN is $p_t \in \mathbb{R}^V$, where $\sum_{i=1}^{V} p_t[i] = 1$. Here, $p_t[i]$ is the probability that the RNN gives to the $i$'th symbol at time step $t$.
The cross-entropy loss at time step $t$ is derived from the probability given to the correct next symbol:
$\ell_t = -\log p_t[s_t]. \quad (6)$
The RNN can be seen as a generative model which gives likelihood to the sequence $S = (s_1, \dots, s_n)$:
$P(S) = \prod_{t=1}^{n} p_t[s_t]. \quad (7)$
The negative log-likelihood of $S$ is:
$-\log P(S) = \sum_{t=1}^{n} \ell_t. \quad (8)$
By (6) and (8), we get that $-\log P(S)$ equals the loss that would be obtained when $(x_0, \dots, x_{n-1})$ is fed as input, and $(s_1, \dots, s_n)$ as target output, to the RNN. Therefore, the desired gradient can be computed by backpropagation, i.e., feeding the sequence to the network and performing forward and backward passes. The obtained gradient is the (unnormalized) RNN-FV representation of $S$.
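A matching sketch for the classification variant: with a softmax output layer, the gradient of the cross-entropy loss with respect to the softmax weights is $\sum_t (p_t - \mathbf{1}_{s_t}) h_t^\top$, where $\mathbf{1}_{s_t}$ is the one-hot vector of the correct symbol. The Elman cell and all sizes are toy stand-ins for the LSTM networks used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H, V = 8, 16, 20                  # embedding dim, hidden units, alphabet size
Wxh = rng.normal(0, 0.1, (H, D))
Whh = rng.normal(0, 0.1, (H, H))
Wsoft = rng.normal(0, 0.1, (V, H))   # softmax output layer

def rnn_fv_cls(symbols, E):
    """RNN-FV of a symbol sequence: gradient of the summed cross-entropy
    loss w.r.t. the softmax weights; E maps symbol ids to embeddings."""
    X = E[symbols]                                  # (n, D) embeddings
    inputs = np.vstack([np.zeros((1, D)), X[:-1]])  # prepend the start element
    h = np.zeros(H)
    grad = np.zeros_like(Wsoft)
    for t, s in enumerate(symbols):
        h = np.tanh(Wxh @ inputs[t] + Whh @ h)
        logits = Wsoft @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                                # p_t over the V symbols
        p[s] -= 1.0                                 # d(-log p_t[s]) / dlogits
        grad += np.outer(p, h)
    return grad.ravel() / len(symbols)

E = rng.normal(size=(V, D))                         # toy symbol embeddings
fv = rnn_fv_cls([3, 7, 1, 7], E)
```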
4.3 Normalization of the RNN-FV
It was suggested in previous work that normalizing the FVs by the Fisher Information Matrix (FIM) is beneficial. We approximated the diagonal of the FIM, which is usually used for FV normalization. Note, however, that we did not observe any empirical improvement due to this normalization, and our experiments are reported without it.
Let $w$ be a single weight of the RNN. The term in the diagonal of the FIM which corresponds to $w$ is $E_X\left[\left(\partial \log P(X) / \partial w\right)^{2}\right]$.
Since the probabilistic model which determines $P(X)$ is the RNN, it is impossible to derive a closed-form expression for this term. Therefore, we approximated it directly from the gradients of the training sequences, by computing the mean of $\left(\partial \log P(X) / \partial w\right)^{2}$ for each $w$. The normalized partial derivatives of the FV are then obtained by dividing each partial derivative by the square root of the corresponding approximated diagonal term.
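The diagonal-FIM approximation amounts to a per-coordinate whitening of the gradients; a sketch with randomly generated stand-in gradients:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(100, 50))  # rows: raw RNN-FV gradients of training sequences

# Diagonal FIM approximation: mean squared gradient per weight,
# estimated directly from the training-set gradients.
fim_diag = (G ** 2).mean(axis=0)

def fim_normalize(g, fim_diag, eps=1e-12):
    """Divide each partial derivative by the square root of the
    corresponding approximated FIM diagonal term."""
    return g / np.sqrt(fim_diag + eps)

g_norm = fim_normalize(G[0], fim_diag)
```

By construction, each coordinate of the normalized gradients has unit second moment over the training set.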
5 Action recognition pipeline
The action recognition pipeline contains the underlying appearance features used to encode the video, the sequence encoding using the RNN-FV, and an SVM classifier on top.
5.1 Visual features
The RNN-FV is capable of encoding the sequence properties, and as underlying features, we rely on video encodings that are based on single frames or on fixed length blocks of frames.
VGG Using the pre-trained VGG convolutional network, we extract a 4096-dimensional representation of each video frame. The VGG pipeline is used; namely, the original image is cropped in ten different ways into 224 by 224 pixel images: the four corners, the center, and their x-axis mirror images. The mean intensity is then subtracted in each color channel and the resulting images are encoded by the network. The average of the 10 feature vectors obtained is used as the single image representation. In order to speed up the method, the input video was sub-sampled, and one in every 10 frames was encoded; empirically, we noticed that recognition performance was comparable to that obtained using all video frames. To further reduce run-time, the data dimensionality was reduced via PCA to 500D. In addition, L2 normalization was applied to each vector. All PCAs in this work were trained for each dataset and each training/test split separately, using only the training data.
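The ten-crop step can be sketched as follows; the `ten_crop` helper is an illustrative name, and the mean subtraction and CNN forward pass are omitted:

```python
import numpy as np

def ten_crop(img, size=224):
    """Return the five size x size crops of an image (four corners plus
    center) and their x-axis mirror images, as in the VGG pipeline."""
    h, w, _ = img.shape
    tops = [0, 0, h - size, h - size, (h - size) // 2]
    lefts = [0, w - size, 0, w - size, (w - size) // 2]
    crops = [img[t:t + size, l:l + size] for t, l in zip(tops, lefts)]
    crops += [c[:, ::-1] for c in crops]  # x-axis mirrors
    return np.stack(crops)                # (10, size, size, 3)

img = np.arange(256 * 256 * 3, dtype=float).reshape(256, 256, 3)
crops = ten_crop(img)
# The frame representation is the average of the 10 CNN feature vectors.
```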
CCA Using the same VGG representation of video frames as mentioned above, and the code available at www.cs.tau.ac.il/~wolf/code/hglmm, we represented each frame by a vector as follows: we considered the common image-sentence vector space obtained by the CCA algorithm, using the best model (GMM+HGLMM) trained on the COCO dataset. We mapped each frame to that vector space, obtaining a 4096-dimensional image representation. As the final frame representation, we used the first (i.e., the principal) 500 dimensions out of the 4096. For our application, the projected VGG representations were L2 normalized. The CCA was trained for the unrelated task of image-to-sentence matching, and its success, therefore, suggests a new application of transfer learning: from image annotation to action recognition.
C3D While the representations above encode single frames, the C3D method splits the video into sub-volumes that are encoded one by one. Following the recommended settings, we applied the Du Tran et al. pre-trained 3D convolutional neural network in order to extract a 4096D representation of each 16-frame block. The blocks are sampled with an 8-frame stride. Following feature extraction, PCA dimensionality reduction (to 500D) and L2 normalization were applied.
5.2 Network structure
Our RNN model consists of three layers: a 200D fully-connected layer with Leaky-ReLU activation, a 200-unit Long Short-Term Memory (LSTM) layer, and a 500D linear fully-connected layer. The network is trained for regression with the mean square error (MSE) loss function. Weight decay and dropout were also applied. An improvement in recognition performance was noticed when the dropout rate was increased, up to a rate of 0.95; a high dropout rate encourages each weight, and hence each gradient coordinate, to remain discriminative on its own.
5.3 Training and classification
We train the RNN to predict the next element in our video representation sequence, given the previous elements, as described in Sec. 4.1. In our experiments, we use only the part of the gradient corresponding to the weights of the last fully-connected layer; empirically, we saw no improvement when using the partial derivatives with respect to the weights of other layers. In order to obtain a fixed-size representation, we average the gradients over all time steps. The gradient representation dimension is 500 x 201 = 100,500, which is the number of weights in the last fully-connected layer (500 outputs, each with 200 inputs and one bias). We then apply PCA to reduce the representation size to 1000D, followed by power and L2 normalization.
Video classification is performed using a linear SVM with parameter C=1. Empirically, we noticed that the best recognition performance is obtained very quickly, and hence early stopping is necessary. In order to choose an early stopping point, we use a validation set. Some of the videos in the dataset are actually segments of the same original video and are included in the dataset as different samples. Care was taken to ensure that no such similar videos appear in both the training and validation sets, in order to guarantee that high validation accuracy reflects good generalization and not merely over-fitting.
After each RNN epoch, we extract the RNN-FV representation as described above, train a linear SVM classifier on the training set and evaluate the performance on the validation set. The early stopping point is chosen at the epoch with highest recognition accuracy on the validation set. After choosing our model this way, we train an SVM classifier on all training samples (training + validation samples) and report our performance on the test set.
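The epoch-selection loop can be sketched as below, with random arrays standing in for the per-epoch RNN-FV features; `select_epoch` is a hypothetical helper name, not from the paper's code:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)

def select_epoch(feats_per_epoch, y_tr, y_va):
    """Pick the RNN epoch whose RNN-FV features give the best validation
    accuracy with a linear SVM (the early stopping point)."""
    best_epoch, best_acc = 0, -1.0
    for e, (F_tr, F_va) in enumerate(feats_per_epoch):
        clf = LinearSVC(C=1.0).fit(F_tr, y_tr)
        acc = clf.score(F_va, y_va)
        if acc > best_acc:
            best_epoch, best_acc = e, acc
    return best_epoch, best_acc

# Toy stand-in: random "RNN-FV features" for 3 epochs of RNN training.
y_tr, y_va = rng.integers(0, 2, 40), rng.integers(0, 2, 20)
epochs = [(rng.normal(size=(40, 10)), rng.normal(size=(20, 10)))
          for _ in range(3)]
best_epoch, best_acc = select_epoch(epochs, y_tr, y_va)
```

In the actual pipeline, the final SVM is then retrained on training plus validation samples using the features from the chosen epoch.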
6 Image-sentence retrieval
In the image-sentence retrieval tasks, vector representations are extracted separately for the sentences and the images. These representations are then mapped into a common vector space, where the two are matched. A similar pipeline was previously presented for the GMM-FV; we replace that representation with the RNN-FV.
A sentence, being an ordered sequence of words, can be represented as a vector using the RNN-FV scheme. Given a sentence with words $w_1, \dots, w_n$ (where $w_n$ is considered to be the period, namely a special token), we treat the sentence as the ordered sequence of word embeddings $(x_1, \dots, x_n)$ and convert it to the input sequence $(x_0, x_1, \dots, x_{n-1})$, where $x_0$ is the special start element. An RNN is trained to predict, at each time step $t$, the next word of the sentence, given the previous words. Therefore, given the input sequence, the target sequence consists of the words $w_1, \dots, w_n$ themselves (or their embeddings, in the regression variant).
The training data may be any large set of sentences. These sentences may be extracted from the dataset of a specific benchmark, or, in order to obtain a generic representation, any external corpus, e.g., Wikipedia, may be used.
Both network alternatives, classification and regression, are explored. As observed in the action recognition case, we did not benefit from extracting partial derivatives with respect to the weights of the hidden layers; hence, we only use those of the output layer as our representation.
When the RNN is trained for classification, each word in the dictionary is considered as a class. The input to the network is the word's embedding, a 300D vector in our case. The hidden layer is an LSTM with 512 units, which is followed by a softmax output layer. This design creates two challenges. The first is dimensionality: the size of the softmax layer is the size of the dictionary, $V$, which is typically large. As a result, the gradient has a high dimensionality. The second issue is generalization capability: since the softmax layer is fixed, the network cannot handle a sentence containing a word that does not appear in its training data.
When training the RNN for regression, the same 300D input is used, followed by an LSTM layer of size 100. The output layer, in this case, is fully-connected, and the (300-dimensional) word embedding of the next word is predicted. We use no activation function at the output layer. Notice that the two issues pointed out regarding the classification RNN are not present in the regression case. First, the size of the output layer depends only on the dimension of the word embedding. Second, the network can naturally handle unseen words, since it predicts vectors in the word vector space rather than an index of a specific word.
CCA, with a regularization parameter selected based on the validation set, is used to match the VGG image representation with the sentence RNN-FV representation. In the shared CCA space, the cosine similarity is used.
We explored several configurations for training the RNN.
RNN training data: We employed either the training data of each split in the respective benchmark, or the 2010-English-Wikipedia-1M dataset made available by the Leipzig Corpora Collection. This dataset contains 1 million sentences randomly sampled from English Wikipedia.
Word embedding: A word was represented either by word2vec, or by the GMM+HGLMM representation projected to a 300D sentence-to-VGG-encoded-image CCA space. We made sure to match the training split according to the benchmark tested.
Sentence sequence direction: We explored both the conventional left-to-right sequence of words and the reverse direction.
7 Experiments
We evaluated the effectiveness of the various pooling methods on two important yet distinct application domains: action recognition, and image textual annotation and search.
As mentioned, applying the FIM normalization (Sec. 4.3) did not seem to improve results. Another form of normalization we tried is to normalize each dimension of the gradient by subtracting its mean and dividing by its standard deviation. This also did not lead to improved performance. Two normalizations that were found to be useful are the power normalization and the L2 normalization (see Sec. 3.2). Both are employed, using a fixed value of the power parameter.
7.1 Action recognition
Our experiments were conducted on two large action recognition benchmarks. The UCF101  dataset consists of 13,320 realistic action videos, collected from YouTube, and divided into 101 action categories. We use the three splits provided with this dataset in order to evaluate our results and report the mean average accuracy over these splits.
The HMDB51 dataset  consists of 6766 action videos, collected from various sources, and divided into 51 action categories. Three splits are provided as an official benchmark and are used here. The mean average accuracy over these splits is reported.
Table 1 compares our RNN-FV pooling method to Mean and GMM-FV pooling. Three sets of features, as described in Sec. 5.1 are used: VGG coupled with PCA, VGG projected by the image to sentence matching CCA, and C3D.
The parameters were set on the validation split that we created from the provided training set. For GMM-FV, the only parameter is $k$, the number of components in the mixture. The validated values of $k$ were taken from a fixed set of candidates. The parameter for RNN-FV was the stopping point of the RNN training, as described in Sec. 5.3. Classification is conducted in all experiments using a multiclass (one-vs-all) linear SVM with C=1.
As can be seen in Table 1, the RNN-FV pooling outperformed the other pooling methods by a sizable margin. Another interesting observation is that with the VGG frame representation, CCA outperformed PCA consistently across all pooling methods. Not shown is the performance obtained when using the activations of the RNN as a feature vector; these results are considerably worse than those of all the pooling methods. Notice that the representation dimension of Mean pooling is 500 (the dimension of the features we used), the GMM-FV dimension is $2 \cdot 500 \cdot k$, where $k$ is the number of clusters, and the RNN-FV dimension is 1000.
Table 2 compares our proposed RNN-FV method, combining multiple features together, with recently published methods on both datasets. The combinations were performed using early fusion, i.e., we concatenated the normalized low-dimensional gradients of the models and trained a multi-class linear SVM on the combined representation. We also tested the combination of our two best models with idt and obtained state-of-the-art results on both benchmarks. Interestingly, when training the RNNs on UCF101 and applying them to encode HMDB51 videos, comparable results are obtained (with and without idt), which are also above the current state of the art.
Table 2: Comparison with the state of the art on HMDB51 and UCF101 (mean accuracy over the official splits, in %).
|Method||HMDB51||UCF101|
|idt + high-D encodings||61.1||87.9|
|Two-stream CNN (2 nets)||59.4||88|
|Multi-skip Feature Stacking||65.4||89.1|
|C3D (1 net)||–||82.3|
|C3D (3 nets)||–||85.2|
|C3D (3 nets) + idt||–||90.4|
|TDD (2 nets)||63.2||90.3|
|TDD (2 nets) + idt||65.9||91.5|
|stacked FV||56.21||–|
|stacked FV + idt||66.78||–|
|RNN-FV (C3D + VGG-CCA)||54.33||88.01|
|RNN-FV (C3D + VGG-CCA) + idt||67.71||94.08|
7.2 Image-sentence retrieval
The effectiveness of the RNN-FV as a sentence representation is evaluated on the bidirectional image and sentence retrieval task. We perform our experiments on three benchmarks: Flickr8K, Flickr30K, and COCO, which contain roughly 8K, 30K, and 123K images, respectively. Each image is accompanied by 5 sentences describing the image content, collected via crowdsourcing.
The Flickr8K dataset is provided with training, validation, and test splits. For Flickr30K and COCO, no training splits are given, and we use the same splits as in previous work.
There are three tasks in this benchmark: image annotation, in which the goal is to retrieve, given a query image, the five ground truth sentences; image search, in which, given a query sentence, the goal is to retrieve the ground truth image; and sentence similarity, in which the goal is, given a sentence, to retrieve the other four sentences describing the same image. Evaluation is performed using Recall@K, namely the fraction of times the correct result was ranked within the top K items. The median and mean rank of the first ground truth result are also reported. For the sentence similarity task, only mean rank is reported.
As mentioned in Sec. 6, we explored RNN-FVs based on several RNNs. The first RNN is a generic one: it was trained with the Wikipedia sentences as training data and word2vec as the word embedding. In addition, for each of the three datasets, we trained three RNNs with the dataset's training sentences as training data: one with word2vec as the word embedding; one with the "CCA word embedding" derived from the shared image-sentence vector space, as explained in Sec. 6; and one with the CCA word embedding and with the sentences fed in reverse order. These RNNs were all trained for regression. For Flickr8K, we also trained an RNN for classification (with the Flickr8K training sentences and word2vec embedding). In this network, the softmax layer was of size 8,148, corresponding to the number of unique words in the Flickr8K dataset. Since the resulting number of weights in the output layer is around 4 million, we reduced the dimension of the gradient feature vector by randomly sampling 72,000 coordinates. Training a classification model on the larger datasets is impractical, since the number of unique words in these datasets is much higher, resulting in a very large softmax layer and a huge number of weights.
In the regression RNNs, we used an LSTM layer of size 100; we did not observe a benefit in using more LSTM units. We used the part of the gradient corresponding to all 30,300 weights of the output layer (including one bias per word2vec dimension). In the case of the larger COCO dataset, due to the computational burden of the CCA calculation, we used PCA to reduce the gradient dimension from 30,300 to 20,000. The PCA was calculated on a random subset of 300,000 sentences (around 50% of the training set). We also tried PCA dimension reduction to a lower dimension of 4,096 for all three datasets, and observed no change in performance (Flickr8K) or slightly worse results (Flickr30K and COCO).
The number of RNN training epochs was 400, 100, 20, and 15, for the Flickr8k, Flickr30k, COCO and Wikipedia datasets respectively.
Table 3: Results on the Flickr8K dataset.
|Image Annotation||Image Search||Sentence|
|Method||R@1||R@5||R@10||Med r||Mean r||R@1||R@5||R@10||Med r||Mean r||Mean r|
|cca + rvrs||30.8||59.8||72.9||4.0||18.2||21.8||49.6||64.4||6.0||27.3||11.2|
|cca + ||32.9||61.7||74.9||3.0||16.8||22.0||51.5||66.5||5.0||20.7||9.4|
|cca + rvrs + ||32.1||60.7||74.8||3.0||16.5||22.1||51.4||66.5||5.0||21.4||9.5|
|all rnn-fv models||29.9||60.7||73.4||4.0||17.9||22.4||52.7||67.2||5.0||20.9||8.7|
|all rnn-fv models + ||31.6||61.2||74.3||3.0||17.4||23.2||53.3||67.8||5.0||19.4||8.5|
Table 4: Results on the Flickr30K dataset.
|Image Annotation||Image Search||Sentence|
|Method||R@1||R@5||R@10||Med r||Mean r||R@1||R@5||R@10||Med r||Mean r||Mean r|
|RTP (manual annotations)||37.4||63.1||74.3||NA||NA||26.0||56.0||69.3||NA||NA||NA|
|cca + rvrs||33.6||62.4||73.4||3.0||15.5||25.0||53.6||66.9||5.0||26.2||15.5|
|cca + ||35.1||63.3||74.2||3.0||15.3||26.4||54.9||68.6||4.0||21.7||13.4|
|cca + rvrs + ||35.1||63.5||74.5||3.0||15.0||26.5||55.2||68.5||4.0||22.0||13.5|
|all rnn-fv models||34.7||62.7||72.6||3.0||15.6||26.2||55.1||69.2||4.0||21.2||12.8|
|all rnn-fv models + ||35.6||62.5||74.2||3.0||15.0||27.4||55.9||70.0||4.0||20.0||12.2|
Table 5: Results on the COCO dataset. Columns 2–6: image annotation (R@1, R@5, R@10, median rank, mean rank); columns 7–11: image search (same metrics); last column: the sentence similarity task.

| Model | R@1 | R@5 | R@10 | Med r | Mean r | R@1 | R@5 | R@10 | Med r | Mean r | Sentence |
|---|---|---|---|---|---|---|---|---|---|---|---|
| cca + rvrs | 40.8 | 73.4 | 84.1 | 2.0 | 8.2 | 30.4 | 65.5 | 80.9 | 3.0 | 10.7 | 12.3 |
| cca + GMM+HGLMM | 40.7 | 72.3 | 83.5 | 2.0 | 9.1 | 28.1 | 64.1 | 79.8 | 3.0 | 10.2 | 11.5 |
| cca + rvrs + GMM+HGLMM | 40.2 | 72.7 | 84.2 | 2.0 | 8.6 | 29.0 | 64.8 | 80.2 | 3.0 | 10.1 | 11.5 |
| all RNN-FV models | 40.8 | 71.9 | 83.2 | 2.0 | 8.9 | 29.6 | 64.8 | 80.5 | 3.0 | 9.7 | 10.6 |
| all RNN-FV models + GMM+HGLMM | 41.5 | 72.0 | 82.9 | 2.0 | 9.0 | 29.2 | 64.7 | 80.4 | 3.0 | 9.5 | 10.2 |
Tables 3, 4 and 5 compare the different RNN-FV variants to the current state-of-the-art methods. We also report results for combinations of models, obtained by averaging the image-sentence (or sentence-sentence) cosine similarities produced by each model.
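The combination rule is straightforward; the sketch below (with hypothetical helper names) averages the cosine-similarity matrices produced by several models, so models that embed into spaces of different dimensions can still be fused:

```python
import numpy as np

def cosine_similarities(images, sentences):
    """All pairwise cosine similarities: L2-normalize rows, then multiply."""
    im = images / np.linalg.norm(images, axis=1, keepdims=True)
    se = sentences / np.linalg.norm(sentences, axis=1, keepdims=True)
    return im @ se.T

def combine(models):
    """Average the per-model similarity matrices (the fusion rule in the text).
    `models` is a list of (image_vectors, sentence_vectors) pairs; each pair
    may live in its own embedding space, only the matrix shapes must agree."""
    return np.mean([cosine_similarities(im, se) for im, se in models], axis=0)
```

Retrieval is then performed by ranking rows (image search) or columns (image annotation) of the combined matrix.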
First, the regression-based RNN-FV should be preferred over the classification-based one: in addition to its lower dimension and natural handling of unseen words, it obtains better results. Second, we note the competitive performance of the model trained on Wikipedia sentences, which demonstrates the generalization power of the RNN-FV: it performs well on data different from that on which the RNN was trained. Training on the dataset's own sentences improves results only slightly, and not always. Improved results are obtained when using the CCA word embedding instead of word2vec. Interestingly, the "reverse" model is on a par with the other models and is somewhat complementary to the "left-to-right" model: combining the two yields somewhat improved results. Finally, the combination of the RNN-FV with the best model (GMM+HGLMM) of  outperforms the current state of the art on Flickr8K and is competitive on the other datasets.
This paper introduces a novel FV representation for sequences that is derived from RNNs. The proposed representation is sensitive to the element ordering in the sequence and provides a richer model than the additive “bag” model typically used for conventional FVs.
The RNN-FV representation surpasses the state-of-the-art results for video action recognition on two challenging datasets. When used to represent sentences, it achieves state-of-the-art or competitive results on image annotation and image search tasks. Since the sentences in these tasks are usually short, and ordering is therefore less crucial, we believe that applying the RNN-FV to tasks involving longer texts will widen the gap over the conventional FV even further.
We also demonstrated transfer learning from the image annotation task to the video action recognition task. The conceptual distance between these two tasks makes this result both interesting and surprising. It supports a human-development-like way of training, in which visual labeling is learned through natural language, as opposed to, e.g., associating bounding boxes with nouns. While such training has been used in computer vision for related image-to-text tasks, and while zero-shot action recognition was recently shown [11, 55], NLP-to-video action recognition transfer has never been shown to be as general as presented here.
This research is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
-  K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In British Machine Vision Conference, 2011.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531, 2014.
-  X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
-  J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
-  J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. arXiv preprint arXiv:1411.4389, 2014.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
-  M. Hodosh, P. Young, and J. Hockenmaier. Framing image description as a ranking task: Data, models and evaluation metrics. J. Artif. Intell. Res. (JAIR), 47:853–899, 2013.
-  P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2014.
-  H. Hotelling. Relations between two sets of variates. Biometrika, pages 321–377, 1936.
-  M. Jain, J. C. van Gemert, T. Mensink, and C. G. M. Snoek. Objects2action: Classifying and localizing actions without any video example. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, December 2015.
-  S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(1):221–231, 2013.
-  A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. Technical report, Computer Science Department, Stanford University, 2014.
-  A. Karpathy, A. Joulin, and L. Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. arXiv preprint arXiv:1406.5679, 2014.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. IEEE, 2014.
-  R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
-  R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. arXiv preprint arXiv:1506.06726, 2015.
-  B. Klein, G. Lev, G. Sadeh, and L. Wolf. Associating neural word embeddings with deep image representations using fisher vectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4437–4446, 2015.
-  O. Kliper-Gross, Y. Gurovich, T. Hassner, and L. Wolf. Motion interchange patterns for action recognition in unconstrained videos. In Computer Vision–ECCV 2012, pages 256–269. Springer, 2012.
-  H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proc. IEEE Int. Conf. Comput. Vision, 2011.
-  Z. Lan, M. Lin, X. Li, A. G. Hauptmann, and B. Raj. Beyond gaussian pyramid: Multi-skip feature stacking for action recognition. arXiv preprint arXiv:1411.6660, 2014.
-  I. Laptev. On space-time interest points. Int. J. Comput. Vision, 64(2):107–123, 2005.
-  I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Proc. IEEE Conf. Comput. Vision Pattern Recognition, pages 1–8, 2008.
-  G. Lev, B. Klein, and L. Wolf. In defense of word embedding for generic text representation. In Natural Language Processing and Information Systems, pages 35–50. Springer International Publishing, 2015.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Zitnick. Microsoft coco: Common objects in context. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, Computer Vision – ECCV 2014, volume 8693 of Lecture Notes in Computer Science, pages 740–755. Springer International Publishing, 2014.
-  L. Ma, Z. Lu, L. Shang, and H. Li. Multimodal convolutional neural networks for matching image and sentence. arXiv preprint arXiv:1504.06063, 2015.
-  J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014.
-  J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille. Explain images with multimodal recurrent neural networks. arXiv preprint arXiv:1410.1090, 2014.
-  F. Yan and K. Mikolajczyk. Deep correlation for matching images and text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
-  H. Palangi, L. Deng, Y. Shen, J. Gao, X. He, J. Chen, X. Song, and R. Ward. Deep sentence embedding using the long short term memory network: Analysis and application to information retrieval. arXiv preprint arXiv:1502.06922, 2015.
-  X. Peng, L. Wang, X. Wang, and Y. Qiao. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. arXiv preprint arXiv:1405.4506, 2014.
-  X. Peng, C. Zou, Y. Qiao, and Q. Peng. Action recognition with stacked fisher vectors. In Computer Vision–ECCV 2014, pages 581–595. Springer, 2014.
-  F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007.
-  F. Perronnin, Y. Liu, J. Sánchez, and H. Poirier. Large-scale image retrieval with compressed fisher vectors. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3384–3391. IEEE, 2010.
-  F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In Computer Vision–ECCV 2010, pages 143–156. Springer, 2010.
-  B. Plummer, L. Wang, C. Cervantes, J. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. arXiv preprint arXiv:1505.04870, 2015.
-  U. Quasthoff, M. Richter, and C. Biemann. Corpus portal for search in monolingual corpora. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), pages 1799–1802, 2006.
-  K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. Fisher vector faces in the wild. In Proc. BMVC, volume 1, page 7, 2013.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  R. Socher, Q. Le, C. Manning, and A. Ng. Grounded compositional semantics for finding and describing images with sentences. In NIPS Deep Learning Workshop, 2013.
-  K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in the wild. CRCV-TR-12-01, Nov. 2012.
-  I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
-  G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In Computer Vision–ECCV 2010, pages 140–153. Springer, 2010.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. arXiv preprint arXiv:1412.0767, 2014.
-  H. Vinod. Canonical ridge and econometrics of joint production. Journal of Econometrics, 4(2):147 – 166, 1976.
-  O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
-  H. Wang, A. Klaser, C. Schmid, and C. Liu. Action recognition by dense trajectories. In Proc. IEEE Conf. Comput. Vision Pattern Recognition, pages 3169–3176, 2011.
-  H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vision, 103(1):60–79, 2013.
-  H. Wang and C. Schmid. Action Recognition with Improved Trajectories. In International Conference on Computer Vision, Oct. 2013.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 3551–3558. IEEE, 2013.
-  L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. arXiv preprint arXiv:1505.04868, 2015.
-  X. Wang, L. Wang, and Y. Qiao. A comparative study of encoding, pooling and normalization methods for action recognition. In Computer Vision–ACCV 2012, pages 572–585. Springer, 2013.
-  X. Xu, T. M. Hospedales, and S. Gong. Semantic embedding space for zero-shot action recognition. CoRR, abs/1502.01540, 2015.