Visual information retrieval (VIR) is the information delivery mechanism that enables users to post queries and obtain answers from visual contents (Gupta and Jain, 1997). As an emerging kind of recommender system, visual question answering is an important problem for VIR sites: it automatically returns the relevant answer from the referenced visual contents according to the user's posted question (Antol et al., 2015; He et al., 2016a, 2017a, 2017b). Currently, most existing visual question answering methods focus on the problem of static image question answering (Antol et al., 2015; Yang et al., 2016; Nie et al., 2013; Luo et al., 2016; Nie et al., 2011; Wang et al., 2012b). Although these methods have achieved promising performance on the image question answering task, they may still be ineffective when applied to video question answering, because they do not model the temporal dynamics of video contents (Zhao et al., 2016, 2017).
Video content often contains complex, evolving interactions, so a simple extension of image question answering fails to provide satisfactory answers. This is because the relevant video information is usually scattered across the entire set of frames; furthermore, many frames are redundant or irrelevant to the question. We give a simple example of video question answering in Figure 1: answering the question "What is a woman boiling in a pot of water?" requires collective information from multiple video frames. Recently, temporal attention mechanisms have shown their effectiveness at extracting the critical frames for video representation learning (Wang et al., 2012a); we therefore employ temporal attention to model the temporal dynamics of video contents. On the other hand, the utilization of high-level semantic attributes has demonstrated its effectiveness in visual understanding tasks (Nie et al., 2012), and we observe in Figure 1 that the detected attributes can enhance the performance of video question answering. Thus, leveraging both temporal dynamic modeling and semantic attributes is critical for learning effective video representations for video question answering.
In this paper, we study the problem of video question answering by modeling its temporal dynamics and semantic attributes. Specifically, we propose an attribute-augmented attention network learning framework that enables joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate a multi-step reasoning process into the proposed attribute-augmented attention network to further improve the performance; we name the resulting method r-ANL. When a question is issued, r-ANL returns the relevant answer based on the referenced video content. The main contributions of this paper are as follows:
- Unlike previous studies, we study the problem of video question answering by modeling its temporal dynamics and semantic attributes. We propose the attribute-augmented attention network learning framework that jointly detects frame-level attributes and learns a unified video representation for video question answering.
- We incorporate a multi-step reasoning process into the proposed attention network, enabling progressive joint representation learning of the temporally attended, attribute-augmented video and the textual question, which further improves the performance of video question answering.
- We construct a large-scale dataset for video question answering and evaluate the performance of our method on both multiple-choice and open-ended video question answering tasks.
2. Video Question Answering via Attention Network Learning
2.1. Problem Formulation
Before presenting our method, we first introduce some basic notions and terminology. We denote the question by $q$, the video by $v$, and the attributes by $a$, respectively. The frame-level representation of video $v$ is given by $v = \{f_1, f_2, \ldots, f_N\}$, where $N$ is the length of video $v$. We then denote the frame-level attribute representation of video $v$ by $a = \{a_1, a_2, \ldots, a_N\}$, where $a_j$ is the set of attributes for the $j$-th frame. We then denote the vocabulary set (dictionary) by $D$, where each word $w \in D$ has a one-hot representation. Since both video and question content are sequential data of variable length, it is natural to choose a variant of the recurrent neural network, the long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997).
Specifically, we learn the feature representations of both video and question with a bidirectional LSTM, which consists of a forward LSTM and a backward LSTM (Zhang et al., 2016). The backward LSTM has the same network structure as the forward one, but its input sequence is reversed. We denote the hidden state of the forward LSTM at time $t$ by $\overrightarrow{h}_t$, and the hidden state of the backward LSTM by $\overleftarrow{h}_t$. Thus, the hidden state of the video at time $t$ from the bidirectional layer is denoted by $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$. The hidden states of video $v$ are given by $H_v = \{h_1, h_2, \ldots, h_N\}$. We then denote the latent representation of question $q$ from the bidirectional layer by $h_q$.
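The bidirectional encoding above can be sketched as follows. For brevity, a plain tanh recurrence stands in for the LSTM cell, and all names and dimensions are illustrative, not part of the original method:

```python
import numpy as np

def rnn_step(h, x, Wh, Wx):
    # simple tanh recurrence standing in for an LSTM cell
    return np.tanh(Wh @ h + Wx @ x)

def bidirectional_encode(frames, Wh, Wx, hidden=4):
    """Encode a frame sequence with a forward and a backward pass,
    concatenating the two hidden states at each time step."""
    T = len(frames)
    h_fwd = np.zeros(hidden)
    h_bwd = np.zeros(hidden)
    fwd, bwd = [], [None] * T
    for t in range(T):                    # forward pass over the frames
        h_fwd = rnn_step(h_fwd, frames[t], Wh, Wx)
        fwd.append(h_fwd)
    for t in reversed(range(T)):          # backward pass over the reversed sequence
        h_bwd = rnn_step(h_bwd, frames[t], Wh, Wx)
        bwd[t] = h_bwd
    # h_t = [forward state ; backward state]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
frames = [rng.standard_normal(8) for _ in range(5)]   # 5 toy frame features
Wh, Wx = rng.standard_normal((4, 4)), rng.standard_normal((4, 8))
H = bidirectional_encode(frames, Wh, Wx)              # one 8-d state per frame
```

In practice both directions would have separate LSTM parameters; they are shared here only to keep the sketch short.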
Using the notation above, the problem of video question answering is formulated as follows. Given the set of videos $\mathcal{V}$, questions $\mathcal{Q}$, and attributes $\mathcal{A}$, our goal is to learn the attribute-augmented attention network such that when a question is issued, r-ANL returns the relevant answer based on the referenced video content. We present the details of the attribute-augmented attention network learning framework in Figure 2.
2.2. Attribute-Augmented Attention Network Learning
In this section, we propose the attribute-augmented attention network to learn the joint representation of multimodal video content and detected attributes according to the question for both multiple choice and open-ended video question answering tasks.
We first employ a set of pre-trained attribute detectors to obtain the visual attributes of each frame in video $v$, denoted as $a_j$ for the $j$-th frame (Wang et al., 2015; Johnson et al., 2016; Zhang et al., 2013). Each attribute corresponds to one entry in the vocabulary set $D$. We then obtain the representation of attribute set $a_j$ by $u_j = \frac{1}{|a_j|} \sum_{w \in a_j} E w$, where $E$ is the embedding matrix for attribute representation and $|a_j|$ is the size of the attribute set for the $j$-th frame. We thus learn the joint representation of the multimodal attributes and frame representation by $g_j = h_j \odot u_j$, where $\odot$ is the element-wise product and $h_j$ is the hidden state from the bidirectional layer at time $j$.
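The attribute-frame fusion step can be sketched as below: the detected attributes are embedded, mean-pooled, and combined with the frame's hidden state via an element-wise product. The embedding matrix, attribute indices, and dimensions are all illustrative:

```python
import numpy as np

def fuse_attributes(h_j, attr_ids, E):
    """Element-wise product of a frame's hidden state with the
    mean-pooled embeddings of its detected attributes."""
    u_j = E[attr_ids].mean(axis=0)   # pooled attribute embedding u_j
    return h_j * u_j                 # g_j = h_j (element-wise *) u_j

rng = np.random.default_rng(1)
E = rng.standard_normal((100, 8))    # toy embedding matrix over a 100-word vocabulary
h_j = rng.standard_normal(8)         # hidden state of the j-th frame
g_j = fuse_attributes(h_j, [3, 17, 42], E)   # three detected attributes
```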
Inspired by the temporal attention mechanism, we introduce the attribute-augmented attention network to learn the attribute-augmented video representation according to the question for video question answering. Given the question $q$ and the $j$-th frame of video $v$, the temporal attention score is given by:

$$s_j = w_s^\top \tanh(W_q h_q + W_g g_j + b_s),$$

where $W_q$ and $W_g$ are parameter matrices and $b_s$ is a bias vector. Here $h_q$ denotes the latent representation of question $q$ and $g_j$ is the attribute-augmented latent representation of the $j$-th frame from the bidirectional LSTM networks, respectively. For each frame $f_j$, the activation in the temporal dimension is given by the softmax normalization of the temporal attention scores, $\alpha_j = \exp(s_j) / \sum_{k=1}^{N} \exp(s_k)$. Thus, the temporally attended video representation according to question $q$ is given by $\tilde{v} = \sum_{j=1}^{N} \alpha_j g_j$.
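The attention step can be illustrated with a minimal numpy sketch: score each attribute-augmented frame against the question, softmax-normalize the scores over time, and take the weighted sum. Parameter names and sizes are assumptions for illustration:

```python
import numpy as np

def temporal_attention(h_q, G, Wq, Wg, w, b):
    """Score each attribute-augmented frame g_j against the question h_q,
    normalize with a softmax over time, and return the attended vector."""
    scores = np.array([w @ np.tanh(Wq @ h_q + Wg @ g + b) for g in G])
    alpha = np.exp(scores - scores.max())   # stable softmax
    alpha /= alpha.sum()
    v_att = sum(a * g for a, g in zip(alpha, G))   # weighted sum over frames
    return alpha, v_att

rng = np.random.default_rng(2)
d = 8
h_q = rng.standard_normal(d)                     # question representation
G = [rng.standard_normal(d) for _ in range(5)]   # 5 attribute-augmented frames
Wq, Wg = rng.standard_normal((d, d)), rng.standard_normal((d, d))
w, b = rng.standard_normal(d), rng.standard_normal(d)
alpha, v_att = temporal_attention(h_q, G, Wq, Wg, w, b)
```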
We then incorporate the multi-step reasoning process into the proposed attribute-augmented attention network to further improve the question-oriented video representation for video question answering. Given the attribute-augmented attention network $f(\cdot)$, video $v$, and question $q$, the attribute-augmented attention network learning with the multi-step reasoning process is given by:

$$\tilde{v}^{(r)} = f(v, h_q^{(r-1)}), \qquad h_q^{(r)} = h_q^{(r-1)} + \tilde{v}^{(r)},$$

which is recursively updated. The joint question-oriented video representation is then returned after the $R$-th reasoning update, given by $\tilde{v}^{(R)}$. The learning process of the reasoning attribute-augmented attention network is illustrated in Figure 2.
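The reasoning loop can be sketched as follows; a toy dot-product attention stands in for the attribute-augmented attention network, and the additive query update is one plausible choice of recursion, not necessarily the exact one used by r-ANL:

```python
import numpy as np

def reasoning_loop(h_q, G, attend, steps=2):
    """Refine the question representation by repeatedly attending over
    the frame representations and folding the attended vector back in."""
    q = h_q
    for _ in range(steps):
        v_att = attend(q, G)   # question-guided attention over the frames
        q = q + v_att          # additive update of the query
    return q

def dot_attend(q, G):
    # toy dot-product attention standing in for the attribute-augmented network
    scores = np.array([q @ g for g in G])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return sum(a * g for a, g in zip(alpha, G))

rng = np.random.default_rng(3)
G = [rng.standard_normal(6) for _ in range(4)]   # 4 toy frame representations
q0 = rng.standard_normal(6)                      # initial question representation
q2 = reasoning_loop(q0, G, dot_attend, steps=2)  # after two reasoning steps
```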
We next present the objective function of our method for both the multiple-choice and open-ended video question answering tasks. For the multiple-choice task, we model video question answering as a classification problem with pre-defined answer classes. Given the updated joint question-oriented video representation $\tilde{v}^{(R)}$, a softmax function is employed to classify it into one of the possible answers as

$$p = \mathrm{softmax}(W_p \tilde{v}^{(R)} + b_p),$$
where $W_p$ is the parameter matrix and $b_p$ is the bias vector. On the other hand, for training the model for open-ended video question answering, we employ an LSTM decoder to generate free-form answers based on the updated joint question-oriented video representation $\tilde{v}^{(R)}$. Given video $v$, question $q$, ground-truth answer $y = (y_1, \ldots, y_T)$, and generated answer $\hat{y} = (\hat{y}_1, \ldots, \hat{y}_T)$, the loss function is given by:

$$\mathcal{L}(v, q, y) = -\sum_{t=1}^{T} \sum_{w \in D} \mathbb{1}[y_t = w] \log p(\hat{y}_t = w),$$

where $\mathbb{1}[\cdot]$ is the indicator function. We denote all the model coefficients, including the neural network parameters and the embeddings, by $\theta$. Therefore, the objective function of our learning process is given by

$$\min_{\theta} \sum_{(v, q, y)} \mathcal{L}(v, q, y) + \lambda \|\theta\|_2^2,$$

where $\lambda$ is the trade-off parameter between the training loss and the regularization. To optimize the objective function, we employ stochastic gradient descent (SGD) with the diagonal variant of AdaGrad.
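The diagonal AdaGrad update scales each coordinate's step by the accumulated squared gradients for that coordinate. A minimal sketch on a toy squared-norm loss (the learning rate and loss are illustrative, not the paper's settings):

```python
import numpy as np

def adagrad_step(theta, grad, cache, lr=0.1, eps=1e-8):
    """Diagonal AdaGrad: per-coordinate learning rates scaled by the
    square root of the accumulated squared gradients."""
    cache += grad ** 2
    theta -= lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

# minimize the toy loss f(theta) = ||theta||^2
theta = np.array([1.0, -2.0])
cache = np.zeros_like(theta)
for _ in range(500):
    grad = 2 * theta                              # gradient of the squared norm
    theta, cache = adagrad_step(theta, grad, cache)
```

Coordinates with historically large gradients receive smaller effective steps, which is why AdaGrad is a common choice for sparse embedding gradients like those of the word and attribute vectors here.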
3.1. Data Preparation
Table 1. Data splitting and question types of the video question-answering dataset.
We construct the dataset for video question answering from the YouTube2Text data (Guadarrama et al., 2013) with natural language descriptions, which consists of 1,987 videos and 122,708 descriptions. Following the state-of-the-art question generation method, we generate question-answer pairs from the video descriptions. Following existing visual question answering approaches (Antol et al., 2015), we generate three types of questions, related to the what, who, and other queries about the video. We split the generated dataset into three parts: the training, validation, and testing sets. The three types of video question-answer pairs used for the experiments are summarized in Table 1. The dataset will be provided later.
We then preprocess the video question-answering dataset as follows. We first sample 40 frames from each video and resize each frame to 300×300. We extract the visual representation of each frame with the pretrained ResNet (He et al., 2016b), taking the 2,048-dimensional feature vector for each frame (Zhao et al., 2016). We employ the pretrained word2vec model to extract the semantic representation of the questions and answers (Zhao et al., 2015). Specifically, the size of the vocabulary set is 6,500 and the dimension of the word vectors is set to 256. For training the model on the open-ended video question answering task, we add a token eos to mark the end of the answer phrase and use the token Unk for out-of-vocabulary words.
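The frame sampling step can be sketched as follows, assuming uniform sampling over the clip (the paper does not state the sampling scheme, so uniform spacing is an assumption):

```python
import numpy as np

def sample_frames(total_frames, k=40):
    """Uniformly sample k frame indices from a clip of total_frames frames."""
    return np.linspace(0, total_frames - 1, num=k).round().astype(int)

idx = sample_frames(300)   # e.g. a 300-frame clip reduced to 40 frames
```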
3.2. Performance Comparisons
Table 2. Accuracy of the methods on the open-ended and multiple-choice VQA tasks, broken down by question type (What, Who, Other, and total accuracy).
We evaluate the performance of our proposed r-ANL method on both the multiple-choice and open-ended video question answering tasks using the evaluation criterion of Accuracy. Given a testing question $q$ and video $v$ with ground-truth answer $y$, we denote the answer predicted by our r-ANL method by $\hat{y}$. We then introduce the evaluation criterion of Accuracy below:

$$\mathrm{Accuracy} = \frac{1}{|\mathcal{T}|} \sum_{(q, v) \in \mathcal{T}} \mathbb{1}[\hat{y} = y],$$

where $\mathbb{1}[\hat{y} = y] = 1$ (best) means that the generated answer and the ground-truth one are exactly the same, while $0$ means the opposite. When we perform the multiple-choice video question answering task, we set the number of predicted answers to 1.
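The exact-match accuracy criterion above amounts to the following; the answer strings are illustrative:

```python
def accuracy(predictions, ground_truths):
    """Fraction of questions whose predicted answer exactly matches
    the ground-truth answer."""
    hits = sum(p == g for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

acc = accuracy(["dog", "cooking", "two"],
               ["dog", "running", "two"])   # 2 of 3 answers match
```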
We extend existing image question answering methods as baseline algorithms for the problem of video question answering.
The VQA+ method is an extension of the VQA algorithm (Antol et al., 2015): we add a mean-pooling layer that obtains a joint video representation from the ResNet-based frame features, and then compute the joint representation of the question embedding and the video representation by their element-wise multiplication for generating open-ended answers.
The SAN+ method is an incremental algorithm based on stacked attention networks (Yang et al., 2016): we add an LSTM network to fuse the sequential representations of the video frames for video question answering.
Unlike previous visual question answering works, our r-ANL method learns a question-oriented video representation with a multi-step reasoning process for the problem of video question answering. To study the effectiveness of the attribute-augmented mechanism in our attention network, we compare our method against a variant without attributes. To explore the effect of the reasoning process, we denote our r-ANL method with $r$ reasoning steps by r-ANL($r$). The input words of our method are initialized by pre-trained word embeddings of size 256, and the weights of the LSTMs are randomly initialized from a zero-mean Gaussian distribution.
Table 2 shows the overall experimental results of the methods on both the open-ended and multiple-choice video question answering tasks with different types of questions. The hyperparameters and parameters that achieve the best performance on the validation set are chosen for the testing evaluation. We report the average value of all the methods on the evaluation criteria. We give an example of the experimental results of our method in Figure 3.
In this paper, we study the problem of video question answering from the viewpoint of attribute-augmented attention network learning. We first propose the attribute-augmented method that learns the joint representation of visual frames and textual attributes. We then develop the attribute-augmented attention network to learn the question-oriented video representation for question answering. We next incorporate a multi-step reasoning process into the proposed attention network, which further improves the performance of the method. We construct a large-scale video question answering dataset and evaluate the effectiveness of our proposed method through extensive experiments.
Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grants No. 61572431 and No. 61602405. It is also supported by the Fundamental Research Funds for the Central Universities 2016QNA5015, the Zhejiang Natural Science Foundation under Grant LZ17F020001, and the China Knowledge Centre for Engineering Sciences and Technology.
- Antol et al. (2015) Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV. 2425–2433.
- Guadarrama et al. (2013) Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2013. Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In ICCV. 2712–2719.
- Gupta and Jain (1997) Amarnath Gupta and Ramesh Jain. 1997. Visual information retrieval. Commun. ACM 40, 5 (1997), 70–79.
- He et al. (2016b) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Deep residual learning for image recognition. In CVPR. 770–778.
- He et al. (2017a) Xiangnan He, Ming Gao, Min-Yen Kan, and Dingxian Wang. 2017a. BiRank: Towards Ranking on Bipartite Graphs. IEEE Trans. Knowl. Data Eng. (2017), 57–71.
- He et al. (2017b) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017b. Neural Collaborative Filtering. In WWW. 173–182.
- He et al. (2016a) Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. 2016a. Fast Matrix Factorization for Online Recommendation with Implicit Feedback. In SIGIR. 549–558.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
- Johnson et al. (2016) Justin Johnson, Andrej Karpathy, and Li Fei-Fei. 2016. DenseCap: Fully Convolutional Localization Networks for Dense Captioning. In CVPR.
- Luo et al. (2016) Changzhi Luo, Bingbing Ni, Shuicheng Yan, and Meng Wang. 2016. Image Classification by Selective Regularized Subspace Learning. IEEE Trans. Multimedia (2016), 40–50.
- Nie et al. (2013) Liqiang Nie, Meng Wang, Yue Gao, Zheng-Jun Zha, and Tat-Seng Chua. 2013. Beyond Text QA: Multimedia Answer Generation by Harvesting Web Information. IEEE Trans. Multimedia (2013), 426–441.
- Nie et al. (2011) Liqiang Nie, Meng Wang, Zheng-Jun Zha, Guangda Li, and Tat-Seng Chua. 2011. Multimedia answering: enriching text QA with media information. In SIGIR. 695–704.
- Nie et al. (2012) Liqiang Nie, Shuicheng Yan, Meng Wang, Richang Hong, and Tat-Seng Chua. 2012. Harvesting visual concepts for image search with complex queries. In ACM MM. 59–68.
- Wang et al. (2012a) Meng Wang, Richang Hong, Guangda Li, Zheng-Jun Zha, Shuicheng Yan, and Tat-Seng Chua. 2012a. Event Driven Web Video Summarization by Tag Localization and Key-Shot Identification. IEEE Trans. Multimedia (2012), 975–985.
- Wang et al. (2012b) Meng Wang, Hao Li, Dacheng Tao, Ke Lu, and Xindong Wu. 2012b. Multimodal Graph-Based Reranking for Web Image Search. IEEE Trans. Image Processing (2012), 4649–4661.
- Wang et al. (2015) Meng Wang, Xueliang Liu, and Xindong Wu. 2015. Visual Classification by ℓ1-Hypergraph Modeling. IEEE Trans. Knowl. Data Eng. (2015), 2564–2574.
- Yang et al. (2016) Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR. 21–29.
- Zhang et al. (2016) Hanwang Zhang, Xindi Shang, Huanbo Luan, Meng Wang, and Tat-Seng Chua. 2016. Learning from collective intelligence: Feature learning using social images and tags. IEEE Trans. Multimedia 13 (2016).
- Zhang et al. (2013) Hanwang Zhang, Zheng-Jun Zha, Yang Yang, Shuicheng Yan, Yue Gao, and Tat-Seng Chua. 2013. Attribute-augmented semantic hierarchy: towards bridging semantic gap and intention gap in image retrieval. In ACM MM. 33–42.
- Zhao et al. (2016) Zhou Zhao, Xiaofei He, Deng Cai, Lijun Zhang, Wilfred Ng, and Yueting Zhuang. 2016. Graph Regularized Feature Selection with Data Reconstruction. IEEE Trans. Knowl. Data Eng. (2016), 689–700.
- Zhao et al. (2017) Zhou Zhao, Hanqing Lu, Vincent W. Zheng, Deng Cai, Xiaofei He, and Yueting Zhuang. 2017. Community-Based Question Answering via Asymmetric Multi-Faceted Ranking Network Learning. In AAAI. 3532–3539.
- Zhao et al. (2016) Zhou Zhao, Qifan Yang, Deng Cai, Xiaofei He, and Yueting Zhuang. 2016. Expert Finding for Community-Based Question Answering via Ranking Metric Network Learning. In IJCAI. 3000–3006.
- Zhao et al. (2015) Zhou Zhao, Lijun Zhang, Xiaofei He, and Wilfred Ng. 2015. Expert Finding for Question Answering via Graph Regularized Matrix Completion. IEEE Trans. Knowl. Data Eng. (2015), 993–1004.