Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. Its applications range from multimedia search engines to personal assistants. Deep neural networks have been successfully applied to various QA tasks including textQA [1, 2, 3, 4], imageQA [5, 6, 7, 8, 9], and videoQA [10, 11, 12, 13] with significant performance improvement. Recently, research on multi-modal videoQA [14, 15, 16, 17, 18, 19] has also benefited from deep neural networks. One of the main challenges of multi-modal video question answering is that the existing benchmark datasets (e.g. MovieQA, PororoQA, and TVQA) are relatively small to provide sufficient supervision for training QA models on this complex task. This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. By solving related auxiliary tasks simultaneously with shared intermediate layers, the model is provided with extra synergistic learning signals and leverages information from the auxiliary tasks to boost question-answering performance.
Constructing a large-scale dataset for video question answering is difficult. The questions should be diverse to prevent overfitting to certain question types. The correct answer must correspond to the question, while the other candidate answers should be confusing enough, but not so misleading that they can be easily excluded. The QA pairs generated by human annotators often have a bias. For example, simply choosing the longest candidate answer gives an accuracy of 25.33% and 30.41% for MovieQA and TVQA, respectively, where the random baseline is 20%. Several multi-modal videoQA datasets have been introduced, including MovieQA, PororoQA, and TVQA. However, these benchmark datasets have a relatively small number of QA pairs considering the complexity of the task: MovieQA consists of 6,462 QA pairs for the video+subtitles task, PororoQA consists of 8,913 QA pairs, and TVQA consists of 152.5K QA pairs. By comparison, among imageQA benchmark datasets, VQA v1.0 consists of 614.2K QA pairs and VQA v2.0 consists of 1.1M QA pairs.
Multi-Task Learning (MTL) is a learning paradigm in machine learning which jointly solves multiple tasks in a single model. MTL aims to leverage useful information contained in multiple related tasks to gain positive synergies across all the tasks. For example, tasks like temporal localization and visual-semantic alignment have been found useful for each other when trained jointly. To solve multi-modal videoQA, this paper proceeds by analogy to human intelligence. Humans would first have to know the proper association between vision and language. On top of that, humans would attempt to localize the moment which is relevant to answering the question. Finally, humans could learn how to answer the questions. We formulate this as a multi-task learning problem and design two auxiliary tasks accordingly.
This paper proposes a method to gain extra supervision via multi-task learning for multi-modal videoQA. Solving auxiliary tasks simultaneously with the QA task can provide synergistic learning signals. On top of the QA network based on Lei et al., we introduce two auxiliary tasks that hierarchically share parameters with the QA network as depicted in Fig. 1. One auxiliary task is modality alignment, which aims at correctly associating video and subtitle features. It shares parameters with the lower layers of the QA network. The other task is temporal localization, which aims at finding the moment in the video clip that is most relevant to answering the current question. It shares parameters with the higher layers of the QA network. In order to control the timing and strength of the objective of each task, multi-task ratio scheduling is proposed. Motivated by curriculum learning, multi-task ratio scheduling attempts to learn easier tasks earlier to set an inductive bias at the beginning of training. The main contributions of this paper are summarized as follows. (1) A multi-task learning method for multi-modal videoQA is proposed, which achieves state-of-the-art performance on the TVQA benchmark. (2) Multi-task ratio scheduling is proposed to efficiently reflect the objective of each task during training.
II. Related Work
In this section, we introduce the work related to our paper in four categories: multi-task learning, modality alignment, temporal localization, and multi-modal video question answering.
II-A. Multi-Task Learning
Multi-task learning aims at jointly solving multiple related tasks with a single model. By sharing parameters across related tasks, the model can generalize better on the original task. Most multi-task learning methods share the hidden layers across all tasks and have task-specific output layers for each task. Starting from the work of Caruana, there has been rich research on multi-task learning across the majority of machine learning applications, from computer vision (CV) to natural language processing (NLP). Kim et al. proposed Deep Partial Person Re-identification (DPPR), which jointly learns person classification and person re-identification for partial person re-identification. Object detection architectures such as Fast R-CNN and Faster R-CNN use a multi-task loss for bounding box regression and object classification. Xiao et al. tackled the task of person search by jointly learning pedestrian detection and person re-identification. Recently, Li et al. proposed the invertible Question Answering Network (iQAN) to leverage the complementary relations between questions and answers in the image by jointly learning the Visual Question Answering (VQA) and Visual Question Generation (VQG) tasks.
II-B. Modality Alignment
As auxiliary tasks of our proposed method, we jointly learn modality alignment and temporal localization along with multi-modal video question answering. Both tasks have been extensively studied in the field of deep learning. Karpathy et al. proposed a method that captures the inter-modal correspondences between vision and language to generate a natural language description of a given image. The latent alignment between segments of the sentence and regions of the image is learned with a structured max-margin loss. Castrejón et al. proposed a method that learns cross-modal scene representations that transfer across modalities. By regularizing cross-modal CNNs to have a shared representation, the resulting representation is agnostic of the modality. Yu et al. proposed the Joint Sequence Fusion (JSFusion) model that can measure the semantic similarity between any pair of multimodal sequence data. A hierarchical attention mechanism is utilized to learn matching representation patterns among the modalities.
II-C. Temporal Localization
Temporal localization aims at localizing temporal parts of a given video. Hendricks et al. proposed the Moment Context Network (MCN) for temporal localization with a natural language query. The MCN effectively localizes temporal parts related to the query by integrating local and global video features over time. Gao et al. proposed a multi-task learning approach for temporal localization with a natural language query, in which location regression and visual-semantic alignment are jointly learned. The Temporal Unit Regression Network (TURN) jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression. A long untrimmed video is decomposed into video clips, which are reused as basic building blocks of temporal proposals for fast computation.
II-D. Multi-Modal Video Question Answering
Recently, research on multi-modal videoQA [14, 15, 16, 17, 18, 19] has leveraged an additional text modality, such as subtitles, along with the video modality for the joint understanding of vision and language. There are various benchmark datasets for multi-modal videoQA, including MovieQA, PororoQA and TVQA. Multi-modal videoQA is a challenging task partly because of the relatively small size of its benchmark datasets. The majority of multi-modal videoQA methods are motivated by memory-augmented architectures. Tapaswi et al. utilized the memory network (MemN2N) to store video clips into memory and retrieve the information required for answering questions. The Read-Write Memory Network (RWMN) replaces the fully-connected layers of the memory network with convolutional layers to capture local information in each memory slot. After the video and subtitle features are fused using a bilinear operation, convolutional write/read networks store/retrieve information, respectively. Focal Visual-Text Attention (FVTA) applies a hierarchical attention mechanism on a three-dimensional tensor of question, video and text to dynamically determine which modality and which time step to attend to for question answering. Multimodal Dual Attention Memory (MDAM) applies a multi-head attention mechanism to learn the latent representation of the multi-modal inputs.
III. The Proposed Method
In this section, we describe the proposed method and training procedure in detail. Fig. 2 gives an overview of the overall architecture, which fully utilizes the multi-modal inputs (video and subtitle) and QA pairs to answer the question. The proposed method is composed of three networks which hierarchically share intermediate layers: the question-answering (QA) network, the modality alignment network and the temporal localization network. Note that we utilize the QA network proposed by Lei et al.
III-A. Problem Formulation
The formal definition of multi-modal video question answering is as follows. The inputs to the model are (1) a video clip $v$, (2) a subtitle $s$ corresponding to the video clip, (3) a question $q$ and (4) five candidate answers $\{a_1, \dots, a_5\}$ where only one is correct. The task is to predict the correct answer for the question, and the objective of training is to learn the model parameters that maximize the following log-likelihood:

$$\theta^{*} = \operatorname*{arg\,max}_{\theta} \sum_{(v, s, q, y) \in \mathcal{D}} \log p(y \mid v, s, q; \theta),$$

where $\mathcal{D}$ denotes the dataset, $\theta$ represents the model parameters and $y$ denotes the correct answer out of the five candidates.
III-B. Feature Extraction
Before describing the proposed method, we first introduce the feature extraction method. For a fair comparison, we followed the same video and text feature extraction procedure used in previous work and fixed them during training.
III-B1. Video Features
We extracted two types of video features: the ImageNet feature and the visual concept feature. First, frames are extracted from each video clip $v$ at a rate of 3 fps.

ImageNet Feature: For each frame, a 2048-D feature vector is extracted from the "Average Pooling" layer of a ResNet-101 trained on the ImageNet benchmark. The frame features corresponding to the same video clip are first L2-normalized and then stacked, forming the ImageNet feature $V^{img} \in \mathbb{R}^{n_f \times 2048}$, where $n_f$ represents the number of frames extracted from the video clip.
Visual Concept Feature: Inspired by recent works [39, 15] that use detected object labels as visual inputs instead of using CNN features directly, we also extracted the detected labels, which are referred to as the visual concept feature. A Faster R-CNN trained on the Visual Genome benchmark is utilized to detect objects in each frame. After collecting every detected concept for each video clip over all of the frames and eliminating the overlapping concepts, we utilize GloVe to embed each concept into a feature representation. The resulting visual concept feature is represented as $V^{cpt} \in \mathbb{R}^{n_c \times 300}$, where $n_c$ denotes the number of concepts in the video clip.
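The concept-collection step above can be sketched as follows. This is an illustrative sketch, not the authors' code: `embed` is a hypothetical word-to-vector lookup standing in for a real GloVe table, and the label deduplication mirrors the elimination of overlapping concepts described above.

```python
import numpy as np

def visual_concept_feature(detections_per_frame, embed):
    # detections_per_frame: list (one entry per frame) of detected object labels
    # embed: word -> vector lookup standing in for GloVe (hypothetical)
    concepts = []
    for labels in detections_per_frame:
        for label in labels:
            if label not in concepts:      # eliminate overlapping concepts
                concepts.append(label)
    # stack the embeddings of the unique concepts: shape (n_c, embedding_dim)
    return np.stack([embed[c] for c in concepts])
```

In practice the lookup would be the pre-trained GloVe table, so each row of the result is a 300-D concept embedding.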
III-B2. Text Features
We use GloVe to embed words into feature representations. The sentences in each subtitle are flattened and tokenized into a sequence of words, and GloVe embeds this sequence into the subtitle feature $S \in \mathbb{R}^{n_s \times 300}$, where $n_s$ represents the number of words in the subtitle. The question feature $Q \in \mathbb{R}^{n_q \times 300}$ and candidate answer features $A_i \in \mathbb{R}^{n_{a_i} \times 300}$ are embedded similarly, where $n_q$ and $n_{a_i}$ are the numbers of words in the question and in candidate answer $a_i$, respectively.
III-C. Question-Answering Network
Now, we describe the QA network proposed by Lei et al. First, a bi-directional LSTM (bi-LSTM) is used to encode both the video and text features into the embedding space. The bi-LSTM consists of two LSTM layers: a forward LSTM and a backward LSTM. For an input sequence $(x_1, \dots, x_n)$, the forward LSTM encodes the sequence in forward order (from $x_1$ to $x_n$) into hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_n)$. The backward LSTM encodes the sequence in backward order (from $x_n$ to $x_1$) and generates hidden states $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_n)$. The hidden states from both directions at each time step are stacked to obtain the resulting hidden representation, i.e. $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$, where $[\cdot\,;\cdot]$ represents concatenation. The subtitle, question, candidate answer and visual concept features are encoded by the bi-LSTM and denoted as $S^{h}$, $Q^{h}$, $A_i^{h}$ and $V^{cpt,h}$, respectively; each hidden state has size $2d$, where $d$ is the size of the hidden state of a single LSTM direction. Similarly, the ImageNet feature $V^{img}$ is first fed into a fully-connected layer with a nonlinear activation function to project it into the word space, and then encoded by the bi-LSTM, producing $V^{img,h}$.
The context-query attention layer [42, 43] is utilized to jointly model the encoded context (e.g. video, subtitle) and query (e.g. question, candidate answers). It takes a set of context vectors $C = (c_1, \dots, c_n)$ and a set of query vectors $Q = (q_1, \dots, q_m)$ as inputs, and constructs the context-to-query attention matrix $A$. The context-to-query attention is generated as follows. First, the similarities between each pair of context and query vectors are computed, producing a similarity matrix $S \in \mathbb{R}^{n \times m}$, where $S_{ij}$ represents the similarity between the $i$-th context word $c_i$ and the $j$-th query word $q_j$. Instead of the original trilinear function, the dot product is utilized to calculate similarity, i.e. $S_{ij} = c_i \cdot q_j$. Then, each row of the similarity matrix is normalized by applying the softmax function, producing a matrix $\bar{S}$. Finally, the context-to-query attention, which contains the attended query vectors for the entire context, is computed as $A = \bar{S} Q$. The context-to-query attention signifies which word in the query is most relevant to each word in the context.
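The dot-product context-to-query attention described above can be sketched as follows (NumPy is used for illustration; the function name is ours):

```python
import numpy as np

def context_to_query_attention(C, Q):
    """Dot-product context-to-query attention.

    C: (n, d) encoded context vectors (e.g. subtitle words).
    Q: (m, d) encoded query vectors (e.g. question words).
    Returns A: (n, d), the attended query vector for each context word.
    """
    S = C @ Q.T                                # similarity matrix, (n, m)
    S = S - S.max(axis=1, keepdims=True)       # stabilize the softmax numerically
    S_bar = np.exp(S)
    S_bar /= S_bar.sum(axis=1, keepdims=True)  # row-wise softmax
    return S_bar @ Q                           # attended query vectors
```

Each output row is a convex combination of the query vectors, weighted by how similar each query word is to that context word.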
Consider the upper stream of Fig. 2, where the visual concept feature is used as the context for the context-query attention layer. The question and each candidate answer $a_i$ are considered as queries to generate the context-to-query attentions $A^{q}$ and $A^{a_i}$, respectively. The context-to-query attentions are then fused with the context $C$ as follows:

$$F_i = [\,C;\; A^{q};\; A^{a_i};\; C \odot A^{q};\; C \odot A^{a_i}\,],$$

where $\odot$ denotes element-wise multiplication and $[\cdot\,;\cdot]$ denotes concatenation. Finally, the fused feature vector $F_i$ is again fed into a bi-LSTM and max-pooled along time to get the final feature vector $f_i$ for each candidate answer $a_i$. The prediction score is obtained by applying a linear fully-connected layer on the set of final feature vectors $\{f_1, \dots, f_5\}$. The prediction score for the bottom stream is computed similarly by utilizing the subtitle as the context for the context-query attention layer. The prediction scores of the two streams are summed to get the final score, and the softmax function is applied to produce the answer probability $p \in \mathbb{R}^{5}$. The cross-entropy loss is used to train the QA network:

$$\mathcal{L}_{qa} = -\log p_y,$$

where $y$ is the index of the correct answer.
III-D. Modality Alignment Network
The modality alignment network regards the pairing of video and subtitle as supervision and attempts to predict the correct alignment between the two modalities. It shares parameters with the lower layers of the QA network. After the video and subtitles are embedded into the common space, forming $f_v$ and $f_s$, we denote $(f_v^{(i)}, f_s^{(i)})$ as a positive pair and $(f_v^{(i)}, f_s^{(j)})$ as a negative pair of encoded video-subtitle features, where $i$ represents the index of an element in the mini-batch and $j$ represents the index of any other element in the mini-batch ($j \neq i$). Each training mini-batch, composed of $B$ video-subtitle pairs in total, thus provides a single positive pair and $B - 1$ negative samples for each element.

The objective of the modality alignment network is to make the features of a positive pair closer in the embedding space, and those of negative pairs farther apart. Intuitively, a video-subtitle pair should have a high matching score if its words have confident support in the video. We formulate this as a metric learning problem. Motivated by Hoffer et al., we utilize a max-margin loss to pull the features of positive pairs together and push the features of negative pairs apart. The distance of a positive pair is defined as $d^{+}_{i} = \| f_v^{(i)} - f_s^{(i)} \|_2$ and the distance of a negative pair as $d^{-}_{ij} = \| f_v^{(i)} - f_s^{(j)} \|_2$, where $\| \cdot \|_2$ denotes the $\ell_2$-norm of a vector. The modality alignment loss constrains the positive distance to be smaller than the negative distance by the margin $\alpha$:

$$\mathcal{L}_{ma} = \sum_{i} \sum_{j \neq i} \max\big(0,\; d^{+}_{i} - d^{-}_{ij} + \alpha\big).$$
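A minimal sketch of this max-margin alignment objective, under the mini-batch convention above (row $i$ of each matrix is the positive pair, rows $j \neq i$ are negatives). The averaging over hinge terms and the margin value are illustrative choices, not specified by the paper.

```python
import numpy as np

def modality_alignment_loss(f_video, f_subs, margin=0.2):
    # f_video, f_subs: (B, d) encoded video / subtitle features for a mini-batch
    B = f_video.shape[0]
    total, count = 0.0, 0
    for i in range(B):
        d_pos = np.linalg.norm(f_video[i] - f_subs[i])       # positive distance
        for j in range(B):
            if j == i:
                continue
            d_neg = np.linalg.norm(f_video[i] - f_subs[j])   # negative distance
            total += max(0.0, d_pos - d_neg + margin)        # hinge with margin
            count += 1
    return total / count
```

When every positive pair is already closer than its negatives by more than the margin, the hinge terms vanish and the loss is zero.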
III-E. Temporal Localization Network
The temporal localization network localizes the temporal part relevant to the question. The moment in the clip from which the question was generated is regarded as supervision. The network shares parameters with the higher layers of the QA network. We formulate the objective of the temporal localization network as a regression problem. For each stream, the final feature vectors are concatenated and used to regress the start point $t_s$ and end point $t_e$ of the ground-truth moment of question generation. We normalize $(t_s, t_e)$ by the length of the video clip so that they take values between 0 and 1. The loss function for the temporal localization network contains two terms:

$$\mathcal{L}_{tl} = \mathcal{L}_{reg} + \mathcal{L}_{ov}.$$

The first term is a straightforward regression loss, which is the mean squared error between the ground truth $(t_s, t_e)$ and the prediction $(\hat{t}_s, \hat{t}_e)$. The second term is referred to as the overlap loss, which considers the overlap between the ground truth and the prediction. The two terms are formulated as follows:

$$\mathcal{L}_{reg} = (t_s - \hat{t}_s)^2 + (t_e - \hat{t}_e)^2, \qquad \mathcal{L}_{ov} = 1 - \frac{L_{ov}}{(t_e - t_s) + (\hat{t}_e - \hat{t}_s) - L_{ov}},$$

where $L_{ov}$ represents the length of the overlap between $(t_s, t_e)$ and $(\hat{t}_s, \hat{t}_e)$, which can be formulated as $L_{ov} = \max\big(0,\; \min(t_e, \hat{t}_e) - \max(t_s, \hat{t}_s)\big)$.
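The localization loss can be sketched as below. The overlap length follows the clamped-interval formula described above; the exact form of the overlap penalty is our assumption (a 1 − IoU-style term), so treat it as illustrative rather than the paper's definition.

```python
def temporal_localization_loss(pred, gt):
    # pred, gt: (start, end) pairs, normalized to [0, 1] by the clip length
    ps, pe = pred
    gs, ge = gt
    # regression term: squared error on the two endpoints
    l_reg = (ps - gs) ** 2 + (pe - ge) ** 2
    # overlap length between the two intervals, clamped at zero
    overlap = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - overlap
    # overlap penalty: our assumption of a 1 - IoU style term
    l_ov = 1.0 - overlap / union if union > 0 else 1.0
    return l_reg + l_ov
```

A perfect prediction incurs zero loss; disjoint intervals incur the full overlap penalty plus the regression error.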
III-F. Multi-Task Ratio Scheduling
The entire network is trained by simultaneously optimizing the three aforementioned loss functions: one each for the QA network, the modality alignment network and the temporal localization network. The total loss to be minimized during training is the weighted sum of the three losses:

$$\mathcal{L} = \lambda_{qa} \mathcal{L}_{qa} + \lambda_{ma} \mathcal{L}_{ma} + \lambda_{tl} \mathcal{L}_{tl},$$

where $\lambda_{qa}$, $\lambda_{ma}$ and $\lambda_{tl}$ are the weights for each loss function. In order to control the timing and strength of the objective of each task, multi-task ratio scheduling is proposed to schedule the weights of the loss functions. Motivated by curriculum learning, simple tasks are emphasized at the early stage of training and complex tasks are emphasized later. Among the designed tasks, modality alignment and temporal localization are easier than question answering. Therefore, the weight for modality alignment is initially set higher than the other weights to facilitate solving the modality alignment task. Then, the weight for temporal localization is set higher, and finally the weight for question answering is set to the highest to solve the multi-modal video question-answering task.
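A hypothetical ratio schedule illustrating this idea; the phase boundaries and weight values below are our assumptions, not the paper's.

```python
def loss_weights(epoch, total_epochs):
    """Piecewise schedule: emphasize modality alignment (ma) first,
    then temporal localization (tl), finally question answering (qa)."""
    t = epoch / max(1, total_epochs - 1)   # training progress in [0, 1]
    if t < 1 / 3:
        return {"qa": 0.2, "ma": 0.6, "tl": 0.2}   # phase 1: alignment-heavy
    if t < 2 / 3:
        return {"qa": 0.3, "ma": 0.2, "tl": 0.5}   # phase 2: localization-heavy
    return {"qa": 0.7, "ma": 0.15, "tl": 0.15}     # phase 3: QA-heavy

def total_loss(losses, weights):
    # weighted sum of the per-task losses
    return sum(weights[k] * losses[k] for k in losses)
```

Any monotone interpolation between the phases (e.g. linear ramps) would serve the same purpose; the key property is that the QA weight grows as training progresses.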
IV. Experiments

This section provides the experimental details and results of our proposed method. First, the benchmark dataset used to train and evaluate the proposed model is introduced. Then, we describe the experimental details. Finally, we provide quantitative results with an ablation study.
IV-A. Dataset
The TVQA benchmark is a multi-modal video question answering dataset. It is collected from 6 long-running TV shows spanning 3 genres: (1) sitcoms: The Big Bang Theory, How I Met Your Mother, Friends; (2) medical dramas: Grey's Anatomy, House; and (3) crime drama: Castle. A total of 21,793 short clips of 60/90 seconds are segmented for TVQA, accompanied by corresponding subtitles and character names. The questions in the TVQA benchmark follow the compositional format "[What/How/Where/Why/...] ___ [when/before/after] ___?", where the second part localizes the relevant moment within the video clip and the first part asks a question about that moment. The overall number of multiple-choice question-answer pairs is 152.5K, where the train split contains 122,039 QA pairs, the validation split contains 15,252 QA pairs and the test split contains 7,623 QA pairs. Each QA pair has five candidate answers, of which only one is correct. The performance of each model is measured by multiple-choice question answering accuracy.
IV-B. Implementation Details
The proposed method was implemented using the PyTorch framework. All of the experiments in this paper were performed under CUDA acceleration with a single NVIDIA TITAN Xp GPU (12 GB of memory) and trained using the Adam optimizer with a learning rate of 0.0003 and a mini-batch size of 32. On average, it took about 12 hours for our proposed model to converge.
IV-C. Experimental Results
TABLE I: Experimental results on the TVQA benchmark (accuracy, %). "img", "reg" and "cpt" denote the ImageNet, regional and visual concept features, respectively.

| Methods | Video Feature | Valid Acc. | Test Acc. |
|---|---|---|---|
| TVQA S+Q | - | - | 63.14 |
| TVQA V+Q | img | - | 42.67 |
| TVQA V+Q | reg | - | 42.75 |
| TVQA V+Q | cpt | - | 43.38 |
| TVQA S+V+Q | img | - | 63.57 |
| TVQA S+V+Q | reg | - | 63.19 |
| TVQA S+V+Q | cpt | - | 65.46 |
| Ours S+Q | - | - | 64.63 |
| Ours V+Q | img | - | 42.79 |
| Ours V+Q | cpt | - | 44.42 |
| Ours S+V+Q | img | - | 64.53 |
| Ours S+V+Q | cpt | - | 67.05 |
The experimental results are summarized in Table I. We compared the performance of our proposed method with the results reported in the TVQA paper. The random baseline shows 20.00% test accuracy for the task of multiple-choice question answering with 5 candidate answers. The longest-answer baseline, which selects the longest answer for each question, achieves an accuracy of 30.41%, indicating that the correct answers tend to be longer than the wrong answers. Note that the validation accuracy of the TVQA methods is not reported in the original paper.
Our subtitle-only (ours S+Q) method achieves a test accuracy of 64.63%, which is 34.22% higher than the longest-answer baseline and 1.49% higher than the subtitle-only TVQA baseline (TVQA S+Q). Our video-only (ours V+Q) methods achieve 42.79% and 44.42% for the ImageNet and visual concept features, respectively. Compared to the TVQA baseline, our video-only methods obtain 0.12% and 1.04% performance boosts. For our uni-modal results (ours S+Q and ours V+Q), only the temporal localization loss is utilized as an auxiliary loss. Our full model (ours S+V+Q) with the ImageNet feature achieves 64.53%, which is 0.96% higher than TVQA S+V+Q with the ImageNet feature. Our full model (ours S+V+Q) with the visual concept feature achieves the state-of-the-art result on the TVQA dataset with 67.05%, a 1.59% higher accuracy than the runner-up model, TVQA S+V+Q with the visual concept feature. The experimental results verify that multi-task learning can bring an additional performance boost, especially when the task is complex. Notably, our S+Q model, which uses only the subtitle and question but not the video, achieves higher performance (64.63%) than the TVQA S+V+Q model with the ImageNet feature (63.57%). This demonstrates the substantial performance improvement brought by the extra supervision from multi-task learning.
TABLE II: Ablation study on the TVQA validation set (accuracy, %). MA and TL denote the modality alignment and temporal localization tasks; the last column is the accuracy difference from the full model (QA + MA + TL).

| Methods | Video Feature | Valid Acc. | Δ |
|---|---|---|---|
| QA + MA | img | 63.67 | -0.32 |
| QA + TL | img | 63.49 | -0.50 |
| QA + MA + TL | img | 63.99 | - |
| QA + MA | cpt | 65.79 | -0.43 |
| QA + TL | cpt | 65.64 | -0.58 |
| QA + MA + TL | cpt | 66.22 | - |
For the ablation study, we only report the validation accuracy, since the test accuracy can only be measured through an online evaluation server a limited number of times. Table II summarizes the results of the ablation study. Overall, the visual concept feature gives higher performance than the ImageNet feature, as reported in the TVQA paper. The first block of Table II shows the ablation study with the ImageNet feature and the second block shows the ablation study with the visual concept feature. Multi-task learning with temporal localization and modality alignment shows a meaningful increase in performance in both cases. The ablation study suggests that solving auxiliary tasks together with the main task of video question answering can improve performance. Modality alignment brings a higher performance gain than temporal localization. Although the amount of performance gain may differ depending on implementation details, it also depends on the choice and scheduling of the auxiliary tasks: the more relevant and helpful the auxiliary task is to the main task, the higher the performance gain.
V. Conclusion

In this paper, we proposed a method to gain extra supervision via multi-task learning for multi-modal video question answering. We argue that the existing benchmark datasets for multi-modal video question answering are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) a multi-modal video question answering network that answers the question based on both the video and subtitle features, (2) a temporal localization network that predicts the time in the video clip at which the question was generated, and (3) a modality alignment network that solves a metric learning problem to find the correct association between the video and subtitle modalities. Motivated by curriculum learning, multi-task ratio scheduling is proposed to learn easier tasks earlier and thereby set an inductive bias at the beginning of training. Experiments on the publicly available TVQA dataset show state-of-the-art results, and ablation studies are conducted to validate the contribution of each component.
-  J. Weston, S. Chopra, and A. Bordes, “Memory networks,” in International Conference on Learning Representations (ICLR), 2015.
-  S. Sukhbaatar, J. Weston, R. Fergus et al., “End-to-end memory networks,” in Advances in Neural Information Processing Systems (NIPS), 2015.
-  S. Min, V. Zhong, R. Socher, and C. Xiong, “Efficient and robust question answering from minimal context over documents,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
-  C. Xiong, V. Zhong, and R. Socher, “DCN+: Mixed objective and deep residual coattention for question answering,” in International Conference on Learning Representations (ICLR), 2018.
-  M. Malinowski, M. Rohrbach, and M. Fritz, "Ask your neurons: A neural-based approach to answering questions about images," in IEEE International Conference on Computer Vision (ICCV), 2015.
-  Z. Yang, X. He, J. Gao, and A. Smola, "Stacked attention networks for image question answering," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  H. Nam, J.-W. Ha, and J. Kim, “Dual attention networks for multimodal reasoning and matching,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  H. Ben-Younes, R. Cadène, N. Thome, and M. Cord, “Mutan: Multimodal tucker fusion for visual question answering,” in IEEE International Conference on Computer Vision (ICCV), 2017.
-  P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  Y. Jang, Y. Song, Y. Yu, Y. Kim, and G. Kim, “Tgif-qa: Toward spatio-temporal reasoning in visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  D. Xu, Z. Zhao, J. Xiao, F. Wu, H. Zhang, X. He, and Y. Zhuang, "Video question answering via gradually refined attention over appearance and motion," in ACM Multimedia, 2017.
-  J. Gao, R. Ge, K. Chen, and R. Nevatia, “Motion-appearance co-memory networks for video question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  Y. Yu, J. Kim, and G. Kim, “A joint sequence fusion model for video question answering and retrieval,” in European Conference on Computer Vision (ECCV), 2018.
-  M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler, “Movieqa: Understanding stories in movies through question-answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  J. Lei, L. Yu, M. Bansal, and T. L. Berg, “Tvqa: Localized, compositional video question answering,” in Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
-  S. Na, S. Lee, J. Kim, and G. Kim, “A read-write memory network for movie story understanding,” in IEEE International Conference on Computer Vision (ICCV), 2017.
-  J. Liang, L. Jiang, L. Cao, L.-J. Li, and A. Hauptmann, “Focal visual-text attention for visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  B. Wang, Y. Xu, Y. Han, and R. Hong, "Movie question answering: Remembering the textual cues for layered visual contents," in AAAI Conference on Artificial Intelligence, 2018.
-  K.-M. Kim, S.-H. Choi, and B.-T. Zhang, “Multimodal dual attention memory for video story question answering,” in European Conference on Computer Vision (ECCV), 2018.
-  K. Kim, M. Heo, S. Choi, and B. Zhang, “Deepstory: Video story QA by deep embedded memory networks,” in International Joint Conference on Artificial Intelligence, (IJCAI), 2017.
-  A. Agrawal, J. Lu, S. Antol, M. Mitchell, C. L. Zitnick, D. Parikh, and D. Batra, “Vqa: Visual question answering,” Int. J. Comput. Vision, vol. 123, no. 1, pp. 4–31, May 2017.
-  J. Gao, C. Sun, Z. Yang, and R. Nevatia, “Tall: Temporal activity localization via language query,” in IEEE International Conference on Computer Vision (ICCV), 2017.
-  Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in Proceedings of the 26th Annual International Conference on Machine Learning (ICML), 2009.
-  R. Caruana, “Multitask learning: A knowledge-based source of inductive bias,” in Proceedings of the Tenth International Conference on Machine Learning (ICML), 1993.
-  R. Girshick, "Fast r-cnn," in IEEE International Conference on Computer Vision (ICCV), 2015.
-  R. Collobert and J. Weston, “A unified architecture for natural language processing: Deep neural networks with multitask learning,” in Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
-  J. Kim and C. D. Yoo, "Deep partial person re-identification via attention model," in International Conference on Image Processing (ICIP), 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems (NIPS), 2015.
-  T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang, “Joint detection and identification feature learning for person search,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  Y. Li, N. Duan, B. Zhou, X. Chu, W. Ouyang, X. Wang, and M. Zhou, “Visual question generation as dual task of visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 4, pp. 664–676, 2017.
-  L. Castrejón, Y. Aytar, C. Vondrick, H. Pirsiavash, and A. Torralba, “Learning aligned cross-modal representations from weakly aligned data,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  Y. Yu, J. Kim, and G. Kim, “A joint sequence fusion model for video question answering and retrieval,” in European Conference on Computer Vision (ECCV), 2018.
-  L. A. Hendricks, O. Wang, E. Shechtman, J. Sivic, T. Darrell, and B. C. Russell, “Localizing moments in video with natural language,” in IEEE International Conference on Computer Vision (ICCV), 2017.
-  J. Gao, Z. Yang, C. Sun, K. Chen, and R. Nevatia, “TURN TAP: temporal unit regression network for temporal action proposals,” in IEEE International Conference on Computer Vision (ICCV), 2017.
-  A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems (NIPS), 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
-  X. Yin and V. Ordonez, "Obj2text: Generating visually descriptive language from object layouts," in Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
-  R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. S. Bernstein, and L. Fei-Fei, “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” International Journal of Computer Vision, vol. 123, no. 1, pp. 32–73, May 2017.
-  J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation.” in Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
-  M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, "Bidirectional attention flow for machine comprehension," in International Conference on Learning Representations (ICLR), 2017.
-  A. W. Yu, D. Dohan, Q. Le, T. Luong, R. Zhao, and K. Chen, “Fast and accurate reading comprehension by combining self-attention and convolution,” in International Conference on Learning Representations (ICLR), 2018.
-  E. Hoffer and N. Ailon, “Deep metric learning using triplet network,” in International Conference on Learning Representations Workshop Track (ICLR), 2015.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2014. [Online]. Available: http://arxiv.org/abs/1412.6980