Weakly-Supervised Video Object Grounding from Text by Loss Weighting and Object Interaction

05/08/2018, by Luowei Zhou et al., University of Michigan

We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned in the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newly-collected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.


1 Introduction

Grounding language in visual regions provides a fine-grained perspective towards visual recognition and has become a prominent research problem in the computer vision and natural language processing communities [Rohrbach et al.(2017)Rohrbach, Rohrbach, Tang, Oh, and Schiele, Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele, Xiao et al.(2017)Xiao, Sigal, and Lee, Huang et al.(2018)Huang, Buch, Dery, Garg, Fei-Fei, and Niebles]. In this paper, we study the problem of video object grounding, where a video (segment) and an associated sentence are given and the goal is to localize the objects that are mentioned in the sentence in the video. This task is often formulated as a visual-semantic alignment problem [Karpathy and Fei-Fei(2015)] and has broad applications including retrieval [Karpathy and Fei-Fei(2015), Karpathy et al.(2014)Karpathy, Joulin, and Fei-Fei], description generation [Yu and Siskind(2013), Rohrbach et al.(2017)Rohrbach, Rohrbach, Tang, Oh, and Schiele], and human-robot interaction [Al-Omari et al.(2017)Al-Omari, Duckworth, Hogg, and Cohn, Thomason et al.(2017)Thomason, Padmakumar, Sinapov, Hart, Stone, and Mooney].

Like most fine-grained recognition problems [Ren et al.(2017)Ren, He, Girshick, and Sun, Plummer et al.(2015)Plummer, Wang, Cervantes, Caicedo, Hockenmaier, and Lazebnik], grounding can be extremely data intensive, especially in the context of unconstrained video. On the other hand, video-sentence pairs are easier to obtain than object region annotations (e.g., YouTube Automatic Speech Recognition scripts). We focus on the weakly-supervised version of the grounding problem where the only supervision is sentence descriptions; no spatially-aligned object bounding boxes are available for training. Sentence grounding can involve multiple interacting objects, which sets our work apart from the relatively well-studied weakly-supervised object localization problem, where one or more objects are localized independently [Prest et al.(2012)Prest, Leistner, Civera, Schmid, and Ferrari, Kwak et al.(2015)Kwak, Cho, Laptev, Ponce, and Schmid].

Existing work on visual grounding falls into two categories: multiple instance learning [Karpathy and Fei-Fei(2015), Huang et al.(2018)Huang, Buch, Dery, Garg, Fei-Fei, and Niebles] and visual attention [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele]. In either case, the visual-semantic similarity is first measured between the target object/phrase and all the image-level, i.e., spatial, object region proposals. Then, either a ranking loss or a reconstruction loss, both of which we refer to here as matching losses, measures the quality of the matching. A naive extension of the existing approaches to the video domain is to treat the entire video segment as a bag of spatial object proposals. However, this presents two issues. First, existing methods rely on the assumption that the target object appears in at least one of the proposal regions. This assumption is weak when it comes to video, since a query object might appear sparsely across multiple frames (in YouCook2-BoundingBox, the target object appears in 60.7% of the total frames, on average) and might not be detected completely. The segment-level supervision, i.e., object labels, could potentially be strengthened if applied to individual frames. Second, a video segment can last up to several minutes. Even with temporal down-sampling, this can bring in tens or hundreds of frames and hence thousands of proposals, which compromises the visual-semantic alignment accuracy.

To address these two issues, we propose a frame-wise loss weighting framework for video grounding. We ground the target objects on a frame-by-frame basis. We face the challenge that the segment-level supervision is not applicable to individual frames where the query object is off-screen, occluded, or simply not present in the proposals for that frame. Our solution is to first estimate the likelihood that the query object is present in (a proposal in) each video frame. If the likelihood is high, we judge the matching quality mainly on the matching loss. Otherwise, we down-weight the matching loss while bringing in a penalty loss. The lower the confidence, the higher the penalty. With this conditioned frame-wise grounding framework, the proposed model avoids being flooded with massive numbers of proposals even when the sampling rate is high, and only makes predictions for applicable frames.

We propose two approaches to estimate frame-wise object likelihood (confidence) scores. The first one is conditioned on both visual and textual inputs, namely, the maximum visual-semantic similarity scores in each frame. The second approach is inspired by the fact that the combination of objects can imply their order of appearance in the video. For example, when a sequence of objects “tomatoes”, “pan” and “plate” appears in the description, the video scene is likely to include a shot of tomatoes being grilled in the pan at the beginning, and a shot of tomatoes being moved to the plate at the end. In the temporal domain, “pan” appears mostly ahead of “plate” while “tomatoes” intersects with both. We implicitly model the object interaction with self-attention [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin] and use textual guidance to estimate the frame-wise object likelihood.

For evaluation, due to the lack of existing video grounding benchmarks, we have collected annotations over the large-scale instructional video dataset YouCook2, which provides over 15,000 video segment-description pairs. We sample the validation and testing videos at 1 fps and draw bounding boxes for the 67 most frequent objects when they are present in both the video segment and the description. We compare our methods against competitive baselines on video grounding and our proposed methods achieve state-of-the-art performance.

Our contributions are twofold: 1) we propose a novel frame-wise loss weighting framework for the video object grounding problem that outperforms competitive baselines; 2) we provide a benchmark dataset for video grounding.

2 Related Work

Grounding in Image/Video. Supervised grounding or referring has been intensively studied [Plummer et al.(2015)Plummer, Wang, Cervantes, Caicedo, Hockenmaier, and Lazebnik, Plummer et al.(2017)Plummer, Kordas, Kiapour, Zheng, Piramuthu, and Lazebnik, Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal, and Berg] in the image domain. These methods require dense bounding box annotations for training, which are expensive to obtain. Recently, an increasing amount of attention has shifted towards the weakly-supervised grounding problem [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele, Xiao et al.(2017)Xiao, Sigal, and Lee, Karpathy et al.(2014)Karpathy, Joulin, and Fei-Fei, Karpathy and Fei-Fei(2015), Huang et al.(2018)Huang, Buch, Dery, Garg, Fei-Fei, and Niebles], where only descriptive phrases, and no explicit target grounding locations, are accessible during training. Karpathy and Fei-Fei [Karpathy and Fei-Fei(2015)] propose to pair image regions with words in a sentence by computing a visual-semantic similarity score, finding the word that best describes the region. Rohrbach et al. [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele] ground textual phrases in images by reconstructing the original phrase through visual attention. Yu and Siskind [Yu and Siskind(2017)] ground objects from text in constrained videos. Huang et al. [Huang et al.(2018)Huang, Buch, Dery, Garg, Fei-Fei, and Niebles] extend [Karpathy and Fei-Fei(2015)] to the video domain and further improve the work by modeling the reference relationships among segments. In this work, we tackle the problem from a novel aspect by fully exploiting the visual-semantic relations within each segment, i.e., frame-wise supervision and object interactions.

Weakly-supervised Object Localization. Weakly-supervised object localization has been explored in both the image [Cinbis et al.(2014)Cinbis, Verbeek, and Schmid, Divvala et al.(2014)Divvala, Farhadi, and Guestrin, Deselaers et al.(2012)Deselaers, Alexe, and Ferrari, Song et al.(2014)Song, Girshick, Jegelka, Mairal, Harchaoui, and Darrell, Oquab et al.(2015)Oquab, Bottou, Laptev, and Sivic] and the video domain [Prest et al.(2012)Prest, Leistner, Civera, Schmid, and Ferrari, Kwak et al.(2015)Kwak, Cho, Laptev, Ponce, and Schmid]. Unlike object grounding from text, object localization typically involves localizing an object class or a video tag in the visual content. Existing works in the image domain naturally pursue a multiple instance learning (MIL) approach to this problem. Positive instances are images where the label is present, and negative instances are given as images with the label absent. In the video domain, the existing methods [Prest et al.(2012)Prest, Leistner, Civera, Schmid, and Ferrari, Kwak et al.(2015)Kwak, Cho, Laptev, Ponce, and Schmid] approach this problem by taking advantage of motion information and similarity between frames to generate spatio-temporal tubes. Note that these tubes are much more expensive to obtain compared with spatial proposals, hence we only consider the latter option.

Object Interaction. Object interaction was initially proposed to capture fine-grained visual details for action detection, such as the temporal relationships between objects in a scene, in order to overcome changes in illumination, pose, occlusion, etc. Some works have modeled object interaction using pairwise or higher-order relationships [Ni et al.(2016)Ni, Yang, and Gao, Lea et al.(2016)Lea, Reiter, Vidal, and Hager, Ma et al.(2017)Ma, Kadav, Melvin, Kira, AlRegib, and Graf]. Ni et al. [Ni et al.(2016)Ni, Yang, and Gao] consolidate object detections at each step by modeling pairwise object relationships and hence enforce temporal object consistency at each additional step. Ma et al. [Ma et al.(2017)Ma, Kadav, Melvin, Kira, AlRegib, and Graf] implicitly model the higher-order interactions among object region proposals, using groups and subgroups rather than just pairwise interactions. Inspired by recent work [Xiao et al.(2017)Xiao, Sigal, and Lee, Cirik et al.(2018)Cirik, Berg-Kirkpatrick, and Morency], where the linguistic structure of the input phrase is leveraged to infer the spatial object locations, we propose to model object interaction from a linguistic perspective as textual guidance for grounding.

3 Methods

We start this section by introducing some background knowledge. In Sec. 3.2, we describe the video object grounding baseline. We then propose our framework in Sec. 3.3 by extending the segment-level object label supervision to the frame level. We propose two novel approaches for judging under what circumstances the frame-level supervision is applicable.

3.1 Background

In this section we provide some background on the visual-semantic alignment framework (Grounding by Ranking) and self-attention, which are the building blocks of our model.

Grounding by Ranking. We start by describing the ranking-based grounding approach from [Karpathy and Fei-Fei(2015)]. Given a sentence description including query objects/phrases and a set of object region proposals from an image, the goal is to match each referred object in the query to one of the object proposals. Queries and visual region proposals are first encoded in a common $d$-dimensional space. Denote the object query feature vectors as $q_i \in \mathbb{R}^d$, $i = 1, \ldots, O$, and the region proposal feature vectors as $r_j \in \mathbb{R}^d$, $j = 1, \ldots, N$. We pack the feature vectors into matrices $Q = (q_1, \ldots, q_O)$ and $R = (r_1, \ldots, r_N)$. The visual-semantic matching score of the description and the image is formulated as:

$S(Q, R) = \sum_{i=1}^{O} \max_{j} s_{ij}, \qquad s_{ij} = q_i^\top r_j$,   (1)

where $s_{ij}$ measures the similarity between query $q_i$ and proposal $r_j$. Defining negative samples $\bar{Q}$ and $\bar{R}$ as the query and proposal sets from texts and images that are not paired with $R$ nor $Q$, the Grounding by Ranking framework minimizes the following margin loss:

$L_{\text{rank}} = \max\big(0,\, S(Q, \bar{R}) - S(Q, R) + \Delta\big) + \max\big(0,\, S(\bar{Q}, R) - S(Q, R) + \Delta\big)$,   (2)

where the first ranking term encourages the correct region proposal matching, the second ranking term encourages the correct sentence matching, and $\Delta$ is the ranking margin. During inference, the proposal with the maximal similarity score with each object query is selected.
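For concreteness, the following PyTorch sketch implements the matching score and the two-way margin loss above; the function names, tensor layouts, and the use of plain dot products are our assumptions, not the authors' released code.

```python
# A minimal sketch of Grounding by Ranking (Eqs. 1-2), assuming queries and
# proposals are already encoded into a common d-dimensional space.
import torch
import torch.nn.functional as F

def matching_score(Q, R):
    """Eq. 1: Q is (num_queries, d), R is (num_proposals, d)."""
    s = Q @ R.t()                      # pairwise similarities s_ij = q_i . r_j
    return s.max(dim=1).values.sum()   # sum_i max_j s_ij

def ranking_loss(Q, R, Q_neg, R_neg, margin=0.1):
    """Eq. 2: two-way margin loss with negative query/proposal sets."""
    pos = matching_score(Q, R)
    return F.relu(matching_score(Q, R_neg) - pos + margin) \
         + F.relu(matching_score(Q_neg, R) - pos + margin)
```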

Self Attention. We now describe the scaled dot-product attention model. Define a set of queries $q_i \in \mathbb{R}^d$, and a set of keys $k_j \in \mathbb{R}^d$ and values $v_j \in \mathbb{R}^{d_v}$, where $i$ is the query index and $j$ is the key/value index. Given an arbitrary query $q_i$, scaled dot-product attention computes the output as a weighted sum of the values $v_j$, where the weights are determined by the scaled dot-products of the query $q_i$ and the keys $k_j$, as formulated below:

$A(q_i, K, V) = \sum_j \alpha_{ij} v_j, \qquad \alpha_{ij} = \operatorname{softmax}_j\!\left(\frac{q_i^\top k_j}{\sqrt{d}}\right)$,   (3)

where the keys and values are packed into matrices $K$ and $V$, respectively. Self-attention [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin] is a special case of scaled dot-product attention where the queries, keys and values are all identical. In our case, they are all object encoding vectors and self-attention encodes the semantic relationships among the objects. We adopt a multi-head version of the self-attention layer [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin, Zhou et al.(2018b)Zhou, Zhou, Corso, Socher, and Xiong] for modeling object relationships, which deploys multiple parallel self-attention layers.
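A small sketch of Eq. 3 may help; the per-query formulation below mirrors the weighted-sum description above, while the matrix form used in practice packs the keys and values into $K$ and $V$.

```python
# Scaled dot-product attention (Eq. 3); in the self-attention case the
# queries, keys and values are all the same set of object encodings.
import math
import torch

def scaled_dot_product_attention(queries, keys, values):
    """queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)."""
    d = queries.size(-1)
    weights = torch.softmax(queries @ keys.t() / math.sqrt(d), dim=-1)  # (n_q, n_k)
    return weights @ values                                             # (n_q, d_v)

# Self-attention over object encodings E of shape (num_objects, d):
#   attended = scaled_dot_product_attention(E, E, E)
```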

3.2 Video Object Grounding

We adapt the Grounding by Ranking framework [Karpathy and Fei-Fei(2015)] to the video domain, and this adaptation serves as our baseline. Denote the set of sampled frames in a video segment as $\{f_t\}$, $t = 1, \ldots, T$, and the object proposals in frame $f_t$ as $r_{tj}$, $j = 1, \ldots, N$. As before, define the object queries as $q_i$, $i = 1, \ldots, O$; we compute the similarity between each query object and all the proposals in the segment. Note that the similarity dot product might grow large in magnitude as $d$ increases [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin]. Hence, we scale the dot product by $\sqrt{d}$ and restrict the similarity to be between 0 and 1 with a Sigmoid function. The similarity function and segment-description matching score are then:

$s_{i,tj} = \operatorname{Sigmoid}\!\left(\frac{q_i^\top r_{tj}}{\sqrt{d}}\right), \qquad S(Q, R) = \sum_{i=1}^{O} \max_{t, j}\, s_{i,tj}$,   (4)

where the matrix $R$ packs all $T \times N$ proposal features of the segment.
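As a sketch, the segment-level score of Eq. 4 can be computed by stacking all proposal features of the segment into one matrix; the names and layouts below are ours.

```python
# Video-domain matching score (Eq. 4): sigmoid-bounded, sqrt(d)-scaled
# similarities over all T*N proposals of the segment.
import math
import torch

def segment_matching_score(Q, R_all):
    """Q: (num_objects, d) query features.
    R_all: (T * N, d) proposal features stacked over all sampled frames."""
    d = Q.size(-1)
    s = torch.sigmoid(Q @ R_all.t() / math.sqrt(d))  # similarities in (0, 1)
    return s.max(dim=1).values.sum()                 # best proposal per object, summed
```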

This “brute-force” extension of the Grounding by Ranking framework to the video domain presents two issues. First, depending on the video sampling rate, the total number of proposals per segment ($T \times N$) could be extremely large. Hence this solution does not scale well to long frame sequences. Second, an object existing sparsely across multiple frames might not be detected completely, since successfully spotting it in one single frame would trigger a satisfactory match. We explain next how we propagate this weak supervisory signal from the segment level to frames that likely contain the target object.

Figure 1: An overview of our framework. Inputs to the system are a video segment and a phrase that describes the segment. The objects from the phrase are grounded in each sampled frame. Object and proposal features are encoded to a common size $d$ and visual-semantic similarity scores are computed. The ranking loss is weighted by a confidence score which, combined with the penalty, forms the final loss. The object relations are further encoded to guide the loss weights (see Sec. 3.4 for details). During inference, the region proposal with the maximum similarity score with the object query is selected for grounding.

3.3 Frame-wise Loss Weighting

In our framework, each frame is considered separately to ground the same target objects. Fig. 1 shows an overview of our model. We first estimate the likelihood that the query object is present in each video frame. If the likelihood is high, we judge the matching quality mainly on the matching loss (e.g., ranking loss). Otherwise, we down-weight the matching loss while bringing in a penalty loss. The lower the confidence, the higher the penalty. For clarity, we explain our idea when the matching loss is the ranking loss $L_{\text{rank}}$, but note that this can be generalized to other loss functions.

Let the ranking loss for frame $t$ be $L_{\text{rank}}^{(t)}$ and the similarity score between query $q_i$ and proposal $r_{tj}$ be $s_{i,tj}$. Let $Q = (q_1, \ldots, q_O)$ and $R_t = (r_{t1}, \ldots, r_{tN})$. We define the confidence score of the prediction at frame $t$ as the (query-averaged) visual-semantic matching score:

$c_t = \frac{1}{O} S(Q, R_t)$,   (5)

where $S$ is defined in Eq. 1, computed with the bounded similarities of Eq. 4 so that $c_t \in (0, 1)$. The corresponding penalty is:

$p_t = -\log(c_t)$,   (6)

inspired by [Kendall et al.(2017)Kendall, Gal, and Cipolla]. The final loss for the segment is a weighted sum of frame-wise ranking losses and penalties:

$L_t = c_t\, L_{\text{rank}}^{(t)} + \beta\, p_t$,   (7)

$L = \frac{1}{T} \sum_{t=1}^{T} L_t$,   (8)

where $\beta$ is a static coefficient that balances the ranking loss and the penalty and can be validated on the validation set. A low $\beta$ might cause the system to be over-confident in its predictions.
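A minimal sketch of this weighting scheme, under our reading of Eqs. 5-8; the numerical clamp and the averaging over frames are our own choices.

```python
import torch

def weighted_segment_loss(frame_rank_losses, frame_confidences, beta=0.9):
    """frame_rank_losses: (T,) per-frame ranking losses L_rank^(t).
    frame_confidences: (T,) confidence scores c_t in (0, 1) from Eq. 5."""
    penalty = -torch.log(frame_confidences.clamp(min=1e-6))             # Eq. 6
    per_frame = frame_confidences * frame_rank_losses + beta * penalty  # Eq. 7
    return per_frame.mean()                                             # Eq. 8
```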

3.4 Object Interaction

We assume that the object types and their order in the language description can roughly determine when they appear in the video content, as motivated in Sec. 1. We show that this language prior can work as a frame-wise confidence score. To consider the interaction among objects, we further encode the object query features as:

$\tilde{Q} = \operatorname{MA}(Q, Q, Q)$,   (9)

where MA is the multi-head self-attention layer [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin], taking in the (query, key, value) triplet. It represents each query as a combination of all the other queries based on their inter-relations. The built-in positional encoding layer [Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin] in the multi-head attention captures the order in which the objects appear in the description. Note that the formulation is non-autoregressive, i.e., all the objects in the same description can interact with each other.

We evenly divide each video segment into $K$ snippets and predict the confidence score for object $i$ to appear in each snippet based upon the concatenation of $q_i$ and $\tilde{q}_i$. Note that $K$ is a pre-specified constant that satisfies $K \le T$. The language-based confidence score is formulated as:

$c^{\text{lang}}_{i} = \operatorname{Sigmoid}\big(W\, [q_i; \tilde{q}_i] + b\big) \in \mathbb{R}^{K}$,   (10)

where $[\cdot\,;\,\cdot]$ indicates the feature concatenation, and $W$ and $b$ are embedding weights and biases. We average the language-based and the similarity-based confidence scores and rewrite Eq. 7 as:

$L_t = \frac{1}{2}\Big(c_t + \frac{1}{O}\sum_{i=1}^{O} c^{\text{lang}}_{i,k}\Big)\, L_{\text{rank}}^{(t)} + \beta\, p_t, \qquad k = \lceil tK/T \rceil$,   (11)

where $k$ is the snippet index and $\lceil\cdot\rceil$ stands for the ceiling operator.
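The sketch below shows one way the language-based branch (Eqs. 9-10) could be wired up; the module structure, head count, and omission of the positional encoding are our simplifications (the implementation described in the Appendix uses a 2-layer, 6-head attention module).

```python
import torch
import torch.nn as nn

class LanguageConfidence(nn.Module):
    """Objects attend to each other (Eq. 9); the fused representation is
    mapped to K per-snippet confidence scores (Eq. 10)."""
    def __init__(self, d=128, num_heads=4, num_snippets=5):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.to_snippets = nn.Linear(2 * d, num_snippets)

    def forward(self, Q):
        """Q: (batch, num_objects, d) encoded object queries."""
        Q_tilde, _ = self.self_attn(Q, Q, Q)            # Eq. 9
        fused = torch.cat([Q, Q_tilde], dim=-1)         # [q_i ; q~_i]
        return torch.sigmoid(self.to_snippets(fused))   # (batch, num_objects, K)
```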

4 Experiments

4.1 Dataset

YouCook2-BoundingBox. YouCook2 [Zhou et al.(2018a)Zhou, Xu, and Corso] consists of 2000 YouTube cooking videos from 89 recipes. Each video has its recipe steps temporally annotated (i.e., start and end timestamps) and each segment is described by a natural language sentence. The average segment duration is 19.6s. Our training set is the same as the YouCook2 training split; only paired sentences are provided. For each segment-description pair in the validation and testing sets, however, we provide bounding box annotations for the most frequently appearing objects from the dataset, i.e., the top 63 recurring objects along with four referring expressions: it, them, that, they (see Fig. 2). These are used only during evaluation.

Figure 2: Frequency count of each class label (including referring expressions).

From YouCook2, we split each recipe step into a separate segment and sample it at 1 fps. We use Amazon Mechanical Turk workers to draw bounding boxes around the objects in the video segment using the highlighted words in the sentence (from the 67 objects in our vocabulary). All annotations are further verified by the top 30 annotators. Please see the Appendix for more details on annotation and quality control.

4.2 Baselines and Metrics

Baselines. We include two competitive baselines from published work: DVSA [Karpathy and Fei-Fei(2015)] and GroundeR [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele]. DVSA is the Grounding by Ranking method upon which we build all our methods. For fair comparison, all the approaches take in the same object proposals generated by Faster-RCNN [Ren et al.(2017)Ren, He, Girshick, and Sun] (pre-trained on MSCOCO). Following the convention from [Karpathy and Fei-Fei(2015), Huang et al.(2018)Huang, Buch, Dery, Garg, Fei-Fei, and Niebles], we select the top 20 proposals per frame and sample 5 frames per segment unless otherwise specified. We also evaluate Baseline Random, which chooses a random proposal as the output.

Metrics. We evaluate the grounding quality by bounding box localization accuracy (denoted as Box Accuracy). An output is counted as positive if the proposed box has over 50% IoU with the ground-truth annotation, and negative otherwise. We compute accuracy for each object and average across all the object types.
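For reference, a simple implementation of this metric for axis-aligned boxes (x1, y1, x2, y2) might look as follows; it assumes one predicted box per annotated object.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def box_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of predictions whose IoU with the ground truth exceeds 0.5."""
    hits = [iou(p, g) > threshold for p, g in zip(predictions, ground_truths)]
    return sum(hits) / max(len(hits), 1)
```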

4.3 Implementation Details

The number of snippets $K$ in Sec. 3.4 is set to 5. The encoding size $d$ is 128 for all the methods. Object labels are represented as one-hot vectors, which are encoded by a linear layer without the bias term. The loss factor $\beta$ is cross-validated on the validation set and is set to 0.9. The ranking margin $\Delta$ is set to 0.1. For training, we use stochastic gradient descent (SGD) with Nesterov momentum. The learning rate is set to 0.05 and the momentum to 0.9. We implement the model in PyTorch and train it using either a single Titan Xp GPU with SGD or 4 GPUs with synchronous SGD, depending on the validation accuracy. The model typically takes 30 epochs, i.e., about 4 hours, to converge. More details are in the Appendix.
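For reference, the optimizer configuration above corresponds to the following PyTorch setup; the model here is only a placeholder.

```python
import torch
import torch.nn as nn

model = nn.Linear(2048, 128)  # placeholder for the grounding network
optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                            momentum=0.9, nesterov=True)
```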

4.4 Results on Object Grounding

The quantitative results on object grounding are shown in Tab. 1. The model with the highest score on the validation set is evaluated on the test split. We compute the upper bound as the accuracy when proposing all 20 proposals, to see how far the methods are from the performance limit. Note that the upper bound reported here is lower than that in [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele]. This is largely due to the domain shift from general scenes to cooking scenes and the large variance in our object states, e.g., zoom-in and zoom-out views, onions vs. fried onion rings.

We show results on our proposed models, where the “Loss Weighting” model computes the confidence score with visual-semantic matching and the “Object Interaction” model computes the confidence score with textual guidance (Sec. 3.4). Our full model averages these two scores as the final confidence score (Eq. 11). The proposed methods demonstrate a steady improvement over the DVSA baseline, with a relative 1.40% boost from loss weighting and another 1.62% from adding object interaction, for a total relative improvement of 3.02%. On the other hand, the baseline has a higher validation score, which indicates model overfitting. Note that text guidance alone (“Object Interaction”) works slightly worse than the baseline, showing that both visual and textual information are critical for inferring the frame-wise loss weights. Our methods also outperform the other compared methods, GroundeR and Baseline Random, by a large margin.

Method Box Accuracy (%)
Val. Test
Compared methods
Baseline Random 13.30 14.18
GroundeR [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele] 19.63 19.94
DVSA [Karpathy and Fei-Fei(2015)] 30.51 30.80
Our methods
Loss Weighting 30.07 31.23
Object Interaction 29.61 30.06
Full Model 30.31 31.73
Upper bound 57.77 58.56
Table 1: Evaluation on localizing objects from the ground-truth captions.
Figure 3: Top 10 accuracy increases & decreases by object category. (Left) Improvements of our Loss Weighting model over DVSA. (Right) Improvements of our Full Model over DVSA.
Figure 4: Visualization of localization output from baseline DVSA and our proposed methods. Red boxes indicate ground-truths and green boxes indicate proposed regions. The first two rows show examples where our methods perform better than DVSA. The last row displays a negative example where all methods perform poorly. Better viewed in color.

Analysis. We show in Fig. 3 the top 10 accuracy increases and decreases of our methods over the DVSA baseline, by object category. Our methods make better predictions on static objects such as “squid”, “beef”, and “noodle” and worse predictions on cookware, such as “wok”, “pan”, and “oven”, which involve more state changes, such as containing or not containing food, or different camera perspectives. Our hypothesis is that our loss weighting framework favors objects that are consistent across frames, due to the shared frame-wise supervision.

Impact of Sampling Rate. We investigate the impact of a high video sampling rate on grounding accuracy by increasing the total number of frames per segment ($T$) from 5 to 20. The accuracy of DVSA drops from 30.80% to 29.90% and the accuracy of our Loss Weighting model drops from 31.23% to 30.93%. We expected some degradation, due to the excessive number of object proposals. However, our loss-weighted method only loses 0.96% of its accuracy (relative) while the accuracy of DVSA drops by 2.92%, showing that our method is less sensitive to a high sampling rate and predicts better on long frame sequences.

Qualitative Results. Fig. 4 visualizes the grounded objects from DVSA and our proposed methods. The first two rows show positive examples. In Fig. 4 (a), with the DVSA baseline the "plate" object is grounded to incorrect regions in the frames, whereas our methods correctly select regions with a large IoU with the ground-truth box. In Fig. 4 (b), the labels "bacon" and "it" refer to the same target object; per our annotation requirements, there is only one ground-truth box instead of two. The full model correctly associates "bacon" and "it" and grounds them to the same region proposal. The last row shows an example where all methods fail to ground the target objects adequately. This may be a result of errors in the top object proposals, since the scene is rather complicated. Another explanation may be bias in the dataset: during training, the "bowl" object typically occupies the majority of the frame.

Limitations. There are two limitations of our method that we hope to address in future work. First, even though the frame-wise loss can to some degree enforce the temporal consistency between frames, we do not explicitly model the relations between frames, for instance motion information. The transition between object states across frames, e.g., raw meat to cooked meat, should be further studied. Second, our grounding performance is upper-bounded by the object proposal accuracy and we have no control over errors in the proposals. An end-to-end version of the proposed method that solves both the proposal and the grounding problems could potentially improve the grounding accuracy.

5 Conclusion

We propose a frame-wise loss weighted grounding model for video object grounding. Our model applies segment-level labels to the frames in each segment, while being robust to inconsistencies between the segment-level label and each individual frame. We also leverage object interaction as textual guidance for grounding. We evaluate the effectiveness of our models on the newly-collected video grounding dataset YouCook2-BoundingBox. Our proposed methods outperform competitive baseline methods by a large margin. Future directions include incorporating the video motion information and exploring an end-to-end solution for video object grounding.

Acknowledgement

This work has been supported by DARPA FA8750-17-2-0112. This article solely reflects the opinions and conclusions of its authors but not DARPA. We thank Tianhang Gao, Ryan Szeto and Mohamed El Banani for their helpful discussions.

References

Appendix

More on Implementation Details

When sampling frames from a segment, we evenly divide the segment into $T$ clips and randomly sample one frame from each clip as temporal data augmentation. The negative sample sentence is randomly sampled from all available sentences, excluding sentences that have objects overlapping with those of the positive sample. For self-attention, we use a 2-layer, 6-head multi-head attention module with the hidden size set to 256 and the dropout ratio set to 0.2.
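A sketch of this clip-wise sampling, with T = 5 as in the main experiments; the rounding scheme for the clip boundaries is our choice.

```python
import random

def sample_frames(num_frames, T=5):
    """Split the segment's frames into T equal clips and draw one frame from each."""
    bounds = [round(i * num_frames / T) for i in range(T + 1)]
    return [random.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
            for i in range(T)]
```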

For fair comparison, all the approaches take in the same object proposals generated by Faster-RCNN [Ren et al.(2017)Ren, He, Girshick, and Sun]. The model is based upon ResNet-101 and pre-trained on MSCOCO for the object detection task (see https://github.com/jwyang/faster-rcnn.pytorch for details). We take the 2048-dimension output after RoI pooling as the region feature. We reduce the size of the region feature from 2048 to 128 with two linear layers, followed by dropout and ReLU.
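The region-feature encoder described above might look like the following; the intermediate width and dropout ratio are our assumptions since they are not specified here.

```python
import torch.nn as nn

region_encoder = nn.Sequential(
    nn.Linear(2048, 512),   # intermediate width is illustrative
    nn.Dropout(),           # dropout ratio unspecified in the text above
    nn.ReLU(),
    nn.Linear(512, 128),
)
```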

More on Data Annotation

Quality Control. We use VATIC [Vondrick et al.(2013)Vondrick, Patterson, and Ramanan] as our annotation tool and Amazon Mechanical Turk (MTurk) as the crowdsourcing marketplace. To maintain quality control, a worker must annotate a gold-standard training video before being allowed to annotate the dataset. A gold-standard training video is an already annotated video segment that new workers are tested against. [Vondrick et al.(2013)Vondrick, Patterson, and Ramanan] introduced these videos to eliminate bad workers and limit annotation correction efforts. A worker is not aware that they are completing a training video, but they are given unlimited attempts until it is successfully completed. All of the gold-standard training videos consist of three objects to be annotated, and the worker must achieve an IoU of at least 50% in every frame, with one allowable mistake. The video segments were uploaded in batches, and with each new batch all workers were required to complete a different training video in order to continue annotating. A total of 94 annotators completed the annotation tasks. The top 30 annotators (with the most accepted video segments) were selected to perform verification on the annotations.

Dataset Statistics. From the validation and testing splits, we have a total of 4,325 annotated segments: 2,962 validation and 1,363 testing segments. These segments were extracted from 647 videos that contain words from our vocabulary list.

Fig. 5 shows the distribution of segment durations in YouCook2, with a mean and standard deviation of 19.6s and 18.2s across all splits. Fig. 6 displays the number of target objects in the annotated YouCook2-BoundingBox segments. The mean number of target objects per sentence is 2.05, with a standard deviation of 1.49. The target objects are words that belong to our vocabulary list of 67 objects.

When completing the annotations, the workers were given the option to mark an object as "outside of view frame", "occluded", or both. We define an object's visibility as being in view of the current frame with no occlusion. From our collected annotations, Fig. 7 shows each object's visibility duration in the validation & testing splits. In the validation split objects are visible 60.72% of the time, and 60.58% in testing. Note from Fig. 7 that there is a spike in objects with 100% duration; this is attributed to the shorter segments in our collected data. It is perfectly reasonable to have an object visible for the entire duration of shorter segments, some as short as 2 seconds.

Figure 5: Distribution of segment durations for train/val/test splits.
Figure 6: Distribution of number of target objects within each segment for train/val/test splits. Target objects belong in our vocabulary of 67 words.
Figure 7: Span of object duration in each segment for annotated val/test splits.
Figure 8: Annotations completed by MTurk workers; The images on the left denote correct annotations and the right shows incorrect annotations. Each image is a frame from the video segment accompanied with its descriptive phrase. Better viewed in color.

Errata

After releasing the original version of the results, we discovered an error in the calculation of the evaluation metric (i.e., a scaling issue in the object proposal coordinates). This later version fixes that error. For completeness, we include the tables from both cases here for comparison (Tab. 2 for the initial results and Tab. 3 for the updated results). We note that the performance ordering does not change, that all methods see a significant rise in accuracy, and that the relative improvement over the baseline decreases.

Method Box Accuracy (%)
Val. Test
Compared methods
Baseline Random 8.60 9.51
GroundeR [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele] 12.91 13.72
DVSA [Karpathy and Fei-Fei(2015)] 14.70 16.85
Our methods
Loss Weighting 15.80 17.74
Object Interaction 14.86 16.33
Full Model 15.83 18.39
Upper bound 45.97 47.17
Table 2: (Initial.) Evaluation on localizing objects from the ground-truth captions.
Method Box Accuracy (%)
Val. Test
Compared methods
Baseline Random 13.30 14.18
GroundeR [Rohrbach et al.(2016)Rohrbach, Rohrbach, Hu, Darrell, and Schiele] 19.63 19.94
DVSA [Karpathy and Fei-Fei(2015)] 30.51 30.80
Our methods
Loss Weighting 30.07 31.23
Object Interaction 29.61 30.06
Full Model 30.31 31.73
Upper bound 57.77 58.56
Table 3: (Updated.) Evaluation on localizing objects from the ground-truth captions.