Video Object Segmentation with Joint Re-identification and Attention-Aware Mask Propagation

03/12/2018 · by Xiaoxiao Li, et al. · The Chinese University of Hong Kong

The problem of video object segmentation can become extremely challenging when multiple instances co-exist. While each instance may exhibit large scale and pose variations, the problem is compounded when instances occlude each other causing failures in tracking. In this study, we formulate a deep recurrent network that is capable of segmenting and tracking objects in video simultaneously by their temporal continuity, yet able to re-identify them when they re-appear after a prolonged occlusion. We combine both temporal propagation and re-identification functionalities into a single framework that can be trained end-to-end. In particular, we present a re-identification module with template expansion to retrieve missing objects despite their large appearance changes. In addition, we contribute a new attention-based recurrent mask propagation approach that is robust to distractors not belonging to the target segment. Our approach achieves a new state-of-the-art global mean (Region Jaccard and Boundary F measure) of 68.2 on the challenging DAVIS 2017 benchmark (test-dev set), outperforming the winning solution which achieves a global mean of 66.1 on the same partition.




1 Introduction

Video object segmentation aims at segmenting foreground instance object(s) from the background region in a video sequence. Typically, ground-truth masks are assumed to be given in the first frame; the goal is to begin with these masks and track them through the remaining sequence. This paradigm is sometimes known as semi-supervised video object segmentation in the literature [24, 3, 27]. A notable and challenging benchmark for this task is the 2017 DAVIS Challenge [28]. An example of a sequence is shown in Fig. 1. The DAVIS dataset presents real-world challenges in two key aspects. First, there are multiple instances in a video, and it is very likely that they will occlude each other, causing partial or even full obstruction of a target instance. Second, instances typically experience substantial variations in both scale and pose across frames.

Figure 1: In this example, we focus on the bicycle. (a) shows the result of the template-matching approach, which is affected by large scale and pose variations. As shown in (b), the temporal propagation approach is incapable of handling occlusion. The proposed DyeNet joins them into a unified framework that first retrieves high-confidence starting points and then propagates their masks bidirectionally to address these issues. The result of DyeNet is visualized in (c). Best viewed in color.

To address the occlusion problem, notable studies such as [3, 38] adapt a generic semantic segmentation deep model to the task of segmenting a specific object. These methods follow a notion reminiscent of the template-matching methods that are widely used in the visual tracking task [2, 32]. Often, a fixed set of templates, such as the masks of the target objects in the first frame, is used for matching targets. This paradigm fails in some challenging cases in DAVIS (see Fig. 1(a)), as a fixed set of templates cannot sufficiently cover large scale and pose variations. To mitigate the variations in both scale and pose across frames, existing studies [31, 34, 15, 26, 33, 16] exploit temporal information to maintain continuity of individual segmented regions across frames. On unconstrained videos with severe occlusions, such as that shown in Fig. 1(b), approaches based on temporal continuity are prone to errors since there is no mechanism to re-identify a target when it reappears after going missing for a few video frames. In addition, these approaches may fail to track instances in the presence of distractors such as cluttered backgrounds or segments from other objects during temporal propagation.

Solving video object segmentation with multiple instances requires template matching for coping with occlusion and temporal propagation for ensuring temporal continuity. In this study, we bring both approaches into a single unified network. Our network hinges on two main modules, namely a re-identification (Re-ID) module and a recurrent mask propagation (Re-MP) module. The Re-ID module helps to establish confident starting points in non-successive frames and retrieve missing segments caused by occlusions. Based on the segments provided by the Re-ID module, the Re-MP module propagates their masks bidirectionally through the entire video by a recurrent neural network. The process of conducting Re-ID followed by Re-MP may be imagined as dyeing a fabric with multiple color dots (i.e., choosing starting points with re-identification) from which the color disperses (i.e., propagation). Drawing on this analogy, we name our network DyeNet.

There are a few methods [17, 21] that improve video object segmentation through both temporal propagation and re-identification. Our approach differs by offering a unified network that allows both tasks to be optimized end-to-end. In addition, unlike existing studies, the Re-ID and Re-MP steps are conducted in an iterative manner. This allows us to identify confidently predicted masks in each iteration and expand the template set. With a dynamically expanding template set, our Re-ID module can better retrieve missing objects that reappear with different poses and scales. In addition, the Re-MP module is specially designed with an attention mechanism to disregard distractors such as background objects or segments from other objects during mask propagation. As shown in Fig. 1(c), DyeNet is capable of segmenting multiple instances across a video with high accuracy through Re-ID and Re-MP. We provide a more detailed discussion against [17, 21] in the related work section.

Our contributions are summarized as follows. (1) We propose a novel approach that joins template matching and temporal propagation into a unified deep neural network for addressing video object segmentation with multiple instances. The network can be trained end-to-end. It does not require online training (i.e., fine-tuning using the masks of the first frame) to do well, but can achieve better results with online training. (2) We present an effective template expansion approach to better retrieve missing targets that reappear with different poses and scales. (3) We present a new attention-based recurrent mask propagation module that is more resilient to distractors.

We use the challenging DAVIS 2017 dataset [28] as our key benchmark. The winner of this challenge [21] achieves a global mean (Region Jaccard and Boundary F measure) of 66.1 on the test-dev partition; our method obtains a global mean of 68.2 on this partition. Without online training, DyeNet still achieves a competitive J&F-mean of 62.5 while being an order of magnitude faster. Our method also achieves state-of-the-art results on the DAVIS 2016 [27], SegTrack [19] and YouTubeObjects [29] datasets.

2 Related Work

Image segmentation. The goal of semi-supervised video object segmentation differs from semantic image segmentation [4, 40, 23, 20, 39] and instance segmentation [9, 10, 22, 11], which perform pixel-wise class labeling. In video object segmentation, the class type is always assumed to be undefined; the challenge thus lies in performing accurate object-agnostic mask propagation. Our network leverages the semantic image segmentation task to learn a generic representation that encompasses semantic-level information. The learned representation is strong, allowing our model to be applied in a dataset-agnostic manner: it is not trained on the first-frame annotations of the videos in the target dataset, but it can optionally be fine-tuned and adapted to the target video domain, as practiced in [16], to obtain better results. We examine both possibilities in the experimental section.

Visual tracking. While semi-supervised video object segmentation can be seen as a pixel-level tracking task, it is more challenging in terms of object scale variation across video frames and inter-object scale differences. In addition, object poses are relatively stable in tracking datasets, and there are few prolonged occlusions. Importantly, conventional tracking tasks only require bounding-box-level results and are concerned with causality (i.e., a tracker does not use any future frames for estimation). In contrast, semi-supervised video object segmentation expects precise pixel-level tracking results and typically does not assume causality.

Video object segmentation.

Prior to the prevalence of deep learning, most approaches to video object segmentation were graph-based [8, 18, 36, 25]. Contemporary methods are mostly based on deep learning. A useful technique reminiscent of template matching is commonly applied, where templates are typically formed from the ground-truth masks in the first frame. For instance, Caelles et al. [3] adapt a generic semantic image segmentation network to the templates of each testing video individually. Yoon et al. [38] distinguish the foreground objects based on the pixel-level similarity between candidates and templates, measured by a matching deep network. Another useful technique is to exploit temporal continuity for establishing spatiotemporal correlation. Tsai et al. [31] estimate object segmentation and optical flow synergistically using an iterative scheme. Jampani et al. [15] propagate structured information through a video sequence with a bilateral network that performs learnable bilateral filtering across video frames. Perazzi et al. [26] and Jang et al. [33] estimate the segmentation mask of the current frame by using the mask from the previous frame as guidance.

Differences from existing methods that combine template matching and temporal continuity. A few studies combine the merits of the two aforementioned techniques. Khoreva et al. [16] show that a training set closer to the target domain is more effective; they improve [3] by synthesizing additional training data from the first frames of testing videos and employ mask propagation during inference. Instance Re-Identification Flow (IRIF) [17] divides foreground objects into human and non-human instances, then applies a person re-identification network [35] to retrieve missing humans during mask propagation and blends them into the final prediction; for non-human instances, IRIF degenerates to a conventional mask propagation method. Our method differs from these studies in that we neither synthesize training data from the first frames nor explicitly divide foreground objects into human and non-human instances.

Li et al. [21] adapt a person re-identification approach [35] into a generic object re-identification model and employ a two-stream mask propagation model [26]. Their method (VS-ReID) achieved the highest performance in the 2017 DAVIS Challenge [21]. However, its shortcomings are also obvious: (1) Unlike DyeNet, which adopts template expansion, VS-ReID only uses the masks of target objects in the first frame as templates; it is thus more susceptible to pose variations. (2) VS-ReID is much slower than our method due to redundant feature extraction steps and a less efficient inference procedure; its inference takes about 3 seconds per frame on the DAVIS dataset, roughly 7 times slower than DyeNet. (3) VS-ReID has no attention mechanism in its mask propagation, so its robustness to distractors and background clutter is inferior to DyeNet's. (4) VS-ReID cannot be trained end-to-end, whereas DyeNet performs joint learning of re-identification and temporal propagation.

3 Methodology

Figure 2: (a) The pipeline of DyeNet. The network hinges on two main modules, namely a re-identification (Re-ID) module and a recurrent mask propagation (Re-MP) module. (b) The network architecture of the re-identification (Re-ID) module. Best viewed in color.

We provide an overview of the proposed approach. Figure 2(a) depicts the architecture of DyeNet. It consists of two modules, namely the re-identification (Re-ID) module and the recurrent mask propagation (Re-MP) module. The network first performs feature extraction, which will be detailed next.

Feature extraction. Given a video sequence with N frames {I_i}, for each frame I_i we first extract a feature map f_i by a convolutional feature network N_feat, i.e., f_i = N_feat(I_i). Both the Re-ID and Re-MP modules employ the same set of features to save computation in feature extraction. Considering model capacity and speed, we use ResNet-101 [12] as the backbone of N_feat. More specifically, ResNet-101 consists of five blocks named 'conv1' and 'conv2_x' to 'conv5_x'. We employ 'conv1' to 'conv4_x' as our feature network. To increase the resolution of the features, we decrease the convolutional strides in the 'conv4_x' block and replace its convolutions by dilated convolutions, similar to [4]. Consequently, the resolution of the feature maps is 1/8 that of the input frame.

Iterative inference with template expansion. After feature extraction, DyeNet runs Re-ID and Re-MP in an iterative manner to obtain segmentation masks of all instances across the whole video sequence. We assume the availability of masks given in the first frame and use them as templates. This is the standard protocol of the benchmarks considered in Sec. 4.

In the first iteration, the Re-ID module generates a set of masks from object proposals and compares them with templates. Masks with a high similarity to templates are chosen as the starting points for Re-MP. Subsequently, Re-MP propagates each selected mask (i.e., starting point) bidirectionally, and generates a sequence of segmentation masks, which we call tracklet. After Re-MP, we can additionally consider post-processing steps to link the tracklets. In subsequent iterations, DyeNet chooses confidently predicted masks to expand the template set and reapplies Re-ID and Re-MP. Template expansion avoids heavy reliance on the masks provided by the first frame, which may not capture sufficient pose variations of targets.

Note that we do not expect to retrieve all masks of the target objects in a given sequence: in the first iteration, it is sufficient to obtain several high-quality starting points for the mask propagation step. After each iteration of DyeNet, we select predictions with high confidence to augment the template set. In practice, the first iteration already retrieves a large fraction of the masks as starting points on the DAVIS 2017 dataset, and this rate increases further over three iterations. DyeNet stops the iterative process when no more high-confidence masks can be found by the Re-ID module. Next, we present the Re-ID and Re-MP modules.
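The retrieve-then-expand control flow described above can be sketched in plain Python. This is a toy illustration, not the actual network: detections are scalar stand-in features, the `match` similarity is supplied by the caller, and the propagation step is omitted; only the template-expansion loop follows the text.

```python
def iterative_inference(detections_per_frame, init_templates, match, rho=0.45):
    """Toy sketch of DyeNet's iterative loop: retrieve high-confidence
    starting points, add them to the template set, and repeat until the
    Re-ID step finds nothing new.

    detections_per_frame: {frame_index: [feature, ...]} candidate masks.
    match(det, templates) -> similarity of det to its closest template.
    """
    templates = list(init_templates)
    retrieved = set()                    # (frame, detection_index) pairs
    while True:
        new = [(f, i)
               for f, dets in detections_per_frame.items()
               for i, d in enumerate(dets)
               if (f, i) not in retrieved and match(d, templates) > rho]
        if not new:
            break                        # no more high-confidence masks
        retrieved.update(new)
        # template expansion: confident retrievals become new templates
        templates += [detections_per_frame[f][i] for f, i in new]
    return retrieved, templates
```

With a similarity that decays with feature distance, an object whose appearance drifts gradually over frames is picked up in a later iteration even though it is too dissimilar to the first-frame template alone.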

Figure 3: (a) Illustration of bi-direction mask propagation. (b) The network architecture of the recurrent mask propagation (Re-MP) module. Best viewed in color.

3.1 Re-identification

We introduce the Re-ID module to search for targets in the video sequence. The module has several unique features that allow it to retrieve a missing object that reappears at a different scale or pose. First, as discussed previously, we expand the template set in every iteration of Re-ID and Re-MP; template expansion enriches the template set for more robust matching. Second, we employ an object proposal method to estimate the locations of target objects. Since these proposals are generated from anchors of various sizes, which cover objects of various scales, the Re-ID module can handle large scale variations.

Figure 2(b) illustrates the Re-ID module. For the i-th frame, besides the feature f_i, the Re-ID module also requires object proposals {b_{i,j}} as input, where j = 1, ..., N_i and N_i indicates the number of proposal bounding boxes on this frame. We employ a Region Proposal Network (RPN) [30] to propose candidate object bounding boxes on every frame. For convenience, our RPN is trained separately from DyeNet, but their backbone networks are shareable. For each candidate bounding box b_{i,j}, we first extract its features from f_i and resize them to a fixed size (e.g., 28×28) by RoIAlign [11], an improved form of RoIPool that removes harsh quantization. The extracted features are fed into two shallow sub-networks. The first is a mask network that predicts a binary mask representing the segmentation of the main instance in candidate bounding box b_{i,j}. The second is a re-identification network that projects the extracted features into an L2-normalized 256-dimensional subspace to obtain the mask features. The templates are projected into the same subspace for feature extraction.

By computing cosine similarities between the mask and template features, we measure the similarity between candidate bounding boxes and templates. If a candidate bounding box is sufficiently similar to any template, i.e., the cosine similarity is larger than a threshold ρ, we keep its mask as a starting point for mask propagation. In practice, we set ρ to a high value to establish high-quality starting points for the next step.
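A minimal version of this matching step, using toy feature vectors in place of the learned 256-d embeddings (the helper names are ours, not the paper's):

```python
import math

def l2_normalize(v):
    """Project a feature vector onto the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def select_starting_points(candidates, templates, rho=0.7):
    """Keep candidates whose embedding is close enough to any template.

    candidates / templates: lists of raw feature vectors (toy stand-ins
    for the Re-ID embeddings). Returns indices of accepted candidates.
    """
    cands = [l2_normalize(c) for c in candidates]
    temps = [l2_normalize(t) for t in templates]
    keep = []
    for i, c in enumerate(cands):
        # cosine similarity of unit vectors is just their dot product
        sim = max(sum(a * b for a, b in zip(c, t)) for t in temps)
        if sim > rho:
            keep.append(i)
    return keep
```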

We employ the 'conv5_x' block of ResNet-101 as the backbone of both sub-networks. However, some modifications are necessary to adapt them to their respective tasks. In particular, we decrease the convolutional strides in the mask network to capture finer details in the prediction. For the re-identification network, we keep the original strides and append a global average pooling layer and a fully connected layer to project the features into the target subspace.

3.2 Recurrent Mask Propagation

As shown in Fig. 3(a), we bidirectionally extend the retrieved masks (i.e., starting points) into tracklets using the Re-MP module. By incorporating short-term memory, the module can handle large pose variations, which complements the re-identification module. We formulate the Re-MP module as a Recurrent Neural Network (RNN). Figure 3(b) illustrates the mask propagation process between adjacent frames. For brevity, we only describe forward propagation; backward propagation is conducted in the same way.

Suppose y_j is a retrieved segmentation mask for instance k in the j-th frame, and we have propagated it from the j-th frame to the (i−1)-th frame, so that (y_j, ..., y_{i−1}) is the sequence of binary masks obtained so far. We now aim to predict y_i, i.e., the mask for instance k in the i-th frame. In an RNN framework, the prediction of y_i can be formulated as

h_i = R(ĥ_{i−1→i}, x_i),  (1)
y_i = O(h_i),  (2)

where R and O are the recurrent function and output function, respectively.

We first explain Eq. (1). We begin by estimating the location, i.e., the bounding box, of instance k in the i-th frame from y_{i−1} by flow-guided warping. More specifically, we use FlowNet 2.0 [13] to extract the optical flow F_{i−1→i} between the (i−1)-th and i-th frames. The binary mask y_{i−1} is warped to ŷ_{i−1→i} according to F_{i−1→i} by a bilinear warping function. We then take the bounding box of ŷ_{i−1→i} as the location of instance k in the i-th frame. Similar to the Re-ID module, we extract the feature map of this bounding box from f_i by the RoIAlign operation and denote it x_i. The historical information of instance k from the j-th frame to the (i−1)-th frame is captured by a hidden state (memory) h_{i−1} ∈ R^{a×a×c}, where a denotes the spatial size of the feature map and c the number of channels. We warp h_{i−1} to ĥ_{i−1→i} by the optical flow for spatial consistency. With both ĥ_{i−1→i} and x_i we estimate h_i by Eq. (1). As with the mask network described in Sec. 3.1, we employ the 'conv5_x' block of ResNet-101 as the recurrent function R. The mask y_i for instance k in the i-th frame is then obtained via the output function O in Eq. (2), which is modeled by three convolutional layers.
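The flow-guided warping step can be illustrated with a small bilinear backward warp. The flow convention below (the pixel now at (x, y) came from (x − dx, y − dy) in the previous frame) is one plausible reading of the text, not the paper's exact implementation, and FlowNet 2.0 is replaced by a caller-supplied flow field.

```python
import numpy as np

def warp_mask(mask, flow):
    """Backward bilinear warp of an (H, W) soft mask into the next frame.

    flow: (H, W, 2) array; flow[y, x] = (dx, dy) means the content now at
    (x, y) is sampled from (x - dx, y - dy) in the previous frame.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx = xs - flow[..., 0]                 # source x coordinates
    sy = ys - flow[..., 1]                 # source y coordinates
    valid = (sx >= 0) & (sx <= w - 1) & (sy >= 0) & (sy <= h - 1)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    fx = sx - x0                           # fractional offsets
    fy = sy - y0
    out = np.zeros((h, w))
    for dy in (0, 1):                      # blend the four neighbours
        for dx in (0, 1):
            yy = np.clip(y0 + dy, 0, h - 1)
            xx = np.clip(x0 + dx, 0, w - 1)
            wgt = (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
            out += wgt * mask[yy, xx]
    return out * valid                     # zero out-of-frame samples
```

The same routine applies to each channel of the hidden state h_{i−1}, which is warped identically for spatial consistency.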

Figure 4: Region attention in mask propagation.

Region attention. The quality of the propagated mask y_i relies on how accurately the model captures the shape of the target instance. In many cases, a bounding box may contain distractors that jeopardize the quality of the propagated mask. As shown in Fig. 4(a), if we directly generate y_i from h_i, the model is likely to be confused by distractors that appear in the bounding box. To overcome this issue, we leverage an attention mechanism to filter out potentially noisy regions. It is worth pointing out that attention mechanisms have been used in various computer vision tasks [1, 37] but not in mask propagation. Our work presents the first attempt to incorporate an attention mechanism into mask propagation.

Specifically, given the warped hidden state ĥ_{i−1→i}, we feed it into a single convolutional layer followed by a softmax function to generate an attention distribution over the bounding box. Figure 4(b) shows the learned attention distributions. We then multiply the current hidden state h_i by this attention distribution across all channels to focus on the regions of interest, and the mask y_i is generated from the enhanced hidden state using Eq. (2). As shown in Fig. 4, the Re-MP module concentrates on the tracked object thanks to the attention mechanism. The mask propagation of an object is aborted when its size becomes too small, which indicates a high possibility of occlusion. Finally, y_j is extended into a tracklet after the forward and backward propagation, and this process is applied to all starting points to generate a set of tracklets. In some cases, however, different starting points may produce the same tracklet, which leads to redundant computation. To speed up the algorithm, we sort all starting points in descending order of their cosine similarities to the templates and extend them in that order. If a starting point's mask highly overlaps with a mask in an existing tracklet, we skip that starting point. This step does not affect the results; on the contrary, it greatly accelerates inference.
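The attention step above reduces to a spatial softmax followed by a channel-wise reweighting. The sketch below stands in for the single convolutional layer with a per-position dot product against an assumed weight vector `w_att`:

```python
import numpy as np

def region_attention(h_warped, h_cur, w_att):
    """Spatial attention sketch: score each position of the warped hidden
    state (a 1x1-conv stand-in: dot product with w_att), softmax over all
    spatial locations, then broadcast over every channel of h_cur.

    h_warped, h_cur: (H, W, C) hidden states; w_att: (C,) assumed weights.
    """
    logits = h_warped @ w_att                 # (H, W) attention logits
    e = np.exp(logits - logits.max())         # numerically stable softmax
    att = e / e.sum()                         # distribution over positions
    return att, h_cur * att[..., None]        # reweight all channels
```

Positions whose history contains the target receive high weight, so distractors entering the bounding box contribute little to the mask predicted from the enhanced state.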

Linking the tracklets. The mask propagation step generates a set of tracklets, each of which may cover only part of a full trajectory. We introduce a greedy approach to link these tracklets into consistent mask tubes. It sorts all tracklets in descending order of the cosine similarity between their respective starting points and the templates. Given the sorted order, the tracklets with the highest similarities are assigned to the respective templates. The method then examines the remaining tracklets in turn: a tracklet is merged into a tube of higher order if there is no contradiction between them. In practice, this simple mechanism works well. We will investigate other plausible linking approaches (e.g., conditional random fields) in the future.
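The greedy linking can be made concrete as follows. The "no contradiction" test is not spelled out in this copy, so the sketch assumes it means disjoint frame sets per identity:

```python
def link_tracklets(tracklets):
    """Greedy linking sketch. Each tracklet: (similarity, template_id, frames).

    Tracklets are processed in descending similarity order; a tracklet is
    merged into its template's tube only if its frames do not overlap the
    frames that tube already owns (assumed 'no contradiction' rule).
    """
    tubes = {}                       # template_id -> set of owned frames
    accepted = []
    for sim, tid, frames in sorted(tracklets, key=lambda t: -t[0]):
        owned = tubes.setdefault(tid, set())
        if owned & set(frames):
            continue                 # contradiction: frames already covered
        owned |= set(frames)
        accepted.append((sim, tid, frames))
    return accepted
```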

3.3 Inference and Training

Iterative inference. During inference, we are given a video sequence and the masks of the target objects in the first frame. As mentioned, we employ those masks as the initial templates. DyeNet is applied iteratively to the whole video sequence until no more high-confidence instances can be found, and the set of templates is augmented with high-confidence predictions after each iteration.

Training details.

The overall loss function of DyeNet is formulated as

L = ℓ_reid + λ (ℓ_mask + ℓ_remp),

where ℓ_reid is the re-identification loss of the re-identification network in Sec. 3.1, which follows the Online Instance Matching (OIM) loss in [35], and ℓ_mask and ℓ_remp denote the pixel-wise segmentation losses of the mask network in Sec. 3.1 and the recurrent mask propagation module in Sec. 3.2, respectively. The overall loss is a linear combination of these three losses, with λ a weight that balances their scales. Following [21, 16], the weights of DyeNet are initialized from a semantic segmentation network [39]. Due to memory limitations, the weights of 'conv1' to 'conv4_20' are frozen during training. Following [21], we also pre-train the re-identification sub-network of the Re-ID module on the ImageNet [6] dataset. DyeNet is then jointly trained on the DAVIS training sets for 24k iterations. We use a mini-batch of 32 images (8 videos, 4 frames per video) with standard momentum and weight decay; the learning rate is dropped by a factor of 10 after every 8k iterations. For online training, we follow [16] to synthesize videos based on the first frame of the testing videos and add them to the training set.

4 Experiments

Datasets. To demonstrate the effectiveness and generalization ability of DyeNet, we evaluate our method on the DAVIS 2016 [27], DAVIS 2017 [28], SegTrack [19] and YouTubeObjects [29] datasets. The DAVIS 2016 dataset contains 50 high-quality video sequences (3,455 frames), with all frames annotated with pixel-wise object masks. Since DAVIS 2016 focuses on single-object video segmentation, each video has only one foreground object; there are 30 training and 20 validation videos. DAVIS 2017 supplements the training and validation sets of DAVIS 2016 with 30 and 10 high-quality video sequences, respectively. It also introduces another 30 test-dev videos and 30 test-challenge videos, which makes DAVIS 2017 three times larger than its predecessor. Moreover, DAVIS 2017 re-annotates all video sequences with multiple objects. These differences make it more challenging than DAVIS 2016. The SegTrack dataset contains 14 low-resolution video sequences (947 frames) with 24 generic foreground objects. For the YouTubeObjects [29] dataset, we consider a subset of 126 videos with around 20,000 frames, whose pixel-level annotations are provided by [14].

Evaluation metric. For the DAVIS 2017 dataset, we follow [28] and adopt the region (J), boundary (F) and their average (J&F) measures for evaluation. To be consistent with existing studies [33, 16, 26, 3], we use mean intersection over union (mIoU) averaged across all instances to evaluate performance on DAVIS 2016, SegTrack and YouTubeObjects.
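The mIoU protocol is simple enough to state exactly; here masks are modeled as sets of pixel coordinates rather than arrays, which keeps the sketch self-contained:

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks given as pixel sets."""
    if not pred and not gt:
        return 1.0                  # both empty: perfect agreement
    return len(pred & gt) / len(pred | gt)

def mean_iou(preds, gts):
    """mIoU averaged across all instances (one pred/gt mask pair each)."""
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(gts)
```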

Training modalities. In existing studies [26, 16], training modalities can be divided into offline and online training. In offline training, a model is trained only on the training set, without any annotations from the test set. Since first-frame annotations are provided at the testing stage, we can also use them for tuning the model, namely online training. Online training can be further divided into per-dataset and per-video training. In per-dataset online training, we fine-tune a model on all first-frame annotations from the test set to obtain a dataset-specific model. Per-video online training adapts the model weights to each testing video, i.e., there are as many video-specific models as testing videos at the testing stage.

4.1 Ablation Study

In this section, we investigate the effectiveness of each component of DyeNet. Unless otherwise indicated, we employ the DAVIS 2017 train set for training, and all results are reported on the DAVIS 2017 val set under the offline training modality.

Effectiveness of the Re-MP module. To demonstrate the effectiveness of the Re-MP module clearly, we do not involve the Re-ID module in this experiment. The Re-MP module is directly applied to extend the first-frame annotations into mask tubes. This variant reduces our method to a conventional mask propagation pipeline, but with an attention-aware recurrent structure. We compare the Re-MP module with the state-of-the-art mask propagation method MSK [26]. To ensure a fair comparison, we re-implement MSK with the same ResNet-101 backbone as DyeNet, and use neither online training nor any post-processing for MSK. The re-implemented MSK achieves a J&F-mean of 65.3 on the DAVIS 2017 val set (Table 1), which is much higher than the original result reported in [26].

Variant                   J-mean   F-mean   J&F-mean
MSK [26] (ResNet-101)     63.3     67.2     65.3
Re-MP (no attention)      65.3     69.7     67.5
Re-MP (full)              67.3     71.0     69.1
Table 1: Ablation study on Re-MP with DAVIS 2017 val.
Figure 5: Examples of mask propagation. Best viewed in color.
             ρ = 0.9               ρ = 0.8               ρ = 0.7               ρ = 0.6
         preci. recall  J&F    preci. recall  J&F    preci. recall  J&F    preci. recall  J&F
Iter. 1   97.0   16.0   72.3    87.1   22.2   73.2    78.9   26.2   73.2    76.5   29.2   73.4
Iter. 2   90.3   29.3   73.3    75.6   32.5   73.7    68.9   33.5   74.1    65.5   34.1   74.0
Iter. 3   90.7   30.1   73.6    74.6   32.6   73.7    68.8   33.5   74.1    65.3   34.2   73.9
Table 2: Ablation study on Re-ID with DAVIS 2017 val, varying the similarity threshold ρ. The improvement of J&F-mean between rows is due to template expansion.

As shown in Table 1, MSK achieves a J&F-mean of 65.3 on the DAVIS 2017 val set. Unlike MSK, which propagates predicted masks only, the proposed Re-MP propagates all historical information through its recurrent architecture, and the RoIAlign operation allows our network to focus on foreground regions and produce high-resolution masks; together these make Re-MP outperform MSK. Adding the attention mechanism makes Re-MP focus even more on foreground regions, further improving the J&F-mean from 67.5 to 69.1.

Figure 5 shows propagation results of different methods. In this video, a dog passes in front of a woman and another dog. MSK dyes the woman and the dog behind with the instance id of the front dog. The plain Re-MP does not dye other instances, but it is still confused during the crossing and assigns two instance ids to the front dog. Thanks to the attention mechanism, our full Re-MP is not distracted by the other instances. Due to occlusion, the masks of the other instances are lost here; they are retrieved by the Re-ID module in the complete DyeNet.

Effectiveness of the Re-ID module with template expansion. In DyeNet, we employ the Re-ID module to search for target objects in the video sequence. By choosing an appropriate similarity threshold ρ, we can establish high-quality starting points for the Re-MP module; ρ controls the trade-off between precision and recall of the retrieved objects. Table 2 lists the precision and recall of the retrieved starting points in each iteration as ρ varies, along with the corresponding overall performance. Tracklets are linked by the greedy algorithm in this experiment.

Overall, the J&F-mean increases after each iteration thanks to template expansion. When ρ decreases, more instances are retrieved in the first iteration, which leads to high recall and J&F-mean; however, it also produces imprecise starting points that degrade the quality of the templates in subsequent iterations, so the performance gain between iterations is limited. In contrast, the Re-ID module with a high ρ is stricter; as the template set expands, it still gradually reaches a satisfying recall rate. In practice, the iterative process stops after about three rounds. Thanks to the greedy algorithm, the overall performance is not sensitive to ρ. With ρ = 0.7, DyeNet achieves the best J&F-mean; this value is used in all following experiments.

Effectiveness of each component in DyeNet. Table 3 summarizes how performance improves as each component is added step by step to DyeNet on the test-dev set of DAVIS 2017. Our re-implemented MSK is chosen as the baseline. All models in this experiment are first trained offline on the train and val sets, and then per-dataset online trained on the test-dev set.

Variant                   J-mean   F-mean   J&F-mean   Δ(J&F)
MSK [26] (ResNet-101)     50.9     52.6     51.7       -
Re-MP (no attention)      55.4     60.5     58.0       +6.2
Re-MP (full)              59.1     62.8     61.0       +9.2
+ Re-ID                   65.8     70.5     68.2       +7.2
offline only              60.2     64.8     62.5       -5.6
Table 3: Ablation study of each module in DyeNet with DAVIS 2017 test-dev.

Compared with MSK, our Re-MP module with the attention mechanism significantly improves the J&F-mean from 51.7 to 61.0. The full DyeNet, which contains both the Re-ID and Re-MP modules, achieves 68.2 using the greedy algorithm to link the tracklets. More remarkably, even without online training, DyeNet achieves a competitive J&F-mean of 62.5.

Figure 6: Stage-wise performance increment according to specific attributes. Best viewed in color.

To further investigate the contribution of each module in DyeNet, we categorize instances in test-dev set by specific attributes, including:

  • Size: Instances are categorized into ‘small’, ‘medium’, and ‘large’ according to their size in the first frame’s annotations.

  • Scale Variation: The area ratio among pairs of bounding boxes enclosing the target object falls below a threshold, indicating a notable scale change. The bounding boxes are obtained from our best prediction.

  • Occlusion: An object is not occluded, partially occluded, or heavily occluded.

  • Pose Variation: Noticeable pose variation, due to object motion or relative camera-object rotation.
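The scale-variation attribute above can be computed mechanically from the predicted boxes. The following is an illustrative sketch, not the paper's exact protocol; the threshold `tau` and the box format are assumptions.

```python
def bbox_area(box):
    """Area of an axis-aligned box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return max(0, x1 - x0) * max(0, y1 - y0)

def has_scale_variation(boxes, tau=0.5):
    """Flag an instance as exhibiting scale variation when some pair of
    per-frame boxes differs in area by more than the ratio `tau`
    (smaller area / larger area < tau)."""
    areas = [bbox_area(b) for b in boxes if bbox_area(b) > 0]
    if len(areas) < 2:
        return False
    return min(areas) / max(areas) < tau
```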

Figure 7: Visualization of DyeNet’s predictions. The first column shows the first frame of each video sequence with ground truth masks. The frames are chosen at equal intervals. Best viewed in color.

We choose the best version of DyeNet in Table 3 and visualize its stage-wise performance for each attribute in Fig. 6. We find that object size and occlusion are the most important factors affecting performance, and that scale variation has more influence than pose variation. On closer inspection, we observe that our Re-MP module tracks small objects well, which is a shortcoming of conventional mask propagation methods. It also prevents the model from being distracted by other objects in partial occlusion cases. Complementary to Re-MP, the Re-ID module retrieves instances missed due to heavy occlusion, greatly improving performance in heavy occlusion cases. Template expansion ensures that Re-ID works well even under large pose variations.

4.2 Benchmark

In this section, we compare our DyeNet with existing methods and show that it achieves state-of-the-art performance on standard benchmarks, including the DAVIS 2017, DAVIS 2016, SegTrack and YouTubeObjects datasets. DyeNet is tested at a single scale without any post-processing. Table 4 lists the J-, F- and global means on DAVIS 2017 test-dev. DyeNet is trained on the train and val sets of DAVIS 2017 and achieves a competitive global mean of 62.5. It further improves the global mean to 68.2 through online fine-tuning, making it the best-performing method on the DAVIS 2017 benchmark.
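For reference, the DAVIS-style metrics behind these tables can be sketched as follows: the region measure J is the intersection-over-union between predicted and ground-truth masks, and the global mean averages the J- and F-means. This is a minimal sketch; the boundary measure F itself is omitted (it requires boundary matching, as implemented in the official DAVIS toolkit).

```python
import numpy as np

def region_jaccard(pred, gt):
    """Region measure J: IoU between two boolean masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def global_mean(j_mean, f_mean):
    """DAVIS global mean: average of the region and boundary means."""
    return (j_mean + f_mean) / 2.0
```

For example, averaging DyeNet's J-mean of 65.8 and F-mean of 70.5 yields the 68.2 global mean reported in Table 3.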

Method | J-mean | F-mean | Global mean
OnAVOS [33] | 53.4 | 59.6 | 56.5
LucidTracker [16] | 60.1 | 68.3 | 64.2
VS-ReID [21] | 64.4 | 67.8 | 66.1
LucidTracker [16] | 63.4 | 69.9 | 66.6
DyeNet (offline) | 60.2 | 64.8 | 62.5
DyeNet | 65.8 | 70.5 | 68.2
Table 4: Results on DAVIS test-dev.
Method | DAVIS | SegTrack | YouTubeObjects
VPN [15] | 75.0 | - | -
SegFlow [5] | 76.1 | - | -
OSVOS [3] | 79.8 | 65.4 | 72.5
MSK [26] | 80.3 | 70.3 | 72.6
LucidTracker [16] | 84.8 | 77.6 | 76.2
OnAVOS [33] | 85.7 | - | 77.4
DyeNet (offline) | 84.7 | 78.3 | 74.9
DyeNet | 86.2 | 78.7 | 79.6
Table 5: Results (mIoU) across three datasets.

To show the generalization ability and transferability of DyeNet, we next evaluate it on three other benchmarks, DAVIS 2016, SegTrack and YouTubeObjects, which contain diverse videos. For DAVIS 2016, DyeNet is trained on its train set. Since SegTrack and YouTubeObjects provide no videos for offline training, we directly employ the DAVIS model as their offline model. As summarized in Table 5, offline DyeNet obtains promising performance, and after online fine-tuning our model achieves state-of-the-art performance on all three datasets. Note that although the videos in SegTrack and YouTubeObjects differ considerably from those in DAVIS, DyeNet trained on DAVIS still performs strongly on those datasets without any fine-tuning, which demonstrates its generalization ability and transferability to diverse videos. We also find that our offline predictions on YouTubeObjects are sometimes better than the ground-truth annotations, and the measured performance losses there are mainly caused by annotation bias. In Fig. 7, we show examples of DyeNet’s predictions.

Speed Analysis. Most existing methods require online training and post-processing to achieve competitive performance; these time-consuming processes make inference slow. For example, the full OnAVOS [33] takes roughly 13 seconds per frame to achieve 85.7 mIoU on the DAVIS val set. LucidTracker [16], which achieves 84.8 mIoU, requires 40k iterations of per-dataset and 2k iterations of per-video online training plus post-processing [7]. Our offline DyeNet obtains similar performance (84.7 mIoU) at 2.4 FPS on a single Titan Xp GPU. After 2k iterations of per-dataset online training, DyeNet achieves 86.2 mIoU, with a corresponding running speed of 0.43 FPS.

5 Conclusion

We have presented DyeNet, which joins re-identification and attention-based recurrent temporal propagation into a unified framework to address challenging video object segmentation with multiple instances. This is the first end-to-end framework for this problem, and it offers a few compelling components. First, to cope with pose variations of targets, we relaxed the reliance on the template set from the first frame by performing template expansion in our iterative algorithm. Second, to achieve robust video segmentation against distractors and background clutter, we proposed an attention mechanism for recurrent temporal propagation. DyeNet does not require online training to obtain competitive accuracy, and it runs faster than many existing methods. With online training, DyeNet achieves state-of-the-art performance on a wide range of standard benchmarks, including DAVIS, SegTrack and YouTubeObjects.


  • [1] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. In ICLR. 2015.
  • [2] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui. Visual object tracking using adaptive correlation filters. In CVPR, 2010.
  • [3] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool. One-shot video object segmentation. In CVPR, 2017.
  • [4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
  • [5] J. Cheng, Y.-H. Tsai, S. Wang, and M.-H. Yang. Segflow: Joint learning for video object segmentation and optical flow. In ICCV, 2017.
  • [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • [7] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient belief propagation for early vision. IJCV, 70(1):41–54, 2006.
  • [8] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In CVPR, 2010.
  • [9] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, 2014.
  • [10] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [13] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, 2017.
  • [14] S. D. Jain and K. Grauman. Supervoxel-consistent foreground propagation in video. In ECCV, 2014.
  • [15] V. Jampani, R. Gadde, and P. V. Gehler. Video propagation networks. In CVPR, 2017.
  • [16] A. Khoreva, R. Benenson, E. Ilg, T. Brox, and B. Schiele. Lucid data dreaming for object tracking. In CVPRW, 2017.
  • [17] T.-N. Le, K.-T. Nguyen, M.-H. Nguyen-Phan, T.-V. Ton, T.-A. Nguyen, X.-S. Trinh, Q.-H. Dinh, V.-T. Nguyen, A.-D. Duong, A. Sugimoto, T. V. Nguyen, and M.-T. Tran. Instance re-identification flow for video object segmentation. In CVPRW, 2017.
  • [18] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In ICCV, 2011.
  • [19] F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In ICCV, 2013.
  • [20] X. Li, Z. Liu, P. Luo, C. C. Loy, and X. Tang. Not all pixels are equal: difficulty-aware semantic segmentation via deep layer cascade. In CVPR, 2017.
  • [21] X. Li, Y. Qi, Z. Wang, K. Chen, Z. Liu, J. Shi, P. Luo, X. Tang, and C. C. Loy. Video object segmentation with re-identification. In CVPRW, 2017.
  • [22] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
  • [23] Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. Deep learning markov random field for semantic segmentation. TPAMI, 2017.
  • [24] N. Märki, F. Perazzi, O. Wang, and A. Sorkine-Hornung. Bilateral space video segmentation. In CVPR, 2016.
  • [25] A. Papazoglou and V. Ferrari. Fast object segmentation in unconstrained video. In ICCV, 2013.
  • [26] F. Perazzi, A. Khoreva, R. Benenson, B. Schiele, and A. Sorkine-Hornung. Learning video object segmentation from static images. In CVPR, 2017.
  • [27] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016.
  • [28] J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool. The 2017 davis challenge on video object segmentation. arXiv:1704.00675, 2017.
  • [29] A. Prest, C. Leistner, J. Civera, C. Schmid, and V. Ferrari. Learning object class detectors from weakly annotated video. In CVPR, 2012.
  • [30] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • [31] Y.-H. Tsai, M.-H. Yang, and M. J. Black. Video segmentation via object flow. In CVPR, 2016.
  • [32] J. Valmadre, L. Bertinetto, J. F. Henriques, A. Vedaldi, and P. H. Torr. End-to-end representation learning for correlation filter based tracking. In CVPR, 2017.
  • [33] P. Voigtlaender and B. Leibe. Online adaptation of convolutional neural networks for video object segmentation. In BMVC, 2017.
  • [34] F. Xiao and Y. Jae Lee. Track and segment: An iterative unsupervised approach for video object proposals. In CVPR, 2016.
  • [35] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. Joint detection and identification feature learning for person search. In CVPR, 2017.
  • [36] C. Xu, C. Xiong, and J. J. Corso. Streaming hierarchical video segmentation. In ECCV, 2012.
  • [37] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. In CVPR, 2016.
  • [38] J. S. Yoon, F. Rameau, J. Kim, S. Lee, S. Shin, and I. S. Kweon. Pixel-level matching for video object segmentation using convolutional neural networks. In CVPR, 2017.
  • [39] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
  • [40] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, H. Chang, and P. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.