Towards Segmenting Everything That Moves

02/11/2019 · Achal Dave et al. · Carnegie Mellon University

Video analysis is the task of perceiving the world as it changes. Often, though, most of the world doesn't change all that much: it's boring. For many applications, such as action detection or robotic interaction, segmenting all moving objects is a crucial first step. While this problem has been well-studied in the field of spatiotemporal segmentation, virtually none of the prior works use learning-based approaches, despite significant advances in single-frame instance segmentation. We propose the first deep-learning based approach for video instance segmentation. Our two-stream model's architecture is based on Mask R-CNN, but additionally takes optical flow as input to identify moving objects. It then combines the motion and appearance cues to correct motion estimation mistakes and capture the full extent of objects. We show state-of-the-art results on the Freiburg Berkeley Motion Segmentation dataset, outperforming prior work by a wide margin. One potential worry with learning-based methods is that they might overfit to the particular type of objects that they have been trained on. While current recognition systems tend to be limited to a "closed world" of N objects on which they are trained, our model seems to segment almost anything that moves.


1 Introduction

Discovering all the relevant objects is key to understanding videos. Consider, for instance, a sample video clip from the Charades dataset [36] for action recognition in Figure 1. To understand that this is a clip of a person eating food, we need to reliably recognize these relevant objects (the person and what he’s eating), while ignoring the sea of irrelevant objects. Indeed, state-of-the-art methods for action recognition [41, 13, 46] rely heavily on object detection frameworks, such as Mask R-CNN [14], to localize these objects. However, these approaches have two key drawbacks: they only detect objects belonging to a closed set of categories, and they detect all instances of these categories, regardless of relevance. The left part of Figure 1 shows the output of Mask R-CNN on an illustrative example. The method fails to detect the half-eaten sandwich the person holds, perhaps because it does not precisely match the definition of the sandwich label in COCO. However, it readily detects a number of background objects, such as the backpack behind the table and the cups on it, that are irrelevant for the downstream task. Is there a principled way to segment all relevant objects from a video? We claim that object motion provides a very strong signal for this task, allowing us to build an approach for segmenting generic moving objects.

This problem is well-known in the computer vision community as spatiotemporal grouping or video instance segmentation, dating back at least to the work of [26] and [35]. Classical works in this field have generally eschewed top-down, learning-based approaches, focusing instead on bottom-up methods. Indeed, a key theme of these works has been a focus on the Gestaltian “common fate” principle of grouping: grouping together pixels that move similarly through clustering of long-term trajectories [29, 21]. In doing so, however, these approaches miss the opportunity of leveraging other, critical notions of grouping that naturally occur in human vision: appearance similarity, edge continuity, and, perhaps most importantly, the factor of prior experience [30, 42]. In our work, we synthesize these principles in one coherent method.

Figure 1: Static objects, such as the couch, table, and unused containers (left), furnish the contextual details of the world, but the dynamics of the scene are highlighted by objects that exhibit independent motion, such as the man and his sandwich (right). We propose a learning-based approach that focuses on these dynamic objects (right), separating them from the sea of static objects in the world.

For the related problem of foreground/background video segmentation, where classical approaches also relied on grouping pixels using various heuristics [9, 31], a number of learning-based approaches [38, 39, 19] have been proposed recently. They have substantially improved performance on benchmark datasets [33], but similar progress has not been seen for the instance-level setting. We take inspiration from these foreground/background approaches in our proposed model, but focus on the higher-level task of individually segmenting each moving object instance.

We develop a “two-stream” architecture inspired by [39] that utilizes both motion and appearance information to segment all the active objects in a video. We show qualitatively that the motion component of our model, trained entirely on synthetic data, behaves similarly to bottom-up grouping approaches on real videos, and focuses only on the active objects in a scene. Meanwhile, the appearance component is able to leverage large, image-based datasets to learn powerful object priors and, surprisingly, is able to segment objects beyond categories in its training set.

We validate the performance of the proposed approach using two datasets: the Freiburg Berkeley Motion Segmentation dataset (FBMS) [29] for video instance segmentation and DAVIS 2016 [33] for binary motion foreground-background segmentation. We observe that the standard measure used in FBMS does not penalize false positives, allowing trivial solutions to achieve top performance. We analyze the official metric in detail in Section 4 and propose a new, more informative evaluation.

We show that our approach outperforms the state-of-the-art on both datasets. On FBMS we show improvement both on the official and on the proposed measure. In particular, on the TestSet of FBMS we outperform the method of Keuper et al. [21] by 11.4% on the proposed measure.

To sum up, our contributions are three-fold: (1) we propose the first deep learning-based method for instance segmentation of generic moving objects from video; (2) we study the standard evaluation approach on the benchmark FBMS dataset and propose a more informative measure; (3) we report state-of-the-art results for both video instance and foreground/background segmentation. The code and trained models will be made publicly available upon acceptance.

2 Related Work

Video Instance Segmentation: Segmenting and tracking objects based on their motion has a rich history. An early work [35] proposed treating this task as a spatio-temporal grouping problem, a philosophy espoused by a number of more recent approaches, including [12, 7, 21], as well as [29], which introduced the Freiburg Berkeley Motion Segmentation dataset. In particular, these methods track each pixel individually with optical flow, encode the motion information of a pixel in a compact descriptor, and then obtain an instance segmentation by clustering the pixels based on motion similarity. Unlike these works, our approach is driven primarily by a top-down learning algorithm, followed by a simple linking step to generate spatio-temporal segmentations. The most relevant approach in this respect is [10], which trains a CNN to detect (but not segment) moving objects, and combines these detections with clustered pixel trajectories to derive segmentations. By contrast, our approach directly outputs segmentations at each frame, which we link together with an efficient tracker. Very recently, Bideau et al. [6] proposed to combine a heuristic-based motion segmentation method [28, 5] with a CNN trained for semantic segmentation for the task of moving object segmentation. Their method, however, does not handle discontinuous motion. In addition, their strong reliance on heuristic motion estimates allows our learning-based approach to outperform their method on FBMS by a wide margin. In concurrent work, Xie et al. [43] introduced a deep learning approach for motion segmentation that simultaneously segments and tracks moving objects using a recurrent neural network. By comparison, our method uses a simple, overlap-based tracker that performs competitively with the learned tracker from Xie et al. while producing significantly fewer false positive segmentations.

Foreground/Background Video Segmentation: Several works have focused on the binary version of the video segmentation task, separating all moving objects from the background. Early approaches [9, 31, 40, 24] relied on heuristics in the optical flow field, such as closed motion boundaries in [31], to identify moving objects. These initial estimates were then refined with appearance, utilizing external cues such as saliency maps [40] or object shape estimates [24]. Another line of work focused on building probabilistic models of moving objects using optical flow orientations [28, 5]. None of these methods are based on a robust learning framework, and thus they do not generalize well to unseen videos. The recent introduction of a standard dataset for this task, DAVIS 2016 [33], has led to renewed interest. More recent approaches propose deep models for directly estimating motion masks, as in [19, 38, 39]. These approaches are similar to ours in that they also use a two-stream architecture to separately process motion and appearance, but they are unable to individually segment object instances. Our method separately segments and tracks each individual moving object in a video.

Object Detection: The task of segmenting object instances from still images has seen immense success in recent years, bolstered by large, standard datasets such as COCO [25]. However, this standard task focuses on segmenting every instance of objects belonging to a fixed list of categories, leading to methods that are designed to be blind to objects that fall outside the categories in the training set.

Two recent works have focused on extending these models to detect generic objects. [16] aims to generalize segmentation models to new categories, but requires bounding box annotations for each new category. More relevant to our approach, [20] aims to detect all “object”-like regions in an image, outputting a binary mask of objectness. While we share their goal of segmenting objects never seen during training, our approach additionally provides instance level masks for each object.

3 Approach

We propose a two-stream instance segmentation model that uses appearance and motion information to segment all moving objects. As illustrated in Figure 3, we build upon the Mask R-CNN architecture by using an “appearance stream” (top) to extract features from RGB images and a “motion stream” (bottom) to extract features from optical flow. These features are combined and passed to the joint object detection and segmentation module.

Our approach shares inspiration with prior work that proposes two-stream approaches for object detection [13, 32, 11, 10], with two key differences. First, we design a novel region proposal module that learns to fuse both appearance and motion information to generate moving object detections. Second, to overcome the dearth of appropriate training data, we develop a stage-wise training strategy that allows us to leverage synthetic data to train our motion stream, image datasets to train our appearance stream, and a small amount of real video data to train the joint model.

We first discuss the architecture and training strategy for the motion and appearance streams individually, and then detail how to combine these streams into one coherent architecture. Finally, we describe a simple tracker that we use for linking detections across time.

3.1 Motion-based Segmentation

We start by training a motion-based instance segmentation model. As mentioned above, training this stream requires videos with segmentation masks for all moving objects, which are difficult to obtain. Fortunately, prior work has shown that synthetic data can be used for some low-level tasks, such as flow estimation [8] and binary motion segmentation [38]. Inspired by this, we train our motion stream on the FlyingThings3D dataset [27], which contains nearly 2,700 synthetically generated sequences of 3D objects traveling along randomized trajectories, captured with a camera also traveling along a random trajectory. The dataset provides groundtruth optical flow, as well as segmentations for both static and moving objects (see Figure 2). We train our motion stream using the moving instance labels from [38], treating all moving objects as a single category, and all other pixels, including static objects, as background. In our final approach, we start by training with the groundtruth optical flow, and fine-tune on estimated optical flow to aid in transferring to real-world videos. We discuss more details and variants of this approach in Section 5.3.1.

Figure 2: We train our motion stream on FlyingThings3D [27] (top left), our appearance stream on MS COCO [25] (top right), and our joint, two-stream model on DAVIS 2016 [33] and a subset of YouTube Video Object Segmentation [44] (bottom).

3.2 Appearance-based Segmentation

In order to incorporate appearance information, we next train an image-based object segmentation model that aims to segment the full extent of generic objects. Fortunately, large datasets exist for training image-based instance segmentation models. Here, we train on the MS COCO dataset [25], which contains approximately 120,000 training images with instance segmentation masks for each object in 80 categories. We could train our appearance stream following the standard Mask R-CNN training procedure, which jointly localizes and classifies each object in an image belonging to the 80 categories. However, this results in a model that, while proficient at segmenting 80 categories, is blind to objects from any other, novel category. Instead, we train an “objectness” Mask R-CNN by combining each of the 80 categories into a single “object” category. In Section 5.3.2, we will show that this “objectness” training (1) provides a significant improvement over standard training, and (2) leads to a model that generalizes surprisingly well to objects that are not labeled in MS COCO.
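For concreteness, the category merge can be implemented as a one-time rewrite of the COCO annotation file before training. The following Python sketch assumes the standard COCO instance-annotation JSON layout; the file paths and the choice of category id 1 are illustrative assumptions, since the paper does not specify how the merge is carried out.

import json

def make_objectness_annotations(src_json, dst_json):
    # Load the standard COCO instance annotations.
    with open(src_json) as f:
        coco = json.load(f)

    # Map every instance annotation to a single category id.
    for ann in coco["annotations"]:
        ann["category_id"] = 1

    # Replace the 80-category list with one generic "object" category.
    coco["categories"] = [{"id": 1, "name": "object", "supercategory": "object"}]

    with open(dst_json, "w") as f:
        json.dump(coco, f)

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    make_objectness_annotations(
        "annotations/instances_train2017.json",
        "annotations/instances_train2017_objectness.json",
    )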

3.3 Two-Stream Model

Figure 3: Our two-stream Mask R-CNN model uses an appearance stream (blue) and a motion stream (orange) to extract features from RGB and optical flow frames, respectively. Our region proposal network fuses features from the appearance and motion stream, which are then passed to the bounding box and mask regression heads.

Equipped with the individual appearance and motion streams, we now propose a two-stream architecture for fusing these information sources. In order to clearly describe our two-stream model, we take a brief detour to describe the Mask R-CNN architecture. Mask R-CNN contains three stages: (1) Feature extraction: a “backbone” network, such as ResNet [15], is used to extract features from an image. (2) Region proposal: A region proposal layer uses these features to select regions likely to contain an object. Finally, (3) Regression: for each proposed region, the corresponding backbone features are pooled to a fixed size, and fed as input to bounding box and mask regression heads.

To build a two-stream instance segmentation model, we extract the backbone from our individual appearance-based and motion-based segmentation models. Next, as depicted in Figure 3, we construct a “two-stream” Mask R-CNN that uses these two backbones, instead of a single backbone, to extract features from RGB frames (blue) and optical flow (orange). The features extracted from each of the backbones are concatenated and fed to a short series of convolutional layers that reduces the dimensionality to match that of Mask R-CNN, allowing us to maintain the architecture of stages (2) and (3). Intuitively, we expect the appearance stream to behave as a generic object detector, and the motion stream to help detect novel objects that the appearance stream may miss, as well as to filter out static objects.
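A minimal PyTorch sketch of this fusion step is shown below. It uses plain ResNet-50 trunks rather than the full detection backbone, fuses at a single feature level, and assumes a 256-channel output; these simplifications are ours for illustration, not details taken from the paper.

import torch
import torch.nn as nn
import torchvision

class TwoStreamFusion(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        # Appearance and motion backbones (ResNet-50 trunks for brevity).
        resnet_a = torchvision.models.resnet50(weights=None)
        self.appearance = nn.Sequential(*list(resnet_a.children())[:-2])
        resnet_m = torchvision.models.resnet50(weights=None)
        self.motion = nn.Sequential(*list(resnet_m.children())[:-2])
        # Short series of convolutions that reduces the concatenated
        # features back to the channel width a single backbone would produce.
        self.reduce = nn.Sequential(
            nn.Conv2d(2048 * 2, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, rgb, flow_image):
        # flow_image is optical flow rendered as a 3-channel image (Sec. 5.2).
        fused = torch.cat([self.appearance(rgb), self.motion(flow_image)], dim=1)
        return self.reduce(fused)  # features fed to the RPN and RoI heads

if __name__ == "__main__":
    model = TwoStreamFusion()
    feats = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
    print(feats.shape)  # torch.Size([1, 256, 7, 7])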

Although this may appear similar to prior approaches for building a two-stream detection model, it differs in a key detail: prior approaches obtain region proposals either only from appearance features [13, 11, 10], or from appearance and motion features individually [32]. By contrast, we propose a novel proposal module that learns to fuse motion and appearance features to find object-like regions.

We train our joint model on subsets of the DAVIS and YouTube Video Object Segmentation datasets (as detailed in Section 5.1). We experiment with various strategies for training this joint model in Section 5.3.3.

3.4 Tracking

So far, we have focused on segmenting moving objects in each frame of a video. To maintain object identities and to continue segmenting objects after they stop moving, we implement a simple, overlap-based tracker inspired by [3]. First, we remove all detections with confidence below a low threshold. On the first frame, all high-confidence detections (those whose score exceeds a higher threshold) are used to initialize a track, which we define simply as a sequence of linked detections. At each successive frame, we compute the mask intersection over union between the most recent segmentation of each active track and each object predicted at the current frame, and use Hungarian matching to assign predicted objects to tracks. Unmatched predictions are either discarded (if their score is below the initialization threshold) or used to initialize a new track (if their score is above it). Any track that has not been assigned a new object for a fixed number of frames is marked as inactive.
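The sketch below illustrates this overlap-based linking step, assuming detections arrive as per-frame lists of binary masks with scores. The threshold values and the maximum number of inactive frames are placeholders, not the values used in the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def track(frames, init_thresh=0.9, keep_thresh=0.7, max_inactive=10):
    """frames: list (over time) of lists of dicts with 'mask' (bool HxW) and 'score'."""
    tracks = []  # each track: {'masks': [...], 'last_frame': int}
    for t, detections in enumerate(frames):
        # Drop low-confidence detections.
        detections = [d for d in detections if d["score"] >= keep_thresh]
        active = [tr for tr in tracks if t - tr["last_frame"] <= max_inactive]
        # Cost = negative mask IoU between each active track's latest mask
        # and each detection; Hungarian matching picks the best assignment.
        if active and detections:
            cost = np.array([[-mask_iou(tr["masks"][-1], d["mask"])
                              for d in detections] for tr in active])
            rows, cols = linear_sum_assignment(cost)
        else:
            rows, cols = np.array([], int), np.array([], int)
        matched = set()
        for r, c in zip(rows, cols):
            if -cost[r, c] > 0:  # require some overlap to link
                active[r]["masks"].append(detections[c]["mask"])
                active[r]["last_frame"] = t
                matched.add(c)
        # Unmatched, high-confidence detections start new tracks.
        for c, d in enumerate(detections):
            if c not in matched and d["score"] >= init_thresh:
                tracks.append({"masks": [d["mask"]], "last_frame": t})
    return tracks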

Tracking static objects: To continue tracking moving objects when they stop moving, we need to be able to detect static objects. A naïve way to do this is to run the objectness model trained in Section 3.2 in parallel with our two-stream model at every frame. However, this would be computationally expensive. Fortunately, our appearance stream shares the backbone of the objectness model. Thus, we only need to apply the (inexpensive) stages (2) and (3) of the objectness model on the appearance features extracted by our two-stream network. Using this, we can efficiently output a set of moving and static object predictions for each frame in a video. We merge the two outputs by removing any predicted static object that overlaps with a predicted moving object. We use the same tracker described above, using only moving objects to initialize tracks.
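A short sketch of this merging rule follows; the mask-IoU threshold that defines "overlaps" is an assumption, since the paper does not state one.

import numpy as np

def merge_predictions(moving, static, overlap_thresh=0.5):
    """moving, static: lists of dicts with a boolean 'mask' (HxW)."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    # Drop any static prediction that overlaps a predicted moving object.
    kept_static = [s for s in static
                   if all(iou(s["mask"], m["mask"]) < overlap_thresh
                          for m in moving)]
    # Moving objects initialize tracks; remaining static objects can only
    # extend existing tracks (see the tracker above).
    return moving + kept_static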

4 Evaluation

To evaluate methods for video instance segmentation, we desire a metric that appropriately rewards segmenting and tracking moving objects, but penalizes the detection of static objects or background. While there has been a rich line of prior work related to our goal, standard metrics do not satisfy these criterion. We propose a novel metric for video instance segmentation that does satisfy these properties to facilitate comparing methods on this task.

The evaluation introduced with the Freiburg Berkeley Motion Segmentation dataset [29] was designed with a focus on grouping-based approaches, and does not penalize detecting static objects. Recently, Bideau et al. [4] tackled this issue by supplementing the FBMS evaluation with a measure of the average difference between the number of groundtruth moving objects and the number of predicted moving objects (ΔObj). However, this complicates comparisons of methods by relying on two separate evaluation metrics; by contrast, we modify the FBMS metric directly, computing a single F-measure that evaluates a method’s ability to detect all moving objects.

We base our metric on that from the Freiburg Berkeley Motion Segmentation [29] dataset. As depicted in Figure 4 (middle), this evaluation first matches each predicted segment with a groundtruth segment, and ignores any unmatched predicted segments. As a result, a method that perfectly segments only the labeled objects will have the same F-measure as a method that similarly segments the labeled objects but also outputs a number of false positive segmentations of static objects or background regions.

By contrast, our proposed metric, depicted in Figure 4 (right), appropriately rewards detecting moving objects while penalizing the detection of static, unlabeled objects. In our evaluation, candidate instances are matched to groundtruth segments and scored with intersection over union, including a penalty for leftover groundtruth segments as well as for unmatched predicted segments.

Figure 4: Left: we visualize a toy example with two predicted (red) segmentations and one groundtruth (blue) segmentation. While the original FBMS measure (middle) ignores predicted segments that do not match a groundtruth segment, such as the dashed circle, our proposed measure (right) penalizes all false positives.

More precisely, we describe our metric roughly following the notation in [29]. For each video, let $c_i$ be the pixels belonging to a predicted region $i$, and $g_j$ be all the pixels belonging to a groundtruth non-background region $j$. While [29] omits unlabeled pixels from evaluation, we include all pixels in the groundtruth.

Let $P_{ij}$ be the precision, $R_{ij}$ be the recall, and $F_{ij}$ be the F-measure corresponding to this pair of predicted and groundtruth regions, as follows:

$$P_{ij} = \frac{|c_i \cap g_j|}{|c_i|}, \qquad R_{ij} = \frac{|c_i \cap g_j|}{|g_j|}, \qquad F_{ij} = \frac{2 P_{ij} R_{ij}}{P_{ij} + R_{ij}}.$$

Following [29], we use the Hungarian algorithm to find a matching between predictions and groundtruth that maximizes the sum of the F-measure over all assignments. Let $g_{\sigma(i)}$ be the groundtruth matched to each predicted region $c_i$; for any $c_i$ that is not matched to a groundtruth cluster, $g_{\sigma(i)}$ is set to an empty region. We define our metric as follows:

$$P = \frac{\sum_i |c_i \cap g_{\sigma(i)}|}{\sum_i |c_i|}, \qquad R = \frac{\sum_i |c_i \cap g_{\sigma(i)}|}{\sum_j |g_j|}, \qquad F = \frac{2PR}{P + R}.$$

Note that any unlabeled pixel that appears in a predicted region will hurt the precision and F-measure, effectively penalizing the segmentation of static or unlabeled objects.
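For reference, the proposed measure as reconstructed above can be computed per video as follows. This is a sketch based on the equations in this section, not the official evaluation code; inputs are lists of boolean pixel masks for the predicted and groundtruth regions of one video.

import numpy as np
from scipy.optimize import linear_sum_assignment

def proposed_f_measure(pred_masks, gt_masks):
    # Pairwise intersections and per-region areas.
    inter = np.array([[np.logical_and(c, g).sum() for g in gt_masks]
                      for c in pred_masks], dtype=float)
    pred_area = np.array([c.sum() for c in pred_masks], dtype=float)
    gt_area = np.array([g.sum() for g in gt_masks], dtype=float)
    prec = inter / np.maximum(pred_area[:, None], 1)
    rec = inter / np.maximum(gt_area[None, :], 1)
    f = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
    # Match predictions to groundtruth to maximize the summed F-measure.
    rows, cols = linear_sum_assignment(-f)
    matched_inter = inter[rows, cols].sum()
    # Unmatched predictions and unmatched groundtruth still count in the
    # denominators, penalizing false positives and missed objects.
    P = matched_inter / max(pred_area.sum(), 1)
    R = matched_inter / max(gt_area.sum(), 1)
    return 2 * P * R / max(P + R, 1e-12)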

In our experiments, we report results using both the official FBMS measure as well as our proposed measure, and empirically show that reducing the detection of static objects does not improve results on the FBMS measure, but significantly improves accuracy on our proposed measure.

5 Experiments

We first analyze each component of our proposed model with experimental results. Next, we compare our approach to prior work on the task of motion foreground/background segmentation and video instance segmentation.

5.1 Datasets

An ideal dataset for training our model would contain a large number of videos where every moving object has labeled instance masks, and static objects are not labeled. Three candidate datasets exist for this task: YouTube Video Object Segmentation (YTVOS) [44], DAVIS 2016 [33], and FBMS [29]. While YTVOS contains over 3,000 short videos with instance segmentation labels, not all objects in these videos are necessarily labeled, and both moving as well as static objects may be labeled. The DAVIS 2016 dataset contains instance segmentation masks (provided with DAVIS 2017) for only the moving objects, but only contains 30 training videos. Finally, although FBMS contains a total of 59 sequences with labeled instance segmentation masks for moving objects, prior work evaluates on the entire dataset, preventing us from training on any sequences in the dataset in order to provide a fair comparison.

To overcome this lack of data, we use heterogeneous sources of data to train our model in a stagewise fashion. As described earlier, we use COCO [25], an image-based object segmentation dataset, to train our appearance stream. We use FlyingThings3D [27], a synthetic dataset of 2,700 videos of 3D objects rendered traveling along randomized trajectories, to train our motion stream. Finally, we create a subset of YTVOS [44], which we call ‘YTVOS-moving’, of about 500 videos where every moving object is labeled, and no static object is labeled. We use this subset along with DAVIS 2016 [33] to train our joint model.

5.2 Implementation Details

Network Architecture: Our two-stream model is built off of Mask R-CNN [14]

. The backbone is ResNet-50 throughout our experiments. We will publicly release the code and exact configuration for training our model, but we highlight some important details here. All of our models are trained using the publicly available PyTorch implementation of Detectron 

[2]. In general, we use the original hyper-parameters provided by the authors of Mask R-CNN [1]

. The backbone for every model, including the motion stream, is initialized using ImageNet 

[34] pre-training due to difficulties with training Mask R-CNN from scratch. When constructing our two-stream model, we initialize the weights of the bounding box and mask heads from those of the appearance-based segmentation model.

Training Regime: We train the appearance stream on COCO [25] to convergence (90,000 iterations). We train our motion stream on FlyingThings3D, first using groundtruth flow as input and then FlowNet2 flow [18], for 10,000 iterations each. We train the joint model on YTVOS and then on DAVIS, for 5,000 iterations each.

Tracking: We set a confidence threshold for initializing tracks, as described in Section 3.4, and remove any detections whose confidence falls below a lower threshold. We allow tracks to stay ‘active’ for a fixed number of frames, although we found the final results are fairly insensitive to this parameter. For FBMS, many videos require detecting objects before they move; to accomplish this, we first run our tracker forward, and then backwards in time.

Miscellaneous: We encode optical flow following [39]: for ease of use with image-based CNNs, we use a 3-channel image, where the first channel encodes the angle, the second channel encodes the magnitude at each pixel, and the last channel is empty. For extracting flow, we use the version of FlowNet2 trained on synthetic data and fine-tuned on real data. All visualizations throughout this paper use a single fixed confidence threshold.
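A sketch of this flow encoding is shown below. The scaling of angle and magnitude to 8-bit values and the magnitude cap are assumptions for illustration; the exact normalization follows [39].

import numpy as np

def encode_flow(flow, max_magnitude=20.0):
    """flow: HxWx2 array of (u, v) displacements -> HxWx3 uint8 image."""
    u, v = flow[..., 0], flow[..., 1]
    angle = (np.arctan2(v, u) + np.pi) / (2 * np.pi)          # in [0, 1]
    magnitude = np.clip(np.hypot(u, v) / max_magnitude, 0, 1)  # in [0, 1]
    encoded = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    encoded[..., 0] = (angle * 255).astype(np.uint8)      # channel 0: angle
    encoded[..., 1] = (magnitude * 255).astype(np.uint8)  # channel 1: magnitude
    # Channel 2 stays empty, matching the description in Section 5.2.
    return encoded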

5.3 Ablation analysis

Evaluation: We analyze our model by benchmarking various configurations on the DAVIS 2016 dataset [33]. For ablation, we found it helpful to use the standard detection mean average precision (mAP) metric [25] in place of video object segmentation metrics, which require tracking and obfuscate analysis of our architecture choices. We report both detection and segmentation mAP at an IoU threshold of 0.5.

5.3.1 Motion stream

To begin, we explore training strategies for the motion stream of our model. We train our motion stream on the FlyingThings3D dataset, as described in Section 3.1.

The FlyingThings3D dataset provides groundtruth flow, which we could use for training our motion stream. However, at inference time, we will only have access to noisy, estimated flow. In order to match flow in the real world, we start by estimating flow on FlyingThings3D using two optical flow estimation methods: FlowNet2 and LiteFlowNet. For both methods, we use the version of their model that is trained on synthetic data and fine-tuned on real data.

In Table 1, we compare three strategies for training on FlyingThings3D. We start by training using only FlowNet2 flow as input (“FlowNet2”). We hypothesize that training directly on noisy, estimated flow can lead to difficulties early in training. To overcome this, we train a variant starting with groundtruth flow, and fine-tune on FlowNet2 flow (“FlowNet2 ← Groundtruth” row). We find that this provides a significant improvement (from 40.5 to 43.2 detection mAP in Table 1). We also considered using a more recent flow estimation method, swapping out FlowNet2 for LiteFlowNet [17] (“LiteFlowNet ← Groundtruth” row). Surprisingly, we find that FlowNet2 provides significantly better segmentation, despite performing worse on standard flow estimation benchmarks. Qualitatively, we found that FlowNet2 produces sharper flow along object boundaries than LiteFlowNet, whose blurrier boundaries may make it difficult to localize objects.

Figure 5 shows qualitative results of the motion stream. Despite never being trained on a single real, segmentation labeled image, this model is able to group together parts that move alike, while separating objects with disparate motion.

Flow type Det @ 0.5 Seg @ 0.5
FlowNet2 40.5 23.9
FlowNet2 ← Groundtruth 43.2 25.0
LiteFlowNet ← Groundtruth 33.8 24.0
Table 1: Comparing training with different flow estimation methods on FlyingThings3D, reporting mAP on DAVIS ’16 validation. “← Groundtruth” means we first train with groundtruth (synthetic) flow. See Section 5.3.1 for details.
Figure 5: Despite being trained for segmentation only on synthetic data, our motion stream (visualized) is able to separately segment object parts in real objects. See Sec. 5.3.1 for details.

5.3.2 Appearance Stream

While our motion stream is proficient at grouping similarly-moving pixels, it lacks any priors for real world objects and will not hesitate to oversegment common objects, such as the man in Figure 5. To introduce these useful priors, we turn our attention to the appearance stream of our model.

As described in Section 3.2, we train our appearance stream on the COCO dataset [25]. We evaluate two variants of training. First, we train a standard, “class-specific” Mask R-CNN, that outputs a set of boxes and masks for each of the 80 categories in the COCO Dataset. At inference time, we combine the boxes and masks predicted for each category into a single “object” category. Second, we train an “objectness” Mask R-CNN, by collapsing all the categories in COCO to a single category before training.

We show results from these two variants in Table 2. Our “objectness” model significantly outperforms the standard “class-specific model” by nearly 8%. We further compare the two models qualitatively in Figure 6, noting that our objectness model generalizes to categories outside of COCO, unlike the standard model.

COCO Training Det @ 0.5 Seg @ 0.5
Class-specific 42.0 40.2
Objectness 49.8 48.3
Table 2: Comparison of training our appearance stream with and without category labels on MS COCO (Class-specific and Objectness, respectively), reporting mAP on DAVIS ’16 validation. Training without category labels allows the model to generalize beyond the training categories. See also Figure 6 and Section 5.3.2.
Figure 6: Unlike standard object detectors trained on COCO (left), our objectness model (right) detects objects from categories outside of COCO, such as the packet of film, a roll of quarters, a rubber duck, and a packet of fasteners. Both models visualized at confidence threshold of 0.7. See Section 5.3.2 for details.

5.3.3 Joint training

Finally, we combine our appearance and flow streams in a single two-stream model. We depict this model in Figure 3, and describe it in further detail in Section 3.3.

We experiment with a few different strategies for training this joint model. Throughout these experiments, we initialize the flow stream with the “FlowNet2 Groundtruth” model from Section 5.3.1. Generally, we use the objectness model trained in Section 5.3.2 to initialize the appearance stream, the bounding box and mask prediction heads, and the region proposal network (RPN). We show the results in Table 3.

We start by training this joint model directly on the DAVIS 2016 training set, which achieves 79.1% mAP. We note that even with joint-training, using the objectness model for initialization provides a significant boost over using a category-specific detector (73.8%). Next, to maintain the generalizability of the objectness model, we also train a variant where we freeze the weights of the appearance stream. This provides nearly a 3% improvement in accuracy. Similarly, to maintain the generic “grouping” nature of the synthetically-trained flow stream, we freeze the flow stream, providing us with an additional 2% improvement.

Finally, we hypothesize that while features from the flow stream are helpful for localizing generic moving objects, appearance information is sufficient for segmentation. To test this hypothesis, we train one last variant where the mask head uses only the features from the appearance stream, and freeze its weights to those of the objectness model. The results of this model are in Table 3.

Variant Det @ 0.5 Seg @ 0.5
Joint Training, class-specific 73.8 70.3
Joint Training, objectness 79.1 73.3
+ Freeze appearance 81.9 76.7
+ Freeze motion 83.7 76.4
+ Freeze mask 83.9 77.4
Table 3: Comparing different strategies for training our two-stream model, reporting mAP on DAVIS ’16 validation. Preserving knowledge from the individual streams is critical for good accuracy. See Section 5.3.3 for details.

Training Data: Next, we incorporate the ‘YTVOS-moving’ subset described in Section 5.1, and show results in Table 4. Unfortunately, videos in this subset contain primarily moving objects, with very few static objects throughout the dataset. Training only on these videos causes our model to detect both static and moving objects, leading to a significant (5%) drop in performance. However, fine-tuning this model on the DAVIS 16 training set leads to our best model, as seen in Table 4.

Joint Training Data Det @ 0.5 Seg @ 0.5
DAVIS 83.9 77.4
YTVOS-moving 79.9 75.8
DAVIS ← YTVOS-moving 85.1 77.9
Table 4: Comparing different training sources, reporting mAP on DAVIS ’16 validation. Although ‘YTVOS-moving’ is larger than DAVIS, its dearth of static objects leads to worse performance. Training on ‘YTVOS-moving’ and fine-tuning on DAVIS provides the best model. See Section 5.3.3 for details.

5.4 Comparison to prior work

5.4.1 DAVIS Motion Segmentation

Although our approach provides instance segmentation masks for each moving object in a video, we verify the ability of our method to detect moving objects by comparing to prior work on the motion foreground-background segmentation task on DAVIS 2016 [33]. We convert our instance segmentation output into binary motion masks by marking as foreground any pixel that belongs to a predicted instance with a score above a threshold, and report results in Table 5. Our final method compares favorably to the state-of-the-art approach, improving the region intersection over union (J) by a modest 0.8%, and the F-measure for boundary accuracy (F) by 1.5%.
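The conversion from instance output to a per-frame binary motion mask is a simple union over confident instances, sketched below; the score threshold is an illustrative placeholder, not the value used in the paper.

import numpy as np

def instances_to_foreground(instances, height, width, score_thresh=0.7):
    """instances: list of dicts with boolean 'mask' (HxW) and 'score'."""
    foreground = np.zeros((height, width), dtype=bool)
    for inst in instances:
        if inst["score"] >= score_thresh:
            foreground |= inst["mask"]  # union of confident instance masks
    return foreground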

Measure MP-F [38] FSG [19] ARP [23] LVO [39] Ours
J Mean 70.0 70.7 76.2 78.2 79.0
J Recall 85.0 83.5 91.1 89.1 92.7
J Decay 1.4 1.5 7.0 4.1 3.7
F Mean 65.9 65.3 70.6 75.9 78.4
F Recall 79.2 73.8 83.5 84.7 86.7
F Decay 2.5 1.8 7.9 3.5 5.4
T Mean 56.3 32.8 39.3 20.2 25.2
Table 5: DAVIS ’16 results on the validation set using the region intersection over union (J), boundary F-measure (F), and temporal stability (T) metrics.

5.4.2 FBMS Moving Only Segmentation

A recent line of work [4, 6] proposed evaluating a subtask of video instance segmentation: whereas video instance segmentation requires segmenting and tracking all instances that move at any point in the video, [6] focuses on segmenting and tracking instances only in frames where they move.

In order to evaluate this subtask, [6] uses an alternative labeling for FBMS introduced in [4], and supplements the official FBMS measure with a ΔObj metric, which indicates the average absolute difference between the number of predicted objects and groundtruth objects in each sequence. Intuitively, this penalizes false-positive detection of static objects or background; we refer the reader to [6] for further details. As with the official FBMS metric, the precision, recall, and F-measure do not penalize static detections.

As described in Section 3.4, our final approach uses the appearance stream to track objects even after they stop moving. For a fair comparison, we disable this component, applying tracking directly to the output of our two-stream model that detects only moving objects in each frame.

Table 6 shows that our method significantly reduces the number of false positive segmentations, as evidenced by the improvement in ΔObj of nearly 65% (from 4 to 1.4), and by our qualitative results in Figure 7. Due to improved segmentation boundaries, we also improve by 1.3% on F-measure.

P R F ΔObj
[6] 74.2 63.1 65.0 4
Ours-J 77.1 62.6 66.3 1.4
Table 6: FBMS TestSet results with the alternative evaluation from [4, 6]: precision (P), recall (R), F-measure (F), and difference in the number of predicted and groundtruth objects (ΔObj, lower is better).

5.4.3 FBMS Video Instance Segmentation

Finally, we present video instance segmentation evaluations on the Freiburg Berkeley Motion Segmentation (FBMS) dataset [29] on the official metric as well as our proposed metric (Section 4), followed by qualitative results. For both the official and proposed metrics, we report precision and recall as diagnostics to understand the properties of each model, and F-measure as the final measure for comparison.

Figure 7: Qualitative results from our final approach compared to two state-of-the-art methods. Prior works frequently exhibit over- or under-segmentation, such as the cat in the middle row from [21] and the dog in the top row from [6], respectively. Our method fuses motion and appearance information to successfully segment the full extent of moving objects.
Figure 8: Illustrative failure cases of our method. The most common failures are due to tracking or mis-classification of static objects near the camera as moving objects. Top: Our overlap-based tracker fails due to complete occlusions, such as the occlusion of the horse by the person in the middle frame. Bottom: Static objects near the camera are occasionally mistakenly detected as moving objects, such as the white car on the right.

Official metric: We evaluate our method against prior work in Table 7. As explained in Section 4, the official FBMS metric does not penalize detecting static objects. As expected, we note that our appearance stream alone, which segments both static and moving objects, performs best on this metric (‘Ours-A’), outperforming all prior work by 6.2% in F-measure on the TestSet, and 2.5% on the TrainingSet. Recall that we do not use either set for training, so the variation in performance between the two sets is largely due to random variation. For completeness, we also report the performance of our joint model (‘Ours-J’), which compares favorably to the state of the art despite the flawed metric. Our improvements over prior work on this metric are likely driven by improvements in segmentation boundaries, such as the illustrative examples in Figure 7.

Training set Test set
P R F N P R F N
[12] 79.2 47.6 59.4 4 77.1 43.0 55.2 5
[29] 81.5 63.2 71.2 16 74.9 60.1 66.7 20
[37] 83.0 70.1 76.0 23 77.9 59.1 67.3 15
[21] 86.9 71.3 78.4 25 87.6 70.2 77.9 25
[45] 89.5 70.7 79.0 26 91.5 64.8 75.8 27
[22] 93.0 72.7 81.6 29 95.9 65.5 77.9 28
Ours-A 89.2 79.0 83.8 43 88.6 80.4 84.3 40
Ours-J 85.1 78.5 81.7 39 80.8 75.8 78.2 39
Table 7: FBMS 59 results using the official metric [29], which does not penalize detecting unlabeled objects. We report precision (P), recall (R), F-measure (F), and the number of objects for which the F-measure exceeds 75% (N). Ours-A is our model’s appearance stream only, and Ours-J is our joint model. Both our joint model and our appearance stream outperform all prior work. As expected, since this metric does not penalize detecting static objects, our appearance model outperforms our joint model.

Proposed metric: Finally, we report results on our proposed metric in Table 8. Recall that our proposed metric generally follows the official metric, but additionally penalizes detection of static objects. We compare to all methods from Table 7 whose final results on FBMS were accessible or provided by the authors through personal communication. On this proposed metric, we first note that, as expected, our appearance model baseline performs significantly worse than our final, joint model: by 6% on the TestSet and 9.2% on the TrainingSet in F-measure. More importantly, our final model strongly outperforms prior work in F-measure, by 11.3% on the TestSet and 6.1% on the TrainingSet. In addition to improvements in segmentation boundaries, our approach effectively removes a number of spurious segmentations of background regions and object parts, as seen in Figure 7.

Training set Test set
P R F P R F
[37] 74.8 61.7 65.5 66.8 49.2 53.6
[21] 68.1 68.5 67.1 70.0 64.6 65.0
Ours-A 61.6 80.4 64.0 66.8 84.7 70.3
Ours-J 75.0 77.8 73.2 77.0 83.0 76.3
Table 8: FBMS 59 results on our proposed metric, with precision (P), recall (R), and F-measure (F). Ours-A is our model’s appearance stream only, while Ours-J is our joint model.

Qualitative results: We qualitatively compare our approach with Keuper et al. [21] and Bideau et al. [6] in Figure 7. In the top row of Figure 7, [21] oversegments the dog into multiple parts and [6] merges the dog with the background, whereas our approach fully segments the dog. Similarly, the cat in the middle row is over-segmented by [21] and under-segmented by [6], but well-segmented by our approach. In the final row, both [21] and [6] exhibit segmentation and tracking errors: the region corresponding to the man’s foot (colored yellow for Keuper et al. and red for Bideau et al.) is mistakenly tracked into a background region, thus segmenting part of the background as a moving object. Meanwhile, our object-based tracker fully segments the person and the tennis racket with high precision.

We also present illustrative failure cases of our method in Figure 8. The most common mistakes our method makes are tracking failures and the misclassification of static objects as moving. In the top row, the furthest horse (colored purple in the first frame) is completely occluded by the man in the middle frame, leading to an identity swap in the last frame, indicated by the color swap from purple to yellow. In the bottom row, the car further to the right is parked and not moving but is classified as moving by our approach. Our hypothesis, based on viewing similar failure cases, is that this is due to the proximity of the static car to the moving van, which our learning-based approach may have learnt to use as a cue for identifying moving objects.

6 Conclusion

We have proposed the first deep-learning based approach for video instance segmentation. Our method is a spatiotemporal network based on the Mask R-CNN architecture, but driven by motion. We evaluate our proposed approach on the FBMS and DAVIS benchmark datasets, achieving state-of-the-art results. In addition, we provide an extensive ablative analysis of the properties of our model. Finally, we provide qualitative evidence that our model can generalize to objects on which it was not trained, taking a step towards the goal of segmenting anything that moves.

Acknowledgements: We thank Pia Bideau for providing evaluation code, and Nadine Chang, Kenneth Marino, and Senthil Purushwalkam for reviewing early versions of this paper and for discussions. Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC00345. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.

References

  • [1] Detectron resnet-50 fpn 1x config. https://github.com/facebookresearch/Detectron/blob/8181a324796202e4afe7660b7458b7bf1e08cf8b/configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.
  • [2] Detectron.pytorch. https://github.com/roytseng-tw/Detectron.pytorch.
  • [3] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft. Simple online and realtime tracking. In ICIP, 2016.
  • [4] P. Bideau and E. Learned-Miller. A detailed rubric for motion segmentation. arXiv preprint arXiv:1610.10033, 2016.
  • [5] P. Bideau and E. Learned-Miller. It’s moving! a probabilistic model for causal motion segmentation in moving camera videos. In ECCV, 2016.
  • [6] P. Bideau, A. RoyChowdhury, R. R. Menon, and E. Learned-Miller. The best of both worlds: Combining CNNs and geometric constraints for hierarchical motion segmentation. In CVPR, 2018.
  • [7] T. Brox and J. Malik. Object segmentation by long term analysis of point trajectories. In ECCV, 2010.
  • [8] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, 2015.
  • [9] A. Faktor and M. Irani. Video segmentation by non-local consensus voting. In BMVC, 2014.
  • [10] K. Fragkiadaki, P. Arbelaez, P. Felsen, and J. Malik. Learning to segment moving objects in videos. In CVPR, 2015.
  • [11] G. Gkioxari and J. Malik. Finding action tubes. In CVPR, 2015.
  • [12] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In CVPR, 2010.
  • [13] C. Gu, C. Sun, S. Vijayanarasimhan, C. Pantofaru, D. A. Ross, G. Toderici, Y. Li, S. Ricco, R. Sukthankar, C. Schmid, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2017.
  • [14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [16] R. Hu, P. Dollár, K. He, T. Darrell, and R. Girshick. Learning to segment every thing. In CVPR, 2018.
  • [17] T.-W. Hui, X. Tang, and C. C. Loy. LiteFlowNet: A lightweight convolutional neural network for optical flow estimation. In CVPR, 2018.
  • [18] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, 2017.
  • [19] S. D. Jain, B. Xiong, and K. Grauman. Fusionseg: Learning to combine motion and appearance for fully automatic segmention of generic objects in videos. In CVPR, 2017.
  • [20] S. D. Jain, B. Xiong, and K. Grauman. Pixel objectness. arXiv preprint arXiv:1701.05349, 2017.
  • [21] M. Keuper, B. Andres, and T. Brox. Motion trajectory segmentation via minimum cost multicuts. In ICCV, 2015.
  • [22] N. Khan, B.-W. Hong, A. Yezzi, and G. Sundaramoorthi. Coarse-to-fine segmentation with shape-tailored continuum scale spaces. In CVPR, 2017.
  • [23] Y. J. Koh and C.-S. Kim. Primary object segmentation in videos based on region augmentation and reduction. In CVPR, 2017.
  • [24] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In ICCV, pages 1995–2002. IEEE, 2011.
  • [25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV. Springer, 2014.
  • [26] D. Marr. Vision: A computational investigation into the human representation and processing of visual information. MIT Press, Cambridge, Massachusetts, 1982.
  • [27] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR, 2016.
  • [28] M. Narayana, A. Hanson, and E. Learned-Miller. Coherent motion segmentation in moving camera videos using optical flow orientations. In ICCV, 2013.
  • [29] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. TPAMI, 36(6):1187–1200, 2014.
  • [30] S. E. Palmer. Vision science: Photons to phenomenology. MIT press, 1999.
  • [31] A. Papazoglou and V. Ferrari. Fast object segmentation in unconstrained video. In ICCV, 2013.
  • [32] X. Peng and C. Schmid. Multi-region two-stream r-cnn for action detection. In European Conference on Computer Vision, pages 744–759. Springer, 2016.
  • [33] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016.
  • [34] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
  • [35] J. Shi and J. Malik. Motion segmentation and tracking using normalized cuts. In ICCV, 1998.
  • [36] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In ECCV, 2016.
  • [37] B. Taylor, V. Karasev, and S. Soatto. Causal video object segmentation from persistence of occlusions. In CVPR, 2015.
  • [38] P. Tokmakov, K. Alahari, and C. Schmid. Learning motion patterns in videos. In CVPR, 2017.
  • [39] P. Tokmakov, C. Schmid, and K. Alahari. Learning to segment moving objects. IJCV, Sep 2018.
  • [40] W. Wang, J. Shen, and F. Porikli. Saliency-aware geodesic video object segmentation. In CVPR, pages 3395–3402, 2015.
  • [41] X. Wang and A. Gupta. Videos as space-time region graphs. In ECCV, 2018.
  • [42] M. Wertheimer. Untersuchungen zur lehre von der gestalt. ii. Psychologische forschung, 4(1):301–350, 1923.
  • [43] C. Xie, Y. Xiang, D. Fox, and Z. Harchaoui. Object discovery in videos as foreground motion clustering. arXiv preprint arXiv:1812.02772, 2018.
  • [44] N. Xu, L. Yang, Y. Fan, D. Yue, Y. Liang, J. Yang, and T. Huang. YouTube-VOS: A large-scale video object segmentation benchmark. In ECCV, 2018.
  • [45] Y. Yang, G. Sundaramoorthi, and S. Soatto. Self-occlusions and disocclusions in causal video object segmentation. In ICCV, 2015.
  • [46] Y. Zhang, P. Tokmakov, C. Schmid, and M. Hebert. A structured model for action detection. arXiv preprint arXiv:1812.03544, 2018.

7 Appendix

7.1 Qualitative Results

We show qualitative results comparing our motion model, appearance model, and joint model in Figure 9. We note that our motion stream behaves surprisingly like a bottom-up grouping method: it focuses on moving objects, but tends to oversegment deformable objects with independently moving parts. Meanwhile, our appearance stream successfully detects most objects in the scene, but is unable to separate the active and static objects in the scene. Our joint model is able to combine these two approaches to segment all the moving objects.

Figure 9: From left to right: Results using our motion stream, appearance stream, and joint two-stream model. Our motion stream successfully ignores static objects, but is liable to oversegment deformable objects due to training only on synthetic data. Combined with the appearance stream (middle), our joint two-stream approach is able to segment the full extent of all the moving objects.

7.2 DAVIS ’17 Video Instance Segmentation

We further evaluate our method on a subset of the DAVIS 2017 dataset for video instance segmentation. Unlike DAVIS 2016, the 2017 version provides instance-level masks for objects, but contains sequences with labeled static or unlabeled moving objects. For evaluation, we manually select 23 of 30 validation videos without these issues. We compare to [21], the best FBMS method we can obtain code for, with our proposed metric. Surprisingly, we find a much larger gap in performance on this dataset; while [21] achieves 45.5% on F-measure with our proposed metric, our approach improves significantly to 78.3%. We hypothesize that the significant gap is likely due to faster, more articulated motion and higher resolution videos in DAVIS 2017, which severely affect [21] but not our method.