We present a novel embedding approach for video instance segmentation. Our method learns a spatio-temporal embedding integrating cues from appearance, motion, and geometry; a 3D causal convolutional network models motion, and a monocular self-supervised depth loss models geometry. In this embedding space, video-pixels of the same instance are clustered together while being separated from other instances, to naturally track instances over time without any complex post-processing. Our network runs in real-time as our architecture is entirely causal - we do not incorporate information from future frames, contrary to previous methods. We show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.
Explicitly predicting the motion of actors in a dynamic scene is a critical component of intelligent systems. Humans can seamlessly track moving objects in their environment by using cues such as appearance, relative distance, and, most of all, temporal consistency. The world is rarely experienced in a static way: motion (or its absence) provides essential information for understanding a scene. Similarly, incorporating past context through a temporal model is essential to segment and track objects consistently over time and through occlusions.
From a computer vision perspective, understanding object motion involves segmenting instances, estimating depth, and tracking instances over time. Instance segmentation has gained traction with challenging datasets such as COCO (Lin et al., 2014), Cityscapes (Cordts et al., 2016) and Mapillary Vistas (Neuhold et al., 2017). Such datasets, which only contain single-frame annotations, do not allow the training of video models with temporally consistent instance segmentation, nor do they allow self-supervised monocular depth estimation, which requires consecutive frames. Yet, navigating in the real world requires temporally consistent segmentation and understanding of the 3D geometry of the other agents. More recently, a new dataset containing video instance segmentation annotations was released: the KITTI Multi-Object and Tracking Dataset (Voigtlaender et al., 2019). This dataset contains pixel-level instance segmentation on more than 8,000 video frames, which effectively enables the training of video instance segmentation models.
In this work, we propose a new spatio-temporal embedding loss that learns to map video-pixels to a high-dimensional space (see the video demo of our model). This space encourages video-pixels of the same instance to be close together and distinct from other instances. We show that this spatio-temporal embedding loss, jointly with a deep temporal convolutional neural network and a self-supervised depth loss, produces consistent instance segmentations over time. The temporal model is a causal 3D convolutional network (conditioned only on past frames to predict the current embedding) and is capable of real-time operation. Finally, we show that predicting depth improves the quality of the embedding, as the 3D geometry of an object constrains its future location, given that objects move smoothly in space.
To summarise our novel contributions, we:
introduce a new spatio-temporal embedding loss for video instance segmentation,
show that having a temporal model improves embedding consistency over time,
improve how the embedding disambiguates objects with a self-supervised monocular depth loss,
handle occlusions, contrary to previous IoU-based instance correspondence methods.
We demonstrate the efficacy of our method by advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset (Voigtlaender et al., 2019). An example of our model’s output is given by Figure 1.
Two main approaches exist for single-image instance segmentation: region-proposal based (He et al., 2017; Hu et al., 2018; Chen et al., 2018; Liu et al., 2018) and embedding based (Brabandere et al., 2017; Fathi et al., 2017; Kong and Fowlkes, 2018; Kendall et al., 2018). The former method relies on a region of interest proposal network that first predicts bounding boxes then estimates the mask of the object inside that bounding box. With such a strategy, a given pixel could belong to the overlap of many bounding boxes, and it is largely unclear how correspondence between pixels can be learned. We instead favour the embedding based method and extend it to space and time.
Capturing the inter-relations of objects using multi-modal cues (appearance, motion, interaction) is difficult, as showcased by the Multi-Object Tracking (MOT) challenge (Xiang et al., 2015). Sadeghian et al. (2017) and Son et al. (2017) learned a representation of objects that follows the "tracking-by-detection" paradigm, where the goal is to connect detections across video frames by finding the optimal assignment of a graph-based tracking formulation (i.e. each detection is a node, and an edge is the similarity score between two detections).
Collecting large-scale tracking datasets is necessary to train deep networks, but that process is expensive and time-consuming. Vondrick et al. (2018) introduced video colourisation as a self-supervised method to learn visual tracking. They constrained the colourisation problem of a grayscale image by learning to copy colours from a reference frame, with the pointing mechanism of the model acting as a tracker once it is fully trained. The colourisation model is more robust than optical flow based models, especially in complex natural scenes with fast motion, occlusion and dynamic backgrounds.
Voigtlaender et al. (2019) extended the task of multi-object tracking to multi-object tracking and segmentation (MOTS), by considering instance segmentations as opposed to 2D bounding boxes. Motivated by the saturation of the bounding box level tracking evaluations (Pont-Tuset et al., 2017), they introduced the KITTI MOTS dataset, which contains pixel-level instance segmentation on more than 8,000 video frames. They also trained a model which extends Mask R-CNN (He et al., 2017) by incorporating 3D convolutions to integrate temporal information, and by adding an association head that produces an association vector for each detection, inspired by person re-identification (Beyer et al., 2017). The temporal component of their model, however, is fairly shallow (one or two layers), and is not causal, as future frames are used to segment past frames. More recently, Yang et al. (2019) collected a large-scale dataset from short YouTube videos (3-6 seconds) with video instance segmentation labels, and Hu et al. (2019) introduced a densely annotated synthetic dataset with complex occlusions to learn how to estimate the spatial extent of objects beyond what is visible.
Contrary to methods relying on region proposal (He et al., 2017; Chen et al., 2018), embedding-based instance segmentation methods map the pixels of a given instance to a structured high dimensional space, overcoming several limitations of region-proposal methods: (i) each pixel belongs to one unique instance (no bounding box overlap); (ii) the number of detected objects can be arbitrarily large (not fixed by the number of proposals).
We propose a spatio-temporal embedding loss with three competing forces, similarly to Brabandere et al. (2017). The attraction force (Equation 1) encourages the video-pixel embeddings of a given instance to be close to the instance's embedding mean. The repulsion force (Equation 2) pushes the embedding mean of a given instance far from all other instances. Finally, the regularisation force (Equation 3) prevents the embedding from diverging from the origin.
Let us denote by $K$ the number of instances, and by $S_k$ the set of all video-pixels of instance $k$. For all $i \in S_k$, we denote by $e_i$ the embedding for pixel $i$ and by $\mu_k$ the mean embedding of instance $k$: $\mu_k = \frac{1}{|S_k|}\sum_{i \in S_k} e_i$. The embedding loss is given by:

$$L_a = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{|S_k|}\sum_{i \in S_k}\max\big(0,\ \lVert \mu_k - e_i \rVert_2 - \rho_a\big)^2 \quad (1)$$

$$L_r = \frac{1}{K(K-1)}\sum_{k=1}^{K}\sum_{\substack{l=1 \\ l \neq k}}^{K}\max\big(0,\ 2\rho_r - \lVert \mu_k - \mu_l \rVert_2\big)^2 \quad (2)$$

$$L_{reg} = \frac{1}{K}\sum_{k=1}^{K}\lVert \mu_k \rVert_2 \quad (3)$$
$\rho_a$ defines the attraction radius, constraining the embedding of each pixel to be within $\rho_a$ of its instance mean. $\rho_r$ is the repulsion radius, constraining the mean embeddings of two different instances to be at least $2\rho_r$ apart. Therefore, if we set $\rho_r > 2\rho_a$, a pixel embedding of an instance $k$ will be closer to all the pixel embeddings of instance $k$ than to the pixel embeddings of any other instance.
The spatio-temporal embedding loss is the weighted sum of the attraction, repulsion and regularisation forces:

$$L_{embedding} = \lambda_a L_a + \lambda_r L_r + \lambda_{reg} L_{reg}$$
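To make the three forces concrete, the following is a minimal NumPy sketch of the embedding loss. The function name, the default radii (`rho_a`, `rho_r`) and the loss weights are illustrative placeholders, not the paper's trained values, and per-pixel embeddings are assumed to be flattened into an (N, p) array.

```python
import numpy as np

def embedding_loss(embeddings, instance_ids, rho_a=0.5, rho_r=1.5,
                   w_attract=1.0, w_repel=1.0, w_reg=0.001):
    """Toy attraction/repulsion/regularisation loss over pixel embeddings.

    embeddings:   (N, p) array of video-pixel embeddings.
    instance_ids: (N,) integer instance label per pixel.
    """
    ids = np.unique(instance_ids)
    K = len(ids)
    means = np.stack([embeddings[instance_ids == k].mean(axis=0) for k in ids])

    # Attraction: hinge on the distance of each pixel to its instance mean.
    l_attract = 0.0
    for mu, k in zip(means, ids):
        d = np.linalg.norm(embeddings[instance_ids == k] - mu, axis=1)
        l_attract += np.mean(np.maximum(0.0, d - rho_a) ** 2)
    l_attract /= K

    # Repulsion: instance means should be at least 2 * rho_r apart.
    l_repel = 0.0
    if K > 1:
        for i in range(K):
            for j in range(K):
                if i != j:
                    d = np.linalg.norm(means[i] - means[j])
                    l_repel += np.maximum(0.0, 2 * rho_r - d) ** 2
        l_repel /= K * (K - 1)

    # Regularisation: keep the instance means close to the origin.
    l_reg = np.mean(np.linalg.norm(means, axis=1))

    return w_attract * l_attract + w_repel * l_repel + w_reg * l_reg
```

With well-separated instances the attraction and repulsion terms vanish and only the small regularisation term remains; pushing two instance means inside the repulsion radius makes the loss grow sharply.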
During inference, each pixel of the considered frame is assigned to an instance by randomly picking an unassigned pixel and aggregating close-by pixels with the mean shift algorithm (Comaniciu and Meer, 2002) until convergence. In the ideal case, with a test loss of zero, this will result in perfect instance segmentation.
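The inference procedure above can be sketched with a toy flat-kernel mean shift over the embedding space. This is an illustrative stand-in, not the paper's implementation: the function name and the `bandwidth` default are assumptions, and a real system would cluster only foreground pixels.

```python
import numpy as np

def cluster_embeddings(embeddings, bandwidth=0.5, max_iter=50, tol=1e-3):
    """Assign each pixel embedding to an instance via flat-kernel mean shift.

    Repeatedly picks an unassigned seed pixel, shifts it towards the local
    density mode, then labels every unassigned pixel within `bandwidth`
    of the converged mode as one instance (-1 = still unassigned).
    """
    n = len(embeddings)
    labels = np.full(n, -1)
    next_id = 0
    while (labels == -1).any():
        seed_idx = np.flatnonzero(labels == -1)[0]
        seed = embeddings[seed_idx]
        for _ in range(max_iter):
            in_window = np.linalg.norm(embeddings - seed, axis=1) < bandwidth
            new_seed = embeddings[in_window].mean(axis=0)
            if np.linalg.norm(new_seed - seed) < tol:
                break
            seed = new_seed
        members = (np.linalg.norm(embeddings - seed, axis=1) < bandwidth) \
                  & (labels == -1)
        members[seed_idx] = True  # guarantee progress even for isolated pixels
        labels[members] = next_id
        next_id += 1
    return labels
```

In the ideal case of a zero test loss, every pixel sits within the attraction radius of its instance mean and this clustering recovers the instances exactly.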
The relative distance of objects is a strong cue for segmenting instances in video. Knowing the 3D geometry of objects especially helps segmenting instances in a temporally consistent way, as the past position of an instance effectively constrains where it could be next.
Depth estimation with supervised methods requires a vast quantity of high quality annotated data, which is challenging to acquire in a range of environments. As we have access to a video instance segmentation dataset, we can use a self-supervised depth loss from monocular video, where the supervision comes from consecutive frames.
We train a depth network alongside a separate pose estimation network, with the hypothesis during training that scenes are mostly rigid, therefore assuming appearance change is mostly due to camera motion. Pixels that violate this assumption are masked from the view synthesis loss, as they would otherwise create holes of infinite depth during inference for objects that are typically seen in motion during training - more details in Section A.1. The training signal comes from novel view synthesis, i.e. the generation of a new image of the scene from a different camera pose. Let us denote by $\{I_1, \dots, I_N\}$ a sequence of images, with $I_t$ the target view and $I_s$ a source view. The view synthesis loss is given by:

$$L_{vs} = \sum_{s} pe\big(I_t, \hat{I}_{s \to t}\big)$$
with $\hat{I}_{s \to t}$ the synthesised view of $I_t$, obtained from source image $I_s$ using the predicted depth $D_t$ and the estimated camera transformation $T_{t \to s}$. The projection error $pe$ is a weighted sum of an $L_1$ distance, a Structural Similarity Index (SSIM) term and a smoothness regularisation term, as in Zhao et al. (2017). Let us denote by $p_t$ the coordinate of a pixel in the target image in homogeneous coordinates. Given the camera intrinsic matrix $K$, and the mapping $K^{-1}$ from the image plane to camera coordinates, the corresponding pixel in the source image is provided by:

$$p_s \sim K\, T_{t \to s}\, D_t(p_t)\, K^{-1} p_t$$

Since the projected coordinates $p_s$ take continuous values, we use the Spatial Transformer Network (Jaderberg et al., 2015) sampling mechanism to bilinearly interpolate the four neighbouring pixels to populate the reconstructed image.
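The reprojection step can be sketched for a single pixel as follows. This is a minimal NumPy illustration of the back-project/transform/project chain; the function name and argument layout are assumptions, and a real implementation would operate on whole images in batched tensors.

```python
import numpy as np

def reproject(p_t, depth, K, T):
    """Map a target pixel to its source-image location: p_s ~ K T D K^-1 p_t.

    p_t:   (3,) homogeneous pixel coordinate in the target image.
    depth: predicted depth D_t(p_t) at that pixel.
    K:     (3, 3) camera intrinsic matrix.
    T:     (4, 4) estimated target-to-source camera transformation.
    """
    cam = depth * (np.linalg.inv(K) @ p_t)  # back-project to 3D camera coords
    cam_h = np.append(cam, 1.0)             # homogeneous 3D point
    cam_s = (T @ cam_h)[:3]                 # move into the source camera frame
    p_s = K @ cam_s                         # project back onto the image plane
    return p_s[:2] / p_s[2]                 # perspective divide -> pixel coords
```

With an identity transformation the pixel maps to itself; a camera translation shifts the projected location, which is exactly the signal exploited by the view synthesis loss.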
Some pixels are visible in the target image but not in the source image, leading to a large projection error. As advocated by Godard et al. (2019), taking the per-pixel minimum projection error over source views, instead of summing, greatly reduces artifacts due to occlusion and results in sharper predictions. The resulting view synthesis loss is:

$$L_{vs} = \min_{s}\, pe\big(I_t, \hat{I}_{s \to t}\big)$$
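The effect of the per-pixel minimum can be shown with a toy example where each source view occludes a different pixel. Here a plain absolute difference stands in for the weighted L1/SSIM error of the paper; the function name is an assumption.

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum photometric error over synthesised source views.

    target:         (H, W) target image (grayscale toy example).
    warped_sources: list of (H, W) synthesised views of the target.
    A pixel only needs to be visible in one source view to yield a low
    error, which suppresses occlusion artifacts compared to summing.
    """
    errors = np.stack([np.abs(target - w) for w in warped_sources])  # (S, H, W)
    return errors.min(axis=0).mean()
```

In the test below, each warped view has one badly reconstructed (occluded) pixel, yet the minimum-based loss is zero because every pixel is correct in at least one view, while the summed error stays positive.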
The final video instance embedding loss is the weighted sum of the attraction, repulsion, regularisation and geometric view synthesis losses:
Our network contains three components: the encoder, the temporal model and the decoders. Each input frame $I_t$ is first encoded as a compact feature $x_t$, then the temporal model learns a rich spatio-temporal representation $z_t$, and finally the decoders output the instance embedding $y_t$ and the depth prediction $d_t$, as illustrated by Figure 2.
We use a ResNet-18 (He et al., 2016) as our encoder, which allows the network to run in real-time on sequences of images.
The model learns scene dynamics with a causal 3D convolutional network made of 3D residual convolutional blocks (convolving in both space and time, with residual connections). For a given time index $t$, the network only convolves over features from past frames to compute the temporal representation $z_t$. It therefore does not use future frames and is completely causal. The temporal model does not decimate the spatial dimension of the encoding, but slowly accumulates information over time from the previous encodings. It is trained efficiently with convolutions, as all input images are available during training, enabling parallel computation on GPUs. During inference, however, the model is inherently sequential, but can be made significantly faster by caching the convolutional features over time and eliminating redundant operations, as proposed by Paine et al. (2016) for WaveNet (Oord et al., 2016).
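The causal property can be illustrated with a 1-D toy stand-in for the 3D causal blocks, where left-padding the time axis guarantees that the output at time t never depends on future inputs. Scalar per-frame features and a single temporal filter are assumed simplifications.

```python
import numpy as np

def causal_temporal_conv(x, kernel):
    """Causal convolution over the time axis (1-D toy version).

    x:      (T,) sequence of per-frame scalar features.
    kernel: (k,) temporal filter weights for times [t-k+1, ..., t].
    Left-padding with zeros means output[t] only sees x[:t+1].
    """
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # pad with the (empty) past
    return np.array([padded[t:t + k] @ kernel for t in range(len(x))])
```

Modifying a future frame leaves all earlier outputs unchanged, which is the property that allows the cached, sequential inference described above.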
From the temporal representation $z_t$, the decoders output the instance embedding $y_t$ and the estimated depth $d_t$, with $p$ the embedding dimension and $H \times W$ the input image size.
The architecture of the Pose and Mask networks are given in Section A.1.
For each new frame, we first mask the background with our mask network, then we cluster the foreground embeddings with mean shift to discover dense regions, each cluster corresponding to one instance. Tracking instances then simply requires comparing the mean embedding of a newly segmented instance with previously segmented instances: a distance lower than the repulsion radius $\rho_r$ indicates a match.
The embeddings are accumulated over time, creating increasingly dense regions and resulting in a better clustering. To ensure that the pixel embeddings of a particular instance can smoothly vary over time, the embeddings have a life span corresponding to the sequence length used in the embedding loss.
Next we describe experimental evidence which demonstrates the performance of our method by advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset (Voigtlaender et al., 2019).
The KITTI Multi-Object Tracking and Segmentation (MOTS) dataset contains 8,008 frames with instance segmentation labels, resulting in a total of 26,899 annotated cars (see Table 1). It is composed of 21 scenes with consistent instance ID labels across time, allowing the training of video instance segmentation models. The frames are annotated at 10 frames per second, which is suitable for self-supervised monocular depth prediction.
[Table 1 - columns: Scenes | Frames | Annotations | Avg. # frames | Avg. # annotations]
The ApolloScape dataset (Huang et al., 2018) also contains video instance segmentation labels for 49,287 frames, but the annotations are not consistent in time, rendering the training of a temporal model impossible. NuScenes (Caesar et al., 2019) features 1,000 scenes of 20 seconds with annotations at 2Hz in a diverse range of environments (different weather, daytime, city) but only contains bounding box labels, failing to represent the fine-grained details of instance segmentation. Temporal instance segmentation is also available on short snippets of the DAVIS dataset (Pont-Tuset et al., 2017), but each snippet is recorded by a different camera and is too short to effectively learn a depth model. For this reason, we focus on the KITTI MOTS dataset – it is the only dataset that contains consistent video instance segmentation in a sufficient quantity to train deep models.
We halve the resolution of the input RGB images to our encoder. The spatio-temporal representation $z_t$ keeps the spatial resolution of the encoding, and $p$ denotes the embedding dimension. Except for the experiments in Table 4, we train with a sequence length of 5, which corresponds to 0.5 seconds of temporal context since the videos are recorded at 10Hz.
In the loss function, we set the attraction radius $\rho_a$ and repulsion radius $\rho_r$. We weight the losses with attraction and repulsion loss weights $\lambda_a$ and $\lambda_r$, regularisation loss weight $\lambda_{reg}$ and depth loss weight $\lambda_{depth}$.
In this section, we define multi-object tracking and segmentation metrics, measuring the quality of the segmentation as well as the consistency of the predictions over time. Let us denote by $H$ the set of predicted ids, by $GT$ the set of ground truth ids, and by $c$ the mapping from hypothesis segmentations to ground truth segmentations. $c$ maps a hypothesis mask $h$ to the ground truth mask it overlaps most, if their intersection-over-union exceeds 0.5, and leaves it unmatched otherwise:

$$c(h) = \begin{cases} \arg\max_{g \in GT} \text{IoU}(h, g), & \text{if } \max_{g \in GT} \text{IoU}(h, g) > 0.5 \\ \emptyset, & \text{otherwise} \end{cases}$$

We further define the following sets: $TP = \{h \in H : c(h) \neq \emptyset\}$ (true positives), $FP = \{h \in H : c(h) = \emptyset\}$ (false positives), $FN$ (false negatives, the unmatched ground truth masks), $IDS$ (the set of id switches), and the soft number of true positives: $\widetilde{TP} = \sum_{h \in TP} \text{IoU}(h, c(h))$.
Following Voigtlaender et al. (2019), we define the following MOTS metrics: the multi-object tracking and segmentation precision $\text{MOTSP} = \widetilde{TP} / |TP|$, the multi-object tracking and segmentation accuracy $\text{MOTSA} = (|TP| - |FP| - |IDS|) / |GT|$, and finally the soft multi-object tracking and segmentation accuracy $\text{sMOTSA} = (\widetilde{TP} - |FP| - |IDS|) / |GT|$, which measures segmentation as well as detection and tracking quality.
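The three metrics reduce to simple arithmetic once the per-sequence counts are available. The following sketch assumes those counts have already been computed; the function name is illustrative, and the number of ground truth masks equals true positives plus misses.

```python
def mots_metrics(tp, fp, ids, soft_tp, num_gt):
    """MOTS metrics from per-sequence counts (after Voigtlaender et al., 2019).

    tp, fp, ids: counts of true positives, false positives and id switches.
    soft_tp:     sum of IoUs of the true-positive matches.
    num_gt:      number of ground truth masks (= tp + false negatives).
    """
    motsp = soft_tp / tp                    # mask quality of matched detections
    motsa = (tp - fp - ids) / num_gt        # detection + tracking accuracy
    smotsa = (soft_tp - fp - ids) / num_gt  # soft, mask-aware variant
    return motsp, motsa, smotsa
```

For example, 8 matches of average IoU 0.8 with 1 false positive, 2 misses and 1 id switch over 10 ground truth masks give MOTSP 0.8, MOTSA 0.6 and sMOTSA 0.44.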
We compare our model to the following baselines for video instance segmentation and report the results in Table 2.
Single-frame embedding loss (Brabandere et al., 2017): the previous state-of-the-art method, where instance segmentations are propagated in time using intersection-over-union association.
Mask R-CNN (He et al., 2017): instances are propagated with intersection-over-union association.
Without temporal model: our spatio-temporal embedding loss, without the temporal model.
Without depth: our temporal model and spatio-temporal embedding loss, without the depth loss.
The static detection metrics (MOTSP and average precision) are evaluated image by image, without taking into account the temporal consistency of the instance segmentations. As the compared models (Without temporal model, Without depth, Ours) all use the same mask network, they show similar detection performance. However, when evaluating on metrics that measure temporal consistency (MOTSA and sMOTSA), our best model shows significant improvement over the baselines.
The variant without the temporal model performs poorly as it does not have any temporal context to learn a spatio-temporal embedding and therefore only relies on spatial appearance. The temporal model on the other hand learns with the temporal context and local motion, which results in a better embedding. Our model, which learns to predict both a spatio-temporal embedding and monocular depth, achieves the best performance. In addition to using cues from appearance and temporal context, estimating depth allows the network to use information from the relative distance of objects to disambiguate them. Finally, we observe that our model outperforms Mask R-CNN (He et al., 2017) on the temporal metrics (MOTSA and sMOTSA) even though the latter exhibits a higher detection accuracy, further demonstrating the temporal consistency quality of our spatio-temporal embedding.
Our model relies on segmenting the background to determine the pixel locations to consider for instance clustering when applying mean shift. We evaluate the impact of using the ground truth mask against our predicted mask in Table 3. The performance gain is significant, hinting that our model could be improved with a more powerful mask network.
Next, we evaluate the effect of clustering. In the ideal scenario, the validation loss would be zero and the mean shift clustering would be perfect. This scenario is unlikely, however, and the clustering algorithm is affected by noisy embeddings. We evaluate the effect of this noise by clustering with the ground-truth mean: we threshold with a fixed radius around the ground truth instance embedding mean. This also results in a boost in the evaluation metrics, but most interestingly, a model that uses both ground truth instance embedding mean clustering and the ground truth mask performs worse than a model using the ground truth mask and our clustering algorithm. This is because our clustering algorithm accumulates embeddings from past frames, creating an attraction force for the mean shift algorithm that enables instances to be matched more consistently.
[Table 3 - columns: GT Mean | GT Mask | MOTSA | sMOTSA | MOTSP | AP]
Our model learns a spatio-temporal embedding that enables clustering the video-pixels of each instance. Instance correspondence between frames is achieved by matching newly detected instances to previous instances when the mean embedding distance is below the repulsion radius $\rho_r$. Therefore, we can track instances for an arbitrarily long period of time, as long as the embedding of a given instance changes smoothly over time, which is likely the case as temporal context and depth evolve progressively.
However, when the network is trained over sequences of images that are too long, the learning of the embedding collapses. This is because the attraction loss term is detrimental between distant frames: it forces pixels from the same instance to have similar embeddings even when their appearance and depth are no longer similar. It also suggests that our model reasons over short-term motion cues more effectively than longer-term dynamics. This is seen experimentally in Table 4.
We show that our model can consistently segment instances over time in the following challenging scenarios: tracking through partial occlusion (Figure 2(a)) and full occlusion (Figure 2(c)), and continuous tracking through noisy detections (Figure 2(b)). Additional examples and failure cases of our model are shown in Section A.2 and in our video demo.
In each example, we show from left to right: RGB input image, ground truth instance segmentation, predicted instance segmentation, embedding visualised in 2D, embedding visualised in RGB and predicted monocular depth. The embedding is visualised in 2D and coloured with the results of the mean shift clustering. Each colour represents a different instance, the inner circle indicates the attraction radius from the instance mean embedding, and the outer circle represents the repulsion radius of each instance. Additionally, we also visualise the embedding spatially in 3D, by projecting its three principal components to an RGB image.
Finally, we show in Section A.2 that incorporating depth context greatly improves the quality of the embedding, especially in complex scenarios such as partial or total occlusion. We also observe that the embedding is much more structured when using depth, further validating that 3D geometry is essential for reasoning about dynamic agents in video.
We presented a new spatio-temporal embedding loss that generates consistent instance segmentation over time. The temporal network models the past temporal context and the depth network constrains the embedding to aid disambiguation between objects. We demonstrated that our model could effectively track occluded instances or instances with missed detections, by leveraging the temporal and depth context. Our method advanced the state-of-the-art at video instance segmentation on the KITTI Multi-Object and Tracking Dataset.
We report the details of each component of our model in this section. The number of parameters and layers of each module are in Table 5.
The encoder is a ResNet-18 (He et al., 2016) convolutional backbone with 128 output channels.
The temporal model contains 12 residual 3D convolutional blocks. Each residual block is the succession of: a 3D projection convolution that halves the number of channels, a 3D causal convolutional layer convolving over space and time, and a 3D projection convolution that restores the number of channels. The number of output channels of the temporal model is 128.
The decoders for instance embedding and depth estimation are identical: 7 convolutional layers, with [64, 64, 32, 32, 16, 16] channels in the first six, interleaved with 3 upsampling layers. The final convolutional layer contains $p$ channels for the instance embedding and one channel for depth.
During training, we remove from the photometric reprojection loss the pixels that violate the rigid scene assumption, i.e. the pixels whose appearance does not change between adjacent frames. We set the mask $\mu$ to only include pixels where the reprojection error is lower with the warped image $\hat{I}_{s \to t}$ than with the unwarped source image $I_s$:

$$\mu = \Big[\min_{s}\, pe\big(I_t, \hat{I}_{s \to t}\big) < \min_{s}\, pe\big(I_t, I_s\big)\Big]$$
The pose network is the succession of a ResNet-18 model and 4 convolutions with [256, 256, 256, 6] channels. The last feature map is averaged spatially to output a single 6-DoF transformation.
The mask network is trained separately to mask the background and is the succession of the Encoder and Decoder described above.
The following examples show qualitative results and failure examples of our video instance segmentation model on the KITTI Multi-Object and Tracking Dataset. From left to right: RGB input image, ground truth instance segmentation, predicted instance segmentation, embedding visualised in 2D, embedding visualised in RGB and predicted monocular depth.
We show that our model greatly benefits from depth estimation, with the learned embedding being more structured, and correctly tracking objects in difficult scenarios such as partial or total occlusion.