Object Detection Free Instance Segmentation With Labeling Transformations

11/28/2016 ∙ by Long Jin, et al. ∙ University of California, San Diego

Instance segmentation has attracted recent attention in computer vision, and most existing methods in this domain include an object detection stage. In this paper, we study the intrinsic challenge of the instance segmentation problem, the presence of a quotient space (swapping the labels of different instances leads to the same result), and propose new methods that are object-proposal- and object-detection-free. We propose three alternative methods, namely pixel-based affinity mapping, superpixel-based affinity learning, and boundary-based component segmentation, all focusing on performing labeling transformations to cope with the quotient-space problem. By adopting fully convolutional neural network (FCN) style models, our framework attains competitive results on both the PASCAL dataset (object-centric) and the Gland dataset (texture-centric), which existing methods are not able to do. Our work also has advantages in its transparency, its simplicity, and its being entirely segmentation based.


1 Introduction

Object detection and semantic segmentation are both important tasks in computer vision. The goal of object detection is to predict the bounding box as well as the semantic class of each object, whereas semantic segmentation focuses on predicting the semantic class of each individual pixel in an image. In general, object detection does not provide accurate pixel-level object segmentation, and semantic segmentation does not distinguish between different objects of the same class.

Figure 1: Illustration of the instance segmentation problem we are tackling here. The first, the second, and the third column show input images, the semantic labeling maps, and the corresponding instance labeling maps, respectively. The first and the second row display typical examples from PASCAL VOC 2012 [10], and MICCAI 2015 Gland segmentation dataset [34], respectively.

Instance segmentation has recently become an important task, one more challenging than both object detection and semantic segmentation, since its goal is to label as well as provide a pixel-level segmentation for each object in the image. Figure 1 shows an illustration of instance segmentation on images from two benchmark datasets, namely PASCAL VOC [10] and the gland segmentation benchmark [34]. In the standard semantic labeling task [31], each pixel in an image is assigned an object class label, e.g. sky, road, or car; in the instance segmentation problem, each pixel is additionally associated with an instance ID indicating which object it belongs to. Therefore, there are two sets of labeling maps for each image: (1) a semantic class labeling map (a classification problem), and (2) an instance ID labeling map (a segmentation problem).

For the remainder of this paper, we refer to semantic labeling as the task of predicting the per-pixel object class label and to instance labeling as the task of assigning an instance ID to each region. On one hand, semantic labeling and instance labeling are two different tasks, as explained above. On the other hand, the two tasks are highly correlated. If every object instance forms a connected component (not cut into two disjoint parts by occlusion, which rarely happens in practice) and no two objects belonging to the same class touch each other, then a semantic labeling map can be readily converted into an instance labeling map by extracting the connected component of each instance. This is frequently the case, but it is not always true, as we can see in Figure 1.
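When those two conditions hold, the conversion is a plain connected-component pass. The sketch below (pure Python, 4-connectivity; all names are illustrative rather than from the authors' code) turns a semantic labeling map into an instance labeling map:

```python
from collections import deque

def semantic_to_instances(sem, background=0):
    """Convert a 2D semantic labeling map (list of lists of class IDs)
    into an instance labeling map by extracting 4-connected components
    of each non-background class. Instance IDs are arbitrary positive
    integers; any permutation of them denotes the same segmentation."""
    h, w = len(sem), len(sem[0])
    inst = [[0] * w for _ in range(h)]
    next_id = 1
    for y in range(h):
        for x in range(w):
            if sem[y][x] == background or inst[y][x] != 0:
                continue
            # Breadth-first flood fill over same-class neighbors.
            cls, q = sem[y][x], deque([(y, x)])
            inst[y][x] = next_id
            while q:
                cy, cx = q.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and sem[ny][nx] == cls and inst[ny][nx] == 0:
                        inst[ny][nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return inst
```

On a map with two disconnected blobs of the same class, the two blobs receive distinct instance IDs; two same-class objects that touch would, however, collapse into one component, which is exactly the failure case motivating the transformations below.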

Instance segmentation methods can be roughly divided into two categories: (1) detection-based methods that perform bounding box detection [25, 26, 8, 7, 15, 21, 19]; and (2) segmentation-based methods that use dense per-pixel features [32, 28, 40]. Detection-based methods typically perform proposal generation and object detection, followed by instance masking. These methods require objects to be tightly bounded by rectangular boxes, which is a strong condition to satisfy. Existing segmentation-based methods avoid the object detection stage, but their application domains are not as general as the PASCAL VOC scenarios, e.g. only one foreground object class in [28, 40], or the NYU dataset where additional depth information is available [32].

In this paper, we study the fundamental challenge in instance segmentation and aim to develop object-proposal- and detection-free methods. Our reason for avoiding the object detection process is twofold: (1) predicting object bounding boxes [12] and labeling individual pixels [23] are two different tasks that involve fairly different modules, which results in large complexity in both training and testing when the two are combined; (2) although fast object detectors [27] are being invented, they are fundamentally limited in making dense pixel-level predictions. Existing instance segmentation methods, however, follow a common thread of performing object detection first with additional segmentation masking [8, 26].

The fully convolutional neural network (FCN) family of models [23, 4, 5, 39] has shown significant benefit in making dense predictions, allowing end-to-end learning for pixel-level classification and regression. The problem of image instance segmentation, however, cannot be directly formulated as a classification problem. In instance segmentation, the region ID has no direct semantic meaning, and the exact ID assigned to each region is neither important nor unique: there exists a quotient space for the labeling; e.g. car_1 and car_2 can instead be named car_2 and car_1. So, we need to transform the instance labeling into a formulation that can be tackled as a classification/regression problem.

One possibility for leveraging the power of convolutional neural networks [17] in learning complex patterns is to formulate the segmentation problem as affinity learning [11]: pixels belonging to the same segment receive a high affinity score (e.g. 1) and those from different segments a low affinity score (e.g. 0). Moreover, the affinity learning problem can be posed at the pixel level or the superpixel level. CNN-family models have been successfully used to learn face similarity [36, 35]; FCN-like models have been used to find correspondences for the flow fields when matching two images [9]. However, learning pixel affinity within the same image is not an easy task in practice, considering the large number of pixel pairs, and some special care needs to be taken even for the foreground/background segregation problem [24].

In this paper, we propose an object-proposal- and detection-free, segmentation-based framework for the instance segmentation problem. Our framework consists of two paths, as shown in Figure 2. The semantic labeling path focuses on the pixel classification problem. To handle the instance labeling quotient space, we introduce a second path for instance labeling transformation and prediction. Here, we explore and develop three new methods for instance labeling transformation (shown in Figure 3): (1) pixel-based affinity mapping, (2) superpixel-based affinity learning, and (3) boundary-based component segmentation. The predictions from the instance labeling transformation path are integrated with the semantic labeling path to generate the instance segmentation results.

Our framework is object-proposal- and detection-free, which makes it simpler and more transparent than existing frameworks. We also propose three new instance labeling transformation methods. Using a similar network structure, we are able to produce competitive results on two different types of datasets, namely PASCAL VOC [10], which is object-centric, and the gland segmentation benchmark [34], which is texture-centric; existing methods fail to do this. For example, due to the assumption of a single object tightly bounded by a rectangle, detection-based methods are not able to achieve good performance on glands (see the gland instance segmentation example in Figure 1 and the experimental results in Section 5.2).

The significance of being object proposal- and detection-free

Our motivation for developing an object-proposal- and detection-free approach to instance segmentation is threefold. First, detection-based methods require objects to be non-deforming and capable of being tightly bounded by rectangular bounding boxes. These are strong preconditions that impose great limitations. For example, adapting the system of [8] to perform gland instance segmentation yields unfavorable results (see Section 5.2), since the glands undergo large deformations that are hard to bound with rectangular boxes. Second, detection-based methods involve many additional steps, making the algorithm complex and opaque. Third, proposal- and detection-based methods do not perform direct segmentation and have fundamental limitations when multiple objects interact and appear in the same bounding box.

Figure 2: Illustration of the pipeline of our overall method. Each input image is associated with a semantic labeling map and an instance labeling map, as shown in Figure 1. These two labeling maps are only available during training. Our method has two basic paths, both built on fully convolutional neural networks. The first path learns an FCN-like model for pixel-wise classification from the input image and its semantic labeling; the semantic labeling prediction can be used to create an initial segmentation via a connected-component extraction procedure. In the second path, we provide three alternative methods to transform the instance labeling: (1) pixel-based affinity mapping, (2) superpixel-based affinity learning, and (3) boundary-based component segmentation. Another FCN-like model is trained on the input image and the transformed instance labeling. The prediction from the instance labeling transformation path is further integrated with the initial segmentation extracted from the semantic labeling path to generate the final instance segmentation results. Some typical examples of the results are shown in Figure 5.

2 Related Work

Semantic Segmentation Deep convolutional neural networks have advanced the progress of semantic segmentation research. Some recent models focus on using fully convolutional networks for dense pixel-wise prediction [23, 4, 5, 39]. 'Atrous convolution' [4, 39] has proved effective for explicitly enlarging the receptive field. CRF models can be applied in post-processing [4] or in-network [41] to refine segmentation contours. Semantic segmentation is part of our proposed framework, so advances in these segmentation models will benefit our instance segmentation approach as well.

Affinity Learning Spectral clustering methods, such as normalized cuts (NCuts) [30], have been shown to be effective for unsupervised segmentation. However, computing an accurate affinity matrix is a keystone but also a handicap for spectral clustering algorithms. In the past, the affinity matrix was mostly calculated from hand-designed heuristics [30]. Previous attempts at learning the affinities [11], while inspiring, have not been shown to significantly benefit segmentation. A CNN-based approach has been used for foreground/background segregation [24]. Our method focuses on affinity mapping in order to transform the instance labeling.

Instance Segmentation Methods for instance segmentation can be roughly divided into two categories: (1) detection-based methods that perform bounding box detection [25, 26, 8, 7, 15, 21, 19]; and (2) segmentation-based methods that use dense per-pixel features [32, 28, 40].

Detection-based methods typically perform proposal generation and object detection, followed by instance masking. The DeepMask method [25] generates object proposals and then learns a binary mask for each detected bounding box; a cascade strategy is adopted in [8] for instance localization and then masking. PFN [21] is a proposal-free network; however, it still needs to regress the bounding box locations of the instance objects. MPA [22] aggregates mid-level patch segment predictions by sliding over the feature maps.

Existing segmentation-based methods avoid the object detection stage, but their application domains are not as general as the PASCAL VOC scenarios, e.g. assuming only one foreground object class in [28, 40], or using the NYU dataset where additional depth information is available [32]. In [28], a Hough space is created to perform the segmentation; in [32], a structured labeling formulation is proposed to explore a segmentation tree; in [40], regions/instances are assigned a depth order, allowing a classifier to be learned.

Our framework is object-proposal- and detection-free, which makes it simpler and more transparent than existing frameworks. In addition, our approach focuses on transformations of the instance labeling and proposes three different methods.

Figure 3: Illustration of our three proposed alternative instance labeling transformation methods. The first column is the input image, and the second column is its instance labeling ground truth. The three alternative transformation methods are shown in the third, fourth, and fifth columns, respectively. Transformation method 1 (pixel-based affinity mapping) maps the pixel-based local affinity pattern to a specific class (Section 4.1). Transformation method 2 (superpixel-based affinity learning) generates affinities of superpixel pairs (Section 4.2). Transformation method 3 (boundary-based component segmentation) produces the instance boundaries (Section 4.3).

3 Instance Segmentation Framework

In this section, we discuss our proposed framework for object-proposal-free and object-detection-free instance segmentation. Our framework consists of two fully convolutional neural network paths, as shown in Figure 2. The semantic labeling path focuses on the per-pixel classification problem, predicting the category label for each pixel. To tackle the instance labeling quotient-space problem, we introduce the other path for instance labeling transformation and prediction. Our system takes the original images as input and trains two FCN modules to predict the semantic labels and the instance label transformation maps separately. The predictions from the instance labeling transformation path are then integrated with the semantic labeling path to generate the instance segmentation results.

3.1 Semantic Labeling

The main goal of semantic labeling is to predict a detailed pixel-level mask for the different classes. FCN-based semantic segmentation models [23, 5, 39] have achieved rapid progress in recent years. In our framework, the semantic labeling path takes the input image and its semantic labeling as training data and performs per-pixel prediction. We adopt the 'DeepLab-Large-FOV' network structure [5] as the basic network for our semantic labeling path since it delivers state-of-the-art performance. This FCN network introduces 'atrous convolution', which can explicitly control the resolution of the feature responses and effectively enlarge the field of view of filters to incorporate larger context. Since we do not observe much difference in performance when using dense CRF post-processing, we drop the CRF processing step.

3.2 Instance Labeling Transformation

Swapping the instance IDs leads to the same instance segmentation result, which causes the quotient-space problem (see Section 4 for more details). We therefore need to transform the instance labeling to obtain a learnable formulation. We propose three transformation methods (shown in Figure 3), all of which are segmentation based and object-proposal- and detection-free. We find these methods effective for instance labeling transformation.

Affinity is a natural choice for measuring the coherence of pixels in an image. From the perspective of metric learning, pixels in the same region should have small distances, hence large affinities/similarities, while pixels in different regions should have large distances, hence small affinities/similarities. CNN classification offers a possible way to learn the affinity patterns in principle. We propose two affinity-based transformation methods, both of which are clustering based. The first is pixel-based affinity mapping and the other is superpixel-based affinity learning, discussed in more detail in Sections 4.1 and 4.2, respectively.

Object boundaries provide another perspective for instance labeling transformation as they provide the cues to locate the object instances. So, our third strategy is boundary-based component segmentation, which is a non-clustering method. Its idea is to leverage the instance boundary to separate different instances in the same component from semantic labeling prediction. We will discuss this method in Section 4.3.

3.3 Integrating Instance and Semantic Labeling

Our initial segmentation results come from the semantic labeling. We extract the connected components of the same category and regard them as potential instances. Then, we utilize our learned instance labeling prediction to separate neighboring instances, as shown in Figure 1. We notice that in many cases these connected components provide a good starting point for the instance segmentation task. So we start from the segmentation perspective, which differs from object-proposal-and-detection-based methods. Different instance labeling transformation methods integrate with the semantic labeling predictions in different ways, which will also be discussed in Section 4.

4 Instance Labeling Transformations

For the instance segmentation problem, we are given a training set S = {(X_i, Y_i, Z_i) : i = 1, ..., n}, where X_i refers to the i-th input image, Y_i refers to its corresponding semantic labeling, Z_i refers to its corresponding instance labeling, and n denotes the total number of images in the training set. For simplicity of notation, we subsequently drop the index i to focus on one input (X, Y, Z). Grouping pixels with the same instance label allows us to have another representation, regions denoted by R = {R_1, ..., R_K}, where K refers to the total number of regions in Z, R_j ∩ R_k = ∅ for j ≠ k, and the union of all R_k includes all the pixels in the image. It is worth mentioning that the mapping from Z to R is a many-to-one mapping with a quotient space, which is exactly one source of the challenges in instance segmentation being tackled here. We explain in detail below. For example, if we assign labels 1, ..., K to R_1, ..., R_K respectively, then we obtain one instance labeling for image X denoted as Z^(1) = {z_p^(1) : p = 1, ..., M}, where p indexes each pixel, z_p^(1) is the instance label of pixel p, and M is the total number of pixels of X. However, with the same R, if we instead assign the labels K, ..., 1 to R_1, ..., R_K, then we obtain a different instance labeling Z^(2). Therefore, we can see that both Z^(1) and Z^(2) refer to the same R. For this reason, we propose instance labeling transformation methods that map all the different Zs corresponding to the same/similar R into a new form, which can be tackled by a classification/regression algorithm (here an FCN-like model).

Next, we present three alternative methods to transform the instance labeling (as shown in Figure 3) and apply FCN-based models to make predictions.

4.1 Method 1: pixel-based affinity mapping

The first option is to perform clustering/segmentation based on pairwise pixel affinity to tackle the quotient-space problem. Given the ground truth of an instance labeling map, we can construct the global affinity matrix. However, learning and computing the global affinity is both computationally expensive and practically infeasible. For this reason, special care is taken in [24] for foreground and background segregation.

Here, we focus on local affinity patterns and develop a novel affinity learning method by transforming the instance labeling map into different classes. Thus, an instance labeling map is turned into a classification map, in which each pixel is associated with a class, indicating a local affinity pattern. In this way, we are able to train fully convolutional networks to perform pixel-based classification to obtain a comprehensive affinity map for the entire image.

Figure 4: Illustration of performing pixel-based affinity mapping. We use a fixed patch size, denoted m × m, in the paper. Given an instance labeling map, every image patch of size m × m is first extracted (shown in the first row); each labeling configuration has an associated affinity map defined on the labels of the m² pixels in the patch, resulting in an affinity matrix of size m² × m² (shown in the third row); the pixel index for the affinity matrix is shown on the left. Based on the element-wise distance of the resulting affinity matrices, they are clustered into 100 classes by k-means. The first and the second rows show that two different instance labelings of the same segmentation lead to the same affinity map, and hence the same class, which is desirable and makes learning a classification model feasible.

Local affinity mapping Consider the possible patterns that appear in an m × m image patch centered at pixel (x, y), where m is the side length of the patch and (x, y) are the coordinates of its center pixel. If up to l different instances can appear within the same patch, there is a combinatorial number, on the order of l^(m²), of possible cases. This is a large number even if l and m are relatively small. However, in the presence of the quotient space of the labeling maps, swapping the instance IDs of any two regions gives rise to an identical segmentation result. As Figure 4 shows, different configurations of instance labeling (rows 1 and 2) lead to the same affinity pattern (row 3). Therefore, we can leverage this property to simplify the problem at hand. Each labeling configuration is first associated with a local affinity map, defined on the labels of every pair of the m² pixels in the patch. Therefore, the size of the local affinity matrix is m² × m². Next, we adopt k-means to project the high-dimensional local affinity matrices into a fixed number of classes. This embedding assigns each local labeling pattern a class.
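The invariance that makes this mapping well defined is easy to verify: two instance labelings of a patch that differ only by an ID permutation produce the same local affinity matrix. A minimal sketch (illustrative names; the patch is given as a flat list of its m² instance labels):

```python
def local_affinity_matrix(patch_labels):
    """Given the instance labels of the m*m pixels of a patch (flattened
    to a list of length m*m), return the m^2 x m^2 binary affinity
    matrix: entry (i, j) is 1 iff pixels i and j share an instance."""
    n = len(patch_labels)
    return [[1 if patch_labels[i] == patch_labels[j] else 0
             for j in range(n)] for i in range(n)]
```

Vectorizing these matrices and running k-means over them, as described above, then yields the fixed set of affinity-pattern classes.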

Training and prediction To train our model, we construct the pixel-based affinity mapping from the ground-truth instance labeling. In our experiments, we first resize the original images to a smaller scale and then calculate the pixel-based affinity mapping in a fixed-size m × m patch. Considering the pairwise relationship between every pair of pixels in this patch, we construct the local affinity matrix of size m² × m². Then, we adopt k-means to identify the classes of these affinity patterns. This mapping process is shown in Figure 4. After the mapping, each pixel carries a new affinity label whose value indicates the local pattern. In this way, we formulate the problem as a pixel-based classification task, which is suitable for an FCN to solve. We modify the FCN network architecture [23] by removing the deconvolution layer and changing the stride of the last two pooling layers to 1. The cross-entropy loss is used to train the network.

In the next step, we use the learned pixel-based affinity labels to reconstruct the local affinity matrices: each class label corresponds to a local affinity matrix. We then use these pixel-based affinity labels to fill in the overall affinity matrix. In our setting, each pixel-based affinity label affects the affinity relationships among all the pixels in the same patch. Since two neighboring pixels can appear in several different patches, their pairwise pixel affinity is voted on using the local information gathered from all of these patches. Finally, we construct the overall affinity matrix, which is used to distinguish different instances.
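The voting step can be sketched as follows, assuming each predicted class decodes to an m² × m² local affinity matrix (e.g. its k-means cluster center), and averaging every pixel pair's affinity over all patches that contain both pixels (the helper names are hypothetical, not from the paper):

```python
def vote_pairwise_affinity(patch_preds, decode, m, w):
    """Fill the overall affinity matrix (stored sparsely as a dict) by
    voting. patch_preds maps a patch's top-left corner (y, x) to its
    predicted affinity-pattern class; decode(cls) returns the m^2 x m^2
    local affinity matrix for that class (e.g. its k-means center);
    w is the image width (global pixel index = y * w + x). Returns
    {(p, q): mean affinity} over pixel pairs seen in at least one patch."""
    votes, counts = {}, {}
    for (py, px), cls in patch_preds.items():
        aff = decode(cls)
        # Map flattened patch positions to global pixel indices.
        idx = [(py + d // m) * w + (px + d % m) for d in range(m * m)]
        for i in range(m * m):
            for j in range(i + 1, m * m):
                key = (min(idx[i], idx[j]), max(idx[i], idx[j]))
                votes[key] = votes.get(key, 0.0) + aff[i][j]
                counts[key] = counts.get(key, 0) + 1
    return {k: votes[k] / counts[k] for k in votes}
```

Pairs that never co-occur in a patch simply receive no entry, which is consistent with the method's local notion of affinity.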

Integration with semantic labeling For each single connected component in each category, we first extract the affinities of the pixels belonging to that component and then apply spectral clustering algorithms, such as the normalized cut (NCuts) algorithm [30], to the local affinity matrices. By varying the number of cuts, we can obtain multiple potential instance settings for that component. Finally, we put the local instance settings back into the global instance labeling map. After going through all the single connected components of all the categories, we obtain the refined instance segmentation results.
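A two-way normalized-cut-style bipartition of one component can be sketched in a few lines of NumPy by thresholding the eigenvector of the second-smallest eigenvalue of the symmetric normalized Laplacian; this is a simplified stand-in for the full NCuts algorithm of [30], not the paper's implementation:

```python
import numpy as np

def two_way_ncut(W):
    """Split the pixels of one connected component into two candidate
    instances. W: (n, n) symmetric affinity matrix. Returns a length-n
    array of 0/1 group assignments obtained by thresholding the Fiedler
    vector of the symmetric normalized Laplacian at its median."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]             # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > np.median(fiedler)).astype(int)
```

Recursing on the two halves (or taking more eigenvectors) gives the multiple candidate instance settings mentioned above.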

4.2 Method 2: superpixel-based affinity learning

As we discussed in Section 4.1, computing the global affinity matrix is not feasible in practice, given the huge number of pixel pairs. Our second transformation method is to directly measure the affinity between superpixels. Leveraging superpixels brings two benefits. First, superpixels are a more natural and meaningful representation of visual scenes, which simplifies the low-level pixel grouping process. Second, they reduce the complexity of the affinity computation.

After generating superpixels from the original images, we can assign an affinity to each superpixel pair. Specifically, for superpixels with the same instance label, we assign an affinity of 1, and for superpixels with different instance labels, an affinity of 0. This process is shown in the fourth image of Figure 3. So, for an input image, we obtain the affinity labels of all the superpixel pairs as a vector. In this way, we arrive at a formulation that can be learned by FCN-like models.
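Constructing these training targets can be sketched as follows: each superpixel takes the majority instance ID of its pixels, and each superpixel pair receives affinity 1 or 0 (illustrative names, assuming the superpixel map has already been computed, e.g. by SLIC):

```python
from collections import Counter
from itertools import combinations

def superpixel_affinity_labels(sp_map, inst_map):
    """sp_map, inst_map: 2D lists of equal shape holding superpixel IDs
    and ground-truth instance IDs. Returns {(s, t): 0 or 1} for every
    superpixel pair: 1 iff the two superpixels' majority instances agree."""
    votes = {}
    for sp_row, in_row in zip(sp_map, inst_map):
        for s, i in zip(sp_row, in_row):
            votes.setdefault(s, Counter())[i] += 1
    # Majority vote absorbs superpixels that slightly straddle a boundary.
    majority = {s: c.most_common(1)[0][0] for s, c in votes.items()}
    return {(s, t): int(majority[s] == majority[t])
            for s, t in combinations(sorted(majority), 2)}
```

The resulting pairwise labels are exactly the 0/1 targets described above, flattened into a vector for training.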

Training and prediction In our experiments, we adopt the SLIC method [1] to generate superpixels and then construct the affinities for the superpixel pairs. We modify the FCN network architecture [23] to learn the superpixel affinity. For each superpixel, we randomly select a fixed number of pixels inside it to calculate the average feature map from the FCN. Then, for a pair of superpixels, we concatenate their feature maps and pass them to two newly added convolution layers to make the affinity prediction. The cross-entropy loss is used to train the network, which is trained end-to-end. The learned superpixel affinities can be used to construct the local affinity matrix directly, without the voting process that pixel-based affinity mapping requires.

Integration with semantic labeling The integration process is similar to that of the pixel-based affinity mapping method. For each single connected component of each category, we first extract the superpixels as well as the affinity predictions for their pairs. Then, we construct the local affinity matrices, and spectral clustering algorithms, such as the normalized cut (NCuts) algorithm [30], are used to obtain potential instances. We then put the local instance settings back into the global instance labeling map. After going through all the single connected components of all the categories, the final instance segmentation results for the image are generated.

4.3 Method 3: boundary-based component segmentation

Another observation about the quotient space is that no matter how we swap the instance IDs, the instance boundaries remain the same. This intrinsic property makes it possible to transform the instance labeling into instance boundaries. With this transformation, edge detection methods can be applied to learn these specific instance boundaries. After obtaining the predicted instance boundaries, we can use them to separate the connected components from the semantic labeling. The benefit of boundary-based component segmentation is that we do not need to identify the number of instances, as clustering-based methods do. This edge-based component segmentation method is very simple and achieves results on par with the state-of-the-art performance on the gland segmentation dataset.

Training and prediction We first generate the instance boundary labels from the ground-truth instance labeling, as shown in Figure 3. Then, we adopt a recent FCN-based boundary detection model, HED [37], to learn the instance boundaries. HED provides a holistic network that learns multi-scale and multi-level features for boundary detection. It combines fully convolutional neural networks [23] and deeply-supervised nets [18] to perform image-to-image prediction. To the boundary maps computed by HED, we also apply a standard non-maximal suppression technique to obtain thinned boundaries. Though HED is designed for boundaries in natural images, our experiments show that this network structure is also very effective for locating specific instance boundaries.

Integration with semantic labeling We adopt a simple method to integrate the results from the predicted instance boundaries and the semantic labeling. For the pixels that are predicted as boundaries, we simply assign their corresponding pixels in the semantic labeling the background label. Thus, the predicted boundaries can separate instances in the same connected component of the semantic labeling whenever there exists a complete boundary inside the component. Finally, we extract all the connected components in the updated semantic labeling map to generate the instance segmentation results.
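This integration step amounts to suppressing boundary pixels and re-extracting connected components, sketched below in pure Python (4-connectivity; the names are illustrative, not from the authors' code):

```python
from collections import deque

def separate_by_boundaries(sem, boundary, background=0):
    """Suppress predicted instance-boundary pixels (boundary[y][x] == 1)
    to background in the semantic map, then re-extract 4-connected
    components so that a complete interior boundary splits a component
    into separate instances. Returns the new instance labeling map."""
    h, w = len(sem), len(sem[0])
    cut = [[background if boundary[y][x] else sem[y][x] for x in range(w)]
           for y in range(h)]
    inst = [[0] * w for _ in range(h)]
    next_id = 1
    for y in range(h):
        for x in range(w):
            if cut[y][x] == background or inst[y][x]:
                continue
            cls, q = cut[y][x], deque([(y, x)])
            inst[y][x] = next_id
            while q:
                cy, cx = q.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and cut[ny][nx] == cls and inst[ny][nx] == 0:
                        inst[ny][nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return inst
```

Note that an incomplete boundary leaves the component connected, which is the main failure mode of this strategy.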

5 Experimental Results

In this section we evaluate the performance of our proposed approach on two instance segmentation benchmarks, PASCAL VOC 2012 [10] and the MICCAI 2015 gland segmentation dataset [34]. These two datasets are object-centric and texture-centric, respectively. For training and fine-tuning our network, our system is built upon the Caffe platform [16]. We use the released VGG-16 [33] model to initialize the convolution layers in our network, while the other, new layers are randomly initialized by sampling from a zero-mean Gaussian distribution. Our training complexity is similar to that of the reported FCN-like models. Our testing speed, in particular for methods 2 and 3, is very fast (less than 1 second per image).

5.1 PASCAL VOC 2012

We evaluate our approach on the PASCAL VOC 2012 dataset [10]. We use the augmented dataset from SBD [13] and collect the instance labels from [10] and [13] for training. At the test stage, we measure our performance on the PASCAL VOC 2012 segmentation validation set. Two standard evaluation metrics, AP^r and AR@N, are used for comparison.

The first evaluation metric is AP^r, which measures the average precision under a 0.5 IoU overlap with the ground-truth segmentation. The evaluation results are summarized in Table 1. Here, FrontEnd (FE) refers to extracting connected components directly from the semantic labeling predictions. Boundary-based component segmentation (Method 3) achieves the best performance among our three transformation methods. Also, all three transformation methods outperform the detection-based models SDS [14] and the method in [6]. However, they are worse than the recent models PFN [21], MPA [22], and R2-IOS [20]. One reason is that it is hard to give an accurate number of instances to the clustering-based methods (Methods 1 and 2). Also, we simply use the area of each instance as its score for evaluation, which cannot be optimal. Our framework is object-proposal-free and detection-free, and much simpler than their models, which have multiple components.
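The AP^r protocol rests on IoU-based matching of predicted masks to ground truth. A simplified sketch of that matching step (greedy by score, each ground truth matched at most once; illustrative names, not the benchmark's evaluation code):

```python
def iou(a, b):
    """IoU of two instance masks given as sets of pixel coordinates."""
    return len(a & b) / len(a | b)

def match_at_iou(preds, gts, thresh=0.5):
    """Greedily match predicted masks (as (score, pixel-set) pairs,
    visited highest score first) to ground-truth masks at an IoU
    threshold; each ground truth may be matched once. Returns the
    per-prediction true/false-positive flags from which precision,
    recall, and hence AP^r or AR are computed."""
    used, flags = set(), []
    for _, p in sorted(preds, key=lambda sp: -sp[0]):
        best, best_iou = None, thresh
        for gi, g in enumerate(gts):
            if gi not in used and iou(p, g) >= best_iou:
                best, best_iou = gi, iou(p, g)
        flags.append(best is not None)
        if best is not None:
            used.add(best)
    return flags
```

AR@N applies the same matching with at most N predictions per image and averages recall over thresholds from 0.5 to 1.0.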

Method (AP^r)
Proposal-and-detection based
SDS [14]
Chen et al. [6]
R2-IOS [20]
PFN unified [21] (w/ object localization)
PFN independent [21] (w/ object localization)
MPA 3-scale [22] (w/ sliding window)
Segmentation based
Direct labeling (FE [4])
FE+Method 1 (ours)
FE+Method 2 (ours)
FE+Method 3 (ours)
Table 1: Comparison on the PASCAL VOC instance segmentation validation set based on . Models are grouped into proposal-and-detection based and segmentation based. FrontEnd (FE) refers to extracting connected components directly from semantic labeling prediction. For the transformation methods, Method 1 refers to pixel-based affinity mapping, Method 2 refers to superpixel-based affinity learning, and Method 3 refers to boundary-based component segmentation. Our framework is object proposal free and detection free, and much simpler than other methods, which have multiple components. Note that PFN uses object localization and MPA has a sliding window procedure, which are both object detection like.

The second evaluation metric is AR@N, which measures the average recall over IoU overlap thresholds from 0.5 to 1.0. The evaluation results are summarized in Table 2. Our proposed framework achieves results comparable with the proposal-based methods, especially in AR@10. Since our framework is object proposal- and detection-free, it does not generate hundreds of proposals, leading to a relatively lower score on AR@100.
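For concreteness, a simplified sketch of the AR@N computation is shown below. This is our own illustrative version, not the official evaluation code: it matches each ground-truth instance to any of the top-N predictions at each IoU threshold and averages the recall over thresholds from 0.5 to just below 1.0:

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def average_recall(pred_masks, gt_masks, thresholds=np.arange(0.5, 1.0, 0.05)):
    """AR@N sketch: fraction of ground-truth instances recalled by the
    top-N predictions (`pred_masks` is already truncated to N),
    averaged over IoU thresholds in [0.5, 1.0)."""
    recalls = []
    for t in thresholds:
        hit = sum(
            any(mask_iou(g, p) >= t for p in pred_masks) for g in gt_masks
        )
        recalls.append(hit / len(gt_masks))
    return float(np.mean(recalls))
```

This makes the AR@100 effect described above easy to see: a method that outputs only a handful of segments per image cannot benefit from the extra recall that hundreds of proposals provide at N = 100.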

Method AR@10 (%) AR@100 (%)
Proposal-and-detection based
SS [14]
MCG [2]
DeepMask [25]
MNC [8]
InstanceFCN [7]
Segmentation based
Direct labeling (FE [4])
FE+Method 1 (ours)
FE+Method 2 (ours)
FE+Method 3 (ours)
Table 2: Comparison on the PASCAL VOC instance segmentation validation set based on AR@N. Models are grouped into proposal-and-detection based and segmentation based. FrontEnd (FE) refers to extracting connected components directly from the semantic labeling prediction. For the transformation methods, Method 1 refers to pixel-based affinity mapping, Method 2 to superpixel-based affinity learning, and Method 3 to boundary-based component segmentation. Our method is proposal-free and hence does not generate hundreds of proposals, which leads to a relatively lower AR@100. Our fully segmentation-based AR@10 result is comparable with approaches that employ sliding windows for object detection. We did not find results with AR measures reported for PFN [21] or MPA [22].

Example instance segmentation results are shown in Figure 5. We observe that boundary-based component segmentation (Method 3) locates object boundaries more accurately. In the second example, the boundary-based method even corrects a mistake in the semantic segmentation by separating the spurious region around the cat's ear. The affinity-based methods, however, may fail to separate object instances accurately in some cases.

Figure 5: Example results of instance segmentation on the PASCAL VOC 2012 dataset [10] and the gland segmentation dataset [34]. The first through sixth columns respectively show the input image, its ground-truth instance labeling, the instance prediction from connected components of the semantic segmentation, Method 1 (pixel-based affinity mapping), Method 2 (superpixel-based affinity learning), and Method 3 (boundary-based component segmentation).
Method F1 ↑ ObjectDice ↑ ObjectHausdorff ↓
Part A Part B Part A Part B Part A Part B
Proposal-and-detection based
SDS [14]
HyperColumn [15]
MNC [8]
Freiburg2 [29]
Xu et al. [38]
Segmentation based
CUMedVision2 [3]
Direct labeling (FE [4])
FE+Method 1 (ours)
FE+Method 2 (ours)
FE+Method 3 (ours)
Table 3: Comparison on the gland instance segmentation dataset based on the challenge measures [34]. We report results on Part A and Part B. ↑ indicates that a higher value is better and ↓ indicates that a lower value is better. FrontEnd (FE) refers to extracting connected components directly from the semantic labeling prediction. Method 1 is pixel-based affinity mapping, Method 2 is superpixel-based affinity learning, and Method 3 is boundary-based component segmentation. The object detection based method MNC [8] produces a much worse result than ours, for the reason discussed before: detection-based methods assume rectangular objects and have difficulty with objects of non-rigid shape. The results of our framework are on par with the state-of-the-art performance on this dataset. Xu et al. [38] is much more complex than ours (it consists of multiple modules with both semantic labeling and object detection) but reports performance very similar to ours.

5.2 MICCAI 2015 Gland Segmentation Dataset

The MICCAI 2015 Gland Segmentation Challenge dataset [34] consists of 165 gland instance segmentation images, with 85 for training and 80 for testing. This dataset is texture-centric. We follow the data augmentation strategy in [38], which includes horizontal flipping, rotation, sinusoidal transformation, pincushion transformation, and shear transformation.
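A few of these augmentations can be sketched as follows. This is an illustrative version under our own assumptions (the function and its parameters are ours; the sinusoidal and pincushion warps are omitted for brevity), using `scipy.ndimage`. The key detail is that the label map must be warped with nearest-neighbor interpolation so that instance ids are not blended:

```python
import numpy as np
from scipy import ndimage

def augment(image, label, angle, shear):
    """Sketch of part of the augmentation pipeline: horizontal flip,
    rotation, and shear, applied jointly to a 2D image and its label
    map. Labels use order=0 (nearest neighbor) to stay integer-valued."""
    pairs = []
    # horizontal flip
    pairs.append((image[:, ::-1], label[:, ::-1]))
    # rotation; reshape=False keeps the original array size
    pairs.append((ndimage.rotate(image, angle, reshape=False, order=1),
                  ndimage.rotate(label, angle, reshape=False, order=0)))
    # shear via the affine matrix [[1, shear], [0, 1]]
    m = np.array([[1.0, shear], [0.0, 1.0]])
    pairs.append((ndimage.affine_transform(image, m, order=1),
                  ndimage.affine_transform(label, m, order=0)))
    return pairs
```

Each call returns three (image, label) pairs that can be added to the training set alongside the original.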

We use three standard metrics from this challenge for evaluation: F1 measures detection performance, ObjectDice measures segmentation performance, and ObjectHausdorff measures shape similarity. The results are summarized in Table 3. Our proposed framework achieves better performance than the detection-based methods [14, 15, 8]. This is because glands undergo large shape deformations and are therefore hard to bound with rectangular boxes, which is a strong assumption of detection-based methods. Among the three transformation methods, boundary-based component segmentation (Method 3) achieves the best performance, on par with the state-of-the-art methods [38, 3] on this dataset. The method of Xu et al. [38] is much more complex than ours (it consists of multiple modules with both semantic labeling and object detection) but achieves similar performance. Example instance segmentation results are shown in Fig. 5.
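To make the object-level nature of these metrics concrete, the following is a simplified sketch of ObjectDice in the spirit of the challenge definition [34] (our own illustrative version, not the official evaluation code): each object is matched to the maximally overlapping object on the other side, per-object Dice scores are weighted by object area, and the two directions are averaged:

```python
import numpy as np

def dice(a, b):
    """Standard Dice coefficient of two boolean masks."""
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else 0.0

def object_dice(pred_masks, gt_masks):
    """ObjectDice sketch: area-weighted Dice over maximal-overlap
    matches, symmetrized over predictions and ground truth.
    Assumes both lists are non-empty."""
    def one_way(src, dst):
        total = sum(m.sum() for m in src)
        score = 0.0
        for m in src:
            # match to the object with the largest pixel overlap
            best = max(dst, key=lambda d: np.logical_and(m, d).sum())
            score += (m.sum() / total) * dice(m, best)
        return score
    return 0.5 * (one_way(pred_masks, gt_masks) + one_way(gt_masks, pred_masks))
```

A merged prediction covering two ground-truth glands is penalized in both directions, which is why FE alone scores poorly on this metric while the transformation methods recover.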

6 Conclusions

We have proposed an object detection free instance segmentation approach with three alternative methods for performing instance labeling transformations. We show competitive results on both the PASCAL VOC 2012 and gland segmentation datasets. Our methods have the desirable properties of being simple, transparent, easy to train, and fast to compute; they work well on both object-centric and texture-centric domains, which existing methods have not shown.

Acknowledgments We would like to thank Saining Xie and Shuai Tang for valuable discussions. This work is supported by NSF IIS-1216528 (IIS-1360566), NSF IIS-1618477, and a Northrop Grumman Contextual Robotics grant. We are grateful for the generous donation of the GPUs by NVIDIA.

References

  • [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34:2274–2282, 2012.
  • [2] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV. 2014.
  • [3] H. Chen, X. Qi, L. Yu, and P.-A. Heng. Dcan: Deep contour-aware networks for accurate gland segmentation. In CVPR, 2016.
  • [4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
  • [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv:1606.00915, 2016.
  • [6] Y.-T. Chen, X. Liu, and M.-H. Yang. Multi-instance object segmentation with occlusion handling. In CVPR, 2015.
  • [7] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016.
  • [8] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. CVPR, 2016.
  • [9] A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, 2015.
  • [10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html, 2012.
  • [11] C. Fowlkes, D. Martin, and J. Malik. Learning affinity functions for image segmentation: Combining patch-based and gradient-based approaches. In CVPR, 2003.
  • [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In CVPR, 2014.
  • [13] B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, 2011.
  • [14] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV. 2014.
  • [15] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
  • [16] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe. In ACM MM, 2014.
  • [17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
  • [18] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-Supervised Nets. In AISTATS, 2015.
  • [19] K. Li, B. Hariharan, and J. Malik. Iterative instance segmentation. In CVPR, 2016.
  • [20] X. Liang, Y. Wei, X. Shen, Z. Jie, J. Feng, L. Lin, and S. Yan. Reversible recursive instance-level object segmentation. In CVPR, 2016.
  • [21] X. Liang, Y. Wei, X. Shen, J. Yang, L. Lin, and S. Yan. Proposal-free network for instance-level object segmentation. arXiv preprint arXiv:1509.02636, 2015.
  • [22] S. Liu, X. Qi, J. Shi, H. Zhang, and J. Jia. Multi-scale patch aggregation (mpa) for simultaneous detection and segmentation. In CVPR, 2016.
  • [23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [24] M. Maire, T. Narihira, and S. X. Yu. Affinity cnn: Learning pixel-centric pairwise relations for figure/ground embedding. In CVPR, 2016.
  • [25] P. O. Pinheiro, R. Collobert, and P. Dollar. Learning to segment object candidates. In NIPS, 2015.
  • [26] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV, 2016.
  • [27] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • [28] H. Riemenschneider, S. Sternig, M. Donoser, P. M. Roth, and H. Bischof. Hough regions for joining instance localization and segmentation. In ECCV. 2012.
  • [29] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
  • [30] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
  • [31] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV. 2006.
  • [32] N. Silberman, D. Sontag, and R. Fergus. Instance segmentation of indoor scenes using a coverage loss. In ECCV. 2014.
  • [33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [34] K. Sirinukunwattana, J. Pluim, D. Snead, and N. Rajpoot. GlaS MICCAI’2015: Gland Segmentation Challenge Contest, 2015.
  • [35] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
  • [36] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014.
  • [37] S. Xie and Z. Tu. Holistically-Nested Edge Detection. In ICCV, 2015.
  • [38] Y. Xu, Y. Li, M. Liu, Y. Wang, Y. Fan, M. Lai, E. I. Chang, et al. Gland instance segmentation by deep multichannel neural networks. arXiv preprint arXiv:1607.04889, 2016.
  • [39] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
  • [40] Z. Zhang, S. Fidler, and R. Urtasun. Instance-level segmentation for autonomous driving with deep densely connected mrfs. In CVPR, 2016.
  • [41] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.