Combining Multi-level Contexts of Superpixel using Convolutional Neural Networks to perform Natural Scene Labeling

03/14/2018
by   Aritra Das, et al.

Modern deep learning algorithms have triggered various image segmentation approaches, but most of them deal with pixel-based segmentation. Superpixels, in contrast, provide a degree of contextual information while reducing computational cost. In our approach, we perform superpixel-level semantic segmentation, considering three different neighbourhood levels as semantic context. Furthermore, we employ a number of ensemble approaches, such as max-voting and weighted averaging. We also use the Dempster-Shafer theory of uncertainty to analyze confusion among the various classes. Our method proves superior to a number of modern approaches on the same dataset.


1 Introduction

Deep learning has brought a new era in machine learning. With the ability to learn complex features from images, problems such as classification, localization, and segmentation have seen remarkable progress, especially for natural images. Previously, most significant research in the domain of natural image processing was performed using some form of pattern recognition over pixels [5, 9, 3]. The problem dealt with in this paper is semantic image segmentation, which goes beyond tasks like object recognition or localization: we are interested in precise segments that semantically separate one object from another. While pixel-level algorithms [12, 8, 10] provide very fine segmentation, superpixels [18] offer much lower computational complexity without compromising performance. Superpixels are small patches of adjacent, similar pixels grouped together; we use them in our algorithms to provide real-time performance. Convolutional neural networks (CNNs) have shown tremendous performance in natural image processing as well as segmentation, and our approach implements multiple CNNs to obtain results. Any classification problem is associated with uncertainty in the decision process; we use ensemble methods as well as Dempster-Shafer theory to handle this uncertainty. The next section gives a brief review of related work, section 3 explains the methodologies, and sections 4 and 5 cover the experiments and a discussion of the obtained results.

2 Related Works

Segmentation algorithms gained momentum with the onset of deep learning. In 2015, Ren et al. proposed Faster R-CNN [13], outlining a very fast way of detecting multiple regions in an image; however, that architecture does not segment the whole image but only finds where the objects are. In SegNet [2], convolution and de-convolution are used together to generate segmented regions. Farabet et al. [6] showed how superpixel-level classification may be performed using CNNs, though superpixels were only used to generate a scene-parsing tree rather than being considered for the actual segmentation. Our approach, in contrast, trains CNNs directly on the superpixel patches. While a variety of superpixel algorithms have been applied to image segmentation [18], we chose SLIC [1] for its speed and boundary adherence. Uncertainty is a common challenge in machine learning, and in image segmentation the Dempster-Shafer theory [16, 4, 14] has been used to eliminate such uncertainties as well. For the present work, the ICCV09 dataset [7] was used.

3 Methodologies

The first phase of our approach trains CNNs to classify superpixels into 8 categories corresponding to the 8 semantically segmented classes of the ICCV09 dataset. The second phase ensembles three different variations of the CNN using various methods. The overall workflow is demonstrated in fig. 1.

Figure 1: Overall Flowchart

The following subsections will explain each module in details.

3.1 Superpixel-based Segmentation (Module 1)

Pixel-level classification is a tedious process primarily for two reasons: even a small image contains a high number of pixels, and the information content of a single pixel is too limited for classification into various segments. A superpixel captures much more information than a single pixel, and the number of superpixels in an image is much smaller than the number of pixels. Each image was first divided into superpixels using SLIC [1]. To keep superpixel sizes uniform across images of various sizes, the minimum object resolution was fixed. The number of superpixels of an image is then given by,

$N_{sp} = \left\lfloor (W \times H) / A_{min} \right\rfloor \qquad (1)$

where $W \times H$ is the image size in pixels and $A_{min}$ is the fixed minimum object resolution (area).
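The original equation was lost in extraction; a minimal sketch of the relation as reconstructed above, assuming the superpixel count is simply the image area divided by the fixed minimum object resolution (an illustrative reading, not necessarily the paper's exact formula):

```python
def num_superpixels(width, height, min_object_area):
    """Approximate superpixel count for an image, assuming the count is
    the image area divided by a fixed minimum object resolution
    (area in pixels). Integer division keeps the count whole."""
    return (width * height) // min_object_area
```

For example, a 320x240 image with a 600-pixel minimum object area yields 128 superpixels.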

A superpixel patch carries a significant amount of texture information compared to single pixels. However, for semantic segmentation we also need to consider the context in which the superpixel occurs, so each superpixel was augmented with its neighbours to create a larger patch for the CNN to extract features from. For our experiments we considered the superpixel patch alone (0N) and augmented with its first (1N) and second (2N) level neighbours, and a separate CNN was trained for each of these neighbour categories. This is illustrated in fig. 2.

Figure 2: Superpixel’s neighbours. (a) Image with selected superpixel (in red), (b) single superpixel patch, noted as 0N, (c) superpixel patch with 1st neighbours, noted as 1N, (d) superpixel patch with 2nd neighbours, noted as 2N, (e) single superpixel patch or patch with neighbours cropped out from the image, (f) minimal covering bounding box, (g) regular-size cropped patch fed into the CNNs
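The paper builds the 1N and 2N patches from each superpixel's neighbour rings. A rough sketch of how such neighbour levels could be gathered from a SLIC-style label map, using a breadth-first expansion over an adjacency map (the function names and details here are ours, not from the paper):

```python
import numpy as np

def superpixel_adjacency(labels):
    """Build an adjacency map between superpixel labels by looking at
    horizontally and vertically touching pixels of the label image."""
    adj = {l: set() for l in np.unique(labels)}
    for a, b in [(labels[:, :-1], labels[:, 1:]),   # horizontal pairs
                 (labels[:-1, :], labels[1:, :])]:  # vertical pairs
        edge = a != b                               # boundary pixels
        for u, v in zip(a[edge], b[edge]):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def k_neighbour_patch(labels, target, k):
    """Return the set of labels forming the target superpixel together
    with its neighbours up to level k (0N, 1N, 2N in the paper)."""
    adj = superpixel_adjacency(labels)
    region, frontier = {target}, {target}
    for _ in range(k):
        frontier = {n for l in frontier for n in adj[l]} - region
        region |= frontier
    return region
```

The returned label set can then be turned into a mask, cropped with its minimal bounding box, and resized to the CNN input size, as in fig. 2 (e)-(g).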

Each CNN for classifying the superpixels consists of two convolutional layers. The first layer has 32 convolution kernels followed by a standard pooling; the second has 64 convolution kernels followed by another standard pooling. This is followed by a fully connected layer with 256 hidden units and a softmax output layer (the kernel window sizes of each network are listed in table 1).

3.2 Ensemble Strategy

Each of the three CNNs outputs an 8-dimensional softmax distribution. These are ensembled using three different methods: max-voting, combination of mass functions using the Dempster-Shafer theory of uncertainty, and a weighted-sum technique.

Max-Voting:

This technique takes the three predictions from the three CNNs and chooses the winner by majority vote. In case of a tie, the prediction with the highest softmax score is chosen.
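A minimal sketch of the voting rule just described (the tie-break by highest score follows the text; the implementation details are ours):

```python
import numpy as np

def max_vote(softmax_scores):
    """Ensemble three 8-way softmax outputs by majority vote over the
    argmax predictions; ties are broken by the highest softmax score."""
    scores = np.asarray(softmax_scores)          # shape (3, n_classes)
    preds = scores.argmax(axis=1)
    classes, counts = np.unique(preds, return_counts=True)
    winners = classes[counts == counts.max()]
    if len(winners) == 1:
        return int(winners[0])
    # tie: choose the tied class backed by the single highest score
    return int(max(winners, key=lambda c: scores[:, c].max()))
```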

Weighted Average:

For the weighted average, the output score is calculated as a weighted combination of all the softmax scores, with the weight determined by the training performance of each CNN. The final score for a patch is given by,

$S = \sum_{k=1}^{3} w_k\, s_k \Big/ \sum_{k=1}^{3} w_k \qquad (2)$

where $s_k$ is the softmax score vector of the $k$-th CNN and $w_k$ its training accuracy.
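Since the exact form of the weighting equation was lost in extraction, a sketch assuming a normalised accuracy-weighted sum of the softmax outputs:

```python
import numpy as np

def weighted_average(softmax_scores, train_accuracies):
    """Combine softmax outputs with weights proportional to each CNN's
    training accuracy, then pick the argmax class. The normalised
    accuracy-weighted sum is an assumption, not the paper's verbatim rule."""
    scores = np.asarray(softmax_scores)          # shape (3, n_classes)
    w = np.asarray(train_accuracies, dtype=float)
    w = w / w.sum()                              # normalise the weights
    combined = (w[:, None] * scores).sum(axis=0)
    return combined, int(combined.argmax())
```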

Dempster-Shafer Theory of Uncertainty:

There are a certain number of superpixels for which the networks give poor predictions. This uncertainty arises in training for several reasons, such as skewed datasets, similar superpixels belonging to different classes, and wrong ground-truth annotations. To deal with such uncertainty, the Dempster-Shafer [16] theory of evidence is taken into account. Unlike normal classification, which uses a probability distribution across the classes, Dempster-Shafer theory deals with masses and beliefs, which are distributions across all possible combinations of the classes; henceforth the mass value of a combination of classes $A$ is denoted $m(A)$. We designed an approach to simulate the mass distribution using the confusion matrix obtained during training. In the theory of evidence, the set of classes $\Theta = \{c_1, c_2, \ldots, c_8\}$ forms the frame of discernment, whose power set is written as

$2^{\Theta} = \{\emptyset, \{c_1\}, \{c_2\}, \ldots, \{c_1, c_2\}, \ldots, \Theta\} \qquad (3)$

The difference between the mass value of a single class and its softmax probability is defined in terms of the confusion (misclassification) related to that class:

$m(\{c_i\}) = p(c_i) - \delta_i \qquad (4)$

where $\delta_i$ is the deduction derived from the misclassification rate of class $c_i$.

The computation of mass values for other elements of the power set, such as pairs of classes, is more involved. The confusion matrix also provides information regarding misclassification between two classes. Combinations of more than two classes were not considered, because they needlessly increase the computation while providing little additional information. In other words, while considering the predicted class of a patch, we additionally consider one more class that has a high probability of confusion with the chosen class; mass values of larger sets are ignored. Recall from equation 4 that the probability of each class is reduced by a certain deduction to obtain the corresponding mass value. If all these deductions are accumulated and redistributed among the other members of the power set as their mass values, then the requirement of a mass distribution, $\sum_{A \subseteq \Theta} m(A) = 1$, is satisfied. Let the accumulated deduction over all classes (the per-class deductions $\delta_i$ of equation 4) be defined as

$\Delta = \sum_{i} \delta_i \qquad (5)$

The mass value of a member of the power set with cardinality 2 is then given by

$m(\{c_i, c_j\}) = \Delta \cdot \frac{\mathrm{conf}(c_i, c_j)}{\sum_{k < l} \mathrm{conf}(c_k, c_l)} \qquad (6)$

where $\mathrm{conf}(c_i, c_j)$ is the misclassification between classes $c_i$ and $c_j$ read from the confusion matrix.

After computing the mass distribution for each of the three CNNs, we combine them to find the final mass distribution using the Dempster-Shafer rule of combination of evidence, as described in section 2.2.1 of [15].
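The rule of combination itself is standard; a compact sketch over mass functions keyed by frozensets of class labels (the representation is our choice):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset_of_classes: mass}. Mass assigned to conflicting pairs
    (empty intersections) is discarded and the rest is renormalised."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict          # total non-conflicting mass
    return {s: v / norm for s, v in combined.items()}
```

Combining the three CNNs' mass distributions then amounts to applying this rule twice, since the rule is associative.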

4 Experimentations

The first part of our experimentation trains and tabulates the performance of the three individual CNNs. The optimum input sizes for the raw superpixel patch and the 1st- and 2nd-neighbour patches were chosen based on validation performance. The architectures and the corresponding performances are given in table 1. The second phase records the results of the various ensemble methods. The ICCV09 dataset was used for the experimentation; it contains 715 images with ground truths for 8 semantically segmented classes. The dataset was split into 500 training, 72 validation and 143 test samples. With a fixed minimum object size for generating the superpixels, the total number of superpixels was 291,911.

5 Results and analysis

In figure 3 we can see some segmented examples as generated by our approach. In the next sub-sections we shall look into the performance of the individual CNNs and how they improved upon using ensemble techniques.

Figure 3: Segmentation Results for our proposed approach

5.1 Individual CNNs

Each individual CNN was trained over 500 images. The optimal architecture was selected according to their performance on the validation dataset. The final test accuracy along with the optimum configurations is shown in table 1.

Patch | Network Architecture | Test Classification Accuracy (%)
0N | 32C5-2P-64C3-2P-FC256 | 72.32
1N | 32C7-2P-64C5-2P-FC256 | 72.45
2N | 32C7-2P-64C5-2P-FC256 | 72.24
Table 1: Classification performance of the individual CNNs. xCy denotes x convolution kernels with a y×y window, 2P denotes 2×2 pooling, FCz denotes a fully connected layer with z units, and kN refers to an input patch along with its k-level neighbours.

It can be seen that all the individual CNNs perform at almost the same level, so it may seem that the choice of different neighbourhoods is ineffective. However, ensembling the softmax outputs of the three CNNs tells a different story.

5.2 Ensemble Methods

We chose three ensemble strategies to deal with disagreement among the individual CNNs, along with Dempster-Shafer theory to reduce uncertainty in the obtained results. Table 2 shows the performance of the three ensemble strategies across all classes.

Type | Sky | Tree | Grass | Ground | Building | Mountain | Water | Object | Avg. Acc.
Dempster-Shafer | 88.07 | 76.74 | 80.29 | 86.59 | 77.8 | 2.56 | 59.41 | 59.18 | 77.07
Max Voting | 84.05 | 73.37 | 73.65 | 80.80 | 69.90 | 4.15 | 63.90 | 60.79 | 72.88
Weighted Average | 87.75 | 77.17 | 79.69 | 85.43 | 75.85 | 4.16 | 64.69 | 63.24 | 77.14
Table 2: Performance of Ensemble approaches with respect to various classes

It can be seen that Dempster-Shafer wins for some classes, whereas the weighted average is ahead for others. The poor performance on the mountain category is due to the scarcity of mountain segments in the dataset.

Finally, table 3 shows how our approach performs against several notable published works on the same database. Our approach was able to outperform all of them.

Approaches | Methodology | Classification Accuracy (%)
Baseline (ICCV 09) [7] | Pixel CRF | 74.3
Gould et al. (ICCV 09) [7] | Region-based energy | 76.4
Munoz et al. (ECCV 10) [11] | Probabilistic model | 76.9
Farabet et al. (PAMI 13) [6] | CNN + Superpixel | 74.56
Tighe et al. (ECCV 10) [17] | Features + Superpixel | 76.3
Our approach | Superpixel + CNN + Ensemble | 77.14
Table 3: Our approach compared with other approaches on the ICCV09 dataset

Moreover, the testing time measured on a GTX 1080 GPU is low enough to ensure successful real-time implementation.

6 Conclusion

We have implemented a novel approach for superpixel-level segmentation and boosted its performance with various ensemble methods and uncertainty handling. Our approach offers a fast method for creating decent segments and showed its strength when compared with other methods applied to this dataset. In the future, this work can be extended to video segmentation. Overall, we believe that the speed of the algorithm, combined with a relatively small CNN, makes our approach promising.

Acknowledgment

This work is partially supported by the project entitled “Development of knowledge graph from images using deep learning” sponsored by SERB (Government of India, order no. SB/S3/EECE/054/2016) (dated 25/11/2016), and carried out at the Centre for Microprocessor Application for Training Education and Research, CSE Department, Jadavpur University.

References

  • [1] Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: Slic superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence 34(11), 2274–2282 (2012)
  • [2] Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561 (2015)
  • [3] Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE transactions on pattern analysis and machine intelligence 24(4), 509–522 (2002)
  • [4] Bendjebbour, A., Delignon, Y., Fouque, L., Samson, V., Pieczynski, W.: Multisensor image segmentation using dempster-shafer fusion in markov fields context. IEEE Transactions on Geoscience and Remote Sensing 39(8), 1789–1798 (2001)
  • [5] Campbell, R.J., Flynn, P.J.: A survey of free-form object representation and recognition techniques. Computer Vision and Image Understanding 81(2), 166–210 (2001)
  • [6] Farabet, C., Couprie, C., Najman, L., LeCun, Y.: Learning hierarchical features for scene labeling. IEEE transactions on pattern analysis and machine intelligence 35(8), 1915–1929 (2013)
  • [7] Gould, S., Fulton, R., Koller, D.: Decomposing a scene into geometric and semantically consistent regions. In: Computer Vision, 2009 IEEE 12th International Conference on, pp. 1–8. IEEE (2009)
  • [8] Ilea, D.E., Whelan, P.F.: Image segmentation based on the integration of colour–texture descriptors—a review. Pattern Recognition 44(10), 2479–2501 (2011)
  • [9]

    Liu, Y., Zhang, D., Lu, G., Ma, W.Y.: A survey of content-based image retrieval with high-level semantics.

    Pattern recognition 40(1), 262–282 (2007)
  • [10] Luccheseyz, L., Mitray, S.: Color image segmentation: A state-of-the-art survey. Proceedings of the Indian National Science Academy (INSA-A) 67(2), 207–221 (2001)
  • [11] Munoz, D., Bagnell, J.A., Hebert, M.: Stacked hierarchical labeling. In: European Conference on Computer Vision, pp. 57–70. Springer (2010)
  • [12] Pal, N.R., Pal, S.K.: A review on image segmentation techniques. Pattern recognition 26(9), 1277–1294 (1993)
  • [13] Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems, pp. 91–99 (2015)
  • [14] Rombaut, M., Zhu, Y.M.: Study of dempster–shafer theory for image segmentation applications. Image and vision computing 20(1), 15–23 (2002)
  • [15] Sentz, K., Ferson, S., et al.: Combination of evidence in Dempster-Shafer theory, vol. 4015. Citeseer (2002)
  • [16] Shafer, G., et al.: A mathematical theory of evidence, vol. 1. Princeton university press Princeton (1976)
  • [17] Tighe, J., Lazebnik, S.: Superparsing: scalable nonparametric image parsing with superpixels. Computer Vision–ECCV 2010 pp. 352–365 (2010)
  • [18] Wang, C., Chen, J., Li, W.: Review on superpixel segmentation algorithms. Application research of Computers 31(1), 6–12 (2014)