Reflected light from objects and radiance from light sources carry information at various wavelengths, both inside and outside the spectrum visible to the human eye [1, 2, 3, 4, 5]. Unlike conventional cameras, which provide images with one channel (monochrome) or three channels (e.g. RGB or YCbCr), recent hyperspectral imaging systems give researchers the opportunity to capture the observed scene with high spatial resolution and dense spectral resolution covering both the visible and non-visible spectrum [2, 3, 4, 5]. These data have been used in many applications, such as remote sensing [2, 6, 9], scene analysis and object detection [2, 6, 7, 8, 9, 10, 11], and spectral estimation [11, 12, 13, 14].
Visual attention modeling (saliency detection) [15, 16, 17, 18, 19] is a promising research field for practical applications, and it may benefit many of the hyperspectral data processing applications stated above. For instance, a few studies using low-level features on hyperspectral images demonstrated that salient object detection can be achieved [7, 8, 9, 10]. In contrast to these models, which rely on low-level or hand-crafted features to obtain saliency maps, higher-level features can be extracted and used in a self-supervised manner for hyperspectral data, where each spectral band's contribution to the representation is learned with an unsupervised neural network trained for a segmentation task. In addition, works on hyperspectral saliency for natural scenes were mostly tested on datasets with only a few hyperspectral images (13 images in one work and 17 in another), collected and selected from various hyperspectral data sources. Moreover, these hyperspectral data were not collected and created for the purpose of salient object detection, and quantitative evaluations of the models were mostly limited to Precision-Recall and F-measure metrics. Therefore, we believe that a dataset created specifically for salient object detection should be used to evaluate the models with various metrics.
Proposed work and contributions: In this work, we propose a salient object detection model (see Figure 1) for hyperspectral images by applying manifold ranking to self-supervised Convolutional Neural Network (CNN) features (high-level features) learned through an unsupervised image segmentation task. Self-supervision of the CNN continues until the clustering loss or the saliency map computed from the CNN features converges to a defined error between iterations. The saliency map at the last iteration is then taken as the output of the proposed model when the self-supervision procedure terminates.
| Layer | Filters | Kernel | Stride |
| Conv. + ReLU + BN | 64 | 3x3 | 1x1 |
| Max-pooling | - | 2x2 | 2x2 |
| Conv. + ReLU + BN | 64 | 3x3 | 1x1 |
| Max-pooling | - | 2x2 | 2x2 |
| Conv. + ReLU + BN | 64 | 3x3 | 1x1 |
| Upsampling | - | 2x2 | 2x2 |
| Deconv. + ReLU + BN | 64 | 3x3 | 1x1 |
| Upsampling | - | 2x2 | 2x2 |
| Deconv. + ReLU + BN | 64 | 1x1 | 1x1 |
Table 1: Configuration of the CNN model. BN represents the batch normalization operation.
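As a sanity check on Table 1, the following sketch (layer strides transcribed from the table; padded convolutions assumed, so 3x3/1x1 convolutions with stride 1 keep the size) propagates an input size through the stack and confirms that the output feature map matches the input spatial resolution:

```python
# Illustrative sketch, not the authors' code: the layer list below is
# transcribed from Table 1.
LAYERS = [
    ("conv+relu+bn",   {"stride": 1}),  # 64 filters, 3x3
    ("maxpool",        {"stride": 2}),  # 2x2, stride 2
    ("conv+relu+bn",   {"stride": 1}),
    ("maxpool",        {"stride": 2}),
    ("conv+relu+bn",   {"stride": 1}),
    ("upsample",       {"scale": 2}),   # 2x2
    ("deconv+relu+bn", {"stride": 1}),
    ("upsample",       {"scale": 2}),
    ("deconv+relu+bn", {"stride": 1}),  # 1x1 kernel, p-dim output
]

def output_size(h, w):
    """Propagate an (h, w) input through the Table 1 layer stack."""
    for name, cfg in LAYERS:
        if name == "maxpool":
            h, w = h // cfg["stride"], w // cfg["stride"]
        elif name == "upsample":
            h, w = h * cfg["scale"], w * cfg["scale"]
        # padded stride-1 convolutions leave the spatial size unchanged
    return h, w

print(output_size(768, 1024))  # -> (768, 1024): resolution is preserved
```

The two 2x2 max-poolings halve the resolution twice, and the two 2x2 upsamplings restore it, which is why the output feature map and the input hyperspectral image have identical spatial resolution.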
To the best of our knowledge, there is no prior work on hyperspectral salient object detection for natural scenes that takes a self-supervised approach combining unsupervised segmentation using a CNN with the salient object detection task on the scene. The contributions and differences of the proposed model can be summarized as follows. First, the unsupervised segmentation task in previous work takes advantage of a cluster refinement process based on superpixels obtained from the input color image. In this work, however, we apply the refinement process based on superpixels obtained from the high-level features of the CNN (see Fig. 1) that takes the hyperspectral image as input. Interestingly, this resulted in better saliency detection performance and appeared to yield faster convergence on the segmentation task. Second, in contrast to the saliency model with manifold ranking (MR) using low-level features, we utilize self-supervised deep features with higher-order semantics, which drastically improves saliency detection performance over that study. Third, unlike the CNN model used previously, we include max-pooling for down-sampling and replace the last two CNN layers with deconvolution layers, as in Table 1. Finally, self-supervision of the CNN model does not need to run until a defined maximum iteration because we check the clustering loss and the saliency map for termination; in addition, the saliency results of the proposed model converge faster than the segmentation task in most cases when self-supervised deep features are used for manifold-ranking-based saliency detection. Experiments demonstrate that the proposed saliency detection algorithm for hyperspectral images outperforms state-of-the-art hyperspectral saliency models, including the original MR saliency model.
2 Self-supervised salient object detection on hyperspectral images
| Itti et al. [15, 7] | 0.7774 | 0.3536 | 0.3530 | 0.3754 | 0.1674 | 0.1909 | 0.2329 | 1.3636 | 2.3186 |
*SGC: the codes were not available, so the implementation was done by the authors in Matlab based on the paper.
**HS-MR: MR saliency detection is originally for color images; however, the codes published by its authors can be applied to hyperspectral data for the MR and superpixel steps.
To achieve the salient object detection goal in Fig. 1, we propose to use an unsupervised backpropagation-based semantic segmentation method to learn high-level visual features, which are then used in the manifold ranking algorithm for saliency computation. Given a hyperspectral image as input to our model, first, all pixel values are normalized to [0,1]. Then, we adopt a CNN model to extract p-dimensional feature maps from the Batch-Normalization (BN) output of its last deconvolution layer. The detailed configuration of the CNN model is shown in Table 1. Note that the spatial resolutions of the output feature map and the input hyperspectral image are identical. After normalizing the learned response maps via batch normalization, we obtain a cluster label for each pixel by applying argmax classification to the feature maps, i.e. choosing the dimension with the maximum response. Then, we apply the refinement process based on superpixels obtained from the high-level features of the CNN (see Fig. 1), in contrast to earlier work using superpixels based on the input data (e.g. the hyperspectral image). The refinement process assigns all pixels within a superpixel the same cluster label, namely the most frequent label in that superpixel area.
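The argmax classification and superpixel-based refinement described above can be sketched as follows (a minimal numpy illustration; the array names and the superpixel map are assumptions, not the authors' implementation):

```python
import numpy as np

# `features` is (H, W, p): the p-dimensional CNN response per pixel.
# `superpixels` is an (H, W) integer segment map; in the proposed model it is
# computed from the high-level CNN features rather than from the input image.

def argmax_labels(features):
    """Per-pixel cluster label: the feature dimension with maximum response."""
    return np.argmax(features, axis=-1)  # (H, W) integer labels

def refine_labels(labels, superpixels):
    """Assign every pixel of a superpixel its most frequent cluster label."""
    refined = labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        majority = np.bincount(labels[mask]).argmax()
        refined[mask] = majority
    return refined
```

For example, a single stray label inside a superpixel is voted away by the majority of its neighbors, which is the refinement effect used during self-supervision.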
As in supervised learning, we use the softmax cross-entropy loss between the network responses and the refined cluster labels at iteration n. Using this error with back-propagation, the parameters of the convolution and deconvolution filters are updated by gradient descent with momentum. The Glorot and Bengio method is employed for parameter initialization, which uses a uniform distribution normalized according to the input and output layer sizes. While self-supervising the network for the unsupervised segmentation task, at each iteration the learned feature map is used to obtain a saliency map by employing multi-channel MR. For the model, we use two main termination conditions:
|L_n - L_{n-1}| < ε1,   ||S_n - S_{n-1}|| < ε2

where L_n and L_{n-1} denote the cross-entropy losses at steps n and n-1, S_n and S_{n-1} denote the predicted saliency maps at steps n and n-1, and ε1 and ε2 are defined small non-zero constants that terminate the training process. Also, when the training step reaches a defined maximum value, the process stops. In Fig. 2, unsupervised segmentation outputs and computed saliency maps are shown for different iterations of self-supervised learning. It can be seen that the saliency map results converge even though clustering through self-supervision has not yet reached an optimal segmentation result.
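A minimal sketch of this termination logic (the threshold values, the mean-absolute-difference norm for comparing saliency maps, and the function name are illustrative choices, not the authors' exact settings):

```python
import numpy as np

def should_terminate(loss_prev, loss_curr, sal_prev, sal_curr,
                     step, max_steps, eps1=1e-4, eps2=1e-3):
    """Stop when the clustering loss or the saliency map has converged,
    or when the maximum number of training steps is reached."""
    if step >= max_steps:
        return True
    loss_converged = abs(loss_curr - loss_prev) < eps1
    map_converged = np.mean(np.abs(sal_curr - sal_prev)) < eps2
    return loss_converged or map_converged
```

Checking the saliency map as well as the loss is what allows training to stop early: as noted above, the saliency results often converge before the segmentation task does.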
3 Experimental Results
Metrics: For quantitative evaluation, we used various saliency metrics: Area Under the Curve (AUC), Cross Correlation (CC), Normalized Scanpath Saliency (NSS), Kullback-Leibler divergence (KLdiv), Precision, Recall, and F-measure (F, maxF, aveF) with Precision-Recall curves.
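Two of the listed metrics can be sketched as follows (illustrative implementations, not the exact evaluation code): CC is the Pearson correlation between the saliency map and the ground truth, and NSS is the mean of the standardized saliency values at ground-truth salient locations.

```python
import numpy as np

def cc(saliency, gt):
    """Cross Correlation: Pearson correlation between map and ground truth."""
    s = (saliency - saliency.mean()) / saliency.std()
    g = (gt - gt.mean()) / gt.std()
    return float(np.mean(s * g))

def nss(saliency, gt_mask):
    """Normalized Scanpath Saliency: mean z-scored saliency at salient pixels."""
    s = (saliency - saliency.mean()) / saliency.std()
    return float(s[gt_mask > 0].mean())
```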
Dataset: We evaluated the model on the hyperspectral salient object detection (HS-SOD) dataset, consisting of 60 hyperspectral images with their respective binary ground-truth images indicating the salient objects. The dataset details can be found in the dataset paper, and the data are available online. For each image, the spatial resolution is 768x1024 pixels, and there are 81 spectral channels covering the wavelengths between 380-780 nm (visible spectrum) at 5 nm intervals.
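The channel count follows directly from the spectral sampling: (780 - 380)/5 + 1 = 81 bands. A small helper (illustrative, not part of the HS-SOD toolkit) maps a wavelength to its band index:

```python
# 380-780 nm sampled at 5 nm intervals -> 81 bands.
def band_index(wavelength_nm, start=380, step=5):
    """Band index of a wavelength under uniform 5 nm sampling from 380 nm."""
    return (wavelength_nm - start) // step

num_bands = band_index(780) + 1
print(num_bands)  # -> 81
```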
Evaluation: We selected two prior models for comparison, as they are the existing hyperspectral salient object detection models for natural scenes. In the earlier work, various approaches were tested on hyperspectral data, so we also apply those approaches on the HS-SOD dataset for comparison: i) saliency computation from the spectral distances between spatial regions, using spectral Euclidean distance (SED) and spectral angle distance (SAD) [7, 5]; ii) the color opponency method in [15, 7] replaced by spectral grouping, in which, rather than Red-Green and Blue-Yellow differences, Euclidean distances are taken between spectral group (GS) vectors obtained by dividing the spectral bands into four groups (G1, G2, G3, G4) [7, 5]; iii) spectral-distance-based saliency combined with orientation-based saliency, in combinations such as SED-OCM-GS and SED-OCM-SAD; iv) saliency maps from Itti et al., also provided for hyperspectral saliency comparison in [7, 5] as a baseline model. As a more recent work, we also tested saliency from spectral gradient contrast (SGC), where local region contrast is computed from superpixels obtained by spatial and spectral gradients and used to calculate the spectral gradient contrast for saliency detection [8, 5].
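The two spectral distances in (i) can be sketched as follows (a minimal numpy version operating on two spectra, e.g. the mean spectra of two spatial regions):

```python
import numpy as np

def sed(s1, s2):
    """Spectral Euclidean distance between two spectra."""
    return float(np.linalg.norm(s1 - s2))

def sad(s1, s2):
    """Spectral angle distance: angle (radians) between two spectra."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

SAD is invariant to overall brightness scaling (two spectra that differ only by a constant factor have zero angle), which is why it is often preferred over SED for comparing materials.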
In addition, saliency detection by graph-based manifold ranking (MR) is also applied on the hyperspectral dataset, referred to as HS-MR, to compare with the proposed model SUDF (Saliency from Unsupervised Deep Features), which uses higher-level features for both MR-based saliency and cluster refinement, in contrast to the original approaches. Moreover, to demonstrate the performance improvement on saliency detection when cluster refinement is done on high-level features, we also implemented and compared a variant of SUDF in which cluster refinement is based on the input hyperspectral image instead.
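The graph-based manifold ranking at the core of both HS-MR and the proposed SUDF has a closed form [17]: given an affinity matrix W over superpixels, degree matrix D, and a query indicator vector y, the ranking scores are f = (D - αW)^(-1) y. A minimal sketch (graph construction and the boundary-prior queries are omitted):

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form graph-based manifold ranking: f = (D - alpha*W)^(-1) y,
    where D is the diagonal degree matrix of affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)
```

On a small chain graph, a query on one node ranks the node itself highest and decays with graph distance, which is the diffusion behavior MR exploits for saliency.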
As can be seen in Table 2, the proposed SUDF performs better than the other approaches on all metrics except NSS, where it is second best. However, although the performance difference is very small, the best performing model on the NSS metric is also a variation of the proposed approach: the SUDF variant that applies cluster refinement based on superpixels obtained directly from the hyperspectral image, as in the original work. In addition, the proposed SUDF demonstrates that higher-level features, even when learned through self-supervision, are beneficial to manifold-ranking-based saliency computation, since it outperformed HS-MR (manifold ranking using low-level features) on all evaluation metrics. In Fig. 3, some sample hyperspectral scenes are rendered in sRGB for visualization together with their respective ground-truth images for the salient objects, and the saliency map results of the various approaches on these scenes are given to demonstrate the performance of the proposed SUDF with respect to the other models. The saliency maps show that our model also performs better qualitatively.
In this work, we demonstrated a hyperspectral salient object detection approach based on self-supervised deep features in a multi-task model. The parameter update of the CNN model is driven by the cross-entropy loss of the clustering performance, saliency is computed from the features learned by the unsupervised segmentation task, and saliency convergence is the termination criterion for the self-supervised learning procedure. Evaluation on the HS-SOD dataset demonstrates promising results for salient object detection with the proposed approach. As future work, we would like to investigate how to improve the representation of the hyperspectral image during the self-supervision process (e.g. adding a sparsity loss, an orthogonality constraint, a decoder-based image generation loss, etc.) to improve the accuracy of salient object detection and to speed up convergence of the clustering and saliency map results. Moreover, we would like to investigate alternatives to the MR model for saliency computation, since it assumes a boundary prior for background regions.
-  D. H. Sliney, ”What is light? The visible spectrum and beyond,” Eye (Nature), Vol.30, pp.222-229, 2016.
-  M. Borengasser, W. Hungate, R.Watkins, ”Hyperspectral Remote Sensing: Principles and Applications,” CRC Press, Boca Raton FL, 2008.
-  A. Chakrabarti and T. Zickler, ”Statistics of Real-World Hyperspectral Images”, IEEE Conf. on Computer Vision and Pattern Recognition (IEEE CVPR), 2011.
-  R. B. Smith, Introduction to ”Hyperspectral Imaging with TNTmips”, online available: http://www.microimages.com
-  N. Imamoglu, Y. Oishi, X. Zhang, G. Ding, Y. Fang, T. Kouyama, R. Nakamura, ”Hyperspectral Image Dataset for Benchmarking on Salient Object Detection”, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
-  D. Manolakis, R. Lockwood, T. Cooley, ”Hyperspectral Imaging Remote Sensing”, Cambridge University Press, Cambridge, United Kingdom, 2016.
-  J. Liang, J. Zhou, X. Bai, Y. Qian, ”Salient object detection in hyperspectral images”, IEEE Int. Conf. on Image Processing (ICIP), pp.2393-2397, 2013.
-  H. Yan, Y. Zhang, W. Wei, L. Zhang, Y. Li, ”Salient object detection in hyperspectral imagery using spectral gradient contrast”, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp.1560-1563,2016.
-  Y. Cao, J. Zhang, Q. Tian, L. Zhuo, Q. Zhou, ”Salient target detection in hyperspectral images using spectral saliency”, IEEE ChinaSIP, pp.1086-1090, 2015.
-  S. Le Moan, A. Mansouri, J. Hardeberg and Y. Voisin, ”Saliency in spectral images”, in Proc. of the 17th Scandinavian Conference on Image Analysis, pp. 114-123, 2011.
-  B. Arad and O. Ben-Shahar, ”Sparse recovery of hyperspectral signal from natural RGB images”, in Proc. of the European Conference on Computer Vision (ECCV), pp.19-34, 2016.
-  R. Kawakami, J. Wright, T. Yu-Wing, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, ”High resolution hyperspectral imaging via matrix factorization”, IEEE Conf. on Computer Vision and Pattern Recognition (IEEE CVPR), 2011.
-  B. Arad and O. Ben-Shahar, ”Filter selection for hyperspectral estimation”, in Proc. of the IEEE International Conference on Computer Vision (ICCV), pp.3153-3161, 2017.
-  H. Kwon and Y. W. Tai, ”RGB-guided hyperspectral image upsampling”, in Proc. of the IEEE International Conference on Computer Vision (ICCV), pp.307-315, 2015.
-  L. Itti, C. Koch, and E. Niebur, ”A model of saliency-based visual attention for rapid scene analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.20, no.11, pp.1254-1259, 1998.
-  A. Borji and L. Itti, ”State-of-the-art in visual attention modeling”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.35, no.1, pp.185-207, 2013.
-  C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, ”Saliency Detection via Graph-Based Manifold Ranking”, IEEE Conf. on Computer Vision and Pattern Recognition (IEEE CVPR), 2013.
-  N. Imamoglu, C. Zhang, W. Shimoda, Y. Fang, and B. Shi, ”Saliency detection by forward and backward cues in deep CNNs”, IEEE International Conference on Image Processing (ICIP), 2017.
-  T. Liu, N. Zheng, X. Tang, and H.-Y. Shum, ”Learning to detect salient object”, in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.353-367, 2007.
-  A. Kanezaki, ”Unsupervised Image Segmentation by Backpropagation”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.
-  X. Glorot and Y. Bengio, ”Understanding the difficulty of training deep feedforward neural networks”, in AISTATS, 2010.
-  A. Borji, H. R. Tavakoli, D. N. Sihite, and L. Itti, ”Analysis of scores, datasets, and models in visual saliency prediction”, IEEE International Conference on Computer Vision (ICCV), 2013.
-  M. Kümmerer, T. Wallis, M. Bethge, ”Information-theoretic model comparison unifies saliency metrics”, PNAS, 112(52), pp.16054-16059, 2015.
-  Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, F. Durand, ”What do different evaluation metrics tell us about saliency models?”, arXiv preprint arXiv:1604.03605, 2016.
-  S. Rahman, N. Bruce, ”Visual Saliency Prediction and Evaluation across Different Perceptual Tasks”, PLoS ONE, vol.10, no.9, 2015.
-  Hyperspectral Salient Object Detection (HS-SOD) dataset, online available: https://github.com/gistairc/HS-SOD