Saliency detection for seismic applications using multi-dimensional spectral projections and directional comparisons

01/30/2019 · Muhammad Amir Shafiq, et al. · Georgia Institute of Technology

In this paper, we propose a novel approach for saliency detection in seismic applications using 3D-FFT local spectra and multi-dimensional plane projections. We develop a projection scheme that divides the 3D-FFT local spectrum of a data volume into three distinct components, each depicting changes along a different dimension of the data. The saliency detection results obtained using each projected component are then combined to yield a saliency map. To accommodate the directional nature of seismic data, we modify the center-surround model, proven to be biologically plausible for visual attention, to incorporate directional comparisons around each voxel in a 3D volume. Experimental results on a real seismic dataset from the F3 block, offshore the Netherlands in the North Sea, show that the proposed algorithm is effective, efficient, and scalable. Furthermore, a subjective comparison of the results shows that it outperforms state-of-the-art saliency detection methods.


1 Introduction

Saliency detection aims to highlight the salient regions in images and videos by taking into consideration the biological structure of the human visual system (HVS) [1]. Bottom-up saliency detection in videos exploits both spatial and temporal cues to identify visually important features in the data. Features such as color, contrast, intensity, flicker, and motion have all been identified as prominent attributes that help the HVS focus processing resources on important elements in the surrounding environment. In contrast, top-down saliency detection, also known as task-specific visual saliency, embeds a priori knowledge such as shape, orientation, size, or one or more templates of desired features into the saliency detection framework. These features provide a user-assisted framework, which in turn makes saliency detection tunable to desired features in images or videos.

The majority of visual saliency models aim to predict the areas in images or videos that attract human attention instantly [2, 3, 4, 5, 6, 7]. Itti and Baldi [8] proposed to detect saliency in images by modeling the surprise elicited in an observer, measured as the difference between posterior and prior beliefs. Kadir and Brady [9] proposed to highlight salient regions by modeling scale selection and content description in images. Li et al. [10] proposed a novel approach for image segmentation based on a sparse saliency model and graph cuts. Furthermore, the authors in [11, 12, 13] present a comparison of several state-of-the-art models over seven challenging datasets to establish a benchmark for saliency detection. More recently, a saliency detection algorithm was proposed in [14] that uses the 3D FFT of non-overlapping windows spanning the spatial and temporal domains of a video sequence to compute the spectral energy of each window, and compares it with its surrounding regions to construct a saliency map. This method captures both temporal and spatial saliency cues in a very fast and compact way. Therefore, based on the method presented in [14], we propose a new approach for detecting salient objects within seismic volumes.
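To make this idea concrete, the following is a minimal sketch of the spectral-energy computation underlying [14]; the window size, the whole-map mean used as a surround proxy, and the magnitude-mean energy definition are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def spectral_energy_map(video, w=8):
    """Mean 3D-FFT magnitude ('spectral energy') of each non-overlapping
    w x w x w window of a (time, height, width) video volume."""
    T, H, W = (s // w for s in video.shape)
    energy = np.empty((T, H, W))
    for i in range(T):
        for j in range(H):
            for k in range(W):
                cube = video[i*w:(i+1)*w, j*w:(j+1)*w, k*w:(k+1)*w]
                energy[i, j, k] = np.abs(np.fft.fftn(cube)).mean()
    return energy

# A window is salient when its energy deviates from its surroundings;
# here the global mean stands in for the surround comparison.
video = np.random.rand(32, 64, 64)
E = spectral_energy_map(video)
saliency = np.abs(E - E.mean())
```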

Figure 1: The block diagram of the proposed method.

In seismic interpretation, visual saliency is important for predicting a human interpreter's attention and highlighting areas of interest in seismic sections. To extract useful information from a huge volume of seismic data, interpreters manually delineate important structures that contain hints about petroleum and gas reservoirs, such as salt domes, faults, channels, fractures, pinchouts, anticlines, synclines, and horizons. Very few tools are available for automatic detection, and manual interpretation is becoming extremely time consuming and labor intensive. Therefore, it is important to highlight, in the first stage of interpretation, useful features in seismic images that assist interpreters by directing their attention to the areas that contain geologically important structures for the entrapment of oil and gas. Using visual saliency, we can accomplish this task and make the interpreter's job considerably easier. Sivarajah et al. [15] studied various saliency detection algorithms and observed which one most closely mimics the interpreter's visual attention when interpreting gravity and magnetic data for exploration applications. The study concludes that saliency maps can be used to develop new techniques that compensate for or augment biases and guide the interpreter's attention to important areas in images. A similar study, aimed at developing heuristic knowledge of experts interpreting seismic images, is presented in [16]. The authors of [17] and [18] proposed algorithms for automated horizon picking that detect salient features and then compute pixel entropy and fragment connectivity, respectively. On the other hand, the authors in [19] and [20] proposed novel algorithms for the detection and delineation of salt domes based on visual saliency.

The majority of existing saliency detection algorithms rely on the time-space domain. A few transform-domain schemes have also been proposed in the past, yet the transform domain remains rarely explored for saliency detection. For saliency detection in videos, motion-related changes usually occur in the time domain. In seismic data, however, variations in facies, faults, salt domes, and other geological features can be observed along all three directions of a 3D seismic volume. Transform-domain techniques such as the 3D-FFT can capture changes along all three directions in a 3D spectrum. Furthermore, using a top-down approach in the saliency calculation, we can enhance barely conspicuous features within images and videos. In the case of seismic data, features such as faults, horizons, and sigmoids can be highlighted by defining a template or by choosing the size and orientation of the saliency calculation in a modified center-surround comparison. Therefore, in this paper, we propose a novel approach for saliency detection that decomposes a 3D-FFT spectrum into three components depicting variations along each plane of a 3D volume. Based on the obtained spectral decompositions, we apply a modified center-surround model followed by a weighted combination to yield a saliency map of the 3D data. Using the proposed scheme, we can process visual stimuli in real time and perform complex processing procedures faster and more efficiently. To show the effectiveness of the proposed scheme, we present experimental results on a real dataset from the Netherlands offshore F3 block in the North Sea and show how the proposed algorithm can play an effective role in a computational seismic interpretation process.

2 Saliency Detection

In this paper, we develop a novel scheme for saliency detection that decomposes the 3D-FFT spectrum of the data and applies a directional center-surround (DCS) model with a top-down approach to depict variations along all three dimensions of a 3D volume. Given a 3D seismic data volume $V$ of size $Z \times X \times Y$, where $Z$ represents time or depth, $X$ represents crosslines, and $Y$ represents inlines, we compute saliency using the block diagram shown in Fig. 1.

In the first step, we compute the 3D-FFT of $V$ using a local cube with a sliding window having more than 50% overlap to yield a spectral volume $F$. The size of the local cube can be adjusted to yield a fine or coarse resolution of the saliency map. In the second step, we perform decompositions of the spectral cube as explained in Fig. 2. Within a 3D spectral cube in the $\omega_z$-$\omega_x$-$\omega_y$ coordinate system, if a spectral point lies closer to the $\omega_x$-$\omega_y$ plane, then its projection onto the $\omega_x$-$\omega_y$ plane, i.e., along the $\omega_z$-direction, depicts variations more prominently than its projections onto the $\omega_z$-$\omega_x$ or $\omega_z$-$\omega_y$ planes. Therefore, we decompose the 3D spectral cube by projecting each spectral point along the $\omega_z$-direction as

$$F(\omega_z, \omega_x, \omega_y) = \sum_{z=0}^{N-1} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(z,x,y)\, e^{-j\frac{2\pi}{N}(\omega_z z + \omega_x x + \omega_y y)} \qquad (1)$$
$$F_z(\omega_z, \omega_x, \omega_y) = F(\omega_z, \omega_x, \omega_y)\, \frac{\sqrt{\omega_x^2 + \omega_y^2}}{\sqrt{\omega_z^2 + \omega_x^2 + \omega_y^2}} \qquad (2)$$

where $(z, x, y)$ and $(\omega_z, \omega_x, \omega_y)$ represent the coordinates in the space and frequency domains, respectively, $N$ defines the size of the local data cube, and $f(z,x,y)$ is the seismic data within volume $V$. The spectrum is shifted so that its DC component lies at the center of the cube, and the frequency coordinates are measured relative to this center. Similarly, we also compute the decompositions along the $\omega_x$- and $\omega_y$-directions as

$$F_x(\omega_z, \omega_x, \omega_y) = F(\omega_z, \omega_x, \omega_y)\, \frac{\sqrt{\omega_z^2 + \omega_y^2}}{\sqrt{\omega_z^2 + \omega_x^2 + \omega_y^2}} \qquad (3)$$
$$F_y(\omega_z, \omega_x, \omega_y) = F(\omega_z, \omega_x, \omega_y)\, \frac{\sqrt{\omega_z^2 + \omega_x^2}}{\sqrt{\omega_z^2 + \omega_x^2 + \omega_y^2}} \qquad (4)$$
Figure 2: Illustration of the spectral cube, plane projections, and decompositions. (a) Spectral cube. (b) Plane projections.

Thus, after step two, $F$ is decomposed into three components, namely $F_z$, $F_x$, and $F_y$. The equations above for the spectral decomposition do not apply in one special case, $(\omega_z, \omega_x, \omega_y) = (0, 0, 0)$: this is the center of the 3D spectral cube, which is associated with the DC component of the spectrum and hence does not reflect changes along any of the three planes. Therefore, we exclude the center point when extracting features from the spectral cube.
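As a concrete illustration of Eqs. (1)-(4), the sketch below decomposes the centered local spectrum of a single data cube; the projection weights implement our reading of the equations above and are an assumption to be checked against the authors' implementation.

```python
import numpy as np

def decompose_spectrum(f):
    """Split the centered 3D-FFT spectrum of a local cube f (N x N x N)
    into three components F_z, F_x, F_y following Eqs. (1)-(4)."""
    N = f.shape[0]
    F = np.fft.fftshift(np.fft.fftn(f))           # move DC to the cube center
    w = np.arange(N) - N // 2                     # centered frequency coordinates
    wz, wx, wy = np.meshgrid(w, w, w, indexing="ij")
    norm = np.sqrt(wz**2 + wx**2 + wy**2)
    norm[N // 2, N // 2, N // 2] = 1.0            # avoid 0/0 at the DC point
    Fz = F * np.sqrt(wx**2 + wy**2) / norm        # projection along omega_z, Eq. (2)
    Fx = F * np.sqrt(wz**2 + wy**2) / norm        # projection along omega_x, Eq. (3)
    Fy = F * np.sqrt(wz**2 + wx**2) / norm        # projection along omega_y, Eq. (4)
    for comp in (Fz, Fx, Fy):                     # exclude the DC center point
        comp[N // 2, N // 2, N // 2] = 0.0
    return Fz, Fx, Fy
```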

In the third step, we calculate the absolute mean over each local cube to obtain the corresponding features, known as spectral energies, labelled here as $E_z$, $E_x$, and $E_y$, respectively. This feature extraction enhances the variations along each axis and provides a pixel-level description of the energy variations used when calculating the saliency maps.
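The sketch below combines steps one through three, reusing decompose_spectrum from the previous snippet; the cube size N = 16 and the 50% overlap stride are illustrative choices, not the parameters used in the paper.

```python
def spectral_energies(V, N=16):
    """Slide an N x N x N cube over volume V with 50% overlap and record
    the absolute mean (spectral energy) of each projected component."""
    step = N // 2                                   # 50% overlap between cubes
    dims = [(s - N) // step + 1 for s in V.shape]
    Ez, Ex, Ey = (np.zeros(dims) for _ in range(3))
    for i in range(dims[0]):
        for j in range(dims[1]):
            for k in range(dims[2]):
                z, x, y = i * step, j * step, k * step
                Fz, Fx, Fy = decompose_spectrum(V[z:z+N, x:x+N, y:y+N])
                Ez[i, j, k] = np.abs(Fz).mean()     # spectral energy E_z
                Ex[i, j, k] = np.abs(Fx).mean()     # spectral energy E_x
                Ey[i, j, k] = np.abs(Fy).mean()     # spectral energy E_y
    return Ez, Ex, Ey
```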

The fourth step of the proposed method applies the DCS model to construct the saliency maps from $E_z$, $E_x$, and $E_y$ as

$$S_d(z, x, y) = \frac{1}{M} \sum_{(z', x', y')} g(z', x', y')\, \big| E_d(z, x, y) - E_d(z', x', y') \big| \qquad (5)$$

where $z'$, $x'$, and $y'$ are chosen such that the point $(z', x', y')$ is in the immediate neighborhood of the point $(z, x, y)$, i.e., within a directional window centered at $(z, x, y)$ as depicted in Fig. 3. $M$ represents the total number of points included in the summation, $g$ represents the Gaussian weights, $E_d$ represents $E_z$, $E_x$, or $E_y$, and $S_d$ represents $S_z$, $S_x$, or $S_y$, respectively.

To incorporate directionality into the saliency calculation and to consolidate top-down salient features tunable to particular sizes, structures, and orientations, the proposed approach performs the DCS comparison in conjunction with Gaussian weighting of pixel values away from the center point at which the comparison is calculated. DCS comparisons along the $z$-, $x$-, and $y$-directions are illustrated in Fig. 3, where dark blue indicates large Gaussian weights and lighter colors indicate small weights. For example, if we are interested in calculating saliency along the $z$-direction, we can tune not only the number of neighboring pixels along the $z$-direction included in the DCS computation but also the weights associated with each pixel value. Similarly, we can tune the directionality of features (by changing the orientation of the DCS comparison) and their size (by changing the number of neighboring pixel values included in the DCS comparison), as shown in Fig. 3. In this paper, we use a DCS neighborhood window whose extent is set by the side length of the local cube and apply it along the $z$-, $x$-, and $y$-directions, respectively.
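A minimal sketch of the DCS comparison of Eq. (5) along a single direction follows; the window length L, the Gaussian spread sigma, and the wrap-around boundary handling are illustrative simplifications rather than the paper's settings.

```python
import numpy as np

def dcs_saliency(E, axis=0, L=7, sigma=1.5):
    """Directional center-surround saliency (Eq. 5): Gaussian-weighted
    absolute differences between each voxel and its neighbors along one axis."""
    offsets = [d for d in range(-(L // 2), L // 2 + 1) if d != 0]
    g = np.exp(-np.array(offsets, dtype=float)**2 / (2 * sigma**2))
    g /= g.sum()            # normalization also plays the role of 1/M in Eq. (5)
    S = np.zeros_like(E)
    for weight, d in zip(g, offsets):
        # np.roll wraps at the volume borders; a full implementation would pad.
        S += weight * np.abs(E - np.roll(E, d, axis=axis))
    return S
```

Changing `axis` reorients the comparison, and changing `L` resizes it, which is how the directionality and size tuning described above would be exposed in practice.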

Figure 3: Directional center-surround comparison along the $z$-, $x$-, and $y$-directions, respectively.
(a) A typical seismic inline image.
(b) Zhang et al., (2008) [21]
(c) Hou and Zhang, (2007) [22]
(d) Guo and Zhang, (2010) [23]
(e) Achanta et al., (2008) [24]
(f) Fang et al., (2014) [3]
(g) Seo and Milanfar, (2009) [25]
(h) Long and AlRegib, (2015) [14]
(i) Proposed Method
Figure 4: The output of various saliency detection algorithms on a typical seismic inline section. Red arrows and ellipses highlight the areas in which the proposed saliency detection algorithm outperforms the others.

Finally, the saliency map $S$, which is of the same size as $V$, is obtained as

$$S = w_z S_z + w_x S_x + w_y S_y \qquad (6)$$

The weights in the saliency calculation, i.e., $w_z$, $w_x$, and $w_y$, can be set either equally, to construct a saliency map with equal contributions from the maps calculated along the $z$-, $x$-, and $y$-directions, or empirically, to highlight certain features along a particular direction. In this work, we have used a fixed local cube size and equal weights $w_z = w_x = w_y$ for the saliency calculation. The proposed saliency detection is based on the 3D-FFT, which makes it fast, and it obtains saliency maps with adjustable resolution by varying the cube size. Furthermore, the proposed approach is computationally inexpensive and requires very few parameters compared to other visual saliency algorithms.
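Assuming the helper functions sketched above, an end-to-end combination under Eq. (6) with equal weights might look as follows; the upsampling back to the input size is our assumption about how the coarse map is matched to $V$.

```python
from scipy.ndimage import zoom

V = np.random.rand(64, 64, 64)                      # stand-in seismic volume
Ez, Ex, Ey = spectral_energies(V, N=16)             # steps one to three
Sz = dcs_saliency(Ez, axis=0)                       # step four, z-direction
Sx = dcs_saliency(Ex, axis=1)                       # step four, x-direction
Sy = dcs_saliency(Ey, axis=2)                       # step four, y-direction
S = (Sz + Sx + Sy) / 3.0                            # Eq. (6), equal weights
S_full = zoom(S, [v / s for v, s in zip(V.shape, S.shape)])  # resize to V's size
```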

3 Results

In this section, we present the results of saliency detection on a real seismic dataset acquired from the Netherlands offshore F3 block in the North Sea. A typical seismic inline section from this dataset, containing multiple seismic facies, is shown in Fig. 4a. A well-founded saliency algorithm should not only resolve spatial variations along different directions within the seismic volume but also highlight the contrast of different geological structures with respect to their surrounding sediments. The results of the state-of-the-art image and video saliency detection algorithms presented in [21], [22], [23], [24], [3], [25], and [14], and of the proposed method, are shown in Figs. 4b-i, respectively.

Subjective evaluation of the results shows that the proposed method highlights salient features in a seismic image more effectively than the other state-of-the-art algorithms. Specifically, the red arrows and ellipses in Fig. 4i mark regions of the seismic inline section in which the proposed method outperforms the other saliency detection algorithms. As observed in Fig. 4, a majority of the algorithms fail to detect a major fault in the center of the seismic image. Such faults are characterized by subtle variations in intensity and texture, which make them extremely challenging to highlight using a small set of seismic attributes. Figure 4i shows that this fault is adequately highlighted by the proposed method because it takes into account the spectral variations along all three dimensions of the seismic data. In addition, the proposed algorithm distinctly delineates a sigmoidal structure with respect to its surroundings, indicated by an ellipse in the middle of the seismic section, which is not distinctively detected by the other algorithms. Furthermore, the salient values detected by the proposed algorithm near the salt-dome boundary are not only more localized but also significantly higher than those of most other state-of-the-art algorithms. Similarly, the red arrows in the bottom-left and middle-right portions of the proposed saliency map highlight areas, such as smaller faults and chaotic structures, that are not clearly visible in the other saliency maps. Finally, it can be observed from Fig. 4 that the resolution of the proposed approach is much better than that of the other saliency detection algorithms, which makes it advantageous for applications such as seismic interpretation that require not only fine perception but also efficient detection of subtle features in images and videos. Therefore, the proposed approach is expected not only to become a handy tool for interpreter-assisted seismic interpretation but also to serve as a base attribute map in workflows for the automated detection of various geological structures.

4 Conclusion

In this paper, we have developed a new saliency detection algorithm for seismic applications using features based on 3D-FFT local spectra and multi-dimensional plane projections. We have proposed a novel approach for feature extraction based on the spectral cube, coupled with a directional center-surround model, to estimate salient features effectively. The proposed algorithm is based on the 3D-FFT, which makes it computationally inexpensive, advantageous for large datasets, and amenable to real-time implementations. Simulation results on a real seismic dataset show the efficacy of the proposed scheme in detecting salient points and subtle features in a geologically complex setting. Furthermore, the experimental results show that the proposed method outperforms the state-of-the-art methods for saliency detection in seismic applications.

References

  • [1] Ali Borji and Laurent Itti, “State-of-the-art in visual attention modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, Jan 2013.
  • [2] Laurent Itti, Christof Koch, Ernst Niebur, et al., “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
  • [3] Yuming Fang, Zhou Wang, Weisi Lin, and Zhijun Fang, “Video saliency incorporating spatiotemporal cues and uncertainty weighting,” IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 3910–3921, 2014.
  • [4] Chanchan Qin, Guoping Zhang, Yicong Zhou, Wenbing Tao, and Zhiguo Cao, “Integration of the saliency-based seed extraction and random walks for image segmentation,” Neurocomputing, vol. 129, pp. 378–391, 2014.
  • [5] Jia Li, Yonghong Tian, and Tiejun Huang, “Visual saliency with statistical priors,” International Journal of Computer Vision, vol. 107, no. 3, pp. 239–253, 2014.
  • [6] Kathryn Koehler, Fei Guo, Sheng Zhang, and Miguel P Eckstein, “What do saliency models predict?,” Journal of vision, vol. 14, no. 3, pp. 14–14, 2014.
  • [7] Jianqiang Ren, Xiaojin Gong, Lu Yu, Wenhui Zhou, and Michael Ying Yang, “Exploiting global priors for RGB-D saliency detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 25–32.
  • [8] Laurent Itti and Pierre Baldi, “Bayesian surprise attracts human attention,” Vision research, vol. 49, no. 10, pp. 1295–1306, 2009.
  • [9] Timor Kadir and Michael Brady, “Saliency, scale and image description,” International Journal of Computer Vision, vol. 45, no. 2, pp. 83–105, 2001.
  • [10] Qingshan Li, Yue Zhou, and Jie Yang, “Saliency based image segmentation,” in 2011 International Conference on Multimedia Technology (ICMT). IEEE, 2011, pp. 5068–5071.
  • [11] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li, “Salient object detection: A benchmark,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5706–5722, 2015.
  • [12] Ali Borji, Dicky N Sihite, and Laurent Itti, “Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 55–69, 2013.
  • [13] Ali Borji and Laurent Itti, “State-of-the-art in visual attention modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, 2013.
  • [14] Zhiling Long and Ghassan AlRegib, “Saliency detection for videos using 3D FFT local spectra,” Proc. SPIE, vol. 9394, p. 93941G, 2015.
  • [15] Yathunanthan Sivarajah, Eun-Jung Holden, Roberto Togneri, Michael Dentith, and Mark Lindsay, “Visual saliency and potential field data enhancements: Where is your attention drawn?,” Interpretation, vol. 2, no. 4, pp. SJ9–SJ21, 2014.
  • [16] Neelu Jyothi Ahuja and Parag Diwan, “An expert system for seismic data interpretation using visual and analytical tools,” International Journal of Scientific & Engineering Research, vol. 3, no. 4, pp. 1–13, 2012.
  • [17] Noomane Drissi, Thierry Chonavel, and Jean Marc Boucher, “Salient features in seismic images,” in OCEANS 2008 - MTS/IEEE Kobe Techno-Ocean, April 2008, pp. 1–4.
  • [18] Maria Faraklioti and Maria Petrou, “Horizon picking in 3D seismic data volumes,” Machine Vision and Applications, vol. 15, no. 4, pp. 216–219, 2004.
  • [19] Muhammad Amir Shafiq, Tariq Alshawi, Zhiling Long, and Ghassan AlRegib, “SalSi: A new seismic attribute for salt dome detection,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2016, pp. 1876–1880.
  • [20] Muhammad Amir Shafiq, Tariq Alshawi, Zhiling Long, and Ghassan AlRegib, “The role of visual saliency in the automation of seismic interpretation,” submitted to Geophysical Prospecting, Nov 2016.
  • [21] Lingyun Zhang, Matthew H Tong, Tim K Marks, Honghao Shan, and Garrison W Cottrell, “SUN: A Bayesian framework for saliency using natural statistics,” Journal of Vision, vol. 8, no. 7, pp. 32–32, 2008.
  • [22] Xiaodi Hou and Liqing Zhang, “Saliency detection: A spectral residual approach,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2007, pp. 1–8.
  • [23] Chenlei Guo and Liming Zhang, “A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 185–198, 2010.
  • [24] Radhakrishna Achanta, Francisco Estrada, Patricia Wils, and Sabine Süsstrunk, “Salient region detection and segmentation,” in International conference on computer vision systems. Springer, 2008, pp. 66–75.
  • [25] Hae Jong Seo and Peyman Milanfar, “Static and space-time visual saliency detection by self-resemblance,” Journal of vision, vol. 9, no. 12, pp. 15–15, 2009.