1 Introduction
Time series data[1] is a sequence of observations measured at repeated intervals of time. Time series mining has applications in stock market analysis[2], sales forecasting[3], weather forecasting[3], reservoir characterization[4], quality control[4] and many other areas. Seismic data is a form of time series data with three spatial dimensions; raw seismic data consists of a set of traces. Seismic data[5] helps to determine the structure of a reservoir, but it plays only a limited role in determining the properties of the reservoir bodies. The act of building a reservoir model that incorporates all the characteristics of the reservoir relevant to storing hydrocarbons and producing them is called reservoir characterization[5, 6]. Reservoir characterization is one of the biggest challenges that geophysicists face today. Drilling wells is one way to improve reservoir characterization. However, it is expensive and hazardous for both humans and the environment; it requires much time and destabilizes the earth's crust. Another way to improve reservoir characterization is to perform efficient seismic facies classification[6]. The basic aim of seismic facies classification is to identify rock properties and lithological changes in the facies[5].
A seismic facies[5] is defined as a region that exhibits properties which distinguish it from the surrounding areas. These variations of properties across seismic facies help in efficiently identifying the presence of hydrocarbons in the seismic volume. Manual processes exist for analysing seismic data to find regions of interest; however, these manual methods are very time consuming and require considerable expert knowledge to make an accurate characterization[5]. The problem becomes more difficult as the size of the data or the number of attributes to be analyzed increases. Therefore, various techniques were introduced to automate the process of seismic facies classification. However, most of the automated methods were based on finding a prescribed pattern, which makes them unreliable on noisy data. Apart from conventional well logs, seismic attributes are widely used in both exploration and reservoir characterization and are routinely integrated into the seismic interpretation process.
This study focuses on texture-attribute-based classification[7] of seismic facies. It works by extracting a set of texture attributes from seismic data and using those attributes for facies classification. The attributes are chosen on the basis of their variability around the seismic area of study; the texture attributes that show the most variation are considered for classification. The approach can be broadly partitioned into five steps:
(1) Collecting and Preprocessing the data.
(2) Extracting a set of features from time series data.
(3) Interpolating the missing values of attributes.
(4) Using nonlinear approach for unsupervised facies classification.
(5) Visualizing the results.
2 Literature Review
Brian P. West[8] proposed an approach for seismic facies classification. It begins by constructing a set of polygons on a cross-section selected from the seismic volume. A gray level co-occurrence matrix (GLCM) is calculated for each of the selected polygons. The set of texture attributes generated from the GLCM, i.e. homogeneity, energy and entropy, is then used as input to an artificial neural network to identify the seismic facies.
H. Sabeti[9] proposed an approach for seismic facies interpretation using the K-means clustering algorithm. The method worked by generating a synthetic seismic cube, from which eight different seismic attributes were calculated using the Paradigm software. The application of the approach to real seismic data demonstrated the detection of natural changes in the model. However, the biggest problem with the K-means clustering algorithm is determining the optimal number of clusters. So, Atish Roy[10] introduced an algorithm that used Self-Organizing Maps (SOM) to classify the facies of the Mississippian tripolitic chert reservoir. SOM[11] is a method for clustering data using prototype vectors. The approach begins by using SOM for clustering. Initially, the method chooses a large number of clusters; in subsequent iterations, data vectors are merged into a smaller number of clusters. Both structural and texture attributes were used for the facies classification. Further, supervision was introduced by using the three average vectors obtained as a result of the unsupervised classification, each average data vector having different attributes as components. The classification was performed by comparing each sample with the generated average data vectors.
J. D. Pigott[12] used first order seismic attributes for identification of seismic facies. There are numerous attributes available for the interpretation of seismic facies; Pigott[12] defined eight first order seismic attributes that play a major role in geological interpretation: amplitude, instantaneous frequency, variance, chaos, envelope, acoustic impedance and cosine of phase. These eight attributes were then used for basin exploration of the East China Sea. Renjun Wen[13] introduced a new approach for 3D modelling of the heterogeneity present in channelized reservoirs. The method worked by calculating an acoustic impedance cube using deterministic seismic inversion. A set of six seismic attributes calculated from the impedance cube was then used for seismic facies classification. The approach implements two methods of classification: trace based and voxel based. The trace based method assigns a facies code to each trace within the cube, whereas in voxel based classification each small voxel is classified individually. Resulting cubes obtained using different combinations of the six attributes were compared with the ground truth, and only the significant attributes were retained for classification. However, the approach did not perform well on noisy data. So, Hao-Kun Du[14] introduced an approach for seismic facies analysis using Self-Organizing Maps (SOM) and Empirical Mode Decomposition (EMD). EMD is a method for denoising data: it works by calculating Intrinsic Mode Functions (IMFs), and the IMFs that show good correlation with the data are then used to represent it. SOM works on the principle of unsupervised learning to represent the data in lower dimensions. SOM[11] works in two phases: (1) a learning phase and (2) a mapping phase. In the learning phase, SOM learns from the input data, and during the mapping phase each input vector is mapped to one of the nodes in the SOM grid. The application of this approach to a real seismic dataset showed that the facies generated were better than those of SOM without EMD. Tao Zhao in 2015[30] performed a comparison of various supervised and unsupervised seismic facies classification techniques, including K-means, SOM, GTM, Gaussian Mixture Model (GMM), Support Vector Machine (SVM) and Artificial Neural Network (ANN). These six classification algorithms were applied to a 3D seismic data volume acquired over the Canterbury Basin, New Zealand. The classification results over the study area show that supervised methods provide accurate estimates of the seismic facies; however, they fail to identify some of the important features highlighted by the unsupervised methods.
As pointed out above, seismic facies classification using the K-means algorithm[9] identifies the lithological changes in the facies but fails to determine the optimal number of clusters. Further, facies classification using Self-Organizing Maps[10, 14] also suffers from limitations, including the choice of optimal parameters, convergence, etc. All the approaches discussed above perform a linear transformation from data space to latent space. Therefore, in the present work we introduce a classification approach (SFA-GTM) that performs a nonlinear transformation from data space to latent space and removes the major limitations of the existing approaches. The rest of the paper is organized as follows: an introduction to the Generative Topographic Map (GTM) and a description of the data set are given in Sections 3 and 4 respectively. The methodology of the proposed approach for seismic facies classification is specified in Section 5. Section 6 explains the classification results, and the conclusion is stated in Section 7.
3 Introduction to GTM
Principal component analysis[16] is a method for projecting points from an M-dimensional space onto a hyperplane of L dimensions, where L ≪ M. Similarly, factor analysis[17] is used for representing a large number of correlated variables by a small number of factors that better represent the data. The difference between the two is that factor analysis deals with the covariance whereas PCA focuses on the variance in the data[18]. Factor analysis is based on the EM (Expectation-Maximization) algorithm[19] and works by modeling the M-dimensional variable as a function of the L-dimensional latent variables plus noise.
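As an illustration of the dimensionality reduction discussed above, the following sketch (not from the paper; the function name `pca_project` and the toy data are our own) projects M-dimensional points onto the top-L principal components using NumPy:

```python
import numpy as np

def pca_project(X, L):
    """Project M-dimensional rows of X onto the top-L principal components."""
    Xc = X - X.mean(axis=0)                          # centre the data
    C = np.cov(Xc, rowvar=False)                     # M x M covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)             # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:L]]  # top-L eigenvectors
    return Xc @ top                                  # N x L latent coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # M = 5 dimensional data
Z = pca_project(X, L=2)                              # L = 2 dimensional projection
```

The columns of `Z` are ordered by decreasing explained variance, which is the property GTM later exploits when PCA is used to initialize the mapping.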
Further, SOM (Self-Organizing Maps)[11] is also a method for low-dimensional representation of an input vector space. There are three essential processes in the formation of a SOM: competition, cooperation, and synaptic adaptation. In the competition phase, each neuron computes a discriminant function for the input vector, and the neuron with the largest discriminant wins the competition. In the cooperation phase, the winning neuron excites the adjacent neurons depending upon their distance from it. The synaptic adaptation phase enables the adjacent neurons to increase their discriminant function values in response to the input.
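The three phases can be condensed into a single SOM update step. The sketch below is an illustrative NumPy implementation (the map size, learning rate and neighbourhood width are arbitrary choices of ours, not values from the paper):

```python
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One SOM update: competition, cooperation, then synaptic adaptation."""
    # Competition: the unit whose weight vector is closest to the input wins
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Cooperation: Gaussian neighbourhood around the winner on the map grid
    grid_d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))
    # Adaptation: neighbouring units move towards the input
    weights += lr * h[:, None] * (x - weights)
    return bmu

# 5x5 map over 3-dimensional inputs
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
rng = np.random.default_rng(1)
weights = rng.normal(size=(25, 3))
bmu = som_step(weights, grid, np.array([1.0, 0.0, 0.0]))
```

Repeating this step over many inputs, while shrinking `lr` and `sigma`, produces the topology-preserving map; the lack of a principled rule for those two schedules is exactly the limitation discussed next.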
Although SOM has been widely used in various applications such as image processing, visualization, facies classification and unsupervised learning, it has several limitations. First, there is no framework for choosing the initial parameters of SOM, i.e. the learning rate, neighbourhood function width, etc. Second, there is no guarantee that the training algorithm will converge.
Like Self-Organizing Maps, the Generative Topographic Map (GTM)[20] represents the input data by a small number of latent variables, and it removes some of the major drawbacks of SOM. It is a form of nonlinear transformation from the latent space to the data space. The conversion between the two spaces is done with the help of a function y(x; W), where x denotes a point in the latent space and W denotes the weight and bias values. The mapping function embeds a non-Euclidean manifold in the data space; for example, a latent space of dimension L = 2 can be mapped into a data space of dimension D = 3. Introducing a distribution over the latent space leads to the generation of a probability distribution over the data space.
In the Generative Topographic Map, the distribution of a data point t, for given x and W, is taken to be a radially symmetric Gaussian[20] given by equation (1):

p(t \mid x, W, \beta) = \left(\frac{\beta}{2\pi}\right)^{D/2} \exp\left\{-\frac{\beta}{2}\,\|y(x; W) - t\|^2\right\}    (1)

where \beta is the noise precision (inverse variance) and t is a point in the data space.
The integration over the latent space distribution results in the induction of the corresponding distribution[20] over the data space, given by equation (2):

p(t \mid W, \beta) = \int p(t \mid x, W, \beta)\, p(x)\, dx    (2)
After determining the prior distribution and the mapping function, the initial values of W and \beta are determined using principal component analysis[16]. However, the integration over x in equation (2) is analytically intractable, and the model can take different forms depending on the choice of p(x). In order to make the model similar in spirit to SOM, we use a special form of p(x): a sum of delta functions centred on the nodes x_k of a grid in the latent space[21], given by equation (3):

p(x) = \frac{1}{K} \sum_{k=1}^{K} \delta(x - x_k)    (3)
By using this form of p(x), the integration given in equation (2) can be performed analytically. Each latent point x_k mapped by the function y(x; W) forms the center of a Gaussian density function, so the distribution in the data space becomes:

p(t \mid W, \beta) = \frac{1}{K} \sum_{k=1}^{K} p(t \mid x_k, W, \beta)    (4)
In general, the objective function in the GTM algorithm takes the form of the log likelihood, given by equation (5):

\mathcal{L}(W, \beta) = \sum_{n=1}^{N} \ln \left\{ \frac{1}{K} \sum_{k=1}^{K} p(t_n \mid x_k, W, \beta) \right\}    (5)
3.1 Expectation-Maximization Algorithm
After deciding the mapping function and the initial values of W and \beta, we use the Expectation-Maximization algorithm[19] for nonlinear optimization. Consider that at some point in the algorithm we have a weight matrix W and noise parameter \beta. The E-step of the EM algorithm proceeds by computing the responsibility for each combination of k and n, where k indexes the Gaussian components and n the data points[21].
The responsibility value r_{kn} denotes the posterior probability that the n-th data point was generated by the k-th component and is given by equation (6):

r_{kn} = \frac{p(t_n \mid x_k, W, \beta)}{\sum_{k'=1}^{K} p(t_n \mid x_{k'}, W, \beta)}    (6)
Usually, the mapping function used in topographic mapping is a generalized linear regression model in which y(x; W) is a linear combination of basis functions, given by equation (7):

y(x; W) = W\,\phi(x)    (7)
The M-step calculates the new updated values of the weights W and of \beta by differentiating the log likelihood \mathcal{L} with respect to them and setting the derivatives to zero. The GTM algorithm alternates between the E and M steps until the objective function converges[21].
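A minimal sketch of one EM iteration for GTM, assuming the linear-in-basis mapping y(x; W) = W φ(x) of equation (7). The grid sizes, Gaussian basis centres and the small ridge term added for numerical stability are illustrative choices of ours, not values prescribed by the paper:

```python
import numpy as np

def gtm_em_step(T, Phi, W, beta):
    """One EM iteration of GTM: E-step responsibilities, then W and beta updates."""
    N, D = T.shape
    Y = Phi @ W                                          # K x D mapped latent centres
    d2 = ((Y[:, None, :] - T[None, :, :]) ** 2).sum(-1)  # K x N squared distances
    logR = -0.5 * beta * d2
    logR -= logR.max(axis=0, keepdims=True)              # numerical stability
    R = np.exp(logR)
    R /= R.sum(axis=0, keepdims=True)                    # responsibilities r_kn, eq (6)
    G = np.diag(R.sum(axis=1))                           # M-step normal equations
    W_new = np.linalg.solve(Phi.T @ G @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                            Phi.T @ R @ T)
    d2_new = (((Phi @ W_new)[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    beta_new = N * D / (R * d2_new).sum()                # updated noise precision
    return W_new, beta_new, R

# Toy setup: 3x3 latent grid (K = 9), 4 Gaussian basis functions plus a bias column
rng = np.random.default_rng(0)
T = rng.normal(size=(50, 3))                             # N = 50 points, D = 3
lat = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
cen = np.array([(0.5, 0.5), (0.5, 1.5), (1.5, 0.5), (1.5, 1.5)])
Phi = np.exp(-((lat[:, None, :] - cen[None, :, :]) ** 2).sum(-1) / 2.0)
Phi = np.hstack([Phi, np.ones((len(lat), 1))])
W, beta = rng.normal(size=(Phi.shape[1], 3)) * 0.1, 1.0
for _ in range(5):
    W, beta, R = gtm_em_step(T, Phi, W, beta)
```

Because y depends linearly on W, the M-step reduces to the weighted least-squares solve shown above, which is the main computational advantage GTM holds over the heuristic SOM update.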
Parameter        | Range of Parameter
-----------------|-------------------
Inline Range     |
Crossline Range  |
Z Range          |
Size (km)        |
4 Dataset Description
Reservoir characterization needs two types of data: hard data and soft data. This study uses the data of a seismic survey conducted in the F3 block of the Dutch sector of the North Sea[22]. It covers an area of approximately 24 × 16 km². The data set is obtained from dGB Earth Sciences. The seismic data volume comprises 947 crosslines and 646 inlines; both have a line spacing of 25 m, and the sample rate is 1 ms. The complete description of the survey parameters is given in Table 1.
5 Proposed Work
Facies classification is the process of assigning a label to each facies depending on its distinct properties. There are two types of learning[23]: supervised and unsupervised. Supervised algorithms[23] build a learning model using training data and employ that model to determine the output label for test data, while for unsupervised algorithms[23] no training data are available. Seismic facies identification is one of the main problems in the area of reservoir characterization. There exist various unsupervised approaches for seismic facies identification, such as Self-Organizing Maps, K-means clustering, etc., and a number of recent studies were based on facies classification using these approaches. However, these approaches have some limitations, such as convergence problems and the lack of a theoretical framework for choosing model parameters. Therefore, in this paper we introduce a nonlinear approach (SFA-GTM) for unsupervised classification of seismic facies based on a set of attributes. The classification is called unsupervised because no well data is used in the proposed approach. The proposed approach solves some of the major limitations of the existing approaches. Further, it also introduces an interpolation method, used in conjunction with GTM, for filling the missing values of attributes in the seismic data. Figure 2 demonstrates the flowchart of the proposed SFA-GTM approach for facies classification.
5.1 Input Seismic Data
The very first step in any data mining task is collecting the data. Seismic data is collected by means of seismic surveys[5]. In a seismic survey, the waves generated by a set of seismic sources pass into the earth, and the rays that reflect back to the surface are recorded by seismic sensors. The time taken by the different rays to reach the sensors provides valuable information about the rock types and any possibility of gas and fluids in the rock formations[5].
5.2 Data Preprocessing
Data preprocessing plays a vital role in nearly every accurate analysis. The raw data collected from sensors is in "unprocessed" form and suffers from many defects, including noise, dead traces, etc. The classification accuracy highly depends on the quality of the data used; therefore, the collected data must be preprocessed to enable accurate facies analysis. This study uses dip-steered filtering[24] to remove the noise present in the 3D seismic data. Even though data preprocessing makes the data suitable for analysis, it is an expensive and time-consuming task.
5.3 Calculation of Set of Seismic Attributes
Seismic attributes play a major role in the interpretation of seismic data[6]. Attributes make the characterization of rock properties and lithological changes easier and better. This study performs GTM based classification of seismic facies using a set of GLCM texture attributes[25].
The texture attributes are calculated by means of the Gray Level Co-occurrence Matrix (GLCM)[25], which treats the set of traces in the seismic data as an image. The GLCM depends upon the organization of pixel values: the output of the GLCM function is the frequency with which pairs of pixel values occur together[25]. The function generates an N × N matrix, where N denotes the number of gray levels used; in this paper we consider a 64 × 64 matrix. There exist several GLCM seismic attributes for facies analysis, namely mean, variance, contrast, entropy, dissimilarity, homogeneity, energy, etc. This paper chooses four textural attributes depending upon the degree of correlation between them: energy, homogeneity, dissimilarity and contrast. Fig. 3 shows the four texture attributes calculated for the seismic data.
GLCM contrast is basically used to determine local changes in the data and is given by equation (8)[27]:

\text{Contrast} = \sum_{i,j=0}^{N-1} P_{i,j}\,(i - j)^2    (8)

where P is the GLCM probability matrix and N is the size of the matrix.
GLCM energy[26] is directly related to the co-occurrence matrix and expresses the continuity of rock properties. It determines the presence of homogeneous or rough texture and is given by equation (9):

\text{Energy} = \sum_{i,j=0}^{N-1} P_{i,j}^2    (9)

A low value of energy corresponds to a rough texture and a high value to a homogeneous texture[27].
GLCM homogeneity[26] is used to determine the smoothness of the texture, i.e. the degree of neighbourhood similarity in the data. It identifies regions with static mean and variance and is given by equation (10):

\text{Homogeneity} = \sum_{i,j=0}^{N-1} \frac{P_{i,j}}{1 + (i - j)^2}    (10)

GLCM dissimilarity weights the co-occurrence probabilities linearly by the gray level difference of each pixel pair and is given by equation (11):

\text{Dissimilarity} = \sum_{i,j=0}^{N-1} P_{i,j}\,|i - j|    (11)
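The four attributes of equations (8)-(11) can be computed directly from a normalised co-occurrence matrix. The sketch below is our own illustration (it uses a tiny 4-level image rather than the 64-level matrices of the study) that builds the GLCM for a horizontal offset and evaluates all four attributes:

```python
import numpy as np

def glcm_attributes(img, levels, dx=1, dy=0):
    """Build the GLCM for offset (dx, dy) and evaluate the attributes of eqs (8)-(11)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring pairs
    P /= P.sum()                                     # normalise to probabilities
    i, j = np.indices((levels, levels))
    return {
        "contrast":      (P * (i - j) ** 2).sum(),        # eq (8)
        "energy":        (P ** 2).sum(),                  # eq (9)
        "homogeneity":   (P / (1 + (i - j) ** 2)).sum(),  # eq (10)
        "dissimilarity": (P * np.abs(i - j)).sum(),       # eq (11)
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
attrs = glcm_attributes(img, levels=4)
```

On a perfectly uniform image the GLCM collapses to a single entry, so energy and homogeneity reach their maximum of 1 while contrast and dissimilarity vanish, matching the interpretation given above.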
5.4 Missing Values Handling
The preprocessing task resolves the major problems present in the data, including removal of noise, dead traces, etc. However, there can be cases where the degree of distortion in the data is too high to be recovered by preprocessing. The level of distortion depends on various factors, including recorder settings, the type of source devices used in the survey, the area of the seismic survey, etc., so calculating attributes over such parts of the dataset could make the overall process inaccurate. A common problem in nearly all fields is that values of variables are available at certain locations, and we want a function that uses the existing data to determine values at locations different from the measured ones. There exist machine learning algorithms[23] for handling missing values in the data. In this study, we use Radial Basis Functions (RBF)[28] for interpolating the missing values of attributes in the data. RBF has applications in many fields, including data mining[28], machine learning[29] and statistics[29]. The overall goal of using RBF for missing value prediction is to improve the accuracy of the facies classification process. Consider a set S of D-dimensional input vectors and the corresponding set of outputs:

S = \{ (s_i, f_i) \mid s_i \in \mathbb{R}^D,\; i = 1, \dots, n \}    (12)
The aim of RBF interpolation[29] is to find a function f such that f(s_i) = f_i for each pair in S. The present work uses RBF for filling the missing attribute values in the data; the degree of relationship between near-well features and seismic attributes forms the basis for the RBF models. The method works by dividing the complete labelled dataset into a training part and a test part. The training part is used to build a learning model, built separately for each individual attribute. After proper training, the learning models are used to predict the attribute values for the test part of the dataset; the difference between the actual and predicted attribute values gives the error. The root mean square error values for the RBF models built in this study are given in Table 2. The error values are very low, which shows that the models are well developed. These RBF models are then used for predicting the attribute values for the missing part of the data.
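A minimal sketch of the interpolation idea: fitting a Gaussian-kernel RBF interpolant by solving the linear system A w = f at the known locations, then evaluating it where values are missing. The kernel width `eps` and the synthetic data are our own illustrative choices; the study's actual RBF configuration is not specified here:

```python
import numpy as np

def rbf_fit(S, f, eps=2.0):
    """Fit a Gaussian-kernel RBF interpolant through the pairs (s_i, f_i)."""
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    A = np.exp(-eps * d2)            # interpolation matrix A_ij = phi(||s_i - s_j||)
    return np.linalg.solve(A, f)     # weights w with A w = f

def rbf_predict(S, w, X, eps=2.0):
    """Evaluate the fitted interpolant at new locations X."""
    d2 = ((X[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w

rng = np.random.default_rng(2)
S = rng.uniform(size=(30, 2))        # locations where the attribute is known
f = np.sin(3 * S[:, 0]) + S[:, 1]    # known attribute values (synthetic)
w = rbf_fit(S, f)
X_missing = rng.uniform(size=(5, 2)) # locations with missing attribute values
f_filled = rbf_predict(S, w, X_missing)
```

By construction the interpolant reproduces the training values exactly (up to numerical precision), which is why the training errors in Table 2 are expected to be very low.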
Name of Attribute | Training Error | Testing Error
------------------|----------------|--------------
Energy            |                |
Homogeneity       |                |
Contrast          |                |
Dissimilarity     |                |
5.5 Facies Classification Using Generative Topographic Map
The next step, after calculating the four textural attributes (energy, homogeneity, contrast, dissimilarity) and handling the missing values, is to initialize the nonlinear mapping model. In the present work, the aim is to perform a nonlinear transformation from the D = 4 dimensional data space to an L = 2 dimensional latent space. The model begins by initializing a 2D grid of latent points with 30 × 30 nodes. The algorithm then generates a grid of basis function centers with 15 × 15 nodes and selects the value of σ, the width of the basis functions.
The initial values of the parameters (the weights and biases W) required for initializing the mapping model are assigned using the principal component analysis algorithm[16]. The nonlinear model then proceeds by computing the prior probability over the latent space. As pointed out in Section 3, defining a prior distribution over the latent space induces a corresponding distribution over the data space. The algorithm keeps updating the values of the weights and biases W by executing the Expectation-Maximization (EM) algorithm until it fulfills the stopping criteria, which are decided by two factors: first, when the error becomes constant, and second, when the maximum number of iterations is reached.
6 Classification Results
The nonlinear Generative Topographic Map transforms the data from the four-dimensional seismic attribute space to the two-dimensional seismic facies plane for visualizing the classification results. Fig. 4 shows the facies classification resulting from the two transformation approaches: Fig. 4(a) presents the classification results obtained from the linear transformation approach[18], and Fig. 4(b) demonstrates the facies classification resulting from the nonlinear GTM. As per the classification results shown in Fig. 4, four different kinds of facies are identified by the Generative Topographic Mapping. From Fig. 4 it is quite clear that the facies identified using the GTM are more accurate than those identified by the linear transformation approach. The comparison of the regions of seismic facies highlighted with rectangles in Figs. 4(a) and 4(b) shows that the facies identified by the nonlinear approach are more clearly identifiable than those of the linear approach. Moreover, the comparison of the different seismic regions identified using GTM with the ground truth[22] shows that the approach performed an accurate facies classification.
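For this kind of visualization, each data point can be projected into the 2-D latent plane through its posterior mean over the latent grid, a standard device for GTM[20]. The responsibilities below are a toy example of ours, not the study's results:

```python
import numpy as np

def latent_means(R, latent_grid):
    """Posterior-mean projection of each data point into the 2-D latent plane."""
    # R is K x N (responsibilities), latent_grid is K x 2 (grid coordinates)
    return R.T @ latent_grid                 # (N x K) @ (K x 2) -> N x 2

# Toy responsibilities over a 3x3 latent grid for 4 data points
grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
R = np.full((9, 4), 1.0 / 9.0)               # uniform responsibilities, sum to 1 per point
coords = latent_means(R, grid)               # each point maps to the grid centre (1, 1)
```

Facies labels can then be read off by clustering or binning these 2-D latent coordinates.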
7 Conclusion
In this paper, we presented a nonlinear transformation approach (SFA-GTM) to identify different seismic facies. The proposed SFA-GTM approach serves two major purposes. First, it provides a method for interpolating the missing entries in the data: Radial Basis Functions (RBF) are used to interpolate the missing attribute values, and the root mean squared (RMS) error values obtained show that the method interpolates them accurately. Second, the proposed approach performs a nonlinear mapping between the data space and the latent space. There exist various linear approaches, such as SOM and K-means, to identify seismic facies; however, these approaches suffer from major limitations such as determining the convergence rate, the natural clusters, the width parameters, etc. In contrast to linear transformation techniques, GTM addresses the convergence problem by monitoring the stability of the variance. The proposed SFA-GTM approach identifies the seismic facies using four textural attributes. It works well even in the absence of well log data and identifies the natural clusters present in the data. Furthermore, the classification results show that the set of facies identified by the GTM is more precise than that of the linear approaches. However, the major limitation of the proposed approach is its higher runtime complexity compared to linear approaches. This study is primarily focused on the unsupervised classification of seismic facies; nonlinear supervised classification of facies using seismic and well log data can be taken up as future work.
References
 [1] Han, Jiawei, Jian Pei, and Micheline Kamber. Data mining: concepts and techniques. Elsevier, 2011.
 [2] Kim, Kyoung-jae. "Financial time series forecasting using support vector machines." Neurocomputing 55.1 (2003): 307-319.
 [3] Lu, Chi-Jie, Tian-Shyug Lee, and Chih-Chou Chiu. "Financial time series forecasting using independent component analysis and support vector regression." Decision Support Systems 47.2 (2009): 115-125.
 [4] Shumway, Robert H., and David S. Stoffer. Time series analysis and its applications: with R examples. Springer Science & Business Media, 2010.
 [5] Kearey, Philip, Michael Brooks, and Ian Hill. An introduction to geophysical exploration. John Wiley & Sons, 2013.
 [6] Satinder Chopra and Kurt J. Marfurt. Seismic attributes for prospect identification and reservoir characterization. Society of Exploration Geophysicists and European Association of Geoscientists and Engineers, 2007.
 [7] Anees, Mohammad. ”Seismic Attribute Analysis for Reservoir Characterization.” 10 th Biennial International Conference and Exposition. 2013.
 [8] West, Brian P., and Steven R. May. "Method for seismic facies interpretation using textural analysis and neural networks." U.S. Patent No. 6,438,493. 20 Aug. 2002.
 [9] H. Sabeti and A. Javaherian, “Seismic Facies Analysis Based on Kmeans Clustering Algorithm Using 3D Seismic Attributes,” First Int. Pet. Conf. Exhib. Shiraz, Iran, no. May, 2009.
 [10] A. Roy, B. Dowdell, and K. Marfurt, “Characterizing a Mississippian tripolitic chert reservoir using 3D unsupervised and supervised multiattribute seismic facies analysis: An example from Osage County,” Interpret. Unconv. Resour., vol. 1, no. 2, pp. 109–124, 2013.
 [11] Kohonen, Teuvo. "The self-organizing map." Neurocomputing 21.1 (1998): 1-6.
 [12] J. D. Pigott, M. H. Kang, and H. C. Han, “First order seismic attributes for clastic seismic facies interpretation: Examples from the East China Sea,” J. Asian Earth Sci., vol. 66, pp. 34–54, 2013.
 [13] Renjun Wen. ”3D Modeling of Stratigraphic Heterogeneity in Channelized Reservoirs: Methods and Applications in Seismic Attribute Facies Classification.” Recorder official Publication of Canadian Society of Geophysicists 29.03 (2004): 114.
 [14] H. Du, J. Cao, Y. Xue, and X. Wang, “Seismic facies analysis based on selforganizing map and empirical mode decomposition,” J. Appl. Geophys., vol. 112, pp. 52–61, 2015.
 [15] Han, Chao, Leanna House, and Scotland C. Leman. ”ExpertGuided Generative Topographical Modeling with Visual to Parametric Interaction.” PloS one 11.2 (2016)
 [16] Abdi, Hervé, and Lynne J. Williams. "Principal component analysis." Wiley Interdisciplinary Reviews: Computational Statistics 2.4 (2010): 433-459.
 [17] Bruce Thompson. Exploratory and confirmatory factor analysis: Understanding concepts and applications. American Psychological Association, 2004.
 [18] Jolliffe, Ian T. "Principal Component Analysis and Factor Analysis." Principal Component Analysis. Springer New York, 1986. 115-128.
 [19] Dempster, Arthur P., Nan M. Laird, and Donald B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society, Series B (Methodological) (1977): 1-38.
 [20] Bishop, Christopher M., Markus Svensén, and Christopher K. I. Williams. "GTM: The generative topographic mapping." Neural Computation 10.1 (1998): 215-234.
 [21] Bishop, Christopher M., Markus Svensén, and Christopher K. I. Williams. "Developments of the generative topographic mapping." Neurocomputing 21.1 (1998): 203-224.
 [22] ”Netherlands Offshore F3 Block  Complete.” Open Seismic Repository Main/Netherlands Offshore F3 Block  Complete. OpendTect,Web. 18 May 2017.
 [23] Witten, Ian H., et al. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann, 2016.
 [24] Shoudong, and Weihong Zhu. ”Iterative dipsteering median filter for seismic data processing.” U.S. Patent No. 9,429,668. 30 Aug. 2016.
 [25] A. U. Waldeland and A. H. S. Solberg. ”3D Attributes and Classification of Salt Bodies on Unlabelled Datasets.” 78th EAGE Conference and Exhibition 2016.
 [26] "Texture-Directional." OpendTect 5.0 Documentation: Appendix A, Attribute Texture-Directional. OpendTect, 18 May 2017. http://doc.opendtect.org/5.0.0/doc/od_userdoc/content/app_a/text_dir.htm.
 [27] Satinder Chopra and Vladimir Alexeev. ”Texture attribute application to 3D seismic data.” Sixth International Conference & Exposition on Petroleum Geophysics. Kolkata, India, Expanded Abstracts. 2006.
 [28] Buhmann, Martin D. Radial basis functions: theory and implementations. Vol. 12. Cambridge university press, 2003.
 [29] Fasshauer, Gregory E., and Jack G. Zhang. ”On choosing “optimal” shape parameters for RBF approximation.” Numerical Algorithms 45.1 (2007): 345368.
 [30] Tao Zhao, Vikram Jayaram, Atish Roy, and Kurt J. Marfurt. ”A comparison of classification techniques for seismic facies recognition.” Interpretation 3, no. 4 (2015): SAE29SAE58.