I Introduction
Texture analysis and classification have a wide variety of applications. Although texture has been widely studied, it remains open for research and is, in fact, one of the biggest challenges in computer vision and pattern recognition. The many methods for texture analysis can be grouped into four classes: (i) structural methods, where textures are described as a set of primitives; (ii) statistical methods, where textures are characterized by non-deterministic measures of distribution; (iii) model-based methods, where textures are described by mathematical and physical models; and (iv) spectral methods, based on analysis in the frequency domain, such as Fourier, cosine transforms or wavelets. Within the last class lies one of the best-known and most successful texture methods, the Gabor filter, for which a feature extraction enhancement is proposed in this work.
The Gabor filter was proposed by Dennis Gabor in 1946 and was extended to 2D and applied to image textures by Daugman Daugman1985 ; Daugman1980 in the 80's. The main motivation of Daugman's work was to model mathematically the receptive fields (the response of sets of neuronal cells) of the cortical cells in the primate brain. Beyond the biological motivation, the Gabor filter performs very well for texture processing and still remains one of the best methods for texture analysis. The Gabor texture technique consists of convolving an image with several multiscale and multiorientation filters. Each convolution creates a transformed space, and feature extraction is performed in each space. Usually, the feature vector is composed by concatenating the energy measure of each convolved image Rajadell . This way, each convolved image is represented by a single statistical value, which is far from adequately representing the rich information present in the Gabor space. This issue has motivated research in the field and the proposal of this work. One of the simplest Gabor enhancements was proposed in Bandzi ; Clausi ; Shahabi , where other basic statistical descriptors prove to work better than energy in some situations. Another approach is the use of the GLCM Haralick applied over the convolved images to extract simple features, achieving good results. Tou et al. Tou1 ; Tou2 proposed a simple yet powerful method that computes the covariance matrix of all the convolved images. More recently, the success of the LBP operator Ojala in several computer vision fields motivated its adaptation to the Gabor process, yielding the best results found in the literature.
In addition, the fractal dimension has been successfully used in texture feature extraction BackesB12 ; BackesB09a . Fractal descriptors represent the spatial relations between pixel intensities, and even small changes between texture patterns produce significant changes in the signature. In this paper, we propose the use of the volumetric fractal dimension to extract fractal descriptors from the Gabor convolved images, combined with canonical analysis to decorrelate the signature descriptors and reduce dimensionality. The introduced approach is validated on several image texture datasets, and the results are analyzed and compared against the best feature extraction methods for the Gabor space found in the literature.
The paper is organized into 9 sections. The next section gives a short overview of the Gabor wavelets method. Section 3 briefly describes the different methods implemented to compare their performance against the proposed technique. Section 4 explains the volumetric fractal dimension method in detail. Section 5 presents the combination of Gabor wavelets with the volumetric fractal dimension. Sections 6, 7 and 8 present the experiments conducted and the results obtained. Finally, Section 9 draws conclusions and future directions.
II Gabor Wavelets
Since the discovery and description of the visual cortex cells of mammals, our understanding of how the brain processes texture has advanced enormously. Daugman Daugman1980 ; Daugman1985 showed that simple cells in the visual cortex can be modeled mathematically using Gabor functions. These functions Gabor46 approximate cortex cells using a fixed Gaussian. Later, Daugman proposed a two-dimensional Gabor wavelet Daugman2 for image processing, and it has been widely used in the field for its biological and mathematical properties. The 2D Gabor function is a local band-pass filter that achieves optimal localization in both the spatial and frequency domains and allows multiresolution analysis by generating multiple kernels from a single core function.
The Gabor wavelets are generated by dilating and rotating a single kernel with a set of parameters. Based on this concept, we use the Gabor filter function as the kernel to generate a filter dictionary. The two-dimensional Gabor transform is a complex sine wave with frequency modulated by a Gaussian function. Its forms in the space and frequency domains are given by Eqs. 1 and 2:
(1) g(x, y) = (1 / (2π σ_x σ_y)) exp[ −(1/2)(x²/σ_x² + y²/σ_y²) + 2πjWx ]
(2) G(u, v) = exp{ −(1/2)[ (u − W)²/σ_u² + v²/σ_v² ] }, with σ_u = 1/(2π σ_x) and σ_v = 1/(2π σ_y)
A self-similar filter dictionary can be obtained by dilating and rotating g(x, y) using the generating function proposed in Manjunath :
(3) g_mn(x, y) = a^{−m} g(x′, y′), with x′ = a^{−m}(x cos θ + y sin θ) and y′ = a^{−m}(−x sin θ + y cos θ)
where m and n are integer values that specify the scale and orientation respectively, with m = 0, …, S − 1 and n = 0, …, K − 1, where S represents the total number of scales and K the total number of orientations. The a and θ parameters are defined by:
(4) a = (U_h / U_l)^{1/(S−1)}
(5) θ = nπ / K
where the scaling factor a^{−m} is needed to ensure that the energy is independent of m. The parameters necessary to generate the dictionary could be selected empirically. However, in Manjunath , the authors present a suitable method to compose a filter dictionary that ensures maximum spectrum coverage with the lowest possible redundancy. Based on this approach, we use the following equations to obtain the ideal sigmas:
(6) σ_u = ((a − 1) U_h) / ((a + 1) √(2 ln 2))
(7) σ_v = tan(π / 2K) [U_h − 2 ln(2σ_u² / U_h)] [2 ln 2 − ((2 ln 2)² σ_u²) / U_h²]^{−1/2}
(8) σ_x = 1 / (2π σ_u), σ_y = 1 / (2π σ_v)
where W = U_h, and U_l and U_h represent the minimum and maximum central frequencies respectively.
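As an illustration of the dictionary generation described above, the following sketch builds a bank of S × K complex Gabor kernels. It is a minimal interpretation, not the authors' implementation: the kernel size, the isotropic envelope and the per-scale sigma are simplifying assumptions rather than the exact design of Eqs. 6-8.

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """Sample a complex 2D Gabor: a Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate the coordinate system by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 + yr**2) / sigma**2)
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier / (2 * np.pi * sigma**2)

def gabor_bank(scales=4, orientations=6, u_l=0.05, u_h=0.4, size=31):
    """Self-similar dictionary: one kernel per (scale, orientation) pair."""
    a = (u_h / u_l) ** (1.0 / (scales - 1))   # scaling factor between scales
    bank = []
    for m in range(scales):
        w = u_l * a**m                         # central frequency of scale m
        # rough Manjunath-style bandwidth choice (isotropic simplification)
        sigma_u = (a - 1) * w / ((a + 1) * np.sqrt(2 * np.log(2)))
        sigma = 1.0 / (2 * np.pi * sigma_u)
        for n in range(orientations):
            bank.append(gabor_kernel(size, sigma, w, n * np.pi / orientations))
    return bank
```

With the default four scales and six orientations, the bank contains the 24 kernels used as the running example in the next section.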
III Gabor descriptors
The Gabor wavelet representation of an image is the convolution of the image with the entire filter dictionary. Formally, the convolution of an image I with a Gabor wavelet dictionary, named Gabor images in the rest of the paper, can be defined as follows:
(9) G_mn(x, y) = Σ_s Σ_t I(x − s, y − t) g*_mn(s, t)
where g*_mn denotes the complex conjugate of the Gabor wavelet with central frequency W, scale m and orientation n. The number of images generated depends on the number of scales and orientations used: for example, four scales and six orientations will generate 24 Gabor images. The feature vector is composed by extracting single or multiple features from each generated image using image descriptors. The general process is shown in Figure 1.
A classical and simple approach to obtain the feature vector is simply to calculate the energy of each Gabor image by
(10) E(m, n) = Σ_x Σ_y |G_mn(x, y)|²
where |·| denotes the magnitude of the complex response Daugman1985 . Although it is widely used in the literature, this approach does not capture the information of the Gabor images efficiently, which has motivated the development of methods to extract that information more effectively. The following subsections give a brief overview of the most important methods found in the literature.
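The classical energy descriptor can be sketched as follows; the convolution and the squared-magnitude energy follow the text above, while the function names are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_energy_features(image, bank):
    """One energy value per Gabor image: sum of squared response magnitudes."""
    feats = []
    for kernel in bank:
        response = fftconvolve(image, kernel, mode='same')  # complex Gabor image
        feats.append(np.sum(np.abs(response) ** 2))          # energy measure
    return np.asarray(feats)
```

The result is a feature vector with one value per filter, which is exactly the compressed representation whose limitations the following methods try to overcome.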
The non-orthogonal Gabor filters produce different effects depending on the texture characteristics, and no ideal combination of parameters exists that ensures maximum performance. While the work presented in Manjunath helps reduce the redundancy of the filters, some parameters, such as the scales, orientations and central frequencies, are still determined empirically. Variations in the central frequencies seem to have a low impact on the results, so they are fixed to reduce the number of variables in the experiments. In order to determine the Gabor+method combination that obtains the best results, we performed eight experiments per method for each database. Each of these experiments represents a variation in the number of scales and orientations used in the Gabor wavelet process, ranging from 2 to 6 scales and 3 to 6 orientations combined in an incremental framework: 2x6, 3x4, 3x5, 4x4, 4x6, 5x5, 6x3 and 6x6, in scale x orientation notation.
For the purpose of comparison, the experiments are replicated using several state-of-the-art techniques found in the related literature.
III.1 Descriptors based on first-order statistics
Let I be a grayscale image with dimensions N × M, where N and M are the image width and height respectively. The possible intensity values that I(x, y) can take are {0, 1, …, L − 1}, where L is the number of intensity levels. The histogram is then a function giving the number of pixels for each possible grayscale intensity value according to:
(11) h(i) = Σ_x Σ_y δ(I(x, y), i)
where δ is the binary function defined by:
(12) δ(a, b) = 1 if a = b, 0 otherwise
The image histogram can represent a large set of values in a single measure that reflects a specific property of the distribution. To compute descriptors, we use a histogram representation based on a probability density function given by:
(13) p(i) = h(i) / (N · M)
The density function p is a one-dimensional vector holding important information that is later extracted using distribution measures such as energy, mean or variance. The most common approach to extract features in Gabor wavelet methods is energy-based descriptors; some recent approaches use other types of descriptors to obtain more useful information from each image. Since each extractor generates a single value from each image, the final representation is an S × K-dimensional feature vector. The best first-order statistics found in the literature are used in the experimentation: energy (Eq. 14), variance (Eq. 15) and percentil75 (Eq. 16) are used according to their implementation in [34]. As shown in Figure 1, the extractors are applied directly over the magnitude space.
(14) energy = Σ_i p(i)²
(15) variance = Σ_i (i − μ)² p(i), with μ = Σ_i i · p(i)
(16) percentil75 = q(⌈0.75 n⌉)
where q is the vector p sorted in ascending order and n is its length.
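A minimal sketch of these first-order descriptors, computed from the normalised histogram; reading the 75th percentile off the cumulative distribution is one reasonable interpretation of the definition above:

```python
import numpy as np

def first_order_descriptors(img, levels=256):
    """Energy, variance and 75th percentile from the grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # probability density over grey levels
    i = np.arange(levels)
    energy = np.sum(p ** 2)                  # sum of squared probabilities
    mean = np.sum(i * p)
    variance = np.sum((i - mean) ** 2 * p)
    cdf = np.cumsum(p)
    percentile75 = int(np.searchsorted(cdf, 0.75))  # first level with CDF >= 0.75
    return energy, variance, percentile75
```

Each Gabor magnitude image contributes one such triple, and the triples are concatenated into the final vector.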
III.2 Descriptors based on GLCM features
Second-order statistics derived from the gray-level co-occurrence matrix (GLCM) are a better representation of how humans perceive texture patterns Clausi ; Shahabi , and they have proven very successful in many kinds of texture feature extraction problems. GLCM features capture information regarding the higher-frequency components of texture. The co-occurrence matrix is the histogram of the number of occurrences of gray-level pair values under a pixel-neighborhood rule.
Formally, the GLCM represents the frequency of appearance of two pixels with gray-level values i and j, separated by a distance d at orientation θ, for an image I, defined by:
(17) C_{d,θ}(i, j) = |{((x₁, y₁), (x₂, y₂)) : I(x₁, y₁) = i, I(x₂, y₂) = j}|
where
(18) (x₂, y₂) = (x₁ + d cos θ, y₁ + d sin θ)
For each d and θ, a square matrix is created with dimension equal to the number of grayscale values present in the image; due to computational cost, only a few values of d and θ are used.
The research presented in Clausi identifies the best combination of Gabor filters and gray-level co-occurrence matrix features. According to Clausi , these three basic statistical descriptors represent the best second-order statistics extracted from the GLCM obtained after processing the Gabor images:
(19) contrast = Σ_i Σ_j (i − j)² P(i, j)
(20) correlation = Σ_i Σ_j ((i − μ_i)(j − μ_j) P(i, j)) / (σ_i σ_j)
(21) entropy = −Σ_i Σ_j P(i, j) log P(i, j)
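A small sketch of the GLCM and three common second-order statistics. Note that contrast, correlation and entropy are this sketch's assumption for Clausi's descriptor set; the displacement is given directly as (dx, dy) instead of (d, θ):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalised grey-level co-occurrence matrix for displacement (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count the grey-level pair
    return m / m.sum()

def glcm_features(p):
    """Contrast, correlation and entropy of a normalised co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return contrast, correlation, entropy
```

In the Gabor pipeline these statistics are computed on each quantised Gabor image and concatenated.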
III.3 Descriptors based on covariance matrix features
The covariance matrix is a statistical representation of the covariance between feature values. Applied to images, it reflects important features of heterogeneous images while achieving a considerable dimensionality reduction. A covariance matrix can be represented as:
(22) C = (1 / (n − 1)) Σ_k (z_k − μ)(z_k − μ)^T
where z_k represents the k-th feature point and μ the mean of the feature points. For fast computation, the integral image technique is used Tuzel . The P and Q tensors used for the computation are defined by:
(23) P(x′, y′, i) = Σ_{x<x′} Σ_{y<y′} F(x, y, i)
(24) Q(x′, y′, i, j) = Σ_{x<x′} Σ_{y<y′} F(x, y, i) F(x, y, j)
where F represents the feature image and d the number of dimensions of the covariance matrix. Hence, 24 Gabor images generate a 24 x 24 matrix. Finally, the covariance matrix is generated using P and Q:
(25) 
where (x′, y′) is the upper-left coordinate and (x″, y″) is the lower-right coordinate of the image region.
The covariance matrix implementation follows the directives given in Tou2 . The final covariance matrix has dimensions d × d, where d = S · K. Since the covariance matrix is symmetric, only the non-repeated values of the matrix are used as features; hence, a covariance matrix generates a feature vector of size d(d + 1)/2.
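The covariance signature can be sketched directly, without the integral-image speed-up, by stacking the Gabor magnitude images as per-pixel feature vectors:

```python
import numpy as np

def covariance_signature(gabor_magnitudes):
    """Treat the d Gabor magnitude images as a d-dimensional feature per pixel,
    compute the d x d covariance matrix, and keep its upper triangle as features."""
    d = len(gabor_magnitudes)
    z = np.stack([g.ravel() for g in gabor_magnitudes])  # shape (d, n_pixels)
    cov = np.cov(z)                                      # d x d covariance matrix
    iu = np.triu_indices(d)                              # symmetric: d(d+1)/2 values
    return cov[iu]
```

For example, three magnitude images give d = 3 and a signature of 3·4/2 = 6 values; the 24 Gabor images of a 4 x 6 bank give 300.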
III.4 Descriptors based on local binary pattern features
Some of the latest work on Gabor signatures involves descriptors based on local binary patterns. The original LBP operator Ojala labels the pixels of an image by thresholding the neighborhood of each pixel against the center value and treating the result as a binary number according to:
(26) s(x) = 1 if x ≥ 0, 0 otherwise
Then, by assigning a binomial factor 2^p to each neighbor, the LBP pattern for each pixel is obtained as:
(27) LBP = Σ_{p=0}^{P−1} s(g_p − g_c) 2^p
where g_c is the gray value of the center pixel and g_p the value of its p-th neighbor.
In Zhang , the LBP operator is applied to each pixel of the Gabor images to generate an LGBP (Local Gabor Binary Pattern) map; the concatenation of the histograms of each Gabor image is used as the feature vector. In Shufu , a volumetric approach is taken by considering all the Gabor images as a 3D volume and performing the LBP calculation in the 3D space.
The local binary pattern is applied to the Gabor images according to Zhang . A 4-neighbourhood is used to reduce the size of the histogram, since a 4-neighbourhood allows a maximum of 2⁴ = 16 possible values in the LBP map. The final feature vector is composed of the concatenation of the histograms of each Gabor image:
(28) V = [H_{0,0}, H_{0,1}, …, H_{S−1,K−1}]
where S and K are the number of scales and orientations used for the Gabor process and H_{mn} is:
(29) H_{mn}(i) = Σ_x Σ_y δ(LBP_{mn}(x, y), i), i = 0, …, 15
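A sketch of the 4-neighbourhood LBP histogram described above; the neighbour order (N, S, W, E) is an arbitrary choice in this sketch, since any fixed order yields a valid 16-bin code:

```python
import numpy as np

def lbp4_map(img):
    """LBP code with the 4-connected neighbourhood: compare N, S, W, E to the centre."""
    c = img[1:-1, 1:-1]                                # interior (centre) pixels
    code = ((img[:-2, 1:-1] >= c).astype(int)          # north neighbour
            + 2 * (img[2:, 1:-1] >= c)                  # south neighbour
            + 4 * (img[1:-1, :-2] >= c)                 # west neighbour
            + 8 * (img[1:-1, 2:] >= c))                 # east neighbour
    return code

def lbp4_histogram(img):
    """Normalised 16-bin histogram of the LBP map of one Gabor image."""
    codes = lbp4_map(img)
    hist = np.bincount(codes.ravel(), minlength=16)
    return hist / hist.sum()
```

Concatenating these 16-bin histograms over all S × K Gabor images reproduces the feature vector of Eq. 28.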
IV The proposed method
IV.1 Volumetric fractal dimension
The fractal concept was first used by Mandelbrot in his book Mandelbrot . It states that natural objects cannot be described using Euclidean geometry, but rather through persistent self-repeating patterns. In recent years this concept has been applied to image analysis BackesCB09a ; BackesB12 ; BackesB09a ; BrunoPFC08 . To adapt the fractal concept to images, it is necessary to use a measure that captures the fractal properties of non-fractal objects inside discrete environments. For this purpose, the fractal dimension of an image is used to describe how self-repetitive the objects contained within the image are, and under this concept several types of images can be analyzed. An approach for analyzing grayscale images called the volumetric fractal dimension (VFD), proposed in BackesCB09a ; BackesB09a ; BackesJKB09 , has proven to be a very effective fractal descriptor. In Gomez , the authors successfully demonstrated the power of VFD to describe the Gabor images. In this work, we take a different approach to reduce and decorrelate the fractal signatures in order to improve the descriptive power and reduce dimensionality.
Let G_mn be a Gabor image taken from Eq. 9. The 3-dimensional representation necessary to compute the VFD is given by the surface S = {(x, y, G_mn(x, y))}, where (x, y) are the spatial coordinates of the image and G_mn(x, y) is the gray-level intensity. This surface is dilated by a sphere of radius r, and the influence volume V(r) of the dilated surface is calculated for each value of r:
(30) V(r) = |{p′ ∈ R³ : ∃ p ∈ S, |p − p′| ≤ r}|
where p′ is a point in R³ whose distance from the surface is smaller than or equal to r. As r grows, the spheres start to intersect each other, producing variations in the computed volume. This property makes the VFD very sensitive to even small changes in the texture pattern. Each expansion of r generates a single volume measure; therefore, the values that r takes must reflect each possible state of the expansion without redundancy. To reduce the computational cost of the volume computation, we apply an exact 3-dimensional Euclidean distance transform (EDT) algorithm FabbriCTB08 over the surface. The EDT computes the distance from every voxel to its closest surface voxel using the Euclidean distance. The most suitable way to obtain the set of radii used to expand the surface is to use all the possible Euclidean distances up to a maximum radius, defined by:
(31) D = {d : d = √(i² + j² + k²) ≤ r_max, (i, j, k) ∈ Z³}
The fractal dimension can then be estimated as
(32) FD = 3 − lim_{r→0} (log V(r) / log r)
The fractal signature (or fractal descriptors) is composed of the logarithm of each volume according to:
(33) ψ = [log V(r₁), log V(r₂), …, log V(r_n)]
The parameters used in the feature extraction are based on previous research BackesB09a ; BackesJKB09 , where the expansion radius for the volumetric fractal dimension is set to 16. The number of canonical variables used is based on the percentage of representation of the most important canonical variables; the volumetric fractal dimension signature tends to be well described with only 10 canonical variables. Figure 2 shows the process.
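The dilation-by-spheres computation can be sketched with an exact EDT, as the text suggests: embed the grey-level surface in a binary 3D volume, take the distance transform, and count the voxels inside each radius. This is a simplified reading; the volume extent and the radius handling here are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def volumetric_fractal_signature(img, max_radius=16):
    """Bouligand-Minkowski style signature: log of the influence volume of the
    grey-level surface dilated by spheres of increasing radius."""
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    depth = img.max() + 1 + max_radius          # leave room above the surface
    volume = np.ones((h, w, depth), dtype=bool)  # True = background voxel
    # mark the surface voxels (x, y, grey level) as zeros for the EDT
    volume[np.arange(h)[:, None], np.arange(w)[None, :], img] = False
    dist = distance_transform_edt(volume)        # distance to nearest surface voxel
    radii = np.unique(dist[dist <= max_radius])  # all occurring Euclidean distances
    radii = radii[radii > 0]
    # V(r): number of voxels within distance r of the surface (surface included)
    return np.asarray([np.log(np.count_nonzero(dist <= r)) for r in radii])
```

For a flat image the volume grows linearly with r; for textured surfaces the intersecting spheres make the log-volume curve a sensitive descriptor, as described above.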
IV.2 Canonical discriminant analysis
Canonical discriminant analysis (CDA) is a dimensionality reduction technique closely related to principal component analysis. Its purpose is to find linear combinations of the quantitative variables that provide maximal separation between classes McLachlan . These linear combinations produce a reduced number of independent features, also called canonical variables. The total dispersion among the feature vectors is defined as:
(34) S = Σ_{j=1}^{N} (x_j − μ)(x_j − μ)^T
where μ is the global mean feature vector and x_j contains the features of sample j, defined by:
(35) μ = (1/N) Σ_{j=1}^{N} x_j
(36) x_j = [x_{j1}, x_{j2}, …, x_{jM}]^T
where M is the total number of features. The matrix indicating the dispersion of objects within class i is defined as:
(37) S_i = Σ_{x_j ∈ class i} (x_j − μ_i)(x_j − μ_i)^T
where μ_i is the mean feature vector of the objects in class i, defined by:
(38) μ_i = (1/N_i) Σ_{x_j ∈ class i} x_j
The intraclass variability, indicating the combined dispersion within each class, is defined by:
(39) S_w = Σ_{i=1}^{K} S_i
The interclass variability, indicating the dispersion of the classes in terms of their centroids, is defined by:
(40) S_b = Σ_{i=1}^{K} N_i (μ_i − μ)(μ_i − μ)^T
where K is the number of classes and N_i the number of samples in class i. Finally, we have the total variability, represented by:
(41) S = S_w + S_b
Finally, to obtain the principal components we use the approach taken in RossattoCKB11 :
(42) C = S_w^{−1} S_b
The i-th canonical discriminant function is given by:
(43) Z_i = v_{i1} x_1 + v_{i2} x_2 + ⋯ + v_{ip} x_p
where p is the number of features and v_1, …, v_p are the eigenvectors of C sorted by decreasing eigenvalue, v_1 being the most significant. This definition leads to uncorrelated features, where the k most significant canonical variables, with k ≤ p, are used to reduce the dimensionality of the dataset.
IV.3 Proposed Signature
Let G_mn be the convolved image from Eq. 9 and let D be the set of Euclidean distances up to a maximum radius r_max. The volumetric fractal dimension signature of each Gabor image is defined by:
(44) ψ_mn = [log V_mn(r₁), log V_mn(r₂), …, log V_mn(r_n)]
where r_i is a radius from the vector D and ψ_mn is the vector containing the fractal signature of the Gabor image. Then a canonical analysis function is applied to decorrelate the signature descriptors, and the principal components are selected. The computation of the canonical analysis of the signatures is defined by:
(45) Φ_mn = CDA(ψ_mn)
where Φ_mn holds the principal components of the signature with orientation n and scale m. Finally, the image feature vector consists of the concatenation of the principal components previously computed:
(46) V = [Φ_{0,0}, Φ_{0,1}, …, Φ_{S−1,K−1}]
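The decorrelation step of the proposed signature can be sketched as a standard canonical discriminant analysis: eigen-decompose S_w⁻¹ S_b and project onto the leading eigenvectors. A minimal version, assuming labelled training signatures in the rows of X:

```python
import numpy as np

def canonical_discriminant(X, y, n_components):
    """Project features onto the leading eigenvectors of pinv(Sw) @ Sb
    (within-class vs. between-class scatter), yielding decorrelated variables."""
    classes = np.unique(y)
    n_feat = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((n_feat, n_feat))
    Sb = np.zeros((n_feat, n_feat))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter (Eq. 39)
        d = (mc - mu)[:, None]
        Sb += Xc.shape[0] * (d @ d.T)                 # between-class scatter (Eq. 40)
    # eigenvectors of pinv(Sw) @ Sb, sorted by decreasing eigenvalue
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    W = vecs[:, order[:n_components]].real
    return X @ W
```

The pseudo-inverse guards against a singular within-class scatter matrix, which is common when the signatures have more dimensions than samples per class.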
V Evaluation strategy
Image databases: For experimentation purposes, we used five different image databases. All the related methods and the proposed method are tested on each database. The databases were selected based on how frequently each is used in the related literature to validate feature extraction methods, and the selection covers databases with different levels of classification difficulty and reported results. The selected databases were:

Brodatz: Obtained from Brodatz , it contains 111 grayscale textures, each with 640 x 640 pixels. To generate a database with an appropriate number of samples per class, we took 10 non-overlapping random windows of 200 x 200 pixels from each texture; hence, the resulting database contains 1110 images with 111 classes and 10 images per class.

KTH-TIPS2: Obtained from Fritz , the "2b" version was selected; it contains 11 grayscale textures, each with 108 samples of 200 x 200 pixels.

Outex texture classification test suite 5: Obtained from Outex5 , the selected suite contains 24 grayscale textures, each with 368 samples of 32 x 32 pixels.

Outex texture classification test suite 14: Obtained from Outex5 , the selected suite contains 68 grayscale textures, each with 368 samples of 128 x 128 pixels.

Outex texture classification test suite 16: Obtained from Outex5 , the selected suite contains 319 grayscale textures, each with 368 samples of 128 x 128 pixels.
Classification: With the extracted features it is possible to perform class separation using a statistical classifier. We have chosen the naive Bayes classifier Mitchell , a simple probabilistic classifier based on Bayes' theorem. This classifier uses an independent feature model, where the presence or absence of a particular feature of a class is unrelated to the presence or absence of any other feature; in simple terms, it assumes conditional independence among the attributes. Despite its oversimplified assumptions, this classifier has worked very well on real-world datasets, even when the attribute independence hypothesis is violated Kuncheva ; Domingos . Formally, the probability of an observation E belonging to class c is:
(47) P(C = c | E) = P(E | C = c) P(C = c) / P(E)
where E is the observed evidence (the attribute values). E is assigned to the class C = + if:
(48) f_b(E) = P(C = + | E) / P(C = − | E) ≥ 1
where f_b(E) is called a Bayesian classifier. Based on the attribute independence hypothesis, we can write
(49) P(E | C = c) = Π_i P(x_i | C = c)
The resulting naive Bayes classifier can then be defined as:
(50) f_nb(E) = (P(C = +) / P(C = −)) Π_i (P(x_i | C = +) / P(x_i | C = −))
Even though the naive Bayes classifier still does a good job with non-independent features, it is not appropriate to use highly correlated features. To address this, we apply the canonical discriminant analysis function over the dataset to remove correlations, which maximizes the separation between classes and reduces the dimensionality of the dataset.
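The decision rule described above can be sketched as a Gaussian naive Bayes classifier; the Gaussian likelihood per attribute is this sketch's assumption, since the text does not fix a density model:

```python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes with a Gaussian likelihood per feature: each attribute is
    assumed conditionally independent given the class (Eq. 49)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum_i log N(x_i | mu_ci, var_ci), argmax over classes
        ll = (np.log(self.prior)
              - 0.5 * np.sum(np.log(2 * np.pi * self.var), axis=1))[None, :] \
             - 0.5 * np.sum((X[:, None, :] - self.mu[None, :, :]) ** 2
                            / self.var[None, :, :], axis=2)
        return self.classes[np.argmax(ll, axis=1)]
```

Working in log space avoids the numerical underflow that the product of many small per-attribute probabilities would otherwise cause.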
VI Experimental results
The results obtained for each image database are presented in this section. Each table shows the rate of correct classifications. All the techniques implemented for comparison are run against all the image databases.
Table 1: Correct classification rates (%) for the Brodatz database (scales x orientations).

Gabor +            2x6    3x4    3x5    4x4    4x6    5x5    6x3    6x6
Energy             62.59  79.22  80.58  84.71  86.48  86.00  83.13  87.54
Variance           69.65  85.25  86.39  85.56  87.40  86.30  82.84  87.16
Percentil75        64.71  82.86  83.53  85.59  87.45  87.68  84.68  87.77
LBP                92.52  92.96  92.75  92.14  90.21  89.05  90.66  81.68
Covariance         70.38  85.55  85.39  89.41  89.86  89.01  86.52  88.14
GLCM               79.90  88.90  88.54  88.37  84.55  86.00  84.91  79.51
Enhanced Fractal   93.51  94.05  93.88  94.05  95.59  94.68  93.87  94.14

Table 2: Correct classification rates (%) for the KTH-TIPS2 database (scales x orientations).

Gabor +            2x6    3x4    3x5    4x4    4x6    5x5    6x3    6x6
Energy             55.84  72.64  72.50  75.11  75.69  73.21  70.67  73.24
Variance           58.25  74.05  74.09  73.01  73.32  71.73  69.22  72.59
Percentil75        55.45  74.51  74.97  76.92  77.43  76.66  74.38  77.38
LBP                86.17  84.92  84.86  84.15  82.47  78.80  78.80  71.14
Covariance         51.73  83.01  81.76  76.61  75.79  74.47  72.23  74.70
GLCM               64.13  75.70  70.98  71.63  68.15  65.26  66.66  58.31
Enhanced Fractal   90.49  89.90  90.40  90.32  91.58  90.07  89.39  88.38

Table 3: Correct classification rates (%) for the Outex test suite 5 (scales x orientations).

Gabor +            2x6    3x4    3x5    4x4    4x6    5x5    6x3    6x6
Energy             50.77  68.74  69.32  72.31  73.94  75.73  72.94  77.08
Variance           56.72  65.16  66.86  68.72  72.46  74.75  68.71  76.29
Percentil75        62.19  74.05  74.83  80.67  82.06  82.14  78.64  80.51
LBP                76.13  79.23  77.15  78.43  77.67  77.32  76.99  76.79
Covariance         48.28  64.26  65.77  68.99  71.02  72.49  69.85  72.75
GLCM               18.17  26.85  26.68  26.80  23.80  24.35  25.85  21.92
Enhanced Fractal   77.19  81.48  80.11  83.21  83.46  82.94  83.87  83.16

Table 4: Correct classification rates (%) for the Outex test suite 14 (scales x orientations).

Gabor +            2x6    3x4    3x5    4x4    4x6    5x5    6x3    6x6
Energy             27.94  44.82  44.73  48.39  49.73  50.06  48.38  52.20
Variance           32.15  43.17  42.29  48.30  50.01  48.80  46.66  51.96
Percentil75        32.02  41.84  41.00  50.59  52.08  51.14  49.84  53.47
LBP                53.65  57.01  56.35  56.57  54.56  54.04  54.98  53.33
Covariance         37.34  51.62  52.71  57.90  58.89  59.02  56.52  61.23
GLCM               22.03  27.03  23.37  23.15  21.54  18.21  19.04  18.08
Enhanced Fractal   63.28  64.46  63.85  62.72  63.95  62.01  62.52  60.66

Table 5: Correct classification rates (%) for the Outex test suite 16 (scales x orientations).

Gabor +            2x6    3x4    3x5    4x4    4x6    5x5    6x3    6x6
Energy             36.32  61.53  60.20  66.20  66.60  66.29  64.63  66.38
Variance           35.60  53.45  53.24  58.69  61.05  60.30  57.54  62.55
Percentil75        37.45  56.21  55.45  62.91  62.77  63.03  60.00  63.09
LBP                65.92  68.21  66.43  65.58  62.10  59.59  63.81  55.69
Covariance         38.83  63.03  63.71  69.30  68.56  66.88  65.26  63.81
GLCM               19.73  30.22  27.05  26.72  21.49  20.77  23.02  17.07
Enhanced Fractal   69.58  74.12  73.90  77.02  74.82  75.66  73.88  70.20

Table 1 shows the results obtained for the full Brodatz database. The best result obtained by one of the compared methods (LBP) is 92.96%, while the proposed method obtains 95.59%. Moreover, the results maintain a lower variability when more scales and orientations are used. In Table 2 the difference is much more significant: our method obtains 91.58% and the Gabor+LBP method obtains 86.17%. Table 3 shows the results for the Outex 5 classification test suite. The proposed method obtains 83.87% and Gabor+Percentil75 obtains 82.14%. The finest overall reported result for Outex 5 is . Table 4 shows the results obtained for the Outex 14 classification suite. The proposed method obtains 64.46% and the Gabor+Covariance method obtains 61.23%, where the best overall result reported for Outex 14 is . Finally, Table 5 shows the results obtained for the Outex 16 classification suite. The proposed method obtains 77.02% and the Gabor+Covariance method obtains 69.30%.
VII Conclusions
We have presented a novel technique that improves Gabor wavelets for extracting features from texture images. The effectiveness of the method is demonstrated by various experiments, and the proposed method obtained the best results on all the image databases used. Texture feature extraction is a difficult task that has been widely addressed, but most approaches found in the literature focus only on a short range of texture conditions. The variability of the results of the compared methods shows their weakness when the image datasets present great intra-class variability, a wide range of texture types and variations in the capture conditions. Different image datasets were selected with the purpose of presenting consistent results; however, this is not very common, since most methods only perform well under tight image conditions. As shown in the results, most of the related methods work well with only one image dataset. Moreover, the variability of the results of each compared method on a single dataset shows their sensitivity to the Gabor wavelet parameters. In this regard, the proposed method performs consistently in all experiments, showing a clear independence between both methods and a successful conjunction to obtain rich texture features.
Acknowledgments
A. Gomez Z. gratefully acknowledges the financial support of FAPESP (The State of Sao Paulo Research Foundation) Proc. 2009/04362. J. B. Florindo gratefully acknowledges the financial support of FAPESP Proc. 2012/191433. O. M. Bruno gratefully acknowledges the financial support of CNPq (National Council for Scientific and Technological Development, Brazil) (Grant #308449/20100 and #473893/20100) and FAPESP (The State of São Paulo Research Foundation) (Grant # 2011/015231).
VIII References
 (1) J. G. Daugman, Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by twodimensional visual cortical filters, Journal of the Optical Society of America A: Optics, Image Science, and Vision 2 (7) (1985) 1160–1169.
 (2) J. Daugman, Twodimensional spectral analysis of cortical receptive field profiles, Vision Research 20 (10) (1980) 847–856.
 (3) O. Rajadell, P. GarcíaSevilla, F. Pla, Scale analysis of several filter banks for color texture classification, in: Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II, ISVC ’09, SpringerVerlag, Berlin, Heidelberg, 2009, pp. 509–518.
 (4) P. Bandzi, M. Oravec, J. Pavlovicova, New statistics for texture classification based on gabor filters, Radioengineering 16 (3) (2007) 133–137.
 (5) D. Clausi, H. Deng, Designbased texture feature fusion using gabor filters and cooccurrence probabilities, Image Processing, IEEE Transactions on 14 (7) (2005) 925 –936.
 (6) F. Shahabi, M. Rahmati, Comparison of GaborBased Features for Writer Identification of Farsi/Arabic Handwriting, in: G. Lorette (Ed.), Tenth International Workshop on Frontiers in Handwriting Recognition, Université de Rennes 1, Suvisoft, La Baule (France), 2006.
 (7) R. M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, Systems, Man and Cybernetics, IEEE Transactions on SMC3 (6) (1973) 610 –621.
 (8) J. Y. Tou, Y. H. Tay, P. Y. Lau, Gabor filters and greylevel cooccurrence matrices in texture classification, in: MMU International Symposium on Information and Communications Technologies, Petaling Jaya, 2007.
 (9) J. Y. Tou, Y. H. Tay, P. Y. Lau, Advances in neuroinformation processing, SpringerVerlag, Berlin, Heidelberg, 2009, Ch. Gabor Filters as Feature Images for Covariance Matrix on Texture Classification Problem, pp. 745–751.
 (10) T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution grayscale and rotation invariant texture classification with local binary patterns, Pattern Analysis and Machine Intelligence, IEEE Transactions on 24 (7) (2002) 971 –987.
 (11) A. R. Backes, O. M. Bruno, Fractal and multiscale fractal dimension analysis: a comparative study of Bouligand-Minkowski method, ArXiv 1201.3153v1.
 (12) A. R. Backes, O. M. Bruno, Plant leaf identification using multiscale fractal dimension, in: International Conference on Image Analysis and Processing, 2009, pp. 143–150.
 (13) D. Gabor, Theory of communication, J. Inst. Elect. Eng. 93 (1946) 429–457.
 (14) J. Daugman, How iris recognition works, Circuits and Systems for Video Technology, IEEE Transactions on 14 (1) (2004) 21–30.
 (15) B. Manjunath, W. Ma, Texture features for browsing and retrieval of image data, Pattern Analysis and Machine Intelligence, IEEE Transactions on 18 (8) (1996) 837 –842.
 (16) O. Tuzel, F. Porikli, P. Meer, Region covariance: A fast descriptor for detection and classification, in: A. Leonardis, H. Bischof, A. Pinz (Eds.), Computer Vision  ECCV 2006, Vol. 3952 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2006, pp. 589–600.
 (17) W. Zhang, S. Shan, W. Gao, X. Chen, H. Zhang, Local gabor binary pattern histogram sequence (lgbphs): a novel nonstatistical model for face representation and recognition, in: Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, Vol. 1, 2005, pp. 786 – 791 Vol. 1.
 (18) S. Xie, S. Shan, X. Chen, W. Gao, Vlgbp: Volume based local gabor binary patterns for face representation and recognition, in: Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, 2008, pp. 1 –4.
 (19) B. B. Mandelbrot, The fractal geometry of nature, W. H. Freeman, New York, 1983.

 (20) A. R. Backes, D. Casanova, O. M. Bruno, Plant leaf identification based on volumetric fractal dimension, International Journal of Pattern Recognition and Artificial Intelligence 23 (6) (2009) 1145–1160.
 (21) O. M. Bruno, R. de Oliveira Plotze, M. Falvo, M. de Castro, Fractal dimension applied to plant identification, Information Sciences 178 (12) (2008) 2722–2733.
 (22) A. R. Backes, J. J. de M. S Junior, R. M. Kolb, O. M. Bruno, Plant species identification using multiscale fractal dimension applied to images of adaxial surface epidermis, in: International Conference on Computer Analysis of Images and Pattern, 2009, pp. 680–688.
 (23) A. G. Zuniga, O. M. Bruno, Enhancing gabor wavelets using volumetric fractal dimension, in: I. Bloch, J. Cesar, RobertoM. (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Vol. 6419 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2010, pp. 362–369.
 (24) R. Fabbri, L. da Fontoura Costa, J. C. Torelli, O. M. Bruno, 2d euclidean distance transform algorithms: A comparative survey, ACM Computer Surveys 40 (1).
 (25) G. J. Mclachlan, Discriminant Analysis and Statistical Pattern Recognition (Wiley Series in Probability and Statistics), WileyInterscience, 2004.
 (26) D. Rossatto, D. Casanova, R. Kolb, O. M. Bruno, Fractal analysis of leaftexture properties as a tool for taxonomic and identification purposes: a case study with species from neotropical melastomataceae (miconieae tribe), Plant Systematics and Evolution 291 (12) (2011) 103–116.
 (27) P. Brodatz, Textures, a photographic album for artists and designers, New York:Dover, 1966.

 (28) M. Fritz, E. Hayman, B. Caputo, J.-O. Eklundh, The KTH-TIPS and KTH-TIPS2 image database (Dec. 2012). URL http://www.nada.kth.se/cvap/databases/kthtips/
 (29) T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen, S. Huovinen, Outex - new framework for empirical evaluation of texture analysis algorithms, in: Proc. 16th International Conference on Pattern Recognition, Quebec, Canada, 2002, pp. 701–706.

 (30) T. Mitchell, Machine Learning (McGraw-Hill International Edit), 1st Edition, McGraw-Hill Education (ISE Editions), 1997.
 (31) L. I. Kuncheva, On the optimality of naive bayes with dependent binary features, Pattern Recognition Letters 27 (7) (2006) 830 – 837.
 (32) P. Domingos, M. Pazzani, Beyond independence: Conditions for the optimality of the simple bayesian classifier, in: Machine Learning, Morgan Kaufmann, 1996, pp. 105–112.