Fusion of complex networks and randomized neural networks for texture analysis

06/24/2018 · by Lucas C. Ribas, et al.

This paper presents a highly discriminative texture analysis method based on the fusion of complex networks and randomized neural networks. In this approach, the input image is modeled as a complex network, and its topological properties, together with the image pixels, are used to train randomized neural networks in order to create a signature that represents deep characteristics of the texture. The results obtained surpass the accuracies of many methods available in the literature. This performance demonstrates that our proposed approach opens a promising avenue of research: exploring the synergy of neural networks and complex networks in the texture analysis field.


1 Introduction

Most computer vision applications consider texture a key factor for image discrimination, and texture analysis has therefore been a constant research field since the 1960s. Texture is a visual pattern related to the surface of an object, represented in an image by the spatial organization of the pixels. However, the interpretation of texture is ambiguous, and there is no formal definition of the term that is widely accepted by the scientific community. This has resulted in an extensive and heterogeneous literature of texture analysis methods proposed over the years [65, 38, 8]. Texture descriptors are applied in many different areas, such as industrial inspection [40], geology [62], medicine [14], and material science [68].

Classical texture analysis techniques can be grouped into four different approaches: statistical, spectral, structural, and model-based methods [66]. The earliest and most widespread methods are statistical, such as variants of gray-level co-occurrence matrices (GLCM) [30, 48] and local binary patterns (LBP) [44, 10]. Spectral methods explore texture in the frequency domain; examples include Gabor filters [33] and the wavelet transform [20]. Structural methods, on the other hand, consider texture as a combination of smaller elements, called textons, that compose the overall texture as a spatially organized pattern; a common approach of this kind is morphological decomposition [36]. Finally, model-based methods represent textures through mathematical models and the estimation of their parameters. Common methods of this category include fractal models [4, 3, 2, 22, 55, 12] and stochastic models [49].

Besides classical methods, more recent and innovative techniques address texture differently and achieve promising results. An example is the set of techniques that use learning, such as descriptors based on a vocabulary of the scale-invariant feature transform (SIFT) [17], often called bag-of-visual-words (BOVW). Methods based on image complexity analysis are also gaining attention, such as cellular automata [19] and complex networks (CN) [4, 58, 64, 25, 57]. In particular, methods based on CN theory have achieved promising results due to their capacity to represent the relations among the structural elements of texture. However, how to achieve a more satisfactory modeling (i.e., with fewer parameters) and new ways of characterizing the network remain challenges to overcome.

In this paper, we propose a novel approach that combines complex networks and randomized neural networks (RNN) to obtain a texture signature. Complex networks are attracting increasing attention due to their flexibility and generality for representing many real-world systems, including texture images. A randomized neural network, in turn, is a neural network with a single hidden layer and a very fast learning algorithm, which has been used in many pattern recognition tasks. Here we first model the texture image as a directed network, representing pixels as vertices and neighborhood relations as edges. To characterize the texture, the topological properties of the modeled network and the image pixels are used to train a randomized neural network, and the set of output weights is used as a feature vector that represents discriminative characteristics of the texture. Experimental results on four databases demonstrate a better performance of the proposed method when compared to other methods from the literature.

The remainder of this paper is organized as follows. Section 2 describes the fundamentals of complex networks and randomized neural networks. The novel method for texture classification based on the fusion of complex networks and randomized neural networks is presented in Section 3. Section 4 describes the databases and the experiments performed to evaluate the proposed method. The results achieved and comparisons with other methods are discussed in Section 5. Finally, in Section 6, we conclude the work with some remarks.

2 Background

2.1 Complex networks

Almost any natural phenomenon can be modeled as a network by defining a set of entities and establishing a criterion of relation between them. Classical examples are the internet, composed of various connected computers and routers, and the network of a cell, describing chemicals connected by chemical reactions. Complex networks are part of an area known as network science [6], which is strongly based on graph theory. In recent decades, studies have revealed patterns present in many networks or graphs, which were then understood as structural characteristics of models such as the scale-free [7] and the small-world [61] networks. These findings have drawn increasing interest from the scientific community to the study of complex networks, creating a new multidisciplinary research field.

The theoretical foundations of this area arise from the intersection of graph theory, physics, mathematics, statistics, and computer science. CN theory has thus been employed as a powerful tool for pattern recognition [42], where natural systems of many areas are modeled as networks and then quantified through their topological structure. CN applications are found in various areas of science, such as physics, social sciences, biology, mathematics, ecology, medicine, computer science, linguistics, and neuroscience, among others [18].

Formally, a network or graph is described by a tuple $G = (V, E)$ of vertices $V$ and edges $E$. Let $v_i$ be a vertex of the set $V$. An edge $e_{ij}$ represents a connection between two vertices $v_i$ and $v_j$, so the set $E$ is composed of all edges connecting vertices of $V$. The network can also be directed; in this case, each edge has a direction from $v_i$ to $v_j$. In most CN applications, the first step is to define how to model the target problem as a network, thus defining what the vertices are and what the edges are. Once $G$ is properly built, many measures can be computed to quantify its structure, ranging from centrality and path-based measures to community structure and many more [15]. Moreover, the structure of a real network is the result of the continuous evolution of the forces that formed it, and certainly affects the function of the system [9]. Therefore, the network dynamics can be analyzed by characterizing its structural evolution as a function of time or of some modeling parameter.
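
As a toy illustration of this formalism and of simple degree measures, the sketch below builds a small directed, weighted graph; the networkx library and the vertex/weight values are our own illustrative choices, not part of the paper.

```python
# Toy directed, weighted graph: vertices V = {0, 1, 2, 3} and a few edges.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from([0, 1, 2, 3])              # set of vertices V
G.add_weighted_edges_from([                 # directed edges e_ij with weights
    (0, 1, 0.2), (0, 2, 0.5), (1, 2, 0.3), (3, 2, 0.9),
])

print(G.out_degree(0))                      # out-degree of v_0 -> 2
print(G.in_degree(2))                       # in-degree of v_2 -> 3
print(G.out_degree(0, weight="weight"))     # weighted out-degree of v_0 -> 0.7
```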

2.2 Randomized neural networks

Randomized neural networks [59, 50, 51, 31] are neural networks composed of two neuron layers (a hidden and an output layer), each with a different role in the regression/classification task. The hidden layer has its neural weights determined randomly according to a probability distribution (for instance, a uniform or normal distribution). Its purpose is to project the input data non-linearly into another dimensional space, where it is more likely that the feature vectors are linearly separable, as stated by Cover's theorem [16]. In turn, the output layer aims to separate these projected feature vectors linearly using the least-squares method.

Mathematically, let $X = [\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_N]$ be a matrix of input feature vectors (each including the value $-1$ for the bias weight) and $D = [\vec{d}_1, \vec{d}_2, \ldots, \vec{d}_N]$ be the corresponding labels. The first step is to build the matrix $W$ of hidden neuron weights, of dimensions $Q \times (p + 1)$, where $Q$ and $p$ are the number of hidden neurons and the number of attributes of each input feature vector, respectively.

Next, the output of the hidden layer for all the feature vectors can be obtained by $Z = \phi(WX)$, where $\phi$ is generally a sigmoid or hyperbolic tangent function. This matrix $Z$ of projected vectors (again including $-1$ entries for the bias weight) can then be used to compute the output neuron weights $M$, according to the following equation

$$M = D Z^{T} (Z Z^{T})^{-1}, \tag{1}$$

where $Z^{T} (Z Z^{T})^{-1}$ is the Moore-Penrose pseudo-inverse of $Z$ [43, 53].

Sometimes the matrix $Z Z^{T}$ becomes singular (that is, it has no inverse), or close to singular, which leads to unstable results in Equation 1. To avoid this drawback, it is possible to use Tikhonov regularization [60, 13], according to

$$M = D Z^{T} (Z Z^{T} + \lambda I)^{-1}, \tag{2}$$

where $\lambda \geq 0$ is the regularization parameter and $I$ is the identity matrix.
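
As a minimal sketch of this training procedure, assuming uniform random hidden weights, a sigmoid activation, and illustrative dimensions (the function and variable names here are ours, not the paper's):

```python
import numpy as np

def rnn_output_weights(X, D, Q, lam=0.0, seed=0):
    """Least-squares output weights of a randomized neural network.

    X: (p, N) matrix of input feature vectors as columns; D: (o, N) labels;
    Q: number of hidden neurons; lam: Tikhonov regularization term of Eq. (2).
    """
    p, N = X.shape
    Xb = np.vstack([-np.ones((1, N)), X])           # add the -1 bias entries
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(Q, p + 1))     # random hidden weights
    Z = 1.0 / (1.0 + np.exp(-W @ Xb))               # sigmoid projection
    Zb = np.vstack([-np.ones((1, N)), Z])           # -1 bias entries again
    A = Zb @ Zb.T + lam * np.eye(Q + 1)             # ZZ^T (+ lambda I)
    return D @ Zb.T @ np.linalg.inv(A)              # Eqs. (1)/(2)

# toy usage: 5 attributes, 100 samples, scalar labels -> M has shape (1, Q + 1)
X, D = np.random.rand(5, 100), np.random.rand(1, 100)
M = rnn_output_weights(X, D, Q=14, lam=1e-3)
```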

3 Proposed method

In this section, we describe the proposed method, which combines a new way of modeling texture as a complex network with randomized neural networks for texture characterization.

3.1 Modeling texture as directed CN

Let $I$ be an image composed of pixels $i$, each with Cartesian coordinates $x_i$ and $y_i$. In gray-scale images, each pixel $i$ has an intensity represented by an integer value $I(i) \in [0, L]$, where $L$ is the highest gray-level value. To model a texture image as a directed network, each pixel $i$ is mapped to a vertex $v_i$ of a network $G = (V, E)$. The set of edges $E$ is built by connecting two vertices $v_i$ and $v_j$, which represent two pixels $i$ and $j$, with a directed edge $e_{ij}$ from $v_i$ to $v_j$ if the Euclidean distance between them is less than or equal to a radius $r$ and $I(i) \leq I(j)$, according to

$$E = \{ e_{ij} \mid d(i, j) \leq r \text{ and } I(i) \leq I(j) \}, \tag{3}$$

where $d(i, j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$ is the Euclidean distance between the two pixels. Each edge $e_{ij}$ has a weight $w(e_{ij})$, defined as the gray-level difference normalized by the highest gray level,

$$w(e_{ij}) = \frac{|I(i) - I(j)|}{L}. \tag{4}$$

It is worth mentioning that the direction of the edges is determined by the pixel gray levels: an edge points to the vertex that represents the pixel with the greater intensity, and if both intensities are equal, the edge is bidirectional. It is also important to stress that the value of $r$ is the only parameter of the modeling and determines the size of the neighborhood of each vertex. Thus, as $r$ increases, the reach of the connections and the degree of the vertices increase as well, which makes the analysis of this evolution an interesting way of studying these networks. Figure 1 shows the modeling of a texture image as a directed network for different values of $r$.

Figure 1: Examples of a texture image (a) modeled as a directed complex network for two different values of $r$ (b, c).
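
A minimal sketch of this modeling step, assuming integer gray levels and the normalized-difference weight used above as our reading of Eq. (4); this brute-force version is meant for clarity, not speed.

```python
import numpy as np

def image_to_network(img, r, L=255):
    """Model a gray-scale image as a directed network.

    Returns a dict mapping flat pixel-index pairs (i, j) to edge weights.
    An edge i -> j exists when d(i, j) <= r and img[i] <= img[j], so equal
    intensities produce edges in both directions (a bidirectional edge).
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    vals = img.ravel().astype(int)
    edges = {}
    for i in range(len(vals)):
        d = np.hypot(*(coords - coords[i]).T)        # Euclidean distances
        for j in np.nonzero((d <= r) & (d > 0))[0]:
            if vals[i] <= vals[j]:                   # Eq. (3)
                edges[(i, int(j))] = abs(vals[i] - vals[j]) / L  # Eq. (4)
    return edges
```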

3.2 Proposed signature based on RNN

Our method uses as texture signature the output-layer weights of RNNs trained with information from the modeled complex networks. For this, three sources of information are considered for each vertex: the out-degree, the weighted out-degree, and the weighted in-degree. As the out-degree is directly related to the in-degree in the modeled networks (i.e., the sum of these two degrees is the same for all vertices) and therefore provides the same information, we considered only the out-degree.

The out-degree $k_i^{out}$ of a vertex $v_i$ is the number of out-edges connecting it to other vertices,

$$k_i^{out} = |\{ e_{ij} \in E \}|. \tag{5}$$

The weighted out-degree $s_i^{out}$, on the other hand, is given by the sum of the weights of the out-edges of a vertex $v_i$,

$$s_i^{out} = \sum_{e_{ij} \in E} w(e_{ij}). \tag{6}$$

Finally, the weighted in-degree $s_i^{in}$ is defined as the sum of the weights of the in-edges of $v_i$,

$$s_i^{in} = \sum_{e_{ji} \in E} w(e_{ji}). \tag{7}$$

To build a matrix of input vectors for the RNN, we adopted the strategy of analyzing the evolution of the complex network for different values of the modeling parameter $r$. The input feature vector $\vec{x}_i$ and the corresponding label $d_i$ of a vertex $v_i$ are built according to the following procedure: the gray-scale intensity of the pixel is taken as the output label, $d_i = I(i)$, and the out-degrees of the vertex $v_i$ for increasing values of the modeling parameter, $\vec{x}_i = [k_i^{out}(r_1), k_i^{out}(r_2), \ldots, k_i^{out}(r_{max})]$, are the attributes of the input feature vector, where $r_{max}$ is the maximum value of the modeling parameter. A matrix $X^{k^{out}}$ of input feature vectors and a matrix $D$ of output labels are then built from all the vertices of the complex network. Thus, it is possible to analyze the evolution of the topology of vertices that represent pixels of a given gray-scale intensity. Figure 2(a) shows an example of how these matrices are built. In addition to the matrix of input feature vectors for the out-degree, we also built matrices of input vectors $X^{s^{out}}$ for the weighted out-degree and $X^{s^{in}}$ for the weighted in-degree, as in the sketch below.

Figure 2: Building of an input feature vector and corresponding output label for the out-degree using different values of to model the complex networks.
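
Building on the image_to_network sketch above, the hypothetical helper below accumulates the three measures of Eqs. (5)-(7) over a set of radii, producing one attribute row per radius and one column per pixel:

```python
import numpy as np

def degree_matrices(img, radii, L=255):
    """Out-degree, weighted out-degree and weighted in-degree per vertex,
    stacked over the radii: three (len(radii), n_pixels) matrices."""
    n = img.size
    k_out, s_out, s_in = (np.zeros((len(radii), n)) for _ in range(3))
    for a, r in enumerate(radii):
        for (i, j), w in image_to_network(img, r, L).items():
            k_out[a, i] += 1            # Eq. (5): one more out-edge of v_i
            s_out[a, i] += w            # Eq. (6): weighted out-degree of v_i
            s_in[a, j] += w             # Eq. (7): weighted in-degree of v_j
    return k_out, s_out, s_in

# labels for the RNN: the gray-scale intensity of each pixel
# D = img.ravel()[None, :]
```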

The next step is to define the weights of the matrix $W$ of the hidden layer of the RNN. In general, these weights are determined randomly in each training stage. Nevertheless, because we want our method to provide the same signature for the same texture image, it is necessary to use the same values in the matrix $W$. Thus, we adopted the strategy proposed in [56] and used the Linear Congruential Generator (LCG) [37, 52] to obtain pseudo-random uniform values for the matrix $W$, according to the following equation

$$V(n + 1) = (a\,V(n) + b) \bmod c, \tag{8}$$

where $V$ is a random numeric sequence and $a$, $b$ and $c$ are parameters. The sequence $V$ has length $L_V = Q \times (p + 1)$, its first value is $V(1) = L_V + 1$, and the values of the parameters are $a = L_V + 2$, $b = L_V + 3$ and $c = L_V^2$ (values adopted in [56]). Hence, the matrix $W$ is composed of the vector $V$ divided into $Q$ segments of length $p + 1$. Finally, all values of the matrix $W$ and each line of the matrix $X$ are normalized using the z-score (zero mean and unit variance).
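
A sketch of this deterministic weight generation, with the seed and parameter values as we read them from [56]:

```python
import numpy as np

def lcg_weights(Q, p):
    """Deterministic hidden-layer weights via a Linear Congruential Generator."""
    n = Q * (p + 1)                      # length of the sequence V
    a, b, c = n + 2, n + 3, n ** 2       # parameter values adopted in [56]
    V = np.empty(n)
    V[0] = n + 1                         # first value of the sequence
    for t in range(1, n):
        V[t] = (a * V[t - 1] + b) % c    # Eq. (8)
    W = V.reshape(Q, p + 1)              # Q segments of length p + 1
    return (W - W.mean()) / W.std()      # z-score normalization

W = lcg_weights(Q=14, p=4)               # e.g. p = 4 radii attributes
```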

The proposed texture signature is built from the matrix $M$, which, as there is a single output neuron, becomes a vector $\vec{m}$ (Figure 2(b)). Notice that $\vec{m}$ has length $Q + 1$ due to the bias weight. Thus, the first step is to concatenate the vectors $\vec{m}$ obtained from RNNs trained with the three matrices of input data $X^{k^{out}}$, $X^{s^{out}}$ and $X^{s^{in}}$, according to

$$\Theta(Q, r_{max}) = [\vec{m}_{k^{out}}, \vec{m}_{s^{out}}, \vec{m}_{s^{in}}], \tag{9}$$

where $Q$ is the number of neurons of the hidden layer and $r_{max}$ is the maximum radius used to build the complex networks.

The vector $\Theta(Q, r_{max})$ is built using a single value of $Q$ and of $r_{max}$. These two parameters influence the weights of the neural network and, therefore, provide different characteristics for different values. Thus, we first propose a vector $\Psi(r_{max})$ that concatenates the vectors $\Theta$ for different values of $Q$,

$$\Psi(r_{max}) = [\Theta(Q_1, r_{max}), \Theta(Q_2, r_{max}), \ldots, \Theta(Q_n, r_{max})]. \tag{10}$$

Finally, we propose a feature vector $\Phi$ that concatenates the vector $\Psi$ for two values of $r_{max}$,

$$\Phi = [\Psi(r_{max_1}), \Psi(r_{max_2})]. \tag{11}$$
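
Tying the previous sketches together, a hedged outline of the signature construction (degree_matrices and lcg_weights are the hypothetical helpers sketched earlier; the integer radii $1, \ldots, r_{max}$ and the small regularization term are our assumptions):

```python
import numpy as np

def rnn_signature(X, D, Q, lam=1e-3):
    """Output-weight vector (length Q + 1) of an RNN with LCG hidden weights."""
    # z-score each attribute row (epsilon avoids division by zero)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)
    p, N = X.shape
    Xb = np.vstack([-np.ones((1, N)), X])
    W = lcg_weights(Q, p)                             # deterministic, Eq. (8)
    Z = 1.0 / (1.0 + np.exp(-W @ Xb))
    Zb = np.vstack([-np.ones((1, N)), Z])
    M = D @ Zb.T @ np.linalg.inv(Zb @ Zb.T + lam * np.eye(Q + 1))
    return M.ravel()

def theta(img, Q, r_max):                             # Eq. (9)
    D = img.ravel()[None, :].astype(float)
    Xs = degree_matrices(img, range(1, r_max + 1))    # k_out, s_out, s_in
    return np.concatenate([rnn_signature(X, D, Q) for X in Xs])

def psi(img, Qs, r_max):                              # Eq. (10)
    return np.concatenate([theta(img, Q, r_max) for Q in Qs])

def phi(img, Qs, r_maxes):                            # Eq. (11)
    return np.concatenate([psi(img, Qs, r) for r in r_maxes])

# e.g. phi(img, Qs=(4, 9, 14), r_maxes=(4, 6)) -> 2 * (15 + 30 + 45) = 180 features
```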

4 Experiments

In order to validate our proposed method and compare it with other texture analysis methods, the signatures were classified using linear discriminant analysis (LDA) [23]. This classifier was adopted due to its simplicity, which emphasizes the characteristics obtained by the methods. The leave-one-out cross-validation scheme was used: one sample is used for testing the model and the remaining samples for training it, and this process is repeated $N$ times ($N$ being the number of samples), each time with a different sample for testing. The performance measure is the average accuracy over the $N$ runs.
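
For reference, a minimal sketch of this validation protocol with scikit-learn, assuming a features matrix with one signature per row and an integer class label per sample:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of an LDA classifier over texture signatures."""
    hits = 0
    for train, test in LeaveOneOut().split(features):
        lda = LinearDiscriminantAnalysis().fit(features[train], labels[train])
        hits += int(lda.predict(features[test])[0] == labels[test][0])
    return hits / len(labels)
```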

The gray-scale texture databases used as benchmark to evaluate our proposed method were:

  • Brodatz [11]: just as in [5], 1776 texture images of 128 × 128 pixels from this database, divided into 111 classes, were used in this work.

  • Outex [45]: just as in [5], the original 68 images from TC_Outex_00013 were divided into 20 non-overlapping sub-images of 128 × 128 pixels. Thus, the database used in this work has 1360 textures.

  • USPTex [4]: this database has 2292 samples divided into 191 classes, with 12 images per class, each of 128 × 128 pixels.

  • Vistex: the Vision Texture database is provided by the Vision and Modeling Group, MIT Media Lab [54]. Just as in [5], the original 54 images were split into 16 non-overlapping sub-images of 128 × 128 pixels. Thus, the database used in this work has 864 images.

The proposed method is applied to the aforementioned databases and its accuracy is compared with other methods from the literature, namely: Grey-Level Co-occurrence Matrix (GLCM) [30, 29], Gray Level Difference Matrix (GLDM) [63, 35], Fourier [1], Gabor Filters [41, 32], Fractal [3], Fractal Fourier [21], Local Binary Patterns (LBP) [46], Local Binary Patterns Variance (LBPV) [28], Complete Local Binary Pattern (CLBP) [27], Local Phase Quantization (LPQ) [47], Local Configuration Pattern (LCP) [26], Local Frequency Descriptor (LFD) [39], Binarized Statistical Image Features (BSIF) [34], Local Oriented Statistics Information Booster (LOSIB) [24], Adaptive Hybrid Pattern (AHP) [67], Complex Network Texture Descriptors (CNTD) [5], and the ELM signature [56].

5 Results and Discussion

5.1 Parameter Evaluation

Figure 3 shows the accuracies achieved on the four databases with the feature vector $\Theta$ for different values of $Q$. The evaluated values of $Q$ were selected because they produce a number of features that is a multiple of five for each feature vector considered. As can be seen in the figure, the success rates increase as we increase the value of $Q$; this increase is accompanied by an increase in the number of features used. The best accuracies are obtained using $Q = 14$ on the Vistex database and $Q = 19$ on the other databases, values that produce feature vectors of size 45 ($Q = 14$) and 60 ($Q = 19$). Furthermore, the success rate stabilizes for values of $Q$ larger than 14 on the Vistex database and larger than 19 on the other databases.

Table 1 shows the accuracies obtained on the four databases using the feature vector $\Psi$ for combinations of values of $Q$. The results show that as the values of $Q$ and their combinations grow (i.e., the number of features increases), the success rates increase as well. However, very large feature vectors do not assure the highest performance, since the success rates tend to stabilize at a certain level. For instance, the vector with $Q = \{19, 29, 39\}$, which has 270 features, performs worse than the vector with $Q = \{4, 9, 14\}$, which has only 90 attributes, on all databases except Brodatz. This suggests that the proposed signature reaches its limit in terms of discrimination. Thus, we considered the vectors with $Q = \{4, 9, 14\}$ and $Q = \{4, 14, 19\}$, since they presented a good trade-off between high accuracy and a small number of features.

Figure 3: Accuracies of the feature vector $\Theta$ using different values of $Q$ on the four databases.
$Q$ No. of features Outex USPTex Brodatz Vistex
{04, 09} 45 88.60 94.06 93.02 97.22
{04, 14} 60 88.82 94.98 93.86 98.50
{04, 19} 75 88.97 95.46 94.88 97.92
{04, 29} 105 88.38 94.81 94.48 97.80
{04, 39} 135 87.57 95.42 95.27 98.15
{09, 14} 75 88.09 94.72 93.52 97.92
{09, 19} 90 88.60 94.41 94.26 98.15
{09, 29} 120 87.50 94.24 94.14 97.80
{09, 39} 150 86.76 94.81 94.88 97.80
{14, 19} 105 89.04 94.94 94.54 98.26
{14, 29} 135 88.09 94.89 94.48 97.92
{14, 39} 165 87.65 95.29 95.33 98.15
{19, 29} 150 87.50 94.63 94.76 97.45
{19, 39} 180 87.28 95.02 95.27 98.03
{29, 39} 210 85.88 94.33 95.05 97.92
{04, 09, 14} 90 89.34 95.50 95.05 98.61
{04, 09, 19} 105 89.71 95.50 95.16 98.26
{04, 09, 29} 135 88.68 95.50 94.88 97.80
{04, 09, 39} 165 87.94 95.90 95.72 97.80
{04, 14, 19} 120 89.41 95.98 95.21 98.38
{04, 14, 29} 150 88.68 95.55 95.05 98.26
{04, 14, 39} 180 88.53 95.90 95.89 98.84
{04, 19, 29} 165 89.12 95.90 95.27 98.03
{04, 19, 39} 195 88.68 95.94 95.61 98.38
{04, 29, 39} 225 87.94 95.02 95.72 98.38
{09, 14, 19} 135 89.56 95.20 94.99 98.50
{09, 14, 29} 165 88.75 95.42 95.10 98.15
{09, 14, 39} 195 88.01 95.68 95.61 98.73
{09, 19, 29} 180 88.75 95.24 94.93 98.26
{09, 19, 39} 210 87.94 95.42 95.05 97.92
{09, 29, 39} 240 88.01 94.54 94.99 97.92
{14, 19, 29} 195 88.75 95.33 95.05 98.50
{14, 19, 39} 225 88.38 95.68 95.44 98.61
{14, 29, 39} 255 88.09 94.98 95.50 98.50
{19, 29, 39} 270 88.01 94.63 95.50 97.92
Table 1: Accuracies of the feature vector $\Psi$ using different values of $Q$ and their combinations, for a fixed maximum radius $r_{max}$.

We also evaluated the feature vector $\Psi$ for different values of $r_{max}$. Figure 4 shows the accuracies yielded by the combinations $Q = \{4, 9, 14\}$ and $Q = \{4, 14, 19\}$. The value of the maximum radius $r_{max}$ is associated with the zone of connection between the pixels (i.e., vertices): lower values of $r_{max}$ reach only the closest pixels and, as $r_{max}$ increases, the reach of the connections increases as well. The results show that the lowest values of $r_{max}$ provide better accuracies than the highest values, which indicates that local patterns are more important than global patterns for discriminating the textures.

Figure 4: Accuracies using the feature vector $\Psi$ for the two best sets of $Q$, (a) $\{4, 9, 14\}$ and (b) $\{4, 14, 19\}$, with different values of the maximum radius $r_{max}$.

Furthermore, we analyzed the combination of vectors $\Psi$ (shown in Table 1) computed with different values of the maximum radius $r_{max}$, resulting in the vector $\Phi$. To compute this vector, we used the combinations of $Q$ that provided the best results in Table 1: $\{4, 9, 14\}$ and $\{4, 14, 19\}$. In this experiment, we computed the vector $\Phi$ for two values of $r_{max}$ (i.e., up to two combinations of $r_{max}$) due to the large number of features generated.

Table 2 shows the results of the vectors $\Phi$ using the combination $Q = \{4, 9, 14\}$; the highest accuracy was provided by the radii $r_{max} = \{4, 6\}$. The results of the vectors built with the combination $Q = \{4, 14, 19\}$ are shown in Table 3; in this experiment, the best results were obtained with $r_{max} = \{4, 10\}$. Tables 2 and 3 also show that combining the vector $\Psi$ with different values of $r_{max}$ increases the accuracy by approximately 1% on the databases. However, in both cases, combinations of high values of $r_{max}$ provide inferior results. Even though the best results of the two tables are similar, notice that the vector from Table 3 uses 240 features against the 180 features of the vector from Table 2.

$r_{max}$ No. of features Outex USPTex Brodatz Vistex
{04, 06} 180 91.54 96.64 96.11 98.73
{04, 08} 180 91.47 96.24 95.88 98.26
{04, 10} 180 91.47 96.46 95.83 98.84
{04, 12} 180 91.69 96.25 95.72 98.26
{06, 08} 180 91.54 96.42 95.77 98.61
{06, 10} 180 90.74 96.25 95.83 98.50
{06, 12} 180 90.44 96.33 95.83 98.38
{08, 10} 180 90.58 95.98 95.15 98.49
{08, 12} 180 90.29 95.21 95.21 98.14
{10, 12} 180 90.59 95.37 94.93 98.38
Table 2: Accuracies of the vector $\Phi$ using different sets of radii $r_{max}$ and $Q = \{4, 9, 14\}$.
$r_{max}$ No. of features Outex USPTex Brodatz Vistex
{04, 06} 240 90.07 96.73 95.83 99.19
{04, 08} 240 91.17 96.68 96.05 98.95
{04, 10} 240 91.32 96.94 96.06 99.19
{04, 12} 240 91.76 96.55 96.11 98.61
{06, 08} 240 90.14 96.28 96.39 98.61
{06, 10} 240 89.63 96.60 95.95 98.50
{06, 12} 240 90.29 96.55 96.06 98.26
{08, 10} 240 90.00 95.94 95.72 98.37
{08, 12} 240 91.32 95.85 95.77 98.03
{10, 12} 240 90.51 95.94 95.05 98.03
Table 3: Accuracies of the vector $\Phi$ using different sets of radii $r_{max}$ and $Q = \{4, 14, 19\}$.
Method No. of features Outex USPTex Brodatz Vistex
GLCM 24 80.73 83.63 90.43 92.24
GLDM 60 86.76 91.92 94.43 97.11
Gabor Filters 64 81.91 83.19 89.86 93.28
Fourier 63 81.91 67.70 75.90 79.51
Fractal 69 80.51 78.22 87.16 91.67
Fractal Fourier 68 68.38 59.45 71.96 79.75
LOSIB 8 57.50 56.61 64.64 67.71
LBP 256 81.10 85.42 93.64 97.92
LBPV 555 75.66 55.13 86.26 88.65
CLBP 648 85.80 91.13 95.32 98.03
AHP 120 88.31 94.89 94.88 98.38
BSIF 256 77.43 77.48 91.44 88.66
LCP 81 86.25 91.31 93.47 94.44
LFD 276 82.57 83.59 90.99 94.68
LPQ 256 79.41 85.29 92.51 92.48
ELM Signature 180 89.70 95.11 95.27 98.14
CNTD 108 86.76 91.71 95.27 98.03
$\Psi$ ($Q$ = {04, 09, 14}) 90 89.34 95.50 95.05 98.61
$\Phi$ ($Q$ = {04, 09, 14}, $r_{max}$ = {04, 06}) 180 91.54 96.64 96.11 98.73
$\Phi$ ($Q$ = {04, 14, 19}, $r_{max}$ = {04, 10}) 240 91.32 96.94 96.06 99.19
Table 4: Comparison of accuracies of different texture analysis methods on the four texture databases.

5.2 Comparison with other methods

To evaluate the results obtained by our proposed method, we performed comparisons with methods from the literature. The experimental setup was the same for all the methods (LDA with leave-one-out), except for CLBP, which used the 1-Nearest Neighbor (1-NN) classifier with the chi-square distance, following its original paper. For our method, we adopted the two texture signatures that obtained the best results in the previous analysis: $\Phi$ with $Q = \{4, 9, 14\}$ and $r_{max} = \{4, 6\}$, and $\Phi$ with $Q = \{4, 14, 19\}$ and $r_{max} = \{4, 10\}$.

Table 4 presents the results obtained by all the methods on the four image databases. The results show that our proposed method obtained the best accuracies, with both signatures, when compared to the other methods. It is also important to stress that our method reached higher accuracies than the ELM signature and the CNTD method (which is also based on complex networks). This suggests that our method performs better because it combines the key characteristics of both: the ELM signature uses only pixel intensities to train the neural network, without the valuable information from complex network modeling, while the CNTD method models images as complex networks but computes only traditional measures, without a neural network to extract deep characteristics from these networks.

Even though our proposed method produces signatures with a larger number of descriptors than some methods of the literature, it is important to emphasize that, if we consider only the vector $\Psi$, the results are still competitive. For instance, the vector $\Psi$ with $Q = \{4, 9, 14\}$, which has only 90 features, provides superior performance on the Vistex and USPTex databases. On the remaining databases, the results are very close to the highest accuracies (only 0.36% below the ELM signature on the Outex database and 0.33% below CLBP on the Brodatz database).

6 Conclusion

This paper presented an innovative approach to texture feature extraction based on the fusion of complex networks and randomized neural networks. In the proposed method, a new way of modeling the image as a CN that uses a single parameter is presented. We also proposed a new way of characterizing the CN, based on the idea of using the output weights of a randomized neural network trained with topological properties of the CN. The classification results obtained on four databases outperformed other texture analysis methods from the literature. Moreover, the proposed approach offers an excellent trade-off between performance and feature vector size, which demonstrates that it is highly discriminative with the feature vectors considered. In this way, this paper shows that the fusion of complex networks and randomized neural networks is a research direction with great potential as a feasible texture analysis methodology.

Acknowledgments

Lucas Correia Ribas gratefully acknowledges the financial support grant #2016/23763-8, São Paulo Research Foundation (FAPESP). Jarbas Joaci de Mesquita Sá Junior thanks CNPq (National Council for Scientific and Technological Development, Brazil) (Grant: 302183/2017-5) for the financial support of this work. Leonardo Felipe dos Santos Scabini acknowledges support from CNPq (Grant #134558/2016-2). Odemir M. Bruno thanks the financial support of CNPq (Grant # 307797/2014-7) and FAPESP (Grant #s 14/08026-1 and 16/18809-9).

References

  • [1] Robert Azencott, Jia-Ping Wang, and Laurent Younes. Texture classification using windowed Fourier filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(2):148–153, 1997.
  • [2] André R Backes and Odemir M Bruno. A new approach to estimate fractal dimension of texture images. In International Conference on Image and Signal Processing, pages 136–143. Springer, 2008.
  • [3] André Ricardo Backes, Dalcimar Casanova, and Odemir Martinez Bruno. Plant leaf identification based on volumetric fractal dimension. International Journal of Pattern Recognition and Artificial Intelligence, 23(06):1145–1160, 2009.
  • [4] André Ricardo Backes, Dalcimar Casanova, and Odemir Martinez Bruno. Color texture analysis based on fractal descriptors. Pattern Recognition, 45(5):1984–1992, 2012.
  • [5] André Ricardo Backes, Dalcimar Casanova, and Odemir Martinez Bruno. Texture analysis and classification: A complex network-based approach. Information Sciences, 219:168–180, 2013.
  • [6] A.L. Barabási and M. Pósfai. Network Science. Cambridge University Press, 2016.
  • [7] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
  • [8] Manish H. Bharati, J.Jay Liu, and John F. MacGregor. Image texture analysis: methods and comparisons. Chemometrics and Intelligent Laboratory Systems, 72(1):57 – 71, 2004.
  • [9] Stefano Boccaletti, Vito Latora, Yamir Moreno, Martin Chavez, and D-U Hwang. Complex networks: Structure and dynamics. Physics reports, 424(4):175–308, 2006.
  • [10] Sheryl Brahnam, Lakhmi C Jain, Loris Nanni, Alessandra Lumini, et al. Local binary patterns: new variants and applications. Springer, 2016.
  • [11] P. Brodatz. Textures: A photographic album for artists and designers. Dover Publications, New York, 1966.
  • [12] Odemir Martinez Bruno, Rodrigo de Oliveira Plotze, Mauricio Falvo, and Mario de Castro. Fractal dimension applied to plant identification. Information Sciences, 178(12):2722–2733, 2008.
  • [13] D. Calvetti, S. Morigi, L. Reichel, and F. Sgallari. Tikhonov regularization and the L-curve for large discrete ill-posed problems. Journal of Computational and Applied Mathematics, 123(1):423 – 446, 2000.
  • [14] Sugama Chicklore, Vicky Goh, Musib Siddique, Arunabha Roy, Paul K Marsden, and Gary JR Cook. Quantifying tumour heterogeneity in 18F-FDG PET/CT imaging by texture analysis. European Journal of Nuclear Medicine and Molecular Imaging, 40(1):133–140, 2013.
  • [15] L da F Costa, Francisco A Rodrigues, Gonzalo Travieso, and Paulino Ribeiro Villas Boas. Characterization of complex networks: A survey of measurements. Advances in Physics, 56(1):167–242, 2007.
  • [16] T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, EC-14(3):326–334, 1965.
  • [17] Gabriella Csurka, Christopher Dance, Lixin Fan, Jutta Willamowski, and Cedric Bray. Visual categorization with bags of keypoints. In ECCV International Workshop on Statistical Learning in Computer Vision, pages 1–22, 2004.
  • [18] Luciano da Fontoura Costa, Osvaldo N. Oliveira Jr., Gonzalo Travieso, Francisco Aparecido Rodrigues, Paulino Ribeiro Villas Boas, Lucas Antiqueira, Matheus Palhares Viana, and Luis Enrique Correa Rocha. Analyzing and modeling real-world phenomena with complex networks: a survey of applications. Advances in Physics, 60(3):329–412, 2011.
  • [19] Núbia Rosa da Silva, Pieter Van der Weeën, Bernard De Baets, and Odemir Martinez Bruno. Improved texture image classification through the use of a corrosion-inspired cellular automaton. Neurocomputing, 149:1560–1572, 2015.
  • [20] Esther de Ves, Daniel Acevedo, Ana Ruedin, and Xaro Benavent. A statistical model for magnitudes and angles of wavelet frame coefficients and its application to texture retrieval. Pattern Recognition, 47(9):2925 – 2939, 2014.
  • [21] João Batista Florindo and Odemir Martinez Bruno. Fractal descriptors based on Fourier spectrum applied to texture analysis. Physica A: statistical Mechanics and its Applications, 391(20):4909–4922, 2012.
  • [22] João Batista Florindo, Dalcimar Casanova, and Odemir Martinez Bruno. Fractal measures of complex networks applied to texture analysis. In Journal of Physics: Conference Series, volume 410, page 012091. IOP Publishing, 2013.
  • [23] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, 2nd edition, 1990.
  • [24] Oscar García-Olalla, Enrique Alegre, Laura Fernández-Robles, and Víctor González-Castro. Local oriented statistics information booster (losib) for texture classification. In Pattern Recognition (ICPR), 2014 22nd International Conference on, pages 1114–1119. IEEE, 2014.
  • [25] Wesley Nunes Gonçalves, Núbia Rosa da Silva, Luciano da Fontoura Costa, and Odemir Martinez Bruno. Texture recognition based on diffusion in networks. Information Sciences, 364:51–71, 2016.
  • [26] Yimo Guo, Guoying Zhao, and Matti Pietikäinen. Texture classification using a linear configuration model based descriptor. In BMVC, pages 1–10. Citeseer, 2011.
  • [27] Zhenhua Guo, Lei Zhang, and David Zhang. A completed modeling of local binary pattern operator for texture classification. IEEE Transactions on Image Processing, 19(6):1657–1663, 2010.
  • [28] Zhenhua Guo, Lei Zhang, and David Zhang. Rotation invariant texture classification using lbp variance (lbpv) with global matching. Pattern recognition, 43(3):706–719, 2010.
  • [29] Robert M Haralick. Statistical and structural approaches to texture. Proceedings of the IEEE, 67(5):786–804, 1979.
  • [30] Robert M Haralick, Karthikeyan Shanmugam, and Its’ Hak Dinstein. Textural features for image classification. Systems, Man and Cybernetics, IEEE Transactions on, (6):610–621, 1973.
  • [31] Guang-Bin Huang, Qin-Yu Zhu, and Chee-Kheong Siew. Extreme learning machine: theory and applications. Neurocomputing, 70(1):489–501, 2006.
  • [32] Mahamadou Idrissa and Marc Acheroy. Texture classification using Gabor filters. Pattern Recognition Letters, 23(9):1095–1102, 2002.
  • [33] Anil K Jain and Farshid Farrokhnia. Unsupervised texture segmentation using Gabor filters. In Systems, Man and Cybernetics, 1990. Conference Proceedings., IEEE International Conference on, pages 14–19. IEEE, 1990.
  • [34] Juho Kannala and Esa Rahtu. Bsif: Binarized statistical image features. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 1363–1366. IEEE, 2012.
  • [35] Jong Kook Kim and Hyun Wook Park. Statistical textural features for detection of microcalcifications in digitized mammograms. IEEE transactions on medical imaging, 18(3):231–238, 1999.
  • [36] W-K Lam and C-K Li. Rotated texture classification by improved iterative morphological decomposition. IEE Proceedings-Vision, Image and Signal Processing, 144(3):171–179, 1997.
  • [37] D. H. Lehmer. Mathematical methods in large scale computing units. Annals Comp. Laboratory Harvard University, 26:141–146, 1951.
  • [38] Li Liu, Jie Chen, Paul Fieguth, Guoying Zhao, Rama Chellappa, and Matti Pietikainen. A survey of recent advances in texture representation. arXiv preprint arXiv:1801.10324, 2018.
  • [39] Rouzbeh Maani, Sanjay Kalra, and Yee-Hong Yang. Noise robust rotation invariant features for texture classification. Pattern Recognition, 46(8):2103–2116, 2013.
  • [40] Elias N Malamas, Euripides GM Petrakis, Michalis Zervakis, Laurent Petit, and Jean-Didier Legat. A survey on industrial vision systems, applications and tools. Image and vision computing, 21(2):171–188, 2003.
  • [41] Bangalore S Manjunath and Wei-Ying Ma. Texture features for browsing and retrieval of image data. IEEE Transactions on pattern analysis and machine intelligence, 18(8):837–842, 1996.
  • [42] Gisele Helena Barboni Miranda, Jeaneth Machicao, and Odemir Martinez Bruno. Exploring spatio-temporal dynamics of cellular automata for pattern recognition in networks. Scientific Reports, 6:37329, 2016.
  • [43] E. H. Moore. On the reciprocal of the general algebraic matrix. Bulletin of the American Mathematical Society, 26:394–395, 1920.
  • [44] Loris Nanni, Alessandra Lumini, and Sheryl Brahnam. Survey on LBP based texture descriptors for image classification. Expert Systems with Applications, 39(3):3634–3641, 2012.
  • [45] Timo Ojala, Topi Mäenpää, Matti Pietikäinen, Jaakko Viertola, Juha Kyllönen, and Sami Huovinen. Outex: New framework for empirical evaluation of texture analysis algorithms, 2002.
  • [46] Timo Ojala, Matti Pietikainen, and Topi Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on pattern analysis and machine intelligence, 24(7):971–987, 2002.
  • [47] Ville Ojansivu and Janne Heikkilä. Blur insensitive texture classification using local phase quantization. In International conference on image and signal processing, pages 236–243. Springer, 2008.
  • [48] Christoph Palm. Color texture classification by integrative co-occurrence matrices. Pattern recognition, 37(5):965–976, 2004.
  • [49] Dileep Kumar Panjwani and Glenn Healey. Markov random field models for unsupervised segmentation of textured color images. IEEE Transactions on pattern analysis and machine intelligence, 17(10):939–954, 1995.
  • [50] Y-H Pao and Yoshiyasu Takefuji. Functional-link net computing: theory, system architecture, and functionalities. Computer, 25(5):76–79, 1992.
  • [51] Yoh-Han Pao, Gwang-Hoon Park, and Dejan J Sobajic. Learning and generalization characteristics of the random vector functional-link net. Neurocomputing, 6(2):163–180, 1994.
  • [52] Stephen K. Park and Keith W. Miller. Random number generators: good ones are hard to find. Communications of the ACM, 31(10):1192–1201, 1988.
  • [53] R. Penrose. A generalized inverse for matrices. Mathematical Proceedings of the Cambridge Philosophical Society, 51(3):406–413, 1955.
  • [54] Rosalind Picard, Chris Graczyk, Steve Mann, Josh Wachman, Len Picard, and Lee Campbell. Vision texture database. Media Laboratory, MIT, Cambridge, Massachusetts, 1995.
  • [55] Lucas Correia Ribas, Diogo Nunes Gonçalves, Jonatan Patrick Margarido Oruê, and Wesley Nunes Gonçalves. Fractal dimension of maximum response filters applied to texture analysis. Pattern Recognition Letters, 65:116–123, 2015.
  • [56] Jarbas Joaci Mesquita Sá Junior and André Ricardo Backes. ELM based signature for texture classification. Pattern Recognition, 51:395–401, 2016.
  • [57] Leonardo FS Scabini, Rayner HM Condori, Wesley N Gonçalves, and Odemir M Bruno. Multilayer complex network descriptors for color-texture characterization. arXiv preprint arXiv:1804.00501, 2018.
  • [58] Leonardo FS Scabini, Wesley N Gonçalves, and Amaury A Castro Jr. Texture analysis by bag-of-visual-words of complex networks. In Iberoamerican Congress on Pattern Recognition, pages 485–492. Springer International Publishing, 2015.
  • [59] Wouter F Schmidt, Martin A Kraaijveld, and Robert P W Duin. Feedforward neural networks with random weights. In Proceedings., 11th IAPR International Conference on Pattern Recognition. Vol.II. Conference B: Pattern Recognition Methodology and Systems, pages 1–4, 1992.
  • [60] A. N. Tikhonov. On the solution of ill-posed problems and the method of regularization. Dokl. Akad. Nauk USSR, 151(3):501–504, 1963.
  • [61] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442, 1998.
  • [62] Hans Rudolf Wenk. Preferred Orientation in Deformed Metal and Rocks: An Introduction to Modern Texture Analysis. Elsevier, 2013.
  • [63] Joan S Weszka, Charles R Dyer, and Azriel Rosenfeld. A comparative study of texture measures for terrain classification. IEEE transactions on Systems, Man, and Cybernetics, (4):269–285, 1976.
  • [64] Degang Xu, Xiao Chen, Yongfang Xie, Chunhua Yang, and Weihua Gui. Complex networks-based texture extraction and classification method for mineral flotation froth images. Minerals Engineering, 83:105–116, 2015.
  • [65] Jianguo Zhang and Tieniu Tan. Brief review of invariant texture analysis methods. Pattern recognition, 35(3):735–747, 2002.
  • [66] Jianguo Zhang and Tieniu Tan. Brief review of invariant texture analysis methods. Pattern recognition, 35(3):735–747, 2002.
  • [67] Ziqi Zhu, Xinge You, CL Philip Chen, Dacheng Tao, Weihua Ou, Xiubao Jiang, and Jixin Zou. An adaptive hybrid pattern for noise-robust texture analysis. Pattern Recognition, 48(8):2592–2608, 2015.
  • [68] Alexsandro Mendes Zimer, Emerson Costa Rios, Paulo de Carvalho Dias Mendes, Wesley Nunes Gonçalves, Odemir Martinez Bruno, Ernesto Chaves Pereira, and Lucia Helena Mascaro. Investigation of aisi 1040 steel corrosion in h2s solution containing chloride ions by digital image processing coupled with electrochemical techniques. Corrosion Science, 53(10):3193–3201, 2011.