From a microscopic point of view, every natural scene that provides our visual stimulus can be seen as a patchwork of various texture patterns. According to biological knowledge, our visual system recognizes objects by their shapes in the primary visual cortex. In addition, recent studies have shown that textures are also an important property for the recognition and segmentation of objects in the higher-level visual cortex. Thus, the availability of a texture description is important for modeling natural images in the field of computer vision. In particular, our research focuses on constructing a generative model for various texture patterns.
Most recent studies of texture modeling that achieve good benchmark results are based on Markov Random Fields (MRFs) and Gibbs sampling [2, 3, 4, 5, 6]. These MRF-based models assume that textures are characterized simply as two-dimensional probability distributions. Therefore, most MRF-based texture models lack consideration of "perceptual equivalence".
Perceptual equivalence is the perceptual property by which we judge that "patterns A and B come from the same texture". Perceptual equivalence is not referred to explicitly in structural modeling such as MRF models, but it is naturally required as a part of modeling natural images. Figure 1 shows an example of perceptual equivalence: pictures of the same material under different illuminance. We can find structural/geometric differences, such as overexposure and shadow directions, caused by the different light sources; on the other hand, we can also judge that both of them are the same texture.
Portilla and Simoncelli proposed a texture feature that considers such perceptual aspects, and a generative model on the basis of feature-matching reconstruction [8, 9]. Hereafter, we call their texture feature the "Portilla-Simoncelli statistics", or PSS for short. The PSS is influenced by biological knowledge: the receptive fields of the primary visual cortex and the Gabor-like filters found especially in area V1. The PSS is based on a wavelet-like multi-scale image decomposition method with translation and rotation invariance, known as the steerable filter pyramid. The PSS consists of marginal statistics over the decomposed images, which act as constraints on the texture structure. Simoncelli et al. showed that the PSS could characterize and generate various textures [8, 11, 12, 9]. Moreover, recent studies report that the PSS can express the selective neuronal activity in area V4 of the macaque and in area V2 of the human cerebral cortex. These reports imply that the PSS could be an appropriate representation of our texture recognition mechanisms.
Considering the modeling of textures in natural images, hereafter called "natural textures", the PSS would be a good representation; however, the PSS has redundancy between its elements, because natural textures typically have a certain structure. Therefore, the PSS extracted from natural textures could be phrased in a more simplified representation.
This paper proposes a dimension reduction method to grasp the latent factors in the PSS of natural textures. Our method is based on Probabilistic Principal Component Analysis (PPCA) and focuses on the known correlation structure of the PSS. We achieve an 88.8% dimension reduction from the raw PSS to phrase a natural texture dataset to which a plain PCA is hard to apply.
II A Portilla-Simoncelli statistics
This section gives an overview of the PSS used in our study as a texture feature.
II-A A steerable filter pyramid
A steerable filter pyramid [10, 11] is a multi-scale and directive image decomposition method that partially imitates the orientation selectivity of the human visual system. The decomposition traits of the steerable filter pyramid can be "steered" by two parameters, the number of decomposition scales and the number of decomposition orientations/directions; hence the name. These properties originate in the complex orthogonal wavelet transform and Gabor filter banks.
The steerable filter pyramid is defined in the Fourier domain, similarly to the wavelet transform. Figure 2 shows a Fourier-domain block diagram of a steerable filter pyramid interpreted as a linear system. Let the input image be given in the spatial domain, with its polar representation in the Fourier domain; each transfer element in Figure 2 is then given by:
The band-pass transfer element is decomposed into a radial part and an angular part defined as
where  denotes a coordinate that depends on the number of orientations , given by
The low-pass transfer element has to be defined to reject the band exceeding the Nyquist limit, so that the decomposition forms a complete system.
Referring to Eq. 6, the high-pass transfer element follows the low-pass residual as
As shown in Figure 2, the steerable filter pyramid consists of a recursion of subsystems. The multi-scale nature of the steerable filter pyramid is provided by the recursion of these subsystems, and the orientation selectivity is provided by their parallelism.
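The recursion above is built from a radial low-pass/high-pass pair that tiles the frequency axis. As a hedged illustration (our own numpy sketch of the standard Simoncelli-Freeman radial definitions, not the authors' code), the pair can be written as:

```python
import numpy as np

def radial_filters(r):
    """Standard steerable-pyramid radial pair (Simoncelli & Freeman):
    low-pass L(r) and high-pass H(r) over radial frequency r in [0, pi].
    They satisfy (L/2)^2 + H^2 = 1, making the decomposition complete."""
    L = np.zeros_like(r)
    H = np.zeros_like(r)
    band = (r > np.pi / 4) & (r < np.pi / 2)
    L[r <= np.pi / 4] = 2.0
    L[band] = 2.0 * np.cos(np.pi / 2 * np.log2(4.0 * r[band] / np.pi))
    H[band] = np.cos(np.pi / 2 * np.log2(2.0 * r[band] / np.pi))
    H[r >= np.pi / 2] = 1.0
    return L, H
```

The flat-power identity (L/2)^2 + H^2 = 1 is what allows each low-pass output to be subsampled and fed recursively into the next scale.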
II-B Construction of the PSS
The PSS is constructed as a list of 10 types of statistics derived from the steerable filter pyramid.
II-B1 Descriptive statistics of pixel values ()
II-B2 Descriptive statistics of each scale ()
Skewness and kurtosis of the reconstructed images at each scale of the steerable filter pyramid, including the low-pass residual .
II-B3 Auto-correlation of each decomposition ()
-neighbor auto-correlation of the reconstructed images for each decomposition (scales and orientations) of the steerable filter pyramid.
II-B4 Auto-correlation of each scale ()
-neighbor auto-correlation of the reconstructed images at each scale of the steerable filter pyramid, including the low-pass residual .
II-B5 Cross-correlations between decomposed images for each scale ()
Cross-correlation of the decomposed images at each scale between the orientations.
II-B6 Cross-correlations between reconstructed images for each scale ()
Cross-correlation of the reconstructed images at each scale, including the low-pass residual , between the orientations.
II-B7 Cross-correlations between the reconstructed images ()
Cross-correlation of the reconstructed images across the scales, including the low-pass residual , and across the orientations.
II-B8 Cross-correlations between decomposed images for each decomposition ()
Cross-correlation of the decomposed images for each decomposition (scales and orientations).
II-B9 Means of each reconstructed image ()
Means of the pixel values of the reconstructed images for each decomposition, including the low-pass residual  and the high-pass residual .
II-B10 Variance of the high-pass residual ()
Variance of the pixel values of the reconstructed image of the high-pass residual .
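As a concrete illustration of the simplest statistic types above (II-B1 and II-B2), the following is a hedged numpy sketch of pixel-domain descriptive statistics; the exact set of moments is our assumption based on the descriptions above:

```python
import numpy as np

def marginal_stats(img):
    """Descriptive pixel statistics of the kind collected in the PSS:
    mean, variance, skewness, kurtosis, minimum, and maximum.
    Illustrative only; the exact moment set follows our reading of the PSS."""
    x = np.asarray(img, dtype=float).ravel()
    mu = x.mean()
    var = x.var()
    skew = ((x - mu) ** 3).mean() / var ** 1.5   # third standardized moment
    kurt = ((x - mu) ** 4).mean() / var ** 2     # fourth standardized moment
    return np.array([mu, var, skew, kurt, x.min(), x.max()])
```

The same skewness/kurtosis computation applies unchanged to each reconstructed scale image and to the low-pass residual.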
The dimensionality of the PSS is determined by the parameters of the steerable filter pyramid, , , and the neighborhood of the pixel space . In our experiments, we consistently use the parameters according to . With these parameters, the dimensionality of the PSS is 1784.
III Hierarchical Probabilistic Principal Component Analysis
In this paper, we propose a probabilistic principal component analysis (PPCA)-based model for dimension reduction that focuses on the known structure of the PSS. PPCA is a statistical method for estimating the latent variables that essentially generate the input data. It is similar to deterministic principal component analysis (PCA) and usually provides the same result; however, PPCA is a probabilistic model which assumes that the input data are generated under a Gaussian distribution with Gaussian noise. Therefore, PPCA has the merit of enabling optimization over a high-dimensional input space.
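For reference, the maximum-likelihood PPCA solution of Tipping and Bishop has a closed form over the eigendecomposition of the sample covariance; below is a minimal numpy sketch (function names are our own, not from the paper):

```python
import numpy as np

def ppca_fit(X, q):
    """ML estimate of PPCA (Tipping & Bishop): W = U_q (L_q - sigma^2 I)^(1/2),
    with sigma^2 the mean of the discarded eigenvalues. X is (n_samples, d)."""
    mu = X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    order = np.argsort(evals)[::-1]              # sort descending
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                    # noise = average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2, mu

def ppca_reduce(X, W, sigma2, mu):
    """Posterior mean of the latent variables, E[z|x] = M^-1 W^T (x - mu)."""
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    return (X - mu) @ W @ np.linalg.inv(M)
```

The explicit noise term sigma^2 is what distinguishes this from plain PCA projection and regularizes the fit in high-dimensional input spaces.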
PPCA is useful for dimension reduction in most cases; moreover, we sometimes know the structure or correlations of the input data in advance. For example, the PSS is built up gradually from 10 types of statistics. In other words, the PSS has a group structure between its components. By considering such a known structure, it is possible to grasp a more effective contracted representation of the input data.
We propose a novel PPCA architecture that considers such a known structure of the input data, a "hierarchical probabilistic principal component analysis" (HPPCA). Figure 4 shows a schematic diagram of the HPPCA. The HPPCA applies a hierarchical dimension reduction based on the structure of the PSS. First, the HPPCA applies a dimension reduction to each class of statistics with distinct PPCA models. Second, the reduced representations are concatenated into an intermediate vector; finally, the conclusive reduced representation is given as the output of the final PPCA.
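Under our reading of Figure 4, the two-stage reduction can be sketched as follows; this is a hedged illustration (helper names and group dimensionalities are our own, not the authors' implementation):

```python
import numpy as np

def _ppca_reduce(X, q):
    """One ML-PPCA stage: project X (n, d) onto the posterior mean of a
    q-dimensional latent space (Tipping & Bishop closed-form solution)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean() if q < X.shape[1] else 0.0
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    M = W.T @ W + sigma2 * np.eye(q)
    return Xc @ W @ np.linalg.inv(M)

def hppca_reduce(groups, group_dims, final_dim):
    """Hierarchical PPCA sketch: reduce each statistic group separately,
    concatenate into an intermediate vector, then reduce that vector
    with a final PPCA. `groups` is a list of (n, d_i) arrays, one per
    PSS statistic type."""
    intermediate = np.hstack([_ppca_reduce(g, q)
                              for g, q in zip(groups, group_dims)])
    return _ppca_reduce(intermediate, final_dim)
```

The per-group stage exploits the known group structure of the PSS, so the final PPCA only has to model correlations that remain across groups.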
IV-A Texture Dataset
shows examples of texture images from the Kylberg Texture Dataset. This dataset contains 28 classes of natural textures, which are macro photographs of real-world surfaces. Each class has 1920 patches of gray-scale images normalized to a mean value of 127 and a standard deviation of 40. The patches have a resolution of 576×576 pixels and are resized to 128×128 pixels to fit the model input. To evaluate the model performance, we used 1720 patches for training and 200 patches for evaluation.
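The normalization and resizing step described above can be sketched as follows; nearest-neighbor sampling is our own stand-in, since the interpolation method is not specified here:

```python
import numpy as np

def normalize_patch(img, mean=127.0, std=40.0, size=128):
    """Normalize a gray-scale patch to a fixed mean/std, then resize by
    nearest-neighbor index sampling (illustrative stand-in for the
    dataset preprocessing; the actual interpolation is unspecified)."""
    x = np.asarray(img, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12) * std + mean
    idx = (np.arange(size) * x.shape[0] / size).astype(int)
    return x[np.ix_(idx, idx)]
```

Normalizing before resizing keeps the target mean and standard deviation (up to sampling error) regardless of the downsampling method chosen.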
IV-B Performance index of texture similarity
To evaluate the performance of texture reconstruction, we chose the Texture Similarity Score (TSS) as an index of texture similarity . For a source texture image  and a generated texture sample , the TSS is given by:
where  denotes a patch within the test region of the image and  is the number of possible unique patches in the test region. In other words, the TSS denotes the maximum cosine similarity between the sample patches and the possible source texture regions, known as "the maximum normalized cross-correlation".
A patch and sample size of  pixels was adopted to define the TSS in our experiments, according to previous work .
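The "maximum normalized cross correlation" reading of the TSS can be sketched as below; for brevity the generated sample is treated as a single patch (our simplification: the full score would aggregate over all sample patches):

```python
import numpy as np

def tss(source, sample, patch=32):
    """Texture Similarity Score sketch: maximum cosine similarity between
    a generated sample patch and every patch of the source texture.
    Treats the sample as one patch; patch size is an assumption."""
    s = sample[:patch, :patch].ravel()
    s = s / np.linalg.norm(s)                      # unit-normalize the sample patch
    best = -1.0
    H, W = source.shape
    for i in range(H - patch + 1):                 # slide over every source patch
        for j in range(W - patch + 1):
            p = source[i:i + patch, j:j + patch].ravel()
            best = max(best, float(s @ p) / (np.linalg.norm(p) + 1e-12))
    return best
```

A score of 1 means the sample patch appears verbatim (up to scale) somewhere in the source test region.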
V-A A preliminary experiment: dimension reduction with conventional PPCA
As a preliminary experiment, we tried to obtain a dimension-reduced representation directly with plain linear PPCA. Figure 6 shows a reconstruction result for a 1784-to-1000 dimension reduction. The reconstruction does not reproduce the source texture structure, even under this small dimension reduction, which was expected to preserve the source PSS structure. This result implies that plain PPCA cannot sufficiently grasp the latent variables of the PSS.
V-B Appropriate dimensionality of the HPPCA
Dimension reduction with the HPPCA is steered in two parts: obtaining the intermediate representation and obtaining the conclusive representation. Because of the top-down dimension reduction, we first have to determine the dimension reduction rate of the intermediate PPCAs. We chose the cumulative contribution ratio as the index of dimension reduction for each PPCA, and determined the dimensions of each PPCA from the variation of the TSS with the cumulative contribution ratio.
A plot of the TSS versus the cumulative contribution ratio of the intermediate PPCAs is shown in Figure 7. The TSS increased monotonically with the cumulative contribution ratio; however, it exceeded the machine epsilon of 32-bit floats after . Thus, we chose  and a 965-dimensional intermediate representation.
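The cumulative contribution ratio criterion used above can be sketched as a small helper (our own illustration, operating on a single statistic group):

```python
import numpy as np

def dims_for_ccr(X, target_ratio):
    """Smallest number of principal components whose cumulative
    contribution (explained-variance) ratio reaches target_ratio."""
    evals = np.linalg.eigvalsh(np.cov(X - X.mean(axis=0), rowvar=False))[::-1]
    ccr = np.cumsum(evals) / evals.sum()           # cumulative contribution ratio
    return int(np.searchsorted(ccr, target_ratio) + 1)
```

Applying this per group, with a common target ratio, yields the dimensionalities of the intermediate PPCAs and hence the size of the concatenated intermediate vector.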
Given this intermediate layer, we chose the conclusive dimensionality of the model output. A plot of the TSS versus the conclusive dimensionality of the HPPCA is shown in Figure 8. The TSS seemed to saturate at around 150 to 200. We adopted  as the conclusive reduced representation of the PSS in our experiments.
V-C Qualitative evaluation
This section shows that the contracted representation produced by our proposed method can reconstruct textures with perceptual equivalence. The textures reconstructed from the contracted PSS using the HPPCA discussed in the previous section are shown in Figure 9. The fine textures, namely ceiling1, cushion1, and blanket1, were well reproduced by the reduced PSS (Figure 9, first row). On the other hand, the coarse textures, which have large patterns, sometimes have trouble reproducing continuity (Figure 9, second and third rows). This could be due to a failure to reproduce the low-frequency components of the reduced PSS with the HPPCA. Nevertheless, most of the reconstruction results can be said to reproduce the source texture structure, and we can judge that these textures are almost the same.
VI Discussion and conclusion
This paper introduced a dimension reduction method for the Portilla-Simoncelli statistics, a perceptual texture feature, using hierarchical probabilistic principal component analysis. We achieved an 88.8% dimension reduction from the raw PSS while preserving the source texture structures in reconstruction.
The HPPCA model can be read as a Gaussian-Gaussian restricted Boltzmann machine with sparsity imposed on its connections. Such connectionism would make it easier for the PPCA to grasp the latent structure of the input space. In future work, we intend to analyze the mechanism of the HPPCA model mathematically and to build a more generalized texture model via large-scale natural image datasets.
-  E. H. Adelson, “On seeing stuff: the perception of materials by humans and machines,” in Photonics West 2001-electronic imaging. International Society for Optics and Photonics, 2001, pp. 1–12.
-  N. Heess, C. K. Williams, and G. E. Hinton, “Learning generative texture models with extended fields-of-experts.” in BMVC, 2009, pp. 1–11.
-  M. Ranzato, V. Mnih, and G. Hinton, "Generating more realistic images using gated MRF's," NIPS, pp. 1–9, 2010.
-  H. Luo, P. L. Carrier, A. Courville, and Y. Bengio, “Texture Modeling with Convolutional Spike-and-Slab RBMs and Deep Extensions,” Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 31, pp. 415–423, 2012. [Online]. Available: http://arxiv.org/abs/1211.5687
-  J. J. Kivinen and C. K. I. Williams, "Multiple Texture Boltzmann Machines," Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS 2012), vol. 22, pp. 638–646, 2012. [Online]. Available: http://homepages.inf.ed.ac.uk/s0960152/papers/MTBM-AISTATS12.pdf
-  M. Ranzato, V. Mnih, J. M. Susskind, and G. E. Hinton, “Modeling natural images using gated MRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2206–2222, 2013.
-  K. J. Dana, B. Van Ginneken, S. K. Nayar, and J. J. Koenderink, “Reflectance and texture of real-world surfaces,” ACM Transactions on Graphics (TOG), vol. 18, no. 1, pp. 1–34, 1999.
-  J. Portilla, R. Navarro, O. Nestares, and A. Tabernero, “Texture synthesis-by-analysis method based on a multiscale early-vision model,” Optical Engineering, vol. 35, no. 8, pp. 2403–2417, 1996.
-  J. Portilla and E. P. Simoncelli, “Parametric texture model based on joint statistics of complex wavelet coefficients,” International Journal of Computer Vision, vol. 40, no. 1, pp. 49–71, 2000.
-  E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: A flexible architecture for multi-scale derivative computation,” in Image Processing, 1995. Proceedings., International Conference on, vol. 3. IEEE, 1995, pp. 444–447.
-  J. Portilla and E. P. Simoncelli, “Texture modeling and synthesis using joint statistics of complex wavelet coefficients,” in IEEE workshop on statistical and computational theories of vision, 1999.
-  E. P. Simoncelli and J. Portilla, “Texture characterization via joint statistics of wavelet coefficient magnitudes,” in Image Processing, 1998. ICIP 98. Proceedings. 1998 International Conference on, vol. 1. IEEE, 1998, pp. 62–66.
-  G. Okazawa, S. Tajima, and H. Komatsu, “Image statistics underlying natural texture selectivity of neurons in macaque V4.” Proceedings of the National Academy of Sciences of the United States of America, vol. 112, no. 4, pp. E351–60, 2015.
-  C. Hiramatsu, N. Goda, and H. Komatsu, “Transformation from image-based to perceptual representation of materials along the human ventral visual pathway,” NeuroImage, vol. 57, no. 2, pp. 482–494, 2011.
-  M. E. Tipping and C. M. Bishop, "Probabilistic principal component analysis," Journal of the Royal Statistical Society, Series B, vol. 61, no. 3, pp. 611–622, 1999.
-  G. Kylberg, “The kylberg texture dataset v. 1.0,” Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, Uppsala, Sweden, External report (Blue series) 35, September 2011. [Online]. Available: http://www.cb.uu.se/~gustaf/texture/
-  R. Karakida, M. Okada, and S.-i. Amari, "Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units," Neural Networks, vol. 79, pp. 78–87, 2016.