On Demand Solid Texture Synthesis Using Deep 3D Networks

01/13/2020 ∙ by Jorge Gutierrez, et al.

This paper describes a novel approach for on demand volumetric texture synthesis, based on a deep learning framework that allows for the generation of high quality 3D data at interactive rates. Based on a few example images of textures, a generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes that reproduce the visual characteristics of the examples along some directions. To cope with the memory limitations and computational complexity that are inherent to both high resolution and 3D processing on the GPU, only 2D textures referred to as "slices" are generated during the training stage. These synthetic textures are compared to exemplar images via a perceptual loss function based on a pre-trained deep network. The proposed network is very light (less than 100k parameters), therefore it only requires a moderate training effort (a few hours) and is capable of very fast generation (around a second for 256^3 voxels) on a single GPU. Integrated with a spatially seeded PRNG, the proposed generator network directly returns an RGB value given a set of 3D coordinates. The synthesized volumes achieve a visual quality that is at least equivalent to that of state-of-the-art patch-based approaches. They are naturally seamlessly tileable and can be fully generated in parallel.







1 Introduction

This document is a lightweight preprint version of the journal article published in Computer Graphics Forum. DOI:10.1111/cgf.13889 (https://doi.org/10.1111/cgf.13889) Another preprint version with uncompressed images is available here: https://hal.archives-ouvertes.fr/hal-01678122v3.

2D textures are ubiquitous in 3D graphics applications. Their visual complexity, combined with their widespread availability, allows for the enrichment of 3D digital objects' appearance at a low cost. In that regard, solid textures, which are the 3D equivalent of stationary raster images, offer several visual quality advantages over their 2D counterparts. Solid textures eliminate the need for a surface parametrization and its accompanying visual artifacts, and they produce the feeling that the object was carved from the texture material. Additionally, the availability of consistent volumetric color information allows for interactive manipulations such as object fracturing or cut-away views to reveal internal texture details. However, unlike scanning a 2D image, digitizing volumetric color information is impractical. As a result, most existing solid textures are synthetic.

One early way to generate solid textures is procedural generation [Pea85, Per85]. In procedural methods the color of a texture at a given point only depends on its coordinates. This allows for a localized evaluation that generates only the required portions of texture at a given moment; we refer to this characteristic as on demand evaluation. Procedural methods are indeed fast and memory efficient. Unfortunately, finding the right parameters of a procedural model to synthesize a given texture requires a high amount of expertise and trial and error. Photo-realistic textures with visible elemental patterns are particularly hard to generate with these methods.
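To make the on demand property concrete, here is a minimal, self-contained sketch of coordinate-based evaluation: a hash-based value noise whose output at a point depends only on that point's coordinates, so any region can be evaluated independently. The hash constants and the trilinear smoothing are illustrative choices, not the constructions of [Pea85] or [Per85].

```python
import math

def hash3(x: int, y: int, z: int) -> float:
    """Deterministic pseudo-random value in [0, 1) from 3D integer coordinates."""
    h = (x * 374761393 + y * 668265263 + z * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    h ^= h >> 16
    return h / 2**32

def value_noise(x: float, y: float, z: float) -> float:
    """Trilinear interpolation of hashed lattice values: a cheap solid noise
    that can be evaluated on demand at any 3D point."""
    xi, yi, zi = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - xi, y - yi, z - zi
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight of each of the 8 surrounding lattice points
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                v += w * hash3(xi + dx, yi + dy, zi + dz)
    return v
```

Because the value at a point never depends on previously generated values, disjoint blocks of the volume can be evaluated in any order and still agree on their boundaries.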

To avoid empirically tuning the model for a given texture, several by-example solid texture synthesis methods have been proposed [HB95, KFCO07, DLTD08, CW10]. These methods are able to generate solid textures that share the visual characteristics of a given target 2D texture example throughout all the cross-sections in a given set of slicing directions (inferring the 3D structure from the constrained directions). Although they do not always deliver perfect results, by-example methods can approximate the characteristics of a broad set of 2D textures. One convenient approach to synthesizing textures by example is called lazy synthesis [DLTD08, ZDL11]. It consists of an on demand synthesis, i.e. synthesizing only voxels at a specific location instead of generating a whole volume of values. Current lazy synthesis methods, however, tend to deliver lower visual quality.

Several solid texture synthesis methods arise as the extrapolation of a 2D model. While some approaches can be trivially extended (e.g. procedural ones), others require a more complex strategy, e.g. pre-computation of 3D neighborhoods [DLTD08] or averaging of synthesized slices [HB95]. Currently, 2D texture synthesis methods based on Convolutional Neural Networks (CNNs) [GEB15, UVL17] roughly define the state-of-the-art in the 2D literature. Texture network methods [ULVL16, UVL17, LFY17a] stand out thanks to their fast synthesis times.

In this work we introduce a CNN generator capable of synthesizing 3D textures on demand. On demand generation, also referred to as local evaluation in the literature (e.g. [WL03, LH05, DLTD08]), is critical for solid texture synthesis because storing the values of a whole solid texture at a useful resolution is prohibitive for most applications. It provides the ability to generate only the needed portions of the solid texture, which speeds up texturing surfaces and saves memory. It was elegantly addressed for patch-based methods for 2D and solid textures [WL03, LH05, DLTD08, ZDL11]. However, none of the aforementioned CNN-based methods address this property.

There are substantial challenges in designing a CNN to accomplish such a task. On demand evaluation requires the association of the synthesis to a coordinate system, and determinism to enforce spatial coherence. For training, we first need a way to evaluate the quality of the synthesized samples. Since the seminal works of [GEB15, JAFF16], most 2D texture synthesis methods, such as [ULVL16, UVL17, ZZB18, YBS19], use the activations in the hidden layers of VGG-19 [SZ14] as a descriptor network to characterize the generated samples and evaluate their similarity to the example. There is, however, no volumetric equivalent of such an image classification network that we could use off-the-shelf. Second, we need a strategy to cope with the enormous amount of memory demanded by the task.

We propose a compact solid texture generator model, based on a CNN, capable of on demand synthesis at interactive rates. During training, we assess the samples' appearance via a volumetric loss function that compares slices of the generated textures to the target image. We exploit the stationarity of the model to propose a fast and memory efficient single-slice training strategy. This allows us to use target examples at higher resolutions than those in the current 3D literature. The resulting trained network is simple, lightweight, and powerful at reproducing the visual characteristics of the example on the cross-sections of the generated volume along one and up to three directions.

2 Related works

To the best of our knowledge, the method proposed here is the first to employ a CNN to generate solid textures. Here we briefly outline the state-of-the-art in solid texture synthesis, then describe successful applications of CNNs to 2D texture synthesis. Finally, we mention relevant CNN methods that use 2D views to generate 3D objects.

2.1 Solid texture synthesis

Procedural methods [Per85, Pea85] are quite convenient for computer graphics applications thanks to their real-time computation and on demand evaluation capability. Essentially, one can add texture to a 3D surface by directly evaluating a function given the coordinates of only the required (visible) points in 3D space. The principle for texture generation on a surface is as follows: a colormap function, such as a simple mathematical expression, is evaluated at each of the visible 3D points. In [Per85], the authors use pseudo-random numbers that depend on the coordinates of the local neighborhood of the evaluated point, which ensures both the random aspect of the texture and its spatial coherence. Creating realistic textures with a procedural noise is a trial and error process that necessitates technical and artistic skills. Some procedural methods alleviate this process by automatically estimating their parameters from an example image [GD95, GLLD12, GSV14, GLM17]. However, they only deal with surface textures, and most photo-realistic textures are still out of the reach of these methods.

Most example-based solid texture synthesis methods aim at generating a full block of voxels whose cross-sections are visually similar to a respective texture example. These methods are in general fully automated and are able to synthesize a broader set of textures. The texture example being only 2D, a prior model is required to infer 3D data. Some methods [HB95, DGF98, QhY07] employ an iterative slicing strategy alternating between independent 2D synthesis of the slices and 3D aggregation. Another strategy starts with a solid noise and iteratively modifies its voxels by assigning them a color value depending on a set of coherent pixels from the example. Wei [Wei03] uses the 2D neighborhood of a pixel, also called a patch, to assess coherence; the set of contributing pixels is then formed by the closest patch along each axis of the solid, and the assigned color is the average of the contributing pixels. Dong et al. [DLTD08] determine coherence using three overlapping 2D neighborhoods (i.e. forming a 3D neighborhood) around each voxel and find only the best match among a set of precomputed candidates.

The example-based solid texture synthesis methods that achieve the best visual results on a wider category of textures [KFCO07, CW10] involve a patch-based global optimization framework [KEBK05] governed by a statistical matching strategy, which improves robustness to failure. These methods, however, require high computation times, which limits them to low resolution textures and input examples, and they are incapable of on demand evaluation, which limits their usability. Regarding speed, the patch-based method proposed by Dong et al. [DLTD08] allows for fast on demand evaluation, thus allowing for visual versatility and practicality. Here the patch-matching strategy is accelerated by pre-processing compatible 3D neighborhoods according to the examples given along some axes. This preprocessing is a trade-off between visual quality and speed, as it reduces the richness of the synthesized textures; their overall visual quality is thus less satisfactory than that of the optimization methods mentioned previously.

2.2 Neural networks on texture synthesis

Our method builds upon previous work on example-based 2D texture synthesis using convolutional neural networks. We distinguish two types of approaches: image optimization and feed-forward texture networks.

Image optimization

Image optimization methods were inspired by previous statistical matching approaches [PS00] and use a variational framework which aims at generating a new image that matches the features of an example image. The role of the CNN in this class of methods is to deliver a powerful characterization of the images. It typically comes in the form of feature maps at the internal layers of a pretrained deep CNN [SZ14], namely VGG-19. Gatys et al. [GEB15] pioneered this approach for texture synthesis by considering the discrepancy between the feature maps of the synthesized image and those of the example. More precisely, for texture synthesis, where one has to take spatial stationarity into account, the corresponding perceptual loss, as coined later by [JAFF16], is the Frobenius distance between the Gram matrices of CNN features at different layers. Starting from a random initialization, the input image is then optimized via a stochastic gradient descent algorithm, where the gradient is computed using back-propagation through the CNN. Since then, several variants have built on this framework to improve the quality of the synthesis: for structured textures, by adding a Fourier spectrum discrepancy [LGX16] or using co-occurrence information computed between the feature maps and their translation [BM17]; for non-local patterns, by considering spatial correlation and smoothing [SCO17]; for stability, by considering a histogram matching loss and smoothing [WRB17]. These methods deliver good quality and high resolution results, as they can process high resolution images. Their main drawback comes from the optimization process itself, as it requires several minutes to generate one image. Implementing local evaluation in these methods is infeasible since they use a global optimization scheme, as do patch-based texture optimization methods [KEBK05, KFCO07]. An extension to dynamic texture synthesis has also been proposed in [TBD18]: the textured video is optimized using a perceptual loss combined with a loss based on estimated optical flow to take the time dimension into account.

Feed-forward texture networks

Feed-forward network approaches were introduced by Johnson et al. [JAFF16] for style transfer and Ulyanov et al. [ULVL16] for texture synthesis. In the latter, an actual generative CNN is trained to synthesize texture samples that reproduce the visual characteristics of the example. These methods use the loss function of [GEB15] to compare the visual characteristics of the generated and example images. However, instead of optimizing the generated output image, the training aims at tuning the parameters of the generative network. Such optimization can be more demanding since there is no spatial regularity shared across iterations, as there is in the optimization of a single image. However, this training phase only needs to be done once for a given input example. It is achieved in practice using back-propagation and a gradient-based optimization algorithm on batches of noise inputs. Once trained, the generator is able to quickly generate samples similar to the input example by forward evaluation. Originally, these methods train one network per texture example. Li et al. [LFY17a] proposed a large network architecture and a training scheme that give one network the capacity to generate and mix several different textures. By increasing the capacity of the generator network, the methods of Li et al. [LFY17a] and Ulyanov et al. [UVL17] reached a modestly higher visual quality, but found that the synthesized textures did not change sufficiently for different inputs. To prevent the generator from producing identical images, they were forced to incorporate a term that encourages diversity into the objective function. Feed-forward texture network methods generally produce results with slightly lower visual quality than the image optimization methods, but they exhibit faster generation times. Visual quality aside, the earlier architecture proposed by Ulyanov et al. [ULVL16] holds several advantages over more recent feed-forward methods: it does not require a diversity term, it is able to generate images of arbitrary sizes, and it is significantly smaller in terms of number of parameters. Moreover, as we show in Section 4, this framework can be customized to allow on demand evaluation.

Other approaches achieve texture synthesis as a feed-forward evaluation of a generator network using a different training strategy. Bergmann et al. [BJV17] train a generator network without using the perceptual loss. Instead they employ a generative adversarial network (GAN) framework [GPAM14] with a purely convolutional architecture [RMC16] to allow for flexibility in the size of the samples to synthesize. This method shares the advantages of feed-forward networks regarding evaluation, but is based on a more complex training scheme where two cascading networks have to be optimized using an adversarial loss, which can affect the quality on different texture examples. Zhou et al. [ZZB18] use a combination of perceptual and adversarial losses and achieve impressive results for the synthesis of non-stationary textures. This is a more general problem, seldom addressed in the literature, and it requires extra assumptions about the behavior of the texture at hand. Similarly, Yu et al. [YBS19] train a CNN to perform texture mixing using a hybrid approach, combining the perceptual loss and the adversarial loss to help the model produce plausible new textures. Finally, Li et al. [LFY17b] proposed another strategy that leverages auto-encoders [LeC87, BK88]. They use truncated versions of the VGG-19 network as encoders that map images to a feature space, and design decoder networks that generate images from such features. During the training stage, this generator is optimized to invert the encoder by trying to generate images that match encoded images from a large dataset of natural images. During synthesis, a random image and an example image are first encoded; the random features are matched to the target ones using first and second order moment matching, and then fed to the decoder to generate a random texture, without requiring a specific training for this example. While very appealing, this approach is difficult to adapt to 3D texture synthesis, where such a large dataset is not available. Moreover, the quality is not as good as for the previously mentioned methods [GEB15, ULVL16].

2.3 Neural networks for volumetric object generation

A related problem in computer vision is the generation of binary volumetric objects from 2D images [GCC17, JREM16, YYY16]. Similarly to our setting, these approaches rely on unsupervised training with a loss function comparing 3D data to 2D examples. However, these methods do not handle color information and only produce low resolution volumes.

3 Overview

Figure 1: Training framework for the proposed CNN generator network with parameters θ. The generator processes a multi-scale noise input z to produce a solid texture v. The loss compares, for each direction d, the feature statistics induced by the example u_d in the layers of the pre-trained descriptor network to those induced by each slice of the set {v_{d,i}}. The training iteratively updates the parameters θ to reduce the loss. We show in Section 5 that we can perform training by only generating single-slice solids instead of full cubes.

Figure 1 outlines the proposed method. We perform solid texture synthesis using the convolutional neural generator network detailed in Section 4. The generator, with learnable parameters θ, takes a multi-scale volumetric noise input z and processes it to produce a color solid texture v. The proposed model is able to perform on demand evaluation, which is a critical property for solid texture synthesis algorithms. On demand evaluation spares computations and memory usage, as it allows the generator to synthesize only the voxels that are visible.

The desired appearance of the samples is specified in the form of a view dissimilarity term for each direction. The generated 3D samples are compared to example images u_d that correspond to the desired view along directions d taken among the 3 canonical directions of the Cartesian grid. The generator learns to sample solid textures matching the visual features extracted from the examples via the optimization of its parameters θ. To do so, we formulate a volumetric slice-based loss function. It measures how appropriate the appearance of a solid sample v is by comparing its 2D slices v_{d,i} (the i-th slice of v along the d-th direction) to each corresponding example u_d. As in previous methods, the comparison is carried out in the space of features of a descriptor network based on VGG-19.

The training scheme, detailed in Section 5, involves the generation of batches of solid samples, which would a priori require a prohibitive amount of memory with a classical CNN optimization approach. We overcome this limitation thanks to the stationarity properties of the model: we show that training the proposed model only requires the generation of single-slice volumes along the specified directions. Section 6 presents experiments and comparative results. Finally, in Section 7 we discuss the current limitations of the model.

4 On demand evaluation enabled CNN generator

The architecture of the proposed CNN generator is summarized in Figure 2 and detailed in Subsection 4.1. The generator applies a series of convolutions to a multi-scale noise input to produce a single solid texture. It is inspired by the model of Ulyanov et al. [ULVL16], which lends itself to on demand evaluation thanks to its small number of parameters and the local dependency between output and input. It is based on a multi-scale approach, itself inspired by the human visual system, that has been successfully used in many computer vision applications, and in particular for texture synthesis [HB95, De 97, WL00, PS00, KEBK05, RPDB12, GLR18].

This fully convolutional generator allows the on demand generation of box-shaped volume textures of arbitrary size (down to a single voxel), controlled by the size of the input. Formally, given an infinite noise input, it represents an infinite texture model. A first step towards on demand evaluation is to control the size of the generated sample. To do so, we unfold the contribution of the values in the noise input to each value in the output of the generator; this dependency is described in Subsection 4.2. Then, on demand voxel-wise generation is achieved thanks to the multi-scale shift compensation detailed in Subsection 4.3. The resulting generator is able to synthesize coherent and expandable portions of a theoretically infinite texture.

4.1 Architecture

Figure 2: Schematic of the network architecture. Noise inputs at different scales are processed using convolution operations and non-linear activations. The information at different scales is combined using upsampling and channel concatenation. n indicates the number of input channels and m controls the number of channels at intermediate layers. For simplicity we consider a cube-shaped output. For each intermediate cube the spatial size is indicated above and the number of channels below.

The generator produces a solid texture v from a set of multi-channel volumetric white noise inputs {z_k}. The spatial dimensions of the noise directly control the size of the generated sample. The process of transforming the noise into a solid texture is depicted in Figure 2. It follows a multi-scale architecture built upon three main operations: convolution, concatenation, and upsampling. Starting at the coarsest scale, the 3D noise sample is processed with a set of convolutions followed by an upsampling to reach the next scale. It is then concatenated with the independent noise sample from the next scale, itself also processed with a set of convolutions. This process is repeated at each successive scale before a final single convolution layer maps the number of channels to three to get a color texture. We now detail the three different blocks of operations used in the generator network.

Convolution block

A convolution block groups a sequence of three ordinary 3D convolution layers, each of them followed by a batch normalization and a leaky rectified linear unit. With n channels in the input and m in the output, the first convolution layer carries out the shift from n to m channels. The following two layers of the block have m channels in both the input and the output. The kernel size is 3×3×3 for the first two layers and 1×1×1 for the last. Contrary to [ULVL16], and in order to enable on demand evaluation (see Subsection 4.2), here the convolutions are computed densely and without padding, thus discarding the edge values. Applying one convolution block with these settings to a volume reduces its size by 4 values per spatial dimension.
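This size bookkeeping can be checked in a few lines of Python, assuming 3-wide kernels for the first two layers and a 1-wide kernel for the last (an assumption consistent with the stated reduction of 4 values per spatial dimension):

```python
# Output-size arithmetic for "valid" (unpadded) convolutions: a kernel of
# width k trims k - 1 values per spatial dimension.

def valid_conv_out(n: int, k: int) -> int:
    """Spatial size after a dense convolution with kernel width k and no padding."""
    assert n >= k, "input too small for a valid convolution"
    return n - k + 1

def conv_block_out(n: int) -> int:
    """One convolution block: two 3-wide layers then one 1-wide layer,
    shrinking the volume by 4 values per axis."""
    for k in (3, 3, 1):
        n = valid_conv_out(n, k)
    return n
```

For instance, a block applied to a 16-wide volume yields a 12-wide one, matching the 4-value reduction stated above.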


Upsampling

An upsampling performs a 3D nearest neighbor upsampling by a factor of 2 on each spatial dimension (i.e. each voxel is replicated 8 times).
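This operation can be sketched with numpy's `repeat`, applied once per spatial axis of a (channels, depth, height, width) volume:

```python
import numpy as np

def upsample2_nn(x: np.ndarray) -> np.ndarray:
    """Nearest neighbor upsampling by 2 along each spatial axis of a
    (channels, depth, height, width) volume: each voxel is replicated 8 times."""
    for axis in (1, 2, 3):      # leave the channel axis untouched
        x = np.repeat(x, 2, axis=axis)
    return x
```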

Channel concatenation

This operation first applies a batch normalization and then concatenates the channels of two multi-channel volumes having the same spatial size. If the sizes differ, the larger volume is cropped to the size of the smaller one.

The learnable parameters θ of the generator are the convolution kernels and biases, and the batch normalization layers' weights, biases, means and variances. The training of these parameters is discussed in Section 5.

4.2 Spatial dependency

Forward evaluation of the generator is deterministic and local, i.e. each value in the output only depends on a small number of neighboring values in the multi-scale input. By handling the noise inputs correctly, we can feed the network separately with two contiguous portions of noise to synthesize textures that can be tiled seamlessly. Current 2D CNN methods [ULVL16, LFY17a] perform padded convolutions and do not address on demand evaluation. Note that a perfect tiling between two different samples can only be achieved by using convolutions without padding; we therefore discard the values on the borders where the operation with the kernel cannot be carried out completely.

When synthesizing a sample, the generator is fed with an input that takes the neighboring dependency values into account. Those extra values are progressively processed and discarded in the convolutional layers: for an output of spatial size n, the input at the k-th scale must provide the values covering the output support plus Δ_k additional values, where Δ_k denotes the extra support required along each spatial dimension due to the dependency. The size in any spatial dimension can be any positive integer (provided the memory is large enough for synthesizing the volume).

These additional values depend on the network architecture. In our case, thanks to the symmetry of the generator, the coefficients Δ_k are the same along each spatial dimension. Each convolution block requires an additional support of two values on each side along each dimension, and each upsampling divides the remaining dependency by two (taking the next integer when the result is fractional). At the finest scale there are two convolution blocks; each subsequent scale adds its own convolution support, except for the coarsest scale where there is only one convolution block. The spatial dimensions of the noise inputs needed to generate a single voxel follow from this recursion at each scale.
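The recursion can be sketched by walking the generator backwards from the desired output size, through valid convolutions and upsamplings. The per-scale operation counts and kernel width below are illustrative assumptions, not the exact architecture of the paper:

```python
import math

def input_sizes(out_size: int, num_scales: int, convs_per_scale: int = 2,
                kernel: int = 3) -> list:
    """Required spatial size of the noise input at scales 0 (finest) .. K
    (coarsest), for a desired output of size out_size in one dimension."""
    sizes = []
    n = out_size
    for scale in range(num_scales + 1):
        # support consumed by the valid convolutions at this scale
        n += convs_per_scale * (kernel - 1)
        sizes.append(n)
        # the upsampling below halves the dependency (rounding up)
        n = math.ceil(n / 2)
    return sizes
```

For example, with two upsamplings and two 3-wide convolutions per scale, a single output voxel needs a 5-wide noise patch at the finest scale, 7 at the next, and 8 at the coarsest.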

4.3 Multi-scale shift compensation for on demand evaluation

On demand generation is a standard issue in procedural synthesis [Per85]. The purpose is to consistently generate any part of an infinite texture model. It enables the generation of small texture blocks separately, whose localization depends on the geometry of the object to be textured, instead of directly generating a full volume containing the object. For procedural noise, this is achieved using a reproducible Pseudo-Random Number Generator (PRNG) seeded with spatial coordinates.

In our setting, we enforce spatial consistency by generating the multi-scale noise inputs using a xorshift PRNG algorithm [Mar03] seeded with values depending on the volumetric coordinates, channel and scale, similarly to [GLM17]. Thus, our model only requires a set of reference 3D coordinates and the desired size to generate spatially coherent samples. Given a reference coordinate x at the finest scale in any dimension, the corresponding coordinate at the k-th scale is computed as x_k = ⌊x / 2^k⌋. These corresponding noise coordinates need to be aligned in order to ensure the coherence between samples generated separately.
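A coordinate-seeded xorshift can be sketched as follows; the seed-mixing constants and the number of mixing rounds are illustrative assumptions, not the exact scheme of the paper:

```python
def xorshift32(state: int) -> int:
    """One round of the 32-bit xorshift generator [Mar03]."""
    state &= 0xFFFFFFFF
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def seeded_noise(x: int, y: int, z: int, channel: int, scale: int) -> float:
    """Uniform value in [0, 1) attached to one voxel of the multi-scale noise.
    Being a pure function of the coordinates, any part of the infinite noise
    field can be regenerated on demand, in any order."""
    seed = (x * 73856093) ^ (y * 19349663) ^ (z * 83492791) \
           ^ (channel * 2654435761) ^ (scale * 40503)
    s = xorshift32(seed | 1)      # avoid the all-zero fixed point
    s = xorshift32(s)             # extra round to decorrelate nearby seeds
    return s / 2**32
```

Two evaluations at the same coordinates always return the same value, which is exactly the determinism required for seamless tiling of separately generated blocks.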

Feeding the generator with the precise set of noise coordinates at each scale is only a first step towards successfully synthesizing compatible textures. Recall that the model is based on combinations of transformed noises at different scales (see Figure 2), therefore requiring special care regarding upsampling to preserve the coordinate alignment across scales, i.e. which coordinate at scale k must be associated with a given coordinate at the finest scale. Indeed, after every upsampling operation, each value is repeated twice along each spatial dimension, pushing the rest of the values spatially. Depending on the coordinates of the reference voxel being synthesized, this shift of one position can disrupt the coordinate alignment with the subsequent scale. Therefore, the generator network has to compensate accordingly before each concatenation.

For K upsamplings, one of the 2^K combinations of compensation shifts has to be properly applied for each dimension to synthesize a given voxel. In order to apply these compensation shifts, we make the generator network aware of the coordinates of the sample at hand. In our implementation the reference is the vertex of the sample closest to the origin. Given the reference coordinate x_0 of the voxel (in any spatial dimension), the generator deduces the set of shifts recursively from the relation x_{k+1} = (x_k − s_k) / 2 with s_k = x_k mod 2, where x_k is the spatial reference coordinate used to generate the noise at scale k, and s_k is the shift value used after the k-th upsampling operation. At evaluation time, the generator sequentially applies the set of shifts before every corresponding concatenation operation.
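One way to realize such a recursion, under the assumption that the shift at each scale is the parity bit of the current coordinate (so that coordinates exactly halve between scales), is:

```python
def shift_schedule(x0: int, num_upsamplings: int):
    """Per-axis shift compensation: from the finest-scale reference
    coordinate x0, derive the coordinate used at each coarser scale and the
    shift to apply after each upsampling."""
    coords, shifts = [x0], []
    x = x0
    for _ in range(num_upsamplings):
        s = x % 2            # parity decides whether a 1-voxel shift is needed
        x = (x - s) // 2     # coordinate at the next (coarser) scale
        shifts.append(s)
        coords.append(x)
    return coords, shifts
```

Reading the shifts back from coarsest to finest reconstructs x0 exactly (x0 = ((x_K·2 + s_{K-1})·2 + …)), which is what keeps separately generated blocks aligned.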

5 Training

Here we detail our approach to obtaining the parameters θ that drive the generator network to synthesize solid textures specified by the examples u_d. Like current texture network methods, we leverage the power of existing training frameworks to optimize the parameters of the generator. Typically, an iterative gradient-based algorithm is used to minimize a loss function that measures how different the synthetic and target textures are.

A first challenge in training the solid texture generator is to devise a discrepancy measure between the solid texture and the 2D examples. In Subsection 5.1 we propose a 3D slice-based loss function that aggregates the measurements produced by a set of 2D comparisons between 2D slices of the synthetic solid and the examples. We conduct the 2D comparisons similarly to the state-of-the-art methods, using the perceptual loss function [GEB15, JAFF16, UVL17].

The second challenge comes from the memory requirements during training. Typically, the optimization algorithm estimates a descent direction by applying backpropagation to the loss function evaluated on a batch of samples. In the case of solid textures, each volumetric sample occupies a large amount of memory, which makes batch processing impractical. Instead, we show in Subsection 5.2 that, thanks to the stationarity properties of our generative model, we can carry out the training using batches of single-slice solid samples.

5.1 3D slice-based loss

For a color solid v, we denote by v_{d,i} the i-th 2D slice of the solid orthogonal to the d-th direction. Given a number D of slicing directions and the corresponding example images u_1, ..., u_D, we propose the following slice-based loss

$\mathcal{L}(v) = \sum_{d=1}^{D} \sum_{i=1}^{N} \mathcal{L}_{2D}(v_{d,i}, u_d) \qquad (1)$

where $\mathcal{L}_{2D}$ is a 2D loss that computes the similarity between an image and the example $u_d$, and N is the number of slices along each direction.

We use the 2D perceptual loss from [GEB15], which has proved successful for training CNNs [JAFF16, ULVL16]. It compares the Gram matrices of the VGG-19 feature maps of the synthesized and example images. The feature maps result from the evaluation of the descriptor network on an image I, written $F^{\ell}(I) \in \mathbb{R}^{C_\ell \times N_\ell}$ for each layer $\ell \in L$, where L is the set of considered VGG-19 layers, each layer having $N_\ell$ spatial values and $C_\ell$ channels. For each layer $\ell$, the Gram matrix is computed from the feature maps as

$G^{\ell}(I) = \frac{1}{N_\ell} F^{\ell}(I)\, F^{\ell}(I)^{T} \qquad (2)$

where $\cdot^{T}$ indicates the transpose of a matrix. The 2D loss between the input example $u_d$ and a slice $v_{d,i}$ is then defined as

$\mathcal{L}_{2D}(v_{d,i}, u_d) = \sum_{\ell \in L} \frac{1}{C_\ell^{2}} \left\| G^{\ell}(v_{d,i}) - G^{\ell}(u_d) \right\|_{F}^{2} \qquad (3)$

where $\|\cdot\|_F$ is the Frobenius norm. Observe that the Gram matrices are computed along the spatial dimensions to take into account the stationarity of the texture. These Gram matrices encode both first and second order information about the feature distribution (covariance and mean).
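These Gram statistics can be computed in a few lines of numpy, treating a feature map as a (channels, height, width) array; the per-layer normalization here is an illustrative choice rather than the exact weighting of the paper:

```python
import numpy as np

def gram(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (channels, height, width) feature map. Averaging
    over the spatial positions pools pixel locations out, which is what
    makes the descriptor stationary."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)            # (c, c) second-order statistics

def perceptual_loss(feats_img, feats_ref) -> float:
    """Sum over layers of the squared Frobenius distance between Gram
    matrices of the synthesized and reference feature maps."""
    return float(sum(np.sum((gram(a) - gram(b)) ** 2)
                     for a, b in zip(feats_img, feats_ref)))
```

In the full method the feature maps would come from VGG-19 evaluated on each slice; here any list of per-layer arrays illustrates the computation.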

5.2 Single slice training scheme

Formally, training the generator $g_\theta$ with parameters $\theta$ corresponds to minimizing the expectation of the loss in Equation 1 given the set of examples,

$$\theta^\star \in \arg\min_\theta \; \mathbb{E}_Z\big[ \mathcal{L}(g_\theta(Z)) \big],$$

where $Z$ is a multi-scale noise, independent and identically distributed from a uniform distribution. Note that each scale of $Z$ is stationary. The generator on the other hand induces a non-stationary behavior on the output due to its upsampling operations. When upsampling a stationary signal by a factor 2 with nearest neighbor interpolation, the resulting process is only invariant to translations by multiples of two. Because our model contains $K-1$ volumetric upsampling operations ($K$ being the number of scales), the generated process is translation invariant by multiples of $2^{K-1}$ values on each axis. Considering the $i$-th axis, for any coordinate $j = q\,2^{K-1} + r$ at the scale of the generated sample, the statistics of the slice $v_i^j$ only depend on the value of $r$, therefore

$$\mathbb{E}\big[ \mathcal{L}_{2D}(v_i^j, u_i) \big] = \mathbb{E}\big[ \mathcal{L}_{2D}(v_i^r, u_i) \big], \qquad r = j \bmod 2^{K-1}.$$

Assuming the number of slices $N$ is a multiple of $2^{K-1}$, we have

$$\sum_{j=1}^{N} \mathbb{E}\big[ \mathcal{L}_{2D}(v_i^j, u_i) \big] = \frac{N}{2^{K-1}} \sum_{r=1}^{2^{K-1}} \mathbb{E}\big[ \mathcal{L}_{2D}(v_i^r, u_i) \big].$$

As a consequence, instead of using $N$ slices per direction, the generator network could be trained using only a set of $2^{K-1}$ contiguous slices on each constrained direction.
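The periodic invariance induced by nearest-neighbour upsampling can be seen on a toy 1-D signal; a small illustrative sketch (not part of the actual generator):

```python
import numpy as np

def nn_upsample(x, factor=2):
    """1-D nearest-neighbour upsampling: each sample is repeated `factor`
    times, so the output is invariant only to shifts that are multiples
    of `factor` (shifting by one sample breaks the block alignment)."""
    return np.repeat(np.asarray(x), factor)
```

Stacking several such operations multiplies the factors, which is why a generator with many upsampling stages is only translation invariant by multiples of the total upsampling factor.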

The GPU memory is a limiting factor during the training process, and even cutting the samples down to $2^{K-1}$ slices restricts the attainable training resolution. For example, training a network for a large texture output with the VGG-19 descriptor would require more than 12GB of memory. For that reason we propose to stochastically approximate the inner sum in Equation (6).

Considering the slices in the $i$-th axis, with $j$ randomly drawn from a discrete uniform distribution over $\{1, \dots, N\}$,

$$\mathbb{E}_j\big[ \mathcal{L}_{2D}(v_i^j, u_i) \big] = \frac{1}{N} \sum_{j=1}^{N} \mathcal{L}_{2D}(v_i^j, u_i).$$

Then using doubly stochastic sampling (noise input values and output coordinates) we have

$$\mathbb{E}_{Z, j}\big[ \mathcal{L}_{2D}\big( (g_\theta(Z))_i^j, u_i \big) \big] = \mathbb{E}_Z\Big[ \frac{1}{N} \sum_{j=1}^{N} \mathcal{L}_{2D}\big( (g_\theta(Z))_i^j, u_i \big) \Big],$$
which means that we can train the generator using only single-slice volumes oriented according to the constrained directions. Note that the whole volume model is impacted since the convolution weights are shared by all slices.
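The single-slice estimator can be illustrated with a toy sketch; the per-slice loss below is an arbitrary stand-in for the perceptual loss, and the function names are hypothetical:

```python
import random

def full_slice_loss(slices, loss_fn):
    """Exact average of the 2D loss over all slices of one direction."""
    return sum(loss_fn(s) for s in slices) / len(slices)

def stochastic_slice_loss(slices, loss_fn, n_draws):
    """Unbiased single-slice estimator of the same average: at each draw a
    slice index is sampled uniformly and only that one slice is evaluated,
    which is what keeps the memory footprint of a training step small."""
    total = 0.0
    for _ in range(n_draws):
        j = random.randrange(len(slices))
        total += loss_fn(slices[j])
    return total / n_draws
```

Averaged over draws, the estimator converges to the exact per-direction loss, which is why descending on single random slices still optimizes the full volumetric objective.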

The proposed scheme saves computation time during the training, and more importantly, it also reduces the required amount of memory. In this setting we can use substantially higher resolutions during training (for both the examples and the solid samples), significantly larger than the ones reached in the literature on solid texture synthesis by example.

6 Results

6.1 Experimental settings

Unless otherwise specified, all the results in this section were generated using the following settings.

Generator network

We set the number of scales to $K = 6$, which means that each voxel of the input noise at the coarsest scale impacts nearly 300 voxels at the finest scale. We use a small number of input channels and a modest channel step across scales, which results in the last layer being quite narrow and the whole network compact, with fewer than 100k parameters. We include a batch normalization operation after every convolution layer and before the concatenations. As in previous methods [ULVL16], we noticed that such a strategy helps stabilize the training process.
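To make the noise input concrete, here is a sketch of a multi-scale uniform noise generator under the assumptions above (independent scales, nearest-neighbour upsampling back to full resolution); the real generator additionally processes and combines these fields through its convolution blocks, which is not shown here:

```python
import numpy as np

def multiscale_noise(shape, n_scales=6, rng=None):
    """Multi-scale uniform noise input: one independent uniform field per
    scale k, drawn at the resolution divided by 2**k and brought back to
    full resolution by nearest-neighbour upsampling (np.repeat per axis)."""
    rng = np.random.default_rng() if rng is None else rng
    scales = []
    for k in range(n_scales):
        # ceil division so the upsampled field always covers the full shape
        coarse = tuple(max(1, -(-s // 2 ** k)) for s in shape)
        n = rng.uniform(0.0, 1.0, size=coarse)
        for axis in range(len(shape)):
            n = np.repeat(n, 2 ** k, axis=axis)
        crop = tuple(slice(0, s) for s in shape)
        scales.append(n[crop])
    return scales
```

Each coarse scale is piecewise constant over blocks of side 2^k at full resolution, which is what injects structure at several spatial frequencies.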

Descriptor network

Following Gatys et al. [GEB15], we use a truncated VGG-19 [SZ14] as our descriptor network, with padded convolutions and average pooling. The considered layers for the loss are: relu1_1, relu2_1, relu3_1, relu4_1 and relu5_1.


We implemented our approach using pytorch (code available at http://github.com/JorgeGtz/SolidTextureNets) and we use the pre-trained parameters for VGG-19 available from the BETHGE LAB [GEB15, GEB16]. We optimize the parameters of the generator network using the Adam algorithm [KB15] with a learning rate of 0.1 during 3000 iterations. Figure 3 shows the value of the empirical estimation of the loss during the training of three of the examples shown below in Figure 7 (histology, cheese and granite). We use batches of 10 samples per slicing direction. We compute the gradients individually for each sample in the batch, which slows down the training process but allows us to concentrate the available memory on the resolution of the samples. With these settings and using 3 slicing directions, the training takes around 1 hour for the smallest training resolution (i.e. the size of the example(s) and generated slices) and several hours for larger ones, using one Nvidia GeForce GTX 1080 Ti GPU.

Figure 3: Value of the 3D empirical loss during the training of the generator for the textures histology, cheese and granite of Figure 7.
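For reference, the parameter update performed by the optimizer is the standard Adam rule; a self-contained numpy sketch of one step (a generic illustration of [KB15], not the actual pytorch training code):

```python
import numpy as np

def adam_step(param, grad, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One update of the Adam algorithm on a parameter array.

    `state` is a (m, v, t) tuple holding the biased first and second
    moment estimates and the step counter."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** t)   # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, (m, v, t)
```

The per-coordinate normalization by the second moment is what makes the relatively large learning rate of 0.1 workable here.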


In order to synthesize large volumes of texture, it is more time efficient to choose the size of the building blocks in a way that best exploits parallel computation. Considering computation complexity alone, synthesizing large building blocks of texture at once is also more efficient given the spatial dependency shared by neighboring voxels. In order to highlight the seamless tiling of on demand generated blocks of texture, most of the samples shown in this work are built by assembling small blocks of voxels. However, the generator is able to synthesize box-shaped samples of any size, provided enough memory is available.

Figures 4 and 5 depict how the small pieces of texture tile perfectly to form a bigger texture. It takes nearly 12 milliseconds to generate a small block of texture on a Nvidia GeForce RTX 2080 GPU, and larger elemental blocks take proportionally longer. For reference, the method of Dong et al. [DLTD08] takes 220 milliseconds to synthesize a small volume and 1.7 seconds for a larger one.
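The on demand property relies on noise that is a deterministic function of absolute voxel coordinates, so that any block can be generated independently and still agree with its neighbors. A deliberately naive sketch of such spatially seeded noise (the per-voxel hashing scheme here is purely illustrative, not the implementation used in the paper; a real implementation would use a vectorized hash):

```python
import numpy as np

def seeded_noise_block(origin, size, global_seed=0):
    """Noise for a cubic block as a deterministic function of absolute voxel
    coordinates. Two blocks that overlap (or share a face) receive identical
    noise on the common voxels, which is what makes independently generated
    blocks tile seamlessly."""
    x0, y0, z0 = origin
    block = np.empty((size, size, size))
    for i in range(size):
        for j in range(size):
            for k in range(size):
                # hash the absolute coordinate into a per-voxel seed
                seed = hash((global_seed, x0 + i, y0 + j, z0 + k)) & 0xFFFFFFFF
                block[i, j, k] = np.random.default_rng(seed).uniform()
    return block
```

Because the noise depends only on the global coordinates (and a global seed), re-generating the same block at any later time yields exactly the same texture values.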


Figure 4: Left, the example texture, used along the three orthogonal axes. Right, contiguous cubes of voxels generated on demand to form a larger texture. The gaps are added to better depict the aggregation.


Figure 5: Left, the example texture, used for training the generator along the three orthogonal axes. Right, assembled blocks of different sizes generated on demand. Note that it is possible to generate blocks of arbitrary size in any direction.

6.2 Experiments

In this section we highlight the various properties of the proposed method and we compare them with state-of-the-art approaches.

Texturing mesh surfaces

Figure 6 shows textures generated with our model applied to 3D mesh models. In this case we generate a solid texture with a fixed size and load it in OpenGL as a regular 3D texture with bilinear filtering. Solid textures avoid the need for surface parametrization and can be used on complex surfaces without creating artifacts.

Figure 6: Texture mapping on 3D mesh models. The example texture used to train the generator is shown in the upper left corner of each object. When using solid textures the mapping does not require a parametrization as they are defined in the whole 3D space. This prevents any mapping-induced artifacts. Sources: the ‘duck’ model comes from Keenan’s 3D Model Repository, the ‘mountain’ and ‘hand’ models from free3d.com, and the tower and the vase from turbosquid.com.
Figure 7: Synthesis of isotropic textures. We train the generator network using the example in the second column along the three orthogonal directions. The cubes in the first column are generated samples built by assembling blocks of voxels generated using on demand evaluation. Subsequent columns show the middle slices of the generated cube across the three considered directions and a slice extracted along an oblique direction. The trained models successfully reproduce the visual features of the example in 3D.
Figure 8: Synthesis of isotropic textures (same presentation as in Figure 7). While generally satisfactory, the results in the first and third rows have a slightly degraded quality. In the first row the features have more rounded shapes than in the example, and in the third row we observe high frequency artifacts.

Single example setting

Figures 7 and 8 show samples synthesized with our method from a set of examples depicting physical materials. Considering their isotropic structure, we train the generator network using a single example to define the appearance along the three orthogonal directions. The first column shows a generated sample built by assembling blocks of voxels generated using on demand evaluation. The second column is the example image, columns 3-5 show the middle slices of the generated cube across the three considered directions, and the last column shows a slice extracted along an oblique direction. These examples illustrate the capacity of the model to infer a plausible 3D structure from the 2D features present in isotropic example images. Observe that a slice across an oblique direction still displays a conceivable structure given the examples. They also demonstrate the spatial consistency of on demand evaluation. Regarding the visual quality, notice that the model successfully reproduces the patterns’ structure while also capturing the richness of colors and variations. The quality of the slices is comparable to that of the state-of-the-art 2D methods [ULVL16, UVL17, LFY17a, GEB15], which is striking as solid texture synthesis is a more constrained problem. However, such successful extrapolation to 3D might not be possible for all textures. Figure 9 shows a case where the slices of the synthesized solid contain patterns not present in the example texture. This is related to the question of the existence of a solution discussed in the next paragraph.

Existence of a solution

The example texture used in Figure 9 is isotropic (an arrangement of red shapes, each with a green spot inside), but the volumetric material it depicts is not (the green stem being outside the pepper). Training the generator using three orthogonal directions assumes 3D isotropy, and thus the outcome is a solid texture where the patterns are isotropic through the whole volume. This creates some new patterns in the slices, which makes them somewhat different from the example (red shapes without a green spot inside). Actually, the generated volume just does not make sense physically, as one always obtains full peppers after slicing them.

Figure 9: 2D to 3D isotropic patterns. The example texture (top-left) depicts a pattern that is approximately isotropic in 2D, but the material it depicts is not. Training the generator using the example along three orthogonal directions results in solid textures that are isotropic in the three dimensions (top-right). Here, the red and green patterns vary isotropically in the third dimension, this creates a bigger variation on their size and makes some slices contain red patterns that lack the green spot. This is a case where the slices of the solid texture (bottom) cannot match exactly the patterns of the 2D example, thus, not complying with the example in the way a 2D algorithm would.

This example shows that, for a given texture example and a number of directions, it is possible that a corresponding 3D texture does not exist, i.e. not all the slices in the chosen directions will respect the structure defined by the 2D example.

This existence issue is vital for example-based solid texture synthesis as it delineates the limits of the slice-based formulation used by many methods in the literature. It is briefly mentioned in [KFCO07, DLTD08] and here we aim to extend the discussion. Let us consider for instance the isotropic example shown in Figure 10, where the input image contains only discs of a fixed diameter. It follows that when slicing a volume containing spheres with the same diameter, the obtained image will necessarily contain discs with various diameters, ranging from zero up to the diameter of the spheres. This seems to be an inherent limitation of the 2D-to-3D extrapolation. It demonstrates that for some 2D textures an isotropic solid version might be impossible and, conversely, that the example texture and the imposed directions must be chosen carefully given the target 3D structure.

Figure 10: Illustration of a solid texture whose cross sections cannot comply with the example along three directions. Given a 2D example formed by discs of a fixed diameter (upper left) a direct isotropic 3D extrapolation would be a cube formed by spheres of the same diameter. Slicing that cube would result in images with discs of different diameters. The cube in the upper right is generated after training our network with the 2D example along the three orthogonal axes. The bottom row shows cross sections along the axes, all of them present discs of varying diameters thus failing to look like the example.
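The disc-diameter argument is elementary geometry: a plane at signed distance h from the centre of a sphere of diameter d cuts a disc of diameter 2·sqrt((d/2)² − h²), so slices of a volume of equal spheres necessarily exhibit every diameter from 0 up to d. A one-function sketch:

```python
import math

def slice_disc_diameter(sphere_diameter, offset):
    """Diameter of the disc obtained by cutting a sphere with a plane at
    signed distance `offset` from its centre (0 if the plane misses it)."""
    r = sphere_diameter / 2.0
    if abs(offset) >= r:
        return 0.0
    return 2.0 * math.sqrt(r * r - offset * offset)
```

Only the slice through the centre reproduces the full diameter; every other cutting plane produces a strictly smaller disc, which is why the 2D example of fixed-diameter discs cannot be matched by all slices of an isotropic solid.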

This also might have dramatic consequences regarding convergence during optimization. In global optimization methods [KFCO07, CW10], where patches from the example are sequentially copied view per view, progressing the synthesis in one direction can create new features in the other directions, thus potentially preventing convergence. In contrast, our method seeks a solid texture whose statistics are as close as possible to the examples without requiring a perfect match. In all of our experiments, training always converged. An example of this is illustrated in the first row of Figure 12, where the example views are incompatible given the proposed 3D configuration: the optimization procedure converges during training and the trained generator is able to synthesize a solid texture; however, the result is clearly a tradeoff which mixes the different contradictory orientations. This also contrasts with common statistical texture synthesis approaches that are rather based on constrained optimization to guarantee statistical matching, for instance by projecting patches [GRGH17], histogram matching [RPDB12], or moment matching [PS00].

(Figure 11 rows: soil, soil, brick wall, cobble wall.)
Figure 11: Training the generator using two or three directions for anisotropic textures. The first column shows generated samples built by assembling blocks of voxels generated using on demand evaluation. The second column illustrates the training configuration, i.e. which axes are considered and the orientation used. Subsequent columns show the middle slices of the generated cube across the three considered directions. The top two rows show that for some examples considering only two directions allows the model to better match the features along the considered directions. The bottom rows show examples where the appearance along one direction might not be important.
Figure 12: Importance of the compatibility of examples. In this experiment, two generators are trained with the same image along three directions, but for two different orientations. The first column shows generated samples of size built by assembling blocks of voxels generated using on demand evaluation. The second column illustrates the training configuration, i.e. for each direction the orientation of the example shown in the third column. Subsequent columns show the middle slices of the generated cube across the three constrained directions. Finally, the rightmost column gives the empirical loss value at the last iteration. In the first row, no 3D arrangement of the patterns can comply with the orientations of the three examples. Conversely the configuration on the second row can be reproduced in 3D, thus generating more meaningful results. The lower value of the training loss for this configuration reflects the better reproduction of the patterns in the example.

Constraining directions

While for isotropic textures constraining three directions gives good visual results, for some textures it may be more interesting to consider only two views. Indeed, using two directions might be essential to obtain acceptable results, at least along the considered directions. This is in accordance with the question of the existence of a solution discussed in the previous paragraph. We exemplify this in the top two rows of Figure 11, where considering only two training directions (second row) results in a solid texture that more closely resembles the example along those directions, compared to using three training directions (first row), which generates a more consistent volume using isotropic shapes that are not present in the example texture. The brick and cobblestone textures in Figure 11 highlight the fact that, when depicting an object where the top view is not crucial to the desired appearance, such as a wall, it can be left to the algorithm to infer a coherent structure. Of course, not considering a direction during training will generate cross-sections that do not necessarily contain the same patterns as the example texture (see the corresponding column of Figure 11), but rather color structures that fulfill the visual requirements for the other considered directions.

Additionally, pattern compatibility across different directions is essential to obtain coherent results. In the examples of Figure 12 the generator was trained with the same image but in different orientation configurations. In the top row example no 3D arrangement of the patterns can comply with the orientations of the three examples. Conversely, the configuration in the bottom row can be reproduced in 3D, thus generating more meaningful results. All this has to be taken into account when choosing the set of training directions given the expected 3D texture structure. Observe that the value of the loss at the end of the training gives a hint on which configuration works better. This can be exploited to automatically find the best training configuration for a given set of examples.

These results shed some light on the scope of the slice-based formulation for solid texture synthesis using 2D examples. This formulation is best suited for textures depicting 3D isotropic materials, for which we obtain an agreement with the example’s patterns comparable to 2D state-of-the-art methods. For most anisotropic textures we can usually obtain high quality results by considering only two directions. Finally, for textures with patterns that are only isotropic in 2D, using more than one direction inevitably creates new patterns.


Diversity

A required feature of texture synthesis models is the ability to generate diverse samples. Ideally the generated samples are different from each other and from the example itself while still sharing its visual characteristics. Additionally, depending on the texture, it might be desired that the patterns vary spatially inside each single sample. Yet, observe that many methods in the literature for 2D texture synthesis generate images that are local copies of the input image [WL00, KEBK05], which strongly limits the diversity. As reported in Gutierrez et al. [GRGH17], the (unwanted) optimal solution of methods based on patch optimization is the input image itself. For most of these methods though, the local copies are sufficiently randomized to deliver enough diversity. Variability issues have also been reported in the literature for texture generation based on CNNs, and [UVL17, LFY17a] have proposed to use a diversity term in the training loss to fix it. Without such a diversity term to promote variability, the generated samples are nearly identical to each other, although sufficiently different from the example. In these cases it seems that the generative networks learn to synthesize a single sample that induces a low value of the perceptual loss while disregarding the random inputs.

When dealing with solid texture synthesis from 2D examples, such a trivial optimal solution only arises when considering one direction, where the example itself is copied along that direction. There is no theoretical guarantee preventing the generator network from copying large pieces of the example, as it has been shown that deep generative networks can memorize an image [LVU18]. However, the compactness of the architecture and the stochastic nature of the proposed model make it very unlikely. In practice, we do not observe repetition among the samples generated with our trained models, even when pursuing the optimization long after the visual convergence (see Figure 3). This is consistent with the results of Ulyanov et al. [ULVL16], the 2D architecture that inspired ours, where diversity is not an issue. One explanation for this difference with other methods may be that the architectures exhibiting a loss of diversity process an input noise that is small compared to the generated output [LFY17a, UVL17] and thus easier to ignore. On the contrary, our generative network receives an input that accounts for roughly 1.14 times the size of the output. Figure 13 demonstrates the capacity of our model to generate diverse samples from a single trained generator. It shows three generated solid textures along with their middle slices. To facilitate the comparison, it includes an a posteriori correspondence map which highlights spatial similarity by forming smooth regions. In all cases we obtain noisy maps, which means that the slices do not repeat arrangements of patterns or colors.

Figure 13: Diversity among generated samples. We compare the middle slices along three axes of three generated textures. The comparison consists in finding, for each pixel, the pixel with the most similar neighborhood in the other image and constructing a correspondence map from its coordinates. The smooth result on the diagonal occurs when comparing a slice to itself. The stochasticity in the rest of the results means that the compared slices do not share a similar arrangement of patterns and colors.
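The correspondence maps of Figure 13 can be computed with a brute-force nearest-neighborhood search; the following sketch (grayscale images, L2 neighbourhood distance, parameters chosen for illustration, not the authors' exact implementation) captures the idea:

```python
import numpy as np

def correspondence_map(img_a, img_b, half=1):
    """For every pixel of img_a, record the coordinates of the pixel of
    img_b whose square neighbourhood (side 2*half+1) is closest in L2.

    Comparing an image with itself yields a smooth identity map; two
    genuinely different texture samples yield a noisy map. Brute-force
    O(n^4) search, intended for small grayscale images only."""
    h, w = img_a.shape
    pa = np.pad(img_a, half, mode='edge')
    pb = np.pad(img_b, half, mode='edge')
    out = np.zeros((h, w, 2), dtype=int)
    for i in range(h):
        for j in range(w):
            na = pa[i:i + 2 * half + 1, j:j + 2 * half + 1]
            best, best_d = (0, 0), np.inf
            for y in range(h):
                for x in range(w):
                    nb = pb[y:y + 2 * half + 1, x:x + 2 * half + 1]
                    d = float(np.sum((na - nb) ** 2))
                    if d < best_d:
                        best_d, best = d, (y, x)
            out[i, j] = best
    return out
```

Smooth (slowly varying) regions in the resulting map indicate copied content; a map that looks like noise indicates that the two slices share no repeated arrangement.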

Multiple examples setting

As already discussed in the work of Kopf et al. [KFCO07] and earlier, it appears that most solid textures can only be modeled from a single example used along different directions. In the literature, to the best of our knowledge, only one success case of a 3D texture using two different examples has been reported [KFCO07, DLTD08]. This is due to the fact that the two examples have to share similar features such as color distribution and compatible geometry, as already shown in Figure 12. Figure 14 illustrates this phenomenon: for each example we experiment with and without applying a histogram matching (independently for each color channel) to the input examples. We observe favorable results particularly when the colors of both examples are close. Although the patterns are not perfectly reproduced, the 3D structure is coherent and close to the examples.

Figure 14: Anisotropic texture synthesis using two examples. The first columns show the two examples used and the training configuration, i.e. how images are oriented for each view. Last column shows a sample synthesized using the trained generator. For each example, we experiment with (✓) and without (✗) preprocessing the example images to match color statistics, by performing a histogram matching (HM) on each color channel independently. We observe favorable results particularly when the colors of both examples are close.
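The per-channel histogram matching used as preprocessing can be sketched as follows (exact histogram specification by sorting, a standard technique; this is an illustrative implementation, not the authors' specific code):

```python
import numpy as np

def match_histogram_channel(source, reference):
    """Exact histogram specification for one color channel: the i-th
    smallest source value is replaced by the value at the same quantile of
    the reference, so the output's empirical distribution matches the
    reference's while the ordering of the source pixels is preserved."""
    src = source.ravel()
    ref = np.sort(reference.ravel())
    order = np.argsort(src, kind='stable')
    # map each source rank to the corresponding reference quantile
    quantiles = np.linspace(0, ref.size - 1, src.size).round().astype(int)
    out = np.empty_like(src, dtype=ref.dtype)
    out[order] = ref[quantiles]
    return out.reshape(source.shape)
```

Applying this independently to each of the three color channels brings the two example images to a common color distribution before training.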
Figure 15: Comparison with the existing methods that produce the best visual quality. The last row shows the results obtained with the proposed method. Our method is better at reproducing the statistics of the example than [KFCO07] and better at capturing high frequencies than both methods.

6.3 Comparison with state-of-the-art

The results in Figures 7 and 8 demonstrate the capability of the proposed model to synthesize photo-realistic textures. This is an important improvement with respect to classical on demand methods based on Gabor/LRP noise. High resolution examples are important in order to obtain more detailed textures. In previous high quality methods [KFCO07, CW10] the computation times increase substantially with the size of the example, so examples were limited to small resolutions. Furthermore, their empirical histogram matching steps create a bottleneck for parallel computation. Our method takes a step forward by allowing higher resolution example textures, limited only by the memory of the GPU used.

We compare the visual quality of our results with the two existing methods that seem to produce the best results: Kopf et al. [KFCO07] and Chen et al. [CW10]. Figure 15 shows samples obtained from the respective articles or websites side by side with results of our method. The most salient advantage of our method is its ability to better capture high frequency information, making the structures in the samples sharper and more photo-realistic. Considering voxels’ statistics, i.e. capturing the richness of the example, both our method and that of Chen et al. [CW10] seem to obtain better results than the method of Kopf et al. [KFCO07]. The examples used in Figure 15 have small resolutions. We observe that the visual quality of the textures generated with our method deteriorates when using small examples. This may be due to the descriptor network, which is pre-trained on larger images.

We do not consider the method of Dong et al. [DLTD08] for a visual quality comparison as their pre-computation of candidates limits the richness of information, which yields lower quality results. Thanks to its on-demand evaluation, their method greatly surpasses the computation speed of the other methods; yet, as detailed before, our method is faster during synthesis while achieving better visual quality. Besides, our computation time does not depend on the resolution of the examples.

Figure 16: Comparison of our approach with Kopf et al. [KFCO07]. Both methods generate a solid texture that is then simply interpolated and intersected with a surface mesh (without the parametrization required for texture mapping). The first column shows the example texture. The second column shows results from [KFCO07], some of which were obtained with additional information (a feature map for the second and fourth rows, a specularity map for the third one). The last column illustrates that our approach, using only the example for training, is able to produce fine scale details.

In Figure 16 we show some of our results used for texturing a complex surface and compare them to the results of Kopf et al. [KFCO07]. Here the higher frequencies successfully reproduced by our method give a more realistic impression.

Finally, we would like to point out that although deep learning models are often thought to produce good results only thanks to a colossal number of parameters, our method (with fewer than 100k parameters to store) has a memory footprint close to that of a patch-based approach, which must store all the candidate patches of the color example.

7 Limitations and future work

Long distance correlation

As can be observed in the brick wall texture in Figure 11 and the diagonal texture in Figure 12, our model is less successful at preserving the alignment of long patterns in the texture. This limitation is also visible in the second row of Figure 11, where the objects’ sizes in the synthesized samples do not match those in the example, again due to overlooked long distance correlations. One possible explanation comes from the fixed receptive field of the VGG descriptor network: it is likely that it only sees local patterns, which results in breaking long patterns into pieces. A possible solution could be to use more scales in the generator network, similar to using larger patches in patch based methods. Another possible improvement to explore is to explicitly incorporate those long distance correlations into our 2D loss as in [LGX16, SCO17].

Constrained directions

We observed that training the generator with two instead of three constrained directions results in an unsatisfying texture along the unconstrained direction, while improving the visual quality along the two constrained directions for anisotropic textures (see Figure 11). It would be interesting to explore a middle ground between letting the algorithm infer the structure along one direction and constraining it.

Visual quality

Although our method delivers high quality results for a varied set of textures, it still presents some visual flaws that we think are independent of the existence issue. In textures like the pebble and grass of Figure 8, the synthesized sample presents oversimplified versions of the example’s features. Although not detailed in the articles, the available codes for [ULVL16, UVL17, LFY17a] make use of an empirical normalization of the gradients during training: the gradient of the loss is normalized at each layer of the descriptor network before continuing the backpropagation to the generator network’s parameters. In practice it sometimes leads to a slightly closer reproduction of the patterns’ structure of the example. It is however difficult to anticipate which textures can benefit from this technique.

Additionally, our results present some visual artifacts that are typical of generative methods based on deep networks. The most salient are the high frequency checkerboard effects; see for instance [JAFF16], where a total variation term is used to mitigate this artifact.

Non-stationary textures

The perceptual loss in Equation (3) is designed for traditional texture synthesis, where the examples are stationary. An interesting problem is to consider non-stationary textures, as in the recent method of Zhou et al. [ZZB18], which uses an auto-encoder to extend the example texture to twice its size. This problem is specifically challenging in our setting given the absence of any 3D example of such a texture.

Real time rendering

The trained generator can be integrated in a fragment shader to generate the visible values of a 3D model thanks to its on demand capability. Note however that on-the-fly filtering of the generated solid texture is a challenging problem that is not addressed in this work.

8 Conclusion

The main goal of this paper was to address the problem of example based solid texture synthesis by means of a convolutional neural network. First, we presented a simple and compact generative network capable of synthesizing portions of infinitely extendable solid texture. The parameters of this 3D generator are stochastically optimized using a pre-trained 2D descriptor network and a slice-based 3D objective function. The complete framework is efficient both during training and at evaluation time. The training can be performed at high resolution, and textures of arbitrary size can be synthesized on demand. The method achieves high quality results on a wide set of textures. We showed the outcome on textures with varying levels of structure and on isotropic and anisotropic arrangements. We demonstrated that, although solid texture synthesis from a single example image is an intricate problem, our method delivers compelling results given the desired look imposed via the 3D loss function.

The second aim of this study was to achieve on demand synthesis, which, to the best of our knowledge, no other method based on neural networks is capable of. The on demand evaluation capability of the generator allows it to be integrated with a 3D graphics renderer to replace the use of 2D textures on surfaces, thus eliminating the possible accompanying artifacts. The techniques proposed for training and evaluation can be extended to any fully convolutional generative network. We observed some limitations of our method, mainly the lack of control over the directions not considered during training. Using multiple examples could complement the training by giving information about the desired appearance along different directions. We aim to further study the limits of solid texture synthesis from multiple sources with the goal of obtaining an upgraded framework better capable of simulating real life objects.


Acknowledgments

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2015-06025. This project has been carried out with support from the French State, managed by the French National Research Agency (ANR-16-CE33-0010-01). This study has also been carried out with financial support from the CNRS through the PEPS 2018 I3A grant "3DTextureNets". Likewise, we acknowledge the support of CONACYT. Bruno Galerne acknowledges the support of NVIDIA Corporation with the donation of a Titan Xp GPU used for this research. The authors would like to thank Loïc Simon for fruitful discussions on 3D rendering, and Guillaume-Alexandre Bilodeau for giving us access to one of his GPUs.


References

  • [BJV17] Bergmann U., Jetchev N., Vollgraf R.: Learning texture manifolds with the periodic spatial GAN. In ICML (2017), pp. 469–477.
  • [BK88] Bourlard H., Kamp Y.: Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 4 (1988), 291–294. doi:10.1007/BF00332918.
  • [BM17] Berger G., Memisevic R.: Incorporating long-range consistency in cnn-based texture generation. ICLR (2017).
  • [CW10] Chen J., Wang B.: High quality solid texture synthesis using position and index histogram matching. Vis. Comput. 26, 4 (2010), 253–262. doi:10.1007/s00371-009-0408-3.
  • [De 97] De Bonet J. S.: Multiresolution sampling procedure for analysis and synthesis of texture images. In SIGGRAPH (1997), ACM, pp. 361–368. doi:10.1145/258734.258882.
  • [DGF98] Dischler J. M., Ghazanfarpour D., Freydier R.: Anisotropic solid texture synthesis using orthogonal 2d views. Computer Graphics Forum 17, 3 (1998), 87–95. doi:10.1111/1467-8659.00256.
  • [DLTD08] Dong Y., Lefebvre S., Tong X., Drettakis G.: Lazy solid texture synthesis. In EGSR (2008), pp. 1165–1174. doi:10.1111/j.1467-8659.2008.01254.x.
  • [GCC17] Gwak J., Choy C. B., Chandraker M., Garg A., Savarese S.: Weakly supervised 3d reconstruction with adversarial constraint. In International Conference on 3D Vision (2017), pp. 263–272. doi:10.1109/3DV.2017.00038.
  • [GD95] Ghazanfarpour D., Dischler J.: Spectral analysis for automatic 3-d texture generation. Computers & Graphics 19, 3 (1995), 413–422. doi:10.1016/0097-8493(95)00011-Z.
  • [GEB15] Gatys L., Ecker A. S., Bethge M.: Texture synthesis using convolutional neural networks. In NIPS (2015), pp. 262 – 270.
  • [GEB16] Gatys L. A., Ecker A. S., Bethge M.: Image style transfer using convolutional neural networks. In CVPR (2016), pp. 2414–2423.
  • [GLLD12] Galerne B., Lagae A., Lefebvre S., Drettakis G.: Gabor noise by example. ACM Trans. Graph. 31, 4 (2012), 73:1–73:9. doi:10.1145/2185520.2185569.
  • [GLM17] Galerne B., Leclaire A., Moisan L.: Texton noise. Computer Graphics Forum 36, 8 (2017), 205–218. doi:10.1111/cgf.13073.
  • [GLR18] Galerne B., Leclaire A., Rabin J.: A texture synthesis model based on semi-discrete optimal transport in patch space. SIAM Journal on Imaging Sciences 11, 4 (2018), 2456–2493. doi:10.1137/18M1175781.
  • [GPAM14] Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y.: Generative adversarial nets. In NIPS (2014), pp. 2672–2680.
  • [GRGH17] Gutierrez J., Rabin J., Galerne B., Hurtut T.: Optimal patch assignment for statistically constrained texture synthesis. In SSVM (2017), pp. 172–183. doi:10.1007/978-3-319-58771-4_14.
  • [GSV14] Gilet G., Sauvage B., Vanhoey K., Dischler J.-M., Ghazanfarpour D.: Local random-phase noise for procedural texturing. ACM Trans. Graph. 33, 6 (2014), 195:1–195:11. doi:10.1145/2661229.2661249.
  • [HB95] Heeger D. J., Bergen J. R.: Pyramid-based texture analysis/synthesis. In SIGGRAPH (1995), ACM, pp. 229–238. doi:10.1145/218380.218446.
  • [JAFF16] Johnson J., Alahi A., Fei-Fei L.: Perceptual losses for real-time style transfer and super-resolution. In ECCV (2016), pp. 694–711. doi:10.1007/978-3-319-46475-6_43.
  • [JREM16] Jimenez Rezende D., Eslami S. M. A., Mohamed S., Battaglia P., Jaderberg M., Heess N.: Unsupervised learning of 3d structure from images. In NIPS (2016), pp. 4996–5004.
  • [KB15] Kingma D. P., Ba J.: Adam: A method for stochastic optimization. In ICLR (2015).
  • [KEBK05] Kwatra V., Essa I., Bobick A., Kwatra N.: Texture optimization for example-based synthesis. In SIGGRAPH (2005), ACM, pp. 795–802. doi:10.1145/1186822.1073263.
  • [KFCO07] Kopf J., Fu C., Cohen-Or D., Deussen O., Lischinski D., Wong T.: Solid texture synthesis from 2d exemplars. In SIGGRAPH (2007), ACM. doi:10.1145/1275808.1276380.
  • [LeC87] LeCun Y.: Modèles connexionnistes de l'apprentissage. PhD thesis, Université Paris 6, 1987.
  • [LFY17a] Li Y., Fang C., Yang J., Wang Z., Lu X., Yang M.: Diversified texture synthesis with feed-forward networks. In CVPR (2017), pp. 266–274. doi:10.1109/CVPR.2017.36.
  • [LFY17b] Li Y., Fang C., Yang J., Wang Z., Lu X., Yang M.-H.: Universal style transfer via feature transforms. In NIPS (2017), pp. 386–396.
  • [LGX16] Liu G., Gousseau Y., Xia G.: Texture synthesis through convolutional neural networks and spectrum constraints. In ICPR (2016), pp. 3234–3239. doi:10.1109/ICPR.2016.7900133.
  • [LH05] Lefebvre S., Hoppe H.: Parallel controllable texture synthesis. In SIGGRAPH (2005), ACM, pp. 777–786. doi:10.1145/1186822.1073261.
  • [LVU18] Lempitsky V., Vedaldi A., Ulyanov D.: Deep image prior. In CVPR (2018), pp. 9446–9454. doi:10.1109/CVPR.2018.00984.
  • [Mar03] Marsaglia G.: Xorshift rngs. Journal of Statistical Software 8, 14 (2003), 1–6. doi:10.18637/jss.v008.i14.
  • [Pea85] Peachey D. R.: Solid texturing of complex surfaces. SIGGRAPH (1985), 279–286. doi:10.1145/325165.325246.
  • [Per85] Perlin K.: An image synthesizer. SIGGRAPH (1985), 287–296. doi:10.1145/325165.325247.
  • [PS00] Portilla J., Simoncelli E. P.: A parametric texture model based on joint statistics of complex wavelet coefficients. IJCV 40, 1 (2000), 49–71. doi:10.1023/A:1026553619983.
  • [QhY07] Qin X., Yang Y.-H.: Aura 3D textures. IEEE Transactions on Visualization and Computer Graphics 13, 2 (2007), 379–389. doi:10.1109/TVCG.2007.31.
  • [RMC16] Radford A., Metz L., Chintala S.: Unsupervised representation learning with deep convolutional generative adversarial networks. ICLR (2016).
  • [RPDB12] Rabin J., Peyré G., Delon J., Bernot M.: Wasserstein barycenter and its application to texture mixing. In SSVM (2012), pp. 435–446. doi:10.1007/978-3-642-24785-9_37.
  • [SCO17] Sendik O., Cohen-Or D.: Deep correlations for texture synthesis. ACM Trans. Graph. 36, 5 (2017), 161:1–161:15. doi:10.1145/3015461.
  • [SZ14] Simonyan K., Zisserman A.: Very deep convolutional networks for large-scale image recognition. CoRR (2014). arXiv:1409.1556.
  • [TBD18] Tesfaldet M., Brubaker M. A., Derpanis K. G.: Two-stream convolutional networks for dynamic texture synthesis. In CVPR (2018), pp. 6703–6712. doi:10.1109/CVPR.2018.00701.
  • [ULVL16] Ulyanov D., Lebedev V., Vedaldi A., Lempitsky V.: Texture networks: Feed-forward synthesis of textures and stylized images. In ICML (2016), pp. 1349–1357.
  • [UVL17] Ulyanov D., Vedaldi A., Lempitsky V.: Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR (2017), pp. 4105–4113. doi:10.1109/CVPR.2017.437.
  • [Wei03] Wei L.-Y.: Texture synthesis from multiple sources. In SIGGRAPH (2003), ACM, pp. 1–1. doi:10.1145/965400.965507.
  • [WL00] Wei L.-Y., Levoy M.: Fast texture synthesis using tree-structured vector quantization. In SIGGRAPH (2000), ACM, pp. 479–488. doi:10.1145/344779.345009.
  • [WL03] Wei L.-Y., Levoy M.: Order-independent texture synthesis. Tech. Rep. TR-2002-01, Computer Science Department, Stanford University (2003).
  • [WRB17] Wilmot P., Risser E., Barnes C.: Stable and controllable neural texture synthesis and style transfer using histogram losses. CoRR (2017). arXiv:1701.08893.
  • [YBS19] Yu N., Barnes C., Shechtman E., Amirghodsi S., Lukac M.: Texture mixer: A network for controllable synthesis and interpolation of texture. In CVPR (June 2019).
  • [YYY16] Yan X., Yang J., Yumer E., Guo Y., Lee H.: Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS (2016), pp. 1696–1704.
  • [ZDL11] Zhang G.-X., Du S.-P., Lai Y.-K., Ni T., Hu S.-M.: Sketch guided solid texturing. Graphical Models 73, 3 (2011), 59–73. doi:10.1016/j.gmod.2010.10.006.
  • [ZZB18] Zhou Y., Zhu Z., Bai X., Lischinski D., Cohen-Or D., Huang H.: Non-stationary texture synthesis by adversarial expansion. ACM Trans. Graph. 37, 4 (2018), 49:1–49:13. doi:10.1145/3197517.3201285.