Deep Texture Encoding Network
We propose a Deep Texture Encoding Network (Deep-TEN) with a novel Encoding Layer integrated on top of convolutional layers, which ports the entire dictionary learning and encoding pipeline into a single model. Current methods build from distinct components, using standard encoders with separate off-the-shelf features such as SIFT descriptors or pre-trained CNN features for material recognition. Our new approach provides an end-to-end learning framework, where the inherent visual vocabularies are learned directly from the loss function. The features, dictionaries and the encoding representation for the classifier are all learned simultaneously. The representation is orderless and therefore is particularly useful for material and texture recognition. The Encoding Layer generalizes robust residual encoders such as VLAD and Fisher Vectors, and has the property of discarding domain-specific information, which makes the learned convolutional features easier to transfer. Additionally, joint training using multiple datasets of varied sizes and class labels is supported, resulting in increased recognition performance. The experimental results show superior performance as compared to state-of-the-art methods using gold-standard databases such as MINC-2500, Flickr Material Database, KTH-TIPS-2b, and two recent databases, 4D-Light-Field-Material and GTOS. The source code for the complete system is publicly available.
With the rapid growth of deep learning, convolutional neural networks (CNNs) have become the de facto standard in many object recognition algorithms. The goals of material and texture recognition algorithms, while similar to object recognition, have the distinct challenge of capturing an orderless measure encompassing some spatial repetition. For example, distributions or histograms of features provide an orderless encoding for recognition. In classic computer vision approaches for material/texture recognition, hand-engineered features are extracted using interest point detectors such as SIFT or filter bank responses [10, 11, 26, 42]. A dictionary is typically learned offline and then the feature distributions are encoded by Bag-of-Words (BoWs) [23, 9, 39, 17]. In the final step, a classifier such as SVM is learned for classification. In recent work, hand-engineered features and filter banks are replaced by pre-trained CNNs, and BoWs are replaced by robust residual encoders such as VLAD and its probabilistic version, the Fisher Vector (FV). For example, Cimpoi et al. assemble different features (SIFT, CNNs) with different encoders (VLAD, FV) and have achieved state-of-the-art results. These existing approaches have the advantage of accepting arbitrary input image sizes and have no issue when transferring features across different domains, since the low-level features are generic. However, these methods (both classic and recent work) are comprised of stacked self-contained algorithmic components (feature extraction, dictionary learning, encoding, classifier training), as visualized in Figure 1 (left, center). Consequently, they have the disadvantage that the features and the encoders are fixed once built, so that feature learning (CNNs and dictionary) does not benefit from labeled data. We present a new approach (Figure 1, right) where the entire pipeline is learned in an end-to-end manner.
Deep learning is well known for end-to-end learning of hierarchical features, so what is the challenge in recognizing textures in an end-to-end way? The convolution layers of CNNs operate in a sliding-window manner, acting as local feature extractors. The output featuremaps preserve a relative spatial arrangement of input images. The resulting globally ordered features are then concatenated and fed into the FC (fully connected) layer, which acts as a classifier. This framework has achieved great success in image classification, object recognition, scene understanding and many other applications, but is typically not ideal for recognizing textures due to the need for a spatially invariant representation describing the feature distributions instead of concatenation. Therefore, an orderless feature pooling layer is desirable for end-to-end learning. The challenge is to make the loss function differentiable with respect to the inputs and layer parameters. We derive a new back-propagation equation series (see Appendix A). In this manner, encoding for an orderless representation can be integrated within the deep learning pipeline.
As the first contribution of this paper, we introduce a novel learnable residual encoding layer, which we refer to as the Encoding Layer, that ports the entire dictionary learning and residual encoding pipeline into a single layer for CNNs. The Encoding Layer has three main properties. (1) The Encoding Layer generalizes robust residual encoders such as VLAD and the Fisher Vector. This representation is orderless and describes the feature distribution, which is suitable for material and texture recognition. (2) The Encoding Layer acts as a pooling layer integrated on top of convolutional layers, accepting arbitrary input sizes and providing output as a fixed-length representation. By allowing arbitrary-size images, the Encoding Layer makes the deep learning framework more flexible, and our experiments show that recognition performance is often improved with multi-size training. (3) The Encoding Layer learns an inherent dictionary and an encoding representation which is likely to carry domain-specific information and therefore is suitable for transferring pre-trained features. In this work, we transfer CNNs from object categorization (ImageNet) to material recognition. Since the network is trained end-to-end as a regression process, the convolutional features learned together with the Encoding Layer on top are easier to transfer (likely to be domain-independent).
The second contribution of this paper is a new framework for end-to-end material recognition, which we refer to as the Texture Encoding Network (Deep-TEN), where the feature extraction, dictionary learning and encoding representation are learned together in a single network, as illustrated in Figure 1. Our approach has the benefit of gradient information passing to each component during back propagation, tuning each component for the task at hand. Deep-TEN outperforms existing modular methods and achieves state-of-the-art results on material/texture datasets such as MINC-2500 and KTH-TIPS-2b. Additionally, this Deep Encoding Network performs well in general recognition tasks beyond texture and material, as demonstrated with results on the MIT-Indoor and Caltech-101 datasets. We also explore how convolutional features learned with the Encoding Layer can be transferred through joint training on two different datasets. The experimental results show that the recognition rate is significantly improved with this joint training.
Given a set of $N$ visual descriptors $X = \{x_1, \dots, x_N\}$ and a learned codebook $C = \{c_1, \dots, c_K\}$ containing $K$ codewords that are $D$-dimensional, each descriptor $x_i$ can be assigned with a weight $a_{ik}$ to each codeword $c_k$ and the corresponding residual vector is denoted by $r_{ik} = x_i - c_k$, where $i = 1, \dots, N$ and $k = 1, \dots, K$. Given the assignments and the residual vectors, the residual encoding model applies an aggregation operation for every single codeword $c_k$:

$$e_k = \sum_{i=1}^{N} e_{ik} = \sum_{i=1}^{N} a_{ik} r_{ik}. \quad (1)$$

The resulting encoder outputs a fixed-length representation $E = \{e_1, \dots, e_K\}$ (independent of the number of input descriptors $N$).
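As a concrete illustration, the aggregation in Equation 1 can be sketched in a few lines of NumPy. The shapes, random inputs and variable names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical shapes: N descriptors of dimension D, a codebook of K
# codewords, and precomputed assignment weights A (rows sum to 1).
rng = np.random.default_rng(0)
N, D, K = 121, 128, 32           # e.g. an 11x11 feature map with 128 channels
X = rng.standard_normal((N, D))  # input descriptors x_i
C = rng.standard_normal((K, D))  # codewords c_k
A = rng.random((N, K))
A /= A.sum(axis=1, keepdims=True)  # assignment weights a_ik

# Residuals r_ik = x_i - c_k, then aggregate e_k = sum_i a_ik * r_ik.
R = X[:, None, :] - C[None, :, :]   # (N, K, D)
E = np.einsum("nk,nkd->kd", A, R)   # (K, D): fixed length, independent of N

print(E.shape)  # (32, 128)
```

Note that the output shape depends only on K and D, which is what makes the representation fixed-length regardless of input image size.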
The traditional visual recognition approach can be partitioned into feature extraction, dictionary learning, feature pooling (encoding) and classifier learning, as illustrated in Figure 1. In our approach, we port the dictionary learning and residual encoding into a single layer of CNNs, which we refer to as the Encoding Layer. The Encoding Layer simultaneously learns the encoding parameters along with an inherent dictionary in a fully supervised manner. The inherent dictionary is learned from the distribution of the descriptors by passing the gradient through the assignment weights. During the training process, the updating of the extracted convolutional features can also benefit from the encoding representations.
Consider the assigning weights for assigning the descriptors to the codewords. Hard-assignment provides a single non-zero assigning weight for each descriptor $x_i$, which corresponds to the nearest codeword. The $k$-th element of the assigning vector is given by $a_{ik} = \mathbb{1}(\|r_{ik}\|^2 = \min_j \|r_{ij}\|^2)$, where $\mathbb{1}$ is the indicator function (outputs 0 or 1). Hard-assignment does not consider the codeword ambiguity and also makes the model non-differentiable. Soft-weight assignment addresses this issue by assigning a descriptor to each codeword $c_k$. The assigning weight is given by

$$a_{ik} = \frac{\exp(-\beta \|r_{ik}\|^2)}{\sum_{j=1}^{K} \exp(-\beta \|r_{ij}\|^2)}, \quad (2)$$

where $\beta$ is the smoothing factor for the assignment.
Soft-assignment assumes that different clusters have equal scales. Inspired by Gaussian mixture models (GMM), we further allow the smoothing factor $s_k$ for each cluster center $c_k$ to be learnable:

$$a_{ik} = \frac{\exp(-s_k \|r_{ik}\|^2)}{\sum_{j=1}^{K} \exp(-s_j \|r_{ij}\|^2)}, \quad (3)$$

which provides a finer modeling of the descriptor distributions. The Encoding Layer concatenates the aggregated residual vectors with assigning weights (as in Equation 1). As is typical in prior work [32, 1], the resulting vectors are normalized using the $\ell^2$-norm.
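Putting the soft assignment with learnable per-codeword smoothing together with the residual aggregation, a minimal NumPy sketch of the Encoding Layer forward pass might look as follows. The function name `encoding_layer` and all shapes are our own illustrative choices, not the paper's code:

```python
import numpy as np

def encoding_layer(X, C, s):
    """Sketch of the Encoding Layer forward pass (our reading of Eqs. 1-3).

    X: (N, D) descriptors, C: (K, D) codewords, s: (K,) smoothing factors.
    Returns the l2-normalized concatenation of aggregated residuals.
    """
    R = X[:, None, :] - C[None, :, :]            # residuals r_ik, (N, K, D)
    sq = (R ** 2).sum(-1)                        # ||r_ik||^2, (N, K)
    logits = -s[None, :] * sq
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)            # soft assignment a_ik
    E = np.einsum("nk,nkd->kd", A, R)            # aggregate per codeword
    e = E.ravel()
    return e / np.linalg.norm(e)                 # l2-normalize

rng = np.random.default_rng(1)
out = encoding_layer(rng.standard_normal((100, 8)),
                     rng.standard_normal((4, 8)),
                     np.ones(4))
print(out.shape)  # (32,)
```

In a real network every step here is differentiable, so the codewords and smoothing factors can be trained by back propagation.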
The Encoding Layer is a directed acyclic graph as shown in Figure 2, and all the components are differentiable w.r.t. the input and the parameters (codewords $c_k$ and smoothing factors $s_k$); the explicit gradients are given in Appendix A.
Dictionary Learning is usually performed on the distribution of the descriptors in an unsupervised manner. K-means learns the dictionary using hard-assignment grouping. The Gaussian Mixture Model (GMM) is a probabilistic version of K-means, which allows a finer modeling of the feature distributions. Each cluster is modeled by a Gaussian component with its own mean, variance and mixture weight. The Encoding Layer makes the inherent dictionary differentiable w.r.t. the loss function and learns the dictionary in a supervised manner. To see the relationship of the Encoding Layer to K-means, consider Figure 2 with omission of the residual vectors (shown in green in Figure 2) and let the smoothing factor $s_k \to \infty$, so that the soft assignment approaches hard assignment. With these modifications, the Encoding Layer acts like K-means. The Encoding Layer can also be regarded as a simplified version of GMM that allows different scaling (smoothing) of the clusters.
BoWs (bag-of-words) methods typically hard-assign each descriptor to the nearest codeword and count the occurrences of the visual words by aggregating the assignment vectors. An improved BoW employs soft-assignment weights. VLAD aggregates the residual vectors with hard-assignment weights. NetVLAD makes two relaxations: (1) soft-assignment to make the model differentiable and (2) decoupling the assignment from the dictionary, which makes the assigning weights depend only on the input instead of the dictionary. Therefore, the codewords are not learned from the distribution of the descriptors. Considering Figure 2, NetVLAD drops the link between the visual words and their assignments (the blue arrow in Figure 2). The Fisher Vector concatenates both the 1st-order and 2nd-order aggregated residuals. FV-CNN encodes pre-trained CNN features off-the-shelf and achieves good results in material recognition. Fisher Kernel SVM iteratively updates the SVM with a convex solver and the inner GMM parameters using gradient descent. A key difference from our work is that this Fisher Kernel method uses hand-crafted features instead of learning them. VLAD-CNN and FV-CNN build off-the-shelf residual encoders with pre-trained CNNs and achieve great success in robust visual recognition and understanding.
In CNNs, a pooling layer (Max or Avg) is typically used on top of the convolutional layers. Letting $K = 1$ and fixing $c_1 = 0$, the Encoding Layer simplifies to Sum pooling ($a_{i1} = 1$ and $e_1 = \sum_i x_i$). When followed by $\ell^2$-normalization, it has exactly the same behavior as Avg pooling. The convolutional layers extract features in a sliding-window manner, which can accept arbitrary input image sizes. However, pooling layers usually have a fixed receptive field size, which leads to CNNs allowing only a fixed input image size. The SPP pooling layer accepts different sizes by fixing the number of pooling bins instead of the receptive field sizes. The relative spatial orders of the descriptors are preserved. The bilinear pooling layer removes the globally ordered information by summing the outer product of the descriptors across different locations. Our Encoding Layer acts as a pooling layer by encoding robust residual representations, which converts arbitrary input sizes to a fixed-length representation. Table 1 summarizes the comparison of our approach to other methods.
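The reduction to Sum/Avg pooling can be checked numerically. This small NumPy sketch (with made-up data) sets $K = 1$ and $c_1 = 0$ and verifies the equivalence:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 16))   # 50 descriptors of dimension 16

# Encoding Layer with K=1 and c=0: the softmax over a single codeword
# gives a_i1 = 1 for every descriptor, so e_1 = sum_i x_i.
C = np.zeros((1, 16))
R = X[:, None, :] - C[None, :, :]
A = np.ones((50, 1))
E = np.einsum("nk,nkd->kd", A, R)[0]

assert np.allclose(E, X.sum(axis=0))     # identical to Sum pooling
# After l2-normalization this matches Avg pooling up to the same scaling,
# since mean = sum / N and normalization removes the scale:
assert np.allclose(E / np.linalg.norm(E),
                   X.mean(axis=0) / np.linalg.norm(X.mean(axis=0)))
```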
We refer to the deep convolutional neural network with the Encoding Layer as the Deep Texture Encoding Network (Deep-TEN). In this section, we discuss the properties of Deep-TEN that arise from integrating the Encoding Layer within an end-to-end CNN architecture.
The Fisher Vector (FV) has the property of discarding the influence of frequently appearing features in the dataset, which usually contain domain-specific information. FV-CNN has demonstrated this domain-transfer ability in practice in material recognition work. Deep-TEN generalizes the residual encoder and also preserves this property. To see this intuitively, consider the following: when a visual descriptor $x_i$ appears frequently in the data, it is likely to be close to one of the visual centers $c_k$. Therefore, the resulting residual vector corresponding to $c_k$, $r_{ik} = x_i - c_k$, is small. For the residual vectors of $x_i$ corresponding to $c_j$ where $j \neq k$, the corresponding assigning weight $a_{ij}$ becomes small, as shown in Equation 3. The Encoding Layer aggregates the residual vectors with assignment weights and thus yields small values for frequently appearing visual descriptors. This property is essential for transferring features learned from a different domain, and in this work we transfer CNNs pre-trained on the object dataset ImageNet to material recognition tasks.
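This suppression effect can be demonstrated numerically. In the NumPy sketch below (random codebook; the helper `contributions` is our own illustrative construction), a descriptor lying near a codeword produces a much smaller aggregated contribution than one far from all codewords:

```python
import numpy as np

rng = np.random.default_rng(3)
C = rng.standard_normal((4, 8))   # a toy codebook of 4 codewords
s = np.full(4, 1.0)               # equal smoothing factors

def contributions(x, C, s):
    """Per-codeword contributions a_k * r_k for one descriptor x."""
    r = x[None, :] - C                    # residuals to each codeword
    a = np.exp(-s * (r ** 2).sum(-1))
    a /= a.sum()                          # soft-assignment weights
    return a[:, None] * r

near = C[0] + 0.01 * rng.standard_normal(8)   # "frequent" descriptor: close
far = C[0] + 3.0 * rng.standard_normal(8)     # outlier: far from all centers

norm_near = np.linalg.norm(contributions(near, C, s), axis=1).sum()
norm_far = np.linalg.norm(contributions(far, C, s), axis=1).sum()
assert norm_near < norm_far   # well-modeled descriptors contribute little
```

The near descriptor has a tiny residual to its own codeword and tiny weights on the others, so both terms of each product $a_k r_k$ are small somewhere, exactly the intuition in the text.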
Traditional approaches do not have domain transfer problems because the features are usually generic and the domain-specific information is carried by the dictionary and encoding representations. The proposed Encoding Layer generalizes the dictionary learning and encoding framework, which carries the domain-specific information. Because the entire network is optimized as a regression process, the resulting convolutional features (with the Encoding Layer learned on top) are likely to be domain-independent and therefore easier to transfer.
CNNs typically require a fixed input image size. In order to be fed into the network, images have to be resized or cropped to a fixed size. The convolutional layers operate in a sliding-window manner, which allows any input size (as discussed in SPP). The FC (fully connected) layer acts as a classifier, which takes a fixed-length representation as input. Our Encoding Layer acts as a pooling layer on top of the convolutional layers, converting arbitrary input sizes to a fixed-length representation. Our experiments show that the classification results are often improved by iteratively training the Deep Encoding Network with different image sizes. In addition, this multi-size training provides the opportunity for cross-dataset training.
There are many labeled datasets for different visual problems, such as object classification [12, 24, 6], scene understanding [44, 46], object detection [14, 27] and material recognition [2, 47]. An interesting question to ask is: how can different visual tasks benefit each other? Different datasets have different domains, different labeling strategies and sometimes different image sizes (e.g. CIFAR10 and ImageNet). Sharing convolutional features typically achieves great success [19, 34]. The concept of multi-task learning was originally proposed for joint training across different datasets. An issue in joint training is that features from different datasets may not benefit from the combined training, since the images contain domain-specific information. Furthermore, it is typically not possible to learn deep features from different image sizes. Our Encoding Layer on top of convolutional layers accepts arbitrary input image sizes and learns domain-independent convolutional features, enabling convenient joint training. We present and evaluate a network that shares convolutional features for two different datasets and has two separate Encoding Layers. We demonstrate joint training with two datasets and show that recognition results are significantly improved.
The evaluation considers five material and texture datasets. The Materials in Context Database (MINC) is a large-scale material-in-the-wild dataset. In this work, a publicly available subset (MINC-2500, Sec 5.4 of the original paper) is evaluated with the provided train-test splits, containing 23 material categories and 2,500 images per category. The Flickr Material Dataset (FMD), a popular benchmark for material recognition, contains 10 material classes, with 90 images per class used for training and 10 for test. The Ground Terrain in Outdoor Scenes Dataset (GTOS) is a dataset of ground materials in outdoor scenes with 40 categories. The evaluation is based on the provided train-test splits. KTH-TIPS-2b (KTH) contains 11 texture categories and four samples per category. Two samples are randomly picked for training and the others for test. 4D-Light-Field-Material (4D-Light) is a recent light-field material dataset containing 12 material categories with 100 samples per category. In this experiment, 70 randomly picked samples per category are used for training and the others for test, and only one angular resolution is used per sample. For general classification evaluations, two additional datasets are considered. The MIT-Indoor dataset is an indoor scene categorization dataset with 67 categories; a standard subset of 80 images per category for training and 20 for test is used in this work. Caltech-101 is a 102-category (1 for background) object classification dataset; 10% randomly picked samples are used for test and the others for training.
Figure 3: Multi-size training (352×352 and 320×320) makes the network converge faster and improves the performance. The top plot shows the training curve on MIT-Indoor and the bottom one shows the first 35 epochs on MINC-2500.
In order to evaluate different encodings and representations, we benchmark different approaches with single input image sizes and without ensembles, since we expect that the performance is likely to improve by assembling features or using multiple scales. We fix the input image size to 352×352 for SIFT, pre-trained CNN feature extraction and Deep-TEN. FV-SIFT, a non-CNN approach, is considered due to its similar encoding representation. SIFT features of 128 dimensions are extracted from input images and a GMM of 128 Gaussian components is built, resulting in a 32K Fisher Vector encoding. For FV-CNN encoding, the CNN features of input images are extracted using a pre-trained 16-layer VGG-VD model. The feature maps of conv5 (after ReLU) are used, with the dimensionality of 14×14×512. Then a GMM of 32 Gaussian components is built, resulting in a 32K FV-CNN encoding. To improve the results further, we build a stronger baseline using pre-trained 50-layer ResNet features. The feature maps of the last residual unit are used. The extracted features are projected from the large channel number of 2048 in ResNet down to 512 dimensions using PCA. Then we follow the same encoding approach of standard FV-CNN to build with ResNet features. For comparison with multi-size training of Deep-TEN, a multi-size FV-CNN (VD) is used; the CNN features are extracted from two different sizes of input image, 352×352 and 320×320 (sizes determined empirically). All the baseline encoding representations are reduced to 4096 dimensions using PCA and $\ell^2$-normalized. For classification, linear one-vs-all Support Vector Machines (SVMs) are built using the off-the-shelf representations. The SVM learning hyper-parameter is set to a fixed constant, since the features are $\ell^2$-normalized. The trained SVM classifiers are recalibrated as in prior work [5, 28], by scaling the weights and biases such that the median prediction scores of the positive and negative samples are at $+1$ and $-1$.
We build Deep-TEN with the architecture of an Encoding Layer on top of a 50-layer pre-trained ResNet (as shown in Table 2). Due to the high dimensionality of the ResNet feature maps at Res4, a convolutional layer is used for reducing the number of channels (2048→128). Then an Encoding Layer with 32 codewords is added on top, followed by $\ell^2$-normalization and an FC layer. The weights (codewords $c_k$ and smoothing factors $s_k$) are randomly initialized with a uniform distribution. For data augmentation, the input images are resized to 400 along the short edge with the per-pixel mean subtracted. For in-the-wild image databases, the images are randomly cropped to 9% to 100% of the image areas, keeping the aspect ratio between 3/4 and 4/3. For the material databases with in-lab or controlled conditions (KTH or GTOS), we keep the original image scale. The resulting images are then resized to 352×352 for single-size training (and 320×320 for multi-size training), with a 50% chance of horizontal flips. Standard color augmentation is used as in prior work. We use SGD with a mini-batch size of 64. For fine-tuning, the learning rate starts from 0.01 and is divided by 10 when the error plateaus. We use a weight decay of 0.0001 and a momentum of 0.9. In testing, we adopt standard 10-crop evaluation.
Deep-TEN can ideally accept arbitrary input image sizes (larger than a constant). In order to train the network without modifying the standard optimization solver, we train with a pre-defined size in each epoch and iteratively change the input image size every epoch. A full evaluation of the combinatorics of different size pairs has not yet been explored. Empirically, we consider two different sizes, 352×352 and 320×320, during training and only use a single image size in testing for simplicity (352×352). The two input sizes result in 11×11 and 10×10 feature map sizes before feeding into the Encoding Layer. Our goal is to evaluate how multi-size training affects the network optimization and how the multi-scale features affect texture recognition.
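The fixed-length property underlying multi-size training can be sanity-checked with a small NumPy sketch. The `encode` helper and the random descriptors standing in for 11×11 and 10×10 feature maps are illustrative assumptions:

```python
import numpy as np

def encode(X, C, s):
    """Minimal Encoding Layer sketch: (N, D) descriptors -> (K*D,) vector."""
    R = X[:, None, :] - C[None, :, :]
    logits = -s[None, :] * (R ** 2).sum(-1)
    logits -= logits.max(axis=1, keepdims=True)
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)
    return np.einsum("nk,nkd->kd", A, R).ravel()

rng = np.random.default_rng(4)
C, s = rng.standard_normal((32, 128)), np.ones(32)

# A 352x352 input yields an 11x11 feature map (121 descriptors);
# a 320x320 input yields a 10x10 feature map (100 descriptors).
for n in (11 * 11, 10 * 10):
    e = encode(rng.standard_normal((n, 128)), C, s)
    assert e.shape == (32 * 128,)   # fixed-length output either way
```

Because both sizes map to the same output dimensionality, the same FC classifier can be trained across epochs of differing input sizes.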
We evaluate the performance of Deep-TEN, FV-SIFT and FV-CNN on the aforementioned gold-standard material and texture datasets, such as MINC-2500, FMD, KTH and two new material datasets: 4D-Light and GTOS. Additionally, two general recognition datasets, MIT-Indoor and Caltech-101, are also considered. Table 3 shows the overall experimental results using single-size training.
As shown in Table 3, Deep-TEN and FV-CNN always outperform FV-SIFT, which shows that pre-trained CNN features are typically more discriminative than hand-engineered SIFT features. FV-CNN usually achieves reasonably good results on different datasets without fine-tuning the pre-trained features. We observe that the performance of FV-CNN is often improved by employing ResNet features compared with VGG-VD, as shown in Table 4. Deep-TEN outperforms FV-CNN under the same settings, which shows that the Encoding Layer gives the advantage of transferring pre-trained features to material recognition by removing domain-specific information, as described in Section 3. The Encoding Layer's property of representing feature distributions is especially good for texture understanding and segmented material recognition. Therefore, Deep-TEN works well on the GTOS and KTH datasets. For the small-scale dataset FMD, with less training sample variety, Deep-TEN still outperforms the baseline approaches that use an SVM classifier. For MINC-2500, a relatively large-scale dataset, the end-to-end framework of Deep-TEN shows its distinct advantage of optimizing CNN features and, consequently, the recognition results are significantly improved (61.8%→80.6% and 69.3%→81.3%, compared with the off-the-shelf representation of FV-CNN). For the MIT-Indoor dataset, the Encoding Layer works well on scene categorization due to the need for a certain level of orderlessness and invariance. The best performance of these methods for Caltech-101 is achieved by FV-CNN (VD) multi (85.7%, omitted from the table). The CNN models VGG-VD and ResNet are pre-trained on ImageNet, which, like Caltech-101, is an object classification dataset. The pre-trained features are already discriminative for the target dataset. Therefore, Deep-TEN performance is only slightly better than the off-the-shelf representation FV-CNN.
For in-the-wild datasets, such as MINC-2500 and MIT-Indoor, the performance of all the approaches is improved by adopting multi-size training, as expected. Remarkably, as shown in Table 4, Deep-TEN shows a performance boost of 4.9% using multi-size training and outperforms the best baseline by 7.4% on the MIT-Indoor dataset. For some datasets such as FMD and GTOS, the performance decreases slightly with multi-size training due to the lack of variety in the training data. Figure 3 compares single-size training and multi-size (two-size) training for Deep-TEN on the MIT-Indoor and MINC-2500 datasets. The experiments show that multi-size training helps the optimization of the network (converging faster) and that the learned multi-scale features are useful for recognition.
As shown in Table 5, Deep-TEN outperforms the state-of-the-art on four material/texture recognition datasets: MINC-2500, KTH, GTOS and 4D-Light. Deep-TEN also performs well on two general recognition datasets. Notably, the prior state-of-the-art approaches either (1) rely on assembling features (such as FV-SIFT & CNNs) and/or (2) adopt an additional SVM classifier for classification. Deep-TEN, as an end-to-end framework, neither concatenates any additional hand-engineered features nor employs an SVM for classification. For the small-scale datasets such as FMD and MIT-Indoor (subset), the proposed Deep-TEN achieves comparable results to state-of-the-art approaches (FMD within 2%, MIT-Indoor within 4%). For large-scale datasets such as MINC-2500, Deep-TEN outperforms the prior work and baselines by a large margin, demonstrating its great advantage of end-to-end learning and its ability to transfer pre-trained CNNs. We expect that the performance of Deep-TEN can scale better than traditional approaches when adding more training data.
We test joint training on two small datasets, CIFAR-10 and STL-10, as a litmus test of Joint Encoding from scratch. We expect the convolutional features learned with the Encoding Layer to be easier to transfer and to improve the recognition on both datasets.
CIFAR-10 contains 60,000 tiny images of size 32×32 belonging to 10 classes (50,000 for training and 10,000 for test), and is a subset of the tiny images database. STL-10 is a dataset acquired from ImageNet and originally designed for unsupervised feature learning, which has 5,000 labeled images for training and 8,000 for test, with the size of 96×96. For the STL-10 dataset, only the labeled images are used for training. Therefore, learning a CNN from scratch is not expected to work well due to the limited training data. We use a very simple network architecture, simply replacing the Avg pooling layer of a pre-activation ResNet-20 with an Encoding Layer (16 codewords). We then build a network with shared convolutional layers and separate Encoding Layers that is jointly trained on the two datasets. Note that the traditional CNN architecture is not applicable due to the different image sizes of these two datasets. The training loss is computed as the sum of the two classification losses, and the gradients of the convolutional layers are accumulated together. For data augmentation during training, 4 pixels are padded on each side for CIFAR-10 and 12 pixels for STL-10, and the padded images or their horizontal flips are randomly cropped to the original sizes, 32×32 for CIFAR-10 and 96×96 for STL-10. For testing, we only evaluate a single view of the original images. The model is trained with a mini-batch of 128 for each dataset. We start with a learning rate of 0.1 and divide it by 10 and 100 at the 80th and 120th epochs.
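The joint-training scheme (shared trunk, per-dataset heads, summed losses, accumulated gradients) can be sketched on a toy linear model. Everything below (shapes, squared-error loss, variable names) is an illustrative stand-in for the actual CNN, not the paper's training code:

```python
import numpy as np

# Toy joint-training step: a shared linear "trunk" W feeds two
# dataset-specific heads; the total loss is the sum of the two per-dataset
# losses, so the trunk gradient is the sum of the gradients flowing back
# from each head.
rng = np.random.default_rng(5)
W = rng.standard_normal((8, 4))            # shared parameters (the "trunk")
heads = [rng.standard_normal((4, 10)),     # head for dataset A
         rng.standard_normal((4, 10))]     # head for dataset B
batches = [rng.standard_normal((16, 8)), rng.standard_normal((16, 8))]
targets = [rng.standard_normal((16, 10)), rng.standard_normal((16, 10))]

grad_W = np.zeros_like(W)
total_loss = 0.0
for Xb, Hb, Tb in zip(batches, heads, targets):
    Z = Xb @ W                              # shared features
    P = Z @ Hb                              # dataset-specific prediction
    diff = P - Tb
    total_loss += 0.5 * (diff ** 2).mean()  # per-dataset squared-error loss
    # Backprop through the shared trunk; gradients from the two datasets
    # accumulate, mirroring the summed-loss training described in the text.
    grad_W += Xb.T @ (diff @ Hb.T) / diff.size

W -= 0.1 * grad_W                           # one SGD step on the trunk
```

The design point is that the two batches may have different spatial sizes in the real network; only the shared parameters see both gradients.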
In summary, we developed an Encoding Layer which bridges the gap between classic computer vision approaches and the CNN architecture, (1) making the deep learning framework more flexible by allowing arbitrary input image sizes, and (2) making the learned convolutional features easier to transfer, since the Encoding Layer is likely to carry the domain-specific information. The Encoding Layer shows superior performance in transferring pre-trained CNN features. Deep-TEN outperforms traditional off-the-shelf methods and achieves state-of-the-art results on MINC-2500, KTH and two recent material datasets: GTOS and 4D-Lightfield.
This work was supported by National Science Foundation award IIS-1421134. A GPU used for this research was donated by the NVIDIA Corporation.
This appendix provides the explicit expressions for the gradients of the loss $\ell$ with respect to (w.r.t.) the layer input and the parameters for implementing the Encoding Layer. The $\ell^2$-normalization, as a standard component, is used outside the Encoding Layer.
The encoder $E = \{e_1, \dots, e_K\}$ can be viewed as $K$ independent sub-encoders. Therefore the gradients of the loss function $\ell$ w.r.t. the input descriptor $x_i$ can be accumulated across the sub-encoders: $\frac{\partial \ell}{\partial x_i} = \sum_{k=1}^{K} \frac{\partial \ell}{\partial e_k} \cdot \frac{\partial e_k}{\partial x_i}$. According to the chain rule, the gradient of the sub-encoder w.r.t. the input is given by

$$\frac{\partial e_k}{\partial x_i} = a_{ik} \frac{\partial r_{ik}}{\partial x_i} + r_{ik} \left(\frac{\partial a_{ik}}{\partial x_i}\right)^T,$$

where $a_{ik}$ and $r_{ik}$ are defined in Sec 2, and $\frac{\partial r_{ik}}{\partial x_i} = I$ (the identity). Let $f_{ik} = e^{-s_k \|r_{ik}\|^2}$ and $h_i = \sum_{m=1}^{K} f_{im}$, so we can write $a_{ik} = f_{ik} / h_i$. The derivative of the assigning weight w.r.t. the input descriptor is

$$\frac{\partial a_{ik}}{\partial x_i} = \frac{1}{h_i}\frac{\partial f_{ik}}{\partial x_i} - \frac{f_{ik}}{h_i^2}\frac{\partial h_i}{\partial x_i}, \qquad \frac{\partial f_{ik}}{\partial x_i} = -2 s_k f_{ik}\, r_{ik}.$$
The sub-encoder $e_k$ only depends on the codeword $c_k$. Therefore, the gradient of the loss function w.r.t. the codeword is given by $\frac{\partial \ell}{\partial c_k} = \sum_{i=1}^{N} \frac{\partial \ell}{\partial e_{ik}} \cdot \frac{\partial e_{ik}}{\partial c_k}$, where

$$\frac{\partial e_{ik}}{\partial c_k} = -a_{ik} I + r_{ik} \left(\frac{\partial a_{ik}}{\partial c_k}\right)^T.$$

According to the chain rule, the derivative of the assigning weight w.r.t. the codeword can be written as

$$\frac{\partial a_{ik}}{\partial c_k} = \frac{2 s_k f_{ik} (h_i - f_{ik})}{h_i^2}\, r_{ik}.$$
Similar to the codewords, the sub-encoder $e_k$ only depends on the $k$-th smoothing factor $s_k$. Then, the gradient of the loss function w.r.t. the smoothing factor is given by $\frac{\partial \ell}{\partial s_k} = \sum_{i=1}^{N} \frac{\partial \ell}{\partial e_{ik}} \cdot r_{ik} \frac{\partial a_{ik}}{\partial s_k}$, with

$$\frac{\partial a_{ik}}{\partial s_k} = \frac{f_{ik}\, \|r_{ik}\|^2\, (f_{ik} - h_i)}{h_i^2}.$$
In practice, we multiply the numerator and denominator of the assigning weight by $e^{\phi_i}$ to avoid overflow:

$$a_{ik} = \frac{e^{-s_k \|r_{ik}\|^2 + \phi_i}}{\sum_{j=1}^{K} e^{-s_j \|r_{ij}\|^2 + \phi_i}},$$

where $\phi_i = \min_k \{ s_k \|r_{ik}\|^2 \}$. Then $\bar{f}_{ik} = f_{ik}\, e^{\phi_i}$ can be substituted for $f_{ik}$ in the gradient expressions above, since the common factor cancels.
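The appendix gradients can be sanity-checked with finite differences. The sketch below verifies the derivative of the assigning weight w.r.t. the smoothing factor, $\partial a_k / \partial s_k = f_k \|r_k\|^2 (f_k - h) / h^2$ with $f_k = e^{-s_k \|r_k\|^2}$ and $h = \sum_j f_j$, on random toy data (a numerical check written for this note, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(6)
K, D = 4, 6
x = rng.standard_normal(D)           # one descriptor
C = rng.standard_normal((K, D))      # toy codebook
s = rng.random(K) + 0.5              # smoothing factors
sq = ((x - C) ** 2).sum(-1)          # ||r_k||^2 for each codeword

def assign(s):
    """Soft-assignment weights a_k for descriptor x."""
    f = np.exp(-s * sq)
    return f / f.sum()

f = np.exp(-s * sq)
h = f.sum()
k = 1
analytic = f[k] * sq[k] * (f[k] - h) / h ** 2    # da_k/ds_k, analytic form

eps = 1e-6                                       # central finite difference
sp, sm = s.copy(), s.copy()
sp[k] += eps
sm[k] -= eps
numeric = (assign(sp)[k] - assign(sm)[k]) / (2 * eps)
assert abs(analytic - numeric) < 1e-6
```

The same pattern (perturb one parameter, compare to the closed form) applies to the codeword and input gradients.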
We also tried to train Deep-TEN from scratch on MINC-2500; the result is omitted from the main paper due to its inferior recognition performance compared with employing the pre-trained ResNet-50. As shown in Figure 4, the convergence speed is significantly improved using multi-size training, which supports our hypothesis that multi-size training helps the optimization of the network. The validation error improves less than the training error, since we adopt a single-size test for simplicity.
Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM, 2008.