I Introduction
The study of noise in visual data is a matter of major interest within the image processing and computer vision communities. As a result, many different denoising algorithms have been developed for both image [1] and video [2] restoration. These methods can improve image quality in applications ranging from microscopy [3] to astronomy [4].
Over the last decades, the image classification task has motivated the development of many image descriptors (e.g. LBP [5], HOG [6]) and, more recently, representation learning techniques [7]. Nonetheless, the preprocessing stages of the image classification pipeline – which could incorporate and benefit from denoising techniques – have been neglected, as pointed out by [8, 9, 10]. Moreover, little has been done to measure the impact of different types of noise on image classification [11], which can hinder the deployment of computer vision systems in scenarios where image quality varies.
Considering the abovementioned gaps, in this paper we experimentally measure the effects of different types of noise on image classification and investigate whether denoising algorithms help to mitigate this problem. We analyse our results around the following questions:

Is the performance of a classifier hampered by noise when using the LBP and HOG methods to describe the image dataset?

Is the decrease in performance due to the fact that noise makes the classes harder to separate, or is the model learned from noise-free images simply not robust enough to deal with noisy images?

Can denoising methods help in these situations?
Our results show that classifiers struggle to generalise to data with a different type of noise and that image classification becomes harder when dealing with noisy images. Though denoising algorithms can help to mitigate the effects of noise, they may also remove important information, reducing classification performance.
II Related work
Ponti et al. [8] divide image classification into five stages (see Figure 1) and show that the method used to convert images from RGB to grayscale can have a substantial impact on classification performance. They also demonstrate that RGB to grayscale conversion can be used as an effective dimensionality reduction procedure. Their results show that early stages of the classification pipeline – despite being neglected in most image classification applications – can directly influence classification performance. Other papers [10, 9] also point out the importance of these early stages. Nonetheless, as in [8], they only focus on RGB to grayscale conversion and do not consider noisy images.
Dodge and Karam [11] analyse how image quality can hamper the performance of some state-of-the-art deep learning models by using networks trained on noise-free images to classify noisy, blurred and compressed images. Their results show that image classification is directly affected by image quality. Similarly, Kylberg and Sintorn [12] evaluate the noise robustness of several LBP variants. Given that in both papers the classifiers are trained on noise-free images, it is not possible to infer whether the learned models are unable to deal with noisy images or whether noise makes the classes harder to separate.
III Technical background
III-A Local binary patterns
Local Binary Patterns (LBP) [5] is a texture-based image descriptor that, due to its success, has several variants and improved versions [13, 14]. In this paper, we employ the version that uses uniform patterns and is invariant to grayscale shifts, scale changes and rotations. This variant achieves good results while generating low-dimensional features.
The LBP descriptor is the distribution (a histogram) of texture patterns extracted for every pixel in an image. Thus, prior to computing the LBP descriptor, it is necessary to compute a texture pattern representation for each pixel. This texture representation is called LBP code and is based on the difference between a pixel and its neighbors. These neighbors can be arranged in a circle or in a square. A neighborhood is defined by the parameters $P$ and $R$, where $P$ is the number of neighbors and $R$ is the radius of the circular neighborhood (or the side of the square neighborhood). If one of the neighbors is not located at the center of a pixel, its value needs to be obtained via interpolation.
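The neighbor coordinates described above can be sketched as follows (a minimal illustration; the function name `circular_neighbors` is ours, not from the paper):

```python
import math

def circular_neighbors(xc, yc, P, R):
    """Coordinates of the P neighbors on a circle of radius R around (xc, yc).
    Coordinates that do not fall on a pixel center must be interpolated
    (e.g. bilinearly) before computing the LBP code."""
    return [(xc + R * math.cos(2 * math.pi * p / P),
             yc - R * math.sin(2 * math.pi * p / P))
            for p in range(P)]
```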
The LBP code (a binary code) for a pixel $g_c$ and its $P$ neighbors is defined as follows:

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p, \qquad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases} \qquad (1)$$

where $s$ is the sign (thresholding) function and $g_0, \dots, g_{P-1}$ are the neighbors of $g_c$. This LBP code is invariant to grayscale shifts, because it is based on differences between pixels and not on absolute values. Also, since only the sign of each difference is considered, the code is invariant to scale. On the other hand, this code is not invariant to rotation.
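A minimal sketch of this code computation (assuming the neighbor values have already been sampled; `lbp_code` is an illustrative name):

```python
def s(x):
    """Thresholding (sign) function of Eq. (1)."""
    return 1 if x >= 0 else 0

def lbp_code(gc, neighbors):
    """LBP code of a pixel with value gc, given its P neighbor values:
    each neighbor contributes bit 2^p when it is >= the center pixel."""
    return sum(s(gp - gc) << p for p, gp in enumerate(neighbors))
```

Note that adding a constant to `gc` and all `neighbors` (a grayscale shift), or multiplying them by a positive factor (a scale change), leaves the code unchanged.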
It is possible to achieve some invariance to rotation by using the following LBP code:

$$\mathrm{LBP}_{P,R}^{ri} = \min\{\,\mathrm{ROR}(\mathrm{LBP}_{P,R},\, i) \mid i = 0, \dots, P-1\,\} \qquad (2)$$

where $\mathrm{ROR}(x, i)$ is the result of $i$ circular right bitwise shifts applied to the code $x$. As an example, if $x = 10000111$ and $i = 1$, then $\mathrm{ROR}(x, i) = 11000011$. By always taking the minimum over all possible bitwise shifts, a code that is more robust to rotations is obtained.
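The circular shift and the rotation-invariant code can be sketched in a few lines (function names are illustrative):

```python
def ror(code, i, P=8):
    """Circular right bitwise shift of a P-bit code by i positions:
    the i lowest bits wrap around to the top."""
    mask = (1 << P) - 1
    return ((code >> i) | (code << (P - i))) & mask

def lbp_ri(code, P=8):
    """Rotation-invariant LBP code of Eq. (2): the minimum value over
    all P circular shifts of the original code."""
    return min(ror(code, i, P) for i in range(P))
```

Any rotation of the neighborhood corresponds to a circular shift of the code, so all rotated versions map to the same minimum value.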
Pietikäinen et al. [15] discovered that when LBP patterns are considered circularly, they usually contain two or fewer bit transitions (patterns with this characteristic were named uniform). The other patterns – those with more than two transitions – occur rarely and were called non-uniform.
Finally, to obtain the LBP descriptor – so far we have only discussed LBP codes – a histogram is computed. In this histogram, each uniform pattern has its own bin, while a single bin accumulates all the non-uniform patterns.
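The uniformity test and the histogram layout above can be sketched as follows (a simplified illustration; `lbp_histogram` takes precomputed codes and the helper names are ours):

```python
def transitions(code, P=8):
    """Number of 0/1 transitions when the P-bit code is read circularly."""
    bits = [(code >> i) & 1 for i in range(P)]
    return sum(bits[i] != bits[(i + 1) % P] for i in range(P))

def is_uniform(code, P=8):
    """A pattern is uniform when it has at most two circular transitions."""
    return transitions(code, P) <= 2

def lbp_histogram(codes, P=8):
    """LBP descriptor: one bin per uniform pattern plus a single final bin
    shared by all non-uniform codes."""
    uniform = sorted(c for c in range(1 << P) if is_uniform(c, P))
    index = {c: i for i, c in enumerate(uniform)}
    hist = [0] * (len(uniform) + 1)          # last bin: non-uniform codes
    for c in codes:
        hist[index.get(c, len(uniform))] += 1
    return hist
```

For $P = 8$ there are 58 uniform patterns, so the descriptor has 59 bins, which is what makes this variant low dimensional.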
III-B Histogram of oriented gradients
Based on evaluating well-normalized local histograms of image gradient orientations in a dense grid, Histogram of Oriented Gradients (HOG) [6] takes advantage of the distribution of local intensity gradients or edge directions to characterize local object appearance and shape. This is done by dividing the image window into small connected regions, called cells, in which a local histogram of gradient or edge directions is computed over all pixels. The final representation is obtained by combining the histograms computed in all cells of the image. HOG descriptors are particularly suited for human detection [6].
To extract HOG descriptors from an image, gradient values must first be computed. This is most commonly done by filtering the color or intensity data of the image with the one-dimensional centered point discrete derivative mask, applied in both the horizontal and vertical directions. Then, the image is divided into small cells of rectangular (R-HOG) or circular shape (C-HOG). Each pixel contained in a cell contributes, in a weighted manner, to an orientation-based histogram. This histogram is created for each cell and its bins are evenly spread over the orientation of the gradients. The orientation range can be defined over 0 to 180 degrees or over 0 to 360 degrees, depending on whether the gradient is "unsigned" or "signed". The contribution of a pixel to each bin of the histogram is weighted by the magnitude of the gradient or some function of this magnitude.
To increase robustness to illumination and contrast changes, gradient strengths are locally normalized by grouping cells together into blocks. Commonly used normalization schemes include the L2-norm (Equation 3), hysteresis-based normalization [16] and the L1-sqrt (Equation 4):

$$v \leftarrow \frac{v}{\sqrt{\|v\|_2^2 + \epsilon^2}} \qquad (3)$$

$$v \leftarrow \sqrt{\frac{v}{\|v\|_1 + \epsilon}} \qquad (4)$$

where $v$ is the non-normalized vector containing all histograms of a given block, $\|v\|_k$ is its $k$-norm for $k = 1, 2$, and $\epsilon$ is a small constant.
Blocks typically overlap, which means that a cell can contribute to more than one block and, therefore, more than once to the final descriptor. The size and shape of the cells and blocks and the number of bins in each histogram are set by the user.
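The pipeline above can be sketched in a few lines of NumPy. This is a deliberately simplified illustration (the name `hog_cells` is ours): it uses the centered derivative mask, unsigned orientations, magnitude-weighted cell histograms and the L2-norm of Equation 3, but omits bin interpolation and overlapping blocks (each block holds one cell).

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Toy HOG sketch: centered-difference gradients, unsigned orientations
    (0-180 degrees), one magnitude-weighted histogram per square cell,
    each histogram L2-normalized on its own."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # horizontal derivative
    gy[1:-1, :] = img[2:, :] - img[:-2, :]        # vertical derivative
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned gradient
    H, W = img.shape
    feats = []
    for r in range(0, H - cell + 1, cell):
        for c in range(0, W - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            hist = hist / np.sqrt(np.sum(hist ** 2) + 1e-5 ** 2)  # Eq. (3)
            feats.append(hist)
    return np.concatenate(feats)
```

On a horizontal intensity ramp, all gradient energy falls into the 0-degree bin of every cell, so each normalized cell histogram concentrates in its first bin.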
III-C Median filter
The Median filter replaces each pixel value by the median of the pixel values in a neighborhood centered on it. This filter can be described by the following equation:

$$\hat{f}(x, y) = \underset{(s, t) \in S_{xy}}{\mathrm{median}}\,\{\, g(s, t) \,\} \qquad (5)$$

where $g(s, t)$, for $(s, t) \in S_{xy}$, are the pixel values within the neighborhood $S_{xy}$ centered on $(x, y)$.
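A minimal sketch of Equation 5 on a grayscale image stored as a list of lists (edge pixels are handled by clamping coordinates, one of several possible border policies):

```python
def median_filter(img, k=3):
    """Median filter: each output pixel is the median of a k x k
    neighborhood centered on it (borders handled by clamping)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = sorted(
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            )
            out[y][x] = vals[len(vals) // 2]  # middle of k*k sorted values
    return out
```

A single impulse (salt-like) pixel never reaches the middle of the sorted neighborhood, which is why this filter is effective against salt & pepper noise.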
III-D Non-Local Means
The Non-Local Means (NLM) filter, originally presented in [17], has inspired several variations. In this paper we use the windowed version proposed by Buades et al. [18]. Given a noisy image $v$, this NLM variant defines each pixel $i$ of the restored image as a weighted average of all pixels inside a search window $S_i$ of size $s \times s$ centered on $i$ ($s$ is a user-defined parameter), using the following equation:

$$NL[v](i) = \sum_{j \in S_i} w(i, j)\, v(j) \qquad (6)$$

where the weight $w(i, j)$ measures the similarity between pixels $i$ and $j$. Each $w(i, j)$ is computed as follows:

$$w(i, j) = \frac{1}{Z(i)}\, e^{-\frac{\|v(N_i) - v(N_j)\|_{2, a}^2}{h^2}} \qquad (7)$$

where $N_i$ and $N_j$ are square regions centered at $i$ and $j$ (their side is a user-defined parameter), $Z(i)$ is a normalizing constant ensuring the weights sum to one, and $h$ is a user-defined parameter that controls the filtering level. To compute the similarity between $v(N_i)$ and $v(N_j)$, a Euclidean distance weighted by a Gaussian kernel with standard deviation defined by the user-defined parameter $a$ is used.
IV Experiments
IV-A Experimental setup
To evaluate whether noise hampers classification performance, we generated noisy versions of two datasets (Corel and Caltech101-600) using different levels of three types of noise: Gaussian, Poisson and salt & pepper. Moreover, to understand the impact of employing a denoising algorithm as preprocessing, we restored these noisy images using two denoising methods: Median filter and Non-Local Means. All these operations were performed on both the training and test sets of both datasets.
We trained a different linear Support Vector Machine (SVM) for every version of each training set. Given that every training set version contains only one type of noise (or no noise at all), a model specialized in each level of each type of noise was created.
Then, these models were used to classify every version of the test set. As with the training sets, each test set version also contains only one type of noise (or no noise at all). This allows the experiments to measure how well a model learned on a particular noisy training set performs on other types of noisy images (see Figure 2 for a diagram summarizing this setup). In addition, by training a model using a certain type and level of noise and then evaluating its performance on a test set with the same characteristics, it is possible to make a superficial analysis of the linear separability of the problem (since linear SVMs were used).
Since the selected datasets have more than two classes, we trained SVM models using a "one-vs-all" approach. Furthermore, to evaluate their performance, an average F1-score weighted by the number of instances in each class was used. This performance measure was chosen because it addresses the evaluation of unbalanced domains, that is, cases where classes have different numbers of instances.
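The weighted F1-score described above can be sketched in pure Python (the function name `weighted_f1` is illustrative; libraries such as scikit-learn provide an equivalent via `average='weighted'`):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1-scores averaged with weights proportional to each
    class's support (its number of true instances)."""
    support = Counter(y_true)
    total = 0.0
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] * f1
    return total / len(y_true)
```

Weighting by support means a majority class contributes proportionally more, so the score reflects performance on the actual class distribution.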
IV-B Datasets
IV-B1 Corel
A dataset containing RGB images of 80 classes, where each class has at least 100 images. The Corel dataset is available at: https://sites.google.com/site/dctresearch/Home/contentbasedimageretrieval. Sample images from this dataset are shown in the first row of Figure 3.
IV-B2 Caltech101-600
A subset [8] of Caltech101 [19] containing 6 classes (airplanes, bonsai, chandelier, hawksbill, motorbikes, and watch), each one with 100 examples. The Caltech101-600 dataset is available at: http://www.icmc.usp.br/pessoas/moacir/data/. Images from this dataset can be seen in the second row of Figure 3.
IV-C Reproducibility remarks and parameter values
Regarding the insertion of noise into the images, we considered three types: Gaussian, Poisson and salt & pepper. First, for the Gaussian noise, we used zero mean and five different values for the standard deviation. Figure 4 shows an example of different levels of Gaussian noise. Second, since Poisson noise is signal dependent, to adjust the intensity of the Poisson noise applied to each image it was necessary to multiply the image by a scale factor, controlling the effect of the noise on the image (an in-depth explanation of the scale factor for Poisson noise can be found at: https://ruiminpan.wordpress.com/2016/03/10/thecuriouscaseofpoissonnoiseandmatlabimnoisecommand/). In our tests we used five different levels for the Poisson scale factor. Finally, the salt & pepper noise was applied to each pixel with five different probabilities.
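A sketch of the three corruptions, assuming a float image in [0, 1] (the function name `add_noise` and the exact salt & pepper convention – replaced pixels become 0 or 1 with equal chance – are our assumptions, not taken from the paper):

```python
import numpy as np

def add_noise(img, kind, level, seed=0):
    """Corrupt a float image in [0, 1]. 'level' means: Gaussian std,
    Poisson scale factor, or salt & pepper pixel probability."""
    rng = np.random.default_rng(seed)
    if kind == "gaussian":
        out = img + rng.normal(0.0, level, img.shape)
    elif kind == "poisson":
        # signal-dependent: scale up, draw Poisson counts, scale back down
        out = rng.poisson(img * level) / level
    elif kind == "salt_pepper":
        out = img.copy()
        mask = rng.random(img.shape) < level      # pixels to corrupt
        out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    else:
        raise ValueError(kind)
    return np.clip(out, 0.0, 1.0)
```

Larger Poisson scale factors mean more photon counts per pixel and therefore *less* relative noise, which is why the scale factor controls the noise level.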
For the image descriptors, the parameters were fixed for all datasets and all noise types and levels. The LBP method was computed using a fixed radius and a circular neighborhood, while, for the HOG method, a fixed number of gradient orientations was considered, each cell was composed of a fixed-size region of pixels, and each block contained a single cell. To obtain fixed-size feature vectors with the HOG descriptor, all images in the dataset were resized. This process was carried out in three steps. First, considering the number of rows and columns, we computed the bigger and the smaller dimension of every image. Second, we obtained the smallest value of the bigger dimension and the smallest value of the smaller dimension among all images within the dataset. Third, given Db (the smallest value of the bigger dimension) and Ds (the smallest value of the smaller dimension), we resized all images in a dataset so that their bigger dimension equals Db and their smaller dimension equals Ds. This procedure reduces the distortion caused by resizing.
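The three-step resize computation can be sketched as follows (function names are illustrative; Db and Ds denote the smallest bigger dimension and the smallest smaller dimension over the dataset):

```python
def target_size(shapes):
    """Given all image shapes (h, w) in a dataset, return (Db, Ds):
    the smallest 'bigger dimension' and the smallest 'smaller dimension'."""
    Db = min(max(h, w) for h, w in shapes)
    Ds = min(min(h, w) for h, w in shapes)
    return Db, Ds

def resized_shape(h, w, Db, Ds):
    """Each image keeps its orientation: its bigger dimension is mapped
    to Db and its smaller dimension to Ds."""
    return (Db, Ds) if h >= w else (Ds, Db)
```

Because every image maps its larger side to Db and its smaller side to Ds, portrait and landscape images are distorted far less than they would be by forcing a single fixed (height, width) on all of them.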
Concerning the denoising methods, all NLM-restored images were generated using the parameter values recommended by the original paper [17]. With the Median filter we used a square neighborhood of pixels. Examples of images obtained after applying a denoising method can be seen in Figure 5. During the classification stage, the parameters used to train each SVM were selected via a grid search performed with 5-fold cross validation on the training set.
For reproducibility, the code used in our experiments is available online at: https://github.com/gbpcosta/wvc_2016_noise.
V Results and discussion
Using the experimental setup presented earlier, in this section we analyse our results considering the three questions presented in Section I. As can be noticed in the heatmaps presented in Figures 7, 8, 9 and 10, the HOG descriptor obtained better results on both datasets. However, our goal is not to compare the descriptors, but rather to analyse the impact of noise on image classification by shedding some light on the following questions.
1) Is the performance of a classifier hampered when using the LBP and HOG methods to describe a noisy image dataset?
To answer this question we created Figures 7, 8, 9 and 10. Each of these figures is a heatmap representing the F1-scores obtained by a classifier on all versions of a dataset (noisy, original and restored). It is possible to observe that the best results were obtained by classifiers trained and tested on noise-free images. This means that, for the analysed scenarios, image classification using LBP and HOG descriptors with a linear SVM is hampered when noisy images are used as input. Additionally, if we consider a model trained on the original (noise-free) training data, the higher the noise level, the lower the F1-score (see Figure 6). This effect was also observed in previous studies [12, 11], but for other descriptors, classifiers, datasets, and types of noise.
Please note that the darkest color in the heatmaps corresponds to the best result obtained on that dataset during the experiments, and not to 1 (the best possible F1-score). For that reason, the scale of Figures 7 and 9 differs from that of Figures 8 and 10.
2) Is the decrease in performance due to the fact that noise makes the classes harder to separate, or is the model learned from noise-free images simply not robust enough to deal with noisy images?
Looking at Figure 11, it is possible to see that the models trained on a specific noise configuration have the best performance on a test set with the same noise configuration. Nevertheless, if we compare these best results across noise levels (as shown in Figure 6), the best F1-scores – for both descriptors on both datasets – are obtained when both training and test sets are noise-free. Therefore, given that all these models were built after a grid search and that linear SVMs were used, our results indicate that the classes become less linearly separable in the presence of noise.
These results show that LBP and HOG are sensitive to noise, which might cause them to produce different feature spaces for the same data under different levels of noise. Thus, due to this hindered class representation, the SVM might not have been able to create a classifier general enough for noisy future data.
3) Can denoising methods help in these situations?
Overall, the use of denoising methods improved classification performance when both training and test sets were affected by the same type of noise. However, the achieved results were not as good as those obtained on the original dataset, probably due to the loss of detail and texture caused by these methods. Note, moreover, that models trained on denoised images did not perform well when tested on noisy images.
V-A Supplementary material
Due to size restrictions, not all results could be presented in this paper. The complete results are available at: https://github.com/gbpcosta/wvc_2016_noise.
VI Conclusion
The results presented in the previous section show that testing classifiers on images with a different type of noise not only confuses the models, but also makes the problem harder. This is noticeable on the diagonal of each heatmap, where none of the classifiers was able to overcome the performance of the classifier trained and tested on the original dataset.
When denoising is applied, the results obtained by classifying images from the same category (same type of noise or denoising method) were slightly better than those achieved by classifying noisy images. However, due to the smoothing caused by these methods, these results did not match the classification performance on the original dataset.
Future work includes analysing the effect of noise on video descriptors, since temporal information might help overcome the difficulty of describing noisy data. The analysis performed in this paper should also be extended to include more datasets, descriptors and denoising methods – mainly deep learning methods, since these represent the state-of-the-art in image classification. Finally, image quality metrics such as PSNR and SSIM can be important for comparing degraded images.
Acknowledgment
This work was supported by FAPESP (grants #2014/218882, #2015/048830 and #2015/053103).
References

[1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "BM3D image denoising with shape-adaptive principal component analysis," in SPARS'09 – Signal Processing with Adaptive Sparse Structured Representations, Saint Malo, France, 2009.
[2] W. A. Contato, T. S. Nazare, G. B. Paranhos da Costa, M. Ponti, and J. E. S. Batista Neto, "Improving non-local video denoising with local binary patterns and image quantization," in Conference on Graphics, Patterns and Images (SIBGRAPI 2016), 2016.
[3] M. Ponti, E. S. Helou, P. J. S. G. Ferreira, and N. D. A. Mascarenhas, "Image restoration using gradient iteration and constraints for band extrapolation," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 1, pp. 71–80, Feb. 2016.
[4] S. Beckouche, J.-L. Starck, and J. Fadili, "Astronomical image denoising using dictionary learning," Astronomy & Astrophysics, vol. 556, p. A132, 2013.
[5] T. Ojala, M. Pietikainen, and D. Harwood, "Performance evaluation of texture measures with classification based on Kullback discrimination of distributions," in International Conference on Pattern Recognition (ICPR), 1994, pp. 582–585.
[6] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, June 2005, pp. 886–893.
[7] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: a review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[8] M. Ponti, T. S. Nazaré, and G. S. Thumé, "Image quantization as a dimensionality reduction procedure in color and texture feature extraction," Neurocomputing, vol. 173, pp. 385–396, 2016.
[9] M. Ponti and L. C. Escobar, "Compact color features with bitwise quantization and reduced resolution for mobile processing," in IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013, pp. 751–754.
[10] C. Kanan and G. W. Cottrell, "Color-to-grayscale: does the method matter in image recognition?" PLoS ONE, vol. 7, no. 1, p. e29740, 2012.
[11] S. Dodge and L. Karam, "Understanding how image quality affects deep neural networks," in Eighth International Conference on Quality of Multimedia Experience (QoMEX), Jun. 2016, pp. 1–6.
[12] G. Kylberg and I.-M. Sintorn, "Evaluation of noise robustness for local binary pattern descriptors in texture classification," EURASIP Journal on Image and Video Processing, vol. 2013, p. 17, 2013.
[13] L. Nanni, A. Lumini, and S. Brahnam, "Survey on LBP based texture descriptors for image classification," Expert Systems with Applications, vol. 39, no. 3, pp. 3634–3641, Feb. 2012.
[14] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, Jul. 2002.
[15] M. Pietikäinen, T. Ojala, and Z. Xu, "Rotation-invariant texture classification using feature distributions," Pattern Recognition, vol. 33, pp. 43–52, 2000.
[16] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[17] A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2, pp. 60–65.
[18] A. Buades, B. Coll, and J.-M. Morel, "Non-Local Means Denoising," Image Processing On Line, vol. 1, 2011.
[19] L. Fei-Fei, R. Fergus, and P. Perona, "Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories," Computer Vision and Image Understanding, vol. 106, no. 1, pp. 59–70, 2007.