HEp-2 Cell Image Classification with Deep Convolutional Neural Networks

04/10/2015 · Zhimin Gao et al., University of Wollongong

Efficient Human Epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper presents an automatic framework for this classification task, utilizing deep convolutional neural networks (CNNs), which have recently attracted intensive attention in visual recognition. The paper elaborates the important components of this framework, discusses multiple key factors that impact the efficiency of training a deep CNN, and systematically compares this framework with well-established image classification models in the literature. Experiments on benchmark datasets show that i) the proposed framework can effectively outperform existing models when data augmentation is properly applied; ii) our CNN-based framework demonstrates excellent adaptability across different datasets, which is highly desirable for classification under varying laboratory settings. Our system ranked highly in the cell image classification competition hosted by ICPR 2014.


1 Introduction

Indirect immunofluorescence (IIF) on Human Epithelial-2 (HEp-2) cells is a recommended methodology for diagnosing autoimmune diseases (Rigon et al., 2007). However, manual analysis of IIF images suffers from crucial limitations, such as the subjectivity of results, inconsistency across laboratories, and low efficiency in processing large numbers of cell images (Meroni and Schur, 2010; Foggia and Vento, 2013). To improve this situation, automatic and reliable cell image classification has become an active research topic.

Many methods have been proposed for this topic recently, especially during the HEp-2 cell classification competitions (Foggia and Vento, 2013; Foggia et al., 2014; Lovell et al., 2014). Most of them treat feature extraction and classification as two separate stages. For the former, a variety of hand-crafted features are adopted, including the local binary pattern (LBP) (He and Wang, 1990; Nosaka and Fukui, 2014; Theodorakopoulos et al., 2014b), the scale-invariant feature transform (SIFT) (Lowe, 2004), histograms of oriented gradients (Dalal and Triggs, 2005), the discrete cosine transform, and statistical features such as the gray-level co-occurrence matrix (Haralick et al., 1973) and the gray-level size zone matrix (Thibault et al., 2014). For the latter, nearest-neighbor classifiers, boosting, support vector machines (SVMs), and multiple-kernel SVMs have been employed (Wiliem et al., 2014). As a result, the performance of these classifiers relies heavily on the appropriateness of the empirically chosen hand-crafted features. Moreover, because features and classifiers are treated separately, they cannot work together to maximally identify and retain discriminative information.

Very recently, deep convolutional neural networks (CNNs) have consistently achieved outstanding performance on generic visual recognition tasks (Krizhevsky et al., 2012), and this has revived extensive research interest in CNN-based classification models (Razavian et al., 2014). CNNs process an input image in multiple stages to extract hierarchical, high-level feature representations. Many hand-crafted features and their corresponding classification pipelines can be regarded as approximations to, or special cases of, CNNs, since they share some basic building blocks. Nevertheless, these features and pipelines have to be carefully designed and integrated in order to preserve discriminative information. The excellent performance achieved by deep CNNs on generic visual recognition, and the high demand for full automation of HEp-2 cell image classification, motivate us to investigate CNNs for this classification task.

To this end, we propose an automatic feature extraction and classification framework for HEp-2 staining patterns based on deep CNNs (LeCun et al., 1998). This framework extracts features from the raw pixels of cell images and avoids using hand-crafted features. Feature representations for each kind of staining pattern are learned and optimized by training the multi-layer network, and the classification layer is jointly learned with this network to predict the probability of a cell image belonging to each class. The highly non-linear and high-capacity properties of multi-layer CNNs (LeCun et al., 2012) make them difficult to train, especially when the number of training samples is not sufficiently large. We explore multiple important aspects of this CNN-based classification system, including the network architecture, image preprocessing, hyper-parameter selection, and data augmentation, all of which are important for CNNs to achieve effective and reliable cell classification. Furthermore, we conduct a rigorous experimental comparison with two state-of-the-art hand-designed shallower image representation models, i.e., bag-of-features (BoF) and Fisher Vector (FV), to investigate the advantages and disadvantages of our CNN-based framework for cell image classification. Our system participated in the Contest on Performance Evaluation on Indirect Immunofluorescence Image Analysis Systems hosted by ICPR 2014 (the contest website is at http://i3a2014.unisa.it/?page_id=91) and won fourth place among the international teams.

The rest of the paper is organized as follows. Section 2 reviews the BoF, FV and deep CNN classification models. Section 3 presents our CNN-based framework for cell image classification and discusses a set of key factors. Section 4 reports the experimental investigation and comparison, and conclusions are drawn in Section 5.

We were invited by the ICPR 2014 contest organizers to report our system in a short workshop paper (Gao et al., 2014). This paper significantly extends that workshop paper in the following aspects: i) a more detailed description of our deep CNN-based classification framework for HEp-2 cell images is presented, and multiple key factors for effectively training a reliable deep CNN are discussed and experimentally demonstrated; ii) the role of image rotation as a data augmentation method in helping the deep CNN achieve robust representations for this classification task is investigated and analyzed; iii) systematic experimental comparisons of our CNN-based framework and state-of-the-art hand-designed classification models are conducted; iv) the excellent adaptability of our cell classification system to different laboratory settings is demonstrated by transferring the learned network across two datasets with an easy implementation, which makes our system attractive for practical clinical applications.

2 Related Work

2.1 Bag-of-features and Fisher Vector Models

The BoF model (Csurka et al., 2004) generally consists of four stages: local feature extraction, dictionary learning, feature encoding, and feature pooling. The dictionary is composed of a set of visual words describing the common visual patterns shared by local descriptors. The relationship between local descriptors and visual words is characterized by feature encoding, for which a variety of coding methods have been proposed in the literature (Liu et al., 2011; Wang et al., 2010; Jegou et al., 2010; Boiman et al., 2008). On top of these, spatial pyramid matching (SPM) (Lazebnik et al., 2006) is usually utilized to incorporate the spatial information of an image. The BoF model has been applied to staining pattern classification (Wiliem et al., 2014; Kong et al., 2014; Shen et al., 2014; Stoklasa et al., 2014), in which one or more of the above four stages are tailored to obtain better cell image representations for classification. Readers are referred to the review by Foggia et al. (2014) for more details.

In the past several years, the FV model has shown superior performance to the BoF model (Perronnin and Dance, 2007; Perronnin et al., 2010; Sánchez et al., 2013). Their main differences lie in dictionary learning and feature encoding. The dictionary in FV is generated by a probabilistic model, e.g., a Gaussian mixture model (GMM), that characterizes the distribution of local descriptors. Each local descriptor is then encoded by the first- and second-order gradients with respect to the model parameters. The FV model has also been applied to cell image classification (Faraki et al., 2014; Han et al., 2014).

2.2 Deep Convolutional Neural Networks

CNNs belong to a class of learning models inspired by the multi-stage processing of the visual cortex (Hubel and Wiesel, 1962). A pioneering work on CNNs was Fukushima's "neocognitron" (Fukushima, 1980). It has a structure similar to the hierarchical model of the visual nervous system discovered by Hubel and Wiesel (Hubel and Wiesel, 1959): each stage of the network imitates the functions of the simple and complex cells in the primary visual cortex. Later on, LeCun et al. (1998) extended the neocognitron by utilizing the backpropagation algorithm to train the model parameters of CNNs and achieved excellent performance in hand-written digit recognition.

With the advent of fast parallel computing, better regularization strategies, and large-scale datasets, deep CNN models have recently and significantly outperformed models with hand-crafted features on generic object classification, detection and retrieval (Razavian et al., 2014), as well as on other visual recognition tasks such as face verification (Taigman et al., 2014) and mitosis detection in breast cancer histopathology images (Veta et al., 2015). As for cell image classification, Malon et al. (Foggia and Vento, 2013) adopted a CNN to classify HEp-2 cell images, and Buyssens et al. (2013) designed a multiscale CNN for cytological pleural cancer cell classification. The CNN framework presented in this paper differs from these works in both the image preprocessing method and the network architecture. Moreover, our CNN performs better than the CNN reported in Foggia and Vento (2013) on the ICPR 2012 HEp-2 cell classification task.

Although CNNs have been applied to cell image classification in preliminary studies, the following issues have not been systematically investigated and thus remain unclear: i) what are the key issues when adopting deep CNNs for cell classification? ii) how does a CNN-based classification model perform when compared with the well-established classification models in the literature, especially the BoF and FV models? These issues are carefully investigated and addressed in this work.

3 Proposed Framework

The proposed deep CNN-based HEp-2 cell image classification framework consists of three components: image preprocessing, network training, and feature extraction and classification, which are elaborated in this section. Data augmentation, which plays an important role in this classification framework, is also described and analyzed.

Figure 1: The architecture of our deep convolutional neural network classification system for HEp-2 cell images. Each plane within the feature extraction stage denotes a feature map. Convolutional and max-pooling layers are abbreviated as C and P, respectively. A label of the form "C1:6@h×w" indicates that the layer is convolutional, is the first layer of the network, and comprises six feature maps, each of size h×w. The symbols and numbers above the feature maps of the other layers have analogous meanings, whereas F7:150 means that the layer is fully-connected, is the seventh layer of the network, and has 150 neurons. The words and numbers between two layers indicate the operation (convolution or max-pooling) applied to the feature maps of the previous layer to obtain those of the current layer, together with the size of each filter or pooling region.

3.1 Network Architecture

A proper selection of network architecture is crucial to CNNs. Usually, deep CNNs are composed of multiple convolutional layers interlaced with subsampling (pooling) layers, as shown in Fig. 1. Each layer outputs a set of two-dimensional feature maps, each of which represents a specific feature detected at all positions of the input. These feature maps are in turn used as the input of the next layer. Fully-connected layers are usually stacked on top of the network to conduct classification.

Our deep CNN shares the same basic architecture as the classical LeNet-5 (LeCun et al., 1998). Specifically, it contains eight layers: the first six are convolutional layers alternated with pooling layers, and the remaining two are fully-connected layers for classification.

3.1.1 Convolutional Layer

Let us assume this is the $l$-th layer, and let $n^{(l)}$ denote the number of feature maps at this layer. Accordingly, each feature map is denoted as $\mathbf{x}^{(l)}_j$, $j = 1, \dots, n^{(l)}$. This convolutional layer is parametrized by an array of two-dimensional filters $\mathbf{k}^{(l)}_{ij}$, each associating the $i$-th feature map in the $(l-1)$-th layer with the $j$-th feature map in the $l$-th layer, and by the biases $b^{(l)}_j$. Each filter acts as a feature detector that detects one particular kind of feature by convolving with every location of the input feature map. To obtain $\mathbf{x}^{(l)}_j$, each input feature map $\mathbf{x}^{(l-1)}_i$ is first convolved with the corresponding filter $\mathbf{k}^{(l)}_{ij}$. The results are summed and the bias $b^{(l)}_j$ is added. After that, a non-linear activation function $f(\cdot)$, which can be the sigmoid, tanh or rectified linear function (Krizhevsky et al., 2012), is applied in an element-wise manner. Mathematically, the feature maps of the $l$-th layer can be expressed as follows:

$$\mathbf{x}^{(l)}_j = f\left( \sum_{i=1}^{n^{(l-1)}} \mathbf{x}^{(l-1)}_i * \mathbf{k}^{(l)}_{ij} + b^{(l)}_j \right), \quad j = 1, \dots, n^{(l)}, \qquad (1)$$

where $*$ denotes the convolution operation.
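As an illustration, the per-map computation of Eq. (1) can be sketched in NumPy (a minimal, loop-based sketch for clarity, not the implementation used in our experiments):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution of a single map x with filter k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    kf = k[::-1, ::-1]  # flip the kernel (convolution vs. cross-correlation)
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf)
    return out

def conv_layer(maps_in, filters, biases, act=np.tanh):
    """Eq. (1): maps_in is a list of 2-D input maps; filters[i][j] links
    input map i to output map j; one bias per output map."""
    maps_out = []
    for j, b in enumerate(biases):
        s = sum(conv2d_valid(x, filters[i][j]) for i, x in enumerate(maps_in))
        maps_out.append(act(s + b))  # element-wise non-linearity
    return maps_out
```

A single 5×5 input map convolved with a 3×3 filter yields a 3×3 output map, matching the "valid" size reduction described above.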

3.1.2 Pooling Layer

A pooling layer down-samples a feature map. This will greatly reduce the computation of training a CNN and also introduces invariance to small translations of input images. Max-pooling or average-pooling is usually applied. The former selects the maximum activation over a small pooling region, while the latter uses the average activation over this region. Max-pooling generally performs better than average-pooling (Boureau et al., 2010).
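A non-overlapping max-pooling step of this kind can be sketched in a few lines of NumPy (an illustrative sketch; the pooling sizes used in our network are those shown in Fig. 1):

```python
import numpy as np

def max_pool(x, p):
    """Non-overlapping p-by-p max-pooling of a 2-D feature map.
    Assumes the map size is divisible by p."""
    h, w = x.shape
    # Group the map into p-by-p tiles, then take the max within each tile.
    return x.reshape(h // p, p, w // p, p).max(axis=(1, 3))
```

Average-pooling is obtained by replacing `.max(axis=(1, 3))` with `.mean(axis=(1, 3))`.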

3.1.3 Classification Layer

Classification layers usually involve one or more fully-connected layers at the top of a CNN. Our network contains two fully-connected layers. The first fully-connected layer (F7 in Fig. 1) takes the concatenation of all the feature maps of the sixth layer (denoted $\mathbf{a}$) as input. This layer is parametrized by weights $\mathbf{W}_7$ and biases $\mathbf{b}_7$, and its output is obtained as $\mathbf{h} = f(\mathbf{W}_7 \mathbf{a} + \mathbf{b}_7)$. The last fully-connected layer is the output layer, parametrized by weights $\mathbf{W}_8$ and biases $\mathbf{b}_8$. It contains $K$ neurons corresponding to the $K$ classes of staining patterns, and outputs the class probabilities via softmax regression as follows:

$$\mathbf{o} = \mathbf{W}_8 \mathbf{h} + \mathbf{b}_8, \qquad (2)$$
$$p_k = \frac{\exp(o_k)}{\sum_{j=1}^{K} \exp(o_j)}, \qquad (3)$$

where $p_k$ is the output probability of the $k$-th neuron.
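The fully-connected and softmax computations above can be sketched as follows (illustrative NumPy code; the max-subtraction inside the softmax is a standard numerical-stability device, not part of the formulation itself):

```python
import numpy as np

def fully_connected(a, W, b, act=np.tanh):
    """One fully-connected layer: non-linearity applied to W @ a + b."""
    return act(W @ a + b)

def softmax(o):
    """Softmax regression over the output-layer activations o (Eq. (3))."""
    e = np.exp(o - o.max())  # subtract the max for numerical stability
    return e / e.sum()
```

The probabilities always sum to one, as required for the cross-entropy training objective described later.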

The network architecture of our deep CNN is illustrated in Fig. 1. Specifically, the first layer convolves an input image with each of six filters with a stride of one pixel, then adds a bias to each result. We adopt the hyperbolic tangent function (LeCun et al., 1998) as the activation function. The second layer takes the output of the first layer as input and applies max-pooling over non-overlapping regions of each feature map. The third layer applies a further bank of filters to produce its feature maps, and the fourth layer again applies max-pooling over non-overlapping pooling regions. The fifth layer employs another bank of filters, and the sixth layer applies non-overlapping max-pooling to the output maps of the fifth layer. After that, the resulting feature maps are concatenated and passed to the first fully-connected layer, which contains 150 neurons.

When a cell image is fed into the network, the spatial resolution of each feature map decreases as features are extracted hierarchically from one layer to the next. The spatial information of each cell is captured by the feature maps through the spatial convolution and pooling operations, which is important for distinguishing different staining pattern types. The features obtained are invariant to small translations or shifts of the cell images, because the filter weights of the convolutional layers are shared across different regions of the input maps and max-pooling is robust to small variations.

3.2 Image Preprocessing

An appropriate image preprocessing method that takes the characteristics of the images into consideration is necessary for deep CNNs to obtain good internal feature representations and classification performance.

The brightness and contrast of the HEp-2 cell images provided by the ICPR 2014 contest (the ICPR2014 dataset for short) vary greatly. To reduce this variance and enhance contrast, we normalize each image by first subtracting the minimum intensity value of the image; the result is then divided by the difference between the maximum and minimum intensity values. Furthermore, each image is resized to a fixed size, approximately the average size of all the cell images, to guarantee a uniform scale for training. Examples of the six staining patterns in the ICPR2014 dataset and the corresponding preprocessed images are shown in Fig. 2. In addition, we use the preprocessed whole cell images to train our network instead of adopting a mask that keeps only the foreground within each cell, as Malon et al. do in (Foggia and Vento, 2013), because the mask information of each cell is usually unavailable in practice, and we find that the classification performance of our system is adversely affected by using cell masks.
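The min-max contrast normalization and resizing described above can be sketched as follows (an illustrative NumPy sketch; `resize_nearest` is a hypothetical nearest-neighbor helper standing in for whatever interpolation is actually used, and the target size is a free parameter):

```python
import numpy as np

def normalize_contrast(img):
    """Min-max contrast normalization to [0, 1], as described above."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def resize_nearest(img, size):
    """Hypothetical helper: nearest-neighbor resize to a size-by-size image."""
    h, w = img.shape
    ri = np.arange(size) * h // size  # source row index per output row
    ci = np.arange(size) * w // size  # source column index per output column
    return img[np.ix_(ri, ci)]
```

Applying `resize_nearest(normalize_contrast(img), s)` for a common side length `s` yields images of uniform scale and intensity range.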

Figure 2: Example cells of the six classes in the ICPR2014 dataset and their corresponding preprocessed and aligned images. There are four images for each cell: (a) the original image; (b) the mask of this cell image (not used for training the CNN); (c) the preprocessed image, obtained by contrast-normalizing and resizing the original image; (d) the aligned image, obtained by aligning the contrast-normalized image via PCA and then resizing.

3.3 Data Augmentation

Deep CNNs are high-capacity architectures with a large number of parameters to be learned, and it is difficult to train a CNN effectively when training images are insufficient. Data augmentation (Krizhevsky et al., 2012) is regarded as a simple and effective way to generate more training samples and to gain robustness against a variety of variances.

For data augmentation in the cell image classification, we identify the following two points: i) generating new training images by rotating existing ones can effectively boost the classification performance of the CNNs; ii) instead of merely increasing the robustness of the CNNs against the global orientation of a cell, the extra samples generated via such rotation-based augmentation help to show the intrinsic distribution of the staining patterns belonging to each cell category, which is a more important factor contributing to the improvement of the classification performance.

To demonstrate the first point, we repeatedly rotate each training image about its center by a fixed angle step. The newly generated images inherit the class label of the original training image, because rotating a cell image does not change its class. Doing so enlarges the original training set by the corresponding factor, and this augmented training set is used to train the CNN.
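A rotation-based augmentation pass of this kind can be sketched as follows (a simplified sketch restricted to 90-degree multiples so that no interpolation is needed; the actual framework rotates by finer angle steps about the image center):

```python
import numpy as np

def augment_by_rotation(images, labels, steps=4):
    """Enlarge the training set with rotated copies of each image.
    Only multiples of 90 degrees (np.rot90) are used here, as an
    interpolation-free stand-in for finer rotation steps."""
    aug_x, aug_y = [], []
    for img, lab in zip(images, labels):
        for k in range(steps):            # rotate by k * 90 degrees
            aug_x.append(np.rot90(img, k))
            aug_y.append(lab)             # rotation preserves the class label
    return aug_x, aug_y
```

With `steps=4` the training set grows by a factor of four; a finer angle step simply yields a larger factor.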

To demonstrate the second point, we pre-align each cell image to approximately have the same global orientation. In this way, if the global orientation variance is really the main factor affecting the training performance of the CNN, we shall observe some improvement by using the pre-aligned training set. Also, augmenting this pre-aligned training set with rotated images shall not lead to significantly better classification performance.

To investigate our hypothesis, we apply principal component analysis (PCA) to each cell's mask to obtain the principal direction of its shape. Each contrast-normalized cell is rotated so that this principal direction becomes vertical, and is then resized. Applying this process to all training cell images pre-aligns them. These operations are illustrated in the upper-left portion of Fig. 2, followed by more examples of cell images before and after alignment. After that, we use the pre-aligned training images to train the CNN and then classify test images that are pre-aligned in the same way.
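The PCA-based alignment step can be sketched as follows (illustrative NumPy code assuming a binary mask; the subsequent rotation of the image, which requires interpolation, is omitted):

```python
import numpy as np

def principal_direction(mask):
    """Principal axis of a binary cell mask via PCA on the foreground
    pixel coordinates. Returns the angle (radians) of the leading
    eigenvector; rotating the image so this direction becomes vertical
    pre-aligns the cell."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)                 # center the coordinates
    cov = pts.T @ pts / len(pts)            # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = vecs[:, np.argmax(vals)]            # direction of largest variance
    return np.arctan2(v[1], v[0])
```

For an elongated horizontal mask the returned angle is (a multiple of) zero, i.e., the principal direction lies along the x-axis as expected.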

We find that the CNN trained in this manner does not perform better than the CNN trained with the preprocessed but unaligned training images. However, when data augmentation is applied to the pre-aligned training set, the performance of the trained CNN increases greatly. This indicates that, for cell classification, adequately demonstrating the staining patterns within a cell image is more important than removing the global orientation variance. (A contrasting example is the human facial image, for which pre-alignment is generally helpful for recognition. This is because the patterns within a facial image, e.g., eyes, nose and mouth, have a rigid geometric association with the global orientation of the face, so pre-aligning faces with respect to their global orientations effectively aligns the patterns inside. This is not the case for cell images.) Detailed experimental results will be presented in Section 4.

3.4 Network Training

Due to the non-convex cost surface of CNNs, it is essential to select appropriate network training parameters, e.g., the learning rate, and regularization methods, e.g., weight decay and dropout (Hinton et al., 2012), to make the network converge quickly to good solutions.

Our deep CNN is parametrized by the weights and biases of the convolutional and fully-connected layers, $\{\mathbf{W}_l, \mathbf{b}_l\}$; the total number of trainable parameters is large. The network is trained by minimizing the cross-entropy between the output probability vector $\mathbf{p}$ and the binary class label vector $\mathbf{y}$, which has one non-zero entry "1" corresponding to the true class:

$$E = -\sum_{k=1}^{K} y_k \log p_k. \qquad (4)$$

The weights are initialized from a uniform distribution and the biases are initialized to zero. All of these trainable parameters are updated periodically via stochastic gradient descent (SGD) (LeCun et al., 1998) after evaluating the cost function. Let $w$ denote a weight of the $l$-th layer, i.e., an element of $\mathbf{W}_l$, and let $b$ denote a bias of the $l$-th layer (an element of $\mathbf{b}_l$). Each weight and bias is updated by the following rules:

$$w \leftarrow w - \eta \frac{\partial E}{\partial w}, \qquad b \leftarrow b - \eta \frac{\partial E}{\partial b}, \qquad (5)$$

where $\eta$ is the learning rate, and $\partial E / \partial w$ and $\partial E / \partial b$ are the partial derivatives of the cost function with respect to $w$ and $b$, respectively. They are calculated and updated by back-propagating the output error to the $l$-th layer (LeCun et al., 1989) after a number of training images (a mini-batch (Bengio, 2012)) is fed into the network.

To smooth the directions of gradient descent and make the network converge faster, we employ momentum (Bengio, 2012) to speed up learning by guiding the descent direction with past gradients. With momentum and weight decay, the update rules for $w$ and $b$ become:

$$v_w \leftarrow \mu v_w - \eta \left( \frac{\partial E}{\partial w} + \lambda w \right), \quad w \leftarrow w + v_w; \qquad v_b \leftarrow \mu v_b - \eta \frac{\partial E}{\partial b}, \quad b \leftarrow b + v_b, \qquad (6)$$

where $v_w$ and $v_b$ are the momentum variables for $w$ and $b$, respectively, and $\mu$ and $\lambda$ are the coefficients of the momentum term and the weight decay term; their optimal values are tuned experimentally, as shown in Section 4. When the training error rate stabilizes, the learning rate $\eta$ is reduced to achieve finer learning. The whole training process terminates after the classification error rates of both the training set and the validation set (which is held out from the given training images) plateau over some epochs.
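One parameter update under these rules can be sketched as follows (an illustrative scalar version; `lr`, `mu` and `wd` correspond to the learning rate, momentum coefficient and weight decay coefficient of Table 2):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9, wd=0.0005):
    """One SGD update with momentum and weight decay:
    v <- mu*v - lr*(grad + wd*w);  w <- w + v."""
    v = mu * v - lr * (grad + wd * w)
    return w + v, v
```

In practice `w`, `v` and `grad` are arrays of identical shape and the same expression applies element-wise; biases use the same rule with `wd=0`.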

In addition, another newly developed regularization strategy, dropout (Hinton et al., 2012), is also investigated in the network training. It randomly sets a fraction of the activations in the hidden layers to zero to force the hidden units to learn more independent and robust features that could generalize well and to prevent overfitting.

3.5 Feature Extraction and Classification

When classifying a test image, the same preprocessing and rotation described in Sections 3.2 and 3.3 are applied, producing $R$ rotated variants in total. Each variant is forward-propagated through the network, and the probability of the image belonging to each of the $K$ classes is obtained. To further improve the robustness of classification, we select four similar CNNs after the training process becomes stable and use them collectively for classification, following Krizhevsky et al. (2012). The predicted class $k^{*}$ is the one having the maximum output probability averaged over all of these probabilities, that is,

$$k^{*} = \arg\max_{k} \frac{1}{4R} \sum_{m=1}^{4} \sum_{r=1}^{R} p^{(m,r)}_k, \qquad (7)$$

where $p^{(m,r)}_k$ is the probability of class $k$ output by the $m$-th CNN for the $r$-th rotated variant.
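The averaged prediction over rotated variants and networks described above can be sketched as follows (illustrative NumPy code; the array layout is an assumption made for the sketch):

```python
import numpy as np

def predict(prob_sets):
    """prob_sets: array of shape (n_cnns, n_rotations, n_classes) holding
    the softmax outputs of every network on every rotated variant of one
    test image. The predicted class maximizes the probability averaged
    over all networks and rotations."""
    avg = np.asarray(prob_sets).mean(axis=(0, 1))
    return int(np.argmax(avg))
```

Averaging before the argmax lets confident votes from some variants outweigh uncertain ones, rather than taking a simple majority.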

4 Experimental Results

We evaluate our CNN classification system on the two datasets of the HEp-2 cell classification competitions held at ICPR 2014 and ICPR 2012. The evaluation criterion is the mean class accuracy (MCA) newly adopted by the ICPR 2014 competition. It is the average of the per-class accuracies (Lovell et al., 2014), defined as follows:

$$\mathrm{MCA} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{CCR}_k, \qquad (8)$$

where $\mathrm{CCR}_k$ is the classification accuracy of class $k$ and $K$ is the number of cell classes.

The average classification accuracy (ACA), the overall correct classification rate over all cell images, which was used by the previous competition, is also calculated for ease of comparison.
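The MCA and ACA criteria can be computed as follows (an illustrative NumPy sketch):

```python
import numpy as np

def mca(y_true, y_pred):
    """Mean class accuracy: per-class accuracies averaged over classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(accs))

def aca(y_true, y_pred):
    """Average classification accuracy: overall fraction correct."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```

Unlike ACA, MCA weights every class equally, so rare classes such as Golgi count as much as frequent ones.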

4.1 Introduction of the HEp-2 Cell Datasets

ICPR2014 cell dataset. This dataset contains the training cell images released by the competition organizers; the test set is reserved by the organizers and has not been published yet. The cell images are extracted from specimen images captured by a monochrome, high-dynamic-range, cooled microscopy camera fitted on a microscope with a Plan-Apochromat objective lens and an LED illumination source (Lovell et al., 2014). These specimen images have been automatically segmented using the DAPI channel and manually annotated by specialists. Each image belongs to one of six staining patterns: Homogeneous, Speckled, Nucleolar, Centromere, Nuclear Membrane and Golgi, as shown in the top row of Fig. 3.

ICPR2012 cell dataset. This dataset consists of cell images extracted from specimen images acquired with a fluorescence microscope coupled with a mercury vapor lamp and a digital camera (Foggia and Vento, 2013). The dataset is pre-partitioned into a training set and a test set. Each image belongs to one of six classes: Homogeneous, Coarse Speckled, Nucleolar, Centromere, Fine Speckled and Cytoplasmic, as shown in the bottom row of Fig. 3.

Comparing the two datasets shows that two of the six classes differ. Specifically, two sub-categories of the ICPR2012 dataset (Fine Speckled and Coarse Speckled) are merged into one category (Speckled) in the ICPR2014 dataset, and two staining patterns that appear less frequently in daily clinical cases, Golgi and Nuclear Membrane, are introduced in the ICPR2014 dataset to support the development of more realistic HEp-2 cell classification systems. Moreover, because the images in the two datasets are captured under different laboratory settings, a classification system that can easily be transferred from one dataset to the other is highly desirable.

Figure 3: Comparison of HEp-2 cell images from the ICPR2014 dataset (top row) and the ICPR2012 dataset (bottom row). The number below the name of each cell class is the total number of cells of that class in the training set of each dataset.

4.2 Experiments of Hyper-parameters Optimization

This experiment demonstrates the importance of properly tuning the hyper-parameters in the CNN-based system. We categorize the hyper-parameters into two groups: model-relevant and training-relevant, as listed in Tables 1 and 2.

Layer   | Layer Type      | Hyper-parameters
Layer 1 | Convolution     | filter size; number of feature maps; activation function: hyperbolic tangent
Layer 2 | Pooling         | pooling region size; pooling method: max-pooling
Layer 3 | Convolution     | filter size; number of feature maps; activation function: hyperbolic tangent
Layer 4 | Pooling         | pooling region size; pooling method: max-pooling
Layer 5 | Convolution     | filter size; number of feature maps; activation function: hyperbolic tangent
Layer 6 | Pooling         | pooling region size; pooling method: max-pooling
Layer 7 | Full connection | number of neurons; activation function: hyperbolic tangent
Table 1: Model-relevant hyper-parameters obtained (the corresponding sizes and counts are depicted in Fig. 1)
Hyper-parameter          | Value
Initial learning rate    | 0.01
Mini-batch size          | 113
Momentum coefficient     | 0.9
Weight decay coefficient | 0.0005
Dropout ratio            | 0
Table 2: Training-relevant hyper-parameters obtained

To tune these hyper-parameters, we randomly partition the cell images of the ICPR2014 dataset into three subsets for training, validation, and test. This partition is used in all experiments on the ICPR2014 dataset (multiple partitions could certainly be used when computational resources permit). Data augmentation is not used when tuning hyper-parameters. Following Bengio (2012), the hyper-parameters are tuned until the error rates of both the training set and the validation set become sufficiently small and stable. The hyper-parameters obtained by this tuning process are summarized in Tables 1 and 2.

We highlight that the training-relevant hyper-parameters can significantly affect the convergence of the cost function, the learning speed, and the generalization capability of the network. Their impacts are demonstrated via the learning curves of MCA on the training, validation and test sets shown in Figs. 4 to 8. In each figure, we focus on one hyper-parameter while the others are set to their optimal values in Table 2.

Fig. 4(a) indicates that when the learning rate is small, e.g., 0.001, the learning process is so slow that the MCA of the three sets has not stabilized within the epochs shown. Properly increasing the learning rate effectively improves learning efficiency, and the MCA stabilizes in fewer epochs, as shown in Fig. 4(b). At the same time, an over-large learning rate, e.g., 0.1, destabilizes the learning process and degrades the classification performance. Figs. 5, 6 and 7 demonstrate the impacts of mini-batch size, momentum and weight decay, respectively.

The comparison in Fig. 8 shows that the dropout strategy (Hinton et al., 2012) should be used cautiously. When dropout with a ratio of 0.5 (randomly setting activations to zero with probability 0.5) is applied to the first fully-connected layer of our CNN system, the learning process becomes slow and fluctuating on the ICPR2014 cell dataset. Removing dropout yields a more stable and faster learning process without overfitting on the test set, as well as better classification performance. This indicates that the neurons of the first fully-connected layer may have to work together to distinguish different staining patterns. In light of this, we do not employ dropout when training our network on the ICPR2014 dataset.

(a) Learning rate = 0.001
(b) Learning rate = 0.01
(c) Learning rate = 0.1
Figure 4: Demonstration of the impact of the learning rate. It shows that an over-small learning rate, e.g., 0.001, slows down the learning process, whereas an over-large learning rate, e.g., 0.1, destabilizes the learning process and degrades the classification performance. A better classification result can be obtained by properly tuning the learning rate, as shown in (b).
(a) Mini-batch size = 11
(b) Mini-batch size = 77
(c) Mini-batch size = 113
(d) Mini-batch size = 791
Figure 5: Demonstration of the impact of mini-batch size. It shows that when mini-batch size is unnecessarily small, the learning process becomes bumpy and does not lead to the best result. On the other hand, when the mini-batch size is too large, the learning process becomes less responsive and the learning efficiency is decreased.
(a) Momentum coefficient = 0
(b) Momentum coefficient = 0.8
(c) Momentum coefficient = 0.9
(d) Momentum coefficient = 0.97
Figure 6: Demonstration of the impact of momentum. It shows that using momentum can greatly accelerate the learning process. Meanwhile, a large momentum coefficient, e.g., 0.97, makes the descent direction dominated by the previous ones and causes oscillation at the initial stage; it also decreases the classification performance at the later stage.
(a) Weight decay coefficient = 0.00005
(b) Weight decay coefficient = 0.0005
(c) Weight decay coefficient = 0.005
Figure 7: Demonstration of the impact of weight decay. A smaller weight decay coefficient appears to be the safer choice, while a larger coefficient, e.g., 0.005, can destabilize the learning process.
(a) Dropout ratio = 0.5
(b) Dropout ratio = 0
Figure 8: Demonstration of the impact of dropout. The dropout strategy should be used cautiously: as seen in (a), the learning process becomes slow and fluctuating on the ICPR2014 cell dataset when dropout is applied. A better learning process is obtained in (b) after removing dropout.

In summary, among the hyper-parameters of a CNN, the learning rate, mini-batch size, momentum coefficient, and weight decay coefficient can significantly impact the training process, and they have to be carefully tuned before satisfactory classification performance is obtained. For our deep CNN system, with the hyper-parameters set as in Table 2, we achieve an MCA of % on the test set of the ICPR2014 dataset without using data augmentation.
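All four of these hyper-parameters enter a single mini-batch update rule. The sketch below is illustrative only (the toy quadratic objective and variable names are ours, not the paper's); it shows the standard SGD update with momentum and weight decay that the figures above are probing:

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """One mini-batch update combining the hyper-parameters discussed above:
    v <- momentum*v - lr*(grad + weight_decay*w);  w <- w + v."""
    velocity = momentum * velocity - lr * (grad + weight_decay * w)
    return w + velocity, velocity

# Minimise f(w) = 0.5*||w||^2 (whose gradient is w) from a fixed start:
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_step(w, grad=w, velocity=v)
```

With a well-chosen learning rate and momentum the iterates converge; an over-large learning rate or momentum coefficient makes the same recursion oscillate or diverge, which is exactly the behaviour seen in Figs. 4 and 6.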

4.3 Experiments on Data Augmentation

This experiment demonstrates the two points presented in Section 3.3, recapped as follows: i) the performance of the CNN can be greatly boosted by generating new training images via rotation; ii) the extra samples generated by such rotation-based augmentation enrich the observed staining patterns of each cell category, and this enrichment contributes more to the improvement in classification performance than the increased robustness of the CNN to the global orientation of cells.

Effectiveness of data augmentation. We augment the training set by rotating each cell image with three different angle steps, expanding the training set by the corresponding factors, and use each expanded set to train the CNNs. To improve the robustness of our system, we select four CNNs, corresponding to four epochs after the network learning becomes stable, as in Krizhevsky et al. (2012). (This strategy serves as a form of model averaging; a different number of CNNs, e.g., 3 or 5, may be chosen to trade off computational expense against performance, and leads to similar classification accuracy in our experiments.) A test image goes through the same rotation process as the training images and is jointly classified by the four CNNs as in Eq.(7). This system is named "CNN". As shown in the first row of Table 3, the MCA is significantly improved (by more than percentage points) from "No data augmentation" to augmentation with the largest rotation angle step. Furthermore, applying a smaller angle step to generate more training data pushes the MCA even higher, reaching %. Similar results can be observed for the ACA values. These consistent and continuous improvements demonstrate the effectiveness and efficiency of data augmentation for cell image classification.
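The joint prediction described above can be sketched as follows. Since the paper's exact rotation angles are not reproduced here, this toy example uses 90-degree rotations and two stand-in "snapshot" predictors; `predict_fn` is a hypothetical hook returning a class-probability vector, in the spirit of Eq.(7):

```python
import numpy as np

def rotate90(img, k):
    """Rotate an image by k*90 degrees (a stand-in for the paper's angle steps)."""
    return np.rot90(img, k)

def predict_with_augmentation(image, networks, angles_k, predict_fn):
    """Joint classification: feed every rotated copy of the test image to
    every selected CNN snapshot and average the class probabilities."""
    probs = [predict_fn(net, rotate90(image, k))
             for net in networks for k in angles_k]
    return np.mean(probs, axis=0)

# Demo with two toy "snapshots" that disagree on the decision:
net_a = lambda img: np.array([0.6, 0.4])
net_b = lambda img: np.array([0.2, 0.8])
predict = lambda net, img: net(img)
img = np.zeros((4, 4))
avg = predict_with_augmentation(img, [net_a, net_b], [0, 1, 2, 3], predict)
# avg == [0.4, 0.6], so the joint decision is class 1
```

Averaging over rotated copies and several epoch snapshots smooths out both orientation sensitivity and the noise of any single training checkpoint.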

Method       Accuracy        No data        Rotation-based augmentation
             (on test set)   augmentation   (angle step, largest -> smallest)
CNN          MCA (%)         88.58          95.99   96.71   96.76
             ACA (%)         89.04          96.51   97.10   97.24
CNN-Align    MCA (%)         88.86          95.13   96.50   96.52
             ACA (%)         88.71          95.33   96.84   96.84
Table 3: Classification accuracy of our deep CNN on the ICPR2014 dataset

Data augmentation vs. pre-alignment. To gain more insight into the rotation-based data augmentation, we pre-align all the cell images with PCA, as described in Section 3.3, before training the CNNs. We call this method "CNN-Align". Two experiments are conducted: i) using only these aligned images to train the CNNs, without data augmentation; and ii) for comparison, further rotating each aligned training image with the same three angle steps as before and training on the augmented set. As before, augmentation (or no augmentation) is equally applied to the test images.
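The idea of PCA-based pre-alignment can be illustrated with a short sketch (our own simplification; the exact procedure of Section 3.3 may differ in details): the leading eigenvector of the covariance of a cell's foreground pixel coordinates gives the cell's principal axis, and rotating the image by the negative of that angle brings all cells to a canonical orientation.

```python
import numpy as np

def principal_angle(binary_mask):
    """Orientation (in degrees) of a cell's foreground pixels via PCA:
    the eigenvector of the coordinate covariance with the largest
    eigenvalue defines the principal axis."""
    ys, xs = np.nonzero(binary_mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)                 # centre the point cloud
    cov = coords.T @ coords / len(coords)
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]       # leading eigenvector
    return np.degrees(np.arctan2(vy, vx))

# An elongated diagonal blob should have a principal axis near 45 degrees:
mask = np.eye(10, dtype=bool)
angle = principal_angle(mask)
```

Note the angle is only defined up to 180 degrees (an axis, not a direction), which is one reason such alignment only removes global orientation and cannot enrich the within-cell pattern variation the way rotation-based augmentation does.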

As shown in Table 3, when no augmentation is performed, CNN-Align achieves no improvement over CNN, indicating that pre-alignment alone does not help here. In contrast, when the training data are augmented by rotation (even with the largest angle step), CNN-Align improves significantly. This sharp change clearly demonstrates that, through rotation-based augmentation, the network can access more examples of the diverse staining patterns within cell images. This is a more important factor in the performance improvement than pre-alignment, which only tackles the global orientation variance of cells.

The features (filters) learned by the first and second convolutional layers of one selected CNN snapshot trained with rotated cell images are depicted in Fig. 9. The filters of the first convolutional layer are stain-like texture detectors. Some of the second-convolutional-layer filters are edge-like detectors, while most are also stain-like texture extractors.

(a) 1st convolutional layer features
(b) 2nd convolutional layer features
Figure 9: The features learned by the first and second convolutional layers. In general, most of the filters are stain-like texture detectors, and some are edge-like extractors.
Figure 10: Confusion matrix (%) of our best CNN (trained with rotation-based augmentation).

In addition, the confusion matrix of the best CNN (trained with the smallest rotation angle step) is shown in Fig. 10. The overall classification performance is very promising. The staining patterns Nucleolar and Nuclear Membrane obtain the highest classification accuracy, meaning that they are well separated from the others. The maximum misclassification rate occurs for Golgi cells. They are easily misclassified as Nucleolar cells, because both patterns consist of a few large dots within the cells (see misclassification examples in Fig. 11). Golgi can also be confused with Nuclear Membrane: when the large dots within Golgi cells lie at the cell edge, they resemble the ring-like edges of Nuclear Membrane cells. In addition, Speckled cells are easily misclassified as Homogeneous cells, probably because densely distributed speckles are the main signature of both patterns. Misclassification examples of these staining patterns are shown in Fig. 11.
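For reference, both accuracy criteria reported throughout can be read off the confusion matrix: ACA is the overall fraction of correctly classified cells, while MCA is the mean of the per-class accuracies (the normalised diagonal). The sketch below uses our own toy labels:

```python
import numpy as np

def confusion_and_scores(y_true, y_pred, n_classes):
    """Confusion matrix plus the two accuracy criteria: MCA (mean of
    per-class accuracies) and ACA (overall fraction correct)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    per_class = cm.diagonal() / cm.sum(axis=1)
    mca = float(per_class.mean())
    aca = float(cm.diagonal().sum() / cm.sum())
    return cm, mca, aca

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1]
cm, mca, aca = confusion_and_scores(y_true, y_pred, n_classes=2)
# per-class accuracies are 3/4 and 2/2, so MCA = 0.875 while ACA = 5/6
```

The two criteria differ when the classes are imbalanced, which is why the contest reports MCA in addition to ACA.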

Figure 11: Misclassification examples of the three highest misclassification rates in the confusion matrix of Fig. 10. Every two rows form a group, and the first row shows cells that are misclassified to the cell type of the second row.

4.4 Comparison with the BoF and Fisher Vector Models

Experimental setting. To ensure a fair comparison, the same image preprocessing used in our CNN model is applied in both models. For each cell image, SIFT descriptors are extracted from densely sampled patches with a stride of two pixels. The visual dictionary is generated by applying k-means clustering to the descriptors extracted from the training images. Local soft-assignment coding (LSC) (Van Gemert et al., 2008; Liu et al., 2011) is employed to encode the SIFT descriptors. SPM is used to partition each image into pyramid regions at several levels, and max-pooling is applied to extract a representation from each region.
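The LSC step can be sketched as follows (an illustration under our own assumptions; in particular the smoothing parameter `beta` and the toy dictionary are ours, not values from the paper). Each descriptor is softly assigned only to its k nearest visual words, and max-pooling over a region's codes gives that region's representation:

```python
import numpy as np

def lsc_encode(descriptor, dictionary, k=5, beta=10.0):
    """Local soft-assignment coding (Liu et al., 2011), sketched: weight
    only the k nearest visual words by exp(-beta * d^2), normalised to
    sum to one; all other entries stay zero."""
    d2 = ((dictionary - descriptor) ** 2).sum(axis=1)   # squared distances
    nearest = np.argsort(d2)[:k]
    w = np.exp(-beta * d2[nearest])
    code = np.zeros(len(dictionary))
    code[nearest] = w / w.sum()
    return code

# Max-pooling the codes of all descriptors in one SPM region:
rng = np.random.default_rng(1)
dictionary = rng.random((20, 8))     # 20 visual words, 8-D toy descriptors
descs = rng.random((50, 8))          # descriptors falling in one region
region_repr = np.max([lsc_encode(d, dictionary) for d in descs], axis=0)
```

Concatenating the pooled vectors of all SPM regions yields the final BoF image representation fed to the linear SVM.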

A similar setting is applied to the FV model. In addition, the 128-dimensional SIFT descriptors are decorrelated and reduced in dimensionality by PCA, as in Sánchez et al. (2013). A GMM is then estimated to represent the visual dictionary. Afterwards, each PCA-reduced SIFT descriptor is encoded with the improved Fisher encoding (Perronnin et al., 2010), in which the signed square-root and l2-normalization are applied to the coding vector. SPM with four regions is adopted (Sánchez et al., 2013). Following the literature, a multi-class linear SVM classifier is used in both the BoF and FV models. In our implementation of BoF and FV, the publicly available VLFeat toolbox (Vedaldi and Fulkerson, 2010) is used.
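The two normalisations of the improved Fisher encoding are simple to write down; the following sketch (with a toy vector of ours) shows the signed square-root (power normalisation with exponent 0.5) followed by l2-normalisation, as in Perronnin et al. (2010):

```python
import numpy as np

def improved_fv_normalise(fv):
    """Improved Fisher encoding post-processing: signed square-root,
    then l2-normalisation to unit length."""
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    norm = np.linalg.norm(fv)
    return fv / norm if norm > 0 else fv

v = np.array([4.0, -9.0, 0.0, 1.0])
u = improved_fv_normalise(v)
# u is [2, -3, 0, 1] / sqrt(14), a unit-length vector
```

The signed square-root dampens bursty descriptor statistics, and the l2-normalisation makes the vectors comparable across images before the linear SVM.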

Parameter setting. There are two primary parameters in the BoF and FV models: the patch size and the dictionary size (or, equivalently, the number of components of the GMM in the FV model). We tune these parameters by five-fold cross-validation on the union of the training and validation sets, with MCA as the criterion. A range of candidate patch sizes and dictionary sizes (and, for FV, candidate numbers of Gaussian components) is evaluated. Through cross-validation, a patch size and dictionary size are selected for the BoF model; with SPM, this results in a high-dimensional representation for each cell image. For the FV model, a patch size and number of GMM components are likewise chosen, which with SPM also leads to a high-dimensional representation for each image.
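The tuning loop amounts to scoring each candidate setting by its mean MCA over five folds. The sketch below is a generic illustration; `train_fn` and `eval_mca` are hypothetical hooks standing in for the BoF/FV pipeline, and the majority-class "model" in the demo is a toy:

```python
import numpy as np

def five_fold_mca(train_fn, eval_mca, X, y, params, n_folds=5, seed=0):
    """Score one candidate parameter setting: average the MCA over
    n_folds folds of the combined training+validation data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    scores = []
    for i in range(n_folds):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        model = train_fn(X[trn], y[trn], **params)
        scores.append(eval_mca(model, X[val], y[val]))
    return float(np.mean(scores))

# Toy demo: the "model" is just the majority training class.
def train_fn(X, y, **params):
    return int(np.bincount(y).argmax())

def eval_mca(model, X, y):
    return float(np.mean([(y[y == k] == model).mean() for k in np.unique(y)]))

X = np.zeros((20, 3))
y = np.array([0, 1] * 10)
score = five_fold_mca(train_fn, eval_mca, X, y, params={})
```

The parameter setting with the highest averaged MCA is then used to retrain on the full training data.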

Comparison results. The BoF, FV and CNN models are compared on the same training and test sets, both with and without data augmentation. For fairness, when data augmentation is used, the visual dictionary in the BoF and FV models is built from the augmented training set. Also, to keep consistent with the setting of our deep CNN system, each test image is equally augmented in this case and its label is predicted in a way similar to Eq.(7), except that the probabilities are replaced by the decision values of the linear SVM classifier.

As shown in Table 4, FV is consistently better than BoF, regardless of whether data augmentation is applied, which agrees well with the literature. Furthermore, both BoF and FV benefit considerably from data augmentation, with an average performance increase of about 4 percentage points. Compared with BoF and FV, the CNN system shows slightly lower performance when there is no augmentation (88.58% vs 89.83% for BoF and 91.60% for FV in MCA). However, the CNN outperforms both BoF and FV once data augmentation is applied. Specifically, the highest MCA, 96.76%, is obtained by our CNN, while BoF and FV reach at most 94.23% and 95.73%, respectively. A similar pattern can be observed in the ACA values. These results suggest that i) when training samples are insufficient, the high-capacity CNN is more difficult to train than shallower, hand-designed models such as BoF and FV; and ii) by properly using data augmentation to generate more training data, the CNN can be better trained and is able to achieve better performance than the BoF and FV models.

Accuracy        Method   No data        Rotation-based augmentation
(on test set)            augmentation   (angle step, largest -> smallest)
MCA (%)         BoF      89.83          94.23   93.98   94.14
                FV       91.60          95.41   95.73   95.53
                CNN      88.58          95.99   96.71   96.76
ACA (%)         BoF      90.70          94.30   94.19   94.38
                FV       92.65          95.78   96.07   95.81
                CNN      89.04          96.51   97.10   97.24
Table 4: Comparison of classification accuracy among the BoF, FV and our deep CNN methods on the ICPR2014 dataset

4.5 Experiments on the Adaptability across Datasets

As previously mentioned, HEp-2 cell image classification varies with the laboratory settings, the types of staining patterns involved, and the size of the dataset. Such differences can be clearly seen between the ICPR2014 and ICPR2012 datasets. It is therefore highly desirable that a cell classification system trained on one dataset can be conveniently adapted to another. This not only improves the efficiency of system building, but also takes full advantage of the image data in different datasets. To demonstrate this for our CNN-based system, we compare the CNN trained purely on the ICPR2012 dataset (called CNN-Standard for short) with another CNN, pre-trained on the ICPR2014 dataset and then adapted to the ICPR2012 dataset (called CNN-Finetuning).

Following the previous experimental settings, CNN-Standard is trained with the training images predefined in the ICPR2012 dataset. Only the green channel of each image is kept, and the same preprocessing as in Section 3.2 is performed. The dropout strategy is used here, because it benefits network training and classification performance on this small dataset. CNN-Standard is trained for a fixed number of epochs and then used to classify the predefined test images following Eq.(7).

To train CNN-Finetuning, we first select a basic CNN system learned on the ICPR2014 dataset: a snapshot obtained after the system is trained with a rotation-augmented training set of ICPR2014. This basic system is then fine-tuned with the training set of the ICPR2012 dataset, with or without data augmentation. All trainable network parameters of all layers are updated during this fine-tuning process. To demonstrate the efficiency, we fine-tune the basic system for only a few epochs, which takes significantly less time than the full training of CNN-Standard.
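The warm-start idea behind CNN-Finetuning can be sketched as follows. This is a simplification under our own assumptions: parameters are modelled as a plain `{name: array}` dict, the layer names and sizes are toy values, and only the output layer is re-initialised (since the class sets of the two datasets differ) before all layers continue training on the new data:

```python
import numpy as np

def init_finetune(pretrained, n_new_classes, rng):
    """Warm-start for fine-tuning: copy every pre-trained layer, but
    re-initialise the output layer for the new dataset's classes.
    All layers then keep training on the new data."""
    params = {name: w.copy() for name, w in pretrained.items()}
    fan_in = pretrained["output"].shape[0]
    params["output"] = rng.normal(0.0, 0.01, (fan_in, n_new_classes))
    return params

rng = np.random.default_rng(0)
pretrained = {"conv1": np.ones((5, 5)), "output": np.ones((64, 7))}
finetuned = init_finetune(pretrained, n_new_classes=6, rng=rng)
```

Because the copied layers already encode useful staining-pattern features, only a short training run on the new dataset is needed, which is exactly the behaviour reported for CNN-Finetuning.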

Figure 12: The MCA on the test set obtained by CNN-Finetuning at each fine-tuning epoch. Data augmentation with various angle steps is investigated.

The evolution of the test-set MCA over the fine-tuning epochs is plotted in Fig. 12. As shown by the "No rotation" line, CNN-Finetuning does not work well at the beginning. Nevertheless, it catches up quickly and reaches satisfactory performance within a few epochs. Furthermore, the adaptation stage is significantly shortened by applying data augmentation to the small ICPR2012 training set. These results demonstrate the high adaptability of our CNN-based system, especially considering that two classes of staining patterns differ across these datasets. A comparison of CNN-Standard and CNN-Finetuning is shown in Table 5. It is interesting to note that CNN-Finetuning consistently outperforms CNN-Standard, even though it is fine-tuned for only a few epochs. We attribute its superiority to the good initialization obtained from the training process on the ICPR2014 dataset. Based on these results, we believe our CNN-based system is the better option for practical applications.

Accuracy        Method           No data        Rotation-based augmentation
(on test set)                    augmentation   (angle step, largest -> smallest)
MCA (%)         CNN-Standard     63.1           72.4   72.4   73.2
                CNN-Finetuning   74.5           76.3   76.2   74.9
ACA (%)         CNN-Standard     64.3           70.2   70.0   70.1
                CNN-Finetuning   72.9           74.8   74.7   73.3
Table 5: Classification accuracy of our CNN-based system on the ICPR2012 dataset

Finally, we compare our CNN-Finetuning with the other methods reported in the literature in Table 6. As seen, it outperforms the best-performing method of the ICPR2012 contest as well as the CNN entered in that contest. For the contest CNN, an area of the green channel centered at the largest connected component of each cell is taken via the mask and then normalized by percentile-based intensity mapping. The architecture of that CNN is composed of two sequences of convolution, absolute value rectification and subtractive normalization, one average-pooling layer, one max-pooling layer and one fully-connected layer (see the contest report at http://mivia.unisa.it/hep2contest/HEp2-Contest_Report.pdf for details), which is also quite different from our architecture. The better performance of our CNN may stem from these differences as well as from our effective data augmentation. Our CNN-Finetuning is only slightly inferior to the method of Theodorakopoulos et al. (2014b), which combines two kinds of hand-crafted features, the distribution of SIFT and gradient-oriented co-occurrence LBP, and creates a dissimilarity representation of an image from them.

Method                                                          Average classification accuracy (ACA)
2012 contest best-performing method (Foggia and Vento, 2013)    68.7%
2012 contest CNN (Foggia and Vento, 2013)                       59.8%
Nosaka and Fukui (2014)                                         68.5%
Shen et al. (2014)                                              74.4%
Faraki et al. (2014)                                            70.2%
Larsen et al. (2014)                                            71.5%
Theodorakopoulos et al. (2014b)                                 75.1%
Our CNN-Finetuning                                              74.8%
Table 6: Comparison with other methods on the ICPR2012 dataset

In addition, it is worth mentioning that in the ICPR2014 contest (Lovell et al., 2014), the three methods that perform better than or comparably to our deep CNN system under the MCA criterion are all built on two-stage frameworks of hand-designed feature representation followed by classification. The top-ranked method utilizes multi-scale local descriptors of multiple types (Manivannan et al., 2014); the second-ranked method adopts a hand-crafted rotation-invariant dense-scale local descriptor (Gragnaniello et al., 2014); and the third combines morphological features with different local texture features (Theodorakopoulos et al., 2014a). In contrast, our CNN system generates discriminative features directly from raw pixels by utilizing class label information, and jointly learns the classifier in a single architecture without learning extra dictionaries as these methods do.

4.6 Discussion on Computational Issues

For the CNN-based classification system, training the network is the most time-consuming step in the whole pipeline. However, this process can be greatly accelerated with GPU programming. Also, as shown above, an existing CNN-based system can be efficiently transferred to a new but related task via a short training process. Once the networks are trained, a test cell image only needs to pass through the four networks, and is classified within seconds in total with a Matlab implementation on a desktop computer.

For the BoF and FV models, building the visual dictionary or the GMM is computationally intensive, especially when there are a large number of training images, e.g., due to data augmentation. In our implementation, building the dictionary and the GMM each takes multiple days when the training set of the ICPR2014 dataset is augmented by rotation. Also, a large dictionary in the BoF model slows down the encoding process considerably in our experiment. Although encoding is faster in the FV model, it still takes about three seconds per image. In addition, SPM is usually needed to attain better classification performance, in which case the dimensionality of the resulting image representation is much higher than that in the CNN-based system.

5 Conclusion

This paper proposes an automatic framework for HEp-2 cell staining pattern classification with deep convolutional neural networks. We give a detailed description of the various aspects of this framework and carefully discuss a number of key issues that affect its classification performance. An extensive experimental study on two benchmark datasets demonstrates i) the advantages of our framework over well-established image classification models on cell image classification; ii) the importance and effectiveness of data augmentation, especially when training images are insufficient; and iii) the desirable adaptability of our CNN-based system across different datasets, which makes the system attractive for practical tasks. Much future work can be done to further improve the proposed system. In particular, CNNs trained on the large-scale generic image benchmark ImageNet (Deng et al., 2010) have recently prevailed on many generic visual recognition tasks. We would like to explore the effectiveness of features generated by such a CNN for HEp-2 cell images, and the adaptation of such a CNN to cell image classification. These issues will be of significance considering the substantial differences between generic images and HEp-2 cell images.

References

  • Bengio (2012) Bengio, Y., 2012. Practical recommendations for gradient-based training of deep architectures, in: Neural Networks: Tricks of the Trade. Springer, pp. 437–478.
  • Boiman et al. (2008) Boiman, O., Shechtman, E., Irani, M., 2008. In defense of nearest-neighbor based image classification, in: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, IEEE. pp. 1–8.

  • Boureau et al. (2010) Boureau, Y.L., Ponce, J., LeCun, Y., 2010. A theoretical analysis of feature pooling in visual recognition, in: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118.

  • Buyssens et al. (2013) Buyssens, P., Elmoataz, A., Lézoray, O., 2013. Multiscale convolutional neural networks for vision–based classification of cells, in: Computer Vision–ACCV 2012. Springer, pp. 342–352.
  • Csurka et al. (2004) Csurka, G., Dance, C., Fan, L., Willamowski, J., Bray, C., 2004. Visual categorization with bags of keypoints, in: Workshop on statistical learning in computer vision, ECCV, pp. 1–2.
  • Dalal and Triggs (2005) Dalal, N., Triggs, B., 2005. Histograms of oriented gradients for human detection, in: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, pp. 886–893 vol. 1.
  • Deng et al. (2010) Deng, J., Berg, A.C., Li, K., Fei-Fei, L., 2010. What does classifying more than 10,000 image categories tell us?, in: Computer Vision–ECCV 2010. Springer, pp. 71–84.
  • Faraki et al. (2014) Faraki, M., Harandi, M.T., Wiliem, A., Lovell, B.C., 2014. Fisher tensors for classifying human epithelial cells. Pattern Recognition 47, 2348–2359.
  • Foggia et al. (2014) Foggia, P., Percannella, G., Saggese, A., Vento, M., 2014. Pattern recognition in stained hep-2 cells: Where are we now? Pattern Recognition 47, 2305–2314.
  • Foggia and Vento (2013) Foggia, P., Percannella, G., Soda, P., Vento, M., 2013. Benchmarking hep-2 cells classification methods. Medical Imaging, IEEE Transactions on 32, 1878–1889.
  • Fukushima (1980) Fukushima, K., 1980. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36, 193–202.
  • Gao et al. (2014) Gao, Z., Zhang, J., Zhou, L., Wang, L., 2014. Hep-2 cell image classification with convolutional neural networks, in: Pattern Recognition Techniques for Indirect Immunofluorescence Images (I3A), 2014 1st Workshop on, pp. 24–28.
  • Gragnaniello et al. (2014) Gragnaniello, D., Sansone, C., Verdoliva, L., 2014. Biologically-inspired dense local descriptor for indirect immunofluorescence image classification, in: Pattern Recognition Techniques for Indirect Immunofluorescence Images (I3A), 2014 1st Workshop on, pp. 1–5.
  • Han et al. (2014) Han, X.H., Wang, J., Xu, G., Chen, Y.W., 2014. High-order statistics of microtexton for hep-2 staining pattern classification. Biomedical Engineering, IEEE Transactions on 61, 2223–2234.
  • Haralick et al. (1973) Haralick, R., Shanmugam, K., Dinstein, I., 1973. Textural features for image classification. Systems, Man and Cybernetics, IEEE Transactions on SMC-3, 610–621.
  • He and Wang (1990) He, D.C., Wang, L., 1990. Texture unit, texture spectrum, and texture analysis. Geoscience and Remote Sensing, IEEE Transactions on 28, 509–512.
  • Hinton et al. (2012) Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R., 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 .
  • Hubel and Wiesel (1959) Hubel, D.H., Wiesel, T.N., 1959. Receptive fields of single neurones in the cat’s striate cortex. The Journal of physiology 148, 574–591.
  • Hubel and Wiesel (1962) Hubel, D.H., Wiesel, T.N., 1962. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of physiology 160, 106.
  • Jegou et al. (2010) Jegou, H., Douze, M., Schmid, C., Perez, P., 2010. Aggregating local descriptors into a compact image representation, in: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 3304–3311.
  • Kong et al. (2014) Kong, X., Li, K., Cao, J., Yang, Q., Wenyin, L., 2014. Hep-2 cell pattern classification with discriminative dictionary learning. Pattern Recognition 47, 2379–2388.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks., in: NIPS, p. 4.
  • Larsen et al. (2014) Larsen, A., Vestergaard, J., Larsen, R., 2014. Hep-2 cell classification using shape index histograms with donut-shaped spatial pooling. Medical Imaging, IEEE Transactions on 33, 1573–1580.
  • Lazebnik et al. (2006) Lazebnik, S., Schmid, C., Ponce, J., 2006. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories, in: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, IEEE. pp. 2169–2178.
  • LeCun et al. (1989) LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D., 1989. Backpropagation applied to handwritten zip code recognition. Neural computation 1, 541–551.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 2278–2324.
  • LeCun et al. (2012) LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.R., 2012. Efficient backprop, in: Neural networks: Tricks of the trade. Springer, pp. 9–48.
  • Liu et al. (2011) Liu, L., Wang, L., Liu, X., 2011. In defense of soft-assignment coding, in: Computer Vision (ICCV), 2011 IEEE International Conference on, IEEE. pp. 2486–2493.
  • Lovell et al. (2014) Lovell, B.C., Percannella, G., Vento, M., Wiliem, A., 2014. Performance evaluation of indirect immunofluorescence image analysis systems. ICPR 2014 URL: http://i3a2014.unisa.it/.
  • Lowe (2004) Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International journal of computer vision 60, 91–110.
  • Manivannan et al. (2014) Manivannan, S., Li, W., Akbar, S., Wang, R., Zhang, J., McKenna, S., 2014. Hep-2 cell classification using multi-resolution local patterns and ensemble svms, in: Pattern Recognition Techniques for Indirect Immunofluorescence Images (I3A), 2014 1st Workshop on, pp. 37–40.
  • Meroni and Schur (2010) Meroni, P.L., Schur, P.H., 2010. Ana screening: an old test with new recommendations. Annals of the rheumatic diseases 69, 1420–1422.
  • Nosaka and Fukui (2014) Nosaka, R., Fukui, K., 2014. Hep-2 cell classification using rotation invariant co-occurrence among local binary patterns. Pattern Recognition 47, 2428–2436.
  • Perronnin and Dance (2007) Perronnin, F., Dance, C., 2007. Fisher kernels on visual vocabularies for image categorization, in: Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, IEEE. pp. 1–8.
  • Perronnin et al. (2010) Perronnin, F., Sánchez, J., Mensink, T., 2010. Improving the fisher kernel for large-scale image classification, in: Computer Vision–ECCV 2010. Springer, pp. 143–156.
  • Razavian et al. (2014) Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S., 2014. Cnn features off-the-shelf: An astounding baseline for recognition, in: Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp. 512–519.
  • Rigon et al. (2007) Rigon, A., Soda, P., Zennaro, D., Iannello, G., Afeltra, A., 2007. Indirect immunofluorescence in autoimmune diseases: assessment of digital images for diagnostic purpose. Cytometry Part B: Clinical Cytometry 72, 472–477.
  • Sánchez et al. (2013) Sánchez, J., Perronnin, F., Mensink, T., Verbeek, J., 2013. Image classification with the fisher vector: Theory and practice. International journal of computer vision 105, 222–245.
  • Shen et al. (2014) Shen, L., Lin, J., Wu, S., Yu, S., 2014. Hep-2 image classification using intensity order pooling based features and bag of words. Pattern Recognition 47, 2419–2427.
  • Stoklasa et al. (2014) Stoklasa, R., Majtner, T., Svoboda, D., 2014. Efficient k-nn based hep-2 cells classifier. Pattern Recognition 47, 2409–2418.
  • Taigman et al. (2014) Taigman, Y., Yang, M., Ranzato, M., Wolf, L., 2014. Deepface: Closing the gap to human-level performance in face verification, in: Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 1701–1708.
  • Theodorakopoulos et al. (2014a) Theodorakopoulos, I., Kastaniotis, D., Economou, G., Fotopoulos, S., 2014a. Hep-2 cells classification using morphological features and a bundle of local gradient descriptors, in: Pattern Recognition Techniques for Indirect Immunofluorescence Images (I3A), 2014 1st Workshop on, pp. 33–36.
  • Theodorakopoulos et al. (2014b) Theodorakopoulos, I., Kastaniotis, D., Economou, G., Fotopoulos, S., 2014b. Hep-2 cells classification via sparse representation of textural features fused into dissimilarity space. Pattern Recognition 47, 2367–2378.
  • Thibault et al. (2014) Thibault, G., Angulo, J., Meyer, F., 2014. Advanced statistical matrices for texture characterization: Application to cell classification. Biomedical Engineering, IEEE Transactions on 61, 630–637.
  • Van Gemert et al. (2008) Van Gemert, J.C., Geusebroek, J.M., Veenman, C.J., Smeulders, A.W., 2008. Kernel codebooks for scene categorization, in: Computer Vision–ECCV 2008. Springer, pp. 696–709.
  • Vedaldi and Fulkerson (2010) Vedaldi, A., Fulkerson, B., 2010. Vlfeat: An open and portable library of computer vision algorithms, in: Proceedings of the international conference on Multimedia, ACM. pp. 1469–1472.
  • Veta et al. (2015) Veta, M., van Diest, P.J., Willems, S.M., Wang, H., Madabhushi, A., Cruz-Roa, A., Gonzalez, F., Larsen, A.B., Vestergaard, J.S., Dahl, A.B., Cireşan, D.C., Schmidhuber, J., Giusti, A., Gambardella, L.M., Tek, F.B., Walter, T., Wang, C.W., Kondo, S., Matuszewski, B.J., Precioso, F., Snell, V., Kittler, J., de Campos, T.E., Khan, A.M., Rajpoot, N.M., Arkoumani, E., Lacle, M.M., Viergever, M.A., Pluim, J.P., 2015. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Medical Image Analysis 20, 237 – 248.
  • Wang et al. (2010) Wang, J., Yang, J., Yu, K., Lv, F., Huang, T., Gong, Y., 2010. Locality-constrained linear coding for image classification, in: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE. pp. 3360–3367.
  • Wiliem et al. (2014) Wiliem, A., Sanderson, C., Wong, Y., Hobson, P., Minchin, R.F., Lovell, B.C., 2014. Automatic classification of human epithelial type 2 cell indirect immunofluorescence images using cell pyramid matching. Pattern Recognition 47, 2315–2324.