Diagnosis of Celiac Disease and Environmental Enteropathy on Biopsy Images Using Color Balancing on Convolutional Neural Networks

04/10/2019 · Kamran Kowsari, et al.

Celiac Disease (CD) and Environmental Enteropathy (EE) are common causes of malnutrition and adversely impact normal childhood development. CD is an autoimmune disorder that is prevalent worldwide and is caused by an increased sensitivity to gluten. Gluten exposure destructs the small intestinal epithelial barrier, resulting in nutrient mal-absorption and childhood under-nutrition. EE also results in barrier dysfunction but is thought to be caused by an increased vulnerability to infections. EE has been implicated as the predominant cause of under-nutrition, oral vaccine failure, and impaired cognitive development in low-and-middle-income countries. Both conditions require a tissue biopsy for diagnosis, and a major challenge of interpreting clinical biopsy images to differentiate between these gastrointestinal diseases is striking histopathologic overlap between them. In the current study, we propose a convolutional neural network (CNN) to classify duodenal biopsy images from subjects with CD, EE, and healthy controls. We evaluated the performance of our proposed model using a large cohort containing 1000 biopsy images. Our evaluations show that the proposed model achieves an area under ROC of 0.99, 1.00, and 0.97 for CD, EE, and healthy controls, respectively. These results demonstrate the discriminative power of the proposed model in duodenal biopsies classification.


1 Introduction and Related Works

Under-nutrition is the underlying cause of approximately % of the  million annual deaths of children under  years of age in low- and middle-income countries (LMICs) [1] and is a major cause of mortality in this population. Linear growth failure (or stunting) is a major complication of under-nutrition and is associated with irreversible physical and cognitive deficits, with profound developmental implications [31]. A common cause of stunting in LMICs is EE, for which there are no universally accepted diagnostic algorithms or non-invasive biomarkers [31], making accurate diagnosis a critical priority [28]. EE is thought to be caused by chronic exposure to enteropathogens, which results in a vicious cycle of constant mucosal inflammation, villous blunting, and a damaged epithelium [31]. These deficiencies contribute to markedly reduced nutrient absorption and thus to under-nutrition and stunting [31]. Interestingly, CD, a common cause of stunting in the United States with an estimated % prevalence, is an autoimmune disorder caused by gluten sensitivity [15] and shares many histological features with EE (such as increased inflammatory cells and villous blunting) [31]. This resemblance has led to the major challenge of differentiating clinical biopsy images for these similar but distinct diseases. Therefore, there is major clinical interest in developing new, innovative methods to automate and enhance the detection of morphological features of EE versus CD, and to differentiate between diseased and healthy small intestinal tissue [4].

Figure 1: Overview of methodology

In this paper, we propose a CNN-based model for classification of biopsy images. In recent years, deep learning architectures have received great attention after achieving state-of-the-art results in a wide variety of fundamental tasks such as classification [13, 18, 19, 24, 34, 20] and other medical domains [12, 35]. CNNs in particular have proven to be very effective in medical image processing. CNNs preserve local image relations while reducing dimensionality, and for this reason they are the most popular machine learning approach in image recognition and visual learning tasks [16]. CNNs have been widely used for classification and segmentation in various types of medical applications such as histopathological images of breast tissue, lung images, MRI images, and medical X-ray images [11, 24]. Researchers have produced advanced results on duodenal biopsy classification using CNNs [3], but those models are only robust to a single type of image stain or color distribution. Many researchers apply a stain normalization technique as part of the image pre-processing stage to both the training and validation datasets [27]. In this paper, varying levels of color balancing were instead applied during image pre-processing in order to account for multiple stain variations.

The rest of this paper is organized as follows: Section 2 describes the data sets used in this work; Section 3 details the required pre-processing steps; Section 4 explains the architecture of the model; Section 5 presents empirical results; and Section 6 concludes the paper and outlines future directions.

2 Data Source

For this project, Hematoxylin and Eosin (H&E) stained duodenal biopsy glass slides were retrieved from  patients. The slides were converted into  whole slide images and labeled as EE, CD, or Normal. The biopsy slides for EE patients came from the Aga Khan University Hospital (AKUH) in Karachi, Pakistan ( slides from  patients) and the University of Zambia Medical Center in Lusaka, Zambia (). The slides for CD patients () and Normal controls () were retrieved from archives at the University of Virginia (UVa). The CD and Normal slides were converted into whole slide images at x magnification using the Leica SCN  slide scanner (Meyer Instruments, Houston, TX) at UVa, while the digitized EE slides were at 20x magnification and were shared via the Environmental Enteric Dysfunction Biopsy Investigators (EEDBI) Consortium WUPAX server. Characteristics of our patient population are as follows: the median (, ) age of our entire study population was  (, ) months, and we had a roughly equal distribution of males (%, ) and females (%, ). The majority of our study population were histologically normal controls, followed by CD patients, and then EE patients.

3 Pre-Processing

In this section, we cover all of the pre-processing steps, which include image patching, image clustering, and color balancing. The biopsy images are unstructured (varying image sizes) and too large to process with deep neural networks, so each image must be split into multiple smaller patches. After this split, some of the resulting images do not contain much useful information; for instance, some contain only the mostly blank border region of the original image. The image clustering section describes the process used to select useful patches. Finally, color balancing is used to correct for varying color stains, a common issue in histological image processing.

3.1 Image Patching

Although the effectiveness of CNNs in image classification has been shown in various studies in different domains, training on high-resolution whole slide tissue images (WSI) is not commonly preferred due to the high computational cost. Moreover, applying CNNs directly to WSI risks losing a large amount of discriminative information due to extensive downsampling [14]. Because the differences between Celiac Disease, Environmental Enteropathy, and normal cases occur at the cellular level, a classifier trained on image patches is likely to perform as well as, or even better than, a classifier trained at the WSI level. Many researchers in pathology image analysis have considered classification or feature extraction on image patches [14]. In this project, after generating patches from each image, labels were applied to each patch according to its associated original image. A CNN was trained to generate predictions on each individual patch.
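As a concrete illustration, the patching step can be sketched as follows; the 64x64 patch size and the example label are placeholders, not the paper's actual settings:

```python
import numpy as np

def make_patches(image, patch=64, stride=64):
    """Split an H x W x 3 image into non-overlapping square patches.

    Patches that would extend past the border are dropped; each patch
    later inherits the label of its source slide.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

# Toy "whole slide image": a 256 x 192 RGB array.
wsi = np.zeros((256, 192, 3), dtype=np.uint8)
patches = make_patches(wsi, patch=64)
labels = np.full(len(patches), "CD")   # every patch gets the slide's label
```

Each patch is then treated as an independent training example carrying the slide-level label.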

Figure 2: Structure of clustering model with autoencoder and K-means combination

3.2 Clustering


In this study, after image patching, some of the created patches do not contain any useful information about the biopsies and should be removed from the data. These patches were created from the mostly background parts of the WSIs. A two-step clustering process was applied to identify the unimportant patches. In the first step, a convolutional autoencoder was used to learn an embedded feature representation of each patch; in the second step, we used k-means to cluster the embedded features into two clusters: useful and not useful. Figure 2 shows the pipeline of our clustering technique, which contains both the autoencoder and k-means clustering.
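A minimal sketch of the second step, using plain 2-means on synthetic stand-in features (in the paper's pipeline the inputs would be convolutional-autoencoder embeddings of the patches, not raw vectors):

```python
import numpy as np

def kmeans2(X, iters=20):
    """Plain 2-means with deterministic initialization from the first
    and last samples; assigns each row of X to one of two clusters."""
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(2):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign

rng = np.random.default_rng(0)
# Stand-in embeddings: "background" patches near 0, "tissue" patches near 5.
X = np.concatenate([rng.normal(0.0, 0.3, (30, 8)),
                    rng.normal(5.0, 0.3, (30, 8))])
assign = kmeans2(X)
```

The cluster whose members look like blank background is discarded; the other cluster is kept for training.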

An autoencoder is a type of neural network that is trained to match its outputs to its inputs [10]. Autoencoders have achieved great success as a dimensionality reduction method via the powerful representational capacity of neural networks [32]. The first version of the autoencoder was introduced by Rumelhart et al. [29] in 1985. The main idea is that a hidden layer between the input and output layers has far fewer units [23] and can therefore be used to reduce the dimensions of a feature space. For medical images, which typically contain many features, using an autoencoder can allow for faster, more efficient data processing.

A CNN-based autoencoder can be divided into two main steps [25]: encoding and decoding.

$$O_m(i,j) = a\left(\sum_{d=1}^{D}\sum_{u=-2k-1}^{2k+1}\sum_{v=-2k-1}^{2k+1} F^{(1)}_{m_d}(u,v)\, I_d(i-u,\, j-v)\right), \quad m = 1, \dots, n \qquad (1)$$

where $F^{(1)} \in \{F^{(1)}_1, \dots, F^{(1)}_n\}$ is a convolutional filter, with convolution over an input volume defined by $I = \{I_1, \dots, I_D\}$, which learns to represent the input by combining non-linear functions:

Figure 3: Some samples of clustering results - cluster 1 includes patches with useful information and cluster 2 includes patches without useful information (mostly created from background parts of WSIs)
Total Cluster 1 Cluster 2
Celiac Disease (CD)
Normal
Environmental Enteropathy (EE)
Total
Table 1: The clustering results for all patches into two clusters
$$z_m = O_m = a(I * F^{(1)}_m + b^{(1)}_m), \quad m = 1, \dots, n \qquad (2)$$

where $b^{(1)}_m$ is the bias, and the number of zeros used to pad the input is chosen such that $\dim(I) = \dim(\mathrm{decode}(\mathrm{encode}(I)))$. Finally, the size of the encoding convolution is:

$$O_w = O_h = (I_w + 2(2k+1) - 2) - (2k+1) + 1 = I_w + (2k+1) - 1 \qquad (3)$$

The decoding convolution step produces $n$ feature maps $z_{m=1,\dots,n}$. The reconstructed result $\tilde{I}$ is the convolution between the volume of feature maps $Z = \{z_i\}_{i=1}^{n}$ and the convolutional filter volume $F^{(2)}$ [7, 9]:

$$\tilde{I} = a(Z * F^{(2)}_m + b^{(2)}) \qquad (4)$$

$$O_w = O_h = I_w + (2k+1) - 1 \qquad (5)$$

where Equation 5 gives the dimensions of the decoding convolution; the input's dimensions are equal to the output's dimensions.

The results of patch clustering are summarized in Table 1. Patches in cluster 1, which were deemed useful, are used for the analysis in this paper.

Figure 4: Color Balancing samples for the three classes

3.3 Color Balancing

The goal of color balancing in this paper is to convert all images to the same color space to account for variations in H&E staining. An image can be characterized by the illuminant spectral power distribution $I(\lambda)$, the surface spectral reflectance $S(\lambda)$, and the sensor spectral sensitivities $c(\lambda)$ [5, 6]. Using this notation [6], the sensor response at the pixel with coordinates $(x, y)$ can be described as:

$$\rho(x, y) = \int_{\omega} I(\lambda)\, S(x, y, \lambda)\, c(\lambda)\, d\lambda \qquad (6)$$

where $\omega$ is the wavelength range of the visible light spectrum, and $\rho$ and $c$ are three-component vectors.

$$\begin{bmatrix} R_{CNN} \\ G_{CNN} \\ B_{CNN} \end{bmatrix} = \alpha\, A_C\, A_I \begin{bmatrix} R_{bio} \\ G_{bio} \\ B_{bio} \end{bmatrix} \qquad (7)$$

where $[R_{bio}, G_{bio}, B_{bio}]^{T}$ holds the raw values from the biopsy image and $[R_{CNN}, G_{CNN}, B_{CNN}]^{T}$ the resulting values used as CNN input. In the following, a more compact version of Equation 7 is used:

$$\rho_{CNN} = \alpha\, A_C\, A_I\, \rho_{bio} \qquad (8)$$

where $\alpha$ is the exposure compensation gain, $A_I$ refers to the diagonal matrix for illuminant compensation, and $A_C$ indicates the color matrix transformation.

Figure 4 shows the results of color balancing for the three classes (Celiac Disease (CD), Normal, and Environmental Enteropathy (EE)) with different color balancing percentages between  and .
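The compact transformation of Equation 8 can be sketched in a few lines; the gain and the matrices below are illustrative values, not the paper's fitted parameters:

```python
import numpy as np

def color_balance(img, alpha, A_I, A_C):
    """Apply rho_out = alpha * A_C @ A_I @ rho_in to every pixel.

    alpha: scalar exposure compensation gain
    A_I:   3x3 diagonal illuminant-compensation matrix
    A_C:   3x3 color transformation matrix
    """
    M = alpha * A_C @ A_I
    out = img.reshape(-1, 3) @ M.T          # per-pixel matrix multiply
    return np.clip(out, 0, 255).reshape(img.shape)

# Illustrative parameters only:
alpha = 1.1
A_I = np.diag([1.2, 1.0, 0.9])             # per-channel illuminant gains
A_C = np.eye(3)                            # identity color transform here

img = np.full((2, 2, 3), 100.0)            # uniform gray test image
balanced = color_balance(img, alpha, A_I, A_C)
```

Varying `alpha` and the matrices produces the family of differently balanced training images used to simulate stain variation.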

4 Method

In this section, we describe convolutional neural networks (CNNs), including the convolutional layers, pooling layers, activation functions, and optimizer. Then, we discuss our network architecture for diagnosis of Celiac Disease and Environmental Enteropathy. As shown in Figure 5, the input layer starts with image patches and is connected to the first convolutional layer (Conv 1). Conv 1 is connected to a pooling layer (MaxPooling), which is in turn connected to Conv 2. Finally, the last convolutional layer (Conv 3) is flattened and connected to a fully connected perceptron layer. The output layer contains three nodes, each of which represents one class.

4.1 Convolutional Layer

A CNN is a deep learning architecture that can be employed for hierarchical image classification. Originally, CNNs were built for image processing, with an architecture inspired by the visual cortex, and they have been used effectively for medical image processing. In a basic CNN used for image processing, an image tensor is convolved with a set of kernels. These convolution outputs, called feature maps, can be stacked to provide multiple filters on the input. The elements (features) of the input and output matrices can be different [22]. The process to compute a single output matrix is defined as follows:

$$A_j = f\left(\sum_{i=1}^{n} I_i * K_{i,j} + B_j\right) \qquad (9)$$

Each input matrix $I_i$ is convolved with a corresponding kernel matrix $K_{i,j}$ and summed with a bias value $B_j$ at each element. Finally, a non-linear activation function $f$ (see Section 4.3) is applied to each element [22].

In general, during the back-propagation step of a CNN, the weights and biases are adjusted to create effective feature detection filters. The filters in the convolution layer are applied across all three channels of the color space [13].
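A direct, loop-based reading of Equation 9, with a single input channel and one averaging kernel (the sizes are chosen only for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def conv2d_valid(I, K):
    """2-D 'valid' cross-correlation of one input channel with one kernel."""
    kh, kw = K.shape
    H, W = I.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(I[y:y + kh, x:x + kw] * K)
    return out

def conv_layer(inputs, kernels, biases, act=relu):
    """Each output map A_j = act(sum_i I_i * K_{i,j} + B_j)."""
    n_in, n_out = kernels.shape[:2]
    maps = []
    for j in range(n_out):
        s = sum(conv2d_valid(inputs[i], kernels[i, j]) for i in range(n_in))
        maps.append(act(s + biases[j]))
    return np.stack(maps)

# One 4x4 input channel, one 3x3 averaging kernel, zero bias.
I = [np.ones((4, 4))]
K = np.full((1, 1, 3, 3), 1 / 9)
A = conv_layer(I, K, biases=np.zeros(1))
```

In a real network the loops are replaced by optimized library kernels, but the arithmetic is the same.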

Figure 5: Structure of a convolutional neural network using multiple 2D feature detectors and 2D max-pooling

4.2 Pooling Layer

To reduce the computational complexity, CNNs utilize pooling to reduce the size of the output from one layer to the next in the network. Different pooling techniques are used to reduce outputs while preserving important features [30]. The most common pooling method is max pooling, where the maximum element is selected from each pooling window.
In order to feed the pooled output from the stacked feature maps to the next layer, the maps are flattened into one column. The final layers in a CNN are typically fully connected [19].
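Max pooling as described above can be sketched with a reshape trick; the 2x2 window is illustrative:

```python
import numpy as np

def max_pool(fmap, size=2):
    """Max pooling with a square window and stride equal to the window size."""
    H, W = fmap.shape
    fmap = fmap[:H - H % size, :W - W % size]   # drop ragged edges
    return fmap.reshape(H // size, size, W // size, size).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 1, 0, 1],
              [2, 0, 3, 0]], dtype=float)
pooled = max_pool(x)   # keeps the maximum of each 2x2 block
```

Each 2x2 block of the feature map collapses to its maximum, halving both spatial dimensions.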

4.3 Neuron Activation

Our CNN implementation is a discriminatively trained model that uses the standard back-propagation algorithm with a sigmoid (Equation 10) or rectified linear unit (ReLU) [26] (Equation 11) as the activation function. The output layer for multi-class classification uses a softmax function (Equation 12).

$$f(x) = \frac{1}{1 + e^{-x}} \qquad (10)$$

$$f(x) = \max(0, x) \qquad (11)$$

$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}, \quad j = 1, \dots, K \qquad (12)$$
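The three activation functions can be written directly; the logits below are toy values:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Rectified linear unit: zero for negative inputs."""
    return np.maximum(0.0, x)

def softmax(z):
    """Softmax over a logit vector, stabilized by subtracting the max."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # one logit per class (toy values)
probs = softmax(logits)              # class probabilities summing to 1
```

The softmax output is what the three-node output layer produces: a probability per class.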

4.4 Optimizer

For this CNN architecture, we use the Adam optimizer [17], a stochastic gradient method that relies on estimates of the first two moments of the gradient ($m_t$ and $v_t$, shown in Equations 13, 14, 15, and 16). It can handle non-stationary objective functions, as in RMSProp, while overcoming RMSProp's sparse-gradient limitation [17].

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \qquad (13)$$

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \qquad (14)$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1 - \beta_2^t} \qquad (15)$$

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t \qquad (16)$$

where $m_t$ is the estimate of the first moment and $v_t$ the estimate of the second moment of the gradient $g_t$, and $\beta_1$ and $\beta_2$ are the corresponding exponential decay rates.
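One Adam update step, following Equations 13-16 with the standard default decay rates (the learning rate and the toy quadratic objective are illustrative):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=0.02, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)            # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2*theta) starting from theta = 1.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 301):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

The moving moment estimates make the effective step size roughly constant regardless of the raw gradient magnitude.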

4.5 Network Architecture

As shown in Table 2 and Figure 6, our CNN architecture consists of three convolutional layers, each followed by a pooling layer. This model receives RGB image patches with dimensions of  as input. The first convolutional layer has  filters with a kernel size of , followed by a pooling layer of size  which reduces the feature maps from  to . The second convolutional layer has  filters with a kernel size of , followed by a pooling layer (MaxPooling) of size  which reduces the feature maps from  to . The third convolutional layer has  filters with a kernel size of , and the final pooling layer (MaxPooling) scales the maps down to . The feature maps, as shown in Table 2, are flattened and connected to a fully connected layer with  nodes. The output layer has three nodes representing the three classes: Environmental Enteropathy, Celiac Disease, and Normal.

The optimizer used is Adam (see Section 4.4) with a learning rate of , and the loss is sparse categorical cross-entropy [8]. For all layers we use the rectified linear unit (ReLU) as the activation function, except for the output layer, which uses softmax (see Section 4.3).

Figure 6: Our Convolutional Neural Networks’ Architecture
Layer (type)    Output Shape    Trainable Parameters
1 Convolutional Layer
2 Max Pooling
3 Convolutional Layer
4 Max Pooling
5 Convolutional Layer
6 Max Pooling
8 dense
10 Output
Table 2: CNN Architecture for Diagnosis of Diseased Duodenal on Biopsy Images
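Since the exact layer sizes are not reproduced above, the following Keras sketch uses placeholder values (64x64 patches, 32/32/64 filters, a 128-unit dense layer) only to show the shape of the described stack:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Three conv + max-pool blocks, a dense layer, and a 3-way softmax output.
# Patch size, filter counts, and dense width are placeholders, not the
# paper's exact values.
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # EE, CD, Normal
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With integer class labels (0, 1, 2), this model can be trained directly via `model.fit(patches, labels)`.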

5 Empirical Results

5.1 Evaluation Setup

In the research community, comparable and shareable performance measures to evaluate algorithms are preferable. In reality, however, such measures may only exist for a handful of methods. The major problem when evaluating image classification methods is the absence of standard data collection protocols. Even if a common collection method existed, simply choosing different training and test sets can introduce inconsistencies in model performance [33]. Another challenge in method evaluation is comparing the different performance measures used in separate experiments. Performance measures generally evaluate specific aspects of classification task performance and thus do not always present identical information. In this section, we discuss evaluation metrics and performance measures and highlight ways in which the performance of classifiers can be compared.

Since the underlying mechanics of different evaluation metrics vary, understanding exactly what each metric represents and what kind of information it conveys is crucial for comparability. Examples of these metrics include recall, precision, accuracy, F-measure, micro-average, and macro-average. These metrics are based on a "confusion matrix" comprising true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) [21]. The significance of these four elements may vary based on the classification application. The fraction of correct predictions over all predictions is called accuracy (Eq. 17). The proportion of correctly predicted positives to all predicted positives is called precision, i.e. the positive predictive value (Eq. 18). The proportion of correctly predicted positives to all actual positives is called recall, i.e. sensitivity (Eq. 19), and the harmonic mean of precision and recall is the F1-score (Eq. 20).

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (17)$$

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (18)$$

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (19)$$

$$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (20)$$
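The four measures can be computed directly from the confusion-matrix counts; the counts below are toy values for a single one-vs-rest class:

```python
def scores(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Toy one-vs-rest counts for a single class (illustrative, not the paper's).
acc, prec, rec, f1 = scores(tp=80, fp=10, fn=20, tn=90)
```

For the three-class problem, per-class scores can then be micro- or macro-averaged as discussed above.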

5.2 Experimental Setup

The following results were obtained using a combination of central processing units (CPUs) and graphics processing units (GPUs). The processing was done on a system with  cores and  memory, and the GPU cards were two  and a . We implemented our approaches in Python using the Compute Unified Device Architecture (CUDA), a parallel computing platform and Application Programming Interface (API) model created by NVIDIA. We also used the Keras and TensorFlow libraries for creating the neural networks [2, 8].

5.3 Experimental Results

In this section we show that a CNN with color balancing can improve the robustness of medical image classification. The results for the model trained on different color balancing values are shown in Table 3. The results shown in Table 4 are based on the same trained model; however, the test set in Table 4 uses a different set of color balancing values:  and . By testing on a different set of color balancing values, these results show that this technique can address the issue of multiple stain variations during histological image analysis.

As shown in Table 3, the f1-scores of the three classes (Environmental Enteropathy (EE), Celiac Disease (CD), and Normal) are , , and , respectively. In Table 4, the f1-scores are reduced, but not by a significant amount: the f1-scores for the three classes (EE, CD, and Normal) are , , and , respectively. The result is very similar to that of M. Al Boni et al. [3], who achieved 90.59% accuracy with their model, but without using the color balancing technique to accommodate differently stained images.

precision recall f1-score support
Celiac Disease (CD)
Normal
Environmental Enteropathy
(EE)
Table 3: F1-score for train on a set with color balancing of 0.001, 0.01, 0.1, and 1.0. Then, we evaluate test set with same color balancing
precision recall f1-score support
Celiac Disease (CD)
Normal
Environmental Enteropathy
(EE)
Table 4: F1-score for train with color balancing of 0.001, 0.01, 0.1, and 1.0 and test with color balancing of 0.5, 1.0, 1.5 and 2.0

Receiver operating characteristic (ROC) curves, shown in Figure 7, are valuable graphical tools for evaluating classifiers; however, class imbalances (i.e., differences in prior class probabilities) can cause ROC curves to poorly represent classifier performance. An ROC curve plots the true positive rate (TPR) against the false positive rate (FPR). The ROC curves show that the AUC for Environmental Enteropathy (EE) is 1.00, for Celiac Disease (CD) is 0.99, and for Normal is 0.97.

Figure 7: Receiver operating characteristic (ROC) curves for the three classes; the figure also shows the micro-average and macro-average of our classifier
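For reference, the AUC summarized by an ROC curve can be computed from raw scores via the rank (Mann-Whitney) formulation, sketched here on toy data:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen positive scores higher than a randomly chosen
    negative (ties count as half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1, 1, 1, 0, 0, 0])                  # toy one-vs-rest labels
s = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])      # toy classifier scores
value = auc(y, s)                                 # 8/9
```

For the multi-class case, one such one-vs-rest AUC is computed per class, matching the per-class values reported above.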
Method    Solves Color Staining Problem    Model Architecture    Accuracy
Shifting and Reflections [3] No CNN 85.13%
Gamma [3] No CNN 90.59%
CLAHE [3] No CNN 86.79%
Gamma-CLAHE [3] No CNN 86.72%
Fine-tuned ALEXNET [27] Yes ALEXNET 89.95%
Ours Yes CNN 93.39%
Table 5: Comparison accuracy with different baseline methods

As shown in Table 5, our model achieves better accuracy than the compared models. Among them, only the fine-tuned ALEXNET [27] has considered the color staining problem. That model takes a transfer learning based approach to the classification of stained histology images and applies stain normalization before using the images to fine-tune the model.

6 Conclusion

In this paper, we proposed a data-driven model for diagnosis of diseased duodenal architecture on biopsy images using color balancing on convolutional neural networks. Validation results show that this model could be utilized by pathologists in diagnostic operations involving CD and EE. Color consistency is an issue in digital histology images because different imaging systems reproduce the colors of a histological slide differently; our results demonstrate that applying the color balancing technique can attenuate the effect of this issue on image classification.

The methods described here can be improved in multiple ways. Additional training and testing with other color balancing techniques and data sets will continue to identify the architectures that work best for these problems. It is also possible to extend the model to more than four color balance percentages to capture more of the complexity in medical image classification.

Acknowledgements

This research was supported by University of Virginia, Engineering in Medicine SEED Grant , the University of Virginia Translational Health Research Institute of Virginia () Mentored Career Development Award , and the Bill and Melinda Gates Foundation  (; , ; )

References

  • [1] World Health Organization: Children: reducing mortality. Fact sheet (2017). http://www.who.int/mediacentre/factsheets/fs178/en/. Accessed: 2019-1-30
  • [2] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., et al.: Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 (2016)
  • [3] Al Boni, M., Syed, S., Ali, A., Moore, S.R., Brown, D.E.: Duodenal biopsies classification and understanding using convolutional neural networks. American Medical Informatics Association (2019)
  • [4] Bejnordi, B.E., Veta, M., Van Diest, P.J., Van Ginneken, B., Karssemeijer, N., Litjens, G., Van Der Laak, J.A., Hermsen, M., Manson, Q.F., Balkenhol, M., et al.: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Jama 318(22), 2199–2210 (2017)
  • [5] Bianco, S., Cusano, C., Napoletano, P., Schettini, R.: Improving cnn-based texture classification by color balancing. Journal of Imaging 3(3), 33 (2017)
  • [6] Bianco, S., Schettini, R.: Error-tolerant color rendering for digital cameras. Journal of mathematical imaging and vision 50(3), 235–245 (2014)
  • [7] Chen, K., Seuret, M., Liwicki, M., Hennebert, J., Ingold, R.: Page segmentation of historical document images with convolutional autoencoders. In: Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pp. 1011–1015. IEEE (2015)
  • [8] Chollet, F., et al.: Keras: Deep learning library for theano and tensorflow. https://keras.io/ (2015)
  • [9] Geng, J., Fan, J., Wang, H., Ma, X., Li, B., Chen, F.: High-resolution sar image classification via deep convolutional autoencoders. IEEE Geoscience and Remote Sensing Letters 12(11), 2351–2355 (2015)
  • [10] Goodfellow, I., Bengio, Y., Courville, A., Bengio, Y.: Deep learning, vol. 1. MIT press Cambridge (2016)
  • [11] Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., et al.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama 316(22), 2402–2410 (2016)
  • [12] Hegde, R.B., Prasad, K., Hebbar, H., Singh, B.M.K.: Comparison of traditional image processing and deep learning approaches for classification of white blood cells in peripheral blood smear images. Biocybernetics and Biomedical Engineering (2019)
  • [13] Heidarysafa, M., Kowsari, K., Brown, D.E., Jafari Meimandi, K., Barnes, L.E.: An improvement of data classification using random multimodel deep learning (rmdl) 8(4), 298–310 (2018). DOI 10.18178/ijmlc.2018.8.4.703
  • [14] Hou, L., Samaras, D., Kurc, T.M., Gao, Y., Davis, J.E., Saltz, J.H.: Patch-based convolutional neural network for whole slide tissue image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2424–2433 (2016)
  • [15] Husby, S., et al.: European society for pediatric gastroenterology, hepatology, and nutrition guidelines for the diagnosis of coeliac disease. Journal of pediatric gastroenterology and nutrition 54(1), 136–160 (2012)
  • [16] Ker, J., Wang, L., Rao, J., Lim, T.: Deep learning applications in medical image analysis. IEEE Access 6, 9375–9389 (2018)
  • [17] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  • [18] Kowsari, K., Brown, D.E., Heidarysafa, M., Meimandi, K.J., Gerber, M.S., Barnes, L.E.: Hdltex: Hierarchical deep learning for text classification. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 364–371. IEEE (2017)
  • [19] Kowsari, K., Heidarysafa, M., Brown, D.E., Meimandi, K.J., Barnes, L.E.: Rmdl: Random multimodel deep learning for classification. In: Proceedings of the 2nd International Conference on Information System and Data Mining, pp. 19–28. ACM (2018)
  • [20] Kowsari, K., Jafari Meimandi, K., Heidarysafa, M., Mendu, S., Barnes, L., Brown, D.: Text classification algorithms: A survey. Information 10(4) (2019). DOI 10.3390/info10040150
  • [21] Lever, J., Krzywinski, M., Altman, N.: Points of significance: Classification evaluation (2016)
  • [22] Li, Q., Cai, W., Wang, X., Zhou, Y., Feng, D.D., Chen, M.: Medical image classification with convolutional neural network. In: 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), pp. 844–848. IEEE (2014)
  • [23] Liang, H., Sun, X., Sun, Y., Gao, Y.: Text feature extraction based on deep learning: a review. EURASIP journal on wireless communications and networking 2017(1), 211 (2017)
  • [24] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Medical image analysis 42, 60–88 (2017)
  • [25] Masci, J., Meier, U., Cireşan, D., Schmidhuber, J.: Stacked convolutional auto-encoders for hierarchical feature extraction. In: International Conference on Artificial Neural Networks, pp. 52–59. Springer (2011)
  • [26] Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814 (2010)
  • [27] Nawaz, W., Ahmed, S., Tahir, A., Khan, H.A.: Classification of breast cancer histology images using alexnet. In: International Conference Image Analysis and Recognition, pp. 869–876. Springer (2018)
  • [28] Naylor, C., Lu, M., Haque, R., Mondal, D., Buonomo, E., Nayak, U., Mychaleckyj, J.C., Kirkpatrick, B., Colgate, R., Carmolli, M., et al.: Environmental enteropathy, oral vaccine failure and growth faltering in infants in bangladesh. EBioMedicine 2(11), 1759–1766 (2015)
  • [29] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. Tech. rep., California Univ San Diego La Jolla Inst for Cognitive Science (1985)
  • [30] Scherer, D., Müller, A., Behnke, S.: Evaluation of pooling operations in convolutional architectures for object recognition. Artificial Neural Networks–ICANN 2010 pp. 92–101 (2010)
  • [31] Syed, S., Ali, A., Duggan, C.: Environmental enteric dysfunction in children: a review. Journal of pediatric gastroenterology and nutrition 63(1), 6 (2016)
  • [32] Wang, W., Huang, Y., Wang, Y., Wang, L.: Generalized autoencoder: A neural network framework for dimensionality reduction. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 490–497 (2014)
  • [33] Yang, Y.: An evaluation of statistical approaches to text categorization. Information retrieval 1(1-2), 69–90 (1999)
  • [34] Zhai, S., Cheng, Y., Zhang, Z.M., Lu, W.: Doubly convolutional neural networks. In: Advances in neural information processing systems, pp. 1082–1090 (2016)
  • [35] Zhang, J., Kowsari, K., Harrison, J.H., Lobo, J.M., Barnes, L.E.: Patient2vec: A personalized interpretable deep representation of the longitudinal electronic health record. IEEE Access 6, 65,333–65,346 (2018)