Convolution Neural Networks for diagnosing colon and lung cancer histopathological images

09/08/2020
by Sanidhya Mangal, et al.

Lung and colon cancer are among the leading causes of mortality and morbidity in adults, and histopathological diagnosis is one of the key components in discerning cancer type. The aim of the present research is to propose a computer-aided diagnosis system for diagnosing squamous cell carcinomas and adenocarcinomas of the lung, as well as adenocarcinomas of the colon, by evaluating digital pathology images of these cancers with convolutional neural networks, thereby rendering artificial intelligence a useful technology in the near future. A total of 25000 digital images were acquired from the LC25000 dataset, containing 5000 images for each class. A shallow neural network architecture was used to classify the histopathological slides into squamous cell carcinoma, adenocarcinoma, and benign for the lung; a similar model was used to classify adenocarcinoma and benign for the colon. Diagnostic accuracies of more than 97% for lung and 96% for colon were recorded.


1 Introduction

According to the World Health Organization (WHO), cancer is the leading cause of mortality in the world. Lung cancer is the most commonly diagnosed cancer and the leading cause of cancer death, while colorectal cancer is also among the largest contributors to cancer mortality (Bray et al.). A rising trend in the incidence of malignant tumors has been reported around the globe, which may be attributed in part to population growth. Malignancy can occur in any age group, but most histopathological types are diagnosed in the elderly, typically between 50 and 60 years of age (Arslan et al.). Cancer mortality has been predicted to rise further through 2035 (Araghi et al.).

Lung cells become cancerous when they mutate to grow uncontrollably and form a cluster called a tumor [3]. The worldwide increase in lung cancer has been attributed to several factors, chiefly exposure to dangerous or toxic inhaled substances and the growing number of elderly people in society. Symptoms, however, are often not observed until the cancer has spread to other parts of the body, which makes it harder to treat.

Although lung cancer can occur in people who have never smoked, the risk is greatest for people who do. Adenocarcinoma and squamous cell carcinoma are the most commonly occurring types of lung cancer, while other histological types include small and large cell carcinomas. Adenocarcinoma of the lung usually occurs in current or former smokers but is also prevalent in non-smokers. It is more likely to occur in women and younger patients and is typically found in the outer parts of the lungs before it spreads. Squamous cell carcinomas are also associated with a history of smoking. Small and large cell carcinomas, on the other hand, can develop in any part of the lung and tend to grow and spread rapidly, making them harder to treat.

The colon, the final part of the digestive system, can give rise to colon cancer when it hosts cancerous cells. Colon cancer is not age-specific, but it typically affects older adults. It usually begins as small, noncancerous (benign) clumps of cells called polyps that form on the inner lining of the colon. Over time, some of these polyps can develop into colon cancers.

In most colon cancers, a tumor forms when healthy cells in the lining of the colon or rectum grow uncontrollably. Adenocarcinomas of the colon or rectum develop in the lining of the large intestine, arising in the epithelial cells and then spreading to the other layers. Mucinous adenocarcinoma and signet ring cell adenocarcinoma are two less common subtypes of adenocarcinoma that are aggressive and difficult to treat. These changes can occur in the body over many years, depending on factors such as gender, ethnicity, age, smoking patterns, and socio-economic conditions. However, if a person has an unusual inherited syndrome, the changes can transpire within a few months.

This research aims to measure the potency of the proposed Convolutional Neural Network (CNN) algorithm (LeCun et al.) in detecting the common types of lung and colon cancer. The architecture is inspired by the patterns of neurons and their connectivity in the human brain, which also accounts for the low pre-processing it requires compared to other classification algorithms. The ability of the network to learn characteristics directly from data overpowers the primitive approach in which filters are hand-engineered. The ConvNet takes images as input, assigns importance (learnable weights and biases) to multiple features in the image, and is thereby able to differentiate one image from another. We use histopathology slides (Gurcan et al.) as our data; since the preparation process preserves the underlying tissue architecture, it provides an eclectic view of the disease and its effect on tissues. Additionally, histopathology images serve as a 'gold standard' in diagnosing almost all distinct cancer types (Rubin et al.).

The remainder of the paper is organized as follows: Section 2 provides insight into previously explored research in this domain. Section 3 gives a brief introduction to the LC25000 dataset. Section 4 covers a short introduction to CNNs, and Section 5 elaborates the CNN architecture used to train both models. Section 6 reports all the experimental results and findings. Finally, Section 7 concludes our experiment and presents some insights on future work.

2 Related Work

Doi explored the potential of automatic image processing around four decades ago, but it remains challenging due to the complexity of the images to analyze. Back then, feature extraction was a key step in adopting machine learning based computer-aided diagnosis (CAD). Different ontologies of cancer have been investigated in te Brake et al., Beller et al., Yin et al., Aerts et al., Eltonsy et al., Wei et al., Hawkins et al., Barata et al. (2012; 2014), Han et al., Sadeghi et al., Zikic et al., and Meier et al. Moreover, Munir et al. provide a detailed overview of cancer diagnosis by conducting experiments on several deep learning techniques, along with a comparison of the predominant architectures for each technique.

Also, Coudray et al. trained an Inception-v3 model (Szegedy et al.) for classification and mutation prediction from non-small cell lung cancer images of adenocarcinomas and squamous cell carcinomas, achieving a mean area under the curve (AUC) of 0.97. They also predicted mutations of the ten most common genes for lung adenocarcinomas. Similarly, Ardila et al. predicted the risk of lung cancer with deep learning by analyzing patients' prior and current tomography scans. Lakshmanaprabu et al. explored automated diagnosis from CT scans using deep neural networks and linear discriminant analysis. Sirinukunwattana et al. used a spatially constrained CNN to perform nuclei detection and classification of cancerous tissue in colon histology images.

Recently, Abbas et al. presented a comparative study of histopathological diagnosis of squamous cell carcinomas using CNNs, comparing architectures such as AlexNet, VGG-16, and ResNet and achieving an F1 score of 0.97. Similarly, Bukhari et al. present a comparative analysis of colonic adenocarcinoma diagnosis using variations of the ResNet architecture.

3 LC25000 Dataset

A brief introduction to the dataset is provided, followed by the data preprocessing steps. The LC25000 dataset (Borkowski et al.) contains microscopic images of lung and colon tissue. The dataset is divided into five classes, namely lung adenocarcinoma, lung squamous cell carcinoma, lung benign, colon adenocarcinoma, and colon benign, each containing 5000 images. Figure 1 shows sample images for each of these classes. The original dataset contained only 750 images of lung and 500 images of colon at 1024x768 pixels, later cropped to squares of 768x768 pixels. The Augmentor package was used to expand the dataset to 25000 images with the help of rotations and flips.

(a) lung adenocarcinomas
(b) lung squamous cell carcinomas
(c) lung benign
(d) colon adenocarcinomas
(e) colon benign
Figure 1: (a) and (b) are example images of the adenocarcinoma and squamous cell carcinoma cancer types for the lung, and (c) shows benign lung histopathology. Similarly, (d) shows adenocarcinoma of the colon, while (e) illustrates the benign class for the colon.

Before being fed to the network, the augmented data underwent some preprocessing. The data was first sampled into 4500 training and 500 test datapoints for each class, using the random sampling described in Yates. The images were then resized to 150x150 pixels and, during training, subjected to randomized shear and zoom transformations followed by normalization.
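As a concrete illustration, the resize-and-normalize step can be sketched in plain NumPy. This is a minimal stand-in: the paper does not specify its resizing interpolation (nearest-neighbour is assumed here), and the randomized shear/zoom augmentations are omitted for brevity.

```python
import numpy as np

def preprocess(img, size=150):
    """Nearest-neighbour resize to size x size, then scale pixels to [0, 1].

    A simplified stand-in for the paper's preprocessing (resize to 150x150
    plus normalization); shear and zoom augmentations are not shown.
    """
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = img[rows][:, cols]         # fancy indexing picks the sampled grid
    return resized.astype(np.float32) / 255.0

# A dummy 768x768 RGB "slide", the size produced by the LC25000 cropping step.
slide = np.random.randint(0, 256, (768, 768, 3), dtype=np.uint8)
x = preprocess(slide)
print(x.shape)  # (150, 150, 3)
```

In a real pipeline the same function would be applied to every image before batching.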

4 Deep Learning Approach Using CNN

Image classification of visual content is a challenging task, particularly for microscopic images such as histopathological slides, owing to strong inter- and intra-class dependencies. The underlying structures are complex and interwoven due to similar morphological textures; Figure 1 shows some of the complex textures present in histopathology images. Deep learning is prevalent due to its ability to learn features directly from the input, providing a way to avoid arduous feature extraction processes (Bengio et al.). One of the key features of deep learning is its ability to discover abstract-level features and then dive deeper to extract structural semantics in the feature maps. In recent years, deep learning, and especially CNNs, has proven to be an effective tool for classifying and diagnosing medical images (Shen et al.; Kermany et al.; Lee et al.; Suzuki).

In a nutshell, a CNN contains multiple trainable layers that can be stacked together with a supervised classifier to learn feature maps from the input data. The input can be digital or signal data such as audio, video, images, or time series. A coloured image, for example, is a 3D tensor: a 2D feature map for each colour channel.

CNN architectures are mainly composed of three kinds of layers: convolution layers, max pooling layers, and fully connected (dense) layers. These layers can be stacked in multiple combinations to produce a CNN. An example of a typical CNN is shown in Figure 2.

Figure 2: One of the most commonly used CNN architectures (Alom et al.)

In a typical CNN, the convolution layer is the key component of the architecture. Convolution layers compute a dot product between their weights and the input signal connected to a local region. The set of weights convolved along the input is called a kernel or filter. Each filter is small but extends across the full depth of the input volume; for image inputs, typical filter sizes are 3x3, 5x5, or 8x8. These weights are shared across neurons so that filters can learn geometric structures anywhere in the image. The distance between successive applications of a filter is termed the stride. If the stride hyperparameter is smaller than the filter size, then overlapping convolutions are applied to the image.
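The interaction between input size, filter size, stride, and padding can be checked with a small helper, assuming the TensorFlow-style 'same'/'valid' padding conventions:

```python
import math

def conv_out(n, f, s, padding="same"):
    # Spatial output size of a convolution/pooling layer along one axis:
    # 'same' pads so the output shrinks only by the stride factor,
    # 'valid' applies no padding, so the window must fit entirely inside.
    if padding == "same":
        return math.ceil(n / s)
    return (n - f) // s + 1

# A 150-pixel axis through a 3x3 convolution with stride 2 ('same' padding):
print(conv_out(150, 3, 2))           # -> 75
# Stride 1 with 'valid' padding gives overlapping windows and a 2-pixel shrink:
print(conv_out(150, 3, 1, "valid"))  # -> 148
```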

It is common practice to insert a pooling layer between two convolution layers to downsample the feature maps along their spatial dimensions. This progressively reduces the spatial size of the representation, and the resulting reduction in parameters and computation helps control overfitting. The pooling operation resizes the feature maps along height and width, discarding activations. In practice, the max pooling operation, which selects the maximum value within each input patch, has shown the best results (Scherer et al.).

In a fully connected (dense) layer, a full connection is established between every input activation and every output activation. The computation is a matrix multiplication followed by a bias offset. The last fully connected layer produces the final output, such as a probability distribution or logit values (Spanhol et al.; Krizhevsky et al.).

5 CNN Architecture and Training Strategy

To classify the LC25000 dataset, we constructed a deep CNN with the following layers and parameters:

Input Layer

This layer loads the data and feeds it to the first convolution layer. In our case, the input is an image of 150x150 pixels with 3 colour channels (RGB).

Convolution Layer

This layer convolves the input image with trainable filters to learn the spatial structure of the images. The model contains three convolution layers with filter size 3x3, stride set to 2, and 'same' padding. The first layer contains 32 filters, followed by two layers with 64 filters each, all initialized from a Gaussian distribution. In addition, a ReLU activation is applied after each convolution as the non-linear operation to improve performance (Behnke).

Pooling Layer

The pooling operation downsamples the output of each convolution layer. There is one pooling layer after each convolution layer, with a pooling size of 2 and 'valid' padding. All pooling layers use the common max pooling operation.

Flatten Layer

This layer converts the output of the last convolution stage into a 1D tensor so it can be connected to a fully connected (dense) layer.

Fully connected layer or dense layer

These layers treat the input as a flat vector and produce an output vector. Two dense layers are used in this model: the first contains 512 neurons, and the last contains 3 or 2 neurons for the lung and colon models respectively, depending on the number of classes. The output of the last fully connected layer is activated with the softmax function:

softmax(z)_i = exp(z_i) / Σ_j exp(z_j)        (1)
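A numeric sketch of the softmax activation, with hypothetical logits standing in for the lung model's three final-layer outputs:

```python
import math

def softmax(z):
    # Subtract the max for numerical stability before exponentiating,
    # then normalize so the outputs form a probability distribution.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical logits; softmax turns them into class probabilities.
probs = softmax([2.0, 1.0, 0.1])
print(round(sum(probs), 6))  # -> 1.0
```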

Dropout Layer

To prevent overfitting of the model, we use a dropout layer between the fully connected layers, which randomly drops neurons from both the visible and hidden layers (Srivastava et al.).

Table 1 summarizes the parameters of the layers, where CONV+POOL denotes a convolution layer followed by a pooling layer and FC a fully connected (dense) layer.

Layer                1          2          3          4    5
Type                 CONV+POOL  CONV+POOL  CONV+POOL  FC   FC
Channels             32         64         64         512  3 or 2
Filter Size          3x3        3x3        3x3        -    -
Convolution Strides  2x2        2x2        2x2        -    -
Pooling Size         2x2        2x2        2x2        -    -
Pooling Strides      1x1        1x1        1x1        -    -
Table 1: Summary of CNN Layers
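As a sanity check of Table 1, the spatial sizes and parameter count implied by the layers can be traced in a few lines. This assumes TensorFlow's 'same' (convolution) and 'valid' (pooling) padding semantics as described above; the parameter total is our own calculation, not a figure reported in the paper.

```python
import math

def conv_same(n, s):      # 'same' padding: output = ceil(n / stride)
    return math.ceil(n / s)

def pool_valid(n, f, s):  # 'valid' padding: output = (n - f) // stride + 1
    return (n - f) // s + 1

n, c = 150, 3             # input: 150x150 RGB image
params = 0
for filters in (32, 64, 64):                   # three CONV+POOL stages (Table 1)
    params += (3 * 3 * c) * filters + filters  # 3x3 kernels plus biases
    n = conv_same(n, 2)                        # convolution, stride 2
    n = pool_valid(n, 2, 1)                    # 2x2 max pool, stride 1
    c = filters
flat = n * n * c                               # flatten layer size
params += flat * 512 + 512                     # FC with 512 neurons
params += 512 * 3 + 3                          # FC head (3 classes, lung model)
print(n, flat, params)  # -> 17 18496 9528323
```

Most of the weights sit in the first dense layer, which is typical for shallow CNNs that flatten a large feature map.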

A similar, purely supervised training protocol was used for all the CNN models. The RMSprop method proposed by Tieleman and Hinton, with backpropagation, was used to compute gradients, and mini-batches of 32 samples were used to update the network weights, with a starting learning rate of 10^-4. Categorical cross-entropy loss (Equation 2) is used to measure the performance of the model throughout the training process. Each CNN was trained for 100 epochs.

L = −Σ_i y_i log(ŷ_i)        (2)
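A toy evaluation of the categorical cross-entropy loss for a one-hot target, with hypothetical predicted probabilities:

```python
import math

def categorical_cross_entropy(y_true, y_pred):
    # L = -sum_i y_i * log(p_i); with a one-hot target only the true
    # class's term survives, so the loss is -log(probability of truth).
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

# True class is the first one (one-hot); model assigns it probability 0.7.
loss = categorical_cross_entropy([1, 0, 0], [0.7, 0.2, 0.1])
print(round(loss, 4))  # -> 0.3567
```

The loss falls toward zero as the predicted probability of the true class approaches 1, which is exactly what the optimizer pushes toward during training.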

6 Results

To ensure that the classifiers generalize well, the data was split into three distinct sets, with 80-10-10 of the data going to the training, testing, and validation sets respectively. This protocol was applied independently to the lung and colon cancer images. Medical image processing models can be evaluated in two ways: at the patient level, i.e., determining how many images are classified correctly for each patient, or at the image level, where we count the number of cancer images classified correctly. Since the LC25000 dataset provides no patient information, we evaluate model performance with the latter method. All the CNN models were trained on Google's Colab using the TensorFlow framework (Abadi et al.). The models are made available in h5 format at https://github.com/sanidhyamangal/lung_colon_cancer_research. Training each model took around 45 minutes.
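The 80-10-10 split described above can be sketched as a simple shuffle-and-slice. This is illustrative only: the paper does not provide its splitting code, and the `seed` here is an arbitrary choice for reproducibility.

```python
import random

def split_80_10_10(items, seed=0):
    # Shuffle once with a fixed seed, then slice into training, testing,
    # and validation portions (80/10/10), mirroring the protocol in §6.
    rng = random.Random(seed)
    items = items[:]              # don't mutate the caller's list
    rng.shuffle(items)
    n = len(items)
    a, b = int(0.8 * n), int(0.9 * n)
    return items[:a], items[a:b], items[b:]

# One LC25000 class has 5000 images; indices stand in for image paths.
train, test, val = split_80_10_10(list(range(5000)))
print(len(train), len(test), len(val))  # -> 4000 500 500
```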

Deep learning techniques are advanced machine learning techniques that do not require feature extractors designed by domain experts; the model learns the features by itself. We can inspect the feature detectors learned by the models by visualizing the weights of the feature maps. Figure 3 shows the feature maps learned by all the convolution layers of both models. The filters can be visualized at the image level, and some of them resemble Gabor filters (Fogel and Sagi; Bovik et al.; Zeiler and Fergus).

(a) colon filter 1
(b) colon filter 2
(c) colon filter 3
(d) lung filter 1
(e) lung filter 2
(f) lung filter 3
Figure 3: Feature maps learned by the convolution layers. (a) and (d) show filters from the first convolution layer of the colon and lung models respectively; similarly, (b), (e) and (c), (f) illustrate the filters from the second and third convolution layers.
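Filter visualizations like those in Figure 3 are typically produced by min-max normalizing each learned kernel into the displayable [0, 1] range. A small NumPy sketch, using a randomly initialized tensor with the shape of the first convolution layer's kernels (the real weights would come from the trained model):

```python
import numpy as np

def filters_to_tiles(w):
    # w: (kh, kw, in_ch, out_ch) kernel tensor as stored by a conv layer.
    # Normalize each filter to [0, 1] so it can be rendered as a small image.
    kh, kw, cin, cout = w.shape
    tiles = []
    for i in range(cout):
        f = w[..., i]
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)
        tiles.append(f)
    return np.stack(tiles)          # (out_ch, kh, kw, in_ch)

# Random stand-in for the first layer's 32 kernels of shape 3x3x3.
w = np.random.randn(3, 3, 3, 32)
tiles = filters_to_tiles(w)
print(tiles.shape)  # -> (32, 3, 3, 3)
```

Each tile can then be passed to any image viewer (e.g. matplotlib's `imshow`) to produce a grid like Figure 3.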
(a) Colon Loss
(b) Colon Accuracy
(c) Lung Loss
(d) Lung Accuracy
Figure 4: Accuracy and loss metrics plotted over training steps. (a) and (b) report the loss and accuracy over all 100 epochs for the colon model; similarly, the loss and accuracy for the lung model are shown in (c) and (d).

To better assess performance, Figure 4 plots loss and accuracy against epochs. Jitter is evident in all the accuracy and loss curves, caused by the dropout layer, which helps the network generalize. There is, however, a slight aberration in the colon plots: the validation loss increases for roughly the first 5 epochs before starting to converge. Large spikes in both accuracy and loss can also be observed for the colon model at certain epochs. For both models, it can be seen that convergence took around 20 epochs.

Table 2 reports the performance metrics for both models, in both training and validation, at the image level.

Type Rule Accuracy Loss
Lung Training
Validation
Colon Training
Validation
Table 2: Performance metrics of all the models

7 Conclusions

In this paper, we presented a set of experiments conducted on the LC25000 dataset using a deep learning approach. We showed that a shallow CNN architecture designed for classifying colour images of objects can be adapted to the classification of lung and colon histopathological images. We also proposed a training and evaluation strategy for the CNN architecture that deals with the high resolution of these textured images without converting them to low-resolution images. Our experimental results on LC25000 show improved accuracy for the CNN when compared with traditional machine learning models and with transfer learning based deep CNN models trained on the same dataset with state-of-the-art texture descriptors. Future work can explore different CNN architectures and the optimization of hyperparameters, as well as strategies for applying neural style transfer to generate inter-class images for different histopathologies. In addition, generative models could be used to synthesize histopathological images for visualizing and exploring mutations across different ontologies.

We would like to acknowledge Mayank Pratap Singh, Aditi Chaurasia, Sumit Yadav, and Prachi Bundela for helpful discussions. Ravinder Singh shared his script for train test split with us and much-needed support with LaTeX typesetting. We would like to thank the developers of TensorFlow. We would also like to thank Engineerbabu IT Services PVT LTD. for providing computational resources.

References

  • M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Note: Software available from tensorflow.org External Links: Link Cited by: §6.
  • M. A. Abbas, S. U. K. Bukhari, A. Syed, and S. S. H. Shah (2020) The histopathological diagnosis of adenocarcinoma & squamous cells carcinoma of lungs by artificial intelligence: a comparative study of convolutional neural networks. medRxiv. Cited by: §2.
  • [3] About lung cancer: lung cancer overview. External Links: Link Cited by: §1.
  • H. J. Aerts, E. R. Velazquez, R. T. Leijenaar, C. Parmar, P. Grossmann, S. Carvalho, J. Bussink, R. Monshouwer, B. Haibe-Kains, D. Rietveld, et al. (2014) Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature communications 5 (1), pp. 1–9. Cited by: §2.
  • Md. Z. Alom, T. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. Nasrin, M. Hasan, B. Essen, A. Awwal, and V. Asari (2019) A state-of-the-art survey on deep learning theory and architectures. Electronics 8, pp. 292. External Links: Document Cited by: Figure 2.
  • M. Araghi, I. Soerjomataram, M. Jenkins, J. Brierley, E. Morris, F. Bray, and M. Arnold (2019) Global trends in colorectal cancer mortality: projections to the year 2035. International journal of cancer 144 (12), pp. 2992–3000. Cited by: §1.
  • D. Ardila, A. P. Kiraly, S. Bharadwaj, B. Choi, J. J. Reicher, L. Peng, D. Tse, M. Etemadi, W. Ye, G. Corrado, et al. (2019) End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature medicine 25 (6), pp. 954–961. Cited by: §2.
  • N. Arslan, A. Yilmaz, U. Firat, and M. H. Tanriverdi (2018) Analysis of cancer cases from dicle university hospital; ten years’ experience analysis of cancer cases. JOURNAL OF CLINICAL AND ANALYTICAL MEDICINE 9 (2), pp. 102–106. Cited by: §1.
  • C. Barata, J. S. Marques, and J. Rozeira (2012) A system for the detection of pigment network in dermoscopy images using directional filters. IEEE transactions on biomedical engineering 59 (10), pp. 2744–2754. Cited by: §2.
  • C. Barata, M. Ruela, T. Mendonça, and J. S. Marques (2014) A bag-of-features approach for the classification of melanomas in dermoscopy images: the role of color and texture descriptors. In Computer vision techniques for the diagnosis of skin cancer, pp. 49–69. Cited by: §2.
  • S. Behnke (2003) Hierarchical neural networks for image interpretation. Vol. 2766, Springer. Cited by: §5.
  • M. Beller, R. Stotzka, T. O. Müller, and H. Gemmeke (2005) An example-based system to support the segmentation of stellate lesions. In Bildverarbeitung für die Medizin 2005, pp. 475–479. Cited by: §2.
  • Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE transactions on pattern analysis and machine intelligence 35 (8), pp. 1798–1828. Cited by: §4.
  • A. A. Borkowski, M. M. Bui, L. B. Thomas, C. P. Wilson, L. A. DeLand, and S. M. Mastorides (2019) Lung and colon cancer histopathological image dataset (lc25000). External Links: 1912.12142 Cited by: §3.
  • A. C. Bovik, M. Clark, and W. S. Geisler (1990) Multichannel texture analysis using localized spatial filters. IEEE transactions on pattern analysis and machine intelligence 12 (1), pp. 55–73. Cited by: §6.
  • F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal (2018) Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians 68 (6), pp. 394–424. Cited by: §1.
  • S. U. K. Bukhari, S. Asmara, S. K. A. Bokhari, S. S. Hussain, S. U. Armaghan, and S. S. H. Shah (2020) The histological diagnosis of colonic adenocarcinoma by applying partial self supervised learning. medRxiv. Cited by: §2.
  • N. Coudray, P. S. Ocampo, T. Sakellaropoulos, N. Narula, M. Snuderl, D. Fenyö, A. L. Moreira, N. Razavian, and A. Tsirigos (2018) Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nature medicine 24 (10), pp. 1559–1567. Cited by: §2.
  • K. Doi (2007) Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized medical imaging and graphics 31 (4-5), pp. 198–211. Cited by: §2.
  • N. H. Eltonsy, G. D. Tourassi, and A. S. Elmaghraby (2007) A concentric morphology model for the detection of masses in mammography. IEEE transactions on medical imaging 26 (6), pp. 880–889. Cited by: §2.
  • I. Fogel and D. Sagi (1989) Gabor filters as texture discriminator. Biological cybernetics 61 (2), pp. 103–113. Cited by: §6.
  • M. N. Gurcan, L. E. Boucheron, A. Can, A. Madabhushi, N. M. Rajpoot, and B. Yener (2009) Histopathological image analysis: a review. IEEE reviews in biomedical engineering 2, pp. 147–171. Cited by: §1.
  • F. Han, H. Wang, G. Zhang, H. Han, B. Song, L. Li, W. Moore, H. Lu, H. Zhao, and Z. Liang (2015) Texture feature analysis for computer-aided diagnosis on pulmonary nodules. Journal of digital imaging 28 (1), pp. 99–115. Cited by: §2.
  • S. H. Hawkins, J. N. Korecki, Y. Balagurunathan, Y. Gu, V. Kumar, S. Basu, L. O. Hall, D. B. Goldgof, R. A. Gatenby, and R. J. Gillies (2014) Predicting outcomes of nonsmall cell lung cancer using ct image features. IEEE access 2, pp. 1418–1426. Cited by: §2.
  • D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, et al. (2018) Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172 (5), pp. 1122–1131. Cited by: §4.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §4.
  • S. Lakshmanaprabu, S. N. Mohanty, K. Shankar, N. Arunkumar, and G. Ramirez (2019) Optimal deep learning model for classification of lung cancer on ct images. Future Generation Computer Systems 92, pp. 374–382. Cited by: §2.
  • Y. LeCun, Y. Bengio, et al. (1995) Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361 (10), pp. 1995. Cited by: §1.
  • J. Lee, S. Jun, Y. Cho, H. Lee, G. B. Kim, J. B. Seo, and N. Kim (2017) Deep learning in medical imaging: general overview. Korean journal of radiology 18 (4), pp. 570–584. Cited by: §4.
  • R. Meier, S. Bauer, J. Slotboom, R. Wiest, and M. Reyes (2013) A hybrid model for multimodal brain tumor segmentation. Multimodal Brain Tumor Segmentation 31, pp. 31–37. Cited by: §2.
  • K. Munir, H. Elahi, A. Ayub, F. Frezza, and A. Rizzi (2019) Cancer diagnosis using deep learning: a bibliographic review. Cancers 11 (9), pp. 1235. Cited by: §2.
  • R. Rubin, D. S. Strayer, E. Rubin, et al. (2008) Rubin’s pathology: clinicopathologic foundations of medicine. Lippincott Williams & Wilkins. Cited by: §1.
  • M. Sadeghi, T. K. Lee, D. McLean, H. Lui, and M. S. Atkins (2013) Detection and analysis of irregular streaks in dermoscopic images of skin lesions. IEEE transactions on medical imaging 32 (5), pp. 849–861. Cited by: §2.
  • D. Scherer, A. Müller, and S. Behnke (2010) Evaluation of pooling operations in convolutional architectures for object recognition. In International conference on artificial neural networks, pp. 92–101. Cited by: §4.
  • D. Shen, G. Wu, and H. Suk (2017) Deep learning in medical image analysis. Annual review of biomedical engineering 19, pp. 221–248. Cited by: §4.
  • K. Sirinukunwattana, S. E. A. Raza, Y. Tsang, D. R. Snead, I. A. Cree, and N. M. Rajpoot (2016) Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE transactions on medical imaging 35 (5), pp. 1196–1206. Cited by: §2.
  • F. A. Spanhol, L. S. Oliveira, C. Petitjean, and L. Heutte (2016) Breast cancer histopathological image classification using convolutional neural networks. In 2016 international joint conference on neural networks (IJCNN), pp. 2560–2567. Cited by: §4.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §5.
  • K. Suzuki (2017) Overview of deep learning in medical imaging. Radiological physics and technology 10 (3), pp. 257–273. Cited by: §4.
  • C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §2.
  • G. M. te Brake, N. Karssemeijer, and J. H. Hendriks (2000) An automatic method to discriminate malignant masses from normal tissue in digital mammograms1. Physics in Medicine & Biology 45 (10), pp. 2843. Cited by: §2.
  • T. Tieleman and G. Hinton (2012) Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. Note: COURSERA: Neural Networks for Machine Learning Cited by: §5.
  • J. Wei, B. Sahiner, L. M. Hadjiiski, H. Chan, N. Petrick, M. A. Helvie, M. A. Roubidoux, J. Ge, and C. Zhou (2005) Computer-aided detection of breast masses on full field digital mammograms. Medical physics 32 (9), pp. 2827–2838. Cited by: §2.
  • D. Yates (2008) The practice of statistics : ti-83/84/89 graphing calculator enhanced. W.H. Freeman, New York. External Links: ISBN 978-0-7167-7309-2 Cited by: §3.
  • F. Yin, M. L. Giger, K. Doi, C. J. Vyborny, and R. A. Schmidt (1994) Computerized detection of masses in digital mammograms: automated alignment of breast images and its effect on bilateral-subtraction technique. Medical Physics 21 (3), pp. 445–452. Cited by: §2.
  • M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. Lecture Notes in Computer Science, pp. 818–833. External Links: ISBN 9783319105901, ISSN 1611-3349, Link, Document Cited by: §6.
  • D. Zikic, B. Glocker, E. Konukoglu, A. Criminisi, C. Demiralp, J. Shotton, O. M. Thomas, T. Das, R. Jena, and S. J. Price (2012) Decision forests for tissue-specific segmentation of high-grade gliomas in multi-channel mr. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 369–376. Cited by: §2.