Introduction
Breast cancer is the second leading cause of cancer-related death among women Siegel et al. (2016). Effective treatment of breast cancer depends heavily on accurate diagnosis at an early stage. In clinical practice, histopathological image analysis is the gold standard for detecting breast cancer, and it is usually conducted manually by pathologists. However, this analysis is difficult even for skilled pathologists. According to previous studies Elmore et al. (2015), the average diagnostic concordance among different pathologists is only 75%. Moreover, the diagnostic process is time-consuming due to the complexity of pathological images. Therefore, in order to improve accuracy and efficiency, it is necessary to develop computer-aided diagnosis (CAD) systems for cancer recognition.
In the past decades, many automatic algorithms based on digital pathology have been proposed for tumor classification. By exploiting hand-crafted features, a variety of machine learning methods have been used, such as support vector machines (SVM) Filipczuk et al. (2013), multilayer perceptrons (MLP) George et al. (2013) and random forests (RF) Nguyen, Wang, and Nguyen (2013). More recently, convolutional neural networks (CNNs) have achieved remarkable success in this field Roy et al. (2019); Yao et al. (2019); Alzubaidi et al. (2020), benefiting from their ability to extract hierarchical features automatically. Typically, the whole pathological image is divided into small patches which are classified by a CNN, and these patch-wise predictions are then integrated to obtain the final image-wise classification result. However, such patch-wise feature learning lacks the ability to capture global contextual information.
To overcome this limitation, some researchers have attempted to use graph convolutional networks (GCNs) for pathological image classification Zhou et al. (2019); Anand, Gadiya, and Sethi (2020); Wang et al. (2020); Adnan, Kalra, and Tizhoosh (2020). Most of these works follow the workflow below. Firstly, the pathological image is transformed into a graph representation, where the detected cancer cells serve as nodes and edges are formed in terms of spatial distance. Secondly, cell-level features are extracted as the initial node embeddings. Thirdly, the cell graph is fed into a GCN followed by an MLP to perform image-wise classification. In this setting, global features, including the spatial relations among cells, are embedded by the GCN.
However, the current pipeline still faces two challenges. Firstly, cellular interactions alone are insufficient to completely represent the pathological structure. In fact, the tissue distribution is hierarchical, with many substructures such as stroma, glands, and tumors. To learn the intrinsic characteristics of cancerous tissue, it is necessary to aggregate multi-level structural information. For this reason, pathologists always need to analyze many images at different magnification levels to give an accurate cancer diagnosis. Secondly, this multi-stage workflow is tedious and labor-intensive. The performance of the GCN relies heavily on the previous steps such as cell detection and feature extraction. Moreover, such a staged framework lacks robustness across different datasets, as parameters need to be tuned at every step.
To tackle the aforementioned problems, we propose a novel framework named multi-scale graph wavelet neural network (MSGWNN) for histopathological image classification. Graph wavelet neural network (GWNN) Xu et al. (2019) replaces the graph Fourier transform in spectral GCNs with the graph wavelet transform. Further, GWNN has a good localization property in the node domain, making it more flexible to adjust the receptive fields of nodes (via the scaling parameter s). Based on GWNN, we present MSGWNN, which takes advantage of spectral graph wavelets to perform multi-scale analysis. More specifically, after converting pathological images into graph representations, we use multiple GWNNs with different scaling parameters in parallel to obtain multi-scale contextual information from the graph topology. Then, all these features are aggregated to produce the final image-level (i.e. graph-level) classification prediction. The main contributions are summarized as follows:

We propose a novel framework (MSGWNN) for breast cancer diagnosis by mapping pathological images into the graph domain. By exploiting multi-scale graph wavelets, the proposed MSGWNN can obtain multi-level tissue structural information, which is exactly what pathology analysis needs. Although we apply MSGWNN to a disease detection problem in this manuscript, MSGWNN is a general image analysis framework that can be applied to many classification tasks.

MSGWNN can be trained in an end-to-end manner. Compared to the previous multi-stage workflows based on GCNs, MSGWNN simplifies the diagnostic process and enhances robustness. To the best of our knowledge, MSGWNN is the first end-to-end framework to apply GCNs to pathological image classification.

MSGWNN is evaluated on two public breast cancer datasets (BACH and BreakHis), achieving accuracies of 93.75% and 99.67% respectively. These results outperform the existing state-of-the-art methods, demonstrating the superiority of our proposed model. Through ablation studies, we verify that multi-scale structural features are crucial for characterizing cancers.
Related Work
Multi-Scale Feature Learning in Pathology Images
For cancer recognition, both local information about lesion appearance and global tissue organization are required. Based on this motivation, many attempts have been made to encode multi-scale information in pathology images, mainly including multi-scale feature fusion Bardou, Zhang, and Ahmad (2018); Shen et al. (2017); Tokunaga et al. (2019) and the combination of CNNs and recurrent neural networks (RNNs) Zhou et al. (2018); Guo et al. (2018); Yan et al. (2020). For instance, Shen et al. (2017) presented a multi-crop pooling operation to enable multi-level feature extraction, where the resulting rich features are concatenated to form the final feature vector. In the work of Tokunaga et al. (2019), three parallel CNNs were used to process images at different magnifications, and the multi-branch features were combined in a weighted manner. In order to extract the contextual information in cancer tissue, Yan et al. (2020) utilized an RNN to capture the spatial correlations between patch-wise features learned from a CNN. In summary, the above methods aim at learning hierarchical features in Euclidean space, where the notion of scale is typically related to Euclidean distance (spatial location). However, in this paper, we map pathological images into the graph domain. The graph edges are formed according to the feature similarities between nodes, making it possible to explore the role of scale in non-Euclidean space.
Spectral Graph Wavelet Transform
The spectral graph wavelet transform was originally proposed by Hammond, Vandergheynst, and Gribonval (2011), who constructed wavelet transforms in the node domain of a finite weighted graph. They also proved that graph wavelets exhibit a good localization property in the fine-scale limit. In the work of Tremblay and Borgnat (2014), graph wavelets were applied to detect multi-scale communities in networks: the correlation between wavelets was used to assess the similarity between nodes, and a clustering algorithm was proposed based on this similarity. Based on graph diffusion wavelets, Donnat et al. (2018) developed the GraphWave method to represent each node's neighborhood via a low-dimensional embedding. This kind of node embedding focuses on the local topology around each node. More recently, Xu et al. (2019) proposed the graph wavelet neural network (GWNN), which introduces the graph wavelet transform into spectral GCNs, offering high sparsity and good localization for graph convolution. However, their experiments were performed at only one scale, which did not leverage the localization property of graph wavelets to perform multi-scale analysis.
Method
In this section, we first introduce the basic concepts of spectral graph convolution, and then describe the spectral convolution based on graph wavelets. Next, we discuss the architecture of the graph wavelet neural network (GWNN). Finally, we present the multi-scale graph wavelet neural network (MSGWNN) for pathological image classification.
Graph Representation
Let G = (V, E) be an undirected graph, where V is the set of nodes with |V| = n and E is the set of edges. X ∈ R^{n×d} is the node embedding matrix. A ∈ R^{n×n} is a symmetric adjacency matrix defining the graph topology. We denote the graph's normalized Laplacian matrix as L = I_n − D^{−1/2} A D^{−1/2}, where I_n is the identity matrix and D is a diagonal matrix with D_{ii} = Σ_j A_{ij}. Further, we denote L = U Λ U^T as the eigenvector decomposition of the normalized Laplacian matrix L, where Λ = diag(λ_1, …, λ_n) is the diagonal matrix formed by the eigenvalues of L and U is the matrix of eigenvectors.

Spectral Graph Convolution
Spectral convolution was proposed by Bruna et al. (2013), which defines the graph convolution operation in the Fourier domain. It can be represented as:

y = U g_θ(Λ) U^T x,

where x ∈ R^n is the signal to be processed and g_θ(Λ) = diag(θ) is the convolutional filter parameterized by θ ∈ R^n. This definition of spectral graph convolution is not spatially localized in the node domain. In other words, the feature aggregation of one node depends on all the nodes, not only its neighbourhood nodes. Moreover, it is computationally expensive to compute the eigendecomposition of the Laplacian matrix. To improve computational efficiency, some researchers have used a truncated expansion based on Chebyshev polynomials to approximate the convolutional kernel Hammond, Vandergheynst, and Gribonval (2011); Defferrard, Bresson, and Vandergheynst (2016); Kipf and Welling (2016).
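The definition above can be made concrete with a minimal NumPy sketch (the 4-node path graph, the all-ones filter, and all variable names are illustrative assumptions, not part of the paper's implementation):

```python
import numpy as np

def normalized_laplacian(A):
    """L = I_n - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_conv(A, x, theta):
    """Spectral graph convolution: y = U diag(theta) U^T x."""
    lam, U = np.linalg.eigh(normalized_laplacian(A))  # L = U Lambda U^T
    return U @ (theta * (U.T @ x))

# Toy 4-node path graph; an all-ones filter acts as the identity.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([1.0, 0.0, 0.0, 0.0])
y = spectral_conv(A, x, theta=np.ones(4))
```

With a generic filter θ, every entry of y depends on every entry of x, which illustrates the lack of spatial localization discussed above.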
Spectral Convolution Based on Graph Wavelets
The spectral graph wavelet was presented in Hammond, Vandergheynst, and Gribonval (2011), which introduces a band-pass filter into the traditional graph Fourier domain mentioned above. We denote ψ_{s,i} as the wavelet centered at node i at scale s, whereby the graph wavelet basis is defined as:

ψ_s = U G_s U^T,

where G_s = diag(g(sλ_1), …, g(sλ_n)) is a scaling matrix with g(sλ_i) = e^{−sλ_i}. Given a set of graph wavelets, the graph wavelet transform is defined as x̂ = ψ_s^{−1} x and the inverse transform is x = ψ_s x̂. ψ_s^{−1} can be obtained by replacing g(sλ_i) in ψ_s with g(−sλ_i). Further, in this setting, the graph convolution based on spectral wavelet bases reads as

y = ψ_s g_θ ψ_s^{−1} x.

Similarly, g_θ = diag(θ) is the convolutional filter to be learned. As stated in the work of Xu et al. (2019), graph convolution using the wavelet transform has an excellent localization property, allowing it to outperform the traditional spectral convolution in graph-related tasks like node classification. In addition, the scaling parameter s controls the receptive fields of nodes in a continuous manner, unlike the previous approach Defferrard, Bresson, and Vandergheynst (2016) using the discrete shortest-path distance.
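A minimal NumPy sketch of the wavelet bases and the wavelet-domain convolution, assuming the heat kernel g(sλ) = e^{−sλ} (the toy ring graph is an illustrative assumption):

```python
import numpy as np

def wavelet_bases(L, s):
    """Graph wavelet basis psi_s = U G_s U^T with G_s = diag(exp(-s * lam_i)),
    and its inverse, obtained by replacing g(s lam) with g(-s lam)."""
    lam, U = np.linalg.eigh(L)
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T
    psi_inv = U @ np.diag(np.exp(s * lam)) @ U.T
    return psi, psi_inv

def wavelet_conv(L, x, theta, s):
    """Graph convolution in the wavelet domain: y = psi_s diag(theta) psi_s^{-1} x."""
    psi, psi_inv = wavelet_bases(L, s)
    return psi @ (theta * (psi_inv @ x))

# Toy 4-node ring graph (each node has degree 2); normalized Laplacian.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = np.eye(4) - A / 2.0
psi, psi_inv = wavelet_bases(L, s=1.0)
```

At small s, the columns of ψ_s concentrate around their center node (the localization property the text describes); larger s spreads each wavelet over a wider neighborhood.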
Graph Wavelet Neural Network (GWNN)
Based on spectral graph wavelet convolution, in this paper, we consider a three-layer graph wavelet neural network (GWNN) for node classification. The structure of the m-th layer can be divided into two steps:

feature transformation: X^{m,*} = X^m W^m,
graph convolution: X^{m+1} = h(ψ_s F^m ψ_s^{−1} X^{m,*}).

X^m and X^{m+1} are the input and output feature matrices respectively, W^m is the matrix of filter parameters, F^m is the diagonal convolutional kernel, and h is the activation function. More specifically, our forward model can be described as:

first layer: X^2 = ReLU(ψ_s F^1 ψ_s^{−1} X^1 W^1),
second layer: X^3 = ReLU(ψ_s F^2 ψ_s^{−1} X^2 W^2),
third layer: Z = softmax(ψ_s F^3 ψ_s^{−1} X^3 W^3).
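The three-layer forward pass can be sketched in NumPy as follows. The random weights, the fixed all-ones kernel, and the tiny graph are illustrative assumptions; in the actual model W^m and F^m are learned:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gwnn_layer(psi, psi_inv, X, W, f, act):
    """One GWNN layer: feature transformation X W, then graph convolution
    psi_s diag(f) psi_s^{-1}, then the activation."""
    return act(psi @ (f[:, None] * (psi_inv @ (X @ W))))

# Toy setup: 4-node ring graph, random weights (shapes are illustrative).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
lam, U = np.linalg.eigh(np.eye(4) - A / 2.0)
psi = U @ np.diag(np.exp(-1.0 * lam)) @ U.T        # scale s = 1.0
psi_inv = U @ np.diag(np.exp(1.0 * lam)) @ U.T
f = np.ones(4)                                     # diagonal kernel (learned in practice)

X1 = rng.normal(size=(4, 8))
X2 = gwnn_layer(psi, psi_inv, X1, rng.normal(size=(8, 6)), f, relu)     # first layer
X3 = gwnn_layer(psi, psi_inv, X2, rng.normal(size=(6, 5)), f, relu)     # second layer
Z = gwnn_layer(psi, psi_inv, X3, rng.normal(size=(5, 2)), f, softmax)   # third layer
```

The output Z is a per-node class-probability matrix, which is what the node-classification branches described below produce.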
Multi-Scale Graph Wavelet Neural Network (MSGWNN)
In this section, we present a new framework, the multi-scale graph wavelet neural network (MSGWNN), for the task of pathological image classification. The architecture of MSGWNN is shown in Figure 1 and consists of three parts: graph construction, node classification based on GWNN, and graph classification using feature aggregation.
Graph Construction. In this work, we transform pathological images into graph representations. Nodes are non-overlapping image patches, and edges are generated in terms of the intrinsic relationships between these patches. Firstly, we use a modified ResNet-50 (we remove blocks 3 and 4 and use average pooling for downsampling) to learn discriminative features. In this way, we obtain a series of feature maps in which each pixel corresponds to a square patch in the raw image whose side length equals the overall downsampling rate. Therefore, the feature vector of one pixel can be regarded as the node embedding of the corresponding patch (node).
Secondly, we make use of the similarity between node embeddings to form edges, which is defined by a dot product:

S_{ij} = θ(h_i)^T φ(h_j),

where h_i and h_j are the node embeddings of nodes i and j, while θ and φ are two transformation functions implemented via convolutions. The formula to form graph edges is as follows:

A_{ij} = 1 if S_{ij} ≥ t_p, and A_{ij} = 0 otherwise,

where t_p is the p-th percentile of the similarity matrix S ∈ R^{n×n} and n is the number of nodes, i.e. the number of patches.
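The edge-formation rule can be sketched as below. The matrices `W_theta` and `W_phi` stand in for the learned transformations θ and φ, and the random embeddings replace the CNN features; all are illustrative assumptions:

```python
import numpy as np

def build_adjacency(H, W_theta, W_phi, p=99):
    """Form edges from dot-product similarity between transformed node
    embeddings, keeping pairs at or above the p-th percentile of S."""
    S = (H @ W_theta) @ (H @ W_phi).T   # S_ij = theta(h_i) . phi(h_j)
    S = (S + S.T) / 2.0                 # symmetrize for an undirected graph
    A = (S >= np.percentile(S, p)).astype(float)
    np.fill_diagonal(A, 0.0)            # no self-loops
    return A

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 16))           # 50 hypothetical patch embeddings
A = build_adjacency(H, rng.normal(size=(16, 8)), rng.normal(size=(16, 8)))
```

With p = 99 (the setting reported in the implementation details), only the top 1% most similar patch pairs are connected, which keeps the graph sparse.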
Node Classification via GWNN. Given the constructed graph-structured data, we utilize multiple GWNNs with different scaling parameters s in parallel to conduct node classification. Each branch is supervised by the node-level labels via the cross-entropy loss function. Different parameters s mean different receptive fields, enabling the model to extract multi-scale contextual information.

Feature Aggregation.
After the node classification stage, each branch obtains a node prediction probability map of size n × c, where c is the number of cancer types. In this work, we sum these probability maps to aggregate the multi-scale structure representations. To perform graph-level (i.e. image-level) classification, we then sum the node class probabilities to yield the graph-level prediction, which is passed to a softmax layer and supervised by the graph-level label.
In this framework, the final loss is formed by the node-level loss and the graph-level loss together as follows:

L = L_g + λ L_n,   (1)

where λ is a balancing parameter. Note that both L_g and L_n are implemented via the classic cross-entropy loss function, and L_n is the sum of the node-level losses from the three branches. By minimizing the final loss, the error can be easily back-propagated through the whole MSGWNN system in an end-to-end way.
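The combined objective can be sketched numerically; the toy probability matrices and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy over rows of a probability matrix."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def msgwnn_loss(graph_probs, graph_label, branch_node_probs, node_labels, lam=1.0):
    """Final loss: graph-level cross entropy plus lambda times the node-level
    cross entropies summed over the parallel branches."""
    l_graph = cross_entropy(graph_probs, graph_label)
    l_node = sum(cross_entropy(p, node_labels) for p in branch_node_probs)
    return l_graph + lam * l_node

# Toy check: perfect predictions give (near) zero total loss.
g = np.array([[1.0, 0.0]])                  # graph-level probabilities
n = np.array([[1.0, 0.0], [0.0, 1.0]])      # node-level probabilities (2 nodes)
loss = msgwnn_loss(g, np.array([0]), [n, n, n], np.array([0, 1]))
```

Because the whole pipeline, from the CNN embeddings through the GWNN branches to this loss, is differentiable, gradients flow end-to-end as described above.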
Experiments and Results
Datasets
We validate the proposed MSGWNN on two public datasets: the ICIAR 2018 breast cancer histology (BACH) grand challenge dataset Aresta et al. (2019) and the BreakHis dataset Spanhol et al. (2015). The BACH dataset consists of 400 histopathological images of size 2048 × 1536 pixels, and the task is to predict the breast cancer type as 1) normal, 2) benign, 3) in situ carcinoma, or 4) invasive carcinoma. We randomly select 320 samples for training and the remaining 80 images for testing. The BreakHis dataset contains 7909 samples classified as either benign or malignant, collected at different magnification factors (40×, 100×, 200×, 400×). Experiments are conducted on the samples obtained at 40× magnification (1995 images in total). In line with previous works Bardou, Zhang, and Ahmad (2018); Gandomkar, Brennan, and Mello-Thoms (2018); Alom et al. (2019), the entire dataset is randomly divided into a training set with 70% of the samples and a testing set with the remaining 30%. In this work, all images in the BACH (BreakHis) dataset are resized to a lower resolution to reduce the GPU memory workload. For image preprocessing, we use the classic H&E color normalization approach described in Vahadane et al. (2015). To avoid overfitting, extensive data augmentation is performed, including flips, rotations, translations, shears and linear contrast normalization. The performance of the model is evaluated according to the average accuracy at the image level.
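The split and part of the augmentation pipeline can be sketched as follows. Only flips and 90-degree rotations are shown; the translation, shear, and contrast augmentations mentioned above are omitted, and the function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_dataset(n_images, train_frac=0.7):
    """Random image-level split into 70% training and 30% testing."""
    idx = rng.permutation(n_images)
    cut = int(train_frac * n_images)
    return idx[:cut], idx[cut:]

def augment(img):
    """Flip / rotation augmentation for an H x W x C image array."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # horizontal flip
    return np.rot90(img, k=int(rng.integers(4)))  # rotate by a multiple of 90 degrees

train_idx, test_idx = split_dataset(1995)         # e.g., the BreakHis subset used here
```

Splitting at the image level (rather than the patch level) avoids leaking patches of the same image into both sets.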
Implementations
To implement the proposed model, we use the TensorFlow platform on an NVIDIA GeForce GTX 1080 Ti GPU. Each GWNN model consists of three graph convolutional layers with feature dimensions of 256, 128 and 4 (2 for BreakHis) respectively. Node embeddings are produced by the modified ResNet-50 described in the graph-construction step. Moreover, we use the Adam optimizer to train the model. All models are trained for 30000 epochs with a batch size of 16. The hyperparameters p and λ are set to 99 and 1 respectively.

Computational Complexity
As the eigendecomposition of the Laplacian matrix is computationally expensive, we utilize a fast algorithm to approximate the graph wavelets. Hammond, Vandergheynst, and Gribonval (2011) showed that ψ_s and ψ_s^{−1} can be efficiently approximated by a truncated expansion in low-order Chebyshev polynomials. In this way, the computational complexity is O(m × |E|), where |E| is the number of graph edges and m is the degree of the Chebyshev polynomials. In this work, we set m to a low order.
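A sketch of this approximation, assuming the heat-kernel wavelet e^{−sL} and an illustrative degree m = 6; dense matrices are used here for checkability, whereas real implementations apply the recurrence to a sparse L to obtain the stated O(m × |E|) cost:

```python
import numpy as np

def cheb_wavelet_apply(L, x, s, m=6, lam_max=2.0):
    """Approximate psi_s x = exp(-s L) x with a degree-m Chebyshev expansion
    of g(lam) = exp(-s lam) on [0, lam_max]; no eigendecomposition needed."""
    N = m + 1
    theta = np.pi * (np.arange(N) + 0.5) / N          # Chebyshev nodes
    lam = 0.5 * lam_max * (np.cos(theta) + 1.0)       # nodes mapped to [0, lam_max]
    c = np.array([2.0 / N * np.sum(np.exp(-s * lam) * np.cos(k * theta))
                  for k in range(N)])                 # expansion coefficients
    c[0] /= 2.0
    Lt = (2.0 / lam_max) * L - np.eye(len(L))         # rescale spectrum to [-1, 1]
    t_prev, t_curr = x, Lt @ x                        # T_0 x and T_1 x
    y = c[0] * t_prev + c[1] * t_curr
    for k in range(2, N):                             # three-term recurrence
        t_prev, t_curr = t_curr, 2.0 * (Lt @ t_curr) - t_prev
        y = y + c[k] * t_curr
    return y

# Compare against the exact wavelet on a toy 4-node ring graph.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = np.eye(4) - A / 2.0
lam_e, U = np.linalg.eigh(L)
x = np.array([1.0, 0.0, 0.0, 0.0])
exact = U @ np.diag(np.exp(-1.0 * lam_e)) @ U.T @ x
approx = cheb_wavelet_apply(L, x, s=1.0)
```

Each iteration of the recurrence only multiplies by the (sparse) rescaled Laplacian, which is where the linear dependence on |E| comes from.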
Results and Comparisons
Table 1: Comparison with state-of-the-art methods on the BACH dataset.

Model  Accuracy

Golatkar, Anand, and Sethi (2018)  85.00% 
Mahbod et al. (2018)  88.50% 
Roy et al. (2019)  90.00% 
Meng, Zhao, and Su (2019)  91.00% 
Yao et al. (2019)  92.00% 
Wang et al. (2018)  92.00% 
Kassani et al. (2019)  92.50% 
Proposed MSGWNN  93.75% 

Table 2: Comparison with state-of-the-art methods on the BreakHis dataset.

Model  Accuracy

Song et al. (2017)  87.00% 
Nahid and Kong (2019)  95.00% 
Han et al. (2017)  95.80% 
Veeling et al. (2018)  96.10% 
Kausar et al. (2019)  97.02% 
Alom et al. (2019)  97.95% 
Bardou, Zhang, and Ahmad (2018)  98.33% 
Gandomkar, Brennan, and MelloThoms (2018)  98.60% 
Proposed MSGWNN  99.67% 

For breast cancer classification, the proposed MSGWNN obtains accuracies of 93.75% and 99.67% on the BACH and BreakHis datasets respectively. The normalized confusion matrices are shown in Figure 2. On the BACH dataset, both the normal and invasive categories yield a high accuracy of 100%, while the accuracy of predicting the in situ type is relatively low. This could be due to similar structures appearing in images of different classes. In addition, we compare the results with existing state-of-the-art models, as listed in Tables 1 and 2. MSGWNN clearly outperforms the other methods on both datasets, especially on the BreakHis dataset, where the error rate is only 0.33%. The results demonstrate the strong capacity of the proposed MSGWNN model for pathological image classification.
Ablation Studies
Table 3: Ablation study of multi-scale feature learning on the BACH dataset.

Model  Accuracy
GCN  88.75% 
MSGWNN1 (s=0.5)  88.75% 
MSGWNN1 (s=1.0)  88.75% 
MSGWNN1 (s=1.5)  87.50% 
MSGWNN2 (s=0.5,1.0)  91.25% 
MSGWNN2 (s=0.5,1.5)  92.50% 
MSGWNN2 (s=1.0,1.5)  91.25% 
MSGWNN3 (s=0.5,1.0,1.5)  93.75% 

To verify the effectiveness of MSGWNN, we perform two kinds of ablation tests on the BACH dataset. One investigates the effect of multi-scale feature extraction, and the other concerns the selection of the balancing parameter λ in the loss.
The Function of Multi-Scale Feature Learning
In this section, we analyze the effect of multi-scale feature fusion. As shown in Table 3, as features at more scales are aggregated, the model performance improves accordingly. The specific scaling parameters are given in brackets. When using only one-level features (e.g. s = 1.0), MSGWNN obtains an accuracy of 88.75%. After introducing features at another scale (s = 0.5), the accuracy increases to 91.25%. Further, MSGWNN with three branches (s = 0.5, 1.0, 1.5) achieves a 93.75% accuracy. This comparison confirms that extracting multi-scale contextual information is very important for pathological image analysis, which is consistent with the experience of pathologists.
In addition, we also perform an experiment that replaces the multi-branch GWNNs in MSGWNN with the traditional graph convolutional network (GCN) Kipf and Welling (2016). As shown in Table 3, the GCN achieves a result similar to MSGWNN with one branch. However, due to the high sparsity of graph wavelets, GWNN is much more computationally efficient than GCN, as also discussed in the previous work Xu et al. (2019).
Visualization of the wavelet bases at different scales. To further investigate the working mechanism of MSGWNN, we show the graph wavelet bases at different scales in the top row of Figure 3. Each small square represents a node in the constructed graph, and the node-to-patch correspondence is drawn on the raw image, as shown in the second row of Figure 3. Specifically, the red square denotes the node at which the wavelet is centered, and the green squares denote the neighborhood nodes whose values in the wavelet are greater than 0. As the scale gets larger, the scope of the neighborhood widens accordingly, meaning that the receptive field of the node gradually expands. At the small scale, only a minority of nodes contribute to the embedding update of the central node. At the large scale, however, the neighborhood nodes increase greatly and spread over nearly the entire tissue. In this setting, information can be propagated among tissue structures at different levels, enabling MSGWNN to acquire multi-scale contextual features.
Visualization of the learned node embeddings. In order to examine the intrinsic characteristics of the node embeddings, we plot the 2D t-SNE projections of the node embeddings in Figure 4. T-distributed stochastic neighbor embedding (t-SNE) converts high-dimensional data to a low-dimensional (here 2-dimensional) representation while maintaining the distances between objects. The extracted 2D vectors are normalized and plotted in a scatter diagram. In Figure 4, (a) is the raw image, where the squares correspond to the patches (nodes) in the image (graph). (b) shows the t-SNE representation of the initial node embeddings learned from the modified ResNet. (c)(d)(e) are the t-SNE representations of the node embeddings yielded by GWNN at different scales. It should be noted that the red and blue points in the scatter diagrams correspond to the squares (nodes) with the same colors in (a). Clearly, in the pathological image (a), the red and blue squares belong to different tissues. Similarly, in (b)(c)(d)(e), the red and blue points are located in different clusters. This phenomenon demonstrates that the intrinsic characteristics of the nodes have been encoded in the embeddings. On the other hand, the distribution of points reflects the inherent relationships between nodes. At the medium scale, the red points lie very close to each other. At the large scale (s = 5), however, the red points have a much wider distribution. This contrast proves that the node embeddings produced by GWNN at different scales encode different structural information. Compared to using CNN features only, MSGWNN can provide richer node embeddings representing multi-level tissue structures. In clinical diagnosis, this tissue structural information is a critical factor considered by pathologists.
The Selection of the Scaling Parameter s
Table 4: Results of MSGWNN3 with different balancing parameters λ on the BACH dataset.

Model  Accuracy

MSGWNN3 ()  90.00% 
MSGWNN3 ()  92.50% 
MSGWNN3 ()  93.75% 
MSGWNN3 ()  91.25% 
MSGWNN3 ()  88.75% 

As mentioned before, we can adjust the receptive field by varying the parameter s. Hence, how to select an appropriate scale is a key factor for GWNN. According to our experience of parameter tuning, a rule of thumb is to select the scaling parameter in (0, 2]. If the scale is much greater than 2, the model performance decreases. There are methods to guide the selection of s in graph wavelet analysis. Donnat et al. (2018) designed a technique for selecting a set of applicable scales, in which the maximum and minimum scales are determined from the maximum eigenvalue and the minimum nonzero eigenvalue of the Laplacian matrix L, scaled by two constants (one suggested to be set to 0.85), and the appropriate scale is then selected between these bounds. However, our experiments show that this strategy does not work for graph wavelet neural networks. We will further investigate how to design an elegant, theoretically guided rule for selecting the scaling parameter s.
The Selection of the Balancing Parameter λ
Table 4 shows the numerical results of MSGWNN with different values of the balancing parameter λ. The number "3" in "MSGWNN3" means that the model utilizes three branches to learn multi-scale structural information; specifically, the three scaling factors in this experiment are set as s = 0.5, 1.0, 1.5. It can be observed that the parameter λ is an important factor influencing the performance of MSGWNN. Further, the best option is λ = 1, suggesting that the graph-level and node-level supervisions are equally crucial for identifying cancer types.
Conclusion
In this work, we propose the multi-scale graph wavelet neural network (MSGWNN) for histopathological image classification. MSGWNN leverages the localization property of graph wavelets to perform multi-scale analysis with different scaling parameters s. For the task of breast cancer diagnosis, MSGWNN outperforms state-of-the-art approaches, mainly owing to its powerful ability to integrate multi-scale contextual interactions. Through ablation studies, we demonstrate the importance of exploiting multi-scale features. More broadly, the MSGWNN model provides a novel solution for extracting multi-scale structural information using graph wavelets, which can also be applied to other tasks.
Ethics Statement
Across the world, breast cancer has high morbidity and mortality among women. Early detection is key to increasing the survival rate. Our proposed MSGWNN performs well on breast cancer diagnosis with high efficiency, which is valuable for clinical applications. Further, it opens up a new direction towards better multi-scale representations of pathological images in the graph domain.
References

Adnan, Kalra, and Tizhoosh (2020)
Adnan, M.; Kalra, S.; and Tizhoosh, H. R. 2020.
Representation Learning of Histopathology Images using Graph Neural
Networks.
In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
, 988–989.  Alom et al. (2019) Alom, M. Z.; Yakopcic, C.; Nasrin, M. S.; Taha, T. M.; and Asari, V. K. 2019. Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network. Journal of digital imaging 32(4): 605–617.

Alzubaidi et al. (2020)
Alzubaidi, L.; Al-Shamma, O.; Fadhel, M. A.; Farhan, L.; Zhang, J.; and Duan,
Y. 2020.
Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model.
Electronics 9(3): 445.  Anand, Gadiya, and Sethi (2020) Anand, D.; Gadiya, S.; and Sethi, A. 2020. Histographs: graphs in histopathology. In Medical Imaging 2020: Digital Pathology, volume 11320, 113200O. International Society for Optics and Photonics.
 Aresta et al. (2019) Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S. S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. 2019. Bach: Grand challenge on breast cancer histology images. Medical image analysis 56: 122–139.
 Bardou, Zhang, and Ahmad (2018) Bardou, D.; Zhang, K.; and Ahmad, S. M. 2018. Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access 6: 24680–24693.
 Bruna et al. (2013) Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203 .
 Defferrard, Bresson, and Vandergheynst (2016) Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, 3844–3852.
 Donnat et al. (2018) Donnat, C.; Zitnik, M.; Hallac, D.; and Leskovec, J. 2018. Learning structural node embeddings via diffusion wavelets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1320–1329.
 Elmore et al. (2015) Elmore, J. G.; Longton, G. M.; Carney, P. A.; Geller, B. M.; Onega, T.; Tosteson, A. N.; Nelson, H. D.; Pepe, M. S.; Allison, K. H.; Schnitt, S. J.; et al. 2015. Diagnostic concordance among pathologists interpreting breast biopsy specimens. Jama 313(11): 1122–1132.
 Filipczuk et al. (2013) Filipczuk, P.; Fevens, T.; Krzyżak, A.; and Monczak, R. 2013. Computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies. IEEE Transactions on Medical Imaging 32(12): 2169–2178.
 Gandomkar, Brennan, and Mello-Thoms (2018) Gandomkar, Z.; Brennan, P. C.; and Mello-Thoms, C. 2018. MuDeRN: Multi-category classification of breast histopathological image using deep residual networks. Artificial Intelligence in Medicine 88: 14–24.
 George et al. (2013) George, Y. M.; Zayed, H. H.; Roushdy, M. I.; and Elbagoury, B. M. 2013. Remote computeraided breast cancer detection and diagnosis system based on cytological images. IEEE Systems Journal 8(3): 949–964.

Golatkar, Anand, and Sethi (2018)
Golatkar, A.; Anand, D.; and Sethi, A. 2018.
Classification of breast cancer histology using deep learning.
In International Conference Image Analysis and Recognition, 837–844. Springer.  Guo et al. (2018) Guo, Y.; Liu, Y.; Bakker, E. M.; Guo, Y.; and Lew, M. S. 2018. CNN-RNN: A large-scale hierarchical image classification framework. Multimedia Tools and Applications 77(8): 10251–10271.
 Hammond, Vandergheynst, and Gribonval (2011) Hammond, D. K.; Vandergheynst, P.; and Gribonval, R. 2011. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis 30(2): 129–150.
 Han et al. (2017) Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; and Li, S. 2017. Breast cancer multiclassification from histopathological images with structured deep learning model. Scientific reports 7(1): 1–10.
 Kassani et al. (2019) Kassani, S. H.; Kassani, P. H.; Wesolowski, M. J.; Schneider, K. A.; and Deters, R. 2019. Breast cancer diagnosis with transfer learning and global pooling. arXiv preprint arXiv:1909.11839 .
 Kausar et al. (2019) Kausar, T.; Wang, M.; Idrees, M.; and Lu, Y. 2019. HWDCNN: Multiclass recognition in breast histopathology with Haar wavelet decomposed image based convolution neural network. Biocybernetics and Biomedical Engineering 39(4): 967–982.
 Kipf and Welling (2016) Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
 Mahbod et al. (2018) Mahbod, A.; Ellinger, I.; Ecker, R.; Smedby, Ö.; and Wang, C. 2018. Breast cancer histological image classification using finetuned deep network fusion. In International Conference Image Analysis and Recognition, 754–762. Springer.
 Meng, Zhao, and Su (2019) Meng, Z.; Zhao, Z.; and Su, F. 2019. Multi-classification of Breast Cancer Histology Images by Using Gravitation Loss. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1030–1034. IEEE.
 Nahid and Kong (2019) Nahid, A.-A.; and Kong, Y. 2019. Histopathological breast-image classification using concatenated R–G–B histogram information. Annals of Data Science 6(3): 513–529.

Nguyen, Wang, and Nguyen (2013)
Nguyen, C.; Wang, Y.; and Nguyen, H. 2013.
Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic.
Journal of Biomedical Science and Engineering 2013(5): 551–560.  Siegel et al. (2016) Siegel, R. L.; Miller, K. D.; and Jemal, A. 2016. Cancer statistics, 2016. CA: A Cancer Journal for Clinicians 66(1): 7–30.
 Roy et al. (2019) Roy, K.; Banik, D.; Bhattacharjee, D.; and Nasipuri, M. 2019. Patchbased system for Classification of Breast Histology images using deep learning. Computerized Medical Imaging and Graphics 71: 90–103.
 Shen et al. (2017) Shen, W.; Zhou, M.; Yang, F.; Yu, D.; Dong, D.; Yang, C.; Zang, Y.; and Tian, J. 2017. Multicrop convolutional neural networks for lung nodule malignancy suspiciousness classification. Pattern Recognition 61: 663–673.
 Song et al. (2017) Song, Y.; Zou, J. J.; Chang, H.; and Cai, W. 2017. Adapting fisher vectors for histopathology image classification. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), 600–603. IEEE.
 Spanhol et al. (2015) Spanhol, F. A.; Oliveira, L. S.; Petitjean, C.; and Heutte, L. 2015. A dataset for breast cancer histopathological image classification. IEEE Transactions on Biomedical Engineering 63(7): 1455–1462.
 Tokunaga et al. (2019) Tokunaga, H.; Teramoto, Y.; Yoshizawa, A.; and Bise, R. 2019. Adaptive weighting multi-field-of-view CNN for semantic segmentation in pathology. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 12597–12606.
 Tremblay and Borgnat (2014) Tremblay, N.; and Borgnat, P. 2014. Graph wavelets for multiscale community mining. IEEE Transactions on Signal Processing 62(20): 5227–5239.
 Vahadane et al. (2015) Vahadane, A.; Peng, T.; Albarqouni, S.; Baust, M.; Steiger, K.; Schlitter, A. M.; Sethi, A.; Esposito, I.; and Navab, N. 2015. Structurepreserved color normalization for histological images. In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), 1012–1015. IEEE.
 Veeling et al. (2018) Veeling, B. S.; Linmans, J.; Winkens, J.; Cohen, T.; and Welling, M. 2018. Rotation equivariant CNNs for digital pathology. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 210–218. Springer.
 Wang et al. (2020) Wang, J.; Chen, R. J.; Lu, M. Y.; Baras, A.; and Mahmood, F. 2020. Weakly supervised prostate tma classification via graph convolutional networks. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 239–243. IEEE.
 Wang et al. (2018) Wang, Z.; Dong, N.; Dai, W.; Rosario, S. D.; and Xing, E. P. 2018. Classification of breast cancer histopathological images using convolutional neural networks with hierarchical loss and global pooling. In International Conference Image Analysis and Recognition, 745–753. Springer.
 Xu et al. (2019) Xu, B.; Shen, H.; Cao, Q.; Qiu, Y.; and Cheng, X. 2019. Graph wavelet neural network. arXiv preprint arXiv:1904.07785 .
 Yan et al. (2020) Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Zhang, T.; Liu, Y.; Rao, X.; Zheng, C.; and Zhang, F. 2020. Breast cancer histopathological image classification using a hybrid deep neural network. Methods 173: 52–60.
 Yao et al. (2019) Yao, H.; Zhang, X.; Zhou, X.; and Liu, S. 2019. Parallel structure deep neural network using cnn and rnn with an attention mechanism for breast cancer histology image classification. Cancers 11(12): 1901.

Zhou et al. (2018)
Zhou, F.; Hang, R.; Liu, Q.; and Yuan, X. 2018.
Integrating convolutional neural network and gated recurrent unit for hyperspectral image spectralspatial classification.
In Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 409–420. Springer.  Zhou et al. (2019) Zhou, Y.; Graham, S.; Alemi Koohbanani, N.; Shaban, M.; Heng, P.-A.; and Rajpoot, N. 2019. CGC-Net: Cell graph convolutional network for grading of colorectal cancer histology images. In Proceedings of the IEEE International Conference on Computer Vision Workshops.