Image based cellular contractile force evaluation with small-world network inspired CNN: SW-UNet

08/23/2019 ∙ by Li Honghan, et al. ∙ Osaka University

We propose an image-based cellular contractile force evaluation method using a machine learning technique. We use a special substrate that exhibits wrinkles when cells grab the substrate and contract, and the wrinkles can be used to visualize the force magnitude and direction. In order to extract wrinkles from the microscope images, we develop a new CNN (convolutional neural network) architecture, SW-UNet (small-world U-Net), which is a CNN that reflects the concept of the small-world network. SW-UNet shows better performance in the wrinkle segmentation task than other methods: the error (Euclidean distance) of SW-UNet is 4.9 times smaller than that of the 2D-FFT (fast Fourier transform) based segmentation approach and 2.9 times smaller than that of U-Net. As a demonstration, we compare the contractile forces of U2OS (human osteosarcoma) cells and show that cells with a mutation in the KRAS oncogene exert a larger force than wild-type cells. Our new machine learning based algorithm provides an efficient, automated and accurate method to evaluate cellular contractile forces.


Experimental materials

Cell substrate

Based on our previous studies Yokoyama et al. (2017); Fukuda et al. (2017), we prepare a substrate that reversibly generates wrinkles upon application of cellular forces, following the steps in Fig. 1(c). Firstly, parts A and B of CY 52-276 (Dow Corning Toray) are mixed at a weight ratio of 1.25:1 to form a PDMS (polydimethylsiloxane) gel layer, which is coated on a circular cover glass. Secondly, the cover glass is placed in a 60°C oven for 20 hours to cure the PDMS gel. Thirdly, oxygen plasma (SEDE-GE, Meiwafosis) is applied uniformly to the surface of the PDMS layer to create an oxide layer that works as the substrate for cell culture. Finally, the substrate is coated with a 10 μg/mL collagen type I solution for 3 hours.

Cells

U2OS cells (HTB-96; ATCC) were maintained in DMEM (043-30085; Wako) supplemented with 10% FBS (SAFC Bioscience), 100 U/mL penicillin, and 100 µg/mL streptomycin (168-23191; Wako). Cells were maintained in a humidified 5% CO₂ incubator at 37°C.

Plasmids

The human KRAS wild-type cDNA (Addgene plasmid #83166, a gift from Dominic Esposito) and KRAS G12V cDNA (Addgene plasmid #83169, a gift from Dominic Esposito) were amplified using the KOD-plus-Neo DNA polymerase kit (KOD-401; Toyobo). The expression plasmids encoding mClover2-tagged KRAS wild-type and mRuby2-tagged KRAS G12V were constructed by inserting the PCR-amplified cDNAs into the mClover2-C1 vector (Addgene plasmid #54577, a gift from Michael Davidson) and the mRuby2-C1 vector (Addgene plasmid #54768, a gift from Michael Davidson), respectively. Before seeding the two populations of KRAS-expressing cells onto the gel substrate, cells were transiently transfected with either mClover2-KRAS wild-type or mRuby2-KRAS G12V using ScreenFect A (299-73203; Wako).

Methods

Overview

We give an overview of our CNN-based wrinkle detection system in Fig. 2. The full process consists of three steps: (a)-(b) preparation of the training dataset, (c) training and (d) wrinkle segmentation. Firstly, we utilize the 2D-FFT method Ichikawa et al. (2017) and the curvature filter Gong and Sbalzarini (2017) to extract rough wrinkle images for CNN training, as shown in Fig. 2(a). Note that images of cells and wrinkles are captured on an inverted phase-contrast microscope (IX73; Olympus) using a camera (ORCA-R2; Hamamatsu) with a 20× objective lens. A large number of cells cultured on the same substrate were imaged almost simultaneously using an XY motorized stage (Sigma Koki). In this step, the wrinkles are detected purely by image processing techniques, and image augmentation is used to increase the number of training images. Secondly, we train SW-UNet using the image pairs prepared in the first step: raw cell images (input) and wrinkle images (label), as shown in Fig. 2(c). Finally, we utilize the trained SW-UNet to extract wrinkles from test images, as in Fig. 2(d). In the following subsections, we explain each step in detail.

Figure 2: Overview of our approach. (a) Preparation of the training dataset. The wrinkles are extracted by image processing techniques: 2D-FFT (bandpass filtering) and the curvature filter. (b) Image augmentation methods, affine and warping transformations, are used to increase the size of the training dataset. (c) Training of SW-UNet from two sets of images: the original microscope images and the extracted wrinkle images. (d) Extraction of wrinkles using the trained SW-UNet.

Training dataset preparation

2D-FFT and bandpass filter

The wrinkle patterns are first extracted by a combination of three successive operations: 2D-FFT, bandpass filtering and inverse FFT (IFFT) Fukuda et al. (2017); Ichikawa et al. (2017). Note that this approach has already been established and utilized in our previous studies Fukuda et al. (2017); Ichikawa et al. (2017); please refer to these papers for details. Since the wrinkles have a characteristic wavelength (3-6 pixels), the pattern can be extracted by applying a bandpass filter to the image after the 2D-FFT operation, as shown in Fig. 3(a). Restoring the image with the IFFT, the wrinkles are extracted as shown in the figure, but the resulting image also contains cell contours.
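As a reference for this step, the following is a minimal sketch of the bandpass filtering, assuming a grayscale image stored as a 2D NumPy array and a passband corresponding to the 3-6 pixel wavelength range; the function name and the sharp annular cutoff are illustrative choices, not necessarily the exact filter used in our pipeline.

```python
import numpy as np

def fft_bandpass(img, wl_min=3.0, wl_max=6.0):
    """Keep only spatial wavelengths within [wl_min, wl_max] pixels (rough wrinkle band)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # Radial spatial frequency (cycles/pixel) of every Fourier coefficient
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    # Pass the annulus 1/wl_max <= r <= 1/wl_min and suppress everything else
    mask = (r >= 1.0 / wl_max) & (r <= 1.0 / wl_min)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# rough_wrinkles = fft_bandpass(phase_contrast_image)  # image A in Fig. 3
```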

Figure 3: Preparation of training data: wrinkle extraction with image processing techniques. (a) Rough extraction of wrinkles by a combination of three operations: 2D-FFT, bandpass filtering and IFFT. Since the wrinkles have their characteristic wavelength (3-6 pixels), they can be extracted (bandpass filtering) and restored (IFFT) with these three steps. (b) Extracting cell contours from the original images utilizing the curvature filter. Smoothing out the wrinkles, which has a smaller wavelength (i.e. high curvature), the cell contour is extracted. (c) Constructing clear wrinkle image combining two resultant images A and B.
Curvature filter

The curvature filter was originally designed for efficient smoothing and denoising operations Gong and Sbalzarini (2017). Treating the image intensities as a height field, the mean curvature of the surface can be obtained at each pixel. The filter can be used to smooth out only the wrinkles because pixels with higher curvature decay faster under this filter. Figure 3(b) shows images before and after the curvature filter: the wrinkles are smoothed out and only the cell contours remain. Note that we applied the filter repeatedly, 200-1000 times, until only the wrinkles disappeared.
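To illustrate the principle, the sketch below applies plain mean-curvature flow to the intensity height field. This is only a stand-in under our assumptions and not the projection-based filter of Gong and Sbalzarini (2017), but it shows how high-curvature (short-wavelength) wrinkles decay faster than the smoother cell contours.

```python
import numpy as np

def mean_curvature_smooth(img, n_iter=200, dt=0.1, eps=1e-8):
    """Iteratively shrink high-curvature structures of the intensity height field.

    Plain mean-curvature flow is used here as a simple stand-in for the curvature
    filter of Gong & Sbalzarini (2017), which uses a different (projection) update.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        uyy, _ = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        # Curvature-weighted update kappa*|grad u|: largest where the wrinkles are
        num = uxx * uy ** 2 - 2.0 * ux * uy * uxy + uyy * ux ** 2
        u += dt * num / (ux ** 2 + uy ** 2 + eps)
    return u

# contours_only = mean_curvature_smooth(phase_contrast_image, n_iter=500)  # image B in Fig. 3
```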

Computing the conjunction (A ∩ B) of the two resultant images, A (right end of Fig. 3(a)) and B (right end of (b)), the cell contours that appear in image A can be extracted. Finally, subtracting the cell contours (A ∩ B) from image A as shown in Fig. 3(c), images that contain only the wrinkles are obtained.
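The combination step in Fig. 3(c) can be written compactly as binary operations on the two intermediate results. The sketch below assumes both images are binarized first; the thresholds are illustrative placeholders.

```python
import numpy as np

def wrinkles_only(img_a, img_b, thresh_a=0.1, thresh_b=0.1):
    """Combine image A (FFT bandpass: wrinkles + contours) and image B
    (curvature filter: contours only) into a wrinkle-only training label.
    The thresholds are illustrative, not the values used in our pipeline."""
    a = np.abs(img_a) > thresh_a       # binarized image A
    b = np.abs(img_b) > thresh_b       # binarized image B
    contours = a & b                   # conjunction: cell contours present in A
    return a & ~contours               # subtract the contours, keep only wrinkles

# wrinkle_label = wrinkles_only(rough_wrinkles, contours_only)
```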

Image augmentation

We prepared 126 original cell images for the training. Many previous studies that handle biomedical images Ronneberger et al. (2015); Wu et al. (2015) used image augmentation techniques to increase the number of training images. In this study, we also expand the number of our cell images from 126 to 1404 by geometric affine transformations Wang and Perez (2017); Gao et al. (2016) and warping transformations.
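A minimal sketch of such an augmentation step is given below, applying the same random affine transformation and the same smooth random warp to a cell image and its wrinkle label. The transformation ranges (rotation, scale, shear, warp amplitude) are illustrative assumptions, since the exact augmentation parameters are not listed here.

```python
import numpy as np
from scipy import ndimage

def random_affine(img, label, rng):
    """Apply the same random rotation/scale/shear to an image and its wrinkle label."""
    ang = rng.uniform(-np.pi, np.pi)
    scale = rng.uniform(0.9, 1.1)                 # illustrative ranges
    shear = rng.uniform(-0.1, 0.1)
    A = scale * np.array([[np.cos(ang), -np.sin(ang) + shear],
                          [np.sin(ang),  np.cos(ang)]])
    c = np.array(img.shape) / 2.0
    offset = c - A @ c                            # transform about the image center
    warp = lambda x: ndimage.affine_transform(x, A, offset=offset, order=1, mode="reflect")
    return warp(img), warp(label)

def random_elastic(img, label, rng, alpha=20.0, sigma=4.0):
    """Smooth random displacement field (elastic warping), applied to both images."""
    dy = ndimage.gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dx = ndimage.gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
    coords = (yy + dy, xx + dx)
    warp = lambda x: ndimage.map_coordinates(x, coords, order=1, mode="reflect")
    return warp(img), warp(label)

# rng = np.random.default_rng(0)
# aug_img, aug_label = random_affine(*random_elastic(img, label, rng), rng)
```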

CNN architecture

Although the traditional image processing techniques are effective, as shown in the previous section, they fail to reproduce the wrinkle pattern in some cases (as also shown later in Fig. 6(a)). The image processing approach is not applicable in the following three situations: (i) when the wrinkles are entirely underneath and overlapped by the cell, (ii) when the wrinkles show few wave-like features and (iii) when the images contain intense noise. In this work, we utilize a CNN to overcome these situations and to extract clear wrinkle images.

In recent research, U-Net Ronneberger et al. (2015) has been widely used for the segmentation of biological and medical images Han and Ye (2018); Kayalibay et al. (2017); Dong et al. (2017). Figure 4(b) shows the network topology of U-Net, where each node corresponds to a tensor format (W, H, n): W and H represent the image size in pixels in the x- and y-directions respectively, while n is the number of images. Starting from a single input image (W, H, 1), which is shown with a blue node in Fig. 4(b), the data flow through the network counter-clockwise. Lines between the nodes are tensor conversions, such as pooling and convolution operations. The data finally come back to a single output image (W, H, 1) at the green node, and the network is designed to produce the desired segmented image at this final tensor.

The U-Net mainly consists of two paths, the contracting path (left side of Fig. 4(b)) and the expansive path (right side). The contracting path is responsible for extracting features from the images, while the expansive path is designed to reconstruct the desired object from the image features. The contracting path shrinks the image size using alternating convolution and pooling operations in the order (pooling, convolution, convolution). As a result of these procedures, W and H decrease while n increases. The expansive path, on the other hand, increases the image sizes W and H while decreasing n using alternating operations of (upsampling, convolution, convolution). The image sizes W and H reach a minimum after the contracting path and come back to the original size after the expansive path. There are special bypass connections in U-Net called "copy and crop" paths Ronneberger et al. (2015), which run horizontally from the contracting to the expansive path in Fig. 4(b); these paths are responsible for avoiding the loss of useful information during the pooling operations.
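For concreteness, the following PyTorch-style sketch shows one contracting step and one expansive step in the stated operation order; the channel counts, activation functions and framework choice are illustrative assumptions, not our exact implementation.

```python
import torch.nn as nn

class ContractBlock(nn.Module):
    """One contracting step in the order (pooling, convolution, convolution)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.MaxPool2d(2),                                          # halve W and H
            nn.Conv2d(n_in, n_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_out, n_out, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class ExpandBlock(nn.Module):
    """One expansive step in the order (upsampling, convolution, convolution)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(n_in, n_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_out, n_out, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

# x: (batch, 1, W, H) single-channel cell image; the channel counts are illustrative
# down = ContractBlock(1, 64)(x)      # (W/2, H/2, 64 images)
# up   = ExpandBlock(64, 1)(down)     # back to (W, H, 1 image)
```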

Figure 4: Overview of the SW-UNet architecture. (a) Network topology as a function of the random re-connection probability P. (b)-(e) Network topologies of several CNN structures. Each node corresponds to a tensor format, while black lines correspond to tensor conversions. (f) A table showing the node connection status for the SW-UNet in (e). Labels on the horizontal and vertical axes are both tensor formats, and the colors inside the table represent the connection status: red shows connected nodes, blue shows unconnected nodes and orange shows connected nodes with a recursively reduced number of input images. (g) A schematic showing a tensor conversion with three input nodes (A-C) and a single output node (D).

Algorithm for building SW-UNet

We now introduce the concept of the small-world network and modify the CNN topology accordingly. The topology of the small-world network is characterized and controlled by three parameters, N, K and P Watts and Strogatz (1998); Kochen (1989): N is the number of nodes in the network, K is the average number of connection branches between neighbouring nodes, and P is the random reconnection probability. The total number of branches is NK/2, and PNK/2 randomly selected branches are re-connected to other nodes in the network. Figure 4(a) shows schematics of the small-world network topology for fixed N and K but different values of P. Each node has connections only to its closest neighbouring nodes for P = 0, and the network topology becomes disordered as P increases. We built our SW-UNet architecture through the following procedures.
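The (N, K, P) parameterization is the standard Watts-Strogatz construction, which can be explored with networkx as in the sketch below; the parameter values are illustrative and unrelated to the SW-UNet settings.

```python
import networkx as nx

# Watts-Strogatz small-world topology: N nodes, each linked to its K nearest
# neighbours on a ring, and branches rewired with probability P.
N, K, P = 20, 4, 0.2                                     # illustrative values only
g = nx.connected_watts_strogatz_graph(n=N, k=K, p=P, seed=0)

print(g.number_of_edges())                               # N*K/2 = 40 branches
print(nx.average_shortest_path_length(g))                # average path length L
print(nx.average_clustering(g))                          # clustering coefficient C
```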

Network topology generation

In the first step, we build a DenseNet Huang et al. (2017); Jégou et al. (2017); Li et al. (2018), which corresponds to P = 0, as shown in Fig. 4(c). Each node corresponds to a tensor format (W, H, n), and the input image goes through the network counter-clockwise as in U-Net. Following the tensor conversions of U-Net, SW-UNet also consists of a contracting path with successive operations of (pooling, convolution, convolution) and an expansive path with (upsampling, convolution, convolution). Figure 4(f) lists the tensor formats that we use in SW-UNet.

In the second step, we re-connect randomly selected connections for P > 0, as shown in Fig. 4(d)-(e). The network is the DenseNet for P = 0, while the network becomes totally random for P = 1, as shown in Fig. 4(e). The image flow direction is always from the upstream to the downstream node.

Node connection

Format conversions are necessary to connect nodes that have different tensor formats, and Fig. 4(g) is a schematic of our connection algorithm. The connections in the schematic are extracted from Fig. 4(d), and it shows a situation in which three input nodes A-C are connected to a single output node D. We first use pooling and up-sampling operations to match the image size of the destination node D, (W_D, H_D): the pooling operation is utilized to contract input images that are larger than the destination, while the up-sampling operation is utilized to expand those that are smaller. Summing up all resultant images from nodes A-C, the total number of images is now n_A + n_B + n_C, but this value does not necessarily match the destination image number n_D. Therefore, a convolution operation is utilized to convert the number of images from n_A + n_B + n_C to n_D. Note that when one of the input image numbers (n_A, n_B or n_C) exceeds the destination image number n_D, we halve that input image number recursively until it becomes smaller than n_D. Figure 4(f) shows the connection status for the network in Fig. 4(e): red shows connected nodes, blue shows unconnected nodes, and orange shows connected nodes with a recursively reduced number of input images.
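The sketch below illustrates this connection rule in PyTorch under stated assumptions: inputs are resized to the destination size by average pooling or bilinear up-sampling, the recursive halving of the image number is realized with 1×1 convolutions (the text only states that the number is halved, not how), and the merged stack is converted to n_D images by a final convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeConnection(nn.Module):
    """Merge several input nodes into one output node of format (W_D, H_D, n_D)."""
    def __init__(self, in_channels, n_out):
        super().__init__()
        self.reducers = nn.ModuleList()
        total = 0
        for n in in_channels:
            layers = []
            while n > n_out:                     # halve recursively while too large
                layers.append(nn.Conv2d(n, n // 2, kernel_size=1))
                n //= 2
            self.reducers.append(nn.Sequential(*layers))
            total += n
        self.merge = nn.Conv2d(total, n_out, kernel_size=3, padding=1)  # -> n_D images

    def forward(self, inputs, out_size):
        resized = []
        for x, reduce in zip(inputs, self.reducers):
            x = reduce(x)
            if x.shape[-2:] != out_size:
                if x.shape[-1] > out_size[-1]:   # pooling for larger inputs
                    x = F.adaptive_avg_pool2d(x, out_size)
                else:                            # up-sampling for smaller inputs
                    x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
            resized.append(x)
        return self.merge(torch.cat(resized, dim=1))

# Example: nodes A, B, C with 64, 32 and 16 images merged into node D with 32 images
# conn = NodeConnection([64, 32, 16], n_out=32)
# xd = conn([xa, xb, xc], out_size=(64, 64))
```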

Training parameter

The training dataset contains 1404 images (from 126 original images), and the Adam optimizer Kingma and Ba (2014) with a learning rate of 0.0001 is utilized to train the CNN. We used Nvidia Titan Black GPUs (2 GPUs) to accelerate the training process.

In previous studies Roth et al. (2014); Depeursinge et al. (2012); Shin et al. (2016); Zhang et al. (2016); Sampaio et al. (2011), researchers prepared training datasets from a comparably limited number of original images and took care to avoid overfitting. Since we have only 126 original images for the training dataset, we need to restrict the number of training epochs Loughrey and Cunningham (2004). Therefore, we set the number of training steps in one epoch to 300 and the total number of epochs to 10.
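A minimal training-loop sketch consistent with these settings (Adam, learning rate 1e-4, 10 epochs of 300 steps each) is shown below; the batch size and the binary cross-entropy loss are assumptions, as they are not specified here.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, device="cuda"):
    """Train with the schedule above: Adam (lr=1e-4), 10 epochs x 300 steps.
    The batch size and loss function are illustrative assumptions."""
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.to(device).train()
    for epoch in range(10):                     # 10 epochs in total
        batches = iter(loader)
        for step in range(300):                 # 300 training steps per epoch
            try:
                image, label = next(batches)
            except StopIteration:               # restart the loader if exhausted
                batches = iter(loader)
                image, label = next(batches)
            image, label = image.to(device), label.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(image), label)
            loss.backward()
            optimizer.step()
```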

Wrinkle evaluation

After training the CNNs, we evaluate their accuracy on test images by comparing the outputs with ground-truth data. The ground-truth data are produced by three different researchers who were asked to trace the wrinkle lines manually. Although the cross-entropy is a standard measure for comparing images Shore and Johnson (1980); Yi-de et al. (2004); Ronneberger et al. (2015); Shin et al. (2016), we did not use it because it was not a proper criterion for comparing the performance of different networks. Interestingly, the accuracy (range: 0.9642-0.9759) and loss (range: 0.798-0.808) in the training process converge to almost the same values for all networks, even though there is a significant difference in the extracted wrinkles (as shown in Fig. 5(a)).

Instead, we utilize the perimeter length of the wrinkles l as the comparison criterion. In order to obtain the perimeter, we extract the wrinkle edges with the Prewitt operator at a threshold of 0.01 and count the number of edge pixels to obtain l. We introduce two different distances, the Euclidean distance d_E and the cosine distance d_cos, to quantify the difference between the wrinkle perimeters obtained by the CNN, l^CNN, and the ground truth, l^GT, collected over the test images. Each distance is defined as

d_E = \sqrt{ \sum_i ( l_i^{CNN} - l_i^{GT} )^2 },   (1)

d_cos = 1 - \frac{ \sum_i l_i^{CNN} l_i^{GT} }{ \sqrt{ \sum_i ( l_i^{CNN} )^2 } \sqrt{ \sum_i ( l_i^{GT} )^2 } },   (2)

where l_i^{CNN} and l_i^{GT} are the perimeter lengths for the i-th test image.
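A sketch of this evaluation, assuming the per-image perimeter-vector interpretation of Eqs. (1)-(2), is shown below; scipy's Prewitt operator is used for the edge extraction.

```python
import numpy as np
from scipy import ndimage

def wrinkle_perimeter(wrinkle_img, thresh=0.01):
    """Perimeter length l: number of edge pixels found by the Prewitt operator."""
    edges = np.hypot(ndimage.prewitt(wrinkle_img, axis=0),
                     ndimage.prewitt(wrinkle_img, axis=1))
    return int(np.count_nonzero(edges > thresh))

def euclidean_distance(l_cnn, l_gt):
    """Eq. (1): Euclidean distance between the two perimeter vectors."""
    l_cnn, l_gt = np.asarray(l_cnn, float), np.asarray(l_gt, float)
    return float(np.sqrt(np.sum((l_cnn - l_gt) ** 2)))

def cosine_distance(l_cnn, l_gt):
    """Eq. (2): cosine distance between the two perimeter vectors."""
    l_cnn, l_gt = np.asarray(l_cnn, float), np.asarray(l_gt, float)
    return float(1.0 - np.dot(l_cnn, l_gt) /
                 (np.linalg.norm(l_cnn) * np.linalg.norm(l_gt)))

# l_cnn = [wrinkle_perimeter(img) for img in cnn_outputs]
# l_gt  = [wrinkle_perimeter(img) for img in ground_truth_images]
# print(euclidean_distance(l_cnn, l_gt), cosine_distance(l_cnn, l_gt))
```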
Figure 5: Wrinkle segmentation using SW-UNet with different P-values. (a) Wrinkle images produced by SW-UNet and U-Net, where l is the wrinkle perimeter length. (b) Black plots show the distance from the ground truth while red plots show the SWI (3) of the network, both as a function of the network reconnection parameter P. (c) The distance as a function of SWI; the figure indicates that our network SW-UNet achieves better performance for larger SWI.

Results

Effect of the P-value in SW-UNet

We first evaluate the segmentation performance using different network topologies, SW-UNet (P = 0 to 1) and U-Net, in Fig. 5(a). Although most of the networks succeeded in extracting the wrinkles to some extent, the P = 0 network (DenseNet) and one of the SW-UNets failed, showing only vague regions of wrinkles. Comparing the wrinkle perimeter length l for different SW-UNets, images (i) and (ii) show the maximum length at intermediate P, while image (iii) shows larger l for larger P-values. For images (i) and (ii), the wrinkles are well extracted at moderate P but become less prominent as P increases. As a result, SW-UNets with large P-values would underestimate the wrinkle length. In the case of image (iii), the networks with large P overestimate the wrinkle length because they failed to distinguish the cell contours from the wrinkles. Figure 5(b) shows the distance from the manually traced ground truth, and the result shows that the segmentation performance is best at an intermediate value of P. The distance of U-Net was almost the same as that of SW-UNet at certain P values.

We now introduce the SWI (small-world index) Neal (2017) to characterize the network topology, which is defined as

SWI = \frac{L - L_{reg}}{L_{rand} - L_{reg}} \times \frac{C - C_{rand}}{C_{reg} - C_{rand}},   (3)

where L is the average path length and C is the clustering coefficient, defined as

L = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij},   (4)

C = \frac{1}{N} \sum_{i} \frac{\sum_{j,k} e_{ij} e_{ik} e_{jk}}{k_i (k_i - 1)}.   (5)

Here d_{ij} is the distance between nodes i and j, N is the number of nodes in the network, e_{ij} is the connection status between two nodes (e_{ij} = 1 when nodes i and j are connected and e_{ij} = 0 if they are not connected), and k_i = \sum_j e_{ij} is the number of connections of node i. The subscripts reg and rand denote values from the regular and random networks respectively: C_{reg} and C_{rand} are the clustering coefficients of the regular and random networks, while L_{reg} and L_{rand} are the average path lengths of the regular and random networks.
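For reference, the SWI of a given topology can be estimated as below with networkx; the construction of the regular and random reference networks (a ring lattice and fully rewired graphs of the same size and degree) is a common choice and an assumption here, not necessarily the exact procedure used for Fig. 5(b).

```python
import networkx as nx

def swi(g, n_rand=20, seed=0):
    """Small-world index (SWI) of Neal (2017) for graph g, using a ring lattice as
    the regular reference and rewired (P = 1) graphs as the random reference."""
    n = g.number_of_nodes()
    k = max(2, round(2 * g.number_of_edges() / n))        # average degree
    k -= k % 2                                            # ring lattice needs an even k
    L, C = nx.average_shortest_path_length(g), nx.average_clustering(g)
    reg = nx.watts_strogatz_graph(n, k, 0.0)              # regular ring lattice
    L_reg, C_reg = nx.average_shortest_path_length(reg), nx.average_clustering(reg)
    L_rand = C_rand = 0.0
    for s in range(n_rand):                               # average over random references
        rnd = nx.connected_watts_strogatz_graph(n, k, 1.0, seed=seed + s)
        L_rand += nx.average_shortest_path_length(rnd) / n_rand
        C_rand += nx.average_clustering(rnd) / n_rand
    return ((L - L_reg) / (L_rand - L_reg)) * ((C - C_rand) / (C_reg - C_rand))

# print(swi(nx.connected_watts_strogatz_graph(60, 6, 0.2, seed=1)))
```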

Figure 5(b) shows that the SWI reaches its maximum at an intermediate P and gradually decreases as P increases further. Plotting the distance as a function of the SWI, as shown in Fig. 5(c), the result suggests that networks with a larger SWI have better segmentation performance. Note that we evaluated the distance and the SWI with three randomly generated networks for each P value. In recent years, there was a report on the network topology of the macaque and cat cortices Sporns and Zwi (2004), and a small-world index can also be estimated from their results. The network topology in the brain might have been optimized in the process of evolution. Although we cannot draw a definite conclusion here because of the small number of samples, the SWI could be one criterion for judging performance when designing a new CNN.

From the next sections onward, we fix P to this optimal value for SW-UNet.

Figure 6: Wrinkle segmentation accuracy of SW-UNet and its application. (a) Comparison of wrinkles extracted by different methods. (b) Accuracy of the wrinkle segmentation quantified by the Euclidean and cosine distances from the ground-truth data. SW-UNet has the smallest error compared to the 2D-FFT based segmentation and U-Net. (c) The wrinkles (green lines) extracted by SW-UNet from microscope images of U2OS cells with the mutant KRAS gene (first row) and wild-type U2OS cells (second row). (d) Wrinkle lengths of the two cell types. The mutant cells have longer wrinkles than the wild-type cells, and there is a significant difference (Student's t-test) between the two groups.

Comparison of different segmentation methods

Figure 6(a) compares the wrinkles extracted with different approaches: the 2D-FFT based method (image-processing-based segmentation), U-Net and our SW-UNet. The 2D-FFT based method shows the worst segmentation performance, and the extracted wrinkles are dotted-line-like patterns rather than continuous lines. This is because the 2D-FFT based method can only detect periodic wave-like patterns, and it has limitations in detecting complex-shaped wrinkles such as those in images (ii) or (iii). The third row of Fig. 6(a) shows the images generated by U-Net. Although the wrinkles are extracted more clearly than with the 2D-FFT based approach, U-Net failed to distinguish the cell contours from the wrinkles in some circumstances. For example, U-Net treated cell organelles as wrinkles in images (ii) and (iii), accordingly overestimating the length of the wrinkles. In the case of image (iv), U-Net detected wrinkles at the cell perimeter even though there are no apparent wrinkles in the microscope image. On the other hand, SW-UNet succeeded in distinguishing the wrinkles from the cell contours, and the wrinkle length can be evaluated precisely.

We now use the Euclidean distance (1) and the cosine distance (2) to quantify the segmentation accuracy. Figure 6(b) shows the accuracy, defined as the inverse of the distance, obtained by comparison with the manually traced wrinkle lines. Note that the accuracy is normalized by the score of SW-UNet in the figure. The figure shows that SW-UNet has far better performance than the other two approaches: in terms of the Euclidean distance d_E, SW-UNet is 4.9 times more accurate than the 2D-FFT based approach and 2.9 times more accurate than U-Net. In terms of the cosine distance d_cos, SW-UNet is 36.8 times more accurate than the 2D-FFT based approach and 5.5 times more accurate than U-Net. In summary, our SW-UNet is the most effective method for this application.

Demonstration: Effect of KRAS mutation

To demonstrate that our SW-UNet is applicable to the evaluation of cellular contractile forces, we finally evaluate and compare the forces with and without a KRAS mutation. Mutations in the KRAS oncogene are highly correlated with the development of various types of cancer Tsuchida et al. (2016), including metastatic colorectal cancer Amado et al. (2008), pancreatic cancer Son et al. (2013) and non-small cell lung cancer Riely et al. (2009). G12V, a point mutation replacing glycine with valine at amino acid 12, is one of the most common oncogenic KRAS mutations and has been reported to result in enhanced myosin phosphorylation Hogan et al. (2009).

Utilizing our new SW-UNet method, we extracted the wrinkles from the microscope images, as shown in Fig. 6(c), and the mutant group shows more wrinkles than the wild-type group. In the supplemental material, we also show movies of moving cells with the extracted wrinkles (Movies 1 and 2). Figure 6(d) compares the wrinkle length l, and the average length of the mutant cells is larger than that of the wild-type cells. Student's t-test shows that the p-value between these two groups is 0.0245, indicating that the mutant group and the wild-type group are significantly different. The previous study Hogan et al. (2009), which reported enhanced myosin phosphorylation upon G12V mutation, indirectly suggests an increased force generation during cancer development. In accordance with this study, our present result demonstrates that the mutated cells indeed exhibit greater forces.
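The statistical comparison is a standard two-sample Student's t-test; a sketch with hypothetical wrinkle-length values (the measured per-cell lengths are not reproduced here) is given below.

```python
import numpy as np
from scipy import stats

# Hypothetical wrinkle-length samples (in pixels) for the two groups; in practice
# these are the per-cell perimeter lengths l measured from the SW-UNet output.
l_mutant    = np.array([5200.0, 6100.0, 4800.0, 7300.0, 6600.0, 5900.0])
l_wild_type = np.array([4100.0, 3500.0, 4700.0, 3900.0, 4400.0, 3600.0])

# Two-sample Student's t-test comparing the mutant and wild-type groups
t_stat, p_value = stats.ttest_ind(l_mutant, l_wild_type)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")   # p < 0.05: significant difference
```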

Given that comprehensive analyses are often crucial in cell biology, e.g., to evaluate how mutations in specific oncogenes or the administration of specific drugs change cellular physical forces, our high-throughput system with SW-UNet is potentially useful for evaluating more thoroughly the changes in cellular contractile force upon different types of molecular perturbation.

Conclusion

In this paper, we proposed an image-based cellular contractile force evaluation method using a machine learning technique. We developed a new CNN architecture, SW-UNet, for the image segmentation task, and the network reflects the concept of the small-world network. The network topology is controlled by three parameters: the number of nodes N, the number of connection branches from a single node to others K, and the re-connection probability P. Our network reaches its maximum segmentation performance at an intermediate P, and the result suggests that networks with a larger SWI might have better segmentation performance. Using our SW-UNet, we can extract the wrinkles more clearly than with other methods: the error (Euclidean distance) of SW-UNet was 4.9 times smaller than that of the 2D-FFT based wrinkle segmentation approach and 2.9 times smaller than that of U-Net. As a demonstration, we compared the contractile forces of U2OS cells and showed that cells with the mutant KRAS gene exhibit a larger force than the wild-type cells. Our new machine learning based algorithm provides an efficient, automated and accurate method to compare cellular contractile forces. We believe that our network SW-UNet and the CNN building strategy would be useful for other applications.

Acknowledgement

This work was supported by JSPS KAKENHI Grant Number 18H03518.

References

  • Polacheck and Chen (2016) W. J. Polacheck and C. S. Chen, Nature methods 13, 415 (2016).
  • Munevar et al. (2001) S. Munevar, Y.-l. Wang,  and M. Dembo, Biophysical journal 80, 1744 (2001).
  • Tan et al. (2003) J. L. Tan, J. Tien, D. M. Pirone, D. S. Gray, K. Bhadriraju,  and C. S. Chen, Proceedings of the National Academy of Sciences 100, 1484 (2003).
  • Liu et al. (2010) Z. Liu, J. L. Tan, D. M. Cohen, M. T. Yang, N. J. Sniadecki, S. A. Ruiz, C. M. Nelson,  and C. S. Chen, Proceedings of the National Academy of Sciences 107, 9944 (2010).
  • Burton and Taylor (1997) K. Burton and D. L. Taylor, Nature 385, 450 (1997).
  • Balaban et al. (2001) N. Q. Balaban, U. S. Schwarz, D. Riveline, P. Goichberg, G. Tzur, I. Sabanay, D. Mahalu, S. Safran, A. Bershadsky, L. Addadi, et al., Nature cell biology 3, 466 (2001).
  • Yokoyama et al. (2017) S. Yokoyama, T. S. Matsui,  and S. Deguchi, Biochemical and biophysical research communications 482, 975 (2017).
  • Ichikawa et al. (2017) T. Ichikawa, M. Kita, T. S. Matsui, A. I. Nagasato, T. Araki, S.-H. Chiang, T. Sezaki, Y. Kimura, K. Ueda, S. Deguchi, et al., J Cell Sci 130, 3517 (2017).
  • Fukuda et al. (2017) S. P. Fukuda, T. S. Matsui, T. Ichikawa, T. Furukawa, N. Kioka, S. Fukushima,  and S. Deguchi, Development, growth & differentiation 59, 423 (2017).
  • Harris et al. (1980) A. K. Harris, P. Wild,  and D. Stopak, Science 208, 177 (1980).
  • Ronneberger et al. (2015) O. Ronneberger, P. Fischer,  and T. Brox, in International Conference on Medical image computing and computer-assisted intervention (Springer, 2015) pp. 234–241.
  • Falk et al. (2019) T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, et al., Nature methods 16, 67 (2019).
  • Van Valen et al. (2016) D. A. Van Valen, T. Kudo, K. M. Lane, D. N. Macklin, N. T. Quach, M. M. DeFelice, I. Maayan, Y. Tanouchi, E. A. Ashley,  and M. W. Covert, PLoS computational biology 12, e1005177 (2016).
  • Fabijańska (2018) A. Fabijańska, Artificial intelligence in medicine 88, 1 (2018).
  • Niioka et al. (2018) H. Niioka, S. Asatani, A. Yoshimura, H. Ohigashi, S. Tagawa,  and J. Miyake, Human cell 31, 87 (2018).
  • Watts and Strogatz (1998) D. J. Watts and S. H. Strogatz, nature 393, 440 (1998).
  • Humphries and Gurney (2008) M. D. Humphries and K. Gurney, PloS one 3, e0002051 (2008).
  • Neal (2017) Z. P. Neal, Network Science 5, 30 (2017).
  • Hubel and Wiesel (1968) D. H. Hubel and T. N. Wiesel, The Journal of physiology 195, 215 (1968).
  • Bullmore and Sporns (2009) E. Bullmore and O. Sporns, Nature reviews neuroscience 10, 186 (2009).
  • Rubinov and Sporns (2010) M. Rubinov and O. Sporns, Neuroimage 52, 1059 (2010).
  • Sanz-Arigita et al. (2010) E. J. Sanz-Arigita, M. M. Schoonheim, J. S. Damoiseaux, S. A. Rombouts, E. Maris, F. Barkhof, P. Scheltens,  and C. J. Stam, PloS one 5, e13788 (2010).
  • Xie et al. (2019) S. Xie, A. Kirillov, R. Girshick,  and K. He, arXiv preprint arXiv:1904.01569  (2019).
  • Javaheripi et al. (2019) M. Javaheripi, B. D. Rouhani,  and F. Koushanfar, arXiv preprint arXiv:1904.04862  (2019).
  • Gong and Sbalzarini (2017) Y. Gong and I. F. Sbalzarini, IEEE Transactions on Image Processing 26, 1786 (2017).
  • Wu et al. (2015) R. Wu, S. Yan, Y. Shan, Q. Dang,  and G. Sun, arXiv preprint arXiv:1501.02876  (2015).
  • Wang and Perez (2017) J. Wang and L. Perez, Convolutional Neural Networks Vis. Recognit  (2017).
  • Gao et al. (2016) Z. Gao, L. Wang, L. Zhou,  and J. Zhang, IEEE journal of biomedical and health informatics 21, 416 (2016).
  • Han and Ye (2018) Y. Han and J. C. Ye, IEEE transactions on medical imaging 37, 1418 (2018).
  • Kayalibay et al. (2017) B. Kayalibay, G. Jensen,  and P. van der Smagt, arXiv preprint arXiv:1701.03056  (2017).
  • Dong et al. (2017) H. Dong, G. Yang, F. Liu, Y. Mo,  and Y. Guo, in annual conference on medical image understanding and analysis (Springer, 2017) pp. 506–517.
  • Kochen (1989) M. Kochen, The small world (Ablex Pub., 1989).
  • Huang et al. (2017) G. Huang, Z. Liu, L. Van Der Maaten,  and K. Q. Weinberger, in Proceedings of the IEEE conference on computer vision and pattern recognition (2017) pp. 4700–4708.
  • Jégou et al. (2017) S. Jégou, M. Drozdzal, D. Vazquez, A. Romero,  and Y. Bengio, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017) pp. 11–19.
  • Li et al. (2018) S. Li, M. Deng, J. Lee, A. Sinha,  and G. Barbastathis, Optica 5, 803 (2018).
  • Kingma and Ba (2014) D. P. Kingma and J. Ba, arXiv preprint arXiv:1412.6980  (2014).
  • Roth et al. (2014) H. R. Roth, L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey,  and R. M. Summers, in International conference on medical image computing and computer-assisted intervention (Springer, 2014) pp. 520–527.
  • Depeursinge et al. (2012) A. Depeursinge, A. Vargas, A. Platon, A. Geissbuhler, P.-A. Poletti,  and H. Müller, Computerized medical imaging and graphics 36, 227 (2012).
  • Shin et al. (2016) H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura,  and R. M. Summers, IEEE transactions on medical imaging 35, 1285 (2016).
  • Zhang et al. (2016) R. Zhang, Y. Zheng, T. W. C. Mak, R. Yu, S. H. Wong, J. Y. Lau,  and C. C. Poon, IEEE journal of biomedical and health informatics 21, 41 (2016).
  • Sampaio et al. (2011) W. B. Sampaio, E. M. Diniz, A. C. Silva, A. C. De Paiva,  and M. Gattass, Computers in biology and medicine 41, 653 (2011).
  • Loughrey and Cunningham (2004) J. Loughrey and P. Cunningham, in International Conference on Innovative Techniques and Applications of Artificial Intelligence (Springer, 2004) pp. 33–43.
  • Shore and Johnson (1980) J. Shore and R. Johnson, IEEE Transactions on information theory 26, 26 (1980).
  • Yi-de et al. (2004) M. Yi-de, L. Qing,  and Q. Zhi-Bai, in Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, 2004. (IEEE, 2004) pp. 743–746.
  • Sporns and Zwi (2004) O. Sporns and J. D. Zwi, Neuroinformatics 2, 145 (2004).
  • Tsuchida et al. (2016) N. Tsuchida, A. K. Murugan,  and M. Grieco, Oncotarget 7, 46717 (2016).
  • Amado et al. (2008) R. G. Amado, M. Wolf, M. Peeters, E. Van Cutsem, S. Siena, D. J. Freeman, T. Juan, R. Sikorski, S. Suggs, R. Radinsky, S. D. Patterson,  and D. D. Chang, Journal of Clinical Oncology 26, 1626 (2008), pMID: 18316791, https://doi.org/10.1200/JCO.2007.14.7116 .
  • Son et al. (2013) J. Son, C. A. Lyssiotis, H. Ying, X. Wang, S. Hua, M. Ligorio, R. M. Perera, C. R. Ferrone, E. Mullarky, N. Shyh-Chang, et al., Nature 496, 101 (2013).
  • Riely et al. (2009) G. J. Riely, J. Marks,  and W. Pao, Proceedings of the American Thoracic Society 6, 201 (2009).
  • Hogan et al. (2009) C. Hogan, S. Dupré-Crochet, M. Norman, M. Kajita, C. Zimmermann, A. E. Pelling, E. Piddini, L. A. Baena-López, J.-P. Vincent, Y. Itoh, et al., Nature cell biology 11, 460 (2009).