Recently, deep learning methods, which automatically extract hierarchical features capturing complex nonlinear relationships in the data, have successfully replaced most task-specific hand-crafted features. This has resulted in significant performance improvements on a variety of biomedical image analysis tasks, such as object detection, recognition and segmentation (see e.g. for a recent survey of the field and representative methods used in different applications), and Convolutional Neural Network (CNN) based methods currently define the state-of-the-art in this area.
In this paper we concentrate on biomedical image segmentation, where each pixel needs to be classified into its corresponding class. Initially, patch-wise training/classification was used: local patches of pre-determined size are extracted from the images and a CNN is used as a pixel-wise classifier. During training, each patch is given as input to the network and is assigned as label the class of the pixel at the center of the patch (available from ground-truth data provided by a human expert). During the test phase, a patch is fed into the trained network and its output layer provides the probabilities for each class. More recently, Fully Convolutional Networks (FCN), which replace the fully connected layers with convolutional ones, have superseded the patch-wise approach by providing a more efficient way to train CNNs end-to-end, pixels-to-pixels, and methods stemming from this approach presently define the state-of-the-art in biomedical image segmentation, exemplified by conv-deconv-based methods like U-Net.
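As a concrete illustration, the patch-wise sampling scheme described above can be sketched as follows (a minimal NumPy sketch; the function name, stride handling and windowing details are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

def extract_center_labeled_patches(image, label_map, patch_size, stride):
    """Slide a window over `image`; each patch is labeled with the class
    of its center pixel only (illustrative sketch of patch-wise training)."""
    half = patch_size // 2
    patches, labels = [], []
    h, w = label_map.shape
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            patches.append(image[y - half:y + half, x - half:x + half])
            labels.append(label_map[y, x])  # single class: center pixel only
    return np.stack(patches), np.array(labels)
```

Note that all other pixels of the ground-truth patch are discarded, which is exactly the information loss discussed below.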
Although fully convolutional methods have shown state-of-the-art performance on many segmentation tasks, they typically need to be trained on large datasets to achieve good accuracy. In many biomedical image segmentation tasks, however, only a few training images are available, either because data is simply not available or because pixel-level ground truth from experts is too costly to obtain. Here we are motivated by such a problem (section 3), where fewer than 50 images are available for training. Patch-wise methods, on the other hand, need only local patches, a huge number of which can be extracted even from a small number of training images. They do, however, suffer from the following problem. While fully convolutional methods learn a map from pixel areas (multiple input image values) to pixel areas (the classes of all pixels in the area), patch-wise methods learn a map from pixel areas (input image values) to a single pixel (the class of the pixel at the center of the patch). As illustrated in Fig. 1 [i], this wastes the rich information about the topology of the class structure inside the patch for the many samples which contain more than a single class, and these are typically the most interesting/difficult samples. Instead of representing and learning the complex class topology in a suitable way, it simply substitutes it with a single class (the class of the pixel at the center of the patch).
Fully convolutional methods, for their part, treat all locations in the image in the same way, in contrast to how human visual perception works. It is known that humans employ attentional mechanisms to focus selectively on subareas of interest and construct a global scene representation by combining the information from different local subareas.
Based on the above observations, we propose a new method which takes a middle ground between fully convolutional and patch-wise learning and combines the benefits of both strategies. As shown in Fig. 1, like the patch-wise approach we consider subareas of the whole image at a time, which provides a sufficient number of training samples even if only a few ground-truth labeled images are available. However, as illustrated in Fig. 1 [ii], the class information is organized as in the retina: the spatial resolution is highest in the central area (corresponding to the fovea) and diminishes towards the periphery of the subarea. We propose a sequential attention mechanism which shifts the focus of attention in such a way that areas of the image which are difficult to classify (i.e. where the classification uncertainty is higher) are considered in much more detail than areas which are easy to classify. Since the focus of attention moves the subarea under investigation much more slowly over difficult areas (i.e. with a much smaller step), this results in many overlapping subareas in these regions. The final segmentation is achieved by averaging the class predictions over the overlapping subareas. In this way, the power of ensemble learning is utilized by incorporating information from all overlapping subareas in the neighborhood (each of which provides a slightly different view of the scene) to further improve accuracy.
This is the basic idea of the method proposed in this paper; details of how to implement it in a CNN are given in the next section. Experimental results are reported in section 3, indicating that a significant improvement in segmentation accuracy can be achieved by the proposed method compared to both patch-based and fully convolutional methods.
We represent a subarea extracted from an input image (centered at the current focus of attention) as a tensor of size $s \times s \times c$, where $s$ is the size of the subarea in pixels and $c$ stands for the number of color channels if color images are used (as typically done, $c = 3$ for RGB images). As shown in Fig. 1 [ii], we can represent the class information corresponding to this subarea as grids of different resolution, where each cell $i$ in a grid contains a sample histogram $\mathbf{h}_i$ calculated so that the $j$-th element of $\mathbf{h}_i$ gives the number of pixels from class $j$ observed in the $i$-th cell of the grid. If each histogram is normalized to sum to 1 by dividing each bin by the total number of pixels covered by the cell, the corresponding vector $\mathbf{p}_i$ can now be used to represent the probability mass function (pmf) for cell $i$: the $j$-th element of $\mathbf{p}_i$, i.e. $p_{ij}$, can be interpreted as the probability of observing class $j$ in the area covered by the $i$-th cell of the grid.
Next, we show how retina-like grids of different resolution levels can be created. Let's start with a grid of size $4 \times 4$, as shown in the top row of Fig. 1 [ii]. This we will call Resolution Level 1 and denote it as $R_1$. At resolution level 1 all the cells in the grid have the same resolution, i.e. the pmfs corresponding to each cell are calculated from areas of the same size. For example, if the size of the local subarea under consideration is $s \times s$ pixels, each cell in the grid at resolution level 1 would correspond to an image area of size $s/4 \times s/4$ pixels, from which a probability mass function would be calculated as explained above. Next, we can create a grid at Resolution Level 2 ($R_2$) by dividing in half the four cells in the center of the grid, so that they form an inner $4 \times 4$ grid whose resolution is double. We can continue this process of dividing the innermost 4 cells in two to obtain still higher resolution levels. It is easy to see that the number of cells in a grid obtained at resolution level $L$ is $12L + 4$. Of course, it is not necessary for the initial grid at $R_1$ to be of size $4 \times 4$, but choosing this size makes the process of creating different resolution levels especially simple, since in this case the innermost cells are always 4 ($2 \times 2$).
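The recursive grid construction and the per-cell pmfs can be sketched as follows (hypothetical helper names; we assume the subarea size is divisible by 4 at every level, as in the construction above):

```python
import numpy as np

def grid_cells(x0, y0, s, level):
    """Return (x, y, size) for every cell of a retina-like grid of the given
    resolution level over an s-by-s subarea anchored at (x0, y0)."""
    c = s // 4  # cell size of the outermost ring at this level
    cells = []
    for i in range(4):
        for j in range(4):
            inner = i in (1, 2) and j in (1, 2)  # central 2x2 block
            if level == 1 or not inner:
                cells.append((x0 + j * c, y0 + i * c, c))
    if level > 1:  # refine the central block recursively at double resolution
        cells += grid_cells(x0 + c, y0 + c, s // 2, level - 1)
    return cells

def cell_pmf(label_map, cell, n_classes):
    """Normalized class histogram (pmf) of the pixels covered by one cell."""
    x, y, c = cell
    counts = np.bincount(label_map[y:y + c, x:x + c].ravel(),
                         minlength=n_classes)
    return counts / counts.sum()
```

A quick check of the cell-count formula: level 1 has 16 cells, and each further level replaces the central 4 cells by 16, adding 12, giving $12L + 4$ in total.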
In our method, we train a CNN to learn the map between a local subarea image given as input to the network and the corresponding pmfs used as target values. We use the cross-entropy between the pmfs of the targets and the corresponding output unit activations:

$$E(\mathbf{w}) = -\sum_{n}\sum_{i}\sum_{j} p_{ij}^{(n)} \ln y_{ij}(\mathbf{x}^{(n)}, \mathbf{w}), \qquad (1)$$

where $n$ indexes the training subarea image patches ($\mathbf{x}^{(n)}$ being the $n$-th training subarea image patch), $i$ indexes the cells in the corresponding resolution grid, and $j$ the classes. Here, $\mathbf{w}$ represents the weights of the network, found by minimizing the loss function. To form probabilities, the network output units corresponding to each cell are passed through the softmax activation function.
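The per-cell softmax and cross-entropy loss can be sketched in NumPy as follows (illustrative names only; a real implementation would use the framework's built-in loss):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the class axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def grid_cross_entropy(logits, target_pmfs):
    """Cross-entropy between target pmfs and per-cell softmax outputs.
    Shapes: (N patches, cells, classes); summed over all three indices."""
    y = softmax(logits, axis=-1)
    return float(-np.sum(target_pmfs * np.log(y + 1e-12)))
```

With uniform outputs and one-hot targets, each cell contributes $\ln K$ for $K$ classes, which gives a simple sanity check.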
Finally, we describe the sequential attention mechanism we utilize, whose purpose is to move the focus of attention across the image in such a way that those parts which are difficult to classify (i.e. where classification uncertainty is high) are observed at the highest possible resolution, with the retina-like grid of pmfs moving in smaller steps across such areas. To evaluate the classification uncertainty of the grid over the present subarea, we use the following function,

$$U = -\frac{1}{C}\sum_{i=1}^{C}\sum_{j} p_{ij} \ln p_{ij}, \qquad (2)$$

which represents the average entropy obtained from the posterior pmfs of the $C$ cells (indexed by $i$) inside the grid, where $j$ indexes the classes. Using $U$ as a measure of classification uncertainty, the position of the next location where to move the focus of attention (horizontal shift in pixels) is given by

$$\Delta x = s \, \exp\!\left(-\frac{U^2}{\sigma^2}\right), \qquad (3)$$

where $s$ is the subarea size in pixels and $\sigma$ is the width of the Gaussian: when the uncertainty is zero the focus of attention jumps a full subarea width, while over highly uncertain subareas it advances only a few pixels.
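A sketch of the uncertainty measure and the resulting attention step, assuming the step size is modulated by a Gaussian of the uncertainty as described (the minimum step, clamping and rounding here are our own illustrative choices):

```python
import numpy as np

def grid_uncertainty(pmfs):
    """Average entropy over the grid's cell pmfs; pmfs shape (cells, classes)."""
    p = np.clip(pmfs, 1e-12, 1.0)  # avoid log(0) for one-hot cells
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def attention_step(u, s, sigma, min_step=1):
    """Gaussian-modulated horizontal shift: full patch width s when u == 0,
    shrinking toward min_step as uncertainty grows."""
    return max(min_step, int(round(s * np.exp(-(u / sigma) ** 2))))
```

For one-hot cell pmfs the uncertainty is (numerically) zero and the step equals the full subarea width, so adjacent subareas do not overlap; for near-uniform pmfs the step collapses to a single pixel.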
The whole process is illustrated in Fig. 2. We start at the upper left corner of the input image with the first square subarea (the yellow patch (a) in the figure). The classification uncertainty for that subarea is calculated using Eq. 2, and the step size in pixels to move in the horizontal direction is calculated by Eq. 3. As illustrated in the graph in the center of Fig. 2, since in this case the classification uncertainty is 0 (all pixels in the subarea belong to the same class), the focus of attention moves a full subarea width to the right, i.e. in this extreme case there is no overlap between the current and next subareas. For the subarea (b) shown in green, the situation is very different. In this case the classification uncertainty is very high, and the focus of attention moves only slightly to the right, allowing the image around this area to be assessed at the highest resolution. This results in a very high level of overlap between neighboring subareas, as shown in the heat map on the right (where intensity is proportional to the level of overlap). This process is repeated until the right edge of the image is reached. Then the focus of attention is moved 10 pixels in the vertical direction to scan the next row, and everything is repeated until the whole image is processed.
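The row-wise scanning loop can be sketched as follows (hypothetical helper; `step_of` would evaluate Eqs. 2–3 on the subarea at each position):

```python
def scan_positions(width, s, step_of):
    """Focus-of-attention x-positions for one image row: after each subarea
    of width s, advance by step_of(x) pixels (at least 1)."""
    xs, x = [], 0
    while x + s <= width:
        xs.append(x)
        x += max(1, step_of(x))
    return xs
```

With a constant full-width step (zero uncertainty everywhere) the subareas tile the row without overlap; smaller steps produce the dense, overlapping coverage shown in the heat map.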
While the above attention mechanism moves the focus of attention across the image, the posterior class pmfs from the grids corresponding to each subarea are stored in a probability map of the same size as the image, i.e. each pixel in the image is allocated a pmf equal to the pmf of the grid cell positioned above that pixel. In areas of the image where several subareas overlap, the probability map is computed by averaging, for each pixel, the pmfs of all cells which overlap that pixel. Finally, the class of each pixel is obtained by taking the class with the highest probability from the final probability map, as shown in the upper right corner of Fig. 2 for the final segmented image.
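The averaging of overlapping cell pmfs into a per-pixel probability map can be sketched as follows (illustrative helpers; `cells` holds `(x, y, size)` rectangles in image coordinates):

```python
import numpy as np

def accumulate_pmfs(prob_map, weight, cells, pmfs):
    """Add each cell's pmf to every pixel the cell covers, tracking counts.
    prob_map: (H, W, K) float array; weight: (H, W) coverage counts."""
    for (x, y, c), pmf in zip(cells, pmfs):
        prob_map[y:y + c, x:x + c] += pmf
        weight[y:y + c, x:x + c] += 1.0

def final_segmentation(prob_map, weight):
    """Average the accumulated pmfs per pixel, then take the argmax class."""
    avg = prob_map / np.maximum(weight, 1.0)[..., None]
    return avg.argmax(axis=-1)
```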
Table 1: Segmentation results (mean ± standard deviation over 5-fold cross-validation).

| Subarea size | Method | Jaccard | Dice | TPR | TNR | Accuracy |
|---|---|---|---|---|---|---|
| 96 | patch-center | 0.811 ± 0.039 | 0.879 ± 0.034 | 0.877 ± 0.039 | 0.922 ± 0.014 | 0.932 ± 0.016 |
| | ResLv-1 | 0.798 ± 0.046 | 0.868 ± 0.040 | 0.866 ± 0.042 | 0.914 ± 0.016 | 0.928 ± 0.015 |
| | ResLv-2 | 0.811 ± 0.037 | 0.879 ± 0.031 | 0.878 ± 0.035 | 0.919 ± 0.020 | 0.931 ± 0.015 |
| | ResLv-3 | 0.819 ± 0.034 | 0.884 ± 0.029 | 0.883 ± 0.035 | 0.923 ± 0.012 | 0.936 ± 0.010 |
| | ResLv-4 | 0.820 ± 0.033 | 0.884 ± 0.029 | 0.881 ± 0.036 | 0.926 ± 0.013 | 0.936 ± 0.012 |
| | UNet-patch | 0.788 ± 0.051 | 0.861 ± 0.047 | 0.859 ± 0.045 | 0.916 ± 0.013 | 0.922 ± 0.011 |
| 128 | patch-center | 0.811 ± 0.029 | 0.878 ± 0.026 | 0.873 ± 0.028 | 0.917 ± 0.012 | 0.931 ± 0.012 |
| | ResLv-1 | 0.812 ± 0.038 | 0.878 ± 0.032 | 0.874 ± 0.033 | 0.918 ± 0.020 | 0.935 ± 0.012 |
| | ResLv-2 | 0.815 ± 0.039 | 0.881 ± 0.032 | 0.878 ± 0.034 | 0.923 ± 0.015 | 0.934 ± 0.014 |
| | ResLv-3 | 0.816 ± 0.030 | 0.881 ± 0.029 | 0.879 ± 0.033 | 0.920 ± 0.011 | 0.935 ± 0.008 |
| | ResLv-4 | 0.818 ± 0.034 | 0.821 ± 0.031 | 0.881 ± 0.031 | 0.921 ± 0.017 | 0.936 ± 0.011 |
| | ResLv-5 | 0.826 ± 0.032 | 0.891 ± 0.030 | 0.890 ± 0.032 | 0.924 ± 0.018 | 0.936 ± 0.009 |
| | UNet-patch | 0.810 ± 0.035 | 0.879 ± 0.030 | 0.873 ± 0.034 | 0.921 ± 0.014 | 0.933 ± 0.012 |
| 192 | patch-center | 0.778 ± 0.029 | 0.856 ± 0.025 | 0.849 ± 0.020 | 0.899 ± 0.017 | 0.917 ± 0.017 |
| | ResLv-1 | 0.810 ± 0.037 | 0.876 ± 0.030 | 0.872 ± 0.031 | 0.914 ± 0.021 | 0.933 ± 0.016 |
| | ResLv-2 | 0.817 ± 0.041 | 0.880 ± 0.038 | 0.879 ± 0.041 | 0.922 ± 0.017 | 0.936 ± 0.011 |
| | ResLv-3 | 0.821 ± 0.032 | 0.885 ± 0.030 | 0.883 ± 0.032 | 0.926 ± 0.010 | 0.935 ± 0.011 |
| | ResLv-4 | 0.832 ± 0.036 | 0.894 ± 0.030 | 0.890 ± 0.034 | 0.926 ± 0.015 | 0.940 ± 0.012 |
| | ResLv-5 | 0.825 ± 0.032 | 0.887 ± 0.030 | 0.883 ± 0.032 | 0.925 ± 0.015 | 0.938 ± 0.012 |
| | UNet-patch | 0.809 ± 0.036 | 0.878 ± 0.029 | 0.870 ± 0.034 | 0.920 ± 0.015 | 0.933 ± 0.015 |
| 256 | patch-center | 0.732 ± 0.038 | 0.822 ± 0.037 | 0.813 ± 0.039 | 0.862 ± 0.023 | 0.897 ± 0.019 |
| | ResLv-1 | 0.804 ± 0.039 | 0.871 ± 0.035 | 0.866 ± 0.036 | 0.910 ± 0.018 | 0.931 ± 0.012 |
| | ResLv-2 | 0.810 ± 0.038 | 0.877 ± 0.037 | 0.870 ± 0.040 | 0.918 ± 0.018 | 0.932 ± 0.012 |
| | ResLv-3 | 0.819 ± 0.029 | 0.882 ± 0.028 | 0.879 ± 0.034 | 0.922 ± 0.016 | 0.937 ± 0.010 |
| | ResLv-4 | 0.814 ± 0.033 | 0.877 ± 0.030 | 0.871 ± 0.031 | 0.917 ± 0.020 | 0.936 ± 0.011 |
| | ResLv-5 | 0.815 ± 0.038 | 0.880 ± 0.034 | 0.876 ± 0.035 | 0.919 ± 0.016 | 0.934 ± 0.012 |
| | UNet-patch | 0.811 ± 0.034 | 0.879 ± 0.028 | 0.874 ± 0.030 | 0.921 ± 0.014 | 0.933 ± 0.012 |
| – | UNet-image | 0.806 ± 0.033 | 0.877 ± 0.029 | 0.874 ± 0.031 | 0.919 ± 0.013 | 0.928 ± 0.008 |
In this section we evaluate the proposed method in comparison with a standard patch-wise classification-based CNN and the fully convolutional U-Net on the dataset described below. Additionally, we implemented a U-Net version, called UNet-patch, which applies U-Net to local patches rather than to the whole image. The original U-Net method, which takes the whole image as input, we will call UNet-image.
Dataset: Our dataset consists of 59 images showing colonies of undifferentiated and differentiated iPS cells obtained through phase-contrast microscopy. Induced pluripotent stem (iPS) cells, for whose discovery S. Yamanaka received the Nobel Prize in Physiology or Medicine in 2012, hold great promise for regenerative medicine. Still, in order to fulfill this promise a steady supply of iPS cells obtained through harvesting of individual cell colonies is needed, and automating the detection of abnormalities arising during the cultivation process is crucial. Thus, our task is to segment the input images into three categories: Good (undifferentiated), Bad (differentiated) and Background (BGD, the culture medium). Several representative images together with ground truth provided by experts can be seen in Fig. 3. All images in this dataset are of the same size. Several images contained a few locations where even the experts were not sure of the corresponding class. Such ambiguous regions are shown in pink; these areas were not used during training and were not evaluated during testing.
Network Architecture and Hyperparameters:
We used a network architecture based on the VGG-16 CNN, apart from the following differences. There are 13 convolutional layers in VGG-16, while we used 10 here. Also, in VGG-16 there are 3 fully-connected layers, of which the first two consist of 4096 units each; in our implementation these had 1024 units. The learning rate was set to 0.0001 and for the optimization procedure we used Adam. The batch size was 16, training for 20 epochs (UNet-patch for 15 epochs and UNet-image for 200 epochs). For the implementation of the CNNs we used TensorFlow and Keras. Four different sizes for the local subareas were tried (96, 128, 192 and 256 pixels), and the resolution level was varied from 1 to 5. The width of the Gaussian in Eq. 3 was set empirically for all experimental results.

Evaluation procedure and criteria:
The quality of the obtained segmentation results for each method was evaluated by 5-fold cross-validation using the following criteria: Jaccard index (the most challenging one), Dice coefficient, True Positive Rate (TPR), True Negative Rate (TNR) and Accuracy. For each score the average and standard deviation are reported.
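For reference, the reported scores for a single class treated as positive can be computed as follows (a standard sketch of the definitions, not the authors' evaluation code):

```python
import numpy as np

def binary_scores(pred, truth):
    """Jaccard, Dice, TPR, TNR and accuracy for boolean prediction/truth masks."""
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return jaccard, dice, tpr, tnr, acc
```

Note that Dice = 2J/(1+J) for Jaccard index J, which is why the Jaccard index is always the stricter (lower) of the two overlap scores.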
Results: The results obtained on the iPS cell colonies dataset for each of the methods are given in Table 1, where patch-center stands for the patch-wise classification method, and results for the proposed method are given for resolution levels 1 to 5 (ResLv-1 to ResLv-5). As can be seen from the results, the proposed method outperforms both the patch-wise classification and the U-Net-based methods. Fig. 3 gives some examples of segmentation on images from the iPS dataset, showing that very good segmentation accuracy can be achieved by the proposed method. The heat maps given in the last column demonstrate that the proposed attention mechanism is able to focus the high-resolution parts of the retina-like grid on the boundaries between the classes, which appear to be the most difficult to classify, resulting in increased segmentation accuracy.
In this paper we have shown that the combined power of (1) a sequential attention mechanism controlling the shift of the focus of attention, (2) a local retina-like representation of the spatial distribution of class information and (3) ensemble learning can lead to increased accuracy in biomedical image segmentation tasks. We expect the proposed method to be especially useful for datasets for which only a few training images are available.
- D. Ciresan, A. Giusti, L. Gambardella and J. Schmidhuber (2012) Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, pp. 2843–2851.
- I. Goodfellow, Y. Bengio and A. Courville (2016) Deep learning. MIT Press.
- E. Kandel et al. (2013) Principles of neural science, 5th ed. McGraw-Hill.
- P. Kontschieder, S. Rota Bulò, H. Bischof and M. Pelillo — Structured class-labels in random forests for semantic image labeling. In Proc. ICCV, pp. 2190–2197.
- L. Kuncheva (2014) Combining pattern classifiers, 2nd ed. Wiley.
- R. Rensink (2000) The dynamic representation of scenes. Visual Cognition 7 (1–3), pp. 17–42.
- O. Ronneberger, P. Fischer and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In MICCAI.
- E. Shelhamer, J. Long and T. Darrell (2017) Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39 (4), pp. 640–651.
- K. Takahashi et al. (2007) Induction of pluripotent stem cells from adult human fibroblasts by defined factors. Cell 131 (5), pp. 861–871.
- S. K. Zhou, H. Greenspan and D. Shen (Eds.) (2017) Deep learning for medical image analysis. Academic Press.