Common Semantic Segmentation Architectures
We propose UOLO, a novel framework for the simultaneous detection and segmentation of structures of interest in medical images. UOLO consists of an object segmentation module whose intermediate abstract representations are processed and used as input for object detection. The resulting system is optimized simultaneously for detecting a class of objects and segmenting an optionally different class of structures. UOLO is trained on a set of bounding boxes enclosing the objects to detect, as well as on pixel-wise segmentation information, when available. A new loss function is devised that takes into account whether a reference segmentation is available for each training image, so that the error is suitably backpropagated. We validate UOLO on the task of simultaneous optic disc (OD) detection, fovea detection, and OD segmentation from retinal images, achieving state-of-the-art performance on public datasets.
Detection and segmentation of anatomical structures are central medical image analysis tasks, since they allow delimiting Regions-Of-Interest (ROIs), creating landmarks and improving feature collection. In terms of segmentation, Deep Fully-Convolutional (FC) Neural Networks (NNs) achieve the highest performance on a variety of images and problems. Namely, U-Net has become a reference model: its autoencoder structure with skip connections enables the propagation of information from the encoding to the decoding part of the network, allowing a more robust multi-scale analysis while reducing the need for training data.
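The skip-connection mechanism described above can be illustrated with a minimal numpy sketch (illustrative only, not the paper's implementation): the coarser decoder map is upsampled and concatenated channel-wise with the same-resolution encoder map, so fine-grained encoder information reaches the decoder.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_merge(decoder_feat, encoder_feat):
    """U-Net-style skip connection: upsample the decoder map and
    concatenate the same-resolution encoder map along the channel axis."""
    up = upsample2x(decoder_feat)
    assert up.shape[:2] == encoder_feat.shape[:2]
    return np.concatenate([up, encoder_feat], axis=-1)

# Toy maps: encoder output at 64x64 with 32 channels,
# coarser decoder map at 32x32 with 64 channels.
enc = np.zeros((64, 64, 32))
dec = np.zeros((32, 32, 64))
merged = skip_merge(dec, enc)
print(merged.shape)  # (64, 64, 96)
```

The merged tensor carries both the decoder's semantic context and the encoder's spatial detail, which is what enables the multi-scale analysis mentioned above.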
Similarly, Deep Neural Networks (DNNs) have become the technique of choice in many medical imaging detection problems. The standard approach is to use networks pre-trained on large datasets of natural images as feature extractors for a detection module. For instance, Faster R-CNN uses these features to identify ROIs via a specialized layer. ROIs are then pooled, rescaled and supplied to a pair of Fully-Connected NNs responsible for adjusting the size of the bounding boxes and labeling them. Alternatively, YOLOv2 avoids the use of an auxiliary ROI proposal model by directly using region-wise activations from pre-trained weights to predict coordinates and labels of ROIs.
Once a ROI has been identified, the segmentation of an object contained in it becomes much easier. For this reason, the combination of detection and segmentation models into a single method is being explored. For instance, Mask R-CNN extends Faster R-CNN with the addition of FC layers after its final pooling, enabling a fine segmentation without a significant computational overhead. In this architecture, the segmentation and detection modules are decoupled, i.e. the segmentation part is only responsible for predicting a mask, which is then labeled class-wise by the detection module. However, despite the high performance achieved by Mask R-CNN in computer vision, its application to medical image analysis problems remains limited. This is due to the large amount of data annotated at the pixel level that it requires, which is usually not available in medical applications.
In this paper we propose UOLO (Fig. 1), a novel architecture that performs simultaneous detection and segmentation of structures of interest in biomedical images. UOLO harnesses the best of its individual detection and segmentation modules to allow robust and efficient predictions even when little training data is available. Moreover, training UOLO is simple, since the entire network can be updated during back-propagation. We experimentally validate UOLO on eye fundus images for the joint task of fovea (FV) detection, optic disc (OD) detection, and OD segmentation, where we achieve state-of-the-art performance.
For object segmentation we consider an adapted version of the U-Net network. U-Net is composed of FC layers organized in an autoencoder scheme, which yields an output of the same size as the input, thus enabling pixel-wise predictions. Skip connections between the encoding and decoding parts are used to avoid the information loss inherent to encoding. The model's upsampling path includes a large number of feature channels, with the aim of propagating the multi-scale context information to higher resolution layers. Ultimately, the segmentation prediction results from the analysis of abstract representations of the image at multiple scales, with the majority of the relevant classification information being available in the decoder portion of the network due to the skip connections. We modify the network by adding batch normalization after each convolutional layer and replacing the pooling layers with strided convolutions. The soft intersection over union (IoU) is used as loss:

$\mathcal{L}_{\mathrm{seg}} = 1 - \frac{\sum (Y \circ \hat{Y})}{\sum (Y + \hat{Y} - Y \circ \hat{Y})}$, (1)

where $Y$ and $\hat{Y}$ are the ground truth mask and the soft prediction mask, respectively, and $\circ$ is the Hadamard product.
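Eq. 1 can be sketched directly in numpy; a minimal version, assuming the masks are arrays of values in [0, 1] and adding a small `eps` to guard against an empty union:

```python
import numpy as np

def soft_iou_loss(y_true, y_pred, eps=1e-7):
    """Soft IoU loss: 1 - sum(Y o Y_hat) / sum(Y + Y_hat - Y o Y_hat),
    where o is the element-wise (Hadamard) product."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true + y_pred - y_true * y_pred)
    return 1.0 - inter / (union + eps)

y = np.array([[1., 1.], [0., 0.]])
p = np.array([[1., 1.], [0., 0.]])
print(soft_iou_loss(y, p))  # near zero: perfect overlap
```

Because the prediction is soft (continuous), the loss is differentiable and can be backpropagated directly, unlike the thresholded IoU used for evaluation.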
For object detection we take inspiration from YOLOv2, a network composed of: 1) a DNN that extracts features from an image; 2) a feature interpretation block that predicts both labels and bounding boxes for the objects of interest. YOLOv2 assumes that every image patch can contain an object of size similar to one of various template bounding boxes (or anchors) computed a priori from the objects' shape distribution in the training data.

Let the output of the feature extractor be a tensor $T$ of shape $s \times s \times m$, where $s$ is the dimension of the spatial grid and $m$ is the number of maps. The feature interpretation block convolves and reshapes $T$ into $T'$, a tensor of shape $s \times s \times a \times (c+5)$, where $a$ is the number of anchors, $c$ is the number of object classes, and 5 is the number of variables to be optimized: center coordinates $x$ and $y$, width $w$, height $h$, and the confidence (how likely the bounding box is to be an object) of the bounding boxes. For each anchor in $T'$, the value of each feature map element $t$ is responsible for adjusting a property of the predicted bounding box,

$b_x = \sigma(t_x) + c_x$, $b_y = \sigma(t_y) + c_y$, $b_w = p_w e^{t_w}$, $b_h = p_h e^{t_h}$, (2)

where $(c_x, c_y)$ is the offset of the grid cell, $(p_w, p_h)$ are the anchor dimensions, and $\sigma$ is a sigmoid function. YOLOv2 is trained by optimizing the loss function:

$\mathcal{L}_{\mathrm{det}} = \lambda_1 \mathcal{L}_{xy} + \lambda_2 \mathcal{L}_{wh} + \lambda_3 \mathcal{L}_{obj} + \lambda_4 \mathcal{L}_{class}$, (3)

where $\lambda_{1..4}$ are predefined weighting factors, $\mathcal{L}_{xy}$, $\mathcal{L}_{wh}$ and $\mathcal{L}_{obj}$ are mean squared errors, and $\mathcal{L}_{class}$ is the cross-entropy loss. Each loss term penalizes a different error: 1) $\mathcal{L}_{xy}$ penalizes the error in the center position of the cells; 2) $\mathcal{L}_{wh}$ penalizes the incorrect size, i.e. height and width, of the bounding box; 3) $\mathcal{L}_{obj}$ penalizes the incorrect prediction of a box presence; 4) $\mathcal{L}_{class}$ penalizes the misclassification of the objects.
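The YOLOv2-style box decoding (Eq. 2) can be sketched as follows; `decode_box` is an illustrative helper, not code from the paper:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h):
    """Decode raw predictions into a bounding box: the centre is offset
    inside grid cell (c_x, c_y) via a sigmoid, and the anchor dimensions
    (p_w, p_h) are scaled exponentially."""
    b_x = sigmoid(t_x) + c_x
    b_y = sigmoid(t_y) + c_y
    b_w = p_w * np.exp(t_w)
    b_h = p_h * np.exp(t_h)
    return (float(b_x), float(b_y), float(b_w), float(b_h))

# Zero logits: box centred in cell (3, 5), anchor kept at its original size.
print(decode_box(0, 0, 0, 0, c_x=3, c_y=5, p_w=2.0, p_h=4.0))
# (3.5, 5.5, 2.0, 4.0)
```

The sigmoid keeps the predicted centre inside its grid cell, which stabilizes training compared to predicting unconstrained offsets.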
The UOLO framework for object detection and segmentation is depicted in Fig. 2. The segmentation module itself is used as a feature extraction module, adopting the role of YOLOv2's feature extractor and serving as input for the localization module. The intuition behind this design is that the abstract representation learned by the decoding part of U-Net contains multi-scale information that can be useful not only to segment objects, but also to detect them. In addition, the class of objects that UOLO can detect is not limited to those for which segmentation ground truth is available.

Let $U$ be a U-Net-like network that, given pairs of images and binary masks, can be trained for segmentation by minimizing $\mathcal{L}_{\mathrm{seg}}$ (Eq. 1). $U$ has a second output corresponding to the concatenation of the downsampled decoding maps with its bottleneck (last encoder layer). The resulting tensor corresponds to a set of multi-scale representations of the original image that are supplied to the object detection block $D$, which, in turn, can be optimized via $\mathcal{L}_{\mathrm{det}}$, defined in Eq. 3. $U$ and $D$ are then merged by concatenation into UOLO, a single model that can be optimized by minimizing the sum of the corresponding loss functions:

$\mathcal{L}_{\mathrm{UOLO}} = \mathcal{L}_{\mathrm{seg}} + \mathcal{L}_{\mathrm{det}}$. (4)
Thanks to the straightforward definition of the loss function in Eq. (4), UOLO can be trained with a simple iterative scheme, detailed in Algorithm 1. In essence, the segmentation part of the loss is used only when segmentation information is available. However, a global weight update is performed at every step based on the backpropagation of the prediction error. Furthermore, the outlined training scheme allows for different numbers of strong (pixel-wise) and weak (bounding box) annotations, easing its application to medical images.
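This mask-dependent update can be sketched as a minimal runnable step; `det_loss_grad` and `seg_loss_grad` are hypothetical stand-ins for the real backpropagated gradients, and `weights` is reduced to a single scalar for illustration:

```python
def det_loss_grad(w, image, boxes):
    # Placeholder for dL_det/dw; always available (bounding boxes required).
    return 1.0

def seg_loss_grad(w, image, mask):
    # Placeholder for dL_seg/dw; only defined when a pixel-wise mask exists.
    return 0.5

def train_step(w, batch, lr=0.1):
    """One UOLO-style update: the detection loss always contributes,
    the segmentation loss only when the batch carries a mask."""
    image, boxes, mask = batch  # mask is None for weakly annotated images
    grad = det_loss_grad(w, image, boxes)
    if mask is not None:
        grad += seg_loss_grad(w, image, mask)
    return w - lr * grad

w = 0.0
w = train_step(w, ("img", "boxes", "mask"))  # strong annotation: both losses
w = train_step(w, ("img", "boxes", None))    # weak annotation: detection only
print(w)
```

The key property is that a single set of weights receives gradients from both tasks, so detection-only images still refine the shared feature extractor.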
We test UOLO on 3 public eye fundus datasets with healthy and pathological images: 1) Messidor has 1200 images (1440×960, 2240×1488 and 2304×1536 pixels, 45° field-of-view (FOV)), 1136 of which have ground truth (GT) for OD segmentation and FV centers (http://www.uhu.es/retinopathy); 2) IDRID (https://idrid.grand-challenge.org/, available since January 20, 2018) has a training set of 413 images (4288×2848 pixels, 50° FOV) with OD and FV centers, 54 of which have OD segmentation; 3) DRIVE has 40 images (768×584 pixels, 45° FOV) with OD segmentation (https://sites.google.com/a/uw.edu/src/useful-links).
All images are cropped around the FOV (determined via Otsu's thresholding) and resized to 256×256 pixels. The side of the square GT bounding boxes is set for the FV and the OD following their relative size in the image. For training, the parameters of Alg. 1 are set to 256 and 32, respectively. Online data augmentation, a mini-batch size of 8 and the Adam optimizer (learning rate of 1e-4) are used for training, while 25% of the data is kept for validation. The bounding box with the highest confidence for each class is kept. The predicted soft segmentations are binarized using a threshold of 0.5.
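The FOV cropping step can be sketched in pure numpy; this is an illustrative re-implementation of Otsu's method and a bounding-box crop, not the paper's preprocessing code:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit grayscale image: pick the threshold
    that maximises the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def crop_to_fov(img):
    """Crop a fundus image to the bounding box of its field of view."""
    t = otsu_threshold(img)
    ys, xs = np.nonzero(img > t)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Dark background with a bright rectangular "FOV" region.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:80, 30:90] = 200
print(crop_to_fov(img).shape)  # (60, 60)
```

In practice a library routine (e.g. scikit-image's `threshold_otsu`) would be used, and the crop would be followed by the 256×256 resize.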
The OD segmentation is evaluated with the IoU and Sørensen-Dice overlap metrics. The detection is evaluated in terms of the mean Euclidean distance (ED) between the prediction and the GT. We also evaluate the ED relative to the OD radius [7, 8]. Finally, detection success is assessed using the maximum distance criterion of 1 OD radius.
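The evaluation metrics above are standard and can be sketched as follows (illustrative helpers, assuming binary masks and 2-D centre coordinates):

```python
import numpy as np

def iou(a, b):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a, b):
    """Sorensen-Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def detection_success(pred_xy, gt_xy, od_radius):
    """A detection succeeds if the predicted centre lies within
    1 OD radius of the ground-truth centre (Euclidean distance)."""
    ed = np.linalg.norm(np.asarray(pred_xy) - np.asarray(gt_xy))
    return ed <= od_radius

a = np.array([[1, 1], [0, 0]], dtype=bool)
b = np.array([[1, 0], [0, 0]], dtype=bool)
print(iou(a, b), dice(a, b))                          # 0.5 0.666...
print(detection_success((10, 10), (12, 10), 5))       # True
```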
Table 1. OD segmentation, OD detection and FV detection results of UOLO per dataset.
We evaluate UOLO both inter- and intra-dataset-wise. For the inter-dataset experiments, UOLO was trained on Messidor and tested on the other datasets, whereas for the intra-dataset studies a stratified 5-fold cross-validation was used. We do not extensively optimize the training parameters, in order to verify how robust UOLO is when dealing with segmentation and detection simultaneously. Table 1 shows the results of UOLO for the OD detection and segmentation and FV detection tasks, Table 2 compares our performance with state-of-the-art methods, and Fig. 3 shows two prediction examples in complex detection and segmentation cases.
UOLO achieves equal or better performance in comparison to the state-of-the-art on both detection and segmentation tasks (IoU on Messidor) in a single prediction step. Furthermore, the proposed network is robust even in inter-dataset scenarios, maintaining both segmentation and detection performance. This indicates that the abstract representations learned by UOLO are highly effective for solving the task at hand. It is worth noting that our segmentation and detection performance does not change significantly even when UOLO is trained with only 15% of the pixel-wise annotated images. This means that UOLO does not require a significant amount of pixel-wise annotations, easing its application in the medical field, where such annotations are expensive to obtain.
Our results also suggest that UOLO is capable of using multi-scale information (e.g. the relative position to the OD or the vessel tree) to perform predictions. For instance, Fig. 3 shows UOLO's output for two Messidor images, illustrating that the network is capable of detecting the FV in a low contrast scenario. On the other hand, the segmentation and detection processes are not completely interdependent, as expected from the proposed training scheme, since the network segments OD confounders outside the detected OD region. Another advantage of UOLO is that these segmentation errors are easily corrected by limiting the pixel-wise predictions to the found OD region. Unlike hand-crafted feature-based methods, UOLO does not require extensive parameter tuning and is simple to extend to different applications.
We also evaluate U-Net (Fig. 2) for OD segmentation and YOLOv2 (with a pretrained Inceptionv3 as feature extractor) for OD and FV detection (Table 2). The training conditions were set as in UOLO. UOLO's segmentation performance is practically the same as U-Net's, whereas the detection drops slightly when compared with YOLOv2, mainly for OD detection. However, one has to consider the trade-off between computational burden and performance: training U-Net and YOLOv2 separately requires optimizing a total of around 60% more parameters than training UOLO.
We presented UOLO, a novel network that performs joint detection and segmentation of objects of interest in medical images by using the abstract representations learned by U-Net. Furthermore, UOLO can detect objects of a class different from the one for which segmentation ground truth is available.
We tested UOLO for simultaneous fovea detection and optic disc detection and segmentation, achieving state-of-the-art results. The network can be trained with relatively few images with segmentation ground truth and still maintain a high performance. UOLO is also robust in inter-dataset settings, thus showing great potential for applications in the medical image analysis field.
T. Araújo is funded by the FCT grant SFRH/BD/122365/2016. G. Aresta is funded by the FCT grant SFRH/BD/120435/2016. This work is funded by the ERDF (European Regional Development Fund), Operational Programme for Competitiveness and Internationalisation - COMPETE 2020, and by National Funds through the FCT - project CMUP-ERI/TIC/0028/2014.