Object recognition refers to the task of detecting and labeling all objects in a given image, where a bounding box is typically used to localize each object. In object detection, bounding boxes are used to localize a specific object in the image and the rest of the image is assigned to the non-object class. Semantic segmentation refers to the classification of each pixel in an image to generate an image mask consisting of a number of labeled regions. Object recognition approaches are generally easier to implement and computationally less expensive than semantic segmentation methods. However, accuracy and pixel-level segmentation can be more important than computational complexity in certain applications, such as medical image processing.
Deep fully convolutional networks (FCNs) are popular models for semantic segmentation that use a convolutional decoder with a large annotated training dataset. Since only limited numbers of annotated images and data samples are available for every possible class in real-world problems, augmentation methods such as image rotation and synthesis can help increase the diversity of training datasets and therefore prevent the models from overfitting. In our previous work, we proposed a radial transform method in the polar coordinate system as a novel augmentation method for classification problems. This technique is well suited for highly imbalanced datasets, or datasets with a limited number of labeled images.
In this paper, we propose a cylindrical transform in the cylindrical coordinate system as a technique to generate representations from a limited number of annotated sequential images. The cylindrical transform enables us to train contemporary classification deep convolutional neural networks (DCNNs) instead of FCNs for semantic segmentation. We applied the proposed method to registration-free segmentation of the left kidney, right kidney, and non-kidney data classes in abdominal computed tomography (CT) images by training the AlexNet and GoogLeNet DCNNs. We selected these architectures due to their ease of training and relatively high classification performance.
2 Proposed Method
In this section, we discuss the proposed cylindrical transform sampling method and the training and inference procedures of a DCNN using cylindrical transform generated images for semantic segmentation.
2.1 Sampling Using Cylindrical Transform in 3D Space
A cylindrical coordinate system is a generalization of a polar coordinate system to 3D space, created by superposing a height along the z-axis. The objects in a volume of images have spatial dependency not only on the x-y plane, but also along the z-axis. We define a volume as a sequence of $K$ images along the z-axis, where each slice is of size $M \times N$, as presented in Figure 1. We can randomly select a pixel from a slice in the Cartesian coordinate system, such as $(x_p, y_p)$. This pixel can be mapped onto the cylindrical coordinate system as a pole. The cylindrical transform represents each pixel in the volume as a new image by up-sampling the pole and representing the spatial information between the pole and the other pixels in the volume.
In the cylindrical coordinate system, a pixel on a given slice can be represented as $(\rho, \theta)$, where $\rho$ is the radial coordinate from the pole and $\theta$ is the counter-clockwise angular coordinate. The angle is considered with respect to an axis drawn horizontally from the pole to the right, as illustrated in Figure 1. For a given volume of images, we can select slices, with a given distance of $\delta$ slices, above and under the slice containing the pole. In the cylindrical coordinate system, we can generate angular sampling directions with respect to a pole such that
$$\theta_j = \frac{2\pi j}{\Theta}, \qquad j = 0, 1, \ldots, \Theta - 1,$$
where $\Theta$ is the number of angular sampling directions. Along each direction $\theta_j$, we project the pixels at Cartesian coordinates from the original image to generate an image with respect to the pole using the cylindrical transform as
$$x = x_p + \rho \cos \theta_j, \qquad y = y_p + \rho \sin \theta_j,$$
for $\rho = 0, 1, \ldots, R - 1$ and $j = 0, 1, \ldots, \Theta - 1$, such that $1 \leq \lfloor x \rceil \leq M$, $1 \leq \lfloor y \rceil \leq N$, where $\lfloor \cdot \rceil$ is the rounding function to the nearest integer. These conditions guarantee that the pair $(\lfloor x \rceil, \lfloor y \rceil)$ stays spatially within the slice. A pixel in the constructed image $\tilde{X}$ is then defined as $\tilde{X}(\rho, j) = X(\lfloor x \rceil, \lfloor y \rceil)$ for each selected slice $X$, and the samples from the selected slices are stacked into one image. The image $\tilde{X}$ is the cylindrical transform image of $X$ with respect to the pole with a sampling step of $\delta$ along the z-axis. Algorithm 1 shows the pseudocode of the cylindrical transform sampling.
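The sampling above can be sketched in Python. This is a minimal illustrative sketch, not the paper's implementation: the function name `cylindrical_transform`, the slice-offset parameter `delta`, the number of selected slices `n_slices`, and the sampling sizes `R` and `Theta` are all assumed names and defaults.

```python
import numpy as np

def cylindrical_transform(volume, pole, delta=1, n_slices=3, R=8, Theta=16):
    """Sample a volume around a pole (k, x_p, y_p) into a 2D image.

    For each selected slice, rays are cast from the pole at Theta evenly
    spaced angles and R radial steps; samples falling outside the slice
    keep a fill value of zero.  The per-slice (R x Theta) samples are
    stacked vertically into one (n_slices * R) x Theta image.
    """
    K, M, N = volume.shape
    k, x_p, y_p = pole
    # slice indices around the pole, spaced by delta and clipped to the volume
    ks = np.clip(k + delta * (np.arange(n_slices) - n_slices // 2), 0, K - 1)
    out = np.zeros((len(ks) * R, Theta), dtype=volume.dtype)
    thetas = 2.0 * np.pi * np.arange(Theta) / Theta
    for i, kk in enumerate(ks):
        for j, theta in enumerate(thetas):
            for r in range(R):
                # project the (rho, theta) sample back to Cartesian coordinates
                x = int(round(x_p + r * np.cos(theta)))
                y = int(round(y_p + r * np.sin(theta)))
                if 0 <= x < M and 0 <= y < N:
                    out[i * R + r, j] = volume[kk, x, y]
    return out
```

Note that row 0 of each stacked block repeats the pole value for every angle, which is the "up-sampling of the pole" described above.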
Figure 2 shows the advantage of using cylindrical transformed images over independent slices of a volume. A sphere and a cylinder look different in 3D space. However, these objects may look similar on the x-y plane, depending on the location along the z-axis at which they are sliced. The cylindrical transformed images, in contrast, capture the spatial difference along the z-axis and, by combining it with spatial information on the x-y plane, represent a volume around an arbitrary pixel as a single image, suitable for machine learning. This image contains information about spatial dependency on the x-y plane as well as along the z-axis.
2.2 Cylindrical Transform for Semantic Segmentation
Figure 3 shows samples of cylindrical transform generated images from contrast-enhanced abdominal CT. Figure 4 shows the procedure for training a DCNN with cylindrical transformed images. By considering a sequence of images as the input, the cylindrical transform generates images for a number of randomly selected poles in the volume and stores them with their corresponding labels in a pool of images to train a DCNN. The trained model can later be used for inference, where the cylindrical transform considers every pixel in the original image as the pole and generates its corresponding cylindrical transform image. The generated images are then passed to the trained DCNN for classification and labeling of a mask template, which represents the predicted data class for each pixel in the volume.
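The inference step above can be sketched as follows. This is a minimal sketch under stated assumptions: `transform` stands in for the cylindrical transform of the previous section and `classify` for the trained DCNN; both names, and the function `predict_mask`, are illustrative rather than the authors' code.

```python
import numpy as np

def predict_mask(volume, k, transform, classify):
    """Label every pixel of slice k of a (K, M, N) volume by classifying
    its cylindrical transform image with a trained model.

    `transform(volume, (k, x, y))` returns the image for pole (x, y) on
    slice k; `classify(image)` returns an integer class id.
    """
    _, M, N = volume.shape
    mask = np.zeros((M, N), dtype=np.int64)
    for x in range(M):
        for y in range(N):
            patch = transform(volume, (k, x, y))
            mask[x, y] = classify(patch)
    return mask
```

Since every pixel is classified independently, the per-pixel loop is trivially parallelizable by batching the generated images before passing them to the network.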
With the approval of the research ethics board, 20 contrast-enhanced normal abdominal CT acquisitions from an equal number of male and female subjects between 25 and 50 years of age were collected. Each acquisition had on average 18 axial slices containing kidneys. The left and right kidneys were outlined manually by trained personnel and stored as images. The boundary delineation was performed using a standard protocol for all kidneys. To avoid inter-rater variability in the dataset, the quality of segmentation was assured by two board-certified radiologists. The sampling step along the z-axis and the size of a cylindrical transform generated image were fixed for all experiments.
3.2 Technical Details of Training
The FCN models were trained on original images with a setup as outlined in prior work. For experiments with cylindrical transformed images, 7 acquisitions, each containing on average 18 axial slices (totalling 126 slices), were used for training, with 1,000 randomly selected poles per label class per slice to generate cylindrical transformed images. For all experiments, three acquisitions were used for validation and 10 acquisitions for testing. The number of training iterations was set to 120. An Adam optimizer was used [12], together with weight regularization and early-stopping (storing the network parameters and stopping at maximum validation performance within a window of 5 iterations). The training datasets were shuffled in each training epoch. The performance results were collected after 10-fold cross-validation.
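The window-based early-stopping rule described above can be sketched as follows; the class name `EarlyStopper` and its interface are illustrative assumptions, not the authors' implementation.

```python
class EarlyStopper:
    """Stop when validation performance has not improved for `window`
    consecutive iterations, keeping the best parameters seen so far."""

    def __init__(self, window=5):
        self.window = window
        self.best_score = float("-inf")
        self.best_params = None
        self.stale = 0

    def step(self, score, params):
        """Record one validation score; return True when training should stop."""
        if score > self.best_score:
            self.best_score = score
            self.best_params = params  # snapshot of the network parameters
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.window
```

After the loop terminates, `best_params` holds the parameters from the iteration with the maximum validation performance, as described above.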
3.3 Semantic Segmentation of Kidneys in Contrast-Enhanced Abdominal Computed Tomography
Using the definitions of true positives (TP), false positives (FP), and false negatives (FN), the precision $\mathrm{TP}/(\mathrm{TP}+\mathrm{FP})$ and recall $\mathrm{TP}/(\mathrm{TP}+\mathrm{FN})$ measure the success of prediction in classification tasks. The Dice similarity coefficient (DSC) is a well-known measure of the accuracy of segmentation methods. By considering a volume as a set of pixels, for a segmented sequence of images $S$ and its corresponding ground-truth $G$, the DSC is expressed as $\mathrm{DSC} = 2|S \cap G| / (|S| + |G|)$, where $|\cdot|$ is the cardinality of the set. Since we apply the transform to each pixel of the volume, the DSC segmentation accuracy can be interpreted as the top-1 classification accuracy.
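These three metrics can be computed from binary masks as follows; this is a minimal sketch, with `precision_recall_dsc` an assumed helper name.

```python
import numpy as np

def precision_recall_dsc(pred, truth):
    """Precision, recall, and Dice similarity coefficient for one class,
    given boolean masks (True where the class is predicted / present)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # 2|S ∩ G| / (|S| + |G|) rewritten in terms of TP, FP, FN
    dsc = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dsc
```

Note that the DSC is the harmonic mean of precision and recall (the F1 score), which is why it can be read as a per-pixel classification accuracy here.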
| CLT-AlexNet | 4 | 93.68% | 92.55% | 94.42% | 93.52% |

Table 1: DSC values of classification DCNNs trained on cylindrical transform generated images and of FCNs for semantic segmentation of kidneys in contrast-enhanced abdominal CTs. CLT: cylindrical transform; P: pre-trained on ImageNet; LR: adaptive learning rate basis; MB: size of mini-batch. The top DSC value is in boldface.
In prior work, 16,000 original annotated images were used for training a VGG-16 FCN for semantic segmentation of kidneys. In our experiments, the focus was on using a limited number of annotated images and on the sequential spatial dependency between images along the z-axis. For the purpose of semantic segmentation, the FCNs require the entire volume of annotated original images (i.e., 126 images) as input for training and inference. However, the cylindrical transform method enables us to train contemporary classification networks for whole-image classification, without the need for an FCN to predict dense outputs for semantic segmentation.
The performance results of FCN-AlexNet, FCN-GoogLeNet, and FCN-VGG-19 are presented in Table 1. VGG-19 pre-trained on ImageNet requires square input images. Since cylindrical transformed images are not square, we did not use pre-trained models for them. However, we used FCN-VGG-19 for training with original images for the sake of comparison. The experiments were conducted in five schemes: 1) from scratch, end-to-end, in an FCN mode; 2) using weights pre-trained on ImageNet (denoted with P in the tables), end-to-end, in an FCN mode; 3) from scratch, end-to-end, in an FCN mode with augmentation; 4) using weights pre-trained on ImageNet, end-to-end, in an FCN mode with augmentation; 5) from scratch, using cylindrical transformed images (denoted with CLT in the tables). The augmentation methods used for the FCNs include rotation (every 36 degrees; 10 variants), scaling (2 variants), shifting an image in the x-y direction (2 variants), and applying an intensity variation (2 variants).
Table 2 shows the precision and recall scores of the DCNNs evaluated in Table 1. The receiver operating characteristic (ROC) plots in Figure 5 show the area under the curve (AUC) of the classification models trained using cylindrical transformed images, for AlexNet and GoogLeNet respectively. The overall performance results show that FCNs are challenging to train with a limited number of training images. These models achieved lower DSC performance compared to GoogLeNet trained with cylindrical transform generated images, which produced the top DSC value.
Most of the proposed methods for semantic segmentation of sequential images (i.e., a volume) perform segmentation on each image of the sequence independently, without considering the sequential spatial dependency between the images. In addition, annotating sequential images is challenging and expensive, which is a drawback for supervised deep learning models due to their need for a massive number of training samples. In this paper, we investigated the semantic segmentation of sequential images in 3D space by proposing a sampling method in the cylindrical coordinate system. The proposed method can generate as many images as there are pixels in the volume, and therefore augments the training dataset. The generated images contain spatial samples from the x-y plane as well as from the sequential dimension along the z-axis. This method enables us to train contemporary classification convolutional neural networks instead of a fully convolutional network (FCN), which helps the networks avoid overfitting and boosts their generalization performance.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE CVPR, 2015, pp. 3431–3440.
-  K. Sharma, C. Rupprecht, A. Caroli, M. C. Aparicio, A. Remuzzi, M. Baust, and N. Navab, “Automatic segmentation of kidneys using deep learning for total kidney volume quantification in autosomal dominant polycystic kidney disease,” Scientific reports, vol. 7, no. 1, 2017.
-  F. Pouladi, H. Salehinejad, and A. M. Gilani, “Recurrent neural networks for sequential phenotype prediction in genomics,” in Developments of E-Systems Engineering (DeSE), 2015 International Conference on. IEEE, 2015, pp. 225–230.
-  H. Salehinejad, S. Valaee, T. Dowdell, E. Colak, and J. Barfett, “Generalization of deep neural networks for chest pathology classification in x-rays using generative adversarial networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018, pp. 990–994.
-  H. Salehinejad, S. Valaee, T. Dowdell, and J. Barfett, “Image augmentation using radial transform for training deep neural networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018, pp. 3016–3020.
-  H. Salehinejad, J. Baarbe, S. Sankar, J. Barfett, E. Colak, and S. Valaee, “Recent advances in recurrent neural networks,” arXiv preprint arXiv:1801.01078, 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich et al., “Going deeper with convolutions.” CVPR, 2015.
-  A. Canziani, A. Paszke, and E. Culurciello, “An analysis of deep neural network models for practical applications,” arXiv preprint arXiv:1605.07678, 2016.
-  H. Salehinejad, J. Barfett, E. Colak, S. Valaee, A. Mnatzakanian, and T. Dowdell, “Interpretation of mammogram and chest radiograph reports using deep neural networks - preliminary results,” arXiv preprint arXiv:1708.1986764, 2017.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  J. Davis and M. Goadrich, “The relationship between precision-recall and roc curves,” in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 233–240.
-  M. Lapin, M. Hein, and B. Schiele, “Loss functions for top-k error: Analysis and insights,” in IEEE CVPR, 2016, pp. 1468–1477.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in IEEE CVPR, 2009, pp. 248–255.