I. Introduction
Despite tremendous advancements in tomographic imaging, chest radiography remains the most commonly used imaging modality for pulmonary analysis, mainly due to its low cost, low radiation dosage, and widespread availability. Radiation dosage is of particular concern in pediatric applications, especially in neonatal intensive care units where chest radiographs (CXRs) are considered the first option for pulmonary diagnosis [1]. Lung field segmentation is the necessary initial step for image-based pulmonary analysis. Accurate delineation of the lung field from CXR, however, is challenging due to ambiguous boundaries, pathologies, occlusion of the lung field by anatomical structures in the thorax, and anatomical variation of lung shape and size across subjects (Fig. 1). Part of the challenge in developing computer-aided diagnosis (CAD) methods, especially for pediatric cohorts, is also the anatomical shape variation of the lung field that occurs during growth [2, 3]. As shown in Fig. 1, pediatric cohorts have a more compliant chest wall, a smaller thoracic cage, and a relatively large abdominal space. Furthermore, the diaphragm of children has a smaller area of apposition, which has a concave shape in the posterior-anterior (PA) view CXR [3]. Therefore, existing approaches to lung field segmentation, which are designed primarily for adult cohorts, are not accurate when applied to pediatric subjects. Although a few pilot studies such as [2] have recently looked at age-related radiological biomarkers in the lungs, to the best of our knowledge no comprehensive study of pediatric lung field segmentation exists.
Fig. 1: Illustration of age-related anatomical differences captured within CXRs. CXR obtained from: (a) 2-month-old subject, (b) 4-year-old subject, (c) 44-year-old subject. (d) Structural differences in the lung field between adults and pediatrics based on the aspect ratio. (e) Structural differences in the lung field between adults and pediatrics based on the two largest modes of principal component analysis.
Traditionally, CAD algorithms designed to segment the lung field from CXR ignore the retrocardiac region, i.e., the lung region occluded by the heart (Fig. 1(a)). A segmentation label without the retrocardiac region provides only the partial, unobstructed lung field. Accurate delineation, including the occluded retrocardiac region, is necessary for correct diagnosis of diseases related to changes in lung capacity such as atelectasis (lung collapse), hyaline membrane disease, transient tachypnea, and meconium aspiration. Fig. 1(c) presents the correlation between the lung volume estimated from computed tomography (CT) scans and the segmented lung field area from CXR (with and without the retrocardiac region) for 108 individuals. The plot shows a stronger overall correlation between the lung capacity calculated including the retrocardiac region and the lung volume obtained through CT scans (R=0.80 without the retrocardiac region, R=0.86 including the retrocardiac region; no inspiration/expiration information was available; R is the correlation coefficient).
The current CXR-based lung segmentation approaches (Table I) can be divided into three major categories:
Rule-based methods
that use predefined knowledge about the lung field to create a set of rules (e.g., intensity, edge information, etc.) for segmentation. These are usually heuristic approaches; therefore, subsequent refinement steps are generally needed
[4, 5, 6, 7].

TABLE I: Overview of CXR-based lung field segmentation approaches.

Rule-Based Methods
  Brown et al. [4]: Matches the anatomical model of the lung to edges extracted from the image.
  Duryea et al. [5]: Extracts the diaphragm for lung field extraction.
  Armato et al. [6]: Uses global and local intensities.
  Li et al. [7]: Combines edge-based feature classification with iterative contour smoothing.
Feature Classification-Based Methods
  van Ginneken et al. [8]: Uses a kNN classifier with Gaussian derivative filters for multiscale pixel classification.
  McNitt-Gray et al. [9]: Employs a linear discriminator and neural networks with selected features.
  Dai et al. [10]: Uses an adversarial network that jointly trains a segmentation network and a critic network.
  Wang et al. [11]: Uses a fully convolutional network (FCN) to simultaneously segment multiple structures, including the lung field, within chest radiographs.
Deformable Shape Model-Based Methods
  Dawoud et al. [12]: Fuses a shape prior with intensity thresholding.
  Annangi et al. [13]: Integrates lung edge and costophrenic angle information into a level set formulation.
  Sohn et al. [14]: Uses an active contour model [15] for lung field segmentation.
  Shi et al. [16]: Uses cohort-specific statistics to constrain the deformable contour.
  Xu et al. [17]: Combines edge and region forces for shape model deformation.
Hybrid Methods
  Shao et al. [18]: Uses local shape and appearance sparse learning in a hierarchical deformation framework.
  Candemir et al. [19]: Uses multiple atlases with non-rigid registration.
  Ibragimov et al. [20]: Employs Haar-like features with a random forest classifier to model the appearance of landmarks and a shape-based Gaussian distribution to model the spatial relationships amongst those landmarks.
Note: None of the methods include the retrocardiac region as part of the lung field label.
Feature classification-based methods
that formulate segmentation as a classification problem by learning the probability of every pixel (or region) belonging to the lung field. The probability is calculated using a set of features extracted around the pixel being classified
[8, 9]. Recently, [10] used an adversarial architecture for lung field segmentation. Adversarial networks are generally harder to train, i.e., large datasets and exhaustive parameter optimization are needed. Furthermore, as demonstrated later in Section IV (Experimental Results), ignoring object shape specificity results in suboptimal performance even for the most sophisticated feature classification-based methods.
Deformable shape model-based methods
that use curves and surfaces defining the lung field that can be moved to the true boundary under the influence of internal forces from the lung shape and external forces from the lung appearance [12, 13, 14, 16, 17].
In addition, hybrid methods such as [18] and [19] cross over multiple categories. Amongst these approaches, deformable statistical shape models (SSMs) have demonstrated superior performance due to their ability to seamlessly integrate low-level localized appearance features and high-level global features. These models learn patterns of shape deformation from training data of annotated images. A learned model is subsequently deformed to fit the object of interest within the test image by estimating its shape deformation patterns through an appearance-guided iterative optimization procedure. SSMs remain the workhorse for various medical image analysis applications, including lung segmentation; however, the iterative optimization is generally not robust to initialization, complex background, weak edges, and contrast variation. Hence, accurate initialization of shape models [21] and various refinements [22] remain topics of active research. In addition, conventional SSMs [23] assume a unimodal Gaussian distribution of training shapes; in practice, the assumptions of both unimodality and Gaussianity may be inaccurate when the training data consist of shapes with large variation obtained from multiple cohorts, e.g., from adult and pediatric subjects (see Fig. 1(d), 1(e)).
Contrary to SSM methods, representation learning techniques have demonstrated great potential in handling a wide range of variation, including non-Gaussian and multimodal Gaussian distributed data [24, 25, 26]. These techniques have also been found to be robust to intensity variation and to local minima during optimization. However, the cost of performing hypothesis testing at the atomic (pixel/voxel) level prohibits their use for large-object segmentation. Furthermore, since the final segmentation label using these methods is generally obtained as a concatenation of independent atomic-level hypotheses, object shape specificity cannot be guaranteed. Shape modeling through representation learning has not garnered much attention in the past, primarily for two reasons. First, the effective representation of a segmentation (detection + delineation) task as a learning problem is not trivial. Second, handcrafting representation features for deformable objects is not straightforward and relies heavily on human ingenuity [24, 25].
Recently, representation learning through deep learning (DL) has shown great promise in expanding the scope of learning algorithms to automated feature extraction. Specific to medical imaging, DL frameworks are extensively being used in various organ detection [27], classification [26], and segmentation [28] tasks. In this paper, we extend the applicability of DL to parametrized shape learning and demonstrate it via an efficient generic solution to lung field segmentation. The main contributions of our work are:

A generic lung field segmentation framework from CXR, accommodating both adult and pediatric cohorts.

Segmentation of the lung field, including the occluded retrocardiac region, for reliable estimation of lung capacity and inter-/intra-subject comparisons.

A DL-based mechanism for the automated detection of objects of interest with large shape variation from images acquired under diverse acquisition protocols. This detection mechanism, dubbed ensemble space learning (ESL), also addresses the issue of error propagation to subsequent marginal spaces within the current state-of-the-art detection method: marginal space learning (MSL) [29, 30].

A hybrid principal component analysis (PCA)-DL-based approach for including shape prior information for deformable object segmentation. This module, which we call marginal shape deep learning (MaShDL), transforms the iterative approach of conventional SSM-based segmentation methods into a recursive marginal refinement approach. Specifically, the method begins by learning the mode of shape deformation in the eigenspace of the largest variation and then marginally increases the dimensionality of the eigenspace by recursively including the next largest modes. As demonstrated later in the paper, this transformation allows the SSM to be posed as an efficient parameter estimation problem solvable through representation learning.
The proposed framework is evaluated using comprehensive CXR datasets to demonstrate its potential for generic applicability.
II. Datasets and Reference Standards
Our experiments are conducted on both publicly available and in-house acquired datasets covering a wide range of devices, age groups, and pulmonary pathologies. 247 publicly available radiographs from the Japanese Society of Radiological Technology (JSRT; http://www.jsrt.or.jp) dataset and 108 from the Belarus Tuberculosis Portal (BTP; http://tuberculosis.by) were used. For data acquired in-house, after approval from the Internal Review Board, 313 posterior-anterior CXRs were collected at Children's National Health System (CNHS). The subjects in the JSRT dataset have ages between 16 and 89 years. The dataset is a standard digital CXR database, with and without chest lung nodules, created by the Japanese Society of Radiological Technology. The radiographs had dimensions of pixels, a spatial resolution of mm/pixel, and a digital resolution of 12 bits. BTP images, from patients between 18 and 86 years of age, had dimensions of pixels, a spatial resolution of mm/pixel, and a digital resolution of 12 bits. The dataset consists of CXRs obtained from patients diagnosed with or suspected of multidrug-resistant tuberculosis (MDR-TB). The CXR findings of these patients include consolidation, cavitary lesions, nodules, pleural effusion, pneumothorax, and fibrotic scars. For the CNHS data, patients with ages between 3 months and 18 years with viral chest infections were scanned. The dataset consists of radiographs collected from individuals having, or suspected of having, either human metapneumovirus (hMPV) or rhinovirus. The radiological manifestations of these viruses include acute respiratory infections, chronic lung conditions, chest wall deformities, and cardiovascular anomalies. The radiographs have dimensions within the range pixels, with spatial resolution ranging between and , and a digital resolution of 12 bits. For consistency of training data, all scans from the three datasets were resized to pixels using B-spline interpolation.
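As a concrete sketch of this preprocessing step, cubic B-spline resizing can be done with `scipy.ndimage.zoom`; the 256x256 target below is an illustrative assumption, since the exact dimensions are not stated here:

```python
import numpy as np
from scipy import ndimage

def resize_cxr(image, target):
    """Resize a radiograph to a fixed grid using cubic B-spline interpolation."""
    zoom = (target[0] / image.shape[0], target[1] / image.shape[1])
    return ndimage.zoom(image.astype(np.float64), zoom, order=3)

# toy radiograph with a bright lung-like region
img = np.zeros((120, 100))
img[30:90, 20:80] = 1.0
resized = resize_cxr(img, (256, 256))
```

The `order=3` argument selects the cubic spline; applying the same target size to all three datasets is what makes the training patches comparable.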
The ground truth labels, both including and excluding the retrocardiac region, were prepared by two fellows using the ITK-SNAP interactive software under the supervision of two expert pulmonologists. For the ground truth labels including the retrocardiac region, an overall inter-observer agreement of was observed; specifically, for the CNHS data and for the JSRT and BTP data. Ground truth labels excluding the retrocardiac region were prepared for comparison with the state-of-the-art methods. To construct the statistical shape model, 144 boundary points (72 per left/right lung) with anatomical correspondences were annotated. Specifically, six manually annotated primary landmarks were initially obtained for each lung based on their distinctive anatomical appearance and ability to roughly define the shape of the lung. Subsequently, equidistant secondary landmarks were estimated along the lung contour using interpolation between the primary landmarks. To ensure that no loss in segmentation label accuracy occurred due to the interpolation, the accuracy of the proposed interpolation method was evaluated using the Dice coefficient score (DCS) between the manual ground truth and the landmark-based interpolated contour. A mean DCS of was obtained for our dataset. Further details on our manual landmarking approach can be found in [31].
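The equidistant secondary-landmark interpolation and the Dice check above can be sketched in numpy. The hexagonal "primary landmarks" below are synthetic; only the 72-points-per-lung count follows the text, and `resample_contour`/`dice` are hypothetical helper names:

```python
import numpy as np

def resample_contour(points, k):
    """Resample a closed 2-D contour into k equidistant landmarks by
    linear interpolation along the cumulative arc length."""
    closed = np.vstack([points, points[:1]])           # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    targets = np.linspace(0.0, arc[-1], k, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.stack([x, y], axis=1)

def dice(a, b):
    """Dice coefficient score between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# six synthetic "primary landmarks" resampled into 72 secondary landmarks
primary = np.array([[0, 0], [4, 0], [6, 3], [4, 6], [0, 6], [-2, 3]], float)
secondary = resample_contour(primary, 72)
```

In the paper's setting, the Dice score would be computed between the mask rasterized from `secondary` and the manually delineated mask.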
III. Methods
III-A. Overview
Fig. 3 shows the flow diagram summarizing the proposed framework. The segmentation of a deformable object (the lung field) is performed by learning space (localization) and shape parameters using two separate DL architectures. As demonstrated later in the manuscript, the presented DL-based approach to shape parameter learning is theoretically equivalent to the one adopted by conventional SSM techniques: estimating the shape parameters of the object of interest under constraints on shape model and appearance. However, unlike the iterative convergence approaches of conventional SSMs, which optimize the entire shape parameter space simultaneously, the proposed method transforms the parameter space into linearly independent subspaces and employs a battery of DL classifiers to learn the shape parameters individually. This marginal learning of independent parameter subspaces makes our approach both computationally tractable and significantly more accurate compared to the state-of-the-art SSM approaches. Herein, we introduce a generic method for space and shape parameter learning of deformable objects, which we later apply to lung field segmentation from CXR.
III-B. Parametrized Shape Representation and Learning
Among the approaches for deformable shape representation presented in the literature [32], the PCA-based SSM [23] has been found to be the most successful due to its simplicity, performance, and compact representation. These models have been widely used to deform an initial estimate of shape (mostly the mean shape of the object of interest obtained using training data) under the guidance of appearance-based image evidence (external forces) and shape priors (internal forces). SSM uses an explicit point-based representation in which each shape is described by N points (or landmarks) distributed across the contour. Given a set of aligned shapes in 2D, the SSM is defined using a mean shape x̄, a set of eigenvectors φ_1, ..., φ_K, and a set of corresponding eigenvalues λ_1 ≥ ... ≥ λ_K, obtained by applying PCA to the aligned shapes. The magnitude of λ_k is proportional to the shape variance explained by the corresponding eigenvector. The number of retained modes m is generally chosen to be the smallest number of modes such that their cumulative variance explains a sufficiently large proportion of the total variance explained by all eigenvectors. Subsequently, any shape x in the non-aligned image space can be approximated using the anisotropic similarity transform parameters (presented below), the aligned mean shape x̄, and the weighted sum of the m largest modes (eigenvectors),

x = T(x̄ + Φ_m b),   (1)
where T is an invertible matrix called the anisotropic similarity transform matrix. The matrix transforms the mean shape from the aligned shape space to the non-aligned image space using, specifically, position θ_p = (t_x, t_y), orientation θ_r, and anisotropic scale θ_s = (s_x, s_y); Φ_m = [φ_1, ..., φ_m] collects the m largest modes and b = (b_1, ..., b_m)^T is the vector of shape weights. Given T and Φ_m, the weight parametrization of a new target shape x can be obtained using (1) as b = Φ_m^T (T^(-1)(x) - x̄). The legitimacy of the estimated shape is generally guaranteed by imposing individual constraints on each weight; [23] demonstrated that suitable constraints on the weights are typically of the order of |b_k| ≤ 3√λ_k.
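The shape-model construction described above can be sketched in numpy on synthetic aligned contours. The 98% variance cutoff and the 3√λ clamp are common SSM choices assumed here; `project` and `reconstruct` are hypothetical helper names:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy aligned training shapes: 72 contour points in 2-D, flattened
n_shapes, n_pts = 50, 72
t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
base = np.stack([np.cos(t), np.sin(t)], axis=1)
shapes = (base[None] * (1 + 0.1 * rng.standard_normal((n_shapes, 1, 1)))
          + 0.002 * rng.standard_normal((n_shapes, n_pts, 2))).reshape(n_shapes, -1)

x_bar = shapes.mean(axis=0)                       # mean shape
lam, phi = np.linalg.eigh(np.cov(shapes, rowvar=False))
order = np.argsort(lam)[::-1]                     # sort modes by variance
lam, phi = lam[order], phi[:, order]

# smallest m whose cumulative variance reaches 98% of the total
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.98)) + 1

def project(x):
    """Weights b of an aligned shape, clamped to |b_k| <= 3*sqrt(lambda_k)."""
    b = phi[:, :m].T @ (x - x_bar)
    lim = 3.0 * np.sqrt(np.maximum(lam[:m], 0.0))
    return np.clip(b, -lim, lim)

def reconstruct(b):
    """Shape approximated by the mean plus the m largest modes."""
    return x_bar + phi[:, :m] @ b
```

A new (aligned) test shape is parametrized by `project` and approximated by `reconstruct`, which mirrors eq. (1) without the similarity transform.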
After initializing the similarity parameters described by T, the SSM iteratively adjusts the deformable shape until convergence, causing the points of the shape to move under the influence of the object model and image evidence. The weight vector after t+1 iterations is

b^(t+1) = b^(t) + Δb^(t),   (2)
where Δb^(t) is the change in the model parameters at iteration t. Using eq. (1) and (2), the shape parameter vector Θ for any deformable shape in 2D can be written as

Θ = (θ_p, θ_r, θ_s, b) = (t_x, t_y, θ_r, s_x, s_y, b_1, ..., b_m).   (3)
Given a shape parameter vector Θ and a set of 2D training images I (where I(u, v) denotes the intensity at location (u, v)), a representation classifier can be learned to estimate the correct parameter vector Θ̂ by maximizing the following posterior probability over the valid parameter space:

Θ̂ = argmax_Θ P(Θ | I).   (4)
However, due to the large number of testing hypotheses in eq. (4), learning a classifier with efficiency comparable to traditional SSM-based iterative segmentation methods is challenging and requires a large amount of training data. A few attempts have been made in the past at efficient parameter learning by partitioning the parameter space into linearly or marginally independent subspaces. For instance, [24] proposed an efficient method, MSL, for object detection by training classifiers to learn the space parameters Θ_space = (θ_p, θ_r, θ_s). Since its introduction, MSL has been successfully applied in various medical imaging applications such as segmentation of the heart [24], left ventricle detection [33], midsagittal plane detection [34], and standard echocardiographic plane detection [35]. MSL learns classifiers in marginally independent parameter subspaces. Their work suggested that the dimensionality of the effective parameter space can be significantly contracted by separating conditionally independent parameters into semigroups (translations, scales, and orientations). A semigroup is an algebraic structure consisting of a set with an associative binary operation. According to MSL, the object detection approach can be expressed as the maximization of the posterior probability of the semigroup Θ_space,

P(Θ_space | I) = P(θ_p | I) · P(θ_r | I, θ_p) · P(θ_s | I, θ_p, θ_r).   (5)
Extending the concept, we propose that the posterior probability of the full semigroup Θ can be similarly approximated as the maximization of the marginal probabilities of its semisubgroups Θ_space and Θ_shape = (b_1, ..., b_m),

P(Θ | I) ≈ P(Θ_space | I) · P(Θ_shape | I, Θ_space).   (6)
However, in contrast to eq. (5) and the MSL framework proposed in [24], which does not impose any commutativity constraints, our proposition in eq. (6) is subject to the assertion that the parameter vector can be estimated marginally only as a nowhere-commutative semigroup: Θ_space must be estimated before Θ_shape. A nowhere-commutative semigroup is any semigroup S such that, for all a and b in S, if ab = ba then a = b. The nowhere-commutativity is enforced since, as discussed before, within the context of SSM-based methods (eq. (1)), the image-aligned mean shape T(x̄) serves as the initialization for the shape deformation. During the iterative process of (2), this initial estimate is continuously refined until convergence.
The marginal parameter space simplification introduced in eq. (6) is mainly intended to improve the computational cost of a classifier-based approach to parameter estimation. However, despite this simplification, the proposed classifier-based framework meets or exceeds the iterative refinement-based SSM alternatives in terms of segmentation accuracy, as demonstrated later in Section IV-B. The semisubgroups Θ_space and Θ_shape can be further partitioned down to the trivial semisubgroup level, i.e., a semigroup with one element only,

P(Θ_space | I) ≈ ∏_i P(θ_i | I, θ_1, ..., θ_(i-1))   (7)

and

P(Θ_shape | I, Θ_space) ≈ ∏_(j=1..m) P(b_j | I, Θ_space, b_1, ..., b_(j-1)),   (8)

where the one-element factors of (7) and (8) form commutative and nowhere-commutative semisubgroups, respectively. Eq. (7) and (8) suggest that by splitting the semigroups into commutative and non-commutative non-trivial semisubgroups, a multi-dimensional learning space can be approximated by a concatenation of one-dimensional subspaces, thereby reducing the computational complexity of the manifold [29, 30]. Individual classifiers can subsequently be trained for the independent subspaces, thus simplifying training and reducing the amount of data needed to train the classifiers.
III-C. Deep Learning Network for Space and Shape Parameter Estimation
Our proposed DL framework for learning the parameters consists of two main layers: an unsupervised stacked denoising autoencoder (SdAE) layer for pre-training to initialize the weights of a feed-forward deep neural network (DNN), and a supervised DNN layer for fine-tuning. Unsupervised pre-training to initialize the weights of a DNN has been demonstrated to have better convergence properties, especially if the labeled training data is not very large [36].

An autoencoder (AE) consists of two components: the encoder and the decoder. The AE, in our framework, takes a vectorized image patch x as input and maps it to a hidden representation h through a deterministic mapping, h = s(Wx + b_h), where h is called the activation vector, s(z) = 1/(1 + e^(-z)) is the logistic sigmoid function, W is the mapping matrix, and b_h is the bias vector. The decoder maps h back to the same shape as the observed data using the reverse mapping. The denoising autoencoder (dAE) is a stochastic version of the AE. Specifically, to force the hidden layers to discover more robust features, the dAE is trained to reconstruct the input from its corrupted version. Finally, the SdAE [37] is a DNN consisting of multiple layered dAEs.

Once the layers are pre-trained using the SdAE, the weights and biases of the encoder layers are used to initialize the feed-forward DNN. This network architecture is subsequently used for learning the space and shape parameters in our DL framework. For greater detail on the training of DNNs and SdAEs, readers are encouraged to review [38]. Specific details of the network configuration pertaining to learning the space and shape parameters are presented in Sections III-D and III-F, respectively.
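The pre-training idea, train a denoising autoencoder on corrupted patches and reuse its encoder weights to initialize the supervised network, can be illustrated with a single tied-weight dAE in plain numpy. The patch size, hidden width, corruption rate, and learning rate below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# toy "vectorized image patches": 200 samples, 64 inputs, 16 hidden units
X = rng.random((200, 64))
n_in, n_hid, lr = 64, 16, 0.5
W = rng.normal(0, 0.1, (n_in, n_hid))
b_enc, b_dec = np.zeros(n_hid), np.zeros(n_in)

for _ in range(200):
    X_noisy = X * (rng.random(X.shape) > 0.3)      # zero out 30% of inputs
    H = sigmoid(X_noisy @ W + b_enc)               # encoder
    R = sigmoid(H @ W.T + b_dec)                   # decoder (tied weights)
    # gradients of the squared reconstruction error w.r.t. pre-activations
    dR = (R - X) * R * (1 - R)
    dH = (dR @ W) * H * (1 - H)
    W -= lr * (X_noisy.T @ dH + dR.T @ H) / len(X)
    b_enc -= lr * dH.mean(0)
    b_dec -= lr * dR.mean(0)

# pre-trained encoder weights initialize the supervised feed-forward layer
dnn_layer_W, dnn_layer_b = W.copy(), b_enc.copy()
```

Stacking several such layers (each trained on the previous layer's hidden codes) yields the SdAE; the supervised fine-tuning stage then trains the initialized DNN on the labeled hypotheses.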
III-D. Space Parameter Estimation
MSL and MSDL, the current state-of-the-art learning-based techniques for space parameter estimation, have been found to be very successful in various medical imaging applications [39, 25]. Both approaches solve the same classification problem using two different classification techniques: MSL uses the probabilistic boosting tree classifier, while MSDL adopts a deep neural network for the parameter estimation. Both MSL and MSDL are initialized using a bounding box of arbitrary parameters (Fig. 3(a)). These parameters are then marginally refined (translation, followed by orientation, followed by scale). The marginal refinement transforms the arbitrary bounding box into a minimum-area bounding box enclosing the object of interest. The sequential parameter learning within MSL, however, results in the propagation of estimation error to successive stages. Specifically, the error in the translation estimation propagates to the orientation and scale estimations. Consequently, the cumulative estimation error at a given stage is lower-bounded by the cumulative error at the previous stages,

ε(θ_p) ≤ ε(θ_p, θ_r) ≤ ε(θ_p, θ_r, θ_s),   (9)

where ε(·) denotes the cumulative estimation error of the enclosed parameters.
Further explanation of the propagation of this domain-normalized error is provided in Section IV-A. Moreover, since MSL and MSDL are based on using a minimum-area bounding box, deciding the optimal initialization values of the similarity transform parameters for the bounding box is generally not trivial, especially in data with large variation. To address these challenges, we propose ESL, which learns Θ_space by transforming it from a marginally independent semigroup of parameters (as described in MSL, eq. (9)) to a linearly independent semigroup of surrogate parameters. Specifically, instead of estimating the parameters using the minimum-area bounding box, ESL estimates them as a function of the four linearly independent vertices of two sets of parallel lines bounding the object of interest. Fig. 4 graphically illustrates the methodological differences between MSL and ESL for the specific application of lung field segmentation. Given a pair of parallel bounding lines and a second pair of lines perpendicular to the first, the four intersecting vertices provide the estimation of the translation (θ_p) and the scale (θ_s) of the minimum-area bounding box enclosing the object of interest (the lung field). The box of estimated translation and scale is subsequently used to estimate the orientation (θ_r). Unlike MSL, no assumption on the initial values of the parameters is needed in ESL. Moreover, since the parameters of ESL are linearly independent, the translation and scale are obtained jointly from the four estimated vertices through a sequence of geometrical operations, so their errors do not compound sequentially. Similar to eq. (9), the lower bound on the cumulative estimation error for the space parameters using ESL is

ε(θ_p, θ_s) ≤ ε(θ_p, θ_s, θ_r).   (10)
Since the orientation is estimated independently and PA CXRs are acquired under an upright positioning protocol, the two pairs of bounding lines can, for simplicity, be assumed to be parallel to the horizontal and vertical axes of the image, respectively (Fig. 3(b)). Therefore, for the pairs of lines bounding the object of interest parallel to the horizontal and vertical axes, the bounding line estimation problem is reduced to estimating two pairs of x-intercepts (the lines bounding the object vertically) and y-intercepts (the lines bounding it horizontally).
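Under this upright-PA assumption, the ground-truth bounding lines and the box parameters they induce can be read directly off a binary mask; `bounding_lines` and `box_from_lines` are hypothetical helper names used for illustration:

```python
import numpy as np

def bounding_lines(mask):
    """Axis-aligned bounding lines of a binary lung mask: two x-intercepts
    (left/right vertical lines) and two y-intercepts (top/bottom lines)."""
    ys, xs = np.nonzero(mask)
    return (xs.min(), xs.max()), (ys.min(), ys.max())

def box_from_lines(xpair, ypair):
    """Translation (box centre) and anisotropic scale from the four
    intersection vertices of the bounding lines."""
    (x1, x2), (y1, y2) = xpair, ypair
    centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    scale = (x2 - x1, y2 - y1)
    return centre, scale

mask = np.zeros((100, 100), bool)
mask[20:80, 30:70] = True                # toy lung-field mask
xpair, ypair = bounding_lines(mask)
centre, scale = box_from_lines(xpair, ypair)
```

At training time these quantities define the positive hypotheses for the four line classifiers; at test time the same geometry converts the four estimated lines back into translation and scale.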


III-D1. Bounding Line Estimation
Training: Four separate DL classifiers are trained for the four bounding lines. To provide contextual information to the classifier, an image patch is extracted around each line (see Fig. 3(b)). A positive hypothesis for a line is formulated as finding the horizontally (or vertically) oriented bounding box centered at the respective line position,

|l̂ - l*| ≤ ε_1,   (11)

where l* and l̂ denote the ground truth and the hypothesized position of the line, respectively. Similarly, a negative sample satisfies

|l̂ - l*| ≥ ε_2, with ε_2 > ε_1.   (12)
The separation in the positive and negative hypotheses is intended to provide a clean split between the training hypotheses.
Classifier Architecture: The set of positive-class image patches (satisfying eq. (11)) and negative-class image patches (satisfying eq. (12)) are first normalized to the range [0, 1] and then stacked together to train the framework presented in Sec. III-C. As mentioned in Section II, the digital resolution in all three datasets used in our experiments is 12 bits (4096 gray levels, unsigned, as recorded in the corresponding DICOM tags); therefore, the CXR intensities are divided by 4096 to achieve normalization. For training datasets acquired under different protocols, the corresponding DICOM tags can be used to decide the normalization. Moreover, in our experiments, the patch size is set based on performance accuracy and efficiency. For the SdAE, we use the sigmoid activation function, a learning rate of 0.001, a batch size of 1000, and 100 epochs. For the DNN, we use the sigmoid activation, a batch size of 1000, a learning rate of 0.1, and 100 epochs. The parameters of the network are empirically estimated to minimize the reconstruction error. Furthermore, the number of layers is decided empirically: layers are added to the network until the reconstruction error stops decreasing. The proposed deep learning architecture for line estimation is shown in Fig. 5a.

Hypothesis Testing: Each pixel row (or column) along the axis (as shown in Fig. 5a) is tested for the line position using the trained classifiers. Similar to the practice adopted in [29], the position of the line is determined by averaging the top candidates (10 in our experiments) with the highest scores in order to make the framework robust to classification noise. The four intersecting vertices of the bounding lines are used to extract θ_p and θ_s using a sequence of well-known geometrical operations.
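The top-candidate averaging at test time can be sketched as follows; the Gaussian-shaped score profile stands in for the trained line classifier's per-row responses:

```python
import numpy as np

def estimate_line(scores, top_k=10):
    """Line position from per-row (or per-column) classifier scores:
    average the top_k highest-scoring candidates to damp classification
    noise."""
    top = np.argsort(scores)[-top_k:]
    return float(top.mean())

# synthetic classifier scores peaked around row 40, with small noise
rng = np.random.default_rng(1)
rows = np.arange(100)
scores = np.exp(-0.5 * ((rows - 40) / 3.0) ** 2) + 0.01 * rng.random(100)
pos = estimate_line(scores)
```

Averaging the strongest candidates rather than taking a single argmax is what makes the estimate stable when neighboring rows score almost equally well.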
III-D2. Orientation Estimation
Training: The orientation estimation hypothesis in ESL is formulated as finding the object of interest with centroid at position θ_p, anisotropic scale θ_s, and orientation θ_r. Using a bounding box with position and anisotropic scale already estimated, the hypotheses for orientation estimation are generated by rotating the bounding box around its centroid. The position-scale-orientation (anisotropic similarity) hypothesis is positive if, in addition to satisfying (11) for all four lines, it also satisfies |θ̂_r - θ_r*| ≤ ε_r, where θ_r* denotes the orientation of the bounding box encapsulating the ground-truth label and θ̂_r is the hypothesized orientation. A negative hypothesis satisfies |θ̂_r - θ_r*| > ε_r. In our experiments, the hypotheses are generated in steps of 0.0175 rad (1 degree).
Classifier Architecture: For computational efficiency and feature uniformity, the patches extracted using the oriented bounding box are resized to a fixed size using B-spline interpolation. The proposed architecture and its configuration for orientation estimation are shown in Fig. 5b. The SdAE and DNN use the same hyperparameters as the bounding line classifiers (Section III-D1).



Hypothesis Testing: A bounding box with the position and anisotropic scale estimated in Section III-D1 is rotated with a fixed step size. The trained orientation classifier is then used to calculate the similarity score for each rotated hypothesis. The final estimate is obtained as the average of the top 10 candidates.
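The orientation sweep can be sketched with a stand-in scorer in place of the trained classifier; the 1-degree step follows the text, while the sweep span and the peaked score function are illustrative assumptions:

```python
import numpy as np

def estimate_orientation(score_fn, step=np.pi / 180, span=np.pi / 6, top_k=10):
    """Rotate the translated/scaled box over a range of angle hypotheses,
    score each with the classifier (score_fn is a stand-in here), and
    average the top_k candidates."""
    angles = np.arange(-span, span + 1e-9, step)
    scores = np.array([score_fn(a) for a in angles])
    top = np.argsort(scores)[-top_k:]
    return float(angles[top].mean())

# stand-in classifier whose response peaks at the true angle 0.05 rad
true_angle = 0.05
score = lambda a: np.exp(-((a - true_angle) / 0.02) ** 2)
est = estimate_orientation(score)
```

As with the line estimation, averaging the best-scoring angles rather than taking a single maximum damps classifier noise around the peak.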
III-E. Optimal Mean Shape Determination
For optimal performance using SSM-based segmentation methods, the mean contour shape has to be initialized as close to the true boundary as possible [40]. Since the anatomical structure of the lung evolves with age, resulting in shape variation amongst age groups, we evaluate a multiple shape modeling approach for our generic framework. Based on maximum likelihood estimation of a Gaussian mixture model (GMM) clustering of the aspect ratios (Fig. 1(d)), the optimal number of shape models for our training dataset is determined to be two. The aspect ratio is also found to be strongly correlated with the shape variation of the modes weighted by the eigenvalues. Therefore, the training data is partitioned into two groups based on the aspect ratio.

Training: Let X_g denote the set of training shapes for group g; the optimal mean shape x̄_g for that group is obtained iteratively by minimizing the following residual error after generalized Procrustes alignment of the group's training shapes,
x̄_g = argmin_(x̄) Σ_(x_i ∈ X_g) ||T_i(x̄) - x_i||²,   (13)

where T_i denotes the generalized Procrustes transformation from the mean shape to a training shape x_i.
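Eq. (13) is typically solved with the usual alternating scheme: align every shape to the current mean with a least-squares similarity transform, re-average, and re-normalize. A numpy sketch on synthetic contours (the reflection check of a full Procrustes fit is omitted for brevity, and all shapes here are synthetic):

```python
import numpy as np

def align(src, dst):
    """Least-squares similarity (Procrustes) transform of src onto dst."""
    sc, dc = src.mean(0), dst.mean(0)
    s0, d0 = src - sc, dst - dc
    u, _, vt = np.linalg.svd(s0.T @ d0)
    r = u @ vt                                   # rotation
    s = (d0 * (s0 @ r)).sum() / (s0 ** 2).sum()  # isotropic scale
    return s * (s0 @ r) + dc

def procrustes_mean(shapes, iters=10):
    """Iterative generalized Procrustes estimate of the group mean shape:
    align every training shape to the current mean, then re-average."""
    mean = shapes[0].copy()
    for _ in range(iters):
        aligned = np.stack([align(x, mean) for x in shapes])
        mean = aligned.mean(0)
        mean -= mean.mean(0)                     # fix translation
        mean /= np.linalg.norm(mean)             # fix scale
    return mean

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
base = np.stack([np.cos(t), np.sin(t)], axis=1)
shapes = np.stack([2.0 * rng.uniform(0.5, 1.5) * base
                   + rng.normal(0, 0.02, base.shape) for _ in range(20)])
mean_shape = procrustes_mean(shapes)
```

Running this per aspect-ratio group gives the two group-specific mean shapes used to initialize the deformation.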
Hypothesis Testing: The appropriate shape model for a test image is chosen based on its estimated aspect ratio. An empirically determined threshold on the aspect ratio is used to decide between the two shape models for the test image.
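The grouping and the test-time model selection can be sketched with a small two-component 1-D EM fit; the aspect-ratio clusters below are synthetic and the resulting threshold is illustrative, not the paper's empirical value:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic aspect ratios: a pediatric-like and an adult-like cluster
ratios = np.concatenate([rng.normal(0.80, 0.04, 150),
                         rng.normal(1.00, 0.05, 150)])

# two-component 1-D Gaussian mixture fitted with a few EM iterations
mu = np.array([ratios.min(), ratios.max()])
sig = np.array([ratios.std(), ratios.std()])
w = np.array([0.5, 0.5])
for _ in range(50):
    pdf = w * np.exp(-0.5 * ((ratios[:, None] - mu) / sig) ** 2) \
        / (sig * np.sqrt(2 * np.pi))
    resp = pdf / pdf.sum(1, keepdims=True)                  # E-step
    nk = resp.sum(0)
    mu = (resp * ratios[:, None]).sum(0) / nk               # M-step
    sig = np.sqrt((resp * (ratios[:, None] - mu) ** 2).sum(0) / nk)
    w = nk / len(ratios)

# test-time rule: threshold the aspect ratio between the component means
threshold = mu.mean()
choose_model = lambda r: int(r > threshold)
```

The midpoint between the fitted means is one simple way to derive such a threshold; in practice it would be tuned on the training data.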
III-F. Shape Parameter Estimation
The concept of using representation learning methods for SSM is not novel. A few attempts have already been made in the literature, such as [29], where irregular sampling patterns were used to capture the shape deformation, followed by Haar-wavelet feature extraction. However, the need for extracting optimal handcrafted features, the amount of training data needed to learn the shape parameters simultaneously, and the computational complexity of the multi-parameter classifier made representation learning methods a less attractive choice compared to conventional iterative optimization techniques. Our proposed approach, Marginal Shape Deep Learning (MaShDL), attempts to address these challenges. To learn Θ_shape, MaShDL adopts a recursive rather than the iterative approach adopted by conventional SSM (eq. (2)) [23]. Specifically, instead of estimating and optimizing all modes collectively (eq. (1)), MaShDL refines the aligned mean shape by recursively adding finer modes. This modification simplifies the hypothesis space by letting separate classifiers be trained for each mode. From eq. (1), (2), and (8), the estimated aligned shape using the j largest modes can be written recursively in terms of the aligned shape obtained using the j-1 largest modes,

x̂_j = x̂_(j-1) + b_j φ_j, with x̂_0 = x̄,   (14)
where is the eigenvector and is the corresponding weight. It is important to mention here that modes and mean shape in eq. (14) are based on the grouping performed in section (IIIE), the superscript is dropped for the ease of reading. Eq. (14) transforms (1) from block parameter estimation of the modes (as performed in eq. (2)) to recursive estimation by successively adding the next lower order mode. Moreover, eq. (8) and (14) imply that is nowhere commutative subgroup since the largest mode of variation has to be estimated prior to mode. Therefore, parameter estimation through representation learning in MaShDL starts with the most informative (highest) mode and sequentially adds lower variability modes.
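The equivalence between block estimation (eq. (1)) and the recursive form (eq. (14)) can be checked numerically; the mean shape, eigenvectors, and weights below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_modes = 144, 5
mean_shape = rng.normal(size=n_pts)
# Orthonormal eigenvectors (columns), standing in for PCA of training shapes
phi, _ = np.linalg.qr(rng.normal(size=(n_pts, n_modes)))
b = rng.normal(size=n_modes)            # per-mode weights

# Block estimation (eq. 1): all modes applied at once
s_block = mean_shape + phi @ b

# Recursive estimation (eq. 14): add one mode at a time,
# starting from the highest-variance mode
s = mean_shape.copy()
for k in range(n_modes):
    s = s + b[k] * phi[:, k]

assert np.allclose(s, s_block)
```

Both paths produce the same shape; the recursive form is what allows one classifier per mode.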
Training: MaShDL begins by learning the highest mode by deforming the mean shape (the zeroth mode), which is subsequently deformed by the second-highest mode, and so on.
Positive Shape Hypotheses: The positive shape hypotheses for all modes are the same. A positive hypothesis corresponds to extracting patches around the 144 landmarks (72 per lung) of the manually delineated ground-truth shape.
Negative Shape Hypotheses: The negative hypotheses for the $k$-th mode are fabricated as follows:
(a) Use eq. (1) and the mean shape to estimate the “true” modes of variation of each shape in the training set.
(b) To generate negative hypotheses for the $k$-th mode of a shape, generate a set of synthesized shapes by keeping the $k-1$ largest estimated modes from (a) constant and varying only the $k$-th mode. A negative hypothesis must satisfy eq. (7) and
$$\left| b_k - \hat{b}_k \right| \geq \delta \qquad (15)$$
where $\hat{b}_k$ is the value of the $k$-th mode obtained using eq. (2), and $\delta$ translates, in our application, to a minimum landmark-to-landmark distance of 2 pixels. Patches (shown as squares in Fig. 5c) are then extracted around the points of each synthesized shape. Fig. 6 shows examples of positive (in green) and negative (in red) hypotheses for the four highest modes. Each hypothesis corresponds to a shape depicted by the concatenation of the patches extracted around the (red or green) landmark points. The extracted hypotheses are subsequently used to train a classifier for each deformation mode. Similar to conventional SSM, our framework uses local appearance information to move the object boundary to the optimal position.
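A minimal sketch of the negative-hypothesis synthesis, assuming orthonormal PCA modes; the sampling range and minimum offset below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def synthesize_negatives(mean_shape, phi, b_true, k, lam_k,
                         n_samples=20, min_offset=0.5, rng=None):
    """Synthesize negative shape hypotheses for mode k: modes 0..k-1 keep
    their true weights, while mode k is perturbed away from its true value.
    `phi` has orthonormal columns; `lam_k` is the k-th eigenvalue."""
    rng = rng or np.random.default_rng()
    base = mean_shape + phi[:, :k] @ b_true[:k]     # true lower-order modes
    negatives = []
    while len(negatives) < n_samples:
        bk = rng.uniform(-3 * np.sqrt(lam_k), 3 * np.sqrt(lam_k))
        if abs(bk - b_true[k]) < min_offset:        # too close to the positive
            continue
        negatives.append(base + bk * phi[:, k])
    return np.stack(negatives)
```

The rejection test mirrors eq. (15): any synthesized weight closer than the offset to the true weight would yield a hypothesis indistinguishable from a positive one.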
Classifier Architecture:
In our experiments, identical patch sizes are used for training the classifiers of all modes (several patch sizes were tested). Smaller patch sizes are found to be prone to noise, while larger sizes tend to miss subtle shape deformations. The image patches extracted around every landmark point are stacked together in a specific order (Fig. 5c) to form a single hypothesis, so every training hypothesis has the same pixel dimensions. A multi-layer SdAE followed by a DNN is used (shown in Fig. 5c). For the SdAE, a sigmoid activation function, a learning rate of 0.001, a batch size of 1000, and 100 epochs are used.
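For illustration, one tied-weight denoising autoencoder layer (a single building block of an SdAE) can be sketched in plain NumPy with the stated sigmoid activation and 0.001 learning rate; the layer sizes and noise level here are simplifications of the paper's network, not its actual configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One tied-weight denoising autoencoder layer: reconstruct the clean
    input from a noise-corrupted copy (illustrative sketch)."""
    def __init__(self, n_in, n_hidden, lr=0.001, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, size=(n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def train_epoch(self, X, noise=0.1, rng=None):
        rng = rng or np.random.default_rng(1)
        X_noisy = X + rng.normal(0, noise, size=X.shape)   # corrupt input
        h = self.encode(X_noisy)
        X_rec = sigmoid(h @ self.W.T + self.c)             # tied weights
        err = X_rec - X                                    # target is clean X
        # Gradient descent on the squared reconstruction error
        d_rec = err * X_rec * (1 - X_rec)
        d_h = (d_rec @ self.W) * h * (1 - h)
        gW = X_noisy.T @ d_h + d_rec.T @ h                 # encoder + decoder paths
        self.W -= self.lr * gW
        self.b -= self.lr * d_h.sum(axis=0)
        self.c -= self.lr * d_rec.sum(axis=0)
        return float((err ** 2).mean())
```

In an SdAE, such layers are pre-trained greedily one at a time, each on the hidden codes of the previous layer, before the stack is fine-tuned with the DNN classifier on top.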
Hypothesis Testing:
The optimal mean shape (obtained in Section III-E) is first aligned to the detected object in the test image. Next, the trained classifier for the largest mode of shape variation is used to deform the aligned mean shape, followed by the classifier trained for the second-highest mode, and so on. The process iterates over successively lower variation modes until a set cumulative energy is reached, which, in our application, is equivalent to including the largest fifteen modes of variation. Limiting the number of modes is a common practice when creating PCA-based statistical shape models [23]. Although there is no theoretical limit on learning all modes with MaShDL, a larger training dataset is generally needed to train classifiers for lower-ranked modes because the difference between positive and negative hypotheses becomes increasingly subtle as the number of modes grows. Moreover, we predict that both the total number of estimable modes and the machine-discernibility of adjacent modes are correlated with the digital and spatial image resolution.
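Choosing the number of modes from a cumulative-energy threshold can be sketched as follows (the threshold value is illustrative; the paper's exact value is not shown here):

```python
import numpy as np

def modes_for_energy(eigenvalues, energy=0.98):
    """Smallest number of leading modes whose eigenvalues reach the
    requested fraction of total variance (threshold is illustrative)."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    frac = np.cumsum(lam) / lam.sum()
    # first index where cumulative fraction reaches the target
    return int(np.searchsorted(frac, energy) + 1)
```

For example, with eigenvalues `[5, 3, 1, 1]` a 0.9 energy target selects three modes, since 5+3+1 carries 90% of the total variance.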
III-G Data Augmentation
Since the number of positive hypotheses in our training routine is smaller than the number of negative hypotheses, a data augmentation approach similar to the one presented in [41], along with random sampling, is adopted to balance the samples prior to training. Specifically, we use two forms of data augmentation: (1) geometric augmentation and (2) appearance augmentation. The geometric augmentation consists of generating horizontal and vertical reflections of the hypotheses, while the appearance augmentation consists of slightly altering the intensities of the training images. For intensity alteration, we first perform PCA over the entire training dataset. Subsequently, to each normalized training image, we add multiples of the extracted principal components with magnitudes proportional to the corresponding eigenvalues times random variables drawn from a Gaussian distribution with zero mean and 0.1 standard deviation, i.e.,
$$\tilde{I} = I + \sum_j \phi_j^{I} \left( \alpha_j \lambda_j^{I} \right), \qquad \alpha_j \sim \mathcal{N}(0,\, 0.1^2)$$
where $\phi_j^{I}$ and $\lambda_j^{I}$ denote the $j$-th eigenvector and eigenvalue, respectively; the superscript $I$ marks the training image data, differentiating them from the eigenvalues and eigenvectors of the training shape data defined in Section III-B. The same draw of the random variables is applied to every pixel of the training image. The same augmentation scheme is applied to the hypotheses of every classifier in our framework.
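The appearance augmentation can be sketched as a fancy-PCA-style perturbation in the spirit of [41]; here the PCA is assumed to be over flattened training images, and all names are illustrative:

```python
import numpy as np

def pca_intensity_augment(image, eigvecs, eigvals, sigma=0.1, rng=None):
    """Add multiples of the leading intensity principal components, each
    scaled by its eigenvalue times an N(0, sigma) draw. `eigvecs` columns
    and `eigvals` come from PCA of the flattened training images."""
    rng = rng or np.random.default_rng()
    alpha = rng.normal(0.0, sigma, size=len(eigvals))
    perturb = eigvecs @ (alpha * eigvals)   # one image-wide perturbation field
    return image + perturb.reshape(image.shape)
```

A fresh set of draws per augmented copy yields many plausible intensity variants of each training image.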
IV Experimental Results
The performance of the proposed framework and its individual modules (ESL, MaShDL) was evaluated using two-fold cross-validation. All three datasets (JSRT, BTP, CNHS) were evenly divided into two sets for training and validation, and the results were averaged over the two validation rounds.
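The evaluation protocol can be sketched as follows; `train_and_eval` is a hypothetical callback standing in for training the full pipeline and returning a validation score:

```python
import numpy as np

def two_fold_scores(samples, train_and_eval, rng=None):
    """Two-fold cross-validation: split evenly, train on one half,
    validate on the other, then average the two validation scores."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(samples))
    folds = [idx[: len(idx) // 2], idx[len(idx) // 2:]]
    scores = [train_and_eval(train=folds[1 - i], val=folds[i]) for i in range(2)]
    return float(np.mean(scores))
```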
IV-A Space Parameter Estimation: MSL vs. ESL
The performance of ESL and MSL was compared using the DL extension of MSL [25]. Furthermore, the parameters in the original MSL were reordered from eq. (5) for a more meaningful comparison with ESL: translation, followed by scale and orientation estimation, respectively. Fig. 7 presents the estimation errors in translation and scale using MSL and ESL. ESL yielded a significantly smaller translation error than MSL (Wilcoxon rank-sum test), and likewise a smaller average scale-estimation error. Although both ESL and MSL follow the same mechanism for orientation estimation, as predicted in eq. (9), the accumulation of error from translation and scale made MSL's orientation error significantly worse than ESL's. The average per-CXR detection times of the ESL pipeline and of MSL were also measured. Both techniques were implemented in Matlab (The MathWorks, Inc., Natick, MA) and run using CPU only.


IV-B Shape Parameter Estimation: MaShDL vs. ASM
Fig. 8(a) shows boxplots of the DSC for lung field segmentation using just the mean shape (baseline), the SSM-based ASM [23], and MaShDL (using one and two SSMs). A single model was created using the training data from all three datasets; two separate shape models were created using the clustering criteria described in Section III-E. Mean shape initialization was performed using ESL (Section III-D). The best results were achieved with the two SSMs; however, in both cases MaShDL significantly outperforms the conventional ASM. Mean DSC values are reported for mean-shape alignment through ESL alone, for ASM, and for MaShDL. The results in the boxplot are reported using the modes carrying the set cumulative energy.


Method  Overlap  ACD (mm)  DSC 

Mean ± Standard Deviation (Min/Max)  
[13]      
[12]    
[14]    
[16]    
[8]  
Pixel Classification (PC)  
AAM Whiskers  
ASM tuned  
Hybrid AAM+PC    
Hybrid ASM+PC    
PC+Postprocessing    
Hybrid Voting    
[18]  
[19]    
[20]    
[10]    
Inter-Observer Agreement  
With Retrocardiac Region  
Without Retrocardiac Region [42]    
U-Net [11] (With Retrocardiac Region)  
Overall  
JSRT  
BTP  
CNHS  
U-Net [11] (Without Retrocardiac Region)  
Overall  
JSRT  
BTP  
CNHS  
Proposed Method (With Retrocardiac Region)  
Overall  
JSRT  
BTP  
CNHS  
Proposed Method (Without Retrocardiac Region)  
Overall  
JSRT  
BTP  
CNHS 
GT=binary labels of manual ground truth. 
SEG=binary labels produced by the proposed method. 
The operator |·| denotes cardinality. 
Method tested on the JSRT dataset. 
Method tested on the JSRT dataset among others. 
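For reference, the three metrics in Table II can be computed from binary masks GT and SEG (and contour point sets for ACD) roughly as follows; the exact ACD definition used by some compared works may differ slightly:

```python
import numpy as np

def dice(gt, seg):
    """DSC = 2|GT ∩ SEG| / (|GT| + |SEG|) on binary masks."""
    inter = np.logical_and(gt, seg).sum()
    return 2.0 * inter / (gt.sum() + seg.sum())

def overlap(gt, seg):
    """Jaccard overlap = |GT ∩ SEG| / |GT ∪ SEG|."""
    inter = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    return inter / union

def average_contour_distance(c1, c2):
    """Symmetric mean of nearest-neighbor distances between two
    contour point sets (N x 2 arrays, in pixels or mm)."""
    d = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Note that DSC and overlap are monotonically related on the same pair of masks, while ACD penalizes boundary displacement directly and is therefore more sensitive to localized contour errors.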
Fig. 8(b) shows the performance of ASM and MaShDL as a function of the cumulative modes of variation (two SSMs). The DL mechanism adopted by MaShDL to extract local appearance features deforms the shape contour to the true object boundary using fewer modes than ASM. Also, from eq. (2) and (14), each atomic unit within ASM and MaShDL has the same order of computational complexity; therefore, for a given performance accuracy, MaShDL is faster than ASM. In our experiments, the MaShDL framework was found to be at least four times faster on average than SSM for a given accuracy.
IV-C Quantitative Comparison with State-of-the-Art Methods
We compared the segmentation performance of our approach to the results reported by state-of-the-art methods using three widely used metrics (overlap, average contour distance (ACD), and DSC) in Table II. The table reports the performance on both lungs. The results reported here for our method are obtained on the original images, not the downsampled version. None of the other methods includes the retrocardiac region within the segmentation. In addition, we compared the segmentation performance with the U-Net-based architecture proposed by [11], the current state-of-the-art convolutional neural network for biomedical image segmentation. The U-Net architecture and its derivatives have been used extensively for segmentation in radiological and histological images, providing some of the most accurate and satisfactory performances [43, 44, 45, 46]. The U-Net architecture is a fully convolutional network that includes shortcut connections between a contracting encoder and a successive expanding decoder. The quantitative segmentation performance, overall and on the individual datasets (JSRT, BTP, CNHS), using Wang's approach [11] is also reported in Table II for segmentation labels with and without the retrocardiac space. The exact same architecture and hyperparameters as reported in [11] were used, except for the post-processing step, which was omitted for a fair comparison since our proposed approach does not use any post-processing. A range of hyperparameters was tested; however, the ones proposed in [11] were found to be optimal for the task.
Fig. 9 presents qualitative results of lung segmentation using the proposed pipeline (ESL+MaShDL). The figure provides visual insight into how inclusion of the retrocardiac region results in a segmentation label that is independent of shape and structural changes in nearby anatomical structures such as the heart. For comparison, similar qualitative results for the lung field labels obtained using the method proposed in [11] are provided in Fig. 10. As predicted, shape specificity is not preserved in the lung field labels obtained using [11]; this is further evident in the results presented in Table II. Moreover, unlike the proposed method, the U-Net architecture uses an overlap-based objective function (e.g., cross-entropy), which provides satisfactory results in cases with reduced shape variability. In thoracic radiographs, however, the lung field labels without the retrocardiac space exhibit higher shape variability than those that include this region. This could explain the slightly better overlap-based performance (i.e., Overlap and DSC) of U-Net [11] when the retrocardiac space is included than when it is excluded.






V Discussion and Conclusion
This work introduced a generic representation learning framework for deformable object segmentation via space (translation, orientation, anisotropic scaling) and shape parameter estimation. The boundary detectors in conventional statistical shape models (SSM) do not work consistently well on data with complex patterns or with poor contrast and edge information. Furthermore, since SSMs are known to be sensitive to the initial shape estimate, an efficient learning-based mechanism to estimate the space parameters (translation, scale, and orientation) was also presented in this work to initialize the mean shape. Our solution to space parameter learning, ensemble space learning (ESL), was significantly more accurate than the current state-of-the-art marginal space learning (MSL) [29] and marginal space deep learning (MSDL) [30]
approaches, as demonstrated through rigorous experiments. Although ESL has the potential to be generically applicable to the localization of objects of interest in 2D/3D images, the algorithm, in its current form, assumes symmetry of the object of interest (such as the lung field) as well as neighborhood context information for efficiency purposes. Therefore, while ESL is expected to perform best for the localization of objects in medical images, where organ symmetry and contextual information can be somewhat guaranteed, it may need to be modified for optimal results on general computer vision tasks. Furthermore, because standard clinical acquisition protocols are followed, large rotational variation is not expected across CXR images; therefore, rotation estimation is still performed sequentially after translation and scaling rather than independently. The method can be easily modified for tasks where large rotational variation in the training data is expected.
Furthermore, for a given performance accuracy, our formulation for marginal shape deep learning (MaShDL) estimated deformable shape parameters significantly faster than conventional SSM-based methods. As stressed throughout the manuscript, MaShDL extends the ASM into the deep learning realm, which results in better overall accuracy, as demonstrated by a rigorous set of experiments. However, since the mathematical framework behind MaShDL is still similar to that of the ASM, some limitations inherent to the traditional ASM carry over to MaShDL as well: (1) the tedious task of labeling training images, which becomes prohibitive with large training sets. As pointed out previously, although the current scheme of six manually defined landmarks was found sufficient for accurate lung field segmentation (measured by the Dice score between the manual ground-truth label and the label obtained using the interpolated landmarks), a different number of manually annotated landmarks can be used depending on the application and the object of interest. (2) As in the traditional ASM, parameters such as the number of modes of variation still need to be specified. Furthermore, since the difference between the positive and negative hypotheses becomes more subtle at higher modes, learning modes beyond a certain limit will require approaches such as deeper networks and mode-dependent thresholds for hypothesis testing to be investigated. (3) The use of global statistical shape models by approaches such as ASM is one of the most successful ways to impose shape and anatomical constraints in medical image segmentation. However, while providing robust and anatomically accurate constraints, it also limits the flexibility of the method to deal with small localized shape details, such as the region around the diaphragm in chest radiographs.
To overcome this limitation, we intend to extend our previous work on partitioned shape modeling [47] to MaShDL in the future. (4) For the specific application of lung field segmentation, certain extreme cases of scoliosis that the mean shape model failed to capture accurately during training may show suboptimal accuracy. Although we demonstrated an application of our framework through the segmentation of the lung field from CXRs of diversified populations (i.e., in age, pathology, and source), the algorithm was designed to robustly handle variation in shape as long as the standard acquisition protocols of a routine clinical environment were followed. Our framework is applicable to general deformable object segmentation in both 2D and 3D image data, as a faster and potentially more accurate alternative to statistical appearance and shape models.
References
 [1] C.-C. Yu, “Radiation safety in the neonatal intensive care unit: too little or too much concern?” Pediatrics & Neonatology, vol. 51, no. 6, pp. 311–319, 2010.
 [2] S. Candemir, S. Antani, S. Jaeger, R. Browning, and G. R. Thoma, “Lung boundary detection in pediatric chest X-rays,” in SPIE Medical Imaging. International Society for Optics and Photonics, 2015, pp. 94 180Q–94 180Q.
 [3] M. Smeets, B. Brunekreef, L. Dijkstra, and D. Houthuijs, “Lung growth of preadolescent children,” European Respiratory Journal, vol. 3, no. 1, pp. 91–96, 1990.
 [4] M. S. Brown, L. S. Wilson, B. D. Doust, R. W. Gill, and C. Sun, “Knowledge-based method for segmentation and analysis of lung boundaries in chest X-ray images,” Computerized Medical Imaging and Graphics, vol. 22, no. 6, pp. 463–477, 1998.
 [5] J. Duryea and J. M. Boone, “A fully automated algorithm for the segmentation of lung fields on digital chest radiographic images,” Medical Physics, vol. 22, no. 2, pp. 183–191, 1995.
 [6] S. G. Armato, M. L. Giger, and H. MacMahon, “Automated lung segmentation in digitized posteroanterior chest radiographs,” Academic Radiology, vol. 5, no. 4, pp. 245–255, 1998.
 [7] L. Li, Y. Zheng, M. Kallergi, and R. A. Clark, “Improved method for automatic identification of lung regions on chest radiographs,” Academic Radiology, vol. 8, no. 7, pp. 629–638, 2001.
 [8] B. Van Ginneken and B. M. ter Haar Romeny, “Automatic segmentation of lung fields in chest radiographs,” Medical Physics, vol. 27, no. 10, pp. 2445–2455, 2000.
 [9] M. F. McNitt-Gray, H. Huang, and J. W. Sayre, “Feature selection in the pattern classification problem of digital chest radiograph segmentation,” Medical Imaging, IEEE Transactions on, vol. 14, no. 3, pp. 537–547, 1995.
 [10] W. Dai, J. Doyle, X. Liang, H. Zhang, N. Dong, Y. Li, and E. P. Xing, “Scan: Structure correcting adversarial network for chest xrays organ segmentation,” arXiv preprint arXiv:1703.08770, 2017.
 [11] C. Wang, “Segmentation of multiple structures in chest radiographs using multitask fully convolutional networks,” in Scandinavian Conference on Image Analysis. Springer, 2017, pp. 282–289.
 [12] A. Dawoud, “Fusing shape information in lung segmentation in chest radiographs,” in Image Analysis and Recognition. Springer, 2010, pp. 70–78.
 [13] P. Annangi, S. Thiruvenkadam, A. Raja, H. Xu, X. Sun, and L. Mao, “A region based active contour method for X-ray lung segmentation using prior shape and low level features,” in Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on. IEEE, 2010, pp. 892–895.
 [14] K. Sohn, “Segmentation of lung fields using Chan-Vese active contour model in chest radiographs,” in SPIE Medical Imaging. International Society for Optics and Photonics, 2011, pp. 796 332–796 332.
 [15] T. F. Chan and L. A. Vese, “Active contours without edges,” Image processing, IEEE transactions on, vol. 10, no. 2, pp. 266–277, 2001.
 [16] Y. Shi, F. Qi, Z. Xue, L. Chen, K. Ito, H. Matsuo, and D. Shen, “Segmenting lung fields in serial chest radiographs using both populationbased and patientspecific shape statistics,” Medical Imaging, IEEE Transactions on, vol. 27, no. 4, pp. 481–494, 2008.
 [17] T. Xu, M. Mandal, R. Long, I. Cheng, and A. Basu, “An edgeregion force guided active shape approach for automatic lung field detection in chest radiographs,” Computerized Medical Imaging and Graphics, vol. 36, no. 6, pp. 452–463, 2012.
 [18] Y. Shao, Y. Gao, Y. Guo, Y. Shi, X. Yang, and D. Shen, “Hierarchical lung field segmentation with joint shape and appearance sparse learning,” Medical Imaging, IEEE Transactions on, vol. 33, no. 9, pp. 1761–1780, 2014.
 [19] S. Candemir, S. Jaeger, K. Palaniappan, J. P. Musco, R. K. Singh, Z. Xue, A. Karargyris, S. Antani, G. Thoma, and C. J. McDonald, “Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration,” Medical Imaging, IEEE Transactions on, vol. 33, no. 2, pp. 577–590, 2014.
 [20] B. Ibragimov, B. Likar, F. Pernuš, and T. Vrtovec, “Accurate landmarkbased segmentation by incorporating landmark misdetections,” in Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016, pp. 1072–1075.
 [21] F. A. Cosío, “Automatic initialization of an active shape model of the prostate,” Medical Image Analysis, vol. 12, no. 4, pp. 469–483, 2008.
 [22] S. Zhang, Y. Zhan, M. Dewan, J. Huang, D. N. Metaxas, and X. S. Zhou, “Towards robust and effective shape modeling: Sparse shape composition,” Medical Image Analysis, vol. 16, no. 1, pp. 265–277, 2012.
 [23] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models: their training and application,” Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.
 [24] Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu, “Fourchamber heart modeling and automatic segmentation for 3D cardiac CT volumes using marginal space learning and steerable features,” Medical Imaging, IEEE Transactions on, vol. 27, no. 11, pp. 1668–1681, 2008.
 [25] F. C. Ghesu, B. Georgescu, Y. Zheng, J. Hornegger, and D. Comaniciu, “Marginal space deep learning: Efficient architecture for detection in volumetric image data,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer, 2015, pp. 710–718.
 [26] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou, “Lung pattern classification for interstitial lung diseases using a deep convolutional neural network,” IEEE Transactions on Medical Imaging, vol. PP, no. 99, pp. 1–1, 2016.
 [27] H.-C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach, “Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4D patient data,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 8, pp. 1930–1943, 2013.
 [28] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
 [29] Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu, “Fourchamber heart modeling and automatic segmentation for 3D cardiac CT volumes using marginal space learning and steerable features,” Medical Imaging, IEEE Transactions on, vol. 27, no. 11, pp. 1668–1681, 2008.
 [30] F. C. Ghesu, E. Krubasik, B. Georgescu, V. Singh, Y. Zheng, J. Hornegger, and D. Comaniciu, “Marginal space deep learning: efficient architecture for volumetric image parsing,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1217–1228, 2016.
 [31] K. Okada, M. Golbaz, A. Mansoor, G. F. Perez, K. Pancham, A. Khan, G. Nino, and M. G. Linguraru, “Severity quantification of pediatric viral respiratory illnesses in chest xray images,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2015, pp. 165–168.
 [32] R. Davies, C. Taylor et al., Statistical models of shape: Optimisation and evaluation. Springer Science & Business Media, 2008.

 [33] Y. Zheng, X. Lu, B. Georgescu, A. Littmann, E. Mueller, and D. Comaniciu, “Robust object detection using marginal space learning and ranking-based multi-detector aggregation: Application to left ventricle detection in 2D MRI images,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 1343–1350.
 [34] A. Schwing, Y. Zheng, M. Harder, and D. Comaniciu, “Method and system for anatomic landmark detection using constrained marginal space learning and geometric inference,” Jan. 29, 2013, US Patent 8,363,918.
 [35] X. Lu, B. Georgescu, Y. Zheng, J. Otsuki, and D. Comaniciu, “Autompr: Automatic detection of standard planes in 3d echocardiography,” in Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on. IEEE, 2008, pp. 1279–1282.

 [36] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio, “Why does unsupervised pre-training help deep learning?” Journal of Machine Learning Research, vol. 11, no. Feb, pp. 625–660, 2010.
 [37] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio, “An empirical evaluation of deep architectures on problems with many factors of variation,” in Proceedings of the 24th International Conference on Machine Learning. ACM, 2007, pp. 473–480.
 [38] P. Vincent, H. Larochelle, Y. Bengio, and P.A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” in Proceedings of the 25th international conference on Machine learning. ACM, 2008, pp. 1096–1103.
 [39] Y. Zheng and D. Comaniciu, Marginal Space Learning for Medical Image Analysis. Springer, 2014.
 [40] T. E. Cootes and A. Lanitis, “Active shape models: Evaluation of a multiresolution method for improving image search,” in Proc. British Machine Vision Conference, 1994, pp. 327–338.

 [41] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
 [42] B. Van Ginneken, M. B. Stegmann, and M. Loog, “Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database,” Medical Image Analysis, vol. 10, no. 1, pp. 19–40, 2006.
 [43] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, “Efficient multiscale 3d cnn with fully connected crf for accurate brain lesion segmentation,” Medical image analysis, vol. 36, pp. 61–78, 2017.
 [44] K. Sirinukunwattana, S. E. A. Raza, Y.W. Tsang, D. R. Snead, I. A. Cree, and N. M. Rajpoot, “Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1196–1206, 2016.
 [45] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 424–432.
 [46] H. Chen, X. Qi, L. Yu, and P.A. Heng, “Dcan: Deep contouraware networks for accurate gland segmentation,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 2487–2496.
 [47] A. Mansoor, J. J. Cerrolaza, R. Idrees, E. Biggs, M. A. Alsharid, R. A. Avery, and M. G. Linguraru, “Deep learning guided partitioned shape model for anterior visual pathway segmentation,” IEEE transactions on medical imaging, vol. 35, no. 8, pp. 1856–1865, 2016.