I. Introduction
Medical imaging systems that produce images for specific diagnostic tasks are commonly assessed and optimized by use of objective measures of image quality (IQ). Objective measures of IQ quantify the performance of an observer for specific tasks [3, 22, 26, 27, 31, 42, 37, 43, 39]. When imaging systems and data-acquisition designs are optimized for binary signal detection tasks (e.g., detection of a tumor or lesion), the observer performance can be summarized by the receiver operating characteristic (ROC) curve. The Bayesian Ideal Observer (IO) maximizes the area under the ROC curve (AUC) and has been advocated for computing figures of merit (FOMs) for guiding imaging system optimization [3, 22, 42]. In this way, the amount of task-specific information in the measurement data is maximized. The IO computes the test statistic as any monotonic transformation of the likelihood ratio [3]. It can be employed to assess the efficiency of other numerical observers and of human observers [23].

Joint signal detection and localization (detection-localization) tasks are frequently considered in medical imaging [33, 11, 9, 36, 35]. When such tasks are considered, the localization receiver operating characteristic (LROC) curve can be employed to describe the observer performance, and the area under the LROC curve (ALROC) can be employed as a FOM to guide the optimization of imaging systems. The IO employs a modified generalized likelihood ratio test (MGLRT) and maximizes the ALROC [17]. Except for some special cases, the test statistics involved cannot be computed analytically. Markov-Chain Monte Carlo (MCMC) techniques [22] have been developed to approximate likelihood ratios by constructing Markov chains that comprise samples drawn from posterior distributions. However, practical issues such as the design of proposal densities from which Markov chains can be efficiently generated need to be addressed. Current applications of MCMC methods for approximating the IO have been limited to specific object models that include a lumpy object model [22], a parametrized torso phantom [14], and a binary texture model [1].
Supervised learning-based methods hold great promise for establishing numerical observers that can be employed to compute objective measures of IQ. When optimizing imaging systems and data-acquisition designs, computer-simulated data are often employed [22, 3]. In such cases, large amounts of labeled data can be simulated to train inference models to be employed as numerical observers [21, 42]. Artificial neural networks (ANNs) possess the ability to represent complicated functions and, accordingly, they can be trained to establish numerical observers and approximate test statistics. Supervised learning methods have been successfully employed to train ANNs for this purpose. For example, Kupinski et al. explored the use of fully-connected neural networks (FCNNs) to approximate the test statistic of the IO acting on low-dimensional feature vectors for binary signal detection tasks [21]. More recently, Zhou et al. developed a supervised learning-based method for computing the test statistics of IOs performing binary signal detection tasks with 2D image data by use of convolutional neural networks (CNNs) [37, 42]. In addition, the ability of CNNs to approximate the IO for a background-known-exactly (BKE) signal detection-localization task has been explored [38]. However, the BKE assumption in that study is simplistic, and there remains a need to explore methods for approximating the IO for more realistic signal detection-localization tasks that account for background variability.

In this work, a supervised learning-based method that employs CNNs to approximate the IO for signal detection-localization tasks is explored. The proposed method represents a deep-learning-based implementation of the IO decision strategy proposed in the seminal theoretical work by Khurd and Gindi [17]. The considered signal detection-localization tasks involve various object models in combination with several realistic measurement noise models. Numerical observer performance is assessed via LROC analysis. The results of the proposed supervised-learning methods are compared to those produced by MCMC methods or analytical computations when feasible.

The remainder of this paper is organized as follows. Salient aspects of joint signal detection-localization theory are reviewed in Section II. The use of supervised deep learning for approximating the IO for signal detection-localization tasks is described in Section III. The numerical studies and results are discussed in Sections IV and V, respectively. Finally, the article concludes with a discussion in Section VI.
II. Background
A digital imaging system can be described as a continuous-to-discrete (C-D) mapping:

$\mathbf{g} = \mathcal{H} f + \mathbf{n},$ (1)

where the vector $\mathbf{g}$ denotes the measured image data, $f(\mathbf{r})$ is a compactly supported and bounded function of a spatial coordinate $\mathbf{r}$ that describes the object being imaged, $\mathcal{H}$ is the imaging operator that maps $f(\mathbf{r})$ to $\mathbf{g}$, and $\mathbf{n}$ is the measurement noise. The measured data $\mathbf{g}$ are random because the measurement noise is random. Object variability is known to limit observer performance [28]. As such, the object function can be either deterministic or stochastic, depending on the specification of the diagnostic task to be assessed. When linear imaging operators are considered, the imaging process described in Eqn. (1) can be written as [3]:

$g_m = \int_{\Omega} d\mathbf{r}\, h_m(\mathbf{r}) f(\mathbf{r}) + n_m,$ (2)

where $g_m$ and $n_m$ are the $m^{\text{th}}$ elements of $\mathbf{g}$ and $\mathbf{n}$, respectively, and $h_m(\mathbf{r})$ is the point response function (PRF) of the imaging system.
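The discrete linear imaging model of Eqn. (2) can be sketched numerically. In this minimal sketch, the object is a 1D vector, and the system matrix `H` (whose rows play the role of discretized PRFs, here a simple Gaussian blur) is a toy stand-in for the paper's actual C-D operator; all sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 1D object of 64 samples mapped to 64 measurements.
# Row m of H is a discretized PRF h_m (a Gaussian blur in this toy example).
M = 64
m_idx = np.arange(M)
H = np.exp(-0.5 * ((m_idx[:, None] - m_idx[None, :]) / 2.0) ** 2)

f = rng.uniform(0.0, 1.0, M)    # discretized object
n = rng.normal(0.0, 0.1, M)     # measurement noise
g = H @ f + n                   # Eqn. (2): g_m = sum_k H[m, k] f_k + n_m
```

Because the noise enters additively after the deterministic mapping, repeated draws of `n` with the same `f` produce the randomness in `g` described above.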
II-A Detection-localization tasks with a discrete-location model
Detection-localization tasks in which the signal location is modeled as a discrete parameter having finite possible values are considered [17]. Also, for the tasks considered, signal-present images are assumed to contain a single signal [17]. The number of possible signal locations is denoted as $J$. An observer is required to classify an image as satisfying one of the $J+1$ hypotheses (i.e., one signal-absent hypothesis and $J$ signal-present hypotheses). The imaging processes under these hypotheses can be represented as:

$H_0: \mathbf{g} = \mathbf{b} + \mathbf{n},$
$H_j: \mathbf{g} = \mathbf{b} + \mathbf{s}_j + \mathbf{n}, \quad j = 1, \ldots, J,$ (3)

where $\mathbf{b}$ is the image of the background, and $\mathbf{s}_j$ is the image of the signal at the $j^{\text{th}}$ location. The quantities $b_m$ and $s_{j,m}$ will denote the $m^{\text{th}}$ elements of $\mathbf{b}$ and $\mathbf{s}_j$, respectively. It should be noted that $H_0$ is the signal-absent hypothesis, and $H_j$ is the signal-present hypothesis corresponding to the $j^{\text{th}}$ signal location ($1 \le j \le J$).
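Data under the $J+1$ hypotheses above can be simulated directly. The following minimal sketch uses a toy 1D "image", a known background, and hypothetical signal templates; none of the dimensions or values are the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

M, J = 64, 9                       # measurement length and number of locations (toy)
b = np.full(M, 10.0)               # background image b (BKE toy choice)
# s[j]: image of the signal at the (j+1)-th location (hypothetical templates)
s = np.zeros((J, M))
for j in range(J):
    s[j, 5 + 6 * j] = 4.0

def draw_image(j, noise_std=1.0):
    """Draw g under H_0 (j == 0) or H_j (1 <= j <= J), per the hypotheses above."""
    n = rng.normal(0.0, noise_std, M)
    return b + n if j == 0 else b + s[j - 1] + n

g0 = draw_image(0)                 # signal-absent image
g3 = draw_image(3)                 # signal present at the 3rd location
```

Labeling each draw with its hypothesis index yields exactly the kind of training pairs used later in Section III.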
Scanning observers compute a test statistic for each possible signal location, and a max-statistic rule can subsequently be implemented to make a decision [10]. The decision strategy employed by scanning observers with max-statistic rules is described as [10]:

$\max_{1 \le j \le J} t_j(\mathbf{g}) \;\gtrless\; \tau,$ (4)

with the signal declared present at the location $j^* = \arg\max_{1 \le j \le J} t_j(\mathbf{g})$ when the threshold is exceeded, and declared absent otherwise. Here, $t_j(\cdot)$ maps the measured image $\mathbf{g}$ to a real-valued test statistic corresponding to the $j^{\text{th}}$ signal location, and $\tau$ is a predetermined decision threshold. An LROC curve that depicts the trade-off between the probability of correct localization and the false-positive rate is formed by varying $\tau$ to evaluate the observer performance [17].

II-B Scanning Ideal Observer and Scanning Hotelling Observer
The scanning IO that employs a modified generalized likelihood ratio test (MGLRT) maximizes the signal detection-localization task performance. The MGLRT [17] can be described as:

$\max_{1 \le j \le J} \Lambda_j(\mathbf{g}) = \max_{1 \le j \le J} \frac{p(\mathbf{g} \mid H_j)}{p(\mathbf{g} \mid H_0)} \;\gtrless\; \tau.$ (5)

By use of Bayes' rule, the decision strategy described in Eqn. (5) is equivalent to a posterior ratio test [38]:

$\max_{1 \le j \le J} \frac{\Pr(H_j \mid \mathbf{g})}{\Pr(H_0 \mid \mathbf{g})} \;\gtrless\; \tau'.$ (6)

When the likelihood function in Eqn. (5) can be described by a Gaussian probability density function having the same covariance matrix $\mathbf{K}$ under each hypothesis, the IO is equivalent to a scanning Hotelling Observer (HO) [2, 10]. When $\mathbf{K}$ is a constant, the scanning HO can be represented as [2, 10]:

$t_j(\mathbf{g}) = \mathbf{w}_j^T \mathbf{g},$ (7)

where $\mathbf{w}_j$ is the Hotelling template corresponding to the $j^{\text{th}}$ signal location. Due to its relative ease of implementation, the scanning HO can be employed when the IO test statistic is difficult to compute. In addition, for a simplified binary signal detection task, the IO test statistic can be computed as a posterior probability.

III. Approximating the IO for signal detection-localization tasks by use of CNNs
To approximate the IO for signal detection-localization tasks, a CNN can be trained to approximate the set of posterior probabilities that are employed in the posterior probability test in Eqn. (6). To achieve this, the softmax function is employed in the last layer of the CNN, the so-called softmax layer, so that the output of the CNN can be interpreted as probabilities. Let $\boldsymbol{\Theta}$ denote the vector of weight parameters of a CNN and let $\mathbf{z}(\mathbf{g})$ denote the output of the last hidden layer of the CNN, which is also the input to the softmax layer. The CNN-approximated posterior probabilities can be computed as:

$p(H_j \mid \mathbf{g}; \boldsymbol{\Theta}) = \frac{\exp\!\big(z_j(\mathbf{g})\big)}{\sum_{k=0}^{J} \exp\!\big(z_k(\mathbf{g})\big)},$ (8)

where $z_j(\mathbf{g})$ is the $j^{\text{th}}$ element of $\mathbf{z}(\mathbf{g})$. The CNN parameter vector $\boldsymbol{\Theta}$ is to be determined such that the difference between the CNN-approximated posterior probability $p(H_j \mid \mathbf{g}; \boldsymbol{\Theta})$ and the actual posterior probability $\Pr(H_j \mid \mathbf{g})$ is minimized.
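The softmax mapping of Eqn. (8) combined with the posterior-ratio max-statistic rule of Eqn. (6) can be sketched with numpy. Here a hand-picked logit vector stands in for a trained CNN's last-hidden-layer output; the threshold value is illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                        # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def decide(z, tau):
    """Posterior-ratio max-statistic rule of Eqn. (6).

    z: length-(J+1) last-hidden-layer output (z_0 corresponds to H_0).
    Returns (signal_present, estimated_location or None); locations are 1..J.
    """
    p = softmax(z)                         # Eqn. (8): approximated posteriors
    ratios = p[1:] / p[0]                  # Pr(H_j|g) / Pr(H_0|g), j = 1..J
    j_star = int(np.argmax(ratios))
    if ratios[j_star] > tau:
        return True, j_star + 1
    return False, None

present, loc = decide(np.array([2.0, 0.5, 3.0, 1.0]), tau=1.5)
# Largest posterior ratio is exp(3.0 - 2.0) ~ 2.72 > 1.5, at location 2.
```

Sweeping `tau` over its range while recording detection and correct-localization outcomes traces out the LROC curve described in Section II-A.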
The maximum likelihood (ML) estimate of $\boldsymbol{\Theta}$ can be approximated by use of a supervised learning method. Let $y \in \{0, 1, \ldots, J\}$ denote the label of the measured image $\mathbf{g}$, where $y = j$ corresponds to the hypothesis $H_j$. The ML estimate of the CNN weight parameters can be obtained by minimizing the generalization error, which is defined as the ensemble average of the cross-entropy over the distribution $p(\mathbf{g}, y)$ [21, 42]:

$\hat{\boldsymbol{\Theta}} = \arg\min_{\boldsymbol{\Theta}} \Big\langle -\log p(H_y \mid \mathbf{g}; \boldsymbol{\Theta}) \Big\rangle_{p(\mathbf{g}, y)},$ (9)

where $\langle \cdot \rangle_{p(\mathbf{g}, y)}$ denotes the ensemble average over the distribution $p(\mathbf{g}, y)$. When the CNN possesses sufficient representation capacity such that $\mathbf{z}(\mathbf{g})$ can take any functional form, $p(H_j \mid \mathbf{g}; \hat{\boldsymbol{\Theta}}) = \Pr(H_j \mid \mathbf{g})$. To see this, one can compute the gradient of the cross-entropy with respect to $z_j(\mathbf{g})$ as:

$\frac{\partial}{\partial z_j(\mathbf{g})} \Big\langle -\log p(H_y \mid \mathbf{g}; \boldsymbol{\Theta}) \Big\rangle_{p(y \mid \mathbf{g})} = p(H_j \mid \mathbf{g}; \boldsymbol{\Theta}) - \Pr(H_j \mid \mathbf{g}).$ (10)

The derivation of this gradient computation can be found in Appendix A. Because $\mathbf{z}(\mathbf{g})$ can take any functional form when the CNN possesses sufficient representation capacity, determining $\hat{\boldsymbol{\Theta}}$ involves finding the $\mathbf{z}(\mathbf{g})$ that minimizes the cross-entropy defined in Eqn. (9). According to Eqn. (10), for any $\mathbf{g}$, the optimal solution that has zero gradient value satisfies $p(H_j \mid \mathbf{g}; \hat{\boldsymbol{\Theta}}) = \Pr(H_j \mid \mathbf{g})$ for all $j$.
Given a training dataset that contains $N$ independent training samples $\{(\mathbf{g}_i, y_i)\}_{i=1}^{N}$, $\boldsymbol{\Theta}$ can be estimated by minimizing the empirical error as:

$\hat{\boldsymbol{\Theta}} = \arg\min_{\boldsymbol{\Theta}} \; -\frac{1}{N} \sum_{i=1}^{N} \log p(H_{y_i} \mid \mathbf{g}_i; \boldsymbol{\Theta}),$ (11)

where the empirical error is an empirical estimate of the generalization error defined in Eqn. (9). The posterior probability $\Pr(H_j \mid \mathbf{g})$ can subsequently be approximated by the CNN-represented posterior probability $p(H_j \mid \mathbf{g}; \hat{\boldsymbol{\Theta}})$, and the decision strategy described in Eqn. (6) can be implemented. It should be noted that minimizing the empirical error on a small training dataset can result in overfitting and large generalization errors [12]. Minibatch stochastic gradient descent algorithms can be employed to reduce the rate of overfitting [12]. These minibatches can be generated on-the-fly when online learning is implemented [12].

IV. Numerical studies
Computer-simulation studies were conducted to investigate the supervised learning-based method for approximating the IO for signal detection-localization tasks. The considered signal detection-localization tasks included two background-known-exactly (BKE) tasks and two background-known-statistically (BKS) tasks. A lumpy background (LB) model [19] and a clustered lumpy background (CLB) model [4] were employed in the BKS tasks.
The imaging system considered was an idealized parallel-hole collimator system that was described by a linear C-D mapping with Gaussian point response functions (PRFs) given by [22, 20]:

$h_m(\mathbf{r}) = h \exp\!\left(-\frac{\|\mathbf{r} - \mathbf{r}_m\|^2}{2 w^2}\right),$ (12)

where $h$ and $w$ are the height and width of the PRFs, respectively, and $\mathbf{r}_m$ denotes the location of the $m^{\text{th}}$ PRF.
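A Gaussian PRF of this kind is straightforward to evaluate; the following sketch assumes the PRF peaks at the height $h$ at its center (the normalization convention and all parameter values here are illustrative assumptions, not the paper's settings).

```python
import numpy as np

def gaussian_prf(r, r_m, h, w):
    """Gaussian PRF of Eqn. (12) evaluated at spatial coordinate r.

    r, r_m: 2D points; h, w: the PRF height and width.
    The peak value h at r = r_m is an assumed convention.
    """
    d2 = np.sum((np.asarray(r, float) - np.asarray(r_m, float)) ** 2)
    return h * np.exp(-d2 / (2.0 * w ** 2))

# The response peaks at the PRF center and decays with distance:
peak = gaussian_prf([0.0, 0.0], [0.0, 0.0], h=40.0, w=0.5)
off_center = gaussian_prf([1.0, 0.0], [0.0, 0.0], h=40.0, w=0.5)
```

A narrower width `w` yields a sharper system ("System 1" vs. "System 2" differ only in such PRF parameters).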
The signal to be detected and localized was modeled by a 2D Gaussian function with 9 possible locations:

$f_s(\mathbf{r}) = A \exp\!\left[-\tfrac{1}{2} (\mathbf{r} - \mathbf{r}_c)^T \mathbf{R}^T \mathbf{W}^{-1} \mathbf{R}\, (\mathbf{r} - \mathbf{r}_c)\right],$ (13)

where $A$ is the signal amplitude, $\mathbf{R}$ is a rotation matrix corresponding to the rotation angle $\varphi$, $\mathbf{W}$ is a matrix that determines the width of the signal along each axis, and $\mathbf{r}_c$ is the center location of the signal. With consideration of the specified imaging system, the $m^{\text{th}}$ element of the signal image can subsequently be computed as:

$s_{j,m} = \int_{\Omega} d\mathbf{r}\, h_m(\mathbf{r}) f_{s_j}(\mathbf{r}),$ (14)

where $f_{s_j}(\mathbf{r})$ denotes the signal function centered at the $j^{\text{th}}$ location; because both the PRF and the signal are Gaussian, this integral possesses a closed form.
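The signal-image integral of Eqn. (14) can also be approximated numerically by a Riemann sum, which is a useful check on any closed-form expression. In this sketch the signal is taken as isotropic (the rotation matrix is omitted for brevity) and every parameter value is illustrative rather than the paper's setting.

```python
import numpy as np

def signal_image_element(r_m, r_c, A=0.2, w=0.5, sig_w=0.25, h=40.0,
                         grid=64, half=3.0):
    """Riemann-sum approximation of Eqn. (14): s_m = integral of h_m(r) f_s(r) dr.

    r_m: PRF center; r_c: signal center.  Isotropic toy signal; all
    parameter values are hypothetical.
    """
    xs = np.linspace(-half, half, grid)
    X, Y = np.meshgrid(xs, xs)
    dA = (xs[1] - xs[0]) ** 2
    prf = h * np.exp(-((X - r_m[0])**2 + (Y - r_m[1])**2) / (2 * w**2))
    sig = A * np.exp(-((X - r_c[0])**2 + (Y - r_c[1])**2) / (2 * sig_w**2))
    return np.sum(prf * sig) * dA

# The element is largest when the PRF is centred on the signal location:
on = signal_image_element(r_m=(0.0, 0.0), r_c=(0.0, 0.0))
off = signal_image_element(r_m=(2.0, 2.0), r_c=(0.0, 0.0))
```

Evaluating this for every measurement index m and every location j produces the templates $\mathbf{s}_j$ used in Eqn. (3).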
For each task described below, the LROC curves were fit by use of LROC software [16] that implements Swensson’s fitting algorithm [34], and the IO performance was quantified by the ALROC.
IV-A BKE signal detection-localization tasks
For the BKE tasks, the size of background image was pixels and . The signal to be detected and localized had the signal amplitude , width , and for all 9 possible locations . Two imaging systems described by different PRFs were considered. The first imaging system, “System 1”, was described by and . The second imaging system, “System 2”, was described by and . The signals at different locations imaged through the two imaging systems are illustrated in Fig. 1.
To investigate the ability of the CNN to approximate a nonlinear IO test statistic, a Laplacian probability density function, which has been utilized to describe histograms of fine details in digital mammographic images [15, 6], was employed to model the likelihood function. Specifically, the measured image data were simulated by adding independent and identically distributed (i.i.d.) Laplacian noise [6]: $n_m \sim \mathcal{L}(0, c)$, where $\mathcal{L}(0, c)$ denotes a Laplacian distribution with a mean of 0 and an exponential decay of $c$, which was set to the value corresponding to a standard deviation of 20. In this case, the likelihood ratio can be analytically computed as [6]:

$\Lambda_j(\mathbf{g}) = \exp\!\left[c \sum_{m=1}^{M} \Big(|g_m - b_m| - |g_m - b_m - s_{j,m}|\Big)\right].$ (15)
The IO decision strategy described by Eqn. (5) was subsequently implemented by use of the likelihood ratios given by Eqn. (15), and the resulting LROC curves and ALROC values were compared to those produced by the proposed supervised learning method described in Sec. III.
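Assuming the standard absolute-difference form of the Laplacian BKE likelihood ratio given in [6] (the exp(-c|n_m|)-proportional noise terms cancel between hypotheses), its logarithm can be sketched as follows; the background, signal template, and image here are toy values, while the decay `c` matches a standard deviation of 20 for a Laplace density.

```python
import numpy as np

def laplacian_log_lr(g, b, s_j, c):
    """Log of the BKE likelihood ratio for i.i.d. Laplacian noise.

    For noise density proportional to exp(-c |n_m|), the common
    exp(-c |g_m - b_m|) factors cancel, leaving a sum of
    absolute-difference terms as in Eqn. (15).
    """
    d = g - b
    return c * np.sum(np.abs(d) - np.abs(d - s_j))

b = np.zeros(16)                          # toy BKE background
s = np.zeros(16); s[4] = 5.0              # toy signal template
c = np.sqrt(2.0) / 20.0                   # decay giving standard deviation 20
log_lr_present = laplacian_log_lr(b + s, b, s, c)   # noiseless signal-present data
log_lr_absent = laplacian_log_lr(b, b, s, c)        # noiseless signal-absent data
```

As expected, the log-likelihood ratio is positive for the signal-present data and negative for the signal-absent data, and it is a nonlinear function of `g` because of the absolute values.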
The two imaging systems were ranked by use of the IO performance for the considered signal detection-localization tasks via LROC analysis. In addition, to demonstrate that the imaging system design optimized by use of the IO for signal detection-localization tasks may differ from that optimized by use of the IO for simplified binary signal detection tasks, the two imaging systems were also assessed by use of the IO performance for the simplified binary signal detection tasks via ROC analysis. The ROC curves were fit by use of the Metz-ROC software [24] using the “proper” binormal model [25].
IV-B BKS signal detection-localization task with a lumpy background model
The first BKS task utilized a lumpy object model to emulate background variability [19]. The considered lumpy background models are described as [3, 19]:

$f_b(\mathbf{r}) = \sum_{n=1}^{N} l(\mathbf{r} - \mathbf{r}_n),$ (16)

where $N \sim \mathcal{P}(\bar{N})$ denotes the number of the lumps, $\mathcal{P}(\bar{N})$ denotes a Poisson distribution with a mean of $\bar{N}$, and $l(\mathbf{r} - \mathbf{r}_n)$ denotes the lump function that was modeled by a 2D Gaussian function:

$l(\mathbf{r} - \mathbf{r}_n) = a \exp\!\left(-\frac{\|\mathbf{r} - \mathbf{r}_n\|^2}{2 s_l^2}\right).$ (17)

Here, $a$ denotes the lump amplitude, $s_l$ denotes the lump width, and $\mathbf{r}_n$ denotes the center location of the $n^{\text{th}}$ lump, which was sampled from a uniform distribution over the image field of view. The imaging system PRF was again specified by its height $h$ and width $w$, and the $m^{\text{th}}$ element of the background image is given by:

$b_m = \sum_{n=1}^{N} \int_{\Omega} d\mathbf{r}\, h_m(\mathbf{r})\, l(\mathbf{r} - \mathbf{r}_n).$ (18)
The measurement noise considered in this case was i.i.d. Gaussian noise with a mean of 0 and a standard deviation of 20. Three realizations of the signalabsent images are shown in the top row in Fig. 2. The signals to be detected and localized were specified by Eqn. (13) with , , and for all 9 possible locations . The signal at different locations is illustrated in Fig. 3 (a).
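Drawing realizations of the lumpy background of Eqns. (16) and (17) amounts to sampling a Poisson lump count, scattering lump centers uniformly, and summing Gaussian bumps. The sketch below does this directly on a pixel grid (i.e., before the blurring of Eqn. (18)); the image size, lump count, amplitude, and width are illustrative values, not the paper's settings.

```python
import numpy as np

def lumpy_background(nx=64, mean_lumps=5, amp=1.0, width=7.0, rng=None):
    """Draw one realization of the lumpy background of Eqns. (16)-(17).

    N ~ Poisson(mean_lumps); each lump is a symmetric 2D Gaussian whose
    center is uniform over the field of view.  All values are illustrative.
    """
    rng = rng or np.random.default_rng()
    X, Y = np.meshgrid(np.arange(nx), np.arange(nx))
    b = np.zeros((nx, nx))
    for _ in range(rng.poisson(mean_lumps)):
        cx, cy = rng.uniform(0, nx, 2)
        b += amp * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * width ** 2))
    return b

bg = lumpy_background(rng=np.random.default_rng(3))
```

Because both the lump count and the lump positions are random, every call yields a different background, which is exactly the source of the BKS variability.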
Because the likelihood ratios in this case cannot be analytically computed, the MCMC method developed by Kupinski et al. [22] was implemented as a reference method. The MCMC method computed the likelihood ratio as:

$\hat{\Lambda}_j(\mathbf{g}) = \frac{1}{N_s} \sum_{i=1}^{N_s} \Lambda_{\mathrm{BKE},j}\big(\mathbf{g} \mid \mathbf{b}^{(i)}\big),$ (19)

where $\Lambda_{\mathrm{BKE},j}(\mathbf{g} \mid \mathbf{b})$ is the BKE likelihood ratio conditional on the background image $\mathbf{b}$, and $N_s$ is the number of samples used in the Monte Carlo integration. Because Gaussian noise was considered in this case, $\Lambda_{\mathrm{BKE},j}(\mathbf{g} \mid \mathbf{b})$ can be analytically computed as:

$\Lambda_{\mathrm{BKE},j}(\mathbf{g} \mid \mathbf{b}) = \exp\!\left[\mathbf{s}_j^T \mathbf{K}_n^{-1} (\mathbf{g} - \mathbf{b}) - \tfrac{1}{2} \mathbf{s}_j^T \mathbf{K}_n^{-1} \mathbf{s}_j\right],$ (20)

where $\mathbf{K}_n$ is the covariance matrix of the measurement noise $\mathbf{n}$. The background image $\mathbf{b}$ was sampled from the posterior probability density function by constructing a Markov chain according to the method described in [22]. Each Markov chain was simulated by running 200,000 iterations.
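The Monte Carlo average of Eqns. (19) and (20) can be sketched compactly. In this minimal example the "chain" is just a short list of background images standing in for posterior samples, the noise covariance is isotropic, and all sizes are toy values; constructing an actual Markov chain is the hard part addressed in [22] and is not reproduced here.

```python
import numpy as np

def bke_lr(g, b, s_j, noise_std):
    """BKE likelihood ratio of Eqn. (20) for i.i.d. Gaussian noise (K = sigma^2 I)."""
    k_inv = 1.0 / noise_std ** 2
    return np.exp(k_inv * (s_j @ (g - b)) - 0.5 * k_inv * (s_j @ s_j))

def mc_lr(g, b_samples, s_j, noise_std):
    """Eqn. (19): average the BKE likelihood ratio over background samples.

    In the paper the samples form a Markov chain drawn from the posterior;
    here any iterable of background images stands in for that chain.
    """
    return np.mean([bke_lr(g, b, s_j, noise_std) for b in b_samples])

b = np.zeros(8)                               # toy background
s = np.zeros(8); s[2] = 3.0                   # toy signal template
lr_present = mc_lr(b + s, [b, b], s, noise_std=1.0)   # two "chain" samples
lr_absent = mc_lr(b, [b, b], s, noise_std=1.0)
```

The estimate exceeds 1 for the noiseless signal-present image and falls below 1 for the signal-absent image, as the likelihood ratio should.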
IV-C BKS signal detection-localization task with a clustered lumpy background model
The second BKS task utilized a clustered lumpy background (CLB) model to emulate background variability. This model was developed to synthesize mammographic image textures [4]. In this case, the $m^{\text{th}}$ element of the background image is computed as [4]:

$b_m = \sum_{k=1}^{K} \sum_{n=1}^{N_k} b\big(\mathbf{R}_{\theta_{kn}}(\mathbf{r}_m - \mathbf{r}_k - \mathbf{r}_{kn})\big).$ (21)

Here, $K$ denotes the number of clusters, which was sampled from a Poisson distribution with a mean of $\bar{K}$: $K \sim \mathcal{P}(\bar{K})$; $N_k$ denotes the number of blobs in the $k^{\text{th}}$ cluster, which was sampled from a Poisson distribution with a mean of $\bar{N}$: $N_k \sim \mathcal{P}(\bar{N})$; $\mathbf{r}_k$ denotes the center location of the $k^{\text{th}}$ cluster, which was sampled uniformly over the image field of view; and $\mathbf{r}_{kn}$ denotes the center location of the $n^{\text{th}}$ blob in the $k^{\text{th}}$ cluster, which was sampled from a Gaussian distribution with a center of $\mathbf{r}_k$ and a standard deviation of $\sigma$. The blob function was specified as:

$b(\mathbf{r}) = \exp\!\left(-\alpha \frac{\|\mathbf{r}\|^{\beta}}{L(\mathbf{r})}\right),$ (22)

where $L(\mathbf{r})$ is computed as the “radius” of the ellipse with half-axes $L_x$ and $L_y$ along the direction of $\mathbf{r}$, and $\mathbf{R}_{\theta_{kn}}$ is the rotation matrix corresponding to the angle $\theta_{kn}$ that was sampled from a uniform distribution between 0 and $2\pi$. The parameters of the CLB model employed in this study are shown in Table I.

Table I. CLB model parameter values: 50, 20, 5, 2, 2.1, 0.5, 12, 40.
The measurement noise was modeled by a mixed Poisson-Gaussian noise model [3] in which the standard deviation of the Gaussian noise was set to 20. Three examples of the signal-absent images are shown in the bottom row of Fig. 2.
The signal had an amplitude of 80, the width along each axis took a value from {5, 8, 10}, and the rotation angle took a value from a set of predetermined angles. The signal at different locations is illustrated in Fig. 3 (b).
IV-D CNN training details
The conventional train-validation-test scheme was employed to evaluate the proposed supervised learning approaches. The CNNs were trained on a training dataset, the CNN architectures and weight parameters were subsequently specified by assessing performance on a validation dataset and, finally, the performances of the CNNs on the signal detection-localization tasks were evaluated on a testing dataset. The training datasets comprised 100,000 lumpy background images and 400,000 CLB background images for the considered BKS detection-localization tasks. Additionally, a “semi-online learning” method in which the measurement noise was generated on-the-fly was employed to mitigate the overfitting problem [42]. Both the validation dataset and the testing dataset comprised 200 images for each class.
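The "semi-online" scheme above reuses a finite pool of precomputed backgrounds while drawing fresh measurement noise for every minibatch. A minimal sketch of such a minibatch assembler follows; the pool sizes, image length, noise model, and labeling convention are illustrative assumptions.

```python
import numpy as np

def make_minibatch(backgrounds, signals, batch_per_class, noise_std, rng):
    """Assemble one minibatch with measurement noise generated on-the-fly.

    backgrounds: (N, M) pool of precomputed background images;
    signals: (J, M) signal images; labels 0..J follow the hypotheses of
    Eqn. (3).  Backgrounds are reused, but noise is never reused.
    """
    J, M = signals.shape
    gs, ys = [], []
    for label in range(J + 1):
        idx = rng.integers(0, len(backgrounds), batch_per_class)
        s = np.zeros(M) if label == 0 else signals[label - 1]
        noise = rng.normal(0.0, noise_std, (batch_per_class, M))
        gs.append(backgrounds[idx] + s + noise)
        ys.append(np.full(batch_per_class, label))
    return np.concatenate(gs), np.concatenate(ys)

rng = np.random.default_rng(4)
bgs = rng.uniform(0.0, 1.0, (100, 32))                 # toy background pool
sigs = np.zeros((9, 32)); sigs[np.arange(9), np.arange(9)] = 2.0
G, y = make_minibatch(bgs, sigs, batch_per_class=8, noise_std=0.5, rng=rng)
```

Each call yields a balanced minibatch over the 10 classes whose noise realizations are unique, which is what mitigates overfitting to a fixed noisy dataset.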
Specification of a CNN architecture that possesses the ability to approximate the posterior probability is required. A family of CNNs comprising different numbers of convolutional (CONV) layers was explored to specify the CNN architecture. Specifically, a CNN having an initial architecture was first trained by minimizing the average of the cross-entropy over the training dataset, defined in Eqn. (11). CNNs having more CONV layers were subsequently trained until the average of the cross-entropy over the validation dataset did not decrease significantly. A cross-entropy decrement of at least 1% of that produced by the previous CNN architecture was considered significant. The CNN that produced the minimum cross-entropy evaluated on the validation dataset was selected. All CNN architectures in the considered architecture family comprised CONV layers having 32 filters, a max-pooling layer [30], and a fully connected layer. A LeakyReLU activation function [32] was applied to the feature maps produced by each CONV layer, and a softmax function was applied to the output of the fully connected layer. An instance of the considered CNN architecture is illustrated in Fig. 4. This architecture family was determined heuristically and may not be optimal for other tasks. At each iteration of the training, the CNN weight parameters were updated by minimizing the empirical error function on minibatches by use of the Adam algorithm [18], which is a stochastic gradient-based method.

V. Results
V-A BKE signal detection-localization task
Convolutional neural networks that comprised one, three, and five CONV layers were trained for 500,000 minibatches, with each minibatch comprising 80 images for each class. For both “System 1” and “System 2”, the validation cross-entropy did not decrease significantly after 5 CONV layers were employed in the CNNs. Accordingly, we stopped training CNNs with more CONV layers, and the CNN corresponding to the smallest validation cross-entropy was selected, which was the CNN having five CONV layers.
For the joint detection-localization task, with both imaging systems, the LROC curves produced by the analytical computation (solid curves) are compared to those produced by the CNN (dashed curves) in Fig. 5 (a). In addition, for the simplified binary signal detection tasks, the ROC curves produced by the analytical computation (solid curves) are compared to those produced by the CNN (dashed curves) in Fig. 5 (b). The curves corresponding to the analytically computed IO and the CNN approximation of the IO (CNN-IO) are in close agreement in both cases. As shown in Fig. 5, the rankings of the two imaging systems differ between the joint detection-localization task and the simplified binary signal detection task: when the signal detection-localization task is considered, “System 1” outperforms “System 2”, while when the binary signal detection task is considered, “System 2” outperforms “System 1”.
V-B BKS signal detection-localization task with a lumpy background model
Convolutional neural networks comprising 1, 3, 5, 7, 9, and 11 CONV layers were trained for 500,000 minibatches, with each minibatch comprising 80 images for each class. The validation cross-entropy did not decrease significantly after 11 CONV layers were employed in the CNN, and therefore the CNN having 11 CONV layers was selected for approximating the IO. The performance of the CNN for the signal detection-localization task was characterized by the LROC curve that was evaluated on the testing dataset. Note that the ALROC value produced by the CNN-IO was larger than that produced by the scanning HO.
The MCMC simulation provided further validation of the CNN-IO. The LROC curve produced by the MCMC method (blue curve) and that produced by the CNN-IO (red-dashed curve) are shown in Fig. 6. The curves are in close agreement, as were the ALROC values corresponding to the MCMC method and the CNN-IO.
V-C BKS signal detection-localization task with a CLB model
Convolutional neural networks that comprised 1, 3, 5, and 7 CONV layers were trained for 500,000 minibatches, with each minibatch comprising 20 images for each class. The validation cross-entropy did not decrease significantly after 7 CONV layers were employed in the CNN, and therefore the CNN having 7 CONV layers was selected for approximating the IO. The performance of the selected CNN was quantified by computing the LROC curve and ALROC value on the testing dataset. The CNN-IO was compared to the scanning HO. The ALROC value produced by the CNN-IO was larger than that produced by the scanning HO, as expected. The LROC curves corresponding to the CNN-IO and the HO are displayed in Fig. 7.
Because the computation of the IO test statistic has not been addressed by MCMC methods for CLB models, validation corresponding to MCMC methods was not provided in this case.
VI. Summary
Signal detection-localization tasks are of interest when optimizing medical imaging systems, and scanning numerical observers have been proposed to address them. However, there remains a scarcity of methods that can be readily implemented for approximating the IO for detection-localization tasks. In this work, a deep-learning-based method was investigated to address this need. Specifically, the proposed method provides a generalized framework for approximating the IO test statistic for multi-class classification tasks. Compared to methods that employ MCMC techniques, supervised learning methods may be easier to implement. To properly run MCMC methods, numerous practical issues, such as the design of proposal densities from which the Markov chain can be efficiently generated, need to be addressed. Because of this, current applications of MCMC methods have been limited to relatively simple object models such as a lumpy object model and a binary texture model. As such, the proposed supervised learning methods may possess a larger domain of applicability for approximating the IO than the MCMC methods. To demonstrate this, the proposed supervised learning method was applied to approximate the IO for a clustered lumpy object model, for which the IO approximation has not been achieved by current MCMC methods.
To properly implement the proposed supervised learning method, a CNN architecture that possesses sufficient representation capacity needs to be specified. To achieve this, the CNN architecture was specified by training a family of CNN architectures that comprised different numbers of CONV layers [42]. A recently developed method that jointly optimizes model architectures and weight parameters [7] may represent an alternative way to specify CNN architectures for the IO approximation, which presents a topic for future investigation.
There remain several other topics for future investigation. The proposed CNN-based method may require a large amount of training data to accurately approximate the IO. Such data may be available when optimizing imaging systems and data-acquisition designs via computer-simulation studies. For use in situations where such data are not readily produced, it will be important to investigate methods to train deep neural networks for approximating the IO on limited training data. To achieve this, one may investigate methods that employ domain adaptation [8, 13, 29]. One may also establish a stochastic object model (SOM) from noisy and/or indirect experimental measurements by training an AmbientGAN [5, 40, 41]. Given a well-established SOM, one can produce a large number of training samples to train a deep CNN. Finally, it will be important to investigate supervised learning methods for approximating IOs for other, more general tasks, such as joint signal detection and estimation tasks associated with the estimation ROC (EROC) curve.

Appendix A: Gradient of cross-entropy
The ensemble-averaged cross-entropy, conditioned on a measured image $\mathbf{g}$, can be written as:

$\mathrm{CE}(\mathbf{g}) = -\sum_{k=0}^{J} \Pr(H_k \mid \mathbf{g}) \log p(H_k \mid \mathbf{g}; \boldsymbol{\Theta}).$ (23)

The derivative of $\mathrm{CE}(\mathbf{g})$ with respect to $z_j(\mathbf{g})$ can subsequently be computed as:

$\frac{\partial\, \mathrm{CE}(\mathbf{g})}{\partial z_j(\mathbf{g})} = -\sum_{k=0}^{J} \Pr(H_k \mid \mathbf{g}) \big(\delta_{kj} - p(H_j \mid \mathbf{g}; \boldsymbol{\Theta})\big) = p(H_j \mid \mathbf{g}; \boldsymbol{\Theta}) - \Pr(H_j \mid \mathbf{g}).$ (24)

The last step in Eqn. (24) is derived because $\sum_{k=0}^{J} \Pr(H_k \mid \mathbf{g}) = 1$ and, for the softmax output of Eqn. (8), $\partial \log p(H_k \mid \mathbf{g}; \boldsymbol{\Theta}) / \partial z_j(\mathbf{g}) = \delta_{kj} - p(H_j \mid \mathbf{g}; \boldsymbol{\Theta})$.
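The gradient identity of Eqn. (24) can be verified numerically with a central finite-difference check; the logit and posterior values below are arbitrary illustrative choices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, p_true):
    """CE between the true posteriors p_true and softmax(z), as in Eqn. (23)."""
    return -np.sum(p_true * np.log(softmax(z)))

# Finite-difference check of Eqn. (24): dCE/dz_j = softmax(z)_j - p_true_j.
z = np.array([0.3, -1.2, 0.8, 0.1])
p_true = np.array([0.1, 0.2, 0.4, 0.3])      # arbitrary valid posteriors
eps = 1e-6
grad_fd = np.array([
    (cross_entropy(z + eps * np.eye(4)[j], p_true)
     - cross_entropy(z - eps * np.eye(4)[j], p_true)) / (2 * eps)
    for j in range(4)
])
grad_an = softmax(z) - p_true                # analytical gradient of Eqn. (24)
```

The two gradients agree to finite-difference precision, and the gradient vanishes exactly when the softmax output equals the true posterior, which is the optimality condition invoked in Section III.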
References
 [1] (2008) An Ideal Observer for a model of X-ray imaging in breast parenchymal tissue. In International Workshop on Digital Mammography, pp. 393–400.
 [2] (2006) Objective assessment of image quality. IV. Application to adaptive optics. JOSA A 23 (12), pp. 3080–3105.
 [3] (2013) Foundations of Image Science. John Wiley & Sons.
 [4] (1999) Statistical texture synthesis of mammographic images with clustered lumpy backgrounds. Optics Express 4 (1), pp. 33–43.
 [5] (2018) AmbientGAN: Generative models from lossy measurements. In International Conference on Learning Representations (ICLR).
 [6] (2000) Approximations to Ideal-Observer performance on signal-detection tasks. Applied Optics 39 (11), pp. 1783–1793.
 [7] (2016) AdaNet: Adaptive structural learning of artificial neural networks. arXiv preprint arXiv:1607.01097.
 [8] (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495.
 [9] (2005) A comparison of human and model observers in multislice LROC studies. IEEE Transactions on Medical Imaging 24 (2), pp. 160–169.
 [10] (2016) Visual-search observers for assessing tomographic X-ray image quality. Medical Physics 43 (3), pp. 1563–1575.
 [11] (1999) A comparison of human observer LROC and numerical observer ROC for tumor detection in SPECT images. IEEE Transactions on Nuclear Science 46 (4), pp. 1032–1037.
 [12] (2016) Deep Learning. Vol. 1, MIT Press, Cambridge.
 [13] (2020) Learning numerical observers using unsupervised domain adaptation. In Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment, Vol. 11316, pp. 113160W.
 [14] (2008) Toward realistic and practical Ideal Observer (IO) estimation for the optimization of medical imaging systems. IEEE Transactions on Medical Imaging 27 (10), pp. 1535–1543.
 [15] (1999) Multiresolution probability analysis of random fields. JOSA A 16 (1), pp. 6–16.
 [16] (2010) LROC software (website).
 [17] (2005) Decision strategies that maximize the area under the LROC curve. IEEE Transactions on Medical Imaging 24 (12), pp. 1626–1636.
 [18] (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
 [19] (2005) Small-Animal SPECT Imaging. Vol. 233, Springer.
 [20] (2003) Experimental determination of object statistics from noisy images. JOSA A 20 (3), pp. 421–429.
 [21] (2001) Ideal Observer approximation using Bayesian classification neural networks. IEEE Transactions on Medical Imaging 20 (9), pp. 886–899.
 [22] (2003) Ideal-Observer computation in medical imaging with use of Markov-Chain Monte Carlo techniques. JOSA A 20 (3), pp. 430–438.
 [23] (1995) Object classification for human and Ideal Observers. Vision Research 35 (4), pp. 549–568.
 [24] (1998) ROCKIT user’s guide. Department of Radiology, University of Chicago, Chicago.
 [25] (1999) “Proper” binormal ROC curves: theory and maximum-likelihood estimation. Journal of Mathematical Psychology 43 (1), pp. 1–33.
 [26] (2007) Channelized-Ideal Observer using Laguerre-Gauss channels in detection tasks involving non-Gaussian distributed lumpy backgrounds and a Gaussian signal. JOSA A 24 (12), pp. B136–B150.
 [27] (2009) Efficient estimation of Ideal-Observer performance in classification tasks involving high-dimensional complex backgrounds. JOSA A 26 (11), pp. B59–B71.
 [28] (2007) Efficiency of the human observer for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds. JOSA A 24 (4), pp. 911–921.
 [29] (2016) A survey of machine learning for big data processing. EURASIP Journal on Advances in Signal Processing 2016 (1), pp. 67.
 [30] (2010) Evaluation of pooling operations in convolutional architectures for object recognition. In Artificial Neural Networks–ICANN 2010, pp. 92–101.
 [31] (2006) Using Fisher information to approximate Ideal-Observer performance on detection tasks for lumpy-background images. JOSA A 23 (10), pp. 2406–2414.
 [32] (2014) Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
 [33] (1975) Visual detection and localization of radiographic images. Radiology 116 (3), pp. 533–538.
 [34] (1996) Unified measurement of observer performance in detecting and localizing target objects on images. Medical Physics 23 (10), pp. 1709–1725.
 [35] (2016) Optimal joint detection and estimation that maximizes ROC-type curves. IEEE Transactions on Medical Imaging 35 (9), pp. 2164–2173.
 [36] (2009) Collimator optimization in SPECT based on a joint detection and localization task. Physics in Medicine & Biology 54 (14), pp. 4423.
 [37] (2018) Learning the Ideal Observer for SKE detection tasks by use of convolutional neural networks. In Medical Imaging 2018: Image Perception, Observer Performance, and Technology Assessment, Vol. 10577, pp. 1057719.
 [38] (2019) Learning the Ideal Observer for joint detection and localization tasks by use of convolutional neural networks. In Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, Vol. 10952, pp. 1095209.
 [39] (2020) Markov-Chain Monte Carlo approximation of the Ideal Observer using generative adversarial networks. In Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment, Vol. 11316, pp. 113160D.
 [40] (2019) Learning stochastic object models from noisy imaging measurements using AmbientGANs. In Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, Vol. 10952, pp. 109520M.
 [41] (2020) Progressively-growing AmbientGANs for learning stochastic object models from imaging measurements. In Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment, Vol. 11316, pp. 113160Q.
 [42] (2019) Approximating the Ideal Observer and Hotelling Observer for binary signal detection tasks by use of supervised learning methods. IEEE Transactions on Medical Imaging 38 (10), pp. 2456–2468.
 [43] (2019) Learning the Hotelling Observer for SKE detection tasks by use of supervised learning methods. In Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, Vol. 10952, pp. 1095208.