# Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution

The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.

03/18/2022


## 1 Introduction

Spectroscopy is at the heart of all sciences concerned with matter and energy. An electromagnetic spectrum indicates the electronic states and the kinetics of atoms. The quantum nature of spectra allows them to be approximately reduced to the sum of unimodal peaks (such as Lorentzian peaks, Gaussian peaks, and their convolutions), whose centers are the energy levels from the semiclassical viewpoint [1]. The peak intensity is proportional to both the population density of the atoms or molecules and their transition probabilities. The Lorentzian peak width indicates the lifetime of the eigenstate due to the time-energy uncertainty relation. The Gaussian peak width indicates the Doppler effect caused by the kinetics of atoms and depends on temperature. These pieces of information about the electronic states or kinetics of atoms are obtained by identifying peaks from spectra.

It is generally a difficult problem to distinguish each peak from noisy spectra with overlapping peaks. The simplest solution is least-squares fitting by a gradient method [2]. This type of method has a drawback in that fitting parameters are often trapped at a local minimum or a saddle whenever the global minimum lies elsewhere in the parameter space. Moreover, the number of peaks is not always known in practice. Bayesian inference using a Markov chain Monte Carlo (MCMC) method provides a superior solution [3, 4, 5, 6, 7, 8, 9, 10, 11]. Although the Bayesian framework enables us to estimate the number of peaks, MCMC methods generally have the limitation of local minima and saddles. Nagata et al. reported [6] that the exchange Monte Carlo method [12] (or parallel tempering [13]) can escape local minima or saddles efficiently and provide a more accurate estimation than the reversible jump MCMC method [14] and its extension [15].

We constructed a Bayesian framework for estimating both the noise variance and the number of peaks from spectra with white Gaussian noise by expanding the previous framework by Nagata et al. [6]. The noise variance and the number of peaks are respectively estimated by hyperparameter optimization and model selection. These estimations are carried out by maximizing a function called the marginal likelihood [16, 17, 18], which is a conditional probability of observed data given the noise variance and the number of peaks in our framework. We provide a straightforward and efficient scheme that calculates this bivariate function by using the exchange Monte Carlo method and the multiple histogram method [19, 20]. We also demonstrated our framework through simulation. We show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.

## 2 Framework

### 2.1 Models

An observed spectrum is represented by the sum of single peaks and additive noise as

$$
\begin{aligned}
y &= f(x;w) + \varepsilon, &&(1)\\
f(x;w) &:= \sum_{k=1}^{K} a_k\,\phi_k(x;\mu_k,\rho_k), &&(2)\\
\phi_k(x;\mu_k,\rho_k) &:= \exp\left[-\frac{\rho_k}{2}(x-\mu_k)^2\right], &&(3)
\end{aligned}
$$

where $x$ denotes energy, frequency, or wave number depending on the case. The parameter set is $w := \{a_k, \mu_k, \rho_k\}_{k=1}^{K}$, where $a_k$, $\mu_k$, and $\rho_k$ for each $k$ are respectively the intensity, energy level, and peak width. The Gaussian function $\phi_k$ for each $k$ should be replaced with other parametric functions, such as the Lorentzian or Voigt function, depending on the case [1, 21]. If the peaks are symmetric functions for all $k$ (i.e., their values depend only on the distance from each center), the function $f(x;w)$ is called a radial basis function network in neural networks and related fields [6, 22]. This is the junction of spectral data analysis and singular learning theory [23]. If the additive noise $\varepsilon$ is assumed to be a zero-mean Gaussian with variance $b^{-1}$, the statistical model of the observed spectrum is represented by a conditional probability as

$$
p(y\mid x,w,b) := \sqrt{\frac{b}{2\pi}}\exp\left\{-\frac{b}{2}\left[y-f(x;w)\right]^2\right\}, \qquad (4)
$$

where $y$ is taken as a random variable. This Gaussian distribution is valid if the thermal noise is dominant. The parameter set $w$ is also regarded as a random variable from the Bayesian viewpoint. The probability density function of $w$, called the prior density, is heuristically modeled as

$$
\begin{aligned}
\varphi(w\mid K) &:= \prod_{k=1}^{K}\varphi(a_k)\,\varphi(\mu_k)\,\varphi(\rho_k), &&(5)\\
\varphi(a_k) &:= \kappa\exp(-\kappa a_k), &&(6)\\
\varphi(\mu_k) &:= \sqrt{\frac{\alpha}{2\pi}}\exp\left[-\frac{\alpha}{2}(\mu_k-\mu_0)^2\right], &&(7)\\
\varphi(\rho_k) &:= \nu\exp(-\nu\rho_k), &&(8)
\end{aligned}
$$

where $\kappa$, $\alpha$, $\mu_0$, and $\nu$ are hyperparameters. This prior density modeling is a special case of that by Nagata et al. [6]. Equation (6) promotes the sparsity of $a_k$. Equation (7) is regarded as an almost flat prior density if $\alpha$ is sufficiently small. These prior density models can be replaced with any other model without loss of generality in our framework.
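As a concrete reference, the generative model of Eqs. (1)-(4) can be sketched in a few lines of Python. The peak parameters and grid below are illustrative placeholders, not the values of Tables 1 and 2.

```python
import numpy as np

def f(x, a, mu, rho):
    """Multipeak model, Eqs. (2)-(3): sum of K Gaussian peaks a_k exp[-rho_k/2 (x - mu_k)^2]."""
    x = np.asarray(x, float)[:, None]                      # shape (n, 1) for broadcasting
    return np.sum(a * np.exp(-0.5 * rho * (x - mu) ** 2), axis=1)

def observe(x, a, mu, rho, b, rng):
    """Observed spectrum, Eqs. (1) and (4): y = f(x; w) + eps, eps ~ N(0, b^{-1})."""
    return f(x, a, mu, rho) + rng.normal(0.0, 1.0 / np.sqrt(b), size=len(x))

# Illustrative two-peak spectrum (placeholder values, not the paper's)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
a, mu, rho = np.array([1.0, 0.8]), np.array([0.3, 0.7]), np.array([200.0, 400.0])
y = observe(x, a, mu, rho, b=100.0, rng=rng)
```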

### 2.2 Bayesian formalization

The conditional probability density function of $w$ given $n$ samples $\{(X_i, Y_i)\}_{i=1}^{n}$, set as $D$ for the sake of convenience, is represented by Bayes' theorem as

$$
\begin{aligned}
p(w\mid D,K,b) &= \frac{1}{Z_n(K,b)}\prod_{i=1}^{n}p(Y_i\mid X_i,w,b)\,\varphi(w\mid K) &&(9)\\
&= \frac{1}{\tilde{Z}_n(K,b)}\exp\left[-nbE_n(w)\right]\varphi(w\mid K), &&(10)\\
Z_n(K,b) &:= \int dw\,\prod_{i=1}^{n}p(Y_i\mid X_i,w,b)\,\varphi(w\mid K) &&(11)\\
&= \left(\frac{b}{2\pi}\right)^{\frac{n}{2}}\tilde{Z}_n(K,b), &&(12)\\
\tilde{Z}_n(K,b) &:= \int dw\,\exp\left[-nbE_n(w)\right]\varphi(w\mid K), &&(13)\\
E_n(w) &:= \frac{1}{2n}\sum_{i=1}^{n}\left[Y_i-f(X_i;w)\right]^2, &&(14)
\end{aligned}
$$

where the functions $p(w\mid D,K,b)$ and $Z_n(K,b)$ are respectively called the posterior density and the marginal likelihood. Note that the function $Z_n(K,b)$ is a probability density but $\tilde{Z}_n(K,b)$ is not. Bayes free energy is defined as

$$
\begin{aligned}
F_n(K,b) &:= -\log Z_n(K,b) &&(15)\\
&= b\tilde{F}_n(K,b) - \frac{n}{2}\left(\log b - \log 2\pi\right), &&(16)\\
\tilde{F}_n(K,b) &:= -\frac{1}{b}\log\tilde{Z}_n(K,b). &&(17)
\end{aligned}
$$

Note that Nagata et al. regarded $\tilde{F}_n(K,b)$ as Bayes free energy for the sake of convenience [6] since the noise variance is treated as a known constant there. We also allow the case in which there are no peaks, $K=0$ (see Appendix A). In terms of the empirical Bayes (or type II maximum likelihood) approach [16, 17, 18], the empirical Bayes estimators of $K$ and $b$ are given by

$$
\begin{aligned}
(\hat{K},\hat{b}) &:= \mathop{\mathrm{arg\,max}}_{K,b}\, Z_n(K,b) &&(18)\\
&= \mathop{\mathrm{arg\,min}}_{K,b}\, F_n(K,b). &&(19)
\end{aligned}
$$
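On a finite grid of candidate pairs, Eq. (19) is just a joint argmin. A minimal sketch (the grid values below are placeholders, not computed free energies):

```python
import numpy as np

def empirical_bayes(F, K_list, b_list):
    """Empirical Bayes estimators (K_hat, b_hat) = argmin_{K,b} F_n(K, b), Eq. (19),
    over a precomputed grid F[i, j] = F_n(K_list[i], b_list[j])."""
    F = np.asarray(F, float)
    i, j = np.unravel_index(np.argmin(F), F.shape)
    return K_list[i], b_list[j]

# Placeholder grid: free-energy values for K in {0, 1, 2} at two values of b
F = [[5.0, 4.0], [2.5, 3.0], [2.8, 3.5]]
K_hat, b_hat = empirical_bayes(F, K_list=[0, 1, 2], b_list=[0.5, 1.0])
```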

The hierarchical Bayes approach [24] is also tractable in our framework (see Appendix B). The partial derivative of $F_n(K,b)$ with respect to the variable $b$ is obtained as

$$
\frac{\partial F_n}{\partial b} = n\left[\langle E_n(w)\rangle_b - \frac{1}{2b}\right], \qquad (20)
$$

where $\langle\cdot\rangle_b$ denotes the posterior mean of an arbitrary quantity over $p(w\mid D,K,b)$. If $\hat{b}$ is a stationary point of $F_n(K,b)$, then the following equation is satisfied:

$$
\langle E_n(w)\rangle_{\hat{b}} = \frac{1}{2\hat{b}}. \qquad (21)
$$

The Bayes estimator of $w$ is given by the posterior mean $\hat{w} := \langle w\rangle_{\hat{b}}$, with the posterior standard deviation of each parameter quantifying the precision of estimation. However, these quantities cannot be derived in closed form in this case since $\langle E_n(w)\rangle_b$ and $\tilde{Z}_n(K,b)$ are analytically intractable for our model.

### 2.3 Exchange Monte Carlo method

In practice, we calculate $\langle E_n(w)\rangle_b$ and $\tilde{Z}_n(K,b)$ by using the exchange Monte Carlo method, which efficiently enables sampling from the posterior density at multiple values of $b$ without knowing $Z_n(K,b)$ or $\tilde{Z}_n(K,b)$. The target density is a joint probability density as

$$
p\left(\{w_l\}_{l=1}^{L}\mid D,K,\{b_l\}_{l=1}^{L}\right) := \prod_{l=1}^{L}p(w_l\mid D,K,b_l), \qquad (22)
$$

where $w_l$ is the parameter set at the $l$-th value $b_l$. Each density $p(w_l\mid D,K,b_l)$ is called a replica. The sequence $\{b_l\}_{l=1}^{L}$ is set as $b_1 < b_2 < \dots < b_L$ for the sake of convenience. Note that the variable $b$ here plays the role of the inverse temperature of Nagata et al.'s formulation [6]. The variable $b$ works as a quasi-inverse temperature and varies the substantial support of the posterior density $p(w\mid D,K,b)$. The state exchange between high- and low-temperature replicas enables the escape from local minima or saddles in the parameter space. The sampling procedure includes the two following steps.

• State update in each replica
Simultaneously and independently update each state $w_l$ subject to $p(w_l\mid D,K,b_l)$ using the Metropolis algorithm [25].

• State exchange between neighboring replicas
Exchange the states $w_l$ and $w_{l+1}$ at every step subject to the probability as

$$
\begin{aligned}
u(w_{l+1},w_l,b_{l+1},b_l) &:= \min\left[1,\, v(w_{l+1},w_l,b_{l+1},b_l)\right], &&(23)\\
v(w_{l+1},w_l,b_{l+1},b_l) &:= \frac{p(w_{l+1}\mid D,K,b_l)\,p(w_l\mid D,K,b_{l+1})}{p(w_l\mid D,K,b_l)\,p(w_{l+1}\mid D,K,b_{l+1})} &&(24)\\
&= \exp\left\{n(b_{l+1}-b_l)\left[E_n(w_{l+1})-E_n(w_l)\right]\right\}, &&(25)
\end{aligned}
$$

where Eq. (23) ensures a detailed balance condition.
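The two-step procedure above can be sketched on a toy one-dimensional, bimodal error function standing in for the multimodal error surface of a multipeak model; the double-well $E_n$, the ladder, and the flat (improper) prior are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def En(w):
    """Toy bimodal 'mean square error' with minima at w = -1 and w = +1."""
    return 0.25 * (w ** 2 - 1.0) ** 2

n = 50                                  # sample size entering exp[-n b E_n(w)]
b = np.logspace(-3, 0, 8)               # quasi-inverse-temperature ladder b_1 < ... < b_L
L = len(b)
w = rng.normal(0.0, 1.0, L)             # one scalar parameter per replica
cold = np.empty(20000)                  # trace of the coldest (largest-b) replica

for sweep in range(20000):
    # (i) Metropolis update within each replica, targeting exp[-n b_l E_n(w_l)]
    prop = w + rng.normal(0.0, 0.2, L)
    log_acc = np.minimum(0.0, -n * b * (En(prop) - En(w)))
    w = np.where(rng.random(L) < np.exp(log_acc), prop, w)
    # (ii) exchange neighbors with probability u = min(1, v) of Eqs. (23)-(25)
    for l in range(L - 1):
        logv = n * (b[l + 1] - b[l]) * (En(w[l + 1]) - En(w[l]))
        if rng.random() < np.exp(min(0.0, logv)):
            w[l], w[l + 1] = w[l + 1], w[l]
    cold[sweep] = w[-1]
```

The exchange step lets states found by the nearly flat high-temperature replicas percolate down to the coldest one, so the coldest chain visits both modes instead of freezing into one.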

A straightforward way of computing $\tilde{F}_n(K,b_l)$ via the exchange Monte Carlo method is bridge sampling [26, 27], in which $\tilde{F}_n(K,b_l)$ is expressed as

$$
\begin{aligned}
\tilde{F}_n(K,b_l) &= -\frac{1}{b_l}\log\prod_{l'=1}^{l-1}\frac{\tilde{Z}_n(K,b_{l'+1})}{\tilde{Z}_n(K,b_{l'})} &&(26)\\
&= -\frac{1}{b_l}\sum_{l'=1}^{l-1}\log\left\langle\exp\left[-n(b_{l'+1}-b_{l'})E_n(w_{l'})\right]\right\rangle_{b_{l'}}, &&(27)
\end{aligned}
$$

where the posterior mean $\langle Q_l\rangle_{b_l}$ of an arbitrary quantity $Q_l$ at the $l$-th replica is approximated by the mean of an MCMC sample of size $M_l$ as

$$
\langle Q_l\rangle_{b_l} = \frac{1}{M_l}\sum_{m=1}^{M_l}Q_{l,m}. \qquad (28)
$$

However, $\hat{b}$ is not easy to calculate accurately using only the above scheme since $\{b_l\}_{l=1}^{L}$ is a discrete set, whereas $b$ is a continuous variable.
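Equation (27) turns per-replica sample means into free-energy estimates on the ladder. A minimal sketch, assuming $\tilde{Z}_n(K,b_1)\approx 1$ (a nearly flat posterior at the smallest $b_1$):

```python
import numpy as np

def bridge_free_energy(E_samples, b, n):
    """~F_n(K, b_l) on the ladder via Eqs. (26)-(27).
    E_samples[l] holds MCMC samples of E_n(w) drawn at b[l]."""
    F = np.zeros(len(b))
    log_ratio_sum = 0.0                       # accumulates log ~Z(b_l) - log ~Z(b_1)
    for l in range(1, len(b)):
        x = -n * (b[l] - b[l - 1]) * np.asarray(E_samples[l - 1], float)
        m = x.max()                           # stable log-mean-exp
        log_ratio_sum += m + np.log(np.mean(np.exp(x - m)))
        F[l] = -log_ratio_sum / b[l]
    return F
```

With constant samples $E_n \equiv c$ the estimate reduces to $nc\,(b_l-b_1)/b_l$ exactly, a convenient sanity check; with real MCMC samples the averages are stochastic.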

### 2.4 Multiple histogram method

We interpolate $\tilde{Z}_n(K,b)$ or $\tilde{F}_n(K,b)$ with respect to $b$ for any $K$ via the multiple histogram method. The density of states is defined and estimated by

$$
\begin{aligned}
g(E;K) &:= \int dw\,\delta\left[E-E_n(w)\right]\varphi(w\mid K) &&(29)\\
&= \frac{\sum_{l=1}^{L}N_l(E)}{\sum_{l'=1}^{L}M_{l'}\tilde{Z}_n(K,b_{l'})^{-1}\exp(-nb_{l'}E)}, &&(30)
\end{aligned}
$$

then we obtain

$$
\begin{aligned}
\tilde{Z}_n(K,b) &= \int dE\,g(E;K)\exp(-nbE) &&(31)\\
&= \sum_{l=1}^{L}\sum_{m=1}^{M_l}\frac{1}{\sum_{l'=1}^{L}M_{l'}\tilde{Z}_n(K,b_{l'})^{-1}\exp\left[n(b-b_{l'})E_{l,m}\right]}, &&(32)
\end{aligned}
$$

where $N_l(E)$ and $E_{l,m}$ are respectively the histogram of $E_n(w)$ at the $l$-th replica and the value of $E_n(w)$ at the $m$-th snapshot of the $l$-th replica in an MCMC simulation. The values of $\tilde{Z}_n(K,b_l)$ are determined self-consistently by iterating Eq. (32) with $b = b_l$. We take the values computed via Eq. (27) as the initial values for the sake of convenience. Given $\{\tilde{Z}_n(K,b_l)\}_{l=1}^{L}$, we then calculate $\tilde{Z}_n(K,b)$ for any $b$ via Eq. (32) again. The above procedure can be appropriately generalized to treat multidimensional histograms [28]. Then, the posterior mean of an arbitrary quantity $Q$ is calculated as

$$
\langle Q\rangle_b = \frac{1}{\tilde{Z}_n(K,b)}\sum_{l=1}^{L}\sum_{m=1}^{M_l}\frac{Q_{l,m}}{\sum_{l'=1}^{L}M_{l'}\tilde{Z}_n(K,b_{l'})^{-1}\exp\left[n(b-b_{l'})E_{l,m}\right]}, \qquad (33)
$$

where $Q_{l,m}$ is the value of $Q$ at the $m$-th snapshot of the $l$-th replica in an MCMC simulation. We calculate $\langle E_n(w)\rangle_b$ via Eq. (33) and solve Eq. (21) numerically by the bisection method. Then, $\hat{w}$ with the standard deviation of each parameter is also calculated via Eq. (33). The posterior density of arbitrary quantities can also be interpolated with respect to $b$ in the same way (see Appendix C).
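The self-consistent iteration of Eq. (32), the continuous interpolation in $b$, and the bisection solve of Eq. (21) can be sketched as follows. This is a bare-bones scalar version operating on per-replica samples of $E_n(w)$, not a full implementation; here the initial values are simply zeros rather than the Eq. (27) estimates.

```python
import numpy as np

def _lse(a, axis):
    """Numerically stable log-sum-exp along one axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True)), axis=axis)

def wham(E_samples, b, n, n_iter=100):
    """Solve Eq. (32) self-consistently for log ~Z_n(K, b_l); return the ladder values
    plus closures interpolating log ~Z_n(K, b) and <E_n(w)>_b (Eq. (33)) in b."""
    b = np.asarray(b, float)
    M = np.array([len(e) for e in E_samples], float)
    E_all = np.concatenate([np.asarray(e, float) for e in E_samples])

    def log_den(b_eval, logZ):
        # log sum_{l'} M_{l'} ~Z(b_{l'})^{-1} exp[n (b_eval - b_{l'}) E_{l,m}], per snapshot
        return _lse(np.log(M) - logZ + n * (b_eval - b)[None, :] * E_all[:, None], axis=1)

    logZ = np.zeros(len(b))
    for _ in range(n_iter):
        logZ = np.array([_lse(-log_den(bl, logZ), axis=0) for bl in b])
        logZ = logZ - logZ[0]          # ~Z is determined only up to a constant factor here

    def log_ztilde(b_eval):
        return float(_lse(-log_den(b_eval, logZ), axis=0))

    def post_mean_E(b_eval):
        wgt = np.exp(-log_den(b_eval, logZ) - log_ztilde(b_eval))   # normalized weights
        return float(np.sum(wgt * E_all))

    return logZ, log_ztilde, post_mean_E

def solve_bhat(post_mean_E, lo, hi, n_steps=60):
    """Bisection for Eq. (21): find b with <E_n(w)>_b - 1/(2b) = 0 in [lo, hi]."""
    g = lambda bb: post_mean_E(bb) - 0.5 / bb
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(lo) * g(mid) <= 0.0 else (mid, hi)
    return 0.5 * (lo + hi)
```

With constant per-replica samples $E_n \equiv c$, the fixed point satisfies $\log\tilde{Z}(b) - \log\tilde{Z}(b_1) = -nc\,(b-b_1)$, $\langle E_n\rangle_b = c$, and the bisection returns $\hat{b} = 1/(2c)$, which makes the sketch easy to verify.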

## 3 Demonstration

We demonstrated how efficient our framework is through a simulation using the same synthetic data as Nagata et al. [6]. The synthetic data shown in Fig. 1 were generated from the true probability density as

$$
q(y\mid x,w_0,b_0) := \sqrt{\frac{b_0}{2\pi}}\exp\left\{-\frac{b_0}{2}\left[y-f(x;w_0)\right]^2\right\}, \qquad (34)
$$

where $b_0$ and $w_0$ are respectively the true inverse noise variance and the true parameter set, as in Tables 1 and 2. The inputs $\{X_i\}_{i=1}^{n}$ were linearly spaced with a fixed spectral resolution $\Delta X$. The sequence $\{b_l\}_{l=1}^{L}$ was logarithmically spaced. The model size $K$ was set as the integers from $0$ to $5$. The hyperparameters $\kappa$, $\alpha$, $\mu_0$, and $\nu$ were set heuristically. The total number of MCMC sweeps was 100,000, including 50,000 burn-in sweeps: an MCMC sample of size $M_l = 50{,}000$ for every $b_l$ was obtained. The estimators are listed in Tables 1 and 2, where $\hat{b}$ was converted into an inverse square-root scale for comparison. Every true value of the parameters lies within two standard deviations.

First, we discuss how to estimate both the noise variance and the number of peaks. (A) Bayes free energy and (B) the posterior mean of the mean square error are shown in Fig. 2. The horizontal axes represent $b$ on a log scale. The colored solid lines show $\tilde{F}_n(K,b_l)$ calculated via Eq. (27) for each $K$ in (A) and $\langle E_n(w)\rangle_{b_l}$ calculated via Eq. (28) for each $K$ on a log scale in (B). The three lines almost overlap in (A-1) and (B-1), whose enlarged views around the black circles are respectively shown in (A-2) and (B-2). The colored markers in (A-2) and (B-2) indicate the same quantities as in (A-1) and (B-1), and the colored dotted lines indicate the interpolated values calculated via Eqs. (32) and (33), respectively. The gray solid lines in (B) show the function $1/(2b)$. The vertical black dashed lines and the vertical black dash-dotted ones respectively show the true value $b_0$ and the estimated value $\hat{b}$. There is a minimum point of $F_n(K,b)$ depending on each value of $K$, i.e., the probability density has a maximum at this point (see Appendix B). In this case, Eq. (21) holds at the intersection of the purple dotted line and the gray solid line shown in (B-2).

Second, we discuss the validity of our framework. The dependence on $b$ in the model selection is shown in Fig. 3. The horizontal axis represents $b$ on a log scale. The colored markers show the estimated model size that minimizes the free energy for each $b_l$ as

$$
\begin{aligned}
\hat{K}_{b_l} &:= \mathop{\mathrm{arg\,min}}_{K}\, F_n(K,b_l) &&(35)\\
&= \mathop{\mathrm{arg\,min}}_{K}\, \tilde{F}_n(K,b_l). &&(36)
\end{aligned}
$$

Note that $\hat{K}_{b_l}$ is regarded as the optimal number of peaks in Nagata et al.'s framework [6]. The vertical black dashed line and the vertical black dash-dotted one respectively show the true value $b_0$ and the estimated value $\hat{b}$. Although $\hat{K}_{b_l}$ for each value of $b_l$ depends on the noise realization, as Nagata et al. showed in the case of a known noise variance [6], $\hat{K}_{b_l}$ also changes depending on the value of $b_l$. There is a rough trend, explained by the asymptotic form of $\tilde{F}_n(K,b)$, in which $\hat{K}_{b_l}$ becomes larger as $b_l$ increases. If the sample size $n$ is sufficiently large, $\tilde{F}_n(K,b)$ is expressed as

$$
\tilde{F}_n(K,b) = nE_n(w_0) + \frac{\lambda}{b}\log nb + \frac{1}{b}O_p(\log\log nb), \qquad (37)
$$

where $w_0$ is the parameter set that minimizes the Kullback–Leibler divergence of the statistical model from the true distribution, and $\lambda$ is a rational number called the real log canonical threshold (RLCT) [29, 30]. The RLCT is determined by the pair of a statistical model and a true distribution, and the ones determined by Eqs. (4) and (34) have been clarified for several cases of $K$ [23]. As $K$ increases, $\lambda$ becomes larger and $nE_n(w_0)$ becomes smaller. The term $nE_n(w_0)$ dominantly works for model selection for large $b$: overfitting occurs. The term $(\lambda/b)\log nb$ dominantly works for small $b$: overpenalizing occurs. A moderate model is estimated under a moderate value of $b$. Estimating the optimal value of $b$ is indispensable, and this result shows the validity of our framework.

Finally, we discuss the validity of our framework from another viewpoint. (A) The posterior mean of $\mu_k$, (B) the posterior standard deviation of $\mu_k$, and (a-d) the marginal posterior distribution of $\mu_k$ at four values of $b$ are shown in Fig. 4. The horizontal axes in (A-B) represent $b$ on a log scale. The colored solid lines show the posterior mean of $\mu_k$ for each $k$ in (A) and the posterior standard deviation of $\mu_k$ for each $k$ on a log scale in (B). These values were calculated via Eq. (28). The identification of each mode $k$ was reassigned by sorting the MCMC sample for each $b_l$ in light of the exchange symmetry. The vertical black dashed lines and the vertical black dash-dotted ones respectively show the true value $b_0$ and the estimated value $\hat{b}$. The horizontal black dotted lines in (A) show the true value of $\mu_k$ for each $k$, and the horizontal gray dashed line in (B) shows the spectral resolution $\Delta X$. The vertical black solid lines in (A-B) correspond to each value of $b$ in (a-d). The relative frequency histograms (a-d) show the marginal posterior probability of $\mu_k$ for each bin and $b$ as follows:

$$
\begin{aligned}
P(X_i\le\mu_k\le X_{i+1}\mid D,K,b) &= \int_{X_i}^{X_{i+1}}d\mu_k\,p(\mu_k\mid D,K,b), &&(38)\\
p(\mu_k\mid D,K,b) &= \int dw'\,p(w\mid D,K,b) &&(39)\\
&= \frac{\tilde{z}_n(K,b,\mu_k)\,\varphi(\mu_k)}{\tilde{Z}_n(K,b)}, &&(40)\\
\tilde{z}_n(K,b,\mu_k) &:= \int dw'\,\exp\left[-nbE_n(w';\mu_k)\right]\varphi(w'\mid K), &&(41)
\end{aligned}
$$

where $w'$ denotes the parameter set excluding $\mu_k$, and $E_n(w';\mu_k)$ indicates the function $E_n(w)$ given the value of $\mu_k$. The histograms (a), (b), and (d) were constructed directly from the MCMC sample for each $k$. Histogram (c) was calculated via Eq. (C.5) for each $k$ (see Appendix C). The coloring of the histograms for each $k$ follows that in (A-B). The horizontal axes in (a-d) represent $\mu_k$, and the vertical ones represent relative frequency on a log scale. The vertical black dotted lines in (a-d) show the true value of $\mu_k$ for each $k$, as in (A). The posterior mean and standard deviation of $\mu_k$ change depending on $b$, and the support of the posterior density changes correspondingly. These changes are concentrated in a particular region of $b$: the posterior mean of $\mu_k$ for each $k$ asymptotically approaches the true value from this region, and the posterior standard deviation for each $k$ monotonically decreases from the same region. The marginal posterior densities of the $\mu_k$ overlap and are unidentifiable if $b$ is smaller than this region; otherwise, they are separated and identifiable. The posterior standard deviation of $\mu_k$ is smaller than the spectral resolution $\Delta X$ as shown in (c): a kind of super-resolution. This effect is based on the same principle as super-resolution microscopy techniques [31, 32]. The posterior standard deviation for each $k$ is also smaller than $\Delta X$ as shown in (d), whereas the support of the posterior density does not cover the true value: the true value falls outside the confidence interval. An appropriate setting of $b$ provides an appropriate precision of parameter estimation. Estimating the optimal value of $b$ is indispensable even if the true model size is known; thus, this result also shows the validity of our framework.

## 4 Discussion and Conclusion

We constructed a framework that enables the simultaneous estimation of the noise variance and the number of peaks and demonstrated its effectiveness through simulation. We also warned that there are risks of overfitting, overpenalizing, and misunderstanding the precision of parameter estimation without the estimation of the noise variance. Our framework is an extension of Nagata et al.'s framework and is versatile and applicable not only to spectral deconvolution but also to any other nonlinear regression with hierarchical statistical models.

Our framework can also be considered as a learning scheme for radial basis function networks. However, the goal of spectral deconvolution is not to predict future data, which is the goal of most other learning tasks, but to identify the true model, since spectral deconvolution is an inverse problem of physics. This is the reason why we adopt not the Bayes generalization error but the Bayes free energy for hyperparameter optimization and model selection. The Akaike information criterion (AIC) [33] and Bayesian information criterion (BIC) [34], which are respectively approximations of the generalization error and the Bayes free energy, do not hold for hierarchical models such as radial basis function networks; the widely applicable information criterion (WAIC) [35] and widely applicable Bayesian information criterion (WBIC) [36] generally hold for any statistical model. If the noise variance is unknown, however, these criteria do not lead to computational reduction since the value of the noise variance still needs to be estimated, as discussed in Sect. 3. The example we gave is classified as an unrealizable and singular (or regular) case [37], which is a difficult problem. On the other hand, the example Nagata et al. gave [6] is classified as a realizable and singular (or regular) case, which is a relatively easy problem. Statistical hypothesis testing does not hold for a singular case. Our scheme is thus also valid and sophisticated from the viewpoint of statistics.

This work was partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas (No. 25120009) from the Japan Society for the Promotion of Science, by “Materials Research by Information Integration” Initiative (MI2I) project of the Support Program for Starting Up Innovation Hub from the Japan Science and Technology Agency (JST), and by Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), “Structural Materials for Innovation” (Funding agency: JST).

## Appendix A: Bayes free energy for no-peaks model

We define the model function for $K=0$ as $f(x; w=\phi) := 0$, where $\phi$ is the empty set. The statistical model of the no-peaks spectrum and the marginal likelihood are expressed as

$$
\begin{aligned}
p(y\mid x,w=\phi,b) &= \sqrt{\frac{b}{2\pi}}\exp\left(-\frac{b}{2}y^2\right), &&(\text{A.1})\\
Z_n(K=0,b) &= \prod_{i=1}^{n}p(Y_i\mid X_i,w=\phi,b) &&(\text{A.2})\\
&= \left(\frac{b}{2\pi}\right)^{\frac{n}{2}}\tilde{Z}_n(K=0,b), &&(\text{A.3})\\
\tilde{Z}_n(K=0,b) &= \exp\left[-nbE_n(w=\phi)\right], &&(\text{A.4})\\
E_n(w=\phi) &= \frac{1}{2n}\sum_{i=1}^{n}Y_i^2. &&(\text{A.5})
\end{aligned}
$$

The main term of Bayes free energy and the posterior mean of the mean square error are also respectively expressed as

$$
\begin{aligned}
\tilde{F}_n(K=0,b) &= nE_n(w=\phi), &&(\text{A.6})\\
\langle E_n(w=\phi)\rangle_b &= E_n(w=\phi), &&(\text{A.7})
\end{aligned}
$$

both of which can be calculated without any MCMC method.
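These closed forms amount to two lines of code; a sketch assuming only a data vector $\{Y_i\}$:

```python
import numpy as np

def no_peaks_free_energy(Y):
    """Closed-form quantities for K = 0: E_n(w = phi) of Eq. (A.5) and the main term
    ~F_n(K=0, b) = n E_n(w = phi) of Eq. (A.6), which is independent of b."""
    Y = np.asarray(Y, float)
    En0 = 0.5 * np.mean(Y ** 2)
    return En0, len(Y) * En0
```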

## Appendix B: Hierarchical Bayes approach

In Sect. 3, we adopted the empirical Bayes (or type II maximum likelihood) approach, in which $K$ and $b$ are estimated by the minimization of $F_n(K,b)$ (or the maximization of $Z_n(K,b)$). The hierarchical Bayes approach, which takes into account the posterior density of $K$ and $b$, is also suitable for our framework. The prior density of $K$ and $b$ is set as $\varphi(K)\varphi(b)$, where $\varphi(K)$ is a discrete uniform distribution on the integers $0 \le K \le 5$ and $\varphi(b)$ is a continuous uniform distribution on the interval $[b_1, b_L]$. The joint posterior probability and the marginal ones are expressed as

$$
\begin{aligned}
P(K, b_l\le b\le b_{l+1}\mid D) &= \int_{b_l}^{b_{l+1}}db\,p(K,b\mid D), &&(\text{B.1})\\
p(K,b_l\mid D) &= \frac{\exp\left[-F_n(K,b_l)\right]}{\sum_{K=0}^{5}\int_{b_1}^{b_L}db\,\exp\left[-F_n(K,b)\right]}, &&(\text{B.2})\\
P(K\mid D) &= \sum_{l=1}^{L-1}P(K, b_l\le b\le b_{l+1}\mid D), &&(\text{B.3})\\
P(b_l\le b\le b_{l+1}\mid D) &= \int_{b_l}^{b_{l+1}}db\,p(b\mid D), &&(\text{B.4})\\
p(b_l\mid D) &= \sum_{K=0}^{5}p(K,b_l\mid D), &&(\text{B.5})
\end{aligned}
$$

where the integration along the $b$-axis is calculated using the trapezoidal rule. Note that the uniform prior densities of $K$ and $b$ cancel out in Eq. (B.2). The (A) joint probability of $K$ and $b$, (B) the marginal probability of $K$, and (C) the marginal probability density of $b$ are shown in Fig. B.1. The horizontal axes represent $b$ on a log scale. The colored stairstep graphs and the black one in (A) respectively show the joint probability for each $K$ and the marginal probability of $b$. The three colored graphs almost overlap, in contrast to Fig. 2(A-1). The black bars in (B) show the marginal probability $P(K\mid D)$. The black markers and the black dotted line in (C) respectively show the marginal probability density $p(b_l\mid D)$ and the interpolated values. The vertical black dashed lines and the vertical black dash-dotted ones respectively show the true value $b_0$ and the estimated value $\hat{b}$, as in Fig. 2. Both $\hat{K}$ and $\hat{b}$ lie within the same intervals of $K$ and $b$ that maximize the probabilities $P(K\mid D)$ and $P(b_l\le b\le b_{l+1}\mid D)$ in this case. Although the value of $K$ that maximizes $P(K\mid D)$ is the same as $\hat{K}$ in this case, the value of $b$ that maximizes $p(b\mid D)$ is slightly different from $\hat{b}$ in the strict sense. These values are not always consistent in practice, and there is a continuing discussion of which is better: to optimize or to integrate out [38]. The users of our framework can choose the better way in light of their own perspective.
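Given a grid of free-energy values, the probabilities of Eqs. (B.1)-(B.5) follow from exponentiation, trapezoidal integration along $b$, and summation over $K$; a sketch with the trapezoidal weights written out explicitly:

```python
import numpy as np

def hierarchical_posterior(F, b):
    """Joint density p(K, b_l | D) of Eq. (B.2) from F[i, l] = F_n(K=i, b_l),
    with marginals P(K | D) (Eq. (B.3)) and p(b_l | D) (Eq. (B.5))."""
    F, b = np.asarray(F, float), np.asarray(b, float)
    # trapezoidal quadrature weights along the b-axis
    wq = np.zeros(len(b))
    wq[0], wq[-1] = 0.5 * (b[1] - b[0]), 0.5 * (b[-1] - b[-2])
    wq[1:-1] = 0.5 * (b[2:] - b[:-2])
    p = np.exp(-(F - F.min()))              # shift by the minimum for numerical stability
    p /= np.sum(p @ wq)                     # normalize as in Eq. (B.2)
    PK = p @ wq                             # Eq. (B.3): integrate over b, per K
    pb = p.sum(axis=0)                      # Eq. (B.5): sum over K, per b_l
    return p, PK, pb
```

A flat free-energy surface yields a uniform posterior, which makes the normalization easy to check by hand.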

## Appendix C: Interpolation of posterior distribution

The density of states in the $i$-th bin, namely the density of states given the value of $\mu_k$ in the interval $[X_i, X_{i+1}]$, is defined and estimated as

$$
\begin{aligned}
g(E;K, X_i\le\mu_k\le X_{i+1}) &:= \int dw'\,\delta\left[E-E_n(w'; X_i\le\mu_k\le X_{i+1})\right]\varphi(w'\mid K) &&(\text{C.1})\\
&= \frac{\sum_{l=1}^{L}N_l(E; X_i\le\mu_k\le X_{i+1})}{\sum_{l'=1}^{L}M^{(i)}_{l'}\tilde{Z}_n(K,b_{l'})^{-1}\exp(-nb_{l'}E)}, &&(\text{C.2})
\end{aligned}
$$

then we obtain

$$
\begin{aligned}
\tilde{z}_n(K,b,X_i\le\mu_k\le X_{i+1}) &= \int dE\,g(E;K, X_i\le\mu_k\le X_{i+1})\exp(-nbE) &&(\text{C.3})\\
&= \sum_{l=1}^{L}\sum_{m=1}^{M^{(i)}_l}\frac{1}{\sum_{l'=1}^{L}M^{(i)}_{l'}\tilde{z}_n(K,b_{l'},X_i\le\mu_k\le X_{i+1})^{-1}\exp\left[n(b-b_{l'})E^{(i)}_{l,m}\right]}, &&(\text{C.4})
\end{aligned}
$$

where $M^{(i)}_l$, $N_l(E; X_i\le\mu_k\le X_{i+1})$, and $E^{(i)}_{l,m}$ respectively indicate $M_l$, $N_l(E)$, and $E_{l,m}$ restricted to the $i$-th bin. The values of $\tilde{z}_n(K,b_l,X_i\le\mu_k\le X_{i+1})$ for each bin are determined self-consistently by iterating Eq. (C.4) with $b = b_l$. Given these values, we calculate $\tilde{z}_n(K,b,X_i\le\mu_k\le X_{i+1})$ for each bin via Eq. (C.4) again. If $\alpha$ is sufficiently small (or $\varphi(\mu_k)$ is almost flat),