Pulse processing routines for neutron time-of-flight data

by P. Žugec et al.
University of Zagreb

A pulse shape analysis framework is described, which was developed for n_TOF-Phase3, the third phase in the operation of the n_TOF facility at CERN. The most notable feature of this new framework is the adoption of generic pulse shape analysis routines, characterized by a minimal number of explicit assumptions about the nature of the pulses. The aim of these routines is to be applicable to a wide variety of detectors, thus facilitating the introduction of new detectors or detector types into the analysis framework. The operational details of the routines are adapted to the specific requirements of particular detectors by adjusting a set of external input parameters. Pulse recognition, baseline calculation and the pulse shape fitting procedure are described. Special emphasis is put on their computational efficiency, since the most basic implementations of these conceptually simple methods are often computationally inefficient.




1 Introduction

Figure 1: (Color online) Illustration of the procedure for calculating the signal derivative from Eq. (1). The filter of step-size N (red dots) is applied to the artificially constructed signal (open dots). The behavior of the filter at the signal boundaries is shown as well (blue and green dots).

After a year-and-a-half-long shutdown, the neutron time of flight facility n_TOF ntof1 ; ntof2 at CERN has entered a third phase of its operation, known as n_TOF-Phase3. The new era of the n_TOF facility is marked by the successful completion of the construction of Experimental Area 2 (EAR2) ear2_0 ; ear2_1 ; ear2_2 , which was recently put into operation. Experimental Area 1 (EAR1), already in operation for more than a decade, runs in parallel. An in-depth description of the general features of the n_TOF facility, such as the neutron production and the neutron transport, may be found in Refs. ear2_1 ; ear2_2 ; carlos .

At n_TOF a wide variety of detectors is used for measuring neutron induced reactions, including neutron capture, neutron induced fission and several other reaction types. Among these are solid-state detectors (such as the silicon based neutron beam monitor simon and CVD diamond detectors diamond ), scintillation detectors (an array of BaF2 scintillator crystals tac , C6D6 liquid scintillators c6d6 ) and gaseous detectors (such as MicroMegas-based detectors mgas1 ; mgas2 , a calibrated fission chamber from the Physikalisch-Technische Bundesanstalt ptb , a set of Parallel Plate Avalanche Counters ppac ). Several other types of detectors were recently introduced and tested at n_TOF, such as solid-state HPGe, scintillation NaI and gaseous 3He detectors.

A high-performance digital data acquisition system is used for the management and storage of the electronic detector signals. The system is based on flash analog-to-digital converter (FADC) units, recently upgraded to handle an amplitude resolution of 8 to 12 bits. It operates at sampling rates typically ranging from 100 MHz to 1 GHz, with a memory buffer of up to 175 MSamples, allowing for an uninterrupted recording of the detector output signals during the full time-of-flight range of approximately 100 ms (as used in EAR1). A detailed description of the previous version of this system can be found in Ref. daq .

Once stored in digital form, the electronic signals have to be accessed for offline analysis, in order to obtain the time-of-flight and pulse height information for each detected pulse. The analysis procedures applied to the signals from C6D6 and BaF2 detectors have already been described in Refs. daq ; baf2_analysis . In order to efficiently and consistently accommodate the analysis requirements of a wide variety of detectors used at n_TOF, a generic type of routine was recently developed that can be applied to different types of signals. The routine is characterized by a minimal number of explicit assumptions about the nature of the signals and is based on a pulse template adjustment, which we refer to as pulse shape fitting. For each detector or type of detector a set of analysis parameters needs to be set externally. A number of these will be mentioned throughout this paper.

Many of the procedures adopted for the signal analysis – such as pulse integration with the goal of extracting the energy deposited in the detectors, or constant fraction discrimination for determining the pulses' timing properties – are well established techniques, so we do not describe them here. Instead, we focus on the technical aspects of the more specific methods that were found to perform very well for the wide variety of signals from different detectors, in order to document them and ensure their reproducibility. Special emphasis is put on the computational efficiency of these procedures.

Selected examples of the signals from the detectors available at n_TOF are shown throughout the paper. Regarding previous work on signal analysis procedures adapted to specific types of detectors, the reader may consult Refs. psa_naid ; psa_hpge ; psa_sili ; psa_scint , dealing with NaI, HPGe, silicon and organic scintillation detectors, respectively. We also refer the reader to the exhaustive comparative analysis of many different pulse shape processing methods in Ref. psa_review , and to the references therein.

2 Pulse recognition

2.1 Signal derivative

The central procedure in the pulse recognition is the construction of the signal derivative D_i of the sampled signal S_i. We use the following definition:

D_i = Σ_{j=1}^{min(N, L−1−i)} S_{i+j} − Σ_{j=1}^{min(N, i)} S_{i−j} ,    (1)

that takes advantage of integrating the signal at both sides of the i-th point at which the derivative is to be calculated. L is the total number of points composing the recorded signal. The points are enumerated from 0 to L−1, which is a convention used throughout this paper, unless explicitly stated otherwise. The step-size N is the default number of points to be taken for integration. As illustrated by Fig. 1, this procedure formally resembles a convolution between the signal and a see-saw-shaped filter function of unit height, up to the boundary effects regulated by the upper summation bounds in Eq. (1). Evidently, when the step-size is adjusted so as to be wider than the period of the noise in the signal (and narrower than the characteristic pulse length), the procedure acts as a low-pass filter, improving the signal-to-noise ratio of the derivative.

The number of operations required by the straightforward implementation of this algorithm is proportional to N·L, making such an approach computationally inefficient. Fortunately, recursive relations may be derived for calculating the consecutive terms, making the entire procedure linear in the number of required operations: O(L). By defining the forward and backward sums F_i and B_i, respectively, as:

F_i = Σ_{j=1}^{min(N, L−1−i)} S_{i+j} ,    B_i = Σ_{j=1}^{min(N, i)} S_{i−j} ,    (2)

the derivative may be rewritten as: D_i = F_i − B_i. The initial values F_0 and B_0 follow directly from Eq. (1). The recursive relations for subsequent pairs of F_i and B_i are given in Table 1, listed according to the boundary effects.

Beginning of the waveform (i < N):    F_{i+1} = F_i − S_{i+1} + S_{i+N+1} ,    B_{i+1} = B_i + S_i
Middle of the waveform (N ≤ i ≤ L−2−N):    F_{i+1} = F_i − S_{i+1} + S_{i+N+1} ,    B_{i+1} = B_i + S_i − S_{i−N}
End of the waveform (i > L−2−N):    F_{i+1} = F_i − S_{i+1} ,    B_{i+1} = B_i + S_i − S_{i−N}
Table 1: List of recursive relations for calculating the forward and backward sums F_i and B_i from Eq. (2). The signal derivative may then be obtained as: D_i = F_i − B_i. Cases are categorized based on the boundary effects (whether the integration windows defined by the step-size N reach the boundaries of the waveform, composed of a total of L points), as illustrated in Fig. 1.
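For concreteness, the recursive evaluation described above may be sketched in C++ (a minimal illustration, assuming the convention used here – derivative equal to the sum of up to N samples after the i-th point minus the sum of up to N samples before it; the function and variable names are ours, not those of the actual n_TOF software):

```cpp
#include <vector>
#include <algorithm>

// O(L) signal derivative from Eq. (1): D[i] = F - B, where F and B are the
// running sums of up to N points after and before the i-th one.
std::vector<double> derivative(const std::vector<double>& S, int N) {
    const int L = (int)S.size();
    std::vector<double> D(L);
    double F = 0.0, B = 0.0;                 // F_0 and B_0, from Eq. (1)
    for (int j = 1; j <= std::min(N, L - 1); ++j) F += S[j];
    D[0] = F - B;                            // B_0 is an empty sum
    for (int i = 0; i < L - 1; ++i) {        // recursive update of D[i+1]
        F -= S[i + 1];                       // i+1 leaves the forward window
        if (i + N + 1 <= L - 1) F += S[i + N + 1]; // new point enters, if any
        B += S[i];                           // i enters the backward window
        if (i - N >= 0) B -= S[i - N];       // oldest point leaves, once full
        D[i + 1] = F - B;
    }
    return D;
}
```

Each update costs O(1), so the full derivative is computed in O(L) operations regardless of the step-size N.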

2.2 Derivative crossing thresholds

In order to recognize the presence of pulses in the overall signal, their derivative must cross certain predefined thresholds. These thresholds need to be set in such a way as to reject most of the noise, but not to discard even the lowest pulses. Therefore, they must be adaptively tied to the level of the noise characteristic of the current waveform, which is best expressed through the root mean square (RMS) of the noise. Figure 2 shows an example of one of the most challenging signals for this task, the signal from a MicroMegas detector. The top panel (a) shows a selected fraction of an actual recorded signal, with the strongest pulse corresponding to an intense γ-flash caused by the proton beam hitting the spallation target, while the bottom panel (b) shows its derivative calculated from Eq. (1). This signal is heavily affected by random beats which do not qualify as pulses of interest to any meaningful measurement (by beats we mean the coherent noise resembling acoustic beats, as shown in Fig. 2 and later in Fig. 10). Several tasks are immediately evident. First, the pulses themselves must be excluded from the procedure for determining the derivative thresholds, since they can only increase the overall RMS, thus leading to a rejection of the lowest pulses. However, the pulses cannot be discriminated from the noise before the thresholds have been found. Second, the beats must not be assigned to the noise RMS, since they are only sporadic and can also only lead to an unwanted increase in thresholds. Finally, in some cases one cannot even rely on the assumption of a fixed number of clear presamples before the first significant pulse, such as the initial γ-flash pulse. This is the case in measurements with high-activity samples, whose natural radioactivity causes a continual stream of pulses, independent of the external experimental conditions. Another example is the intake of waveforms for certain calibration purposes, when no external trigger is used and signals are recorded without any guarantee of clear presamples. In order to meet all these challenges, a procedure was developed that applies weighted fitting to a modified distribution of derivative points. It may be decomposed into four basic steps, described throughout this section.

Figure 2: Top panel (a): example of the digitized signal from a MicroMegas detector. Bottom panel (b): its derivative calculated from Eq. (1).

Step 1: build the distribution (histogram) of all derivative points. As Fig. 2 shows, all the points from the derivative baseline are expected to group around the value 0, forming a peak characterized by the RMS of the noise. On the other hand, the points from the sporadic pulses and/or beats are expected to form the long tails of the distribution. Since the central peak of the distribution carries the information about the sought-after RMS, it needs to be reconstructed by means of (weighted) fitting.

A technicality is related to the treatment of the central bin, corresponding to the derivative value 0. It has been observed that in certain cases an excessive number of points is accumulated in this bin, making it reach high above the rest of the distribution. Depending on the specific signal conditions, this feature has proven to be either beneficial or detrimental to the quality of the fitting procedure. Therefore, the content h_0 of the central (0-th) bin is replaced by:

h_0 → √[ h_0 × (h_{−1} + h_{+1}) / 2 ] ,    (3)

i.e. by the geometric mean between the initial content and the arithmetic mean of the two neighboring bins. Since the geometric mean is biased towards the smaller of the averaged terms, this solution was selected in an attempt to find an ideal compromise between retaining the signature of the original bin content and bringing it down towards the main fraction of the histogram. It was found that after this modification the RMS of the fitted distribution is very well adjusted to the derivative baseline in both cases: when the initial bin content would have worked either to the advantage or to the detriment of the fitting procedure.
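The central-bin replacement described above can be written as a one-line helper (a sketch; h0 denotes the original central-bin content, hm1 and hp1 the contents of the two neighboring bins):

```cpp
#include <cmath>

// Central-bin adjustment: replace h_0 by the geometric mean of its original
// content and the arithmetic mean of the two neighboring bins.
double adjust_central_bin(double h0, double hm1, double hp1) {
    return std::sqrt(h0 * 0.5 * (hm1 + hp1));
}
```

Being a geometric mean, the result is pulled towards the smaller of the two averaged terms, so an overfilled central bin is brought down towards the bulk of the histogram.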

Step 2: adjust the histogram range. After building the initial distribution, taking into account all derivative points and adjusting the central bin, the histogram range is reduced by cutting it symmetrically around 0 until 10% of its content has been discarded. This procedure helps in localizing the relevant part of the distribution by rejecting the sporadic far-away points, thus limiting the range of the distribution to [D_min, D_max], which will be of central importance in defining the weights for the weighted fitting.

Step 3: emphasize the central peak. One must consider that even with appropriate weights, the fitting might still be heavily affected by the long tails of the distribution, increasing the final extracted RMS. In order to compensate for this effect, the central peak is made more pronounced by exponentiating the entire distribution, i.e. by replacing the content h_k of the k-th histogram bin by the following value:

h_k → e^{h_k} − 1 .    (4)
This procedure affects the width of the central peak, narrowing it somewhat when there are no significant tails. The lower extracted RMS is preferred over the higher one, in order for the derivative thresholds not to reject the lowest pulses. As will be explained later, the accidental triggering of lower thresholds by the noise is discarded by the appropriate pulse elimination procedure. Before exponentiating the histogram content, care must be taken to rescale it appropriately – e.g. by scaling the distribution peak to unity – in order to avoid a potential numerical overflow. Furthermore, a consistent normalization is crucial in making the procedure insensitive to the length of the recorded signal (i.e. the initial height of the distribution), since the exponentiation is nonlinear in the bin content h_k.
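The rescale-then-exponentiate step might look as follows (a sketch only; here the peak is scaled to unity before exponentiation, as in Fig. 3, while the exact normalization used in the actual analysis code may differ):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Step 3 sketch: scale the histogram peak to unity, then exponentiate each
// bin. The normalization keeps exp() safe from overflow and makes the result
// insensitive to the absolute number of counts (i.e. the waveform length).
std::vector<double> emphasize_peak(std::vector<double> h) {
    const double peak = *std::max_element(h.begin(), h.end());
    if (peak <= 0.0) return h;          // empty histogram: nothing to do
    for (double& bin : h) bin = std::exp(bin / peak);  // peak bin maps to e
    return h;
}
```

Since exp() grows faster than linearly, the (scaled) peak is amplified relative to the low tails, which is precisely the intended emphasis.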

Step 4: perform the weighted fitting so as to best reconstruct the central peak. The remaining distribution is fitted to a Gaussian shape explicitly assumed to be centered at 0, by minimizing the following expression:

χ² = Σ_k w_k [ h_k − A e^{−x_k² / 2σ²} ]² ,    (5)

where x_k is the abscissa coordinate of the k-th bin within the reduced range [D_min, D_max] and h_k is its content. Parameters A and σ are to be determined by fitting. At the end of the procedure, σ is identified with the RMS of the central peak, i.e. with the RMS of the noise in the derivative. The selection of a Gaussian as a prior is justified by the Central Limit Theorem, applied to a sum of random noise values from Eq. (1). Central to the fitting are the weights w_k, which have been selected to follow the Gaussian dependence:

w_k = e^{−x_k² / 2σ_w²} ,    (6)

with a standard deviation σ_w, fixed by empirical optimization. These weights efficiently suppress the impact from the tails of the distribution, while giving precedence to the central peak. For the fitting a Levenberg-Marquardt algorithm was adopted, as described in Ref. numc . Figure 3 shows the distribution of derivative points from Fig. 2, together with the central peak reconstruction by means of the weighted fitting.

Figure 3: (Color online) Distribution of derivative points from Fig. 2, with the result of the weighted fitting designed to reconstruct the width of the central peak. The dashed line shows the true distribution of points from Fig. 2, arbitrarily scaled to the height of the fitted distribution. The exponentiated distribution was obtained starting from the original distribution scaled to unity.

While the weighted fitting is beneficial for rejecting the long tails of the distribution, unweighted fitting has been found more appropriate for very narrow distributions covering only a few histogram bins. Due to the low number of bins and the rapidly decreasing weighting factors, the weighted fitting procedure is then sensitive only to the narrow top of the distribution, which is effectively treated as flat, yielding an outstretched fit. Therefore, the unweighted fitting to the Gaussian shape from Eq. (5) is also performed. In addition, the RMS of the distribution is calculated directly as: RMS = √( Σ_k h_k x_k² / Σ_k h_k ). The lowest of the three results – from the weighted fitting, the unweighted fitting and the direct calculation – is kept as the final one. The additional fitting and the direct calculation also serve as a contingency in case either of the fitting procedures fails to properly converge.
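The direct RMS estimate and the keep-the-lowest rule can be sketched as follows (hypothetical helper names; x holds the bin centers of the zero-centered histogram and h the bin contents):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Direct RMS of a zero-centered histogram: sqrt(sum h_k*x_k^2 / sum h_k).
double direct_rms(const std::vector<double>& x, const std::vector<double>& h) {
    double num = 0.0, den = 0.0;
    for (size_t k = 0; k < x.size(); ++k) {
        num += h[k] * x[k] * x[k];
        den += h[k];
    }
    return den > 0.0 ? std::sqrt(num / den) : 0.0;
}

// The lowest of the three estimates (weighted fit, unweighted fit, direct
// calculation) is kept, so a failed fit cannot inflate the final thresholds.
double final_rms(double weighted, double unweighted, double direct) {
    return std::min({weighted, unweighted, direct});
}
```

Taking the minimum keeps the derivative thresholds conservative: a spuriously large estimate from a non-converged fit is simply outvoted by the other two.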

2.3 Pulse discrimination

From the derivative noise RMS extracted by one of the previously described procedures, the default values for the derivative crossing thresholds have been selected as ±3.5 × RMS, since this range corresponds to a 99.95% confidence interval under the assumption of normally distributed noise. Since the order of crossing these thresholds (together with some later analysis procedures) depends on the pulse polarity, all signals are treated as negative. This means that the signals are inverted, i.e. multiplied by −1, if they are expected to be positive based on an external input parameter.

Differentiating a unipolar pulse yields a bipolar derivative. Therefore, the derivative of a negative unipolar pulse must, ideally, make 4 threshold crossings in this exact order: lower-lower-upper-upper. However, in the case of the lowest pulses or very high pileup, the integration procedure from Eq. (1) may flatten the final derivative, so that the second threshold crossing never occurs. Hence, the principle of 4 threshold crossings was relaxed in order to facilitate the recognition of these pulses: crossing a single threshold suffices to trigger the pulse recognition. However, if both thresholds are crossed in the order lower-upper, a single pulse is recognized, instead of two. In summary, these are the threshold crossing possibilities that mark the presence of a pulse: lower-lower (without the subsequent upper crossing), upper-upper (without the previous lower crossing) and lower-lower-upper-upper. After initially locating the pulses between the points of the first and the last threshold crossing, their range is further extended until the derivative reaches 0 at both sides, unless neighboring pulses prevent the expansion.

Figure 4: (Color online) Pulse recognition procedure applied to the piled-up pulses. Top panel (a) shows the actual signal, with the red envelope marking the successful separation of the pulses. Bottom panel (b) shows the signal derivative crossing the appropriate thresholds and triggering the pulse recognition from panel (a). Derivative calculated with an unoptimized (too large) step-size is also shown.

Thresholds low enough not to reject the lowest pulses will occasionally be triggered accidentally by the noise. These occurrences are dealt with by a set of elimination conditions, which are determined by means of the external input parameters. These conditions include the lower and upper limits for the pulse width, the lower limit for the pulse amplitude and the lower and upper limits for the area-to-amplitude ratio. The first elimination, based only on the pulse width, is performed immediately after the pulse recognition procedure. The final elimination, based on the pulse amplitudes and areas, may only be performed at a later stage, after the signal baseline has been calculated. However, it is paramount that the first stage of elimination be performed at this point, since several later procedures, such as the baseline calculation, depend on the reported pulse candidates. In case of an excessive number of falsely recognized pulses, the quality of the procedures relying on the reported pulse positions may be compromised.

Figure 4 shows an example of a demanding case of pileup, where two pulses are successfully resolved. The top panel (a) shows the actual signal, with the red envelope confining the separate pulses. The bottom panel (b) shows the optimized signal derivative crossing the thresholds, triggering the pulse recognition. It also illustrates the importance of optimizing the step-size for calculating the derivative from Eq. (1), since a further increase in the step-size (dashed line) would flatten the derivative at the point of the second crossing, preventing the separation of the two pulses from panel (a). For visual purposes, the two displayed derivatives were normalized so that their thresholds coincide.

The described pulse recognition technique was found to perform very well for signals from a wide variety of detectors in use at n_TOF. The example from Fig. 4 confirms that with optimized parameters the procedure is able to resolve quite demanding pileups. Due to the relaxed threshold crossing conditions, it is also quite sensitive even to the lowest pulses, barely exceeding the level of the noise. Since the same sensitivity characterizing the pulse recognition procedure sporadically leads to an accidental threshold crossing due to noise, an elimination procedure has been implemented alongside it.

2.4 Multiple polarities

The adopted pulse recognition procedure lends itself easily to signals that exhibit pulses of both polarities. In this case two derivative passes should be made – one over the regular derivative, one over the inverted derivative (multiplied by −1). Quite often, the reported pulse candidates from the two passes will overlap, since part of a real pulse from one pass will act as a false candidate within the other pass. The pulse candidates from the two passes should be analyzed independently and then submitted to the pulse elimination algorithm. It was observed that even quite relaxed elimination conditions successfully reject the false candidates from the selection of overlapping pulses.

2.5 Bipolar pulses

An additional pulse range adjustment procedure was implemented in order to accommodate bipolar pulses. Since the end of the pulse is determined by the derivative reaching 0 after the first unipolar part of the pulse, the recognition of bipolar pulses stops at the extremum of the second pole. However, once the signal baseline has been calculated, the boundary of the pulse may be shifted towards the point of the baseline crossing, keeping only the first pole of the pulse or fully covering both of them. In the case of two adjacent but not piled-up bipolar pulses, the first one ends at the extremum of its own second pole, where the next pulse is immediately recognized to start, due to the behavior of the derivative. Therefore, the starting points of the pulses need to be adjusted (with respect to the calculated baseline) in accordance with the requirements of a specific signal, so that the finally determined range of the second pulse does not start prematurely, which would also prevent the (optional) expansion of the first pulse.

3 Baseline

Three different baseline methods have been implemented, all of which may be used within the same waveform, depending on the signal behavior. These are the constant baseline, the weighted moving average and the moving maximum. The use of the moving maximum is usually limited to the first part of the waveform, where the effect of the γ-flash upon the signal is extreme (there is also an alternative method of subtracting the baseline distortion pulse shape, designed for this region). The moving average is also related to the baseline distortion by the γ-flash; however, it is often the most appropriate method throughout the entire waveform, especially if the baseline exhibits slow oscillations. The constant baseline is suitable only after the baseline has fully recovered from the initial γ-flash, or if the detector response to external influences is remarkably stable.

3.1 Constant baseline

A constant baseline is calculated as the average of all signal points between the pulse candidates reported by the pulse recognition procedure. In this way any need for an iterative procedure is avoided, while the baseline remains unaffected by the actual pulses.

3.2 Weighted moving average

Window protrudes at the beginning of the waveform
Window is contained within the waveform
Window protrudes at the end of the waveform
Window protrudes at both ends of the waveform
Table 2: List of recursive relations for evaluating the baseline from Eq. (7). The involved terms are defined by Eq. (9). The separate cases refer to the position of the averaging window, defined by the window parameter M, relative to the edges of the waveform composed of L points. Auxiliary constants have been introduced for the efficiency of the calculation.

The moving average is the appropriate method for determining the baseline whenever clear information about the baseline is, in fact, available, i.e. when uninterrupted portions of the baseline may indeed be found within the signal. The following definition is used for the weighted moving average:

B_i = Σ_j K_{i−j} w_j S_j / Σ_j K_{i−j} w_j ,    (7)

with the summation index j running over the averaging window [i−M, i+M], clipped to the waveform bounds [0, L−1]. Here M is the number of points (referred to as the window parameter) at each side of the i-th one to be taken for averaging the signal S, composed of a total of L points. It should be noted that the averaging window is 2M+1 points wide. The weighting kernel K is given by the cosine (i.e. Hann hann ) window, with additional weighting factors w_j that are equal to the number of uninterrupted points within a given stretch of the baseline. Inside the reported pulse candidates these weights should be much lower than unity (w_j ≪ 1), so as to exclude the pulses from the baseline calculation. However, a finite non-zero value is required, in order to avoid division by zero in case the averaging window is completely contained within a pulse. For the weighting factors inside the pulses a small fixed value ε ≪ 1 was adopted. More precisely, for P as the total number of pulses identified inside the waveform – with p_n and q_n denoting the first and the last index of the n-th pulse (n = 1, …, P), respectively – the weighting factors are defined as:

w_j = ε  for p_n ≤ j ≤ q_n (inside the n-th pulse),
w_j = p_{n+1} − q_n − 1  for q_n < j < p_{n+1} (inside an uninterrupted baseline stretch),    (8)

where q_0 ≡ −1 and p_{P+1} ≡ L. Evidently, the window parameter M, given as an external parameter, should be large enough to bridge the baseline at both sides of the widest pulse or the widest expected chain of piled-up pulses. The initial elimination of falsely recognized pulses (based on their widths) also plays a role in this procedure, since every reported pulse interrupts the baseline, affecting the weighting factors w_j. Still, the procedure is quite robust against this change of the weighting factors.

Figure 5: (Color online) Independent examples of the adaptive baseline calculated using the weighted moving average procedure from Eq. (7).

The form of the summation bounds from Eq. (7) properly takes into account the boundary cases, when the averaging window reaches the edges of the signal. Once again, the straightforward implementation of the algorithm for evaluating Eq. (7) is of O(M·L) computational complexity. Hence, recursive relations have been derived, which provide a linear dependence of the number of operations for calculating the baseline throughout the entire waveform: O(L). To this end, a set of auxiliary terms is defined in Eq. (9), allowing Eq. (7) to be rewritten in a form amenable to recursion. The initial values of these terms are calculated directly from Eq. (7). The recursive relations for calculating all subsequent terms are listed in Table 2, according to the position of the averaging window relative to the waveform boundaries. It should be noted that an efficient calculation requires the constant prefactors to be computed only once, instead of repeating the calculation at each step. Figure 5 shows two examples of the performance of the described baseline procedure.
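For reference, a direct (non-recursive) evaluation of Eq. (7) may be sketched as follows. The exact Hann-kernel normalization (here the cosine argument runs over M+1, keeping the kernel strictly positive inside the window) and the pulse weight values are assumptions of this illustration; the recursive relations of Table 2 are expected to reproduce this output at O(L) cost:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Direct O(M*L) evaluation of the weighted moving average from Eq. (7):
// a Hann (cosine) kernel over 2M+1 points, modulated by baseline weights w
// (large on clean baseline stretches, a small value inside pulses).
std::vector<double> moving_average(const std::vector<double>& S,
                                   const std::vector<double>& w, int M) {
    const int L = (int)S.size();
    const double pi = 3.14159265358979323846;
    std::vector<double> base(L);
    for (int i = 0; i < L; ++i) {
        double num = 0.0, den = 0.0;
        for (int j = std::max(0, i - M); j <= std::min(L - 1, i + M); ++j) {
            // Hann kernel, strictly positive for |j - i| <= M
            const double k = 0.5 * (1.0 + std::cos(pi * (j - i) / (M + 1)));
            num += k * w[j] * S[j];
            den += k * w[j];
        }
        base[i] = num / den;   // den > 0 since the weights are never exactly 0
    }
    return base;
}
```

For a constant signal with uniform weights the result is that same constant at every point, including the clipped windows at the waveform edges.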

3.3 Moving maximum

Figure 6: (Color online) Proof of concept for finding the upper signal envelope by combining the forward and backward moving maximum. The tightened envelope is also shown. The signals have been artificially constructed.

The following baseline procedure is appropriate when the information about the signal baseline has been (almost) completely lost due to the sequential and persistent pileup of pulses, while the baseline itself is known not to be constant and no other a priori knowledge about it is available (an example is given later in Fig. 7). In this case the best, if not the only assumption to be made is that the baseline follows the signal envelope, defined by the dips between the pulses, especially those that manage to reach most deeply toward the true baseline. Since all signals are treated as negative, as stated before, the upper envelope needs to be found. This may be done by constructing two moving maxima – one that we refer to as the forward maximum F_i, the other as the backward maximum B_i – and taking the minimum of the two at each point of the signal (the advantages of this kind of competitive approach have already been explored in the past competitive ). We define the forward maximum at the i-th point as the maximal signal value from a moving window of W points up to the i-th one, and the backward maximum as the maximal value from the window of W points starting at it:

F_i = max{ S_j : max(0, i−W+1) ≤ j ≤ i } ,    B_i = max{ S_j : i ≤ j ≤ min(i+W−1, L−1) } .    (11)

As before, L is the total number of points in the waveform, with W as the external input parameter. The upper envelope – following closely the upper edge of the signal, thus defining the baseline – may simply be obtained by taking the pointwise minimum:

E_i = min( F_i , B_i ) .    (12)

Figure 6 illustrates the proof of concept on artificially constructed signals. The straightforward implementation of this procedure is again of O(W·L) computational complexity. Therefore, a very elegant and efficient algorithm was adopted from Ref. max , which significantly speeds up the procedure, bringing it much closer to a linear dependence: O(L). A simplified version of the code from Ref. max , excluding the calculation of the moving minimum and not requiring the deque data structure available in C++, is presented in Table 4 from Appendix A. The envelope obtained in this way may be additionally tightened in order to obtain a smoother and somewhat less artificial baseline. The tightening code, which is more efficient than a naive quadratic approach, is given in Table 5 from Appendix A. Figure 7 shows the result of this procedure on a selected portion of a real signal from a gaseous 3He detector.
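A sliding-window maximum in the spirit of the referenced algorithm may be sketched as follows (our own illustration, not the code of Table 4; a monotonic list of candidate indices is kept in a plain vector, and the window is taken to include the current point):

```cpp
#include <vector>
#include <algorithm>

// Sliding-window maximum over the W points ending at i (forward pass).
// Each index is pushed and popped at most once, giving O(L) total work.
std::vector<double> window_max(const std::vector<double>& S, int W) {
    const int L = (int)S.size();
    std::vector<double> out(L);
    std::vector<int> cand;              // indices with decreasing S values
    size_t head = 0;                    // front of the candidate list
    for (int i = 0; i < L; ++i) {
        while (cand.size() > head && S[cand.back()] <= S[i]) cand.pop_back();
        cand.push_back(i);
        if (cand[head] <= i - W) ++head;   // front candidate left the window
        out[i] = S[cand[head]];
    }
    return out;
}

// Upper envelope: pointwise minimum of the forward and backward moving maxima,
// the backward pass being a forward pass over the reversed signal.
std::vector<double> envelope(const std::vector<double>& S, int W) {
    std::vector<double> F = window_max(S, W);
    std::vector<double> R(S.rbegin(), S.rend());
    std::vector<double> Bk = window_max(R, W);
    std::vector<double> E(S.size());
    for (size_t i = 0; i < S.size(); ++i)
        E[i] = std::min(F[i], Bk[S.size() - 1 - i]);
    return E;
}
```

Running the backward pass over the reversed waveform avoids duplicating the window logic, at the cost of one extra copy of the signal.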

Figure 7: (Color online) Example of the signal from a gaseous 3He detector, which requires the reconstruction of the upper envelope in order to identify the baseline. The envelope is shown both before and after the tightening procedure.

3.4 γ-flash removal

At neutron time-of-flight facilities the most common cause of baseline distortion is the induction of a strong pulse by the intense γ-flash, which is released each time the proton beam hits the spallation target. The response of certain detectors to the γ-flash is remarkably consistent, which allows for a clear identification of the distorted baseline. By properly averaging a multitude of signals from the immediate vicinity of the γ-flash pulse, the detector response to the γ-flash may be recovered in the form of an average baseline distortion pulse shape pu242 . In effect, this pulse shape serves as a priori knowledge of the baseline. In general, the baseline offset may change for various reasons, e.g. by simply adjusting the digitizer settings. Hence, if available, the shape of the distorted baseline is subtracted from the signal only after identifying and subtracting the primary baseline, which is – for obvious reasons – best found as the constant baseline offset. The positioning of the distorted baseline within the signal is performed relative to the γ-flash pulse, by fitting an externally selected portion of the pulse shape to the leading edge of the γ-flash pulse. The fitting routine, which is the same as for the regular pulses, is described in Section 4. Figure 8 shows an example of the adjustment of a distorted baseline to a signal from a MicroMegas detector, clearly revealing the true pulses rising above the baseline, thus providing access to the low time-of-flight, i.e. the high-neutron-energy region.

Figure 8: (Color online) Adjustment of a distorted baseline to a signal from a MGAS detector. The horizontal adjustment is performed relative to the initial, γ-flash pulse. The primary (vertical) offset is identified by the constant baseline procedure.

4 Pulse shape analysis

After baseline subtraction, the amplitude, area, status of the pileup and timing properties such as the time of arrival are determined for each pulse. Three different methods are available for finding the amplitudes: search for the highest point, parabolic fitting to the top of the pulse and a predefined pulse shape adjustment. By pulse shape we refer to the template pulse of a fixed form, given by the tabulated set of points (t_i, s_i), with t_i as the time coordinate of the i-th point and s_i as its height (i.e. the pulse shape value). The optimal pulse shape is best obtained by averaging a large number of real pulses. Several example procedures for excluding unreliable pulses from the pulse shape extraction may be found in Ref. psa_sili .
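As a minimal illustration of the averaging step – assuming positive, equal-length, peak-aligned pulses have already been selected by one of the cited procedures – a template could be built as:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): average many peak-normalized single pulses of equal
// length into one template. Selection of reliable pulses is assumed done.
std::vector<double> average_shape(const std::vector<std::vector<double>>& pulses)
{
    if (pulses.empty()) return {};
    std::vector<double> shape(pulses[0].size(), 0.0);
    for (const auto& p : pulses) {
        double peak = 0.0;
        for (double v : p)                      // positive pulses assumed
            if (v > peak) peak = v;
        for (std::size_t i = 0; i < shape.size() && i < p.size(); ++i)
            shape[i] += (peak > 0.0) ? p[i] / peak : 0.0;
    }
    for (double& v : shape)
        v /= pulses.size();                     // arithmetic mean
    return shape;
}
```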

Though the pulse shape fitting is generally the most appropriate method for pulse reconstruction, it may not always be applicable, especially if the detector exhibits pulses of strongly varying shapes. This is often the case with gaseous detectors, when the shape and length of the pulse depend on the initial point of ionization and/or the details of the particle trajectory inside the gas volume. The area under the pulse may be calculated by simple signal integration or from a pulse shape fit, if the latter option has been activated by means of an external input parameter. Finally, extraction of the timing properties relies on a digital implementation of the constant fraction discrimination, with a constant fraction factor of 30%.
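A digital constant fraction discrimination of this kind can be sketched as follows, for a positive, baseline-subtracted pulse. The linear-interpolation refinement of the crossing time and the function name are illustrative assumptions:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): find where the leading edge of a positive,
// baseline-subtracted pulse crosses the given fraction of its amplitude,
// refining the crossing by linear interpolation between the two samples.
double cfd_time(const std::vector<double>& pulse, double fraction = 0.3)
{
    std::size_t peak = 0;
    for (std::size_t i = 1; i < pulse.size(); ++i)
        if (pulse[i] > pulse[peak]) peak = i;
    const double threshold = fraction * pulse[peak];
    for (std::size_t i = 1; i <= peak; ++i)
        if (pulse[i] >= threshold && pulse[i - 1] < threshold)
            return (i - 1)
                 + (threshold - pulse[i - 1]) / (pulse[i] - pulse[i - 1]);
    return 0.0;  // degenerate pulse: no crossing found on the leading edge
}
```

The returned value is in units of the sampling period, relative to the first sample of the pulse.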

4.1 Pulse shape fitting – the numerical procedure

Pulse shape fitting is a well established method psa_sili ; psa_scint ; psa_review . However, its straightforward implementation is of O(N²) computational complexity – with N as the number of points comprising a typical pulse – whereas our adopted procedure requires only O(N log N) operations per pulse. It is important to note that any pulse shape from the following procedure is of the same sampling rate as the analyzed signal. If there is an initial mismatch between the sampling rates of the externally delivered pulse shape and the real signal, the pulse shape is first synchronized to the signal by means of linear interpolation.
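The synchronization step can be sketched as below, under the assumption of uniform sampling on both sides; the helper name and the period-based interface are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): resample a template recorded with period dt_in
// onto the signal's sampling period dt_out by linear interpolation.
std::vector<double> resample(const std::vector<double>& shape,
                             double dt_in, double dt_out)
{
    std::vector<double> out;
    if (shape.empty()) return out;
    const double span = (shape.size() - 1) * dt_in;
    for (double t = 0.0; t <= span + 1e-12; t += dt_out) {
        std::size_t j = static_cast<std::size_t>(t / dt_in);
        if (j + 1 >= shape.size()) { out.push_back(shape.back()); continue; }
        const double w = (t - j * dt_in) / dt_in;   // fractional position
        out.push_back((1.0 - w) * shape[j] + w * shape[j + 1]);
    }
    return out;
}
```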

Let us consider the predefined (and already synchronized) pulse shape s, consisting of n points, with the m-th one as the highest point (s_m). For a given pulse within the analyzed signal f, the left and right fitting boundaries i_1 and i_2 are determined. These may correspond to the pulse boundaries coming directly from the pulse recognition procedure or may be further modified, depending on the pulse requirements. The pulse shape is shifted along the pulse, so that at each step the m-th pulse shape point is aligned with an i-th pulse point, where i is confined by the fitting boundaries: i_1 ≤ i ≤ i_2. At every position the least squares optimization is performed by minimizing the sum of residuals:

χ²(i) = Σ_{j=−l}^{r} [ f_{i+j} − k·s_{m+j} ]²        (13)
where by l and r we have introduced the number of pulse shape points at each side of the m-th one:

l = min(m − 1, i − i_1),    r = min(n − m, i_2 − i)        (14)
At each alignment position an optimal multiplicative factor k is found from the minimization requirement: ∂χ²(i)/∂k = 0. Introducing the following terms:

F(i) = Σ_{j=−l}^{r} f²_{i+j},    S(i) = Σ_{j=−l}^{r} s²_{m+j},    C(i) = Σ_{j=−l}^{r} f_{i+j} s_{m+j}        (15)
the optimal k may be expressed as:

k(i) = C(i) / S(i)        (16)
The quality of the fit is evaluated at each alignment point by means of a reduced χ²:

χ̃²(i) = [ F(i) − C²(i)/S(i) ] / [ (l + r − 1)·σ² ],    with σ as the root mean square of the signal baseline,        (17)
where the number of points l + r + 1 taken by the fit is reduced by 2 due to the 2 degrees of freedom: the horizontal and the vertical alignment. A fit with a minimal reduced χ̃² is taken as the best result.
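For orientation, a direct-summation version of the per-position step – the optimal scale k = C/S and the residual sum F − C²/S entering the reduced χ² – might look as follows. This is an illustrative sketch with assumed names; the framework replaces these sums by the recursive and FFT-based evaluations described next:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): align template point m on signal point i, then
// accumulate F = sum f^2, S = sum s^2, C = sum f*s over the overlap, giving
// the optimal scale k = C/S and the chi-squared minimum F - C^2/S.
struct FitStep { double k; double residual; };

FitStep fit_at(const std::vector<double>& f, const std::vector<double>& s,
               std::size_t m, std::size_t i)
{
    double F = 0.0, S = 0.0, C = 0.0;
    for (std::size_t j = 0; j < s.size(); ++j) {
        const long long idx = static_cast<long long>(i)
                            + static_cast<long long>(j)
                            - static_cast<long long>(m);
        if (idx < 0 || idx >= static_cast<long long>(f.size()))
            continue;                      // template protrudes past the data
        F += f[idx] * f[idx];
        S += s[j] * s[j];
        C += f[idx] * s[j];
    }
    const double k = (S > 0.0) ? C / S : 0.0;
    return { k, F - k * C };               // F - C^2/S at the optimum
}
```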

  • Pulse shape is contained within the pulse
  • Pulse shape protrudes at the beginning of the pulse
  • Pulse shape protrudes at the end of the pulse
  • Pulse shape protrudes at both ends of the pulse
Table 3: List of recursive relations for calculating the sums from Eq. (15). The different cases cover all possible combinations of summation bounds.

Equation (15) reveals the quadratic nature of the procedure. However, recursive relations for the terms F(i) and S(i) may be obtained, allowing for their calculation using only O(N) operations. These relations are listed in Table 3, according to the manner in which the pulse shape and the fitted portion of the pulse are overlapped. By defining the term-wise inverted array s̄ as s̄_j = s_{n+1−j}, it becomes evident that the final term C(i) from Eq. (15) formally corresponds to a convolution of the partial signal f and the pulse shape s̄. In order to calculate C(i) at each alignment point in the least number of operations possible, a Fast Fourier Transform algorithm – of O(N log N) computational complexity – was adopted directly from Ref. numc .
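The recursive evaluation can be illustrated on the simplest case of Table 3, in which the summation window is fully contained in the signal: the moving sum of squared signal points is then advanced in O(1) per position, giving the F term in O(N) overall. This is a sketch with assumed names, not the framework's code:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): moving sum of squared signal points over a
// fixed-length window, updated recursively by adding the entering point
// and subtracting the leaving one.
std::vector<double> moving_sum_of_squares(const std::vector<double>& f,
                                          std::size_t window)
{
    std::vector<double> F;
    if (window == 0 || f.size() < window) return F;
    double sum = 0.0;
    for (std::size_t i = 0; i < window; ++i)
        sum += f[i] * f[i];                  // first window: direct summation
    F.push_back(sum);
    for (std::size_t i = window; i < f.size(); ++i) {
        sum += f[i] * f[i] - f[i - window] * f[i - window];  // O(1) slide
        F.push_back(sum);
    }
    return F;
}
```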

Once the best pulse shape alignment has been found by means of a minimal reduced χ², the pulse shape is resampled by linear interpolation, constructing the set of intermediate pulse shapes s^(k) (k = 1, …, K − 1), shifted relative to the initial one by a fraction k/K of the sampling period. In symbolic and self-evident notation, these intermediate terms may be defined as:

s^(k)_j = (1 − k/K)·s_j + (k/K)·s_{j+1}        (18)
Evidently, one may treat the initial pulse shape as the 0-th member s^(0), allowing to establish the uninterrupted indexing by k = 0, …, K − 1. For intermediate pulse shapes the least squares adjustment by minimization of the associated Eq. (13) is performed only at the point of the best alignment of the initial pulse shape s^(0), calculating the associated terms from Eq. (15) by direct summation. The adjustment producing a minimal reduced χ² (for any k) is kept as the final result. A fixed value of K has been adopted for the PSA framework described in this work.
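The construction of a single sub-sample-shifted template by linear interpolation can be sketched as below; the names and the carrying-over of the last point are assumptions of this sketch:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): shift a template by the sub-sample fraction k/K
// via linear interpolation between neighboring points.
std::vector<double> shifted_shape(const std::vector<double>& s,
                                  std::size_t k, std::size_t K)
{
    std::vector<double> out(s.size());
    if (s.empty() || K == 0) return out;
    const double w = static_cast<double>(k) / static_cast<double>(K);
    for (std::size_t j = 0; j + 1 < s.size(); ++j)
        out[j] = (1.0 - w) * s[j] + w * s[j + 1];
    out[s.size() - 1] = s.back();           // last point simply carried over
    return out;
}
```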

4.2 Pulse shape fitting – the saturated pulses

An important feature of the adopted pulse shape fitting routines is the exclusion of saturated points from the fitting procedure. Here, saturation is defined by the recorded signal reaching the boundaries of the data range (i.e. the minimal or maximal channel) supported by the data acquisition system (example in Fig. 2). The saturation management may be directly implemented in Eq. (13) through the introduction of appropriate weighting factors w, taking the values 0 or 1:

χ²(i) = Σ_{j=−l}^{r} w_{i+j} [ f_{i+j} − k·s_{m+j} ]²        (19)
The weighting factors are given as w_i = θ(f_i; f_min, f_max) – with f_min and f_max as the saturation boundaries of the data range – where we have introduced the following useful function:

θ(f; f_min, f_max) = 1 if f_min < f < f_max, and 0 otherwise        (20)
Following the same procedure as for obtaining the expressions from Eq. (15), one arrives at the following generalized terms:

F(i) = Σ_{j=−l}^{r} w_{i+j} f²_{i+j},    S(i) = Σ_{j=−l}^{r} w_{i+j} s²_{m+j},    C(i) = Σ_{j=−l}^{r} w_{i+j} f_{i+j} s_{m+j}        (21)
and to the corresponding expression for the reduced χ²:

χ̃²(i) = [ F(i) − C²(i)/S(i) ] / { [ Σ_{j=−l}^{r} w_{i+j} − 2 ]·σ² }        (22)
The drawback of this generalization is immediately evident: the S(i) term from Eq. (21) has become a convolution, in the same way as the C(i) term, thus requiring the application of a Fast Fourier Transform, as opposed to the less computationally expensive recursive relations from Table 3 (recursive relations completely analogous to those from Table 3 may now be used only for the F(i) term). Furthermore, under the assumption of properly set parameters of the data acquisition system, the saturated pulses are expected to appear only very rarely. For this reason it is advisable to keep the separate approaches – the one from Eq. (13) for unsaturated pulses and the one from Eq. (19) for saturated pulses – instead of applying the generalized and more computationally expensive procedure to both types of pulses.
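A minimal sketch of the weighted optimization follows, with weights excluding points at the digitizer range limits; the function name and the strict inequalities are assumptions of this sketch:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): points at (or beyond) the data-range limits get
// weight 0, all others weight 1, so saturated samples do not bias the
// fitted scale factor k = C/S of the weighted sums.
double fit_scale_weighted(const std::vector<double>& f,
                          const std::vector<double>& s,
                          double f_min, double f_max)
{
    double S = 0.0, C = 0.0;                // weighted sums, cf. Eq. (21)
    for (std::size_t j = 0; j < s.size() && j < f.size(); ++j) {
        const double w = (f[j] > f_min && f[j] < f_max) ? 1.0 : 0.0;
        S += w * s[j] * s[j];
        C += w * f[j] * s[j];
    }
    return (S > 0.0) ? C / S : 0.0;
}
```

With the last point of the example below saturated at the upper limit, it is simply dropped from the sums and the scale is recovered from the unsaturated leading points.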

4.3 Pulse shape fitting – the quality control


Figure 9: (Color online) Signal from a NaI detector characterized by a high density of piled-up pulses. The signal reconstructed by means of pulse shape fitting consists of the fitted and superimposed pulse shapes. Inset shows one of the three pulse shapes used, each adjusted to a given amplitude range.

Multiple pulse shapes may be provided as input to the program. In this case the pulse shape adjustment is performed for each pulse shape separately and, among all fits, the one with the minimal reduced χ² is kept. Allowing for the intake of multiple pulse shapes is not only beneficial for detectors exhibiting considerably differing pulses, but was also found especially suitable when the shape of the pulse varies slightly with its amplitude. Hence, among the multiple pulse shapes that may be delivered, each may be best suited to a certain amplitude range. In addition, after each adjustment the fitted pulse shape is subtracted from the signal before proceeding to the next pulse in line. Thus, the pulse shape fitting is fully able to account and correct for pileup effects. Figure 9 shows an example of a demanding signal from a NaI detector – exhibiting a persistent pileup of bipolar pulses – and a complete signal reconstruction by means of pulse shape fitting. Three separate pulse shapes were used, each adjusted to a given amplitude range. One is shown in the inset of Fig. 9.

An additional pulse shape fitting control was implemented in the form of the discrepancy δ – a quantity similar to the reduced χ². Let the fitted pulse shape s̃ be aligned with the pulse in the original signal f, so that the index-to-index correlation s̃_i ↔ f_i is established (we remind that the optimal pulse shape alignment is determined during the fitting procedure). For a total of P pulses, let a_p and b_p be the indices of the first and the last point of the p-th pulse (p = 1, …, P) in the signal. Similarly, let α_p and β_p be the first and the last index of the pulse shape aligned to the p-th pulse. The discrepancy for the p-th pulse is calculated taking into account all the pulse shape points around the fitted pulse – even if they are outside the fitting range – as long as the pulse shape does not intrude into any of the neighboring pulses. In addition, the fitted pulse shape point is taken into account if and only if it is between the signal saturation boundaries f_min and f_max, even if the signal itself is saturated. An explicit expression for the discrepancy takes the form:

δ_p = [ h_p² (ν_p − 2) ]⁻¹ Σ_{i = max(α_p, b_{p−1}+1)}^{min(β_p, a_{p+1}−1)} θ(s̃_i; f_min, f_max) [ f_i − s̃_i ]²        (23)
with b_0 = 0 and a_{P+1} = L + 1 (where L is the total number of points comprising the signal f), and with ν_p as the number of nonvanishing θ-terms from the sum. The θ-function is defined by Eq. (20) (note s̃_i in place of the first argument). If the discrepancy exceeds the preset threshold value, which is set as an external input parameter, the fit is rejected.
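The rejection logic can be sketched as follows for a single pulse. The normalization by the number of contributing points, the strict range inequalities and all names are assumptions of this sketch:

```cpp
#include <cstddef>
#include <vector>

// Sketch (assumed names): compare the aligned template against the signal
// over the template's extent, skip template points outside the digitizer
// range, scale by the squared pulse height, and accept or reject the fit
// against a preset threshold.
bool accept_fit(const std::vector<double>& f,
                const std::vector<double>& fitted,
                double pulse_height, double f_min, double f_max,
                double threshold)
{
    double sum = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < f.size() && i < fitted.size(); ++i) {
        if (fitted[i] <= f_min || fitted[i] >= f_max)
            continue;                           // theta-term vanishes
        const double d = f[i] - fitted[i];
        sum += d * d;
        ++n;
    }
    if (n == 0) return false;                   // nothing to compare against
    const double discrepancy = sum / (n * pulse_height * pulse_height);
    return discrepancy < threshold;
}
```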

Central to the scaling of δ_p is the pulse height h_p determined directly from the highest point of the baseline-corrected signal; not from the height of the fitted pulse shape. As opposed to the reduced χ², the discrepancy has the following advantages:

  • Due to the pulse height h_p replacing the signal baseline RMS, the high pulses – which are well discriminated from the baseline – are clearly favored by the lower discrepancy values, while the fits to the lower pulses are more susceptible to rejection.

  • In case of any systematic difference between the given pulse shape and the pulses in the signal, the terms from Eq. (23) scale with the pulse height h_p; scaling the discrepancy by the same factor compensates for this effect, canceling the negative bias towards the higher pulses.

In addition, adopting the condition expressed through the θ(s̃_i; f_min, f_max) term helps in rejecting the exaggerated fits to severely saturated pulses, such as the ones caused by an intense γ-flash. When such a pulse is saturated for a longer time than a regular pulse would be, only the steep leading edge of the pulse is fitted, due to the exclusion of the saturated points. By rejecting these fits, a subtraction of the overscaled pulse tails is avoided during the pileup correction procedure.

Figure 10: (Color online) Example of the pulse rejection capabilities, based only on the calculated discrepancy between the signal and the adjusted pulse shape.

Figure 10 shows an example of the powerful pulse rejection capabilities, based only on the properly set discrepancy threshold. The single fitted pulse is clearly meaningful, since it significantly deviates from the envelope of the noise. Initially, each of the signal oscillations within a beat is recognized as a potential pulse. Since the shape of these false pulses is incompatible with the given pulse shape, the calculated discrepancy is large and the fit is rejected.

5 Conclusions

The most prominent features of the new pulse shape analysis framework developed for n_TOF-Phase3 have been described, including the pulse recognition, the baseline calculation and the pulse shape fitting procedures. The pulse recognition relies on the calculation of a custom derivative, as a difference between the signal integrals from both sides of a given point. A supporting procedure for defining the derivative crossing threshold was also described, which isolates the approximate root mean square of the derivative baseline, effectively rejecting the contribution from the beats and actual pulses, while avoiding the dependence on a well defined number of clear presamples.

Three different baseline calculation procedures have been adopted. The simplest one is the constant baseline, which requires a single pass through a signal, without any need for iterative techniques. One of two adaptive baseline options relies on the weighted averaging of the signal, being appropriate when clear portions of the baseline are indeed at hand. The second option is appropriate when this condition is not met – due to persistent pileup of pulses, completely concealing the baseline – and no a priori knowledge about the baseline is available. In this case the baseline is found as the upper signal envelope, since all regular pulses are treated as negative. In case some a priori knowledge of the baseline is available – coming from a consistent detector response to an intense γ-flash – the baseline distortion may be identified in the form of an appropriate pulse shape and may be subtracted from the signal, but only after correcting for the primary baseline offset.

The most basic implementations of the previous procedures are of O(N·M) computational complexity, with N as the total number of points in a digitized signal waveform and M as a characteristic filter width of arbitrary size. Single waveforms recorded by the digital data acquisition system at n_TOF may, at present, comprise an extremely large number of points. Hence, the O(N·M) complexity constitutes a significant performance issue that had no alternative but to be resolved. Therefore, for all such procedures fast recursive algorithms were implemented, bringing the computational complexity to the O(N), or at least to the approximate O(N log N) level. For reasons of computational efficiency the pulse shape fitting routine was also described, though the procedure itself is well established. By virtue of a complete a priori knowledge of the pulses, the pulse shape fitting procedure allows to subtract the adjusted pulse shapes from the signal, thus correcting for pileup effects and restoring both the energy and the timing resolution of the detectors which are considerably affected by pileup.


This work was supported by the Croatian Science Foundation under Project No. 1680.

Appendix A Moving maximum code

Common declarations:

    #define MIN(a,b) (a<b?a:b)
    int i,U_first,U_last;
    int U[(const int)(stop_at-start_at+1)];

Forward maximum:

    U_first=0;
    U_last=0;
    U[U_first]=start_at;
    for (i=start_at; i<=stop_at; i++) {
      if (U[U_first]==i-N)
        U_first++;
      while (U_last>=U_first &&
             signal[i]>=signal[U[U_last]])
        U_last--;
      U[++U_last]=i;
      max_forwards[i]=signal[U[U_first]];
    }

Backward maximum:

    U_first=0;
    U_last=0;
    U[U_first]=stop_at;
    for (i=stop_at; i>=start_at; i--) {
      if (U[U_first]==i+N)
        U_first++;
      while (U_last>=U_first &&
             signal[i]>=signal[U[U_last]])
        U_last--;
      U[++U_last]=i;
      max_backwards[i]=signal[U[U_first]];
    }

Combination of the two passes:

    for (i=start_at; i<=stop_at; i++) max[i]=MIN(max_forwards[i],max_backwards[i]);
Table 4: Simplified version of the code from Ref. max , adapted for the calculation of the moving maximum. Code input consists of the array signal and the integer parameters N, start_at and stop_at. Arrays max, max_forwards and max_backwards are to be initialized in advance, having the same number of points as the array signal. At the end of the procedure, array max holds the signal envelope as the final result.
Common declarations (lines 1–4):

     1 #define MAX(a,b) (a>b?a:b)
     2 int i,j,last,node,NODES;
     3 int index[(const int)(stop_at-start_at+1)];
     4 double slope,last_slope,past_slope,A,B;

Forward tightening (lines 5–44):

     5 index[0]=start_at;
     6 NODES=1;
     7 last_slope=-1.e300;
     8 past_slope=-1.e300;
     9 last=start_at;
    10 for (i=start_at+1; i<=stop_at; i++) {
    11   slope=(max[i]-max[last])/(x[i]-x[last]);
    12   if (last_slope>past_slope &&
    13       last_slope>slope) {
    14     index[NODES++]=i-1;
    15     last=i-1;
    16     last_slope=0;
    17     slope=(max[i]-max[last])/(x[i]-x[last]);
    18   }
    19   past_slope=last_slope;
    20   last_slope=slope;
    21 }
    22 index[NODES++]=stop_at;
    23 last=0;
    24 for (i=1; i<NODES; i++) {
    25   if (i==last+1) {
    26     A=(max[index[i]]-max[index[last]])/
    27       (x[index[i]]-x[index[last]]);
    28     node=i;
    29   } else if (index[i]-index[last]<=N) {
    30     slope=(max[index[i]]-max[index[last]])/
    31           (x[index[i]]-x[index[last]]);
    32     if (slope>=A) {
    33       A=slope;
    34       node=i;
    35     }
    36   }
    37   if (index[i]-index[last]>=N || i==NODES-1) {
    38     B=max[index[last]]-A*x[index[last]];
    39     for (j=index[last]; j<=index[node]; j++)
    40       max_forwards[j]=A*x[j]+B;
    41     last=node;
    42     i=last;
    43   }
    44 }

Backward tightening (lines 5–44, mirroring the forward pass):

     5 index[0]=stop_at;
     6 NODES=1;
     7 last_slope=1.e300;
     8 past_slope=1.e300;
     9 last=stop_at;
    10 for (i=stop_at-1; i>=start_at; i--) {
    11   slope=(max[i]-max[last])/(x[i]-x[last]);
    12   if (last_slope<past_slope &&
    13       last_slope<slope) {
    14     index[NODES++]=i+1;
    15     last=i+1;
    16     last_slope=0;
    17     slope=(max[i]-max[last])/(x[i]-x[last]);
    18   }
    19   past_slope=last_slope;
    20   last_slope=slope;
    21 }
    22 index[NODES++]=start_at;
    23 last=0;
    24 for (i=1; i<NODES; i++) {
    25   if (i==last+1) {
    26     A=(max[index[i]]-max[index[last]])/
    27       (x[index[i]]-x[index[last]]);
    28     node=i;
    29   } else if (index[last]-index[i]<=N) {
    30     slope=(max[index[i]]-max[index[last]])/
    31           (x[index[i]]-x[index[last]]);
    32     if (slope<=A) {
    33       A=slope;
    34       node=i;
    35     }
    36   }
    37   if (index[last]-index[i]>=N || i==NODES-1) {
    38     B=max[index[last]]-A*x[index[last]];
    39     for (j=index[last]; j>=index[node]; j--)
    40       max_backwards[j]=A*x[j]+B;
    41     last=node;
    42     i=last;
    43   }
    44 }

Combination of the two passes (line 45):

    45 for (i=start_at; i<=stop_at; i++) max[i]=MAX(max_forwards[i],max_backwards[i]);
Table 5: Code for tightening the signal envelope calculated by the code from Table 4. The final result is again stored in the array max, i.e. its contents are overwritten.

Table 4 presents a computationally efficient C++ code for finding the upper envelope of a signal. The code is a simplified version of the one proposed in Ref. max . The array signal contains the signal. The external parameters N, start_at and stop_at define, respectively: the moving window width, the starting point and stopping point (start_at ≤ stop_at) of the fraction of the waveform to be taken into account. The arrays max, max_forwards and max_backwards are of the same length as the array signal (thus establishing one-to-one correspondence between the array terms; if necessary, the code can also be adjusted so as to use only stop_at-start_at+1 points for the array max and to completely avoid arrays max_forwards and max_backwards). At the end of the procedure, the baseline, i.e. the signal envelope is stored in array max.

Table 5 presents the code for tightening the envelope obtained using the procedure from Table 4. The main inputs to this code are an array x of positions of signal points and an array max from the previous procedure. As before, arrays max_forwards and max_backwards are only used as convenient temporary storage. The code proceeds by identifying the nodes, which define the locally steepest lines, when drawn from a previous node. The set of nodes is, in general, different when searched from the beginning or the end of the waveform. It was empirically found that the initialization last_slope=0 from line 16 is especially favorable – in contrast to initializations to extreme values – improving the quality of the tightened baseline. The nodes are then checked for a maximum of the slope between them, within a window of a preset width. If no node is contained within this window, the next available node is used. It is to be noted from lines 29 and 37 that the same moving window width N was used for this procedure as for finding the initial (untightened) envelope. From the results obtained by going forwards and backwards through the waveform, the final tightened envelope is determined as the pointwise maximum between the two.


  • (1) C. Rubbia, S. Andriamonje, D. Bouvet-Bensimon, et al., A high Resolution Spallation driven Facility at the CERN-PS to measure Neutron Cross Sections in the Interval from 1 eV to 250 MeV, CERN/LHC/98-02 (1998).
  • (2) C. Rubbia, S. Andriamonje, D. Bouvet-Bensimon, et al., A high Resolution Spallation driven Facility at the CERN-PS to measure Neutron Cross Sections in the Interval from 1 eV to 250 MeV, CERN/LHC/98-02-Add. 1 (1998).
  • (3) E. Chiaveri, Proposal for n_TOF Experimental Area 2 (EAR-2), CERN-INTC-2012-029/INTC-O-015 (2012).
  • (4) C. Weiß, E. Chiaveri, S. Girod, et al., Nucl. Instr. and Meth. A 799 (2015) 90.
  • (5) S. Barros, I. Bergström, V. Vlachoudis and C. Weiß, J. Instrum. 10 (2015) P09003.
  • (6) C. Guerrero, A. Tsinganis, E. Berthoumieux, et al., Eur. Phys. J. A 49 (2013) 27.
  • (7) S. Marrone, P. F. Mastinu, U. Abbondanno, et al., Nucl. Instr. and Meth. A 517 (2004) 389.
  • (8) C. Weiß, E. Griesmayer, C. Guerrero, et al., Nucl. Instr. and Meth. A 732 (2013) 190.
  • (9) C. Guerrero, U. Abbondanno, G. Aerts, et al., Nucl. Instr. and Meth. A 608 (2009) 424.
  • (10) R. Plag, M. Heil, F. Käppeler, et al., Nucl. Instr. and Meth. A 496 (2003) 425.
  • (11) Y. Giomataris, Ph. Rebourgeard, J. P. Robert and G. Charpak, Nucl. Instr. and Meth. A 376 (1996) 29.
  • (12) S. Andriamonje, M. Calviani, Y. Kadi, et al., J. Korean Phys. Soc. 59 (2011) 1597.
  • (13) D. B. Gayther, Metrologia 27 (1990) 221.
  • (14) C. Paradela, L. Tassan-Got, L. Audouin, et al., Phys. Rev. C 82 (2010) 034601.
  • (15) U. Abbondanno, G. Aerts, F. Álvarez, et al., Nucl. Instr. and Meth. A 538 (2005) 692.
  • (16) E. Berthoumieux, Preliminary report on BaF Total Absorption Calorimeter test measurement, Rap. Tech. CEA-Saclay/DAPNIA/SPhN (2004).
  • (17) W. Xiao, A. T. Farsoni, H. Yang, D. M. Hamby, Nucl. Instr. and Meth. A 769 (2015) 5.
  • (18) R. J. Cooper, D. C. Radford, K. Lagergren, et al., Nucl. Instr. and Meth. A 629 (2011) 303.
  • (19) S. N. Liddick, I. G. Darby, R. K. Grzywacz, Nucl. Instr. and Meth. A 669 (2012) 70.
  • (20) C. Guerrero, D. Cano-Ott, M. Fernández-Ordóñez, et al., Nucl. Instr. and Meth. A 597 (2008) 212.
  • (21) J. Kamleitner, S. Coda, S. Gnesin, Ph. Marmillod, Nucl. Instr. and Meth. A 736 (2014) 88.
  • (22) W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing (Second Edition), Cambridge: Cambridge University Press (1992).
  • (23) F. J. Harris, Proc. IEEE 66, no. 1 (1978) 51.
  • (24) M. Niedźwiecki, W. A. Sethares, IEEE Trans. Signal Process. 43, no. 1 (1995) 1.
  • (25) D. Lemire, Nord. J. Comput. 13(4) (2006) 328.
  • (26) A. Tsinganis, E. Berthoumieux, C. Guerrero, et al., Nucl. Data Sheets 119 (2014) 58.