Automatic trajectory recognition in Active Target Time Projection Chambers data by means of hierarchical clustering

07/10/2018

by Christoph Dalitz, et al. (HS Niederrhein, Berkeley Lab)

The automatic reconstruction of three-dimensional particle tracks from Active Target Time Projection Chamber data can be a challenging task, especially in the presence of noise. In this article, we propose a non-parametric algorithm that is based on the idea of clustering point triplets instead of the original points. We define an appropriate distance measure on point triplets and then apply a single-link hierarchical clustering on the triplets. Compared to parametric approaches like RANSAC or the Hough transform, the new algorithm has the advantage of potentially finding trajectories even of shapes that are not known beforehand. This feature is particularly important in low-energy nuclear physics experiments with ATs operating inside a magnetic field. The algorithm has been validated using data from experiments performed with the Active Target Time Projection Chamber (AT-TPC) at the National Superconducting Cyclotron Laboratory (NSCL). The results demonstrate the capability of the algorithm to identify and isolate particle tracks that describe non-analytical trajectories. For curved tracks, the vertex detection recall was 86% and the precision 94%. For straight tracks, the vertex detection recall was 96% and the precision 98%. In the case of a test set containing only straight linear tracks, the algorithm performed better than an iterative Hough transform.


1 Introduction

One of the present aims of modern low-energy nuclear physics is to provide a more complete understanding of the behavior of subatomic matter under large isospin (i.e., large neutron-proton imbalance) Thoennessen2011 . Near the drip lines of the nuclear landscape, atomic nuclei exhibit astonishing properties such as the disappearance of magic numbers, exotic collective modes (i.e., the development of neutron/proton halos and giant resonances), or a rearrangement of the matter into cluster and molecular structures. In addition, the vast scenario that composes the nucleosynthesis of heavy elements through different astrophysical processes primarily involves very exotic nuclei. Aware of the impact of this research, the nuclear physics community is striving to upgrade and construct state-of-the-art facilities worldwide in order to achieve the exotic beam intensities required for a comprehensive and efficient study of the unexplored region of the landscape Motobayashi2014 .

The production of exotic beams also involves the development of associated instrumentation capable of providing the observables of interest in a reasonable amount of time at the expected rates. Active Target Time Projection Chamber (henceforth AT) detectors are designed to provide high luminosity (rate of interactions) while preserving the resolution needed to obtain the variables from which relevant information about the nuclei is inferred BeceiroNovo2015124 . ATs consist of a relatively large gaseous medium that acts as target and detector at the same time. They provide a target thickness that can be up to two orders of magnitude larger than conventional solid targets, as well as the capability of recording three-dimensional particle tracks with high geometrical efficiency, covering a solid angle of 4π. In this sense, AT detectors are able to push the rate limits at which low-energy nuclear physics experiments can be performed, with intensities as low as 100 pps (particles per second). Moreover, placing the detector inside a magnetic field enables high-resolution spectrometer operation BRADT201765 ; WUOSMAA20071290 .

Contrary to High Energy Physics (HEP) experiments, nuclear reactions at low energy (in the range of a few hundred keV up to several hundred MeV) produce charged particles in a broad range of atomic number (Z) and mass (A), and also with very different kinetic energies. Since the reaction of interest occurs inside the AT medium, the detector has to cope with a large dynamic range in energy, which is directly reflected in the distribution of patterns and shapes of the tracks that the particles describe in the gas under a magnetic field. While particles with relatively high energy will describe helical trajectories (whose projections can be considered circular), the ones with low energy will describe non-linear trajectories as they are progressively stopped in the gas. In addition, particle tracks are connected through the vertex of the reaction, from which the energy of the beam at the interaction point can be inferred.

Generally speaking, ATs are versatile detectors that can be deployed in different experiments covering a wide variety of reactions, such as resonant scattering or nucleon transfer, that are normally performed using beams with energies from a few up to several tens of MeV/u MITTIG2015494 ; BeceiroNovo2015124 ; MITTIG2001495 . The particle multiplicity in these reactions is relatively low, from two up to several particles, depending on the physics case of interest. In order to completely reconstruct the kinematics of the reaction, the information from the beam and from each reaction product has to be extracted. That includes the angles with respect to the beam direction, and their range and/or their curvature, depending on whether a magnetic field was used. Other types of reactions for which ATs are one of the most promising approaches are elastic/inelastic scattering and charge exchange with fast beams. In this case, the relevant kinematic information is obtained from particles that are emitted at very forward angles in the center-of-mass frame, which essentially implies very low kinetic energy (or short range).

Although ATs are the answer to many experimental demands in low-energy nuclear physics experiments, the analysis of the data is a complicated and demanding task that needs to be performed in different steps. Before performing any comprehensive kinematic analysis, one needs to break down every image recorded by the AT in order to extract different features. The purpose is twofold: to identify events where a reaction happened and to isolate every particle track. Due to the large amount of collected data (the data stream during an experiment can be of the order of hundreds of MB/s), an efficient and fast discrimination of useful events is mandatory. The requirements are quite restrictive: the algorithm needs to isolate every track, even very short ones, while preserving the information about the reaction vertex.

A common approach for detecting shapes in noisy point clouds consists in describing the shapes parametrically and searching for parameter values representing shapes with many points. This requires the shape of the tracks to be known a priori, and it leads to a voting scheme for parameter values, which can be done either exhaustively for all points (Hough transform jeltsch16 ) or by random sampling (RANSAC fischler87 ). In the absence of a magnetic field, the tracks are straight lines, which can easily be described parametrically, and both RANSAC and the Hough transform have been applied to this special case dalitz17 ; ayyad18 . This is not generalizable to the case of unknown shapes like those that occur in the presence of a magnetic field.

We therefore present a new, non-parametric approach in the present article. It is based on first building point triplets, which are then partitioned into clusters, such that each cluster represents a track. The idea of grouping triplets was recently suggested by Lezama et al. for detecting continuations in 2D dot patterns lezama17 . Their algorithm looks for symmetric triplets only, and the grouping is based on overlapping points. Although that algorithm could be generalized to 3D in a straightforward way, it is applicable only in the special case of locally equidistant points without random perpendicular spread around the actual curve. As these assumptions do not hold in the case of AT data, we change both steps of the algorithm: firstly, we do not look for symmetric triplets, but for triplets with approximately collinear branches, and, secondly, we group the triplets with a single link hierarchical clustering utilizing an appropriately defined triplet distance metric.

The algorithm has been developed within the framework of the AT detector developed at the NSCL (the so-called AT-TPC), which operates inside a solenoid that provides a magnetic field up to 2 T BRADT201765 . The source code of this framework is available on GitHub (https://github.com/ATTPC/ATTPCROOTv2). Considering the conditions in which low-energy physics experiments are performed, the particles of interest describe trajectories that cannot be described by an analytical expression in closed form. This algorithm has been tailored to extract such curved tracks, described by every reaction partner, recorded on an event-by-event basis. Once the tracks are isolated, a Monte Carlo fit is applied to extract the variables of interest with better precision BRADT201765 . The fit makes use of the parameters of each curve, namely its curvature and angle with respect to the beam direction, to numerically generate a collection of simulated tracks that are compared to the experimental one using an objective function. Therefore, the algorithm must provide the initial parameters with enough precision to ensure the proper convergence of the fit. With a broad experimental program (a list of approved experiments at the NSCL can be found at https://enterprise.nscl.msu.edu/completedExperiments/: e18027, e18019, e18008, e17504, e17025, e15250, e15238), the development of a generic and fast algorithm that addresses this situation is mandatory for low-energy physics experiments using ATs in magnetic fields.

The present article is organized as follows: in section 2, we give a brief overview of the AT measurement technique, and in section 3 we describe the three experimental data sets used throughout this article. Section 4 describes the algorithm in detail, section 5 presents its performance on our test data sets, and section 6 makes recommendations for practical deployments of the algorithm.

2 Active target time projection chambers

The working principle of ATs is the same as that of Time Projection Chambers Nygren_1974 , with the particularity that the tracking medium acts as the target. This limits the type of gas used for tracking, since typical experiments in nuclear physics are performed with proton, deuteron, 3He or 4He targets, among others. The detector consists of a gas volume in which a strong uniform electric field is applied along one of the volume coordinates. The uniform electric field is created by a set of electrodes defining a drift volume (field cage) with a geometry defined by the requirements of the detector. Ionization electrons released by particles that cross the volume drift towards the anode electrode, where a segmented readout plane is deployed. Therefore, each ionization electron cloud is projected onto a pad of the plane, from which the x and y coordinates can be determined. The third coordinate, z, is deduced from the drift time of the electrons, assuming a constant drift velocity that depends on the gas and on the voltage. The pads also collect the total charge, which is proportional to the local energy loss. Fig. 1 illustrates the main working principle of ATs using the prototype AT-TPC (pAT-TPC) as an example PhysRevC.87.054301 ; Suzuki12 . The detector has a cylindrical configuration with the electric field applied between the cathode and the pad plane (right end). The beam enters the chamber through the cathode end and is slowed down in the gas. When a reaction takes place with one of the target nuclei at the reaction vertex, reaction products are emitted with a characteristic angle and energy distribution. Every particle track is projected onto the pad plane as explained before: the beam and the products are projected into a small spot and a straight line, respectively, as can be seen in the figure. The energy of the reaction products can be inferred from the track length and from their characteristic energy loss pattern.

Figure 1: Schematic view of the prototype AT-TPC (pAT-TPC). The solid and dashed arrows represent particles and ionization electrons, respectively. The reaction vertex is also indicated. More details can be found in Ref. PhysRevC.87.054301 .

Depending on the reaction of interest and also on the energy of the beam, particles with relatively large energy can escape from the detector. In this case, the identification of the particle and the determination of its energy are no longer possible, because part of the track is lost. As a consequence, the total efficiency of the detector is reduced. In order to extend the energy dynamic range of the detector and preserve its excellent efficiency, one solution is to place the AT inside a solenoid magnet. This enables the measurement of the magnetic rigidity of the particle by determining the radius of curvature of the track. The energy of the particle (and also its nature) can be determined with a very simple relationship:

Bρ = p/q    (1)

where B is the magnetic field, ρ is the radius of curvature, p is the momentum of the particle (from which the energy is inferred), and q is its charge. The AT-TPC, which is an improved version of the pAT-TPC featuring a larger volume and higher granularity, is presently the only AT operated in a magnetic field. The detector consists of a cylindrical gas volume with a pad plane segmented into triangular pads of two different heights in the inner and outer regions. The detector is placed inside a large-bore solenoid magnet capable of producing a uniform magnetic field up to 2 T. Further details about the detector can be found in Ref. BRADT201765 . The detector was successfully commissioned using the He+He (detector filled with He gas ayyad18 ) and Ar+proton (detector filled with CH4 gas) Bradt2018 reactions in inverse kinematics. These two reactions, which represent typical experiments in low-energy nuclear physics, provide very different benchmark points: in the former, the angle between the two identical He particles, which amounts to exactly 90°, can be used to infer the resolution of the detector and of the tracking method. The latter reaction provides a rather different scenario: the mass of the Ar is approximately 46 times larger than that of the proton, and therefore the tracks of the particles and their energy loss profiles exhibit a very different topology. In either case, the trajectories cannot be described by analytical expressions, which emphasizes the need for a novel tracking algorithm better suited to this type of experiment using the emerging AT technology in nuclear physics facilities worldwide.
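To make the rigidity relation of Eq. (1), Bρ = p/q, concrete, the following sketch computes a particle's momentum and relativistic kinetic energy from the measured radius of curvature. The function names, units, and the example values (field, radius, proton mass) are illustrative assumptions, not detector specifications.

```python
# Sketch: inferring particle momentum from track curvature via Eq. (1),
# B*rho = p/q. All names and numbers here are illustrative assumptions.

def momentum_mev_per_c(b_tesla, radius_m, charge_units):
    """Momentum in MeV/c for a particle of charge q (in units of e)
    bending with radius rho (m) in a field B (T): p = q * B * rho."""
    # The factor 299.792458 converts T*m to MeV/c for unit charge.
    return 299.792458 * charge_units * b_tesla * radius_m

def kinetic_energy_mev(p_mev_c, mass_mev):
    """Relativistic kinetic energy T = sqrt(p^2 + m^2) - m (all in MeV)."""
    return (p_mev_c**2 + mass_mev**2) ** 0.5 - mass_mev

# Example: a proton (m ~ 938.272 MeV/c^2, q = 1) with rho = 0.25 m in B = 2 T
p = momentum_mev_per_c(2.0, 0.25, 1)   # ~149.9 MeV/c
t = kinetic_energy_mev(p, 938.272)     # ~11.9 MeV
```

This is why the magnetic field extends the dynamic range: particles that would escape an unmagnetized detector can still be identified from their curvature alone.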

3 Data description

The image of each track is recorded by measuring the drifting electrons that arrive at the highly segmented pad plane (10,240 pads in total). The charge collected in each pad generates a pulse whose amplitude is proportional to the charge. The pulses are digitized in parallel using a dedicated and generic data-acquisition system (General Electronics for TPCs Pollacco201881 ) consisting of 40 ASIC boards featuring 256 12-bit ADC channels each. Each pulse is sampled into 512 time bins. The centroid of each pulse is used to extract the mean time, which is used to infer the z coordinate with a simple relationship:

z = v_d · t    (2)

where v_d is the drift velocity of the ionization electrons; it is about 2.0 cm/μs for the He gas and differs for the CH4 gas. Finally, the x and y coordinates are inferred from the centroids of the triangular pads, which are 0.5 cm high in the inner region of the pad plane, with taller pads deployed in the outer region.
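Assuming the linear relation of Eq. (2), z = v_d · t, a minimal sketch of the z reconstruction from the pulse centroid might look as follows. Both the drift velocity and the sampling period below are illustrative assumptions, not measured detector values.

```python
# Sketch of the z reconstruction of Eq. (2), z = v_d * t, where t is the
# mean time extracted from the pulse centroid. Parameter values are
# illustrative assumptions only.

def z_coordinate(time_bin, v_drift_cm_per_us=2.0, sampling_period_us=0.08):
    """z position (cm) from the mean time expressed in sampling bins."""
    return v_drift_cm_per_us * time_bin * sampling_period_us
```

Note that, as remarked below for Fig. 3, only relative z positions are meaningful unless the absolute timing reference of the event is known.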

(a) Sketch of the AT-TPC
(b) Hit pattern visualized with the AT-TPC analysis software
Figure 2: A sketch of the AT-TPC representing a He+He event and a 3-dimensional hit pattern. A strong electric field (E) is applied within the gas volume to drift the electrons (e) to the pad plane where the tracks are projected on a triangular pad structure. Operating the detector in a magnetic field (B) allows the measurement of the curvature of the track.

A sketch of the AT-TPC and a 3-dimensional hit pattern of a He+He reaction event are shown in Fig. 2. In this event, a He beam particle penetrates the detector from the right side and interacts with a He atom of the gas. After the reaction, both particles are emitted in the forward direction, with a characteristic angle and energy, describing curved trajectories. Both particles stop in the gas as they continuously lose energy. Each point of the hit pattern corresponds to a pad that recorded the ionization electrons (e) that drifted from the point where the particle ionized the gas to the pad plane (situated on the left end cap). Fig. 3(a) shows the projection of the ionization electrons onto the pads of the plane for this He+He event. In order to remove noise points, the resulting hit pattern was filtered using the Statistical Outlier Removal filter provided by the Point Cloud Library (PCL) Rusu11 . It is worth mentioning that, due to the stochastic nature of the time structure of the beam, two additional (pile-up) He beam particles entered the detector within the same time window as the particle that generated the event. In order to mitigate the effect of pile-up, which may cause a misidentification of the beam particle that reacted, the detector was tilted by around 7° BRADT201765 . As can be seen in Fig. 3(a), the pile-up particles are projected into two short straight lines emerging from the same vertex. In addition, the track of the beam particle that generated the event does not appear in the hit pattern, probably because the reaction happened very close to the entrance window. Note that the absolute z coordinate depends on the timing of the particle that is detected first.

(a) He+He event
(b) Ar+p event
Figure 3: Projections on the pad plane of two different events. The color scale shows the activation level or charge collected in each pad. As the particle stops, the radius of curvature decreases and the energy loss increases.

Fig. 3(b) shows the pad plane image of a proton emitted in the Ar+p reaction Bradt2018 . In this case, the gain in the region of the pad plane where the Ar was projected was highly reduced to avoid saturation effects, and therefore the corresponding track is not shown. As can be clearly seen, proton tracks have a stronger curvature that changes as the particle slows down. The complexity in this case resides in the fact that the phase space (i.e., the number of possible angle-energy states) of the proton can be vast, depending on the experiment. Since the length and curvature of the spiral strongly depend on the energy of the particle, it is extremely challenging to find a unique solution based on parametric approaches, as these curves cannot be described by a simple analytical expression.

According to these two scenarios, which represent typical experiments with ATs, the track finding algorithm proposed in this work has to cope with several difficulties: multiple tracks have to be found and categorized to provide an efficient pile-up rejection mechanism; the tracks have to be found without any prior knowledge of their shape; the vertex has to be found even in the absence of the beam particle track; and the algorithm has to be immune both to the large energy dynamic range of the different particles and to noise sources (electronics, detector discharges, etc.). The algorithm must also be capable of reconstructing a non-continuous track with multiple gaps in the hit pattern, like the ones shown in Fig. 3. In addition, taking into account the amount of data generated during an experiment, the algorithm must be computationally inexpensive or, in the best case, run concurrently. While tracking every particle is desirable, the reconstruction of the kinematics of the reaction requires only the inference of the energy and angle of one of the reaction products of a binary reaction, preferably the lighter one. This entails considerable flexibility in the design of the algorithm, which can be adapted to different situations.

Step                       Parameter
1) neighborship smoothing  r = neighbor distance
2) triplet building        k = tested neighbors of triplet mid point
                           n = max number of triplets to one mid point
                           θ_max = threshold on the angle θ between the two triplet branches
3) triplet clustering      s = distance scale factor in metric
                           t = threshold for cdist in clustering
4) pruning                 m = min number of triplets per cluster
Table 1: Overview over the algorithm and the external parameters controlling each step. Default values are recommendations taken from section 5.

In order to test the performance of the proposed algorithm, we have prepared three different sets of data extracted from the above-mentioned experiments: two sets extracted from the He+He experiment with (data set I) and without a magnetic field (data set III), respectively, and a set consisting of Ar+p events with a magnetic field (data set II). Each data set is a collection of events acquired during these experiments. The He+He data sets are characterized, as explained before, by two tracks of different length, curved or straight depending on the magnetic field, and an additional straight track corresponding to the beam particle. These three tracks are connected by a common point known as the reaction vertex. In the Ar+p data, only proton tracks, characterized by very different radii of curvature, make up the hit pattern.

4 Track detection algorithm

The algorithm for grouping the points into possibly overlapping clusters or noise consists of four steps:

  1. smoothing by position averaging of neighboring points

  2. finding triplets of approximately collinear points

  3. single link hierarchical clustering of the triplets

  4. pruning by removal of small clusters

Each step can be controlled by external parameters, which are listed in table 1. The parameter values should be adjusted for each experiment. The default values are only meant as first guesses when no other information is available.

The individual steps and the meaning of the parameters are described in detail in the following subsections. It should be noted that, although steps 2)-4) are done on the smoothed data points, the point indices are not lost during smoothing, so that the final clustering actually is a grouping of the original points.

4.1 Pre-processing

Some steps in the algorithm are controlled by thresholds on distances. Although these thresholds can be provided by an operator, it is preferable to have reasonable guesses for these thresholds based on internal properties of the data. One particular property is the distance between neighboring points. This distance varies with the physical properties of the particles (e.g. their velocity) and the ambient chamber gas (e.g. its pressure) and is thus different from experiment to experiment. From a geometrical point of view, it has the effect of a scale parameter and can thus be considered as a characteristic length for the experimental setup. It is also an indicator for the spread of the points around the actual particle track.

Figure 4: Effect of the neighborhood smoothing.

To obtain an estimator for the point distance, we compute, for each point, the distance to its nearest neighbors. The first quartile of all these distances is our characteristic length. We have chosen the first quartile because it typically lies within a track of a slower particle, even in the presence of considerable noise. This value will vary from point cloud to point cloud even for the same experimental setup, but it can nevertheless be considered a scale parameter, because when all coordinates are scaled with the same factor, the characteristic length scales with this factor, too.

To reduce the spread of the measured points around the particle trajectories, we perform a neighborship smoothing, which replaces the coordinates of each point by the mean of the points in its neighborship. A point is considered to belong to the neighborship of another point if their distance is less than a threshold r. The averaging sum always contains at least one point, because a point is always part of its own neighborship. As the characteristic length is a measure for the spread of the points, we use it as a scale factor for the default value of r. Figure 4 shows the effect of this smoothing operation: points without neighbors in the range remain unchanged, whereas points in denser regions are moved towards the middle axis of the track.
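The pre-processing step above can be sketched with SciPy's kd-tree. This is a minimal illustration, not the authors' implementation (which uses PCL in C++); the function names are ours, and the choice of how r relates to the characteristic length is left to the caller.

```python
# Sketch of Sec. 4.1: estimate the characteristic length as the first
# quartile of nearest-neighbor distances, then replace each point by the
# centroid of its r-neighborhood. Names are illustrative, not the paper's.
import numpy as np
from scipy.spatial import cKDTree

def characteristic_length(points):
    """First quartile of the nearest-neighbor distances."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)   # k=2: nearest neighbor besides the point itself
    return np.quantile(dists[:, 1], 0.25)

def smooth(points, r):
    """Replace each point by the centroid of its r-neighborhood.
    The output keeps the original point order, so indices are preserved."""
    tree = cKDTree(points)
    neighborhoods = tree.query_ball_point(points, r)
    return np.array([points[idx].mean(axis=0) for idx in neighborhoods])
```

Because the output preserves point order, the later clustering of smoothed points can be mapped back to the original points, as noted in section 4.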

4.2 Triplet grouping

The second step in the algorithm consists of building groups of three approximately collinear points, i.e., triplets. An example of a triplet is shown in figure 5(a). As TPC data are ordered in time direction, the indices of the node points a, b, and c are subject to the condition i_a < i_b < i_c. The cosine of the angle θ between the two triplet branches is given by

cos θ = ⟨b − a, c − b⟩ / (‖b − a‖ · ‖c − b‖)    (3)

For the hierarchical clustering described in the following subsection, each triplet is represented by two vectors, the midpoint m and the unit direction d between the outer points:

m = b,   d = (c − a) / ‖c − a‖    (4)
(a) nodes and angle
(b) midpoint and direction
Figure 5: Properties of a triplet.

The triplets are computed from the point cloud in a loop over all points. Each point is considered as a possible midpoint b, and all points among its k nearest neighbors are tried as candidate points a and c. The triplet is discarded if the triplet angle θ is greater than the threshold θ_max, or, equivalently, if

cos θ < cos θ_max    (5)

From the remaining triplets of each midpoint, only the n triplets with the smallest angle θ, or, equivalently, with the greatest cos θ, are kept. For N points, the total number of triplets is thus not more than n·N.
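The triplet-building step can be sketched as follows. For brevity, this sketch omits the time-ordering condition on the point indices mentioned above, and the parameter defaults (k, n, cos_min) are illustrative assumptions rather than the paper's recommended values.

```python
# Sketch of Sec. 4.2: every point is tried as a midpoint b, candidate outer
# points a and c come from its k nearest neighbors, and only the n triplets
# with the most collinear branches are kept. Defaults are illustrative.
import numpy as np
from itertools import combinations
from scipy.spatial import cKDTree

def build_triplets(points, k=12, n=2, cos_min=0.9):
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k + 1)      # first neighbor is the point itself
    triplets = []
    for b, candidates in enumerate(nn):
        scored = []
        for a, c in combinations(candidates[1:], 2):
            u = points[b] - points[a]        # branch a -> b
            v = points[c] - points[b]        # branch b -> c
            norm = np.linalg.norm(u) * np.linalg.norm(v)
            if norm == 0.0:
                continue
            cos_angle = np.dot(u, v) / norm
            if cos_angle >= cos_min:         # approximately collinear branches
                scored.append((cos_angle, int(a), int(c)))
        scored.sort(reverse=True)            # most collinear first
        triplets += [(a, b, c) for _, a, c in scored[:n]]
    return triplets
```

Keeping only the n best triplets per midpoint is what bounds the triplet count to O(n·N) and keeps the later clustering step tractable.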

4.3 Hierarchical clustering

The third step of the algorithm consists in a hierarchical clustering, which starts with each triplet as a separate cluster and merges two clusters in each iteration. The exact procedure is described in Algorithm 1. It depends on a distance measure on clusters that can be constructed from a distance measure on triplets in different ways, known as complete link, average link, or single link clustering theodoridis09 . In our situation, only the single link method is appropriate because, for curved tracks, the angle distance between triplets of the same cluster can become large. The single link method defines the cluster distance as

cdist(C1, C2) = min { d(τ1, τ2) : τ1 ∈ C1, τ2 ∈ C2 }    (6)
Require: set of triplets T, threshold t on cluster distance
Ensure: triplet clustering C
   C ← { {τ} : τ ∈ T }
   while |C| > 1 do
      from all pairs Ci, Cj ∈ C, select the pair with smallest distance cdist(Ci, Cj)
      if cdist(Ci, Cj) > t then
         break
      end if
      C ← (C \ {Ci, Cj}) ∪ { Ci ∪ Cj }
   end while
   return C
Algorithm 1 Hierarchical clustering

This leads to the question of how to define a metric on triplets. Let us start with the observation that two triplets T1 = (m1, d1) and T2 = (m2, d2) are similar when three conditions hold:

  1. the perpendicular distance between the midpoint m2 and the line extrapolated from T1 is small, which is

     d1(T1, T2) = ‖ (m2 − m1) − ⟨m2 − m1, d1⟩ d1 ‖    (7)

  2. the perpendicular distance between the midpoint m1 and the line extrapolated from T2 is small, which is

     d2(T1, T2) = ‖ (m1 − m2) − ⟨m1 − m2, d2⟩ d2 ‖    (8)

  3. the angle between their direction vectors d1 and d2 is small, which is

     d3(T1, T2) = tan ∠(d1, d2)    (9)

We combine these three distance measures into a single measure via

d(T1, T2) = d1(T1, T2) + d2(T1, T2) + s · d3(T1, T2)    (10)

We use the tangent as an angle distance measure, and not one minus the cosine, because the tangent goes to infinity as the angle approaches 90°, which means that perpendicular triplets have an infinite distance, regardless of their spatial distance. The scale factor s allows for controlling the relative effect of angle and perpendicular distance.

We recommend setting the default value of s proportional to the characteristic length. This has the effect that the distance measure becomes scale invariant: when all point coordinates are scaled with the same factor, the distance scales with this factor, too. A scale-invariant distance measure has the advantage that an absolute threshold t can be used for stopping the clustering process in Algorithm 1, because the distances fall into an approximately predictable range. Figure 6 shows a typical example, for which the threshold stops the clustering at the "elbow" of the cluster distances cdist.

Figure 6: In this example, the threshold stops the clustering such that five clusters are obtained.
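The triplet metric and the single-link clustering with a distance cutoff can be sketched with SciPy, representing each triplet by its midpoint and unit direction as in Eq. (4). The way the three terms are summed, the near-perpendicular cutoff, and the use of SciPy instead of the fastcluster library mentioned later are assumptions of this sketch.

```python
# Sketch of the triplet metric (conditions 1-3 above) and single-link
# clustering (Algorithm 1) via SciPy. A triplet is a pair (midpoint m,
# unit direction d); the exact combination of the terms is an assumption.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def perpendicular_distance(m_from, m_to, d):
    """Distance of m_to from the line through m_from with unit direction d."""
    rel = m_to - m_from
    return np.linalg.norm(rel - np.dot(rel, d) * d)

def triplet_distance(t1, t2, s=1.0):
    """Sum of the two perpendicular distances plus s * tan(angle)."""
    (m1, d1), (m2, d2) = t1, t2
    cos_angle = abs(np.dot(d1, d2))          # d1, d2 are unit vectors
    if cos_angle < 1e-12:
        return 1e12                           # perpendicular: effectively infinite
    sin_angle = np.sqrt(max(0.0, 1.0 - cos_angle**2))
    return (perpendicular_distance(m1, m2, d1)
            + perpendicular_distance(m2, m1, d2)
            + s * sin_angle / cos_angle)

def cluster_triplets(triplets, threshold, s=1.0):
    """Single-link clustering, cut where the cluster distance exceeds t."""
    idx = np.arange(len(triplets)).reshape(-1, 1)
    dists = pdist(idx, metric=lambda i, j: triplet_distance(
        triplets[int(i[0])], triplets[int(j[0])], s))
    return fcluster(linkage(dists, method="single"),
                    t=threshold, criterion="distance")
```

Cutting the dendrogram with `criterion="distance"` corresponds to the break condition in Algorithm 1: merging stops once the smallest cluster distance exceeds the threshold.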

4.4 Pruning

To distinguish actual tracks from random clusters due to noise, it is necessary to remove some of the clusters in a post-processing step. We have applied a very simple rule and removed all clusters containing fewer than m triplets. If the data is known to be almost noise free, m can be set to two. As this is rarely the case, we recommend a larger default value.
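The pruning rule is a simple count threshold on cluster sizes; a minimal sketch (with an assumed default for the threshold) is:

```python
# Sketch of Sec. 4.4: discard clusters with fewer than m triplets.
# The default for m is an illustrative assumption.
from collections import Counter

def prune(cluster_labels, m=5):
    """Return the set of cluster labels that survive pruning."""
    counts = Counter(cluster_labels)
    return {label for label, size in counts.items() if size >= m}
```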

4.5 Runtime considerations

As the four steps are processed sequentially, the order of the total runtime is the order of the slowest step. Here is a runtime analysis of the individual steps, where N is the number of points in the point cloud:

  1. Both the estimation of the characteristic length and the neighborship smoothing are O(N²) operations when they are implemented by simple loops. The nearest neighbor retrieval can be reduced to O(N log N) time with a kd-tree, but only on average. To achieve such a worst-case runtime, other algorithms have been suggested beygelzimer06 ; vaidya89 . For the distance search, a utilization of a kd-tree likewise reduces the runtime, and it can be reduced further when a range tree is used berg00 .

  2. The number of triplets that would need to be evaluated when building triplets from all points is O(N³). Utilization of the point order, as suggested in section 4.2, reduces the runtime by a constant factor, but does not change the exponent. As we do not try all possible triplets, but only search for candidate points among the k nearest neighbors of each midpoint, the runtime reduces to O(k²N + N log N), where the second term stems from the all-nearest-neighbor search.

  3. The runtime complexity of Algorithm 1 is O(M³), where M is the number of triplets theodoridis09 , but this can be reduced to O(M²) with the utilization of Rohlf's MST-algorithm muellner11 ; rohlf73 . As we make the restriction of keeping only the n "best" triplets for each candidate midpoint, the number of triplets is only O(N), not O(k²N), as it would be if we kept all triplets below the given collinearity threshold. The runtime of the clustering step is thus O(N²).

  4. As we cannot have more clusters than triplets, the runtime of the pruning is O(N), due to the restriction mentioned in the preceding step.

The runtime is thus dominated by the hierarchical clustering and is O(N²) in total. For the nearest neighbor and range search, we have used the kd-tree implementation that comes with the Point Cloud Library (PCL) Rusu11 . For the hierarchical clustering, we have used the library fastcluster muellner13 . The worst-case runtime occurred on a point cloud consisting of 666 points with much noise and strongly curved tracks, which resulted in a large number of candidate triplets. In this case, the runtime was about 0.5 seconds on an Intel Core i5-6500 CPU @ 3.20GHz. The clustering step took about 70% of the total runtime.

5 Evaluation

With the evaluation of our algorithm, we pursue two different aims. One is the determination of parameters that lead to a decent clustering of the points into particle tracks. The other is an assessment of how accurately the physical properties of the trajectories can be detected. To this end, we need ground truth data, in which each point is labeled with its track membership, and a similarity measure between the ground truth data and a test clustering.

It should be noted that, although both the parameter optimization and the quality assessment are done on the same test data, we did not optimize the parameters for each point cloud, but used a global optimum instead. That way, each individual point cloud has only a small effect on the parameter choice, and the quality assessment is not overly optimistically biased.

Figure 7: Screen-shot of the interactive software for ground truth labeling.

5.1 Ground truth data

For the creation of ground truth data, we have written an interactive software tool that displays the point cloud in such a way that cluster membership is visualized by colorization. Each point is assigned to exactly one cluster or is labeled as noise. The list of clusters can be edited, and the cluster label of a point can be changed through a popup menu that appears when clicking on the point (see Figure 7). The point coordinates together with their cluster labels are eventually stored in a CSV file.

Obviously, this approach does not allow vertex points to simultaneously belong to more than one trajectory. We have therefore covered this case by introducing special clusters that represent multiple memberships, e.g., “cluster 2;4” as a cluster label for points that belong both to the cluster labeled “2” and the one labeled “4”. Although this could theoretically result in a combinatorial number of additional clusters, in practice there are only few crossing trajectories and consequently only few vertices. It is thus both a simple and a feasible way of encoding ambiguities.
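The multiple-membership encoding can be decoded in a few lines. The following sketch is ours (the label format follows the text above; the function names are hypothetical):

```python
def memberships(label):
    """Decode a ground truth cluster label into its set of track
    memberships.  'noise' stays a singleton; an ambiguous label such
    as '2;4' (a vertex point shared by tracks 2 and 4) expands to
    both tracks."""
    if label == "noise":
        return {"noise"}
    return set(label.split(";"))

def compatible(label_a, label_b):
    """Two points have compatible ground truth labels if they share
    at least one track membership."""
    return bool(memberships(label_a) & memberships(label_b))
```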

Property            Data set I   Data set II   Data set III
# point clouds      99           136           104
# points            8 472        31 724        19 274
noise fraction      6.1%         27.7%         3.3%
minimum # tracks    1            1             2
median # tracks     1            1             3
maximum # tracks    6            6             6
# vertices          37           0             103
Table 2: Some statistics of the ground truth labels in different data sets. “#” stands for “number of”.

5.2 Evaluation criteria

We evaluate the quality of our algorithm by two different means: a generic measure for the distance between two clusterings based on pairs of points, and specific physical properties of the detected clusters. The generic measure was used for optimizing the algorithm parameters, and the physical properties were evaluated to obtain an idea of how well the algorithm works for processing AT measurements.

5.2.1 Clustering evaluation metrics

When comparing a test clustering with a ground truth clustering, the problem arises that no correspondences between the cluster labels are known, and that such a mapping is not even well-defined, because the number of clusters can be different and clusters may be split, merged, or overlap. An established workaround for this problem is to consider all pairs of points and to check whether both points belong to the same (S) or to different (D) clusters in a clustering, which results, when comparing two clusterings, in four possible combinations (SS, SD, DS, or DD) pfitzner08 ; amigo09 . Over all pairs, the following numbers are counted:

  • n_SS = number of pairs with SS

  • n_SD = number of pairs with SD

  • n_DS = number of pairs with DS

  • n_DD = number of pairs with DD

Actually, these are only three independent numbers, because their sum is the total number of pairs. Depending on whether all four dependent or only three independent numbers are used, the following similarity measures can be constructed:

Rand statistic: R = (n_SS + n_DD) / (n_SS + n_SD + n_DS + n_DD) (11)
Jaccard coefficient: J = n_SS / (n_SS + n_SD + n_DS) (12)

These coefficients fall into the range [0,1], where R = 1 or J = 1 only holds for exactly identical clusterings. Both coefficients have the desirable property that they decrease when a ground truth cluster is split up or when two ground truth clusters are merged in the test clustering amigo09 . It always holds that J ≤ R and, due to the typically large number n_DD, the Rand coefficient is often much closer to one. The Jaccard coefficient is thus the more sensitive indicator of clustering similarity, and we have used it for optimizing the parameters.
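The pair-counting behind Eqs. (11) and (12) can be written down directly. The following sketch is ours and implements the standard definitions of the Rand and Jaccard coefficients; it loops over all O(n^2) point pairs, which is fine for point clouds of the sizes in Table 2:

```python
from itertools import combinations

def pair_counts(gt, test):
    """Count the four pair categories over all point pairs:
    n_SS, n_SD, n_DS, n_DD (S = same cluster, D = different;
    first letter: ground truth, second: test clustering)."""
    n_ss = n_sd = n_ds = n_dd = 0
    for i, j in combinations(range(len(gt)), 2):
        same_gt = gt[i] == gt[j]
        same_test = test[i] == test[j]
        if same_gt and same_test:
            n_ss += 1
        elif same_gt:
            n_sd += 1
        elif same_test:
            n_ds += 1
        else:
            n_dd += 1
    return n_ss, n_sd, n_ds, n_dd

def rand_index(gt, test):
    n_ss, n_sd, n_ds, n_dd = pair_counts(gt, test)
    return (n_ss + n_dd) / (n_ss + n_sd + n_ds + n_dd)

def jaccard(gt, test):
    n_ss, n_sd, n_ds, _ = pair_counts(gt, test)
    return n_ss / (n_ss + n_sd + n_ds)
```

Note that the coefficients depend only on co-membership of pairs, so they are invariant under renaming of the cluster labels, which is exactly why no label correspondence between the two clusterings is needed.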

In our situation, we additionally have to deal with noise points, which we do by treating the noise cluster just like an ordinary cluster. Moreover, we take care of ambiguous cluster memberships at vertices in the following way: when one point of the pair is labeled as ambiguous in the ground truth data, then

  • a) if the other point belongs to a compatible ground truth cluster, the pair is counted as SD if exactly one of the test cluster labels is “noise”; otherwise it is counted as SS

  • b) if the other point belongs to an incompatible ground truth cluster, the pair is counted as DS or DD, depending on the test cluster labels

Here, a “compatible ground truth cluster” is either the same cluster, or one of the clusters represented by the ambiguous “overlap cluster”. Condition a) results in a slightly optimistic bias of the similarity measures because, at vertices, all test cluster labels other than “noise” are accepted as correct. This has, however, only a slight effect due to the small number of ambiguous points.

5.2.2 Physical cluster properties

For reconstructing the kinematics of an AT experiment, the following questions are of particular importance:

  1. When there is a curved track, is it detected?

  2. When there is a vertex, is it detected?

  3. What was the smallest curve range that the algorithm detected?

Curved tracks only occurred in test sets I & II, so the first question only applies to these data sets. We inspected all curved tracks in the ground truth data and counted how many of them were detected at all (more than 25% of their points labeled as non-noise), how many were split into more than one cluster, and how many were merged with a different cluster.

Vertices are junction points between different tracks. When they join only two tracks, they appear as kinks in a single curve, i.e., as a discontinuity of the velocity direction vector. In the ground truth data, points near vertices are labeled with multiple memberships. In the track detection algorithm, vertices lead to points that belong to more than one triplet, with the triplets belonging to different clusters. When a vertex joins more than two tracks, different cluster/triplet combinations are possible, and more than one vertex would be detected by the algorithm. To avoid this, we have merged close-by vertices with an average-link hierarchical clustering muellner13 based on their spatial distance with a given cutoff distance. For the remaining detected vertex points, we have measured two numbers: the fraction of ground truth vertices that have been detected (recall), and the fraction of detected vertices that actually corresponded to ground truth vertices (precision).
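The vertex merging step can be sketched with SciPy's hierarchical clustering routines. This is our illustration, not the paper's code (which uses the fastcluster library); the function name is ours, and the cutoff value is left to the user:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def merge_vertices(candidates, cutoff):
    """Merge nearby vertex candidates by average-link hierarchical
    clustering on their spatial distance and return one representative
    vertex per cluster (the centroid).  `cutoff` is the merge distance."""
    candidates = np.asarray(candidates, dtype=float)
    if len(candidates) < 2:
        return candidates
    z = linkage(pdist(candidates), method="average")
    labels = fcluster(z, t=cutoff, criterion="distance")
    return np.array([candidates[labels == c].mean(axis=0)
                     for c in np.unique(labels)])
```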

We define the curve range as the largest pairwise distance between all points in the same cluster. The smallest detected curve range was that of the smallest cluster that corresponded to a ground truth cluster.
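The curve range as defined above is simply the diameter of the cluster's point set; a short sketch (names are ours):

```python
import numpy as np
from scipy.spatial.distance import pdist

def curve_range(cluster_points):
    """Curve range of a detected track: the largest pairwise distance
    between any two points of the cluster (the cluster's diameter)."""
    pts = np.asarray(cluster_points, dtype=float)
    if len(pts) < 2:
        return 0.0
    return pdist(pts).max()
```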

5.3 Results

For finding optimal parameter combinations, we have used the Nelder-Mead optimization nelder65 as implemented in the SciPy (https://scipy.org/) function scipy.optimize.minimize. This algorithm only works with real-valued parameters, but some of our parameters can only take integer values. We therefore tested a fixed range of these integer parameters and optimized the remaining real-valued parameters for each combination with scipy.optimize.minimize.
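The mixed integer/real parameter search described above could look like the following sketch (ours, with hypothetical names; in the actual evaluation, `score` would return the negative Jaccard coefficient so that minimization maximizes J):

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def optimize_parameters(score, int_ranges, real_start):
    """Grid search over the integer parameters combined with Nelder-Mead
    over the real-valued ones.  `score(int_params, real_params)` must
    return a value to MINIMIZE.  Returns (best score, best integer
    combination, best real parameter vector)."""
    best = (np.inf, None, None)
    for ints in itertools.product(*int_ranges):
        res = minimize(lambda x: score(ints, x), real_start,
                       method="Nelder-Mead")
        if res.fun < best[0]:
            best = (res.fun, ints, res.x)
    return best
```

Since every integer combination triggers a full Nelder-Mead run, the tested integer ranges should be kept small, as was done in the evaluation above.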

Set I Set II Set III
best 0.9726 0.8844 0.9706
8.31 7.83 7.82
14 10 20
6 2 5
0.0291 0.0212 0.0330
0.9221 1.6159 1.0560
3.27 7.48 2.03
13 19 8
4.7 3.3 3.4
Table 3: Parameter values that lead to the best Jaccard coefficients on the different data sets. denotes the arithmetic mean of all values in the specific data set.

Table 3 lists the parameters yielding the best Jaccard coefficient J. The values of J are very high for data sets I and III, which shows that the algorithm works quite well on data with a small level of noise. On data set II, J is smaller, yet still satisfactory when the poor data quality is taken into account. The noise level has an impact on the parameters, too. This is easily understood, because more noise leads to more random clusters, which need to be filtered out by a higher threshold for the minimum number of triplets. A higher threshold also allows for more tolerance with respect to the maximum cluster distance.

Without parameter optimization, some recommendations for an automatic parameter choice based on the characteristic point distance, , can also be drawn from table 3. The smoothing radius can be set to . The other scale dependent parameter, the weighting factor for the spatial distance with respect to the angle distance in Eq. (10), can be set to .

The parameter that limits the number of neighbor points considered for building triplets should be set according to the expected track curvature: the optimal value in table 3 is higher for data sets without (set III) or with fewer (set I) bent trajectories.

5.3.1 Data set I

Out of the 99 curved tracks, 94 were correctly identified, none was split up, and 5 were merged with a touching straight track. An example of a merge is shown in figure 8(a). Except for one event, which had a straight track fragment with too few points, all other four merges could be correctly split up at the vertex when the parameters were chosen smaller. As all of the merge cases returned only a single track, a practical approach would be to try more “split-friendly” parameters in cases for which the default parameters yield only a single track.

From a total of 37 vertices, 32 were correctly detected, which is a recall of 86%. Only two non-existent vertices have been erroneously detected, which corresponds to a precision of 94% among the returned vertices. The shortest range among the correctly identified tracks was about .

(a) merged tracks and missed vertex
(b) two of three tracks merged, but vertex not missed
Figure 8:

Examples of tracks merged into a single cluster. In the left example from set I, the track was not split at the vertex, which led to a miss of the marked vertex. In the right example from set III, the marked track was merged with the long track, but the vertex was nevertheless detected because of the third (blue) track. Red points have been classified as noise.

5.3.2 Data set II

Theoretically, all events in this data set only have one track in the form of a spiral plus noise. As this is known a priori from the experimental setup, a split up of a cluster by the algorithm does not pose much of a problem. We have therefore split up some spirals into more than one cluster in the ground truth data if the spiral had very wide gaps. This resulted in 217 tracks for the 136 events. Among these, 190 tracks were correctly detected, 18 were missed, 7 were split, and 2 were merged. Additionally, the algorithm returned 5 false positives, i.e., tracks that actually were noise.

(a) noise classified as track segment
(b) missed track segment
Figure 9: Examples for track detection errors (marked) in data set II. Red points have been classified as noise.

Fig. 9 shows typical examples of these errors: in Fig. 9(a), a track segment was found that actually was noise, and in Fig. 9(b), a track segment was missed because it contained fewer than the minimum number of triplets. Whilst the latter problem could be solved by decreasing this threshold, this would lead to more errors like in Fig. 9(a) as a side effect. There is thus a trade-off between these two types of errors.

5.3.3 Data set III

The poorest Jaccard coefficient (0.78) was obtained for the event shown in figure 8(b): one cluster was missed and merged with a different cluster due to too small a bend at the vertex. The vertex was nevertheless correctly detected due to the third track that also ends at the same vertex.

From a total of 103 vertices, 99 were correctly detected, which is a recall of 96%. In addition there were 2 false positive vertices, which leads to a precision of 98% among the detected vertices. The shortest range among the correctly identified tracks was about . It is shown in figure 10.

Figure 10: The shortest detected track (marked) in all events had a range of . Red points have been classified as noise.

As all trajectories in this case are straight lines, data set III provides a nice test set for comparing our new algorithm with the Hough transform, which is a standard algorithm for finding lines in point clouds dalitz17 . In order to achieve a fair comparison, we have first optimized the two parameters of the Hough transform, i.e. the spatial cell width and the noise threshold minvotes, such that the Jaccard coefficient was maximized. Even for this optimal parameter set, the Hough transform led on average to a Jaccard coefficient considerably smaller than the value of our new algorithm. The difference was statistically significant at a 5% significance level: the p-value of the paired t-test was below 0.05.
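The significance test used here is a standard paired t-test on the per-event Jaccard coefficients of the two track finders. A minimal sketch using scipy.stats (the function name is ours):

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_methods(jaccard_a, jaccard_b, alpha=0.05):
    """Paired t-test on per-event Jaccard coefficients of two track
    finders run on the same events.  Returns the p-value and whether
    the mean difference is significant at level `alpha`."""
    _, p = ttest_rel(jaccard_a, jaccard_b)
    return p, p < alpha
```

A paired test is appropriate because both algorithms are evaluated on exactly the same events, so per-event differences are compared rather than two independent samples.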

The Hough transform typically has problems with tracks that come close to each other, because the points are assigned to tracks solely based on their position. An example is shown in figure 11: some points from a different track have been assigned to track 2. Our new algorithm also takes the local curve direction into account, which is represented by the triplet orientation. This makes it possible to separate close-by tracks with different orientations.

Figure 11: Result of the iterative Hough transform that does not correctly identify the tracks in this event: some of the points assigned to track 0 and track 3 actually belong to track 2.

6 Conclusions

ATs are increasing in popularity in the low-energy nuclear physics domain as imagers for nuclear reactions due to their compelling capabilities. However, the analysis of data acquired with these detectors is a challenging task, in particular if the detector operates in a magnetic field. In this case, particles describe non-linear curved trajectories that form complex projection patterns, especially when the particle multiplicity is high. We have developed a novel non-parametric clustering algorithm within the context of ATs to classify and separate tracks without prior knowledge of their shape, and to find the vertex of the reaction.

The suggested algorithm for track detection was able to find most tracks and vertices in three different experimental AT data sets. In the absence of a magnetic field, where only straight tracks occur, the new algorithm performed better than an iterative Hough transform. One particular advantage of the new algorithm is that it directly returns possible vertex points as points assigned to more than one cluster. The vertex recognition rate was 86% in the presence of a magnetic field, and 96% in its absence. Such high efficiency is mandatory for experiments where the rate of valid events is low. It is worth mentioning that the loss of efficiency caused by merging two tracks into a single one or by tracks with missing segments can be mitigated by applying simple kinematic constraints during the track reconstruction step BRADT201765 .

A drawback of the algorithm is its dependence on appropriate choices for several parameter values. Although we have suggested default values for these parameters, it is in general advisable to optimize the parameters on a small selection of sample events of the same reaction type. This is, however, completely compatible with the fact that for every experiment, many physical parameters that determine the topology and structure of the hit pattern (such as drift velocity or electronics sampling rate) must be adjusted before the actual data taking.

It should be noted that the present work defines the problem of track detection as a clustering problem, not as a curve detection problem. The particle trajectories are left to be interpolated from each cluster, for example with the method described in lee00 . When such an interpolation is done, some clusters might represent physically improbable or even impossible trajectories. This could be useful information that might be used as feedback to the algorithm for modifying parameters on the fly.

The promising results obtained in this work suggest that this algorithm is a powerful tool to analyze data taken with any of the AT detectors that are operational in several nuclear physics facilities around the world BeceiroNovo2015124 .

Acknowledgments

The AT-TPC at the NSCL was partially supported by the National Science Foundation (NSF) under grant no. MRI-0923087. The commissioning of the AT-TPC was supported by the NSF under cooperative agreement no. PHY-1102511.

References

  • (1) M. Thoennessen, B. Sherrill, From isotopes to the stars, Nature 473 (7345) (2011) 25–26.
  • (2) Motobayashi, Tohru, World new facilities for radioactive isotope beams, EPJ Web of Conferences 66 (2014) 01013. doi:10.1051/epjconf/20146601013.
  • (3) S. Beceiro-Novo, et al., Active targets for the study of nuclei far from stability, Progress in Particle and Nuclear Physics 84 (2015) 124–165.
  • (4) J. Bradt, D. Bazin, F. Abu-Nimeh, T. Ahn, Y. Ayyad, S. Beceiro-Novo, L. Carpenter, M. Cortesi, M. Kuchera, W. Lynch, W. Mittig, S. Rost, N. Watwood, J. Yurkon, Commissioning of the active-target time projection chamber, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 875 (2017) 65–79. doi:10.1016/j.nima.2017.09.013.
  • (5) A. Wuosmaa, J. Schiffer, B. Back, C. Lister, K. Rehm, A solenoidal spectrometer for reactions in inverse kinematics, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 580 (3) (2007) 1290–1300. doi:10.1016/j.nima.2007.07.029.
  • (6) W. Mittig, S. Beceiro-Novo, A. Fritsch, F. Abu-Nimeh, D. Bazin, T. Ahn, W. Lynch, F. Montes, A. Shore, D. Suzuki, N. Usher, J. Yurkon, J. Kolata, A. Howard, A. Roberts, X. Tang, F. Becchetti, Active target detectors for studies with exotic beams: Present and next future, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 784 (2015) 494–498, symposium on Radiation Measurements and Applications 2014 (SORMA XV). doi:10.1016/j.nima.2014.10.048.
  • (7) W. Mittig, P. Roussel-Chomaz, Results and techniques of measurements with inverse kinematics, Nuclear Physics A 693 (1) (2001) 495–513, radioactive Nuclear Beams. doi:10.1016/S0375-9474(00)00690-4.
  • (8) M. Jeltsch, C. Dalitz, R. Pohle-Fröhlich, Hough parameter space regularisation for line detection in 3D, in: International Conference on Computer Vision Theory and Applications (VISAPP), 2016, pp. 345–352.

  • (9) M. A. Fischler, R. C. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, in: M. A. Fischler, , O. Firschein (Eds.), Readings in Computer Vision, Morgan Kaufmann, San Francisco (CA), 1987, pp. 726–740.
  • (10) C. Dalitz, T. Schramke, M. Jeltsch, Iterative Hough transform for line detection in 3D point clouds, Image Processing On Line 7 (2017) 184–196. doi:10.5201/ipol.2017.208.
  • (11) Y. Ayyad, W. Mittig, D. Bazin, S. Beceiro-Novo, M. Cortesi, Novel particle tracking algorithm based on the random sample consensus model for the active target time projection chamber (AT-TPC), Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 880 (2018) 166–173.
  • (12) J. Lezama, G. Randall, J.-M. Morel, R. Grompone von Gioi, An unsupervised algorithm for detecting good continuation in dot patterns, Image Processing On Line 7 (2017) 81–92. doi:10.5201/ipol.2017.176.
  • (13) D. R. Nygren, Proposal to investigate the feasibility of a novel concept in particle detection, LBL internal report, February 1974.
  • (14) D. Suzuki, A. Shore, W. Mittig, J. J. Kolata, D. Bazin, M. Ford, T. Ahn, F. D. Becchetti, S. Beceiro Novo, D. Ben Ali, B. Bucher, J. Browne, X. Fang, M. Febbraro, A. Fritsch, E. Galyaev, A. M. Howard, N. Keeley, W. G. Lynch, M. Ojaruega, A. L. Roberts, X. D. Tang, Resonant scattering of He: Limits of clustering in Be, Phys. Rev. C 87 (2013) 054301. doi:10.1103/PhysRevC.87.054301.
  • (15) D. Suzuki, M. Ford, D. Bazin, W. Mittig, W. Lynch, T. Ahn, S. Aune, E. Galyaev, A. Fritsch, J. Gilbert, F. Montes, A. Shore, J. Yurkon, J. Kolata, J. Browne, A. Howard, A. Roberts, X. Tang, Prototype AT-TPC: Toward a new generation active target time projection chamber for radioactive beam experiments, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 691 (2012) 39–54.
  • (16) J. Bradt, Y. Ayyad, D. Bazin, W. Mittig, T. Ahn, S. B. Novo, B. Brown, L. Carpenter, M. Cortesi, M. Kuchera, W. Lynch, S. Rost, N. Watwood, J. Yurkon, J. Barney, U. Datta, J. Estee, A. Gillibert, J. Manfredi, P. Morfouace, D. Pérez-Loureiro, E. Pollacco, J. Sammut, S. Sweany, Study of spectroscopic factors at N = 29 using isobaric analogue resonances in inverse kinematics, Physics Letters B (2018) 155–160doi:10.1016/j.physletb.2018.01.015.
  • (17) E. Pollacco, G. Grinyer, F. Abu-Nimeh, T. Ahn, S. Anvar, A. Arokiaraj, Y. Ayyad, H. Baba, M. Babo, P. Baron, D. Bazin, S. Beceiro-Novo, C. Belkhiria, M. Blaizot, B. Blank, J. Bradt, G. Cardella, L. Carpenter, S. Ceruti, E. D. Filippo, E. Delagnes, S. D. Luca, H. D. Witte, F. Druillole, B. Duclos, F. Favela, A. Fritsch, J. Giovinazzo, C. Gueye, T. Isobe, P. Hellmuth, C. Huss, B. Lachacinski, A. Laffoley, G. Lebertre, L. Legeard, W. Lynch, T. Marchi, L. Martina, C. Maugeais, W. Mittig, L. Nalpas, E. Pagano, J. Pancin, O. Poleshchuk, J. Pedroza, J. Pibernat, S. Primault, R. Raabe, B. Raine, A. Rebii, M. Renaud, T. Roger, P. Roussel-Chomaz, P. Russotto, G. Saccà, F. Saillant, P. Sizun, D. Suzuki, J. Swartz, A. Tizon, N. Usher, G. Wittwer, J. Yang, Get: A generic electronics system for {TPCs} and nuclear physics instrumentation, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 887 (2018) 81–93. doi:10.1016/j.nima.2018.01.020.
  • (18) R. B. Rusu, S. Cousins, 3D is here: Point Cloud Library (PCL), in: IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 1–4.
  • (19) S. Theodoridis, K. Koutroumbas, Pattern Recognition, 4th Edition, Academic Press, 2009.
  • (20) A. Beygelzimer, S. Kakade, J. Langford, Cover trees for nearest neighbor, in: Proceedings of the 23rd international conference on Machine learning, 2006, pp. 97–104.

  • (21) P. M. Vaidya, An O(n log n) algorithm for the all-nearest-neighbors problem, Discrete & Computational Geometry 4 (2) (1989) 101–115.
  • (22) M. de Berg, M. van Kreveld, M. Overmars, O. Schwarzkopf, Computational Geometry, 2nd Edition, Springer, 2000.
  • (23) D. Müllner, Modern hierarchical, agglomerative clustering algorithms, arXiv e-prints stat.ML/1109.2378 (2011).
    URL http://adsabs.harvard.edu/abs/2011arXiv1109.2378M
  • (24) F. J. Rohlf, Algorithm 76: Hierarchical clustering using the minimum spanning tree, The Computer Journal 16 (1) (1973) 93–95.
  • (25) D. Müllner, fastcluster: Fast hierarchical, agglomerative clustering routines for R and Python, Journal of Statistical Software 53 (9) (2013) 1–18. doi:10.18637/jss.v053.i09.
  • (26) D. Pfitzner, R. Leibbrandt, D. Powers, Characterization and evaluation of similarity measures for pairs of clusterings, Knowledge and Information Systems 19 (3) (2008) 361–394. doi:10.1007/s10115-008-0150-6.
  • (27) E. Amigó, J. Gonzalo, J. Artiles, F. Verdejo, A comparison of extrinsic clustering evaluation metrics based on formal constraints, Information Retrieval 12 (4) (2009) 461–486.

  • (28) J. A. Nelder, R. Mead, A simplex method for function minimization, The Computer Journal 7 (4) (1965) 308–313.
  • (29) I.-K. Lee, Curve reconstruction from unorganized points, Computer aided geometric design 17 (2) (2000) 161–177.