This article explains an optimization-based approach for counting and localizing stars within a small cluster, based on photon counts in a focal plane array. The array need not be arranged in any particular way, and relatively few photons are required to ensure convergence. The stars can be located close to one another; good performance was obtained when the separation was larger than Rayleigh radii. To ensure generality, our approach was constructed as a special case of a general theory of topological signal processing built upon the mathematics of sheaves. While familiarity with sheaves is assumed in Section 5, it is not essential for understanding the algorithm derived from them, which is discussed in Section 3.
1.1. Historical context
Perhaps the most famous algorithm for separating nearly coincident stars is the CLEAN algorithm and its generalizations (see, for instance, [1, 3, 5], among many others). These greedy algorithms rely on the geometric structure of stars, namely that they are point sources or nearly so.
While greedy algorithms can yield good performance, they can also fail in dramatic ways, especially when photon counts are low. This article proposes that an optimization-based approach can lend some robustness to an algorithm. Optimization-based imaging using the Wasserstein metric has recently led to improvements in molecular microscopy [2]. Our approach is a generalization of this idea.
2. Detailed problem statement
We can model a scene of $N$ stars illuminating a focal plane by using a simplistic Dirac point-mass model,
$$ u(x) = \sum_{k=1}^{N} a_k\, \delta(x - z_k), \qquad (1) $$
where $a_k$ is the brightness of each star, $z_k$ is the location of each star in the focal plane, and $x$ can be taken to be the location of an arbitrary pixel.
The response of an actual sensor to the incoming photons from a set of stars in the scene will not be a sum of Dirac point-masses because of dispersion, diffraction, and other physical effects. Suppose that there are $P$ pixels in the focal plane, each collecting an integer number of photons. This means that the measurements lie in $\mathbb{N}^P$ (rather than $\mathbb{R}^P$). It is useful to record the photon counts as an integer-valued function $y$ on the pixels. The problem to be solved in this article is therefore the determination of $N$ and the sets $\{a_k\}$ and $\{z_k\}$ from $y$.
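To make the measurement model concrete, the following sketch simulates integer photon counts from a list of point sources. The Gaussian point-spread function, its width `sigma`, and the Poisson noise model are assumptions made for illustration; the article does not specify the optical system.

```python
import numpy as np

def simulate_counts(a, z, grid, sigma=1.0, rng=None):
    """Simulate integer photon counts on a pixel array from point sources.

    a: array of star brightnesses (expected photon totals)
    z: (N, 2) array of star locations in the focal plane
    grid: (P, 2) array of pixel center locations
    sigma: assumed width of a Gaussian point-spread function
    """
    rng = np.random.default_rng() if rng is None else rng
    # Blur each Dirac point mass with the assumed Gaussian PSF...
    d2 = ((grid[:, None, :] - z[None, :, :]) ** 2).sum(axis=2)
    psf = np.exp(-d2 / (2 * sigma**2))
    psf /= psf.sum(axis=0, keepdims=True)  # each star's flux sums to 1 over pixels
    intensity = psf @ np.asarray(a, dtype=float)
    # ...and draw integer counts, so measurements lie in N^P rather than R^P.
    return rng.poisson(intensity)
```

Note that the returned counts are integers, reflecting the fact that the measurement space is $\mathbb{N}^P$.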
3. Algorithmic implementation and results
The algorithm we used to count, locate, and characterize stars begins with a proposed maximum number of stars $N_{\max}$. Convergence is not ensured by Proposition 1 and Corollary 2 if the true number of stars is larger than $N_{\max}$. The algorithm proceeds through the following steps, which ultimately minimize the local consistency radii of the sheaf $\mathcal{S}_{N_{\max}}$:
Collect photon counts from all pixels into a vector
For $n$ in $1, \ldots, N_{\max}$,
If $n = 1$, use the centroid of the photon count distribution as the initial guess for $z_1$ and the total photon count as $a_1$.
Otherwise, use the previous step’s set of star locations and magnitudes as an initial guess, with $z_n$ and $a_n$ chosen randomly.
Iteration: Solve the minimization problem Equation 5 given $n$ stars, using the centroids as initial guesses for the locations.
Store: the residual error, along with the parameters for the $n$ stars.
Return the proposed number of stars, locations, and brightnesses corresponding to the smallest residual.
We found that the minimization problem Equation 5 was solvable using the standard Matlab fmincon function, though the initial-guess stage was necessary to help ensure convergence across repeated Monte Carlo runs.
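The steps above can be sketched as follows. This is a minimal reimplementation under stated assumptions, not the authors' code: `scipy.optimize.minimize` stands in for Matlab's fmincon, and the forward model `forward(a, z, grid)` mapping star parameters to expected pixel counts is assumed to be supplied by the caller.

```python
import numpy as np
from scipy.optimize import minimize

def fit_stars(y, grid, forward, n_max, rng=None):
    """Estimate star count, locations, and brightnesses from counts y.

    forward(a, z, grid) -> expected counts per pixel (caller-supplied model).
    Returns (n_best, brightnesses, locations).
    """
    rng = np.random.default_rng() if rng is None else rng
    best = None
    # Initial guess for one star: centroid of the counts and total photon count.
    z_prev = [(y @ grid) / y.sum()]
    a_prev = [float(y.sum())]
    for n in range(1, n_max + 1):
        if n > 1:
            # Warm start: reuse the previous solution and add one random star.
            z_prev = list(z_prev) + [grid[rng.integers(len(grid))]]
            a_prev = list(a_prev) + [float(rng.uniform(0, y.sum()))]
        x0 = np.concatenate([np.ravel(a_prev), np.ravel(z_prev)])

        def residual(x, n=n):
            a, z = x[:n], x[n:].reshape(n, 2)
            return np.sum((forward(a, z, grid) - y) ** 2)

        res = minimize(residual, x0, method="Nelder-Mead")
        a_prev = res.x[:n]
        z_prev = res.x[n:].reshape(n, 2)
        # Keep the candidate star count with the smallest residual.
        if best is None or res.fun < best[0]:
            best = (res.fun, n, a_prev.copy(), z_prev.copy())
    return best[1], best[2], best[3]
```

The warm start mirrors the article's reuse of the previous step's solution; the random placement of the added star is the same heuristic described in the steps above.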
As a test of this approach, star fields consisting of between and stars were simulated. These star fields were propagated through a simulated optical system, resulting in photon counts on an array of pixels. These photon counts were then used to determine star locations and brightnesses using the sheaf approach discussed above, a multiple centroiding approach, and the standard CLEAN algorithm. Figure 2 shows a typical result for photons.
Assuming that the correct brightness and location of star $k$ is given by $(a_k, z_k)$, and that the algorithm’s estimate of the same is $(\hat{a}_j, \hat{z}_j)$, an algorithm can be scored by its efficiency, given by Equation 2, where the maximization ranges over all matching functions $\phi$ from true stars to estimated stars. Note that the quantity being maximized
is the Jaccard index of the two sets of stars, given the function $\phi$ used as a matching. The efficiency is closely related to a conical metric, but does not assume that the number of stars is the same in both sets. Higher efficiency means that the algorithm is doing a better job of determining the correct number of stars, their brightnesses, and locations. The best possible efficiency is 1, while the worst is 0.
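Equation 2 gives the precise formula; the sketch below implements a Jaccard-style score of the kind described. The matching tolerance `tol` and the use of the Euclidean distance on parameter triples are assumptions for illustration, and the maximization over injective matchings is done by brute force, which is adequate for small star counts.

```python
from itertools import permutations
import numpy as np

def efficiency(true_stars, est_stars, tol=0.5):
    """Jaccard-style efficiency score between two lists of stars.

    Each star is a (brightness, x, y) triple. A pair counts as matched when
    the Euclidean distance between triples is below tol (an assumed
    threshold; the article's Equation 2 fixes the exact metric).
    Returns a value in [0, 1]; 1 is a perfect reconstruction.
    """
    T = [np.asarray(s, dtype=float) for s in true_stars]
    E = [np.asarray(s, dtype=float) for s in est_stars]
    if len(T) > len(E):       # the score is symmetric, so order the sets
        T, E = E, T
    n, m = len(T), len(E)
    if m == 0:                # both sets empty: trivially perfect
        return 1.0
    best = 0
    # Maximize the match count over all injective matchings of T into E.
    for idx in permutations(range(m), n):
        matched = sum(np.linalg.norm(T[i] - E[j]) < tol
                      for i, j in zip(range(n), idx))
        best = max(best, matched)
    # Jaccard index: matched over the size of the union of the two sets.
    return best / (n + m - best)
```

As in the text, the score does not assume the two sets have the same number of stars: missing or spurious detections both shrink the Jaccard index.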
In order to compare the performance of the sheaf algorithm with the standard CLEAN algorithm [1], we ran a set of Monte Carlo runs over different configurations of stars, as shown in Table 1. Using the efficiency of Equation 2 to rate the performance of both algorithms, the results from all runs of all tests are shown in Figure 7. The sheaf algorithm did better than CLEAN in of the runs. In of those runs the difference in efficiency was greater than .
Delving more deeply into the performance differences, Figure 7 shows a breakdown of efficiency for each test shown in Table 1. For Tests A and B, the sheaf algorithm performs nearly optimally. For Tests F and G (the hardest tests), the sheaf algorithm outperforms the CLEAN algorithm by a wide margin, achieving nearly twice the efficiency, as shown in Figure 8. For the other tests, the sheaf algorithm’s performance is substantially more variable than CLEAN’s.
The location errors for both algorithms and all tests are shown in Figure 9. As should be expected, location errors increase in both algorithms as the number of stars increases. Generally, the sheaf algorithm yields better location estimates than CLEAN for smaller numbers of stars, while CLEAN yields better location estimates for larger numbers of stars.
Brightness estimate errors for both algorithms and all tests are shown in Figure 10. Both algorithms yield highly variable performance on brightness estimates regardless of test conditions. At least in the sheaf algorithm’s case, this may be a direct result of the use of the Euclidean metric on $\mathbb{R}^P$, since, in contrast to the Wasserstein metric, convergence in brightness is not guaranteed.
The results we obtained from our algorithm typically exceeded the performance of the CLEAN algorithm, especially with more challenging scenes, which indicates that this method shows promise. The fact that it converges very well for small numbers of stars and less well for larger numbers of stars suggests there is room for improvement. In particular, the use of a general optimization solver is not particularly efficient, and the solver converges slowly.
Computational considerations were not central to our approach on this problem, but could be important in future work. We have already developed a general library for sheaf algorithms [4], and could port this library to a high-performance computing platform. We expect that high-performance sheaf computational tools would have broad applicability, providing numerous avenues for improvements.
The authors thank Tom Ruekgauer and his team for the use of the simulated optical data presented in this article.
The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112090125.
5. Appendix: Modeling starfields sheafily
Consider that each star is determined by its magnitude (an element of $\mathbb{R}$) and location (an element of $\mathbb{R}^2$). These properties have different units, and thereby suggest that the correct space for the parameters defining a star is $\mathbb{R} \times \mathbb{R}^2$.
Conventionally, we will use $a$ for the brightness (or magnitude) of the star, and $z$ for its location.
If there are $N$ stars present in the scene, this would suggest that $(\mathbb{R} \times \mathbb{R}^2)^N$ is the correct space of parameters defining all stars. Since the order in which the stars are listed in such a product does not matter, the correct space of parameters for defining all of the stars is the quotient space $(\mathbb{R} \times \mathbb{R}^2)^N / S_N$, where $S_N$ is the group of permutations on $N$ items.
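In code, the quotient by the permutation group can be handled by choosing a canonical representative of each orbit, for instance by sorting the parameter triples. This is an illustration of the idea, not part of the article's construction.

```python
def canonical(stars):
    """Return a canonical representative of a star list under permutation.

    A point of (R x R^2)^N / S_N is an unordered list of stars; sorting the
    (brightness, x, y) triples lexicographically picks one representative,
    so two parameter lists describe the same scene exactly when their
    canonical forms are equal.
    """
    return sorted(tuple(s) for s in stars)

# Two orderings of the same scene map to the same representative.
scene_a = [(2.0, 0.0, 1.0), (1.0, 3.0, 3.0)]
scene_b = [(1.0, 3.0, 3.0), (2.0, 0.0, 1.0)]
assert canonical(scene_a) == canonical(scene_b)
```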
5.1. Sheaf model warmup: known star count
If we know that the number of stars is $N$, we can express the relationship between the observations and the star parameters by a sheaf diagram
The diagram expresses the fact that each observation is determined by Equation 1, and that no observation functionally determines any other. One should be aware that the lack of functional dependence does not imply that the observations are independent. Specifically, it may happen that one or more measurements (values in the second row of the diagram) suffice to determine a unique value in $(\mathbb{R} \times \mathbb{R}^2)^N / S_N$, after which point all of the other measurements can be determined from this unique value. This is exactly the situation in which an adaptive method is warranted, since later measurements are not actually required!
Suppose we have a set of measurements at locations $x_1, x_2, \ldots, x_P$, for which we have observations $y_1, y_2, \ldots, y_P$, respectively. These observations form a partial assignment to the above sheaf, which is supported on the second row.
The partial assignment is not supported at the top row, since that would be tantamount to knowing all of the star parameters. Nevertheless, assuming that no noise is present and that we have correctly found the parameters $(a_1, z_1), \ldots, (a_N, z_N)$, then it should be the case that
$$ y_i = \sum_{k=1}^{N} a_k\, \delta(x_i - z_k) $$
for all $i$.
If $P$ real measurements are taken from the scene determined by $N$ stars, then the previous discussion leads to a simple sheaf diagram
in which the map $m_N$ simply aggregates all of the pixel values into a single vector. One can consider the measurement as a function taking specific values on regions within the plane, so we can interpret it as an element of $\mathbb{R}^P$.
In the (unlikely) situation that the pixels have taken exactly the values specified by Equation 1, this corresponds to a global section of the sheaf diagram in Equation 3. Since global sections have zero consistency radius and the stalks of the sheaf are all metric spaces, any assignment which is not a global section will have positive consistency radius.
On the other hand, if the function $m_N$ defined in Equation 4 is injective, then there is only one global section. This means that we can recast the problem of obtaining the stars from the observations as the minimization of consistency radius. With essentially no further work, we obtain the following result.
If the function $m_N$ defined in Equation 4 is injective, then the solution to the minimization problem of Equation 5
is the correct star decomposition for the scene, if one exists.
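Under the notation above, a consistency-radius minimization of this kind takes the following form. This is a sketch consistent with the surrounding discussion, not a reproduction of Equation 5 itself; $m_N$ is the map of Equation 4, $y$ is the vector of observed counts, and the Euclidean norm on $\mathbb{R}^P$ is assumed.

```latex
\min_{(a_1, z_1), \ldots, (a_N, z_N)}
  \left\| m_N\big((a_1, z_1), \ldots, (a_N, z_N)\big) - y \right\|
```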
The reader may correctly argue that the optimization problem in Equation 5 can be obtained easily, though not solved, by inspection! However, if the number of stars is not known, then the correct optimization problem that obtains the star parameters is difficult to state explicitly. Nevertheless, it is still a consistency radius minimization, but for a more elaborate sheaf. This sheaf accounts both for the unknown number of stars and for the fact that the number of measurements that should be taken depends on the (unknown) number of stars.
5.2. Sheaf model encore: unknown star count
Now let us assume that the scene is built from an unknown number $N$ of stars. We cannot use a single map $m_N$, because we do not know $N$. Therefore, we need to consider many possibilities for the number of stars. For instance, let us consider that there may be as many as $N_{\max}$ stars in the scene.
The sheaf $\mathcal{S}_{N_{\max}}$ is defined by the diagram
in which each of the restriction maps is given by the map $m_n$ defined in Equation 4, with different numbers of stars $n$.
Notice that the $N_{\max}$ in $\mathcal{S}_{N_{\max}}$ is the maximum number of stars we will attempt to consider, which may not have much (if anything) to do with the actual number of stars $N$. Of course, we hope that $N \le N_{\max}$, so that the actual number of stars is considered in our analysis. The global consistency radius of an assignment to this sheaf is given by
$$ \sum_{n=1}^{N_{\max}} \left\| m_n\big((a_{n,1}, z_{n,1}), \ldots, (a_{n,n}, z_{n,n})\big) - y \right\|, $$
where $y$ is the measurement and $(a_{n,k}, z_{n,k})$ is the proposed $k$-th star parameter when we are considering a representation of the scene with $n$ stars. The global consistency radius is simply the sum of the consistency radii of each subproblem, in which we consider a given number of stars. On the other hand, each one of these subproblems is the local consistency radius for an open set in the base space topology for $\mathcal{S}_{N_{\max}}$. If the functions $m_n$ are injective, then it is clear that if $n \neq N$, the only set upon which the local consistency radius can vanish is the set consisting of only the observation (i.e., the bottom row of the sheaf diagram). Conversely, if the functions $m_n$ are injective and $n = N$, then the minimum local consistency radius possible on the open set constructed as the star over the element with stalk $(\mathbb{R} \times \mathbb{R}^2)^n / S_n$ is zero.
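Numerically, the local consistency radii can be estimated by minimizing the mismatch for each candidate star count separately. The following is a minimal sketch, assuming the maps $m_n$ are supplied as callables and the Euclidean norm is used; the sum of the returned radii estimates the global consistency radius.

```python
import numpy as np
from scipy.optimize import minimize

def consistency_radii(y, models, x0s):
    """Minimized local consistency radius for each candidate star count.

    models: list of callables m_n(x) -> predicted counts in R^P, one per
            candidate count n (the restriction maps of the sheaf).
    x0s:    an initial parameter guess per model.
    Returns the list of local radii; their sum is the global consistency
    radius of the best assignment found.
    """
    radii = []
    for m_n, x0 in zip(models, x0s):
        # Each subproblem minimizes the distance between the predicted
        # observation and the actual measurement y.
        res = minimize(lambda x, m=m_n: np.linalg.norm(m(x) - y), x0,
                       method="Nelder-Mead")
        radii.append(res.fun)
    return radii
```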
One particular drawback of $\mathcal{S}_{N_{\max}}$, though, is that the values of an assignment on the different stalks in the top row of Equation 6 have nothing to do with one another. Another expression of this fact is that the base space topology for $\mathcal{S}_{N_{\max}}$ is quite large.
5.3. Pivoting to an algorithm
If we combine the sheaf methodology of the previous sections (Sections 5.1 and 5.2) with the observation that the collection of photon counts from each pixel is best thought of as a (single) function on the plane, we obtain a way to recover star parameters from a distributional measurement.
To formalize this idea, consider the space of functions on star locations. In brief, a measurement specifies the amount of mass on a set of star positions. The map reinterprets stars as measures
We can use these maps as the restriction maps in $\mathcal{S}_{N_{\max}}$, which still works structurally, namely
Algorithmically, if we obtain a measurement as an element of this space (which need not be a sum of a small number of Dirac distributions), then the best star decomposition is still obtained by minimizing local consistency radii
and selecting the smallest number of stars $n$ for which this quantity is zero. In cases where the source magnitudes vary widely, a greedy approach is likely possible (and may be preferable). In Section 3, we employ a compromise approach, reusing solutions with smaller numbers of proposed stars when solving for larger numbers of proposed stars.
Suppose that the number of sources is fixed at $N$ and that the measurement $y$ is given by
$$ y = m_N\big((a_1, z_1), \ldots, (a_N, z_N)\big), $$
where $m_N$ is given by Equation 4.
If we assign $y$ to represent the measurement in the sheaf $\mathcal{S}_{N_{\max}}$ (see Equation 6), then there is an extension to a global assignment in which the local consistency radius vanishes for all sufficiently small open sets, provided that $N \le N_{\max}$.
To that end, $y$ is assigned in the bottom row of the diagram in Equation 6.
Let us name the values in the assignment to the top row of the sheaf diagram $(a_{n,1}, z_{n,1}), \ldots, (a_{n,n}, z_{n,n})$, for each $n = 1, \ldots, N_{\max}$.
Notice that these values define many possibilities for the source parameters.
If the number of sources is not known, but decompositions with $n$ sources are encoded in the sheaf, then the consistency radius of any assignment supported on the star open set (the minimal open set in the sense of inclusion) whose stalk is $(\mathbb{R} \times \mathbb{R}^2)^n / S_n$ is given by
If we then declare that $a_{n,k} = a_k$ and $z_{n,k} = z_k$ whenever they are both defined, then this definition clearly results in the sum in Equation 8 being zero. ∎
If the function $m_N$ is injective, then the assignment for which the local consistency radius vanishes occurs precisely at the number of sources $N$.
Therefore, from a methodological perspective, the number of sources can be determined from the consistency filtration, given enough samples. If there is noise, one looks for the “knee” in the local consistency radius as the open set size increases. This is precisely our algorithmic approach.
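A simple knee detector suffices for this last step. The largest-drop heuristic below is an assumption made for illustration; the article does not prescribe a specific detector.

```python
def knee_index(radii):
    """Index of the 'knee' in a decreasing sequence of consistency radii.

    Uses the largest drop between consecutive values as a simple proxy for
    the knee: the radius falls sharply once the candidate star count reaches
    the true one, and flattens thereafter.
    """
    drops = [radii[i] - radii[i + 1] for i in range(len(radii) - 1)]
    return drops.index(max(drops)) + 1
```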
-  J. A. Högbom. Aperture synthesis with a non-regular distribution of interferometer baselines. Astronomy and Astrophysics Supplement, 15, 1974.
-  Hesam Mazidi, Tianben Ding, Arye Nehorai, and Matthew D. Lew. Quantifying accuracy and heterogeneity in single-molecule super-resolution microscopy. Nature Communications, 11(6353), December 2020.
-  GC McKinnon, C Burger, and P Boesiger. Spectral baseline correction using CLEAN. Magnetic resonance in medicine, 13(1):145–149, 1990.
-  M. Robinson. Pysheaf: the python sheaf library, https://github.com/kb1dds/pysheaf/, 2020.
-  I. M. Stewart, D. M. Fenech, and T. W. B. Muxlow. A multiple-beam CLEAN for imaging intra-day variable radio sources. Astronomy and Astrophysics, 535(A81), 2011.
-  Mankei Tsang. Resolving starlight: a quantum perspective. Contemporary Physics, 60(4):279–298, 2019.