The Deep Neural Network based Photometry Framework for Wide Field Small Aperture Telescopes

06/28/2021, by Peng Jia, et al.

Wide field small aperture telescopes (WFSATs) are mainly used to obtain scientific information about point-like and streak-like celestial objects. However, the quality of images obtained by WFSATs is seriously affected by background noise and variable point spread functions, so developing high-speed, high-efficiency data processing methods is of great importance for further scientific research. In recent years, deep neural networks have been proposed for the detection and classification of celestial objects and have shown better performance than classical methods. In this paper, we further extend the abilities of a deep neural network based astronomical target detection framework to make it suitable for photometry and astrometry. We add new branches to the deep neural network to obtain the types, magnitudes and positions of different celestial objects at the same time. Tested with simulated data, our neural network performs better in photometry than classical methods. Because photometry and astrometry are regression tasks, which produce high-accuracy measurements rather than rough classification results, their accuracy is affected by changing observation conditions. To solve this problem, we further propose to use reference stars to retrain our deep neural network with a transfer learning strategy when observation conditions change. The photometry framework proposed in this paper can be used as an end-to-end quick data processing framework for WFSATs, which can further increase their response speed and scientific output.


1 Introduction

Wide field small aperture telescopes (WFSATs) are the workhorses of time-domain astronomy, because they can cover a large field of view with high cadence in a cost-effective way (Yuan et al., 2008; Cui et al., 2008; Glazier et al., 2020; Popowicz, 2018; Burd et al., 2005; Ratzloff et al., 2019; Sun and Yu, 2019). As the number of WFSATs increases, the data volume also increases. Since many celestial objects require immediate follow-up observations (such as electromagnetic counterparts of gravitational wave sources or near earth objects), fast and efficient data processing frameworks play an important role in scientific research (Ma et al., 2007; Drake et al., 2011; Pablo et al., 2016; Masci et al., 2018; Xu et al., 2020). Because WFSATs are mainly used to observe point-like (stars) or streak-like targets (near earth objects), fast data processing frameworks for WFSATs mainly include the following steps:
1. target detection: obtain positions of astronomical target candidates from raw observation images;

2. target classification: classify these candidates into different types of astronomical targets or bogus detections;


3. target information extraction: obtain magnitudes or positions of astronomical targets.

Because WFSATs work remotely without active correction of system aberrations or suppression of noise, their images are affected by uncontrollable noise and highly variable PSFs, which brings difficulties to the development of image processing frameworks. For example, cosmic rays and ghost images would trigger false detections, and highly variable point spread functions (PSFs) would introduce uncertainty into astrometry and photometry results. Data processing frameworks based on classical methods therefore often require frequent manual intervention to keep the information extracted from observation data reliable. Such manual interventions reduce data processing efficiency and further limit the scientific output of WFSATs.

In recent years, machine learning algorithms have shown great success in image processing tasks, such as image classification (Cabrera-Vives et al., 2017; Duev et al., 2019; He et al., 2020), image detection (González et al., 2018) and image segmentation (Burke et al., 2019; Hausen and Robertson, 2020). These algorithms have also been adopted in data processing frameworks for WFSATs. Machine learning based image classification algorithms (Jia et al., 2019; Turpin et al., 2020) are integrated with classical astronomical target detection algorithms, such as SExtractor (Bertin and Arnouts, 1996) or simplexy in Astrometry.net (Lang et al., 2010), to detect and classify astronomical targets of different kinds. In these integrated astronomical target detection frameworks, the classical detection algorithm first obtains positions of astronomical target candidates. Then a machine learning algorithm classifies these candidates into different types. At last, aperture photometry or point spread function (PSF) fitting photometry is used to obtain magnitudes of astronomical targets for further scientific research.
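For context, the classical stages of such an integrated pipeline can be sketched with the sep package, a Python implementation of the core SExtractor algorithms; the detection threshold and aperture radius below are illustrative choices rather than the values used by any particular framework.

```python
import numpy as np
import sep  # Python library implementing SExtractor's core algorithms


def classical_detect_and_photometer(image, thresh_sigma=1.5, aper_radius=3.0):
    """Classical stages of an integrated pipeline: detect candidates,
    then measure their fluxes with aperture photometry."""
    data = np.ascontiguousarray(image, dtype=np.float32)

    # Estimate and subtract the spatially varying background.
    bkg = sep.Background(data)
    data_sub = data - bkg

    # Detect sources above thresh_sigma times the global background RMS.
    objects = sep.extract(data_sub, thresh_sigma, err=bkg.globalrms)

    # Aperture photometry at the detected positions.
    flux, fluxerr, flag = sep.sum_circle(
        data_sub, objects['x'], objects['y'], aper_radius, err=bkg.globalrms)
    return objects, flux, fluxerr
```

The candidates returned by such a stage would then be passed to a machine learning classifier and, finally, to a photometry routine.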

However, because these integrated astronomical target detection frameworks have a sequential structure (all processes are carried out one after another), the performance of the overall framework is limited by every algorithm used within it. Take the target detection task as an example: only targets that can be detected by the classical method will be further classified. Because targets close to bright sources and dim streak-like targets can hardly be detected by classical methods, integrated methods would miss these targets.

Thanks to recent developments in deep neural networks (Goodfellow et al., 2016), end-to-end target detection algorithms have been proposed. End-to-end target detection algorithms obtain positions and types of celestial targets from observed images at the same time. These algorithms split images into small regions and either directly classify these regions into different targets (one-step detection frameworks) or merge these regions and then classify them into different types (two-step detection frameworks). Because WFSATs have a low spatial sampling rate (several arcsec per pixel) and short exposure times (several seconds to tens of seconds), astronomical targets detected by them are sparsely distributed and small in size. For small targets, two-step detection frameworks have better performance (Ren et al., 2015). Therefore, we have proposed to use a feature pyramid network and ResNet (He et al., 2016) as the backbone neural network and the Faster R-CNN structure to form a two-step astronomical target detection framework for images obtained by WFSATs (Jia et al., 2020). Tested with simulated and real observation data, our framework has shown robust performance in the detection of astronomical targets of different types.

However, it should be noted that the positions and types of celestial objects obtained by the Faster R-CNN based astronomical target detection framework cannot fully satisfy scientific requirements, because magnitudes are missing from its output. Magnitudes of celestial targets play an important part in scientific observations carried out by WFSATs, such as observations of exoplanets or stellar flares. In this paper, we extend our Faster R-CNN based end-to-end astronomical target detection framework to make it suitable for photometry. We discuss the structure of the DNN for photometry (PNET) in Section 2. In Section 3, we use simulated data to test the performance of the PNET. Because photometry is a regression problem, which is sensitive to variations of noise and PSFs, we further propose a transfer learning based neural network modification method, which we discuss in Section 4. At last, we draw the conclusions of this paper and anticipate our future work in Section 5.

2 The structure of the PNET

Because images obtained by WFSATs have low spatial sampling rates and short exposure times, almost all targets observed by WFSATs are diffuse point-like or streak-like targets. Therefore, photometry and astrometry are the most basic and important information extraction steps, and the data processing framework for WFSATs can be divided into the following steps:
1. obtain positions and types of all celestial objects with the Faster R-CNN;
2. transform positions of these celestial objects to celestial coordinates;
3. obtain magnitudes of all detected celestial objects through aperture photometry or PSF-fitting photometry;
4. select several stars as references and use their magnitude measurements and catalogue magnitudes to calibrate the photometry results;
5. obtain calibrated magnitudes of the other stars with the calibration results of the reference stars;
6. cross-match magnitudes and positions of these stars with different catalogues for different targets.

Because all celestial object candidates in observed images are scanned by the Faster R-CNN during the detection step (the first step), we can modify the structure of the Faster R-CNN to obtain positions and magnitudes of astronomical targets at the same time. With this modification, we increase the degree of automation of our framework and can further increase photometry and astrometry accuracy through back propagation in the training step. Based on this principle, we propose a modified structure of our Faster R-CNN based astronomical target detection framework (PNET), as shown in figure 1.

Figure 1: The modified Faster R-CNN based astronomical target detection and photometry framework (PNET). It includes a ResNet50 and a feature pyramid network as the backbone. The output from the region proposal and ROI alignment networks is sent to the box regression neural network to obtain rough positions and types of celestial objects. Stamp images cut from input images according to these rough positions are sent to the photometry and astrometry neural network branches, which output magnitudes and positions of celestial objects with high accuracy.

Compared with our Faster R-CNN based framework in Jia et al. (2020), we add astrometry and photometry neural network branches after the box regression. The box regression neural network outputs types and rough positions of astronomical targets (bounding boxes with four boundary pixels to indicate their positions, and classification results to indicate the types of different celestial objects). We then obtain the centres of the bounding boxes and cut stamp images of N x N pixels (N stands for the size of the stamp images) as inputs for the photometry and astrometry neural networks.
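A minimal sketch of this stamp-cutting step, assuming bounding boxes are given as (x0, y0, x1, y1) pixel coordinates; the stamp size N follows the choice discussed in the next paragraph.

```python
import numpy as np


def cut_stamps(image, boxes, n=9):
    """Cut N x N stamp images centred on the detected bounding boxes.
    `boxes` is assumed to be an iterable of (x0, y0, x1, y1) pixel coordinates."""
    half = n // 2
    stamps = []
    for x0, y0, x1, y1 in boxes:
        cx = int(round((x0 + x1) / 2))
        cy = int(round((y0 + y1) / 2))
        if cx < half or cy < half:          # too close to the image border
            continue
        stamp = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
        if stamp.shape == (n, n):           # also skips targets at the far edges
            stamps.append(stamp)
    return np.stack(stamps) if stamps else np.empty((0, n, n))
```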

Because celestial objects with different magnitudes have different sizes while the inputs of the photometry and astrometry neural networks have a fixed size, the size of the stamp images is important and requires additional manual intervention for different datasets. The stamp images should not be too large, because a large stamp would introduce additional background noise for dim stars and may contain two stars in one stamp image. The stamp images should not be too small either, because a small stamp would impose strong requirements on PSF uniformity and centroid accuracy (sub-pixel shifts would introduce strong bias when the stamp is too small). In this paper, we set N as 9, which matches the size of most star images. It should be noted that the photometry algorithm in the PNET can be viewed as a mixture of aperture photometry and PSF-fitting photometry. Therefore, although stamp images of 9 x 9 pixels are smaller than bright stars, the PNET can still return effective measurements. Similarly, the astrometry neural network can be viewed as a mixture of moment estimation and PSF-fitting astrometry, so the astrometry results remain effective when stars are larger than the stamp images.


The structure of the photometry neural network is shown in figure 2. It is a convolutional neural network inspired by the VGG neural network (Simonyan and Zisserman, 2014), which contains 11 convolutional layers and 4 fully connected layers. Because stamp images are very small (9 x 9 pixels in this paper), we use small convolutional kernels in each layer. After each convolutional layer, we use the Rectified Linear Unit (ReLU) as the activation function. The output of the last convolutional layer is passed to 4 fully connected layers to estimate magnitudes of celestial objects. The input of the photometry neural network is a stamp image and its output is the magnitude of the input celestial object. The astrometry neural network has almost the same structure, except that it has two outputs: the x and y positions in the camera (CCD) coordinate system.
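A minimal PyTorch sketch of a photometry branch with this depth (11 convolutional layers and 4 fully connected layers with ReLU activations); the channel widths and the 3 x 3 kernel size are illustrative assumptions, since only the overall layer counts are specified here. The astrometry branch would be identical except for a final layer with two outputs.

```python
import torch
import torch.nn as nn


class PhotometryHead(nn.Module):
    """Sketch of the photometry branch: 11 convolutional layers followed by
    4 fully connected layers. Channel widths and 3x3 kernels are assumptions."""

    def __init__(self, stamp_size=9):
        super().__init__()
        layers, in_ch = [], 1
        # 11 convolutional layers, each followed by a ReLU activation.
        for out_ch in [32, 32, 64, 64, 64, 128, 128, 128, 128, 256, 256]:
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # 4 fully connected layers regressing a single magnitude.
        flat = 256 * stamp_size * stamp_size
        self.regressor = nn.Sequential(
            nn.Linear(flat, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 32), nn.ReLU(inplace=True),
            nn.Linear(32, 1))

    def forward(self, stamps):                       # stamps: (batch, 1, 9, 9)
        x = self.features(stamps)
        return self.regressor(torch.flatten(x, 1))   # predicted magnitudes
```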

Figure 2: The structure of the photometry neural network. It includes 11 convolutional layers and 4 fully connected layers. Small convolutional kernels are used to better extract features of celestial objects.

The loss function is important for a neural network and it is directly related to the way we train the neural network. It should be noted that we could either train the photometry, astrometry and detection neural networks separately, or train the Faster R-CNN based framework as a whole. According to our experience, training is more effective if the neural networks are trained together (Jia et al., 2021), at the cost of larger GPU memory usage and longer training time. In this paper, we train the Faster R-CNN based detection neural network together with the photometry and astrometry neural networks. The loss function of the Faster R-CNN based celestial object detection and photometry framework is defined in equation 1,

L_total = L_cls + L_astrometry + L_photometry,    (1)

where L_cls is the classification loss function, L_astrometry is the astrometry loss function and L_photometry is the photometry loss function. The astrometry and photometry loss functions are defined in equation 2,

L_astrometry = (1/n) * sum_i (p_i - p̂_i)^2,    L_photometry = (1/n) * sum_i (m_i - m̂_i)^2,    (2)

where p_i and p̂_i stand for the ground-truth and predicted positions of celestial objects, m_i and m̂_i stand for the ground-truth and predicted magnitudes of celestial objects, i stands for the index of celestial objects and n stands for the number of celestial objects. L_astrometry and L_photometry are therefore the mean square errors of the astrometry and photometry results. L_cls contains four loss functions: the first one is the classification loss (computed from the target labels and the predicted classification probabilities of the n celestial objects), the second one is the bounding box regression loss (used to define rough positions of celestial objects with four pixels: upper left, upper right, bottom left and bottom right), the third one is used to regularize the bounding box regression loss and the last one is a smoothed L1 loss defined in equation 3,

smooth_L1(x) = 0.5 * x^2 if |x| < 1, and |x| - 0.5 otherwise.    (3)
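A schematic sketch of how the three groups of loss terms could be combined during joint training, assuming the four detection losses are returned as a dictionary by a torchvision-style Faster R-CNN; the equal weighting of the terms is an assumption.

```python
import torch.nn.functional as F


def pnet_loss(det_losses, pred_mag, true_mag, pred_pos, true_pos):
    """Combine detection, photometry and astrometry losses (cf. equation 1).
    det_losses: dict of the four Faster R-CNN loss terms (classification,
    box regression, RPN objectness and RPN box regression).
    Equal weighting of the three groups is an illustrative assumption."""
    loss_cls = sum(det_losses.values())
    loss_astrometry = F.mse_loss(pred_pos, true_pos)   # equation 2
    loss_photometry = F.mse_loss(pred_mag, true_mag)   # equation 2
    return loss_cls + loss_astrometry + loss_photometry
```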

With the modifications mentioned above, a trained PNET can obtain types, positions and magnitudes of different celestial objects at the same time. The data processing procedure with the PNET then includes the following steps:
1. obtain positions, magnitudes and types of all celestial objects with the PNET;
2. transform positions of these celestial objects to celestial coordinates;
3. select several stars as references and obtain calibrated magnitudes of all celestial objects according to these references;
4. cross-match magnitudes and positions of all celestial objects with different catalogues.

With the PNET, the complexity of the data processing framework for WFSATs is reduced, and we can obtain magnitudes, positions and types of celestial objects without much manual intervention, as shown in the sketch below. We will discuss the performance of the PNET in Section 3.
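A minimal sketch of this reduced procedure, in which `pnet` stands for a hypothetical callable wrapping the trained network; the coordinate transform, cross-matching and zero-point calibration use standard astropy routines, and the match radius is an illustrative choice.

```python
import numpy as np
import astropy.units as u
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord


def process_frame(pnet, image, wcs: WCS, cat_coords: SkyCoord, cat_mags,
                  match_radius=2.0):
    """Sketch of the reduced PNET pipeline. `pnet` is a hypothetical callable
    returning per-object types, instrumental magnitudes and pixel positions."""
    # 1. Detection, photometry and astrometry in one forward pass.
    types, inst_mags, xy = pnet(image)

    # 2. Transform pixel positions to celestial coordinates.
    coords = wcs.pixel_to_world(xy[:, 0], xy[:, 1])

    # 3. Cross-match the detections with a reference catalogue.
    idx, sep2d, _ = coords.match_to_catalog_sky(cat_coords)
    matched = sep2d < match_radius * u.arcsec

    # 4. Use matched stars as references to estimate the photometric zero
    #    point, then apply it to every detection.
    zero_point = np.median(cat_mags[idx[matched]] - inst_mags[matched])
    calib_mags = inst_mags + zero_point
    return types, coords, calib_mags, matched
```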

3 Training and testing the PNET with simulated data

3.1 Training the PNET

According to our experience, the bounding box regression in the original Faster R-CNN based astronomical target detection framework can return positions with an accuracy better than 1 pixel (Jia et al., 2020), which is enough to cross-match stars with catalogues for WFSATs. The astrometry neural network in the PNET obtains positions of stars with even higher accuracy (better than 0.01 pixel for stars with moderate brightness). Therefore, we leave aside the performance of the astrometry neural network and test the performance of the photometry neural network, which is our main target in the development of the PNET.

The PNET is trained and tested with simulated data in this paper, because simulated images provide ground truth values of positions and magnitudes. We use the Durham adaptive optics simulation platform (DASP) (Basden et al., 2018) and a highly reliable atmospheric turbulence simulation code (Jia et al., 2015b, a) to generate PSFs. Then we use SkyMaker (Bertin, 2009) to generate simulated images with these PSFs. Images used in this paper are ideal simulated images without ghost images, cosmic rays or clouds; they are solely used to test the photometry accuracy of the PNET. Parameters of the simulated telescope are shown in table 1. We generate 5000 simulated images to train the PNET and 500 simulated images to test it. In these images, celestial objects have magnitudes from mag 10 to mag 23. There are no dense star fields (more than 70 stars within a small image region) in these images, because considering the observation mode of WFSATs, it would be unlikely to obtain images with many dense star fields. Besides, according to the principle of the PNET, it would be unreliable to extract effective information from dense star fields with the PNET. One frame of the simulated images is shown in figure 3.

Parameter Value
Aperture 1 metre
Field of View 10 arcmin
Observation wavelength 500 nm
Pixel Scale 0.5 arcsec
FWHM of Seeing Disc 1 arcsec
Exposure Time 1 second
Readout Noise 1
Dark Current 1
Sky background 24 mag
Table 1: Parameters for image simulation in this paper.
Figure 3: One frame of the simulated images. It is an ideal observation image with no structural noise, such as cosmic rays, ghost images or smear images. The simulated images are solely used to test the performance of the PNET.

It should be noted that celestial objects with different magnitudes contribute differently to the photometry and astrometry loss. The physical limitation on astrometry and photometry accuracy, also known as the Cramér-Rao bound, indicates that the uncertainty of astrometry and photometry is related to the signal to noise ratio S/N, where S stands for signal and N stands for noise (Mendez et al., 2014). Therefore, theoretically, the differences between predicted and ground-truth magnitudes or positions are smaller for brighter stars. If the numbers of stars with different magnitudes are similar in the training set, the final astrometry and photometry accuracy would be strongly skewed toward fitting dim stars.

In this paper, to keep the astrometry and photometry results stable, we generate stars of different magnitudes with the same distribution: we set the slope of the differential star counts to be 0.2 in SkyMaker for both the simulated and the real observed images. There are then more dim stars in the simulated images, and the photometry and astrometry results are better for dim stars. In real applications, when the distribution of stars is hard to define manually, we should either use a training set with stars that follow the real distribution, or modify the loss functions of the PNET with equation 4 and equation 5 to keep photometry and astrometry results stable for stars with different magnitudes.

L_astrometry = (1/n) * sum_i max((p_i - p̂_i)^2 - σ_p(m_i)^2, 0),    (4)

L_photometry = (1/n) * sum_i max((m_i - m̂_i)^2 - σ_m(m_i)^2, 0),    (5)

where σ_p(m_i) and σ_m(m_i) are the theoretical limits or required accuracies of astrometry and photometry for celestial objects with magnitude m_i. As shown in these two equations, the values of the loss functions drop directly to zero when the astrometry or photometry results are close to the required accuracy.
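A minimal sketch of such a magnitude-aware loss, assuming the per-star accuracy limits are available as an array; the exact functional form of equations 4 and 5 is only reproduced schematically here.

```python
import torch


def thresholded_mse(pred, truth, required_accuracy):
    """Loss that vanishes once the per-star error is within the required
    (or theoretically achievable) accuracy for its magnitude; a schematic
    stand-in for equations 4 and 5."""
    sq_err = (pred - truth) ** 2
    excess = torch.clamp(sq_err - required_accuracy ** 2, min=0.0)
    return excess.mean()
```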

The PNET is implemented with PyTorch (Paszke et al., 2019) on a computer with 2 Xeon E5-2650 CPUs and 256 GB of memory. Processing a full frame of input images costs around 14 GB of GPU memory for training and testing; therefore, we use an RTX 3090 (with 24 GB of GPU memory) to train and test the PNET. The PNET is initialized with random weights. The optimization algorithm for the detection part is the Adam algorithm (Kingma and Ba, 2014) and that for the astrometry and photometry parts is the SGD algorithm (Ruder, 2016). We use the warm-up method to set learning rates (Zhang et al., 2019), with an initial learning rate of 0.0003 for the photometry and astrometry neural networks and 0.00003 for the detection neural network. The batch size for the PNET is 1000. The trends of the loss functions for the training and test sets over different epochs are shown in figure 4. As shown in this figure, we stop training the PNET after 30 epochs; it takes around 23 hours to train the PNET and 5 hours to train the photometry neural network alone.

Figure 4: Learning curve for the PNET. The loss function stops decreasing after 20 epochs. Therefore, we train the PNET with 30 epochs.
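A minimal sketch of the optimizer setup described above (Adam for the detection part, SGD for the photometry and astrometry branches, both with a warm-up of the learning rate); the warm-up length and the momentum value are illustrative assumptions.

```python
import torch


def build_optimizers(detector, photometry_head, astrometry_head,
                     warmup_iters=500):
    """Adam for detection, SGD for photometry/astrometry, both with a linear
    learning-rate warm-up; warm-up length and momentum are assumptions."""
    opt_det = torch.optim.Adam(detector.parameters(), lr=3e-5)
    opt_reg = torch.optim.SGD(
        list(photometry_head.parameters()) + list(astrometry_head.parameters()),
        lr=3e-4, momentum=0.9)

    def warmup(step):                     # linearly ramp up the learning rate
        return min(1.0, (step + 1) / warmup_iters)

    sched_det = torch.optim.lr_scheduler.LambdaLR(opt_det, warmup)
    sched_reg = torch.optim.lr_scheduler.LambdaLR(opt_reg, warmup)
    return (opt_det, sched_det), (opt_reg, sched_reg)
```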

3.2 Testing the PNET with stamp images from simulated data

We use two data sets to test the performance of different parts of the PNET. The photometry part is trained and tested with stamp images cut from simulated images to show its feasibility for magnitude estimation. Small stamp images (9 x 9 pixels) with different magnitudes (from mag 10 to mag 23) are cut from the original simulated images as the training and test sets. We split stars of different magnitudes into 13 categories. The statistical results are shown in table 2 and figure 5, along with results obtained by the aperture photometry of the SExtractor. As shown in table 2, the photometry part of the PNET alone can estimate magnitudes from stamp images. The photometry error of the PNET is below 0.1 mag for all stars and smaller than 0.004 mag for stars brighter than 18 mag. Besides, we find that the performance of the photometry part of the PNET differs from that of the aperture photometry of the SExtractor: the SExtractor has larger variance and smaller bias, while the PNET has larger bias and smaller variance. From figure 5, we find that the photometry part of the PNET performs better than the SExtractor. However, we should note that the test set is ideal, which leads to unrealistically high accuracy; more realistic images are required to further test the performance of the PNET.

Magnitude NStar Photometry Error (PNET) Photometry Error (SExtractor)
10-11 2571
11-12 2551
12-13 2398
13-14 2388
14-15 2418
15-16 2381
16-17 2302
17-18 2367
18-19 2435
19-20 2279
20-21 2424
21-22 2416
22-23 2369
Table 2: Mean values and variances of the photometry errors of the PNET and the SExtractor for stars with different magnitudes. The unit of photometry error is mmag.
Figure 5: Histogram of photometry errors of different methods for all stamp images with different magnitudes. As shown in this figure, under ideal conditions the PNET and the SExtractor both have very stable photometry performance. The photometry part of the PNET is slightly better than the aperture photometry algorithm of the SExtractor under ideal conditions. This figure indicates that the PNET can obtain photometry results that are better than those obtained by the mature classical method.

3.3 Testing the PNET with full frame of simulated images

In this part, we use simulated images of the full frame, as shown in figure 3, to test the PNET. 500 simulated images are used to test the performance of the PNET and the SExtractor. The photometry error is used to evaluate the photometry ability of both algorithms. The F1 and F2 scores are used to evaluate the detection ability and astrometry accuracy of both algorithms (an effective detection is defined as a target that is detected with an astrometry accuracy better than 0.5 pixel). They are defined as:

F1 = 2 * precision * recall / (precision + recall),    F2 = 5 * precision * recall / (4 * precision + recall),    (6)

where precision and recall stand for the precision and recall rates of different methods. The precision rate and recall rate are defined as:

precision = TP / (TP + FP),    recall = TP / (TP + FN),    (7)

where TP stands for true positive detections, FP stands for false positive detections and FN stands for false negative detections.
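A minimal sketch of these metrics, computed directly from the counts of true positive, false positive and false negative detections (equations 6 and 7).

```python
def detection_scores(tp, fp, fn, beta=2.0):
    """Precision, recall, F1 and F-beta (F2 by default) from the counts of
    true positive, false positive and false negative detections."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f1, fbeta
```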

As we discussed earlier, there are more dim celestial objects in the simulated images; therefore, during the training stage there is a 'dimmer and more' bias. Although the Cramér-Rao bound indicates that the theoretical photometry uncertainty of dim stars is larger than that of bright stars, the larger number of dim stars during training drives the PNET to learn a highly complex function: estimating accurate information from dim targets. Meanwhile, more dim stars increase the photometry accuracy of the PNET for dim stars. Therefore, users of the PNET face a trade-off: whether to obtain high photometry accuracy for bright stars or to increase the photometry ability for dim stars. In this paper, we do not make any additional modifications to the loss function or the training data.

Statistical results of the detection ability can be found in figure 6 and table 3. From this figure, we find that the SExtractor has slightly better detection ability than the PNET for bright targets, but the difference is small. Besides, there are no structural noises in the simulated images (cosmic rays, hot pixels or clouds), which leads to unrealistically good results for the SExtractor: its accuracy is almost 1 for bright targets. Detailed comparisons of the detection abilities of the PNET and the SExtractor on real observed images can be found in Jia et al. (2020). For dim targets, the PNET is better than the SExtractor in target detection tasks. In this paper, we compare the detection abilities of the different methods to show that the detection ability of the PNET is not affected by the additional neural network branches.

(a) The F1 and F2 scores for different methods.
(b) The recall and precision rates for different methods.
Figure 6: The performance of the PNET and the SExtractor in celestial object detection for stars with different magnitudes. Because the simulated images have relatively good quality and some noise sources (such as hot pixels or cosmic rays) are not added, the detection ability is good for both the PNET and the SExtractor. However, we can still find that for dim targets, the PNET is better than the SExtractor.
Magnitude Photometry error (PNET) Photometry error (SExtractor) F1/F2-Score (PNET) F1/F2-Score (SExtractor)
10-11 1.000/1.000 0.998/0.999
11-12 0.999/0.999 0.999/0.999
12-13 0.999/0.999 0.999/0.999
13-14 0.993/0.993 0.999/0.999
14-15 0.992/0.992 0.998/0.999
15-16 0.990/0.990 0.998/0.999
16-17 0.985/0.983 0.998/0.999
17-18 0.956/0.953 0.997/0.999
18-19 0.970/0.967 0.995/0.996
19-20 0.978/0.972 0.994/0.994
20-21 0.973/0.964 0.993/0.992
21-22 0.965/0.950 0.978/0.969
22-23 0.530/0.508 0.346/0.321
Table 3: Photometry accuracy and F1/F2 scores of the PNET and the SExtractor for stars with different magnitudes. The unit used for photometry error is mmag. The F1 and F2 scores are used to evaluate the detection ability of the different methods.

Statistical results of the photometry error can also be found in table 3. The photometry error is smaller than 0.01 mag for stars brighter than 20 mag. To better compare the results obtained by our method and the SExtractor, we further plot the distributions of the mean absolute differences between the photometry results and the ground truth magnitudes in figure 7. As shown in figure 7, the PNET performs better in photometry than the SExtractor. Besides, the photometry error increases as the magnitudes of stars increase, which reflects the limitation given by the Cramér-Rao bound.

Figure 7: Comparison of the absolute photometry differences of the SExtractor and the PNET for celestial objects with different magnitudes. The curves show the maximal and minimal photometry errors. As shown in this figure, the photometry error of the PNET is smaller than that of the SExtractor. Also, the curve of the PNET reflects the Cramér-Rao bound: as the magnitude of stars increases, the photometry error increases.

4 Keeping Performance of the PNET for images with variable PSFs

Observation conditions change during real observations carried out by WFSATs, for example through background variations and PSF variations. These variations introduce additional noise into photometry results. For example, if the seeing gets worse (PSFs become larger), the magnitudes obtained by our method would be larger. To reduce these effects, the whole framework needs to be calibrated before processing real observation images. Meanwhile, real observation experience indicates that although the shapes of stars and the background change, they remain within a predefined criterion (Jia et al., 2020). Therefore, a PNET trained with simulated or real observation data can be further trained when observation conditions change, to keep photometry and astrometry results stable. Based on this idea, we apply a transfer learning strategy (Zhuang et al., 2020) to keep the results stable when observation conditions change. Similar methods have already been tested for Faster R-CNN based detection algorithms and have shown their effectiveness (Jia et al., 2020).

For the PNET, when observation conditions change, we propose to extract images of reference stars to obtain the PSFs and the background noise (Jia et al., 2020). Then we use the simulation methods discussed in Section 3 to generate simulated images and further train the PNET with these images. To test the effectiveness of this approach, we introduce additional defocus and coma to the PSFs used to generate simulated images (PSF-000 stands for the original images, PSF-040 stands for images generated with PSFs with small defocus and coma and PSF-120 stands for images generated with PSFs with large defocus and coma). Several stamp images are shown in figure 8. As can be seen from this figure, these PSFs have different shapes.

Figure 8: Stamp images from simulated images. PSF-000 stands for a stamp image from original images, PSF-040 stands for a stamp image from images with small defocus and coma and PSF-080 stands for a stamp image from images with large defocus and coma.

In this paper, we use simulated images generated from the extracted PSFs to further train the PNET for 15 epochs. After training, the Precision-Recall (P-R) curves are shown in figure 9. The P-R curves are used to evaluate the detection performance: the performance of an algorithm is better if its precision and recall values are closer to one. We find that through transfer learning, the PNET achieves better detection performance than the original PNET, and the benefit brought by transfer learning increases as the difference between the PSFs increases. We further test the photometry accuracy of the PNET before and after this training, as shown in figure 10. As shown in these figures, the photometry accuracy is stabilized by transfer learning, although it is still affected when the PSF has changed.
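A minimal sketch of this transfer-learning step, assuming the PNET exposes its backbone and returns a loss dictionary in training mode (as torchvision detection models do); freezing the backbone and the reduced learning rate are illustrative choices, while the 15 epochs follow the text.

```python
import torch


def fine_tune(pnet, new_condition_loader, epochs=15, lr=3e-5):
    """Further train a PNET, already trained under old observation conditions,
    on images simulated with PSFs extracted from reference stars.
    Freezing the backbone and the learning rate are illustrative choices."""
    for p in pnet.backbone.parameters():   # keep the generic features fixed
        p.requires_grad = False
    heads = [p for p in pnet.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(heads, lr=lr, momentum=0.9)

    pnet.train()
    for _ in range(epochs):
        for images, targets in new_condition_loader:
            losses = pnet(images, targets)     # assumed to return a loss dict
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return pnet
```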

(a) P-R curve of the PNET before and after transfer learning for images with PSF-040.
(b) P-R curve of the PNET before and after transfer learning for images with PSF-080.
Figure 9: P-R curves of the PNET before and after transfer learning for images with different PSFs. As we can find, with transfer learning the detection ability of the PNET is improved; moreover, as the deformation of the PSF increases, the benefit brought by transfer learning also increases.
(a) Mean of the absolute differences between estimated magnitudes and ground-truth magnitudes for stars with different magnitudes.
(b) Variance of the absolute differences between estimated magnitudes and ground-truth magnitudes for stars with different magnitudes.
Figure 10: Comparison of the absolute differences between estimated magnitudes and ground-truth magnitudes of the PNET after transfer learning. As we can find, with transfer learning the photometry accuracy is stabilized. The photometry accuracy of PSF-120 is worse than that of PSF-000 and PSF-040, because the PSFs are larger and more irregular in images with PSF-120.

5 Conclusions and future work

In this paper, we propose an end-to-end detection and photometry framework (PNET) for WFSATs. The PNET can obtain positions, types and magnitudes of celestial objects at the same time and has better performance than traditional methods. With transfer learning, the PNET can obtain stable photometry results. The PNET can be used as a general data processing framework for information extraction from data obtained by WFSATs.
However, there are still some problems to be solved in the future. First of all, because the PNET costs around 14 GB of GPU memory and around 0.512 seconds to process a full-frame image, optimization and pruning of the neural networks are required. Secondly, transfer learning takes a rather long time to maintain the performance of the PNET, and the PSFs of newly observed images need to be extracted as a prior condition. We need to investigate more efficient methods to extract PSFs and train the PNET. Our group is now developing PSF forecasting and PSF reconstruction methods for the PNET, in order to design an efficient astronomical target extraction method for WFSATs.

Acknowledgements

Peng Jia would like to thank Professor Zhaohui Shang from National Astronomical Observatories, Professor Rongyu Sun from Purple Mountain Observatory, Dr. Huigen Liu from Nanjing University, Dr. Chengyuan Li and Dr. Bo Ma from Sun Yat-Sen University, who provided very helpful suggestions for this paper. This work is supported by the National Natural Science Foundation of China (NSFC) (11503018) and the Joint Research Fund in Astronomy (U1631133, U1931207) under a cooperative agreement between the NSFC and the Chinese Academy of Sciences (CAS). The authors acknowledge the French National Research Agency (ANR) for supporting this work through the ANR APPLY program (grant ANR-19-CE31-0011) coordinated by B. Neichel. This work is also supported by the Shanxi Province Science Foundation for Youths (201901D211081), the Research and Development Program of Shanxi (201903D121161), a Research Project Supported by the Shanxi Scholarship Council of China, and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2019L0225).

Data Availability Statements

The code in this paper can be downloaded from https://zenodo.org/record/4784689#.YKxbe6gzaUk and after acceptance the code will be released in PaperData Repository powered by China-VO with a DOI number.

References

  • A. Basden, N. Bharmal, D. Jenkins, T. Morris, J. Osborn, J. Peng, and L. Staykov (2018) The durham adaptive optics simulation platform (dasp): current status. SoftwareX 7, pp. 63–69. Cited by: §3.1.
  • E. Bertin and S. Arnouts (1996) SExtractor: software for source extraction. Astronomy and astrophysics supplement series 117 (2), pp. 393–404. Cited by: §1.
  • E. Bertin (2009) SkyMaker: astronomical image simulations made easy.. Memorie della Societa Astronomica Italiana 80, pp. 422. Cited by: §3.1.
  • A. Burd, M. Cwiok, H. Czyrkowski, R. Dabrowski, W. Dominik, M. Grajda, M. Gorski, G. Kasprowicz, K. Krupska, K. Kwiecinska, et al. (2005) Pi of the sky: search for optical flashes of extragalactic origin. In Photonics Applications in Industry and Research IV, Vol. 5948, pp. 59481H. Cited by: §1.
  • C. J. Burke, P. D. Aleo, Y. Chen, X. Liu, J. R. Peterson, G. H. Sembroski, and J. Y. Lin (2019) Deblending and classifying astronomical sources with Mask R-CNN deep learning. MNRAS 490 (3), pp. 3952–3965. External Links: Document, 1908.02748 Cited by: §1.
  • G. Cabrera-Vives, I. Reyes, F. Förster, P. A. Estévez, and J. Maureira (2017) Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection. ApJ 836 (1), pp. 97. External Links: Document, 1701.00458 Cited by: §1.
  • X. Cui, X. Yuan, and X. Gong (2008) Antarctic schmidt telescopes (ast3) for dome a. In Ground-based and Airborne Telescopes II, Vol. 7012, pp. 70122D. Cited by: §1.
  • A. Drake, S. Djorgovski, A. Mahabal, J. Prieto, E. Beshore, M. Graham, M. Catalan, S. Larson, E. Christensen, C. Donalek, et al. (2011) The catalina real-time transient survey. Proceedings of the International Astronomical Union 7 (S285), pp. 306–308. Cited by: §1.
  • D. A. Duev, A. Mahabal, Q. Ye, K. Tirumala, J. Belicki, R. Dekany, S. Frederick, M. J. Graham, R. R. Laher, F. J. Masci, T. A. Prince, R. Riddle, P. Rosnet, and M. T. Soumagnac (2019) DeepStreaks: identifying fast-moving objects in the Zwicky Transient Facility data with deep learning. MNRAS 486 (3), pp. 4158–4165. External Links: Document, 1904.05920 Cited by: §1.
  • A. L. Glazier, W. S. Howard, H. Corbett, N. M. Law, J. K. Ratzloff, O. Fors, and D. del Ser (2020) Evryscope and K2 Constraints on TRAPPIST-1 Superflare Occurrence and Planetary Habitability. arXiv e-prints, pp. arXiv:2006.14712. External Links: 2006.14712 Cited by: §1.
  • R. E. González, R. P. Muñoz, and C. A. Hernández (2018) Galaxy detection and identification using deep learning and data augmentation. Astronomy and Computing 25, pp. 103–109. External Links: Document, 1809.01691 Cited by: §1.
  • I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT press Cambridge. Cited by: §1.
  • R. Hausen and B. E. Robertson (2020) Morpheus: a deep learning framework for the pixel-level analysis of astronomical image data. The Astrophysical Journal Supplement Series 248 (1), pp. 20. Cited by: §1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1.
  • Z. He, X. Er, Q. Long, D. Liu, X. Liu, Z. Li, Y. Liu, W. Deng, and Z. Fan (2020) Deep Learning for Strong Lensing Search: Tests of the Convolutional Neural Networks and New Candidates from KiDS DR3. MNRAS. External Links: Document, 2007.00188 Cited by: §1.
  • P. Jia, D. Cai, D. Wang, and A. Basden (2015a) Real-time generation of atmospheric turbulence phase screen with non-uniform fast fourier transform. Monthly Notices of the Royal Astronomical Society 450 (1), pp. 38–44. Cited by: §3.1.
  • P. Jia, D. Cai, D. Wang, and A. Basden (2015b) Simulation of atmospheric turbulence phase screen for large telescope and optical interferometer. Monthly Notices of the Royal Astronomical Society 447 (4), pp. 3467–3474. Cited by: §3.1.
  • P. Jia, X. Li, Z. Li, W. Wang, and D. Cai (2020) Point spread function modelling for wide-field small-aperture telescopes with a denoising autoencoder. MNRAS 493 (1), pp. 651–660. External Links: Document, 2001.11716 Cited by: §4.
  • P. Jia, Q. Liu, and Y. Sun (2020) Detection and Classification of Astronomical Targets with Deep Neural Networks in Wide-field Small Aperture Telescopes. AJ 159 (5), pp. 212. External Links: Document, 2002.09211 Cited by: §1, §2, §3.1, §3.3, §4.
  • P. Jia, M. Ma, D. Cai, W. Wang, J. Li, and C. Li (2021) Compressive shack–hartmann wavefront sensor based on deep neural networks. Monthly Notices of the Royal Astronomical Society 503 (3), pp. 3194–3203. Cited by: §2.
  • P. Jia, X. Wu, Z. Li, B. Li, W. Wang, Q. Liu, and A. Popowicz (2020) Modelling the point spread function of wide field small aperture telescopes with deep neural networks–applications in point spread function estimation. arXiv preprint arXiv:2011.10243. Cited by: §4.
  • P. Jia, Y. Zhao, G. Xue, and D. Cai (2019) Optical transient object classification in wide-field small aperture telescopes with a neural network. The Astronomical Journal 157 (6), pp. 250. Cited by: §1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.1.
  • D. Lang, D. W. Hogg, K. Mierle, M. Blanton, and S. Roweis (2010) Astrometry. net: blind astrometric calibration of arbitrary astronomical images. The astronomical journal 139 (5), pp. 1782. Cited by: §1.
  • Y. Ma, H. Zhao, and D. Yao (2007) NEO search telescope in China. In Near Earth Objects, our Celestial Neighbors: Opportunity and Risk, G. B. Valsecchi, D. Vokrouhlický, and A. Milani (Eds.), IAU Symposium, Vol. 236, pp. 381–384. External Links: Document Cited by: §1.
  • F. J. Masci, R. R. Laher, B. Rusholme, D. L. Shupe, S. Groom, J. Surace, E. Jackson, S. Monkewitz, R. Beck, D. Flynn, et al. (2018) The zwicky transient facility: data processing, products, and archive. Publications of the Astronomical Society of the Pacific 131 (995), pp. 018003. Cited by: §1.
  • R. A. Mendez, J. F. Silva, R. Orostica, and R. Lobos (2014) Analysis of the cramér-rao bound in the joint estimation of astrometry and photometry. Publications of the Astronomical Society of the Pacific 126 (942), pp. 798. Cited by: §3.1.
  • H. Pablo, G. N. Whittaker, A. Popowicz, S. M. Mochnacki, R. Kuschnig, C. C. Grant, A. F. J. Moffat, S. M. Rucinski, J. M. Matthews, A. Schwarzenberg-Czerny, G. Handler, W. W. Weiss, D. Baade, G. A. Wade, E. Zocłońska, T. Ramiaramanantsoa, M. Unterberger, K. Zwintz, A. Pigulski, J. Rowe, O. Koudelka, P. Orleański, A. Pamyatnykh, C. Neiner, R. Wawrzaszek, G. Marciniszyn, P. Romano, G. Woźniak, T. Zawistowski, and R. E. Zee (2016) The BRITE Constellation Nanosatellite Mission: Testing, Commissioning, and Operations. PASP 128 (970), pp. 125001. External Links: Document, 1608.00282 Cited by: §1.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) Pytorch: an imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703. Cited by: §3.1.
  • A. Popowicz (2018) PSF photometry for BRITE nano-satellite mission. In Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10698, pp. 1069820. External Links: Document Cited by: §1.
  • J. K. Ratzloff, N. M. Law, O. Fors, H. T. Corbett, W. S. Howard, D. D. Ser, and J. B. Haislip (2019) Building the evryscope: hardware design and performance. Publications of the Astronomical Society of the Pacific 131 (1001), pp. 075001. Cited by: §1.
  • S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497. Cited by: §1.
  • S. Ruder (2016) An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747. Cited by: §3.1.
  • K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §2.
  • R. Sun and S. Yu (2019) Precise measurement of the light curves for space debris with wide field of view telescope. Astrophysics and Space Science 364 (3), pp. 39. Cited by: §1.
  • D. Turpin, M. Ganet, S. Antier, E. Bertin, L. Xin, N. Leroy, C. Wu, Y. Xu, X. Han, H. Cai, et al. (2020) Vetting the optical transient candidates detected by the gwac network using convolutional neural networks. Monthly Notices of the Royal Astronomical Society 497 (3), pp. 2641–2650. Cited by: §1.
  • Y. Xu, L. Xin, X. Han, H. Cai, L. Huang, H. Li, X. Lu, Y. Qiu, C. Wu, G. Li, et al. (2020) The gwac data processing and management system. arXiv preprint arXiv:2003.00205. Cited by: §1.
  • X. Yuan, X. Cui, G. Liu, F. Zhai, X. Gong, R. Zhang, L. Xia, J. Hu, J. Lawrence, J. Yan, et al. (2008) Chinese small telescope array (cstar) for antarctic dome a. In Ground-based and Airborne Telescopes II, Vol. 7012, pp. 70124G. Cited by: §1.
  • Z. Zhang, T. He, H. Zhang, Z. Zhang, J. Xie, and M. Li (2019) Bag of freebies for training object detection neural networks. arXiv preprint arXiv:1902.04103. Cited by: §3.1.
  • F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He (2020) A comprehensive survey on transfer learning. Proceedings of the IEEE 109 (1), pp. 43–76. Cited by: §4.