Wide field small aperture telescopes (WFSATs) are workhorses for time-domain astronomy, because they can cover a large field of view with high cadence in a cost-effective way (Yuan et al., 2008; Cui et al., 2008; Glazier et al., 2020; Popowicz, 2018; Burd et al., 2005; Ratzloff et al., 2019; Sun and Yu, 2019). As the number of WFSATs increases, so does the data volume. Since many celestial objects require immediate follow-up observations (for example, electromagnetic counterparts of gravitational wave sources or near-Earth objects), fast and efficient data processing frameworks play an important role in scientific research (Ma et al., 2007; Drake et al., 2011; Pablo et al., 2016; Masci et al., 2018; Xu et al., 2020). Because WFSATs are mainly used to observe point-like targets (stars) or streak-like targets (near-Earth objects), fast data processing frameworks for WFSATs mainly include the following steps:
1. target detection: obtain positions of astronomical target candidates from raw observation images;
2. target classification: classify these candidates into different types of astronomical targets or bogus;
3. target information extraction: obtain magnitudes or positions of astronomical targets.
Because WFSATs work remotely without active correction of system aberrations or suppression of noise, their images are affected by uncontrollable noise and highly variable point spread functions (PSFs). Such images complicate the development of image processing frameworks. For example, cosmic rays and ghost images would trigger false detections, and highly variable PSFs would introduce uncertainty into astrometry and photometry results. Data processing frameworks based on classical methods often require frequent manual intervention to keep the information extracted from observation data reliable. Manual interventions reduce data processing efficiency and further limit the scientific output of WFSATs.
In recent years, machine learning algorithms have shown great success in image processing tasks, such as image classification (Cabrera-Vives et al., 2017; Duev et al., 2019; He et al., 2020), object detection (González et al., 2018) and image segmentation (Burke et al., 2019; Hausen and Robertson, 2020). These algorithms have also been introduced into data processing frameworks for WFSATs. Machine learning based image classification algorithms (Jia et al., 2019; Turpin et al., 2020) are integrated with classical astronomical target detection algorithms, such as SExtractor (Bertin and Arnouts, 1996) or simplexy in Astrometry.net (Lang et al., 2010), to detect and classify astronomical targets of different kinds. In these integrated frameworks, a classical detection algorithm first obtains positions of astronomical target candidates. Machine learning algorithms then classify these candidates into different types. Finally, aperture photometry or PSF-fitting photometry is used to obtain magnitudes of astronomical targets for further scientific research.
However, because these integrated frameworks have sequential structures (all processes are carried out one after another), the performance of the overall framework is limited by every algorithm used in it. Take the target detection task as an example: only targets that can be detected by classical methods would be further classified. Because targets close to bright sources and dim streak-like targets can hardly be detected by classical methods, integrated methods would miss these targets.
Thanks to recent developments of deep neural networks (Goodfellow et al., 2016), end-to-end target detection algorithms have been proposed. These algorithms obtain positions and types of celestial targets from observed images at the same time. They split images into small regions and either directly classify these regions into different targets (one-step detection frameworks) or merge these regions first and then classify them into different types (two-step detection frameworks). Because WFSATs have low spatial sampling rates (several arcsec per pixel) and short exposure times (from tens of seconds down to several seconds), astronomical targets detected by them are sparsely distributed and small. For small targets, two-step detection frameworks have better performance (Ren et al., 2015). Therefore we have proposed to use feature pyramid networks and ResNet (He et al., 2016) as the backbone neural network and the Faster R-CNN structure to form a two-step astronomical target detection framework for images obtained by WFSATs (Jia et al., 2020). Tested with simulated and real observation data, our framework has shown robust performance in detecting astronomical targets of different types.
However, it should be noted that the positions and types of celestial objects obtained by the Faster R-CNN based astronomical target detection framework cannot fully satisfy scientific requirements, because magnitudes are missing from its output. Magnitudes of celestial targets play an important part in scientific observations carried out by WFSATs, such as observations of exoplanets or stellar flares. In this paper, we extend our Faster R-CNN based end-to-end astronomical target detection framework to make it suitable for photometry. We discuss the structure of the DNN for photometry (PNET) in Section 2. In Section 3, we use simulated data to test the performance of the PNET. Because photometry is a regression problem, which is sensitive to variations of noise and PSFs, we further propose a transfer learning based neural network modification method, discussed in Section 4. Finally, we draw conclusions and anticipate our future work in Section 5.
2 The structure of the PNET
Because images obtained by WFSATs have low spatial sampling rates and short exposure times, almost all targets observed by WFSATs are diffuse point-like or streak-like targets. Therefore photometry and astrometry are the most basic and important information extraction steps, and the data processing framework of a WFSAT can be divided into the following steps:
1. obtain positions and types of all celestial objects with the Faster R-CNN;
2. transform positions of these celestial objects to celestial coordinates;
3. obtain magnitudes of all detected celestial objects through aperture photometry or PSF–fitting photometry;
4. select several stars as references and use magnitude measurements and magnitudes in star catalogue to calibrate photometry results;
5. obtain calibrated magnitudes of other stars with calibration results of reference stars;
6. cross-match magnitudes and positions of these stars with different catalogues for different targets.
Because all celestial object candidates in observed images are scanned by the Faster R-CNN during the detection step (the first step), we can modify the structure of the Faster R-CNN to obtain positions and magnitudes of astronomical targets at the same time. With this modification, we increase the degree of automation of our framework and can further increase photometry and astrometry accuracy through back propagation in the training step. Based on this principle, we propose a modified structure of our Faster R-CNN based astronomical target detection framework (PNET), as shown in figure 1.
Compared with our Faster R-CNN based framework in Jia et al. (2020), we have added astrometry and photometry neural network branches after the box regression. The box regression neural network outputs types and rough positions of astronomical targets (bounding boxes with four boundary pixels to indicate their positions, and classification results to indicate the types of different celestial objects). We then obtain the centres of the bounding boxes and cut stamp images of N × N pixels (N stands for the size of stamp images) as inputs of the photometry and astrometry neural networks.
Because celestial objects with different magnitudes have different sizes while the inputs of the photometry and astrometry neural networks have a fixed size, the size of stamp images is important and requires additional manual intervention for different datasets. The stamp images should not be too large, because a large stamp would introduce additional background noise for dim stars and could contain two stars in one stamp image. The stamp images should not be too small either, because a small stamp would impose strong requirements on PSF uniformity and centroid accuracy (sub-pixel shifts would introduce strong bias when the stamp is barely larger than the star). In this paper, we set N to 9, which matches the size of most star images. It should be noted that the photometry algorithm in the PNET can be viewed as a mixture of aperture photometry and PSF-fitting photometry. Therefore, although stamp images of 9 × 9 pixels are smaller than the images of bright stars, the PNET can still return effective measurements. Likewise, the astrometry neural network can be viewed as a mixture of moment estimation and PSF-fitting astrometry, so the astrometry results remain effective when stars are larger than the stamp images.
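The stamp-cutting step described above can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the function name `cut_stamps` and the median-padding behaviour at frame edges are our own choices.

```python
import numpy as np

def cut_stamps(image, boxes, n=9):
    """Cut n x n stamps centred on the centres of detected bounding boxes.

    image: 2-D numpy array (a full frame).
    boxes: iterable of (x0, y0, x1, y1) bounding boxes from the detector.
    Returns a list of (n, n) stamps; stamps near the frame edge are
    padded with the local median background level.
    """
    half = n // 2
    pad = np.pad(image, half, mode="median")  # guard against edge boxes
    stamps = []
    for x0, y0, x1, y1 in boxes:
        # bounding-box centre, shifted by the padding offset
        cx = int(round((x0 + x1) / 2)) + half
        cy = int(round((y0 + y1) / 2)) + half
        stamps.append(pad[cy - half:cy + half + 1, cx - half:cx + half + 1])
    return stamps
```

With N = 9 as in the text, each stamp is a 9 × 9 cutout centred on the box centre, ready to feed both the photometry and astrometry branches.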
The structure of the photometry neural network is shown in figure 2. It is a convolutional neural network inspired by the VGG network (Simonyan and Zisserman, 2014), containing 11 convolutional layers and 4 fully connected layers. Because stamp images are very small (9 × 9 pixels in this paper), we use small convolutional kernels in each layer. After each convolutional layer, we use the Rectified Linear Unit (ReLU) as the activation function. The output of the last convolutional layer is passed to 4 fully connected layers to estimate magnitudes of celestial objects. The input of the photometry neural network is a stamp image and its output is the magnitude of the celestial object in the stamp. The astrometry neural network has almost the same structure, except that it has two outputs: the x and y positions in the camera (CCD) coordinates.
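As a concrete illustration of this architecture, the following PyTorch sketch builds an 11-convolution, 4-fully-connected regressor for 9 × 9 stamps. Kernel sizes, channel widths and hidden-layer sizes are assumptions; the text fixes only the layer counts, the activation function and the input size.

```python
import torch
import torch.nn as nn

class PhotometryNet(nn.Module):
    """VGG-style regressor: 11 conv layers + 4 fully connected layers.
    Channel width (32) and 3x3 kernels are illustrative assumptions."""

    def __init__(self, n_out=1, width=32):
        super().__init__()
        convs, in_ch = [], 1
        for _ in range(11):  # 11 convolutional layers, each followed by ReLU
            convs += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()]
            in_ch = width
        self.features = nn.Sequential(*convs)
        self.head = nn.Sequential(  # 4 fully connected layers
            nn.Flatten(),
            nn.Linear(width * 9 * 9, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_out),  # magnitude (or x, y with n_out=2)
        )

    def forward(self, x):
        return self.head(self.features(x))
```

The astrometry branch would reuse the same class with `n_out=2` for the x and y positions.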
The loss function is important for a neural network and is directly related to the way we train it. It should be noted that we could either train the photometry neural network, the astrometry neural network and the detection neural network separately, or train the Faster R-CNN based framework as a whole. According to our experience, training is more effective when the neural networks are trained together (Jia et al., 2021), at the cost of larger GPU memory usage and longer training time. In this paper, we train the Faster R-CNN based neural network together with the photometry and astrometry neural networks. The loss function of the Faster R-CNN based celestial object detection and photometry framework is defined in equation 1,

Loss = L_det + L_ast + L_pho,    (1)
where L_det is the detection (classification) loss function, L_ast is the astrometry loss function and L_pho is the photometry loss function. The astrometry and photometry loss functions are defined in equation 2,

L_ast = (1/N) * sum_i [(x_i - x'_i)^2 + (y_i - y'_i)^2],
L_pho = (1/N) * sum_i (m_i - m'_i)^2,    (2)

where (x_i, y_i) and (x'_i, y'_i) stand for ground-truth and predicted positions of celestial objects, m_i and m'_i stand for ground-truth and predicted magnitudes, and i stands for the index of celestial objects. L_ast and L_pho are the mean square errors of astrometry and photometry. L_det contains four loss functions: the first is the classification loss (N stands for the number of celestial objects, t_i stands for the target label and p_i stands for the predicted classification probability), the second is the bounding box regression loss (used to define the rough position of celestial objects with four pixels: upper left, upper right, bottom left and bottom right), the third is used to regularize the bounding box regression loss, and the last is a smoothed L1 loss defined in equation 3,

smooth_L1(x) = 0.5 * x^2 if |x| < 1, otherwise |x| - 0.5.    (3)
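A minimal numerical sketch of the astrometry and photometry terms of equation 1 and the smoothed L1 loss of equation 3, written with NumPy for clarity. The term weights `w_ast` and `w_pho` are an assumption; the actual framework computes these losses inside the training graph.

```python
import numpy as np

def smooth_l1(x):
    """Smoothed L1 loss (equation 3): quadratic near zero, linear beyond."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def pnet_loss(pos_true, pos_pred, mag_true, mag_pred, det_loss,
              w_ast=1.0, w_pho=1.0):
    """Total loss in the spirit of equation 1: detection loss plus the
    mean square astrometry and photometry errors of equation 2.
    The weights w_ast and w_pho are illustrative assumptions."""
    l_ast = np.mean(np.sum((pos_true - pos_pred) ** 2, axis=1))
    l_pho = np.mean((mag_true - mag_pred) ** 2)
    return det_loss + w_ast * l_ast + w_pho * l_pho
```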
With the modifications mentioned above, a trained PNET can obtain types, positions and magnitudes of different celestial objects at the same time. The data processing procedure with the PNET includes the following steps:
1. obtain positions, magnitudes and types of all celestial objects by the PNET;
2. transform positions of these celestial objects to celestial coordinates;
3. select several stars as references and obtain magnitudes of all celestial objects according to references;
4. cross–match magnitudes and positions of all celestial objects with different catalogues.
With the PNET, the complexity of the data processing framework for WFSATs is reduced, and we can obtain magnitudes, positions and types of celestial objects without much manual intervention. We discuss the performance of the PNET in Section 3.
3 Training and testing the PNET with simulated data
3.1 Training the PNET
According to our experience, the bounding box regression in the original Faster R-CNN based astronomical target detection framework can achieve position accuracy better than 1 pixel (Jia et al., 2020), which is enough to cross-match stars with catalogues for WFSATs. The astrometry neural network in the PNET can obtain positions of stars with even higher accuracy (better than 0.01 pixel for stars with moderate brightness). We therefore set aside the performance of the astrometry neural network and test the performance of the photometry neural network, which is our main target in developing the PNET.
The PNET is trained and tested with simulated data in this paper, because simulation gives us ground-truth positions and magnitudes. We use the Durham adaptive optics simulation platform (DASP) (Basden et al., 2018) and a highly reliable atmospheric turbulence simulation code (Jia et al., 2015b, a) to generate PSFs. We then use SkyMaker (Bertin, 2009) to generate simulated images with these PSFs. Images used in this paper are ideal simulated images without ghost images, cosmic rays or clouds; they are solely used to test the photometry accuracy of the PNET. Parameters of the simulated telescope are shown in table 1. We generate 5000 simulated images to train the PNET and 500 simulated images to test it. In these images, celestial objects have magnitudes from mag 10 to mag 23. There are no dense star fields (regions containing more than 70 stars) in these images: considering the observation mode of WFSATs, it would be unlikely to obtain images with many dense star fields, and according to the principle of the PNET, it would be unreliable to extract effective information from dense star fields. One frame of the simulated images is shown in figure 3.
Table 1: Parameters of the simulated telescope.

|Parameter|Value|
|Field of view|10 arcmin|
|Observation wavelength|500 nm|
|FWHM of seeing disc|1 arcsec|
|Exposure time|1 second|
|Sky background|24 mag|
It should be noted that celestial objects with different magnitudes have different contributions to the photometry and astrometry loss. The physical limitation of astrometry and photometry accuracy, also known as the Cramér-Rao bound, indicates that the uncertainty of astrometry and photometry is related to S/N, where S stands for signal and N stands for noise (Mendez et al., 2014). Therefore, theoretically the differences between predicted and ground-truth magnitudes or positions are smaller for brighter stars. If the numbers of stars with different magnitudes were similar in the training set, the final astrometry and photometry accuracy would be strongly skewed to fit dim stars.
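For reference, in the photon-noise-limited case this S/N scaling of the photometric uncertainty takes the familiar explicit form (a standard result, not derived in the paper):

```latex
% Magnitude error from a fractional flux error N/S:
\sigma_m \simeq \frac{2.5}{\ln 10}\,\frac{N}{S} \approx \frac{1.086}{S/N}
```

so halving the S/N roughly doubles the expected magnitude error, which is why dim stars dominate an unweighted mean square loss.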
In this paper, to keep the astrometry and photometry results stable, we generate stars of different magnitudes following the same distribution: we set the slope of differential star counts to 0.2 in SkyMaker for both simulated and real observed images. There are then more dim stars in the simulated images, and the photometry and astrometry results would be better for dim stars. In real applications, when distributions of stars are hard to define manually, we should either use a training set whose stars follow the real distribution, or modify the loss functions of the PNET with equation 4 and equation 5 to keep photometry and astrometry results stable for stars of different magnitudes.
where sigma_ast(m) and sigma_pho(m) are the theoretical limits or required accuracies of astrometry and photometry for celestial objects of magnitude m. As shown in these two equations, the values of the loss functions are set directly to zero when astrometry or photometry results are close to the required accuracy.
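One plausible realisation of such a thresholded loss, consistent with the behaviour described above. The exact functional form used in equations 4 and 5 is an assumption here; only the "zero within the required accuracy" behaviour is taken from the text.

```python
import numpy as np

def thresholded_mse(err, sigma_req):
    """Loss in the spirit of equations 4 and 5: the squared error of each
    star contributes only when it exceeds the required accuracy
    sigma_req(m) for that star's magnitude; within the requirement it
    contributes zero, so bright stars stop dominating the gradient once
    they are 'good enough'."""
    return np.mean(np.where(np.abs(err) > sigma_req, err ** 2, 0.0))
```

Here `err` is the per-star photometry (or astrometry) residual and `sigma_req` the per-star required accuracy, both arrays of the same length.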
The PNET is implemented with PyTorch (Paszke et al., 2019) on a computer with 2 Xeon E5-2650 CPUs and 256 GB of memory. Training and testing with full input images cost around 14 GB of GPU memory; therefore, we use an RTX 3090 (with 24 GB of GPU memory) to train and test the PNET. The PNET is initialized with random weights; the optimization algorithm is Adam (Kingma and Ba, 2014) for the detection part and SGD (Ruder, 2016) for the astrometry and photometry parts. We use the warm-up method to set learning rates (Zhang et al., 2019): the initial learning rate is 0.0003 for the photometry and astrometry neural networks and 0.00003 for the detection neural network. The batch size of the PNET is 1000. Trends of the loss functions on the training and test sets over different epochs are shown in figure 4. As shown in this figure, we stop training the PNET after 30 epochs; it costs around 23 hours to train the PNET and 5 hours to train the photometry neural network alone.
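The warm-up schedule can be sketched as a simple linear ramp. The number of warm-up steps and the constant rate after warm-up are assumptions; the text only states that the warm-up method of Zhang et al. (2019) is used with the initial rates above.

```python
def warmup_lr(step, warmup_steps, base_lr):
    """Linear learning-rate warm-up: ramp from ~0 to base_lr over
    warmup_steps training steps, then hold base_lr constant.
    (Holding constant afterwards is an illustrative assumption.)"""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

With `base_lr=0.0003` for the photometry and astrometry branches (0.00003 for detection), the first steps use a small fraction of the target rate, which stabilizes the randomly initialized heads.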
3.2 Testing the PNET with stamp images from simulated data
We use two data sets to test the performance of different parts of the PNET. The photometry part is trained and tested with stamp images cut from simulated images to show its feasibility in magnitude estimation. Small stamp images (9 × 9 pixels) of stars with different magnitudes (from mag 10 to mag 23) are cut from the original simulated images as training and test sets. We split stars of different magnitudes into 13 categories. The statistical results are shown in table 2 and figure 5, along with results obtained by aperture photometry from SExtractor. As shown in table 2, the photometry part of the PNET alone can estimate magnitudes from stamp images. The photometry error of the PNET is below 0.1 mag for all stars and smaller than 0.004 mag for stars brighter than 18 mag. Besides, the photometry part of the PNET behaves differently from the aperture photometry of SExtractor: SExtractor has larger variance and smaller bias, while the PNET has larger bias and smaller variance. From figure 5, we can find that the photometry part of the PNET performs better than SExtractor. However, we should note that the test set is ideal, which leads to unrealistically high accuracy; more realistic images are required to further test the performance of the PNET.
Table 2:

|Magnitude|NStar|Photometry error (PNET)|Photometry error (SExtractor)|
3.3 Testing the PNET with full frame of simulated images
In this part, we use full-frame simulated images, as shown in figure 3, to test the PNET. 500 simulated images are used to test the performance of the PNET and SExtractor. The photometry error is used to evaluate the photometry ability of both algorithms. The F1 and F2 scores are used to evaluate the detection ability and astrometry accuracy of both algorithms (an effective detection is defined as a target detected with astrometry accuracy better than 0.5 pixel). The F-scores are defined as:

F1 = 2 * precision * recall / (precision + recall),
F2 = 5 * precision * recall / (4 * precision + recall),

where precision and recall stand for the precision rate and recall rate of each method. They are defined as:

precision = TP / (TP + FP),  recall = TP / (TP + FN),

where TP stands for true positive detections, FP stands for false positive detections and FN stands for false negative detections.
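These scores follow directly from the detection counts; a small helper (the function name is our own) makes the F1/F2 definitions concrete as the general F-beta score:

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-beta score from counts of true positive (tp), false positive (fp)
    and false negative (fn) detections; beta=1 gives F1, beta=2 gives F2
    (F2 weights recall more heavily than precision)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, 8 true detections with 2 false alarms and 2 misses gives precision = recall = 0.8, so both F1 and F2 equal 0.8.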
As discussed earlier, there are more dim celestial objects in the simulated images. Therefore, during the training stage there is a bias we call 'dimmer and more'. Although the Cramér-Rao bound indicates that the theoretical photometry error of dim stars is larger than that of bright stars, the larger number of dim stars during training drives the PNET to learn a highly complex function: estimating accurate information from dim targets. Meanwhile, more dim stars increase the photometry accuracy of the PNET for dim stars. Therefore, there is a trade-off for users of the PNET: whether to obtain high photometry accuracy for bright stars or to increase its photometry ability for dim stars. In this paper, we do not make any additional modifications to the loss function or the training data.
Statistical results of detection ability can be found in figure 6 and table 3. From this figure, we can find that SExtractor has better detection ability than the PNET for bright targets, but the difference is small. Besides, there is no structural noise in the simulated images (cosmic rays, hot pixels or clouds), which leads to unrealistically good results for SExtractor: its accuracy is almost 1 for bright targets. Detailed comparisons of the detection abilities of the PNET and SExtractor on real observed images can be found in Jia et al. (2020). For dim targets, the PNET is better than SExtractor in target detection tasks. In this paper, we compare the detection abilities of the different methods to show that, with more neural network branches, the detection ability of the PNET is not affected.
Table 3:

|Magnitude|Photometry error (PNET)|Photometry error (SExtractor)|F1/F2-Score (PNET)|F1/F2-Score (SExtractor)|
Statistical results of the photometry error can be found in table 3. The photometry error is smaller than 0.01 mag for stars brighter than 20 mag. To better compare results obtained by our method and SExtractor, we further plot the distributions of the mean absolute difference between photometry results and ground-truth magnitudes in figure 7. As shown in figure 7, the PNET performs better in photometry than SExtractor. Besides, the photometry error increases as the magnitudes of stars increase, which reflects the limitation of the Cramér-Rao bound.
4 Keeping Performance of the PNET for images with variable PSFs
Observation conditions change during real observations carried out by WFSATs, such as background variations and PSF variations. These variations introduce additional noise into photometry results. For example, if the seeing gets worse (the PSFs become larger), the magnitudes obtained by our method would be larger. To reduce these effects, the whole framework needs to be calibrated before processing real observation images. Meanwhile, real observation experience indicates that although the shapes of star images and the background will change, they remain within a predefined range (Jia et al., 2020). Therefore, a PNET trained with simulated or real observation data can be further trained when observation conditions change, to keep photometry and astrometry results stable. Based on this idea, we apply a transfer learning strategy (Zhuang et al., 2020) to keep results stable when observation conditions change. Similar methods have already been tested for Faster R-CNN based detection algorithms and have shown their effectiveness (Jia et al., 2020).
For the PNET, when observation conditions change, we propose to extract images of reference stars to obtain PSFs and background noise (Jia et al., 2020). We then use the simulation methods discussed in Section 3 to generate simulated images and train the PNET with them. To test the effectiveness of this approach, we introduce additional defocus and coma into the PSFs to generate simulated images (PSF-000 stands for the original images, PSF-040 for images generated with PSFs with small defocus and coma, and PSF-120 for images generated with PSFs with large defocus and coma). Several stamp images are shown in figure 8. As can be seen from this figure, these PSFs have different shapes.
In this paper, we use simulated images generated from the extracted PSFs to further train the PNET for 15 epochs. After training, the precision-recall (P-R) curves are shown in figure 9. The P-R curves are used to evaluate detection performance: an algorithm performs better if its P and R values are closer to one. We can find that, through transfer learning, the PNET has better detection performance than the original PNET. As the difference between PSFs increases, the benefit brought by transfer learning also increases. We further test the photometry accuracy of the PNET before and after training, as shown in figure 10. As shown in these figures, the photometry accuracy is stabilized through transfer learning. However, we can still find that the photometry accuracy is affected when the PSF has changed.
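This transfer-learning step can be sketched as follows. The `.backbone`/`.photometry` interface, the choice to freeze the detection backbone and the plain MSE objective are assumptions used for illustration; the paper only states that the trained PNET is retrained for 15 epochs on images simulated with the newly extracted PSFs.

```python
import torch

def finetune(pnet, loader, epochs=15, lr=3e-4):
    """Fine-tune a trained PNET on images simulated with PSFs extracted
    from the new observing conditions (Section 4). Here we freeze an
    assumed detection backbone and retrain only the photometry head."""
    for p in pnet.backbone.parameters():
        p.requires_grad = False            # keep detection features fixed
    opt = torch.optim.SGD(
        (p for p in pnet.parameters() if p.requires_grad), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for stamps, mags in loader:        # stamps simulated with new PSFs
            opt.zero_grad()
            loss = loss_fn(pnet.photometry(stamps), mags)
            loss.backward()
            opt.step()
    return pnet
```

Starting from the trained weights rather than a random initialization is what keeps this step cheap compared with the original 23-hour training.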
5 Conclusions and future work
In this paper, we propose an end-to-end detection and photometry framework (PNET) for WFSATs. The PNET could obtain positions, types and magnitudes of celestial objects at the same time and has better performance than traditional methods. With transfer learning, the PNET could obtain stable photometry results. The PNET could be used as a general data processing framework for information extraction from data obtained by WFSATs.
However, there are still some problems to be solved in the future. First of all, because the PNET costs around 14 GB of GPU memory and around 0.512 seconds to process a full image, optimization and pruning of the neural networks are required. Secondly, transfer learning takes a long time to maintain the performance of the PNET, and PSFs of newly observed images need to be extracted as a prior condition. We need to investigate a more efficient method to extract PSFs and train the PNET. Our group is now developing PSF forecasting and PSF reconstruction methods for the PNET to design an efficient astronomical target extraction method for WFSATs.
Peng Jia would like to thank Professor Zhaohui Shang from National Astronomical Observatories, Professor Rongyu Sun from Purple Mountain Observatory, Dr. Huigen Liu from Nanjing University, and Dr. Chengyuan Li and Dr. Bo Ma from Sun Yat-Sen University, who provided very helpful suggestions for this paper. This work is supported by the National Natural Science Foundation of China (NSFC) (11503018) and the Joint Research Fund in Astronomy (U1631133, U1931207) under a cooperative agreement between the NSFC and the Chinese Academy of Sciences (CAS). The authors acknowledge the French National Research Agency (ANR) for supporting this work through the ANR APPLY program (grant ANR-19-CE31-0011) coordinated by B. Neichel. This work is also supported by the Shanxi Province Science Foundation for Youths (201901D211081), the Research and Development Program of Shanxi (201903D121161), a Research Project Supported by the Shanxi Scholarship Council of China, and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2019L0225).
Data Availability Statements
The code in this paper can be downloaded from https://zenodo.org/record/4784689#.YKxbe6gzaUk and, after acceptance, the code will be released in the PaperData Repository powered by China-VO with a DOI number.
References

- Basden et al. (2018). The Durham adaptive optics simulation platform (DASP): current status. SoftwareX 7, pp. 63–69.
- Bertin and Arnouts (1996). SExtractor: software for source extraction. Astronomy and Astrophysics Supplement Series 117 (2), pp. 393–404.
- Bertin (2009). SkyMaker: astronomical image simulations made easy. Memorie della Societa Astronomica Italiana 80, p. 422.
- Burd et al. (2005). Pi of the Sky: search for optical flashes of extragalactic origin. In Photonics Applications in Industry and Research IV, Vol. 5948, p. 59481H.
- Burke et al. (2019). Deblending and classifying astronomical sources with Mask R-CNN deep learning. MNRAS 490 (3), pp. 3952–3965.
- Cabrera-Vives et al. (2017). Deep-HiTS: rotation invariant convolutional neural network for transient detection. ApJ 836 (1), p. 97.
- Cui et al. (2008). Antarctic Schmidt Telescopes (AST3) for Dome A. In Ground-based and Airborne Telescopes II, Vol. 7012, p. 70122D.
- Drake et al. (2011). The Catalina Real-time Transient Survey. Proceedings of the International Astronomical Union 7 (S285), pp. 306–308.
- Duev et al. (2019). DeepStreaks: identifying fast-moving objects in the Zwicky Transient Facility data with deep learning. MNRAS 486 (3), pp. 4158–4165.
- Glazier et al. (2020). Evryscope and K2 constraints on TRAPPIST-1 superflare occurrence and planetary habitability. arXiv e-prints, arXiv:2006.14712.
- González et al. (2018). Galaxy detection and identification using deep learning and data augmentation. Astronomy and Computing 25, pp. 103–109.
- Goodfellow et al. (2016). Deep Learning. Vol. 1, MIT Press, Cambridge.
- Hausen and Robertson (2020). Morpheus: a deep learning framework for the pixel-level analysis of astronomical image data. The Astrophysical Journal Supplement Series 248 (1), p. 20.
- He et al. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- He et al. (2020). Deep learning for strong lensing search: tests of the convolutional neural networks and new candidates from KiDS DR3. MNRAS.
- Jia et al. (2015b). Real-time generation of atmospheric turbulence phase screen with non-uniform fast Fourier transform. MNRAS 450 (1), pp. 38–44.
- Jia et al. (2015a). Simulation of atmospheric turbulence phase screen for large telescope and optical interferometer. MNRAS 447 (4), pp. 3467–3474.
- Jia et al. (2020). Point spread function modelling for wide-field small-aperture telescopes with a denoising autoencoder. MNRAS 493 (1), pp. 651–660.
- Jia et al. (2020). Detection and classification of astronomical targets with deep neural networks in wide-field small aperture telescopes. AJ 159 (5), p. 212.
- Jia et al. (2021). Compressive Shack–Hartmann wavefront sensor based on deep neural networks. MNRAS 503 (3), pp. 3194–3203.
- Jia et al. (2020). Modelling the point spread function of wide field small aperture telescopes with deep neural networks: applications in point spread function estimation. arXiv preprint arXiv:2011.10243.
- Jia et al. (2019). Optical transient object classification in wide-field small aperture telescopes with a neural network. The Astronomical Journal 157 (6), p. 250.
- Kingma and Ba (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Lang et al. (2010). Astrometry.net: blind astrometric calibration of arbitrary astronomical images. The Astronomical Journal 139 (5), p. 1782.
- Ma et al. (2007). NEO search telescope in China. In Near Earth Objects, our Celestial Neighbors: Opportunity and Risk, IAU Symposium, Vol. 236, pp. 381–384.
- Masci et al. (2018). The Zwicky Transient Facility: data processing, products, and archive. Publications of the Astronomical Society of the Pacific 131 (995), p. 018003.
- Mendez et al. (2014). Analysis of the Cramér–Rao bound in the joint estimation of astrometry and photometry. Publications of the Astronomical Society of the Pacific 126 (942), p. 798.
- Pablo et al. (2016). The BRITE Constellation nanosatellite mission: testing, commissioning, and operations. PASP 128 (970), p. 125001.
- Paszke et al. (2019). PyTorch: an imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703.
- Popowicz (2018). PSF photometry for BRITE nano-satellite mission. In Proc. SPIE, Vol. 10698, p. 1069820.
- Ratzloff et al. (2019). Building the Evryscope: hardware design and performance. Publications of the Astronomical Society of the Pacific 131 (1001), p. 075001.
- Ren et al. (2015). Faster R-CNN: towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497.
- Ruder (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
- Simonyan and Zisserman (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- Sun and Yu (2019). Precise measurement of the light curves for space debris with wide field of view telescope. Astrophysics and Space Science 364 (3), p. 39.
- Turpin et al. (2020). Vetting the optical transient candidates detected by the GWAC network using convolutional neural networks. MNRAS 497 (3), pp. 2641–2650.
- Xu et al. (2020). The GWAC data processing and management system. arXiv preprint arXiv:2003.00205.
- Yuan et al. (2008). Chinese Small Telescope ARray (CSTAR) for Antarctic Dome A. In Ground-based and Airborne Telescopes II, Vol. 7012, p. 70124G.
- Zhang et al. (2019). Bag of freebies for training object detection neural networks. arXiv preprint arXiv:1902.04103.
- Zhuang et al. (2020). A comprehensive survey on transfer learning. Proceedings of the IEEE 109 (1), pp. 43–76.