Astronomy was among the first fields to face the challenges of big data. In the new century, with the development of astronomical observation techniques, astronomy has entered the big data era, and astronomical data is growing rapidly into terabytes (TB) and even petabytes (PB). When the Sloan Digital Sky Survey (SDSS) started in 2000, its telescope in New Mexico collected more data in its first few weeks than had been gathered in the entire previous history of astronomy. By 2010 its archive contained about 1.4 × 10^14 bytes of information, while the Large Synoptic Survey Telescope (LSST), due to come online in Chile in 2019, will be able to acquire that same amount of data within 5 days. Today, a number of countries are running large-scale sky survey projects. Besides SDSS, these projects include PanSTARRS (Panoramic Survey Telescope and Rapid Response System), WISE (Wide-field Infrared Survey Explorer), 2MASS (Two Micron All Sky Survey), Gaia of the European Space Agency (ESA), UKIDSS (UKIRT Infrared Deep Sky Survey), NVSS (NRAO VLA Sky Survey), FIRST (Faint Images of the Radio Sky at Twenty-cm), 2dF (Two-degree-Field Galaxy Redshift Survey), LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope), GWAC (Ground Wide Angle Camera) in China, and so on. These sky surveys are generating enormous volumes of astronomical data.
Astronomical data exhibits the four V characteristics of big data: Volume, Velocity, Variety and Veracity. For example, LSST will cover the whole sky and write one full survey cycle into its database every 7 days, while GWAC covers 5,000 square degrees and stores a new catalog every 15 seconds in real time. Astronomy has moved into the big data era, so it is important to study the big data generated by astronomy. However, existing database systems cannot meet the demands of astronomical data, especially GWAC's real-time and scalability requirements. We need to design our own database system for the discovery of transient celestial phenomena on short timescales.
The rest of the paper is organized as follows. Section 2 surveys the background. Section 3 gives the problem definition and basic knowledge. Section 4 presents the functional analysis of our database prototype. Section 5 points out the challenges in the design of our database prototype. Section 6 lists candidate systems used in astronomy and discusses their advantages and disadvantages. Finally, Section 7 concludes this paper.
GWAC, built in China, consists of 36 wide-angle telescopes with 18 cm apertures. Each telescope is equipped with a 4k × 4k charge-coupled device (CCD) detector. Together the cameras cover 5,000 square degrees with a temporal sampling of 15 seconds, observing a fixed sky area for 8 hours each observation night. Given the size of its observation field and its high sampling frequency, GWAC has a special advantage in time-domain astronomical observation, but the huge data volume and high temporal sampling pose great challenges for data management and processing.
| Cameras | One Day (8 hours) | One Year (260 days) | Ten Years |
|---|---|---|---|
| 1 | 3.37 × 10^8 records (61.88 GB) | 8.77 × 10^10 records (15.71 TB) | 8.77 × 10^11 records (157.1 TB) |
| 36 | 1.21 × 10^10 records (2.17 TB) | 3.16 × 10^12 records (565.62 TB) | 3.16 × 10^13 records (5.52 PB) |
As shown in Table 1, GWAC works 8 hours a night, 260 days a year on average. Each star catalog extracted from one image contains about 1.756 × 10^5 records, so the camera array generates about 6.3 × 10^6 records every 15 seconds, and each observation night produces 1920 × 36 = 69120 images occupying about 2.17 TB of storage. The requirements on the database management system (DBMS) are: (1) rapid big data storage capacity: all star catalogs must be ingested within 15 seconds, and the 2.17 TB of star catalog data from each observation night must be stored before the next observation night; (2) real-time analysis of the high-speed data stream and rapid contextual computing capacity over the massive, incessant, high-density star catalogs, i.e., associating the star catalog data generated by each CCD within 15 seconds with the reference star catalog to form light curves; (3) over its 10-year design cycle, GWAC will generate about 5.5 PB of star catalogs, so the DBMS must have great management ability for massive data.
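These figures can be verified with simple arithmetic. The sketch below recomputes them in Python from the parameters quoted in the text; the ~180 bytes/record figure is a derived estimate, not a value stated by GWAC:

```python
# Back-of-envelope check of the GWAC data rates quoted above.
# Parameters from the text: 36 cameras, one 1.756e5-record catalog
# per camera every 15 s, 8-hour nights, 260 nights per year.
CAMERAS = 36
RECORDS_PER_CATALOG = 1.756e5
CADENCE_S = 15
NIGHT_S = 8 * 3600

exposures_per_night = NIGHT_S // CADENCE_S           # 1920 exposures
images_per_night = exposures_per_night * CAMERAS     # 69120 images
records_per_cycle = RECORDS_PER_CATALOG * CAMERAS    # ~6.3e6 every 15 s
records_per_night = RECORDS_PER_CATALOG * images_per_night  # ~1.21e10

# 2.17 TB per night over ~1.21e10 records implies roughly
# 180 bytes per catalog record (a derived estimate).
bytes_per_record = 2.17e12 / records_per_night
```

Scaling the nightly totals by 260 nights and 10 years reproduces the yearly and ten-year figures in Table 1.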
For GWAC, the most immediate approach to data management and processing is a database (for data storage only) plus peripheral programs (for rapid computation and result retrieval). Taking cross-match as the key technique, Xu et al. developed a sky-partitioning algorithm based on the longitude and latitude of space, greatly increasing the speed of cross-match computation. Exploiting the parallel computing power of graphics processors (GPUs), Zhao et al. used GPU acceleration to speed up image subtraction processing. Zhao et al. also developed a GPU-accelerated version of the astronomical point-source extraction program SExtractor. Wang et al. developed a GPU-based cross-match acceleration algorithm. The advantages of this approach are its straightforward design and the many mature techniques available. The disadvantages are that the database constantly exchanges data with the peripheral programs, incurring useless I/O overhead, and that the loose combination of separate programs precludes optimization as a whole.
Jim Gray directed the development of SkyServer and proposed the Zone algorithm [5, 4], which uses the SQL of a DBMS to implement a spatial index in place of the classical Hierarchical Triangular Mesh (HTM). This method reduces data movement and increases speed, and it embodies a key principle of large-scale scientific computation and database architecture design: bring the computation to the data rather than moving the data to the computation. Inspired by this idea, this paper proposes a design that combines GWAC's data processing and data management in a single database platform.
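The Zone idea can be illustrated with a small, self-contained sketch: declination is discretized into fixed-height zones, and the cross-match becomes a plain SQL join that runs inside the database. The zone height, match radius, and table layout below are illustrative assumptions, and the simple RA/Dec box test stands in for a proper spherical-distance check:

```python
import math
import sqlite3

ZONE_H = 0.01  # zone height in degrees (an illustrative choice)

def zone_of(dec):
    """Zone algorithm: map a declination to an integer stripe ID."""
    return math.floor(dec / ZONE_H)

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE obj (id INTEGER, zone INTEGER, ra REAL, dec REAL);
  CREATE TABLE ref (id INTEGER, zone INTEGER, ra REAL, dec REAL);
""")
for tbl, sid, ra, dec in [("obj", 1, 120.0001, 30.0002),
                          ("ref", 7, 120.0000, 30.0000)]:
    db.execute(f"INSERT INTO {tbl} VALUES (?,?,?,?)",
               (sid, zone_of(dec), ra, dec))

r = 0.001  # match radius in degrees
# The join runs inside the DBMS: computation is brought to the data,
# and the zone predicate prunes candidates to neighboring stripes.
pairs = db.execute("""
  SELECT o.id, t.id FROM obj o JOIN ref t
    ON t.zone BETWEEN o.zone - 1 AND o.zone + 1
   AND t.ra  BETWEEN o.ra - ?  AND o.ra + ?
   AND t.dec BETWEEN o.dec - ? AND o.dec + ?
""", (r, r, r, r)).fetchall()
```

Because both the zone computation and the join are expressed in SQL, the optimizer can exploit indexes on the zone column instead of shipping catalogs out to a peripheral program.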
Massive astronomical data poses a great challenge for data storage and management, so rapid processing of massive astronomical data is very important. The GWAC astronomical database must provide query services and form light curves. The database has two scientific targets:
Rapid big data storage capacity. All star catalogs generated by the cameras can be stored within 15 seconds, and the 2.17 TB of star catalog data from each observation night can be stored before the next observation night.
High-speed data collection. Data can be analyzed in real time, including efficient detection and dynamic recognition of astronomical objects.
The main scientific goal of GWAC is to search for optical transient sources in real time, locate them in the observed sky, and build the star catalog index. GWAC surveys the sky every 15 seconds. Facing continuous observation of dense stellar fields and massive star catalogs on short timescales, the data processing system must have the associative computing ability to rapidly recognize celestial objects, i.e., to associate the star catalog data generated by each CCD every 15 seconds with the reference star catalog to generate light curves. Our goal, therefore, is to develop a database that integrates the point-source identification algorithm and offers high scalability.
3.1 Problem Definition
Point source extraction. The camera array consists of 36 wide-angle telescopes. Each telescope T_i (i = 1, 2, …, 36) is equipped with a 4k × 4k CCD detector. Point source extraction transforms the optical image captured by the CCD detector into a digital signal, which forms the star catalog data.
| Column | Type | Description |
|---|---|---|
| ID | long int | Every inserted source measurement gets a unique ID, generated by the source extraction procedure. |
| imageid | int | The reference ID of the image from which the source was extracted. |
| zone | small int | The zone ID in which the source's declination resides, calculated by the source extraction procedure. |
| ra | double | Right ascension of the source (J2000 degrees), calculated by the source extraction procedure. |
| dec | double | Declination of the source (J2000 degrees), as above. |
| mag | double | The magnitude of the source. |
| mag_error | double | The error of the magnitude. |
| pixel_x | double | The instrumental position of the source on the CCD along x. |
| pixel_y | double | The instrumental position of the source on the CCD along y. |
| ra_err | double | The 1-sigma error on right ascension (degrees). |
| dec_err | double | The 1-sigma error on declination (degrees). |
| x | double | Cartesian coordinate representation of RA and declination, calculated by the source extraction procedure. |
| y | double | Cartesian coordinate representation of RA and declination, as above. |
| z | double | Cartesian coordinate representation of RA and declination, as above. |
| flux | double | The flux measurement of the source, calculated from the mag value. |
| flux_err | double | The flux error of the source. |
| flag | int | A flag set by the source extraction to indicate, for instance, that an object has been truncated at the edge of the image. |
| background | double | The background of the image, estimated by the source extraction. |
| threshold | double | The level from which the source extraction starts treating pixels as part of objects. |
| ellipticity | double | How stretched the object is. |
| class_star | double | The source extraction's classification of the object. |
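The catalog schema above can be expressed as SQL DDL. The sketch below creates it in SQLite via Python; the type mappings (e.g., "long int" as INTEGER) and the quoting of "dec" (a reserved word in some SQL dialects) are illustrative choices, not the actual GWAC DDL:

```python
import sqlite3

# Illustrative DDL for the source-catalog table described above.
DDL = """
CREATE TABLE catalog (
  id        INTEGER PRIMARY KEY,  -- unique per extracted source
  imageid   INTEGER,              -- image the source came from
  zone      INTEGER,              -- declination zone ID
  ra        REAL, "dec" REAL,     -- J2000 position (degrees)
  mag       REAL, mag_error REAL,
  pixel_x   REAL, pixel_y REAL,   -- instrumental CCD position
  ra_err    REAL, dec_err REAL,   -- 1-sigma positional errors
  x REAL, y REAL, z REAL,         -- Cartesian representation of (ra, dec)
  flux REAL, flux_err REAL,
  flag      INTEGER,              -- e.g. truncated at the image edge
  threshold REAL, ellipticity REAL, class_star REAL
);
"""
db = sqlite3.connect(":memory:")
db.execute(DDL)
db.execute('INSERT INTO catalog (id, imageid, zone, ra, "dec", mag) '
           'VALUES (1, 42, 3000, 120.0, 30.0, 14.2)')
row = db.execute('SELECT ra, "dec", mag FROM catalog WHERE id = 1').fetchone()
```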
Cross-match. Cross-match compares and matches the object catalog against the template catalog. As shown in Figure 1, if a source in the object catalog matches the template catalog, the pipeline sends it to the time-series photometry channel to process and manage its light curve; if it cannot be matched in the template catalog, it is a transient source (candidate). Cross-match is the key algorithm by which GWAC searches for transient sources and generates light curves, and it depends on an effective partition strategy. We divide the sky into horizontal strips in the pixel coordinate system, so that each source belongs to exactly one strip. Cross-match can first compare strip membership to reduce the number of comparisons, and the strip can be integrated inside the database as the basic unit of data processing.
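A minimal in-memory sketch of this strip-based cross-match follows. The strip height, match radius, and record layout are illustrative assumptions, and the RA/Dec box test stands in for a proper angular-distance check:

```python
from collections import defaultdict
from math import floor

STRIP_H = 0.01  # strip height in degrees (illustrative)

def strip(dec):
    """Assign a source to a horizontal strip by its declination."""
    return floor(dec / STRIP_H)

def cross_match(objects, template, radius=0.001):
    """Match new detections against the template catalog, strip by strip.

    Sources whose strips differ by more than one cannot match, so only
    neighboring strips are compared -- this cuts the comparison count.
    Returns (matched id pairs, transient-candidate ids).
    """
    by_strip = defaultdict(list)
    for t in template:
        by_strip[strip(t["dec"])].append(t)
    matched, candidates = [], []
    for o in objects:
        s = strip(o["dec"])
        hit = None
        for t in by_strip[s - 1] + by_strip[s] + by_strip[s + 1]:
            if (abs(o["ra"] - t["ra"]) < radius
                    and abs(o["dec"] - t["dec"]) < radius):
                hit = t
                break
        if hit:
            matched.append((o["id"], hit["id"]))  # -> light-curve pipeline
        else:
            candidates.append(o["id"])            # transient candidate
    return matched, candidates

template = [{"id": 7, "ra": 120.0, "dec": 30.0}]
objects = [{"id": 1, "ra": 120.0001, "dec": 30.0002},  # known star
           {"id": 2, "ra": 50.0, "dec": -10.0}]        # no counterpart
matched, candidates = cross_match(objects, template)
```

Matched pairs feed the photometry channel, while unmatched detections go to the transient-candidate branch, mirroring the pipeline in Figure 1.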
Light curve. A light curve is the brightness of an object as a function of time, usually measured in a particular frequency interval.
3.2 Camera Array Processing Flow
Figure 2 shows the flow graph of GWAC data processing. After basic preprocessing of the original images, the pipeline extracts point sources and performs astrometric calibration of the star catalog. It then carries out relative photometric calibration, real-time dynamic identification of astronomical objects, and light-curve mining based on cross-matching the observed data with the reference star catalog.
Overall, the objectives of our database prototype are: (1) rapid data storage capacity: object catalogs generated by the cameras must be ingested within 15 seconds, and the 2.17 TB of star catalog data from each observation night must be stored before the next observation night; (2) real-time analysis of the high-speed data stream and rapid contextual computing capacity over the massive, incessant, high-density star catalogs, i.e., associating the star catalog data generated by each CCD every 15 seconds with the reference star catalog to generate light curves.
4 Functional Analysis & Requirements
The key steps in processing GWAC’s data are shown in Figure 3. Within 15 seconds, we need to ingest 1.756 × 10^5 records per camera into the database, complete the cross-match, generate the light curves, and fulfill the data mining tasks. The core functional analysis is as follows:
4.1 Real-Time Storage
The system must rapidly store the object star catalogs generated by the cameras every 15 seconds, and store the 2.17 TB of star catalog data per night as incremental data, ensuring that storage happens in real time. During the low-activity part of the storage cycle (daytime and other non-observing hours), the system merges the day's incremental data to increase storage capacity and decrease storage delay.
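The increment-then-merge scheme can be sketched as follows: each 15-second catalog is appended to a small delta store so the ingest path stays fast, and deltas are folded into the main store during the daytime low-activity window. The class and method names are illustrative, not actual GWAC APIs:

```python
class IncrementalStore:
    """Toy model of increment-then-merge storage (names are illustrative)."""

    def __init__(self):
        self.main = []   # merged, long-term store
        self.delta = []  # fast, append-only increments

    def delta_insert(self, catalog):
        """Hot path: append only, no indexing or reorganization."""
        self.delta.append(catalog)

    def daily_merge(self):
        """Low-cycle path: fold the day's increments into the main store."""
        for catalog in self.delta:
            self.main.extend(catalog)
        self.delta.clear()

store = IncrementalStore()
store.delta_insert([{"id": 1}, {"id": 2}])  # one 15 s catalog
store.delta_insert([{"id": 3}])             # the next cycle
store.daily_merge()                         # run during the daytime window
```

The design point is that the write path during the night never pays the cost of reorganizing the main store; that cost is deferred to hours when no new catalogs arrive.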
4.2 Efficient Cross-Match

The system should establish an efficient index mechanism and optimize database join operations to increase the efficiency of star catalog association and cross-match.
4.3 High Scalability
As the observation data of each wide-angle telescope grows, it becomes impractical to store and analyze the data of all telescopes on one server. We need to design a highly reliable distributed cluster architecture, with a dedicated subset of the cluster storing each wide-angle telescope's star catalog data to ensure data consistency, so that the processing capacity of the whole system grows linearly while achieving high throughput and low latency.
4.4 Data Mining
In the database, we need data mining techniques to find meaningful astronomical phenomena. The data mining process can be divided into online mining and offline mining (shown in Figure 4).
Online mining. For data streams whose observation span is shorter than one night, real-time monitoring and intra-window analysis should be used for dynamic recognition of astronomical targets.
Offline mining. For long-timescale data whose observation span is longer than one night, it is better to use the full historical data to predict the waveform of a light curve and judge its variability cycle, so as to analyze the fluctuation features of the star.
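As an illustration of the online-mining case, the sketch below keeps a short sliding window of magnitudes for one star and flags a sudden brightening relative to the recent mean. The window size and threshold are assumptions for illustration, not GWAC's actual detection criteria:

```python
from collections import deque

WINDOW = 8           # sliding-window length (assumed)
THRESHOLD_MAG = 1.0  # flag if >1 mag brighter than the recent mean (assumed)

def make_monitor():
    """Per-star online monitor over a sliding window of magnitudes."""
    window = deque(maxlen=WINDOW)

    def observe(mag):
        # Smaller magnitude means brighter: alert on a sudden brightening
        # relative to the mean of the recent window.
        alert = bool(window) and (sum(window) / len(window) - mag) > THRESHOLD_MAG
        window.append(mag)
        return alert

    return observe

monitor = make_monitor()
# A stable star that briefly brightens by ~1.3 mag on the fourth sample.
alerts = [monitor(m) for m in [14.2, 14.3, 14.2, 12.9, 14.2]]
```

Offline mining would instead fit periodic models to the full historical light curve; the online path only needs constant memory per star, which is what makes per-15-second analysis feasible.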
5 Major Challenges
Based on the functional analysis in Section 4, the main challenges for our database prototype can be summarized as below.
5.1 Customized Operators
To exploit the characteristics of astronomical data, we can customize operators in our database prototype:

An incremental storage operator, "DeltaInsert", ensures real-time data storage.

A range join operator, "RangeJoin", ensures rapid cross-match.
In addition, our database prototype is designed with a query plan adapter for both batch processing and stream processing behind a uniform query interface, so it can return not only real-time query results over the data stream but also full-history query results over the offline historical data.
| | SciDB | OceanBase | MonetDB | MongoDB | Spark |
|---|---|---|---|---|---|
| Storage engine | Array data model | In-memory transaction storage | Binary association table | Collection storage | RDD |
| Advantages | Shared-nothing design; SciDB-R interface | Supports ACID transactions; automatic fault tolerance and load balancing; partitioned storage | Automatic indexing; indexes take no extra storage space | Powerful query language; supports dynamic queries and full indexing | Easy to scale out clusters; fast iterative computation |
| Disadvantages | Does not support complex search conditions | Immature open-source version | Slow insert operations | Weak transaction support | Low data storage efficiency |
5.2 Large-scale Data Management
As the amount of data generated by GWAC grows, our database prototype needs to handle data management at the PB level. A large-scale data management engine must be designed that ensures data consistency and integrity while remaining easy to scale out.
5.3 Scalable Query Processing
In a large-scale cluster environment, our database prototype needs to ensure a low query response time. We should use the design philosophy of massively parallel processing (MPP) to implement scalable query processing.
5.4 Long-term Data Storage
Since the life cycle of GWAC is 10 years, it is essential to provide the hardware and storage strategy to save all historical data. The original image data can be used to verify the correctness of related analyses and provides raw material for in-depth image analysis.
6 Candidate Systems
Given the characteristics and functional requirements of astronomical data, we compare several candidate systems; their characteristics are summarized in Table 3.
SciDB is a new science database for scientific data, with application areas including astronomy, particle physics, fusion, remote sensing, oceanography, and biology. Scientific data often does not fit easily into a relational data model: searching a high-dimensional space is natural and fast in a multi-dimensional array data model, but often slow and awkward in a relational one.
An array DBMS is a natural fit for science data, so SciDB uses an array data model as its storage engine. SciDB supports both sparse and dense arrays (some arrays have a value for every cell) and can handle skewed data. Moreover, SciDB uses a shared-nothing distributed storage framework, which makes it easy to scale out the cluster, and it provides an interface for the R language that lets R scripts access data residing in SciDB. However, SciDB does not support complex search conditions, and its real-time data storage performance is poor.
OceanBase is a high-performance distributed database. It supports cross-row and cross-table transactions over hundreds of billions of records and hundreds of TB of data.
As a relational DBMS, OceanBase uses in-memory transaction storage as its storage engine and supports ACID transactions. In a distributed environment, OceanBase provides automatic fault tolerance, load balancing and partitioned storage. However, its open-source version is immature, and the stability of the system is not strong.
MonetDB is also a relational DBMS for high-performance applications in data mining, scientific databases, XML query, and text and multimedia retrieval, developed by the CWI database architectures research group since 1993.
MonetDB is designed to exploit the large main memory of modern computer systems effectively and efficiently during query processing, while the database is persistently stored on disk. Its core architecture has proved to provide efficient support not only for the relational data model and SQL, but also for non-relational data models such as XML and XQuery [7, 10]. In addition, MonetDB is a column store that uses binary association tables as its storage engine. It builds indexes automatically, and these indexes take no extra storage space. However, the efficiency of insertion, especially incremental insertion, is low.
MongoDB  is an agile database that allows schemas to change quickly as applications evolve, while still providing the functionality that developers expect from traditional databases.
Figure 5 shows the basic difference between the schema-free document database structure and the relational database. While the tables in a relational database have a fixed format and fixed column order, a MongoDB collection can contain entities of different types in any order. The element dbRef allows the creation of an explicit reference to another document in the same database or in another database on another server.
MongoDB uses collection storage as its storage engine, supports dynamic queries and full indexing, and has a powerful query language. However, it does not support transactions well.
Apache Spark is an open-source cluster computing framework for big data processing. It provides distributed data frames and goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning and graph processing.
Spark uses RDDs as its storage abstraction to ensure the correctness and fault tolerance of query processing. It is easy to scale out clusters, and it has fast iterative computing power. However, because Spark relies on external storage frameworks, its data storage efficiency is low.
7 Conclusion

We have surveyed the requirements of databases in astronomy, introduced the background knowledge, and pointed out the core problems and main challenges. None of the candidate systems is suitable for large time-domain surveys, so a new system should be developed to meet these challenges.
This research was partially supported by the grants from the National Key Research and Development Program of China (No. 2016YFB1000602, 2016YFB1000603); the Natural Science Foundation of China (No. 91646203, 61532016, 61532010, 61379050, 61762082); the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University (No. 11XNL010); and the Science and Technology Opening up Cooperation project of Henan Province (172106000077).
-  Mongodb. http://www.mongodb.org/.
-  Oceanbase. https://github.com/alibaba/oceanbase/tree/master/oceanbase_0.4.
-  Spark. http://spark-project.org/.
-  Zone. https://arxiv.org/ftp/cs/papers/0408/0408031.pdf.
-  Zone project. http://research.microsoft.com/apps/pubs/default.aspx?id=64524.
-  Parinaz Ameri, Richard Lutz, Thomas Latzko, and Jörg Meyer. Management of meteorological mass data with mongodb. In EnviroInfo, 2014.
-  Peter Boncz, Torsten Grust, Maurice Van Keulen, Stefan Manegold, Jan Rittinger, and Jens Teubner. Monetdb/xquery: a fast xquery processor powered by a relational engine. Sigmod, pages 479–490, 2006.
-  R. E. Bryant, R. H. Katz, and E. D. Lazowska. Big-data computing: Creating revolutionary breakthroughs in commerce, science, and society. 2008.
-  Chenzhou Cui, Yu Ce, Xiao Jian, Boliang He, Changhua Li, Dongwei Fan, Chuanjun Wang, and Zihuang Cao. Astronomy research in big-data era (in chinese). Chin Sci Bull, 60(z1):445–449, 2015.
-  S. Idreos, F. E. Groffen, N. J. Nes, S. Manegold, K. S. Mullender, and M. L. Kersten. Monetdb: Two decades of research in column-oriented database architectures. IEEE Data Eng Bull, 35(1):2012, 2012.
-  Stefan Manegold, Martin L. Kersten, and Peter Boncz. Database architecture evolution: mammals flourished long before dinosaurs became extinct. Proceedings of the Vldb Endowment, 2(2):1648–1653, 2009.
-  Wan Meng. Column store for gwac: A high cadence high density large-scale astronomical light curve pipeline and distributed shared-nothing database. Publications of the Astronomical Society of the Pacific, 2016.
-  A. I. Naimi and D. J. Westreich. Big data: A revolution that will transform how we live, work, and think. Information, 17(1):181–183, 2014.
-  James G. Shanahan and Laing Dai. Large scale distributed data science using apache spark. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.
-  Michael Stonebraker, Jacek Becla, David J. Dewitt, Kian Tat Lim, David Maier, Oliver Ratzesberger, and Stanley B. Zdonik. Requirements for science data bases and scidb. In CIDR, 2009.
-  Alex S. Szalay, Jose A. Blakeley, Alex S. Szalay, and Jose A. Blakeley. Gray’s laws: Database-centric computing in science. 2009.
-  Senhong Wang, Yan Zhao, Qiong Luo, Chao Wu, and Yang Xv. Accelerating in-memory cross match of astronomical catalogs. In IEEE International Conference on Escience, pages 326–333, 2013.
-  Yang Xu, Chao Wu, Meng Wan, Jiuxin Zhao, Haijun Tian, Yulei Qiu, Jianyan Wei, and Yong Liu. A fast cross-identification algorithm for searching optical transient sources. Astronomical Research & Technology, 10(3):273–282, 2013.
-  Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy Mccauley, Michael J. Franklin, Scott Shenker, and Ion Stoica. Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In Usenix Conference on Networked Systems Design and Implementation, pages 141–146, 2012.
-  Baoxue Zhao, Qiong Luo, and Chao Wu. Parallelizing astronomical source extraction on the gpu. In IEEE International Conference on Escience, pages 88–97, 2013.
-  Yan Zhao, Qiong Luo, Senhong Wang, and Chao Wu. Accelerating astronomical image subtraction on heterogeneous processors. In IEEE International Conference on Escience, pages 70–77, 2013.