Dunhuang Grotto Painting Dataset and Benchmark

07/10/2019 · Tianxiu Yu, et al.

This document introduces the background and usage of the Dunhuang Grotto Dataset and its benchmark. It first presents the background of the Dunhuang Grottoes, which are widely recognized as a priceless heritage. Since digital methods are the modern trend in heritage protection and restoration, we follow this trend and release the first public dataset for Dunhuang Grotto painting restoration. The rest of the document details how the painting data were generated. To enable data-driven approaches, the dataset provides a large number of training and testing examples, sufficient for deep learning methods. The detailed usage of the dataset and the benchmark is also described.


1 Background and Motivation

The Mogao Grottoes, also known as the Thousand Buddha Grottoes or Caves of the Thousand Buddhas, consist of 492 temples spread over 25 km (16 mi) southeast of the ancient city of Dunhuang, an oasis at a religious and cultural crossroads on the Silk Road in Gansu province, China. The grottoes are also known as the Dunhuang Caves. They contain more than 10,000 full-frame paintings, created consecutively by ancient artists over a thousand years between the 4th and 14th centuries. To date, more than 45,000 square meters of murals and over 2,000 painted sculptures have been preserved. The murals are of great value for historical, artistic, and technological research, with the earliest dating back more than 1,600 years. The Mogao Grottoes were recognized as a United Nations world heritage site in 1987.

The mural paintings, however, have suffered various forms of damage and aging over the centuries. In the 1970s, the Dunhuang Academy was established to systematically preserve the heritage. According to its studies, half of the paintings suffer from corrosion and aging. Because the paintings were created by different artists across ten centuries, manual restoration is non-trivial. We therefore release the first Dunhuang Challenge with 600 paintings, to draw open, public attention from the research community to data-driven e-heritage restoration.

This year, the Academy is collaborating with Microsoft Research and other researchers around the world, aiming to solve the automatic restoration of wall paintings using computer vision and machine learning technology.

Cave 7 of the Mogao Grottoes was excavated in the Mid-Tang Dynasty (AD 766-835). The murals on its north and south walls feature a range of rich content, such as Buddha statues, bodhisattvas, sponsors, architecture, dance, music, and decorative patterns. Based on the digitization of the south and north wall murals of Cave 7, 600 images with resolutions between 500 and 800 pixels were selected from different murals for the dataset, in line with the principle of image content integrity. Of these 600 images, 500 are stored in the “train” folder as the training set, while the remaining 100 are stored in the “test” folder as the test set.

Figure 1: Overview of the Dunhuang Grottoes. Left: a Buddha sculpture; Middle: inside a grotto; Right: outside view of the grottoes.
Figure 2: Left: wall painting damaged by aging; Right: partial manual restoration.

2 Dataset Generation

Figure 3: Overview of Grotto 7. While the wall painting is well preserved in general, many local areas have deteriorated because of moisture and pests.
Figure 4: A glance at the 600-image collection of the dataset, generated from Grotto 7. When generating the collection, we considered a balance of scenes such as Buddhas, humans, and architecture.

2.1 Generating the clear training and testing set

We use the wall painting in Grotto No. 7 for data generation. The painting has a balance of well-preserved and deteriorated regions. Fig. 3 shows an overview of the grotto wall painting.

Archaeologists at Dunhuang contributed to dividing and slicing the huge grotto painting into 600 dataset images. Each image focuses on a theme such as Buddha, architecture, decoration, or humans. Each image is roughly 500 to 800 pixels on a side, at 75 dpi.

Dataset Split

The 600 images are randomly split into a 500-image training and validation set and a 100-image test set.

Later, as described in Sec. 2.2, we provide a method for generating deteriorated images that simulates real deterioration. However, users are encouraged to generate their own deterioration for training.

2.2 Generating the deteriorated training and testing set

To help users better understand deterioration from aging, we introduce one such method, along with the deteriorated images it produces for the 500 training images. However, users are always encouraged to generate their own deteriorated data. The code is not published during the challenge.

In detail, simulating deteriorated non-rigid regions in an image involves two stages: 1) random mask generation; and 2) image masking.

Random Mask Generation

The process of mask generation can be decomposed into the following steps (a sketch follows the list):
1) Initialize a square blank image with all values set to 1. This blank image serves as a canvas for drawing the mask. The size of the initial mask is 256x256.
2) Randomly pick a start point on the blank image and set its pixel value to 0.
3) Iteratively perform a random walk from the start point. Once a pixel is traversed, its value is set to 0. Note that a pixel may be walked over more than once. The default number of walk steps is 10,000.
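
Since the official generation code is not released during the challenge, the following is a minimal sketch of the procedure above; the function name, parameters, and the 4-connected step set are our own assumptions:

```python
import numpy as np

def generate_random_mask(side=256, num_steps=10_000, seed=None):
    """Random-walk deterioration mask: 1 = intact, 0 = deteriorated.

    A sketch of the described procedure, not the official code.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones((side, side), dtype=np.uint8)  # step 1: blank canvas of ones
    y, x = rng.integers(0, side, size=2)          # step 2: random start point
    mask[y, x] = 0
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # assumed 4-connected moves
    for _ in range(num_steps):                    # step 3: random walk
        dy, dx = steps[rng.integers(4)]
        y = int(np.clip(y + dy, 0, side - 1))
        x = int(np.clip(x + dx, 0, side - 1))
        mask[y, x] = 0                            # revisits are allowed
    return mask
```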

(a) mask 1.
(b) mask 2.
(c) mask 3.
Figure 5: Typical examples of generated masks.

Masking Images

All ground-truth images in the test set are used to make testing samples in two steps: 1) rescale the mask to the ground-truth image size; and 2) mask the corresponding RGB pixels in the ground-truth image with the value [0, 0, 0]. A sketch of these two steps follows.
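
A minimal sketch of the masking step, assuming 8-bit RGB images and a binary {0, 1} mask as produced above; the file handling and nearest-neighbour rescaling are our assumptions:

```python
import numpy as np
from PIL import Image

def mask_image(gt_path, mask, out_path):
    """Produce a deteriorated sample from a ground-truth image."""
    img = np.array(Image.open(gt_path).convert("RGB"))
    h, w = img.shape[:2]
    # 1) Rescale the mask to the ground-truth image size;
    #    nearest-neighbour interpolation keeps the mask binary.
    m = np.array(Image.fromarray(mask * 255).resize((w, h), Image.NEAREST))
    # 2) Set the masked RGB pixels to [0, 0, 0].
    img[m == 0] = 0
    Image.fromarray(img).save(out_path)
```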

(a) Original image
(b) Mask of deterioration
(c) Deteriorated image
Figure 6: Image triplet of the testing set.

3 Dataset Usage

3.1 Access the Challenge Dataset

The challenge dataset can be downloaded from the cloud platform.

Registration is required.

3.2 Content of The Downloaded Package

You will receive a zip file package containing a few folders:

train

This folder contains the training images. The good images are indexed from 001 to 499. Each good image is associated with two further images: a binary mask of the deteriorated area and the deteriorated image. For example, the image triplet indexed 001 consists of the good image, the mask of the deteriorated area, and the deteriorated image, respectively.

The mask and the masked images are provided as a baseline method for simulating deterioration. It is up to the user whether to use them or not.
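
As a usage illustration, the sketch below loads one training triplet; the filename pattern (‘001.jpg’, ‘001_mask.jpg’, ‘001_masked.jpg’) is purely hypothetical, and should be adjusted to the actual layout of the downloaded package:

```python
from pathlib import Path
from PIL import Image

def load_triplet(train_dir, index):
    """Load (good image, mask, deteriorated image) for one index.

    Hypothetical filename pattern; adapt to the real package layout.
    """
    d = Path(train_dir)
    good = Image.open(d / f"{index:03d}.jpg")
    mask = Image.open(d / f"{index:03d}_mask.jpg")
    deteriorated = Image.open(d / f"{index:03d}_masked.jpg")
    return good, mask, deteriorated
```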

test

In the test dataset you will find 100 deteriorated images indexed from 501 to 600. Each index is associated with two images: the binary mask of the deteriorated area and the deteriorated image.

The task of this dataset is to restore the original image from the deteriorated image.

4 Evaluation Metric

4.1 The Evaluation Set

As introduced in Sec. 2.1, the 100 evaluation images are randomly selected from the dataset. Only the deteriorated images are available to users during the challenge; the ground truth is not accessible. Users are encouraged to submit their restored images to the server, which compares the submitted results with the ground truth.

Fig. 6 gives a glance at the testing data.

4.2 Evaluation metrics

The cloud platform will automatically generate evaluations using the following metrics:

Dissimilarity Structural Similarity Index Measure (DSSIM)

The difference between a ground-truth image and a restored image is evaluated using the Dissimilarity Structural Similarity Index Measure (DSSIM). The DSSIM is defined as:

$$\mathrm{DSSIM}(x, \hat{x}) = \frac{1 - \mathrm{SSIM}(x, \hat{x})}{2} \tag{1}$$

where $x$ is the ground-truth image and $\hat{x}$ is the restored image. For details of $\mathrm{SSIM}$, please refer to [2].

The overall performance is evaluated using the mean DSSIM score across the test set; a smaller mean DSSIM indicates a better result.
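
A minimal sketch of this metric using the SSIM implementation of scikit-image (>= 0.19, for the `channel_axis` argument); the exact SSIM parameters used by the evaluation server are not specified, so defaults are assumed:

```python
import numpy as np
from skimage.metrics import structural_similarity

def dssim(gt, restored):
    """DSSIM between two RGB uint8 images (Eq. 1); lower is better."""
    ssim = structural_similarity(gt, restored, channel_axis=-1)
    return (1.0 - ssim) / 2.0

def mean_dssim(gt_images, restored_images):
    """Mean DSSIM across the test set."""
    return float(np.mean([dssim(g, r)
                          for g, r in zip(gt_images, restored_images)]))
```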

Local Mean Squared Error (LMSE)

Furthermore, we use the Local Mean Squared Error (LMSE) of [1] to measure patch-based local differences between two images. Suppose the image pair is of height $H$ and width $W$; the LMSE is summed over all local windows of size $d \times d$, spaced in steps of $s_v$ and $s_h$ in the vertical and horizontal directions respectively:

$$\mathrm{LMSE}(x, \hat{x}) = \sum_{w \in \mathcal{W}} \mathrm{MSE}(x_w, \hat{x}_w) \tag{2}$$

where $x_w$ and $\hat{x}_w$ are image patches cropped from the two images using local sliding windows. The ratio of the window size to the image size is set to 0.1 in our metric. $\mathrm{MSE}$ is the Mean Squared Error, defined as follows:

$$\mathrm{MSE}(x_w, \hat{x}_w) = \frac{1}{h\,w\,c} \sum_{i=1}^{h} \sum_{j=1}^{w} \sum_{k=1}^{c} \left( x_w(i,j,k) - \hat{x}_w(i,j,k) \right)^2 \tag{3}$$

where $h$, $w$, and $c$ are the height, width, and number of color channels of the patches being measured.
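
A sketch of this metric under the assumptions above: window side equal to 0.1 of the image dimension, and half-window steps in the spirit of the half-overlapping windows of Grosse et al. [1]; the evaluation server's exact settings may differ:

```python
import numpy as np

def mse(p, q):
    """Mean squared error over all pixels and channels (Eq. 3)."""
    return np.mean((p.astype(np.float64) - q.astype(np.float64)) ** 2)

def lmse(gt, restored, ratio=0.1):
    """Local MSE summed over sliding windows (Eq. 2)."""
    H, W = gt.shape[:2]
    dh, dw = max(1, int(H * ratio)), max(1, int(W * ratio))  # window size
    sv, sh = max(1, dh // 2), max(1, dw // 2)                # step sizes
    total = 0.0
    for i in range(0, H - dh + 1, sv):
        for j in range(0, W - dw + 1, sh):
            total += mse(gt[i:i+dh, j:j+dw], restored[i:i+dh, j:j+dw])
    return total
```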

4.3 Format of Submission

Format of Filenames

In order to let the evaluation system associate the testing samples with their restored results, the filenames of the result images MUST fit the following pattern:

‘out<testfilename>.jpg’

where ‘out’ is the prefix attached to the file name of its corresponding testing sample (‘<testfilename>.jpg’). For example, if the file name of an input testing image is ‘001.jpg’, its restored result should be named ‘out001.jpg’.

Format of Images

The result images must be in ‘JPG’ format and named ‘*.jpg’.

Format of Submission

The submission is a single zip file. All 100 restored images must be packed into the zip file. Partial results will not be accepted.
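
As a convenience, here is a sketch of packaging the results; the directory layout is an assumption, consistent with the filename pattern above:

```python
import os
import zipfile

def pack_submission(result_dir, zip_path="submission.zip"):
    """Zip all restored images named 'out<testfilename>.jpg'."""
    names = [n for n in sorted(os.listdir(result_dir))
             if n.startswith("out") and n.endswith(".jpg")]
    assert len(names) == 100, "partial results are not accepted"
    with zipfile.ZipFile(zip_path, "w") as zf:
        for n in names:
            zf.write(os.path.join(result_dir, n), arcname=n)
```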

Acknowledgement

This project is supported by Dunhuang Academy and Microsoft Research.

We thank the following people who contributed to creating the dataset: Katsushi Ikeuchi (Microsoft Research Asia), Xudong Wang (Dunhuang Academy), Takeshi Masuda (AIST, Japan), Takeshi Oishi (The University of Tokyo, Japan), Guillaume Caron (Universite de Picardie Jules Verne, France), Rei Kawakami (The University of Tokyo, Japan), Jiawan Zhang (Tianjin University, China).

References

  • [1] Grosse, R., Johnson, M.K., Adelson, E.H., Freeman, W.T.: Ground truth dataset and baseline evaluations for intrinsic image algorithms. In: 2009 IEEE 12th International Conference on Computer Vision. pp. 2335–2342 (Sep 2009). https://doi.org/10.1109/ICCV.2009.5459428
  • [2] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)