A Benchmark Dataset for Both Underwater Image Enhancement and Underwater Object Detection

06/29/2020, by Long Chen, et al.

Underwater image enhancement is an important vision task due to its significance in marine engineering and aquatic robotics. It usually works as a pre-processing step to improve the performance of high-level vision tasks such as underwater object detection. Even though many previous works show that underwater image enhancement algorithms can boost the detection accuracy of detectors, no work has specifically focused on investigating the relationship between these two tasks. This is mainly because existing underwater datasets lack either the bounding box annotations or the high-quality reference images on which detection accuracy or image quality assessment metrics are calculated. To investigate how underwater image enhancement methods influence the subsequent underwater object detection task, in this paper we provide a large-scale underwater object detection dataset with both bounding box annotations and high-quality reference images, namely the OUC dataset. The OUC dataset provides a platform for researchers to comprehensively study the influence of underwater image enhancement algorithms on the underwater object detection task.


I. Introduction

During the past few years, underwater object detection (UOD) [1, 2, 3] has drawn considerable attention in both marine engineering and aquatic robotics. Due to the complicated underwater environment and lighting conditions, detecting objects in the water is a challenging problem. Underwater images suffer from serious wavelength-dependent absorption and scattering, which reduces visibility, decreases contrast, and even introduces color casts. These adverse effects limit many practical applications of underwater images and videos in marine biology, archaeology, and ecology. Hence, many underwater image enhancement (UIE) algorithms are employed as a pre-processing step for UOD tasks to improve the detection accuracy of detectors by boosting the quality of underwater images [4, 5, 6].

Fig. 1: Sample images from the constructed OUC dataset. Top row: raw underwater images taken in diverse underwater scenes; Bottom row: the corresponding reference images and bounding box annotations.

Despite the prolific work, comprehensive study and insightful analysis of the relationship between the UIE and UOD tasks remain insufficient due to the lack of a publicly available underwater image dataset with both bounding box annotations and reference images (i.e., the underwater images without degradation). Since there are no reference images, previous work [4] only investigated how UIE algorithms influence UOD tasks by studying the relationships between non-reference image quality assessment metrics [7, 8] and detection accuracy. However, non-reference image quality evaluation metrics can only capture part of the characteristics of image quality and are not always consistent with human subjective perception [4]. A comprehensive investigation of the relationship between the two tasks should also cover the relationship between detection accuracy and full-reference image evaluation metrics [9, 10], which can extensively evaluate the characteristics of image quality in terms of colors, textures, image contents, and structures. However, reference images are necessary when conducting full-reference image quality evaluations. Recently, several underwater image synthesis (UIS) algorithms [11, 12, 13] have been proposed to synthesize underwater images from high-quality in-air images; a UIE model is then trained on these image pairs to improve the visibility of underwater images. However, the synthetic images are not realistic enough, which greatly degrades the performance of the subsequent UIE models. Differently, Li et al. [14] employed eleven different UIE algorithms to enhance underwater images and chose the high-quality reference images from the eleven enhanced results using human subjective perception. Nevertheless, subjective perception can be ambiguous and tendentious since different people may have different preferences and biases. Also, human perception is unable to perceive minor differences between two visually similar images. To make up for the deficiencies of subjective perception, we combine it with objective assessment to select high-quality reference images, which is more robust and dependable than subjective perception alone.

In this paper, we construct an underwater dataset, namely the OUC dataset, which contains underwater images, corresponding reference images, and bounding box annotations. To generate robust reference images, we propose a novel hybrid reference image generation method which combines subjective perception and objective assessment. Fig. 1 presents several sample underwater images and the corresponding reference images with bounding box annotations generated by our hybrid reference image generation method. The raw underwater images in the OUC dataset suffer from diverse degrees of haze and contrast decrease. In contrast, the corresponding reference images are characterized by natural color, improved visibility, and appropriate brightness. With this dataset, we conduct a comprehensive qualitative and quantitative study of state-of-the-art UIE and UOD algorithms. Most importantly, we can investigate how UIE algorithms influence UOD tasks, which enables insights into their performance and sheds light on future research. The main contributions of this paper are summarized as follows:

  • We propose a novel reference image generation method which integrates both subjective perception and objective assessment. By generating dependable high-quality reference images for underwater images, we construct a large-scale underwater dataset, namely OUC, which provides underwater images, corresponding high-quality reference images, and object-level bounding box annotations.

  • We conduct a comprehensive study of the strengths and limitations of different UIE algorithms on the constructed OUC dataset. In addition, this dataset also provides a platform to study the influence of UIE algorithms on UOD algorithms.

II. Related Work

II-A Underwater Image Enhancement

Underwater image enhancement plays an important role in practical applications that explore and develop the underwater world, such as the navigation of autonomous underwater vehicles (AUVs) [15, 16, 17], unmanned underwater vehicles (UUVs) [18], and remotely operated vehicles (ROVs) [19]. A variety of UIE methods have been proposed, and they can be divided into three categories. The first line of research modifies the image pixel values to improve contrast, remove haze, and correct color casts; it can be further divided into spatial domain adjustment and transform domain adjustment. The spatial domain methods [20, 21, 22] perform the adjustment directly on the captured underwater images (a minimal example of such an adjustment is sketched below). The transform domain methods [23] first transform the captured underwater image into a specific domain, and then perform the adjustment for haze removal and color correction. These methods can improve visual quality to some extent, but may degrade details, accentuate noise, introduce artifacts, and cause color distortions.
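To make the spatial-domain idea concrete, the following is a minimal sketch (not one of the cited methods) of per-channel percentile contrast stretching, one of the simplest pixel-value adjustments used to lift contrast and reduce color casts; the function name and percentile defaults are our own choices:

```python
import numpy as np

def stretch_contrast(img, low_pct=1.0, high_pct=99.0):
    """img: HxWx3 uint8 underwater image; returns a contrast-stretched copy."""
    out = np.empty_like(img)
    for c in range(3):  # adjust each color channel independently
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        ch = (img[..., c].astype(np.float32) - lo) / max(hi - lo, 1e-6)
        out[..., c] = np.clip(ch * 255.0, 0, 255).astype(np.uint8)
    return out
```

Stretching each channel independently also pulls the channel means toward each other, which is why even this crude adjustment partially compensates for color casts.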

The second line is the physical model-based methods [24, 25, 26, 27, 28, 29, 30], which treat underwater image enhancement as the inverse problem of underwater image degradation. They first construct and estimate a physical image degradation model, and then recover the latent high-quality image by inverting the estimated degradation model. To estimate the parameters of the underwater image degradation model, many UIE algorithms [25, 26] adapt the classic dark channel prior (DCP) [24], which was designed for dehazing natural scenes, to underwater scenes. However, these priors do not always hold; e.g., for underwater images that contain white objects or regions, DCP-based UIE algorithms show limited improvement in visual quality, or even aggravate the degradation.
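For reference, the widely used simplified degradation model is $I_c(x) = J_c(x)\,t_c(x) + B_c\,(1 - t_c(x))$, where $I$ is the observed image, $J$ the latent scene radiance, $t$ the transmission, and $B$ the background light. The sketch below shows the core of the DCP computation from [24] that the adapted methods build on; the patch size and $\omega$ value are common defaults, not values prescribed by the cited underwater variants:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    min_rgb = img.min(axis=2)                   # per-pixel minimum over channels
    return minimum_filter(min_rgb, size=patch)  # local patch minimum

def estimate_transmission(img, background_light, omega=0.95):
    """Coarse transmission estimate: t(x) = 1 - omega * dark_channel(I / B)."""
    return 1.0 - omega * dark_channel(img / background_light)
```

Underwater adaptations such as UDCP [25] change which channels enter the per-pixel minimum (dropping the heavily attenuated red channel), which is exactly where the prior is modified for water.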

The third line is the deep learning-based UIE algorithms, which can be trained using underwater images and corresponding reference images. Due to the lack of training pairs, Li et al. [12] propose an underwater image synthesis model, called WaterGAN, to convert high-quality in-air images and corresponding depth images into underwater-like images. These synthetic image pairs are in turn used to train a two-stage deep UIE network. Inspired by Cycle-Consistent Adversarial Networks [31], which allow learning the mutual mappings between two domains from unpaired data, Fabbri et al. [32] propose a weakly supervised underwater image synthesis model to synthesize underwater images from high-quality in-air images, and then use these synthetic image pairs to train another deep UIE network. Differently, Li et al. [11] generate training data by exploiting a physical underwater image degradation model and a fixed set of predefined parameters. However, the performance of deep UIE networks heavily depends on the quality of the synthetic images, which cannot be perfectly guaranteed by previous underwater image synthesis methods. Therefore, the performance of deep learning-based UIE methods still lags behind that of conventional state-of-the-art UIE algorithms. To obtain dependable high-quality training data, Li et al. [14] collect a real-world underwater dataset and process it using eleven image enhancement methods. Then, they invite volunteers to select satisfactory reference images by conducting pairwise comparisons. This method generates, at least to some extent, trustworthy reference images by applying the subjective perception of the human visual system.

II-B Underwater Image Quality Evaluation

Image quality assessment techniques play an important role in the underwater image enhancement task and are especially beneficial for the development of UIE algorithms. They can be divided into subjective assessment and objective assessment. Subjective assessment is usually regarded as the most reliable way of quantifying the perceptual quality of content, since in most cases such content is meant to be viewed by humans [33, 34]. However, subjective assessment depends on the judgement of human observers and can be ambiguous and tendentious, since the subjective perceptions of different observers are inconsistent.

Objective image quality assessment metrics measure important characteristics of images using statistical quantities; they can be further divided into full-reference image quality assessment metrics [35] and non-reference image quality assessment metrics [36, 37]. Most previous works [25, 26, 27, 28] only use non-reference metrics to evaluate UIE algorithms, since the underwater datasets do not contain reference images. The underwater color image quality evaluation metric (UCIQE) [36] and the underwater image quality measure (UIQM) [37] are two widely used non-reference metrics. UCIQE quantifies non-uniform color casts, blurring, and low contrast, and then combines these three components in a linear manner. UIQM consists of three attribute measures: a colorfulness measure, a sharpness measure, and a contrast measure. Full-reference metrics are commonly used in cases where reference images exist. For example, the peak signal-to-noise ratio (PSNR) is used to measure the similarity between the enhanced underwater images and the reference images in terms of content, and SSIM [35] is employed to measure the structure and texture similarity between the enhanced images and the reference images. One major limitation of contemporary objective assessment metrics is that they are usually sensitive to only one or a limited number of distortion types, while ignoring distortions of other types, e.g., color distortion, blurry appearance, or decreased contrast in underwater images. Therefore, more effective image quality assessment methods are still highly demanded.
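As an illustration of the full-reference evaluation described above, the following minimal sketch computes MSE, PSNR, and SSIM with scikit-image's reference implementations; the file names are placeholders:

```python
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

enhanced = imread("enhanced.png")    # output of a UIE algorithm
reference = imread("reference.png")  # corresponding reference image

mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
psnr = peak_signal_noise_ratio(reference, enhanced)                 # content similarity
ssim = structural_similarity(reference, enhanced, channel_axis=-1)  # structure/texture similarity
print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```

UCIQE and UIQM have no single canonical library implementation, so in practice they are re-implemented from the formulas in [36] and [37].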


Fig. 2: Examples of the raw underwater images in the OUC-VISION dataset. These images have different illuminations and haze degrees since they are captured in different underwater environments.

III. Reference Image Generation

In this section, we construct a large-scale underwater dataset called OUC, which provides underwater images, corresponding reference images, and bounding box annotations. We first introduce the collection of the underwater images, and then present a novel method to produce the reference images by combining subjective perception and objective assessment.

III-A Collection of Underwater Images

We aim to construct a large-scale underwater dataset which enables researchers to investigate how UIE influences UOD. Hence, the underwater dataset should contain underwater images, reference images, and bounding box annotations. When constructing the underwater dataset, we have three objectives:
1) The number of underwater images should be large enough, and bounding box-level annotations are needed.
2) The underwater images should suffer from a diversity of degradations.
3) The quality of the reference images should be assured so that the image pairs enable a fair evaluation of different UIE algorithms.

To achieve the first two objectives, we choose the large underwater dataset OUC-VISION [38], which provides underwater images and bounding box annotations. This dataset contains 4,400 underwater images that are captured under different illuminations simulated by a specially designed lighting system. In addition, three degrees of turbidity variation, i.e., limpid, medium, and turbid, are simulated by adding soil to the water. Hence, the underwater images of OUC-VISION suffer from a diversity of illumination and turbidity variations. The images have a resolution of 486×648 pixels. Fig. 2 shows some examples of the raw underwater images in our OUC dataset, which are selected from the OUC-VISION dataset. These images exhibit different characteristics of underwater images (e.g., different color casts, decreased contrast, and haze levels). To obtain trustworthy reference images, we propose a novel hybrid reference image generation method which incorporates both subjective perception and objective assessment.

Fig. 3: The inconsistency of different observers’ subjective perception.
Fig. 4: Results generated by different methods. From left to right are the raw underwater images, and the results of DCP, UDCP, GDCP, Blurriness, Regression, RedChannel, Histogram, Fusion, Two-step, Retinex, and Dive+. Red boxes indicate the final reference images.

III-B Hybrid Image Generation Method

Previous work [4] first enhanced underwater images using different UIE algorithms, and then invited multiple observers to select high-quality reference images from the enhanced results. However, using only subjective perception to select images can be tendentious: 1) In many practical cases, the compared images have such similar visual quality that the observers have difficulty distinguishing them and choosing the best one. For instance, as shown in the top row of Fig. 3, two observers select the results of different UIE methods as the final reference images since the visual appearance of the two results is extremely similar. 2) Subjective perception is related to the human visual system; different observers may have different preferences and biases, and no universal standard exists. As shown in the bottom row of Fig. 3, the two observers have different preferences and choose different enhanced images as the reference images. To address these concerns, we propose a hybrid reference image generation method which combines subjective perception with a novel pairwise objective assessment metric.
The pairwise objective assessment metric. Specifically, when the observers cannot make a decision based on their subjective perception in a pairwise comparison, a newly designed pairwise objective assessment metric is employed to select the better of the two enhanced results. The pairwise objective assessment metric (denoted as $S$) depends on the combined scores of UIQM and UCIQE, and the pairwise objective score of the $i$-th UIE method's result is formulated as

$$S_i = \widetilde{\mathrm{UIQM}}_i + \widetilde{\mathrm{UCIQE}}_i \tag{1}$$

For this objective metric, we assume UIQM and UCIQE to be equally important, so we first min-max normalize both scores over the candidate results:

$$\widetilde{\mathrm{UIQM}}_i = \frac{\mathrm{UIQM}_i - \min_j \mathrm{UIQM}_j}{\max_j \mathrm{UIQM}_j - \min_j \mathrm{UIQM}_j} \tag{2}$$

$$\widetilde{\mathrm{UCIQE}}_i = \frac{\mathrm{UCIQE}_i - \min_j \mathrm{UCIQE}_j}{\max_j \mathrm{UCIQE}_j - \min_j \mathrm{UCIQE}_j} \tag{3}$$
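A minimal sketch of Eqs. (1)-(3), under the reconstruction above (min-max normalization of both metrics followed by an equally weighted sum); the function names are our own:

```python
import numpy as np

def pairwise_objective_scores(uiqm, uciqe):
    """uiqm, uciqe: arrays of per-candidate metric scores; returns S_i per Eq. (1)."""
    def minmax(x):
        x = np.asarray(x, dtype=np.float64)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return minmax(uiqm) + minmax(uciqe)  # Eqs. (2) and (3), then Eq. (1)

# In a pairwise comparison between candidates i and j, the one with the
# larger combined score wins: winner = i if S[i] >= S[j] else j
```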
Method DCP UDCP GDCP Blurriness Regression RC Histogram Fusion TwoStep Retinex Dive+
Percentage (%) 3.68 4.50 1.30 4.40 0.00 0.00 0.70 25.10 0.00 41.72 18.60
TABLE I: Percentage of the reference images from the results of different methods.

The process of reference image generation. We first enhance the underwater images using eleven image enhancement methods, including seven physical model-based UIE methods (i.e., DCP [24], UDCP [25], GDCP [26], Blurriness [27], Regression [28], RedChannel [29], and Histogram [30]), three model-free UIE methods (i.e., Fusion [20], Two-step [21], and Retinex [22]), and one commercial application for enhancing underwater images (i.e., Dive+). We do not exploit deep learning-based UIE methods since we have no training image pairs. In total, we obtain 11 × 4,400 enhanced results. With the raw underwater images and the enhanced results, we invite 28 observers, all of whom are students with image processing and computer vision experience, to perform pairwise comparisons. They are allowed to draw support from the pairwise objective assessment metric when they cannot make a decision on two ambiguous images in a pairwise comparison. There is no time constraint for the observers, and zoom-in operations are allowed.

Fig. 5: Quality comparison of different UIE methods on images captured under different light conditions in the OUC dataset. The top image is captured under light condition 1 and suffers from serious reddish color distortion, the middle one exhibits slight color distortion and haze due to light condition 2, and the bottom one exhibits evident haze and minor color distortion because of light condition 3.

The generation of the reference images can be divided into three stages: 1) reference image selection by a single observer; 2) checking the reference images again and removing unsatisfactory ones; 3) combining the results of all observers to obtain the final reference images. For each raw underwater image, the observer is shown two randomly selected enhanced results for pairwise comparison at a time. The observer needs to choose the preferred one, or press a button that selects the better image using the pairwise objective metric. The result winning the pairwise comparison is compared again in the next round, until the best one is selected; a sketch of this knockout loop is given below. After the observer finishes the selection work, he/she needs to inspect the reference image set again and remove unsatisfactory images. Then, the reference images of all observers are combined. For each raw underwater image, if more than half of the observers remove its corresponding reference image, this underwater image and its reference images are removed from the final dataset. Finally, the enhanced image selected by more than 50% of the observers is chosen as the final reference image.
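The per-observer knockout described above might look like the following sketch; `observer_prefers` stands in for the human judgment and is hypothetical:

```python
from typing import Callable, List, Optional

def select_reference(candidates: List,  # the eleven enhanced results for one raw image
                     observer_prefers: Callable[[object, object], Optional[int]],
                     scores: List[float]) -> int:  # S_i from the pairwise objective metric
    """Returns the index of the enhanced result that wins the knockout."""
    best = 0
    for i in range(1, len(candidates)):
        choice = observer_prefers(candidates[best], candidates[i])  # 0, 1, or None
        if choice is None:  # images too similar to judge: fall back to the objective score
            choice = 0 if scores[best] >= scores[i] else 1
        best = best if choice == 0 else i
    return best
```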

In total, we obtain 3,698 available reference images, which have higher quality than the results of any individual UIE method. To visualize the process of reference image generation, we present some cases in Fig. 4, showing the results of different methods and indicating which one becomes the final reference image. Furthermore, the percentage of reference images drawn from the results of each method is presented in Table I; Retinex contributes the largest share (41.72%) and Fusion the second largest (25.10%).

Methods UDCP GDCP Blurriness Regression RedChannel Histogram Fusion Twostep Retinex
MSE 3.6147 2.4115 0.6100 0.4294 7.1857 0.5325 0.2816 1.6242 0.3469
PSNR 12.9696 15.7764 20.9975 22.1125 9.7217 21.3031 28.5319 16.1558 28.0519
SSIM 0.4807 0.6316 0.7239 0.5343 0.1798 0.7531 0.8794 0.6108 0.8886
PCQI 0.4181 0.5660 0.6493 0.6620 0.1694 0.8089 0.8940 0.4785 0.8367
mAP 87.1 86.9 86.4 81.6 41.6 81.5 83.9 74.8 87.2
TABLE II: Full-Reference image quality and detection accuracy evaluations of different UIE algorithms on the OUC dataset.

III-C Evaluation of Different UIE Algorithms on the OUC Dataset

We also evaluate different UIE algorithms on the OUC dataset. We resize all images to 512×512 pixels, and divide the OUC dataset into a training set containing 2,500 image pairs and a testing set containing 1,198 image pairs (see the data preparation sketch after this paragraph). Fig. 5 shows qualitative comparisons of different UIE algorithms on underwater images captured under different light conditions. The top image is captured under light condition 1 and suffers from serious reddish color distortion, the middle one exhibits slight color distortion and haze due to light condition 2, and the bottom one exhibits evident haze and minor color distortion because of light condition 3. We observe that none of the physical model-based methods is able to resolve the reddish color distortion. This is because reddish underwater images violate the physical priors: in water, red light disappears first because of its longer wavelength, followed by green light and then blue light. Such selective attenuation in water results in greenish and bluish underwater images, and seldom in reddish ones. In addition, among all the physical model-based algorithms, Regression, Histogram, and RedChannel cannot handle underwater images under all light conditions well. Regression introduces serious bluish color distortion due to its inaccurate color correction algorithm, and Histogram introduces greenish color distortion due to its histogram distribution prior. RedChannel greatly decreases the brightness, which seriously smears the image details. Moreover, Two-step, one of the non-physical model-based algorithms, also fails under all light conditions: it over-enhances the contrast and generates unnatural images. In contrast, OurPatch deals well with all kinds of underwater images in terms of both color distortion and haze, while the remaining methods only work in specific scenes. For example, GDCP and Fusion remove the haze and greatly improve the visibility of underwater images captured under light conditions 2 and 3. UDCP greatly removes haze, but introduces a bluish color tone into images captured under light condition 2 and a reddish color tone into images captured under light condition 3. Blurriness greatly removes haze from images captured under all three light conditions, but fails to mitigate the color casts in images captured under light condition 1. These physical model-based methods all fail on some underwater images captured under specific light conditions due to the limitations of the priors they use. Among the non-physical model-based methods, Retinex greatly removes haze and mitigates color casts in all kinds of underwater images, but its results suffer from limited saturation.
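For concreteness, the resizing and split mentioned at the start of this subsection could be scripted as below; the directory layout, file format, and random seed are assumptions, and the authors' actual split may differ:

```python
import random
from pathlib import Path
from PIL import Image

pairs = sorted(p.name for p in Path("OUC/raw").glob("*.png"))  # assumed layout: OUC/raw, OUC/ref
random.seed(0)
random.shuffle(pairs)
splits = {"train": pairs[:2500], "test": pairs[2500:2500 + 1198]}

for split, names in splits.items():
    for name in names:
        for kind in ("raw", "ref"):  # resize the raw image and its reference identically
            img = Image.open(Path("OUC") / kind / name).resize((512, 512), Image.BICUBIC)
            out = Path("OUC_512") / split / kind / name
            out.parent.mkdir(parents=True, exist_ok=True)
            img.save(out)
```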

Table II reports the quantitative scores of different UIE algorithms on the testing set of OUC. In terms of the four full-reference image quality metrics, Fusion achieves the best MSE, PSNR, and PCQI scores, while Retinex achieves the best SSIM score. In terms of mAP, Retinex achieves the best detection accuracy of 87.2 mAP.

IV. Conclusion

In this paper, we proposed a novel reference image generation method which integrates both subjective perception and objective assessment. By generating dependable high-quality reference images for underwater images, we constructed a large-scale underwater dataset, namely OUC, which provides underwater images, corresponding high-quality reference images, and object-level bounding box annotations.

References

  • [1] Lee, D., Kim, G., Kim, D., Myung, H., and Choi, H. T. (2012). Vision-based object detection and tracking for autonomous navigation of underwater robots. Ocean Engineering, 48, 59-68.
  • [2] Foresti, G. L., and Gentili, S. (2000). A vision-based system for object detection in underwater images. International Journal of Pattern Recognition and Artificial Intelligence, 14(02), 167-188.

  • [3] Rizzini, D. L., Kallasi, F., Oleari, F., and Caselli, S. (2015). Investigation of vision-based underwater object detection with multiple datasets. International Journal of Advanced Robotic Systems, 12(6), 77.
  • [4] Liu, R., Fan, X., Zhu, M., Hou, M., and Luo, Z. (2020). Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions under Natural Light. IEEE Transactions on Circuits and Systems for Video Technology. [Online]. Available: https://doi.org/10.1109/TCSVT.2019.2963772
  • [5] Bazeille, S., Quidu, I., Jaulin, L., and Malkasse, J. P. (2006, October). Automatic underwater image pre-processing.
  • [6] Schettini, R., and Corchs, S. (2010). Underwater image processing: state of the art of restoration and image enhancement methods. EURASIP Journal on Advances in Signal Processing, 2010, 1-14.
  • [7] Panetta, K., Gao, C., and Agaian, S. (2015). Human-visual-system-inspired underwater image quality measures. IEEE Journal of Oceanic Engineering, 41(3), 541-551.
  • [8] Yang, M., and Sowmya, A. (2015). An underwater color image quality evaluation metric. IEEE Transactions on Image Processing, 24(12), 6062-6071.
  • [9] Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.
  • [10] Wang, S., Ma, K., Yeganeh, H., Wang, Z., and Lin, W. (2015). A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Processing Letters, 22(12), 2387-2390.
  • [11] Li, C., Anwar, S., and Porikli, F. (2020). Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognition, 98, 107038.
  • [12] Li, J., Skinner, K. A., Eustice, R. M., and Johnson-Roberson, M. (2017). WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robotics and Automation letters, 3(1), 387-394.
  • [13] Fabbri, C., Islam, M. J., and Sattar, J. (2018, May). Enhancing underwater imagery using generative adversarial networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7159-7165). IEEE.
  • [14] Li, C., Guo, C., Ren, W., Cong, R., Hou, J., Kwong, S., and Tao, D. (2019). An underwater image enhancement benchmark dataset and beyond. IEEE Transactions on Image Processing, 29, 4376-4389.
  • [15] Marani, G., Choi, S. K., and Yuh, J. (2009). Underwater autonomous manipulation for intervention missions AUVs. Ocean Engineering, 36(1), 15-23.
  • [16] Clark, C. M., Forney, C., Manii, E., Shinzaki, D., Gage, C., Farris, M., and Moline, M. (2013). Tracking and following a tagged leopard shark with an autonomous underwater vehicle. Journal of Field Robotics, 30(3), 309-322.
  • [17] Lee, P. M., Jeon, B. H., and Kim, S. M. (2003, September). Visual servoing for underwater docking of an autonomous underwater vehicle with one camera. In Oceans 2003. Celebrating the Past… Teaming Toward the Future (IEEE Cat. No. 03CH37492) (Vol. 2, pp. 677-682). IEEE.
  • [18] Xu, J., Wang, M., and Qiao, L. (2015). Dynamical sliding mode control for the trajectory tracking of underactuated unmanned underwater vehicles. Ocean engineering, 105, 54-63.
  • [19] Bogue, R. (2015). Underwater robots: a review of technologies and applications. Industrial Robot: An International Journal.
  • [20] Ancuti, C., Ancuti, C. O., Haber, T., and Bekaert, P. (2012, June). Enhancing underwater images and videos by fusion. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp. 81-88). IEEE.
  • [21] Fu, X., Fan, Z., Ling, M., Huang, Y., and Ding, X. (2017, November). Two-step approach for single underwater image enhancement. In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS) (pp. 789-794). IEEE.
  • [22] Fu, X., Zhuang, P., Huang, Y., Liao, Y., Zhang, X. P., and Ding, X. (2014, October). A retinex-based enhancing approach for single underwater image. In 2014 IEEE International Conference on Image Processing (ICIP) (pp. 4572-4576). IEEE.
  • [23] Singh, G., Jaggi, N., Vasamsetti, S., Sardana, H. K., Kumar, S., and Mittal, N. (2015, February). Underwater image/video enhancement using wavelet based color correction (WBCC) method. In 2015 IEEE Underwater Technology (UT) (pp. 1-5). IEEE.
  • [24] He, K., Sun, J., and Tang, X. (2010). Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12), 2341-2353.
  • [25] Drews, P., Nascimento, E., Moraes, F., Botelho, S., and Campos, M. (2013). Transmission estimation in underwater single images. In Proceedings of the IEEE international conference on computer vision workshops (pp. 825-830).
  • [26] Peng, Y. T., Cao, K., and Cosman, P. C. (2018). Generalization of the dark channel prior for single image restoration. IEEE Transactions on Image Processing, 27(6), 2856-2868.
  • [27] Peng, Y. T., and Cosman, P. C. (2017). Underwater image restoration based on image blurriness and light absorption. IEEE Transactions on Image Processing, 26(4), 1579-1594.
  • [28] Li, C., Guo, J., Guo, C., Cong, R., and Gong, J. (2017). A hybrid method for underwater image correction. Pattern Recognition Letters, 94, 62-67.
  • [29] Galdran, A., Pardo, D., Picón, A., and Alvarez-Gila, A. (2015). Automatic red-channel underwater image restoration. Journal of Visual Communication and Image Representation, 26, 132-145.
  • [30] Li, C. Y., Guo, J. C., Cong, R. M., Pang, Y. W., and Wang, B. (2016). Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Transactions on Image Processing, 25(12), 5664-5677.
  • [31] Zhu, J. Y., Park, T., Isola, P., and Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223-2232).

  • [32] Fabbri, C., Islam, M. J., and Sattar, J. (2018, May). Enhancing underwater imagery using generative adversarial networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7159-7165). IEEE.
  • [33] Mohammadi, P., Ebrahimi-Moghadam, A., and Shirani, S. (2014). Subjective and objective quality assessment of image: A survey. arXiv preprint arXiv:1406.7799.
  • [34] Seshadrinathan, K., Soundararajan, R., Bovik, A. C., and Cormack, L. K. (2010). Study of subjective and objective quality assessment of video. IEEE transactions on Image Processing, 19(6), 1427-1441.
  • [35] Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.
  • [36] Yang, M., and Sowmya, A. (2015). An underwater color image quality evaluation metric. IEEE Transactions on Image Processing, 24(12), 6062-6071.
  • [37] Panetta, K., Gao, C., and Agaian, S. (2015). Human-visual-system-inspired underwater image quality measures. IEEE Journal of Oceanic Engineering, 41(3), 541-551.
  • [38] Jian, M., Qi, Q., Dong, J., Yin, Y., Zhang, W., and Lam, K. M. (2017, July). The OUC-vision large-scale underwater image database. In 2017 IEEE International Conference on Multimedia and Expo (ICME) (pp. 1297-1302). IEEE.