Background. In recent years, great efforts have been made in underwater robotics. For example, Gong et al. designed a soft robotic arm for underwater operation. Cai et al. developed a hybrid-driven underwater vehicle-manipulator system for collecting marine products. Towards intelligent autonomous robots, visual methods are usually adopted for underwater scene perception [1, 2, 3, 4].
Problem & motivation. Although visual restoration has proven helpful for traditional man-made features (e.g., SIFT), the relation between image quality and convolutional representation remains unclear. As demonstrated in Fig. 1, underwater scenes are always degraded, and the degradation comes in different styles, i.e., color distortion, haziness, and poor illumination (see the top row). Filtering-based restoration (FRS) and GAN-based restoration (GAN-RS) generate higher-quality images. Although each column of Fig. 1 shows the same scenario, the detection results of the DRN detector differ across them. Hence, restoration and detection should have a latent relation that deserves investigation. To this end, we set out to answer the question: how does visual restoration contribute to object detection in aquatic scenes?
In addition, visual restoration changes the data domain, and data domain is known to be important for data-driven learning [21, 22, 23, 24, 25]. However, within-domain and cross-domain detection performances under different data domains have rarely been studied; that is, the domain effect on object detection remains unclear. In our opinion, exploring this effect is instructive for building robust real-world detectors. We are therefore motivated to investigate the relation between image quality and detection performance based on visual restoration, so as to unveil the domain effect on object detection. In this way, the relation of restoration to detection can also be exposed.
Our work. In this paper, we jointly analyze visual restoration and object detection for underwater robotic perception. First, we construct quality-diverse data domains with FRS and GAN-RS for both training and testing. FRS is a traditional filtering method while GAN-RS is a learning-based scheme, so together they are representative of the restoration field. We then investigate typical single-stage detectors (i.e., SSD, RetinaNet, RefineDet, and DRN) on different data domains and analyze within-domain and cross-domain performances. Finally, real-world experiments are conducted on the seabed for online object detection. Based on our study, the relation of restoration-based data domain to detection performance is unveiled. Although restoration induces adverse effects on within-domain detection, it efficiently suppresses domain shift (i.e., the discordance between training domain and testing domain) between training images and practical scenes. Thus, visual restoration still plays an essential role in aquatic robotic perception. Our contributions are summarized as follows:
We reveal three domain effects on detection: 1) Domain quality has a negligible effect on within-domain convolutional representation and detection accuracy after sufficient training; 2) low-quality domain brings about better generalization in cross-domain detection; 3) in domain-mixed training, low-quality domain can hardly be well learned.
We indicate that restoration is a thankless operation for improving within-domain detection accuracy; in detail, it reduces recall efficiency. However, visual restoration is beneficial in reducing domain shift between training data and practical aquatic scenes, so online detection performance can be boosted. It is therefore an essential operation for real-world object perception.
Based on our analysis, online object detection is successfully conducted on the unstructured natural seabed with an aquatic vision-based robot.
II Related Work
Underwater visual restoration. Because of natural physical phenomena, underwater visual signals are usually degraded, yielding low-quality vision. In detail, underwater images/videos show low contrast, strong color distortion, and strong haziness, making image processing difficult. Schechner and Karpel attributed this degeneration to underwater absorption and scattering. To overcome this difficulty, Peng and Cosman proposed a restoration method based on image blurriness and light absorption, which estimates scene depth for the image formation model. Chen et al. adopted a filtering model and the artificial fish algorithm for real-time visual restoration. Li et al. hierarchically estimated the background light and transmission map, and their method is characterized by minimum information loss. Chen et al. proposed a weakly supervised GAN and adversarial critic training to achieve real-time adaptive restoration. Recently, Liu et al. built an underwater enhancement benchmark for follow-up works, whose samples were collected on the seabed under natural light.
These studies reveal that visual restoration is beneficial for clearing image details and producing salient low-level features. For example, the canonical SIFT algorithm delivers a huge performance improvement after restoration. However, how visual restoration contributes to CNN-based feature representation remains unclear. Moreover, visual restoration is tightly related to data domain, so we explore the domain effect based on restoration.
Object detection & domain adaptation.
In the deep learning era, single-stage object detection uses a single-shot network for regression and classification. As a pioneering work, Liu et al. proposed SSD for real-time detection. Inspired by the feature pyramid network, Lin et al. developed RetinaNet, which propagates CNN features in a top-down manner to enlarge shallow layers' receptive fields. Zhang et al. introduced two-step regression into the single-stage pipeline and designed RefineDet to address the class imbalance problem. Chen et al. proposed DRN with anchor-offset detection, which achieves single-stage region proposal. Although some two-stage detectors and anchor-free detectors can induce higher accuracy, single-stage methods maintain a better accuracy-speed trade-off for robotic tasks.
The above detectors generally assume that training and test samples are drawn from an identical distribution. However, real-world data usually suffer from domain shift, which hurts detection performance. Hence, the cross-domain robustness of object detection has recently been explored. Chen et al. proposed adaptive components for image-level and instance-level domain shift based on H-divergence theory. Xu et al. utilized a deformable part-based model and adaptive SVM to mitigate the domain shift problem. Raj et al. developed a subspace alignment approach for detecting objects in real-world scenarios. To alleviate domain shift, Khodabandeh et al. exploited a robust learning method with noisy labels. Inoue et al. proposed cross-domain weakly-supervised training based on domain transfer and pseudo-labeling for domain-adaptive object detection.
These works indicate how to moderate the domain shift problem, but relatively little work has extensively studied the domain effect on detection performance. In contrast, based on underwater scenarios, we investigate the effect of quality-diverse data domains on object detection. Kalogeiton et al. analyzed detection performance under different image qualities, but our work differs in several respects: 1) their analysis predates the deep learning era, whereas we analyze deep learning-based object detection; 2) they considered the impact of simple factors (e.g., Gaussian blur), whereas our domain change is derived from realistic visual restoration; 3) they only analyzed cross-domain performance, whereas we investigate both cross-domain and within-domain performances; 4) our work contributes to aquatic robotics.
III-A Preliminary of Data Domain Based on Visual Restoration
Domain generation. We use a publicly available dataset for underwater object detection, i.e., the Underwater Robotic Picking Contest 2018 dataset (URPC2018, http://en.cnurpc.org/). This dataset was collected on the natural seabed at Zhangzidao, Dalian, China. URPC2018 is composed of 2,901 aquatic images for training and 800 samples for testing, and it contains four categories, i.e., "trepang", "echinus", "shell", and "starfish".
Based on URPC2018, three data domains are generated: 1) domain-O, the original dataset with the train set and test set; 2) domain-F, where all samples are processed by FRS, producing the train-F set for training and the test-F set for testing; 3) domain-G, where all samples are restored by GAN-RS, generating the train-G set for training and the test-G set for testing. The mixture of train, train-F, and train-G is denoted as train-all. As shown in Fig. 2, domain-O suffers from strong color distortion, haziness, and low contrast, and the degraded samples are effectively restored in domain-F and domain-G.
Domain analysis. According to prior work, the Lab color space describes the underwater properties of images well. Thus, Fig. 3 illustrates the a-b distribution in Lab color space. The distribution of domain-O consistently gathers far from the color balance point. The bias between the distribution center and the balance point indicates strong color distortion, and the concentrated distribution indicates strong haziness. On the contrary, the distributions of domain-F and domain-G show a trend toward color balance and haze removal.
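The a-b analysis above can be sketched with simple statistics. The following is a minimal illustration, not the paper's exact procedure: it assumes images have already been converted to Lab (e.g., with an external library), takes a = b = 0 as the neutral point, and uses the hypothetical helper name `ab_statistics`.

```python
import numpy as np

def ab_statistics(lab_image):
    """Summarize the a-b chromatic distribution of one Lab image.

    A centroid far from the neutral point suggests a color cast, and a
    small spread suggests haze-induced low chromatic diversity.
    `lab_image` is an (H, W, 3) float array in L, a, b order, with
    a = b = 0 taken as the color-balance point.
    """
    a = lab_image[..., 1].ravel()
    b = lab_image[..., 2].ravel()
    centroid = np.array([a.mean(), b.mean()])
    bias = float(np.linalg.norm(centroid))   # distance to the neutral point
    spread = float(a.std() + b.std())        # crude chromatic-diversity proxy
    return centroid, bias, spread

# A uniformly green-cast, hazy image: centroid far from (0, 0), zero spread.
cast = np.zeros((4, 4, 3))
cast[..., 1] = -30.0
centroid, bias, spread = ab_statistics(cast)
```

Pooling such per-image statistics over a whole set gives the kind of distribution plotted in Fig. 3.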
The underwater color image quality evaluation metric (UCIQE) and the underwater image quality measures (UICM, UISM, UIConM, UIQM) are used to describe domain quality. UCIQE quantifies image quality via chrominance, saturation, and contrast. UIQM is a comprehensive quality representation of an underwater image, in which UICM, UISM, and UIConM separately describe color, sharpness, and contrast. Referring to Table I, benefiting from visual restoration, domain-F achieves the best UCIQE and UICM while domain-G achieves the best UISM, UIConM, and UIQM. Therefore, we define domain-F and domain-G as high-quality domains with high-quality samples, and domain-O as a low-quality domain with low-quality samples. Besides, referring to Fig. 3 and Table I, GAN-RS produces better restoration results, so we say that GAN-RS induces a higher restoration intensity than FRS.
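As a rough illustration of how UCIQE combines chrominance, contrast, and saturation, here is a simplified sketch. The weighting coefficients follow values commonly reported for the metric, but the saturation term here (chroma divided by luminance) is a simplification of the published definition, so this should be read as an approximation rather than the reference implementation.

```python
import numpy as np

def uciqe_sketch(lab_image, w=(0.4680, 0.2745, 0.2576)):
    """Approximate UCIQE on an (H, W, 3) Lab image.

    Linear combination of chroma standard deviation, luminance contrast
    (spread between the 1% and 99% quantiles of L), and mean saturation
    (approximated as chroma / L).
    """
    L = lab_image[..., 0].astype(float)
    a = lab_image[..., 1].astype(float)
    b = lab_image[..., 2].astype(float)
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                               # chrominance std
    con_l = np.quantile(L, 0.99) - np.quantile(L, 0.01)  # luminance contrast
    mu_s = (chroma / (L + 1e-6)).mean()                  # saturation proxy
    return w[0] * sigma_c + w[1] * con_l + w[2] * mu_s

# A flat image: no chroma variation, no contrast, saturation 5/50 = 0.1.
flat = np.empty((4, 4, 3))
flat[..., 0], flat[..., 1], flat[..., 2] = 50.0, 3.0, 4.0
score = uciqe_sketch(flat)
```

Higher values of each term reward exactly the properties (colorfulness, contrast, saturation) that restoration recovers in domain-F and domain-G.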
III-B Preliminary of Detector
According to prior results, two-stage methods have no advantage over single-stage approaches on URPC2018. Therefore, because of their ability to deliver both high accuracy and real-time inference speed, we leverage single-stage detectors for underwater offline/online object detection. In detail, this paper investigates SSD, RetinaNet, RefineDet, and DRN. All these detectors are trained on train, train-F, train-G, or train-all. As for training details, an SGD optimizer with momentum and weight decay is employed with a fixed batch size, and the learning rate is decayed in two stages from its initial value over the course of training. In this manner, all detectors can be sufficiently trained. For evaluation, mean average precision (mAP) is employed to describe detection accuracy.
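The stagewise step-decay schedule described above can be sketched as follows. The base learning rate, boundary steps, and decay factor below are placeholders, since the exact values are omitted in the text.

```python
def step_lr(step, base_lr=1e-3, boundaries=(80_000, 100_000), gamma=0.1):
    """Return the learning rate at a given iteration step under a
    two-stage step-decay schedule: multiply by `gamma` at each boundary.
    All numeric values are illustrative placeholders."""
    lr = base_lr
    for boundary in boundaries:
        if step >= boundary:
            lr *= gamma
    return lr

# The rate holds at base_lr, then drops by 10x at each boundary.
schedule = [step_lr(s) for s in (0, 80_000, 100_000)]
```

In a training loop, `step_lr(step)` would simply be assigned to the optimizer's learning rate before each iteration.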
III-C Preliminary of Aquatic Robot
As shown in Fig. 4, the aquatic robot is equipped with a camera and a soft robotic arm for online object detection and grasping. The robot carries a microcomputer with an Intel i5-6400 CPU, an NVIDIA GTX 1060 GPU, and 8 GB RAM as the processor. Thus, it has sufficient computing power for online object detection.
[Table header: method | train data | test data | mAP | trepang | echinus | shell | starfish]
IV Experimental Analysis
IV-A Within-Domain Performance
In this test, detectors are trained and evaluated on an identical data domain. The following analysis unveils two points: 1) domain quality has a negligible effect on detection performance; 2) restoration is a thankless method for improving within-domain detection performance because of low recall efficiency. Note that low recall efficiency means low precision at the same recall rate.
[Table header: method | train data | test data | mAP | trepang | echinus | shell | starfish]
Numerical analysis. First, we train and evaluate SSD with different input sizes (i.e., 320 and 512) and backbones (i.e., VGG16, MobileNet, and ResNet101). As shown in Table II, for both SSD320-VGG16 and SSD512-VGG16, accuracy decreases from domain-O through domain-F to domain-G, i.e., with the rise of restoration intensity. The same trend emerges in the backbone-variable assessments. Note that ResNet101 performs inferiorly to VGG16 and MobileNet, because the large receptive field of ResNet101 is unfavorable for the immense number of small objects in URPC2018. Referring to Table III, RetinaNet512, RefineDet512, and DRN512 all achieve their highest mAP on domain-O and their lowest mAP on domain-G. Thus, in terms of mAP, detection accuracy is negatively correlated with domain quality. However, mAP cannot reflect accuracy details, so the following analysis further investigates within-domain performance.
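For reference, the mAP figures above are computed per class and then averaged. The following is a generic sketch of all-point-interpolated average precision from a ranked detection list, not the paper's exact evaluation code; `average_precision` and `mean_ap` are hypothetical helper names.

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """All-point interpolated AP for one class from ranked detections.

    `scores` are detection confidences, `is_tp` marks whether each
    detection matched an unclaimed ground truth, `n_gt` is the number
    of ground-truth objects of this class.
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # Monotone precision envelope, then integrate over recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_ap(per_class):
    """mAP: mean of per-class APs; `per_class` holds (scores, is_tp, n_gt)."""
    return sum(average_precision(*c) for c in per_class) / len(per_class)

perfect = ([0.9, 0.8], [1, 1], 2)   # both objects found, no false positives
partial = ([0.9, 0.8], [1, 0], 2)   # one hit, one false positive, one miss
```

`average_precision(*perfect)` is 1.0 and `average_precision(*partial)` is 0.5, so `mean_ap([perfect, partial])` is 0.75.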
Visualization of convolutional representation. Humans perceive domain quality based on object saliency: compared to a low-quality domain, humans can more easily detect objects in a high-quality domain since high-quality samples contain salient object representations. We are thereby inspired to investigate object saliency in CNN-based detectors. Fig. 5 demonstrates multi-scale features in SSD and DRN. These features serve as the input of the detection heads, so they are the final convolutional features for detection. Referring to Fig. 5, despite the domain diversity, there is relatively little difference in object saliency across the multi-scale feature maps. Hence, in terms of object saliency, domain quality has a negligible effect on convolutional representation.
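One simple way to produce such feature visualizations, assuming the detector's feature tensors have been exported as arrays, is to collapse channels into a normalized 2-D saliency map. This is a generic sketch, not necessarily the visualization used for Fig. 5.

```python
import numpy as np

def saliency_map(feature):
    """Collapse a (C, H, W) feature map into a [0, 1] 2-D saliency map
    by averaging absolute channel activations, so that object saliency
    can be compared visually across data domains."""
    s = np.abs(np.asarray(feature, dtype=float)).mean(axis=0)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo + 1e-12)   # rescale for display

# Two identical channels: the map mirrors their shared activation pattern.
feat = np.stack([np.array([[0.0, 2.0], [4.0, 6.0]])] * 2)
sal = saliency_map(feat)
```

The resulting maps can then be rendered side by side for domain-O, domain-F, and domain-G inputs.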
[Table header: method | train data | test data | mAP | trepang | echinus | shell | starfish]
[Table header: method | train data | test data | mAP | trepang | echinus | shell | starfish]
Precision-recall analysis. As shown in Fig. 6, precision-recall curves are employed for further analysis of detection performance. The curves have two typical regions. On one hand, the high-precision part contains high-confidence detection results, and here the domain-related curves highly overlap. Referring to "echinus" detected by DRN512-VGG16, the curves of domain-O, domain-F, and domain-G cannot be separated at low recall rates. That is, when detecting high-confidence objects, domain difference is negligible for detection accuracy. On the other hand, the curves separate in the low-precision part. In detail, the curve of domain-F usually lies below that of domain-O, while the curve of domain-G usually lies below that of domain-F. That is, when detecting hard objects (i.e., low-confidence detection results), false positives increase with the rise of domain quality. For example, at the same recall rate in "starfish" detected by SSD512-VGG16, the precision of domain-F is lower than that of domain-O, and the precision of domain-G is lower than that of domain-F. Therefore, recall efficiency gradually decreases with increasing restoration intensity.
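The recall-efficiency comparison above can be made concrete: fix a target recall and read off the precision each domain's ranked detection list attains there. The helper below is a hypothetical sketch of that comparison, not the paper's evaluation code.

```python
import numpy as np

def precision_at_recall(scores, is_tp, n_gt, target_recall):
    """Precision at the first point where the ranked detection list
    reaches `target_recall` (None if that recall is never reached).
    At equal recall, a lower value means lower recall efficiency."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    hits = np.nonzero(recall >= target_recall)[0]
    if hits.size == 0:
        return None
    k = hits[0]
    return cum_tp[k] / (k + 1)

# Toy lists with 4 ground truths each: the second needs an extra false
# positive to reach the same recall, i.e., it has lower recall efficiency.
p_efficient = precision_at_recall([0.9, 0.8, 0.7, 0.6], [1, 1, 1, 0], 4, 0.75)
p_inefficient = precision_at_recall([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], 4, 0.75)
```

Here `p_efficient` is 1.0 while `p_inefficient` is 0.75, mirroring how the domain-G curve sits below the domain-O curve at matched recall.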
Based on the aforementioned analysis, it can be concluded that visual restoration impairs recall efficiency and is unfavorable for improving within-domain detection. In addition, because the domain-related mAPs are relatively close and high-confidence recall is far more important than low-confidence recall in robotic perception, we conclude that domain quality has a negligible effect on within-domain object detection.
IV-B Cross-Domain Performance
In this test, detectors are trained and evaluated on different data domains. The following analysis exposes three viewpoints: 1) as widely accepted, domain shift induces a significant accuracy drop; 2) for cross-domain inference, learning on a low-quality domain generalizes better toward a high-quality domain; 3) in domain-mixed learning, the low-quality domain contributes less, so low-quality samples cannot be well learned.
Cross-domain evaluation. We use domain-O and domain-G to evaluate direction-related domain shift. That is, we train detectors on train and evaluate them on test-G, or vice versa. As shown in Table IV, the mAP of all categories declines seriously. When train and test-G are employed, both SSD512-VGG16 and DRN512-VGG16 suffer a considerable mAP drop. However, when train-G and test are adopted, SSD and DRN suffer an even more dramatic accuracy degradation. Given these different degrees of accuracy drop under direction-opposite domain shift, the generalization of train toward test-G is better than that of train-G toward test. Therefore, it can be concluded that, compared to a high-quality domain, a low-quality domain induces better cross-domain generalization ability.
Cross-domain training. To explore detection performance with domain-mixed learning, we use train-all to train detectors and then evaluate them on test, test-F, and test-G. Referring to Table V, on test-F and test-G, SSD512-VGG16 and DRN512-VGG16 perform on par with their within-domain counterparts. However, both see dramatically worse accuracy on test. That is, with the same training settings, within-domain performance can be reproduced on the high-quality domain-F and domain-G, but the low-quality domain-O suffers a significant accuracy decline: when train-all is adopted, samples in train lose their effect to some extent. Thus, we conclude that cross-domain training is thankless for improving detection performance. Moreover, quality-diverse data domains contribute differently to the learning process, so low-quality samples cannot be well learned when mixed with high-quality samples.
IV-C Domain Effect in Robotics
In this test, we conduct real-world experiments with the aquatic robot. The test venue is the natural seabed at Jinshitan, Dalian, China. The following analysis answers the question: how does visual restoration contribute to object detection?
Online object detection in aquatic scenes. Based on our aquatic robot, we use DRN512-VGG16 to detect underwater objects. According to their training domains, we denote the detection methods as DRN512-VGG16-O, DRN512-VGG16-F, and DRN512-VGG16-G, which are trained on train, train-F, and train-G, respectively. If DRN512-VGG16-F or DRN512-VGG16-G is employed, the corresponding visual restoration (i.e., FRS or GAN-RS) is also applied to the online data. As shown in Fig. 7, DRN512-VGG16-O almost completely loses its effect on object perception, and DRN512-VGG16-F with FRS also has difficulty in detecting underwater objects. In contrast, DRN512-VGG16-G with GAN-RS achieves higher recall and detection precision in this real-world task. Because the detection method and the content of the training data are the same, the huge performance gap must be caused by the training domain. The experimental video is available at https://youtu.be/RekqnNa0JY0.
Online domain analysis. As shown in Fig. 8, there is a huge discrepancy between the online domain and domain-O, so DRN512-VGG16-O suffers serious degeneration in detection accuracy. Domain shift is moderated by FRS, but FRS is not sufficient to preserve detection performance in this scenario. On the contrary, GAN-RS has a higher restoration intensity: after GAN-RS processing, the online domain and domain-G highly overlap, as illustrated in Fig. 8. Therefore, DRN512-VGG16-G with GAN-RS performs this detection task well. It can be seen that the domain shift problem is gradually solved with increasing restoration intensity. In addition, underwater scene domains are manifold (see Fig. 1), so domain-diverse data collection is unattainable. Therefore, by contributing to domain shift suppression, visual restoration is essential for object detection in underwater environments.
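The overlap in Fig. 8 can be quantified crudely by comparing a-b centroids of the two image sets. This is a minimal sketch of such a domain-shift proxy under the assumption that per-pixel (or per-image mean) a-b pairs have already been extracted; `ab_centroid_shift` is a hypothetical helper name, and richer distribution distances would also apply.

```python
import numpy as np

def ab_centroid_shift(ab_set_1, ab_set_2):
    """Euclidean distance between the a-b centroids of two image sets,
    a crude proxy for the domain shift visualized between the training
    domain and the online domain."""
    c1 = np.asarray(ab_set_1, dtype=float).reshape(-1, 2).mean(axis=0)
    c2 = np.asarray(ab_set_2, dtype=float).reshape(-1, 2).mean(axis=0)
    return float(np.linalg.norm(c1 - c2))

# Toy a-b samples: centroids (1, 1) and (5, 5), so the shift is sqrt(32).
shift = ab_centroid_shift([[0, 0], [2, 2]], [[4, 4], [6, 6]])
```

A small shift between the restored online domain and domain-G, versus a large shift between the raw online domain and domain-O, matches the detection results reported above.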
This paper has exposed several phenomena of domain-related detection learning; we discuss the following points to inspire future work.
Recall efficiency. In within-domain tests, a high-quality domain induces lower detection performance because of low recall efficiency, i.e., it incurs more false positives. However, the object candidates that bring about false positives exist in both the training and testing phases. Under this condition, the learning of these candidates appears insufficient. Therefore, we advocate further research on how these candidates separately impact training and inference, so as to explore more efficient learning methods.
CNN's domain selectivity. In cross-domain training, low-quality samples lose their effect, so accuracy drops on the test set. It is seen that CNN-based learning is characterized by domain selectivity; that is, samples from different domains contribute unequally in CNN-based detection learning. Therefore, we advocate further research on CNNs' domain selectivity for building more robust real-world detectors.
In this paper, we have presented a domain analysis based on visual restoration and object detection for underwater robotic perception. First, quality-diverse data domains are derived from the URPC2018 dataset with FRS and GAN-RS. Furthermore, single-shot detectors are trained and evaluated, and their within-domain and cross-domain performances are unveiled. Finally, we conduct online object detection to reveal the effect of visual restoration on object detection. We conclude the following: 1) domain quality has a negligible effect on within-domain convolutional representation and detection accuracy; 2) a low-quality domain induces better cross-domain generalization; 3) a low-quality domain can hardly be well learned in a domain-mixed learning process; 4) visual restoration is a thankless method for elevating within-domain performance, since it incurs relatively low recall efficiency; 5) visual restoration is essential in online robotic perception since it relieves the problem of domain shift.
In the future, we will further explore domain-related recall efficiency and learning selectivity. Additionally, more robotic tasks will be carried out based on our analysis.
-  Z. Gong, J. Cheng, X. Chen, W. Sun, X. Fang, K. Hu, Z. Xie, T. Wang, and L. Wen, “A bio-inspired soft robotic arm: Kinematic modeling and hydrodynamic experiments,” J. Bionic Eng., vol. 15, no. 2, pp. 204–219, 2018.
-  M. Cai, Y. Wang, S. Wang, R. Wang, Y. Ren, and M. Tan, “Grasping marine products with hybrid-driven underwater vehicle-manipulator system,” IEEE Trans. Autom. Sci. Eng., doi: 10.1109/TASE.2019.2957782.
-  J. Gao, A. A. Proctor, Y. Shi, and C. Bradley, “Hierarchical model predictive image-based visual servoing of underwater vehicles with adaptive neural network dynamic control,” IEEE Trans. Cybern., vol. 46, no. 10, pp. 2323–2334, 2015.
-  A. Kim and R. M. Eustice, “Real-time visual SLAM for autonomous underwater hull inspection using visual saliency,” IEEE Trans. Robot., vol. 29, no. 3, pp. 719–733, 2013.
-  Y. Hu, W. Zhao, and L. Wang, “Vision-based target tracking and collision avoidance for two autonomous robotic fish,” IEEE Trans. Ind. Electron., vol. 56, no. 5, pp. 1401–1410, 2009.
-  Y.-Y. Schechner and N. Karpel, “Clear underwater vision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Washington, USA, Jun. 2004, pp. I-536–I-543.
-  R. Liu, X. Fan, M. Zhu, M. Hou, and Z. Luo, “Real-world underwater enhancement: Challenges benchmarks and solutions,” arXiv:1901.05320, 2019.
-  C. Li, J. Guo, R. Cong, Y. Pang, and B. Wang, “Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior,” IEEE Trans. Image Process., vol. 25, no. 12, pp. 5664–5677, 2016.
-  Y.-T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. Image Process., vol. 26, no. 4, pp. 1579–1594, 2017.
-  X. Chen, Z. Wu, J. Yu, and L. Wen, “A real-time and unsupervised advancement scheme for underwater machine vision,” in Proc. IEEE Int. Conf. Cyber Technol. Autom., Control, Intell. Syst., Hawaii, USA, Aug. 2017, pp. 271–276.
-  X. Chen, J. Yu, S. Kong, Z. Wu, X. Fang, and L. Wen, “Towards Real-Time Advancement of Underwater Visual Quality with GAN,” IEEE Trans. Ind. Electron., vol. 66, no. 12, pp. 9350–9359, 2019.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in Proc. Eur. Conf. Comput. Vis., Amsterdam, Netherlands, Oct. 2016, pp. 21–37.
-  S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Z. Li, “Single-shot refinement neural network for object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Salt Lake City, USA, Jun. 2018, pp. 4203–4212.
-  T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in Proc. IEEE Int. Conf. Comput. Vis., Venice, Italy, Oct. 2017, pp. 2980–2988.
-  X. Chen, X. Yang, S. Kong, Z. Wu, and J. Yu, “Dual refinement network for single-shot object detection,” in Proc. Int. Conf. Robot. Autom., Montreal, Canada, May 2019, pp. 8305–8310.
-  L. Pang, Z. Cao, J. Yu, P. Guan, X. Rong, H. Chai, “A visual leader-following approach with a TDR framework for quadruped robots,” IEEE Trans. on Syst., Man, and Cybern. Syst., doi: 10.1109/TSMC.2019.2912715, 2019.
-  D.-G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
-  C. Chi, S. Zhang, J. Xing, Z. Lei, S. Z. Li, and X. Zou, “Selective refinement network for high performance face detection,” in Proc. AAAI Conf. Artificial Intell., Honolulu, USA, Jul. 2019, pp. 8231–8238.
-  Y. Zhu, C. Zhao, H. Guo, J. Wang, X. Zhao, and H. Lu, “Attention couplenet: Fully convolutional attention coupling network for object detection,” IEEE Trans. Image Process., vol. 28, no. 1, pp. 113–126, 2019.
-  X. Zhou, J. Zhuo, and P. Krähenbühl, “Bottom-up object detection by grouping extreme and center points,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Long Beach, USA, Jun. 2019, pp. 850–859.
-  Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool, “Domain adaptive faster R-CNN for object detection in the wild,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Salt Lake City, USA, Jun. 2018, pp. 3339–3348.
-  J. Xu, S. Ramos, D. Vázquez, and A. M. López, “Domain adaptation of deformable part-based models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 12, pp. 2367–2380, 2014.
-  A. Raj, V. P. Namboodiri, and T. Tuytelaars, “Subspace alignment based domain adaptation for RCNN detector,” arXiv:1507.05578, 2015.
-  M. Khodabandeh, A. Vahdat, M. Ranjbar, and W. G. Macready, “A robust learning approach to domain adaptive object detection,” in Proc. IEEE Int. Conf. Comput. Vis., Seoul, Korea, Oct. 2019, pp. 480–490.
-  N. Inoue, R. Furuta, T. Yamasaki, and K. Aizawa, “Cross-domain weakly-supervised object detection through progressive domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Salt Lake City, USA, Jun. 2018, pp. 5001–5009.
-  V. Kalogeiton, V. Ferrari, and C. Schmid, “Analysing domain shift factors between videos and images for object detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 11, pp. 2327–2334, 2016.
-  J.-Y. Zhu, T. Park, P. Isola, and A.-A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. Int. Conf. Comput. Vis., Venice, Italy, Oct. 2017, pp. 2223–2232.
-  W. H. Lin, J. X. Zhong, S. Liu, T. Li, and G. Li, “RoIMix: Proposal-fusion among multiple images for underwater object detection,” arXiv:1911.03029, 2019.
-  M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 6062–6071, 2015.
-  K. Panetta, C. Gao, and S. Agaian, “Human-visual-system-inspired underwater image quality measures,” IEEE J. Ocean. Eng., vol. 41, no. 3, pp. 541–551, 2015.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognition, Las Vegas, USA, Jun. 2016, pp. 770–778.
-  A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv:1704.04861, 2017.