Deep learning has become the de facto approach in image and speech recognition, text processing, and multi-modal learning, offering superior accuracy and robustness over other machine learning approaches [2, 3]. Among its many application areas, the most prominent are agricultural inspection for fruit counting [4] and disease detection [5], vehicle and pedestrian traffic monitoring [6, 7], and structural health monitoring of critical infrastructure [8, 9, 10], to name a few.
MAVs are versatile platforms for collecting imagery which can be used to train and test deep networks. These platforms are especially well-suited to applications that require multiple traversals of large volumes with spots that are hard to reach for a human worker, such as large infrastructure like tunnels, bridges and towers [11, 12, 13]. Collecting the large image datasets required for training a deep network in such environments has become possible with the emergence of complete estimation and navigation stacks for MAVs [1, 14, 15]. This work relies on such datasets collected by [1].
Critical infrastructure such as dams, bridges and skyscrapers experience structural deterioration due to corrosion, aging and tremendous repetitive loads. External factors such as earthquakes and extreme atmospheric conditions such as storms exacerbate this problem. In order to avoid possible catastrophic consequences such as structural collapse, flooding and fire, periodic inspection and maintenance are indispensable. The situation is far more severe for dams and penstocks, since many hydraulic power plants and water conduits in the U.S. were built more than half a century ago (https://www.infrastructurereportcard.org/cat-item/dams/).
Penstocks are steel or concrete tunnels that carry water from upstream to the turbines at the bottom of a dam to generate electricity. Visual inspection of a penstock is possible only when it is completely dewatered, which in turn interrupts electricity generation and downstream regulation. Conventional inspection methods require building scaffolding inside the penstock and an inspector to either climb up inside the penstock or swing down from the gate to spot regions on the tunnel surface that require maintenance, using only a hand-held torch. These methods are time-consuming, labor-intensive, dangerous, inaccurate due to difficult low-light working conditions, and reliant on the inspector's subjective judgment. For these reasons, it is crucial to perform visual inspection faster and more accurately with the least human intervention, in order to reduce maintenance cost and safety threats and to increase effectiveness.
In this study, we propose a data-driven, deep learning method to automatically identify corroded spots on the surface of a penstock. The proposed U-Net design requires a small training dataset consisting of fewer than 40 manually annotated image samples. This network performs pixel-wise classification of image regions into groups such as coating, water, rivet, and wet or corroded surface. The off-line classification algorithm runs at 12 frames per second (fps) on the original images, which enables real-time, onsite rapid processing of high-resolution raw images. Fig. 5 shows an example output of our classifier.
We rely on the datasets collected by [1], which uses an autonomous MAV to traverse a penstock while collecting imagery from four onboard color cameras whose combined field of view covers the whole annulus of the tunnel. In this respect, the proposed method complements [1] in providing an end-to-end autonomous tunnel inspection system. Our U-Net works successfully despite low-light conditions and excessive noise due to dust and mist particles occluding the camera views (Fig. 4). To our knowledge, this is the first study to perform automated defect detection on a dataset collected autonomously from such critical infrastructure using an MAV.
In the subsequent sections, we discuss the class imbalance problem and consider different loss functions to mitigate it. We generalize the focal loss function, originally proposed for a single-class problem [16], to our multi-class segmentation problem. We also analyze our experimental results with different metrics to highlight the importance of choosing the right metrics when evaluating the performance of the learned models. Our empirical experiments show that the focal loss function, associated with a proper set of class weights, can improve the segmentation results. We also discuss the effect of the class weights on the performance of the weighted focal loss.
This work presents multiple contributions. First, we extend the focal loss function, originally proposed for single-class detection, to handle the multi-class segmentation problem. Second, the proposed U-Net is retrofitted to work in low-light conditions and in the presence of excessive image noise due to dust and mist particles causing significant camera occlusion. Finally, this work complements our previous work [1] in offering an end-to-end autonomous tunnel inspection system consisting of autonomous data collection with an MAV and automated image annotation, with minimal human input both for flying the MAV and for training the deep network. To our knowledge, this is the first study that offers a complete solution for the inspection of such critical large infrastructure under challenging low-light and high-noise conditions.
II Related Work
II-A Deep Learning for Visual Infrastructure Inspection
There has been huge interest in using deep learning techniques for infrastructure inspection. In one of the recent studies, [8] introduces a sliding-window technique using a CNN-based classifier to detect cracks on concrete and steel surfaces. The major drawback of this method is that it cannot satisfy real-time processing requirements and would fail to detect small defects, which are very frequent in our images. This type of framework is also not data-efficient, since it processes each image patch as a single sample. Indeed, the authors mention that they train with 40K image patches cropped from 277 high-resolution images captured with a hand-held camera, which is far beyond the number of training images we use in this work.
[17] uses CNNs to detect defects on different types of materials such as fabric, stone and wood. They compare their network to conventional machine learning methods such as Gabor filters, random forests and independent component analysis. However, the authors do not aim at a complete system for autonomous data collection and automated defect detection. Moreover, their training dataset consists of 3000 images, which is also far larger than ours.
In a similar application to ours, [10] proposes feeding a CNN with low-level features such as edges so as to obtain a mixture of low- and high-level features before classifying pixels to detect defects on concrete tunnel surfaces. Unlike that work, we propose an end-to-end fully convolutional neural network that does not depend on handcrafted features and works with arbitrary input image sizes. Indeed, some of the low-level features used in their work, such as edges, texture and frequency, are neither easy to obtain nor informative for the CNN in our setting. This is due to the characteristics of the noise caused by dust and mist as well as the complex appearance of the penstock surface.
II-B Small Training Dataset
The quality and size of the training data are crucial for supervised learning tasks such as image segmentation. Data size gains even more importance for deep neural networks, which are known to require large training sets. However, the amount of training data is limited in field robotics applications due to the cost of the data collection process. To mitigate this problem, several solutions have been proposed, such as generating synthetic data [18], data augmentation, transfer learning and data-efficient network designs, to name a few.
A well-known data-efficient deep neural network is U-Net [21], which won the Cell Tracking Challenge at ISBI 2015 (http://www.celltrackingchallenge.net/). U-Net can be trained with only a handful of data samples. Due to the above considerations, in this work we adopt this network design after some simplifications.
Another work related to our segmentation task is [22] on biomedical imaging. The authors focus on instance segmentation, which attempts to label individual biological cells. Their problem is also less challenging due to well-controlled lighting conditions, which is not the case in our application, as seen in Fig. 5.
II-C Class Imbalance in Deep Learning
Class imbalance, on which a vast literature exists in the classical machine learning domain, has not attracted significant attention in the deep learning context. One of the very few studies on this topic is by Buda et al. [23], who provide a systematic investigation of class imbalance in deep learning. However, their solutions focus on redistributing the frequency of classes in the training dataset using sampling, two-phase training and thresholding, rather than on loss functions as we do.
The authors of [16] propose the focal loss as an alternative to sampling methods. However, it is meant to solve the single-class detection problem. In this study, we extend its usage by investigating it in a multi-class segmentation problem.
Unlike the above studies, our images suffer from imperfect lighting conditions such as over-exposure on reflective areas and under-exposure on non-reflective areas. Furthermore, the propeller downwash kicks up dust and mist that occlude the camera view. In addition, the corroded spots are highly non-homogeneous in appearance and size, making the segmentation even more challenging.
III-A Data Collection
The dataset used in this study was collected with a customized DJI-F550 platform described in [1], which autonomously flies inside a penstock at Center Hill Dam, TN. The MAV, shown in Fig. 1, is equipped with four synchronized color cameras mounted such that the combined field of view covers the annulus of the tunnel. A typical dataset contains 3600 color images from each camera, saved in a compressed format.
III-B Data Preprocessing
Weak onboard illumination, reflective wet surfaces and noise due to dust and mist particles cause images to be either pale and textureless, very bright, or occluded by bright dust traces (Fig. 4). Hence, we preprocess the images to suppress these effects before feeding them into our network. We apply contrast limited adaptive histogram equalization using OpenCV's CLAHE module to mitigate the brightness imbalance. Also, image regions occluded by the MAV's landing gear and camera lens covers are masked out. Since we perform inference in the first-person view, we omit image undistortion.
We pick 70 images, 38 of which are used for training. The training images are extracted from the first portion of the video, while the test images are extracted from the latter. In order to make sure that the images are captured from different viewpoints, we subsample the video at a fixed frame interval, starting from the moment the robot reaches its nominal horizontal velocity. Fig. 5a shows a sample labeled image.
We label pixels in an image according to different classes including: normal coating, wet coating, corroded, rivet, water and others. Fig. 5b shows the percentage of these classes in the training dataset.
At this point, we must emphasize that although corroded spots constitute only a small fraction of the unoccluded pixels, the human data labeler was confident about only part of them; for the remainder, the labeler was not confident in their choice due to a lack of image detail. The latter group of image regions is marked in pink in Fig. 5a. One could ignore the pixels belonging to these regions in the training set to avoid noisy labels, at the expense of losing positive samples for training. In this study, we are aware that these regions can cause problems and use them with caution as corroded spots.
IV Problem Definition
IV-A The Multi-class Segmentation Problem
Our goal is to detect corroded spots in images captured by an autonomous MAV flying inside a penstock. There are various object detection and semantic segmentation techniques for single-class or multi-class segmentation that could possibly serve this purpose. However, we chose semantic segmentation rather than object detection because the corroded spots in our dataset are highly variant in size, shape, position and intensity, making it challenging to detect them using bounding boxes.
We formulate this problem as a multi-class segmentation task. Let $\{I_i\}_{i=1}^{N}$ be the set of training images of dimension $H \times W$. Each image $I_i$ is associated with a mask $M_i$ of size $H \times W$ that gives the class of every pixel of the image, assuming that each pixel belongs to a single class. More specifically, a pixel $(u, v)$ of image $I_i$ has an intensity value $I_i(u, v)$ and a label $M_i(u, v) \in \{1, \dots, C\}$, where $C$ is the number of classes considered in the problem.
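As an illustrative sketch of this labeling (the image size, class count and label values below are hypothetical), each pixel's integer label can equivalently be encoded as a per-pixel one-hot vector:

```python
# Toy 2x3 "image" mask with C = 6 classes; each pixel carries one integer label.
H, W, C = 2, 3, 6
mask = [[0, 4, 4],
        [2, 0, 5]]

def to_one_hot(mask, num_classes):
    """Per-pixel one-hot encoding: y[i][j][c] = 1 iff pixel (i, j) has label c."""
    return [[[1 if c == label else 0 for c in range(num_classes)]
             for label in row] for row in mask]

one_hot = to_one_hot(mask, C)
```

The one-hot form is what the cross-entropy-style losses discussed later operate on.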
During the training process, the network is fed with samples, which are pairs $(I_i, M_i)$. Essentially, the network attempts to map $I_i$ to its label mask $M_i$, but it is only able to achieve an estimate $\hat{M}_i$. The difference between $M_i$ and $\hat{M}_i$ indicates how good the current model is and provides a training signal to adjust the model's parameters accordingly. During testing, the network is fed only with images $I_i$ and attempts to predict the corresponding labels.
IV-B Loss Function and Updating Parameters of Deep Network
In a supervised deep learning framework, a loss function acts as a measure of the goodness of the current model state during the training process. Let $\mathbf{y}$ and $\hat{\mathbf{y}}$ be the ground-truth labels and the network outputs. The training process can be considered as a highly non-linear optimization problem that minimizes a loss or objective function $L(\mathbf{y}, \hat{\mathbf{y}})$. We do this by iteratively updating the network parameters $\theta$ as a function of the derivative of the loss function with respect to $\theta$, written as the recurrence relation
$$\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t), \tag{1}$$
where $\theta_t$ represents the parameters of the network at iteration $t$ and $\eta$ is the learning rate. Thus, the choice of the loss function is one of the determining factors in how a deep network performs, since with the right loss function the training process can converge much faster and also result in a network that makes more accurate inferences on the test data.
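A minimal numerical sketch of the update in Eq. 1, using a toy scalar loss $L(\theta) = (\theta - 3)^2$ as a stand-in for the network loss (both the loss and the learning rate here are made up for illustration):

```python
def gradient_step(theta, grad, lr):
    """One iteration of Eq. 1: theta_{t+1} = theta_t - lr * dL/dtheta."""
    return theta - lr * grad(theta)

def dloss(theta):
    # Derivative of the toy loss L(theta) = (theta - 3)^2.
    return 2.0 * (theta - 3.0)

theta = 0.0
for _ in range(200):
    theta = gradient_step(theta, dloss, lr=0.1)
# theta approaches the minimizer theta* = 3.0
```

The same recurrence, applied to millions of parameters with gradients obtained by backpropagation, is what the training loop performs.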
V Class Imbalance and Loss Functions
V-A Class Imbalance
Each data sample in the training dataset contributes to the update in Eq. 1 through the loss function $L$, regardless of which training scheme is used, such as stochastic gradient descent or mini-batch stochastic gradient descent. Thus, the frequency of a class in the training dataset determines the shape of a given loss function: the more samples a class contains, the more that class affects the loss function and therefore the training process. A dominant class in the training dataset contributes much more to the loss function than the samples of other classes. This biases the training process in such a way that the network assigns all training data to the dominant class for the sake of minimizing the loss function. Consequently, the trained model can wrongly predict all test samples as belonging to this class. This problem is called class imbalance.
Generally speaking, class imbalance is a common problem in tasks such as object detection and segmentation. In robotics and medical imaging applications, this problem is especially critical since the training dataset is often small due to the expensive data acquisition process.
To mitigate this problem, there has been extensive work on strategic sampling. In this study, we generalize the focal loss, introduced in [16] as an alternative to these sampling-based methods, to multi-class segmentation problems.
V-B Softmax Cross Entropy (SCE) and Weighted Softmax Cross Entropy (W-SCE)
Let $C$ be the number of classes considered in the classification problem and $N$ be the number of samples used to calculate the loss that updates the network parameters. Let $\mathbf{y}_n \in \{0, 1\}^C$, with $\sum_{c} y_{n,c} = 1$, be the one-hot vector of the true label of sample $n$, and let $\mathbf{p}_n$ be the corresponding softmax output of the network. The softmax cross entropy loss can be defined as
$$L_{SCE} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c} \log(p_{n,c}),$$
where $p_{n,c}$ is essentially the network's confidence that sample $n$ belongs to class $c$, with $\sum_{c} p_{n,c} = 1$.
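As a sketch, the softmax cross entropy can be computed in plain Python as follows (the two samples and their logits are made up for illustration):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_cross_entropy(one_hot_batch, prob_batch):
    """L_SCE = -(1/N) * sum_n sum_c y_{n,c} * log(p_{n,c})."""
    total = 0.0
    for y, p in zip(one_hot_batch, prob_batch):
        total -= sum(yc * math.log(pc) for yc, pc in zip(y, p))
    return total / len(one_hot_batch)

# Two toy samples with C = 3 classes.
probs = [softmax([2.0, 0.5, 0.1]), softmax([0.2, 0.1, 3.0])]
labels = [[1, 0, 0], [0, 0, 1]]
loss = softmax_cross_entropy(labels, probs)
```

Only the term for the true class of each sample contributes, since the one-hot vector zeroes out the rest.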
To address the class imbalance problem, one common trick is to associate weighting factors $w_c$ with the $C$ classes. These can be set by inverting the class frequencies in the training dataset or tuned as hyperparameters. In this study, we use the former method. The cross entropy loss then becomes the weighted cross entropy loss
$$L_{W\text{-}SCE} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} w_c \, y_{n,c} \log(p_{n,c}).$$
An advantage of this loss function is that it can emphasize the importance of rare classes or classes of concern, providing better training signals to the network. However, it cannot differentiate between easy and hard samples.
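The class-weighted variant can be sketched the same way; the toy batch, probabilities and inverse-frequency weights below are made up for illustration:

```python
import math

def weighted_cross_entropy(one_hot_batch, prob_batch, class_weights):
    """L_W-SCE = -(1/N) * sum_n sum_c w_c * y_{n,c} * log(p_{n,c})."""
    total = 0.0
    for y, p in zip(one_hot_batch, prob_batch):
        total -= sum(w * yc * math.log(pc)
                     for w, yc, pc in zip(class_weights, y, p))
    return total / len(one_hot_batch)

# Toy two-class batch in which class 1 is rare (1 sample out of 4).
labels = [[1, 0], [0, 1], [1, 0], [1, 0]]
probs = [[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.7, 0.3]]
# Inverse-frequency weights: class 0 occurs 3/4 of the time, class 1 occurs 1/4.
weights = [4.0 / 3.0, 4.0]
loss = weighted_cross_entropy(labels, probs, weights)
```

The rare class's single sample now contributes on par with the three samples of the common class.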
V-C Focal Loss for Multi-class Classification
Extending [16] to the multi-class setting, we write the focal loss as
$$L_{Focal} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c} (1 - p_{n,c})^{\gamma} \log(p_{n,c}),$$
where $\gamma \geq 0$ is a focusing parameter. The effect of the focal loss and the value of $\gamma$ can be explained as follows: when a hard sample is misclassified with low confidence on the true class, i.e. $p_{n,c}$ is small, the weighting factor $(1 - p_{n,c})^{\gamma}$ is close to 1, preserving that sample's contribution to the total loss. In contrast, an easy sample correctly classified with a high confidence value has a weight close to 0, reducing its contribution to the total loss. In summary, the focal loss appreciates the importance of hard samples, regardless of which class they belong to, by giving them more weight while downweighting easy samples.
According to the authors, the focusing parameter $\gamma$ controls the rate at which easy samples are downweighted. In the special case $\gamma = 0$, the focal loss becomes equivalent to cross entropy. When $\gamma$ is high, the weighting factor becomes exponentially small, extending the range of samples considered easy.
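The behavior of the modulating factor can be checked numerically; the confidence values below are arbitrary examples:

```python
def focal_weight(p_true, gamma):
    """Modulating factor (1 - p_t)^gamma applied to a sample's loss term."""
    return (1.0 - p_true) ** gamma

easy = focal_weight(0.95, gamma=2.0)   # confidently correct -> tiny weight
hard = focal_weight(0.10, gamma=2.0)   # low confidence on the true class -> large weight
plain = focal_weight(0.95, gamma=0.0)  # gamma = 0 keeps every weight at 1
```

With gamma = 2, the confident sample's contribution is scaled by 0.0025 while the hard sample keeps 81% of its cross-entropy term.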
V-D Weighted Focal Loss (W-Focal)
A drawback of the focal loss function is that it can underestimate the importance of samples in the classes of concern. In addition, it is sensitive to wrongly labeled samples in the training dataset, since such samples are mistakenly treated as hard samples. We discuss these problems in more detail in Sec. VII.
Thus, in addition to the focal loss, we also investigate the performance of the weighted focal loss, which is written as
$$L_{W\text{-}Focal} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} w_c \, y_{n,c} (1 - p_{n,c})^{\gamma} \log(p_{n,c}).$$
Essentially, the weighted focal loss can address the aforementioned problems of the focal loss by adjusting $w_c$ to emphasize the importance of a certain class $c$, as well as by reducing the $w_c$ associated with classes that might have been labeled wrongly.
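Combining the two ingredients gives the following sketch; as a sanity check, unit weights with gamma = 0 must reduce to plain cross entropy:

```python
import math

def weighted_focal_loss(one_hot_batch, prob_batch, class_weights, gamma=2.0):
    """L_W-Focal = -(1/N) * sum_n sum_c w_c * y_{n,c} * (1-p_{n,c})^gamma * log(p_{n,c})."""
    total = 0.0
    for y, p in zip(one_hot_batch, prob_batch):
        total -= sum(w * yc * (1.0 - pc) ** gamma * math.log(pc)
                     for w, yc, pc in zip(class_weights, y, p))
    return total / len(one_hot_batch)

# With gamma = 0 and unit weights this reduces to plain cross entropy;
# with gamma = 2 the same 50%-confidence sample is downweighted by 0.25.
ce_equiv = weighted_focal_loss([[1, 0]], [[0.5, 0.5]], [1.0, 1.0], gamma=0.0)
focal_2 = weighted_focal_loss([[1, 0]], [[0.5, 0.5]], [1.0, 1.0], gamma=2.0)
```

In practice, the per-class weights $w_c$ and $\gamma$ are tuned jointly, as examined in the experiments.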
There has been a succession of Fully Convolutional Networks (FCNs) for image semantic segmentation following [24], such as U-Net [21], DeepLab [19] and SegNet [25]. In this study, we adopt U-Net [21], as it has shown superior performance in biomedical applications. Also, since the U-Net design is simple, we can focus on investigating alternative loss functions and their performance with a small training dataset and class imbalance. Unlike the original U-Net, we reduce the number of feature channels in each block so that inference during testing can be done in real time. We also apply batch normalization [26] after every convolution layer as a means of achieving better model regularization.
VI-B Training Scheme
The deep network is implemented in TensorFlow [27] and trained using mini-batch gradient descent with a batch size of 2. We use the Adam optimizer with default parameters.
To cope with the small training dataset, we intensively utilize data augmentation techniques including random rotation, random cropping and padding, random gamma shifting, random brightness shifting, and random color shifting.
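One of the listed augmentations, random gamma shifting, can be sketched as follows; the gamma range and the toy pixel list are assumptions for illustration:

```python
import random

def random_gamma_shift(pixels, gamma_range=(0.7, 1.3), rng=None):
    """Apply a random gamma curve to intensities normalized to [0, 1]."""
    rng = rng if rng is not None else random.Random()
    gamma = rng.uniform(*gamma_range)
    return [p ** gamma for p in pixels]

augmented = random_gamma_shift([0.0, 0.25, 0.5, 1.0], rng=random.Random(0))
```

Because the input is normalized to [0, 1], the output stays in the valid intensity range while mid-tones are brightened or darkened.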
In order to evaluate the performance of each loss function, we create multiple variants to train and test. For each variant of the model, we associate U-Net with one of the following loss functions:
Softmax cross-entropy (SCE)
Weighted softmax cross-entropy (W-SCE)
Focal loss (Focal). We evaluate three variants corresponding to three values of $\gamma$.
Weighted focal loss (W-Focal). We evaluate three variants corresponding to the same three values of $\gamma$.
While the loss functions in the variants differ, all other settings are kept the same. The weights for the pixel classes are chosen to be approximately the inverse of their frequencies in the training dataset, except for the classes whose annotations are too noisy.
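The inverse-frequency weighting can be sketched as follows; the normalization (rescaling so the weights sum to the number of classes) and the example frequencies are assumptions, not the paper's exact values:

```python
def inverse_frequency_weights(frequencies):
    """Class weights proportional to 1/frequency, rescaled to sum to the class count."""
    inv = [1.0 / f for f in frequencies]
    scale = len(inv) / sum(inv)
    return [w * scale for w in inv]

# Hypothetical class frequencies (fractions of labeled pixels).
weights = inverse_frequency_weights([0.50, 0.25, 0.25])
```

The rarest classes receive the largest weights, counteracting their under-representation in the loss.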
VII Results and Evaluation
In this section, we first discuss the metrics that we use in evaluation. Then, we present our experimental results and discuss the performance of different loss functions from both qualitative and quantitative perspectives. Since corrosion detection is the primary concern of this work, we present all metrics and evaluations for the corroded class.
VII-A Evaluation Metrics
Choosing a proper evaluation metric for a segmentation task is of great importance, since this decides which models are favored. According to Csurka et al. [28], judging a segmentation algorithm is difficult since the evaluation is highly application dependent.
In this study, our quantitative results are reported in terms of four metrics: Dice similarity coefficient (DSC), sensitivity, specificity, and total error. Let TP denote pixels classified as corroded both in the ground truth and by the algorithm; FP, pixels not corroded in the ground truth but classified as corroded by the algorithm; TN, pixels not corroded in the ground truth and not classified as corroded by the algorithm; and FN, pixels corroded in the ground truth but not classified as corroded by the algorithm. The first three metrics are written as
$$DSC = \frac{2\,TP}{2\,TP + FP + FN}, \qquad \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP},$$
while the total error weights the cost of false negatives relative to false positives by an adjustable factor $\beta$.
While sensitivity is a measure of the true positive rate, specificity is a measure of the true negative rate. DSC reflects the true positive rate while also penalizing FP and FN. On the other hand, the total error, by introducing an adjustable value $\beta$, provides a flexible measure of how much more missing a corroded pixel (FN) costs than raising a false alarm (FP). Note that the total error does not take TP into account as DSC does, and the total error depends on $\beta$. Thus, DSC and total error do not necessarily agree on which model is better.
In our experiments, $\beta$ is set to a value greater than one, reflecting the importance of identifying corroded spots. We choose the favorable model as the one with a high DSC and an acceptable total error.
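The four metrics can be computed as below; the exact total-error formula of the paper is not reproduced here, so the $(\beta \cdot FN + FP)/\text{total}$ form and the example counts are illustrative assumptions in which $\beta$ scales the cost of a missed corroded pixel relative to a false alarm:

```python
def segmentation_metrics(tp, fp, tn, fn, beta=1.0):
    """DSC, sensitivity, specificity, and a beta-weighted total error (assumed form)."""
    dsc = 2.0 * tp / (2.0 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    total_error = (beta * fn + fp) / (tp + fp + tn + fn)
    return dsc, sensitivity, specificity, total_error

# Hypothetical pixel counts for the corroded class.
dsc, sens, spec, err = segmentation_metrics(tp=80, fp=20, tn=880, fn=20, beta=2.0)
```

Note that specificity is inflated by the large TN count of the dominant background, which is exactly why DSC and the FN-weighted error are more informative for the rare corroded class.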
VII-B Quantitative and Qualitative Results
DSC, Sensitivity, Specificity: Higher is better. Total Error: Smaller is better
Tab. I shows the quantitative results based on the four metrics, while Fig. 4 shows the qualitative results of the different loss functions with fixed class weights. We use softmax cross entropy as the base loss against which the other losses are compared. As seen in Fig. 4, the base loss fails to preserve the boundaries of corroded regions due to the imbalance problem. As a consequence, its sensitivity values are much smaller than its specificity values.
Incorporating class weights into the base loss function significantly improves the sensitivity, increases DSC and lowers the total error. The visualization in Fig. 4 shows that the weighted softmax cross entropy loss helps preserve the boundaries better than the base loss, at the expense of more false positives. This is because the class weights help balance the contributions of the classes to the total loss, providing more training signal from the positive samples.
On the other hand, the focal loss is shown to be superior to the base loss in specificity and DSC values. However, it also fails to preserve the boundaries of corroded regions, as in the base loss case, and its sensitivity values do not surpass those of the base loss. The visualization in Fig. 4 demonstrates that the focal loss tends to produce fewer false positives than the weighted softmax cross entropy but more false negatives, resulting in higher specificity but lower sensitivity. This suggests that, in this case, the focal loss tends to consider false positive samples harder than false negative samples. This can be explained by the fact that there is dust noise as well as reflective regions in the images that look like corrosion; during training, the network may output low confidence when classifying them as non-corroded. Another contributor to the low sensitivity can be wrong annotations, especially in the image regions where the labeler is uncertain about the correct label, as discussed in Sec. III.
Taking the advantages of the weighted loss and the focal loss into account, the weighted focal loss performs better than the other methods. Indeed, it ties with the focal loss on DSC values while yielding higher sensitivity and lower total error values. In our experiments, the weighted focal loss and the focal loss perform best with the same choice of $\gamma$. Fig. 4 shows that not only the corroded regions but also the wet and rivet regions are cleaner and more complete in the weighted focal loss results.
Comparisons over the whole video sequence are available online.
VII-C Effects of Class Weights
To investigate the effect of class weights on the performance of the loss functions, we conduct an experiment with the W-SCE and W-Focal losses in which we adjust the weight of the corroded class while keeping the remaining class weights and all other settings the same. Let $w_{corr}$ be the normalized weight of corroded pixels (its original value corresponds to the results shown in Tab. I). We train the networks as before with the W-SCE and W-Focal losses using three additional values of $w_{corr}$ and report the results in Tab. II.
DSC, Sensitivity, Specificity: higher is better. Total Error: smaller is better
The results in Tab. II demonstrate that the class weights largely affect the metric scores. A higher weight for the corroded class results in higher sensitivity but lower specificity, since the model favors the corroded class for the sake of minimizing the loss. As a result, the DSC and total error values each attain their optimum at a different value of $w_{corr}$.
These observations reinforce the suggestion that the choice of loss function is highly application dependent and that introducing proper class weights can help achieve an optimal solution for the specific metric of interest.
This work offers an automated solution for safer, faster, more cost-efficient and objective infrastructure inspection, with a focus on penstocks. We propose a data-efficient, data-driven image segmentation method using a fully convolutional neural network that can detect highly non-homogeneous objects under low-light and high-noise conditions in real time. Our method can be seamlessly combined with other MAV planning algorithms to provide a completely automated, real-time inspection tool that replaces humans in labor-intensive and dangerous tasks. Our analysis of different loss functions can provide hints for general image segmentation problems with class imbalance. The experimental results obtained with the dataset collected at Center Hill Dam, TN demonstrate that the focal loss, in combination with a proper set of class weights, yields better segmentation results than the commonly used softmax cross entropy loss. One limitation of the focal loss and the weighted focal loss is that their outputs tend to vary across testing runs. This can be addressed in future work.
-  T. Özaslan, G. Loianno, J. Keller, C. J. Taylor, V. Kumar, J. M. Wozencraft, and T. Hood, “Autonomous navigation and mapping for inspection of penstocks and tunnels with mavs,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1740–1747, 2017.
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, p. 436, 2015.
-  L. Deng, D. Yu, et al., “Deep learning: Methods and applications,” Foundations and Trends® in Signal Processing, vol. 7, no. 3–4, pp. 197–387, 2014.
-  I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez, and C. McCool, “Deepfruits: A fruit detection system using deep neural networks,” Sensors, vol. 16, no. 8, p. 1222, 2016.
-  S. P. Mohanty, D. P. Hughes, and M. Salathé, “Using deep learning for image-based plant disease detection,” Frontiers in plant science, vol. 7, p. 1419, 2016.
-  P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun, “Pedestrian detection with unsupervised multi-stage feature learning,” in
-  Y. Lv, Y. Duan, W. Kang, Z. Li, F.-Y. Wang, et al., “Traffic flow prediction with big data: A deep learning approach.” IEEE Trans. Intelligent Transportation Systems, vol. 16, no. 2, pp. 865–873, 2015.
-  Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep learning-based crack damage detection using convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, pp. 361–378, 2017.
-  C. M. Yeum and S. J. Dyke, “Vision-based automated crack detection for bridge inspection,” Computer-Aided Civil and Infrastructure Engineering, vol. 30, no. 10, pp. 759–770, 2015.
-  K. Makantasis, E. Protopapadakis, A. Doulamis, N. Doulamis, and C. Loupos, “Deep convolutional neural networks for efficient vision based tunnel inspection,” in Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on. IEEE, 2015, pp. 335–342.
-  T. Özaslan, G. Loianno, J. Keller, C. J. Taylor, and V. Kumar, “Spatio-temporally smooth local mapping and state estimation inside generalized cylinders with micro aerial vehicles,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 4209–4216, Oct 2018.
-  J. Quenzel, M. Nieuwenhuisen, D. Droeschel, M. Beul, S. Houben, and S. Behnke, “Autonomous mav-based indoor chimney inspection with 3d laser localization and textured surface reconstruction,” Journal of Intelligent & Robotic Systems, pp. 1–19, 2018.
-  M. Burri, J. Nikolic, C. Hürzeler, G. Caprari, and R. Siegwart, “Aerial service robots for visual inspection of thermal power plant boiler systems,” in Applied Robotics for the Power Industry (CARPI), 2012 2nd International Conference on. IEEE, 2012, pp. 70–75.
-  K. Mohta, M. Watterson, Y. Mulgaonkar, S. Liu, C. Qu, A. Makineni, K. Saulnier, K. Sun, A. Zhu, J. Delmerico, et al., “Fast, autonomous flight in gps-denied and cluttered environments,” Journal of Field Robotics, vol. 35, no. 1, pp. 101–120, 2018.
-  G. Loianno, C. Brunner, G. McGrath, and V. Kumar, “Estimation, control, and planning for aggressive flight with a small quadrotor with a single camera and imu,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 404–411, 2017.
-  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” IEEE transactions on pattern analysis and machine intelligence, 2018.
-  J.-K. Park, B.-K. Kwon, J.-H. Park, and D.-J. Kang, “Machine learning-based imaging system for surface defect inspection,” International Journal of Precision Engineering and Manufacturing-Green Technology, vol. 3, no. 3, pp. 303–310, 2016.
-  G. Georgakis, A. Mousavian, A. C. Berg, and J. Kosecka, “Synthesizing training data for object detection in indoor scenes,” arXiv preprint arXiv:1702.07836, 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2018.
-  E. Romera, L. M. Bergasa, J. M. Alvarez, and M. Trivedi, “Train here, deploy there: Robust segmentation in unseen domains,” in Proceedings of the IEEE conference on Intelligent Vehicles Symposium, p. to appear, IEEE ITS, 2018.
-  O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
-  F. A. Guerrero-Pena, P. D. M. Fernandez, T. I. Ren, M. Yui, E. Rothenberg, and A. Cunha, “Multiclass weighted loss for instance segmentation of cluttered cells,” arXiv preprint arXiv:1802.07465, 2018.
-  M. Buda, A. Maki, and M. A. Mazurowski, “A systematic study of the class imbalance problem in convolutional neural networks,” Neural Networks, vol. 106, pp. 249–259, 2018.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
-  V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” arXiv preprint arXiv:1511.00561, 2015.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: a system for large-scale machine learning.” in OSDI, vol. 16, 2016, pp. 265–283.
-  G. Csurka, D. Larlus, F. Perronnin, and F. Meylan, “What is a good evaluation measure for semantic segmentation?.” in BMVC, vol. 27. Citeseer, 2013, p. 2013.