MN-Pair Contrastive Damage Representation and Clustering for Prognostic Explanation
Infrastructure managers must maintain a high standard of service to ensure user satisfaction during daily operations. Surveillance cameras and drone inspections have enabled progress toward automating the detection of damage features and the assessment of structural health conditions. Given pairs of raw images and damage class labels, we can train a supervised model to predict a predefined damage grade. However, damage representations do not always match these predefined classes: finer clusters may exist in unseen regions of the damage space, and more complex clusters may arise where two damage grades overlap. Because damage features are fundamentally complex, not all damage classes can be perfectly predefined in advance. Our proposed MN-pair contrastive learning method explores the embedded damage representation beyond the predefined classes, including finer-grained clusters. It simultaneously maximizes the similarity of M-1 positive images to an anchor and the dissimilarity of N-1 negative images, using weighted loss functions on both. By using M-1 positive images instead of one, it learns faster than the N-pair algorithm. We propose a pipeline that learns the damage representation and applies density-based clustering in a 2-D reduced space to automate finer cluster discrimination. We also visualize explanations of the damage features by applying Grad-CAM to the MN-pair metric learning model. We demonstrate our method in three experimental studies (steel product defects, concrete cracks in decks and pavements, and sewer pipe defects), show its effectiveness, and discuss potential future work.
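To make the loss construction concrete, the sketch below shows one plausible form of an MN-pair objective, generalizing the standard N-pair loss log(1 + Σ exp(a·n_j − a·p)) by averaging it over M-1 positives. This is a hedged illustration in NumPy, not the paper's exact formulation: the function name, the temperature parameter, and the averaging scheme are assumptions for demonstration.

```python
import numpy as np

def mn_pair_loss(anchor, positives, negatives, temperature=1.0):
    """Illustrative MN-pair loss: pull M-1 positives toward the anchor
    while pushing N-1 negatives away.

    One plausible generalization of the N-pair loss
    log(1 + sum_j exp(a.n_j - a.p)), averaged over the positives.
    Shapes: anchor (d,), positives (M-1, d), negatives (N-1, d).
    """
    # L2-normalize so dot products are cosine similarities.
    a = anchor / np.linalg.norm(anchor)
    P = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    N = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)

    pos_sims = (P @ a) / temperature  # (M-1,) anchor-positive similarities
    neg_sims = (N @ a) / temperature  # (N-1,) anchor-negative similarities

    # For each positive, penalize any negative scoring higher than it,
    # then average over the M-1 positives.
    losses = [np.log1p(np.sum(np.exp(neg_sims - s))) for s in pos_sims]
    return float(np.mean(losses))
```

A quick sanity check of the intended behavior: positives that embed close to the anchor should yield a lower loss than positives that embed far from it, with the same negatives.

```python
anchor = np.array([1.0, 0.0, 0.0])
near_pos = np.array([[0.9, 0.1, 0.0], [0.95, 0.0, 0.05]])
far_pos = np.array([[-0.9, 0.1, 0.0], [-0.95, 0.0, 0.05]])
negs = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(mn_pair_loss(anchor, near_pos, negs) < mn_pair_loss(anchor, far_pos, negs))
```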