Learning degraded image classification with restoration data fidelity

01/23/2021
by Xiaoyu Lin, et al.

Learning-based methods, especially those based on convolutional neural networks (CNNs), continue to show superior performance in computer vision applications ranging from image classification to restoration. For image classification, most existing work focuses on very clean images, such as those in the Caltech-256 and ImageNet datasets. In most realistic scenarios, however, the acquired images may suffer from degradation. An important and interesting problem is therefore to combine image classification and restoration tasks to improve the performance of CNN-based classification networks on degraded images. In this report, we explore the influence of degradation types and levels on four widely used classification networks, as well as the use of a restoration network to eliminate the degradation's influence. We also propose a novel method that leverages a fidelity map to calibrate the image features obtained by pre-trained classification networks. We empirically demonstrate that our proposed method consistently outperforms the pre-trained networks under all degradation levels and types with additive white Gaussian noise (AWGN), and it even outperforms networks re-trained on degraded images at low degradation levels. We also show that the proposed method is model-agnostic and benefits different classification networks. Our results indicate that the proposed method is a promising way to mitigate the effect of image degradation.
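To make the feature-calibration idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of how a fidelity map, e.g. an estimate of the per-pixel restoration error, could be encoded into channel-wise weights that modulate the features of a frozen pre-trained classifier. The class name, the ResNet-18 backbone, and the simple pooling-based fidelity encoder are all assumptions chosen for illustration.

```python
# Hypothetical sketch: calibrating a frozen classifier's features with a fidelity map.
import torch
import torch.nn as nn
import torchvision.models as models

class FidelityCalibratedClassifier(nn.Module):
    def __init__(self, num_classes=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in for any pre-trained classifier
        # Keep everything up to (and including) global average pooling, drop the final FC layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False  # the classification backbone stays frozen
        # Tiny encoder turning a (B, 1, H, W) fidelity map into channel-wise calibration weights.
        self.fidelity_encoder = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(1, 512),
            nn.Sigmoid(),
        )
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, restored_img, fidelity_map):
        feat = self.features(restored_img).flatten(1)   # (B, 512) image features
        scale = self.fidelity_encoder(fidelity_map)     # (B, 512) calibration weights
        return self.classifier(feat * scale)            # calibrated class scores

# Usage with dummy tensors:
model = FidelityCalibratedClassifier(num_classes=256)
imgs = torch.randn(4, 3, 224, 224)       # restored (previously degraded) images
fmap = torch.rand(4, 1, 224, 224)        # per-pixel fidelity estimates in [0, 1]
logits = model(imgs, fmap)               # (4, 256)
```

Because only the fidelity encoder and the final classifier are trainable, such a design can in principle be attached to different pre-trained backbones, which is consistent with the model-agnostic claim in the abstract.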
