Local Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks
The human visual system excels at detecting local blur in visual images, yet the underlying mechanism remains poorly understood. Traditional views of blur, such as reduction in local or global high-frequency energy and loss of local phase coherence, have fundamental limitations. For example, they cannot reliably discriminate flat regions from blurred ones. Here we argue that high-level semantic information is critical for successfully detecting local blur. We therefore resort to deep neural networks, which are proficient at learning high-level features, and propose the first end-to-end local blur mapping algorithm based on a fully convolutional network (FCN). We empirically show that high-level features from deeper layers play a more important role than low-level features from shallower layers in resolving challenging ambiguities in this task. We test the proposed method on a standard blur detection benchmark and demonstrate that it significantly advances the state of the art (ODS F-score of 0.853). In addition, we explore the use of the generated blur map in three applications: blur region segmentation, blur degree estimation, and blur magnification.
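To make the FCN idea concrete, the following is a minimal sketch of a fully convolutional blur mapper that takes an RGB image and outputs a per-pixel blur probability map. The layer widths, depth, and bilinear upsampling used here are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurMapFCN(nn.Module):
    """Toy fully convolutional network mapping an RGB image to a
    dense blur probability map in [0, 1].

    Layer sizes are placeholders; the paper's actual backbone and
    decoder may differ.
    """
    def __init__(self):
        super().__init__()
        # Encoder: stacked conv blocks; deeper layers capture the
        # high-level semantics the abstract argues are essential.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/4 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution scores blur vs. sharp at every location.
        self.score = nn.Conv2d(128, 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.score(self.encoder(x))
        # Upsample back to input resolution for a per-pixel map.
        logits = F.interpolate(logits, size=(h, w),
                               mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)

# Example: predict a blur map for one 224x224 image.
model = BlurMapFCN()
image = torch.rand(1, 3, 224, 224)
blur_map = model(image)   # shape (1, 1, 224, 224), values in [0, 1]
```

Because the network is fully convolutional, it accepts images of arbitrary size and produces a blur map of matching resolution, which is what enables the end-to-end mapping described above.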