Improving Robustness of Feature Representations to Image Deformations using Powered Convolution in CNNs
In this work, we address the problem of improving the robustness of feature representations learned by convolutional neural networks (CNNs) to image deformations. We argue that image deformations can shift the higher-order moment statistics of feature distributions, that this shift degrades performance, and that, as our experimental analyses show, it cannot be reduced by ordinary normalization methods. To attenuate this effect, we introduce additional non-linearity into CNNs by combining power functions with learnable parameters into the convolution operation. In experiments on large-scale benchmark datasets, we observe that CNNs employing the proposed method obtain a remarkable boost in both generalization performance and robustness under various types of deformations. For instance, a model equipped with the proposed method obtains a 3.3% boost in mAP on the Pascal VOC object detection task using deformed images compared to the reference model, while both models provide the same performance on original images. To the best of our knowledge, this is the first work to study the robustness of deep features learned by CNNs under a wide range of deformations for object recognition and detection.
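As an illustration only, the following is a minimal PyTorch sketch of one way to combine a learnable power function with a convolution, in the spirit of the approach described above. The abstract does not specify the exact formulation, so the per-channel exponent `alpha` and the signed-power form used here are assumptions, not the paper's method.

```python
import torch
import torch.nn as nn


class PoweredConv2d(nn.Module):
    """Convolution followed by a signed power non-linearity with a
    learnable per-channel exponent (hypothetical formulation)."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)
        # One learnable exponent per output channel, initialized to 1
        # so the layer starts as a plain convolution.
        self.alpha = nn.Parameter(torch.ones(out_channels))

    def forward(self, x):
        y = self.conv(x)
        # Broadcast alpha over batch and spatial dimensions.
        alpha = self.alpha.view(1, -1, 1, 1)
        # A signed power keeps the sign of the response while rescaling its
        # magnitude, which modulates higher-order statistics of the features.
        return torch.sign(y) * (y.abs() + 1e-6) ** alpha


# Example usage: drop-in replacement for a standard conv layer in a CNN block.
if __name__ == "__main__":
    layer = PoweredConv2d(3, 16, kernel_size=3, padding=1)
    out = layer(torch.randn(2, 3, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```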