Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions

03/29/2021
by   Xuan Cheng, et al.

In this paper, we propose Selective Output Smoothing Regularization (SOSR), a novel regularization method for training convolutional neural networks (CNNs). Motivated by the observation that different samples affect training differently, SOSR improves performance by encouraging the model to produce equal logits on the incorrect classes for samples that it classifies correctly and over-confidently. This plug-and-play regularization method can be conveniently incorporated into almost any CNN-based project without extra hassle. Extensive experiments show that SOSR consistently achieves significant improvements on image classification benchmarks such as CIFAR-100, Tiny ImageNet, ImageNet, and CUB-200-2011. In particular, our method obtains 77.30% accuracy on ImageNet with ResNet-50, a 1.1% gain over the baseline (76.2%). We also empirically demonstrate that our method yields further improvements when combined with other widely used regularization techniques. On Pascal detection, using the SOSR-trained ImageNet classifier as the pretrained model leads to better detection performance. Moreover, we demonstrate the effectiveness of our method on the small-sample-size and imbalanced-dataset problems.
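The select-then-smooth idea in the abstract can be sketched as an auxiliary penalty: pick out samples the model already gets right with high confidence, then push their logits on the incorrect classes toward equality. The sketch below is one hypothetical reading, not the paper's exact formulation — the function name `sosr_penalty`, the confidence threshold of 0.9, and the use of variance as the measure of "equal logits" are all assumptions introduced for illustration.

```python
import numpy as np

def softmax(logits):
    # numerically stable row-wise softmax
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sosr_penalty(logits, labels, conf_threshold=0.9):
    """Penalty on unequal non-target logits, applied only to samples
    the model classifies correctly AND over-confidently (assumed
    selection rule; threshold is illustrative)."""
    probs = softmax(logits)
    preds = probs.argmax(axis=1)
    confident_correct = (preds == labels) & (probs.max(axis=1) > conf_threshold)
    if not confident_correct.any():
        return 0.0
    sel = logits[confident_correct]
    sel_labels = labels[confident_correct]
    # mask out the target logit; look only at the incorrect classes
    mask = np.ones_like(sel, dtype=bool)
    mask[np.arange(len(sel)), sel_labels] = False
    non_target = sel[mask].reshape(len(sel), -1)
    # variance across incorrect classes is zero exactly when they are all equal
    return float(non_target.var(axis=1).mean())
```

In training, such a penalty would be added (with some weight) to the usual cross-entropy loss; samples that are misclassified or under-confident contribute nothing, which matches the abstract's claim that the smoothing is selective rather than applied uniformly as in standard label smoothing.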


