Improved Mix-up with KL-Entropy for Learning From Noisy Labels

08/15/2019
by Qian Zhang, et al.

Although deep neural networks (DNNs) have achieved excellent performance in image classification, training them requires large amounts of clean data with accurate annotations. Collecting a dataset is easy, but annotating the collected data is difficult. The web offers abundant image data with inaccurate annotations, yet training on such data makes networks prone to over-fitting the noisy labels, which degrades performance. In this work, we propose an improved joint optimization framework that combines the mix-up entropy and the Kullback-Leibler (KL) entropy as the loss function. The new loss function enables better fine-tuning after the framework updates the label annotations. We conduct experiments on the CIFAR-10 and Clothing1M datasets, and the results show that our approach performs favorably against other state-of-the-art methods.
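To make the combined loss concrete, here is a minimal PyTorch sketch of one plausible reading: a mix-up cross-entropy term on mixed inputs and (updated) soft labels, plus a KL-divergence regularizer pulling the batch-average prediction toward a class prior. The abstract does not specify the exact form of either term, so the function name `mixup_kl_loss`, the KL target, and the weighting `kl_weight` are all assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of a mix-up cross-entropy combined with a KL
# regularizer, as one plausible instantiation of the loss the abstract
# describes. The KL target (a class prior) and weights are assumptions.
import torch
import torch.nn.functional as F


def mixup_kl_loss(model, x, y_soft, alpha=1.0, kl_weight=0.1, prior=None):
    """Combined loss: mix-up cross-entropy + KL to an assumed class prior.

    x      : batch of images, shape (B, C, H, W)
    y_soft : soft label distributions (possibly updated by the framework),
             shape (B, num_classes)
    prior  : assumed class prior; defaults to uniform
    """
    # Sample the mix-up coefficient from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)

    # Mix inputs and their soft labels with the same coefficient.
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y_soft + (1.0 - lam) * y_soft[idx]

    log_p = F.log_softmax(model(x_mix), dim=1)

    # Mix-up cross-entropy against the mixed soft labels.
    ce = -(y_mix * log_p).sum(dim=1).mean()

    # KL term pulling the batch-average prediction toward the prior,
    # discouraging collapse onto a few (possibly noise-dominated) classes.
    p_mean = log_p.exp().mean(dim=0)
    if prior is None:
        prior = torch.full_like(p_mean, 1.0 / p_mean.numel())
    kl = (prior * (prior.log() - p_mean.clamp_min(1e-8).log())).sum()

    return ce + kl_weight * kl
```

In this reading, the mix-up term regularizes the decision boundary by training on convex combinations of samples, while the KL term acts on the prediction distribution, which is the kind of fine-tuning signal the abstract attributes to the loss after the label annotations are updated.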
