
Learning Deep Neural Networks under Agnostic Corrupted Supervision

02/12/2021
by Boyang Liu, et al.

Training deep neural models in the presence of corrupted supervision is challenging, as the corrupted data points may significantly degrade generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of individual data points (e.g., based on their loss values) and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of the data points on the average gradient. Even when a corrupted data point fails to be excluded by our algorithm, it has only a limited impact on the overall loss, in contrast to state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets demonstrate the robustness of our algorithm under different types of corruption.
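To make the contrast with loss-based filtering concrete, below is a minimal PyTorch sketch of one way gradient-based trimming could look: samples whose per-sample gradients have the largest norms are excluded before the gradients are averaged. The function name `robust_mean_gradient`, the `trim_frac` parameter, and the keep-smallest-norm rule are illustrative assumptions for this sketch, not the paper's exact algorithm; see the full text for the actual procedure and guarantees.

```python
import torch
import torch.nn as nn

def robust_mean_gradient(model, loss_fn, x, y, trim_frac=0.1):
    """Average gradient over a batch after dropping the samples whose
    per-sample gradients have the largest norms (illustrative sketch)."""
    params = [p for p in model.parameters() if p.requires_grad]
    per_sample_grads, grad_norms = [], []

    # Per-sample gradients (computed one sample at a time for clarity; slow but simple).
    for i in range(x.shape[0]):
        loss = loss_fn(model(x[i:i + 1]), y[i:i + 1])
        grads = torch.autograd.grad(loss, params)
        per_sample_grads.append([g.detach() for g in grads])
        grad_norms.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))

    grad_norms = torch.stack(grad_norms)
    n_keep = max(1, int((1.0 - trim_frac) * x.shape[0]))
    keep = torch.argsort(grad_norms)[:n_keep]  # keep the smallest-norm samples

    # Average the kept per-sample gradients, parameter by parameter.
    avg = [torch.zeros_like(p) for p in params]
    for i in keep.tolist():
        for a, g in zip(avg, per_sample_grads[i]):
            a.add_(g / n_keep)
    return avg

# Usage sketch: plug the trimmed average gradient into an optimizer step.
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
grads = robust_mean_gradient(model, nn.MSELoss(), x, y, trim_frac=0.1)
for p, g in zip(model.parameters(), grads):
    p.grad = g
opt.step()
```

The point of the sketch is the selection criterion: a corrupted sample that survives the trimming step can only perturb the averaged gradient by a bounded amount, whereas loss-value filtering places no such cap on the influence of a missed outlier.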


Related research

06/11/2021 · Twin Neural Network Regression is a Semi-Supervised Regression Algorithm
Twin neural network regression (TNNR) is a semi-supervised regression al...

05/07/2021 · Self-paced Resistance Learning against Overfitting on Noisy Labels
Noisy labels composed of correct and corrupted ones are pervasive in pra...

12/29/2020 · Twin Neural Network Regression
We introduce twin neural network (TNN) regression. This method predicts ...

06/30/2022 · Benchmarking the Robustness of Deep Neural Networks to Common Corruptions in Digital Pathology
When designing a diagnostic model for a clinical application, it is cruc...

09/12/2017 · Community Recovery in Hypergraphs
Community recovery is a central problem that arises in a wide variety of...

03/02/2022 · PUMA: Performance Unchanged Model Augmentation for Training Data Removal
Preserving the performance of a trained model while removing unique char...