Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier

06/21/2021
by Simone Marullo, et al.

In the last decade, motivated by the success of Deep Learning, the scientific community has proposed several approaches to make the learning procedure of Neural Networks more effective. Focusing on the way training data are provided to the learning machine, we can distinguish between the classic random selection of stochastic gradient-based optimization and more involved techniques that devise curricula to organize data and progressively increase the complexity of the training set. In this paper, we propose a novel training procedure, named Friendly Training, that, unlike the aforementioned approaches, alters the training examples to help the model better fulfil its learning criterion. The model is allowed to simplify those examples that are too hard to classify at a certain stage of training. The data transformation is controlled by a developmental plan that progressively reduces its impact during training, until it completely vanishes. In a sense, this is the opposite of what is commonly done to increase robustness against adversarial examples, i.e., Adversarial Training. Experiments on multiple datasets show that Friendly Training yields improvements over both informed data sub-selection routines and random selection, especially with deep convolutional architectures. Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
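
To make the description above concrete, here is a minimal PyTorch sketch of one way the idea could be realized: each mini-batch is perturbed by descending the loss with respect to the inputs (the opposite direction of an adversarial attack), and the perturbation budget follows a developmental plan that decays to zero during training. The function names, the single-step perturbation, the linear schedule and the hyperparameters (alpha0, steps) are illustrative assumptions rather than the authors' implementation, and for brevity the sketch simplifies every example instead of only the hardest ones.

    # Minimal sketch of the Friendly Training idea (assumed names and schedule).
    import torch
    import torch.nn.functional as F

    def friendly_perturb(model, x, y, alpha, steps=1):
        """Return a simplified copy of batch x by descending the loss w.r.t. the inputs."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= alpha * grad  # move inputs toward lower loss (the "friendly" direction)
        return (x + delta).detach()

    def train(model, loader, optimizer, epochs, alpha0=0.1):
        for epoch in range(epochs):
            # Developmental plan: the allowed simplification shrinks linearly and
            # vanishes by the last epoch, so the model ends up training on unmodified data.
            alpha = alpha0 * max(0.0, 1.0 - epoch / max(1, epochs - 1))
            for x, y in loader:
                x_easy = friendly_perturb(model, x, y, alpha) if alpha > 0 else x
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x_easy), y)
                loss.backward()
                optimizer.step()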

Related research

12/18/2021 - Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks
Amongst a variety of approaches aimed at making the learning procedure o...

10/29/2018 - Logit Pairing Methods Can Fool Gradient-Based Attacks
Recently, several logit regularization methods have been proposed in [Ka...

09/13/2022 - Adversarial Coreset Selection for Efficient Robust Training
Neural networks are vulnerable to adversarial attacks: adding well-craft...

03/04/2021 - Gradient-Guided Dynamic Efficient Adversarial Training
Adversarial training is arguably an effective but time-consuming way to ...

09/11/2019 - Towards Noise-Robust Neural Networks via Progressive Adversarial Training
Adversarial examples, intentionally designed inputs tending to mislead d...

02/14/2019 - Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
Convolutional Neural Networks and Deep Learning classification systems i...

01/07/2021 - Robust Text CAPTCHAs Using Adversarial Examples
CAPTCHA (Completely Automated Public Turing test to tell Computers and H...
