Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks

12/18/2021
by Simone Marullo, et al.

Amongst a variety of approaches aimed at making the learning procedure of neural networks more effective, the scientific community has developed strategies to order examples according to their estimated complexity, to distil knowledge from larger networks, or to exploit the principles behind adversarial machine learning. A different idea was recently proposed, named Friendly Training, which consists in altering the input data by adding an automatically estimated perturbation, with the goal of facilitating the learning process of a neural classifier. The transformation progressively fades out as training proceeds, until it completely vanishes. In this work we revisit and extend this idea, introducing a radically different and novel approach inspired by the effectiveness of neural generators in the context of Adversarial Machine Learning. We propose an auxiliary multi-layer network that is responsible for altering the input data to make them easier for the classifier to handle at the current stage of training. The auxiliary network is trained jointly with the neural classifier, thus intrinsically increasing the 'depth' of the classifier, and it is expected to spot general regularities in the data-alteration process. The effect of the auxiliary network is progressively reduced up to the end of training, when it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training. An extended experimental procedure involving several datasets and different neural architectures shows that Neural Friendly Training outperforms the originally proposed Friendly Training technique, improving the generalization of the classifier, especially in the case of noisy data.
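To make the mechanism described in the abstract concrete, here is a minimal numpy sketch of the idea, not the authors' implementation: an auxiliary module (reduced to a single linear map `A` for brevity) perturbs the inputs, both the classifier weights `w` and `A` are updated jointly, and a coefficient `alpha` fades the perturbation to zero well before training ends, so that the deployed model sees raw inputs. The toy regression task, the linear shapes, and the schedule constants are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task standing in for the classifier's training data.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.5 * rng.normal(size=200)  # noisy targets

# Classifier parameters and auxiliary "friendly" module. The paper uses a
# multi-layer network; here the module is a single linear map delta = X @ A.
w = np.zeros(5)
A = np.zeros((5, 5))

epochs, lr = 200, 0.05
for t in range(epochs):
    # Perturbation strength fades out and reaches 0 before training ends.
    alpha = max(0.0, 1.0 - t / (0.75 * epochs))
    X_tilde = X + alpha * (X @ A)  # inputs simplified by the auxiliary module
    err = X_tilde @ w - y
    # Joint gradient step on classifier and auxiliary parameters
    # (squared-error loss, gradients derived by hand for this linear case).
    grad_w = 2 * X_tilde.T @ err / len(y)
    grad_A = 2 * alpha * X.T @ np.outer(err, w) / len(y)
    w -= lr * grad_w
    A -= lr * grad_A

# At deployment the auxiliary module is dropped: predictions use raw X only.
final_loss = np.mean((X @ w - y) ** 2)
```

Because `alpha` is exactly zero for the last quarter of training, the classifier's final updates already happen on unmodified data, which is what allows the auxiliary module to be discarded at deployment time.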


Related research

- 06/21/2021: Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier
- 06/18/2019: Poisoning Attacks with Generative Adversarial Nets
- 03/26/2021: Combating Adversaries with Anti-Adversaries
- 11/30/2022: Auxiliary Learning as a step towards Artificial General Intelligence
- 06/08/2020: Learning to Utilize Correlated Auxiliary Classical or Quantum Noise
- 09/05/2019: Training Compact Neural Networks via Auxiliary Overparameterization
- 05/22/2019: Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
