Deep Learning in Target Space

06/02/2020
by Michael Fairbank, et al.

Deep learning uses neural networks that are parameterised by their weights. These networks are usually trained by tuning the weights to directly minimise a given loss function. In this paper we propose to reparameterise the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, it is possible to calculate the weights that make the firing strengths best meet those targets. We argue that training with targets addresses the problem of exploding gradients, through a process we call cascade untangling, and makes the loss-function surface smoother to traverse, leading to easier and faster training and potentially better generalisation of the neural network. It also allows easier learning of deeper and recurrent network structures. The necessary conversion of targets to weights comes at an extra computational expense, which in many cases is manageable. Learning in target space can be combined with existing neural-network optimisers for extra gain. Experimental results demonstrate the speed of training in target space and examples of improved generalisation for fully-connected and convolutional networks, as well as the ability to recall and process long time sequences and perform natural-language processing with recurrent networks.
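The core idea of the abstract, converting targets for node firing strengths back into weights, can be sketched as a regularised least-squares fit per layer. The sketch below is illustrative only and makes assumptions beyond the abstract: the helper name `weights_from_targets`, the tanh activation, the ridge term, and the layer-by-layer forward construction are all choices made here for clarity, not details taken from the paper.

```python
import numpy as np

def weights_from_targets(layer_input, targets, ridge=1e-3):
    """Compute the weight matrix whose pre-activations best match the
    given targets, via ridge-regularised least squares (an assumed
    concrete realisation of the targets-to-weights conversion)."""
    X = layer_input  # shape (batch, n_in)
    T = targets      # shape (batch, n_out)
    # Solve (X^T X + ridge*I) W = X^T T for W.
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ T)  # shape (n_in, n_out)

# Toy two-layer network where the targets, not the weights,
# are the trainable parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))      # input batch
T1 = rng.normal(size=(32, 8))     # targets for hidden pre-activations
W1 = weights_from_targets(X, T1)  # weights derived from targets
H = np.tanh(X @ W1)               # hidden firing strengths
T2 = rng.normal(size=(32, 2))     # targets for output pre-activations
W2 = weights_from_targets(H, T2)
out = H @ W2                      # network output
```

In this formulation a gradient-based optimiser would update the target matrices `T1` and `T2` rather than the weights, re-deriving `W1` and `W2` after each update; that re-derivation is the "extra computational expense" the abstract mentions.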
