Implementation of a language driven Backpropagation algorithm

09/23/2013
by I. V. Grossu, et al.

Inspired by the importance of both communication and feedback on errors in human learning, our main goal was to implement a similar mechanism in the supervised learning of artificial neural networks. The starting point of our study was the observation that words should accompany the input vectors included in the training set, thus extending the ANN input space. As a consequence, it was necessary to consider a modified sigmoid activation function for neurons in the first hidden layer (in agreement with a specific MLP apartment structure), as well as a modified version of the Backpropagation algorithm, which allows the use of unspecified (null) desired output components. Following the belief that basic concepts should be tested on simple examples, the previously mentioned mechanism was applied to both the XOR problem and a didactic color case study. In this context, we noticed the interesting fact that the ANN was capable of categorizing all desired input vectors in the absence of their corresponding words, even though the training set included only word-accompanied inputs, in both positive and negative examples. Further analysis, along with applying this approach to more complex scenarios, is currently in progress, as we consider that the proposed language-driven algorithm might contribute to a better understanding of learning in humans, while also opening the possibility of creating a specific category of artificial neural networks with abstraction capabilities.
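One way to picture the mechanism described above is a standard MLP whose input vector is extended with "word" components, trained by a Backpropagation variant in which unspecified (null) desired output components contribute zero error. The sketch below is a minimal illustration of that idea, not the paper's implementation: the network shape, learning rate, use of NaN to mark null targets, and the one-hot word encoding are all assumptions, and it uses a plain sigmoid rather than the paper's modified activation for the first hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal 1-hidden-layer perceptron (shapes and init are assumptions)."""

    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0.0, 1.0, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 1.0, (n_out, n_hid))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)
        self.y = sigmoid(self.W2 @ self.h + self.b2)
        return self.y

    def backward(self, x, target, lr=0.5):
        # Key point of the sketch: output components whose desired value is
        # unspecified (encoded here as NaN) produce a zero error signal, so
        # Backpropagation simply ignores them.
        err = np.where(np.isnan(target), 0.0, target - self.y)
        delta2 = err * self.y * (1.0 - self.y)
        delta1 = (self.W2.T @ delta2) * self.h * (1.0 - self.h)
        self.W2 += lr * np.outer(delta2, self.h)
        self.b2 += lr * delta2
        self.W1 += lr * np.outer(delta1, x)
        self.b1 += lr * delta1

# XOR inputs extended with a hypothetical one-hot "word" component
# ([1, 0] accompanying negative examples, [0, 1] accompanying positive ones).
X = np.array([[0, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
T = np.array([[0.0], [0.0], [1.0], [1.0]])

net = MLP(n_in=4, n_hid=4, n_out=1)
for _ in range(5000):
    for x, t in zip(X, T):
        net.forward(x)
        net.backward(x, t)
```

After training on the word-accompanied examples, the network can also be probed with the word components zeroed out, which is the spirit of the categorization-without-words observation reported in the abstract.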

