
Implementation of a language driven Backpropagation algorithm

by I. V. Grossu, et al.

Inspired by the importance of both communication and feedback on errors in human learning, our main goal was to implement a similar mechanism in the supervised learning of artificial neural networks. The starting point of our study was the observation that words should accompany the input vectors included in the training set, thus extending the ANN input space. This in turn required a modified sigmoid activation function for neurons in the first hidden layer (in agreement with a specific MLP compartment structure), as well as a modified version of the Backpropagation algorithm that allows the use of unspecified (null) desired output components. Following the belief that basic concepts should be tested on simple examples, the previously mentioned mechanism was applied to both the XOR problem and a didactic color case study. In this context, we noticed the interesting fact that the ANN was able to categorize all desired input vectors in the absence of their corresponding words, even though the training set included only word-accompanied inputs, in both positive and negative examples. Further analysis, along with the application of this approach to more complex scenarios, is currently in progress, as we believe the proposed language-driven algorithm might contribute to a better understanding of learning in humans, while also opening the possibility of creating a specific category of artificial neural networks with abstraction capabilities.
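One ingredient of the abstract that lends itself to a concrete illustration is the modified Backpropagation that tolerates unspecified (null) desired output components. The following is a hedged sketch, not the authors' implementation: a small NumPy MLP trained on the XOR problem, where a NaN in the target vector marks an unspecified component whose error signal is simply zeroed out before backpropagation. The network size, learning rate, and the use of NaN as the "null" marker are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only: masked-target backpropagation on XOR.
# A NaN target component means "no desired output specified"; its
# error contribution is zeroed so it produces no weight update.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs; the target for (1, 1) is left unspecified (NaN).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [np.nan]])

# One hidden layer of 4 sigmoid units (size chosen arbitrarily).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Output error, with unspecified components masked to zero.
    err = Y - T
    err[np.isnan(T)] = 0.0
    # Backward pass (standard sigmoid deltas).
    dY = err * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    # Gradient-descent updates.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

# The three specified patterns are learned; the masked pattern's
# output is unconstrained by training.
H = sigmoid(X @ W1 + b1)
Y = sigmoid(H @ W2 + b2)
print(Y.round(2))
```

The key design point is that the mask is applied to the error term before computing deltas, so an unspecified component behaves exactly as if that training example carried no constraint on that output.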

