On randomization of neural networks as a form of post-learning strategy

11/26/2015
by K. G. Kapanova, et al.

Artificial neural networks are today applied in various fields, such as engineering, data analysis, and robotics. While they represent a successful tool for a variety of relevant applications, they are, mathematically speaking, still far from conclusive. In particular, the training process may be unable to find the best possible configuration of weights (the local-minimum problem). In this paper, we focus on this issue and suggest a simple but effective post-learning strategy that allows the search for an improved set of weights at a relatively small extra computational cost. To this end, we introduce a novel technique based on an analogy with quantum effects occurring in nature as a way to mitigate (and sometimes overcome) this problem. Several numerical experiments are presented to validate the approach.
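
The abstract does not spell out the paper's exact perturbation mechanism, so the following is only a minimal sketch of a generic post-learning random search over trained weights, assuming a flat weight vector, a user-supplied loss function, and Gaussian perturbations. The function name post_learning_search and the parameters n_trials and sigma are hypothetical and not taken from the paper.

```python
import numpy as np

def post_learning_search(weights, loss_fn, n_trials=200, sigma=0.05, rng=None):
    """Hypothetical post-learning step: randomly perturb trained weights and
    keep a perturbation only when it lowers the loss.

    weights  : 1-D numpy array of trained network weights
    loss_fn  : callable mapping a weight vector to a scalar loss
    n_trials : number of random perturbations to try
    sigma    : standard deviation of the Gaussian perturbation
    """
    rng = np.random.default_rng() if rng is None else rng
    best_w = weights.copy()
    best_loss = loss_fn(best_w)
    for _ in range(n_trials):
        # Propose a small random move away from the current best weights.
        candidate = best_w + rng.normal(0.0, sigma, size=best_w.shape)
        candidate_loss = loss_fn(candidate)
        if candidate_loss < best_loss:  # accept only improvements
            best_w, best_loss = candidate, candidate_loss
    return best_w, best_loss


if __name__ == "__main__":
    # Toy quadratic "loss" standing in for a trained network that stopped
    # slightly away from its optimum; the true minimum is at w = [1, -2, 3].
    target = np.array([1.0, -2.0, 3.0])
    loss = lambda w: float(np.sum((w - target) ** 2))

    trained = target + 0.5  # pretend training converged a bit off target
    w_new, l_new = post_learning_search(trained, loss, n_trials=500, sigma=0.1)
    print(f"loss before: {loss(trained):.4f}  after: {l_new:.4f}")
```

In a realistic setting, loss_fn would evaluate the trained network on a validation set, and sigma would be tuned to the scale of the learned weights; the paper's quantum-analogy strategy may differ in how perturbations are generated and accepted.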

Related research

Neural Networks Architecture Evaluation in a Quantum Computer (11/13/2017)
In this work, we propose a quantum algorithm to evaluate neural networks...

A quantum neural network with efficient optimization and interpretability (11/10/2022)
As the quantum counterparts to the classical artificial neural networks ...

Autonomous Quantum Perceptron Neural Network (12/15/2013)
Recently, with the rapid development of technology, there are a lot of a...

Secure synchronization of artificial neural networks used to correct errors in quantum cryptography (01/26/2023)
Quantum cryptography can provide a very high level of data security. How...

Persistence-based operators in machine learning (12/28/2022)
Artificial neural networks can learn complex, salient data features to a...

Motif-aware temporal GCN for fraud detection in signed cryptocurrency trust networks (11/22/2022)
Graph convolutional networks (GCNs) is a class of artificial neural netw...
