A Simple Quantum Neural Net with a Periodic Activation Function

04/20/2018
by   Ammar Daskin, et al.

In this paper, we propose a simple neural net that requires only O(n log_2 k) quantum gates and qubits, where n is the number of input parameters and k is the number of weights applied to those parameters. We describe the network as a quantum circuit and then draw its equivalent classical neural net, which involves O(k^n) nodes in the hidden layer. We then show that the network uses a periodic activation function given by the cosines of linear combinations of the inputs and weights. The steps of gradient descent are described, and the Iris and breast cancer datasets are used for numerical simulations. The numerical results indicate that the network can be used in machine learning problems and that it may provide an exponential speedup over a classical neural net of the same structure.
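The style of network sketched in the abstract can be illustrated classically (this is not the paper's quantum circuit): a single hidden layer whose activation is the cosine of a linear combination of the inputs and weights, trained by plain gradient descent. The layer width, learning rate, bias column, and squared-error loss below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W, v):
    # Periodic activation: each hidden unit computes cos(w . x),
    # followed by a linear readout.
    H = np.cos(X @ W)
    return H, H @ v

def mse(out, y):
    return 0.5 * np.mean((out - y) ** 2)

def train(X, y, hidden=8, lr=0.05, epochs=300):
    # Full-batch gradient descent on a squared-error loss.
    W = rng.normal(size=(X.shape[1], hidden))
    v = rng.normal(size=hidden)
    losses = []
    for _ in range(epochs):
        H, out = forward(X, W, v)
        losses.append(mse(out, y))
        err = (out - y) / len(y)                 # d(loss)/d(out)
        dH = np.outer(err, v) * -np.sin(X @ W)   # chain rule: d cos(z)/dz = -sin(z)
        v -= lr * (H.T @ err)
        W -= lr * (X.T @ dH)
    return W, v, losses

# Toy data: label by the sign of the first feature. A constant-1 column acts
# as a bias term, since cos(w . x) alone is an even function of x.
X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])
y = (X[:, 0] > 0).astype(float)

W, v, losses = train(X, y)
```

Since cos is bounded and periodic, the hidden activations stay in [-1, 1] regardless of the weight magnitudes, which is the classical analogue of the amplitude-encoded values in the quantum circuit.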


