Training Optimization for Gate-Model Quantum Neural Networks

09/03/2019
by Laszlo Gyongyosi, et al.

Gate-based quantum computations represent an essential step toward realizing near-term quantum computer architectures. A gate-model quantum neural network (QNN) is a QNN implemented on a gate-model quantum computer, realized via a set of unitaries with associated gate parameters. Here, we define a training optimization procedure for gate-model QNNs. By deriving the environmental attributes of the gate-model quantum network, we establish constraint-based learning models. We show that the optimal learning procedures differ depending on whether side information is available in different directions, and whether side information is accessible about the previous running sequences of the gate-model QNN. The results are particularly convenient for gate-model quantum computer implementations.
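To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's actual procedure) of what training a gate-model QNN means: a unitary parameterized by a gate angle is applied to an input state, and the gate parameter is optimized by gradient descent on a measurement-based loss. The circuit here is a single-qubit RY rotation, the observable is Pauli-Z, and the gradient is obtained with the standard parameter-shift rule; all function names are illustrative.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis: the parameterized gate U(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """<psi|Z|psi> for |psi> = RY(theta)|0>; equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return float(psi @ z @ psi)

def train(target, theta=0.1, lr=0.5, steps=200):
    """Minimize (<Z> - target)^2 over the gate parameter theta."""
    for _ in range(steps):
        # Parameter-shift rule for rotation gates:
        # d<Z>/dtheta = (<Z>(theta + pi/2) - <Z>(theta - pi/2)) / 2
        grad_e = 0.5 * (expectation_z(theta + np.pi / 2)
                        - expectation_z(theta - np.pi / 2))
        loss_grad = 2.0 * (expectation_z(theta) - target) * grad_e
        theta -= lr * loss_grad
    return theta

# Drive <Z> to 0, i.e. steer theta toward pi/2.
theta_opt = train(target=0.0)
```

On hardware, `expectation_z` would be estimated from repeated measurements rather than computed exactly; the parameter-shift gradient is convenient precisely because it needs only two such expectation evaluations per parameter.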

