Experimental machine learning quantum homodyne tomography

07/15/2019 ∙ by E. S. Tiunov, et al. ∙ RQC ∙ University of Oxford

Complete characterization of states and processes that occur within quantum devices is crucial for understanding and testing their potential to outperform classical technologies for communications and computing. However, this task becomes unwieldy for large and complex quantum systems. Here we realize and experimentally demonstrate a method for complete characterization of a harmonic oscillator based on an artificial neural network known as the restricted Boltzmann machine. We apply the method to experimental balanced homodyne tomography and show that it enables full estimation of quantum states from a smaller amount of experimental data. Although our experiment is in the optical domain, our method provides a way of exploring quantum resources in a broad class of physical systems, such as superconducting circuits, atomic and molecular ensembles, and optomechanical systems.




I Methods

Pure states.

Here we present the details of training our RBMs. The neural-network parametrization of the wavefunction is defined by Eqs. (4) and (5). We introduce additional notation. First, because there is a one-to-one correspondence between visible-layer configurations and Fock states $|n\rangle$, as discussed in the main text, we will use the symbol $n$ to denote both these objects. Second, we denote the unnormalized Boltzmann probability

\[ p_\lambda(n) = \sum_{\{h\}} \exp\Big( \sum_i a_i n_i + \sum_j b_j h_j + \sum_{ij} W_{ij} n_i h_j \Big) \qquad (8) \]

with $\lambda = \{a, b, W\}$, for the amplitude RBM, and the analogous quantity

\[ q_\mu(n) = \sum_{\{h\}} \exp\Big( \sum_i a'_i n_i + \sum_j b'_j h_j + \sum_{ij} W'_{ij} n_i h_j \Big) \qquad (9) \]

for the phase RBM. We remind the reader that the letters $\lambda$ and $\mu$ denote the respective parameter sets of these RBMs.
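Because the hidden units enter the energy linearly, they can be summed out analytically, which makes the unnormalized Boltzmann probabilities above cheap to evaluate. A minimal sketch in Python (the function name, the binary visible encoding, and the use of NumPy are our illustrative assumptions, not the authors' code):

```python
import numpy as np

def rbm_marginal(n, a, b, W):
    """Unnormalized marginal Boltzmann probability of a binary visible
    configuration n, with all hidden units summed out analytically:
    p(n) = exp(a.n) * prod_j (1 + exp(b_j + (W^T n)_j))."""
    n = np.asarray(n, dtype=float)
    return np.exp(a @ n) * np.prod(1.0 + np.exp(b + W.T @ n))
```

As a sanity check, for zero weights and biases every hidden unit contributes a factor of 2, so a network with M hidden units returns 2^M for any visible configuration.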

By plugging the expression (4) into the log-likelihood function (6) and using the fact that the overlap between the number eigenstate $|n\rangle$ and the quadrature eigenstate $|\theta, x\rangle$ corresponding to the phase $\theta$ is just the $n$-th harmonic oscillator eigenfunction (Hermite-Gaussian function) with a phase factor,

\[ \langle n | \theta, x \rangle = e^{i n \theta} \psi_n(x), \qquad \psi_n(x) = \frac{H_n(x)}{\sqrt{2^n n! \sqrt{\pi}}}\, e^{-x^2/2}, \qquad (10) \]

we obtain the following expression:

\[ \mathcal{L} = \sum_{j=1}^{N} \log \Big| \sum_n \sqrt{p_\lambda(n)}\, e^{\frac{i}{2} \log q_\mu(n)}\, e^{i n \theta_j}\, \psi_n(x_j) \Big|^2 - N \log Z_\lambda, \qquad (11) \]

where $Z_\lambda = \sum_n p_\lambda(n)$, the summation with index $j$ is over all quadrature measurements and $N$ is the number of measurements.
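The Hermite-Gaussian overlaps entering the log-likelihood can be generated stably with the standard three-term recurrence for harmonic-oscillator eigenfunctions (in the convention where the vacuum quadrature variance is 1/2). A sketch with illustrative function names of our own, not the authors' implementation:

```python
import numpy as np

def fock_wavefunctions(x, n_max):
    """Harmonic-oscillator eigenfunctions psi_n(x) for n = 0..n_max-1,
    evaluated on a grid x via the stable three-term recurrence."""
    psi = np.zeros((n_max, len(x)))
    psi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n_max > 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(2, n_max):
        psi[n] = np.sqrt(2.0 / n) * x * psi[n - 1] - np.sqrt((n - 1) / n) * psi[n - 2]
    return psi

def log_likelihood(psi_fock, thetas, xs):
    """log L = sum_j log |<theta_j, x_j | psi>|^2 for a normalized state
    vector psi_fock in the truncated Fock basis."""
    n = np.arange(len(psi_fock))
    psis = fock_wavefunctions(np.asarray(xs), len(psi_fock))  # shape (n_max, N)
    phases = np.exp(1j * n[:, None] * np.asarray(thetas)[None, :])
    amps = np.sum(psi_fock[:, None] * phases * psis, axis=0)
    return np.sum(np.log(np.abs(amps) ** 2))
```

For the vacuum state, a single measurement at x = 0 contributes log |psi_0(0)|^2 = -(1/2) log pi, which gives a quick correctness check.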

For the network training, we evaluate the gradients of the above log-likelihood over the neural-net parameters $\lambda$ and $\mu$ as follows:

\[ \frac{\partial \mathcal{L}}{\partial \lambda} = \sum_{j=1}^{N} \mathrm{Re}\, \frac{\sum_n \psi_{\lambda\mu}(n)\, D_\lambda(n)\, e^{i n \theta_j} \psi_n(x_j)}{A_j} - N \langle D_\lambda \rangle_\lambda, \qquad \frac{\partial \mathcal{L}}{\partial \mu} = -\sum_{j=1}^{N} \mathrm{Im}\, \frac{\sum_n \psi_{\lambda\mu}(n)\, D_\mu(n)\, e^{i n \theta_j} \psi_n(x_j)}{A_j}, \qquad (12) \]

where we defined

\[ D_\lambda(n) = \frac{\partial \log p_\lambda(n)}{\partial \lambda}, \qquad D_\mu(n) = \frac{\partial \log q_\mu(n)}{\partial \mu}, \qquad (13) \]

with $A_j = \langle \theta_j, x_j | \psi_{\lambda\mu} \rangle$ and $\langle \cdot \rangle_\lambda$ denoting the average over the normalized Boltzmann distribution $p_\lambda(n)/Z_\lambda$. Ascending by these gradients, we can maximize the log-likelihood (11). Both RBMs are trained simultaneously.

We note that the above gradients contain exhaustive summation over possible configurations of the visible and hidden layers of both RBMs. In the present work, we are able to compute this sum directly since the number of RBM units is relatively small. However, in the case of high Hilbert space dimension, Boltzmann sampling using an annealing device or algorithm will be required.
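For small networks the exhaustive sum is straightforward: the hidden units are summed out analytically and the visible configurations are enumerated directly. A sketch of computing the partition function this way (illustrative names; a binary visible layer is our assumption):

```python
import itertools
import numpy as np

def partition_function(a, b, W):
    """Exact partition function by brute-force enumeration of all 2^K
    visible configurations; hidden units are summed out analytically.
    Feasible only while K stays small, as noted in the text."""
    K = len(a)
    Z = 0.0
    for bits in itertools.product((0.0, 1.0), repeat=K):
        n = np.array(bits)
        Z += np.exp(a @ n) * np.prod(1.0 + np.exp(b + W.T @ n))
    return Z
```

With all parameters zero, each of the K visible and M hidden units contributes a factor of 2, so Z = 2^(K+M).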

Mixed states.

As discussed in the main text, see Eq. (7), we treat the mixed state to be reconstructed as a partial state of a pure state in a tensor-product Hilbert space with the dimension $d^2$, where $d$ is the dimension of the truncated Fock space. We decompose this state in the Fock basis and apply the same parametrization as in the previous subsection:

\[ |\Psi_{\lambda\mu}\rangle = \sum_{n,m} \psi_{\lambda\mu}(n, m)\, |n\rangle \otimes |m\rangle, \qquad (14) \]

where the visible layer of each RBM now encodes the pair $(n, m)$. The partial trace of this state over the environment is as follows:

\[ \rho_{\lambda\mu} = \mathrm{Tr}_{\mathrm{env}}\, |\Psi_{\lambda\mu}\rangle\langle\Psi_{\lambda\mu}| = \sum_{n, n'} \Big( \sum_m \psi_{\lambda\mu}(n, m)\, \psi^*_{\lambda\mu}(n', m) \Big)\, |n\rangle\langle n'|. \qquad (15) \]
The log-likelihood (6) is then given by

\[ \mathrm{pr}(x_j, \theta_j) = \langle \theta_j, x_j |\, \rho_{\lambda\mu}\, | \theta_j, x_j \rangle, \qquad (16) \]

\[ \mathcal{L} = \sum_{j=1}^{N} \log \sum_{n, n', m} \psi_{\lambda\mu}(n, m)\, \psi^*_{\lambda\mu}(n', m)\, e^{i(n - n')\theta_j}\, \psi_n(x_j)\, \psi_{n'}(x_j), \qquad (17) \]

where the summation indices $n$, $n'$ and $m$ run over the truncated Fock basis, $j$ over all quadrature measurements, and

\[ \psi_{\lambda\mu}(n, m) = \sqrt{\frac{p_\lambda(n, m)}{Z_\lambda}}\, e^{\frac{i}{2} \log q_\mu(n, m)}, \qquad Z_\lambda = \sum_{n, m} p_\lambda(n, m). \]
We note that the expression (17) is very similar to the pure-state case (11), but requires two additional summations over the truncated Fock basis. The gradients of the log-likelihood (17) over $\lambda$ and $\mu$ read similarly to those for the pure state (12), but with the quantities (13) redefined as follows:

\[ D_\lambda(n, m) = \frac{\partial \log p_\lambda(n, m)}{\partial \lambda}, \qquad D_\mu(n, m) = \frac{\partial \log q_\mu(n, m)}{\partial \mu}, \qquad (18) \]

with the Boltzmann average $\langle \cdot \rangle_\lambda$ now taken over the distribution $p_\lambda(n, m)/Z_\lambda$.
The remainder of the treatment replicates that for pure states.
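In the Fock basis, the partial trace over the environment reduces to a single matrix product on the purification amplitudes. A sketch (illustrative function name, NumPy assumed):

```python
import numpy as np

def density_from_purification(psi_nm):
    """Partial trace of |Psi><Psi| over the second (environment) tensor
    factor: rho[n, n'] = sum_m psi[n, m] * conj(psi[n', m])."""
    return psi_nm @ psi_nm.conj().T
```

For instance, a maximally entangled purification of two modes yields the maximally mixed state, with the resulting matrix automatically Hermitian and of unit trace for a normalized purification.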

Efficiency correction.

To correct for an imperfect homodyne detector efficiency $\eta$ in our neural-net approach, we model the detector as a perfect one preceded by a beam splitter of transmission $\eta$ Lvovsky2004, which changes the quantum state by means of the generalized Bernoulli transformation to a new state $\rho^{(\eta)}$ according to

\[ \rho^{(\eta)}_{n n'} = \sum_{k} B_{n+k,\, n}\, B_{n'+k,\, n'}\, \rho_{n+k,\, n'+k}, \qquad (19) \]

where $B_{n+k,\, n} = \sqrt{\binom{n+k}{k}\, \eta^{n}\, (1-\eta)^{k}}$. Now we can repeat the above procedure for the mixed-state (purification) Ansatz, with the only difference that we use $\rho^{(\eta)}$ instead of $\rho$ to calculate the log-likelihood (6).
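On a truncated density matrix, the generalized Bernoulli transformation admits a direct triple-loop implementation, exact whenever the state's support fits inside the truncation. A sketch (illustrative names, not the authors' code):

```python
import numpy as np
from math import comb, sqrt

def bernoulli_loss(rho, eta):
    """Density matrix after a beam splitter of transmission eta, with
    the reflected (lost) mode traced out:
    rho'[m, n] = sum_k sqrt(C(m+k,k) C(n+k,k)) eta^((m+n)/2) (1-eta)^k
                 * rho[m+k, n+k]."""
    d = rho.shape[0]
    out = np.zeros_like(rho)
    for m in range(d):
        for n in range(d):
            for k in range(d - max(m, n)):
                out[m, n] += (sqrt(comb(m + k, k) * comb(n + k, k))
                              * eta ** ((m + n) / 2) * (1 - eta) ** k
                              * rho[m + k, n + k])
    return out
```

On the diagonal this reduces to the familiar binomial photon-loss law: a single photon sent through a detector of efficiency 0.75 is registered with probability 0.75 and missed with probability 0.25, and the trace is preserved.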