Quantum activation functions for quantum neural networks

The field of artificial neural networks is expected to benefit strongly from recent developments in quantum computing. In particular, quantum machine learning, a class of quantum algorithms which exploit qubits to create trainable neural networks, will provide more power to solve problems such as pattern recognition, clustering and machine learning in general. The building block of feed-forward neural networks consists of one layer of neurons connected to an output neuron that is activated according to an arbitrary activation function. The corresponding learning algorithm goes under the name of the Rosenblatt perceptron. Quantum perceptrons with specific activation functions are known, but a general method to realize arbitrary activation functions on a quantum computer is still lacking. Here we fill this gap with a quantum algorithm which is capable of approximating any analytic activation function to any given order of its power series. Unlike previous proposals providing irreversible measurement-based and simplified activation functions, here we show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information. Thanks to the generality of this construction, any feed-forward neural network may acquire the universal approximation properties according to Hornik's theorem. Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.








I Introduction

A quantum neural network encodes a neural network in the qubits of a quantum processor. In the conventional approach, biologically-inspired artificial neurons are implemented in software as mathematical rate neurons. For instance, the Rosenblatt perceptron (1957) rosenblatt1957perceptron is the simplest artificial neural network, consisting of an input layer of N neurons and one output neuron behaving as a step activation function. Multilayer perceptrons suter1990multilayer are universal function approximators, provided they are based on squashing functions. The latter are monotonic functions which compress real values into a normalized interval, acting as activation functions hornik1991approximation.
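To make the role of a squashing activation function concrete, a minimal classical one-layer perceptron can be sketched in a few lines. This is our own illustration; the numerical values are arbitrary and not taken from the paper.

```python
import numpy as np

def sigmoid(t):
    """Squashing function: monotonic, maps the reals into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-t))

def perceptron(x, w, b, activation=sigmoid):
    """One-layer perceptron: activation of the inner product plus bias."""
    return activation(np.dot(w, x) + b)

# Illustrative values for a perceptron with N = 3 input neurons.
x = np.array([0.2, -0.5, 0.7])
w = np.array([0.4, 0.1, -0.3])
b = 0.05
y = perceptron(x, w, b)  # sigmoid(w.x + b)
```

Any monotonic squashing function (hyperbolic tangent, logistic sigmoid, etc.) can be substituted for `sigmoid` without changing the structure.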

In principle a quantum computer is suitable for performing the tensor calculations typical of neural network algorithms preskill2018quantum; aaronson2015read. Indeed, the qubits can be arranged in circuits acting as layers of the quantum analogue of a neural network. If equipped with common activation functions such as the sigmoid and the hyperbolic tangent, they should be able to run deep learning algorithms such as those used for classification, clustering and decision making. As qubits are destroyed at the measurement event, in the sense that they are turned into classical bits, implementing an activation function in a quantum neural network poses challenges requiring a subtle approach. Indeed the natural aim is to preserve as much as possible of the information encoded in the qubits while taking advantage of each computation at the same time. The goal therefore consists in delaying the measurement until the end of the computational flow, after having processed the information through neurons with a suitable activation function.

Within the field of quantum machine learning (QML) prati2017quantum; biamonte2017quantum, if one neglects the implementation of quantum neural networks on adiabatic quantum computers rocutto2020quantum, there are essentially two kinds of proposals of quantum neural networks on a gate-model quantum computer. The first consists of defining a quantum neural network as a variational quantum circuit composed of parameterized gates, where non-linearity is introduced by measurement operations farhi2018classification; beer2020training; benedetti2019parameterized. Such quantum neural networks are empirically evaluated heuristic models of QML, not grounded on mathematical theorems. Furthermore, models of this type, based on variational quantum algorithms, suffer from an exponentially vanishing gradient, the so-called barren plateau problem mcclean2018barren, which requires mitigation techniques grant2019initialization; cerezo2020cost. Quite differently, the second approach seeks to implement a truly quantum algorithm for neural network computations and to actually fulfill the approximation requirements of Hornik's theorem hornik1989multilayer; hornik1991approximation, perhaps at the cost of a larger circuit depth. Such an approach pertains to semi-classical daskin2018simple; torrontegui2019unitary or fully quantum cao2017quantum; hu2018towards models whose non-linear activation function is again computed via measurement operations.

Furthermore, quantum neural network proposals can be classified with respect to the encoding method of the input data. Since a qubit consists of a superposition of the states |0⟩ and |1⟩, a few encoding options can be distinguished by the relation between the number of qubits and the maximum encoding capability. The first is the 1-to-1 option, by which each and every input neuron of the network corresponds to one qubit cao2017quantum; hu2018towards; da2016weightless; matsui2009qubit; da2016quantum. The most straightforward implementation consists in storing the information as a string of bits assigned to classical base states of the quantum state space. A similar 1-to-1 method consists in storing a superposition of binary data as a series of bit strings in a multi-qubit state. Such quantum neural networks are based on the concept of the quantum associative memory ventura2000quantum; da2017neural. Another 1-to-1 option is given by the quron (quantum neuron) schuld2014quest. A quron is a qubit whose |0⟩ and |1⟩ states stand for the resting and active neural firing states, respectively schuld2014quest.

Alternatively, another encoding option consists in storing the information as the coefficients of a superposition of quantum states shao2018quantum; tacchino2019artificial; kamruzzaman2019quantum; tacchino2020quantum; maronese2021continuous. The encoding efficiency becomes exponential, as an n-qubit state is an element of a 2^n-dimensional vector space. To exemplify, the treatment by a quantum neural network of a real image-classification problem of a few megabits makes the 1-to-1 option currently not viable pritt2017satellite. Instead, the 2^n-to-n choice allows one to encode a megabit image in a quantum state by using only about 20 qubits, since 2^20 > 10^6.

However, encoding the inputs as coefficients of a superposition of quantum states requires an algorithm for generic quantum state preparation shende2006synthesis; kuzmin2020variational; lazzarin2021multi or, alternatively, directly feeding quantum data to the network romero2017quantum. For instance, quantum encoding methods such as the Flexible Representation of Quantum Images (FRQI) le2011flexible have been proposed. Generally, preparing an arbitrary n-qubit quantum state requires a number of quantum gates that scales exponentially in n. Nonetheless, in the long run, a 2^n-to-n encoding guarantees a better applicability to real problems than the 1-to-1 options. Moreover, such an encoding method satisfies the requirements of Hornik's theorem in order to guarantee the universal function approximation capability hornik1989multilayer. Despite some relatively heavy constraints, such as the digital encoding and the fact that the activation function involves irreversible measurements, examples in this direction have been reported tacchino2019artificial; tacchino2020quantum; maronese2021continuous. Instead, differently from both the above proposals and from quantum-annealing-based algorithms applied to neural networks rocutto2020quantum, we develop a fully reversible algorithm.
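As a sketch of the amplitude-encoding idea discussed above, a classical preprocessing step pads and normalizes the input vector so that it can serve as the amplitude vector of an n-qubit state. The helper name and padding convention are our own, not from the paper.

```python
import numpy as np

def amplitude_encode(data, n_qubits):
    """Pad a real vector to length 2**n_qubits and normalize it, so it
    can serve as the amplitude vector of an n-qubit state."""
    dim = 2 ** n_qubits
    if len(data) > dim:
        raise ValueError("need more qubits for this input size")
    state = np.zeros(dim)
    state[: len(data)] = data
    norm = np.linalg.norm(state)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return state / norm

# Eight input values fit into n = 3 qubits (2**3 = 8 amplitudes).
x = np.array([0.2, -0.5, 0.7, 0.1, 0.0, 0.3, -0.2, 0.4])
psi = amplitude_encode(x, 3)
```

The exponential gain is visible here: doubling the input length costs only one extra qubit.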

In a novel alternative approach, we define here a 2^n-to-n encoding model that involves inputs, weights and bias in the interval [-1, 1]. The model exploits the architecture of gate-model quantum computers to implement any analytic activation function at arbitrary approximation using only reversible operations. The algorithm consists in iterating the computation of all the powers of the inner product up to the d-th order, where d is a given overhead of qubits with respect to the n used for the encoding. Consequently, the approximation of the most common activation functions can be computed by rebuilding their Taylor series truncated at the d-th order.

The algorithm is implemented in the Qiskit environment wille2019ibm to build a one-layer perceptron with N input neurons and different activation functions generated by power expansion, such as the hyperbolic tangent, sigmoid, sine and swish functions, truncated up to the 10-th order. Already at the third order, which corresponds to the least number of qubits required for a non-linear function, a satisfactory approximation of the activation function is achieved.
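The effect of truncating a Taylor series at low order can be checked classically. The sketch below, our own illustration rather than the paper's circuit, compares the order-3 and order-7 truncations of the hyperbolic tangent:

```python
import numpy as np

# Taylor coefficients of tanh around 0 (odd orders only), hard-coded:
# tanh(z) = z - z**3/3 + 2*z**5/15 - 17*z**7/315 + ...
TANH_COEFFS = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0, 7: -17.0 / 315.0}

def truncated_tanh(z, order):
    """Taylor polynomial of tanh truncated at the given order."""
    return sum(c * z**k for k, c in TANH_COEFFS.items() if k <= order)

z = 0.3
approx3 = truncated_tanh(z, 3)   # lowest order giving a non-linear function
approx7 = truncated_tanh(z, 7)
exact = np.tanh(z)
```

For arguments well inside the convergence region the order-3 truncation is already accurate to a few parts in ten thousand, consistent with the observation above that the third order yields a satisfactory approximation.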

This work is organized as follows: in Section 2, the definitions and the general strategy are summarized; in Section 3 the quantum circuits for the computation of the power terms and then of the polynomial series are obtained. Next, in Section 4 the algorithm for approximating analytic activation functions is outlined, while in Section 5 the computation of the amplitude is shown. Section 6 concerns the estimation of the perceptron output. The final Section is devoted to the conclusions.

II Definitions and general strategy

In order to define our quantum version of the perceptron with continuous parameters and an arbitrary analytic activation function, let us consider a one-layer perceptron. The latter represents the fundamental unit of a feed-forward neural network. A one-layer perceptron is composed of N input neurons and one output neuron equipped with an activation function f defined on a compact set. The output neuron computes the inner product between the vector x of the input values and the vector w of the weights, plus a bias value b. Such a scalar value is taken as the argument of the activation function. The real output value of the perceptron is defined as y = f(w·x + b), as in Figure 1a.

Here we develop a quantum circuit that computes an approximation of f. The algorithm starts by calculating the inner product w·x plus the bias value b. Next, it evaluates the output by calculating an approximation of the activation function f. On a quantum computer, a measurement operation apparently represents the most straightforward implementation of a non-linear activation function, as done for instance in Ref. tacchino2019artificial to solve a binary classification problem on a quantum perceptron. Such an approach, however, cannot be generalized to build a multi-layered qubit-based feed-forward neural network.

Figure 1: Graphical representation of a one-layer perceptron and its qubit-based version. A one-layer perceptron architecture (a) is composed of an input layer of N neurons connected to a single output neuron. It is characterized by an input vector x, a weight vector w and a bias b. In the classical version, the activation function takes as argument the value w·x + b and returns the perceptron output y. The quantum version (b) follows the same architecture, but the calculation consists of a sequence of transformations of an n-qubit quantum state initialized with the coefficients of the input vector as probability amplitudes. The quantum states at each step are represented by a qsphere (a graphical representation of a multi-qubit quantum state). In a qsphere each point is a different coefficient of the superposition quantum state. Generally the coefficients are complex, and in a qsphere the moduli of the coefficients are proportional to the radii of the points while the phases determine the colour. The blue box shows the starting quantum state with the inputs stored as probability amplitudes. The green box shows the quantum state with the weights and the bias. In the first red box, at each step one qubit is added in order to store the power terms up to the d-th order. In the last red box the output of the perceptron is given by a series of rotations which compose a polynomial.

First of all, measurement operations break the quantum algorithm and impose the initialization of the qubits layer by layer, thus preventing a single quantum run of a multi-layer neural network. Secondly, activation functions other than the one implied by the measurement operations are more suitable to solve generic machine learning problems.

We avoid both of these shortcomings with a new quantum algorithm, which is based on the two theorems detailed below. The quantum algorithm is composed of two steps (Figure 1b). First, the powers of the scalar value obtained from the inner product plus bias, suitably normalized, are stored as amplitudes of a multi-qubit quantum state. Next, the chosen activation function is approximated by building its polynomial series expansion through rotations of the quantum state. The rotation angles are determined by the coefficients of the polynomial series of the chosen activation function, and they can be explicitly computed by our quantum algorithm. Let us first summarize the notation used throughout the text. Let H stand for the 2-dimensional Hilbert space associated to one qubit. Then the 2^n-dimensional Hilbert space associated to a register of n qubits is written H^(⊗n). If we denote by {|0⟩, |1⟩} the computational basis in H, then the computational basis in H^(⊗n) reads {|b_1⟩ ⊗ … ⊗ |b_n⟩} with b_i ∈ {0, 1}. An element of this computational basis can be alternatively written as |k⟩, where k is the decimal integer number that corresponds to the bit string b_1…b_n. In particular, if k = 0, then |0⟩ = |0…0⟩. In this notation, the number of qubits of a register is indicated with a lowercase letter, such as n and d, while the dimension of the associated Hilbert space is indicated by the corresponding uppercase letter.

The expression U_1 ⊗ … ⊗ U_n represents a separable unitary transformation constructed with one-qubit transformations acting on each qubit of the register. A non-separable unitary multi-qubit transformation is usually written with an explicit label and, in some cases, simply as U. Two registers Q and A, with n and d qubits respectively, can be combined in a single register supporting the Hilbert space H^(⊗(n+d)) with computational basis {|k⟩ ⊗ |j⟩}. For brevity, we will use a compact notation for operators acting on only one of the two registers. In particular, we write |ψ⟩⟨ψ| for the 1-dimensional projection onto the state |ψ⟩ of the register.

Particular cases of unitary operators implementable on a circuital-model quantum computer are the controlled gates. Let C-U represent a controlled-U transformation: the operator U is applied on one qubit (called the target qubit) if another qubit (called the control qubit) is in the state |1⟩. Analogously, an anti-controlled transformation applies the gate U on the target qubit if the control qubit is in the state |0⟩. In the more general case, a multi-controlled operator applies U on the target qubit conditionally on the joint state of a set of control qubits.
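The controlled-gate notation can be made concrete with explicit matrices. In this minimal numpy sketch (our own illustration; the convention of placing the control on the first tensor factor is an assumption), a controlled-U is the projector sum |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ U:

```python
import numpy as np

I2 = np.eye(2)
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0| projector
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1><1| projector

def controlled(U):
    """Two-qubit controlled-U: U acts on the target only when the
    control qubit (first tensor factor) is in |1>."""
    return np.kron(P0, I2) + np.kron(P1, np.asarray(U))

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # quantum NOT gate
CNOT = controlled(X)                     # the familiar controlled-NOT
```

Swapping P0 and P1 in `controlled` yields the anti-controlled version, where U fires on control state |0⟩.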

In the following, two qubit registers Q and A, of n and d qubits respectively, are assumed to be assigned.

III Computation of the polynomial series

As stated above, our aim is to build an (n+d)-qubit quantum state containing the Taylor expansion of the activation function to order d, up to a normalization factor. The number of required qubits n, in addition to d, is determined by the dimension of the input vector. We first need to encode the powers z^j (j = 1, …, d) in the qubits. The following Lemma provides the starting point:

Lemma 1

Given two vectors x, w ∈ [-1, 1]^N and a number b ∈ [-1, 1], and given a register of n qubits such that 2^n ≥ 2N + 2, then there exists a quantum circuit realizing a unitary transformation U_z such that

⟨0|^(⊗n) U_z |0⟩^(⊗n) = z,    (1)

where z = (w·x + b)/(N + 1) and |z| ≤ 1.

In Lemma 1 an n-qubit unitary operator U_z is defined by the requirement that Eq. 1 holds, with x, w ∈ [-1, 1]^N and b ∈ [-1, 1]. The existence of infinitely many such operators is trivially obvious from the purely mathematical point of view. The problem is to provide an explicit realization in terms of realistic quantum gates.
Proof: Let us define two vectors u, v ∈ R^(2^n): u built from the inputs x, and v built from the weights w together with the bias b. In such vectors several coefficients are always null, while two suitable padding constants are defined such that ||u|| = ||v|| = 1.
It then follows that u·v = z. We now define two n-qubit quantum states |ψ⟩ and |φ⟩, whose probability amplitudes are the coefficients of u and v respectively, as follows


Then, by construction

The initialization algorithm mentioned above allows us to consider unitary transformations V_ψ and V_φ, together with X, the quantum NOT gate, such that |ψ⟩ and |φ⟩ are prepared from the reference state X^(⊗n)|0⟩^(⊗n). It follows that


Comparing with Eq. 1, the explicit form of U_z can be read off.

Figure 2: Fundamental gate composition of the transformations preparing |ψ⟩ and |φ⟩. The two transformations encode the coefficients of the two vectors, respectively, in a superposition quantum state. They are composed of the inverses of the operators which introduce the phases of the probability amplitudes. The operators in (c), (d) and (e) are shown as compositions of multi-controlled rotations, each equal to a composition of single-qubit gates and controlled-NOT gates mottonen2005decompositions. The transformation in (a) introduces the phases of the amplitudes, while that in (b) removes them. The details of the arbitrary quantum state preparation circuit are described in Supplementary Note 1.

Since the amplitudes of the states |ψ⟩ and |φ⟩ are real, the phases are either 0 or π, and it is no longer necessary to apply a series of multi-controlled rotations to set them. A single diagonal transformation suffices, with either +1 or -1 on the diagonal. For such a purpose, hypergraph states prove effective rossi2013quantum. Thanks to such states, only a small number of Z, CZ and multi-controlled-Z gates are needed to achieve the transformation. The transformations which introduce the phases of the amplitudes of an n-qubit quantum state are summarized by a single operator in Figure 2. More details about the strategy adopted for the quantum-state initialization are reported in Supplementary Note 1 and Supplementary Note 2.
There are many alternatives to the states |ψ⟩ and |φ⟩ which give the same inner product z. Defining two alternative vectors with the padding constants placed in different null slots, the corresponding state-preparation transformations applied on the reference state return two states whose inner product is again z. The reason for the choice shown above lies in the phases to be added: since the padding constants do not appear in the inner product, their phases are not relevant. Therefore, such states make a multi-controlled Z gate unnecessary for adjusting the phases of the amplitudes associated with the padding constants. Figure 2 shows the composition of the two transformations (Fig. 2a and Fig. 2b) in the case of a one-layer perceptron with N = 3 input neurons. Since the construction requires 2^n ≥ 2N + 2, with N = 3 input neurons the minimum number of required qubits is n = 3, which suffices to store z in a quantum state.
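A classical numpy check of this construction can be sketched as follows. The slot layout and the 1/(N+1) normalization are our reading of the text, so treat them as assumptions; the check verifies that two padded unit vectors reproduce z = (w·x + b)/(N + 1):

```python
import numpy as np

def encoding_states(x, w, b):
    """Sketch of the Lemma-1 construction (slot layout and 1/(N+1)
    normalization are illustrative assumptions): two unit vectors
    whose inner product is z = (w.x + b)/(N + 1)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    N = len(x)
    s = np.sqrt(N + 1.0)
    dim = 1 << int(np.ceil(np.log2(N + 3)))  # round up to a power of two
    psi, phi = np.zeros(dim), np.zeros(dim)
    psi[:N] = x / s
    phi[:N] = w / s
    psi[N] = 1.0 / s       # pairs with the bias slot of phi
    phi[N] = b / s
    # Padding constants in disjoint slots restore unit norm without
    # contributing to the inner product (so their phases are irrelevant).
    psi[N + 1] = np.sqrt(1.0 - psi @ psi)
    phi[N + 2] = np.sqrt(1.0 - phi @ phi)
    return psi, phi

x = [0.2, -0.5, 0.7]
w = [0.4, 0.1, -0.3]
b = 0.05
psi, phi = encoding_states(x, w, b)
z = phi @ psi  # equals (w.x + b)/(N + 1)
```

With N = 3 the vectors need N + 3 = 6 slots, rounded up to 2^3 = 8 amplitudes, i.e. n = 3 qubits, consistent with the count given above.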

The variable z generalizes in two respects the inner product of Ref. tacchino2019artificial, where inputs and weights only take binary values and no bias is involved.

The transformation U_z is a key building block of the quantum perceptron algorithm. Indeed, in our quantum circuit such a transformation is iterated several times over the Hilbert space enlarged by the addition of another register of d qubits. The existence of such a quantum circuit is guaranteed by the following theorem, which provides its explicit construction:

Theorem 1

Let z be the real value in the interval [-1, 1] assumed by (w·x + b)/(N + 1), where x, w ∈ [-1, 1]^N and b ∈ [-1, 1]. Let Q and A be two registers of n and d qubits respectively, with 2^n ≥ 2N + 2. Then there exists a quantum circuit which transforms the two registers from the initial state |0⟩^(⊗(n+d)) to an (n+d)-qubit entangled state of the form




The circuit is expressed by the composition shown in Fig. 3a, where X is the quantum NOT gate and


Proof: The thesis of the theorem is the existence of a transformation which, acting on two registers Q and A of n and d qubits respectively, returns a state as defined in Equation 5. The demonstration consists of the construction of such a circuit. For this purpose, let us define the states , where


where .
From such definition it follows that the states are states of -qubits and is a -qubits state.
The proof of the theorem is therefore reduced to demonstrating the existence of a sequence of transformations , where , such that where is the -th qubit in the register . Therefore is a unitary transformation defined over the space .
Let’s consider the following ansatz for the transformation :


whose graphical representation is given in Figure 3a.

Let’s apply , as defined in the Equation 7, on the state .


The transformation consists in the application of the controlled operator on the qubits of the first register, conditioned on the added qubit, which means that the transformation acts only on part of the superposition; focusing only on that subspace, the result is


Therefore the transformation applied on returns the following state


To demonstrate that the state just obtained is the desired one, its projection over the reference state must return the expected amplitude, as per the definition of the states above. Let us apply this projection to the resulting state in Equation 10. By that definition and by Lemma 1, the result of the projection is the following.


Having demonstrated the first step of the recursion, the proof of the existence of the full transformation proceeds by induction: applying the next transformation to the state just obtained returns the next state in the sequence.

To summarize, the quantum circuit of the quantum perceptron algorithm starts with the unitary operator which initializes the Q and A registers from the reference state. Composed with the iterated transformations of Theorem 1, it forms the subroutine of the quantum circuit which achieves the goal of the first step of the quantum perceptron algorithm, i.e. to encode the powers of z up to z^d in a quantum state, as stated in the following Corollary.

Corollary 1.1

The state stores as probability amplitudes all the powers z^j, for j = 1, …, d, up to a trivial factor. Indeed, Eq. 5 in Theorem 1 implies


The first step of the quantum perceptron algorithm consists in storing all powers of z up to z^d in an (n+d)-qubit state. The proof of Theorem 1 implies that this first step of the algorithm is the quantum circuit shown in Figure 3a, consisting of a subroutine composed of a Pauli X (NOT) gate applied on each qubit in the register and the transformation of Theorem 1. Indeed, from Corollary 1.1, the resulting state stores as probability amplitudes all the powers of z up to z^d, up to a trivial factor. The proof of Corollary 1.1 is straightforward, as follows.
Proof: As shown above, the state can be written as where , therefore, .
Since then


Let’s rewrite in a binary form where from to and otherwise.


The latter holds because, ,


Therefore, .
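The content of Corollary 1.1 can be mimicked by a trivial classical model, our own toy rather than the quantum circuit itself: each application of the Lemma-1 block multiplies the surviving amplitude by z, so after d repetitions the register holds the whole sequence of powers z^j:

```python
def power_amplitudes(z, d):
    """Toy classical model of Corollary 1.1: each iteration of the
    Lemma-1 block multiplies the surviving amplitude by z, so after
    j repetitions the register holds z**j (up to normalization)."""
    amps = [1.0]
    for _ in range(d):
        amps.append(amps[-1] * z)
    return amps  # [1, z, z**2, ..., z**d]

amps = power_amplitudes(0.5, 4)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

The quantum circuit produces these powers coherently in one run, at the cost of one extra qubit per power, whereas this classical loop merely illustrates the recursion.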

The next step of the algorithm consists in transforming the state so as to achieve a special, recursively defined d-degree polynomial in z. Such a step is identified with the second subroutine of the quantum perceptron circuit, see Figure 3a.
By Eq. 5 in Theorem 1 there must exist a unitary operator which acts as the identity on the first register and returns, when applied to the state, a new state which stores the polynomial. In fact, the following holds:

Theorem 2

Let be the family of polynomials in defined by the following recursive law


with and for any .
Then there exists a family of unitary operators such that


These unitary operators are, in turn, defined by the recursive law


with .

The subroutine shown in Figure 3a corresponds to . The proof of Theorem 2 follows. Proof: The proof proceeds in two steps, namely the statement for the first term of the polynomial and an inductive step, as follows. The first step consists of demonstrating that

with the angles as defined in Equation 18. In the second step, instead, the proof proceeds recursively: it aims to prove the statement for a generic index, assuming that it holds for the previous one, where

as defined in the Equation 18. Let us preliminarily consider the states , where . The state is considered in the case with . Next, let us focus on the subspace of defined as . The operator which projects the elements of in the subspace is .
Let’s now move to the first step of the demonstration. The first operation consists of applying to the state . Because of the definition of the state , it follows that where (Corollary 1.1), therefore, the projection over of the state is

The operator rotates the qubit , along the -axis of the Bloch sphere of angle , only if the qubit is in , therefore, such operator acts on the subspace .
The projection on such subspace of the state is

Since is a controlled-NOT gate which acts only if the qubit is in the state then

which completes the first step of this demonstration. Let’s now demonstrate the recursive step. Here, the only assumption is , therefore, differently from the previous step where the projection of on the subspace was known, here the projection of is equal to


where is an unknown real value. Let’s apply on the state so as to obtain the state . From the Equation 19:

To prove the theorem, must be equal to since .
The purpose of the second step of the proof can be achieved by just proving that . That is already proved for because, as said above. Let us prove that for while for the proof will proceed recursively.
The state is a state of the computational basis of . Written in binary form, it results equal to where from to and otherwise.
As said before the operator acts only on the state where , therefore, it does not act on the states . Instead, the operator acts on the state where and it applies a NOT operation on the bit . That means the states become . Therefore, since , thanks to then . In particular, taking the value of such that , that is , and, therefore, . Let’s proceed recursively for assuming that


The state written in a binary form is where from to and for while is otherwise. In particular, for , and therefore which means . The recursive procedure consists of proving that starting from the assumption in the Equation 20. Let’s start from the state and let’s apply on it the transformation . The transformation acts only on the state where , therefore, it does not act on the states . Instead is a bit-flip transformation which acts on the state only if and it applies a NOT operation on the bit , therefore


That means

for and, in particular, for , therefore . Such final result proves that , therefore

The se