The Holy Grail of Quantum Artificial Intelligence: Major Challenges in Accelerating the Machine Learning Pipeline

04/29/2020
by Thomas Gabor, et al.
Universität München

We discuss the synergetic connection between quantum computing and artificial intelligence. After surveying current approaches to quantum artificial intelligence and relating them to a formal model for machine learning processes, we deduce four major challenges for the future of quantum artificial intelligence: (i) Replace iterative training with faster quantum algorithms, (ii) distill the experience of larger amounts of data into the training process, (iii) allow quantum and classical components to be easily combined and exchanged, and (iv) build tools to thoroughly analyze whether observed benefits really stem from quantum properties of the algorithm.


1. Motivation

Two frontiers of research in computer science meet in the field of quantum artificial intelligence (QAI). As both artificial intelligence (AI) and quantum computing (QC) are very active fields that have seen an overwhelming speed of new developments within the last year alone (arute2019quantum; alphastar2019), there exist vast possibilities for interaction. However, we argue that there are grand challenges which can guide us towards a fruitful convergence of these fields.

From here on, we take a software engineer’s perspective and first analyze how traditional software engineering techniques are coping with the dynamic world of AI algorithms (Section 2). We proceed by pointing out which big challenges AI is going to face and what they might mean for the field (Section 3). Then we provide a quick overview of what QAI is already doing (Section 4) and deduce the challenges that reveal what QAI might mean for the general development of AI (Section 5). We close with a very brief outlook (Section 6).

2. Software Engineering for Artificial Intelligence

Artificial intelligence on its own has been difficult for the art of software engineering to grasp, perhaps because traditional software engineering focuses on preserving initial consistency (i.e., making sure the produced artifacts adhere to prior specifications) (gabor2018adapting), while methods of artificial intelligence usually start from highly chaotic initial configurations (albers1996dynamical) and only gradually introduce rules and structure. On the path towards applying the principles of rigorous engineering to more complex, adaptive, and inherently self-governed systems, various directions of research have been proposed and tried (see (nierstrasz2008change; weyns2017software; bures2017software; reocas2020) and many others). As an example of these approaches, consider Figure 1: classical methods of software engineering are kept as a feedback loop, usually driven forward by human developers, while new ways to evolve the system at run-time are added as another feedback loop, usually driven by self-adaptation and learning (holzl2015ensemble).

Figure 1. The ensemble development life cycle (image taken from (holzl2015ensemble)) provides a framework for the integration of design time and run-time evolution of software systems.
Figure 2. The machine learning pipeline (image taken from (reocas2020)).

However, there is a wide variety of algorithms allowing for self-adaptation and learning, ranging from simple statistical methods like SVMs or clustering to deep neural networks, and the exact way to integrate these algorithms is also subject to a lot of variation. In (reocas2020) we introduced the machine learning pipeline as a process model for many different machine learning methods, i.e., a model of the temporal dependencies between the creation of fundamental artifacts common to most machine learning models. Figure 2 shows a diagram of the different tasks that are relevant to software engineering. We aim to adopt various familiar development phases from classical software engineering (blue boxes at the top level). For software engineering, there is a distinct shift between writing the software for a variety of concrete domains and specializing (a single branch of) the software to a concrete environment (blue boxes at the bottom level). The main engineering tasks are shown in white boxes. A source of great difficulty for engineering lies in the inherent stochasticity of the behavior generated by most machine learning algorithms, requiring different methods to ensure quality of service (QoS) in the main training feedback loop (“select model/policy”, “train”, “assess QoS”) and actual monitoring during operations. The break from most classical software engineering approaches happens here insofar as we explicitly aim for “softer” behavior guidelines on the algorithms, because we want to employ them in domains where we cannot possibly formulate enough “hard” rules. And since we still want specific behavior, the softening does not occur as random noise but is often very specific as well. Of course, this also often makes machine learning methods susceptible to systematic failures. For instance, car-driving software that fails at random on one in every 100 turns is rather easy to handle by adding redundancy and a voting system, allowing us to achieve arbitrarily low overall error rates as long as we can add enough redundancy. Car-driving software that operates well but systematically breaks down every time it comes across a football on the street is harder to handle, because we need very specific tests to even detect the error, and then every redundant system fails at the same time.

This inherent stochasticity of machine learning algorithms is, of course, not quite unlike the inherent stochasticity of quantum computing, a connection already discussed (and elaborated) by Wolfram (wolfram2018making). For us, this suggests that we may use similar methods to integrate (especially highly error-prone, early) quantum algorithms into classical software as we use to integrate highly stochastic machine learning algorithms.

3. The Role of Compute and the Consequences

Aside from similar external properties like stochasticity, quantum algorithms and artificial intelligence may indeed form an even stronger connection. The emerging field of quantum artificial intelligence (QAI) uses quantum algorithms or quantum-inspired algorithms to solve computation tasks related to artificial intelligence. (Note that the combination also works the other way around, using AI methods to better approximate quantum computations, for instance in (fosel2018reinforcement; porotti2019coherent); this, however, is beyond the scope of this paper.) This combination may be highly synergetic for two main reasons:

  • All machine learning methods need some randomness to work, often putting serious effort into generating the necessary entropy. Beyond that, they also often show a high tolerance for noise during their evolution. This makes them inherently suitable for early applications using only NISQ hardware.

  • Progress in artificial intelligence is becoming more and more demanding in computational resources. This trend is outgrowing the continued increase in available computing power by a large margin.

The first reason basically falls in line with our earlier point on stochasticity. While high noise levels (as present in NISQ machines) are unwanted for many algorithms, AI algorithms in particular may actually benefit from (some level of) noise. Of course, current noise levels over a long series of computations are far too high to allow for meaningful results, but the requirements of QAI algorithms might be met earlier than those of, for example, Grover’s search on similarly large input spaces.

The second reason may be a bit more elusive; of course, more computational power is always better. However, pushing the borders of AI has been especially resource-hungry. Amodei and Hernandez (openai2018compute) used the chart shown in Figure 3 to demonstrate that in recent years, the computation power used for AI breakthroughs has had a doubling time of 3.5 months and has thus dramatically outgrown Moore’s Law (18-month doubling time).

Figure 3. Computation power used for recent breakthroughs in AI over time shows a fast exponential growth (image taken from (openai2018compute)).
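
The gap between these two doubling times compounds quickly. As a back-of-the-envelope illustration (a sketch of the arithmetic, not a figure from the paper):

```python
# Growth factor after t months with doubling time d (in months): 2 ** (t / d)
years = 5
months = 12 * years
ai_demand = 2 ** (months / 3.5)    # compute used by AI breakthroughs (openai2018compute)
moore_supply = 2 ** (months / 18)  # classical Moore's-law scaling

print(f"after {years} years: demand x{ai_demand:.3g}, "
      f"supply x{moore_supply:.3g}, gap x{ai_demand / moore_supply:.3g}")
```

After five years, demand has grown by a factor of roughly 10^5 while Moore’s Law delivers only about a factor of 10, leaving a gap of four orders of magnitude that must come from somewhere else.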

For the future of AI, this directly leads to four possible consequences (or any combination thereof):

  1. Progress in AI research slows down.

  2. AI research becomes exponentially more expensive.

  3. New AI algorithms using fewer resources are developed.

  4. New sources of computation power are discovered.

While Consequence 1 is not entirely unlikely, it is probably not a direction worth striving for from a scientific point of view. Similarly, the extent to which exponentially more money for AI research (i.e., Consequence 2) can compensate for the lack of computation power per chip is quite limited, as we are facing an exponential demand to be satisfied.

Consequence 3, however, should definitely be pursued. Making AI algorithms more resource-efficient is imperative for many practical applications and is a rather lively topic of research (nachum2018data; cuccu2019playing). Most interestingly, this again puts AI algorithms in a position similar to that of quantum algorithms today, where working around limited hardware (albeit at an entirely different scale) is one of the key skills in bringing software to practice. However, leveraging raw computational power, i.e., relying on sheer compute, will probably always be a large part of using AI methods, and possibly should be, as Sutton recently argued in an influential blog post (sutton2019bitter).

Lastly, Consequence 4 suggests that new hardware might mitigate the increasing need for computational power. For some time now, we have seen this idea implemented by using graphics cards and even more specialized hardware like neuromorphic chips to run neural networks for AI. And while substantial benefits can be achieved, none of these hardware platforms can provide an exponential speedup that can sustainably satisfy the exponential hunger of AI; but quantum computing might (arute2019quantum).

4. Overview of Quantum-Assisted Artificial Intelligence

In order to assess the current possibilities of QAI, we surveyed the QAI algorithms available in the scientific literature, out of which we present a selection of key algorithms; for an overview, please refer to Table 1. We roughly identified four interesting areas of application for QAI algorithms:

  1. Mathematical operations. These algorithms provide faster solutions to computationally hard problems like computing eigenvalues (variational quantum eigensolver (peruzzo2014variational)) or solving linear equations (HHL (harrow2009quantum)). While these tasks are not traditionally part of machine learning, they form the basis for many models and operations used in machine learning. Any practical acceleration on that front might thus have a huge impact on QAI.

  2. Traditional machine learning. These approaches are based on methods from more traditional branches of machine learning. They train models such as clusters or support vector machines (SVMs) to extract or extrapolate information from given data sets. As these models were already being developed (and run) decades ago, they are usually not as computationally expensive as other AI approaches and might thus be fit to test QAI on limited machines.

  3. Optimization. Algorithms of this group are given a specific optimization problem (usually formulated as a goal or fitness function over a specific input space) and aim to return the globally optimal point in the input space. Purely quantum methods differ from classical stochastic optimization in that they are usually guaranteed to find the global optimum under ideal conditions; in real-world implementations, they, too, yield stochastic results. The quantum approximate optimization algorithm (QAOA) (farhi2014quantum) is able to optimize on gate model hardware. Quadratic unconstrained binary optimization (QUBO) is equivalent to Ising spin glasses, and both are the canonical optimization problems for commercially available quantum annealers (a minimal QUBO sketch follows after this list). Quantum annealers are a specific hardware platform designed to perform quantum annealing (kadowaki1998quantum), a stochastic optimization algorithm based on adiabatic quantum computing (the exact ideal-condition algorithm) (mcgeoch2014adiabatic).

  4. Neural machine learning. These algorithms are based on more modern concepts of machine learning, which usually use neural networks as models; that means their models are rather opaque (i.e., hard to verify) and vastly over-parametrized. Some approaches opt for Boltzmann machines (BMs) (amin2018quantum; wiebe2019generative) as a model representation (since especially restricted BMs are easier to train). Others incorporate quantum computers in just select parts of larger neural-network-based architectures like autoencoders (romero2017quantum; khoshaman2018quantum), generative adversarial networks (GANs) (dallaire2018quantum; lloyd2018quantum; romero2019variational; zoufal2019quantum), or reinforcement learning (RL) agents (neukart2018quantum; dong2008quantum). Here, Quantum RL (dong2008quantum) is especially interesting since it largely differs from classical RL and substitutes various artifacts and operations with their quantum analogues.
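
The QUBO sketch referenced in item 3 above: a minimal, purely classical illustration of the problem format that quantum annealers optimize. On actual annealing hardware, the matrix Q would be submitted to the vendor's sampler rather than enumerated by brute force; the 3-variable instance below is a made-up toy example.

```python
import itertools
import numpy as np

# QUBO: minimize x^T Q x over binary vectors x. Real applications map
# their domain constraints into the entries of Q (lucas2014ising).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best = min(itertools.product([0, 1], repeat=len(Q)),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("optimal assignment:", best,
      "energy:", np.array(best) @ Q @ np.array(best))
```

Brute-force enumeration scales as 2^n and becomes hopeless quickly, which is exactly the regime where annealers promise to help.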

In Table 1 we also annotated all described algorithms with the QC platform they run on, as well as direct references to implementations where available. We attempted to estimate when each algorithm will be ready for practical use based on its suitability for NISQ devices; for some algorithms, no valid estimate was possible yet. If these predictions hold, NISQ-ready algorithms will probably be the first to make an impact in real-world QAI. Lastly, we matched all QAI algorithms to the machine learning pipeline (cf. Figure 2) and denoted the tasks of the machine learning pipeline that are actually executed using quantum computing. This usually means that all other tasks are still performed on classical hardware.

Even in this small sample set of algorithms, we can notice that quantum algorithms can be used in various places throughout the machine learning pipeline. Naturally, they focus on computationally expensive tasks. These can mainly be found when modeling the domain, which often means modeling complex probability distributions and sampling from them, and when performing the training, which usually means executing rather lengthy update operations on matrices or similar data structures. However, we can also observe that QAI algorithms are naturally hybrid approaches: a lot of classical steps within the machine learning pipeline are still necessary to produce the results.
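
This hybrid pattern is easiest to see in the variational family of algorithms. The following PennyLane sketch (a generic textbook-style example, not one of the surveyed implementations) shows a classical optimizer driving the parameters of a quantum circuit; only the circuit evaluation would run on a QPU:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)  # simulator standing in for a QPU

@qml.qnode(dev)
def cost(theta):
    # Quantum part of the pipeline: evaluate a parametrized circuit.
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

# Classical part: an ordinary gradient-descent loop tunes the parameter.
opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)
for _ in range(50):
    theta = opt.step(cost, theta)
print(f"optimized theta = {theta:.3f}, cost = {cost(theta):.3f}")
```

Every task surrounding the quantum call (parameter storage, the update rule, convergence checks) remains classical, mirroring the division of labor visible throughout Table 1.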

| Algorithm/Task | QC platform | Impl. available | NISQ | Quantum tasks in ML pipeline |
| --- | --- | --- | --- | --- |
| Variational quantum eigensolver (peruzzo2014variational) | Gate model | PennyLane (vqa_pennylane) | Yes | Data/Domain, Use Policy |
| HHL (harrow2009quantum) | Gate model | Qiskit (hhl_qiskit) | Unlikely (preskill2018quantum) | Data/Domain, Train |
| Clustering (aimeur2007quantum) | Gate model | - | No? | Data/Domain, Use Policy |
| Clustering (lloyd2013quantum) | Gate model | - | Yes? | Data/Domain, Use Policy |
| Quantum nearest-neighbor (wiebe2014quantum_algorithms) | Gate model | - | - | Data/Domain, Use Policy |
| Recommendation system (kerenidis2016quantum) | Gate model | - | Unlikely (preskill2018quantum) | Data/Domain, Use Policy |
| SVM (havlivcek2018supervised) | Gate model | Qiskit (svm_qiskit) | Yes | Data/Domain, Use Policy |
| SVM (willsch2019support) | Quantum annealing | - | - | Data/Domain, Use Policy |
| QAOA (farhi2014quantum) | Gate model | PennyLane (qaoa_pennylane) | Yes | Train |
| QUBO / Ising spin glasses (glover2018tutorial; lucas2014ising) | Quantum annealing | D-WAVE (mcgeoch2013experimental) | Yes | Train |
| Quantum-assisted EA (king2019quantum) | Quantum annealing | - | - | Train |
| Quantum BM (wiebe2019generative) | Gate model | - | Yes | Train |
| Quantum BM (amin2018quantum) | Quantum annealing | - | - | Train |
| Autoencoder (romero2017quantum) | Gate model | (autoencoder_implementation) | Yes | Train |
| Autoencoder (khoshaman2018quantum) | Quantum annealing | - | - | Train |
| Quantum GAN (dallaire2018quantum; lloyd2018quantum) | Gate model | PennyLane (qgan_pennylane) | Yes | Data/Domain |
| Quantum GAN (romero2019variational) | Gate model | - | Yes | Data/Domain |
| Quantum GAN (zoufal2019quantum) | Gate model | Qiskit (qgan_qiskit) | Yes | Data/Domain |
| Quantum-enhanced RL (neukart2018quantum) | Quantum annealing | - | - | Train |
| Quantum RL (dong2008quantum) | Gate model | - | - | Train, Use Policy |

Table 1. Selection of QAI algorithms.

5. Challenges of Quantum-Assisted Artificial Intelligence

Having analyzed the needs of AI and the current state of QAI, we can use this background knowledge to derive major challenges for future developments in QAI. Note that unlike other work (manju2014applications; perdomo2018opportunities) that formulates challenges in quantum artificial intelligence, we focus less on quantum-technical challenges and more on the changes to development methods that need to be achieved.

Challenge 1 (The Feedback Loop).

Replace the feedback loop around training (consisting of the tasks “Select Model/Policy”, “Train”, and “Assess QoS”) entirely with a quantum algorithm.

When performing machine learning, a lot of time is usually spent in training, which typically means fine-tuning a set of parameters in small gradual steps over many iterations. These iterations are often necessary because they incorporate slightly different (sets of) data points into the final model. Here, quantum approaches might not treat training iterations as a sequence of steps but instead perform all training iterations in superposition, thus taking a huge shortcut in training a machine learning model. However, none of the surveyed approaches managed to replace such large parts of the machine learning pipeline with quantum approaches, perhaps because real(istic) quantum machines only provide relatively short coherence times. Quantum RL (dong2008quantum) probably comes closest by performing both the action execution and the resulting update in a single run on the quantum machine, but the algorithm still requires many iterations of training overall. If possible at all, stepping away from iterative training might be the single biggest performance increase quantum computing could offer for AI. Thus, we might refer to The Feedback Loop Challenge as the “Holy Grail of Quantum AI”.

Nonetheless, other challenges persist and might stand in the way of this highest of goals. Considering the multitude of QAI algorithms focusing on the domain model, we see that quantum-based representations can serve as models for physical domains (where they are a natural fit), complex stochastic domains (where they can approximate complex probability distributions more cheaply and precisely), and small domains in general (where quantum-based or quantum-assisted modeling of the domain might yield some benefits further along the pipeline).

That the extremely limited memory capacities of current quantum computers are one of the main bottlenecks for practical applications is well known within the quantum computing community. For QAI algorithms, however, especially the more modern ones, this problem is aggravated, as modern AI algorithms only really shine when processing very large amounts of data (alom2019state). Figure 4 shows a simple sketch of this behavior. Effectively, the need to process relatively large amounts of training data might, in the long run, even prevent us from cutting out the iterative training loop.

Figure 4. The performance of modern deep learning methods compared to more traditional machine learning, depending on the amount of available training data (image taken from (alom2019state)).

Challenge 2 (The Training Data).

Provide means to process (the essence of) large amounts of data on quantum computers.

Note that for QAI, we might take a workaround here: using the right hybrid approaches, we might be able to construct classical pre- and postprocessing steps so that we can still benefit from large amounts of data without processing all of them on the quantum machine. Early approaches like Quantum-enhanced RL (neukart2018quantum) have improved classical training by preselecting training samples using a quantum algorithm. Similar approaches could work to reduce the necessary training data for quantum training steps as well.
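
A minimal sketch of what such a classical preselection step could look like (the function and its k-means strategy are our own illustration, not the method of (neukart2018quantum)):

```python
import numpy as np
from sklearn.cluster import KMeans

def preselect(samples: np.ndarray, budget: int) -> np.ndarray:
    """Classically distill a data set down to `budget` representative
    samples (points nearest to k-means centroids) so that only this
    small subset ever reaches the quantum training step."""
    km = KMeans(n_clusters=budget, n_init=10).fit(samples)
    nearest = [int(np.argmin(np.linalg.norm(samples - c, axis=1)))
               for c in km.cluster_centers_]
    return samples[nearest]

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 4))  # stand-in for a large training set
subset = preselect(data, budget=64)  # only 64 samples go to the QPU
print(subset.shape)                  # (64, 4)
```

Whether such a distilled subset preserves enough of the original distribution is, of course, exactly the open question behind Challenge 2.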

From these considerations we can already see that the combination and hybridization of various algorithms and techniques might be key to further developing QAI. However, combinations always introduce additional free parameters: Which algorithms do we use? How and when do they interact? Which domains is a specific combination good for? Furthermore, we not only need to combine different techniques; these techniques also often stem from different fields of science and engineering. That means that even for a relatively standard QAI algorithm, we might require expert knowledge about quantum computing and the platforms it runs on, about AI and classical optimization, and about the domain at hand in order to make the right calls.

Challenge 3 (The Interfaces).

Provide standardized interfaces that allow for dynamic combination of QAI components and (by extension) for experts of different fields to collaborate on QAI algorithms.

Standardization is a goal often called for throughout various disciplines of science and engineering. However, QAI brings together two largely separate fields, which in their own right develop rapidly and have produced little standardization. It thus seems imperative to organize the interfaces between AI and QC not through fixed technological standards but through collaboration between the experts of the different fields involved (leymann2019towards). An important part of this challenge is to allow standard software engineering to catch up with recent developments: especially smaller groups will not be able to afford dedicated experts in QC, much less QAI. Instead, software developers should be able to use QAI as seamlessly as they use parallel computing in the cloud today, benefiting from its advantages without needing to dive into the technical specifics.

For QC, this challenge requires a degree of technical maturity that most practical frameworks have not yet reached, even though recent developments definitely aim at making QC technology more accessible. As a lot of effort is put into QC by vendors wanting to sell their own applications, the independent development of open standards is required to prevent vendor lock-in and to enable QAI applications that span different QC platforms.
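
As a sketch of what such a standardized interface could look like in practice, consider the following Python stub (all names here are hypothetical; no existing framework defines this API):

```python
from abc import ABC, abstractmethod
from typing import Any, Sequence

class Trainer(ABC):
    """Hypothetical common interface: any component covering the
    'Train' and 'Assess QoS' tasks of the ML pipeline, whether it
    runs classically or on quantum hardware."""

    @abstractmethod
    def train(self, data: Sequence[Any]) -> Any: ...

    @abstractmethod
    def assess_qos(self, model: Any, data: Sequence[Any]) -> float: ...

class ClassicalTrainer(Trainer):
    def train(self, data):
        return {"mean": sum(data) / len(data)}  # placeholder model
    def assess_qos(self, model, data):
        return 1.0  # placeholder score

class AnnealerTrainer(Trainer):
    """Would wrap a vendor SDK (e.g., a D-Wave sampler) behind the same
    interface, so call sites never depend on a specific platform."""
    def train(self, data):
        raise NotImplementedError("submit a QUBO to the annealer here")
    def assess_qos(self, model, data):
        raise NotImplementedError

def pipeline(trainer: Trainer, data):
    # The surrounding pipeline is oblivious to where training runs.
    model = trainer.train(data)
    return model, trainer.assess_qos(model, data)
```

Swapping `ClassicalTrainer` for `AnnealerTrainer` then becomes a configuration decision rather than a rewrite, which is the kind of exchangeability this challenge calls for.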

Challenge 4 (The Real Reason).

Keep track of the source of observed improvements.

Even classical machine learning models can often be treated as nothing more than black boxes: even though they are deterministic and mathematically well understood, they encode behaviors or connections between input and output that are too complex to trace without extreme computational effort. This is why, in recent years, AI researchers have shown increased interest in methods of testing and verifying the performance of AI (calinescu2012self; belzner2016software; amodei2016concrete).

For QAI, this black-box property may be enforced by nature: we physically cannot introspect the probability distribution over states of a quantum machine while it is computing. That is all the more reason why we need quantum-appropriate testing and verification. In this light, it is rather curious that we found no QAI algorithms that specifically tackle the last few tasks of the machine learning pipeline, especially “Monitor QoS”, which should be of utmost importance to practical applications.

Challenge 4, however, focuses on the reason why we need especially thorough testing in QAI: we need to constantly justify using a quantum machine. QAI will only be a success if the quantum part of the algorithm is the part that brings about the advantage over comparable methods. However, especially in the field of AI, it is easy to construct a superior AI model by accident: a few lucky random numbers in the stochastic training process might result in a better-performing AI. Or any part of a QAI algorithm (which is made up of various classical parts as well) might just happen to match the current (state of the) domain in the right way.

The more complex QAI algorithms become, the harder it might be to find a fair comparison in the purely classical world. Still, we need to provide researchers and developers in the field of QAI with the right tools to easily trace the significance of, and the reason for, perceived advantages over other algorithms. If QC is eventually going to benefit AI, we need to know exactly when and why.
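
A sketch of the minimal statistical hygiene this implies: run the full pipeline repeatedly with the quantum component and with a classical drop-in baseline, then test whether the observed gap survives the stochasticity of training. The scores below are placeholders; in practice they come from one's own experiments.

```python
from scipy.stats import mannwhitneyu

# Repeated end-to-end runs of the same pipeline; placeholder numbers.
quantum_variant    = [0.81, 0.84, 0.79, 0.86, 0.83, 0.85, 0.80, 0.82]
classical_ablation = [0.78, 0.80, 0.77, 0.81, 0.79, 0.80, 0.76, 0.78]

# One-sided nonparametric test: is the quantum variant really better,
# or could the gap be a few lucky seeds in the stochastic training?
stat, p = mannwhitneyu(quantum_variant, classical_ablation,
                       alternative="greater")
print(f"U = {stat}, p = {p:.4f}")
```

A significant result still only shows that this quantum component, in this pipeline, on this domain helped; attributing the advantage to quantum properties specifically requires ablating the component against its best classical substitute, not against no component at all.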

6. Outlook

In this paper we took a long tour from the challenges AI already poses to software engineering to the even more peculiar challenges that QAI poses. Still, we argued that QC may greatly help in alleviating the problems that the development of increasingly capable AI is going to face in the upcoming years. On the flip side, AI methods with inherent robustness to noise might be an ideal testbed for early NISQ applications.

We defined four major challenges without any claim to completeness. On the contrary, we expect every researcher in the field to be able to add quite a few more. However, we feel that the analysis of projected future developments in AI and the current state of the art in QAI allowed us to deduce some of the most ambitious goals to tackle.

We hope that discussing these ambitious challenges benefits the development of the young field of QAI, and we are confident that future research will (purposefully or inadvertently) make progress on them.

Acknowledgements.
This work was supported by the Federal Ministry of Economic Affairs and Energy, Germany, as part of the PlanQK project developing a platform and ecosystem for quantum-assisted artificial intelligence (see planqk.de).

References