Minimizing polynomial functions on quantum computers

03/19/2019 ∙ by Raouf Dridi, et al. ∙ Carnegie Mellon University

This expository paper reviews some of the recent uses of computational algebraic geometry in classical and quantum optimization. The paper assumes an elementary background in algebraic geometry and adiabatic quantum computing (AQC), and concentrates on presenting concrete examples (with Python code tested on a quantum computer) of applying algebraic geometry constructs: solving binary optimization, factoring, and compiling. Reversing the direction, we also briefly describe a novel use of quantum computers to compute Groebner bases for toric ideals. We also show how Groebner bases play a role in studying AQC at a fundamental level within a Morse theory framework. We close by placing our work in perspective, situating this leg of the journey as part of a marvelous intellectual expedition that began with our ancients over 4000 years ago.


1 Introduction

The present paper tells the new story of the growing romance between two protagonists: algebraic geometry [CLO98] and adiabatic quantum computation [FGG01, vDMV02]. An algebraic geometer who has been introduced to the notion of Ising Hamiltonians [SiIC13] will quickly recognize the attraction in this relationship. For many physicists, however, the connection may be surprising, primarily because of the preconception that algebraic geometry is just a very abstract branch of pure mathematics. Although this is somewhat true (algebraic geometry today studies a variety of sophisticated objects such as schemes and stacks), at heart these are tools for studying the same problem our ancients grappled with: solving systems of polynomial equations.


A better-known relationship is the one between algebraic geometry and classical polynomial optimization, which dates back to the early 1990s with the work of B. Sturmfels and collaborators [Stu96, PS01]. Applications of algebraic geometry to integer programming can be found in [CT91, TTN95, ST97, BPT00]. We take the occasion of this invited paper to introduce both classical and quantum optimization applications of algebraic geometry (the latter conceived by the authors) through a number of concrete examples, with the minimum possible abstraction, in the hope that it will serve as a teaser to join us in this leg of a marvelous expedition that began with the pioneering contributions of the Egyptian, Vedic, and pre-Socratic Greek priesthoods.

2 The profound interplay between algebra and geometry

In mathematics, there are a number of dualities that differentiate it from other sciences. Through these dualities, data transcend abstraction, allowing different interpretations and access to different probing approaches. One of these is the duality between the category of (affine) algebraic varieties (i.e., zero loci of systems of polynomial equations) and the category of (finitely generated, with no nilpotent elements) commutative rings:

(2.1)

Because of this equivalence, we can go back and forth between the two equivalent descriptions, taking advantage of both worlds.

Example 1

Before we go any deeper, here is an example of an algebraic variety: the set of points at equal distance (say, distance 1) from the origin,

(2.2) V = { (x, y) : x^2 + y^2 - 1 = 0 }.

The very same data is captured algebraically with the coordinate ring

(2.3) Q[V] = Q[x, y] / (x^2 + y^2 - 1).

As its name indicates, the coordinate ring provides a coordinate system for the geometric object V.


We write Q[x_1, ..., x_n] for the ring of polynomials in x_1, ..., x_n with rational coefficients. (At some places, including the equivalence above, the field of coefficients should be replaced by its algebraic closure; in practice, this distinction is not problematic and can be safely swept under the rug.) Let S = {f_1, ..., f_s} be a set of polynomials in Q[x_1, ..., x_n], and let V(S) denote the algebraic variety they define, that is, the set of common zeros of the equations f_1 = 0, ..., f_s = 0. The system S generates an ideal I by taking all linear combinations, with polynomial coefficients, of the polynomials in S; we have V(S) = V(I). The ideal I reveals the hidden polynomials that are consequences of the generating polynomials in S. For instance, if one of the hidden polynomials is the constant polynomial 1 (i.e., 1 is in I), then the system S is inconsistent (because 1 = 0 has no solutions). To be precise, the set of all hidden polynomials is given by the so-called radical ideal sqrt(I), which is defined by sqrt(I) = { g : g^r in I for some integer r >= 1 }. We have:

Proposition 1

V(S) = V(I) = V(sqrt(I)).

Of course, the radical ideal sqrt(I) is infinite. Luckily, thanks to a prominent technical result (Dickson's lemma), it has a finite generating set, i.e., a Groebner basis B, which one might take to be a triangularization of the ideal. In fact, the computation of Groebner bases generalizes Gaussian elimination for linear systems.

Proposition 2

V(S) = V(B).

Instead of giving the technical definition of a Groebner basis (which can be found in [CLO98] and in many other textbooks), let us give an example. (For simplicity, we use the term "Groebner basis" to refer to a reduced Groebner basis, which is, technically, what we are working with.)

Example 2

Consider the system of polynomial equations shown in the notebook of Figure 1. We want to solve it. One way to do so is to compute a Groebner basis for it. In Figure 1, the output of cell number 4 gives a Groebner basis of the system. We can see that the initial system has been triangularized: the last equation contains only one variable, whilst the second-to-last has one additional variable, and so on. The last variable is said to be eliminated with respect to the rest of the variables. When computing the Groebner basis, the underlying algorithm (Buchberger's algorithm) uses a variable ordering (called the lexicographic ordering) for its two internal calculations: cross-multiplications and Euclidean divisions. The program tries to isolate the variables one at a time, following this ordering. It is clear that different orderings yield different Groebner bases.

Figure 1: Jupyter notebook for computing Groebner bases using the Python package sympy. More efficient algorithms exist (e.g., [Fau02, Fau99]).
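The notebook itself is not reproduced in the text, so the sketch below runs the same computation on a classic textbook system (an assumption, not necessarily the system of Figure 1) and exhibits the triangular shape of a lex Groebner basis:

```python
from sympy import symbols, groebner

# A classic symmetric system (from Cox-Little-O'Shea); it stands in for
# the system of Figure 1, which is not reproduced in the text.
x, y, z = symbols('x y z')
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

# The lexicographic order x > y > z triangularizes the system.
gb = groebner(F, x, y, z, order='lex')
for g in gb.exprs:
    print(g)
# The last basis element involves only z; only the first involves x.
```

Reading the basis bottom-up, one solves a univariate polynomial in z first, then back-substitutes, exactly as in Gaussian elimination.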

The mathematical power of Groebner bases doesn't stop at solving systems of algebraic equations; it goes well beyond this. For instance, Groebner bases give necessary and sufficient conditions for the existence of solutions. Let us illustrate this with an example.

Example 3

Consider the following 0-1 feasibility problem

(2.4)

with binary variables. By putting the parameter variables rightmost in the ordering, we obtain the set of all parameter values for which the system is feasible. The notebook in Figure 2 shows the details of the calculations as well as the conditions on these parameter variables.

Figure 2: Necessary and sufficient conditions for existence of feasible solutions.
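The calculation of Figure 2 is not reproduced here; the same mechanics can be shown on a toy parametric system (a hypothetical stand-in: x + y = a with x, y binary). Eliminating x and y leaves exactly the conditions on the parameter a for feasibility:

```python
from sympy import symbols, groebner

# Hypothetical parametric 0-1 system: x + y = a, with x and y binary.
x, y, a = symbols('x y a')
system = [x**2 - x, y**2 - y, x + y - a]

# Put the parameter a rightmost in the lex order so it survives elimination.
gb = groebner(system, x, y, a, order='lex')

# Basis elements involving only a are the feasibility conditions:
conditions = [g for g in gb.exprs if g.free_symbols == {a}]
print(conditions)
```

The surviving univariate polynomial vanishes exactly at a = 0, 1, 2, i.e., at the values of a for which two binary variables can sum to a; this is the necessary-and-sufficient condition the text describes.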

This machinery can be put in more precise wording as follows:

Theorem 1

Let I ⊂ Q[x_1, ..., x_n] be an ideal, and let G be a reduced Groebner basis of I with respect to the lex order x_1 > ... > x_n. Then, for every 0 <= l <= n, the set

(2.5) G_l = G ∩ Q[x_(l+1), ..., x_n]

is a Groebner basis of the elimination ideal I_l = I ∩ Q[x_(l+1), ..., x_n].

As previously mentioned, this elimination theorem is used to obtain the complete set of conditions on the last variables of the ordering such that the variety is not empty. For instance, if the ideal represents a system of algebraic equations that depends algebraically on certain parameters, then the intersection (2.5), computed with the parameters placed last, gives all necessary and sufficient conditions on the parameters for the existence of solutions.

3 The innate role of algebraic geometry in binary optimization

By now, it should not be surprising to see algebraic geometry emerge when optimizing polynomial functions. Here, we expand on this with two examples of how algebraic geometry solves the binary polynomial optimization problem

(3.1)

The first method we review was introduced in [BPT00] (and differs from an earlier method studied in [TTN95], which we discuss in a later section). The second method is new; it is an adaptation of the method described in [PS01] to the binary case.

3.1 A general method for solving binary optimizations

The key idea is to consider the ideal

where we note the appearance of a new variable. This variable covers the range of the cost function. Consequently, if we compute a Groebner basis with an elimination ordering in which the new variable appears rightmost, we obtain a univariate polynomial whose roots are all the values of the cost function. Take the smallest of those values, substitute it into the rest of the basis, and solve.

Example 4

Consider the following problem

(3.2)

Figure 3 details the solution.

Figure 3: Solving optimization problems with Groebner bases. Although the cost function is linear here, the method works for any polynomial cost function.
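The recipe of this subsection can be sketched in a few lines of sympy, on a hypothetical cost f = x1 + 2*x2 + 3*x3 (not the problem of Figure 3), with t as the range-tracking variable:

```python
from sympy import symbols, groebner, solve

# Hypothetical binary cost function and the ideal of the method:
x1, x2, x3, t = symbols('x1 x2 x3 t')
f = x1 + 2*x2 + 3*x3
ideal = [x1**2 - x1, x2**2 - x2, x3**2 - x3, t - f]

# t is rightmost in the lex order, so the basis contains a polynomial in
# t alone whose roots are exactly the values f takes on {0,1}^3.
gb = groebner(ideal, x1, x2, x3, t, order='lex')
p_t = [g for g in gb.exprs if g.free_symbols == {t}][0]
values = sorted(solve(p_t, t))
print(values)   # smallest root = minimum of f
```

Substituting the smallest root back into the remaining basis elements and solving recovers the minimizing binary assignment, as the text prescribes.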

3.2 A second general method for solving binary optimizations

An important construction that comes with the cost function is its gradient ideal. This is valuable additional information that we will use in solving the problem. Now, because the arguments of the cost function are binary, we need to make sense of differentiating it. This is taken care of by introducing the function

(3.3)

where the coefficients are real numbers. We can now go ahead and define the gradient ideal of this function as

(3.4)

The variety gives the set of local minima of the function . Its coordinate ring is the residue algebra

(3.5)

Let us define the linear map

Because the number of local minima is finite, the residue algebra is finite-dimensional. Consequently, the following is true [CLO98]:

  • The values of the cost function on the set of critical points are given by the eigenvalues of the matrix of the multiplication map.

  • The eigenvalues of the multiplication matrices associated with the coordinate functions give the coordinates of the critical points.

  • If a vector is an eigenvector for one of these multiplication matrices, then it is also an eigenvector for the others.


We need to compute a basis for the residue algebra. This is done by first computing a Groebner basis for the gradient ideal and then extracting the standard monomials (i.e., the monomials that are not divisible by the leading term of any element in the Groebner basis). In the simple example below, we do not need to compute any Groebner basis, because the generating set is already a Groebner basis with respect to the chosen ordering.

Example 5

We illustrate this on

where . A basis for the residue algebra is given by the set of the 16 monomials

The matrix is

We obtain the following eigenvalues for :

This is also the set of values that takes on . The eigenvector

that corresponds to the eigenvalue 0 is the column vector

This eigenvector is used to find the coordinates of the point that minimizes the cost function: the coordinates of the global minimum are read off from the entries of the eigenvector.
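A compact sympy version of this computation, on a hypothetical two-variable cost (the 16-monomial example above uses four variables, but the mechanics are identical):

```python
from sympy import symbols, groebner, reduced, Poly, Matrix

# On binary variables, <x1^2 - x1, x2^2 - x2> is already a Groebner
# basis, and the residue algebra has basis {1, x1, x2, x1*x2}.
# The cost f below is hypothetical.
x1, x2 = symbols('x1 x2')
G = groebner([x1**2 - x1, x2**2 - x2], x1, x2, order='lex')
basis = [x1**0, x1, x2, x1*x2]
f = 1 + x1 - 2*x2

def normal_form(p):
    # remainder of p on division by the Groebner basis
    return reduced(p, list(G.exprs), x1, x2, order='lex')[1]

# Column j of the multiplication matrix = coordinates of f * basis[j].
cols = []
for m in basis:
    r = Poly(normal_form(f * m), x1, x2)
    cols.append([r.coeff_monomial(b) for b in basis])

Mf = Matrix(cols).T
print(Mf.eigenvals())   # {value of f at a critical point: multiplicity}
```

The eigenvalues coincide with the four values f takes on {0,1}^2, illustrating the first bullet point above.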

4 Factoring on quantum annealers

This section reviews the use of the Groebner bases machinery in the factoring problem on current quantum annealers (introduced in [DA17]). We need to deal with three key constraints: first, the number of available qubits; second, the limited dynamic range for the allowed values of the couplers (i.e., the coefficients of the quadratic monomials in the cost function); and third, the sparsity of the hardware graph.

4.1 Reduction

In general, reducing a polynomial function to a quadratic function necessitates the injection of extra variables (the minimum such reduction is given in terms of toric ideals [DAT18a]). However, in certain cases, the reduction to QUBOs can be done without additional variables. This is the case for the Hamiltonian that results from long multiplication [DA17]. In fact, in addition to performing the reduction, we can adjust the coefficients to be within the needed dynamic range at the same time. Consider the quadratic polynomial

with binary variables. The goal is to solve the polynomial (obtain its zeros) by converting it into a QUBO. Instead of directly squaring the function (the naive approach) and then reducing the resulting cubic function into a quadratic one by adding extra variables, we compute a Groebner basis of the system

and look for a positive quadratic polynomial in the ideal it generates. Note that the global minima of such a polynomial are precisely the zeros of the original system. The Groebner basis is

in addition to three more cubic polynomials.

We take a linear combination of the basis elements and solve for its coefficients. We can require that these coefficients respect the dynamic range allowed by the quantum processor (e.g., that the absolute values of the coefficients of the resulting quadratic polynomial lie within the allowed interval). The ensemble of these constraints translates into a simple real optimization problem for the coefficients.
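As a minimal contrast, the naive squaring route mentioned above already yields a QUBO when the polynomial to be zeroed is linear (no cubic terms arise), which is enough to see why global minima of the square are zeros of the original. The constraint f below is hypothetical, not the long-multiplication Hamiltonian of [DA17]:

```python
from itertools import product
from sympy import symbols, expand

# Hypothetical linear constraint over binary variables.
x1, x2, x3 = symbols('x1 x2 x3')
xs = (x1, x2, x3)
f = x1 + x2 + x3 - 2

# Square it, then impose x^2 = x (valid on binary variables) to stay quadratic.
h = expand(f**2)
for xv in xs:
    h = expand(h.subs(xv**2, xv))

# Brute-force check: the QUBO's global minima are exactly the zeros of f.
points = list(product((0, 1), repeat=3))
hvals = {p: h.subs(dict(zip(xs, p))) for p in points}
minima = sorted(p for p in points if hvals[p] == min(hvals.values()))
zeros = sorted(p for p in points if f.subs(dict(zip(xs, p))) == 0)
print(minima == zeros)
```

For higher-degree systems this squaring produces cubic and quartic terms, which is exactly where the Groebner-basis search for a positive quadratic element of the ideal, as described in the text, pays off.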

4.2 Embedding

The connectivity graph of the resulting quadratic polynomial is a complete graph. Although embedding this graph into current architectures is not easy, the situation improves with upcoming architectures (e.g., D-Wave's next-generation quantum processors [BBRR19]).

5 Compiling on quantum annealers

Compiling a problem in AQC consists of two steps: reducing the problem's polynomial function to a quadratic function (covered above) and then embedding the graph of the quadratic function inside the quantum annealer's hardware graph. This process can be fully automated using the language of algebraic geometry [DAT18a]. We review here the key points of this automation through a simple example.


Let us first explain what is meant by embeddings (and introduce the subtleties that come with them). Consider the following optimization problem, which we wish to solve on the D-Wave 2000Q quantum processor:

(5.1)
Figure 4: (Left) The logical graph of the objective function cannot be embedded directly inside the Chimera graph. (Center) We blow up the central node into an edge and redistribute the surrounding nodes. (Right) Embedding inside an actual D-Wave 2000Q quantum annealer; in red, the chain of qubits representing the central logical qubit. The missing qubits are faulty.

Before we start annealing, we need to map the logical variables to the physical qubits of the hardware. Similarly, each quadratic term needs to be mapped to a coupling between physical qubits, with strength given by its coefficient. Not surprisingly, this mapping cannot always be a simple matching, because of the sparsity of the hardware graph (Chimera in our case). This is true for our simple example: the degree of the central node is 8, so a direct matching inside Chimera, where the maximum degree is 6, is not feasible. Thus, we stretch the definition of embedding: we allow nodes to elongate or, as an algebraic geometer would say, to blow up. In particular, if we blow up the central node into an edge, we can then redistribute the surrounding nodes between the two duplicates. In general, one needs a sequence of blow-ups, and finding it turns out to be a hard problem. What makes the problem even harder is that not all embeddings are equally good. It is important to choose embeddings that have, among other properties, shorter chains, as illustrated in Figure 5. Of course, this is in addition to minimizing the overall number of physical qubits used.

Figure 5: The depicted embedding (for the same problem) has two long chains that do not persist through the adiabatic evolution (on the D-Wave 2000Q processor). In this case, the quantum processor fails to return an answer.

5.1 Embeddings as fiber-bundles

One way to think about embedding the logical graph into the hardware graph is in terms of fiber bundles. This equational formulation makes the connection with algebraic geometry. The general form of such a fiber bundle is

with

where the binary parameter is 1 if the corresponding physical qubit is used and 0 otherwise. The fiber of the map at a logical node is given by

(5.3)

The conditions on the parameters guarantee that fibers do not intersect (i.e., that the map is well defined). In addition, two more conditions need to be satisfied: (i) the Pullback Condition: the logical graph embeds entirely inside the image; (ii) the Connected Fiber Condition: each fiber is a connected subgraph of the hardware graph. We will not go into the details of these conditions, which can be found in [DAT18a]. We illustrate this with a simple example.

Example 6

Consider the two graphs in Figure 6. In this case, equations (5.1) are given by

(5.4)
(5.5)
(5.6)
(5.7)
(5.8)

and

Figure 6: The set of all fiber bundles defines an algebraic variety. This variety is given by the Groebner basis (6).

The Pullback Condition reads

Finally, the Connected Fiber Condition is given by

We can then use the elimination theorem to obtain all embeddings of the logical graph inside the hardware graph (by putting the fiber-bundle parameters rightmost in the elimination order). A part of the Groebner basis is given by

In particular, the intersection gives the two minors (i.e., subgraphs) inside the hardware graph. The remainder of the Groebner basis gives the explicit expressions of the corresponding mappings.
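The two defining conditions can be checked mechanically for any candidate assignment of fibers. A sketch with networkx, on a toy pair of graphs (hypothetical, not the graphs of Figure 6):

```python
import networkx as nx

def is_minor_embedding(logical, hardware, fibers):
    """Check the two conditions from the text for a candidate embedding.

    fibers maps each logical node to its chain (a set of hardware nodes)."""
    used = [q for chain in fibers.values() for q in chain]
    if len(used) != len(set(used)):            # fibers must not intersect
        return False
    for chain in fibers.values():              # Connected Fiber Condition
        if not nx.is_connected(hardware.subgraph(chain)):
            return False
    for u, v in logical.edges:                 # Pullback Condition
        if not any(hardware.has_edge(a, b)
                   for a in fibers[u] for b in fibers[v]):
            return False
    return True

# Toy instance: K3 does not match directly into a 4-cycle, but it embeds
# once node 0 is blown up into the chain {'a', 'b'}.
logical = nx.complete_graph(3)
hardware = nx.cycle_graph(['a', 'b', 'c', 'd'])
good = {0: {'a', 'b'}, 1: {'c'}, 2: {'d'}}
bad = {0: {'a'}, 1: {'b'}, 2: {'c'}}
print(is_minor_embedding(logical, hardware, good))   # True
print(is_minor_embedding(logical, hardware, bad))    # False
```

The algebraic-geometric formulation of the text goes further: rather than checking one candidate, the Groebner basis enumerates all of them at once.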

5.2 Symmetry reduction

Many of the embeddings acquired using the above method are redundant. We can eliminate this redundancy in a mathematically elegant way using the theory of invariants [Olv99] (on top of the algebraic geometric formulation). First, we fold the hardware graph along its symmetries and then proceed as before. This amounts to re-expressing the quadratic form of the hardware graph in terms of the invariants of the symmetry.

Example 7

Continuing with the same example, the quadratic form of the hardware graph is:

(5.10)

Exchanging two of the nodes is a symmetry of this quadratic form, and suitable combinations of the node variables are invariants of this symmetry. In terms of these invariants, the quadratic function takes the simplified form:

(5.11)

which shows (as expected) that the graph can be folded into a chain (given by the new nodes). The surjective homomorphism now takes the form

(5.12)
(5.13)

The table below compares the computations of the surjections with and without the use of invariants:

                                             original coords   invariant coords
Time for computing a Groebner basis (secs)        0.122             0.039
Number of defining equations                      58                30
Maximum degree in the defining eqns               3                 2
Number of variables in the defining eqns          20                12
Number of solutions                               48                24

In particular, the number of solutions is down to 24: that is, four (non-symmetric) minors times the six symmetries of the logical graph.

6 Quantum computing for algebraic geometry

Here we give an example that goes in the opposite direction of what we have covered so far: we show how quantum computers can be used to compute algebraic geometric structures that are exponentially hard to compute classically. Our attention is directed to a prominent type of polynomial ideal, the so-called toric ideals, and their Groebner bases. In the context of integer optimization, this gives a novel quantum algorithm for solving IP problems (a quantum version of the Conti and Traverso algorithm [CT91], which is used in [TTN95]). As a matter of fact, the procedure we are about to describe can be used to construct the full Groebner fan [Stu96, CLO98] of a given toric ideal; we leave the technical details for future work. A related notion is the so-called Graver basis, which extends toric Groebner bases in the context of convex optimization. A hybrid classical-quantum algorithm for computing Graver bases is given in [ADT19].


Toric ideals are ideals generated by differences of monomials. Because of this, their Groebner bases enjoy a clear structure given by kernels of integer matrices. Specifically, let A be any d x n integer matrix (A is called the configuration matrix). Each column a_j of A is identified with a Laurent monomial t^(a_j). The toric ideal I_A associated with the configuration A is then the kernel of the algebra homomorphism

(6.1)
(6.2)

From this it follows that the toric ideal I_A is generated by the binomials x^(u+) - x^(u-), where u = u+ - u- runs over all integer vectors in ker(A), the kernel of the matrix A. It is not hard to see that the elimination theorem that we have used repeatedly can also be used here to compute a Groebner basis for the toric ideal I_A.
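This classical elimination route can be run in sympy on a hypothetical configuration A = [[1, 1, 1], [0, 1, 2]] (the matrix in (6.3) is not reproduced here): column j of A gives the monomial t1^A[0,j] * t2^A[1,j], and eliminating t1, t2 leaves the toric ideal.

```python
from sympy import symbols, groebner, expand

# Hypothetical configuration A = [[1, 1, 1], [0, 1, 2]]:
# x1 -> t1, x2 -> t1*t2, x3 -> t1*t2**2.
t1, t2, x1, x2, x3 = symbols('t1 t2 x1 x2 x3')
relations = [x1 - t1, x2 - t1*t2, x3 - t1*t2**2]

# Eliminate the t variables (they come first in the lex order):
gb = groebner(relations, t1, t2, x1, x2, x3, order='lex')
toric = [g for g in gb.exprs if not g.has(t1) and not g.has(t2)]
print(toric)   # binomial generator(s) of the toric ideal
```

For this configuration the toric ideal is generated by the single binomial x1*x3 - x2^2, matching the binomial description above with the kernel vector u = (1, -2, 1). The quantum procedure of this section replaces this elimination step, which is exponentially expensive in general.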


Now we explain how AQC (or any quantum optimizer, such as the Quantum Approximate Optimization Algorithm, QAOA [FGG14]) can be used to compute Groebner bases for toric ideals. The example we choose is taken from Chapter 8 of [CLO98]. The matrix A is given by

(6.3)

The kernel is easily obtained (with polynomial complexity); it is the two-dimensional lattice spanned by two integer vectors, and we consider a general integer combination of the two. As in [CLO98], we consider the lexicographic ordering represented by the matrix order

(6.4)

The cost function is given by the square of the Euclidean norm of this combination vector. Figure 7 details the solution of this optimization problem on the D-Wave 2000Q quantum processor. Each solution has twelve entries, corresponding to the binary decompositions of the two integer coefficients. Qubits marked -1 are not used, so they should be considered equal to zero. The collection of all these solutions translates into the sought Groebner basis

(6.5)
Figure 7: Computation of toric Groebner bases on the D-Wave 2000Q quantum processor.

7 Groebner bases in the fundamental theory of AQC

The role of the so-called anti-crossings [SiIC13, vNW93] in AQC is well understood: the total adiabatic evolution time is inversely proportional to the square of the minimum energy difference between the two lowest energy levels of the given Hamiltonian, and this minimum is attained at anti-crossings. In this last section, we connect anti-crossings to the theory of Groebner bases (through a quick detour into Morse theory [Bot88, Wit82]).


Consider the time-dependent Hamiltonian:

(7.1)

To the Hamiltonian (7.1), we assign the function given by the characteristic polynomial:

(7.2)

where I is the identity matrix. The important role that this function plays in AQC is described in [DAT18b, DAT19]. In particular, anti-crossings are mapped to saddle points of the function. This is the starting point of the connection with Morse theory, which is explored in detail in [DAT18b, DAT19]. Here we explain how anti-crossings can be described using Groebner bases. The key fact is that the function is a polynomial in the time parameter and the spectral parameter, and so is any partial derivative of it. Recall that a critical point is a point at which the differential map is the zero map, that is, a point where the gradient vanishes. A critical point is said to be nondegenerate (e.g., a saddle point) if the determinant of the Hessian at that point is not zero. Define the ideal generated by the two first-order partial derivatives. It is clear that the variety of this ideal is the set of all critical points. To capture nondegeneracy, we need to saturate the ideal with the determinant of the Hessian. This saturation is the ideal of all polynomials that vanish at the zeros of the gradient ideal that are not zeros of the Hessian determinant. In other words, a point is a nondegenerate critical point of the function if and only if the remainder of the Hessian determinant, on division by a Groebner basis of the gradient ideal, is not zero at that point.
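The notions in play can be checked directly on a small example. The surface F below is a hypothetical stand-in for the characteristic polynomial (7.2) (the real F of an AQC Hamiltonian is not reproduced), and the sketch uses direct solving and the Hessian determinant rather than the Groebner/saturation machinery:

```python
from sympy import symbols, solve, hessian

# Hypothetical polynomial surface F(s, lam) with a saddle at the origin,
# standing in for the characteristic polynomial of (7.2).
s, lam = symbols('s lam')
F = lam**2 - s**2 + s**4

# Critical points: common zeros of the gradient (the gradient ideal's variety).
grad = [F.diff(s), F.diff(lam)]
critical = solve(grad, [s, lam], dict=True)

# Nondegeneracy: det(Hessian) != 0 at the critical point.
H = hessian(F, (s, lam))
nondegenerate = [p for p in critical if H.subs(p).det() != 0]
print(critical)
print(nondegenerate)   # here every critical point is nondegenerate
```

Here (0, 0) is a nondegenerate saddle, the analogue of an anti-crossing in this picture; the saturation described in the text performs the same filtering symbolically, without enumerating points.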

8 Summary and discussion

As we mentioned in the Introduction, we are travelers on a journey that our ancients started. Evidence of "practical mathematics" from around 2200 BCE in the Indus Valley has been unearthed that indicates proficiency in geometry. Similarly, in Egypt (around 2000 BCE) and Babylon (1900 BCE), there is good evidence (through the Rhind Papyrus and clay tablets, respectively) of capabilities in geometry and algebra. After the fall of the Indus Valley Civilization (around 1900 BCE), the Vedic period was especially fertile for mathematics, and around 600 BCE there is evidence that magnetism (discovered near Varanasi) was already used for practical purposes in medicine (such as pulling arrows out of warriors injured in battle), as written in the Sushruta Brahmana. Magnetism was also independently discovered by the pre-Socratic Greeks, as evidenced by the writings of Thales (624-548 BCE), who, along with Pythagoras (570-495 BCE), was also quite competent in geometry. Indeed, well before Alexander (the Great) and the high points of the Hellenistic Greek period, there is evidence that the Greeks were already doing some type of algebraic geometry.


Algebra, which is derived from the Arabic word meaning completion or "reunion of broken parts", reached a new high watermark during the golden age of Islamic mathematics around the 10th century AD. For example, Omar Khayyam (of Rubaiyat fame) solved cubic equations. The next significant leap in algebraic geometry, a Renaissance in the 16th and 17th centuries, is quintessentially European: Cardano, Fontana, Pascal, Descartes, Fermat. The 19th and 20th centuries welcomed the dazzling contributions of Laguerre, Cayley, Riemann, Hilbert, Macaulay, and the Italian school led by Castelnuovo, del Pezzo, Enriques, Fano, and Severi. Modern algebraic geometry has been indelibly altered by van der Waerden, Zariski, Weil, and, in the 1950s and 1960s, by Serre and Grothendieck. Computational algebraic geometry begins with Buchberger, who introduced Groebner bases in 1965 (the first conference on computational algebraic geometry was in 1979).


Magnetism simply could not be explained by classical physics and had to wait for quantum mechanics. The workhorse for studying it mathematically is the Ising model, conceived in 1925. Quantum computing was first introduced by Feynman in 1981 [Fey82]. The study of Ising models that formed the basis for the physical realization of quantum annealers (like D-Wave devices) can be traced to the 1989 paper by Ray, Chakrabarti, and Chakrabarti [RCC89]. Building on the various adiabatic theorems of early quantum mechanics and on complexity theory, adiabatic quantum computing was proposed by Farhi et al. in 2001.


This brings us to current times. The use of computational algebraic geometry (along with Morse homology, Cerf theory, and the Gauss-Bonnet theorem from differential geometry) in the study of adiabatic quantum computing, with our ideas tested numerically on D-Wave quantum processors (physical realizations of the Ising model), was conceived by us, the authors of this expository article. Let us close with the Roman poet Ovid (43 BC-17 AD): "Let others praise ancient times; I am glad I was born in these."


References

  • [ADT19] Hedayat Alghassi, Raouf Dridi, and Sridhar Tayur, Graver bases via quantum annealing with application to non-linear integer programs, arXiv:1902.04215, 2019.
  • [BBRR19] Kelly Boothby, Paul Bunyk, Jack Raymond, and Aidan Roy, Next-generation topology of d-wave quantum processors, D-Wave. 2019.
  • [Bot88] Raoul Bott, Morse theory indomitable, Publications Mathématiques de l’IHÉS 68 (1988), 99–114 (en). MR 90f:58027
  • [BPT00] Dimitris Bertsimas, Georgia Perakis, and Sridhar Tayur, A new algebraic geometry algorithm for integer programming, Management Science 46 (2000), no. 7, 999–1008.
  • [CLO98] David A. Cox, John B. Little, and Donal O’Shea, Using algebraic geometry, Graduate texts in mathematics, Springer, New York, 1998.
  • [CT91] Pasqualina Conti and Carlo Traverso, Buchberger algorithm and integer programming, Proceedings of the 9th International Symposium on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes (London, UK), AAECC-9, Springer-Verlag, 1991, pp. 130–139.
  • [DA17] Raouf Dridi and Hedayat Alghassi, Prime factorization using quantum annealing and computational algebraic geometry, Sci. Rep. 7 (2017).
  • [DAT18a] Raouf Dridi, Hedayat Alghassi, and Sridhar Tayur, A novel algebraic geometry compiling framework for adiabatic quantum computations, arXiv:1810.01440, 2018.
  • [DAT18b]  , Homological description of the quantum adiabatic evolution with a view toward quantum computations, ArXiv:1811.00675. 2018.
  • [DAT19]  , Enhancing the efficiency of adiabatic quantum computations, ArXiv:1903.01486. 2019.
  • [Fau99] Jean-Charles Faugère, A new efficient algorithm for computing Gröbner bases (F4), Journal of Pure and Applied Algebra 139 (1999), no. 1-3, 61–88.
  • [Fau02] Jean Charles Faugère, A new efficient algorithm for computing Gröbner bases without reduction to zero (f5), Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation (New York, NY, USA), ISSAC ’02, ACM, 2002, pp. 75–83.
  • [Fey82] Richard P. Feynman, Simulating physics with computers, International Journal of Theoretical Physics 21 (1982), no. 6, 467–488.
  • [FGG01] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Joshua Lapan, Andrew Lundgren, and Daniel Preda, A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem, Science 292 (2001), no. 5516, 472–475.
  • [FGG14] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann, A quantum approximate optimization algorithm, ArXiv:1411.4028. 2014.
  • [Olv99] Peter J. Olver, Classical invariant theory, London Mathematical Society Student Texts, Cambridge University Press, 1999.
  • [PS01] Pablo A. Parrilo and Bernd Sturmfels, Minimizing polynomial functions, DIMACS Series in Discrete Mathematics and Theoretical Computer Science (2001).
  • [RCC89] P. Ray, B. K. Chakrabarti, and Arunava Chakrabarti, Sherrington-kirkpatrick model in a transverse field: Absence of replica symmetry breaking due to quantum fluctuations, Phys. Rev. B 39 (1989), 11828–11832.
  • [SiIC13] Sei Suzuki, Jun ichi Inoue, and Bikas K. Chakrabarti, Quantum ising phases and transitions in transverse ising models, Springer Berlin Heidelberg, 2013.
  • [ST97] Bernd Sturmfels and Rekha R. Thomas, Variation of cost functions in integer programming, Math. Program. 77 (1997), 357–387.
  • [Stu96] Bernd Sturmfels, Gröbner bases and convex polytopes, University Lecture Series, vol. 8, American Mathematical Society, Providence, RI, 1996. MR 1363949
  • [TTN95] Sridhar R. Tayur, Rekha R. Thomas, and N. R. Natraj, An algebraic geometry algorithm for scheduling in presence of setups and correlated demands, Math. Program. 69 (1995), 369–401.
  • [vDMV02] Wim van Dam, Michele Mosca, and Umesh Vazirani, How powerful is adiabatic quantum computation?
  • [vNW93] J. von Neumann and E. P. Wigner, Über das verhalten von eigenwerten bei adiabatischen prozessen, pp. 294–297, Springer Berlin Heidelberg, Berlin, Heidelberg, 1993.
  • [Wit82] Edward Witten, Supersymmetry and morse theory, J. Differential Geom. 17 (1982), no. 4, 661–692.