Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval

06/12/2020 · Stefano Sarao Mannelli, et al.

Despite the widespread use of gradient-based algorithms for optimizing high-dimensional non-convex functions, understanding their ability to find good minima instead of being trapped in spurious ones remains to a large extent an open problem. Here we focus on gradient flow dynamics for phase retrieval from random measurements. When the ratio of the number of measurements over the input dimension is small, the dynamics remains trapped in spurious minima with large basins of attraction. We find analytically that above a critical ratio those critical points become unstable, developing a negative direction toward the signal. By numerical experiments we show that in this regime the gradient flow algorithm is not trapped; it drifts away from the spurious critical points along the unstable direction and succeeds in finding the global minimum. Using tools from statistical physics we characterize this phenomenon, which is related to a BBP-type transition in the Hessian of the spurious minima.


1 Introduction

In many machine learning applications one optimizes a non-convex loss function; this is often achieved using simple descent algorithms such as gradient descent or its stochastic variations. The positive results obtained in practice are often hard to justify from the theoretical point of view, and this apparent contradiction between non-convex landscapes and the good performance of simple algorithms is a recurrent problem in machine learning.

A successful line of research has studied the geometrical properties of the loss landscape, distinguishing between good minima - which lead to good generalization error - and spurious minima - associated with bad generalization error. The results showed that in some regimes, for several problems from matrix completion [1] to wide neural networks [2, 3], spurious minima disappear and consequently, under weak assumptions [4], gradient descent will converge to good minima. However, these results do not explain numerous other results showing that good and spurious minima are both present, yet gradient descent systematically works [5, 6]. In [7] it was theoretically shown that in a toy model - the spiked matrix-tensor model - it is possible to find good minima with high probability in a regime where exponentially many spurious minima are provably present. In [8] it was shown that this is due to the presence of so-called threshold states in the landscape, which play a key role in the gradient-flow dynamics [9, 10]: at first attracting it, and subsequently triggering the convergence towards lower minima under certain conditions [11, 12]. However, the spiked matrix-tensor model is an unsupervised learning model, and it remained open whether the picture put forward in [7, 8] also arises in learning with neural networks.

In this work we thus study learning with a simple single-layer neural network on data stemming from the well-known phase retrieval problem - which consists of reconstructing a hidden vector having access only to the absolute value of its projection onto random directions. The problem emerges naturally in a variety of imaging applications where the intensity is easier to access than the phase [13, 14, 15, 16], but it appears also in acoustics [17] and quantum mechanics [18]. The phase retrieval problem considered here leads to a high-dimensional and non-convex optimization problem defined as follows.

Phase retrieval. Consider $N$-dimensional sensing vectors $\boldsymbol{\xi}_\mu$, $\mu = 1, \dots, M$, with unitary norm and generated according to a centered Gaussian distribution, and the true labels $y_\mu = |\boldsymbol{\xi}_\mu \cdot \mathbf{W}^*|$, with $M = \alpha N$ and $\mathbf{W}^*$ an unknown teacher-vector (the signal in the phase retrieval literature) from the sphere $\mathbb{S}^{N-1}(\sqrt{N})$. Given the dataset $\{\boldsymbol{\xi}_\mu, y_\mu\}_{\mu=1}^{M}$, the goal is to build an estimator $\mathbf{W}$ of $\mathbf{W}^*$ by minimizing the loss function

\mathcal{L}(\mathbf{W}) = \sum_{\mu=1}^{M} \ell(y_\mu, \boldsymbol{\xi}_\mu \cdot \mathbf{W}), \qquad \ell(y, h) = \frac{1}{4} \left( y^2 - h^2 \right)^2,    (1)

where $\ell$ is a modified square loss commonly used in the literature [19, 20] that ensures a smoother landscape compared to the square loss with the absolute values. The loss function, Eq. (1), is minimized using gradient-descent flow on the sphere $\|\mathbf{W}\|_2^2 = N$, starting from a random initialization,

\dot{\mathbf{W}}(t) = -\nabla_{\mathbf{W}} \mathcal{L}(\mathbf{W}(t)) - \mu(t)\, \mathbf{W}(t),    (2)

with $\mu(t)$ the Lagrange multiplier that enforces the spherical constraint during the dynamics. The value of $\mu(t)$ can be readily evaluated by taking the scalar product of the gradient of the loss with $\mathbf{W}$ and dividing by $N$,

\mu(t) = -\frac{1}{N}\, \mathbf{W}(t) \cdot \nabla_{\mathbf{W}} \mathcal{L}(\mathbf{W}(t)).    (3)

We can finally define the Hessian of the problem; denoting $\delta_{ij}$ the Kronecker delta, it reads

\mathcal{H}_{ij} = \sum_{\mu=1}^{M} \left[ 3\, (\boldsymbol{\xi}_\mu \cdot \mathbf{W})^2 - y_\mu^2 \right] \xi_{\mu i}\, \xi_{\mu j} + \mu\, \delta_{ij}.    (4)

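As a concrete, purely illustrative companion to Eqs. (1)-(4), the following minimal sketch implements the dataset, the loss, and a discretized spherical gradient flow; all function and variable names are ours, and the sizes, step size and iteration counts are arbitrary choices, not the paper's.

```python
import numpy as np

def make_dataset(N, alpha, rng):
    # Unit-norm Gaussian sensing vectors and teacher on the sphere |W*|^2 = N.
    M = int(alpha * N)
    xi = rng.standard_normal((M, N))
    xi /= np.linalg.norm(xi, axis=1, keepdims=True)
    w_star = rng.standard_normal(N)
    w_star *= np.sqrt(N) / np.linalg.norm(w_star)
    y = np.abs(xi @ w_star)                      # true labels y_mu = |xi_mu . W*|
    return xi, y, w_star

def loss(w, xi, y):
    # Modified square loss of Eq. (1), on squared pre-activations.
    h = xi @ w
    return 0.25 * np.sum((h**2 - y**2) ** 2)

def grad_flow_step(w, xi, y, dt):
    # One Euler step of the spherical gradient flow, Eqs. (2)-(3).
    h = xi @ w
    grad = xi.T @ ((h**2 - y**2) * h)            # gradient of Eq. (1)
    N = w.size
    mu = -(w @ grad) / N                         # Lagrange multiplier, Eq. (3)
    w = w + dt * (-grad - mu * w)
    return np.sqrt(N) * w / np.linalg.norm(w)    # re-project onto the sphere

rng = np.random.default_rng(0)
N = 256
xi, y, w_star = make_dataset(N, alpha=6.0, rng=rng)
w = rng.standard_normal(N)
w *= np.sqrt(N) / np.linalg.norm(w)              # random initialization
for t in range(20000):
    w = grad_flow_step(w, xi, y, dt=1e-3)
print("train loss per dimension:", loss(w, xi, y) / N)
```
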
Related work and our main contributions.

Numerous algorithms can be applied to achieve a good reconstruction of the hidden signal (teacher vector) [21, 19]. For random i.i.d. data and labels generated by a teacher, the information-theoretically optimal generalization error has been analyzed in [22], showing that for $\alpha$ below a critical threshold $\alpha_{\rm IT}$, in the limit $N \to \infty$, no estimator is able to obtain a generalization error better than a random guess. On the other hand, for $\alpha > \alpha_{\rm IT}$ algorithms (not necessarily polynomial ones) exist that are able to achieve zero generalization error. While the weak-recovery threshold is achievable with efficient algorithms [23], the information-theoretic threshold is conjectured not to be, and an algorithmic gap has been conjectured to exist, with perfect recovery achievable with the approximate message passing algorithm only for $\alpha$ above an algorithmic threshold $\alpha_{\rm alg}$ [22].

In the present paper we are interested in the algorithmic performance of the gradient descent algorithm. The motivation of the present work is not to compare vanilla gradient descent to other algorithms, but rather to use this well-studied phase-retrieval problem to get insights on the properties of the gradient descent algorithm in high-dimensional non-convex optimization. The spurious minima are known to disappear in phase retrieval if the number of samples in the dataset scales as $O(N \log^3 N)$ with the input dimension $N$ [24]. Later, [20] showed that randomly initialized plain gradient descent will solve the phase retrieval problem with $O(N\, \mathrm{poly}\log N)$ samples. An open theoretical question is whether randomly initialized gradient descent is able to solve the phase retrieval problem with $O(N)$ samples. We explore this question in the present work.

Our main contributions can be summarized in the following points:

  • We show empirically that in the phase retrieval problem, randomly initialized gradient flow is attracted by the so-called threshold states. We show that as the number of samples increases, the geometry of the threshold states changes from minima to saddles with the slope pointing toward the teacher-vector. This transition is akin to the BBP phase transition known in random matrix theory [25]. This transition affects gradient descent which, following the slope, achieves zero generalization error.

  • We obtain a closed formula for the number of samples per dimension, $\alpha_{\rm BBP}$, at which this transition happens. It depends on the joint probability distribution of the labels of the student and the teacher at the threshold states.

  • Using replica theory from statistical physics as a non-rigorous proxy, we characterize the approximate distribution of the labels of the student and the teacher, leading to an approximate prediction for the threshold $\alpha_{\rm BBP}$, notably suggesting that the additional logarithmic factors in the previous works might not be needed.

2 BBP on the threshold states

Figure 1: Evolution of the training loss for systems of size $N$ with varying number of samples $M = \alpha N$. The left panel has labels created using a teacher, while the right panel has random labels constructed using a Gaussian distribution with variance matching the teacher-labels. The left figure shows that the simulations with a teacher also approach the threshold energy before transiting to the global minimum, if $\alpha$ is large enough.

In Fig. 1 (left) we show the loss as a function of iteration time for the phase retrieval problem, defined above, with varying number of samples to dimension ratio $\alpha$ in many different runs (full lines). On the right-hand side of the figure we show the loss, but this time for labels that do not come from a teacher network and are instead randomly reshuffled. We see that in that case the loss converges after a very long time towards a value marked by the dotted line (reproduced also in the left part), which we define to be the so-called threshold energy. We see that for small $\alpha$ the train loss on the phase retrieval problem does not decrease to zero (nor does the test one), while for larger values of $\alpha$ it does very rapidly. Intermediate values of $\alpha$ are close to a critical regime where some realizations find perfect generalization and others do not, with the dynamics staying for a long time close to the threshold energy.
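
As a rough illustration of this protocol, the sketch below reuses the helper functions introduced in Sec. 1 to compare a teacher run with a run on randomized labels; here we simply permute the teacher labels, a stand-in for the Gaussian resampling used in the figure, and all sizes are illustrative.

```python
# Teacher labels vs. reshuffled labels (protocol of Fig. 1), reusing the
# helpers defined in the Sec. 1 sketch. Sizes and step counts are
# illustrative, not the paper's.
rng = np.random.default_rng(1)
N, alpha = 256, 6.0
xi, y, w_star = make_dataset(N, alpha, rng)
y_shuffled = rng.permutation(y)       # same label marginal, no planted signal

for labels, tag in [(y, "teacher"), (y_shuffled, "shuffled")]:
    w = rng.standard_normal(N)
    w *= np.sqrt(N) / np.linalg.norm(w)
    for t in range(50000):
        w = grad_flow_step(w, xi, labels, dt=1e-3)
    print(tag, "final train loss per dimension:", loss(w, xi, labels) / N)
```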

In a different model, the spiked matrix-tensor model, Ref. [8] described exactly this phenomenology and showed that gradient flow starting from random initial conditions has a transition when the Hessian of the spurious minima that trap the dynamics, the so-called threshold states, displays a BBP transition [25]. This leads to the emergence of a descending direction toward the informative minimum, which is correlated with the ground-truth signal $\mathbf{W}^*$. In Fig. 2 we argue that the same mechanism is at play in the phase retrieval problem, and based on these insights we derive an analytic equation for the corresponding threshold $\alpha_{\rm BBP}$ in Sec. 2.1.

Figure 2: Properties of the Hessian for phase retrieval for a system of size $N$ at fixed $\alpha$. On the left figure, we show the evolution of the density of the bulk of the eigenvalues, from zero density in white to high density in blue, the smallest eigenvalue in orange, and the second smallest in green. The picture shows that a BBP transition occurs when the training loss approaches the threshold energy. The right panel depicts the evolution of the training loss (purple) and the generalization loss (dark blue) in time. The training loss rapidly approaches a plateau at the level of the threshold states (black dashed line) and converges towards the teacher after the BBP transition, namely when the smallest eigenvalue detaches from the rest of the bulk. The bottom panels show the distribution of the eigenvalues at five different instants in the dynamics. At the iteration at which we cross the threshold energy, we observe an isolated eigenvalue detaching from the bulk.

2.1 Theory for the BBP threshold

Based on the numerical results just presented, we aim to obtain an equation determining the value of the threshold $\alpha_{\rm BBP}$ such that for $\alpha > \alpha_{\rm BBP}$ the BBP transition occurs, whereas for $\alpha < \alpha_{\rm BBP}$ it does not. For $\alpha < \alpha_{\rm BBP}$ the system is at long times trapped in the threshold states and not able to recover, even weakly, the signal. We define $p(y, \hat{y})$ as the long-time limit of the joint distribution of the estimated labels $\hat{y}_\mu = |\boldsymbol{\xi}_\mu \cdot \mathbf{W}|$ and the true labels $y_\mu$; $p(y, \hat{y})$ allows us to study the Hessian of the threshold states, which is the random matrix defined in (4) with $y_\mu$ and $\hat{y}_\mu$ distributed following the law $p(y, \hat{y})$.
A type of random matrix with a structure similar to the contribution to the Hessian coming from the loss function has been studied recently in [26], where the convergence in probability, for large $N$, of the largest and the second largest eigenvalues of such a matrix was proven. We apply the results of Theorem 1 of [26] to determine the behavior of the smallest and second smallest eigenvalues of the Hessian. Writing the loss contribution as $\mathcal{H} - \mu\,\mathbb{1} = -\alpha D$, with $D = \frac{N}{M} \sum_{\mu} \mathcal{T}_\mu\, \boldsymbol{\xi}_\mu \boldsymbol{\xi}_\mu^{\top}$ and $\mathcal{T}_\mu \equiv y_\mu^2 - 3 \hat{y}_\mu^2$ (see Appendix Sec. A), call

\phi(\lambda) = \lambda\, \mathbb{E}_{p}\!\left[ \frac{\mathcal{T}\, y^2}{\lambda - \mathcal{T}} \right], \qquad \psi_\alpha(\lambda) = \lambda \left( \frac{1}{\alpha} + \mathbb{E}_{p}\!\left[ \frac{\mathcal{T}}{\lambda - \mathcal{T}} \right] \right), \qquad \lambda > \tau \equiv \sup \mathcal{T};    (5)

let $\bar\lambda_\alpha = \arg\min_{\lambda > \tau} \psi_\alpha(\lambda)$, and then the largest eigenvalue of $D$ is $\psi_\alpha(\max(\lambda^*_\alpha, \bar\lambda_\alpha))$, with $\lambda^*_\alpha$ being the solution of $\phi(\lambda) = \psi_\alpha(\lambda)$. The second largest is $\psi_\alpha(\bar\lambda_\alpha)$.

A BBP transition occurs at $\alpha_{\rm BBP}$, the largest $\alpha$ such that $\lambda^*_\alpha \leq \bar\lambda_\alpha$. Following [26] this leads to an equation on $\alpha_{\rm BBP}$ and $\bar\lambda$ which reads:

\phi(\bar\lambda) = \psi_{\alpha_{\rm BBP}}(\bar\lambda).    (6)

We now use an additional assumption, which comes from studies of the gradient descent dynamics of mean-field spin-glasses [27]: the threshold states are marginal, i.e. the smallest eigenvalue of their Hessian is null. As the smallest eigenvalue of the Hessian is determined by the largest eigenvalue of $D$, this imposes the additional condition $\mu = \alpha\, \lambda_{\max}(D)$. Using the second Eq. in (5) to enforce this last condition and the definition of $\mu$ in Eq. (3) one finds (see Appendix Sec. A for more details):

\mu = \alpha\, \psi_\alpha(\bar\lambda), \qquad \mu = \alpha\, \mathbb{E}_{p}\!\left[ \hat{y}^2 \left( y^2 - \hat{y}^2 \right) \right].    (7)

Together, the three eqs. - (6) and the two in (7) - allow to determine the three unknowns $(\alpha_{\rm BBP}, \bar\lambda, \mu)$. They therefore provide an equation for the algorithmic threshold once the distribution $p(y, \hat{y})$ at the threshold states is known.
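
A hedged numerical sketch of how Eqs. (5)-(7) can be solved in practice: given samples of $(y, \hat{y})$ from some law $p$ (below a placeholder toy law, not the 1RSB distribution of Sec. 3), we build the empirical expectations, locate $\bar\lambda_\alpha$, and scan $\alpha$ for the point where Eq. (6) is satisfied. All names are ours.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

rng = np.random.default_rng(0)
n_samp = 200000
s = rng.standard_normal(n_samp)
y = np.abs(s)                                 # noiseless labels, y = |s|
yhat = np.abs(rng.standard_normal(n_samp))    # placeholder estimated labels
T = y**2 - 3 * yhat**2                        # preprocessing, cf. Eq. (5)
tau = T.max()

def psi(lam, alpha):
    return lam * (1.0 / alpha + np.mean(T / (lam - T)))

def phi(lam):
    return lam * np.mean(T * y**2 / (lam - T))

def lam_bar(alpha):
    # argmin of psi_alpha on (tau, infinity); the upper bound is heuristic
    res = minimize_scalar(psi, args=(alpha,),
                          bounds=(tau + 1e-6, tau + 50.0), method="bounded")
    return res.x

def bbp_gap(alpha):
    lb = lam_bar(alpha)
    return phi(lb) - psi(lb, alpha)           # vanishes at the BBP point, Eq. (6)

alphas = np.linspace(1.0, 50.0, 200)
gaps = np.array([bbp_gap(a) for a in alphas])
crossings = np.nonzero(np.diff(np.sign(gaps)))[0]
if crossings.size:
    i = crossings[0]
    print("alpha_BBP ~", brentq(bbp_gap, alphas[i], alphas[i + 1]))
```

With the true threshold-state law of Eq. (17) in place of the toy samples, the same routine, supplemented by the two conditions in Eq. (7), would determine $(\alpha_{\rm BBP}, \bar\lambda, \mu)$.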

Figure 3: The three images show the occurrence of a BBP transition at the moment when the training loss crosses the threshold energy. The images investigate the BBP transition as the input dimension $N$ is increased. At the BBP transition the smallest eigenvalue pops out of the bulk of eigenvalues and the associated eigenvector contains information on the signal. The left figure shows the difference of the smallest eigenvalues (the second with the first in solid line, the third with the second in dotted line). The central image shows the overlap of the first eigenvector (full) and second eigenvector (dotted) with the signal. The transition point appears to shift with $N$, as in the left image. Finally, on the right, the fluctuations of the overlap of the first eigenvector with the teacher are shown; the peak corresponds to the transition.

2.2 Further numerical justifications

We now illustrate and test the theory by numerical simulations. Fig. 2 shows that the training loss slowly tends to the threshold energy before departing in the direction of the global minimum. According to the theory described in the previous section, the phenomenon occurring is a BBP transition. In order to confirm this we characterize the spectrum of the Hessian and focus on the smallest eigenvalues. On the left figure of Fig. 2 we show the density of eigenvalues in blue, and we show separately the smallest eigenvalue in orange and the second smallest eigenvalue in green. During the dynamics the spectrum moves compactly, forming a bulk - except for the largest eigenvalues, which do not play a role in the transition. Approaching the threshold states, for $\alpha > \alpha_{\rm BBP}$ the dynamics feels the presence of a descending direction associated with the smallest eigenvalue. This becomes increasingly strong, and when the negative eigenvalue pops out of the bulk, the dynamics follows it and converges to the global minimum. In the lower panel of Fig. 2 we show snapshots of the spectral density during the dynamics.

In Fig. 3 we study the spectral properties of the Hessian at the time when the training loss hits the threshold energy. In the first panel from the left, we show the difference between the second smallest eigenvalue and the smallest eigenvalue (solid line) and the difference between the third smallest and the second smallest eigenvalue (dotted). The two quantities are close, and very small, until a certain value of $\alpha$ - which depends on the size $N$ and is reported in the caption - after which the former increases linearly whereas the latter remains small. The second property that we investigate, central figure, is the overlap of the first and second eigenvectors with the teacher-vector (respectively solid and dotted lines). This overlap is zero before the transition; then, as $\alpha$ increases, it suddenly grows and finally saturates. The first two panels provide strong evidence that a BBP transition is taking place. To further corroborate these findings, in the right panel we consider the fluctuations of the overlap (Fig. 3 right panel). In statistical physics terms, the overlap plays the role of an order parameter, and its fluctuations diverge at the BBP transition. We indeed find that at the value of $\alpha$ at which the BBP transition seems to take place the fluctuations peak (the more so the larger $N$ is, as expected for a phase transition).
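
The diagnostics of Figs. 2 and 3 can be sketched numerically as follows, reusing the Hessian of Eq. (4); the helper names are ours, and a dense eigen-decomposition is only practical for moderate $N$.

```python
import numpy as np

def hessian(w, xi, y):
    # Hessian of Eq. (4) at the current iterate, including the spherical term.
    h = xi @ w
    N = w.size
    grad = xi.T @ ((h**2 - y**2) * h)
    mu = -(w @ grad) / N                          # Lagrange multiplier, Eq. (3)
    H = (xi * (3 * h**2 - y**2)[:, None]).T @ xi  # sum_mu (3h^2 - y^2) xi xi^T
    return H + mu * np.eye(N)

def bottom_spectrum(w, xi, y, w_star, k=3):
    # Smallest k eigenvalues and the overlap of the bottom eigenvector with
    # the teacher (|v| = 1 and |W*| = sqrt(N), so the overlap lies in [0, 1]).
    vals, vecs = np.linalg.eigh(hessian(w, xi, y))
    overlap = abs(vecs[:, 0] @ w_star) / np.sqrt(w.size)
    return vals[:k], overlap
```

Calling `bottom_spectrum` periodically along the gradient-flow sketch of Sec. 1 should reproduce the qualitative picture described above: near the plateau the two smallest eigenvalues stay together, and past the transition the smallest detaches while the overlap jumps up.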

Finally, we also note the presence of strong finite-size effects that shift the effective value of $\alpha$ at which the transition (a cross-over for finite $N$) takes place.

3 Characterization of threshold states

Figure 4: On the left panel is the loss of the threshold states from the simulations and from the analysis with the 1RSB replica method, evaluated for different values of $\alpha$. The error bars are given by the mean and standard deviation of 1000 simulations with input dimension $N$ and shuffled labels. On the right we compare the moments of the label distribution, Eq. (17) (solid lines), with the moments obtained in the simulations (circles).

The pivotal point of our analysis is that the spurious minima trapping the gradient-flow dynamics and hindering weak recovery are the threshold states. The study of threshold states started in the statistical physics of disordered systems, where it has been conjectured and verified that they play a prominent role in the gradient descent dynamics of such random systems [9]. According to this theory, threshold states are the most numerous minima, and the Hessian associated to the threshold states has a spectrum that is positive semi-definite and gapless.

In the previous section we have obtained a closed set of equations that allows us to obtain $\alpha_{\rm BBP}$ once the distribution $p(y, \hat{y})$ is known. Such a distribution can be derived using tools from statistical physics, in particular replica theory [28, 10, 29]. The main quantity of interest is the partition function, which is defined as the normalization constant of the Gibbs distribution associated with the loss Eq. (1),

Z = \int_{\mathbb{S}^{N-1}(\sqrt{N})} d\mathbf{W}\; e^{-\beta \mathcal{L}(\mathbf{W})},    (8)

where $\beta$ is a parameter associated to the inverse temperature in the physical analog of the problem. We consider the limit $\beta \to \infty$ to study the minima of $\mathcal{L}$ that are relevant in the gradient-flow dynamics.

The disordered systems approach focuses on the average properties of the system. In order to ensure concentration properties as the input dimension goes to infinity, the quantity to average is the logarithm of the partition function. This quantity is in general hard to compute, and we resort to replica theory. The idea is to apply the replica method, described below and in more detail in the Appendix Sec. B, to move from the moments of the partition function to the average of its logarithm. The analysis leads to the free energy density, Eq. (11), that we use to compute the distribution of the labels.

We start by writing the moments of Eq. (8),

\overline{Z^n} = \overline{\int \prod_{a=1}^{n} d\mathbf{W}^a\; e^{-\beta \sum_{a=1}^{n} \mathcal{L}(\mathbf{W}^a)}},    (9)

where we represent the average over the possible datasets with the overbar. The high-dimensional integral in Eq. (9) can be written in terms of the overlap matrix that encodes the similarity between two configurations extracted with the Gibbs measure, and hence between their labels $\hat{y}^a$ and $\hat{y}^b$, with

Q_{ab} = \frac{1}{N}\, \mathbf{W}^a \cdot \mathbf{W}^b, \qquad a, b = 0, 1, \dots, n,    (10)

where the index $a = 0$ refers to the teacher, $\mathbf{W}^0 = \mathbf{W}^*$.

We perform a so-called 1-step replica symmetry breaking (1RSB) analysis [30] that allows us to obtain an explicit expression for the distribution of the labels. Using the 1RSB ansatz on this problem we reduce the number of parameters in Eq. (10) to two, $\chi$ and $\tilde m$: $\chi$ is related to the amplitude of the fluctuations in a minimum when this is perturbed, and $\tilde m$ is a Lagrange multiplier that we use to enforce that the minima in consideration have a Hessian with gapless spectrum, i.e. that they are threshold states. The general prescription for finding the global minimum of this kind of problem would require optimizing over both $\chi$ and $\tilde m$; however, the same formalism can be used to characterize the threshold states by imposing a value for $\tilde m$ and optimizing only over $\chi$ [28, 31]. We evaluate the integral of the partition function Eq. (8) at the threshold using Laplace's method on Eq. (10). Finally we apply the replica method, by taking the analytic continuation of the moments of the partition function and the limit $n \to 0$. With these considerations we obtain the free energy density in the low temperature limit, which reads

(11)

where the symbol $\star$ represents a convolution and we write $\mathcal{N}$ as a compact notation to represent a centered Gaussian distribution with unit variance. The parameters $\chi$ and $\tilde m$ are given via implicit equations that respectively impose Laplace's approximation and that the free energy describes the threshold states (further details are in the Appendix Sec. B):

(12)
(13)

In order to obtain $p(y, \hat{y})$ we follow the strategy introduced in [29]. The partition function in Eq. (8) can be written as a functional integral over the empirical distribution of the labels, $\hat{p}(y, \hat{y})$:

Z = \int \mathcal{D}\hat{p}\;\; e^{N \left[ \mathcal{S}[\hat{p}] \,-\, \beta \alpha\, \mathbb{E}_{\hat{p}}\, \ell(y, \hat{y}) \right]},    (14)

where $\mathcal{S}[\hat{p}]$ is the entropic factor (evaluated on configurations corresponding to the threshold states). This makes clear that the free energy can also be obtained in terms of a large-deviation principle:

\beta f = \min_{\hat{p}} \left\{ \beta \alpha\, \mathbb{E}_{\hat{p}}\, \ell(y, \hat{y}) - \mathcal{S}[\hat{p}] \right\}.    (15)

The minimizer of this variational problem corresponds by definition to $p(y, \hat{y})$. By taking the functional derivative of the free energy with respect to $\hat{p}$ one obtains:

\frac{\delta}{\delta \hat{p}(y, \hat{y})} \left[ \beta \alpha\, \mathbb{E}_{\hat{p}}\, \ell(y, \hat{y}) - \mathcal{S}[\hat{p}] \right]_{\hat{p} = p} = 0.    (16)

Since we have an explicit expression of the free energy in terms of $\hat{p}$, see Eq. (11), we can obtain this distribution directly. In taking the functional derivative of Eq. (11) we must be careful in considering the implicit dependence of the parameters on $\hat{p}$. Finally, inverting Eq. (16) we obtain the label distribution

(17)

The index 1RSB denotes that the free-energy has been obtained within the 1RSB scheme.

We run numerical experiments to characterize the threshold states in terms of their energy and of the moments of the label probability distribution. In Fig. 4 (left) the 1RSB threshold energy is plotted together with the value of the plateau obtained from the simulations in Fig. 1. The 1RSB ansatz appears to be a good approximation of the empirically obtained energy. In the inset we highlight the discrepancy with the empirical line, which may be due both to finite-size effects and to the fact that the 1RSB scheme is an approximation; more involved schemes must be employed to obtain the exact distribution [30]. Finally, in Fig. 4 (right) we show the first moments of the label distribution from the 1RSB analysis and compare them with the numerical results. The 1RSB moments are in good agreement with the empirical ones.
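
A small sketch of the empirical side of this comparison, reusing the shuffled-label run from the earlier Fig. 1 sketch (variables `xi`, `w`, `y_shuffled`); once the loss has reached the plateau, low moments of the pair $(y, \hat{y})$ can be read off directly. The particular choice of moments shown is ours.

```python
# Student labels at the plateau of the shuffled-label run.
yhat = np.abs(xi @ w)
for name, val in {
    "E[yhat^2]":     np.mean(yhat**2),
    "E[yhat^4]":     np.mean(yhat**4),
    "E[y^2 yhat^2]": np.mean(y_shuffled**2 * yhat**2),
}.items():
    print(name, "=", val)
```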

Encouraged by the reasonable, though not perfect, accuracy of the 1RSB approximation, we use the expression (17) as input for the three eqs. (6, 7). Their solution leads to a finite threshold $\alpha_{\rm BBP}$. This value could be compatible with the numerical results presented in Fig. 3 if the finite-size shift saturates for yet larger sizes.

4 Discussion

In Fig. 5 we present the fraction of runs of gradient descent that converge to zero test error in the phase retrieval problem under consideration. We see that a large fraction of simulations achieves convergence before the threshold, in general before $\alpha_{\rm BBP}$. The mechanism behind this difference is not clear to us, nor is it clear whether it will persist as the input dimension tends to infinity. The fact that the empirical threshold seems even smaller than the one predicted by our approximate theory strengthens our conjecture that the critical value of $\alpha$ to get weak recovery for randomly initialized gradient descent is of order one, without the additional logarithmic factors in $N$. In [7] the authors showed that large finite-size effects in the initialization affect the location of the BBP transition. Whether this happens also in this case is unclear and deserves further investigation.

Figure 5: Fraction of simulations with gradient descent that achieve zero test error for various input dimensions $N$. The number of simulations increases for the smaller input dimensions in order to account for the larger fluctuations. The learning rate and the stopping criteria are fixed: a simulation runs until either the loss drops below a small tolerance or the number of steps exceeds a fixed budget of iterations. The logarithmic factor in the running time accounts for the fact that the average initial overlap with the ground truth is of order $1/\sqrt{N}$.
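
A hedged sketch of the experimental protocol behind Fig. 5, reusing the helpers of the Sec. 1 sketch; the tolerance, step budget and recovery criterion below are illustrative stand-ins for the elided values.

```python
import numpy as np

def success_fraction(N, alpha, n_runs, rng, dt=1e-3, max_steps=200000):
    wins = 0
    for _ in range(n_runs):
        xi, y, w_star = make_dataset(N, alpha, rng)
        w = rng.standard_normal(N)
        w *= np.sqrt(N) / np.linalg.norm(w)
        for t in range(max_steps):
            w = grad_flow_step(w, xi, y, dt)
            if t % 100 == 0 and loss(w, xi, y) / N < 1e-10:
                break
        # up to the global sign symmetry, recovery means |m| close to 1
        m = abs(w @ w_star) / N
        wins += (m > 0.99)
    return wins / n_runs
```

Scanning `alpha` over a grid and plotting `success_fraction` against it, for a few values of `N`, gives curves of the type shown in Fig. 5.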

Let us conclude by commenting on the limitations and possible generalizations of the results presented in this paper. A key element of the phase retrieval model is the existence of a phase at small $\alpha$ in which the best achievable generalization error is no better than a random guess. This arises in models that present a symmetry, such as the sign symmetry due to the absolute value in the phase retrieval problem. The picture shown in Fig. 1, where the threshold states are characterized using the gradient-flow dynamics in a variant of the model with randomized labels, is enabled by this symmetry, and the rest of our analysis relies on the simplifications stemming from it. We expect that the BBP picture presented in this work generalizes to all learning problems where at small sample complexity the error is no better than a random guess. Of course, working out the details, even for two-layer neural networks, is an open problem of interest for future work. In most learning problems a naive linear regression often achieves better-than-random-guess performance, and thus an extension of our theory to problems of that type (lacking a symmetry) would be an interesting direction for future work.

Acknowledgements

We thank Carlo Lucibello for precious discussions. We acknowledge funding from the ERC under the European Union’s Horizon 2020 Research and Innovation Programme Grant Agreement 714608-SMiLe; from the French government under management of Agence Nationale de la Recherche (ANR) grant PAIL, and grant "Investissements d’Avenir" LabEx PALM (ANR-10-LABX-0039-PALM) (StatPhysDisSys) and (ANR-19-P3IA-0001) (PRAIRIE 3IA Institute); and from the Simons Foundation collaboration Cracking the Glass Problem (#454935, Giulio Biroli).

References

  • [1] Rong Ge, Jason D Lee, and Tengyu Ma. Matrix completion has no spurious local minimum. In Advances in Neural Information Processing Systems, pages 2973–2981, 2016.
  • [2] Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586–594, 2016.
  • [3] Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.
  • [4] Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In Conference on learning theory, pages 1246–1257, 2016.
  • [5] Itay Safran and Ohad Shamir. Spurious local minima are common in two-layer ReLU neural networks. arXiv preprint arXiv:1712.08968, 2017.
  • [6] Shengchao Liu, Dimitris Papailiopoulos, and Dimitris Achlioptas. Bad global minima exist and SGD can reach them. arXiv preprint arXiv:1906.02613, 2019.
  • [7] Stefano Sarao Mannelli, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborova. Passed & spurious: analysing descent algorithms and local minima in spiked matrix-tensor model. ICML, 2019.
  • [8] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, and Lenka Zdeborová. Who is afraid of big bad minima? analysis of gradient-flow in spiked matrix-tensor models. In Advances in Neural Information Processing Systems, pages 8676–8686, 2019.
  • [9] Leticia F Cugliandolo and Jorge Kurchan. Analytical solution of the off-equilibrium dynamics of a long-range spin-glass model. Physical Review Letters, 71(1):173, 1993.
  • [10] Fabrizio Antenucci, Silvio Franz, Pierfrancesco Urbani, and Lenka Zdeborová. Glassy nature of the hard phase in inference problems. Physical Review X, 9(1):011020, 2019.
  • [11] TLH Watkin and J-P Nadal. Optimal unsupervised learning. Journal of Physics A: Mathematical and General, 27(6):1899, 1994.
  • [12] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Marvels and pitfalls of the langevin algorithm in noisy high-dimensional inference. Physical Review X, 10(1):011057, 2020.
  • [13] Adriaan Walther. The question of phase retrieval in optics. Optica Acta: International Journal of Optics, 10(1):41–49, 1963.
  • [14] Rick P Millane. Phase retrieval in crystallography and optics. JOSA A, 7(3):394–411, 1990.
  • [15] Robert W Harrison. Phase problem in crystallography. JOSA A, 10(5):1046–1055, 1993.
  • [16] Oliver Bunk, Ana Diaz, Franz Pfeiffer, Christian David, Bernd Schmitt, Dillip K Satapathy, and J Friso Van Der Veen. Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels. Acta Crystallographica Section A: Foundations of Crystallography, 63(4):306–314, 2007.
  • [17] Radu Balan, Pete Casazza, and Dan Edidin. On signal reconstruction without phase. Applied and Computational Harmonic Analysis, 20(3):345–356, 2006.
  • [18] John V Corbett. The Pauli problem, state reconstruction and quantum-real numbers. Reports on Mathematical Physics, 1(57):53–68, 2006.
  • [19] Emmanuel J Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
  • [20] Yuxin Chen, Yuejie Chi, Jianqing Fan, and Cong Ma. Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval. Mathematical Programming, 176(1-2):5–37, 2019.
  • [21] Philip Schniter and Sundeep Rangan. Compressive phase retrieval via generalized approximate message passing. IEEE Transactions on Signal Processing, 63(4):1043–1055, 2014.
  • [22] Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, and Lenka Zdeborová. Optimal errors and phase transitions in high-dimensional generalized linear models. Proceedings of the National Academy of Sciences, 116(12):5451–5460, 2019.
  • [23] Marco Mondelli and Andrea Montanari. Fundamental limits of weak recovery with applications to phase retrieval. Foundations of Computational Mathematics, 19(3):703–773, 2019.
  • [24] Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. Foundations of Computational Mathematics, 18(5):1131–1198, 2018.
  • [25] Jinho Baik, Gérard Ben Arous, Sandrine Péché, et al. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. The Annals of Probability, 33(5):1643–1697, 2005.
  • [26] Yue M Lu and Gen Li. Phase transitions of spectral initialization for high-dimensional non-convex estimation. Information and Inference: A Journal of the IMA, 2019.
  • [27] Leticia F Cugliandolo. Dynamics of glassy systems. arXiv preprint cond-mat/0210312, 2002.
  • [28] Rémi Monasson. Structural glass transition and the entropy of the metastable states. Physical Review Letters, 75(15):2847, 1995.
  • [29] Silvio Franz, Giorgio Parisi, Maxime Sevelev, Pierfrancesco Urbani, and Francesco Zamponi. Universality of the SAT-UNSAT (jamming) threshold in non-convex continuous constraint satisfaction problems. SciPost Physics, 2(3):019, 2017.
  • [30] Marc Mézard, Giorgio Parisi, and Miguel-Angel Virasoro. Spin glass theory and beyond. World Scientific Publishing, 1987.
  • [31] Francesco Zamponi. Mean field theory of spin glasses. arXiv preprint arXiv:1008.4844, 2010.
  • [32] Hidetoshi Nishimori. Statistical physics of spin glasses and information processing: an introduction, volume 111. Clarendon Press, 2001.
  • [33] Florent Krzakala and Lenka Zdeborová. Hiding quiet solutions in random constraint satisfaction problems. Physical Review Letters, 102(23):238701, 2009.
  • [34] Ker-Chau Li. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. Journal of the American Statistical Association, 87(420):1025–1039, 1992.
  • [35] Jorge Kurchan, Giorgio Parisi, Pierfrancesco Urbani, and Francesco Zamponi. Exact theory of dense amorphous hard spheres in high dimension. ii. the high density regime and the gardner transition. The Journal of Physical Chemistry B, 117(42):12979–12994, 2013.

Appendix A BBP transition of spectral initialization

In non-convex estimation problems, such as phase retrieval, big advantages follow from the development of well-tailored spectral methods to be used as an initialization step. Recently, the outcome of one such spectral method, widely used in phase retrieval and based on the construction of a data matrix

D = \frac{N}{M} \sum_{\mu=1}^{M} \mathcal{T}(y_\mu)\, \boldsymbol{\xi}_\mu \boldsymbol{\xi}_\mu^{\top}    (18)

from sensing vectors with elements of order one (i.e. $\sqrt{N}\,\boldsymbol{\xi}_\mu$ for sensing vectors on the unitary sphere, according to our definition) and measurements $y_\mu$, has been exactly derived [26]. The method involves a preprocessing function $\mathcal{T}(\cdot)$ that can be optimized to further improve the results. Once the data matrix is constructed, the eigenvector corresponding to the largest eigenvalue can be used as an estimator of the signal $\mathbf{W}^*$.
To obtain the performance of this kind of spectral initialization, it is assumed [26] that the measurements $y_\mu$ are independently drawn according to a density function conditional on the teacher pre-activations $s_\mu = \boldsymbol{\xi}_\mu \cdot \mathbf{W}^*$, associated to the particular acquisition process, and it is recalled that the $s_\mu$ are themselves Gaussian random variables, due to the definition of the problem. Finally, in the large $N$ limit the empirical average used to construct the data matrix can be replaced by the expected value over these two distributions.
The result of [26] goes as follows. Given two functions defined as

\phi(\lambda) = \lambda\, \mathbb{E}\!\left[ \frac{\mathcal{T}(y)\, s^2}{\lambda - \mathcal{T}(y)} \right]    (19)

and

\psi_\alpha(\lambda) = \lambda \left( \frac{1}{\alpha} + \mathbb{E}\!\left[ \frac{\mathcal{T}(y)}{\lambda - \mathcal{T}(y)} \right] \right),    (20)

with $\lambda > \tau \equiv \sup \mathcal{T}$, and given

\zeta_\alpha(\lambda) = \psi_\alpha\big( \max(\lambda, \bar\lambda_\alpha) \big)    (21)

with

\bar\lambda_\alpha = \arg\min_{\lambda > \tau} \psi_\alpha(\lambda),    (22)

the two largest eigenvalues of $D$, $\lambda_1$ and $\lambda_2$, are such that

\lambda_1 \to \zeta_\alpha(\lambda^*_\alpha)    (23)

with $\lambda^*_\alpha$ the solution of $\phi(\lambda) = \psi_\alpha(\lambda)$, and

\lambda_2 \to \psi_\alpha(\bar\lambda_\alpha).    (24)

A phase transition occurs at the largest $\alpha$ such that $\lambda^*_\alpha \leq \bar\lambda_\alpha$, which can be evaluated by imposing $\lambda^*_\alpha = \bar\lambda_\alpha$, or equivalently

\phi(\bar\lambda_\alpha) = \psi_\alpha(\bar\lambda_\alpha),    (25)

which corresponds to $\lambda_1 = \lambda_2$, as at that point $\zeta_\alpha(\lambda^*_\alpha) = \psi_\alpha(\bar\lambda_\alpha)$. At larger $\alpha$ the largest eigenvalue pops out from the spectrum bulk ($\lambda_1 > \lambda_2$) and the corresponding eigenvector develops a finite correlation with the signal, in a phenomenon called BBP transition [25]; hence the definition of $\alpha_{\rm BBP}$ as the point where this occurs.

It is interesting to note that the structure of the data matrix is closely reminiscent of the structure of the first term in the Hessian of our problem (see Eq. (4)); indeed, incidentally, the original idea of this spectral method for initialization can be traced back to the study of the Hessian's principal directions [34]. In particular we observe that

\mathcal{H} - \mu\, \mathbb{1} = -\alpha\, D    (26)

provided that the preprocessing function is

\mathcal{T}(y_\mu, \hat{y}_\mu) = y_\mu^2 - 3\, \hat{y}_\mu^2.    (27)

Note that we consider a case for which the measurements have a one-to-one correspondence with the teacher pre-activations (i.e. $y_\mu = |s_\mu|$). Moreover, in the problem discussed, the empirical averages involve not only the measurements $y_\mu$ but also the labels $\hat{y}_\mu$ estimated in correspondence of some $\mathbf{W}$ of interest. Therefore the expected values in Eqs. (19) and (20) should be taken over the relative joint probability distribution function $p(y, \hat{y})$. In conclusion, the results just mentioned tell immediately what is the largest $\alpha$ (i.e. $\alpha_{\rm BBP}$):

\phi(\bar\lambda_\alpha) = \psi_\alpha(\bar\lambda_\alpha) \quad \text{at} \quad \alpha = \alpha_{\rm BBP},    (28)

before the smallest eigenvalue of the Hessian pops out from the spectrum bulk, being also associated to an eigenvector with a finite projection on the signal to be detected.

In this case we are focusing on the performance of a gradient descent dynamics, which for mean-field spin-glasses naturally gets stuck on what are called threshold states [27]. We argue that the gradient descent dynamics applied to phase retrieval, when retrieval fails, will also approach threshold states, which are mainly characterized by their property of being marginal, i.e. the smallest eigenvalue of their Hessian is null. This qualifies the relevant $\mathbf{W}$ as a typical configuration belonging to the threshold states and $p(y, \hat{y})$ as the joint probability distribution at the threshold states. Moreover, it allows to introduce the marginal condition $\lambda_{\min}(\mathcal{H}) = 0$, which can be re-expressed by equating $\alpha\, \lambda_1(D)$ to $\mu$:

\mu = \alpha\, \psi_\alpha(\bar\lambda_\alpha).    (29)

Finally, the definition of the spherical parameter $\mu$, Eq. (3) in the main text, must be considered to close the system of equations:

\mu = \alpha\, \mathbb{E}_{p}\!\left[ \hat{y}^2 \left( y^2 - \hat{y}^2 \right) \right],    (30)

to be used to determine $(\alpha_{\rm BBP}, \bar\lambda, \mu)$ in correspondence of the $p(y, \hat{y})$ of the threshold states.

The resulting picture is as follows. In the small $\alpha$ regime, gradient descent will systematically approach the threshold states and remain stuck there. However, starting from $\alpha = \alpha_{\rm BBP}$, in the Hessian of these states, which is otherwise marginal, an isolated eigenvalue pops out from the bulk, immediately becoming negative. Moreover, the eigenvector associated to such negative direction, naturally followed by gradient descent, has a finite overlap with the signal. Therefore, we argue, it is from that point on that the signal should be easily retrieved.

Appendix B Replica Analysis

The field of the physics of disordered systems has developed numerous tools to deal with random systems [30, 32]. At an abstract level, using those tools means that we identify the inference problem with a physical system subject to a certain potential. The randomness comes from having a dataset made of random projections. The estimator is mapped into a spherical spin, and the loss function becomes the energy - or Hamiltonian. Finally, the temperature in which the system lives is sent to zero, and the system tends to the lowest energy, therefore minimizing the loss. The ground truth in the inference problem becomes equivalent to a minimum planted in the energetic landscape [33]. For example, this formulation is equivalent to a physical system that before the experiment is liquid, but as we cool it down it can become either a crystal - an ordered solid - or a glass - an amorphous solid. Finding the crystal means reconstructing the signal.

B.1 Partition function

Moving to the mathematical aspects of the problem, we define a Gibbs distribution associated with the problem and evaluate its normalization constant - the partition function $Z$. The partition function, and in particular its logarithm divided by the inverse temperature - the free energy - contains all the information we aim to extract from the problem. By taking the proper derivatives, and possibly adding an external field, we can compute relevant macroscopic properties such as the average overlap with the ground truth and the average loss achieved. In disordered systems we have to consider the additional complication given by the randomness. Therefore, we need to consider the average free energy, that is the average of the logarithm of a high-dimensional integral, which can be computed thanks to the simple observation:

\overline{\log Z} = \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n},    (31)

which for arbitrary $n$ is not simpler than computing the logarithm directly, but becomes much simpler if $n \in \mathbb{N}$ and we perform an analytic continuation to $n \to 0$. Under this - the replica trick - the average of the logarithm is equivalent to computing the average of the $n$th moment of the partition function and taking the limit. Formally, the $n$th moment corresponds to the partition function of $n$ identical - replicated - systems that do not interact but share the same realization of the disorder.
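
For completeness, the identity in Eq. (31) follows from expanding $Z^n$ at small $n$:

```latex
Z^n = e^{n \log Z} = 1 + n \log Z + O(n^2)
\quad\Longrightarrow\quad
\overline{\log Z} = \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n} .
```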

The problem is now computing the moments of the partition function, which in general is prohibitive, and we have to use an ansatz on the specific form of the solution; in particular, we use the so-called replica symmetry breaking ansatz [30]. This largely reduces the number of parameters. Finally, the average free energy can be evaluated by a set of saddle point equations.

We can now move to the analysis. The partition function already defined in the main text is

Z = \int_{\mathbb{S}^{N-1}(\sqrt{N})} d\mathbf{W}\; e^{-\beta \mathcal{L}(\mathbf{W})},    (32)

and we consider its $n$th moment - the replicated partition function -

\overline{Z^n} = \overline{\int \prod_{a=1}^{n} d\mathbf{W}^a\; e^{-\beta \sum_{a=1}^{n} \mathcal{L}(\mathbf{W}^a)}}.    (33)

This is formally equivalent to having $n$ independent systems. Introduce the overlaps with the projector, $h^a_\mu = \boldsymbol{\xi}_\mu \cdot \mathbf{W}^a$, with indices $a = 0, \dots, n$, where $a = 0$ gives the overlap with the ground truth and the others are the overlaps with the estimators of the $n$ systems. Introduce those quantities in the replicated partition function via Dirac's deltas, using their Fourier transform:

\overline{Z^n} = \overline{\int \prod_{a=1}^{n} d\mathbf{W}^a \prod_{\mu, a} dh^a_\mu\, d\hat{h}^a_\mu\; e^{\, i \sum_{\mu, a} \hat{h}^a_\mu \left( h^a_\mu - \boldsymbol{\xi}_\mu \cdot \mathbf{W}^a \right)}\; e^{-\beta \sum_{\mu} \sum_{a \geq 1} \ell(y_\mu, h^a_\mu)}},    (34)

where the measure $d\hat{h}^a_\mu$ contains the normalization factor of the Fourier transform. We introduce the matrix of overlaps between the estimators and the ground truth, $Q_{ab} = \frac{1}{N}\, \mathbf{W}^a \cdot \mathbf{W}^b$ with $a, b = 0, \dots, n$. This is done using the same idea of introducing delta functions, and gives a contribution to the action [35, 31]. The expression can now be factorized in $\mu$, so we can drop the index $\mu$ from $h$ and $\hat{h}$. Observing that

\overline{e^{-i \sum_{a} \hat{h}^a\, \boldsymbol{\xi} \cdot \mathbf{W}^a}} = e^{-\frac{1}{2} \sum_{a, b} \hat{h}^a Q_{ab} \hat{h}^b},    (35)

we can integrate over $\mathbf{W}^a$ and $\hat{h}^a$, and write a simplified replicated partition function, $\overline{Z^n} \propto \int dQ\, e^{N S(Q)}$, with action

S(Q) = \frac{1}{2} \log\det Q + \alpha\, \overline{\log \int \prod_{a=1}^{n} dh^a\; \mathcal{N}(\mathbf{h};\, 0,\, Q)\; e^{-\beta \sum_{a=1}^{n} \ell(y, h^a)}},    (36)

where $\mathbf{h} = (h^0, h^1, \dots, h^n)$ is a centered Gaussian vector with covariance $Q$ and $y = |h^0|$. The first term is an entropic term that accounts for the degeneracy of the matrix $Q$ in the space of symmetric matrices, and the second term is an energetic term that accounts for the potential acting on the system.

Observe that so far we did not make any ansatz on the structure of the overlap matrix $Q$. In the next subsection we consider the 1-step replica symmetry breaking (1RSB) ansatz.

B.2 One-step replica symmetry breaking

The 1RSB scheme consists in making an ansatz on the structure of the overlap matrix [30]. The assumption is that not all the replicated systems have the same overlap - which would correspond to the replica symmetric ansatz - but that the systems are clustered. Systems inside the same cluster have a larger overlap; systems in different clusters have a smaller overlap. This translates into the following parameters: $q_1$ for the overlaps inside the same cluster, $q_0$ for the overlaps across different clusters, $m$ for the dimension of the clusters, and finally the overlaps $Q_{0a}$ with the signal. Schematically we have

Q_{ab} = \begin{cases} 1 & \text{if } a = b, \\ q_1 & \text{if } a \neq b \text{ in the same cluster}, \\ q_0 & \text{otherwise}, \end{cases}    (37)

with $Q$ of dimension $(n+1) \times (n+1)$, and the inner cluster matrices of dimension $m \times m$.
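
As a concrete illustration of the block structure in Eq. (37), a small helper (names are ours) that assembles the 1RSB overlap matrix of the $n$ replicas for given $n$, $m$, $q_0$, $q_1$:

```python
import numpy as np

def overlap_1rsb(n, m, q0, q1):
    # n replicas split into n/m clusters of size m: overlap 1 on the
    # diagonal, q1 inside a cluster, q0 across clusters, cf. Eq. (37).
    assert n % m == 0, "the cluster size must divide the number of replicas"
    Q = np.full((n, n), q0)
    for b in range(n // m):
        Q[b * m:(b + 1) * m, b * m:(b + 1) * m] = q1
    np.fill_diagonal(Q, 1.0)
    return Q

# overlap_1rsb(6, 3, q0=0.1, q1=0.7) has two 3x3 blocks with off-diagonal 0.7
```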

The analysis proceeds in a standard way: we derive the 1RSB free energy with the associated saddle point equations and move to the zero temperature limit [30, 32]; we impose the marginal stability corresponding to the threshold states [28]; finally, we derive the label distribution [29].

For notational convenience we call $\mathcal{N}_{\Sigma}$ the probability density function of a Gaussian with zero mean and (co-)variance $\Sigma$, and we use the symbol $\star$ to indicate convolutions, i.e. $(f \star g)(x) = \int dy\, f(y)\, g(x - y)$.

We can plug Eq. (37) into Eq. (36) and obtain the 1RSB formulation of the action,

(38)

and using Eq. (31) the free energy is $f = -\lim_{n \to 0} \lim_{N \to \infty} \frac{1}{\beta n N} \log \overline{Z^n}$.

We can now take derivatives to find the saddle point equations.

(39)
(40)
(41)

where we define $\Sigma$ to be the covariance matrix of the replicated pre-activations, with entries fixed by the overlap parameters $q_0$, $q_1$ and the overlap with the signal.

Zero temperature and parameter ansatz

As the temperature goes to zero, the order parameters $q_1$ and $m$ need to be rescaled with the temperature as $q_1 = 1 - \chi T$ and $m = \tilde m T$, with $T = 1/\beta$. Instead, $q_0$ and the overlap with the signal are not affected by the limit. The reason why some parameters need to be rescaled is that as long as there is a positive temperature, the replicated systems exploit this thermal energy to fluctuate in the basin of the minimum, and therefore their overlap is given by the average overlap in the basin of attraction. When the temperature drops to zero, the thermal energy vanishes and each system shrinks to a point. $\chi$ represents the fluctuations that the systems can have when they receive an infinitesimal amount of thermal energy. For the same physical reason, the cluster itself also shrinks to a point; the rate of convergence to the point is given by $\tilde m$.

Under those observations we obtain the 1RSB free energy

(42)

and the corresponding saddle point equations from Eqs. (39-41).

However, the solution of those equations would lead to the global minimum of the loss, while we are interested in the threshold states. We follow the idea of [28], where $\tilde m$ is used as a Lagrange multiplier that selects the threshold states. $\tilde m$ is fixed by imposing the marginal stability condition, i.e. that the spectrum of the Hessian of the minima in consideration touches zero; following [29], this is given by

(43)

At the threshold the overlap with the signal is zero, $Q_{0a} = 0$ for $a \geq 1$, and by the symmetries of the problem also $q_0 = 0$. In fact, $q_0 = 0$ is always a solution of Eq. (41), and if $Q_{0a} = 0$ then $q_0 = 0$ is also a solution of the second saddle point equation, Eq. (40), as in the RHS the Gaussian becomes degenerate and its argument is an odd function. Therefore we can restrict the equations to just the one for $\chi$, Eq. (39), which becomes

(44)

Rewriting these equations by making the convolutions explicit, we obtain the equations presented in the main text. With those elements we can obtain the label distribution presented in the main text.