Shaping the learning landscape in neural networks around wide flat minima

05/20/2019, by Carlo Baldassi et al.

Learning in Deep Neural Networks (DNN) takes place by minimizing a non-convex high-dimensional loss function, typically by a stochastic gradient descent (SGD) strategy. The learning process is observed to find good minimizers without getting stuck in local critical points, and such minimizers are often satisfactory at avoiding overfitting. How these two features can be kept under control in nonlinear devices composed of millions of tunable connections is a profound and far reaching open question. In this paper we study basic non-convex neural network models which learn random patterns, and derive a number of basic geometrical and algorithmic features which suggest some answers. We first show that the error loss function presents few extremely wide flat minima (WFM) which coexist with narrower minima and critical points. We then show that the minimizers of the cross-entropy loss function overlap with the WFM of the error loss. We also show examples of learning devices for which WFM do not exist. From the algorithmic perspective we derive entropy driven greedy and message passing algorithms which focus their search on wide flat regions of minimizers. In the case of SGD and cross-entropy loss, we show that a slow reduction of the norm of the weights along the learning process also leads to WFM. We corroborate the results by a numerical study of the correlations between the volumes of the minimizers, their Hessian and their generalization performance on real data.


I Introduction

Artificial Neural Networks (ANN), currently also known as Deep Neural Networks (DNN), are powerful nonlinear devices used to perform different types of learning tasks (mackay2003information). From the algorithmic perspective, learning in ANN is in principle a hard computational problem in which a huge number of parameters, the connection weights, need to be optimally tuned. Yet, at least for supervised pattern recognition tasks, learning has become a relatively feasible process in many applications across domains, and the performance reached by DNNs has had a huge impact on the field of Machine Learning (ML).

DNN models have evolved very rapidly in the last decade, mainly by an empirical trial and selection process guided by heuristic intuitions. As a result, current DNNs are in a sense akin to complex physical or biological systems, which are known to work but for which a detailed understanding of the principles underlying their functioning is still lacking. The tendency to learn efficiently and the ability to generalize with limited overfitting are two properties that often coexist in DNNs, and yet a unifying theoretical framework is still missing.

Here we provide analytical results on the geometrical structure of the loss landscape of ANN which shed light on the success of Deep Learning (DL) (lecun2015deep) algorithms and allow us to design novel efficient algorithmic schemes.

We focus on non-convex ANN models that exhibit sufficiently complex behavior and yet are amenable to detailed analytical and numerical studies. Building on methods of statistical physics of disordered systems, we analyze the complete geometrical structure of the minimizers of the loss function of ANNs learning random patterns and discuss how the current DNN models are able to exploit such structure, e.g. starting from the choice of the loss function, avoiding algorithmic traps and reaching rare solutions which belong to wide flat regions of the weight space. In our study the notion of flatness is given in terms of the volume of the weights around a minimizer which do not lead to an increase of the loss value. This generalizes the so-called Local Entropy of a minimizer (baldassi_subdominant_2015), defined for discrete weights as the log of the number of optimal weight assignments within a given Hamming distance from the reference minimizer. We call these regions High Local Entropy (HLE) regions for discrete weights or Wide Flat Minima (WFM) for continuous weights. Our results are derived analytically for the case of random data and corroborated by numerics on real data. In order to eliminate ambiguities which may arise from changes of scale of the weights, we control the norm of the weights in each of the units that compose the network. The outcomes of our study can be summarized as follows.

(i) We show analytically that ANN learning random patterns possess the structural property of having extremely robust regions of optimal weights, namely wide flat minima of the loss, whose existence is important to achieve convergence in the learning process. Though these wide minima are rare compared to the dominant critical points (absolute narrow minima, local minima or saddle points in the loss surface), they can be accessed by a large family of simple learning algorithms. We also show analytically that other learning machines, such as the Parity Machine, do not possess wide flat minima.

(ii) We show analytically that the choice of the cross-entropy loss function has the effect of biasing learning algorithms toward HLE or WFM regions.

(iii) We derive a greedy algorithm – Entropic Least Action Learning (eLAL) – and a message passing algorithm – focusing Belief Propagation (fBP) – which zoom in their search on wide flat regions of minimizers.

(iv) We compute the volumes associated to the minimizers found by different algorithms using Belief Propagation.

(v) We show numerically that the volumes correlate well with the spectra of the Hessian on computationally tractable networks and with the generalization performance on real data. The algorithms which search for WFM display a spectrum which is much more concentrated around zero eigenvalues compared to plain SGD.

Our results on random patterns support the conclusion that the minimizers which are relevant for learning are not the most frequent isolated and narrow ones (which also are computationally hard to sample) but the rare ones which are extremely wide. While this phenomenon was recently disclosed for the case of discrete weights(baldassi_subdominant_2015, ; baldassi_unreasonable_2016, ), here we demonstrate that it is present also in non convex ANN with continuous weights. Building on these results we derive novel algorithmic schemes and shed light on the performance of SGD with the cross-entropy loss function. Numerical experiments suggest that the scenario generalizes to real data and is consistent with other numerical results on DNN(keskar2016large, ).

II HLE/WFM regions exist in non convex neural devices storing random patterns

In what follows we analyze the geometrical structure of the weights space by considering the simplest non convex neural devices storing random patterns: the single layer network with discrete weights and the two layer networks with both continuous and discrete weights. The choice of random patterns, for which no generalization is possible, is motivated by the possibility of using analytical techniques from statistical physics of disordered systems and by the fact that we want to identify structural features which do not depend on specific correlation patterns of the data.

II.1 The simple example of discrete weights

In the case of binary weights it is well known that even for the single layer network the learning problem is computationally challenging. Therefore we begin our analysis by studying the so-called binary perceptron, which maps vectors $\boldsymbol{\xi} \in \{-1,+1\}^{N}$ of $N$ inputs to binary outputs as $\sigma_{\mathrm{out}} = \mathrm{sign}\!\left(\frac{1}{\sqrt{N}}\sum_{i=1}^{N} W_{i}\,\xi_{i}\right)$, where $\boldsymbol{W} \in \{-1,+1\}^{N}$ is the synaptic weight vector.

Given a training set composed of $M$ input patterns $\boldsymbol{\xi}^{\mu}$ ($\mu = 1,\dots,M$) and their corresponding desired outputs $\sigma^{\mu}$, the learning problem consists in finding a $\boldsymbol{W}$ such that $\mathrm{sign}\!\left(\frac{1}{\sqrt{N}}\sum_{i} W_{i}\,\xi_{i}^{\mu}\right) = \sigma^{\mu}$ for all $\mu$. The entries $\xi_{i}^{\mu}$ and the outputs $\sigma^{\mu}$ are random unbiased i.i.d. $\pm 1$ variables. As discussed in ref. (krauth-mezard) (but see also the rigorous bounds in ref. (ding2018capacity)), perfect classification is possible with probability one in the limit of large $N$ up to a critical value of $\alpha = M/N$, usually denoted as $\alpha_c$ ($\alpha_c \simeq 0.83$); above this value, the probability of finding a solution drops to zero. $\alpha_c$ is called the capacity of the device.

The standard analysis of this model is based on the study of the zero temperature limit of the Gibbs measure with a loss (or energy) function which counts the number of errors (NE) over the training set:

$L_{\mathrm{NE}}(\boldsymbol{W}) = \sum_{\mu=1}^{M} \Theta\!\left(-\frac{\sigma^{\mu}}{\sqrt{N}}\sum_{i=1}^{N} W_{i}\,\xi_{i}^{\mu}\right)$  (1)

where $\Theta$ is the Heaviside step function, $\Theta(x) = 1$ if $x > 0$ and $\Theta(x) = 0$ otherwise. The Gibbs measure is given by

$P(\boldsymbol{W}) = \frac{1}{Z}\, e^{-\beta L_{\mathrm{NE}}(\boldsymbol{W})}$  (2)

where $\beta$ is the inverse temperature parameter. For large values of $\beta$, $P(\boldsymbol{W})$ concentrates on the minima of $L_{\mathrm{NE}}$. The key analytical obstacle is the evaluation of the normalization factor, the partition function $Z$:

$Z = \sum_{\{\boldsymbol{W}\}} e^{-\beta L_{\mathrm{NE}}(\boldsymbol{W})}$  (3)

In the zero-temperature limit ($\beta \to \infty$) and below $\alpha_c$, the partition function simply counts all solutions to the learning problem,

$Z = \sum_{\{\boldsymbol{W}\}} \mathbb{X}_{\xi,\sigma}(\boldsymbol{W})$  (4)

where $\mathbb{X}_{\xi,\sigma}(\boldsymbol{W})$ is a characteristic function which evaluates to one if all patterns are correctly classified, and to zero otherwise.
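As a concrete illustration (a minimal NumPy sketch of ours, not the authors' code), the error-counting loss of eq. (1) and the characteristic function of eq. (4) for the binary perceptron on random $\pm 1$ patterns read:

```python
import numpy as np

def ne_loss(W, xi, sigma):
    """Number-of-errors loss of eq. (1): counts misclassified patterns.

    W     : (N,) vector of binary weights in {-1, +1}
    xi    : (M, N) matrix of random +/-1 input patterns
    sigma : (M,) vector of desired +/-1 outputs
    """
    stabilities = sigma * (xi @ W) / np.sqrt(len(W))
    # a pattern is an error when its stability is negative (Heaviside of minus the stability)
    return int(np.sum(stabilities < 0))

def is_solution(W, xi, sigma):
    """Characteristic function of eq. (4): 1 if all patterns are classified correctly."""
    return int(ne_loss(W, xi, sigma) == 0)

# Example: N = 101 binary weights, M = 50 random patterns (alpha ~ 0.5).
rng = np.random.default_rng(0)
N, M = 101, 50
xi = rng.choice([-1, 1], size=(M, N))
sigma = rng.choice([-1, 1], size=M)
W = rng.choice([-1, 1], size=N)
print(ne_loss(W, xi, sigma), is_solution(W, xi, sigma))
```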

$Z$ is an exponentially fluctuating quantity (in $N$), and its most probable value is obtained by exponentiating the average of $\log Z$, denoted by $\langle \log Z \rangle_{\xi,\sigma}$, over the realizations of the patterns:

$Z_{\mathrm{typ}} \simeq e^{\langle \log Z \rangle_{\xi,\sigma}}$  (5)

The calculation of $\langle \log Z \rangle$ was done in the '80s and '90s by the replica and the cavity methods of statistical physics and, as mentioned above, the results predict that the learning task undergoes a threshold phenomenon at $\alpha_c$, where the probability of existence of a solution jumps from one to zero in the large-$N$ limit (krauth-mezard). This result has recently been put on rigorous grounds in ref. (ding2018capacity). Similar calculations predict that for any $\alpha > 0$, the vast majority of the exponentially numerous solutions on the hypercube are isolated, separated by an extensive Hamming mutual distance (huang2014origin). In the same range of $\alpha$, there also exists an even larger number of local minima at non-zero loss, a result that has been corroborated by analytical and numerical findings on stochastic learning algorithms which satisfy detailed balance (horner1992dynamics). Recently it became clear that by relaxing the detailed-balance condition it is possible to design simple algorithms which solve the problem efficiently (braunstein-zecchina; baldassi-et-all-pnas; baldassi-2009).

II.2 Local entropy theory

The existence of effective learning algorithms indicates that the traditional statistical physics calculations, which focus on the set of solutions that dominates the zero-temperature Gibbs measure (i.e. the most numerous ones), are effectively blind to the solutions actually found by such algorithms. Numerical evidence suggests that in fact the solutions found by heuristics are not at all isolated: on the contrary, they appear to belong to regions with a high density of other nearby solutions. This puzzle has been solved very recently by an appropriate large deviations study (baldassi_subdominant_2015; baldassi_local_2016; baldassi2016learning; baldassi_unreasonable_2016) in which the tools of statistical physics have been used to study the most probable value of the local entropy of the loss function, i.e. a function that is able to detect the existence of regions of a given linear size containing a high density of solutions even when the number of these regions is small compared to the number of isolated solutions. For binary weights the local entropy function is the (normalized) logarithm of the number of solutions at Hamming distance $D$ from a reference solution $\tilde{\boldsymbol{W}}$:

$\mathcal{S}_{D}(\tilde{\boldsymbol{W}}, D) = \frac{1}{N}\,\log \mathcal{N}(\tilde{\boldsymbol{W}}, D)$  (6)

with

$\mathcal{N}(\tilde{\boldsymbol{W}}, D) = \sum_{\{\boldsymbol{W}\}} \mathbb{X}_{\xi,\sigma}(\boldsymbol{W})\;\delta\!\left(\sum_{i=1}^{N} W_{i}\,\tilde{W}_{i},\; N(1-2D)\right)$  (7)

and where $\delta(\cdot,\cdot)$ is the Kronecker delta symbol. In order to derive the typical values that the local entropy can take, one needs to compute the Gibbs measure based on the local entropy,

$P(\tilde{\boldsymbol{W}}) = \frac{1}{Z_{\mathrm{LE}}}\; e^{\,y\,N\,\mathcal{S}_{D}(\tilde{\boldsymbol{W}}, D)}$  (8)

where $y$ has the role of an inverse temperature. For large values of $y$ this probability measure focuses on the configurations $\tilde{\boldsymbol{W}}$ surrounded by an exponential number of solutions within a distance $D$. The regions of high local entropy (HLE) are then described in the regime of large $y$ and small $D$. In particular, the calculation of the expected value of the optimal local entropy

$\mathcal{S}(D) = \left\langle \max_{\tilde{\boldsymbol{W}}}\, \mathcal{S}_{D}(\tilde{\boldsymbol{W}}, D) \right\rangle_{\xi,\sigma}$  (9)

shows the existence of extremely dense clusters of solutions up to values of $\alpha$ close to $\alpha_c$ (baldassi_subdominant_2015; baldassi_local_2016; baldassi2016learning; baldassi_unreasonable_2016).
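For very small systems the local entropy of eqs. (6)-(7) can be evaluated by exhaustive enumeration, which helps fix ideas. The following sketch (ours, reusing the is_solution helper from the previous snippet; here D is counted as a raw Hamming distance rather than a normalized one) does exactly that:

```python
import itertools
import numpy as np

def local_entropy(W_ref, D, xi, sigma):
    """Brute-force local entropy: (1/N) * log(# solutions at Hamming distance D
    from the reference configuration W_ref). Only feasible for tiny N."""
    N = len(W_ref)
    count = 0
    for bits in itertools.product([-1, 1], repeat=N):
        W = np.array(bits)
        if np.sum(W != W_ref) == D and is_solution(W, xi, sigma):
            count += 1
    return np.log(count) / N if count > 0 else -np.inf

# Toy example with N = 15, M = 5 (only 2^15 configurations to enumerate).
rng = np.random.default_rng(1)
N, M = 15, 5
xi = rng.choice([-1, 1], size=(M, N))
sigma = rng.choice([-1, 1], size=M)
# take any exact solution as the reference (one exists w.h.p. at this small alpha)
W_ref = next(np.array(b) for b in itertools.product([-1, 1], repeat=N)
             if is_solution(np.array(b), xi, sigma))
for D in range(0, 6):
    print(D, local_entropy(W_ref, D, xi, sigma))
```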

The probability measure (8) can be written in an equivalent form that generalizes to the non zero errors regime, is analytically simpler to handle, and leads to novel algorithmic schemes(baldassi_unreasonable_2016, ):

$P(\tilde{\boldsymbol{W}}) = \frac{1}{Z}\; e^{\,y\,N\,\Phi(\tilde{\boldsymbol{W}},\beta,\gamma)}$  (10)

where $\Phi(\tilde{\boldsymbol{W}},\beta,\gamma)$ is a “local free entropy” potential in which the distance constraint is enforced through a Lagrange multiplier $\gamma$:

$\Phi(\tilde{\boldsymbol{W}},\beta,\gamma) = \frac{1}{N}\,\log \sum_{\{\boldsymbol{W}\}} e^{-\beta\, L_{\mathrm{NE}}(\boldsymbol{W}) \;-\; \gamma\, N\, d(\boldsymbol{W},\tilde{\boldsymbol{W}})}$  (11)

where $d(\cdot,\cdot)$ is some monotonically increasing function of the distance between configurations, defined according to the type of weights under consideration. In the limit $\beta \to \infty$ and by choosing $\gamma$ so that a given distance $D$ is selected, this expression reduces to eq. (8).
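To make the reduction explicit (a sketch of the intermediate step in our notation; it is not spelled out in the text above), one can use a standard saddle-point / Legendre-transform argument:

```latex
% In the \beta\to\infty limit only zero-error configurations contribute, so the
% local free entropy of eq. (11) becomes a Legendre transform of the local entropy:
\Phi(\tilde{\boldsymbol{W}},\gamma)
  \;=\; \frac{1}{N}\log \sum_{\{\boldsymbol{W}\}} \mathbb{X}_{\xi,\sigma}(\boldsymbol{W})\,
        e^{-\gamma N d(\boldsymbol{W},\tilde{\boldsymbol{W}})}
  \;=\; \frac{1}{N}\log \int \! dD\;
        e^{\,N\left[\mathcal{S}_D(\tilde{\boldsymbol{W}},D)-\gamma D\right]}
  \;\simeq\; \max_{D}\left[\mathcal{S}_D(\tilde{\boldsymbol{W}},D)-\gamma D\right].
% Choosing \gamma = \partial_D \mathcal{S}_D |_{D^*} selects the desired distance D^*,
% and e^{\,y N \Phi} then reproduces, up to the Legendre term \gamma D^*, the weight
% e^{\,y N \mathcal{S}_D(\tilde{\boldsymbol{W}},D^*)} appearing in eq. (8).
```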

The crucial property of eq. (10) comes from the observation that by choosing $y$ to be a non-negative integer, the partition function can be rewritten as:

$Z = \sum_{\{\tilde{\boldsymbol{W}}\}} \sum_{\{\boldsymbol{W}^{a}\}} e^{-\beta\, E_{\mathrm{eff}}\left(\{\boldsymbol{W}^{a}\}_{a=1}^{y},\, \tilde{\boldsymbol{W}}\right)}$  (12)

where

$E_{\mathrm{eff}}\left(\{\boldsymbol{W}^{a}\},\tilde{\boldsymbol{W}}\right) = \sum_{a=1}^{y}\left[\, L_{\mathrm{NE}}(\boldsymbol{W}^{a}) \;+\; \frac{\gamma}{\beta}\, N\, d(\boldsymbol{W}^{a},\tilde{\boldsymbol{W}})\,\right]$  (13)

These are the partition function and the effective loss of $y+1$ interacting real replicas of the system, one of which acts as the reference system ($\tilde{\boldsymbol{W}}$) while the remaining $y$ are identical, each being subject to the energy constraint and to the interaction term with the reference system. As discussed in ref. (baldassi_unreasonable_2016), several algorithmic schemes can be derived from this framework by minimizing $E_{\mathrm{eff}}$. Here we shall also use the above approach to study the existence of WFMs in continuous models and to design message passing and greedy learning algorithms driven by the local entropy of the solutions.

II.3 Two-layer networks with continuous weights

As for the discrete case, we are able to show that in non convex networks with continuous weights the WFMs exist, are rare and yet accessible to simple algorithms. In order to perform an analytic study, we consider the simplest non-trivial two-layer neural network, the committee machine with non-overlapping receptive fields. It consists of $N$ input units, one hidden layer with $K$ units and one output unit. The input units are divided into $K$ disjoint sets of $N/K$ units. Each set is connected to a different hidden unit. The input to the $l$-th hidden unit is given by $h_{l}^{\mu} = \frac{1}{\sqrt{N/K}}\sum_{i=1}^{N/K} w_{li}\,\xi_{li}^{\mu}$, where $w_{li}$ is the connection weight between the input unit $i$ and the hidden unit $l$, and $\xi_{li}^{\mu}$ is the $i$-th input to the $l$-th hidden unit. As before, $\mu$ is a pattern index. We study analytically the pure classifier case in which each unit implements a threshold transfer function and the loss function is the error loss. Other types of (smooth) functions, more amenable to numerical simulation, will also be discussed in a subsequent section. The output of the $l$-th hidden unit is given by

$\tau_{l}^{\mu} = \mathrm{sign}\!\left(h_{l}^{\mu}\right) = \mathrm{sign}\!\left(\frac{1}{\sqrt{N/K}}\sum_{i=1}^{N/K} w_{li}\,\xi_{li}^{\mu}\right)$  (14)

In the second layer all the weights are fixed and equal to one, and the overall output of the network is simply given by a majority vote, $\sigma_{\mathrm{out}}^{\mu} = \mathrm{sign}\!\left(\sum_{l=1}^{K}\tau_{l}^{\mu}\right)$. As for the binary perceptron, the learning problem consists in mapping each of the $M$ random input patterns $\boldsymbol{\xi}^{\mu}$ (with $\mu = 1,\dots,M$, $l = 1,\dots,K$, $i = 1,\dots,N/K$) onto a randomly chosen output $\sigma^{\mu}$. Both the $\xi_{li}^{\mu}$ and the $\sigma^{\mu}$ are independent random variables which take the values $\pm 1$ with equal probability. For a given set of patterns, the volume of the subspace of the network weights which correctly classifies the patterns, the so-called version space, is given by

$V = \int \prod_{l=1}^{K}\prod_{i=1}^{N/K} dw_{li}\;\; \mathbb{X}_{\xi,\sigma}(\boldsymbol{w})\; \prod_{l=1}^{K} \delta\!\left(\sum_{i=1}^{N/K} w_{li}^{2} - \frac{N}{K}\right)$  (15)

where $\mathbb{X}_{\xi,\sigma}(\boldsymbol{w})$ is the characteristic function that equals one when all patterns are correctly classified, and where we have imposed a spherical constraint on the weights of each hidden unit in order to keep the volume finite (though exponential in $N$). In the case of binary weights the integral would become a sum over all the configurations, and the volume would reduce to the overall number of zero-error assignments of the weights.
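To fix the notation, here is a minimal NumPy sketch (ours; the $1/\sqrt{N/K}$ scaling and the per-unit normalization convention are the ones assumed above) of the tree-like committee machine forward pass and its error count:

```python
import numpy as np

def committee_output(w, xi):
    """Tree-like committee machine with sign activations.

    w  : (K, n) weights, one row per hidden unit (disjoint receptive fields, n = N/K)
    xi : (M, K, n) inputs, already split into the K receptive fields
    Returns the +/-1 outputs for the M patterns (majority vote of the hidden units).
    """
    K, n = w.shape
    pre = np.einsum('mkn,kn->mk', xi, w) / np.sqrt(n)   # hidden pre-activations
    tau = np.sign(pre)                                  # hidden outputs, eq. (14)
    return np.sign(tau.sum(axis=1))                     # majority vote (K odd)

def ne_loss_committee(w, xi, sigma):
    """Number of misclassified patterns (error loss)."""
    return int(np.sum(committee_output(w, xi) != sigma))

# Example: K = 3 hidden units, N = 300 inputs, M = 200 random patterns.
rng = np.random.default_rng(2)
K, n, M = 3, 100, 200
w = rng.standard_normal((K, n))
w *= np.sqrt(n) / np.linalg.norm(w, axis=1, keepdims=True)  # spherical constraint per unit
xi = rng.choice([-1, 1], size=(M, K, n))
sigma = rng.choice([-1, 1], size=M)
print(ne_loss_committee(w, xi, sigma))
```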

The committee machine has been studied extensively in the '90s (barkai1992broken; schwarze1992generalization; engel1992storage). The capacity of the network can be derived by computing the typical weight space volume as a function of the number of correctly classified patterns $M = \alpha N$, in the large-$N$ limit. As for the binary case, the most probable value of $V$ is obtained by exponentiating the average of $\log V$, $\langle \log V \rangle_{\xi,\sigma}$, a difficult task which is achieved by the replica method (mezard1987spin; barkai1990statistical).

For the smallest non-trivial value of $K$, $K = 3$, it has been found that above a certain value of $\alpha$ the space of solutions changes abruptly, becoming clustered into multiple components (strictly speaking, each cluster is composed of a multitude of exponentially small domains (monasson1995weight)). Below this clustering transition the geometrical structure is not clustered and can be described by the simplest version of the replica method, known as the replica-symmetric (RS) solution. Above it, the analytical computation of the typical volume requires a more sophisticated analysis that properly describes a clustered geometrical structure. This analysis can be performed by a variational technique which is known in statistical physics as the replica-symmetry-breaking (RSB) scheme, and the clustered geometrical structure of the solution space is known as the RSB phase.

The capacity of the network, above which perfect classification becomes impossible, is found to be $\alpha_c \simeq 3.02$ for $K = 3$. In the limit of large $K$ (but still $K \ll N$), the clustering transition occurs at a finite number of patterns per weight (barkai1992broken), whereas the critical capacity grows with $K$ as $\sqrt{\ln K}$ (monasson1995weight).

II.4 The existence of wide flat minima

In order to detect the existence of WFMs we use a large deviation measure which is the continuous version of the measure used in the discrete case: each configuration of the weights is re-weighted by a local volume term, analogously to the analysis of section II.2. For the continuous case, however, we adopt a slightly different formalism which simplifies the analysis. Instead of constraining the set of real replicas (not to be confused with the virtual replicas of the replica method) to be at distance $D$ from a reference weight vector, we can identify the same WFM regions by constraining directly the real replicas to be at a given mutual distance. This novel technique (see Appendix C for the derivation) allows us to use directly the first-step formalism of the RSB scheme (1-RSB). Similarly to the discrete case, the computation of the maximal WFM volumes leads to the following results: for $K = 3$ and in the large-$N$ limit we obtain an explicit expression for the local entropy as a function of the mutual overlap $q$ between the real replicas, in which the remaining order parameter satisfies a saddle point equation that needs to be solved numerically (the full expression is reported in Appendix C). The local entropy is the logarithm of the volume of the solutions, normalized by $N$, under the spherical constraints on the weights and with the real replicas forced to be at a mutual overlap $q \in (0,1)$, which is bijectively related to the distance between the replicated systems. It is composed of the sum of two terms: the first one corresponds to the log-volume at $\alpha = 0$, where all configurations are solutions and only the geometric constraints are present; this is an upper bound for the local entropy. The second term is in general negative and represents the log of the fraction of solutions at overlap $q$; we call it the normalized local entropy. If WFMs exist for positive $\alpha$, we expect the normalized local entropy to be close to zero in an extended region of $q$ below one (analogously to the discrete case at small $D$, cf. fig. 4). In fig. 1 (top panel) we report the normalized local entropy vs the overlap $q$ for different values of $\alpha$. Indeed, one may observe that the behavior is qualitatively similar to that of the binary perceptron: besides the isolated solutions and all the related local minima and saddles predicted by the standard statistical physics analysis (barkai1992broken; schwarze1992generalization; engel1992storage; monasson1995weight), there exist absolute minima which are flat at relatively large distances. Reaching such wide minima efficiently is non trivial, and different algorithms can have drastically different behaviors, as we will discuss in detail in sec. V.

Figure 1: Normalized local entropy vs $q$, i.e. the logarithm of the fraction of configurations of real replicas at mutual overlap $q$ in which all replicas have zero error loss. The curves are for a tree-like committee machine trained on random patterns, for various values of $\alpha$, obtained from a replica calculation in the limit of large $N$ and of a large number of real replicas. When the curves approach zero as $q$ decreases below one, it means that nearly all configurations are minima, and thus that the replicas are collectively exploring a wide minimum (any $q < 1$ implies extensive distances between replicas). Top: the case $K = 3$. Bottom: the limiting case of a large number of hidden units ($1 \ll K \ll N$).

The case $K = 3$ is still relatively close to the simple perceptron, though the geometrical structure of its minima is already dominated by non-convex features for sufficiently large $\alpha$. A case which is closer to more realistic ANNs is that of large $K$ (but still $K \ll N$), which, luckily enough, is easier to study analytically. In this limit we find an analogous expression for the normalized local entropy, again with a single order parameter fixed by a saddle point equation (see Appendix C). In fig. 1 (bottom panel) we observe that indeed WFMs still exist for all finite values of $\alpha$.

The results of the above WFM computation may require small corrections due to RSB effects, which however are expected to be very tiny due to the compact nature of the space of solutions at small distances.

A more informative aspect is to study the volumes around the solutions found by different algorithms. This can be done by the Belief Propagation method, similarly to the computation of the weight enumerator function in error correcting codes(di2006weight, ).

II.5 Not all devices are appropriate: the Parity Machine does not display HLE/WFM regions

The existence of WFM is a structural property of neural networks. Its origin lies in the threshold-sum form of the nonlinearity characterizing the formal neurons. As a check of this claim, we can analyze a model which is in some sense complementary, namely the so-called parity machine. We take its network structure to be identical to the committee machine, except for the output unit, which performs the product of the hidden units instead of taking a majority vote. While the outputs of the hidden units are still given by sign activations, eq. (14), the overall output of the network reads $\sigma_{\mathrm{out}}^{\mu} = \prod_{l=1}^{K} \tau_{l}^{\mu}$. For a given set of patterns, the volume of the weights which correctly classifies the patterns is then given by the analog of eq. (15), with the characteristic function defined through this output.

Parity machines are closely related to error correcting codes based on parity checks. The geometrical structure of the absolute minima of the error loss function is known (monasson1995weight) to be composed of multiple regions, each in one-to-one correspondence with the internal representations of the patterns. For random patterns such regions are typically tiny and we expect the WFM to be absent. Indeed, the computation of the volume proceeds analogously to the previous case (it is actually even simpler, see Appendix C.1), and it shows that in this case, for any distance, the volumes of the minima are always bounded away from the maximal possible volume, i.e. the volume one would find for the same distance when no patterns are stored: the log-ratio of the two volumes is a negative constant, independent of the distance. In other words, the minima never become flat, at any distance scale.
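For comparison with the committee-machine sketch above, only the output aggregation changes for the parity machine (a hypothetical one-function variant of ours, reusing the same imports and conventions):

```python
def parity_output(w, xi):
    """Tree-like parity machine: same hidden layer as the committee machine,
    but the overall output is the product of the hidden units' signs."""
    K, n = w.shape
    tau = np.sign(np.einsum('mkn,kn->mk', xi, w) / np.sqrt(n))
    return np.prod(tau, axis=1)
```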

II.6 The connection between Local Entropy and Cross-Entropy

Given that dense regions of optimal solutions exist in non convex ANN, at least in the case of independent random patterns, it remains to be seen which role they play in current models. Starting with the case of binary weights, and then generalizing the result to more complex architectures and to continuous weights, we can show that the most widely used loss function, the so called Cross-Entropy (CE) loss, focuses precisely on such rare regions (see ref. (baldassi2018role, ) for the case of stochastic weights).

For the sake of simplicity, we consider a binary classification task with one output unit. The CE cost function for each input pattern reads

$\ell_{\mathrm{CE}}(\Delta^{\mu};\gamma) = \log\!\left(1 + e^{-2\gamma\,\Delta^{\mu}}\right)$  (16)

where $\Delta^{\mu} = \frac{\sigma^{\mu}}{\sqrt{N}}\sum_{i} W_{i}\,\xi_{i}^{\mu}$ is the stability of pattern $\mu$. The parameter $\gamma$ allows to control the degree of “robustness” of the training, see figure 2. In standard machine learning practice $\gamma$ is simply set to $1$, but a global rescaling of the weights can lead to a basically equivalent effect. That setting can thus be interpreted as leaving $\gamma$ implicit, letting its effective value, and hence the norm of the weights, be determined by the initial conditions and the training algorithm. As we shall see, controlling $\gamma$ explicitly along the learning process plays a crucial role in finding HLE/WFM regions.
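A small numerical illustration (assuming the $\log(1+e^{-2\gamma\Delta})$ form written in eq. (16) above, which is our reconstruction) of how increasing $\gamma$ concentrates the penalty on misclassified patterns:

```python
import numpy as np

def ce_loss(delta, gamma):
    """Binary cross-entropy per pattern as a function of the stability delta,
    in the assumed log(1 + exp(-2*gamma*delta)) parametrization."""
    # np.logaddexp(0, x) = log(1 + exp(x)) computed in a numerically stable way
    return np.logaddexp(0.0, -2.0 * gamma * delta)

delta = np.linspace(-2, 2, 9)
for gamma in (0.5, 2.0, 8.0):
    print(gamma, np.round(ce_loss(delta, gamma), 3))
# For large gamma the loss is ~0 for delta > 0 and grows steeply for delta < 0,
# i.e. it increasingly penalizes only the misclassified patterns, cf. eq. (1).
```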

For the binary case, however, the norm is fixed and thus we keep $\gamma$ as an explicit parameter. Note that, below $\alpha_c$ and at large $\gamma$, the minima of the CE loss are the solutions to the training problem, i.e. they coincide with those of $L_{\mathrm{NE}}$.

Figure 2: Binary cross-entropy function for various values of $\gamma$ (increasing from top to bottom). For low values of $\gamma$, the loss is non-zero even for small positive values of the input, and thus the minimization procedure tends to favor more robust solutions. For large values of $\gamma$ the loss becomes negligible for correctly classified patterns and very large for misclassified ones. The dotted line shows the corresponding NE function, which is just $1$ in case of an error and $0$ otherwise, cf. eq. (1).

We proceed by first showing that the minimizers of this loss correspond to near-zero errors for a wide range of values of , and next by showing that these minimizers are surrounded by an exponential number of zero error solutions.

In order to study the probability distribution of the minima of $L_{\mathrm{CE}}$ in the large-$N$ limit, we need to compute its Gibbs distribution (in particular, the average of the log of the partition function, see eq. (5)), as was done for the error loss $L_{\mathrm{NE}}$. The procedure follows standard steps and is detailed in Appendix B. The method requires solving two coupled integral equations as functions of the control parameters $\alpha$, $\beta$ and $\gamma$. In figure 3 we show the behavior of the fraction of errors vs the loading $\alpha$, for various values of $\gamma$. Up to relatively large values of $\alpha$, the optimum of $L_{\mathrm{CE}}$ corresponds to extremely small values of the error fraction, virtually equal to zero for any accessible size $N$.

Figure 3: Mean error rate achieved when optimizing the CE loss in the binary single-layer network, as predicted by the replica analysis, at various values of $\gamma$ (increasing from top to bottom). The figure also shows the point $\alpha_c$ (up to which solutions exist) and the point up to which non-isolated solutions exist.

Having established that by minimizing the CE one ends up in regions of perfect classification where the error loss function is essentially zero, it remains to be understood which type of weight configurations are found. Does the CE converge to an isolated point-like solution in the weight space, such as the typical zero-energy configurations of the error function (a quite unlikely outcome, given that finding isolated solutions is a well-known intractable problem), or does it converge to the rarer regions of high local entropy?

In order to establish the geometrical nature of the typical minima of the CE loss, we need to compute the average value of the local entropy (which tells us how many zero-energy configurations of the error loss function can be found within a given distance $D$ from a given $\tilde{\boldsymbol{W}}$, see eq. (6)) when $\tilde{\boldsymbol{W}}$ is sampled from the minima of $L_{\mathrm{CE}}$. This can be accomplished by a well-known analytical technique (franz1995recipes) which was developed for the study of the energy landscape in disordered physical systems. The computation is relatively involved, and here we report only the final outcome. For the dedicated reader, all the details of the calculation, which relies on the replica method and includes a double analytic continuation, can be found in Appendix B. As reported in figure 4, we find that the typical minima of the CE loss for small finite $\gamma$ are indeed surrounded by an exponential number of zero-error solutions. In other words, the CE focuses on HLE regions.

Figure 4: Average local entropy around a typical minimum of the CE loss, for various values of $\alpha$ and $\gamma$. The gray upper curve, corresponding to $\alpha = 0$, is an upper bound, since in that case all configurations are solutions. At the smaller value of $\alpha$, the two curves at intermediate values of $\gamma$ nearly saturate the upper bound at small distances, revealing the presence of dense regions of solutions (HLE regions). There is a slight improvement when $\gamma$ is increased further, but the curve at $\gamma \to \infty$ shows that the improvement cannot be monotonic: in that limit, the measure is dominated by isolated solutions. This is reflected by the gap at small $D$ in which the entropy becomes negative, signifying the absence of solutions in that range. At the larger value of $\alpha$ the curves are lower, as expected; there, a gap at small $D$ appears already at moderate $\gamma$, and larger values of $\gamma$ are needed in order to find HLE regions.

As an algorithmic check we have verified that while a Simulated Annealing approach gets stuck at very high energies when trying to minimize the error loss function, the very same algorithm with the CE loss is indeed successful up to relatively high values of $\alpha$, with just slightly worse performance compared to an analogous procedure based on the local entropy (baldassi_local_2016). In other words, the CE loss on single-layer networks is a computationally cheap and reasonably good proxy for the LE loss. These analytical results extend straightforwardly to two-layer networks with binary weights. The study of continuous-weight models can be performed by resorting to the Belief Propagation method.

III Belief Propagation and focusing Belief Propagation

Belief Propagation (BP), also known as sum-product, is an iterative message-passing algorithm for statistical inference. When applied to the problem of training a committee machine with a given set of input-output patterns, it can be used to obtain, at convergence, useful information on the probability distribution, over the weights of the network, induced by the Gibbs measure. In particular, it allows us to compute the marginals of the weights as well as their entropy, which in the zero-temperature regime is simply the logarithm of the volume of the solutions, eq. (15), rescaled by the number of variables $N$. The results are approximate, but (with high probability) they approach the correct value in the limit of large $N$ in the case of random uncorrelated inputs, at least in the replica-symmetric phase of the space of the parameters. Due to the self-averaging property, in this limit the macroscopic properties of any given problem (such as the entropy) tend to converge to a common limiting value, and therefore a limited amount of experiments with a few samples is sufficient to describe very well the entire statistical ensemble.

We have used BP to study the case of the zero-temperature tree-like committee machine with continuous weights and $K = 3$. We have mostly used values of $N$ large enough to produce results in quite good agreement with the replica theory analysis. The implementation can be made efficient by encoding each message with only two quantities (see Appendix D). As mentioned above, this algorithm works well in the replica-symmetric phase, which for our case means below the clustering transition. Above that point, the (vanilla) algorithm does not converge at all.

However, BP can be employed to perform additional analyses as well. In particular, it can be modified rather straightforwardly to explore and describe the region surrounding any given configuration, as it allows us to compute the local entropy (i.e. the log-volume of the solutions) for any given distance and any reference configuration (this is a general technique; the details for our case are reported in Appendix D.2). The convergence issues are generally much less severe in this case. Even in the RSB phase, if the reference configuration is a solution in a wide minimum, the structure is locally replica-symmetric, and therefore the algorithm converges and provides accurate results, at least up to a value of the distance where other unconnected regions of the solution space come into consideration. In our tests, the only other issue arose occasionally at very small distances, where convergence is instead prevented by the loss of accuracy stemming from finite size effects and limited numerical precision.

Additionally, the standard BP algorithm can be modified and transformed into a (very effective) solver. There are several ways to do this, most of which are purely heuristic. However, it was shown in ref. (baldassi_unreasonable_2016) that adding a particular set of self-interactions to the weight variables can approximately but effectively describe the replicated system of eq. (12): in other words, this technique can be used to analyze the local-entropy landscape instead of the Gibbs one. By using a sufficiently large number of replicas and by following an annealing protocol in the coupling parameter $\gamma$ (starting from a low value and making it diverge), this algorithm focuses on the maximally dense regions of solutions, thus ending up in wide flat minima. For these reasons, the algorithm was called “focusing-BP” (fBP). The implementation closely follows that of ref. (baldassi_unreasonable_2016) (complete details are provided in Appendix D.3). Our tests – detailed below – show that this algorithm is the best solver (by a wide margin) among the several alternatives that we tried in terms of the robustness of the minima found (and thus of their generalization properties, as also discussed below). Moreover, it also achieves the highest capacity, nearly reaching the critical capacity where all solutions disappear.

IV Entropic Least Action Learning

Least Action Learning (LAL) (mitchison1989bounds) is a heuristic greedy algorithm that was designed to extend the well-known Perceptron algorithm to the case of committee machines with a single binary output and sign activation functions. It takes one parameter, the learning rate $\eta$. In its original version, patterns are presented randomly one at a time, and at most one hidden unit is affected at a time. In case of correct output, nothing is done, while in case of error the hidden unit, among those with a wrong output, whose pre-activation was closest to the threshold (and is thus the easiest to fix) is selected, and the standard perceptron learning rule (with rate $\eta$) is applied to it. In our tests we simply extended it to work in mini-batches, to make it more directly comparable with stochastic-gradient-based algorithms: for a given mini-batch, we first compute all the pre-activations and the outputs for all patterns at the current value of the weights, then we apply the LAL learning rule for each pattern in turn.
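The rule can be summarized in a short sketch (our own NumPy rendering of the description above, with hypothetical parameter names; the weight layout follows the committee-machine sketch of sec. II.3):

```python
import numpy as np

def lal_minibatch_update(w, xi, sigma, lr=0.1):
    """Least Action Learning on one mini-batch of a tree committee machine.

    w : (K, n) weights; xi : (B, K, n) inputs; sigma : (B,) targets in {-1, +1}.
    Pre-activations and outputs are computed once at the current weights,
    then the LAL rule is applied pattern by pattern."""
    K, n = w.shape
    pre = np.einsum('bkn,kn->bk', xi, w) / np.sqrt(n)
    tau = np.sign(pre)
    out = np.sign(tau.sum(axis=1))
    for mu in range(len(sigma)):
        if out[mu] == sigma[mu]:
            continue                                  # correct output: do nothing
        # among the hidden units with the wrong output, pick the one whose
        # pre-activation is closest to the threshold (the easiest to fix) ...
        wrong = np.where(tau[mu] != sigma[mu])[0]
        l = wrong[np.argmin(np.abs(pre[mu, wrong]))]
        # ... and apply the standard perceptron rule to that unit only
        w[l] += lr * sigma[mu] * xi[mu, l] / np.sqrt(n)
    return w
```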

This algorithm proves to be surprisingly effective at finding minima of the NE loss very quickly: in the random patterns case, its algorithmic capacity is higher than that of gradient-based variants and almost as high as that of fBP, and it requires comparatively few epochs. It is also computationally very fast, owing to its simplicity. However, as we show in sec. V, it finds solutions that are much narrower compared to those of other algorithms.

In order to drive LAL toward WFM regions, we add a local-entropy component to it, by applying the technique described in ref. (baldassi_unreasonable_2016) (see eq. (13)): we run $y$ replicas of the system in parallel and we couple them with an elastic interaction. The resulting algorithm, which we call entropic-LAL (eLAL), can be described as follows. We initialize $y$ replicas randomly with weights $\boldsymbol{w}^{a}$ and compute their average $\bar{\boldsymbol{w}}$. We present mini-batches independently to each replica, using different permutations of the dataset for each of them. At each mini-batch, we apply the LAL learning rule. Then, each replica is pushed toward the center with some strength proportional to a parameter $\lambda$: more precisely, we add a term $\lambda\,(\bar{\boldsymbol{w}} - \boldsymbol{w}^{a})$ to each of the weight vectors $\boldsymbol{w}^{a}$. After this update, we recompute the average $\bar{\boldsymbol{w}}$. At each epoch, we increase the interaction strength $\lambda$. The algorithm stops when the replicas have collapsed to a single configuration. A sketch of this scheme is given below.
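A minimal sketch of the replicated scheme, reusing the lal_minibatch_update function from the previous snippet (the $\lambda$ schedule, its initial value and the collapse threshold are our own illustrative assumptions):

```python
import numpy as np

def elal_train(xi, sigma, K, n, y=5, lr=0.1, lam0=0.01, lam_growth=1.2,
               batch_size=32, max_epochs=1000, seed=0):
    """Entropic LAL: y coupled replicas, each trained with the LAL rule and
    elastically attracted towards their center; the coupling grows every epoch
    until the replicas collapse onto a single configuration."""
    rng = np.random.default_rng(seed)
    W = [rng.standard_normal((K, n)) for _ in range(y)]
    lam = lam0
    for epoch in range(max_epochs):
        center = sum(W) / y
        for a in range(y):
            perm = rng.permutation(len(sigma))        # independent pattern order per replica
            for start in range(0, len(sigma), batch_size):
                idx = perm[start:start + batch_size]
                W[a] = lal_minibatch_update(W[a], xi[idx], sigma[idx], lr)
                W[a] += lam * (center - W[a])         # elastic pull towards the center
                center = sum(W) / y                   # recompute the center after the update
        lam *= lam_growth                             # strengthen the coupling each epoch
        if max(np.max(np.abs(Wa - center)) for Wa in W) < 1e-6:
            break                                     # replicas have collapsed
    return center
```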

This simple scheme proves rather effective at enhancing the wideness of the minima found while still being computationally efficient and converging quickly, as we show in sec. V. We show tests performed with a moderate number of replicas, but the entropic enhancement is gradual, already providing improvements with very few replicas and progressing at least up to the largest number of replicas that we have tested. Its main limitation is, of course, that it is tailored to committee machines and it is unclear how to extend it to general architectures. On the other hand, the effectiveness of the replication scheme seems very promising from the point of view of improving the quality of the minima found by greedy and efficient heuristics.

V Numerical studies

We conclude our study by comparing numerically the curvature, the wideness of the minima and the generalization error found by different approaches. We consider two scenarios: one, directly comparable with the theoretical calculations, where a tree committee machine with $K = 3$ is trained over random binary patterns, and a second one, which allows us to estimate the generalization capabilities, where a fully-connected committee machine with a larger number of hidden units is trained on a subset of the Fashion-MNIST dataset (fashionmnist). The choice of using a larger $K$ instead of $K = 3$ is intended to enhance the potential robustness effect that the CE loss can have over NE on such architectures (see fig. 2): for $K = 3$, a correctly classified pattern already requires 2 out of 3 units to give the correct answer, and there is not much room for improvement at the level of the pre-activation of the output unit. On the other hand, since we study networks with a number of inputs of the order of $10^3$, an even larger value of $K$ would either make $N/K$ too small in the tree-like case (exacerbating numerical issues for the BP algorithms and straying too far from the theoretical analysis) or make the computation of the Hessians too onerous for the fully-connected case (each Hessian requiring the computation of a number of terms quadratic in the number of weights).

We compare several training algorithms with different settings (see Appendix E): stochastic GD with the CE loss (ceSGD); least-action learning (LAL) and its entropic version (eLAL); focusing BP (fBP). Of these, the non-gradient-based ones (LAL, eLAL and fBP) can be directly used with the sign activation functions (14) and the NE loss. On the other hand, ceSGD requires a smooth loss landscape, therefore we used $\tanh$ activations, adding a gradually diverging parameter $\beta$ in their argument, since $\tanh(\beta x) \to \mathrm{sign}(x)$ as $\beta \to \infty$. The parameter $\gamma$ of the CE loss (16) was also increased gradually. As in the theoretical computation, we also constrained the weights of each hidden unit of the network to be normalized. The NE loss with sign activations is invariant under renormalization of each unit's weights, whereas the CE loss with $\tanh$ activations is not. In the latter case, the parameters $\beta$ and $\gamma$ can be directly interpreted as the norm of the weights, since they just multiply the pre-activations of the units. In a more standard approach, the norm would be controlled by the initial choice of the weights and be driven by the SGD algorithm automatically. In our tests instead we have controlled these parameters explicitly, which allows us to demonstrate the effect of different schedules. In particular, we show (for both the random and the Fashion-MNIST scenarios) that slowing down the growth of the norm with ceSGD makes a significant difference in the quality of the minima that are reached. We do this by using two separate settings for ceSGD, a “fast” and a “slow” one. In ceSGD-fast both $\beta$ and $\gamma$ are large from the onset and grow quickly, whereas in ceSGD-slow they start from small values and grow more slowly (requiring many more epochs for convergence).
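As an illustration of the “slow” protocol, here is a hedged PyTorch sketch (the schedules, learning rate and normalization convention are assumptions of ours, not the settings of Appendix E):

```python
import torch

def cesgd_slow(xi, sigma, lr=0.05, beta0=0.5, gamma0=0.5, growth=1.01,
               epochs=2000, batch_size=32, seed=0):
    """ceSGD on a tree committee machine with tanh activations and slowly
    growing beta (activation slope) and gamma (CE robustness parameter).

    xi : (M, K, n) float tensor of inputs; sigma : (M,) float tensor of +/-1 targets."""
    torch.manual_seed(seed)
    M, K, n = xi.shape
    w = torch.randn(K, n, requires_grad=True)
    beta, gamma = beta0, gamma0
    for epoch in range(epochs):
        perm = torch.randperm(M)
        for start in range(0, M, batch_size):
            idx = perm[start:start + batch_size]
            pre = torch.einsum('bkn,kn->bk', xi[idx], w) / n ** 0.5
            tau = torch.tanh(beta * pre)                     # smooth hidden activations
            delta = sigma[idx] * tau.sum(dim=1) / K ** 0.5   # signed output pre-activation
            # CE loss of eq. (16): softplus(-2*gamma*delta) = log(1 + exp(-2*gamma*delta))
            loss = torch.nn.functional.softplus(-2.0 * gamma * delta).mean()
            loss.backward()
            with torch.no_grad():
                w -= lr * w.grad
                w *= (n ** 0.5) / w.norm(dim=1, keepdim=True)  # spherical constraint per unit
                w.grad.zero_()
        beta *= growth    # slow growth of the activation slope ...
        gamma *= growth   # ... and of the CE robustness parameter
    return w.detach()
```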

In all cases – for uniformity of comparison, simplicity, and consistency with the theoretical analysis – we consider scenarios in which the training error (i.e. the NE loss) gets to zero. This is, by definition, the stopping condition for the LAL algorithm. We also used this as a stopping criterion for ceSGD in the “fast” setting. For the other algorithms, the stopping criterion was based on reaching a sufficiently small loss (ceSGD in the “slow” setting), or the collapse of the replicas (eLAL and fBP).

The analysis of the quality of the results was based on the study of the local loss landscape at the solutions. On one hand, we computed the normalized local entropy using BP as described in sec. D.2, which provides a description of the NE landscape. On the other hand, we also computed the spectrum of the eigenvalues of a smoothed-out version of the NE loss, namely the MSE loss computed on networks with $\tanh$ activations. This loss depends on the parameter $\beta$ of the activations: we set $\beta$ to be as small as possible (maximizing the smoothing and thereby measuring features of the landscape at a large scale) under the constraint that all the solutions under consideration still corresponded to zero error (to prevent degrading the performance). For the Fashion-MNIST case, we also measured the generalization error of each solution.
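A sketch of how such a spectrum can be computed for a small network (using torch.autograd.functional.hessian on a flattened weight vector; the tanh output unit and the flattening convention are our own illustrative choices):

```python
import torch

def mse_tanh_loss(w_flat, xi, sigma, K, n, beta):
    """MSE loss of a committee machine with tanh activations of slope beta,
    written as a function of the flattened weight vector."""
    w = w_flat.view(K, n)
    tau = torch.tanh(beta * torch.einsum('bkn,kn->bk', xi, w) / n ** 0.5)
    out = torch.tanh(beta * tau.sum(dim=1) / K ** 0.5)
    return ((out - sigma) ** 2).mean()

def hessian_spectrum(w, xi, sigma, beta):
    """Eigenvalues of the Hessian of the smoothed loss, evaluated at the solution w."""
    K, n = w.shape
    H = torch.autograd.functional.hessian(
        lambda wf: mse_tanh_loss(wf, xi, sigma, K, n, beta), w.reshape(-1))
    return torch.linalg.eigvalsh(H)   # real spectrum of the symmetric Hessian
```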

Figure 5: Normalized local entropy as a function of the distance from a reference solution, on a tree-like committee machine with $K = 3$ trained on random patterns. The results were obtained with the BP algorithm, averaging over several samples. Numerical issues (mainly due to the approximations used) prevented BP from converging at small distances for the LAL algorithm, and additionally they slightly affect the results at very small distances. Qualitatively, though, higher curves correspond to larger local entropies and thus wider minima.
Figure 6: Spectra of the Hessian for the same solutions of fig. 5, for various algorithms. The spectra are directly comparable since they are all computed on the same loss function (MSE; using CE does not change the results qualitatively) and the networks are normalized. The top panel shows the results with the parameter $\beta$ of the activation functions set to a value such that all solutions of all algorithms are still valid; this value is essentially determined by the LAL algorithm. The bottom panel shows the results for a much lower value of $\beta$ that can be used when removing the LAL solutions, where differences between ceSGD-slow, eLAL and fBP that were not visible at higher $\beta$ can emerge (the spectrum of LAL would still be by far the widest even at this $\beta$).

In the random patterns scenario we fixed $N$ and $\alpha$ and tested several samples (the same for all algorithms). The results are presented in figs. 5 and 6. The two analyses allow us to rank the algorithms (for the Hessians we can use the maximum eigenvalue as a reasonable metric), and their results are in agreement. As expected, fBP systematically finds very dense regions of solutions, qualitatively compatible with the theoretical analysis (cf. fig. 5 with fig. 1) and corresponding to the narrowest spectra of the Hessian at all $\beta$; the other algorithms follow roughly in the order eLAL, ceSGD-slow, ceSGD-fast, LAL. The latter is a very efficient solver for this model, but it finds solutions in very narrow regions. On the other hand, the same algorithm performed in parallel on a set of interacting replicas is still efficient but much better at discovering WFMs. These results are for a moderate number of replicas in eLAL, but our tests show that even fewer replicas would be sufficient to match ceSGD-slow and that more replicas would further improve the results and get closer to fBP. Overall, the results of the random pattern case confirm the existence of WFMs in continuous networks, and suggest that a (properly normalized) Hessian spectrum can be used as a proxy for detecting whether an algorithm has found a WFM region.

Figure 7: Experiments on a subset of Fashion-MNIST. Top: average normalized local entropies. Bottom: test error vs maximum eigenvalue of the Hessian spectrum. The maximum eigenvalue is correlated with the generalization error (WFMs tend to generalize better), and the quality of the minima varies between algorithms.

We then studied the performance of ceSGD (fast and slow settings), LAL and eLAL on a small fully-connected network which learns to discriminate between two classes of the Fashion-MNIST dataset (we chose the classes Dress and Coat, which are rather challenging to tell apart but also sufficiently different to offer the opportunity to generalize even with a small simple network trained on very few examples). We trained our networks on a small subset of the available examples (the patterns were binarized to $\pm 1$ by using the median of each image as a threshold on the original grayscale inputs; we filtered both the training and test sets to only use images in which the median fell within an intermediate range, so as to avoid too-bright or too-dark images and make the data more uniform and more challenging). This setting is rather close to the one which we could study analytically, except for the pattern statistics and the use of fully-connected rather than tree-like layers, and it is small enough to permit computing the full spectrum of the Hessian. On the other hand, it poses a difficult task in terms of inference (even though finding solutions with zero training error is not hard), which allowed us to compare the results of the analysis of the loss landscape with the generalization capabilities on the test set. Each algorithm was run several times. The results are shown in fig. 7, and they are analogous to those for the random patterns case, but in this setting we can also observe that indeed WFMs tend to generalize better. Also, while we could not run fBP on this data due to the correlations present in the inputs and to numerical problems related to the fully-connected architecture, which hamper convergence, it is still the case that ceSGD can find WFMs if the norms are controlled and increased slowly enough, and that we can significantly improve the (very quick and greedy) LAL algorithm by replicating it, i.e. by effectively adding a local-entropic component.

Conclusions and future directions

In this paper, we have generalized the local entropy theory to continuous weights and we have shown that WFMs exist in non-convex neural systems. We have also shown that the CE loss spontaneously focuses on WFM. On the algorithmic side we have derived and designed novel algorithmic schemes, either greedy (very fast) or message passing, which are driven by the local entropy measure. Moreover, we have shown numerically that ceSGD can be made to converge in WFM by an appropriate annealing procedure for the parameters which control the norm of the weights. Future work will try to put in a unified framework other key aspects of DNNs, from transfer functions and architectures to the role that WFM play for generalization in different data regimes.

Acknowledgements.
CB and RZ acknowledge ONR Grant N00014-17-1-2569. We thank Leon Bottou for discussions.

References

  • (1) David JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.
  • (2) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • (3) Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses. Physical Review Letters, 115(12):128101, September 2015. URL: http://link.aps.org/doi/10.1103/PhysRevLett.115.128101, doi:10.1103/PhysRevLett.115.128101.
  • (4) Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences, 113(48):E7655–E7662, November 2016. doi:10.1073/pnas.1608103113.
  • (5) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
  • (6) Werner Krauth and Marc Mézard. Storage capacity of memory networks with binary couplings. J. Phys. France, 50:3057–3066, 1989.
  • (7) Jian Ding and Nike Sun. Capacity lower bound for the ising perceptron. arXiv preprint arXiv:1809.07742, 2018.
  • (8) Haiping Huang and Yoshiyuki Kabashima. Origin of the computational hardness for learning with binary synapses. Physical Review E, 90(5):052813, 2014.
  • (9) Heinz Horner. Dynamics of learning for the binary perceptron problem. Zeitschrift für Physik B Condensed Matter, 86(2):291–308, 1992.
  • (10) Alfredo Braunstein and Riccardo Zecchina. Learning by message-passing in neural networks with material synapses. Phys. Rev. Lett., 96:030201, 2006.
  • (11) Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, and Riccardo Zecchina. Efficient supervised learning in networks with binary synapses. Proceedings of the National Academy of Sciences, 104:11079–11084, 2007.
  • (12) Carlo Baldassi. Generalization learning in a perceptron with binary synapses. J. Stat. Phys., 136:902–916, 2009.
  • (13) Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Local entropy as a measure for sampling solutions in constraint satisfaction problems. Journal of Statistical Mechanics: Theory and Experiment, 2016(2):P023301, February 2016. URL: http://stacks.iop.org/1742-5468/2016/i=2/a=023301?key=crossref.a72a5bd1abacd77b91afb369eff15a65, doi:10.1088/1742-5468/2016/02/023301.
  • (14) Carlo Baldassi, Federica Gerace, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Learning may need only a few bits of synaptic precision. Physical Review E, 93(5):052313, May 2016. URL: http://link.aps.org/doi/10.1103/PhysRevE.93.052313, doi:10.1103/PhysRevE.93.052313.
  • (15) E Barkai, David Hansel, and Haim Sompolinsky. Broken symmetries in multilayered perceptrons. Physical Review A, 45(6):4146, 1992.
  • (16) Holm Schwarze and John Hertz. Generalization in a large committee machine. EPL (Europhysics Letters), 20(4):375, 1992.
  • (17) A Engel, HM Köhler, F Tschepke, H Vollmayr, and A Zippelius. Storage capacity and learning algorithms for two-layer neural networks. Physical Review A, 45(10):7590, 1992.
  • (18) Marc Mézard, Giorgio Parisi, and Miguel Virasoro. Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9. World Scientific Publishing Company, 1987.
  • (19) E Barkai, D Hansel, and I Kanter. Statistical mechanics of a multilayered neural network. Physical review letters, 65(18):2312, 1990.
  • (20) Rémi Monasson and Riccardo Zecchina. Weight space structure and internal representations: a direct approach to learning and generalization in multilayer neural networks. Physical review letters, 75(12):2432, 1995.
  • (21) Changyan Di, Thomas J Richardson, and Ruediger L Urbanke. Weight distribution of low-density parity-check codes. IEEE Transactions on Information Theory, 52(11):4839–4855, 2006.
  • (22) Carlo Baldassi, Federica Gerace, Hilbert J Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, and Riccardo Zecchina. Role of synaptic stochasticity in training low-precision neural networks. Physical review letters, 120(26):268103, 2018.
  • (23) Silvio Franz and Giorgio Parisi. Recipes for metastable states in spin glasses. Journal de Physique I, 5(11):1401–1415, 1995.
  • (24) GJ Mitchison and RM Durbin. Bounds on the learning capacity of some multi-layer networks. Biological Cybernetics, 60(5):345–365, 1989.
  • (25) Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. arXiv:cs.LG/1708.07747.
  • (26) Rémi Monasson. Structural glass transition and the entropy of the metastable states. Physical review letters, 75(15):2847, 1995.
  • (27) Andreas Engel and Christian Van den Broeck. Statistical mechanics of learning. Cambridge University Press, 2001.
  • (28) Florent Krzakala, Federico Ricci-Tersenghi, Lenka Zdeborova, Riccardo Zecchina, Eric W Tramel, and Leticia F Cugliandolo. Statistical Physics, Optimization, Inference, and Message-Passing Algorithms: Lecture Notes of the Les Houches School of Physics-Special Issue, October 2013. Number 2013. Oxford University Press, 2016.

Appendix A High Local Entropy states from the 1-RSB formalism

Given a system described by a vector of $N$ discrete variables $\boldsymbol{\sigma}$ with an associated energy function $E(\boldsymbol{\sigma})$, the Boltzmann equilibrium distribution at inverse temperature $\beta$ reads

$P(\boldsymbol{\sigma}) = \frac{1}{Z(\beta)}\, e^{-\beta E(\boldsymbol{\sigma})}$  (17)

where the normalization factor is given by the partition function

$Z(\beta) = \sum_{\{\boldsymbol{\sigma}\}} e^{-\beta E(\boldsymbol{\sigma})}$  (18)

In the limit $\beta \to \infty$, the distribution is just a flat measure over the ground states of the energy; we can denote the ground state energy as $E_{0}$ and the characteristic function over the ground states as

$\mathbb{X}(\boldsymbol{\sigma}) = \lim_{\beta\to\infty} e^{-\beta\left(E(\boldsymbol{\sigma}) - E_{0}\right)} = \begin{cases} 1 & \text{if } E(\boldsymbol{\sigma}) = E_{0} \\ 0 & \text{otherwise} \end{cases}$  (19)

such that $\lim_{\beta\to\infty} e^{\beta E_{0}}\, Z(\beta) = \sum_{\{\boldsymbol{\sigma}\}}\mathbb{X}(\boldsymbol{\sigma})$, and $S_{0} = \frac{1}{N}\log\sum_{\{\boldsymbol{\sigma}\}}\mathbb{X}(\boldsymbol{\sigma})$ gives the entropy of the ground states.

In ref. (baldassi_subdominant_2015), we introduced a large-deviation measure with a modified energy function in which each configuration is reweighted by a “local entropy” term. There, we only considered the limit $\beta \to \infty$ and defined the local entropy as the number of ground states at a certain normalized distance $D$ from a reference configuration $\tilde{\boldsymbol{\sigma}}$:

$\mathcal{N}(\tilde{\boldsymbol{\sigma}}, D) = \sum_{\{\boldsymbol{\sigma}\}} \mathbb{X}(\boldsymbol{\sigma})\;\delta\!\left(d(\boldsymbol{\sigma},\tilde{\boldsymbol{\sigma}}),\, D\right)$  (20)

where $d(\cdot,\cdot)$ is a suitably defined distance function and $\delta(\cdot,\cdot)$ is the Kronecker delta. With this definition, we can define a modified partition function as follows:

$Z(y, D) = \sum_{\{\tilde{\boldsymbol{\sigma}}\}} e^{\,y\,\log \mathcal{N}(\tilde{\boldsymbol{\sigma}}, D)}$  (21)

which (up to irrelevant constant factors) coincides with:

$Z(y, D) = \sum_{\{\tilde{\boldsymbol{\sigma}}\}} \mathcal{N}(\tilde{\boldsymbol{\sigma}}, D)^{\,y}$  (22)

This approach can be shown to be strictly related to the 1-step replica-symmetry-broken (1RSB) formalism of the bare energetic problem. First, let us define a local free entropy at any given inverse temperature $\beta$ (i.e. a generalization of eq. (20)), and use a soft constraint on the distance through a Lagrange multiplier $\gamma$:

$\Phi(\tilde{\boldsymbol{\sigma}}, \beta, \gamma) = \log \sum_{\{\boldsymbol{\sigma}\}} e^{-\beta E(\boldsymbol{\sigma}) \;-\; \gamma\, d(\boldsymbol{\sigma},\tilde{\boldsymbol{\sigma}})}$  (23)

Note that the use of a Lagrange multiplier is mostly convenient in order to make the relation with the 1RSB description evident. It is only equivalent to using a hard constraint for the distance in the thermodynamic limit, and depending on the convexity properties of the resulting potential as a function of the distance, but we will generally ignore these issues for the time being and come back to them later.

We can then rewrite the large-deviation partition function eq. (21) in this more general case as:

$Z(y, \beta, \gamma) = \sum_{\{\tilde{\boldsymbol{\sigma}}\}} e^{\,y\,\Phi(\tilde{\boldsymbol{\sigma}}, \beta, \gamma)}$  (24)

Let us now consider the case in which $y$ is a non-negative integer: this allows us, by simple algebraic manipulations, to rewrite the partition function introducing a sum over all the configurations of $y$ replicas of the system:

$Z(y, \beta, \gamma) = \sum_{\{\tilde{\boldsymbol{\sigma}}\}} \sum_{\{\boldsymbol{\sigma}^{a}\}} e^{-\beta \sum_{a=1}^{y} E(\boldsymbol{\sigma}^{a}) \;-\; \gamma \sum_{a=1}^{y} d(\boldsymbol{\sigma}^{a}, \tilde{\boldsymbol{\sigma}})}$  (25)

This partition function describes a system of $y+1$ interacting real replicas with an interaction which is mediated by the reference configuration $\tilde{\boldsymbol{\sigma}}$. However, we can isolate the sum over the configurations of $\tilde{\boldsymbol{\sigma}}$ to obtain a system of $y$ interacting real replicas. In the special case of binary $\pm 1$ variables with an overlap-based interaction, the trace over $\tilde{\boldsymbol{\sigma}}$ can be carried out explicitly and we obtain:

$Z(y, \beta, \gamma) = \sum_{\{\boldsymbol{\sigma}^{a}\}} e^{-\beta \sum_{a=1}^{y} E(\boldsymbol{\sigma}^{a})}\; \prod_{i=1}^{N} 2\cosh\!\left(\gamma \sum_{a=1}^{y} \sigma_{i}^{a}\right)$  (26)

We have stressed the fact that the replicas are real to avoid confusion with the virtual replicas used for the “replica trick”: here, we are not replicating the system virtually in order to compute a free entropy in the limit of zero replicas; instead, we are describing a system of identical interacting objects. The general case of real $y$ can be obtained by analytic continuation once an expression for all integer $y$ is found.

This description is highly reminiscent of – in fact, almost identical to – the derivation of the ergodicity-breaking scheme used in ref. (monasson1995structural): there, an auxiliary symmetry-breaking field is introduced (having the same role as $\tilde{\boldsymbol{\sigma}}$ in our notation); then, a free energy expression is introduced in which the role of the energy is taken by a “local free entropy” (the analog of eq. (20) for general $\beta$), after which the system is replicated $y$ times and the auxiliary field is traced out, leading to a system of $y$ real replicas with an effective pairwise interaction. Finally, the limit of vanishing interaction ($\gamma \to 0$) is taken in order to derive the equilibrium description. When this system is studied in the replica-symmetric (RS) Ansatz, it results in the 1RSB description of the original system, with $y$ having the role of the Parisi parameter. Indeed, in this limit of vanishing interaction and for $y \in [0,1]$, our equations reduce to the 1RSB case as well.

Therefore, apart from minor differences, the main point of discrepancy between that analysis and our approach is that we do not restrict ourselves to the equilibrium distribution. Instead, we explore the whole range of values of $\gamma$. In this context, we also have no reason to restrict ourselves to the range $y \in [0,1]$, as is usually done in order to give a physical interpretation to the 1RSB solution; on the contrary, we are (mostly) interested in the limit of large $y$, in which only the configurations of maximal local entropy are described.

The relationship between our analysis and the usual 1RSB case can be made even more direct, leading to an alternative – although with very similar results – large deviations analysis: consider, instead of eq. (26), a partition function in which the interaction among the replicas is pairwise (without the reference configuration $\tilde{\boldsymbol{\sigma}}$) and the constraint on the distance is hard (introduced via a Dirac delta function):

$Z(y, \beta, D) = \sum_{\{\boldsymbol{\sigma}^{a}\}} e^{-\beta \sum_{a=1}^{y} E(\boldsymbol{\sigma}^{a})}\; \prod_{a<b} \delta\!\left(d(\boldsymbol{\sigma}^{a}, \boldsymbol{\sigma}^{b}) - D\right)$  (27)

Suppose then that we study the average free entropy $\langle \log Z \rangle$ (where $\langle \cdot \rangle$ represents the average over the quenched parameters, if any) in the context of replica theory. Then, we will have $n$ virtual replicas of the whole system, and since each system has $y$ real replicas we end up with $n \times y$ total replicas. Let us use indices $a$, $b$ for the virtual replicas and $c$, $d$ for the real ones, such that a configuration will now have two indices, e.g. $\boldsymbol{\sigma}^{ac}$. Suppose that we manage to manipulate the expression such that it becomes a function, among other order parameters, of the overlaps $q_{ac,bd} = \frac{1}{N}\,\boldsymbol{\sigma}^{ac}\!\cdot\boldsymbol{\sigma}^{bd}$, where $\cdot$ represents some scalar product, and that the distance function can be expressed in terms of those. Then, as usual, we would introduce auxiliary integrals

$1 = \int dq_{ac,bd}\;\delta\!\left(N\,q_{ac,bd} - \boldsymbol{\sigma}^{ac}\!\cdot\boldsymbol{\sigma}^{bd}\right)$  (28)

Using this, we can rewrite the interaction term. Say that the distance can be expressed as a function of the overlap between real replicas within the same virtual replica, $d(\boldsymbol{\sigma}^{ac}, \boldsymbol{\sigma}^{ad}) = \tilde{d}(q_{ac,ad})$; then:

(29)

By assuming replica symmetry, we seek a saddle point with this structure:

$q_{ac,bd} = \delta_{ab}\left[\delta_{cd} + q_{1}\,(1-\delta_{cd})\right] + q_{0}\,(1-\delta_{ab})$  (30)

with $q_{1}$ the overlap between real replicas within the same virtual replica and $q_{0}$ the overlap between different virtual replicas. The interaction term eq. (29) becomes:

(31)

Therefore, the external parameter $D$ eliminates a degree of freedom in the solution to the saddle point equations for the overlaps. The final step in the replica calculation would have the form

(32)

where the functional appearing in the exponent is the expression which would have been derived in an equilibrium computation without the interaction term, the dots in the argument represent extra order parameters, and the order parameters are fixed by the saddle point equations

(33)

Thus, the difference with respect to the usual 1RSB computation is that the equation for finding the extremum over $q_{1}$ is removed, and the one for finding the extremum over the distance is modified. Maximizing over $D$, by solving the corresponding saddle point equation, is then equivalent to the usual 1RSB description (equivalent to the $\gamma \to 0$ case in the soft-constraint formulation):

(34)

In the common case where the self-overlap is fixed (e.g. if the variables are discrete, or constraints on the norm are introduced), this representation fixes $q_{1}$; it is clear then that our large deviations analysis (the alternative one of eq. (27)) is simply derived by fixing $q_{1}$ as an external parameter, and thus omitting the corresponding saddle point equation. Note that this would not make physical sense in the standard derivation of the 1RSB equations, since in that context $q_{1}$ is only introduced as an overlap between virtual replicas when choosing an Ansatz for the solutions of the saddle point equations; our derivation is only physically meaningful when describing a system of real interacting replicas or, in the case of the original derivation from eq. (21), a system with a modified energy function.

Appendix B Cross-Entropy minima, errors and high local entropy configurations

B.1 Cross-Entropy loss ground states

In order to study analytically the properties of the minima of the CE loss function in the case of i.i.d. random patterns, the key obstacle is to compute the normalization factor of the Gibbs measure, the partition function $Z$. Once this is done, one has access to the Gibbs measure, which concentrates on the minima of the loss in the $\beta \to \infty$ limit.

$Z$ is an exponentially fluctuating random variable, and in order to find its most probable value we need to average its logarithm, a complicated task which we perform by the replica method. Once this is done, the typical value of $Z$ can be recovered as $Z_{\mathrm{typ}} \simeq e^{\langle \log Z \rangle}$, where $\langle \cdot \rangle$ stands for the average over the random patterns.

We refer to ref. (engel-vandenbroek) for a thorough review of the replica method. Here we just remind the reader that the replica method is an analytic continuation technique which allows in some cases (mean-field models) to compute the expectation of the logarithm of the partition function from the knowledge of its integer moments. The starting point is the following small-$n$ expansion, $Z^{n} = e^{n \log Z} = 1 + n \log Z + O(n^{2})$. This identity may be averaged over the random patterns and gives the average of the log from the averaged $n$-th power of the partition function, $\langle \log Z \rangle = \lim_{n \to 0} \frac{\langle Z^{n} \rangle - 1}{n}$.

The idea of the replica method is to restrict to integer $n$ and to take the analytic continuation $n \to 0$ at the end of the computation. We thus have $n$ replicas of the initial model. The random patterns in the expression of the energy disappear once the average has been carried out. Eventually one computes the partition function of an effective system of variables with a non-random energy function resulting from the average. The result may be written formally as $\langle Z^{n} \rangle \simeq e^{N \phi(n)}$, where $\phi(n)$ is the expression resulting from the sum over all configurations. Once the small-$n$ limit is taken, the final expression can be estimated analytically by means of the saddle-point method, given that $N$ is assumed to be large.

In the case of our problem we have $Z = \sum_{\{\boldsymbol{W}\}} e^{-\beta \sum_{\mu} \ell_{\mathrm{CE}}(\Delta^{\mu};\gamma)}$. Following the replica approach, we need to compute $\langle Z^{n} \rangle$,

where the integration measure is just over the binary values of the weights. By enforcing the definitions of the replicated stabilities $\Delta^{\mu}_{a}$ through delta functions, we can linearize the dependence on the randomness of the patterns and perform the average as follows:

where we have used the delta functions to introduce the overlap order parameters and their conjugates. In order to write the multiple integrals in a form which can be evaluated by saddle point, we restrict to the replica symmetric assumption for the order parameters, and perform a few simplifications.

First we sum over the weights:

Second, we simplify the terms which are raised to the power $M$: