Spin glasses and neural networks are closely analogous systems whose dynamics draw many parallels. Generally, a spin glass is a model of disordered magnetism. The simplest model of a spin glass, the Ising model, is a network of "spins" which take on the discrete values $\pm 1$, connected by a weight matrix that represents the strength of connection between the spins. The dynamics of these systems are determined by the values of the randomly chosen couplings, which are generally time independent [Dotsenko, 1995].
The similarity of these spin glass systems with neural networks is of interest to us because spin glasses have been a focus of research in statistical physics for the last fifty years, and a large library of machinery and techniques has been developed to deal with them. We would like to apply this machinery to the field of neural networks.
For this paper we used PetaVision, a high-performance neural simulation toolbox [PetaVision], to construct sparsely coding convolutional neural networks and examine the relationship between a network's efficiency and its sparsity. Interesting behavior in the efficiency of the networks as the sparsity was varied led us to analyze the finite-size scaling of the network, a technique more commonly used in the study of spin glasses, and we discovered power-law relationships indicating that a continuous (second-order) phase transition occurs in the networks as sparsity is varied.
2 Neural networks
We used two networks in our simulation, both built using PetaVision. The first network was a sparse auto-encoder that trained the filter kernels of a convolutional layer using a Locally Competitive Algorithm (LCA), as defined by Rozell et al. [2008], as it iteratively converged on a sparse representation of different input images. The second network (see Figure 1) used the same sparsely coding convolutional layer trained by the auto-encoder to denoise images to which very high Gaussian noise had been added.
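As a concrete illustration of the sparse coding step, the following is a minimal NumPy sketch of the LCA dynamics of Rozell et al. [2008]: neurons leakily integrate a feed-forward drive while actively inhibiting one another, and a soft threshold enforces sparsity. The dictionary `Phi`, the threshold value, and the integration constants here are illustrative assumptions, not PetaVision's implementation.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold activation: zero below lam, shrunk toward zero above."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Minimal Locally Competitive Algorithm (after Rozell et al., 2008).

    x   : input vector (n_pixels,)
    Phi : dictionary with unit-norm columns (n_pixels, n_neurons)
    lam : threshold controlling sparsity (higher -> sparser code)
    Returns the sparse coefficient vector a.
    """
    b = Phi.T @ x                             # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])    # lateral inhibition (no self-term)
    u = np.zeros_like(b)                      # internal (membrane) states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (b - u - G @ a) / tau            # leaky integration with competition
    return soft_threshold(u, lam)

# Toy usage: recover a sparse code for an input built from 3 dictionary elements.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary elements
x = Phi[:, :3] @ np.array([1.0, -0.5, 0.8])
a = lca(x, Phi, lam=0.1)
fraction_active = np.mean(a != 0)
```

Raising `lam` drives more of `a` to exactly zero, which is the knob used later to sweep the fraction of active neurons.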
The input for both networks consisted of images from the CIFAR-10 image set [Krizhevsky, 2009]. The image set was divided into two parts. The first 50,000 images were used to train the filter kernels of the sparsely coding convolutional layer at different levels of sparsity. Then, very high Gaussian noise was added to 10,000 additional images, which were denoised by the denoising network for each level of sparsity used in training.
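The noising step can be sketched as follows. The noise standard deviation below is an illustrative assumption, since the text specifies only "very high" Gaussian noise, and the clipping to a valid pixel range is a common convention rather than a stated detail.

```python
import numpy as np

def add_gaussian_noise(images, sigma=0.5, seed=0):
    """Add zero-mean Gaussian noise to images scaled to [0, 1].

    sigma=0.5 is an illustrative "very high" noise level; the paper
    does not state the exact value used.
    """
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)   # keep pixels in the valid range

# Toy usage on a stand-in batch shaped like CIFAR-10 (N, 32, 32, 3).
clean = np.random.default_rng(1).random((4, 32, 32, 3))
noisy = add_gaussian_noise(clean, sigma=0.5)
```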
As the sparsity of the network was varied, we observed a distinct minimum in the percent reconstruction error of the noisy images that displayed behavior analogous to the continuous phase transitions seen in spin glasses (see Figure 2) [Dotsenko, 1995, Täuber, 2014]. With this as our motivation, we investigated the presence of a phase transition in our system.
Feature learning utilizes a local Hebbian rule to implement stochastic gradient descent.
3 Phase transitions and finite-size scaling
A phase of a system is defined as a subspace of the microscopic system parameters where the system’s dynamics obey the same macroscale laws and relations everywhere in that subspace. The space of system parameters can have many phases, and the system can transition between them as system control parameters change. The point of transition between two (or more) phases is known as the critical point.
Phase transitions have been the subject of significant study in condensed matter physics, and it is well established that the occurrence of a continuous phase transition (a continuous, or second-order, phase transition exhibits a continuous change in the dynamics of the system as it moves between phases, while a first-order transition is discontinuous) is accompanied by a singularity at the critical point in one or more system parameters when the system is of infinite size [Täuber, 2014]. It is impossible to achieve infinite system sizes computationally, but the theory can be extended to finite systems, where these singularities become truncated and rounded. The minima or maxima that the singularities turn into at finite system sizes follow very specific relations with system size:

$$\left| x_c(L) - x_c(\infty) \right| \propto L^{-1/\nu}, \qquad (1)$$

$$f_{\mathrm{ext}}(L) \propto L^{\gamma/\nu}, \qquad (2)$$

where $L$ is the linear system size, $x_c(L)$ is the location of the extremum, and $f_{\mathrm{ext}}(L)$ is its height. This behavior is known as finite-size scaling [Täuber, 2014, Cardy, 1996]. The exponents $\nu$ and $\gamma$ are two examples of "critical exponents", which describe the behavior of the system as it approaches the critical point [Täuber, 2014, Cardy, 1996].
Thus we can identify a phase transition in our network via the existence and behavior of minima and maxima in the space of system parameters as we vary the system size, which in our case is the number of neurons in the convolutional layer. The exponents we record will be proportional to $1/\nu$ and $\gamma/\nu$ through some effective dimension of our system.
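One way to make the "effective dimension" statement concrete (an interpretation, since the original derivation is not shown): if the number of neurons $N$ plays the role of system volume, $N = L^{d_{\mathrm{eff}}}$, then a scaling relation in $L$ translates directly into one in $N$:

```latex
% Assuming N = L^{d_{\mathrm{eff}}}, a shift scaling as L^{-1/\nu} becomes
\left| x_c(N) - x_c(\infty) \right|
  \;\propto\; L^{-1/\nu}
  \;=\; \left( N^{1/d_{\mathrm{eff}}} \right)^{-1/\nu}
  \;=\; N^{-1/(\nu\, d_{\mathrm{eff}})},
% so the exponent measured against N is 1/(\nu d_{\mathrm{eff}}),
% i.e. proportional to 1/\nu through the effective dimension.
```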
The parameters of the system that we are interested in are the fraction of active neurons and the average percent reconstruction error $E$ of our noisy images:

$$E = 100 \left\langle \frac{\lVert I - R \rVert}{\lVert I \rVert} \right\rangle,$$

where $E$ is the average percent reconstruction error, $I$ is the original image before Gaussian noise is added to it, and $R$ is the reconstruction of the noised image taken from the sparsely coding convolutional layer [PetaVision, Rozell et al., 2008, Schultz et al., 2014].
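A short sketch of this error measure, computed (under one natural reading of the text) as the mean relative $L_2$ error in percent:

```python
import numpy as np

def percent_reconstruction_error(originals, reconstructions):
    """Average percent reconstruction error: mean over images of
    100 * ||I - R|| / ||I||, with I the clean image and R the
    reconstruction of its noised version."""
    errs = [100.0 * np.linalg.norm(i - r) / np.linalg.norm(i)
            for i, r in zip(originals, reconstructions)]
    return float(np.mean(errs))

# A perfect reconstruction gives zero error.
imgs = [np.ones((32, 32, 3)), 2.0 * np.ones((32, 32, 3))]
e_zero = percent_reconstruction_error(imgs, imgs)
```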
The fraction of active neurons is controlled by a parameter $\lambda$, as described by Rozell et al. [2008], which increases monotonically with the sparsity of the code and therefore varies inversely with the fraction of active neurons. Through $\lambda$ we can control the fraction of active neurons and observe how the average percent reconstruction error $E$ behaves as that fraction is varied. We observed a minimum in $E$ as we varied the fraction of active neurons for many different system sizes. These results are summarized in Figure 2.
We measured the shift in the height and location of the minima in $E$ as the system size was varied and plotted each on a log-log plot (see Figures 3(a) and 3(b)). We observe power-law behavior in both the location and the height of the minima as the system size is varied. This satisfies the finite-size scaling relations defined in equations 1 and 2 and indicates that a continuous phase transition occurs as the sparsity of the network is varied.
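The exponent extraction behind such a log-log plot can be sketched as a linear fit in log-log space. The data below are synthetic placeholders with a hypothetical exponent, standing in for the measured minima heights, not the paper's actual measurements.

```python
import numpy as np

def power_law_exponent(sizes, values):
    """Fit values ~ C * sizes**k by linear regression in log-log space;
    returns (k, C)."""
    slope, intercept = np.polyfit(np.log(sizes), np.log(values), 1)
    return slope, np.exp(intercept)

# Synthetic stand-in data: minima heights following an exact power law.
N = np.array([128, 256, 512, 1024])          # numbers of neurons
heights = 3.0 * N ** -0.25                   # hypothetical exponent -0.25
k, C = power_law_exponent(N, heights)
```

Once fitted, the law `C * N**k` can be extrapolated to predict the minimum's location or height for system sizes that were never simulated.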
The existence of phase transitions in neural networks is not unique to this sparsely coding convolutional system. The auto-associative network proposed by Hopfield [1982] was shown by Hertz et al. [1991] to display a first-order phase transition in its memory capacity. If the number of patterns stored by the network exceeds a "critical fraction" of the network size, the output of the network is maximally disordered [Hertz et al., 1991].
We propose that a similar mechanism is responsible for the observed continuous phase transition of our sparsely coding convolutional network, where the fraction of active neurons is analogous to the "critical fraction" of learned patterns in the auto-associative network. If our network's fraction of active neurons is too far above the critical fraction, the network has the freedom to reconstruct the noise in the image, while if the fraction of active neurons is too low, the network reconstructs only image components for which it has learned strong priors. These two regions of dynamics form our "phases". The existence of a phase transition in the average percent reconstruction error of the network as the fraction of active neurons is varied guarantees the persistence of the power-law behavior seen in Figure 3(b). This power-law behavior allows us to predict the optimal fraction of active neurons for any system size, which in turn can be reached by tuning the parameter $\lambda$, as described by Rozell et al. [2008], ensuring that any sparsely coding convolutional network operates at the optimal level of sparsity.
The critical behavior of the network allows us to always achieve the minimum denoising error by operating the network at this critical value of sparsity.
We gladly acknowledge helpful discussions with Uwe C. Täuber.
This work was supported by the Los Alamos National Laboratory under contract DE-AC52-06NA25396.
Computations were performed using the Darwin Computational Cluster at Los Alamos National Laboratory.
-  PetaVision. URL github.com/PetaVision/OpenPV.
- Cardy  J. Cardy. Scaling and renormalization in statistical physics. Cambridge University Press, 1996. ISBN 0521499593.
- Dotsenko  V. Dotsenko. An introduction to the theory of spin glasses and neural networks. World Scientific Lecture Notes in Physics. World Scientific Publishing Company, 1995. ISBN 9810218737.
- Hertz et al.  J. A. Hertz, A. S. Krogh, and R. G. Palmer. Introduction to the theory of neural computation. Addison-Wesley Publishing Company, 1991. ISBN 0201515601.
- Hopfield  J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982. URL http://www.pnas.org/content/79/8/2554.abstract.
- Krizhevsky  A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
- Rozell et al.  C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20:2526–2563, 2008.
- Schultz et al.  P. F. Schultz, D. M. Paiton, W. Lu, and G. T. Kenyon. Replicating kernels with a short stride allows sparse reconstructions with fewer independent kernels. arXiv preprint arXiv:1406.4205, 2014.
- Täuber  U. C. Täuber. Critical dynamics: a field theory approach to equilibrium and non-equilibrium scaling behavior. Cambridge University Press, 2014. ISBN 9780521842235.