# Dynamical Neural Network: Information and Topology

A neural network works as an associative memory device if it has large storage capacity and good retrieval quality. Both the learning and attractor abilities of the network can be measured by the mutual information (MI) between patterns and retrieval states. This paper searches for the topology of a Hebb network that is optimal in the sense of maximal MI. We use a small-world topology: the connectivity γ ranges from an extremely diluted to the fully connected network, and the randomness ω ranges from purely local to completely random neighbors. It is found that, while stability implies an optimal MI(γ,ω) at γ_opt(ω) → 0, for the dynamics the optimum holds at some γ_opt > 0 whenever 0 ≤ ω < 0.3.


## 1 Introduction

The collective properties of attractor neural networks (ANN), such as the ability to perform as an associative memory, have been a subject of intensive research in the last couple of decades [2], dealing mainly with fully connected topologies. More recently, interest in ANN has been renewed by the study of more realistic architectures, such as small-world [3],[5] or scale-free [4],[16] models. The storage capacity and the overlap with the memorized patterns are the most used measures of retrieval ability for Hopfield-Hebb networks [6],[7]. Comparatively less attention has been paid to the mutual information (MI) between stored patterns and neural states [8],[9], although neural networks are information-processing machines.

A reason for this relatively low interest is twofold. On the one hand, it is easier to deal with the global parameter α than with the MI, which is a function of the conditional probability of the neuron states σ given the patterns ξ. This can be solved for the so-called mean-field networks, which satisfy the law of large numbers, hence the MI is a function only of the macroscopic parameters: the overlap m and the load rate α = P/K (where P is the number of uncorrelated patterns and K is the neuron connectivity). On the other hand, the load alone is enough to measure the information if the overlap is close to m ≈ 1, since in this case the information carried by any single binary neuron is almost 1 bit. This is true for a fully connected (FC) network, for which the critical load is α_c ≈ 0.138 [6], with m_c ≈ 0.97 (and a sharp transition to m = 0 for larger α): in this case, the information rate is about i ≈ 0.1, as can be seen in the left panel of Fig.1, where we show the overlap (upper) and information for several architectures. However, in the case of diluted networks the transition is smooth. In particular, the random extremely diluted (RED) network has load capacity α_c = 2/π ≈ 0.64 [11], but the overlap falls continuously to m = 0, which yields null information at the transition, i(α_c) = 0, as seen in the right panel of Fig.1 (dashed line). Such indetermination shows that one must search for the value α_max of the load corresponding to the maximal information i_max = i(α_max), instead of α_c.

We address the problem of searching for the optimal topology, in the sense of maximizing the mutual information. Using the graph framework [4], one can capture the main properties of a wide range of neural systems with only two parameters: γ = K/N, the average rate of links per neuron (where N is the network size), and ω, which controls the rate of random links among all neighbors. When γ is large, the clustering coefficient C is large and the mean path length L between neurons is small, whatever ω is. When γ is small, then C is large but L is also large if ω is too small; however, for moderate ω the network behaves again as if it were random, with small L while keeping a large C. This region, called small-world (SW), is rather useful when one wants to build networks where information transmission is fast and efficient, with high capacity in the presence of significant noise, but without spending too much wiring [18]. Small-world networks may model many biological systems [15]. For instance, in the brain, local connections dominate within each cortical area, while there are few intercortical connections [14].

In Fig.1 we show the overlap (upper) and information for several architectures. In the left panel, it is seen that the maximum information rate i_max of the FC network is about 0.1, while in the right panel we show extremely diluted (ED) networks. The right panel of Fig.1 plots the overlap and the information for the RED network (ω = 1), the local extremely diluted network (LED, ω = 0), and a small-world extremely diluted network (SED, intermediate ω). We see that the ED transitions are smooth. The central panel of Fig.1 plots moderately diluted (MD) networks, which are commented on later. Theoretical results fit the simulations well, except for small ω, where theory underestimates the information. Previous works on small-world attractor neural networks [13] studied only the overlap m, so no results about information were known.

Our main goal in this work is to answer the following question: how does the maximal information i_max behave with respect to the network topology? To our knowledge, up to now there was no answer to this question. We will show that, near the stationary retrieval states, for every value of the randomness ω, the extremely diluted network performs the best, γ_opt → 0. However, regarding the attractor basins, starting far from the patterns, the optimum holds for moderate dilution, γ_opt > 0. For instance, if transients are taken into account, values of ω < 0.3 lead to an optimal topology with γ_opt > 0.

The structure of the paper is the following: in the next section we review the information measures used in the calculations; in Sec.3 we define the topology and the neuro-dynamics of the model. The results are shown in Sec.4, where we study retrieval by theory and simulation (with random patterns and with images); conclusions are drawn in the last section.

## 2 The Information Measures

### 2.1 The Neural Channel

The network state at a given time t is defined by a set of N binary neurons, σ_t ≡ {σ_i^t ∈ {−1, 1}, i = 1, …, N}. Accordingly, each pattern ξ^μ ≡ {ξ_i^μ ∈ {−1, 1}, i = 1, …, N} is a set of site-independent random variables, binary and uniformly distributed: p(ξ_i^μ = ±1) = 1/2. The network learns a set of P independent patterns {ξ^μ, μ = 1, …, P}.

The task of the neural channel is to retrieve a pattern (say, ξ^1) starting from a neuron state σ_0 which is inside its attractor basin, i.e., with initial overlap m_0 ≡ m(σ_0, ξ^1) > 0. This is achieved through a network dynamics, which couples neighbor neurons by the synaptic matrix J, with cardinality |J| = N K.

### 2.2 The Overlap

For the usual binary non-biased neuron model, the relevant order parameter is the overlap between the neural states and a given pattern:

 m_N^{μt} ≡ (1/N) ∑_i ξ_i^μ σ_i^t, (1)

at the time step t. Note that a pattern and its negative, ±ξ^μ, carry the same information, so the absolute value of the overlap measures the retrieval quality: |m| ≈ 1 means a good retrieval. Alternatively, one can measure the retrieval error using the Hamming distance, whose site average equals (1 − m)/2.
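The relation between the overlap of Eq.(1) and the Hamming error can be checked directly. The sketch below (with an assumed pattern length N = 1000 and a ~10% bit-flip noise level, both illustrative choices) computes both quantities for a noisy copy of a pattern:

```python
import numpy as np

def overlap(sigma, xi):
    """Eq.(1): m = (1/N) sum_i xi_i * sigma_i for ±1 variables."""
    return float(np.mean(xi * sigma))

def hamming_error(sigma, xi):
    """Fraction of disagreeing bits; equals (1 - m) / 2 for ±1 variables."""
    return float(np.mean(sigma != xi))

rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=1000)      # a random binary pattern
sigma = xi.copy()
sigma[rng.random(1000) < 0.1] *= -1      # flip roughly 10% of the bits
m = overlap(sigma, xi)
# hamming_error(sigma, xi) equals (1 - m) / 2 exactly
```

Either quantity can therefore serve as the retrieval-quality measure; the paper works with |m|.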

Together with the overlap, one needs a measure of the load, which is the rate of pattern bits per synapse used to store them. Since the synapses and patterns are independent, the load is given by α ≡ P/K.

We require our network to have long-range interactions. Therefore, we regard it as a mean-field network (MFN): the distribution of the states is site-independent, so every spatial correlation, such as ⟨σ_i σ_j⟩ − ⟨σ_i⟩⟨σ_j⟩, can be neglected, which is reasonable in the asymptotic limit N → ∞. Hence the conditions of the law of large numbers are fulfilled. At a given time step of the dynamical process, the network state can be described by one particular overlap, say m ≡ m^1. The order parameters can thus be written, when N → ∞, as m_t = ⟨⟨ξ σ_t⟩⟩, where the brackets represent an average over the joint distribution p(σ_t, ξ) for a single neuron (we can drop the index i). This macroscopic variable describes the information processing of the network at a given time step of the dynamics. Along with this signal parameter, the residual microscopic overlaps m^{μ>1} yield the cross-talk noise; their statistics complete the network macro-dynamics.

### 2.3 Mutual Information

For a long-range system, it is enough to observe the distribution of a single neuron in order to know the global distribution [9]. This is given by the conditional probability of having the neuron in a state σ_t, at each (unspecified) time step t, given that at the same site the pattern being retrieved is ξ. For the binary network we are considering, p(σ_t | ξ) = (1 + m_t σ_t ξ)/2 [10], where the overlap is m_t = ⟨⟨ξ σ_t⟩⟩.

The joint distribution p(σ_t, ξ) = p(σ_t | ξ) p(ξ) is interpreted as an ensemble distribution over the neuron states σ and the inputs ξ. The conditional probability p(σ_t | ξ) encloses all types of noise in the retrieval process of the input pattern through the network (both from the environment and from the dynamical process itself).

With the above expressions for p(σ_t | ξ) and p(ξ), we can calculate the MI [9], a quantity that measures the prediction that an observer at the output (σ) can make about the input (ξ) (we drop the time index t). It reads MI[σ; ξ] = S[σ] − S[σ|ξ], where S[σ] is the entropy and S[σ|ξ] is the conditional entropy. We use binary logarithms to measure the information in bits. The entropies are [10]:

 S[σ|ξ] = −((1+m)/2) log₂((1+m)/2) − ((1−m)/2) log₂((1−m)/2), S[σ] = 1 [bit]. (2)

We define the information rate as

 i(α, m) = MI[σ⃗; {ξ⃗^μ}] / |J| ≡ α MI[σ; ξ], (3)

since for independent neurons and patterns MI[σ⃗; {ξ⃗^μ}] = N P · MI[σ; ξ], and |J| = N K. When the network approaches its saturation limit α_c, the states cannot remain close to the patterns, so m is usually small. Thus, while the number of patterns increases, the information per pattern decreases. Therefore, the information is a non-monotonic function of the overlap and the load rate, see Fig.1, which reaches its maximum value i_max at some value α_max of the load.
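This non-monotonicity can be illustrated with a minimal sketch of Eqs.(2)-(3). The overlap curve m(α) below is a toy stand-in (a linear fall-off to a critical load, not the paper's saddle-point solution), used only to show that i = α·(1 − S[σ|ξ]) vanishes at both ends and peaks at an interior load:

```python
import numpy as np

def cond_entropy(m):
    """S[sigma|xi] of Eq.(2), in bits."""
    p = (1.0 + m) / 2.0
    return -sum(x * np.log2(x) for x in (p, 1.0 - p) if x > 0.0)

def info_rate(alpha, m):
    """Eq.(3): i(alpha, m) = alpha * (S[sigma] - S[sigma|xi]), S[sigma] = 1 bit."""
    return alpha * (1.0 - cond_entropy(m))

# Toy overlap curve: m falls linearly to 0 at an assumed critical load 0.64.
alphas = np.linspace(0.0, 0.64, 200)
ms = 1.0 - alphas / 0.64
rates = [info_rate(a, m) for a, m in zip(alphas, ms)]
# i = 0 at alpha = 0 (no patterns) and at alpha_c (m = 0), with a peak between.
```

The maximum of this curve is the i_max the paper optimizes over the topology.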

## 3 The Model

### 3.1 The Network Topology

The synaptic couplings are J_ij = C_ij W_ij, where the connectivity matrix C = {C_ij ∈ {0, 1}} has a local and a random part, C_ij = C_ij^l + C_ij^r, and W = {W_ij} are the synaptic weights. The local part connects the K_l nearest neighbors, one-sided in the asymmetric case, on a closed ring. The random part consists of independent random variables C_ij^r, distributed with probability p(C_ij^r = 1) = c and p(C_ij^r = 0) = 1 − c, with c = K_r/N, where K_r is the mean number of random connections of a single neuron. Hence, the neuron connectivity is K = K_l + K_r. The network topology is then characterized by two parameters: the connectivity ratio, defined as γ = K/N, and the randomness ratio, ω = K_r/K. The ω plays the role of the rewiring probability in the small-world model (SW) [3]. Our topology follows the variant proposed by Newman and Watts [20], which has the advantage of avoiding disconnecting the graph.
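A minimal sketch of this topology follows (Newman-Watts style: local ring links are kept and shortcuts are added, so the graph never disconnects). The split K_l = (1−ω)K, K_r = ωK and the one-sided ring are assumptions consistent with the asymmetric case described above; shortcuts may occasionally duplicate a local link, so this is a sketch rather than an exact degree-preserving construction:

```python
import numpy as np

def small_world_adjacency(N, K, omega, rng):
    """Adjacency list l(i, k): for each neuron i, K_l = (1-omega)*K one-sided
    ring neighbors plus K_r = omega*K random shortcuts (no self-connections)."""
    K_l = int(round((1.0 - omega) * K))
    K_r = K - K_l
    others = np.arange(N)
    adj = []
    for i in range(N):
        local = [(i + d) % N for d in range(1, K_l + 1)]   # one-sided ring
        candidates = np.delete(others, i)                  # exclude self
        far = rng.choice(candidates, size=K_r, replace=False)
        adj.append(np.concatenate([local, far]).astype(int))
    return adj

rng = np.random.default_rng(1)
adj = small_world_adjacency(N=100, K=10, omega=0.3, rng=rng)
# gamma = K/N = 0.1, omega = K_r/K = 0.3; every neuron has exactly K neighbors
```

With ω = 0 this degenerates to the purely local ring (LED-like), and with ω = 1 to a purely random graph (RED-like).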

Note that the topology can be defined by an adjacency list connecting neighbors, l(i, k), with k = 1, …, K. So the storage cost of this network is |J| = N K. Hence, the information rate is i = α MI[σ; ξ], Eq.(3), where the load rate is scaled as α = P/K. The learning algorithm updates W according to the Hebb rule

 W_ij^μ = W_ij^{μ−1} + (1/K) ξ_i^μ ξ_j^μ. (4)

The network starts at W^0 = 0 and, after P learning steps, reaches the value W = W^P. The learning stage is a slow dynamics, being stationary-like on the time scale of the much faster retrieval stage, which we define in the following.

### 3.2 The Neural Dynamics

The neural states σ_i^t are updated according to the stochastic parallel dynamics:

 σ_i^{t+1} = sign(h_i^t + T x), h_i^t ≡ ∑_j J_ij σ_j^t, i = 1, …, N (5)

where x is a normalized random variable and T is the temperature-like environmental noise. In the case of symmetric synaptic couplings, J_ij = J_ji, an energy function can be defined, whose minima are the stable states of the dynamics, Eq.(5).

In the present paper, we work out the asymmetric network by simulation (no constraint J_ij = J_ji is imposed). The theory was carried out for symmetric networks. As seen in Fig.1, theory and simulation show similar results, except for local networks (theory underestimates the information), where the symmetry may play some role. We also restrict our analysis to the deterministic dynamics (T = 0). The stochastic macro-dynamics comes from the extensive number of learned patterns, P = αK.
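Putting Eqs.(4)-(5) together, a minimal T = 0 retrieval sketch on an adjacency list looks as follows (`adj[i]` holds the K neighbors of neuron i; the fully connected list and the sizes N = 200, P = 3 below are toy choices for a quick check, not the paper's simulation parameters):

```python
import numpy as np

def hebb_learn(patterns, adj):
    """Eq.(4): W[i, k] accumulates (1/K) * xi_i^mu * xi_{l(i,k)}^mu over mu."""
    N = patterns.shape[1]
    K = len(adj[0])
    W = np.zeros((N, K))
    for xi in patterns:
        for i in range(N):
            W[i] += xi[i] * xi[adj[i]] / K
    return W

def parallel_step(sigma, W, adj):
    """Eq.(5) at T = 0: sigma_i <- sign(sum_k W[i, k] * sigma_{l(i,k)})."""
    h = np.array([W[i] @ sigma[adj[i]] for i in range(len(sigma))])
    return np.where(h >= 0, 1, -1)       # break ties toward +1

rng = np.random.default_rng(2)
N, P = 200, 3
adj = [np.delete(np.arange(N), i) for i in range(N)]   # fully connected check
patterns = rng.choice([-1, 1], size=(P, N))
W = hebb_learn(patterns, adj)

sigma = patterns[0].copy()
sigma[rng.random(N) < 0.15] *= -1        # start inside the basin, m_0 ≈ 0.7
for _ in range(5):
    sigma = parallel_step(sigma, W, adj)
m = float(np.mean(sigma * patterns[0]))  # final overlap, Eq.(1)
```

At this very low load (α = P/K ≈ 0.015), the parallel dynamics pulls the noisy state back toward the stored pattern within a few sweeps.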

## 4 Results

The information for the stationary and dynamical states of the network was studied as a function of the topological parameters γ and ω. A sample of the results for simulation and theory is shown in Fig.1, where the stationary states of the overlap and information are plotted for the FC, MD and ED architectures. It can be seen that the information increases with the dilution and with the randomness of the network. A reason for this behavior is that dilution decreases the correlation due to the interference between patterns. However, dilution also increases the mean path length of the network; thus, if the connections are local, the information flows slowly over the network. Hence, the neuron states can eventually be trapped in noisy patterns, and the information is small for local networks (small ω) even if the dilution is strong (γ → 0).

### 4.1 Theory: Stationary States

Following the Gardner calculations [11], at temperature T = 0 the MFN approximation gives the fixed-point equations:

 m = erf(m/√(rα)), (6)
 χ = 2φ(m/√(rα))/√(rα), (7)
 r = ∑_{k=0}^∞ a_k (k+1) χ^k, a_k = γ Tr[(C/K)^{k+2}], (8)

with φ(x) the Gaussian probability density and erf(x) the error function. The parameter a_k is the probability of existence of a cycle of length k+2 in the connectivity graph. The a_k can be calculated either by using Monte Carlo methods [17] or by an analytical approach, in terms of the Fourier transform of the probability of links. For the RED and FC networks one recovers the known results for r, respectively [2].
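As an illustration, one can iterate the first fixed-point equation for the RED network (r = 1). The normalization used below, m = erf(m/√(2α)), is the standard form for the random extremely diluted network [11], with its continuous transition at α_c = 2/π ≈ 0.637; whether the paper's erf in Eq.(6) absorbs the factor √2 is a notational assumption:

```python
import math

def red_overlap(alpha, iters=1000):
    """Iterate m = erf(m / sqrt(2*alpha)) from m = 1 (RED case: r = 1, T = 0).
    Below alpha_c = 2/pi the iteration settles at a finite overlap;
    above it, m -> 0 continuously."""
    m = 1.0
    for _ in range(iters):
        m = math.erf(m / math.sqrt(2.0 * alpha))
    return m

# The overlap vanishing continuously at alpha_c is exactly why i(alpha_c) = 0
# for this architecture, as discussed in the Introduction.
```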

The theoretical dependence of the information on the load, for FC, MD and ED networks with local, small-world and random connections, is plotted as the thick lines in Fig.1. A comparison between theory and simulation is also given in Fig.1. It can be seen that both results agree for most ω, but theory fails for small ω. One reason is that the theory uses the symmetry constraint, while the simulation was carried out with asymmetric synapses. Figure 3 shows the maxima i_max vs. the parameters γ, ω. It is seen that the optimum is at γ_opt → 0. This implies that the best topology for information in the stationary states is the extremely diluted network with purely random connectivity.

### 4.2 Simulation: Attractors and Transients

We have studied the behavior of the network varying the connectivity γ and the randomness ω. We used Eq.(5). Both local and random connections are asymmetric. The simulation was carried out with N K synapses, storing an adjacency list as the data structure, instead of the full N × N matrix. In [13] the authors use a much smaller network, which is far from the asymptotic limit.

We studied the network by searching for the stability properties and the transients of the neuron dynamics. To look for stability, we start the network at some pattern (with initial overlap m_0 = 1) and wait until it stays there or leaves it after a flag time step t_f (unless it converges to a fixed point before t_f). When we check transients, we start with m_0 < 1 and stop the dynamics at the time t_f. Usually, t_f parallel (all-neuron) updates is a large enough delay for retrieval. Indeed, in most cases far below saturation, after t_f the network ends up in a pattern; however, near α_c, even after t_f the network has not yet relaxed.

In the first place, we checked the stability properties of the network: the neuron states start precisely at a given pattern (which changes at each learning step μ). The initial overlap is m_0 = 1, so after t_f time steps of retrieval, the information for the final overlap is calculated. We plot it as a function of α, and its maximum i_max is evaluated, averaging over a window in the axis of α. This is repeated for various values of the connectivity ratio γ and the randomness parameter ω. The results are in the upper panels of Fig.4.

Second, we checked the retrieval properties: the neuron states start far from a learned pattern, but inside its basin of attraction, m_0 < 1. The initial configuration is chosen with the distribution p(σ_i^0 = ±ξ_i^1) = (1 ± m_0)/2, independently for all neurons (so we avoid a bias between local/random neighbors). The initial overlap is now m_0 < 1, and after t_f steps the information is calculated. The results are in the lower panels of Fig.4. The first observation now is that the maximal information increases with dilution (smaller γ) if the network is random enough (larger ω), while it decreases with dilution if the network is more local (smaller ω).

The comparison between the upper (m_0 = 1) and lower (m_0 < 1) parts of Fig.4 shows that the non-monotonic behavior of the information with dilution and randomness is stronger for the retrieval (m_0 < 1) than for the stability properties (m_0 = 1). One can understand this in terms of the basins of attraction. Random topologies have very deep attractors, especially if the network is diluted enough, while regular topologies almost lose their retrieval abilities with dilution. However, since the basins become rougher with dilution, the network takes longer to reach the attractor. Hence, the competition between depth and roughness is won by the more robust MD networks.

Each maximal i_max in Fig.4 is plotted in Fig.5. We see that, for intermediate values of the randomness parameter ω, there is an optimal information with respect to the dilution γ if the dynamics is truncated. We observe that the optimum γ_opt is shifted to the left (stronger dilution) when the randomness of the network increases.

This result does not change qualitatively with the flag time, but if the dynamics is truncated early, the optimal γ_opt, for a fixed ω, is shifted to more connected networks. However, the behavior depends strongly on the initial condition: in contrast with m_0 < 1, where the maxima i_max are pronounced, with m_0 = 1 the dependence on the topology becomes almost flat. We also see that for ω ≥ 0.3 there is no intermediate optimal topology. It is worth noting that the simulation converges to the theoretical results when t_f → ∞.

### 4.3 Simulation with Images

The simulations presented so far use artificial, randomly generated patterns. In order to check whether our results are robust against the correlations possibly present in realistic patterns, we test the algorithm with images. We see that the same non-monotonic behavior of i_max is observed here.

We have checked the results using data derived from the Waterloo image database. We work with square-shaped patches, where each pixel of a patch represents a different neuron. In order to use a Hebb-like non-sparse binary code and still preserve the structure of the image, we process the images preserving the edges, by applying an edge filter. The number of connections is limited by the patch size, which restricts the feasible connectivities (those storing more than 3 patterns).

Note that this procedure, strictly speaking, does not guarantee the assumed conditions on the distribution of the patterns: neither is p(ξ) uniform (due to the threshold applied over large blocks), nor are the bits uncorrelated (due to the image edges).

We choose at random the origin of the patch and the image to be used among the 12 available images. The topology of the network is a ring with small-world connections. The results of the simulation, using the Chen filter, are shown in Fig.3, where an optimal connectivity γ_opt > 0 is found for truncated dynamics. The fluctuations are now much larger than with random patterns, due to the correlations and the small network size. In the stationary states, the optimal connectivity remains at γ_opt → 0. The results agree qualitatively with the simulations for random patterns, Fig.4.

## 5 Conclusions

In this paper we have studied the dependence of the information capacity on the topology for an attractor neural network. We calculated the mutual information for a Hebb model storing binary patterns, varying the connectivity (γ) and randomness (ω) parameters, and obtained the maximal information with respect to the load, i_max = max_α i(α; γ, ω). Then we looked at the optimal topology in the sense of the information, γ_opt(ω). We presented stationary and transient states. The main result is that larger randomness ω always leads to higher information i_max.

From the stability calculations, the stationary optimal topology is the random extremely diluted (RED) network. The dynamics shows, however, that this is not the whole picture: we found that there is an intermediate optimal γ_opt > 0 for any fixed ω < 0.3. This can be understood by regarding the shape of the attractors. The ED network waits much longer for the retrieval than more connected networks do, so the neurons can be trapped in spurious states with vanishing information. The intermediate optimal γ_opt appears whenever the retrieval is truncated, and it remains up to the stationary states.

Both in nature and in technological approaches to neural devices, the dynamics is an essential issue for information processing. So an optimized topology holds for any practical purpose, even if no attention is paid to the wiring or other energetic costs of random links [18]. The reason is a competition between the broadness (larger storage capacity) and the roughness (slower retrieval speed) of the attraction basins.

We believe that the maximization of information with respect to the topology could be a biological criterion (where non-equilibrium phenomena are relevant) to build real neural networks. We expect the same dependence to hold for more structured networks and learning rules.

Acknowledgments Work supported by grants TIC01-572, TIN2004-07676-C01-01, BFI2003-07276, TIN2004-04363-C03-03 from MCyT, Spain.

## References

• [2] Hertz, J., Krogh, A., Palmer, R.: Introduction to the Theory of Neural Computation. Addison-Wesley, Boston (1991)
• [3] Watts, D.J., Strogatz, S.H.: Nature 393 (1998) 440
• [4] Albert, R. and Barabasi, A.L, Rev. Mod. Phys. 74 (2002) 47
• [5] Masuda, N. and Aihara, K. Biol. Cybernetics, 90: 302 (2004)
• [6] Amit, D., Gutfreund, H., Sompolinsky, H.: Phys. Rev. A 35 (1987) 2293
• [7] Okada, M.: Neural Network 9/8 (1996) 1429
• [8] Perez-Vicente, C., Amit, D.: J. Phys. A, 22 (1989) 559
• [9] Dominguez, D., Bolle, D.: Phys. Rev. Lett 80 (1998) 2961
• [10] Bolle, D., Dominguez, D., Amari, S.: Neural Networks 13 (2000) 455
• [11] Canning, A. and Gardner, E. Partially Connected Models of Neural Networks, J. Phys. A, 21, 3275-3284, 1988
• [12] Kupermann, M. and Abramson, G. Phys. Rev. Lett. 86: 2909, 2001
• [13] McGraw, P.N. and Menzinger, M. Phys. Rev. E 68: 047102-1, 2003
• [14] Rolls, E., Treves, A., Neural Network and Brain Function. Oxford University Press, 2004
• [15] Sporns, O. et al., Cognitive Sciences, 8(9): 418-425, 2004
• [16] Torres, J. et al., Neurocomputing, 58-60: 229-234, 2004
• [17] Dominguez, D., Korutchev, K.,Serrano, E. and Rodriguez F.B., LNCS 3173: 14-29, 2004
• [18] Adams, R., Calcraft, L., Davey, N.: ICANNGA05, preprint (2005)
• [19] Li, C., Chen, G.: Phys. Rev. E 68 (2003) 52901
• [20] Newman, M.E.J., Watts, D.J: Phys. Rev. E 60 (1999) 7332