Estimating the Cheeger constant using machine learning

05/12/2020
by   Ambar Jain, et al.
IISER Bhopal

In this paper, we use machine learning to show that the Cheeger constant of a connected regular graph has a predominantly linear dependence on the largest two eigenvalues of the graph spectrum. We also show that a deep neural network trained on graphs of smaller sizes can be used as an effective estimator of the Cheeger constant of larger graphs.


1. Introduction

Let G = (V, E) be a finite, simple, connected and undirected d-regular graph with |V| = n. It is a well known fact from basic algebraic graph theory [1, 3] that the eigenvalues λ₁, λ₂, …, λₙ of the adjacency matrix of G are real and can be ordered as

d = λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ.

For each S ⊆ V, let ∂S denote the set of edges of G with exactly one endpoint in S. Then the number

h(G) = min { |∂S| / |S| : S ⊆ V, 0 < |S| ≤ n/2 }        (1.1)

is called the Cheeger constant (or the isoperimetric constant, or the edge expansion constant) of the graph G. The Cheeger constant is a measure of the connectivity of G. Families of regular graphs with Cheeger constants bounded below by a positive constant, also known as expander families, have been widely studied (see [4, 7, 8, 10] and the references therein) due to their applications to communication networks.
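
For concreteness, the following minimal sketch (ours, not taken from the paper) computes h(G) directly from the definition (1.1) by brute force. It assumes networkx is available; the exponential loop over subsets is only practical for the small graph sizes considered in this paper.

    from itertools import combinations
    import networkx as nx

    def cheeger_constant(G):
        """Brute-force h(G): minimize |boundary(S)| / |S| over all S with 0 < |S| <= n/2."""
        nodes = list(G.nodes())
        n = len(nodes)
        best = float("inf")
        for size in range(1, n // 2 + 1):
            for subset in combinations(nodes, size):
                S = set(subset)
                # |∂S|: edges with exactly one endpoint in S
                boundary = sum(1 for u, v in G.edges() if (u in S) != (v in S))
                best = min(best, boundary / size)
        return best

    if __name__ == "__main__":
        G = nx.petersen_graph()        # a 3-regular graph on 10 vertices
        print(cheeger_constant(G))     # the Petersen graph has h(G) = 1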

The computation of h(G) for an arbitrary finite graph is a well known NP-hard problem [2, 6, 9]. However, for a d-regular graph of size n, we use machine learning to answer the following natural questions.

  1. Is the dependence of h(G) on λ₁ and λ₂ stronger than what the known bounds indicate?

  2. Is this dependence predominantly linear or non-linear?

  3. Is there a strong dependence of h(G) on λ_i, for i ≥ 3?

  4. Can these dependencies be used to estimate h(G) for graphs of large size n with greater efficiency?

We begin by providing data which shows that, in general, the known bounds for h(G) deviate significantly from its actual value. By considering random regular graphs of sizes 12 through 30, we apply machine learning via deep neural networks and linear regression to make the following statistical observations:

  1. h(G) has a predominantly linear dependence on λ₁ and λ₂. Moreover, as n increases, this dependence appears to approach a fixed linear function of λ₁ and λ₂. The linearity is more pronounced when the spectral gap λ₁ − λ₂ is large.

  2. Its dependence on λ_i, for i ≥ 3, is insignificant.

  3. We demonstrate that a deep neural network trained on graphs of smaller sizes can be used as an effective estimator for the Cheeger constants of larger graphs, for which computation times using classical algorithms are large.

The paper is organized as follows. In Section 2, we analyze whether some well known bounds can be used as effective estimators for h(G). In Section 3, we determine whether the dependence of h(G) on λ₁ and λ₂ is predominantly linear. In Section 4, we use machine learning to examine whether h(G) has a nonlinear dependence on λ₁ and λ₂, and also study its relation to λ_i, for i ≥ 3. Finally, in Section 5, we explore whether deep neural networks trained on graphs of smaller sizes can be used as viable estimators for the Cheeger constants of larger graphs.

2. Numerical analysis of known bounds

We consider a dataset of random regular graphs of sizes 12 through 30 for our analysis. This dataset was generated using a Python package that implements the algorithm described in [11]. The number of graphs considered for n = 12 was limited by the total number of available graphs, while for n > 20 the limitation came from the long computation time for h(G). In all other cases, we have considered at least 20,000 random graphs of varying regularity. The number of graphs considered for each n is shown in the second column of Table 1.
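
As an illustration of how such a dataset could be assembled (a sketch under our own assumptions, not the authors' code): networkx's random_regular_graph is one implementation of the Steger–Wormald algorithm [11], cheeger_constant refers to the brute-force routine sketched in Section 1, and the degree list is an arbitrary choice.

    import random
    import networkx as nx
    import numpy as np

    def sorted_adjacency_spectrum(G):
        """Eigenvalues of the adjacency matrix, in decreasing order."""
        return np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]

    def build_dataset(n, num_graphs, degrees=(3, 4, 5, 6)):
        """Connected random d-regular graphs of size n, with spectrum and Cheeger constant."""
        rows = []
        while len(rows) < num_graphs:
            d = random.choice([d for d in degrees if d < n and (n * d) % 2 == 0])
            G = nx.random_regular_graph(d, n)
            if not nx.is_connected(G):
                continue
            # cheeger_constant is the brute-force routine from the earlier sketch
            rows.append((sorted_adjacency_spectrum(G), cheeger_constant(G)))
        return rows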

The Cheeger constant is related to the spectral gap d − λ₂ of a d-regular graph G by the following inequality (see [5, Proposition 1.84]):

(d − λ₂)/2 ≤ h(G) ≤ √(2d(d − λ₂))        (2.1)

Mohar [9] showed that

(2.2)

and when G ≠ Kₙ (where Kₙ denotes the complete graph on n vertices), he showed that

(2.3)

For each graph G, we compute the lower bound on h(G) as given by Eqn. (2.1), and an upper bound, which is the lowest of the upper bounds appearing in (2.1)-(2.3). For each of these estimators, we calculate its deviation from the true value of h(G) as given in the equation below:

δ = |h_est(G) − h(G)| / h(G)        (2.4)

where h_est refers to the estimator of h(G). For the analysis in this section, h_est corresponds to either the upper bound or the lower bound. The mean values of δ for the lower and the upper bound for each n are shown in Table 1 below.
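
A minimal sketch of how these bounds and the deviation of Eqn. (2.4) can be evaluated for a single graph, assuming the form of Eqn. (2.1) reproduced above:

    import math

    def spectral_bounds(d, lam2):
        """Lower and upper bounds on h(G) from the spectral gap d - lambda_2, per Eqn. (2.1)."""
        gap = d - lam2
        return gap / 2.0, math.sqrt(2.0 * d * gap)

    def deviation(h_est, h_true):
        """Fractional deviation of an estimator from the true Cheeger constant, Eqn. (2.4)."""
        return abs(h_est - h_true) / h_true

    # Example usage on one entry of the dataset sketched above:
    # lam, h = rows[0]
    # lower, upper = spectral_bounds(d=lam[0], lam2=lam[1])   # lam[0] = d for a regular graph
    # print(deviation(lower, h), deviation(upper, h))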

n   # of Graphs   Mean δ (lower bound)   Mean δ (upper bound)
12 15176 0.18 0.61
13 55128 0.23 0.61
14 115663 0.18 0.60
15 118702 0.22 0.63
16 22635 0.18 0.65
17 20024 0.21 0.61
18 35774 0.18 0.66
19 20436 0.20 0.59
20 56016 0.18 0.64
21 1606 0.19 0.56
22 1626 0.17 0.61
23 1636 0.19 0.55
24 1825 0.16 0.59
25 1385 0.18 0.53
26 1829 0.16 0.57
27 1722 0.17 0.52
28 1097 0.16 0.57
29 958 0.20 0.62
30 872 0.16 0.59
Table 1. Graph data considered in the analysis of this paper and the average deviations in the bounds: the second column shows the number of graphs considered in the analysis for each n. For n ≤ 20, at least 20,000 graphs were considered for each n, with the exception of n = 12, where the total number of available graphs is less than 20,000. For n > 20, we tried to accumulate at least about 1000 graphs, with the exceptions of n = 29 and n = 30.

We note that, on average, the lower bound deviates from the true value of h(G) by about 18%, while the upper bound deviates by about 60%. These deviations reduce marginally for large values of n. The table indicates that the bounds considered are not efficient estimators for h(G). In the following section, we consider linear regression to construct a better estimator for h(G).

3. Linear regression analysis and prediction

In this section, we determine whether the relationship between h(G) and the eigenvalues λ₁ and λ₂ is predominantly linear. To begin with, we analyze whether h(G) can be estimated reasonably well by a linear function of the k largest eigenvalues, for k = 1, 2, 3, 4. For each n, we calculate the mean deviation δ, using the fitted linear regression function as the estimator. The results of this analysis are presented in Fig. 1 below for various values of n.

Figure 1. Linear regression on the Cheeger constant: the graph shows the average deviation of h(G) from the estimate obtained through a linear fit on the topmost, top two, top three and top four eigenvalues. Points are joined by lines to guide the eye. A log scale is used to stretch the scale. There is no considerable improvement in the Cheeger estimate from linear regression beyond λ₁ and λ₂.

It is evident from the graph that adding the third and fourth eigenvalues to the fit does not significantly reduce the average deviation. This shows that a linear function of just the two largest eigenvalues estimates h(G) fairly accurately. Interestingly, the average deviation reduces gradually with increasing n, coming down to about 5% for the largest sizes considered. This observation confirms that the relationship between the two largest eigenvalues and h(G) is mostly linear.
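
A minimal sketch of the fit just described, assuming scikit-learn and hypothetical arrays lams (rows of sorted eigenvalues) and h (the corresponding Cheeger constants):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def linear_fit_deviation(lams, h, k):
        """Fit h(G) as a linear function of the k largest eigenvalues; return mean deviation and model."""
        X = np.asarray(lams)[:, :k]
        y = np.asarray(h)
        model = LinearRegression().fit(X, y)
        dev = np.mean(np.abs(model.predict(X) - y) / y)
        return dev, model

    # for k in (1, 2, 3, 4):
    #     print(k, linear_fit_deviation(lams, h, k)[0])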

The regression coefficient of λ₁ appears to converge to a fixed value as n increases, and so does the coefficient of λ₂. The coefficients of λ₁ and λ₂ in the fitted model are plotted in Fig. 2 below for each n, along with reference lines marking the apparent limiting values.

Figure 2. Coefficients of the linear regression of h(G) on λ₁ and λ₂.

This suggests a universality in the linear relationship, which is almost independent of n. This observation motivates us to test the linear model on λ₁ and λ₂ for the prediction of h(G) for larger n, where its direct computation is challenging. We train the linear regression model on the available data for a fixed small n and then use it to predict h(G) for other n. Using the trained linear model as the estimator, we show the mean deviation in Fig. 3 below.
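
The cross-size prediction can be sketched as follows (again an illustration rather than the authors' code), with lams_16, h_16, lams_24 and h_24 as hypothetical data arrays for n = 16 and n = 24:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Train on (lambda_1, lambda_2) for graphs of one small size ...
    model = LinearRegression().fit(np.asarray(lams_16)[:, :2], np.asarray(h_16))

    # ... then predict Cheeger constants for graphs of a larger size from their spectra alone.
    pred = model.predict(np.asarray(lams_24)[:, :2])
    mean_dev = np.mean(np.abs(pred - np.asarray(h_24)) / np.asarray(h_24))
    print(f"mean fractional deviation at n = 24: {mean_dev:.3f}")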

Figure 3. Predicting h(G) with linear regression: linear models trained on data for small n are used to predict the Cheeger constants of graphs of other n. The average fractional deviation of the model from the true value of h(G) is shown for each n. Linear models trained on even (resp. odd) values of n work better for predicting h(G) for even (resp. odd) n.

The left panel shows predictions for even n, while the right panel shows predictions for odd n. We make the following observations:

  1. In general, for large n, linear regression on λ₁ and λ₂ appears to be a reasonable estimator for h(G).

  2. The prediction is slightly more accurate when regression on odd (resp. even) n is used to predict h(G) for larger values of odd (resp. even) n.

  3. The average deviation is typically 4-5% for odd-odd and even-even predictions over the entire range of n considered.

  4. It also appears that the n = 16 and n = 17 linear models are slightly better than the n = 12 and n = 13 models for even-sized and odd-sized graphs respectively. This indicates that, for training a predictive model, we should opt for the largest possible even and odd n for which Cheeger constant data is available.

4. Estimation of Cheeger constant using machine learning

In this section, we study the data on h(G) using machine learning methods with deep neural networks, mainly to answer the following two questions.

  1. Does h(G) have a non-linear dependence on λ₁ and λ₂?

  2. Does h(G) have any significant dependence on the other eigenvalues?

We expect that machine learning techniques will be able to identify non-linear dependencies that were not visible through linear regression. We randomly take 40% of our dataset for a given n and train the deep neural network shown in Fig. 4 below, using the ADAM optimizer.¹

¹ We have observed that other, similar choices of neural network produce results similar to those presented in this section, as is the case with any machine learning problem. Several results in this paper can also be reproduced using a shallower network. The neural network chosen here works for all the results presented.

Figure 4. Neural Network architecture used in this paper.
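
Since Fig. 4 itself is not reproduced here, the following Keras sketch shows a fully connected regressor of the general kind described; the layer widths, activations and loss are our assumptions rather than the exact architecture of Fig. 4.

    from tensorflow import keras
    from sklearn.model_selection import train_test_split

    def build_model(num_eigenvalues):
        """A small fully connected regressor mapping eigenvalues to h(G); sizes are illustrative."""
        model = keras.Sequential([
            keras.layers.Input(shape=(num_eigenvalues,)),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer=keras.optimizers.Adam(), loss="mean_absolute_percentage_error")
        return model

    # X: top-2 (or top-4) eigenvalues, y: Cheeger constants for a fixed n (hypothetical arrays).
    # X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.4)
    # model = build_model(X.shape[1])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=500,
    #           callbacks=[keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)])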

The remaining 60% of the dataset is used for validation. The trained neural net essentially provides an approximate non-linear map between the input eigenvalues and the expected Cheeger constant. The validation ensures that there is no memorization by the neural net and that it is truly capturing features of the data. Fig. 5 below shows training and validation histograms of δ, for both the cases of training with the largest two and with the largest four eigenvalues.

Figure 5. Training and validation histograms of δ for the deep neural network trained on the graph data. Each bin corresponds to 0.5%. The mean deviation for both training and validation, in both the two-eigenvalue and four-eigenvalue cases, is about 2.5%.

We make the following observations:

  1. λ₁ and λ₂ have a very strong correlation with h(G). Furthermore, there appears to be a small non-linear dependence on λ₁ and λ₂, which accounts for an improvement of about 2.5% over the linear regression. The average deviation is about 2.5% in both the training and validation data sets for the deep neural net (DNN) model, while it was about 5% for the linear model.

  2. We do not observe any significant improvement in the estimation of h(G) when considering the largest four eigenvalues over just λ₁ and λ₂. In both cases the average deviation is about 2.5%, with small fluctuations across training attempts. Using other subsets of the spectrum, including the full spectrum, does not seem to improve the training and validation errors beyond what is observed by considering just λ₁ and λ₂.

  3. A similar exercise done for the other graph sizes from n = 12 to n = 30 yields similar results. The mean and standard deviation of δ for these cases are plotted in Fig. 6 below for both training and validation, reaffirming the observations made above.

    Figure 6. Training and validation means of δ and their standard deviations for the neural network model trained with λ₁ and λ₂, for n = 12 to 30.
  4. Studying the trained deep neural network reveals that h(G) has a largely linear dependence on λ₁ and λ₂ when the spectral gap is large, while it exhibits a non-linear dependence when the spectral gap is small.

We conclude that λ₁ and λ₂ suffice to estimate h(G) reliably.

5. Predicting h(G) using machine learning

The most interesting application of this work is to predict the Cheeger constant of large regular graphs, for which it is computationally expensive to calculate the Cheeger constant but cheap to calculate the spectrum. To achieve this, we train a neural net on small graphs, for which the Cheeger constant can be calculated in reasonable time, and then use the trained net to predict the Cheeger constant of larger graphs. We moderately train the deep neural network of the previous section for 50 epochs² on λ₁, λ₂ and the Cheeger constant data for graphs of sizes 12 and 16 (for even-sized graphs) and sizes 13 and 17 (for odd-sized graphs). Again, for training we have taken only 40% of the available data. Each training run results in a new model, so we train the network for each n a few times and keep the model that yields the least validation error on the same n. We use the trained nets to predict h(G) for graphs of other sizes, compare the predictions to the true values, and obtain δ. The average deviation as a function of n is shown in Fig. 7 below, where we also show the prediction made by the linear regression method of Sec. 3 for contrast.

² The training was stopped after 50 epochs, as compared to the roughly 500 epochs (with optimization stopping automatically when the loss stops improving) used in the previous section. This ensures that the network learns the significance of the top two eigenvalues and not information specific to the training value of n. Maximal training to about 500 epochs optimizes the network to estimate the Cheeger constant for a given n, but is bad for predicting the Cheeger constant of other n.
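
As a rough illustration of the workflow just described (reusing the hypothetical build_model sketch from Section 4 and hypothetical data arrays, not the authors' code):

    import numpy as np

    # Moderate training: 50 epochs on 40% of the small-graph data (e.g. n = 16), top two eigenvalues.
    model = build_model(num_eigenvalues=2)
    model.fit(X16_train, y16_train, validation_data=(X16_val, y16_val), epochs=50, verbose=0)

    # Predict Cheeger constants of larger even-sized graphs (e.g. n = 24) from their spectra alone.
    pred = model.predict(X24[:, :2]).ravel()
    dev = np.abs(pred - y24) / y24
    print(f"mean deviation: {dev.mean():.3f}, std: {dev.std():.3f}")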

Figure 7. Comparison of the deep neural network with linear regression for predicting the Cheeger constant. The left panel shows the mean prediction deviation for even n when deep neural network (DNN) and linear regression (LR) models trained on n = 12 or n = 16 are used. The right panel shows the mean prediction deviation for odd n when DNN and LR models trained on n = 13 or n = 17 are used.

Here are our observations:

  1. We note that the n = 16 model works better than the n = 12 model for predicting Cheeger constants for higher even n, and similarly the n = 17 model works better than the n = 13 model for higher odd n.

  2. Although the plots are not shown here, we have verified that, to predict h(G) for even n, training on even n works better than training on odd n, and vice versa. This is consistent with the observations of Sec. 3.

  3. We also note that the deep neural net based models provide better predictions than the linear regression models, with a consistent improvement as n increases. In particular, the models trained on n = 16 and n = 17 data predict the Cheeger constants of the larger even-sized and odd-sized graphs, respectively, to within 3% accuracy on average.

  4. Along with the low average deviation, the standard deviation of δ is also low, at about 4% throughout the range of n, which supports the reliability of the predictions. This is shown in Fig. 8 below.

Figure 8. Prediction statistics for the n = 16 and n = 17 DNN models. The mean deviation stays between 2% and 4% for all higher n, while the standard deviation is about 4%.

6. Conclusion

In this paper, we have studied the relevance of the spectrum of a graph in estimating h(G). We find that h(G) depends strongly on λ₁ and λ₂, and this correlation is largely linear with a small non-linear component, as confirmed by the machine learning analysis. We have also demonstrated that a deep neural network moderately trained on the relationship between (λ₁, λ₂) and h(G) for small graphs can effectively estimate the Cheeger constant of larger graphs with high statistical accuracy. We believe that an optimal use of this approach could be a powerful and efficient tool for studying the connectivity of large regular graphs.

References

  • [1] N. Biggs. Algebraic graph theory. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 1993.
  • [2] M. R. Garey, D. S. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems. Theoret. Comput. Sci., 1(3):237–267, 1976.
  • [3] C. Godsil and G. Royle. Algebraic graph theory, volume 207 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2001.
  • [4] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bull. Amer. Math. Soc. (N.S.), 43(4):439–561, 2006.
  • [5] Mike Krebs and Anthony Shaheen. Expander families and Cayley graphs. Oxford University Press, Oxford, 2011. A beginner’s guide.
  • [6] Tom Leighton and Satish Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J. ACM, 46(6):787–832, 1999.
  • [7] A. Lubotzky. Discrete groups, expanding graphs and invariant measures. Modern Birkhäuser Classics. Birkhäuser Verlag, Basel, 2010. With an appendix by Jonathan D. Rogawski, Reprint of the 1994 edition.
  • [8] A. Lubotzky. Expander graphs in pure and applied mathematics. Bull. Amer. Math. Soc. (N.S.), 49(1):113–162, 2012.
  • [9] Bojan Mohar. Isoperimetric numbers of graphs. J. Combin. Theory Ser. B, 47(3):274–291, 1989.
  • [10] M. R. Murty. Ramanujan graphs. J. Ramanujan Math. Soc., 18(1):33–52, 2003.
  • [11] A. Steger and N. C. Wormald. Generating random regular graphs quickly. Combin. Probab. Comput., 8(4):377–396, 1999. Random graphs and combinatorial structures (Oberwolfach, 1997).