Let $G$ be a finite, simple, connected, undirected $k$-regular graph on $n$ vertices. It is a well-known fact from basic algebraic graph theory [1, 3] that the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of the adjacency matrix of $G$ are real and can be ordered as
$$k = \lambda_1 > \lambda_2 \geq \cdots \geq \lambda_n \geq -k.$$
For each $\emptyset \neq S \subseteq V(G)$ with $|S| \leq n/2$, let $\partial S$ denote the set of edges of $G$ with exactly one endpoint in $S$. Then the number
$$h(G) = \min_{0 < |S| \leq n/2} \frac{|\partial S|}{|S|}$$
is called the Cheeger constant (or the isoperimetric constant, or the edge expansion constant) of the graph $G$. The Cheeger constant is a measure of the connectivity of $G$. Families of regular graphs whose Cheeger constants are bounded below by a positive constant, also known as expander families, have been widely studied (see [4, 7, 8, 10] and the references therein) due to their applications to communication networks.
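Computing $h(G)$ exactly is expensive in general (the underlying minimum-ratio cut problem is NP-hard [2]), but for small graphs a brute-force search over vertex subsets is feasible. The sketch below, with a hypothetical `cheeger_constant` helper, illustrates the definition directly:

```python
from itertools import combinations

def cheeger_constant(n, edges):
    """Brute-force h(G) = min |dS| / |S| over all S with 0 < |S| <= n/2."""
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            S = set(subset)
            # |dS|: edges with exactly one endpoint in S
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# 2-regular example: the cycle C_6; an optimal cut takes 3 consecutive
# vertices, giving |dS| = 2 and h(C_6) = 2/3.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(cheeger_constant(6, cycle6))  # → 0.666...
```

The exhaustive search makes the exponential cost in $n$ explicit, which is what motivates the spectral estimators studied below.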
Is the dependence of $h(G)$ on $\lambda_1$ and $\lambda_2$ stronger than what the known bounds indicate?
Is this dependence predominantly linear or non-linear?
Is there a strong dependence of $h(G)$ on $\lambda_i$, for $i \geq 3$?
Can these dependencies be used to estimate $h(G)$ for large $n$ with greater efficiency?
We begin by providing data which shows that, in general, these known bounds for $h(G)$ deviate significantly from its actual value. By considering random regular graphs of sizes 12 through 30, we apply machine learning via deep neural networks and linear regression to make the following statistical observations:
$h(G)$ has a predominantly linear dependence on $\lambda_1$ and $\lambda_2$. Moreover, as $n$ increases, this dependence appears to approach a fixed linear function of $\lambda_1$ and $\lambda_2$. This linearity is more pronounced when the spectral gap $\lambda_1 - \lambda_2$ is large.
Its dependence on $\lambda_i$, for $i \geq 3$, is insignificant.
We demonstrate that a deep neural network trained on graphs of smaller sizes can be used as an effective estimator for the Cheeger constants of larger graphs, where computation times using classical algorithms are large.
The paper is organized as follows. In Section 2, we analyze whether some well-known bounds can be used as effective estimators for $h(G)$. In Section 3, we determine whether the dependence of $h(G)$ on $\lambda_1$ and $\lambda_2$ is predominantly linear. In Section 4, we use machine learning to examine whether $h(G)$ has a non-linear dependence on $\lambda_1$ and $\lambda_2$, and also study its relation to $\lambda_i$ for $i \geq 3$. Finally, in Section 5, we explore whether deep neural networks trained on graphs of smaller sizes can be used as viable estimators for the Cheeger constants of larger graphs.
2. Numerical analysis of known bounds
We consider a dataset of random regular graphs of sizes 12 through 30 for our analysis. This dataset was generated using a Python package that implements the algorithm described in [11]. The number of graphs considered for the smallest sizes was limited by the total number of available graphs, while for the largest sizes the limitation came from the long computation time for $h(G)$. In all other cases, we have considered a large number of random graphs of varying regularity. The number of graphs considered for analysis for each $n$ is shown in the second column of Table 1.
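As an illustration of how such a dataset can be produced, the sketch below samples a simple $k$-regular graph with a pairing (configuration) model and rejection. This is not the Steger-Wormald algorithm of [11] that the package implements (which is far more efficient); it is a minimal stand-in to show what "random regular graph" means here:

```python
import random

def random_regular_graph(n, k, seed=0):
    """Sample a simple k-regular graph on n vertices: pair up n*k stubs
    uniformly at random and retry whenever a loop or multi-edge appears."""
    assert n * k % 2 == 0, "n*k must be even"
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                ok = False  # loop or repeated edge: reject and resample
                break
            edges.add((min(u, v), max(u, v)))
        if ok:
            return sorted(edges)

G = random_regular_graph(12, 3)
deg = [0] * 12
for u, v in G:
    deg[u] += 1
    deg[v] += 1
print(deg)  # every vertex has degree 3
```

Rejection sampling is acceptable for the sizes used in this paper; for large $n$ or $k$ the acceptance probability decays, which is precisely the problem the algorithm of [11] addresses.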
The Cheeger constant is related to the spectral gap $\lambda_1 - \lambda_2$ of a $k$-regular graph $G$ by the following inequality (see [5, Proposition 1.84]):
$$\frac{k - \lambda_2}{2} \leq h(G) \leq \sqrt{2k(k - \lambda_2)}. \qquad (2.1)$$
Mohar [9] showed that
$$h(G) \leq \sqrt{(k - \lambda_2)(k + \lambda_2)}, \qquad (2.2)$$
and, when $G \neq K_n$ (where $K_n$ denotes the complete graph on $n$ vertices), he established a further refinement of this upper bound, which we label (2.3).
For each graph $G$, we compute the lower bound on $h(G)$ as given by Eqn. (2.1), and an upper bound, which is the lowest of the upper bounds appearing in (2.1)-(2.3). For each of these estimators, we calculate its deviation from the true value of $h(G)$ as given in the equation below:
$$\delta = \left| \frac{h_{\mathrm{est}}(G) - h(G)}{h(G)} \right| \times 100,$$
where $h_{\mathrm{est}}(G)$ refers to the estimator of $h(G)$. For the analysis in this section, $h_{\mathrm{est}}(G)$ corresponds to either the upper bound or the lower bound. The mean values of $\delta$ for the lower and upper bounds (which we denote by $\bar{\delta}_L$ and $\bar{\delta}_U$ respectively) for each $n$ are shown in Table 1 below.
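A minimal sketch of computing the bounds in (2.1) and the deviation $\delta$ from a graph's adjacency spectrum, using the cycle $C_8$ (for which $h = 1/2$) as a toy example:

```python
import numpy as np

def spectral_bounds(A, k):
    """Cheeger-type bounds from the second-largest adjacency eigenvalue:
    (k - l2)/2 <= h(G) <= sqrt(2k(k - l2))."""
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # descending eigenvalues
    gap = k - lam[1]
    return gap / 2, np.sqrt(2 * k * gap)

def deviation(estimate, true):
    """Percentage deviation of an estimator from the true h(G)."""
    return abs(estimate - true) / true * 100

# Cycle C_8: 2-regular, h = 2/4 = 0.5 (cut into two paths of 4 vertices).
n, k = 8, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
lower, upper = spectral_bounds(A, k)
print(lower <= 0.5 <= upper, deviation(lower, 0.5), deviation(upper, 0.5))
```

Already on this small example the bracket between the lower and upper bound is wide, foreshadowing the large mean deviations reported in Table 1.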
[Table 1. For each graph size $n$: the number of graphs considered and the mean deviations $\bar{\delta}_L$ and $\bar{\delta}_U$ of the lower and upper bounds.]
We note that, on average, both the lower and the upper bound deviate substantially from the true value of $h(G)$, and this deviation reduces only marginally for large values of $n$. The table indicates that the bounds considered are not efficient estimators for $h(G)$. In the following section, we use linear regression to construct a better estimator for $h(G)$.
3. Linear regression analysis and prediction
In this section, we want to determine whether the relationship between $h(G)$ and the eigenvalues $\lambda_1$ and $\lambda_2$ is predominantly linear. To begin with, we analyze whether $h(G)$ can be estimated reasonably well by a linear function of the $m$ largest eigenvalues, for $m = 2, 3, 4$. For each $n$, we calculate the mean deviation $\bar{\delta}$, using the fitted linear regression function as the estimator. The results of this analysis are presented in Fig. 1 below for various values of $n$.
It is evident from the graph that adding the third and fourth eigenvalues to the analysis does not significantly reduce $\bar{\delta}$. This shows that a linear function of just the two largest eigenvalues estimates $h(G)$ fairly accurately. Interestingly, the average deviation reduces gradually with increasing $n$. This observation confirms that the relationship between the two largest eigenvalues and $h(G)$ is mostly linear.
The regression coefficient of $\lambda_1$ appears to converge to a constant value as $n$ increases, as does the coefficient of $\lambda_2$. The two coefficients of the model are plotted in Fig. 2 below for each $n$, along with horizontal lines corresponding to their apparent limiting values for reference.
This suggests a universality in the linear relationship, which is almost independent of $n$. This observation motivates us to test a linear model in $\lambda_1$ and $\lambda_2$ for the prediction of $h(G)$ for larger $n$, where its direct computation is challenging. We train the linear regression model on the available data for a fixed $n$ and then use it to predict $h(G)$ for other values of $n$. Using the trained linear model as the estimator, we show the mean deviation $\bar{\delta}$ in Fig. 3 below.
The left panel shows predictions for even $n$, while the right panel shows predictions for odd $n$. We make the following observations:
In general, for large $n$, linear regression on $\lambda_1$ and $\lambda_2$ appears to be a reasonable estimator for $h(G)$.
The prediction is slightly more accurate when regression on odd $n$ (resp. even $n$) is used to predict $h(G)$ for larger odd (resp. even) values of $n$.
The average deviation is typically 4-5% for odd-to-odd and even-to-even predictions over the entire range of $n$ considered.
It also appears that the $n = 16$ and $n = 17$ linear models are slightly better than the $n = 12$ and $n = 13$ models for even-sized and odd-sized graphs, respectively. This indicates that, for training a predictive model, we should opt for the largest possible even and odd $n$ for which Cheeger constant data is available.
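The regression step above can be sketched with ordinary least squares. The dataset here is synthetic: a hypothetical linear law $h \approx 0.35\,\lambda_1 - 0.30\,\lambda_2$ plus noise stands in for the real spectra, so only the mechanics, not the particular coefficients, reflect the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (lambda1, lambda2, h) samples: h assumed to be a fixed
# linear function of the two largest eigenvalues plus small noise.
lam1 = rng.uniform(3, 11, 500)                 # lambda1 = degree k
lam2 = lam1 * rng.uniform(0.2, 0.9, 500)       # lambda2 < lambda1
h = 0.35 * lam1 - 0.30 * lam2 + rng.normal(0, 0.02, 500)

# Least-squares fit h ~ a*lambda1 + b*lambda2 + c.
X = np.column_stack([lam1, lam2, np.ones_like(lam1)])
coef, *_ = np.linalg.lstsq(X, h, rcond=None)   # [a, b, c]

# Mean percentage deviation of the fitted model, as in Sec. 2.
pred = X @ coef
mean_dev = np.mean(np.abs(pred - h) / np.abs(h)) * 100
print(coef, mean_dev)
```

Fitting the same model on data for one graph size and evaluating `mean_dev` on another size is exactly the cross-size prediction experiment of Fig. 3.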
4. Estimation of Cheeger constant using machine learning
In this section, we study the data on $h(G)$ using machine learning methods with deep neural networks, mainly to answer the following two questions.
Does $h(G)$ have a non-linear dependence on $\lambda_1$ and $\lambda_2$?
Does $h(G)$ have any significant dependence on the other eigenvalues?
We expect that machine learning techniques will be able to identify non-linear dependencies that were not visible through linear regression. We randomly take 40% of our dataset for each $n$ and train the deep neural network shown in Fig. 4 below using the ADAM optimizer. (We have observed that other similar choices of neural network produce results similar to those presented in this section, as is the case with many machine learning problems. Several results in this paper can also be reproduced using a shallower network; our choice of network works for all the results presented here.)
The remaining 60% of the dataset is used for validation. The trained neural network essentially provides an approximate non-linear map between the input eigenvalues and the expected Cheeger constant. The validation ensures that the neural network is not merely memorizing the data and is truly capturing its features. Fig. 5 below shows training and validation histograms of $\delta$, for both the case of training with the largest two eigenvalues and the case of training with the largest four.
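For illustration, here is a minimal one-hidden-layer network trained with the Adam update rule in plain NumPy. The data is synthetic (a mildly non-linear map from two "eigenvalue" features to a target) and the architecture is generic, not necessarily the one in Fig. 4:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (lambda1, lambda2) -> h data with a small non-linear term.
X = rng.uniform(-1, 1, (400, 2))
y = (0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1])[:, None]

# One hidden layer of 16 tanh units; Adam hyperparameters as usual.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 1e-2, 0.9, 0.999, 1e-8

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)
for t in range(1, 501):
    H, out = forward(X)
    g_out = 2 * (out - y) / len(X)            # d(MSE)/d(out)
    gW2 = H.T @ g_out; gb2 = g_out.sum(0)
    g_H = (g_out @ W2.T) * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ g_H; gb1 = g_H.sum(0)
    for p, g, mi, vi in zip(params, [gW1, gb1, gW2, gb2], m, v):
        mi[:] = beta1 * mi + (1 - beta1) * g
        vi[:] = beta2 * vi + (1 - beta2) * g * g
        p -= lr * (mi / (1 - beta1 ** t)) / (np.sqrt(vi / (1 - beta2 ** t)) + eps)

_, out = forward(X)
loss_final = np.mean((out - y) ** 2)
print(loss0, loss_final)  # loss drops substantially over training
```

Splitting the data into a 40% training / 60% validation partition and comparing the two losses is the memorization check described above.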
We make the following observations:
$\lambda_1$ and $\lambda_2$ have a very strong correlation with $h(G)$. Furthermore, there appears to be a small non-linear dependence on $\lambda_1$ and $\lambda_2$, which accounts for about a 2.5% improvement over linear regression: the average deviation $\bar{\delta}$ is about 2.5% in both the training and validation data sets for the deep neural network (DNN) model, while it was about 5% for the linear model.
We do not observe any significant improvement in the estimation of $h(G)$ when considering the largest four eigenvalues over just $\lambda_1$ and $\lambda_2$. In both cases $\bar{\delta}$ is roughly the same, with small fluctuations in each attempt of training. Using other subsets of the spectrum, including the full spectrum, does not seem to improve the training and validation errors beyond what is observed by considering just $\lambda_1$ and $\lambda_2$.
Studying the trained deep neural network reveals that $h(G)$ has a largely linear dependence on $\lambda_1$ and $\lambda_2$ when the spectral gap is large, while it exhibits non-linear dependence when the spectral gap is small.
We conclude that $\lambda_1$ and $\lambda_2$ suffice to estimate $h(G)$ reliably.
5. Predicting $h(G)$ using machine learning
The most interesting application of this work is to predict the Cheeger constant for large regular graphs, where it is computationally inefficient to calculate the Cheeger constant but efficient to calculate the spectrum. To achieve this, we train a neural network on small graphs, for which the Cheeger constant can be calculated in reasonable time, and then use this trained network to predict the Cheeger constant for larger graphs. We moderately train the deep neural network shown in the previous section for 50 epochs on the $\lambda_1$, $\lambda_2$ and Cheeger constant data for graphs of sizes 12 and 16 (for even-sized graphs) and sizes 13 and 17 (for odd-sized graphs). (The training was stopped after 50 epochs, as compared to the roughly 500 epochs used in the previous section, where optimization stopped automatically when the loss stopped improving. Early stopping ensures that the network learns the significance of the top two eigenvalues rather than information specific to the training size. Maximal training to about 500 epochs optimizes the network to estimate the Cheeger constant for a given $n$, but is bad for predicting the Cheeger constant for other $n$.) Again, for training, we have taken only 40% of the available data. Each training run results in a new model, so we train the network for each size a few times and take the trained model that yields the least validation error on that size. We use the trained networks to predict $h(G)$ for graphs of other sizes, compare the predictions to the true values, and obtain $\delta$. The average deviation $\bar{\delta}$ is shown in Fig. 7 below, where we also show the prediction by the linear regression method of Sec. 3 for contrast.
Here are our observations:
We note that the model trained on size 16 works better than the one trained on size 12 for predicting Cheeger constants for higher even $n$, and similarly the model trained on size 17 works better than the one trained on size 13 for higher odd $n$.
Although the plots are not shown here, we have verified that, to predict $h(G)$ for even $n$, training on even $n$ works better than training on odd $n$, and vice versa. This is consistent with the observations of Sec. 3.
We also note that the deep neural network based model provides better predictions than the linear regression model, with a consistent improvement as $n$ increases. In particular, the trained models predict the Cheeger constants of the largest graphs in our dataset to within 3% accuracy on average.
While we observe a low average $\bar{\delta}$, the standard deviation of $\delta$ is also low, at about 4% throughout the range of $n$, thus ensuring the reliability of the predictions. This is shown in Fig. 8 below.
In this paper, we have studied the relevance of the spectrum of a graph in estimating $h(G)$. We find that $h(G)$ is strongly dependent on $\lambda_1$ and $\lambda_2$, and that this correlation is largely linear with a small non-linear component, as confirmed by the machine learning analysis. We have also demonstrated that a deep neural network moderately trained on the relationship between the spectrum and $h(G)$ for small graphs can effectively estimate the Cheeger constant of a larger graph with high statistical accuracy. We believe that an optimal use of this approach could provide a powerful and efficient tool for studying the connectivity of large regular graphs.
- [1] N. Biggs. Algebraic graph theory. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 1993.
- [2] M. R. Garey, D. S. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems. Theoret. Comput. Sci., 1(3):237–267, 1976.
- [3] C. Godsil and G. Royle. Algebraic graph theory, volume 207 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2001.
- [4] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bull. Amer. Math. Soc. (N.S.), 43(4):439–561, 2006.
- [5] M. Krebs and A. Shaheen. Expander families and Cayley graphs: A beginner's guide. Oxford University Press, Oxford, 2011.
- [6] T. Leighton and S. Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J. ACM, 46(6):787–832, 1999.
- [7] A. Lubotzky. Discrete groups, expanding graphs and invariant measures. Modern Birkhäuser Classics. Birkhäuser Verlag, Basel, 2010. With an appendix by Jonathan D. Rogawski; reprint of the 1994 edition.
- [8] A. Lubotzky. Expander graphs in pure and applied mathematics. Bull. Amer. Math. Soc. (N.S.), 49(1):113–162, 2012.
- [9] B. Mohar. Isoperimetric numbers of graphs. J. Combin. Theory Ser. B, 47(3):274–291, 1989.
- [10] M. R. Murty. Ramanujan graphs. J. Ramanujan Math. Soc., 18(1):33–52, 2003.
- [11] A. Steger and N. C. Wormald. Generating random regular graphs quickly. Combin. Probab. Comput., 8(4):377–396, 1999.