On the complexity of logistic regression models

03/01/2019
by   Nicola Bulso, et al.

We investigate the complexity of logistic regression models, defined by counting the number of indistinguishable distributions that the model can represent (Balasubramanian, 1997). We find that the complexity of logistic models with binary inputs depends not only on the number of parameters but also, in a non-trivial way, on the distribution of the inputs, which standard treatments of complexity do not address. In particular, we observe that correlations among inputs induce effective dependencies among parameters, thus constraining the model and, consequently, reducing its complexity. We derive simple relations for the upper and lower bounds of the complexity. Furthermore, we show analytically that defining the model parameters on a finite support rather than on the entire real axis decreases the complexity in a manner that critically depends on the size of the domain. Based on our findings, we propose a novel model selection criterion which takes into account the entropy of the input distribution. We test our proposal on the problem of selecting the input variables of a logistic regression model in a Bayesian Model Selection framework. In our numerical tests, we find that, while the reconstruction errors of standard model selection approaches (AIC, BIC, ℓ_1 regularization) strongly depend on the sparsity of the ground truth, the reconstruction error of our method is always close to the minimum in all conditions of sparsity, data size and strength of input correlations. Finally, we observe that, when considering categorical instead of binary inputs, in a simple and mathematically tractable case, the contribution of the alphabet size to the complexity is very small compared to that of the parameter space dimension. We further explore the issue by analysing the dataset of the "13 keys to the White House", a method for forecasting the outcomes of US presidential elections.


1 Introduction

In many studies and in different fields of research, a recurring task is to find the set of explanatory variables, among many candidates, which accurately predict the probability of occurrence of an event. Examples include identifying risk factors for developing a particular disease, finding the set of neurons in a population which drive the activity of another neuron, or discovering biomarkers from genomic data (Truett et al., 1967; Saeys et al., 2007; Hertz et al., 2013). When the event we want to predict is binary, such as when estimating the probability of developing a certain disease, a widely employed statistical method is Logistic Regression (Cox, 1958).

The logistic regression model is also interesting because it is the building block of more sophisticated architectures. For instance, under the so-called pseudo-likelihood approximation, the difficult task of learning an Ising model reduces to that of learning many logistic regression models (Besag, 1972; Ravikumar et al., 2010; Aurell and Ekeberg, 2012). Another example is the Kinetic Ising model (Marre et al., 2009; Roudi and Hertz, 2011; Battistin et al., 2015). Both models are popular methods for analysing neural assemblies in computational neuroscience (Schneidman et al., 2006; Roudi et al., 2015).

Selecting a set of inputs or predictors (in the statistics and machine learning literature, the term "input" is often used interchangeably with "predictor", "feature", "covariate" or, more classically, "independent variable" (Friedman et al., 2001); throughout the paper we will mainly use the terms "input" or "predictor") among some candidates corresponds to comparing different models for the output variable. We thus search for models with a good trade-off between goodness of fit and model complexity: very simple models might not be able to capture the relevant inputs which are driving the variable we are interested in, whereas complex models are more prone to overfitting (Friedman et al., 2001). Assessing the complexity of a model is therefore an important question for model selection (see e.g. Bulso et al., 2016).

Classical approaches to Model Selection include the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Akaike Information Criterion (AIC) (Akaike, 1974). These criteria are derived from different perspectives as approximations of more general quantities in the limit of large samples. However, when the sample is not large enough, these methods might require careful interpretation and application. In this regime, prior information becomes important in the inference process of Bayesian Model Selection. Yet, such information might not be available a priori. In these cases it might be advantageous to use an uninformative prior, such as Jeffreys prior (Jeffreys, 1946).

As shown in Balasubramanian (1997), the choice of Jeffreys prior in Bayesian Model Selection corresponds to measuring model complexity from a geometric perspective, namely by counting the number of indistinguishable distributions that the model can represent. In this perspective, complex models are models which are able to describe a wide range of probability distributions. In Myung et al. (2000), the authors further show how this approach to characterizing model complexity relates to the Minimum Description Length (MDL) principle (Rissanen, 1987, 1996).

Jeffreys prior is a commonly used prior in Bayesian analysis. It exhibits many desirable features, among which being uninformative and parametrization invariant (George E. P. Box, 1973) (yet see also Kass and Wasserman, 1996). For Generalized Linear Models, its theoretical properties have been investigated in Ibrahim and Laud (1991). In particular, for binomial data it has been shown that the penalization related to the model complexity under Jeffreys prior depends not only on the number of explanatory variables, which determines the dimensionality of the model, but also on their distribution (Chen et al., 2008). Interestingly, Jeffreys prior also arises in the large sample size limit from other perspectives in recently proposed approaches to Model Selection (LaMont and Wiggins, 2017; Mattingly et al., 2018). In particular, Mattingly et al. (2018) show, among other things, that an optimal prior obtained by maximizing the mutual information between the parameters and their expected data approaches the Jeffreys prior in the limit of abundant data.

In this paper, we address the question of how complex a given logistic model is. We systematically characterize how the complexity depends on the input distribution and show how the information on model complexity can be used to devise a successful model selection criterion which improves over other well known criteria (BIC, AIC, ℓ_1 regularization), whose performances instead strongly depend on the sparsity of the data generating model.

The paper is organised as follows. In section 2, we introduce the notation and define the measure of complexity that we use in the paper. We then proceed to study, analytically and numerically, the complexity of logistic regression models with binary variables by varying the dimensionality of the model and the correlations among inputs. We also investigate the special case of models whose parameters are defined on a finite support (regularized models). In section 3, we exploit our results on the complexity to devise a novel model selection criterion and test it on simulated data. Finally, in section 4, we investigate the special problem of degenerate models, namely models where parameters might be constrained to be equal. This is an interesting case in model selection since standard recipes based on counting free parameters are not able to distinguish between, for instance, a logistic model with only one parameter multiplying one input and a model with the same parameter multiplying many different inputs. Furthermore, we will see how this issue relates to the problem of evaluating the complexity of categorical variables. We report analytical and numerical results on the complexity along with an application concerning U.S. presidential election forecasts.

2 The complexity of logistic regression models with binary inputs

We consider a binary variable s ∈ {−1, +1} distributed according to a logistic model

(1)   p(s \mid \sigma, \theta) = \frac{e^{\, s\,(h + \sum_{i=1}^{n} J_i \sigma_i)}}{2\cosh\!\left(h + \sum_{i=1}^{n} J_i \sigma_i\right)}

with parameters θ = (h, J_1, …, J_n) and binary inputs σ = (σ_1, …, σ_n) ∈ {−1, +1}^n. The model is equivalent to a single-layer neural network with n binary input nodes, σ_i, connected to an output node, s, through the weights J_i and the bias h. Now suppose that we are given T measurements of the inputs, σ^(1), …, σ^(T), and of the output, s^(1), …, s^(T), and that the output data have been generated according to the conditional probability distribution in equation 1 for a given choice of the values of the parameters (which can also be zero). We are interested in inferring, from the data, the set of non-zero parameters (non-zero components of the vector θ). This is a problem of input selection where, given n candidate inputs, we ask which of them are relevant for the variable we want to explain, i.e. which of them is expected to have any influence (non-zero connection) on the output. We proceed following a Bayesian model selection approach. In this framework each possible combination of non-zero parameters represents a model (a set of candidate inputs) and, given n inputs, there are 2^n possible models (from now on, by "models" we mean non-degenerate models, i.e. models whose non-zero parameters are independent of each other; we come back to degenerate models only in the last section).

The posterior probability of a model M given the data, P(M | s, σ), can be expressed by Bayes' rule as P(M | s, σ) ∝ P(s | σ, M) P(M), where P(M) is the prior over models and

(2)   P(s \mid \sigma, M) = \int d\theta \; w(\theta)\, e^{\,T L(\theta)}

is the probability of measuring s given σ and model M. In the last equation, w(θ) is the prior over parameters and L(θ) is the normalised log-likelihood function

(3)   L(\theta) = \overline{s\, h(\sigma)} \;-\; \overline{\log 2\cosh h(\sigma)}

where h(σ) = h + Σ_i J_i σ_i. The overbars in equation 3 stand for averages over all T observations, namely \overline{f} = (1/T) Σ_{t=1}^{T} f(s^{(t)}, σ^{(t)}) for any function f. Thus, the set of non-zero parameters that has generated the output from the input can be inferred by searching for the model that maximizes P(M | s, σ) among all candidates. When all models are a priori equally likely, this procedure is equivalent to finding the model that maximizes P(s | σ, M), which is given by equation 2.
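As a concrete illustration of equation 3, the following minimal sketch (not from the paper; the array names and the ±1 data encoding are assumptions of this example) computes the normalised log-likelihood with NumPy.

```python
import numpy as np

def normalized_log_likelihood(theta, sigma, s):
    """Normalised log-likelihood L(theta) of the logistic model in equation 1.

    theta : array (n + 1,); theta[0] is the bias h, theta[1:] are the weights J_i
    sigma : array (T, n) of inputs with entries in {-1, +1}
    s     : array (T,) of outputs with entries in {-1, +1}
    """
    field = theta[0] + sigma @ theta[1:]            # h + sum_i J_i * sigma_i per observation
    # log(2*cosh(x)) written as logaddexp(x, -x) for numerical stability
    return np.mean(s * field - np.logaddexp(field, -field))
```

Maximising this function over theta gives the maximum likelihood estimate that enters the first term of the expansion in equation 5 below.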

The choice of the prior over parameters is an important issue when the sample is not very large. In this paper we use Jeffreys prior. Our choice is motivated by the geometrical interpretation of model complexity that originates from it and by its relation with MDL (Balasubramanian, 1997; Myung et al., 2000). Jeffreys prior is defined as follows

(4)   w(\theta) = \frac{\sqrt{\det F(\theta)}}{\int d\theta' \sqrt{\det F(\theta')}}

where F(θ) is the Fisher Information, whose elements are F_{ij}(θ) = −E[∂² log p(s | σ, θ) / ∂θ_i ∂θ_j], with E denoting the expected value with respect to the model distribution p(s | σ, θ). Since the elements of the Hessian matrix of −L(θ), calculated from equation 3, do not depend on the random variable s, it follows that the Fisher Information matrix is simply equal to this Hessian.

With this choice for the prior, the probability in equation 2 can be approximated by the following expression (see Balasubramanian, 1997, for instance):

(5)   -\log P(s \mid \sigma, M) \approx -T L(\hat{\theta}) + \frac{k}{2}\log\frac{T}{2\pi} + \log \int d\theta \sqrt{\det F(\theta)} + \dots

where k is the number of non-zero parameters (non-zero components of the parameter vector θ) employed by model M and \hat{\theta} is the maximum likelihood estimate.

In MDL theory the first three terms in equation 5, all together, represent the stochastic complexity of the string s given the model M and the inputs σ, which is the length of the shortest code that can be obtained by encoding the data with the parametric family (Rissanen, 1987). The penalty terms in MDL (the second and third term in equation 5), also known as the geometric complexity (Myung et al., 2000), are independent of the specific instance of the output data and rather related to intrinsic, geometric properties of the parametric family. Yet it is worth noting that, since the Fisher Information in equation 5 depends on the input data σ, the complexity also depends on the input distribution, as we will discuss thoroughly in the paper. Finally, the second term in equation 5 corresponds to the well known Bayesian Information Criterion (Schwarz, 1978).

In the following sections we will investigate the typical properties of the geometric complexity for logistic regression models with binary variables (equation 1). In particular, we will focus on the penalty term

(6)   c(M) = \log \int d\theta \sqrt{\det F(\theta)}

and we will study its dependence on the model size and input distribution. For logistic regression models, as defined by equation 1, the elements of the Fisher Information matrix can be written explicitly in the form

(7)   F_{ij}(\theta) = \sum_{\mu} \nu^{\mu} \, \frac{\sigma_i^{\mu} \sigma_j^{\mu}}{\cosh^2\!\left(h(\sigma^{\mu})\right)}

where ν^μ is the frequency with which the configuration σ^μ is observed in the data, with Σ_μ ν^μ = 1. Beyond the interesting theoretical aspects, the complexity represents a higher order term in the expansion of the posterior 5 and it is therefore relevant for model selection, particularly when the sample size is not very large. From now on we will refer to c(M) in equation 6 as the complexity. Notice that we will also use the term complexity to refer to the volume e^{c(M)} = ∫ dθ √det F(θ); the difference will be clear from the context.
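The quantities entering equations 6 and 7 are straightforward to compute once the distinct input configurations and their empirical frequencies are known. Below is a small sketch (again an illustration, not the paper's code) for the weights-only model used in the following sections.

```python
import numpy as np

def fisher_information(J, configs, freqs):
    """Fisher Information of a weights-only logistic model (equation 7).

    J       : array (n,) of weights
    configs : array (m, n) of distinct +-1 input configurations sigma^mu
    freqs   : array (m,) of empirical frequencies nu^mu (summing to one)
    """
    h = configs @ J                                  # field h(sigma^mu) for each configuration
    w = freqs / np.cosh(h) ** 2                      # nu^mu / cosh^2(h^mu)
    return (configs * w[:, None]).T @ configs        # sum_mu w_mu sigma_i^mu sigma_j^mu

def sqrt_det_fisher(J, configs, freqs):
    """Integrand of the complexity in equation 6, sqrt(det F(J))."""
    return np.sqrt(max(np.linalg.det(fisher_information(J, configs, freqs)), 0.0))
```

The complexity c is then the logarithm of the integral of sqrt_det_fisher over the weights, which is what the Monte Carlo estimates used later in this section approximate.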

2.1 A worked example: n = 2

We start by considering simple cases which are analytically solvable. The simplest case is when only one parameter is different from zero. When the only non-zero parameter is the bias, i.e. θ = (h, 0, …, 0), the complexity is simply c = log π. The same result is also retrieved when the only non-zero parameter is a weight, say J_1, and the associated input is observed in the configuration σ_1 = +1 with frequency ν: in this case F(J_1) = ν/cosh²(J_1) + (1 − ν)/cosh²(J_1) = 1/cosh²(J_1), so that again c = log π.

We now consider a model in which only two weights are different from zero, i.e. θ = (0, J_1, J_2). The Fisher information is now a two by two matrix whose determinant can be easily calculated: det F = 4ν(1 − ν) / [cosh²(J_1 + J_2) cosh²(J_1 − J_2)], where ν is the frequency with which the two inputs, σ_1 and σ_2, have been observed with the same sign. The complexity is therefore given by the integral c = log ∫ dJ_1 dJ_2 √det F, which can be solved by the change of variables x = J_1 + J_2 and y = J_1 − J_2, with the result

c = 2 \log \pi + \tfrac{1}{2}\log\left[\nu(1-\nu)\right].

As is clear from the above formula, the complexity varies with the degree of correlation between the two inputs. It reaches its maximum, c = log(π²/2), when the two inputs are observed with the same sign half of the time, ν = 1/2. It is worth noting that the maximum value of the complexity of the logistic regression model with two weights is smaller than the complexity of a model for two independent outputs, each explained by a logistic regression model with one parameter, which is 2 log π. In both cases the models employ two parameters. However, in the latter case, the parameters are completely independent of each other and the volume spanned in the space of distributions is the maximum achievable given the absence of constraints. Following the same argument, for a generic number of parameters n, the complexity of a logistic regression model should always be smaller than n log π.
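The closed form above is easy to check numerically. The sketch below (grid half-width and resolution are arbitrary choices of this illustration) integrates √det F on a grid and compares it with 2 log π + ½ log[ν(1 − ν)].

```python
import numpy as np

def complexity_two_weights(nu, half_width=25.0, num=1200):
    """Compare a brute-force evaluation of c for the two-weight model with the closed form."""
    J = np.linspace(-half_width, half_width, num)
    dJ = J[1] - J[0]
    J1, J2 = np.meshgrid(J, J, indexing="ij")
    # sqrt(det F) = 2 sqrt(nu (1 - nu)) / (cosh(J1 + J2) cosh(J1 - J2))
    integrand = 2.0 * np.sqrt(nu * (1.0 - nu)) / (np.cosh(J1 + J2) * np.cosh(J1 - J2))
    numeric = np.log(integrand.sum() * dJ * dJ)      # simple Riemann sum
    closed_form = 2.0 * np.log(np.pi) + 0.5 * np.log(nu * (1.0 - nu))
    return numeric, closed_form

print(complexity_two_weights(0.5))   # both values approach log(pi**2 / 2)
```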

In the case of strongly correlated inputs, namely when ν is close to one or to zero, the complexity reaches its minimum. In particular, for a sample of size T, the smallest non-zero value of ν is 1/T and the largest value different from one is 1 − 1/T. In these cases, the complexity attains its lower (non-trivial) bound, equal to 2 log π + (1/2) log[(1/T)(1 − 1/T)] ≈ 2 log π − (1/2) log T for large T. Intuitively, correlations among inputs induce effective dependencies among parameters, thus constraining the model into a reduced volume in the space of distributions. In other words, the stronger the correlations, the tighter the constraints and, consequently, the lower the complexity. This also represents a hard case for model selection. In fact, in the case of two strongly correlated inputs, the three models with at least one non-zero weight parameter multiplying the inputs will have approximately the same maximum likelihood values. It follows that the selected model will strongly depend on the assumptions underlying the model selection recipe employed.

In the limiting case in which ν = 0 or ν = 1, we have det F = 0 and we say that the model is redundant: the model is constrained to a lower dimensional manifold defined by the deterministic dependence between the parameters induced by equal (ν = 1) or specular (ν = 0) inputs. In this case, the T by n matrix of input data, also called the design matrix, is not full rank. In all cases which are not redundant, i.e. 0 < ν < 1, the design matrix is full rank, the complexity is finite and the BIC term is always larger than the complexity term in absolute value, consistent with the expansion of equation 5 and the geometrical interpretation of the penalization factors as a ratio between volumes (Balasubramanian, 1997).

In the following we derive and discuss some results on the complexity of logistic regression models as a function of the dimension of the model and of the shape of the input distribution extending the observations drawn from these simple cases.

2.2 Bounds on geometric complexity

In this section, we derive and discuss analytical results for the upper and lower bounds of the complexity. Additionally, we support and further investigate the results with numerical simulations. Without loss of generality, we consider models with only weight parameters, namely h = 0 in the notation introduced at the beginning of section 2. In fact, all results derived in this section easily extend to models with a bias parameter, given that the bias term acting on the output can be thought of as a weight multiplying a constant input.

The Fisher Information defined in equation 7 can be conveniently rewritten as F = X X^T by defining the n × 2^n matrix X with entries X_i^μ = σ_i^μ √(ν^μ) / cosh(h(σ^μ)), where we use subscripts and Roman letters for component indices (ranging from 1 to n) and superscripts and Greek letters for configuration indices (ranging from 1 to 2^n). Moreover, the matrix X can be expressed as X = V D, namely as the product of the matrix V whose components are V_i^μ = σ_i^μ and the diagonal matrix D whose elements are D^{μμ} = √(ν^μ) / cosh(h(σ^μ)). These manipulations allow us to exploit the Cauchy–Binet formula and write the complexity in a more compact form. In fact, by the Cauchy–Binet formula, the determinant of the Fisher Information reads det F = Σ_{\boldsymbol{\mu}} [det X_{\boldsymbol{\mu}}]², where X_{\boldsymbol{\mu}} is the square matrix obtained from X by selecting n configuration indices out of the 2^n possible ones. The selection is labelled by the vector \boldsymbol{\mu} = (μ_1, …, μ_n) and the sum ranges over all possible selections. Finally, since det X_{\boldsymbol{\mu}} = det V_{\boldsymbol{\mu}} Π_{a=1}^{n} √(ν^{μ_a}) / cosh(h(σ^{μ_a})), the determinant of the Fisher Information can be written as det F = Σ_{\boldsymbol{\mu}} [det V_{\boldsymbol{\mu}}]² Π_{a} ν^{μ_a} / cosh²(h(σ^{μ_a})), and consequently the complexity takes the form

(8)   c = \log \int dJ \, \sqrt{\; \sum_{\boldsymbol{\mu}} \left[\det V_{\boldsymbol{\mu}}\right]^2 \prod_{a=1}^{n} \frac{\nu^{\mu_a}}{\cosh^2 h(\sigma^{\mu_a})} }

From the above expression we can now derive an upper bound on the complexity. By employing the triangle inequality in equation 8, we move the sum over \boldsymbol{\mu} outside the square root, so that e^c ≤ Σ_{\boldsymbol{\mu}} |det V_{\boldsymbol{\mu}}| I_{\boldsymbol{\mu}}, where I_{\boldsymbol{\mu}} = ∫ dJ Π_a √(ν^{μ_a}) / cosh(h(σ^{μ_a})). The integral can be evaluated by the change of variables J → (h(σ^{μ_1}), …, h(σ^{μ_n})) and turns out to be equal to π^n Π_a √(ν^{μ_a}) / |det V_{\boldsymbol{\mu}}|. Notice that if det V_{\boldsymbol{\mu}} = 0, there is no contribution to the summation in expression 8. Therefore the factors |det V_{\boldsymbol{\mu}}| cancel each other and the sum over \boldsymbol{\mu} is restricted to those selections for which det V_{\boldsymbol{\mu}} ≠ 0. The final result is the following simple inequality

(9)   c \;\le\; n \log \pi + \log \sum_{\boldsymbol{\mu} : \det V_{\boldsymbol{\mu}} \neq 0} \; \prod_{a=1}^{n} \sqrt{\nu^{\mu_a}}

where the equal sign holds when only n linearly independent configurations are present in the input data matrix (design matrix) since, in this case, there is only one non-vanishing selection in equation 8. Notice that equation 9 depends both on the dimensionality n and on the input distribution {ν^μ}.

First, we consider the case in which the input distribution is localised on few configurations. In the limiting case in which the distribution is peaked around only one configuration σ*, i.e. ν^μ = δ_{σ^μ, σ*}, where δ is the Kronecker delta (the most localised distribution), det F is zero. As we already observed in section 2.1 for models with two weights, in this extreme case the design matrix is not full rank. Consequently the parametrization is redundant: a model with only one parameter would have sufficed to explain the behaviour of the output variable in this particular case. Thus, in general, redundancy is caused by dependencies among the parameters induced by an "extremely localised" distribution of the input configurations. Those dependencies constrain the model to a lower dimensional manifold whose volume is of zero measure. Whenever this happens, the model represents a redundant description of reality and one should look for a simpler explanation by reducing the number of parameters and/or combining inputs.

Moreover, it is rather intuitive and easy to see from equation 9 that the parametrization is not redundant if there are at least n linearly independent configurations σ^μ with ν^μ > 0, that is, if the design matrix is full rank. In the case of exactly n linearly independent observed configurations there is only one selection with det V_{\boldsymbol{\mu}} ≠ 0, and equation 9 holds with the equal sign: the result is simply c = n log π + (1/2) Σ_{μ=1}^{n} log ν^μ. In this case, given a sample of size T, the value of the complexity can vary between two extremes. The upper value can be derived by maximizing Σ_μ log ν^μ while enforcing the normalisation of the frequencies through a Lagrange multiplier, and is achieved when the n linearly independent configurations are observed an equal number of times, ν^μ = 1/n, giving n log π − (n/2) log n. The lower value can be verified numerically for small values of n and T and corresponds to the situation in which all configurations but one are observed only once, i.e. ν^μ = 1/T for n − 1 configurations and 1 − (n − 1)/T for the remaining one, which gives approximately n log π − (n/2) log T for T ≫ n. The difference between these two values tends to zero as T approaches n from above, while it grows large as T and n increase; in fact, the ratio of the corresponding volumes scales roughly as (T/n)^{n/2}.

Based on the fact that a more localised input distribution would lead to a redundant parametrization (a vanishing volume ∫ dθ √det F), and given that equation 9 and numerical simulations (see Figure 1) suggest that a more spread-out distribution generally leads to an equal or larger value of the complexity, we expect the lower value presented above, approximately n log π − (n/2) log T, to represent the minimum attainable (non-zero) value of the complexity. Indeed, for n = 2 it matches the lower bound we found in section 2.1.

Interestingly, in this "most-localised scenario", the complexity decreases instead of increasing as the dimensionality of the model increases (it behaves as n log π − (n/2) log T, which decreases with n as soon as T > π²). This means that, although adding a new parameter still increases the dimensionality of the model (larger BIC penalization), it also results in a more constrained model (lower complexity).

In order to support these results and further investigate the dependence of the complexity on the input distribution, we resort to Monte Carlo simulations for estimating the integral ∫ dJ √det F(J) (details of the simulations are described in Supplemental Information, section 1).
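The paper's own Monte Carlo scheme is described in its Supplemental Information; as a stand-in, the sketch below estimates e^c = ∫ dJ √det F by importance sampling, drawing each weight from a hyperbolic-secant proposal q(x) = sech(x)/π (a choice made for this illustration because the integrand decays like a product of sech factors).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sech(shape):
    """Draw from the proposal density q(x) = sech(x) / pi via its inverse CDF."""
    u = rng.uniform(size=shape)
    return np.log(np.tan(0.5 * np.pi * u))

def mc_complexity(configs, freqs, num_samples=50_000):
    """Importance-sampling estimate of c = log int dJ sqrt(det F(J))."""
    m, n = configs.shape
    J = sample_sech((num_samples, n))
    log_q = -np.log(np.pi * np.cosh(J)).sum(axis=1)          # log of the proposal density
    log_vals = np.full(num_samples, -np.inf)
    for t in range(num_samples):
        h = configs @ J[t]
        F = (configs * (freqs / np.cosh(h) ** 2)[:, None]).T @ configs
        det = np.linalg.det(F)
        if det > 0:
            log_vals[t] = 0.5 * np.log(det) - log_q[t]       # log of the importance weight
    vmax = log_vals.max()                                     # stable log-mean-exp
    return vmax + np.log(np.mean(np.exp(log_vals - vmax)))

# example: n = 3 inputs, all 2^n configurations observed equally often
n = 3
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)] for k in range(2 ** n)])
freqs = np.full(2 ** n, 1.0 / 2 ** n)
print(mc_complexity(configs, freqs))
```

Setting some entries of freqs to zero reproduces the qualitative behaviour shown in Figure 1: the estimate drops toward n log π + ½ Σ_μ log ν^μ when only n independent configurations remain, and the volume vanishes below that.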

As a first step, we assume an input distribution which is uniform and different from zero only on a limited number N_c of configurations. The parameter N_c tunes the localisation of the input distribution or, in other words, the strength of the input correlations. We evaluated the complexity at varying N_c over the entire range, from N_c = 1 to N_c = 2^n. The results are shown in Figure 1.

Figure 1: The complexity at varying degrees of localisation of the input distribution, from N_c = 1 (the inputs are frozen in one configuration) to N_c = 2^n (the inputs are observed in all possible configurations the same number of times). The values of the complexity are enclosed between two limits: n log π − (n/2) log n for N_c = n linearly independent configurations (bottom dashed line) and the uniform-distribution upper bound for N_c = 2^n (top dashed line). For N_c < n the complexity is zero. The different lines are 10 different random paths: starting from n linearly independent configurations and randomly adding configurations until reaching N_c = 2^n. For a given value of N_c, the complexity exhibits a high variability which depends on the degree of independence of the selected subset of configurations. Each estimate of the integral in the figure is the result of an average over 20 different Monte Carlo estimates. Notice that the estimates are less accurate as n grows large, due to an increasing variability of the integrand (see Supplemental Information, section 1).

As predicted, the complexity is zero if fewer than n configurations have been observed at least once and, when the input distribution is uniform over exactly n linearly independent configurations, the complexity jumps to n log π − (n/2) log n (bottom black dashed line). For N_c > n, the value of the complexity depends on the degree of independence of the configurations, namely on the volumes spanned by each subset of configurations. Therefore, for N_c > n the complexity exhibits a high variability while generally increasing with N_c toward an upper bound which is achieved when the input distribution is uniform over all 2^n possible configurations (top black dashed line). This is in line with what was observed for the case n = 2 in section 2.1, where a uniform distribution leads to the model with the largest complexity.

Unfortunately, when the distribution is uniform, the inequality 9 provides a trivial upper bound, namely an upper bound larger than the value n log π expected from the simple argument given in section 2.1.

The upper bound of the complexity, obtained by evaluating the integral with Monte Carlo simulations assuming a uniform distribution over the inputs, is shown in Figure 2 a) at varying n, the number of inputs. Interestingly, the trend is consistent, within error bars, with a simple closed-form function of n (black dotted line), which is also reported in Figure 1 (black dashed line) for comparison. As expected, the upper bound is found to be smaller than n log π.

Figure 2: a) Monte Carlo evaluations of the complexity assuming a uniform input distribution, as a function of the number of inputs n (red line). The error bars in the figure are standard deviations over independent estimates of the integral. For comparison, a simple closed-form function of n is plotted (black dotted line); b) The complexity at varying degrees of localisation of the input distribution, from an almost uniform distribution (the inputs are observed in all possible configurations roughly the same number of times) to an almost frozen one (the inputs are nearly locked into a specific configuration). The complexity (red full line) ranges, as expected, between the lower bound discussed above (bottom dashed line) and the uniform-distribution upper bound (top dashed line). Each estimate of the integral in the figure is the result of an average over 100 different Monte Carlo estimates.

Next, we assume that the input distribution follows an Ising model with a uniform pairwise coupling β and a small field introduced to break the symmetry between the two configurations with all inputs equal. The localisation of the distribution is regulated by the parameter β: very small values of β produce an almost uniform distribution, whereas for larger values the distribution becomes more and more localised around the configuration with all inputs equal to one. As before, we used Monte Carlo simulations to estimate the integral for fixed n and for different values of β. The results are shown in Figure 2 b) with red lines. As expected, for small β the curves approach the upper bound (top black dashed line) and, when increasing β, the complexity decreases until some point around the lower bound (bottom black dashed line).

As a final remark, in Appendix A we prove that, due to the parity of the hyperbolic cosine function, the complexity reaches its upper bound already when the input distribution is uniform over the configurations of a subset of n − 1 inputs. This is evident in Figure 1 for the smallest values of n: the complexity reaches the upper bound as soon as all possible patterns relative to a subset of n − 1 inputs are observed the same number of times. For larger values of n this is not observed because none of the 10 different random paths in the simulation passes through a point in which the distribution is uniform over the patterns relative to a subset of n − 1 inputs. In particular, as observed in Appendix A, this fact implies that the upper bound of the complexity for a model with n inputs and no bias term on the output variable is equal to that of a model with n − 1 inputs and the bias.

2.3 The emergence of statistical constraints in regularized models

We now consider logistic models whose parameters are defined on a finite support, J_i ∈ [−Λ, Λ] for i = 1, …, n, with Λ being any finite positive number, and we study the effect of this effective regularization on the model complexity. In particular, we focus on the case of uniformly distributed inputs, ν^μ = 1/2^n for all μ, and compare with the results obtained in the previous section for un-regularized models.

In Appendix B we derive a fully analytical expression for the complexity of such models in the limit of an indefinitely large input layer. In this regime, the complexity can be approximated by the following equation

(10)

As is clear from the above expression, if the parametric family is defined in a box with half length Λ, the volume occupied by the parametric family in the space of distributions starts shrinking after a certain critical dimension, in contrast with the steady exponential growth of the volume e^c observed for un-regularized models. More precisely, beyond the critical dimension, increasing the number of inputs makes the model more constrained, although it still adds further degrees of freedom.

Intuitively this is due to the emergence of statistical constraints, namely the applicability of Central Limit arguments as shown in Appendix B, which induce effective dependencies among the parameters. In fact, in the proof of equation 10 given in Appendix B, the assumptions of a large number of uniformly distributed inputs and of parameters defined on a finite support allow us to invoke the Central Limit Theorem (notice that the requirement of a finite support is the crucial assumption in the derivation and makes the difference with respect to the previous cases of un-regularized models), which implies that, for any choice of the parameter values, their linear combination with the inputs is constrained to follow a Gaussian distribution with zero mean. This is similar to what was observed in the previous section in the case of the most-localised input distribution, where the complexity decreases with increasing dimensionality of the model. In that case, the effective dependencies among parameters were induced by the correlations among inputs while, in this case, they arise from statistical arguments.

Finally, the fact that the complexity starts decreasing after a certain critical dimension in regularized models is expected to hold for any non-trivial input distribution, given that in this section we proved it for the uniform distribution, which corresponds to the upper bound in the un-regularized case.
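A quick numerical illustration of this effect (not the Appendix B derivation; the box half-length and sample sizes are arbitrary choices) is to estimate the volume ∫_{[−Λ,Λ]^n} dJ √det F by plain Monte Carlo for uniformly distributed inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_complexity(n, box=0.5, num_samples=20_000):
    """Plain-MC estimate of log int_{[-box, box]^n} dJ sqrt(det F(J))
    for a weights-only model with uniformly distributed +-1 inputs."""
    configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)]
                        for k in range(2 ** n)])
    freqs = np.full(2 ** n, 1.0 / 2 ** n)
    J = rng.uniform(-box, box, size=(num_samples, n))
    vals = np.empty(num_samples)
    for t in range(num_samples):
        h = configs @ J[t]
        F = (configs * (freqs / np.cosh(h) ** 2)[:, None]).T @ configs
        vals[t] = np.sqrt(max(np.linalg.det(F), 0.0))
    return np.log((2.0 * box) ** n * vals.mean())    # sample mean times the box volume

for n in range(1, 9):
    print(n, round(box_complexity(n), 3))
```

With this small box the decrease sets in almost immediately; enlarging the box pushes the turnover to larger n, in line with the dependence on the domain size discussed above, whereas the unconstrained estimate of the earlier sketch keeps growing with n.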

3 Model selection

In this section, we apply the outcomes of the previous sections to the problem of model selection with logistic regression models and binary variables. As outlined at the beginning of section 2, we are interested in solving the following problem: given T measurements of the input and output variables in a logistic model with binary inputs, what is the model, i.e. the set of non-zero parameters, that has most likely generated the output data given the inputs? Since each weight parameter multiplies a different input in a logistic model, this is also an input selection problem: which subset of the inputs is relevant for predicting the output? In the following, we first derive a novel model selection criterion based on the posterior expansion and the results on the complexity obtained in section 2. We then apply this criterion to investigate the quality of model recovery and compare its performance with that achieved by other standard model selection recipes. As before, and without any loss of generality, we neglect the bias and consider only weight parameters, i.e. h = 0 in equation 1, since, if needed, the bias can always be introduced as a weight parameter multiplying an additional constant input.

3.1 A novel model selection criterion

Given n potential predictors (inputs) for the output variable, there are 2^n possible models, each of which tries to explain the output with a different subset of inputs. In other words, each model corresponds to a different subset of non-zero weight parameters. In a Bayesian Model Selection framework, we compare models according to their posterior probability. Assuming that all these models are a priori equally likely, this coincides with ranking models based on their likelihood P(s | σ, M). In section 2 we saw that −log P(s | σ, M) can be approximated as the sum of −T L(θ̂), the likelihood of the data calculated at the maximum likelihood estimates of the parameters, and a penalization factor. The latter is given by (k/2) log(T/2π) + c(M), i.e. the sum of the BIC and the complexity terms respectively, where k is the number of predictors employed by the model. In the last section we found that the value of c depends on both the localisation of the input distribution and the dimensionality of the model: it is upper bounded by a value close to k log π, and it decreases down to some value around k log π − (k/2) log T as the localisation of the input distribution increases. In the first case, the penalisation would be approximately (k/2) log(T/2π) + k log π, which is dominated by a linear dependence on k and a logarithmic dependence on T, similar to the case of BIC. In the second case, the penalisation would be approximately k log π − (k/2) log(2π), which is a much weaker, AIC-like penalisation, linear in k.

We now introduce the entropy of the input distribution as a measure of its localisation: the value of the entropy (in bits) approaches k, the dimension of the model, when the distribution of the k inputs approaches the uniform distribution, whereas it shrinks to some value around log_2 k when the distribution is localised on approximately k linearly independent configurations.
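In the numerical tests below, the entropy of a set of inputs is measured directly from the observed configurations. A minimal plug-in estimator (an assumption of this illustration; the paper's exact estimator is described in its Supplemental Information) is:

```python
import numpy as np

def input_entropy_bits(sigma):
    """Plug-in entropy, in bits, of the empirical distribution of input configurations.

    sigma : array (T, k) of +-1 inputs for the k predictors of a candidate model.
    Returns a value close to k for nearly uniform inputs and close to log2(m)
    when only m distinct configurations are observed."""
    _, counts = np.unique(sigma, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```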

Based on the results of the previous section and on the aforementioned measure of localisation, we introduce the following heuristic for calculating the penalization of a model with k inputs whose distribution has entropy S_k:

(11)

where S_n is the entropy of the full design matrix with all n predictors taken into account. We further motivate this formula in Supplemental Information (section 2), while, in the following, we provide the intuition behind it and discuss its properties and its performance on model selection tasks.

Equation 11 allows us to approximately interpolate between the expected trends derived in the limiting cases of a uniform and of a localised input distribution. In fact, when the input distribution is close to uniform, the largest contribution to the penalization exhibits the expected BIC-like trend, of order (k/2) log T, which represents the leading contribution in this regime: in order to observe a nearly uniform distribution over the input configurations, a sample size larger than the dimensionality of the configuration space is needed. Alternatively, one might think of this as the type of penalization which mostly affects sparse models, i.e. models with few predictors relative to the sample size.

On the other hand, when increasing the localisation of the distribution, the growth of the entropy with k is drastically reduced. This is typically the case for models with a large number of inputs (see also Figure 3 in Supplemental Information). In this case, the factor in the argument of the logarithm in equation 11 is counterbalanced by a large value of the entropy, leading to a weaker penalization, similar to what is obtained when substituting the lower bound of the complexity into the posterior expansion. Moreover, for very dense models and small datasets, the leading term of the penalization tends to an AIC-like one, of order k.

3.2 Model selection tests

In general, we expect that the results delivered by different model selection recipes will differ more from each other as the dimensionality of the models grows large. For example, the difference between the AIC and BIC penalization factors grows as k(log T − 2)/2. This is an important issue for real world datasets, for which the sample size is often not much larger than the number of candidate predictors. In our tests, we consider a fixed number n of binary inputs (a different value is considered in Supplemental Information) and several sample sizes T. As in section 2.2, we generate samples for the inputs according to an Ising model with a uniform pairwise coupling β and small fields introduced to break the symmetry between the two configurations with all inputs equal. The choice of an Ising distribution is motivated by the fact that it is easy to tune the localisation of the distribution by varying the parameter β.

Given the input data, we generate the output according to the conditional probability distribution of equation 1, employing only a random subset of the predictors. This is achieved by assigning non-zero values in the parameter vector only to the weights multiplying the selected predictors. The number of selected predictors, k_0, is decided according to the desired level of sparsity, 1 − k_0/n. Each non-zero weight is drawn independently from a fixed distribution whose support is bounded away from zero. After that, the parameter vector is normalized so that the effective field acting on the output is of order one regardless of the number of non-zero parameters in the model.
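A schematic version of this data-generating procedure is sketched below. The Ising sampler, the weight distribution and the 1/√k₀ normalisation are assumptions of this illustration (the paper's exact choices are in its Supplemental Information); the outputs are drawn from equation 1 with no bias.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ising_inputs(T, n, beta, bias=0.05, sweeps=50):
    """Correlated +-1 inputs from a fully connected Ising distribution
    p(sigma) ~ exp(beta * (sum_{i<j} sigma_i sigma_j / n + bias * sum_i sigma_i)),
    sampled with single-spin heat-bath updates."""
    samples = np.empty((T, n), dtype=int)
    sigma = rng.choice([-1, 1], size=n)
    for t in range(T):
        for _ in range(sweeps):                       # a few sweeps between stored samples
            for i in range(n):
                field = beta * ((sigma.sum() - sigma[i]) / n + bias)
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
                sigma[i] = 1 if rng.random() < p_plus else -1
        samples[t] = sigma
    return samples

def make_ground_truth(n, sparsity, j_min=0.5, j_max=1.0):
    """Sparse weight vector: k0 = (1 - sparsity) * n randomly chosen weights are non-zero,
    drawn uniformly from [j_min, j_max] with random signs, then scaled by 1/sqrt(k0)."""
    k0 = max(1, int(round((1.0 - sparsity) * n)))
    J = np.zeros(n)
    active = rng.choice(n, size=k0, replace=False)
    J[active] = rng.uniform(j_min, j_max, size=k0) * rng.choice([-1, 1], size=k0)
    return J / np.sqrt(k0)

def sample_outputs(sigma, J):
    """Outputs s in {-1, +1} drawn from the logistic model of equation 1 (h = 0)."""
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * sigma @ J))   # p(s = +1 | sigma)
    return np.where(rng.random(len(sigma)) < p_plus, 1, -1)
```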

Thus, after having defined the ground truth and generated samples from it, we aim at recovering its structure from the data by doing model comparison: we rank the models according to their maximum likelihood value and penalise them for their complexity by employing equation 11. We compared the performance with that obtained with some of the most common approaches, i.e. BIC (Schwarz, 1978), AIC (Akaike, 1974), and ℓ_1 regularization (Tibshirani, 1996; Koh et al., 2007) with a K-fold cross validation procedure for selecting the regularizer. Except for the ℓ_1 method, where the search in the space of models is embodied in the optimization, for the other methods we need to define the pool of candidate models. In fact, the set of all possible candidates grows exponentially with the number of predictors, so that an exhaustive search among all 2^n possibilities is impracticable for large values of n. We therefore resort to approximate procedures to walk through this huge model space. We use a decimation procedure (LeCun et al., 1990; Hassibi and Stork, 1993; Decelle and Ricci-Tersenghi, 2014): we start from the full model with all predictors connected to the output and estimate its parameters; the next model to include in the pool then employs all predictors as in the previous model except for the one whose corresponding interaction parameter has been inferred as having the smallest absolute value; at each step, we define new models by removing the parameter with the smallest absolute value until we reach the model with no active interactions at all, namely the model with the output independent of all candidate predictors. We have also tried a "forward" approach, starting from the null model and adding at each step the parameter that causes the biggest jump in the likelihood. The reconstruction errors obtained with the two procedures are found to be almost indistinguishable in our simulations and, therefore, we report only the results obtained with the decimation procedure. These errors are plotted in Figure 3 versus the sparsity of the ground truth and for different levels of localisation of the input distribution (columns) and sample size (rows). The errors are the mean fraction of misclassified non-zero/zero parameters over independent realisations of the same experiment, and the error bars are the corresponding standard deviations. In the figure, the sparsity level varies from zero, meaning that the ground truth is a model with all weights being non-zero, to a value close to one, for which most of the weights are set to zero.
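The decimation search and a BIC-style score can be sketched as follows (a simple gradient-ascent maximum-likelihood fit stands in for whatever optimiser the paper used, and bic_score is the place where the equation 11 penalty or the AIC penalty would be substituted).

```python
import numpy as np

def fit_logistic(sigma, s, steps=3000):
    """Maximum-likelihood weights of the bias-free +-1 logistic model by gradient ascent."""
    T, n = sigma.shape
    if n == 0:
        return np.zeros(0)
    J = np.zeros(n)
    lr = 1.0 / n                       # safe fixed step: the Hessian eigenvalues are at most n
    for _ in range(steps):
        grad = sigma.T @ (s - np.tanh(sigma @ J)) / T
        J += lr * grad
    return J

def total_log_likelihood(sigma, s, J):
    field = sigma @ J
    return float(np.sum(s * field - np.logaddexp(field, -field)))

def decimation_path(sigma, s):
    """Backward decimation: repeatedly drop the active weight with the smallest |J|."""
    active = list(range(sigma.shape[1]))
    path = []
    while active:
        J = fit_logistic(sigma[:, active], s)
        path.append((list(active), total_log_likelihood(sigma[:, active], s, J)))
        drop = active[int(np.argmin(np.abs(J)))]
        active = [a for a in active if a != drop]
    path.append(([], -len(s) * np.log(2.0)))          # empty model: p(s) = 1/2
    return path

def bic_score(loglik, k, T):
    """Penalised score (higher is better); swap in the equation 11 penalty here."""
    return loglik - 0.5 * k * np.log(T / (2.0 * np.pi))

# pick the best model along the decimation path:
# best_subset, _ = max(((a, bic_score(ll, len(a), len(s))) for a, ll in decimation_path(sigma, s)),
#                      key=lambda item: item[1])
```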

Figure 3: The reconstruction errors versus the sparsity of the ground truth, for different levels of localisation of the input distribution (columns) and sample size (rows). The errors are the mean fraction of misclassified active/inactive parameters over independent realisations of the same experiment, and the error bars are the corresponding standard deviations. We compare performance employing different model selection criteria: AIC (red), BIC (blue), equation 11 (yellow) and logistic regression with ℓ_1 regularization (purple), the latter only for a subset of the conditions.

Clearly, the reconstruction error decreases as the sample size increases for all methods, but more slowly for AIC. On the other hand, at increasing β, model retrieval becomes harder. Consistently, the worst reconstruction is observed for the largest value of β and the smallest sample size in Figure 3. In the latter case, the likelihood is quite flat, so that a very sparse model is typically selected by any model selection criterion. In fact, the information on the activity of the output provided by a new predictor which is highly correlated with the ones already employed in the model is expected to be small, because much of its variability is already encoded in the set of predictors in use. This is true even if this predictor is indeed present in the ground truth. On the other hand, when all the predictors are maximally different from each other (maximal information conveyed by each predictor) and the data length is large, the likelihood is expected to be quite sharp around the ground truth, allowing for a very good reconstruction. This is indeed the case of the smallest β and largest sample size in Figure 3, where each model selection recipe achieves its minimal reconstruction error.

BIC and ℓ_1 regularization employ a stronger penalization than AIC, which explains why, in the figure, these methods are always better than AIC at recovering sparse models, but worse when it comes to unveiling dense models. The proposed penalization in equation 11 acts against this imbalance in the recovery errors between sparse and dense models. In fact, it tends to match the BIC trend for high sparsity and the AIC one for low sparsity, resulting in the best model selection recipe in our simulations: in Figure 3, for any level of sparsity, the reconstruction error with our method is the smallest or gets close to the smallest one achieved by the other methods, whose performances instead strongly depend on the sparsity of the ground truth. In Supplemental Information (Figure 4), numerical experiments with a different size of the input layer further substantiate these results. This result is very important in applications, given that we are normally not given any information on the expected number of predictors (sparsity) and therefore might want to treat sparse and dense models on an equal footing. As shown in this section, our method achieves this goal, whereas standard alternatives typically introduce a bias toward some sparsity range.

4 Degenerate models

In this section we consider degenerate models, namely models for which some components of the parameter vector are constrained to be equal. Evaluating the complexity of degenerate models is particularly interesting for model selection, given that commonly used model selection criteria, such as the Bayesian Information Criterion and the Akaike Information Criterion, can distinguish models only on the basis of the number of parameters employed: for instance, a model with one parameter connecting a single input to the output would be considered equivalent to a model connecting many inputs to the output through the same parameter. For logistic regression models, it is easy to see that any degenerate model, i.e. any model built from a non-degenerate model by explicitly constraining a subset of parameters to be equal, can be mapped into a non-degenerate model with categorical inputs (from equation 1, the predictor attached to a degenerate parameter is the sum of all binary predictors connected to the output through that parameter, i.e. a categorical predictor). Therefore the additional complexity of a degenerate model with respect to a non-degenerate one with the same number of parameters is related to the additional complexity conveyed by categorical inputs with respect to binary ones.

In the following we consider a simple example of such models and discuss an application to a real dataset.

4.1 1-parameter degenerate models

The simplest example of such models is a logistic regression model in which n binary inputs σ_1, …, σ_n are connected to the output through weights tied together into a single parameter J:

(12)   p(s \mid \sigma, J) = \frac{e^{\, s\, J \left(\sum_{i=1}^{n} \sigma_i - b\right)}}{2\cosh\!\left(J \left(\sum_{i=1}^{n} \sigma_i - b\right)\right)}

where b is a fixed parameter which works as a threshold. Given T observations of the inputs and of the output, the normalised log-likelihood of the model is given by

(13)   L(J) = \overline{s\, J\, (x - b)} \;-\; \overline{\log 2\cosh\!\left(J (x - b)\right)}

where x = Σ_{i=1}^{n} σ_i. As the form of the likelihood suggests, this model can be mapped into a model with only one categorical input, x, connected to the binary output through the parameter J. Consequently, the Fisher Information takes the form F(J) = Σ_x ρ(x) (x − b)² / cosh²(J(x − b)), where ρ(x) represents the distribution of the sum of the n random variables σ_i, and the complexity is simply the integral of its square root. As we did in section 2.2 for non-degenerate models, we resort to the triangle inequality to study the bounds of the complexity by varying the distribution ρ and the dimension n. This procedure leads to the simple inequality:

(14)   \int dJ \sqrt{F(J)} \;\le\; \pi \sum_{x \neq b} \sqrt{\rho(x)}

It can be easily shown that the upper bound on the right hand side of the previous equation is maximised when the distribution ρ is uniform over all n + 1 possible outcomes of the variable x, i.e. ρ(x) = 1/(n + 1). By substituting this form of the distribution into equation 14, we find that the complexity is at most π √(n + 1). Thus the complexity grows at most as the square root of the number of inputs, which is a much slower growth compared with the exponential one of non-degenerate models. Therefore, it is clear that the additional complexity gained by growing the alphabet of a categorical variable is generally very small compared to the additional complexity obtained by adding more independent parameters.

On the other hand, the minimum value of the right hand side of equation 14 is reached when only one particular value, say x*, of the variable x is observed, namely when ρ(x) = δ_{x, x*}, where δ stands for the Kronecker delta. In this case, the resulting complexity is exactly π (i.e. log π on the logarithmic scale), as we saw for a one-parameter model with a binary input in section 2.1.

An interesting case is when all inputs are uniformly distributed, so that in a large sample (T ≫ 2^n) we expect to observe all possible input configurations approximately the same number of times. In this case, the number of occurrences of a particular value x of the sum is proportional to the binomial coefficient C(n, (n + x)/2) (the number of plus ones in a pattern with Σ_i σ_i = x is (n + x)/2; it follows that this combinatorial factor counts all possible patterns with the same number of plus ones, i.e. the same value of x). It follows that the distribution of x is ρ(x) = C(n, (n + x)/2) / 2^n. By plugging this distribution into equation 14, we obtain the upper bound π Σ_x √(C(n, (n + x)/2) / 2^n). The right hand side of the inequality is very well approximated by a simple two-parameter function of n, which gives a better intuition of the behaviour of the complexity in this particular case: it increases with n, but much more slowly than the uniform-ρ upper bound; it is always above the lower bound identified by π; and, for small n, it takes essentially the same value as the upper bound. Notice that neither the upper bound nor this particular solution is tight, because these results are obtained by employing the triangle inequality.
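The one-dimensional integral is easy to evaluate directly. The sketch below (grid settings are arbitrary; the threshold parameterisation follows equation 12) computes the complexity on the logarithmic scale, log ∫ dJ √F(J), for an arbitrary distribution ρ of the summed input, including the two cases just discussed.

```python
import numpy as np
from math import comb

def degenerate_complexity(rho, b=0.0, half_width=60.0, num=200_001):
    """log of int dJ sqrt(F(J)) with F(J) = sum_x rho(x) (x - b)^2 / cosh(J (x - b))^2.

    rho : dict mapping each value x of the summed input to its probability."""
    J = np.linspace(-half_width, half_width, num)
    F = np.zeros_like(J)
    for x, p in rho.items():
        arg = np.clip(J * (x - b), -350.0, 350.0)    # avoid overflow in cosh
        F += p * (x - b) ** 2 / np.cosh(arg) ** 2
    return float(np.log(np.sqrt(F).sum() * (J[1] - J[0])))

n = 10
xs = range(-n, n + 1, 2)                             # possible values of x = sum_i sigma_i
uniform_rho = {x: 1.0 / (n + 1) for x in xs}
binom_rho = {x: comb(n, (n + x) // 2) / 2 ** n for x in xs}
print(degenerate_complexity(uniform_rho), degenerate_complexity(binom_rho))
```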

We present these analytical results in Figure 4 (dotted lines) as the size of the input layer n varies. In the same figure, we compare these trends with the values obtained by evaluating the one-dimensional integral numerically (solid lines). In evaluating the integral we fixed the threshold b = 0 and studied the result at varying n. As is clear from the figure, the complexity oscillates while very slowly increasing with n, both in the case of uniformly distributed random inputs (red line) and in the case of a uniformly distributed sum of inputs (blue line). The oscillation is due to the fact that, when n is even, the categorical variable x can take exactly the value of the threshold b, which has been set to zero. The complexity increases with the dimension more rapidly for models with a small number of predictors, whereas its value becomes almost independent of n for large models. Moreover, it is interesting to notice that the complexity of a one-parameter degenerate model, as shown in Figure 4, is always smaller than the maximum value of the complexity of a two-parameter non-degenerate model, which is π²/2 (log(π²/2) on the logarithmic scale), as we have seen in section 2.1.

In conclusion, given that a one-parameter degenerate model with n inputs can be mapped into a model with a single categorical input with an alphabet of n + 1 symbols, the additional complexity conveyed by categorical inputs with respect to binary ones increases very slowly with the cardinality of the alphabet, showing a larger effect for small values of n, and it is found to be lower than the increase in complexity which would follow from adding another non-degenerate parameter to the model, even for large values of n.

Figure 4: The complexity of 1-parameter degenerate models versus the degree of degeneracy n of the parameter. The complexity has been evaluated numerically from the Fisher Information of the model in equation 12 for two cases: when all possible values of the sum of the inputs are observed the same number of times (solid blue line) and when all possible input configurations are instead observed an equal number of times (solid red line). We also report the upper-bound estimates for the same two cases (dotted lines, same colour coding) and the value of the complexity when the distribution is extremely localised (black dotted line), i.e. when the input layer is frozen in a single configuration.

4.2 The 13 keys to the White House

An interesting application is the problem of predicting the outcome of U.S. presidential elections from the values of a set of predictors. A successful method for forecasting the verdict of US presidential elections is "The 13 keys to the White House" (Lichtman and Keilis-Borok, 1981). The method was developed in the eighties and, since then, has correctly predicted the outcomes of presidential elections (Lichtman, 2016). It is based on 13 binary questions (keys), and each time the answer to a key is false, the key is turned against the party in power. If six or more keys are false, the challenging party is predicted to win the election; otherwise the party holding the White House is predicted to be more likely to win. Since the prediction depends only on the sum of the keys, i.e. on how many of them are false, the method can be interpreted as the deterministic counterpart of a 1-parameter degenerate model (see Supplemental Information, section 3).

The question we would like to ask from a Bayesian Model Selection point of view is: is there any "simpler" explanation of the outcomes of presidential elections, one which would perhaps involve fewer predictors? We compare all possible models corresponding to all possible ways of selecting a subset of predictors from the original set of 13 keys and using that subset to forecast the presidential elections (see Supplemental Information). This amounts to comparing 2^13 = 8192 models. Since all models, except the model employing no predictors at all, can be cast into degenerate models with only one parameter, they all have the same BIC penalization factor. It follows that the largest penalization factor which differs among models is the complexity.
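A schematic version of this exhaustive comparison is sketched below. The arrays `keys` (shape: elections × 13, entries ±1) and `outcome` (±1 election results) are placeholders for the actual dataset, the maximum likelihood is approximated by a grid search over the single parameter, and the evidence of each subset is approximated as in equation 5, with the complexity given by the one-dimensional integral of section 4.1.

```python
import numpy as np
from itertools import combinations

def degenerate_log_evidence(x, s, b=0.0, half_width=40.0, num=4001):
    """Approximate log evidence of a 1-parameter degenerate model (cf. equation 5):
    max log-likelihood - (1/2) log(T / 2 pi) - complexity.
    x : summed keys, one integer per election; s : +-1 outcomes."""
    T = len(s)
    grid = np.linspace(-half_width, half_width, num)
    field = np.outer(grid, x - b)                                   # (num, T)
    loglik = (s * field - np.logaddexp(field, -field)).sum(axis=1)  # log-likelihood on the grid
    values, counts = np.unique(x, return_counts=True)
    rho = counts / T
    F = sum(p * (v - b) ** 2 / np.cosh(np.clip(grid * (v - b), -350.0, 350.0)) ** 2
            for v, p in zip(values, rho))
    complexity = np.log(np.sqrt(F).sum() * (grid[1] - grid[0]))
    return float(loglik.max() - 0.5 * np.log(T / (2.0 * np.pi)) - complexity)

def rank_key_subsets(keys, outcome, b=0.0):
    """Score every non-empty subset of the 13 keys and sort by approximate evidence."""
    n_keys = keys.shape[1]
    scored = []
    for k in range(1, n_keys + 1):
        for subset in combinations(range(n_keys), k):
            x = keys[:, list(subset)].sum(axis=1)
            scored.append((subset, degenerate_log_evidence(x, outcome, b)))
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

The empty model, which has no parameters and no penalty terms, can be added to the ranking with evidence T log(1/2).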

Figure 5: a) The values of the posterior for all the models, ranked according to their maximum likelihood; b) Distribution of Δ, the order of magnitude of the local shuffling induced by the complexity on the likelihood-based ranking of models.

In Figure 5 a) we plot the posterior probability of all models, ranked according to their maximum likelihood values. The figure clearly shows that the effect of taking the complexity into account in model selection is local, i.e. it concerns models which are very close in likelihood. In fact, the posterior appears to be monotonically increasing on the larger scale, while it fluctuates on a finer scale (insets). Therefore, the complexity induces a local reordering of the models with respect to the likelihood ranking. We measure the range of this effect by introducing the quantity Δ. Specifically, given all models ranked in non-decreasing order according to their likelihood, Δ is the range of models surrounding a given one within which the fluctuations of the complexity are approximately equal to the variation of the likelihood. In some sense, Δ is a measure of the uncertainty induced by the complexity in the likelihood ranking. It varies across models, as shown in Figure 5 b).

Figure 6: a) The 42 models with the largest posterior. The models are ranked by increasing values of the posterior. Each model is represented by a column vector with binary values: yellow if the key is active in the selected model and black otherwise. The models of interest are highlighted: model 4 (green), which corresponds to the model with all 13 keys, and model 42 (red), which is the one that maximizes the posterior; b) The posterior values for the 42 most likely models.

Moreover, Figure 5 a) reveals that, interestingly, some models in our pool are assigned a posterior significantly larger than the others. There are 42 such models, and they are reported in Figure 6 a). These models fit the dataset exactly, i.e. they reach the maximum attainable likelihood. It follows that, given that the BIC term is the same for all of them, the complexity is the only term varying across these models. In Figure 6 b) the models are ranked according to their posterior (or, equivalently, from the most to the least complex model, given that the likelihood and the BIC term are constant). As mentioned in the previous section, the complexity does not depend merely on the number of predictors but also, slightly, on the distribution of the corresponding categorical variable attached to them. In fact, the model with all 13 keys active turns out not to be the most complex model, although it is still among the most complex ones. On the other hand, the model with the largest posterior overall contains only 8 out of the 13 keys (Keys 1, 2, 3, 4, 6, 7, 8, 13), where the first seven are among the most frequent keys over all 42 models. Interestingly, by looking at the set of predictors not included in the best model (Keys 5, 9, 10, 11, 12), we see that, according to this model: it does not matter for the forecast of the next presidential election whether the economy is in recession during the electoral campaign (key 5), but what does matter is the economic growth during the whole previous term (key 6); military failures or successes are irrelevant as predictors (keys 10 and 11); finally, it does not matter whether the incumbent administration is untainted by a major scandal (key 9) or whether the incumbent-party candidate is charismatic or a national hero (key 12), while it is relevant whether the challenging-party candidate is charismatic (key 13). We refer to Lichtman (2016) for a detailed explanation of the meaning of the keys.

In conclusion, by comparing all possible degenerate models with different predictors through a Bayesian model selection approach, we found that the original method of the 13 keys to the White House is not assigned a posterior probability larger than that of many other candidates. Instead, a model with only 8 predictors is the one that achieves the largest posterior. However, it is worth noting that the differences in the values of the posterior among these 42 most likely models are very small, and the result of the model comparison would ultimately depend strongly on the choice of the prior over models. Our choice of prior favours the least complex model among the best candidates, i.e. the one which accounts for the minimum number of distinguishable probability distributions.

5 Conclusions

In this paper we address the question of estimating the complexity of a logistic regression model with binary inputs. As shown in Balasubramanian (1997), the complexity of a model can be related to the number of distinguishable distributions that the model can represent. In this perspective, it is defined as the (logarithm of the) integral of the square root of the determinant of the Fisher Information matrix. We evaluate the integral analytically in special cases and corroborate the analysis with Monte Carlo simulations. We find that the integral depends on the degree of correlation of the inputs: correlations among inputs induce effective dependencies among the parameters, thus reducing the volume that the model occupies in the space of distributions. It follows that when the inputs are maximally uncorrelated, the model achieves its maximal value of the complexity, whereas when the inputs are highly correlated, it becomes highly constrained and is characterized by a low value of the complexity. The upper bound of the complexity has been estimated with Monte Carlo methods and is well approximated by a simple function of the number of parameters n. The lower bound instead is reached when the number of unique patterns observed in the input layer equals the number of parameters. In the latter case, we find an exact expression for the complexity, c = n log π + (1/2) Σ_μ log ν^μ, which depends on the distribution of these patterns and ranges from approximately n log π − (n/2) log T up to n log π − (n/2) log n. Monte Carlo estimates of the integral at varying degrees of correlation (localisation of the input distribution) further confirm these results. Finally, when the number of unique patterns observed in the input layer is less than the number of parameters, the design matrix is not full rank and the volume vanishes: the model is constrained to a lower dimensional manifold in the space of distributions, induced by the deterministic relations among the parameters.

Moreover, we investigate the complexity of models whose parameters are defined on a finite support (regularized models). In the case of maximally uncorrelated inputs, we find that the complexity starts decreasing with the dimensionality of the model after a certain critical dimension, as opposed to the steady exponential increase found for models with parameters defined on the entire real axis. This is due to the gradual appearance of dependencies among parameters induced by emergent statistical constraints. Thus, beyond this critical dimension, increasing the number of parameters in regularized models will still increase the degrees of freedom of the model, but it will also make the model more constrained (less complex), at odds with the corresponding unregularized case.

We apply these results on model complexity to devise a novel model selection criterion that takes into account the correlations among inputs when deciding how strongly a model should be penalised in a Bayesian model selection framework. To this end, we introduce a heuristic for the penalization cost based on the entropy of the input distribution. Consistently, our proposal assigns a strong, BIC-like penalization to models with a large value of the complexity and a weaker, more AIC-like penalization to extremely constrained models. We then test this ansatz by simulating data from logistic models with various levels of sparsity, sample size and input correlations, and asking different model selection criteria to retrieve the correct model. Numerical tests clearly show that our proposal always achieves (or gets very close to) the smallest reconstruction error for all levels of sparsity, data size and input correlations, whereas the performance of the other tested methods depends strongly on the sparsity of the ground truth. As a consequence, our method qualifies as the best choice in our tests and would be especially useful when no a priori information about the sparsity of the ground truth is available. This might be the case, for example, for a population of neurons simultaneously recorded from some region of the brain: the sparsity of the network of dependencies among neurons varies strongly depending on whether the recorded cells belong to a single functionally connected module or to many independent ones, as well as on how the behavioural task during the recording shapes the activity of the cells (Dunn et al., 2015). This information is often not known to the scientist a priori.
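As a small illustration of the ingredient our criterion relies on, the snippet below computes the empirical entropy of the observed binary input patterns together with the standard AIC-like and BIC-like penalties that the entropy-based penalization is meant to interpolate between. The exact form of the criterion is given by equation 11 and is not reproduced here; the helper names below are illustrative only.

```python
import numpy as np

def pattern_entropy(X):
    """Entropy (in nats) of the empirical distribution of input patterns.

    X has shape (T, d) with entries in {-1, +1}; each distinct row is one pattern.
    Low entropy (few distinct patterns) signals strongly correlated inputs.
    """
    _, counts = np.unique(X, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def aic_penalty(d):
    return float(d)                  # AIC-like: one unit of log-likelihood per parameter

def bic_penalty(d, T):
    return 0.5 * d * np.log(T)       # BIC-like: (d/2) log T

rng = np.random.default_rng(1)
T, d = 500, 5
X_uncorr = rng.choice([-1, 1], size=(T, d))                     # close to maximal entropy
X_corr = np.tile(rng.choice([-1, 1], size=(T, 1)), (1, d))      # only two distinct patterns

for name, X in [("uncorrelated", X_uncorr), ("correlated", X_corr)]:
    print(f"{name}: H = {pattern_entropy(X):.2f} nats, "
          f"AIC = {aic_penalty(d):.1f}, BIC = {bic_penalty(d, T):.1f}")
```

In the criterion, high-entropy (close to uniform) input patterns push the penalization towards the BIC-like value, while strongly constrained, low-entropy inputs push it towards the AIC-like value.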

Finally, we address the question of how the alphabet size of the inputs affects the value of the model complexity. The problem is equivalent to studying the complexity of a model with binary inputs but with degenerate parameters, namely when subsets of parameters are forced to be equal. In this respect, we study analytically and numerically the simple case of a model with only one parameter, varying its degree of degeneracy or, equivalently, the alphabet size of the corresponding effective input. We find that the additional complexity conveyed by categorical inputs with respect to binary ones increases very slowly with the alphabet size of the categorical variable, showing a slightly steeper rise for small sizes, and is always lower than the increase in complexity which would follow from adding another non-degenerate parameter to the model.
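To make the equivalence concrete (the symbols below are illustrative, assuming inputs taking values $\pm 1$), consider forcing $m$ of the binary-input weights to share a common value $\beta$. The corresponding term in the field then depends on the inputs only through their sum:
$$
\sum_{i=1}^{m}\beta\,x_i \;=\; \beta\,y, \qquad y \equiv \sum_{i=1}^{m} x_i \in \{-m,\,-m+2,\,\dots,\,m\},
$$
so the degenerate model is a one-parameter model acting on an effective categorical input $y$ which takes $m+1$ values, i.e. a categorical variable with alphabet size $m+1$.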

Studying the complexity of degenerate models is also interesting because many commonly used model selection recipes penalise models according to a cost function which often depends only on the number of parameters and the sample size. A Bayesian model selection approach would instead employ a penalization which may differ even between models with the same number of parameters. As an application, we study the dataset of the 13 keys to the White House (Lichtman and Keilis-Borok, 1981; Lichtman, 2016). Indeed, the question of deciding whether a model with fewer predictors would explain the variability in the dataset better than the full model with all 13 predictors can be cast as a model selection problem among 1-parameter degenerate models. As one would expect, the contribution of the complexity is very small compared with that coming from the likelihood; yet, we show that it becomes relevant for deciding among the subset of models with the largest value of the likelihood.

It would be interesting to extend this study of the complexity to the more general class of models known as Generalized Linear Models (Nelder and Wedderburn, 1972), which are widely employed in applications, for instance in neuroscience for studying the neural code of cells or populations of cells in the brain (Roudi et al., 2015).

Another interesting application is related to the fact that one of the most powerful approaches to inference and structure learning in the Ising model is based on the so-called pseudo-likelihood approximation (Ravikumar et al., 2010; Aurell and Ekeberg, 2012; Decelle and Ricci-Tersenghi, 2014). This approximation essentially consists of substituting the exact likelihood function, whose optimization is hard because of the partition function, with a sum of conditionally independent logistic regression functions with binary inputs and outputs. Within this approximation, based on the results of section 3, we would expect our proposed criterion (equation 11) to deliver close-to-optimal performance for nodes of any degree (sparsity) and, therefore, on a wide range of network topologies, including topologies where highly connected nodes coexist with almost isolated ones, as for instance in scale-free and small-world networks.
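For concreteness, a minimal sketch of the pseudo-likelihood idea is given below, using scikit-learn's l1-regularized logistic regression in the spirit of the neighbourhood-selection approach of Ravikumar et al. (2010). The fixed l1 penalty stands in for whatever model selection criterion one prefers (equation 11 would replace this step); the data and regularization strength are toy choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudolikelihood_neighbourhoods(S, C=0.1):
    """Node-wise l1-regularized logistic regressions on +/-1 spin data S of shape (T, n).

    For each node i, regress s_i on all other spins; non-zero coefficients
    indicate estimated couplings (the neighbourhood of node i).
    """
    T, n = S.shape
    couplings = np.zeros((n, n))
    for i in range(n):
        y = (S[:, i] > 0).astype(int)                  # sklearn expects {0, 1} labels
        X = np.delete(S, i, axis=1)                    # all spins except s_i
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X, y)
        couplings[i, np.arange(n) != i] = clf.coef_[0]
    return couplings

# Toy data: independent spins plus one strongly coupled pair (0, 1)
rng = np.random.default_rng(2)
S = rng.choice([-1, 1], size=(2000, 6))
S[:, 1] = np.where(rng.random(2000) < 0.9, S[:, 0], -S[:, 0])  # spin 1 mostly copies spin 0

J_hat = pseudolikelihood_neighbourhoods(S)
print(np.round(J_hat, 2))   # large entries expected at (0, 1) and (1, 0)
```

Symmetrizing the estimate (e.g. averaging `J_hat` with its transpose) is a common post-processing step before thresholding the couplings.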

Finally, it must be noted that equation 11 is a heuristic inspired by the truncated posterior expansion of equation 5, so it inherits the limitations buried in the truncation. In particular, terms of order 1/T, which are neglected in equation 5, might be relevant in some regimes (Balasubramanian, 1997). As also shown by our numerical tests, when the sample size approaches the dimensionality of the model, or for large input correlations, model selection is a hard task and performance is generally poor with any approach. It would be interesting to study higher-order terms in the expansion and see whether the information contained in them could be used to improve model selection in this regime.

6 Acknowledgments

The authors are grateful to Prof. Benjamin Dunn for reading through the paper.

References

  • Akaike (1974) Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723.
  • Aurell and Ekeberg (2012) Aurell, E. and Ekeberg, M. (2012). Inverse Ising inference using all the data. Phys. Rev. Lett., 108:090201.
  • Balasubramanian (1997) Balasubramanian, V. (1997). Statistical inference, Occam's razor, and statistical mechanics on the space of probability distributions. Neural Computation, 9(2):349–368.
  • Battistin et al. (2015) Battistin, C., Hertz, J., Tyrcha, J., and Roudi, Y. (2015). Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins. Journal of Statistical Mechanics: Theory and Experiment, 2015(5):P05021.
  • Besag (1972) Besag, J. E. (1972). Nearest-neighbour systems and the auto-logistic model for binary data.
  • Billingsley (1995) Billingsley, P. (1995). Probability and Measure. Wiley, 3 edition.
  • Bulso et al. (2016) Bulso, N., Marsili, M., and Roudi, Y. (2016). Sparse model selection in the highly under-sampled regime. Journal of Statistical Mechanics: Theory and Experiment, 2016(9):093404.
  • Chen et al. (2008) Chen, M.-H., Ibrahim, J. G., and Kim, S. (2008). Properties and implementation of Jeffreys's prior in binomial regression models. Journal of the American Statistical Association, 103(484):1659–1664.
  • Cox (1958) Cox, D. R. (1958). The regression analysis of binary sequences. Journal of the Royal Statistical Society. Series B (Methodological), pages 215–242.
  • Decelle and Ricci-Tersenghi (2014) Decelle, A. and Ricci-Tersenghi, F. (2014). Pseudolikelihood decimation algorithm improving the inference of the interaction network in a general class of Ising models. Phys. Rev. Lett., 112:070603.
  • Dunn et al. (2015) Dunn, B., Mørreaunet, M., and Roudi, Y. (2015). Correlations and functional connections in a population of grid cells. PLoS computational biology, 11(2):e1004052.
  • Friedman et al. (2001) Friedman, J., Hastie, T., and Tibshirani, R. (2001). The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, New York, NY.
  • George E. P. Box (1973) Box, G. E. P. and Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis (Wiley Classics Library). Wiley-Interscience.
  • Hassibi and Stork (1993) Hassibi, B. and Stork, D. G. (1993). Second order derivatives for network pruning: Optimal brain surgeon. In Advances in neural information processing systems, pages 164–171.
  • Hertz et al. (2013) Hertz, J., Roudi, Y., Tyrcha, J., Quiroga, R. Q., and Panzeri, S. (2013). Principles of neural coding.
  • Ibrahim and Laud (1991) Ibrahim, J. G. and Laud, P. W. (1991). On Bayesian analysis of generalized linear models using Jeffreys's prior. Journal of the American Statistical Association, 86(416):981–986.
  • Jeffreys (1946) Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 186(1007):453–461.
  • Kass and Wasserman (1996) Kass, R. E. and Wasserman, L. (1996). The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435):1343–1370.
  • Koh et al. (2007) Koh, K., Kim, S.-J., and Boyd, S. (2007). An interior-point method for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 8:1519–1555.
  • LaMont and Wiggins (2017) LaMont, C. H. and Wiggins, P. A. (2017). A correspondence between thermodynamics and inference. arXiv preprint arXiv:1706.01428.
  • LeCun et al. (1990) LeCun, Y., Denker, J. S., and Solla, S. A. (1990). Optimal brain damage. In Advances in neural information processing systems, pages 598–605.
  • Lichtman (2016) Lichtman, A. J. (2016). The keys to the White House: The current forecast for 2016. Social Education, 80(1):26–30.
  • Lichtman and Keilis-Borok (1981) Lichtman, A. J. and Keilis-Borok, V. I. (1981). Pattern recognition applied to presidential elections in the United States, 1860–1980: Role of integral social, economic, and political traits. Proceedings of the National Academy of Sciences, 78(11):7230–7234.
  • Marre et al. (2009) Marre, O., El Boustani, S., Frégnac, Y., and Destexhe, A. (2009). Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Physical review letters, 102(13):138101.
  • Mattingly et al. (2018) Mattingly, H. H., Transtrum, M. K., Abbott, M. C., and Machta, B. B. (2018). Maximizing the information learned from finite data selects a simple model. Proceedings of the National Academy of Sciences, 115(8):1760–1765.
  • Myung et al. (2000) Myung, I. J., Balasubramanian, V., and Pitt, M. A. (2000). Counting probability distributions: Differential geometry and model selection. Proceedings of the National Academy of Sciences, 97(21):11170–11175.
  • Nelder and Wedderburn (1972) Nelder, J. A. and Wedderburn, R. W. M. (1972). Generalized linear models. Journal of the Royal Statistical Society. Series A (General), 135(3):370–384.
  • Ravikumar et al. (2010) Ravikumar, P., Wainwright, M. J., and Lafferty, J. D. (2010). High-dimensional Ising model selection using l1-regularized logistic regression. Ann. Statist., 38(3):1287–1319.
  • Rissanen (1987) Rissanen, J. (1987). Stochastic complexity. Journal of the Royal Statistical Society. Series B (Methodological), 49(3):223–239.
  • Rissanen (1996) Rissanen, J. J. (1996). Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40–47.
  • Roudi et al. (2015) Roudi, Y., Dunn, B., and Hertz, J. (2015). Multi-neuronal activity and functional connectivity in cell assemblies. Current opinion in neurobiology, 32:38–44.
  • Roudi and Hertz (2011) Roudi, Y. and Hertz, J. (2011). Mean field theory for nonequilibrium network reconstruction. Phys. Rev. Lett., 106:048702.
  • Saeys et al. (2007) Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19):2507–2517.
  • Schneidman et al. (2006) Schneidman, E., Berry, M. J., Segev, R., and Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012.
  • Schwarz (1978) Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist., 6(2):461–464.
  • Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288.
  • Truett et al. (1967) Truett, J., Cornfield, J., and Kannel, W. (1967). A multivariate analysis of the risk of coronary heart disease in Framingham. Journal of Clinical Epidemiology, 20(7):511–524.

Appendix A Upper bound for models including the bias term

Here we prove that, due to the parity of the hyperbolic cosine, the complexity of a logistic regression model with binary inputs, where of them are uniformly distributed, is equal to the complexity of the same model when all inputs are uniformly distributed, which achieves the upper bound found in section 2.2.

To prove the statement, let’s consider the Fisher Information elements for a model with inputs connected through a dimensional vector of parameters

(15)

Let’s perform the sum over an arbitrary variable ,

(16)

where , and are the dimensional vectors obtained by removing the -th index and and . By changing with in the second summation we obtain

(17)

Notice that and is a sum over the same elements but in a reversed order. It follows that we can write it as

(18)

We have considered here , but the latter equation is true for any element of the Fisher Information matrix. In fact, it is not difficult to see that equation 18 holds in the case when . For the cases and , there will be a minus sign in front of the second summation in equation 16, which becomes a plus sign again in equation 17 after transforming in . Now, if the distribution of the inputs satisfies the relation

(19)

then the elements of the Fisher Information matrix can be written as if the k-th input were absent and the corresponding weight were a bias term acting on the output, namely

(20)

In particular, equation 19 is true if the distribution factorizes over the spin variable , i.e. , and if all possible patterns and anti-patterns of the remaining spins are observed with the same frequency, i.e. . One instance in which this is true is when out of the inputs are uniformly distributed, i.e. , regardless of . It follows that the upper bound for the complexity is reached already when the input distribution is flat over the configuration space of only inputs, as observed in the simulations and discussed in section 2.2.
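In compact form, and assuming the spin parametrization $p(s\mid\mathbf{x})=e^{s\,\boldsymbol\beta\cdot\mathbf{x}}/(2\cosh(\boldsymbol\beta\cdot\mathbf{x}))$ so that, up to constant factors, $J_{ij}\propto\sum_{\mathbf{x}}q(\mathbf{x})\,x_i x_j/\cosh^{2}(\boldsymbol\beta\cdot\mathbf{x})$ with $q$ the input distribution (an illustrative convention, not necessarily the exact normalization used above), the parity argument reads, for $i,j\neq k$,
$$
J_{ij}\;\propto\;\sum_{\tilde{\mathbf{x}}}\Big[q(\tilde{\mathbf{x}},+1)+q(-\tilde{\mathbf{x}},-1)\Big]\,
\frac{\tilde{x}_i\,\tilde{x}_j}{\cosh^{2}\!\big(\tilde{\boldsymbol\beta}\cdot\tilde{\mathbf{x}}+\beta_k\big)},
$$
obtained by splitting the sum over $x_k=\pm1$ and sending $\tilde{\mathbf{x}}\to-\tilde{\mathbf{x}}$ in the $x_k=-1$ term, which leaves $\tilde{x}_i\tilde{x}_j$ unchanged and, by $\cosh(-z)=\cosh(z)$, maps $\cosh(\tilde{\boldsymbol\beta}\cdot\tilde{\mathbf{x}}-\beta_k)$ back to $\cosh(\tilde{\boldsymbol\beta}\cdot\tilde{\mathbf{x}}+\beta_k)$. Whenever the bracket can be read as an effective input distribution over $\tilde{\mathbf{x}}$ alone (the content of equation 19), the right-hand side has the form of the Fisher Information of a model without the $k$-th input, with $\beta_k$ playing exactly the role of a bias.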

As a consequence, since the bias term can be thought of as a weight parameter connecting a constant input (always equal to or ), the complexity of a logistic model with inputs (weights) and a bias term reaches the same upper bound as the model with inputs (weights) and no bias. The upper bound is attained when the inputs are uniformly distributed.

Appendix B Proof of equation 10

We consider logistic models with parameters defined on a finite support , with being any finite positive number, in the case of uniformly distributed inputs, , . Recall that the Fisher Information matrix elements are given by .

We examine first the off-diagonal terms. We carry out the summations over the variables and and define the variables and by removing the indices and from the variables and respectively,

(21)

where we have first exploited the parity of the hyperbolic cosine and then rearranged the terms in the summations. We think of the average over observations as the average over a random process with the same limiting distribution, i.e. . This allows us to exploit the central limit theorem and perform the calculations. It follows that, since the distribution over is uniform, the quantity is the sum of independent random variables, , distributed as . Therefore each random variable has zero mean and variance . Asymptotically, i.e. as goes to infinity, becomes a Gaussian random variable with zero mean and variance . This is ensured by Lyapunov's condition, which is always satisfied when the parametric family is bounded (see for instance Billingsley, 1995). Thus, in the last equation, we substitute the expectation over the configuration states with an expectation over ,

(22)

where is the Gaussian density and where we used the change of variables to pass from the first to the second line. The integrand in the last equation is the difference between two functions centered at and weighted according to a Gaussian density centered at . However, the standard deviation of the Gaussian density grows with , and eventually, for , the two contributions are weighted equally and cancel each other. More specifically, for , namely when , the density can be approximated as . The term in gives a zero contribution to the integral, so the first non-zero contribution comes from the term. Thus the off-diagonal elements go to zero faster than , which ensures that these terms do not bring a relevant contribution to the determinant as goes to infinity.
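Schematically (with illustrative symbols; the precise quantities are defined in the displayed equations above), the limit used here is the following: writing the field restricted to the remaining inputs as $z=\sum_i\beta_i x_i$, with the $x_i$ independent and uniform on $\{-1,+1\}$, each term has zero mean and variance $\beta_i^{2}$, so that
$$
z \;\xrightarrow{\;d\;}\; \mathcal{N}\!\big(0,\sigma^{2}\big), \qquad \sigma^{2}=\sum_i\beta_i^{2},
$$
and averages of bounded functions of $z$ over the input configurations can be replaced by Gaussian integrals, $\langle f(z)\rangle\to\int f(z)\,\phi_{\sigma}(z)\,dz$, with $\phi_{\sigma}$ the zero-mean Gaussian density of variance $\sigma^{2}$. Since $\sigma$ grows with the number of inputs while the two terms in the integrand stay centered at locations that remain bounded (the parameters lie in $[-L,L]$), their Gaussian weights become asymptotically equal, which is the cancellation invoked above.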

Following the same argument, it is easy to obtain a simple expression also for the diagonal terms , . The latter will become where