1 Introduction
Let $\mu$ be a probability distribution on the subsets of the set $[n] = \{1, \dots, n\}$. We assign a multiaffine polynomial with variables $z_1, \dots, z_n$ to $\mu$, namely $g_\mu(z) = \sum_{S \subseteq [n]} \mu(S) \prod_{i \in S} z_i$. The polynomial $g_\mu$ is also known as the generating polynomial of $\mu$. A polynomial $g$ is $d$-homogeneous if every monomial of $g$ has degree $d$. We say $\mu$ is $d$-homogeneous if the polynomial $g_\mu$ is $d$-homogeneous, meaning that $\mu(S) = 0$ for any $S$ with $|S| \neq d$.
A polynomial $g$ with nonnegative coefficients is log-concave on a subset $K \subseteq \mathbb{R}^n$ if $\log g$ is a concave function at any point in $K$, or equivalently, its Hessian $\nabla^2 \log g$ is negative semidefinite on $K$. We say a polynomial $g$ is strongly log-concave on $K$ if for any $k \geq 0$, and any sequence of integers $1 \leq i_1, \dots, i_k \leq n$, $\partial_{i_1} \cdots \partial_{i_k} g$ is log-concave on $K$. In this paper, for convenience and clarity, we only work with (strong) log-concavity with respect to the all-ones vector, $\mathbf{1}$. So, unless otherwise specified, $K = \{\mathbf{1}\}$ in the above definition. We say the distribution $\mu$ is strongly log-concave at $\mathbf{1}$ if $g_\mu$ is strongly log-concave at $\mathbf{1}$. The notion of strong log-concavity was first introduced by Gurvits [Gur09, Gur10] to study approximation algorithms for mixed volume and multivariate generalizations of Newton's inequalities.

In this paper, we show that the "natural" Markov chain Monte Carlo (MCMC) method on the support of a
homogeneous strongly log-concave distribution $\mu$ mixes rapidly. This chain can be used to generate random samples from a distribution arbitrarily close to $\mu$. The chain $\mathcal{M}_\mu$ is defined as follows. We take the state space of $\mathcal{M}_\mu$ to be the support of $\mu$. For a state $S$, first we drop an element $i$, chosen uniformly at random from $S$. Then, among all sets $T$ in the support of $\mu$ that contain $S \setminus \{i\}$, we choose one with probability proportional to $\mu(T)$.
It is easy to see that $\mathcal{M}_\mu$ is reversible with stationary distribution $\mu$. Furthermore, assuming $\mu$ is strongly log-concave, we will see that $\mathcal{M}_\mu$ is irreducible. We prove that this chain mixes rapidly. More formally, for a state $S$ of the Markov chain, and $\varepsilon > 0$, the total variation mixing time of the chain started at $S$ with transition probability matrix $P$ and stationary distribution $\mu$ is defined as follows:

$t_S(\varepsilon) = \min\{t \geq 0 : \|P^t(S, \cdot) - \mu\|_{TV} \leq \varepsilon\},$

where $P^t(S, \cdot)$ is the distribution of the chain started at $S$ at time $t$.
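To make the transition rule concrete, here is a minimal Python sketch of one step of the chain, assuming $\mu$ is given explicitly as a dictionary from size-$k$ sets to positive weights. The names `chain_step` and `mu` are ours, not from the paper:

```python
import random

def chain_step(S, mu, rng=random):
    """One step of the basis-exchange chain described above.

    S:  frozenset, the current state (must lie in the support of mu).
    mu: dict mapping frozensets of a common size k to positive weights.
    """
    # Drop an element chosen uniformly at random from S.
    i = rng.choice(sorted(S))
    base = S - {i}
    # Among all sets in the support containing S \ {i},
    # choose one with probability proportional to mu.
    candidates = [T for T in mu if base <= T]
    weights = [mu[T] for T in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Note that the current state is always a candidate, so the chain is lazy in the sense that it may stay put.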
The following theorem is the main result of this paper.
Theorem 1.1.
Let be a homogeneous strongly logconcave probability distribution. If denotes the transition probability matrix of and denotes the collection of size subsets of which are contained in some element of , then for every , has at most eigenvalues of value . In particular, has spectral gap at least , and if is in the support of and , the total variation mixing time of the Markov chain started at is at most
To state the key corollaries of this theorem, we will need the following definition.
Definition 1.2.
Given a domain set , which is compactly represented by, say, a membership oracle, and a nonnegative weight function , a fully polynomial-time randomized approximation scheme (FPRAS) for computing the partition function is a randomized algorithm that, given an error parameter and error probability , returns a number such that . The algorithm is required to run in time polynomial in the problem input size, , and .
Equipped with this definition, we can now concisely state the main applications of Theorem 1.1. Theorem 1.1 gives us an algorithm to efficiently sample from a distribution which approximates closely in total variation distance. By the equivalence between approximate counting and approximate sampling for self-reducible problems [JVV86], this gives an FPRAS for each of the following:

counting the bases of a matroid, and

estimating the partition function of the random cluster model for a new range of parameter values.
For real linear matroids, we also give an algorithm for estimating the partition function of a generalized version of a determinantal point process. Note that these problems are all instantiations of the following: estimate the partition function of some efficiently computable nonnegative weights on bases of a matroid. Furthermore, as the restriction and contraction of a matroid by a subset of the ground set are both (smaller) matroids, problems of this form are indeed self-reducible. In the following sections we discuss these applications in greater depth.
1.1 Counting Problems on Matroids
Let $\mathcal{M}$ be an arbitrary matroid on $n$ elements (see Section 2.3) of rank $r$, and let $\mu$ be the uniform distribution on the bases of the matroid $\mathcal{M}$. It follows that $\mu$ is $r$-homogeneous. Using the Hodge-Riemann relations proved by [AHK18], a subset of the authors proved [AOV18] that for any matroid $\mathcal{M}$, the polynomial $g_\mu$ is strongly log-concave.¹ This implies that the chain $\mathcal{M}_\mu$ converges rapidly to the stationary distribution. This gives the first polynomial time algorithm to generate a uniformly random base of a matroid. Note that to run $\mathcal{M}_\mu$ we only need an oracle to test whether a given set is an independent set of $\mathcal{M}$. Therefore, with only polynomially many queries (in $n$) we can generate a random base of $\mathcal{M}$.

¹Indeed, in [AOV18], it is shown that $g_\mu$ satisfies a seemingly stronger property known as "complete log-concavity", namely that every polynomial obtained from $g_\mu$ by a sequence of directional derivatives in nonnegative directions is log-concave (at $\mathbf{1}$). We will prove in a future companion paper that complete log-concavity is equivalent to strong log-concavity.

Corollary 1.3.
For any matroid of rank , any basis of and , the mixing time of the Markov chain starting at is at most
To prove this we simply use the fact that a matroid of rank $r$ on $n$ elements has at most $n^r$ bases. There are several immediate consequences of the above corollary. First, by the equivalence of approximate counting and approximate sampling for self-reducible problems [JVV86], we can count the number of bases of any matroid given by an independent set oracle up to a multiplicative error in polynomial time.
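As an illustration of sampling with only an independence oracle, the following sketch runs the basis-exchange walk for the uniform distribution over bases; `is_forest` plays the role of the oracle for a graphic matroid (spanning trees). All names are ours, and the step count below is illustrative rather than the mixing-time bound of Corollary 1.3:

```python
import random

def is_forest(edges):
    """Independence oracle for a graphic matroid: is the edge set acyclic?"""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding (u, v) would close a cycle
        parent[ru] = rv
    return True

def random_basis(ground, is_independent, steps, start, rng=random):
    """Basis-exchange walk for the uniform distribution over bases,
    using only an independence oracle. `start` must be a basis."""
    B = set(start)
    for _ in range(steps):
        i = rng.choice(sorted(B))
        B.remove(i)
        # Elements whose addition yields a basis again (i itself included).
        options = [e for e in ground if e not in B and is_independent(B | {e})]
        B.add(rng.choice(options))
    return frozenset(B)
```

On the triangle graph, this walk moves among the three spanning trees and converges to the uniform distribution over them.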
Corollary 1.4.
There is a randomized algorithm that for any matroid on elements with rank given by an independent set oracle, and any , counts the number of bases of up to a multiplicative factor of with probability at least in time polynomial in .
As an immediate corollary, for any $k$ we can count the number of independent sets of $\mathcal{M}$ of size $k$. This is because if we truncate $\mathcal{M}$ to independent sets of size at most $k$, it remains a matroid. As a consequence, we can generate uniformly random forests in a given graph, and compute the reliability polynomial for any matroid $\mathcal{M}$ and any $p \in [0,1]$, all in polynomial time. Note that this latter fact follows from the ability to count the number of independent sets of a fixed size, as the complements of rank-$r$ subsets are precisely the independent sets of the dual of $\mathcal{M}$. Prior to our work, we could only compute the reliability polynomial for graphic matroids, due to a recent work of Guo and Jerrum [GJ18a].
One can associate a graph to any matroid , called the bases exchange graph. This graph has a vertex for every basis of and two bases are connected by an edge if . It follows by the bases exchange property of matroids that this graph is connected. For an unweighted graph , the expansion of a set and the graph are defined as
Mihail and Vazirani [MV89] conjectured that the bases exchange graph has expansion at least one for any matroid. It turns out that the bases exchange graph is closely related to the Markov chain $\mathcal{M}_\mu$. The following theorem is an immediate consequence of the above corollary.
Theorem 1.5.
For any matroid $\mathcal{M}$, the expansion of the bases exchange graph is at least one.
1.2 The Random Cluster Model
Another application of this theory is estimating the partition function of the random cluster model. For a matroid of rank and parameters , the partition function of the random cluster model from statistical mechanics due to Fortuin and Kasteleyn [For71, FK72I, For72II, For72III] is the following polynomial function associated to ,
where $r(S)$ is the size of the largest independent set contained in $S$. We note that typically one scales each term by $(1-p)^{|E| - |S|}$, but up to a normalization factor (and change of variables) the two polynomials are equivalent. We refer interested readers to a recent book of Grimmett [Grim09] for further information. Typically, one considers the special case where $\mathcal{M}$ is a graphic matroid, in which case the exponent of $q$ is simply the number of connected components of the subgraph with edge set $S$. To the best of our knowledge, prior to this work, one could only compute when because of the close connection to the Ising model [JS93, GJ17]. Our next result is a polynomial time algorithm that estimates for any and .
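For intuition, the random cluster partition function of a small graph can be computed by brute force. The sketch below uses the common graph normalization in which the exponent of $q$ is the number of connected components; the function names are ours:

```python
from itertools import combinations

def components(n, edges):
    """Number of connected components of the graph on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return sum(1 for x in range(n) if find(x) == x)

def random_cluster_Z(n, edges, q, w):
    """Brute-force sum over all edge subsets S of w**|S| * q**c(S),
    where c(S) is the number of connected components of (V, S)."""
    total = 0.0
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            total += w ** k * q ** components(n, S)
    return total
```

For a single edge this sum is $q^2 + wq$, and setting $q = 1$ recovers $(1 + w)^{|E|}$, matching the claim that $q = 1$ is the trivial case.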
Theorem 1.6.
For a matroid with rank function and parameter , the polynomial
is strongly log-concave.
Together with Theorem 1.1, this gives an FPRAS for estimating given an independence oracle for the matroid . Estimating then follows as
and each term is nonnegative. In fact, the polynomial is closely related to the Tutte polynomial
Indeed, we can write
Hence, an FPRAS for estimating for and gives an FPRAS for estimating in the region described by the inequalities and .
1.3 Determinantal Distributions on Real Linear Matroids
Finally, we show that the class of homogeneous multiaffine strongly log-concave polynomials is closed under raising all coefficients to a fixed exponent at most one.
Theorem 1.7.
Let $g$ be a homogeneous degree $d$ multiaffine strongly log-concave polynomial. Then the polynomial obtained by raising each coefficient of $g$ to the power $\alpha$ is strongly log-concave for every $0 \leq \alpha \leq 1$.
We use the above theorem to design a sampling algorithm for determinantal point processes. A determinantal point process (DPP) on a set of elements is a probability distribution identified by a positive semidefinite matrix where for any we have
where is the principal submatrix of indexed by the elements of
. Determinantal point processes are fundamental to the study of a variety of tasks in machine learning, including text summarization, image search, news threading, and diverse feature selection
[see, e.g., KT12]. A $k$-determinantal point process ($k$-DPP) is a determinantal point process conditioned on the sets having size $k$. Given a positive semidefinite matrix $L$, let $\mu$ be the corresponding $k$-DPP. We have
It turns out that the above polynomial is real stable, and so it is strongly log-concave over the positive orthant [see, e.g., AOR16]. [AOR16] show that a natural Markov chain with the Metropolis rule mixes rapidly and generates a random sample of . The above theorem immediately implies the following log-concavity result.
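A $k$-DPP, and the "smoothed" weights $\det(L_S)^\alpha$ discussed below, can be written out by brute force for tiny instances. This is an illustrative sketch only (not the sampling algorithm of this paper); all names are ours:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def k_dpp(L, k, alpha=1.0):
    """Probabilities Pr[S] proportional to det(L_S)**alpha over size-k sets.

    alpha=1 is the usual k-DPP; 0 < alpha < 1 gives smoothed weights.
    """
    n = len(L)
    weights = {}
    for S in combinations(range(n), k):
        sub = [[L[i][j] for j in S] for i in S]
        weights[S] = max(det(sub), 0.0) ** alpha
    Z = sum(weights.values())
    return {S: v / Z for S, v in weights.items()}
```

With $L$ the identity, every size-$k$ principal minor equals 1, so the $k$-DPP is simply uniform over size-$k$ sets for every $\alpha$.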
Corollary 1.8.
For every positive semidefinite matrix and exponent , the polynomial
is strongly log-concave.
It follows from Theorem 1.1 that for any $0 \leq \alpha \leq 1$ we can generate samples from a "smoothed" DPP distribution, where for any set $S$ of size $k$, $\Pr[S] \propto \det(L_S)^\alpha$, in polynomial time. The weights $\det(L_S)^\alpha$ may be thought of as a way to interpolate between two extremes for selecting diverse data points.
We also note that for $\alpha = 1/2$, it is known that Corollary 1.8 follows from the Brunn-Minkowski theorem applied to appropriately defined zonotopes. For $\alpha = 0$ when the DPP has full support, and for $\alpha = 1$ as mentioned earlier, the above polynomial is actually real stable, and hence strongly log-concave. Theorem 1.7 gives a unified proof that all of these polynomials are strongly log-concave.
1.4 Related Works
There is a long line of work on designing approximation algorithms to count the bases of a matroid. Most of these works focus on expansion properties of the bases exchange graph. [FM92] showed that for a special class of matroids known as balanced matroids [MS91, FM92], the bases exchange graph has expansion at least 1. A matroid is balanced if for any minor of (including itself), the uniform distribution over its bases satisfies the pairwise negative correlation property. Many of the extensive results in this area [Gam99, JS02, JSTV04, Jur06, Clo10, CTY15, AOR16] only study approximation algorithms for this limited class of matroids, and not much is known beyond the class of balanced matroids. Unfortunately, many interesting matroids are not balanced. An important example is the matroid of all acyclic subsets of edges of a graph of size at most (for some ) [FM92].
There have been other approaches to counting bases. [GJ18b] used the popping method to count bases of bicircular matroids. [BS07] designed a randomized algorithm that gives, roughly, a approximation factor to the number of bases of a given matroid with elements and rank . In [AOV18], a subset of the authors gave a deterministic approximation to the number of bases using the fact that is log-concave over .
There is an extensive literature on hardness of exact computation and inapproximability of the Tutte polynomial and the partition function of the random cluster model. It is known that exact computation of the Tutte polynomial for a graph is #P-hard at all points except at , along the hyperbola , and for planar graphs, along the hyperbola [JVW90, Ver91, Wel94]. In the realm of inapproximability, it is known that even for planar graphs, there is no FPRAS to approximate the Tutte polynomial for or assuming [GJ08, GJ12II]. Furthermore, there is no FPRAS for estimating the partition function of the random cluster model on general graphic matroids when , nor is there an FPRAS for at for general binary matroids, unless there is an FPRAS for counting independent sets in a bipartite graph [GJ12I, GJ13, GJ14].
1.5 Independent Work
In a closely related upcoming work, Brändén and Huh, in a slightly different language, independently prove the strong log-concavity of several of the polynomials that appear in this paper. In upcoming papers, both groups of authors use these techniques to prove the strongest form of Mason's conjecture and further study closure properties of (strongly) log-concave polynomials.
1.6 Techniques
One of our key observations is a close connection between pure simplicial complexes and multiaffine homogeneous polynomials. Specifically, if is a pure simplicial complex with positive weights on its maximal faces, we can associate with a multiaffine homogeneous polynomial such that the eigenvalues of the localized random walks on correspond to the eigenvalues of the Hessian of derivatives of .
Weighted Simplicial Complex | Multiaffine Polynomial
Dimension | Degree
Weight of a face | Evaluation at a point
Connectivity of links | Indecomposability
Link | Differentiation
Local random walk | (Normalized) Hessian
Using this correspondence, one can study multiaffine homogeneous polynomials using techniques from simplicial complexes, and vice versa. To study the walk corresponding to a polynomial , we analyze the simplicial complex corresponding to . To do this, we leverage recent developments in the area of highdimensional expanders, which we discuss below.
Given a simplicial complex (see Section 2.4) and an ordering of its vertices, one can associate a high dimensional Laplacian matrix to the dimensional faces of . These matrices generalize the classical graph Laplacian, and there has been extensive research studying their eigenvalues [see Lub17 and the references therein]. A method known as Garland's method [Gar73] relates the eigenvalues of graph Laplacians of 1-skeletons of links of to eigenvalues of high dimensional Laplacians of [see BS97, Opp18].
Recently, [KM17] studied a high dimensional walk on a simplicial complex, which is closely related to the walk that we defined above (see Section 3). Their goal is to argue that, similar to classical expander graphs, high dimensional walks mix rapidly on a high dimensional expander. Their bounds were improved in a work of [DK17], who showed that if all nontrivial eigenvalues of the simple random walk matrix on all 1-skeletons of links of have absolute value at most , then the high dimensional walk on faces of has spectral gap at least . This was further improved in a recent work of [KO18]: they showed that if all nontrivial eigenvalues of the simple random walk matrix on all 1-skeletons of links of are at most , then the spectral gap of the high dimensional walk is at least . In other words, negative eigenvalues of the random walk matrix do not matter; one only needs the positive eigenvalues to be small.
Note that in order to make the spectral gap bounds meaningful one needs . In other words, one needs that, except for the trivial eigenvalue of 1, all other eigenvalues are either negative or very close to . This is where the connection to (strong) log-concavity comes into the picture. A polynomial is log-concave at if its Hessian there has at most one positive eigenvalue. A polynomial is strongly log-concave if the same holds for all partial derivatives of . Our main observation is that this property is equivalent to taking in the corresponding simplicial complex. Namely, we obtain the best possible spectral gap of when the simplicial complex comes from a strongly log-concave polynomial.
Our approach has a close connection to the original plan of [FM92], who used the negative correlation property of balanced matroids to show that the bases exchange walk mixes rapidly. Unfortunately, most interesting matroids do not satisfy negative correlation. But it was observed [AHK18, HW17, AOV18] that all matroids satisfy a spectral negative dependence property. Namely, consider the uniform distribution over the bases of a matroid , and consider the Hessian of the generating polynomial at the point . Then is negatively correlated if and only if all off-diagonal entries of this matrix are nonpositive, whereas being spectrally negatively correlated means that this matrix is negative semidefinite. Spectral negative correlation is precisely what one needs to bound the mixing time of the high dimensional walk on the corresponding simplicial complex.
Structure of the paper.
In Section 2 we discuss necessary background on linear algebra, matroids, simplicial complexes, and strongly log-concave polynomials. We also provide a useful characterization of strong log-concavity. In Section 3 we discuss and reprove a version of the main theorem of [KO18] on the mixing time of high dimensional walks, Theorem 3.3. In Section 4 we use this to prove Theorem 1.1 and the Mihail-Vazirani conjecture, Theorem 1.5. Finally, in Section 5 we first prove our new characterization of strong log-concavity and discuss its applications. Specifically, we give a self-contained proof that the uniform distribution over the bases of a matroid is strongly log-concave, and we prove Theorems 1.7 and 1.6.
Acknowledgements.
Part of this work was started while the first and last authors were visiting the Simons Institute for the Theory of Computing. It was partially supported by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF grant CCF-1740425. Shayan Oveis Gharan and Kuikui Liu are supported by NSF grant CCF-1552097 and ONR-YIP grant N00014-17-1-2429. Cynthia Vinzant was partially supported by the National Science Foundation grant DMS-1620014.
2 Preliminaries
First, let us establish some notational conventions. Unless otherwise specified, all logarithms are in base . All vectors are assumed to be column vectors. For two vectors , we use to denote the standard Euclidean inner product between and . We use and to denote the set of positive and nonnegative real numbers, respectively, and to denote . For a vector and a set , we let denote .
We use or to denote the partial differential operator . We denote the gradient of a function or polynomial by and the Hessian of by .
2.1 Linear Algebra
We say a matrix $P$ is stochastic if all entries of $P$ are nonnegative and every row adds up to exactly 1. It is well-known that the largest eigenvalue in magnitude of any stochastic matrix is $1$, and a corresponding eigenvector is the all-ones vector, $\mathbf{1}$. If the eigenvalues of a matrix are all real, then we order them as $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. A symmetric matrix $M$ is positive semidefinite (PSD), denoted $M \succeq 0$, if all its eigenvalues are nonnegative, or equivalently if $x^\top M x \geq 0$ for all $x$. Similarly, $M$ is negative semidefinite (NSD), denoted $M \preceq 0$, if $x^\top M x \leq 0$ for all $x$. Equivalently, a real symmetric matrix is PSD (NSD) if its eigenvalues are nonnegative (nonpositive), respectively.
Theorem 2.1 (Schur Product Theorem [Hj13, Thm 7.5.3]).
If are positive semidefinite, then their Hadamard product , whose entries are , is positive semidefinite.
Theorem 2.2 (PerronFrobenius Theorem [Hj13, Ch. 8]).
Let be symmetric and have strictly positive entries. Then has an eigenvalue which is strictly positive. Furthermore, it has multiplicity one and its corresponding eigenvector has strictly positive entries.
Theorem 2.3 (Cauchy’s Interlacing Theorem [Hj13, Corollary 4.3.9]).
For a symmetric matrix and vector , the eigenvalues of interlace the eigenvalues of . That is, for
The following is an immediate consequence:
Lemma 2.4.
Let be a symmetric matrix and let . If has at most one positive eigenvalue, then has at most one positive eigenvalue.
Proof.
Since has at most one positive eigenvalue, we can write for some vector and some negative semidefinite matrix . Then . First, observe that , since for , . Second, let . Then and by Theorem 2.3, the eigenvalues of interlace the eigenvalues of . Since all eigenvalues of are nonpositive, has at most one positive eigenvalue. ∎
The following fact is wellknown.
Fact 2.5.
Let $A$ and $B$ be matrices of dimensions $m \times n$ and $n \times m$, respectively. Then, the nonzero eigenvalues of $AB$ are equal to the nonzero eigenvalues of $BA$, with the same multiplicities.
Lemma 2.6.
Let be a symmetric matrix with at most one positive eigenvalue. Then, for any PSD matrix , has at most one positive eigenvalue.
Proof.
Since , we can write for some . By Fact 2.5, has the same nonzero eigenvalues as the matrix . Since has at most one positive eigenvalue, by Lemma 2.4, has at most one positive eigenvalue and so does . ∎
Lemma 2.7.
Let be a symmetric matrix with nonnegative entries and at most one positive eigenvalue, and let . Then,
Proof.
Let . Then, by Lemma 2.4, has at most one positive eigenvalue. Observe that the top eigenvector of is the vector, where , for all . In particular, . So, is the only eigenvector of with positive eigenvalue and we have
Multiplying both sides of the inequality on the left and right by proves the lemma. ∎
In this paper, we will often switch between different inner products. As such, we highlight the following variational characterization of eigenvalues of a linear operator that is selfadjoint with respect to an arbitrary inner product. In particular, the matrix of the linear operator need not be symmetric.
Theorem 2.8 (CourantFischer Theorem).
Let be a linear operator that is selfadjoint with respect to some inner product . If are the eigenvalues of , then
where the minimum is taken over all dimensional subspaces and the maximum is taken over all vectors with .
When the inner product is clear, we call a matrix $A$ self-adjoint when $\langle Ax, y \rangle = \langle x, Ay \rangle$ for all $x, y$. Similarly, we call a self-adjoint $A$ positive semidefinite when $\langle Ax, x \rangle \geq 0$ for all $x$.
By Theorem 2.8, this is equivalent to $A$ having nonnegative eigenvalues.
2.2 Markov Chains and Random Walks
For this paper, we consider a Markov chain as a triple where denotes the (finite) state space, denotes the transition probability matrix and denotes a stationary distribution of the chain (which will be unique for all chains we consider). For , we use to denote the corresponding entry of , which is the probability of moving from to . We say a Markov chain is lazy if for any state , . A chain is reversible if there is a nonzero nonnegative function such that for any pair of states ,
If this condition is satisfied, then is proportional to a stationary distribution of the chain. In this paper we only work with reversible Markov chains. Note that being reversible means that the transition matrix is self-adjoint with respect to the following inner product, defined for functions on the state space:
Reversible Markov chains can be realized as random walks on weighted graphs. Given a weighted graph where every edge has weight , the non-lazy simple random walk on is the Markov chain that from any vertex chooses an incident edge with probability proportional to its weight and jumps to the other endpoint. We can make this walk lazy by staying at every vertex with probability . It turns out that if the graph is connected, then the walk has a unique stationary distribution, in which the probability of each vertex is proportional to its weighted degree.
For any reversible Markov chain , the largest eigenvalue of is . We let denote the second largest eigenvalue of in absolute value. That is, if are the eigenvalues of , then equals .
Theorem 2.9 ([Ds91, Prop 3]).
For any reversible irreducible Markov chain , , and any starting state ,
For our results, it will be enough to look at the second largest eigenvalue , which we can bound using the conductance of a weighted graph. Consider a weighted graph and a subset of vertices. We let denote the complement . Then the conductance of , denoted by , is defined as
where is the set of edges between and , is the sum of weights of these edges, and the volume is the sum of the weighted degrees of the vertices in . The conductance of is then
where the minimum is taken over subsets for which .
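For small graphs the conductance can be computed directly from this definition by enumerating all subsets of vertices; the following sketch (names ours) does exactly that:

```python
from itertools import combinations

def conductance(vertices, edge_weights):
    """Brute-force conductance of a weighted graph.

    edge_weights: dict mapping frozenset({u, v}) to a positive weight.
    """
    deg = {v: 0.0 for v in vertices}
    for e, w in edge_weights.items():
        for v in e:
            deg[v] += w
    total_vol = sum(deg.values())
    best = float('inf')
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            S = set(S)
            vol = sum(deg[v] for v in S)
            if not 0 < vol <= total_vol / 2:
                continue  # only sets of at most half the total volume
            # Total weight of edges crossing between S and its complement.
            cut = sum(w for e, w in edge_weights.items() if len(e & S) == 1)
            best = min(best, cut / vol)
    return best
```

On the unweighted 4-cycle, the minimizing sets are pairs of adjacent vertices, giving conductance 1/2.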
We say is regular if for all .
Theorem 2.10 (Cheeger’s Inequalities [Am85, Alon86]).
For any regular weighted graph ,
where is the weighted adjacency matrix of given by .
A direct consequence of the above theorem is that if the (weighted) graph is connected, i.e., for all proper nonempty subsets of vertices, , then . If the matrix is stochastic, then the graph is regular, which gives the following.
Corollary 2.11.
If is a stochastic matrix corresponding to a reversible Markov chain with the property that for all subsets , then .
2.3 Matroids
A matroid is a combinatorial structure consisting of a ground set of elements and a nonempty collection of independent subsets of satisfying:


If and , then (hereditary property).

If and , then there exists an element such that (exchange axiom).
The rank, denoted by , of a subset is the size of any maximal independent set of contained in . Thus, the independent sets of are precisely those subsets for which . We call the rank of , and if has rank , any set of size is called a basis of .
An element is a loop if , that is, is dependent. Two nonloops are parallel if , that is, is dependent.
Definition 2.12 (Contraction).
Let $\mathcal{M} = (E, \mathcal{I})$ be a matroid and $T \in \mathcal{I}$. Then the contraction $\mathcal{M}/T$ is the matroid with ground set $E \setminus T$ and independent sets $\{S \subseteq E \setminus T : S \cup T \in \mathcal{I}\}$.
We will use a key property of matroids called the matroid partition property. For any matroid , the nonloops of can be partitioned into sets for some with the property that nonloops are parallel if and only if they belong to the same set . Indeed, one can check from the axioms for a matroid that being parallel defines an equivalence relation on the nonloop elements of and are then the corresponding equivalence classes.
2.4 Simplicial Complexes
A simplicial complex on the ground set is a nonempty collection of subsets of that is downward closed, namely if and , then . The elements of are called faces/simplices, and the dimension of a face is defined as . Note that for convenience and clarity of notation, our definition deviates from the standard definition of dimension used by topologists. A face of dimension 1 is a vertex of and a face of dimension 2 is called an edge. More generally, we write
for the collection of dimension faces, or faces/simplices, of . The dimension of is the largest for which is nonempty, and we say that is pure of dimension if all maximal faces of have dimension . In this paper we will only consider pure simplicial complexes.
The link of a face denoted by is the simplicial complex on obtained by taking all faces in that contain and removing from them,
Note that if is pure of dimension and , then is pure and has dimension .
For any matroid of rank , the independent sets form a pure dimensional simplicial complex on called its independence (or matroid) complex. Furthermore, for any , the link of the independence complex consists precisely of the independent sets of the contraction . There are many other beautiful simplicial complexes associated to matroids, but here we will be mainly interested in the independence complex.
We can equip a simplicial complex with a weight function, which assigns a positive weight to each face. We say that the weight function is balanced if for every nonmaximal face of dimension ,
(1) 
For a pure simplicial complex we can define a balanced weight function by assigning arbitrary positive weights to maximal faces and defining the weight of each lower-dimensional face recursively. Indeed, if is a pure simplicial complex of dimension and is a balanced weight function, then, for any ,
One natural choice is the function which assigns a weight of one to each maximal face, but there are many other interesting choices.
Any balanced weight function on induces a weighted graph on the vertices of as follows. The 1-skeleton of is the graph on vertices with edges . Then, restricting to and determines a weighted graph, where gives the weighted degree of each . The weighted graphs coming from both and its links will be useful in later sections.
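The recursion defining a balanced weight function from weights on the maximal faces can be sketched as follows, where the weight of a face is the sum of the weights of its cofaces with exactly one more element; all names are ours:

```python
from itertools import combinations

def induced_weights(maximal_faces, maximal_weights):
    """Extend positive weights on the maximal faces of a pure complex to a
    balanced weight function: w(tau) = sum of w(sigma) over cofaces sigma
    of tau with exactly one more element."""
    d = len(next(iter(maximal_faces)))  # all maximal faces have this size
    w = {frozenset(f): float(maximal_weights[f]) for f in maximal_faces}
    # Work downward one level at a time, from facets to the empty face.
    for size in range(d - 1, -1, -1):
        layer = {}
        for sigma, ws in list(w.items()):
            if len(sigma) != size + 1:
                continue
            for tau in combinations(sorted(sigma), size):
                tau = frozenset(tau)
                layer[tau] = layer.get(tau, 0.0) + ws
        w.update(layer)
    return w
```

For the complex whose facets are the three edges of a triangle with unit weights, each vertex receives weight 2 and the empty face receives weight 6.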
2.5 Log-Concave Polynomials
We say a polynomial is homogeneous if every monomial of has degree ; equivalently, is homogeneous if for every . For a homogeneous polynomial , the following identity, known as Euler’s identity, holds:
(2) 
Note that if is homogeneous then all directional derivatives of are also homogeneous, so one can apply this to and to find that
(3) 
A polynomial with nonnegative coefficients is log-concave if is a concave function over . For simplicity we also consider the zero polynomial to be log-concave. Equivalently, is log-concave if the Hessian of
is negative semidefinite at any point , where is the gradient of . Since is a rank-1 matrix, by Cauchy's interlacing theorem, has at most one positive eigenvalue at any . Since has nonnegative coefficients and has strictly positive entries, being negative semidefinite is equivalent to , where the right-hand side is a rank-1 positive semidefinite matrix. In particular, has at most one positive eigenvalue at . In [AOV18] it is shown that for homogeneous polynomials , the converse of this is also true, i.e., if has at most one positive eigenvalue at all then is log-concave.
Proposition 2.13 ([AOV18]).
A degree homogeneous polynomial with nonnegative coefficients is log-concave over if and only if has at most one positive eigenvalue at all .
We say a polynomial is decomposable if it can be written as a sum of polynomials in disjoint subsets of the variables, that is, if there exists a nonempty subset and nonzero polynomials , for which . We call indecomposable otherwise.
Lemma 2.14.
If has nonnegative coefficients, is homogeneous of degree at least 2, and is log-concave at , then is indecomposable.
Proof.
Suppose that has nonnegative coefficients, is homogeneous of degree , and is decomposable, with decomposition where and . Both and are restrictions of obtained by setting some variables equal to zero; therefore both and are log-concave. Then, at , the Hessians of and each have precisely one positive eigenvalue. However, the Hessian of at this point is a block diagonal matrix with these two blocks,
So, has exactly two positive eigenvalues, meaning that is not log-concave, a contradiction. ∎
In order to prove that several distributions of interest are strongly log-concave, we will prove an equivalent characterization of strongly log-concave polynomials.
Theorem 2.15.
Let be a homogeneous polynomial such that:

for any and any , is indecomposable, and

for any , the quadratic is either identically zero, or log-concave at .
Then is strongly logconcave at .
In Lemma 2.14 we show that the condition that all partial derivatives are indecomposable is necessary for a polynomial to be (strongly) log-concave.
3 Walks on Simplicial Complexes
Consider a pure dimensional complex with a balanced weight function . We will call a weighted complex. For , we define two random walks on , which we will refer to as the upper and lower walks. To define these walks we construct a bipartite graph with one side corresponding to and the other side corresponding to . We connect to with an edge of weight iff . Now, consider the simple (weighted) random walk on this bipartite graph: given a vertex, we choose a neighbor with probability proportional to the weight of the edge connecting the two vertices.
This is a walk on a bipartite graph and is naturally periodic. We can consider the odd steps and the even steps in order to obtain two random walks: one on called , and the other on called . Given a face, you take two steps of the walk in the bipartite graph to transition to the next face with respect to the matrix, and similarly, you take two steps from to transition with respect to .

Now, let us formally write down the entries of and . Given a simplex , first among all dimensional simplices that contain we choose one with probability proportional to . Then, we delete one of the elements of uniformly at random to obtain a new state . It follows that the probability of transitioning to is equal to the probability of choosing in the first step, which is equal to since is balanced, times the probability of choosing conditioned on , which is . In summary,
(4) 
Note that the upper walk is not defined for , because there is no dimensional simplex in .
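One step of the upper walk described above can be sketched as follows, assuming the relevant level of the complex is given as a list of weighted faces; all names are ours:

```python
import random

def upper_walk_step(tau, cofaces, w, rng=random):
    """One step of the upper walk on X(k): choose a coface sigma in X(k+1)
    containing tau with probability proportional to w(sigma), then drop a
    uniformly random element of sigma."""
    ups = [s for s in cofaces if tau <= s]
    sigma = rng.choices(ups, weights=[w[s] for s in ups], k=1)[0]
    i = rng.choice(sorted(sigma))
    return sigma - {i}
```

Running this step repeatedly on the faces of one level keeps the walk within that level, moving up and back down through the bipartite graph.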
Analogously, given , first we remove a uniformly random element of to obtain . Then, among all simplices