1. Introduction
Our most important contribution here is the proof of a conjecture of Talagrand [17] that is (a strengthening of) a fractional relaxation of the “expectation-threshold” conjecture of Kalai and the second author [11]. For an increasing family $\mathcal{F}$ on a finite set $X$, we write (with definitions below) $p_c(\mathcal{F})$, $q_f(\mathcal{F})$ and $\ell(\mathcal{F})$ for the threshold, fractional expectation-threshold, and size of a largest minimal element of $\mathcal{F}$. In this language, our main result is the following.
Theorem 1.1.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal{F} \subseteq 2^X$,
$$p_c(\mathcal{F}) \le K q_f(\mathcal{F}) \log \ell(\mathcal{F}). \qquad (1)$$
As discussed below, $q_f(\mathcal{F})$ is a more or less trivial lower bound on $p_c(\mathcal{F})$, and Theorem 1.1 says this bound is never far from the truth. (Apart from the constant $K$, the upper bound is tight in many of the most interesting cases; see below.)
Thresholds have been a—maybe the—central concern of the study of random discrete structures (random graphs and hypergraphs, for example) since its initiation by Erdős and Rényi [5], with much work around identifying thresholds for specific properties (see [3, 9]), though it was not observed until [2] that every increasing $\mathcal{F}$ admits a threshold (in the Erdős–Rényi sense; see below). See also [7] for developments, since [6], on the very interesting question of sharpness of thresholds.
Our second main result is Theorem 1.8 below, which was motivated by the “random multi-dimensional assignment” problem of Frieze and Sorkin [8]. The statement is postponed until we have filled in some background, to which we now turn. (See the beginning of Section 2 for notation not defined here.)
Thresholds. For a given $X$ and $p \in [0,1]$, $\mu_p$ is the product measure on $2^X$ given by $\mu_p(S) = p^{|S|}(1-p)^{|X \setminus S|}$. An $\mathcal{F} \subseteq 2^X$ is increasing if $B \supseteq A \in \mathcal{F}$ implies $B \in \mathcal{F}$. If this is true (and $\mathcal{F} \notin \{\emptyset, 2^X\}$), then $\mu_p(\mathcal{F})$ is strictly increasing in $p$, and the threshold, $p_c(\mathcal{F})$, is the unique $p$ for which $\mu_p(\mathcal{F}) = 1/2$. This is finer than the original Erdős–Rényi notion, according to which $p^* = p^*(n)$ is a threshold for $\mathcal{F} = \mathcal{F}_n$ if $\mu_p(\mathcal{F}) \to 0$ if $p/p^* \to 0$ and $\mu_p(\mathcal{F}) \to 1$ if $p/p^* \to \infty$. (That $p_c(\mathcal{F})$ is always an Erdős–Rényi threshold follows from [2].)
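For concreteness, the following small script (our illustration, not part of the paper) computes $\mu_p(\mathcal{F})$ exactly for a toy increasing family by enumerating $2^X$, and locates $p_c(\mathcal{F})$ by bisection, using the strict monotonicity of $p \mapsto \mu_p(\mathcal{F})$ noted above.

```python
from itertools import combinations

def increasing_family(n, generators):
    """The increasing family <G> generated by `generators` (subsets of range(n))."""
    gens = [frozenset(g) for g in generators]
    return {
        frozenset(S)
        for r in range(n + 1)
        for S in combinations(range(n), r)
        if any(g <= frozenset(S) for g in gens)
    }

def mu_p(n, family, p):
    """Exact mu_p(F) = sum over S in F of p^|S| (1-p)^(n-|S|), by enumerating 2^X."""
    return sum(
        p**r * (1 - p)**(n - r)
        for r in range(n + 1)
        for S in combinations(range(n), r)
        if frozenset(S) in family
    )

def p_c(n, family, tol=1e-12):
    """Bisection for the unique p with mu_p(F) = 1/2 (F increasing and nontrivial)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mu_p(n, family, mid) < 0.5 else (lo, mid)
    return (lo + hi) / 2

# Toy example on X = {0,1,2,3}: F = "contains some pair" = {S : |S| >= 2},
# for which mu_p(F) = 1 - (1-p)^4 - 4p(1-p)^3.
fam = increasing_family(4, combinations(range(4), 2))
```

Of course such brute-force enumeration is only feasible for tiny $X$; it is meant purely to make the definitions tangible.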
Say $\mathcal{F}$ is $p$-small if there is a $\mathcal{G} \subseteq 2^X$ such that $\mathcal{F} \subseteq \langle \mathcal{G} \rangle := \{T : T \supseteq S \text{ for some } S \in \mathcal{G}\}$ and $\sum_{S \in \mathcal{G}} p^{|S|} \le 1/2$. Then $q(\mathcal{F}) := \max\{p : \mathcal{F} \text{ is } p\text{-small}\}$, which we call the expectation-threshold of $\mathcal{F}$ (though the term is used slightly differently in [11]), is a trivial lower bound on $p_c(\mathcal{F})$, since for $\mathcal{G}$ as above and $W$ drawn from $\mu_p$,
$$\mu_p(\mathcal{F}) \le \sum_{S \in \mathcal{G}} \Pr(W \supseteq S) = \sum_{S \in \mathcal{G}} p^{|S|}. \qquad (2)$$
Thus the following statement, the main conjecture of [11], says that for any $\mathcal{F}$ there is a trivial lower bound on $p_c(\mathcal{F})$ that is close to the truth.
Conjecture 1.2.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal{F} \subseteq 2^X$,
$$p_c(\mathcal{F}) \le K q(\mathcal{F}) \log |X|.$$
We should perhaps emphasize how strong this is (from [11]: “It would probably be more sensible to conjecture that it is not true”); e.g. it easily implies—and was largely motivated by—Erdős–Rényi thresholds for (a) perfect matchings in random uniform hypergraphs, and (b) appearance of a specified bounded-degree spanning tree in random graphs. These have since been resolved: the first—Shamir’s Problem, circa 1980—in [10], and the second—a mid-90’s suggestion of the second author—in [14]. Both arguments are difficult and specific to the problems they address (in that they are utterly unrelated either to each other or to what we do here).

From considerations of duality, Talagrand [17] suggests relaxing “$p$-small” by replacing the set system $\mathcal{G}$ above by a fractional set system $g$: say $\mathcal{F}$ is weakly $p$-small if there is a $g : 2^X \to \mathbb{R}^+$ such that
$$\sum_{S \subseteq F} g(S) \ge 1 \quad \forall F \in \mathcal{F}, \qquad \text{and} \qquad \sum_{S} g(S)\, p^{|S|} \le 1/2.$$
Then $q_f(\mathcal{F}) := \max\{p : \mathcal{F} \text{ is weakly } p\text{-small}\}$, the fractional expectation-threshold for $\mathcal{F}$, satisfies
$$q(\mathcal{F}) \le q_f(\mathcal{F}) \le p_c(\mathcal{F}) \qquad (3)$$
(the first inequality is trivial and the second is similar to (2)), and Talagrand proposes a sort of LP relaxation of Conjecture 1.2, and then a strengthening thereof, the second of which is our Theorem 1.1:
Conjecture 1.3.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal{F} \subseteq 2^X$,
$$p_c(\mathcal{F}) \le K q_f(\mathcal{F}) \log |X|.$$
Conjecture 1.4.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal{F} \subseteq 2^X$,
$$p_c(\mathcal{F}) \le K q_f(\mathcal{F}) \log \ell(\mathcal{F}).$$
Talagrand also suggests the following “very nice problem of combinatorics”, which implies equivalence of Conjectures 1.3 and 1.2, as well as of Conjecture 1.4 and the corresponding strengthening of Conjecture 1.2.
Conjecture 1.5.
There is a universal $L$ such that, for any increasing $\mathcal{F}$ on a finite set $X$, if $\mathcal{F}$ is weakly $p$-small, then $\mathcal{F}$ is $(p/L)$-small.
(That is, weakly small implies small.)
But, at least for the problems originally motivating Conjecture 1.2 discussed above, this equivalence is not needed: the applications follow just as easily from Conjecture 1.3.
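For intuition, certificates of weak $p$-smallness are easy to verify mechanically. The sketch below (ours, not the paper's; names are illustrative) checks the two defining conditions for a candidate weighting $g$ on a small universe; taking $g$ to be the indicator of a genuine cover $\mathcal{G}$ recovers the first inequality in (3).

```python
from itertools import combinations

def is_weakly_p_small_cert(family, g, p):
    """Check that g (a dict frozenset -> nonnegative weight) witnesses weak
    p-smallness of `family`: (i) sum_{S subseteq F} g(S) >= 1 for every F in
    the family, and (ii) sum_S g(S) p^|S| <= 1/2."""
    covered = all(
        sum(w for S, w in g.items() if S <= F) >= 1 - 1e-12 for F in family
    )
    cheap = sum(w * p**len(S) for S, w in g.items()) <= 0.5 + 1e-12
    return covered and cheap

# Toy: F = subsets of {0,1,2,3} of size >= 2.  Weight 1 on each of the 6 pairs
# gives a (here in fact integral) cover whose cost is 6p^2, so the certificate
# works exactly when 6p^2 <= 1/2, i.e. p <= 1/sqrt(12) ~ 0.289.
family = [frozenset(S) for r in (2, 3, 4) for S in combinations(range(4), r)]
g = {frozenset(S): 1.0 for S in combinations(range(4), 2)}
```

Finding the best such $g$ (and hence $q_f$) is a linear program, which is the duality underlying Proposition 1.6 below.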
Spread hypergraphs and spread measures. In this paper a hypergraph $\mathcal{H}$ on the (vertex) set $X$ is a collection of subsets of $X$ (the edges of $\mathcal{H}$), with repeats allowed. For $S \subseteq X$, we use $\langle S \rangle$ for $\{T : S \subseteq T \subseteq X\}$, and for a hypergraph $\mathcal{H}$ on $X$, we write $\langle \mathcal{H} \rangle$ for $\bigcup_{S \in \mathcal{H}} \langle S \rangle$, the increasing family generated by $\mathcal{H}$. We say $\mathcal{H}$ is $k$-bounded (resp. $k$-uniform) if each of its members has size at most (resp. exactly) $k$, and $R$-spread if
$$|\mathcal{H} \cap \langle S \rangle| \le R^{-|S|}\, |\mathcal{H}| \qquad \forall S \subseteq X. \qquad (4)$$
(Note that edges are counted with multiplicities on both sides of (4).)
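As an illustration (our own; the function name is hypothetical), the script below computes the largest $R$ for which a small hypergraph is $R$-spread, by minimising $(|\mathcal{H}|/|\mathcal{H} \cap \langle S \rangle|)^{1/|S|}$ over nonempty $S$ contained in at least one edge (for other $S$ the constraint (4) is vacuous), counting edges with multiplicity as stipulated.

```python
from itertools import combinations

def max_spread(edges):
    """Largest R such that the multiset `edges` is R-spread, per (4):
    |H ∩ <S>| <= R^{-|S|} |H|, i.e. R <= (|H| / |H ∩ <S>|)^{1/|S|}."""
    H = [frozenset(e) for e in edges]
    best = float("inf")
    seen = set()
    for e in H:
        for r in range(1, len(e) + 1):
            for S in map(frozenset, combinations(sorted(e), r)):
                if S in seen:
                    continue
                seen.add(S)
                count = sum(1 for f in H if S <= f)  # edges counted with multiplicity
                best = min(best, (len(H) / count) ** (1 / len(S)))
    return best

# Example: the 3 perfect matchings of K4, encoded as 2-element sets of
# edge-labels 0..5 (edges 01,23,02,13,03,12 of K4, in that order).
matchings = [[0, 1], [2, 3], [4, 5]]
```

Here each single label lies in one of the three matchings, so (4) forces $R \le 3$, and each full matching forces $R \le 3^{1/2}$, which is the binding constraint.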
A major advantage of Conjectures 1.3 and 1.4 over Conjecture 1.2—and the source of the present relevance of [1]—is that they admit, via linear programming duality, reformulations in which the specification of a suitably spread measure gives a usable starting point. Following [17], we say that a probability measure $\mu$ on $2^X$ is $q$-spread if $\mu(\langle S \rangle) \le q^{|S|}$ for all $S \subseteq X$. Thus a hypergraph $\mathcal{H}$ is $R$-spread iff uniform measure on $\mathcal{H}$ is $q$-spread with $q = 1/R$.
As observed by Talagrand [17], the following is an easy consequence of duality.
Proposition 1.6.
For an increasing family $\mathcal{F}$ on $X$, if $q > q_f(\mathcal{F})$, then there is a $(2q)$-spread probability measure on $2^X$ supported on $\mathcal{F}$. ∎
This allows us to reduce Theorem 1.1 to the following alternate (actually, equivalent) statement. In this paper with high probability (w.h.p.) means with probability tending to 1 as $n \to \infty$.
Theorem 1.7.
There is a universal $K$ such that for any $k$-bounded, $R$-spread hypergraph $\mathcal{H}$ on the $n$-element set $X$, a uniformly random $((K \log k)/R)\,n$-element subset of $X$ belongs to $\langle \mathcal{H} \rangle$ w.h.p.
The easy reduction is given in Section 2.
Assignments. Our second main result provides upper bounds on the minima of a large class of hypergraph-based stochastic processes, somewhat in the spirit of [16] (see also [15, 18]), saying that in “smoother” settings, the logarithmic corrections of Conjectures 1.2, 1.3, 1.4 and Theorem 1.1 are not needed.
Theorem 1.8.
There is a universal $K$ such that for any $k$-bounded, $R$-spread hypergraph $\mathcal{H}$ on $X$ and independent weights $\{\xi_x : x \in X\}$, each uniform from $[0,1]$, $Z := \min_{S \in \mathcal{H}} \sum_{x \in S} \xi_x$ satisfies $\mathbb{E} Z \le Kk/R$, and w.h.p. $Z \le Kk/R$.
This is often tight (again up to the value of $K$). The distribution of the $\xi_x$'s is not very important; e.g. it's easy to see that the same statement holds if they are Exp(1) random variables, as in the next example.
Theorem 1.8 was motivated by work of Frieze and Sorkin [8] on the “axial” version of the random $d$-dimensional assignment problem. This asks (for fixed $d$ and large $n$) for estimation of
$$Z_{n,d} := \min_{\pi} \sum_{x \in \pi} \xi_x, \qquad (5)$$
where the $\xi_x$'s ($x \in [n]^d$) are independent weights and $\pi$ ranges over “axial assignments,” meaning $\pi \subseteq [n]^d$ meets each axis-parallel hyperplane ($\{x : x_i = a\}$ for some $i \in [d]$ and $a \in [n]$) exactly once. (For $d = 2$, this is classical; see [8] for its rather glorious history. For $d = 3$, it was considered somewhat earlier in the physics literature [13], and its deterministic version was one of Karp's [12] original NP-complete problems. For larger $d$, as far as we know, it was first considered in [8].) Frieze and Sorkin show (regarding bounds; they are also interested in algorithms)
$$\Omega(n^{2-d}) = Z_{n,d} = O(n^{2-d} \log n). \qquad (6)$$
(The lower bound is easy and the upper bound follows from the Shamir bound of [10].)
In present language, $Z_{n,d}$ is simply $\min\{\sum_{x \in S} \xi_x : S \in \mathcal{H}\}$, with $\mathcal{H}$ the set of perfect matchings of the complete, balanced $d$-uniform $d$-partite hypergraph on $dn$ vertices (that is, the collection of $n$-element subsets of $[n]^d$ meeting each of the axis-parallel hyperplanes exactly once). This is easily seen to be $R$-spread with $R = \Omega(n^{d-1})$ (apart from the nearly irrelevant partiteness, for $d = 3$ this is the $\mathcal{H}$ of Shamir's Problem), so the correct bound is an instance of Theorem 1.8:
Corollary 1.9.
$Z_{n,d} = \Theta(n^{2-d})$ in expectation and w.h.p.
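To make (5) concrete: an axial assignment may be encoded by $d-1$ permutations, $\pi = \{(j, \sigma_1(j), \ldots, \sigma_{d-1}(j)) : j \in [n]\}$, so there are $(n!)^{d-1}$ of them, and for tiny $n$ one can compute $Z_{n,d}$ by brute force. The following sketch (ours, purely illustrative, with uniform $[0,1]$ weights for simplicity):

```python
import random
from itertools import permutations, product

def axial_assignments(n, d):
    """All axial assignments of [n]^d, encoded by d-1 permutations."""
    for sigmas in product(permutations(range(n)), repeat=d - 1):
        yield [(j,) + tuple(s[j] for s in sigmas) for j in range(n)]

def Z(n, d, xi):
    """Minimum total weight (5) over axial assignments; xi maps [n]^d to weights."""
    return min(sum(xi[x] for x in pi) for pi in axial_assignments(n, d))

random.seed(0)
n, d = 3, 3
xi = {x: random.random() for x in product(range(n), repeat=d)}
z = Z(n, d, xi)  # minimum over the (3!)^2 = 36 axial assignments
```

The $(n!)^{d-1}$-fold enumeration is of course hopeless beyond very small $n$; the interest of Corollary 1.9 is precisely that it pins down the growth rate without any such computation.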
Frieze and Sorkin also considered the “planar” version of the problem, in which $\pi$ in (5) meets each line ($\{x : x_j = a_j \ \forall j \ne i\}$ for some $i \in [d]$ and $a \in [n]^d$) exactly once; and one may of course generalise from hyperplanes/lines to $s$-dimensional ‘subspaces’ for a given $s \in \{1, \ldots, d-1\}$. It's easy to see what to expect here, and one may hope Theorem 1.8 will eventually apply, but we at present lack the technology to say the relevant $\mathcal{H}$'s are suitably spread.
Organisation. Following minor preliminaries and the derivation of Theorem 1.1 from Theorem 1.7 in Section 2, the heart of our argument, Lemma 3.1, is treated in Section 3. Our approach here strengthens that underlying the recent breakthrough of Alweiss, Lovett, Wu and Zhang [1] on the Erdős–Rado “Sunflower Conjecture” [4]. Section 4 adds one small technical point (more or less repeated from [1]), and the proofs of Theorems 1.7 and 1.8 are given in Sections 5 and 6.
2. Preliminaries
Usage. As is usual, we use $[n]$ for $\{1, \ldots, n\}$, $2^X$ for the power set of $X$, $\binom{X}{k}$ for the family of $k$-element subsets of $X$, and $\binom{X}{\le k}$ for $\bigcup_{j \le k} \binom{X}{j}$. Our default universe is $X$, with $|X| = n$.
In what follows we assume $n$ and $k$ are somewhat large (when there is a $k$ it will be at most $n$), as we may do since smaller values can be handled by adjusting the $K$'s in Theorems 1.7 and 1.8. Asymptotic notation referring to some parameter $m$ (usually $n$) is used in the natural way: implied constants in $O(\cdot)$ and $\Omega(\cdot)$ are independent of $m$, and $a = o(b)$ (also written $a \ll b$) means $a/b$ is smaller than any given $\epsilon > 0$ for large enough values of $m$.
For $p \in [0,1]$ and $m \in \{0, \ldots, n\}$, $W_p$ and $W_m$ are (respectively) a random subset of $X$ (drawn from $\mu_p$) and a uniformly random $m$-element subset of $X$. The following standard observation (contained in e.g. [9, Propositions 1.12, 1.13]) allows us to move easily between these models.
Proposition 2.1.
Let $\mathcal{F}$ be an increasing family on a finite set $X$ of size $n$. As $n \to \infty$, if $\Pr(W_m \in \mathcal{F}) \to 1$ for all $m \sim pn$, then $\mu_p(\mathcal{F}) \to 1$; similarly, if $\Pr(W_m \in \mathcal{F}) \to 0$ for all $m \sim pn$, then $\mu_p(\mathcal{F}) \to 0$. ∎
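The mechanism behind Proposition 2.1 is the exact decomposition $\mu_p(\mathcal{F}) = \sum_m \Pr(|W| = m)\,\Pr(W_m \in \mathcal{F})$, where $W$ is $\mu_p$-random and $W_m$ is a uniform $m$-subset: conditioned on its size, a $\mu_p$-random set is uniform, and $|W|$ is concentrated near $pn$. A quick numerical check of this identity (our illustration, not from the paper):

```python
from itertools import combinations
from math import comb

def mu_p(n, family, p):
    """Exact mu_p(F) for F a set of frozensets over range(n)."""
    return sum(
        p**r * (1 - p)**(n - r)
        for r in range(n + 1)
        for S in combinations(range(n), r)
        if frozenset(S) in family
    )

def mixture(n, family, p):
    """sum_m P(|W| = m) * P(W_m in F): condition a mu_p-random set on its size."""
    total = 0.0
    for m in range(n + 1):
        size_prob = comb(n, m) * p**m * (1 - p)**(n - m)          # P(|W| = m)
        hits = sum(1 for S in combinations(range(n), m) if frozenset(S) in family)
        total += size_prob * hits / comb(n, m)                     # * P(W_m in F)
    return total

# Increasing family on X = {0,1,2,3}: all subsets of size >= 2.
family = {frozenset(S) for r in (2, 3, 4) for S in combinations(range(4), r)}
```

The two quantities agree exactly (term by term), which is why transferring results between the two models only costs the concentration of $|W|$.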
We close this section with the promised:
Derivation of Theorem 1.1 from Theorem 1.7.
Let $K$ be as in Theorem 1.7; let $\mathcal{F}$ be as in Theorem 1.1, with $\mathcal{F}'$ its set of minimal elements; let $n = |X|$ be large enough that the exceptional probability in Theorem 1.7 is less than 1/4, say; and let $\mu$ be the spread probability measure promised by Proposition 1.6, where $q = 2 q_f(\mathcal{F})$. We may assume $\mu$ is supported on $\mathcal{F}'$ (since transferring weight from $F \in \mathcal{F}$ to a minimal $F' \subseteq F$ doesn't destroy the spread condition) and that $\mu$ takes values in $\{i/N : i \in \mathbb{Z}_{\ge 0}\}$ for some integer $N$. We may then replace $\mathcal{F}'$ by the hypergraph $\mathcal{H}$ whose edges are $N\mu(S)$ copies of each edge $S$ of $\mathcal{F}'$, and $\mu$ by uniform measure on $\mathcal{H}$; then $\mathcal{H}$ is $\ell(\mathcal{F})$-bounded and suitably spread, and Theorem 1.7 together with Proposition 2.1 bounds $p_c(\mathcal{F})$ as in Theorem 1.1. ∎
3. Main Lemma
Let $\mathcal{H}$ be a $k$-bounded, $R$-spread hypergraph on a set $X$ of size $n$, with . Let be a slightly small constant (e.g. ) and suppose , with the constant large enough to support the last line in the proof of Lemma 3.1. Set ( since ), and . Finally, fix satisfying for all , set, for and ,
and say the pair is bad if and good otherwise.
The heart of our argument is the following statement, which improves Lemma 5.7 of [1].
Lemma 3.1.
For chosen uniformly from ,
Proof.
It is enough to show, for ,
or, equivalently, that
(7) 
(Note bounds the number of for which the set in question can be nonempty, whence the negligible factors .)
We now use . Let and for say is pathological if there is with and
(8) 
From now on we will always take (with as in Lemma 3.1); thus is typically roughly and, since is spread, is a natural upper bound on what one might expect for the l.h.s. of (8).
We bound the nonpathological and pathological parts of (7) separately; this (with the introduction of “pathological”) is the source of our improvement over [1].
Nonpathological contributions. We first bound the number of in (7) with nonpathological. This basically follows [1], but “nonpathological” allows us to bound the number of possibilities in Step 3 below by the r.h.s. of (8), where [1] settles for something like .
Step 1. There are at most
choices for .
Step 2. Given , let . Choose , for which there are at most possibilities, and set . (If then, as , cannot be bad.)
Step 3. Since we are only interested in nonpathological choices, the number of possibilities for is now at most
Step 4. Complete the specification of by choosing , the number of possibilities for which is at most .
In sum, since and , the number of nonpathological possibilities is at most
(9) 
Pathological contributions. We next bound the number of as in (7) with pathological. The main point here is Step 4.
Step 1. There are at most possibilities for .
Step 2. Choose witnessing the pathology of (i.e. for which (8) holds); there are at most possibilities for .
Step 3. Choose for which
(10) 
Here the left hand side counts members of in whose intersection with is precisely . (Of course, existence of as in (10) follows from (8).) The number of possibilities for this choice is at most .
Step 4. Choose , the number of choices for which is less than . To see this, write for the r.h.s. of (10). Noting that must belong to , we consider, for drawn uniformly from this set,
(11) 
Set and notice that we may assume , since otherwise the probability in (11) is zero. We have
while, for any ,
(of course if the probability is zero); so
Markov’s Inequality then bounds the probability in (11) by , and this bounds the number of possibilities for by , which (now using and ) is easily seen to be less than .
Step 5. Complete the specification of by choosing , which can be done in at most ways.
Combining (and slightly simplifying), we find that the number of pathological possibilities is at most
(12) 
Finally, the sum of the bounds in (9) and (12) is less than the of Lemma 3.1 for suitable and , which completes the proof of the lemma.
∎
4. Small uniformities
As in [1] (see their Lemma 5.9), very small set sizes are handled by a simple Janson bound:
Lemma 4.1.
For a $k$-bounded, $R$-spread $\mathcal{H}$ of size at least on $X$, and ,
(13) 
Proof.
We may assume $\mathcal{H}$ is $k$-uniform (or add dummy vertices of degree 1 to achieve this; note this preserves the spread condition since ). Denote members of $\mathcal{H}$ by and set . Then
and
(where the inequality holds because is spread); and Janson’s Inequality (e.g. [9, Thm. 2.18(ii)]) bounds the probability in (13) by .∎
Corollary 4.2.
Let be as in Lemma 4.1 with , and let be uniform from . Then
5. Proof of Theorem 1.7
We may of course assume (or there is nothing to prove). We may also assume $\mathcal{H}$ is $k$-uniform, as follows: if, for a sufficiently large $C$, we replace each $A \in \mathcal{H}$ by $C$ new edges, each consisting of $A$ together with $k - |A|$ new vertices (with each new vertex used just once), then the resulting hypergraph $\mathcal{H}'$, say with vertex set $X'$, is $k$-uniform and $R$-spread. But if Theorem 1.7 holds for $\mathcal{H}'$, then w.h.p. $W'$ chosen uniformly from the -element subsets of $X'$ both lies in $\langle \mathcal{H}' \rangle$ (and therefore $W' \cap X$ lies in $\langle \mathcal{H} \rangle$) and satisfies , implying that the theorem holds for $\mathcal{H}$ with $K$ replaced by .
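The padding construction just described is easy to make concrete. The self-contained sketch below (our illustration; names and constants are hypothetical) pads a hypergraph to $k$-uniformity with fresh degree-1 vertices and checks condition (4) directly; note that a sufficiently large number of copies per edge is genuinely needed for the padded hypergraph to remain $R$-spread with respect to sets containing fresh vertices.

```python
from itertools import combinations, count

def pad_to_uniform(edges, k, copies):
    """Replace each edge A by `copies` edges A ∪ {k-|A| fresh vertices},
    each fresh vertex used exactly once (so fresh vertices have degree 1)."""
    fresh = count(start=10**6)  # labels disjoint from the original vertices
    padded = []
    for A in edges:
        for _ in range(copies):
            padded.append(frozenset(A) | {next(fresh) for _ in range(k - len(A))})
    return padded

def spread_ok(edges, R):
    """Check the R-spread condition (4) for all nonempty S inside some edge."""
    m = len(edges)
    for e in edges:
        for r in range(1, len(e) + 1):
            for S in map(frozenset, combinations(sorted(e), r)):
                if sum(1 for f in edges if S <= f) > m * R**(-r) + 1e-9:
                    return False
    return True

# A sqrt(3)-spread, 2-bounded example (the 3 matchings of K4 from Section 1):
H = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
padded = pad_to_uniform(H, 3, 2)  # 3-uniform; stays sqrt(3)-spread
```

With only one copy per edge the padded hypergraph fails (4) at the full edges, which is exactly why the construction takes $C$ large.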
Given this assumption, we have
(14) 
since for . We may then slightly strengthen our assumptions to
and  (15) 
(since under the assumptions of Theorem 1.7, (15) holds with replaced by ).
Fix an ordering “” of . In what follows we will have a sequence , with and
where and will be defined below (and is a version of the of Section 3). We then order by setting
(So e.g. each member of ultimately inherits its position in from some member of . This is not very important; the point is that we will be applying Lemma 3.1 repeatedly, and the present convention just provides a concrete for each stage of the iteration.)
Let and be as in Section 3. Set , define by , and set . Then and Theorem 1.7 will follow from the next assertion.
Claim 5.1.
If W is a uniform subset of , then w.h.p.
Proof.
Set . Let and for . Let and, for , let be uniform from and set .
For let , where is the first member of contained in (where is ordered by ). Say is good if (and bad otherwise), and set
(Thus is a bounded collection of subsets of and inherits the ordering as described above.)
Finally, choose uniformly from . Then is as in Claim 5.1. Note also that whenever . (More generally, whenever lies in .)
So to prove the claim, we just need to show
(16) 
(where the refers to the entire sequence ).
For call successful if , call successful if it lies in , and say a sequence of ’s is successful if each of its entries is. We show a little more than (16):
(17) 
According to Lemma 3.1 (and Markov’s Inequality), for ,
since successful implies which with (15) (and ) gives the spread condition (4) for . Thus
(18) 
(see the specifications of and given before and after the statement of Claim 5.1).
Finally, if is successful, then Corollary 4.2 (applied with , , , and ) gives
(19) 
and we have the claim. ∎
6. Proof of Theorem 1.8
We assume the setup of Theorem 1.8 and may again assume . In addition, as in Section 5, we may assume $\mathcal{H}$ is $k$-uniform, since the construction there produces a $k$-uniform, $R$-spread $\mathcal{H}'$ with . In particular this gives
(20) 
The theorem will follow from the next assertion, in which is as in Section 3.
Claim 6.1.
For ,
Proof of Theorem 1.8 given Claim 6.1.
The “w.h.p.” statement is immediate (take ). For the expectation, set and , and note that the bound on promised by Claim 6.1 holds for all . Recalling that and noting that , we have
and we are done as either or
Proof of Claim 6.1.
Terms not defined here (beginning with and ) are as in Section 5, but we now define by and set , with as in Section 3 (that is, ), noting that (20) gives .
It’s now convenient to generate the ’s using the ’s in the natural way: let
and let consist of the ’s in positions when is ordered according to the ’s.
Proposition 6.2.
With probability ,
(21) 
for all and .
Proof.
Write for .
Proposition 6.3.
If , then contains some with
Proof.
Suppose . By construction (of the ’s) there are with and , whence for ; and then gives the proposition. ∎
We now define “success” for to mean that is successful in our earlier sense and that (21) holds. Notice that with our current values of and (and ), we can replace the error terms in (18) and (19) by essentially and , which with Proposition 6.2 bounds the probability that is not successful by (say) . We finish by observing the following.
Proposition 6.4.
If is successful then .
Proof.
For as in Proposition 6.3, we have (with and )
Acknowledgements
The first, second and fourth authors were supported by NSF grant DMS-1501962 and BSF Grant 2014290. The third author was supported by NSF grant DMS-1800521.
References
 [1] R. Alweiss, S. Lovett, K. Wu, and J. Zhang, Improved bounds for the sunflower lemma, Preprint, arXiv:1908.08483.
 [2] B. Bollobás and A. Thomason, Threshold functions, Combinatorica 7 (1987), 35–38.
 [3] Béla Bollobás, Random graphs, second ed., Cambridge Studies in Advanced Mathematics, vol. 73, Cambridge University Press, Cambridge, 2001.
 [4] P. Erdős and R. Rado, Intersection theorems for systems of sets, J. London Math. Soc. 35 (1960), 85–90.
 [5] P. Erdős and A. Rényi, On the evolution of random graphs, Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17–61.
 [6] E. Friedgut, Sharp thresholds of graph properties, and the sat problem, J. Amer. Math. Soc. 12 (1999), 1017–1054, With an appendix by J. Bourgain.
 [7] by same author, Hunting for sharp thresholds, Random Structures Algorithms 26 (2005), 37–51.
 [8] A. Frieze and G. B. Sorkin, Efficient algorithms for three-dimensional axial and planar random assignment problems, Random Structures Algorithms 46 (2015), 160–196.
 [9] S. Janson, T. Łuczak, and A. Rucinski, Random graphs, Wiley-Interscience Series in Discrete Mathematics and Optimization, Wiley-Interscience, New York, 2000.
 [10] A. Johansson, J. Kahn, and V. Vu, Factors in random graphs, Random Structures Algorithms 33 (2008), 1–28.
 [11] J. Kahn and G. Kalai, Thresholds and expectation thresholds, Combin. Probab. Comput. 16 (2007), 495–502.
 [12] R. M. Karp, Reducibility among combinatorial problems, Complexity of computer computations (Proc. Sympos., IBM Thomas J. Watson Res. Center, Yorktown Heights, New York), 1972, pp. 85–103.
 [13] O. C. Martin, M. Mézard, and O. Rivoire, Random multi-index matching problems, J. Stat. Mech. Theory Exp. (2005), P09006, 36.
 [14] R. Montgomery, Spanning trees in random graphs, Adv. Math. 356 (2019), 106793, 92.
 [15] M. Talagrand, The generic chaining, Springer Monographs in Mathematics, SpringerVerlag, Berlin, 2005.
 [16] by same author, Selector processes on classes of sets, Probab. Theory Related Fields 135 (2006), 471–486.

 [17] by same author, Are many small sets explicitly small?, Proceedings of the 2010 ACM International Symposium on Theory of Computing, 2010, pp. 13–35.
 [18] by same author, Upper and lower bounds for stochastic processes, A Series of Modern Surveys in Mathematics, vol. 60, Springer, Heidelberg, 2014.