1 Introduction
Sauer’s Lemma, discovered first by Vapnik & Chervonenkis VC71 and later independently by Shelah SS72 and Sauer S72, upper-bounds the cardinality of a set system in terms of its Vapnik–Chervonenkis (VC) dimension. The lemma has found many applications in such diverse fields as computational learning theory and empirical process theory BEHW89 ; VC71 ; BartlettBook ; Haussler90probablyapproximately ; AngluinSurvey ; DevroyeBook ; vanDerVaartBook , coding theory Guruswami2010 , computational geometry HausslerWelzl86 ; Matousek94 ; SetCover95 ; Kleinberg97 , road network routing AbrahamDFGW11 , and automatic verification SetCover95 . In learning theory it is the avenue through which the VC dimension enters into generalisation error bounds and the theoretical foundations of learnability.
Maximum classes are concept classes on the cube (as discussed below, we consider concept classes evaluated on finite samples; such projections are equivalent to subsets of the cube, so we discuss concept classes as such subsets without loss of generality) that meet Sauer’s Lemma with equality W87 ; KW07 : they maximise cardinality over all concept classes with a given VC dimension. Recent work has illuminated a beautiful geometric structure to maximum classes, one in which such classes (and their complements) can be viewed as complete collections of cubes—unions of cubes, each varying over a unique set of coordinates—which form a contractible simplicial complex (the higher-order cubical generalisation of a tree) RBR08 . Another important family of concept classes are the maximal classes, which cannot be expanded without increasing their VC dimension W87 ; KW07 ; the complement of any maximal VC class is also a complete collection of cubes RBR08 . Indeed, it is most natural to study the complementary structure of VC classes due to these cubical characterisations.
Our key motivation for studying maximal and maximum classes is the Sample Compression conjecture LW86 ; W03 , a problem that has evaded resolution for over a quarter century, and that equates learnability with a concept class admitting a so-called finite compression scheme. Littlestone & Warmuth LW86 , after showing that finite compression schemes immediately lead to risk bounds, posed the conjecture of whether the converse holds: does finite VC dimension imply compression schemes of size linear in that dimension? Beyond providing a deeper understanding of the fundamental notions of learning theory, such as VC dimension and maximum and maximal classes, foundational work on the Sample Compression conjecture may lead to practical learning algorithms. Previously, compression-based learning algorithms LBS04 and bounds Lang05 have enjoyed successful application in practice.
To date, most progress towards the conjecture has been on compressing maximum classes. Floyd F89 first compressed maximum classes with labeled schemes. Later Ben-David & Litman BDL98 proved the existence of unlabeled schemes for maximum classes, followed by Kuzmin & Warmuth KW07 and Rubinstein & Rubinstein Rubinsteins09 , who constructed unlabeled schemes using the cubical structure of such classes. In the related problem of teaching, Doliwa et al. Doliwa2010 showed that the recursive teaching dimension of maximum classes coincides with the VC dimension, using the cubical corner-peeling compression scheme of Rubinstein & Rubinstein Rubinsteins09 . Recently Livni & Simon honest developed a new approach using ideas from model theory to form bounded-size compression schemes for a new family of concept classes. It is unclear, however, how to directly extend any of these results to schemes for general VC classes.
To compress general classes it is necessary and sufficient to compress maximal classes, since any concept class can be expanded to a maximal class without increasing VC dimension. Given the past success at compressing maximum classes, a natural approach to the conjecture is to develop techniques for embedding any maximal class into a maximum class without significantly increasing its VC dimension F89 . This chapter provides results relating to this approach.
We first discuss a series of higher-dimensional analogues of Sauer’s Lemma that count the hypercubes of each dimension in the complements of general VC classes. Where Sauer’s Lemma lower bounds points (the zero-dimensional case) in the complement, these higher-dimensional analogues lower bound edges (the one-dimensional case) all the way up to the faces of the highest dimension present. Moreover we show that maximum classes uniquely meet each higher-dimensional bound with equality, just as in the zero-dimensional case. These bounds were first obtained by Kuzmin & Warmuth KW07 . We present a different treatment as we are particularly interested in the graph obtained by considering only the incidence relations of maximal-dimensional cubes along their faces of codimension one.
We view this characterisation of maximum VC classes as providing a measure of closeness of any VC class—most importantly maximal classes—to being maximum. Knowing how close a maximal class is to being maximum may prove useful in achieving the desired embedding of maximal classes into maximum classes with only a linear increase in VC dimension.
The deficiency of a VC class $C$ is defined as the difference between the cardinality of a maximum VC class of the same VC dimension and the cardinality of $C$—clearly maximum classes are precisely those of deficiency zero. We prove that classes of small deficiency have useful compression schemes coming from embedding into maximum classes, by establishing that every class of VC dimension $d$ with deficiency $D$ embeds in a maximum class of VC dimension at most $d + D$. There are two interesting steps to show this. The first is that if a VC class $C$ projects onto a maximum class, via a coordinate projection of the binary $n$-cube onto a binary $(n-k)$-cube, then $C$ embeds in a maximum VC class. Secondly, if $C$ is a VC class which is not maximum, there is always a projection from the binary $n$-cube to the binary $(n-1)$-cube which reduces the deficiency of $C$.
As an application of the characterisation of maximum VC classes, we produce a collection of concept classes of VC-dimension $d$ embedded in an $n$-cube, so that each class cannot be embedded in any maximum class of VC-dimension less than $2d$ but can be embedded in a maximum class of VC-dimension $2d$. The cubical structure of the complements is the key to the construction. This negative result improves that of Rubinstein & Rubinstein Rubinsteins09 , where it is shown that for all constants $c$ there exist VC classes of VC-dimension $d$ which cannot be embedded in any maximum class of VC-dimension $d + c$. Our new negative result proves that while the general Sample Compression conjecture—that every class of VC dimension $d$ has a compression scheme of size linear in $d$—may still hold, the constant in that linear bound must be at least $2$ if the compression scheme is to be obtained via embeddings.
We also give a recursive scheme to embed any VC class into a maximum VC class of a prescribed VC dimension, if any such embedding exists. The scheme does not resolve the conjecture, because the target VC dimension must be supplied, but rather demonstrates a possible approach to the compression problem via embedding into maximum classes. The key idea is to use lifting RBR08 .
For the special case of small VC classes in low-dimensional binary cubes, we give best-possible results for embedding into maximum classes, and the maximal VC classes in these binary cubes are classified. For symmetric Boolean functions, we show that there is a natural way of enlarging the class to a maximum class of the same VC dimension. A construction is given for sets of Boolean functions which give maximum classes in the binary cube.

Chapter Organisation. We begin with preliminaries in Sect. 2. Our proof bounding the number of hypercubes contained in the complement of a VC class is presented in Sect. 3. We then develop a new characterisation of maximum classes in Sect. 4. In Sect. 5, we prove that every class of VC dimension $d$ embeds in a maximum class of VC dimension at most $d + D$, where $D$ is the deficiency of the class. Section 6 presents examples which demonstrate a new negative result on embedding maximal classes into maximum ones, in which their VC dimension must double. Section 7 gives a general recursive construction of embeddings of VC classes into VC maximum classes. In Sect. 8, classes of small VC dimension embedded in low-dimensional binary cubes are discussed. In Sect. 9, symmetric and other Boolean functions are viewed as classes in the binary cube and related to maximum classes. Sect. 10 concludes the chapter.
2 Background and Definitions
Consider the binary cube $\{0,1\}^n$ for integer $n > 0$. We call any subset $C \subseteq \{0,1\}^n$ a concept class and its elements $c \in C$ concepts. This terminology derives from statistical learning theory: a binary classifier on some domain (e.g., Euclidean space) is equivalent to the bit vector of its evaluations on a particular sample of $n$ points of interest. Hence on a given sample we equate concepts with such classifiers, and families of classifiers (e.g., the linear classifiers) with concept classes. Equivalently, a concept class corresponds to a set system, with underlying set taken to be the axes (or points) and each subset corresponding to the support of a concept.

2.1 Special Concept Classes
We next outline a number of families of concept classes central to VC theory, and that exhibit special combinatorial structure. We begin with the important combinatorial parameter known as the VC dimension VC71 .
Definition 1
The Vapnik–Chervonenkis (VC) dimension of concept class $C \subseteq \{0,1\}^n$ is defined as $\mathrm{VC}(C) = \max\{|I| : I \subseteq [n],\ \mathrm{proj}_I(C) = \{0,1\}^{|I|}\}$, where $\mathrm{proj}_I(C) = \{(c_i)_{i \in I} : c \in C\}$ is the set of coordinate projections of the concepts of $C$ on coordinates $I$.
In words, the VC dimension is the largest number of coordinates on which the restriction of the concept class forms the complete binary cube. The VC dimension is used extensively in statistical learning theory and empirical process theory to measure the complexity of families of classifiers in order to derive risk bounds. It enters into such results via the following bound on concept class cardinality first due to Vapnik & Chervonenkis VC71 , and later independently by Shelah SS72 and Sauer S72 .
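To make the definition concrete, it can be checked by brute force on small cubes. The following is a minimal sketch of ours (the name `vc_dimension` and the example class are not from the chapter):

```python
from itertools import combinations, product

def vc_dimension(C, n):
    """Brute-force VC dimension of a concept class C in {0,1}^n:
    the largest k such that some set of k coordinates is shattered,
    i.e. the projection of C onto those coordinates is the full
    k-cube."""
    concepts = [tuple(c) for c in C]
    best = 0
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            projections = {tuple(c[i] for i in I) for c in concepts}
            if len(projections) == 2 ** k:
                best = k
                break  # some k-set is shattered; try k + 1
        else:
            return best  # no k-set is shattered
    return best

# The four "threshold" concepts on 3 points shatter every single
# coordinate but no pair of coordinates.
thresholds = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(vc_dimension(thresholds, 3))                       # 1
print(vc_dimension(list(product([0, 1], repeat=3)), 3))  # 3
```

The search is exponential in $n$ and is intended only as an executable restatement of the definition.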
Lemma 1 (Sauer’s Lemma)
The cardinality of any concept class $C \subseteq \{0,1\}^n$ is bounded by $|C| \le \sum_{i=0}^{d} \binom{n}{i}$, where $d$ is the VC dimension of $C$.
Any concept class that meets Sauer’s Lemma with equality is called maximum, while any concept class that cannot be extended without increasing VC dimension is called maximal W87 ; KW07 . Trivially maximum classes are maximal by definition, while not all maximal classes are maximum Welzl87 ; KW07 .
A family of “canonical” maximum classes, which are particularly convenient to work with, are the fixed points of a certain type of contraction-like mapping known as shifting, which is used to prove Sauer’s Lemma HLW94 ; RBR08 . (We use $\mathbb{1}[\cdot]$ to denote the indicator function on a predicate, and $[n]$ to denote the integers $\{1,\ldots,n\}$.)
Definition 2
A concept class $C$ is called closed-below if $c \in C$ implies that, for every $i \in [n]$, the concept $c'$ with $c'_i = 0$ and $c'_j = c_j$ for all $j \neq i$ is also in $C$.
We can now define the deficiency of any VC class.
Definition 3
The deficiency of a concept class $C \subseteq \{0,1\}^n$ is the difference $D(C) = |M| - |C|$, where $M$ is any maximum class in the same cube with the same VC dimension as $C$.
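Under this definition the deficiency is computable directly from the Sauer bound. A small sketch of ours (with the VC dimension supplied rather than computed, and the helper name `deficiency` hypothetical):

```python
from math import comb

def deficiency(C, n, d):
    """Deficiency of a class C in {0,1}^n of VC dimension d: the gap
    between the cardinality of a maximum class of VC dimension d
    (the Sauer bound) and |C|."""
    return sum(comb(n, i) for i in range(d + 1)) - len(C)

# The Hamming ball {x : |x| <= 1} in the 4-cube has VC dimension 1
# and cardinality 1 + 4, meeting the Sauer bound: deficiency 0.
ball = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0),
        (0, 0, 1, 0), (0, 0, 0, 1)]
print(deficiency(ball, 4, 1))       # 0
# Removing a concept (which here leaves the VC dimension at 1)
# increases the deficiency to 1.
print(deficiency(ball[:-1], 4, 1))  # 1
```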
2.2 Cubical View of VC Classes
Rubinstein et al. RBR08 established the following natural geometric characterisations of VC classes, and maximum & maximal classes in particular.
Definition 4
A collection of $k$-subcubes of the $n$-cube of cardinality $\binom{n}{k}$ is called complete if, for every set $I$ of $k$ coordinates, there exists a cube in the collection whose set of varying directions is exactly $I$.
Theorem 2.1
A class $C \subseteq \{0,1\}^n$ has VC dimension at most $d$ iff the complement $C^c$ contains a complete collection of $(n-d-1)$-subcubes. In particular, $C$ has VC dimension exactly $d$ iff $C^c$ contains a complete collection of $(n-d-1)$-cubes but no complete collection of $(n-d)$-cubes. It follows that $C$ of VC-dimension $d$ is maximal iff $C^c$ is a complete collection of $(n-d-1)$-cubes that properly contains no complete collection; and $C$ of VC-dimension $d$ is maximum iff $C^c$ is the union of a maximally overlapping complete collection of $(n-d-1)$-cubes, or equivalently iff $C$ is the union of a maximally overlapping complete collection of $d$-cubes.
Due to this characterisation, it is often more convenient to focus on the complementary class of a concept class .
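The complementary viewpoint is easy to experiment with. Below is a brute-force check of ours (all names hypothetical) that a set of vertices contains a complete collection of $k$-cubes:

```python
from itertools import combinations, product

def cube_vertices(free, anchor_coords, anchor_vals, n):
    """All vertices of the cube varying over `free`, fixed to
    `anchor_vals` on `anchor_coords`."""
    for bits in product([0, 1], repeat=len(free)):
        v = [0] * n
        for i, b in zip(anchor_coords, anchor_vals):
            v[i] = b
        for i, b in zip(free, bits):
            v[i] = b
        yield tuple(v)

def has_complete_collection(X, n, k):
    """Does X contain, for every set of k directions, at least one
    k-cube varying exactly along those directions?"""
    S = set(X)
    for free in combinations(range(n), k):
        fixed = [i for i in range(n) if i not in free]
        if not any(
            all(v in S for v in cube_vertices(free, fixed, anchor, n))
            for anchor in product([0, 1], repeat=n - k)
        ):
            return False
    return True

# Complement of the maximum class {x : |x| <= 1} in the 3-cube
# (VC dimension 1): it contains a complete collection of 1-cubes
# but no complete collection of 2-cubes.
complement = [v for v in product([0, 1], repeat=3) if sum(v) >= 2]
print(has_complete_collection(complement, 3, 1))  # True
print(has_complete_collection(complement, 3, 2))  # False
```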
Given a class $C$ and the projection $P_x$ from the $n$-cube to an $(n-1)$-cube that drops coordinate $x$: the tail of $C$ with respect to $x$ is the subset of $C$ with unique images under $P_x$; the reduction $C^x$ of $C$ is the projection of the subset of $C$ with non-unique images. Welzl W87 (cf. also Kuzmin & Warmuth KW07 ) showed that, for maximum $C$ of VC-dimension $d$, $P_x(C)$ is a maximum class of VC-dimension $d$ while $C^x$ is a maximum class of VC-dimension $d-1$. Moreover, in the complement, $C^x$ corresponds to a collection of cubes which are faces of the cubes which make up $C^c$.
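These objects are simple to compute. A sketch in our own notation (the function `tail_and_reduction` is not from the chapter):

```python
from collections import Counter

def project(c, x):
    """Drop coordinate x from concept c."""
    return c[:x] + c[x + 1:]

def tail_and_reduction(C, x):
    """Split C along coordinate x: the tail is the set of concepts
    whose projected image is unique; the reduction is the set of
    non-unique images, each arising from a pair of concepts of C
    differing only at x."""
    multiplicity = Counter(project(c, x) for c in C)
    tail = [c for c in C if multiplicity[project(c, x)] == 1]
    reduction = sorted({project(c, x) for c in C
                        if multiplicity[project(c, x)] > 1})
    return tail, reduction

# The maximum class {x : |x| <= 1} in the 3-cube, split along
# coordinate 0: the projection is maximum of VC dimension 1 in the
# 2-cube, and the reduction is maximum of VC dimension 0 (a point).
ball = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tail, reduction = tail_and_reduction(ball, 0)
print(tail)       # [(0, 1, 0), (0, 0, 1)]
print(reduction)  # [(0, 0)]
```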
We next review a technique due to Rubinstein & Rubinstein Rubinsteins09 for building all VC maximum classes by starting with a closed-below maximum class and proceeding through a sequence of maximum classes (inverting the process of shifting). Lifting is the process of reconstructing $C$ from the knowledge of the tail and the reduction $C^x$. First, we form a new maximum class in the $n$-cube by using the reduction to form cubes of the form $c' \times \{0,1\}$, where $c'$ is a cube of $C^x$ and the lifted coordinate $x$ takes both values. Now, splitting the projection along the reduction, each connected component of cubes, each with at least one vertex in the reduction, is lifted to either the level $x = 0$ or the level $x = 1$. Lifting all the components in this way always produces a maximum class, and all maximum classes are obtained in this way by a series of lifts starting at the closed-below maximum class.
2.3 The Sample Compression Conjecture
Littlestone and Warmuth’s Sample Compression conjecture predicts that any concept class of VC-dimension $d$ admits a so-called compression scheme of size linear in $d$ LW86 ; W03 .
Definition 5
Let $k$ be a positive integer, $X$ a domain, and $F$ a family of classifiers on $X$. A pair of mappings (a compression function and a reconstruction function) is called a compression scheme for $F$ of size $k$ if they satisfy the following condition for each classifier $f \in F$ and unlabeled sample $x \in X^m$: the compression function maps $x$ labeled by $f$ to a labeled subsequence of length at most $k$, called the representative of $f$; and the reconstruction function maps this representative to a hypothesis that labels each point of $x$ consistently with $f$.
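As a toy instance of the definition (the classical size-one scheme for one-dimensional thresholds—our own illustrative sketch, not any scheme from the chapter):

```python
def compress(labeled_sample):
    """Compression function for threshold concepts on the integers
    (label 1 iff point < t): keep only the largest positive point,
    giving a representative of size at most one."""
    positives = [p for p, y in labeled_sample if y == 1]
    return [(max(positives), 1)] if positives else []

def reconstruct(representative):
    """Reconstruction function: label 1 up to (and including) the
    kept positive point; the all-zero hypothesis if the
    representative is empty."""
    if not representative:
        return lambda p: 0
    i, _ = representative[0]
    return lambda p: 1 if p <= i else 0

# A sample labeled by the threshold t = 3 (points below 3 positive):
sample = [(0, 1), (2, 1), (4, 0), (5, 0)]
h = reconstruct(compress(sample))  # representative is [(2, 1)]
print(all(h(p) == y for p, y in sample))  # True
```

Consistency holds because every sample point at or below the kept positive point must itself be positive, and every point above it was labeled negative.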
Floyd F89 in 1989 showed that all maximum classes of VC dimension $d$ can be compressed with schemes of size $d$. Since then, little progress has been made on compressing richer families of VC classes, although unlabeled compression schemes, relations to teaching, and a number of beautiful related combinatorial results have been developed BDL98 ; KW07 ; RBR08 ; LBS04 ; RR08 ; Rubinsteins09 ; Doliwa2010 ; honest . Since concept classes inherit the compression schemes of larger classes in which they can be embedded, a leading approach to positively establishing the conjecture is to embed (general) VC classes into maximum classes without significantly increasing VC dimension. In particular, it would be sufficient to embed any maximal class of VC dimension $d$ into a maximum class of VC dimension $O(d)$.
3 Bounding the Number of Hypercubes of a VC Class
As discussed, a natural approach to understanding the structure that the VC dimension imposes on a class is via the class’s cubical structure. In this section we focus on counting the cubes of a VC class.
The following was established by Kuzmin & Warmuth KW07 via a different argument. We will apply this result to proving a new characterisation of maximum classes in the next section (Theorem 4.1).
Theorem 3.1
Let integers $n, d, k$ satisfy $0 \le d < n$ and $0 \le k \le n-d-1$. For any maximal concept class $C \subseteq \{0,1\}^n$ of VC-dimension $d$, the number of $k$-cubes contained in $C^c$ is lower bounded by $\sum_{m=k}^{n-d-1} \binom{n}{m}\binom{m}{k}$, and the bound is met with equality iff $C$ is maximum.
To prove this result, we first count the number of cubes in maximum closedbelow classes.
Lemma 2
Let $C$ be a maximum closed-below class of VC-dimension $d$ in the $n$-cube. Then $C$ contains exactly $\sum_{m=k}^{d} \binom{n}{m}\binom{m}{k}$ $k$-cubes, for each $0 \le k \le d$.
Proof
For each $d$, the maximum closed-below class of VC-dimension $d$ is the class consisting of all concepts of norm at most $d$ RBR08 . (In other words, all the concepts are binary strings of length $n$ in the cube with at most $d$ ones.)
For $k = 0$ we must count the number of points in $C$. This is done by simply partitioning the vertices of $C$ into layers, where each layer contains the vertices of the same norm. (In other words, the same number of ones.) At the top layer there are $\binom{n}{d}$ nodes of norm $d$, at the next layer there are $\binom{n}{d-1}$ nodes, etc., down to the bottom layer, which consists of a single vertex of zero norm.
The case $k = 1$ corresponds to the edge-counting argument used in bounding the density of one-inclusion graphs HLW94 ; RBR08 , which is one of the steps in proving Sauer’s Lemma by shifting. By noting that every edge connects a vertex of lower norm to a vertex of higher norm, we may count edges uniquely by considering edges oriented downwards, again partitioning them by the norm of the higher incident vertex. At the top layer each of the $\binom{n}{d}$ vertices identifies $d$ edges, at the next layer each of the $\binom{n}{d-1}$ vertices identifies $d-1$ edges, etc., all the way down to the layer of norm one, where each of the $\binom{n}{1}$ vertices identifies a single edge.
For the general case the argument remains much the same. Now instead of orienting edges away from their top incident vertex, we orient cubes away from their top incident vertex; where each edge is identified by specifying the top and bottom vertices, each $k$-cube is identified by specifying its top vertex and $k$ of that vertex’s lower neighbours in the cube. We again partition the cubes by the layers of their top vertices. The top layer contains $\binom{n}{d}$ vertices, each of which identifies $\binom{d}{k}$ $k$-cubes; the next layer contains $\binom{n}{d-1}$ vertices, each identifying $\binom{d-1}{k}$ cubes; all the way down to the layer of norm $k$, which contains $\binom{n}{k}$ vertices, each identifying $\binom{k}{k} = 1$ cube. ∎
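The layer-by-layer count can be verified by brute force for small parameters. A sketch of ours (names hypothetical):

```python
from itertools import combinations, product
from math import comb

def closed_below(n, d):
    """The maximum closed-below class of VC dimension d in the
    n-cube: all binary strings with at most d ones."""
    return {v for v in product([0, 1], repeat=n) if sum(v) <= d}

def count_cubes(C, n, k):
    """Count k-cubes in C by brute force: a k-cube is a set of k
    free coordinates plus an anchor on the rest, and lies in C when
    all 2^k of its vertices do."""
    total = 0
    for free in combinations(range(n), k):
        fixed = [i for i in range(n) if i not in free]
        for anchor in product([0, 1], repeat=n - k):
            vertices = []
            for bits in product([0, 1], repeat=k):
                v = [0] * n
                for i, b in zip(fixed, anchor):
                    v[i] = b
                for i, b in zip(free, bits):
                    v[i] = b
                vertices.append(tuple(v))
            total += all(v in C for v in vertices)
    return total

# Each k-cube has a top vertex of norm m (k <= m <= d) identifying
# C(m, k) cubes, and there are C(n, m) vertices of norm m.
n, d = 5, 3
C = closed_below(n, d)
for k in range(d + 1):
    layered = sum(comb(n, m) * comb(m, k) for m in range(k, d + 1))
    assert count_cubes(C, n, k) == layered
print("layer count matches brute force for n = 5, d = 3")
```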
We may now prove the main result of this section.
Proof (of Theorem 3.1)
Consider the technique of lifting (as reviewed in Sect. 2.2): it is clear that the lifting process does not change the number of $k$-cubes, for any $k$. And since lifting always creates maximum classes, and all such classes are created by lifting, it follows that all maximum classes of VC-dimension $d$ have the same number of $k$-cubes as the closed-below maximum class of VC-dimension $d$.
The final step is to show that for any maximal class $C$ in the $n$-cube which is not maximum, the complement $C^c$ must have more $k$-cubes than the complement of a maximum class, for all $k$ satisfying $0 \le k \le n-d-1$. This can be established using shifting—the inverse process to lifting, in which all points move along a chosen dimension towards zero provided no existing points block the movement H95 . We know that $C^c$ is a complete union of $(n-d-1)$-cubes, since $C$ is maximal with VC-dimension $d$. It is convenient to shift cubes rather than vertices. Namely, for a chosen coordinate, we can shift a cube of $C^c$ whose anchor contains this coordinate with value one to the value zero. Notice that this type of shifting preserves the number of $(n-d-1)$-cubes but may decrease the number of lower-dimensional cubes. In fact, since by assumption $C$ is not maximum, neither is $C^c$. So during the shifting process the number of vertices must decrease, i.e., two vertices which differ only at the chosen coordinate become identified. But then it is easy to see that the number of $k$-cubes decreases for all $k$ with $0 \le k < n-d-1$, by considering cubes having one or other of these two vertices. This completes the proof. ∎
4 An IteratedReduction Characterisation of Maximum Classes
In this section, we offer another characterisation of maximum classes (cf. Theorem 4.1), which we subsequently use in Sect. 5 to show the existence of projections that strictly reduce deficiency, and again in Sect. 6 to build examples of classes of VC dimension $d$ which cannot be embedded into maximum classes of VC dimension less than $2d$. The characterisation is in terms of iterated reductions.
Definition 6
Consider a complete collection $X$ of $m$-cubes embedded in the $n$-cube, a set $S$ of $m-1$ directions, and the projection of $X$ onto the directions in $S$. Then the iterated reduction $G_S$ of $X$ under this projection is the graph, embedded in the projected cube, with edges the images of the cubes of $X$ whose direction sets contain $S$, nodes the images of the faces with direction set exactly $S$, and with a node incident to an edge when the corresponding face is contained in the corresponding cube.
Figure 1 illustrates the iterated reductions for an example class.
Proposition 1
For every class which is a complete union of cubes, every iterated reduction is a forest.
Proof
Consider an iterated reduction $G_S$ along the set of colours $S$. Assume $G_S$ has a cycle. Project out the coordinates corresponding to the colours in $S$. The cycle in $G_S$ corresponds to a collection of cubes of $X$ which project to edges in the smaller binary cube. Hence there are two such edges of the same colour which come from different cubes with the same colour sets. This is a contradiction, since a complete collection contains only one cube per choice of colours. ∎
Theorem 4.1
A complete union of cubes in the cube is a maximum class if and only if all the iterated reductions are trees, i.e., are connected.
Proof
Firstly, if $X$ is a maximum class, then any reduction is maximum W87 ; KW07 . Now $G_S$ can be viewed as the result of taking reductions $m-1$ times, so it is a maximum class of VC-dimension one, i.e., a tree, proving the necessity of connectedness.
For the converse, we note that a tree has Euler characteristic one, whereas a forest has Euler characteristic given by the number of trees in the forest (cf. e.g., Trudeau94 ). Therefore if all the iterated reductions are trees, the sum of all their Euler characteristics is the number of iterated reductions, which is clearly $\binom{n}{m-1}$, since this is the number of ways of choosing a set $S$ of $m-1$ coordinate directions. The Euler characteristic is defined as the number of vertices minus the number of edges of a graph; for the collection of iterated reductions, counting up all the edges gives $m$ times the number of cubes in a complete collection, which is $m\binom{n}{m}$, since each $m$-cube is counted $m$ times, once for each $(m-1)$-subset of its directions. The total number of vertices in the trees is the number of $(m-1)$-cubes in $X$. Writing $N_{m-1}$ for the latter, we conclude that
$N_{m-1} - m\binom{n}{m} = \binom{n}{m-1}$
if all the iterated reductions are trees. Consequently, this can be rewritten as
$N_{m-1} = \binom{n}{m-1} + m\binom{n}{m},$
which is the expression for the number of $(m-1)$-cubes of a maximum class in the $n$-cube by Theorem 3.1. So applying that theorem, we conclude that if all the iterated reductions of a class are trees, then the class is maximum. ∎
Note that the graph $G_S$ depends on the choice of the cubical structure of the class. So if a class has different cubical structures, it yields different iterated reductions. The following minor, but novel, result proves that maximum classes have unique iterated reductions.
Lemma 3
Any class containing two $k$-cubes with the same set of $k$ colours has VC dimension at least $k+1$.
Proof
Form a set of $k+1$ colours by taking the colours of the cubes together with any anchor colour on which the two cubes differ. Trivially this set is shattered. ∎
Corollary 1
Let $C$ be a maximum class. Then $C$ has a unique representation as a complete collection of cubes.
Remark 1
We note that the set of iterated reductions can be integrated into one structure, known as the face graph in computational geometry. The face graph $F$ for a complete collection $X$ of $m$-cubes is a bipartite graph with a vertex for each $m$-cube and each $(m-1)$-cube of $X$. $F$ has an edge between the vertices associated to an $m$-cube and an $(m-1)$-cube whenever the latter is a face belonging to the former. For any set $S$ of $m-1$ directions, define the induced subgraph $F_S$ of $F$ consisting of all vertices and edges corresponding to cubes whose directions contain $S$. Then $F_S$ corresponds to the iterated reduction for directions $S$, subdivided to be made bipartite.
5 Deficiency and Embedding VC Classes into Maximum Classes
Our main result in this section is the following:
Theorem 5.1
Suppose $C$ is a concept class of VC dimension $d$ with deficiency $D$. Then there is an embedding of $C$ into a maximum class of VC dimension at most $d + D$.
The proof of this will follow immediately from two preliminary results, which are of independent interest.
Proposition 2
Suppose $C$ is a concept class of VC dimension $d$ in the binary $n$-cube and, for some $k$, there is a coordinate projection $P$ onto the binary $(n-k)$-cube so that $P(C)$ is maximum. Then there is a maximum class $M$ of VC dimension $d + k$ so that $C \subseteq M$.
Proof
The argument is by induction on $k$. Assume first that $k = 1$. Since $P(C)$ is maximum, it follows that the complementary class is also maximum by RBR08 . Consider the inverse image of this complementary class under $P$. This has the structure of a product with $\{0,1\}$. We observe that there are embeddings of maximum classes of the appropriate VC dimension in this product. For by the tail–reduction procedure of KW07 , we can find a maximum VC class of one lower dimension embedded in the complementary maximum class, as a union of faces of codimension one of its cubes. By lifting Rubinsteins09 , we can then find many embeddings of maximum VC classes in the product. But then the complement of any such class is a maximum VC class in the binary $n$-cube containing $C$. This completes the first step of the induction argument.
Now assume the result is correct for $k-1$. Let $P$ be a projection onto the binary $(n-k)$-cube and $C$ a VC concept class in the binary $n$-cube, so that $P(C)$ is maximum of VC dimension $d$. We factorise $P$ into the composition of projections $P = P'' \circ P'$, where $P'$ projects onto the binary $(n-1)$-cube and $P''$ projects from there onto the binary $(n-k)$-cube. Apply the induction step to the projection $P''$ and the class $P'(C)$. Since $C$ has VC dimension $d$, clearly the same is true for $P'(C)$. We conclude that $P'(C)$ is contained in a maximum class of VC dimension $d + k - 1$.
To complete the proof, we follow the same approach as for the case $k = 1$, applied to the image of the complementary maximum class in the binary $(n-1)$-cube. Namely, by lifting, we can find suitable maximum classes in the inverse image under $P'$. The complement of such a class will then be a maximum class in the binary $n$-cube containing $C$ of VC-dimension $d + k$, as required. ∎
Proposition 3
Suppose $C$ is a concept class of VC dimension $d$ which is not maximum. Then there is a coordinate projection $P_x$ so that $P_x(C)$ has VC dimension $d$ and deficiency strictly less than the deficiency of $C$.
Proof
Firstly, since $C$ has VC dimension $d$, there is a set $I$ of $d$ coordinates shattered by $C$. Therefore for any coordinate $x \notin I$, the corresponding projection $P_x$ from the binary $n$-cube to the binary $(n-1)$-cube maps $C$ onto a class of the same VC dimension. The idea is to prove that for one such direction $x$, the deficiency of $P_x(C)$ is strictly less than that of $C$.
As in KW07 we consider the tail/reduction of the projection $P_x$ applied to $C$. We consider the image $P_x(C)$ and the reduction $C^x$—the subset of the binary $(n-1)$-cube consisting of all vertices whose preimage under $P_x$ is a pair of concepts of $C$ differing only at $x$. We claim that either the deficiency of $P_x(C)$ is strictly less than the deficiency of $C$, or the reduction $C^x$ is a maximum class of VC dimension $d-1$.
To prove the claim, note that the cardinalities are related by $|C| = |P_x(C)| + |C^x|$. On the other hand, the deficiencies satisfy $D(C) = \sum_{i=0}^{d}\binom{n}{i} - |C|$ and $D(P_x(C)) = \sum_{i=0}^{d}\binom{n-1}{i} - |P_x(C)|$ respectively. Hence we see that $D(C) - D(P_x(C)) = \sum_{i=0}^{d-1}\binom{n-1}{i} - |C^x|$. But the binomial sum is precisely the cardinality of a maximum class of VC dimension $d-1$ in the binary $(n-1)$-cube, and hence the difference is positive unless $C^x$ is maximum, by Sauer’s Lemma, since clearly the VC dimension of $C^x$ is at most $d-1$. This establishes the claim.
We can now conclude that either the proposition follows, or for each direction $x \notin I$ the corresponding projection has reduction $C^x$ which is maximum of VC dimension $d-1$. In the latter case, consider an iterated reduction $G_S$ as in Theorem 4.1. It is easy to see that $G_S$ is isomorphic as a graph to an iterated reduction coming from a reduction class $C^x$, so long as some $x \in S$ is not in $I$. For then we can take the iterated reduction of $C^x$ corresponding to the set of directions $S \setminus \{x\}$, and it follows immediately that the two graphs are isomorphic. But then since $C^x$ is maximum, the corresponding iterated reduction is a tree. This shows that all iterated reductions $G_S$ are trees, so long as $S$ is not contained in $I$.
To complete the proof, we need to deal with the iterated reductions $G_S$ where $S \subseteq I$, the initial set of directions which $C$ shatters. But since all the reductions $C^x$ with $x \notin I$ are assumed maximum, we see that $C$ shatters all sets of $d$ directions of which such an $x$ is one. To see this, note that $C^x$ maximum means that it is a complete union of $(d-1)$-cubes, and multiplying by $\{0,1\}$ in direction $x$ gives a set of $d$-cubes covering all sets of $d$ directions containing $x$. It is now easy to find new sets of directions shattered by $C$ which do not contain any chosen set of directions. So the previous argument applies to show that either there is a direction $x$ so that the projection $P_x$ reduces the deficiency of $C$, or all possible iterated reductions are trees. In the latter case, $C$ is a maximum class by Theorem 4.1 and the proof is complete. ∎
Proof (of Theorem 5.1)
Assume that $C$ is a class of VC dimension $d$ in the binary $n$-cube with deficiency $D$. By repeated applications of Proposition 3, we can reduce the deficiency of $C$ to zero, and hence get a maximum class as image, after at most $D$ projections along single directions. But then by Proposition 2, this implies that there is an embedding of $C$ into a maximum class of VC dimension at most $d + D$. ∎
6 An Application to Non-embeddability
In this section, we give examples of concept classes of VC-dimension $d$ which cannot be embedded in any maximum class of VC-dimension less than $2d$. Moreover we exhibit maximum classes of VC-dimension $2d$ which contain each of our classes. This negative result improves the previously known examples Rubinsteins09 , where it was shown that there is no constant $c$ such that any class of VC-dimension $d$ can be embedded in a maximum class of VC-dimension $d + c$.
Theorem 6.1
There are classes $C$ of VC-dimension $d$ in the binary $n$-cube, for suitable pairs $(n, d)$ with $d$ even, with the following properties:

– There is no maximum class of VC-dimension at most $2d - 1$ in the binary $n$-cube containing $C$.

– There is a maximum class $M$ of VC-dimension $2d$ containing $C$, and $M$ can be taken to be a closed-below maximum class, for a suitable choice of origin of the binary cube.
Proof
The proof proceeds by a number of steps.
Construction of $C$. Partition the coordinates of the binary $n$-cube into two sets of equal, or nearly equal, size. (In fact, any roughly equal sizes will work for the construction.) We first describe the complement of $C$: it is a complete union of cubes whose anchors are strings of length $d+1$, with the property that each anchor string is either all zeros or all ones. The former is chosen if the majority of the anchor coordinates lie in the first set, and the latter if the majority lie in the second set. (Having $d$ even means that the anchors are of odd length, so we do not need tie-breaking.)
Computing VC Dimension. It is immediate that the VC dimension of $C$ is at most $d$. We claim that the VC dimension cannot be less than $d$. If it were, there would be a complete collection of cubes with anchors of length $d$ in the complementary class $C^c$. We show that this leads to a contradiction. Suppose that $Q$ is such a cube embedded in $C^c$, with anchor of length $d$. Assume $Q$ is chosen so that exactly $d/2$ elements of its anchor lie in the first set and $d/2$ in the second. Consider an element $v$ of $Q$ which has value one at all the coordinates in the first set but not in the anchor of $Q$, and value zero at all the coordinates in the second set but not in the anchor. As $v \in C^c$, it follows that $v$ is in one of the cubes defining $C^c$. That cube must have an anchor either consisting of zeros with the majority of the anchor coordinates in the first set, or consisting of ones with the majority in the second set. But in both cases, there would be at least $d/2 + 1$ coordinates of $v$ in the first set which are zero, or in the second set which are one, respectively, and these must lie in the anchor of $Q$, which meets each set in only $d/2$ coordinates. This gives a contradiction, and we conclude that no such cube is contained in $C^c$; hence the VC dimension of $C$ is $d$.
Decomposing the Complementary Class. Divide $C^c$ into two collections of cubes: those with anchors all zero and those with anchors all one. We abuse notation by using the same symbol for a collection of cubes and for the union of the elements of these cubes. Note that a pair of cubes, one from each of the two collections, either will be disjoint or will intersect in a cube anchored on $2d+2$ coordinates, depending on whether their anchors have coordinates in common or not. In particular, the intersection of the two collections is a union of cubes with anchors consisting of $d+1$ zeros and $d+1$ ones. No two of these cubes have anchors with exactly the same sets of coordinates, so the intersection is a subcollection of a complete collection of cubes (cf. Fig. 2).
We claim the intersection contains no cubes of one dimension higher, i.e., with anchors of length $2d+1$. Recall that any vertex in the intersection belongs to a cube with anchor consisting of $d+1$ zeros and $d+1$ ones, and so has at least $d+1$ zero coordinates and at least $d+1$ one coordinates. But any cube with anchor of length $2d+1$ must contain vertices which are not of this form: such an anchor has at most $d$ zeros or at most $d$ ones, so setting all the free coordinates to one, or to zero respectively, produces a vertex with at most $d$ zeros or at most $d$ ones. This proves the claim.
Non-embeddability into Maximum Classes. We claim that no maximum class of VC dimension at least $n - 2d$ can be contained in $C^c$. Taking complements, this shows that the original class $C$ cannot be contained in any maximum class of VC dimension at most $2d - 1$. By RBR08 , such a maximum class inside $C^c$ is a complete union of cubes. We can assume without loss of generality that it has VC-dimension exactly $n - 2d$, since it is well known that any maximum class contains maximum classes of all smaller VC dimensions; its cubes then have anchors of length $2d$. The key step is to show that any such cube is contained either in the all-zero collection or in the all-one collection of cubes of $C^c$. Once this is shown, it is easy to deduce a contradiction to the assumption that the class is maximum. For if we consider any iterated reduction of the class, as in the previous section, not all of its cubes can lie in, say, the all-zero collection. Hence some lie in the all-zero collection and some in the all-one collection. But such cubes can only meet in the intersection of the two collections, which is a union of cubes with anchors of length $2d+2$; moreover, we have previously shown that the intersection contains no cubes with anchors of length $2d+1$. Consequently, the assumption that these cubes meet along faces of codimension one in a tree structure for the iterated reduction is contradicted.
Consider an cube of . Now the anchor has digits. Clearly the anchor can have at least zeros or at least ones but not both. So without loss of generality, assume the anchor of has at least zeros. If the majority of the coordinates corresponding to these zeros are in , then we see that as required. Therefore it suffices to suppose that this is not the case, i.e., the majority of the coordinates corresponding to the zeros in the anchor of are in . But then we get a contradiction, because has vertices where all the coordinate entries outside the anchor which are in are all one and all those in are zero. For such a vertex clearly does not belong to . We conclude that must be in as claimed and the construction is complete.
Embedding into Maximum Classes. To show there is a maximum class of VC dimension in , define the complete collection of cubes of to have anchors with entries zero for coordinates in and one for coordinates in . It is easy to see that all these cubes are indeed in : since the anchors are of length , there must be either at least coordinates in or coordinates in . Hence . To see that is maximum, flip all the coordinates in , interchanging zero and one. Then it follows immediately that is actually a closed-below maximum class. ∎
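The canonical closed-below class invoked here is easy to realise concretely. As a small illustration (our own helper, with parameters n and d standing for the cube dimension and VC dimension), the downward-closed class of all binary vectors with at most d ones meets the Sauer bound and shatters every coordinate set of size at most d:

```python
from itertools import product

def closed_below(n, d):
    """The canonical closed-below class: all binary n-vectors with <= d ones."""
    return {v for v in product([0, 1], repeat=n) if sum(v) <= d}
```

For n = 5 and d = 2 the class has C(5,0) + C(5,1) + C(5,2) = 16 elements, the Sauer bound for VC dimension 2, and any pair of coordinates is shattered.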
7 Embedding of VC Classes into VC Maximum Classes
In this section we develop an algorithm that, given a VC class and a desired positive integer , builds a maximum class containing if one exists. We start by enlarging such that is a complete union of cubes. Our aim is to find a complete union of cubes inside . The complement is the required VC maximum class containing .
Algorithm 1 aims to produce all maximum classes containing . The output of the algorithm is this set, which is empty if no such classes exist. The strategy, working in the complement as usual, proceeds iteratively from the canonical closed-below maximum class. At each iteration the next dimension in is considered: components of the maximum classes from the previous iteration are lifted along the chosen dimension so as eventually to be contained within . In particular, we consider embedding in the dimensions processed so far: we check whether the lifted connected component, projected onto these dimensions, is contained in similarly projected. If a choice along the current dimension achieves containment then the class is retained; if both choices are feasible then the class is cloned, with siblings making each choice; if neither choice is possible then the maximum class is discarded.
Essentially the process is one of lifting to build arbitrary maximum classes as developed by Rubinstein & Rubinstein Rubinsteins09 —recall that a complete collection is lifted by arbitrarily setting the ‘height’ of components of cubes that are connected without crossing the reduction (cf. Sect. 2). The difference is that we iteratively filter out intermediate maximum classes as soon as it is clear they cannot be embedded in .
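The control flow of Algorithm 1 — retain, clone, or discard at each dimension — can be sketched as a generic branch-and-filter loop. In the sketch below, `lift` and `embeds` are placeholder callables standing in for the geometric lifting and projection-containment test described above; they are not implementations of those operations.

```python
def branch_and_filter(initial, dims, lift, embeds):
    """Clone-and-filter skeleton: at each dimension try both lift choices,
    clone a candidate when both succeed, discard it when neither does."""
    candidates = [initial]
    for d in dims:
        nxt = []
        for c in candidates:
            for choice in (0, 1):
                lifted = lift(c, d, choice)
                if embeds(lifted, d):  # containment check on processed dims
                    nxt.append(lifted)
        candidates = nxt  # empty list here means no embedding exists
    return candidates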
Proposition 4
For any VC class in the cube, and any , Algorithm 1 returns the set of all maximum classes in the cube containing .
Proof
The result follows from three facts: the maximum property is invariant under lifting; lifting constructs all maximum classes of a given dimension Rubinsteins09 ; and the algorithm filters out exactly the non-embeddable classes, since subsequent liftings do not alter the containment property established at earlier iterations. ∎
8 VC Classes
We study VC classes embedded in the binary cube, for . We will prove some results on embedding of these VC classes into maximum classes and also on the deficiency of maximal VC classes. Our choices of in this section yield the simplest “complete picture” for VC classes for which embedding (and compression) is nontrivial, and as such serve as useful tests for the tools developed above. In particular, we calculate the maximin VC dimension of the maximum classes in which maximal classes are embeddable, as summarised in Table 1.
Table 1: maximin maximum-embeddable VC dimension by cube.

4-cube: 3
5-cube: 4
6-cube: 4
Case: the 4-cube. We first classify maximal VC classes in the binary cube and prove these have deficiency . As a corollary it follows that these classes project to maximum VC classes in the binary cube.
The argument is straightforward. The complement of a maximal VC class is a complete union of cubes, i.e., edges in the binary cube. Note that such a complete union is maximum if and only if it is a tree. In this case too is maximum, and so we are not interested in this (trivial) case. Consider then a forest, with four edges. There are two possibilities: one is that there are two components of size and the other is that there are two components, each of size . (We will verify that having three or more components is not possible.) Notice that the components of this forest must be distance at least two apart. Since the diameter of the binary cube is , it is easy to check that there cannot be three or more components, and that the two components are either a tree with a vertex of degree and a single edge, or two trees with two edges each. It is then straightforward to verify that, up to symmetry of the 4-cube, there is precisely one forest of each type. Hence there are precisely two maximal VC classes in the binary cube, and both have deficiency . The latter holds since the forests both have one more vertex than a tree, corresponding to the complement of a maximum class. This completes the discussion of the 4-cube.
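The tree-versus-forest distinction driving this classification is mechanical to check. A minimal union-find sketch (vertex labels and the sample edge sets below are illustrative only, not the actual maximal classes of the text):

```python
def components(vertices, edges):
    """Number of connected components of a graph, via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def is_tree(vertices, edges):
    """A graph is a tree iff it is connected with |E| = |V| - 1."""
    return components(vertices, edges) == 1 and len(edges) == len(vertices) - 1
```

Applied to a union of edges (1-cubes) of the 4-cube, this distinguishes the tree case from the two-component forests discussed above.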
Case: the 5-cube. In the binary cube, there is a large number of possibilities for a maximal VC class. However, by our argument in the non-embeddability section, it follows that there are VC classes which do not embed in VC maximum classes in the binary cube. Since a maximum VC class is obtained by removing a single vertex from the binary cube, it follows immediately that every VC class embeds in a maximum VC class. But this is clearly a trivial result.
Case: the 6-cube. Finally, let us examine the more interesting case of VC classes in the binary cube. We claim there is a simple argument that these all embed in maximum VC classes. The idea is, as usual, to study the complementary class . We can assume this is a complete union of cubes, by enlarging if necessary without increasing its VC dimension. Consider two such cubes with anchors at disjoint sets of coordinates . Note that contains the vertex with coordinate values at (respectively ) given by the anchor of (respectively ). Hence there is a tree embedded in consisting of six edges, one of each coordinate type, with three all sharing and three all containing . But then the complementary class is a maximum class of VC dimension containing . (In fact it is easy to see that if the coordinates of the binary cube are flipped so that all the coordinates of are , then is actually closed-below maximum.)
9 Boolean Functions
Our aim in this section is to consider special VC classes corresponding to Boolean functions and study their associated maximum classes. In Ehrenfeucht et al. EHKV89 and Procaccia & Rosenschein monotone , the learnability of examples of such classes is considered by way of computing VC dimensions. We will show that there are interesting connections between natural classes of Boolean functions and maximum classes, hence yielding information about compression schemes for such classes. We begin with symmetric functions, showing the class can be enlarged to a maximum class of the same VC dimension. We then show that, using a suitable basis of monomials, classes of Boolean functions can be formed by sums, giving maximum classes of arbitrary VC dimension.
9.1 Symmetric Functions
Definition 7
A function is symmetric if it has the same value when its coordinates are permuted.
We study the class of symmetric functions where is the binary cube . Each symmetric function is associated to the mapping given by where is a binary vector. Clearly a symmetric function is completely determined by the number of coordinates with value which are in vectors mapped to .
We introduce some notation to assist the discussion. Coordinates in will be the monomials . Here the variable indicates a in the location of a binary vector. We divide the coordinates into classes so that each class consists of all monomials of the same degree (matching the class index). Then a symmetric function has the same value on all monomials in each class . There are therefore degrees of freedom of functions in .
We prove the following result, due to Ehrenfeucht et al. EHKV89 , via a novel argument that leverages the class’s natural structure under the above partitioned-monomial basis.
Lemma 4
The VC dimension of is .
Proof
Using our basis of partitioned monomials, it is easy to see that the VC dimension of is at least . For we can choose symmetric functions which evaluate independently on each of our classes of monomials. Hence we see that shatters a set of coordinates, so long as there is one coordinate from each class in . On the other hand, it is also easy to see that there is no shattering of an set. For if we choose any collection of coordinates, then two of them have to be in the same class . Hence no element of distinguishes these two coordinates, so shattering does not occur. This establishes that the VC dimension of is exactly . ∎
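For a small case this lemma can be confirmed by brute force. The sketch below (our own construction and naming) enumerates the symmetric Boolean functions on {0,1}^3 via their value profiles over Hamming weights; a direct search then finds VC dimension 4 = 3 + 1 for this case, and confirms the 2^(n+1) count of symmetric functions reflecting the degrees of freedom noted above.

```python
from itertools import combinations, product

def symmetric_class(n):
    """All symmetric Boolean functions on {0,1}^n, each determined by the
    value it takes on each of the n+1 possible Hamming weights."""
    points = sorted(product([0, 1], repeat=n))
    cls = {tuple(profile[sum(p)] for p in points)
           for profile in product([0, 1], repeat=n + 1)}
    return cls

def vc_dimension(cls):
    """Brute-force VC dimension over the coordinates of the ambient cube."""
    m = len(next(iter(cls)))
    dim = 0
    for k in range(1, m + 1):
        shattered = any(
            len({tuple(c[i] for i in s) for c in cls}) == 2 ** k
            for s in combinations(range(m), k))
        if not shattered:
            break
        dim = k
    return dim
```

Any two domain points of equal weight receive equal values under every symmetric function, which is exactly why no set larger than the number of weight classes can be shattered.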
Next, consider the collection of cubes in the complement of . We trivially have the following.
Lemma 5
The complement contains a complete collection of cubes with anchors having coordinates with at least two falling in the same class with differing values.
Finally we establish the following novel result on the maximum-embedding of the class of symmetric Boolean functions.
Proposition 5
There exists a maximum class of VC dimension containing .
Proof
Choose an ordering of the monomial coordinates of consistent with their degrees. So if a monomial has larger degree than a monomial then in the ordering.
The complement of is a complete collection of cubes with anchors of length . We describe the set of anchors of these cubes.
Each anchor has coordinates set equal to and a single coordinate equal to . The special coordinate is defined as follows.
For every anchor, there must be at least two anchor coordinates in the same class . Choose the first coordinate in the ordering in for any , where there is a second anchor coordinate in , and put the value of equal to . This gives anchors of a complete collection of cubes.
To show that is a maximum class, we study its iterated reductions. This involves a number of cases.
Case 1. Consider an iterated reduction of , along a set of coordinates. Let denote the complementary set of coordinates. In the first case, there are two coordinates in , where the first class with more than one coordinate of in the ordering . Then there must be at least two coordinates in for , and is the next class in the ordering containing more than one coordinate of . Each anchor for a cube in the iterated reduction along has coordinates, forming a set leaving out precisely one element of . There are two possibilities. The first is that the missing coordinate is not in . It is easy to see that all such cubes overlap in pairs in codimension-one faces. So it remains to consider what happens for the remaining cubes. Clearly there are two such cubes, say . Both have a in the single remaining coordinate in . Assume that the coordinate of in occurs before the coordinate of in in the ordering. also have a in , since this now becomes the first class in the ordering with multiple anchor coordinates for the cubes. It is not difficult to see that has a codimension-one face in common with a cube of . Moreover, has a codimension-one face in common with . Hence it follows that the iterated reduction is a tree.
Case 2. Suppose that there are at least three coordinates of in the first class in the ordering with more than one coordinate of in . It is not difficult again to enumerate cases and see that the cubes with anchors obtained from , by leaving out one of the coordinates of , have codimension-one faces in common. Finally, if we leave out one of the remaining coordinates of , it is obvious that these cubes meet in pairs of codimension-one faces. Moreover, it is easy to find a cube from the first family and one from the second which have a codimension-one face in common. So this completes the argument that is maximum and hence embeds in , which is maximum of VC dimension . ∎
9.2 A Method for Generating Maximum Boolean Function Classes
We next provide a method to generate interesting collections of Boolean functions which form maximum classes. We start with degree monomials in the binary cube. These are expressions of the form where each is either or . We wish to find a collection of Boolean functions, which is a maximum class of VC dimension in the binary cube. We begin with a generating set for . This is an ordered set given by sums of distinct monomials, denoted :
- is any single monomial; and
- each subsequent has a unique representation as the sum of a single monomial and for some .
The following is easy to verify.
Lemma 6
The set is a maximum class of VC dimension 1 in the cube, where is a generating set and is the zero Boolean function.
We may now build by taking all sums of zero up to distinct elements from the set . It follows that is maximum.
Proposition 6
is maximum of VC dimension .
Proof
First, it is clear that the cardinality of is . For if two sums were equal, then by Boolean addition we would obtain that a nontrivial sum is the zero function. But this is clearly impossible by our choice of the generating set as linearly independent functions over . So if we can prove that has VC dimension at most , by Sauer’s Lemma it follows that is maximum.
Consider the projection of to a cube. Notice that the projection of the generating set for is a maximum VC class in this cube. Hence the projection of consists of all sums of up to elements of . But a maximum VC class containing the origin is easily seen to give a basis for a binary cube considered as a vector space. Hence in the binary cube, the collection of all sums of up to elements from clearly does not contain the element . Hence this shows the projection of to any cube is not onto and so is maximum as claimed. ∎
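The cardinality step of this proof is easy to check mechanically: representing Boolean functions as truth-table bitmasks, Boolean sum is XOR, and any linearly independent generating set yields pairwise-distinct sums, so the number of sums of at most d generators meets the Sauer bound. The generators below are illustrative bitmasks, not the monomial generators of the text.

```python
from itertools import combinations

def sums_up_to(generators, d):
    """All XOR-sums of at most d distinct generators (truth tables as ints)."""
    sums = set()
    for k in range(d + 1):
        for subset in combinations(generators, k):
            acc = 0
            for g in subset:
                acc ^= g  # Boolean (mod-2) addition of truth tables
            sums.add(acc)
    return sums
```

With four linearly independent generators and d = 2 this produces C(4,0) + C(4,1) + C(4,2) = 11 distinct functions, the Sauer bound for VC dimension 2 in the 4-cube; distinctness follows since equal sums would force a nontrivial subset to XOR to zero.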
10 Conclusion
This chapter makes two main contributions. The first is a simple scheme to embed any VC class into a maximum class of VC dimension , where is the deficiency. Therefore, for a collection of VC classes in binary cubes with increasing, so long as the deficiency of the classes is bounded independently of , the resulting compression scheme from embedding into VC maximum classes satisfies the Sample Compression conjecture of Littlestone & Warmuth. This focusses attention on maximal VC classes, where the deficiency grows with the dimension of the binary cube.
Our second main contribution is a negative embeddability result, placing a fundamental limit on the leading approach to resolving the Sample Compression conjecture—an approach that requires the embedding of general VC classes into maximum classes. We exhibit VC classes that can be embedded into VC maximum classes but not into any VC maximum class.
We developed our negative result as an application of a generalised Sauer’s Lemma, proved first by Kuzmin & Warmuth KW07 , which extends the bound from the number of points in a concept class to the number of hypercubes of each dimension, from edges up to higher-dimensional faces. We also offer a novel proof of this result building on recent geometric characterisations of such classes as cubical complexes Rubinsteins09 .
We believe that our negative examples may be close to worst possible. We offer a new iterated-reduction characterisation that provides a practical approach to testing whether a union of cubes is maximum; and we develop an algorithm for building all maximum embeddings of a given VC class. It is our hope that these three new tools may help in embedding all VC classes into maximum classes of dimension but at least . As a first step we demonstrate their application to VC classes in the 4-, 5-, and 6-cubes, and also consider maximum embeddings of classes of Boolean functions.
Bibliography
 (1) Abraham, I., Delling, D., Fiat, A., Goldberg, A.V., Werneck, R.F.: VC-dimension and shortest path algorithms. In: ICALP’11, pp. 690–699 (2011)
 (2) Angluin, D.: Computational learning theory: survey and selected bibliography. In: STOC’92, pp. 351–369 (1992)
 (3) Anthony, M., Bartlett, P.L.: Neural Network Learning: Theoretical Foundations. Cambridge University Press (1999)
 (4) Ben-David, S., Litman, A.: Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes. Discrete Applied Mathematics 86(1), 3–25 (1998)
 (5) Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.: Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM 36(4), 929–965 (1989)
 (6) Brönnimann, H., Goodrich, M.T.: Almost optimal set covers in finite VCdimension. Discrete and Computational Geometry 14(1), 463–479 (1995)
 (7) Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer-Verlag (1996)
 (8) Doliwa, T., Simon, H.U., Zilles, S.: Recursive teaching dimension, learning complexity, and maximum classes. In: ALT’10, pp. 209–223 (2010)
 (9) Ehrenfeucht, A., Haussler, D., Kearns, M.J., Valiant, L.G.: A general lower bound on the number of examples needed for learning. Information and Computation 82(3), 247–261 (1989)
 (10) Floyd, S.: Space-bounded learning and the Vapnik-Chervonenkis dimension. Technical Report TR89061, ICSI, UC Berkeley (1989)
 (11) Guruswami, V., Hastad, J., Kopparty, S.: On the list-decodability of random linear codes. In: STOC’10, pp. 409–416 (2010)
 (12) Haussler, D.: Probably approximately correct learning. In: AAAI’90, pp. 1101–1108 (1990)
 (13) Haussler, D.: Sphere packing numbers for subsets of the Boolean cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory, Series A 69, 217–232 (1995)
 (14) Haussler, D., Littlestone, N., Warmuth, M.: Predicting functions on randomly drawn points. Information and Computation 115(2), 284–293 (1994)
 (15) Haussler, D., Welzl, E.: Epsilon-nets and simplex range queries. In: SoCG’86, pp. 61–71 (1986)
 (16) Kleinberg, J.M.: Two algorithms for nearest-neighbor search in high dimensions. In: STOC’97, pp. 599–608 (1997)
 (17) Kuzmin, D., Warmuth, M.: Unlabeled compression schemes for maximum classes. Journal of Machine Learning Research 8(Sep), 2047–2081 (2007)
 (18) Langford, J.: Tutorial on practical prediction theory for classification. Journal of Machine Learning Research 6(Mar), 273–306 (2005)
 (19) Littlestone, N., Warmuth, M.: Relating data compression and learnability (1986). Unpublished manuscript http://www.cse.ucsc.edu/~manfred/pubs/lrnkolivier.pdf
 (20) Livni, R., Simon, P.: Honest compressions and their application to compression schemes. In: COLT’13 (2013)
 (21) von Luxburg, U., Bousquet, O., Schölkopf, B.: A compression approach to support vector model selection. Journal of Machine Learning Research 5, 293–323 (2004)
 (22) Matoušek, J.: Geometric range searching. ACM Computing Surveys 26(4), 421–461 (1994)
 (23) Procaccia, A.D., Rosenschein, J.S.: Exact VC dimension of monotone formulas. Neural Information Processing - Letters and Reviews 10(7), 165–168 (2006)
 (24) Rubinstein, B.I.P., Bartlett, P.L., Rubinstein, J.H.: Shifting: one-inclusion mistake bounds and sample compression. Journal of Computer and System Sciences: Special Issue on Learning Theory 2006 75(1), 37–59 (2009)
 (25) Rubinstein, B.I.P., Rubinstein, J.H.: Geometric & topological representations of maximum classes with applications to sample compression. In: COLT’08, pp. 299–310 (2008)
 (26) Rubinstein, B.I.P., Rubinstein, J.H.: A geometric approach to sample compression. Journal of Machine Learning Research 13(Apr), 1221–1261 (2012)
 (27) Sauer, N.: On the density of families of sets. Journal of Combinatorial Theory, Series A 13, 145–147 (1972)
 (28) Shelah, S.: A combinatorial problem; stability and order for models and theories in infinitary languages. Pacific Journal of Mathematics 41(1), 247–261 (1972)
 (29) Trudeau, R.J.: Introduction to Graph Theory. Dover (1994)
 (30) van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer (1996)
 (31) Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16(2), 264–280 (1971)
 (32) Warmuth, M.K.: Compressing to VC dimension many points. In: COLT’03 (2003)
 (33) Welzl, E.: Complete range spaces (1987). Unpublished notes
 (34) Welzl, E., Wöginger, G.: On Vapnik-Chervonenkis dimension one (1987). Unpublished notes