In an approval election bra-fis:b:approval-voting, each voter indicates which candidates he or she finds acceptable for a certain task (e.g., to be a president, to join the parliament, or to enter the final round of a competition), and a voting rule aggregates these ballots into the final outcome. In the single-winner setting (e.g., when choosing a president), the most popular rule is to pick the candidate with the highest number of approvals. In the multiwinner setting (e.g., in parliamentary elections or when choosing finalists in a competition), there is a rich spectrum of rules to choose from, each with different properties and advantages. Approval voting is particularly attractive due to its simplicity and the low cognitive load it imposes on the voters. Indeed, its practical applicability has already been tested in a number of field experiments, including those in France las-str:j:approval-experiment; bau-ige:b:french-approval-voting; bou-bla-bau-dur-ige-lan-lar-las-leb-mer:t:french-approval-voting-2017 and Germany alo-gra:j:german-approval-voting. In recent years, there has also been tremendous progress regarding its theoretical properties (see, e.g., the overviews of las-san:chapter:approval-multiwinner and lac:sko:t:approval-survey).
In spite of all these achievements, numerical experiments regarding approval voting are still challenging to design. One of the main difficulties is caused by the lack of consensus on which statistical election models to use. Below we list a few models (i.e., statistical cultures) that were recently used:
In the impartial culture setting, we assume that each vote is equally likely. Taken literally, this means that each voter approves each candidate with probability 1/2 bar-lan-yok:c:hamming-approval-manipulation. As this is quite unrealistic, several authors treat the approval probability as a parameter bre-fal-kac-nie2019:experimental_ejr; fal-sli-tal:c:vnw or require that all voters approve the same (small) number of candidates lac-sko:j:av-vs-cc. A further refinement is to choose an individual approval probability for each candidate lac-mal:c:vnw-shortlisting.
In Euclidean models, each candidate and voter is a point in ℝ^d, where d is a parameter, and a voter approves a candidate if their points are sufficiently close. Such models are used, e.g., by bre-fal-kac-nie2019:experimental_ejr and god-bat-sko-fal:c:2d. Naturally, the distribution of the candidate and voter points strongly affects the outcomes.
Some authors consider statistical cultures designed for the ordinal setting (where the voters rank the candidates from the most to the least desirable one) and let the voters approve some top-ranked candidates (e.g., a fixed number of them). This approach is taken, e.g., by lac-sko:j:av-vs-cc on top of the ordinal Mallows model (later on, caragiannis2022evaluating provided approval-based analogues of the Mallows model).
Furthermore, even if two papers use the same model, they often choose its parameters differently. Since it is not clear how the parameters affect the models, comparing the results from different papers is not easy. Our goal is to initiate a systematic study of approval-based statistical cultures and attempt to rectify at least some of these issues. We do so by extending the map-of-elections framework of szu-fal-sko-sli-tal:c:map and boe-bre-fal-nie-szu:c:compass to the approval setting.
Briefly put, a map-of-elections is a set of elections with a distance between each pair. Such a set of elections can then be embedded in the plane by representing each election as a point, so that the Euclidean distances between the points resemble the distances between the respective elections. Such a map-of-elections is useful because, by analyzing the distances between elections generated from various models, we can often obtain insights about their nature.
To create a map-of-elections for approval elections, we start by identifying two metrics between approval elections, the isomorphic Hamming distance and the approvalwise distance. The first one is accurate, but difficult to compute, whereas the second one is less precise, but easily computable. Fortunately, in our election datasets the two metrics are strongly correlated; thus, we use the latter one.
Next, we analyze the space of approval elections with a given number of candidates and voters. For each p ∈ [0, 1], by p-identity (p-ID) elections we mean those where all the votes are identical and approve the same p-fraction of the candidates, and by p-impartial culture (p-IC) elections we mean those where each voter approves each candidate with probability p. We view p-ID and p-IC elections as two extremes on the spectrum of agreement between the voters and, intuitively, we expect every election (where each voter approves, on average, a p fraction of the candidates) to be located somewhere between these two. In particular, for φ ∈ [0, 1], we introduce the (p, φ)-resampling model, which generates elections whose expected approvalwise distance from p-ID is exactly the φ fraction of the distance between p-ID and p-IC (and whose expected distance from p-IC is the 1 − φ fraction).
Armed with these tools, we proceed to draw maps of elections. First, we consider p-ID, p-IC, and (p, φ)-resampling elections, where the p and φ values are chosen to form a grid, and compute the approvalwise distances between them. We find that, for a fixed value of p, the (p, φ)-resampling elections indeed form lines between the p-ID and p-IC ones, whereas for fixed φ values they form lines between the 0-ID and 1-ID ones (which we refer to as the empty and full elections). We obtain more maps by adding elections generated according to other statistical cultures; the presence of the (p, φ)-resampling grid helps in understanding the locations of these new elections. For each of our elections we compute several parameters, such as the highest number of approvals that a candidate receives, the time required to compute the result of a certain multiwinner voting rule, or the cohesiveness level (see Section 2 for a definition). For each of the statistical cultures, we present maps where we color the elections according to these values. This gives further insight into the nature of the elections they generate. Finally, we compare the results for randomly generated elections with those appearing in real life, in the context of participatory budgeting.
For a given positive integer t, we write [t] to denote the set {1, …, t}, and [t]₀ as an abbreviation for [t] ∪ {0}.
A (simple) approval election E = (C, V) consists of a set C = {c₁, …, c_m} of candidates and a collection V = (v₁, …, v_n) of voters. Each voter casts an approval ballot, i.e., he or she selects a subset of candidates that he or she approves. Given a voter v, we denote this subset by A(v). Occasionally, we refer to the voters or their approval ballots as votes; the exact meaning will always be clear from the context. An approval-based committee election (an ABC election) is a triple (C, V, k), where (C, V) is a simple approval election and k is the size of the desired committee. We use simple elections when the goal is to choose a single individual, and ABC elections when we seek a committee.
Given an approval election (be it a simple election or an ABC election) and a candidate c, we write score(c) to denote the number of voters that approve c. We refer to this value as the approval score of c. The single-winner approval rule (called AV) returns the candidate with the highest approval score (or the set of such candidates, in case of a tie).
Distances Between Votes.
For two voters u and v, their Hamming distance is ham(u, v) = |A(u) △ A(v)|, i.e., the number of candidates approved by exactly one of them. Other distances include, e.g., the Jaccard one, defined as 1 − |A(u) ∩ A(v)| / |A(u) ∪ A(v)|. For other examples of such distances, we point to the work of caragiannis2022evaluating.
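For concreteness, the two vote distances can be sketched as follows (ballots are represented as Python sets of candidates; the function names are ours, not from the paper):

```python
def hamming(u: set, v: set) -> int:
    """Number of candidates approved by exactly one of the two voters."""
    return len(u ^ v)  # size of the symmetric difference

def jaccard(u: set, v: set) -> float:
    """1 - |intersection| / |union|; equals 0 for identical ballots."""
    if not u and not v:  # convention: two empty ballots are at distance 0
        return 0.0
    return 1 - len(u & v) / len(u | v)
```

For example, the ballots {a, b} and {b, c} are at Hamming distance 2 and at Jaccard distance 2/3.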
Approval-Based Committee Voting Rules.
An approval-based committee voting rule (an ABC rule) is a function that maps an ABC election to a nonempty set of committees of size . If an ABC rule returns more than one committee, then we consider them tied.
We introduce two prominent ABC rules. Multiwinner Approval Voting (AV) selects the k candidates with the highest approval scores. Given a committee W, its approval score is the sum of the scores of its members, score(W) = Σ_{c ∈ W} score(c). If more than one committee achieves the maximum score, AV returns all tied committees. The second rule is Proportional Approval Voting (PAV). PAV outputs all committees W of size k with the maximum PAV-score:
score_PAV(W) = Σ_{v ∈ V} H(|A(v) ∩ W|),
where H(t) = Σ_{i=1}^{t} 1/i is the harmonic function (with H(0) = 0). Intuitively, AV selects committees that contain the "best" candidates (in the sense of most approvals) and PAV selects committees that are in a strong sense proportional justifiedRepresentation. In contrast to AV, which is polynomial-time computable, PAV is NP-hard to compute azi-gas-gud-mac-mat-wal:c:approval-multiwinner; sko-fal-lan:j:collective. In practice, PAV can be computed by solving an integer linear program jair/spoc or by an approximation algorithm DudyczMMS20-tight-pav-apx.
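A brute-force sketch of the two scoring rules (function names ours; enumerating all committees is exponential in the number of candidates, which matches the NP-hardness of PAV, so this is only for tiny instances):

```python
from itertools import combinations

def av_score(committee, votes):
    # Approval score of a committee: sum of its members' approval scores.
    return sum(len(v & set(committee)) for v in votes)

def pav_score(committee, votes):
    # Sum over voters of H(|A(v) ∩ W|), where H(t) = 1 + 1/2 + ... + 1/t.
    harmonic = lambda t: sum(1 / i for i in range(1, t + 1))
    return sum(harmonic(len(v & set(committee))) for v in votes)

def pav_winners(candidates, votes, k):
    # All size-k committees with the maximum PAV-score (brute force).
    committees = list(combinations(candidates, k))
    best = max(pav_score(W, votes) for W in committees)
    return [set(W) for W in committees
            if abs(pav_score(W, votes) - best) < 1e-9]
```

On a profile with six voters approving {0, 1} and four voters approving {2}, with k = 2, AV gives committee {0, 1} the top score of 12, while PAV prefers committees that also cover the minority, illustrating its proportionality.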
Intuitively, a proportional committee should represent all groups of voters in a way that (roughly) corresponds to their size. To speak of proportional committees in ABC elections, justifiedRepresentation introduced the concept of cohesive groups.
Consider an ABC election (C, V, k) with n voters and some non-negative integer ℓ. A group of voters V' ⊆ V is ℓ-cohesive if (i) |V'| ≥ ℓ · n/k and (ii) |⋂_{v ∈ V'} A(v)| ≥ ℓ.
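The definition can be checked directly on small instances: an ℓ-cohesive group exists exactly if some ℓ candidates are commonly approved by at least ℓ·n/k voters (a larger group can always be shrunk). A sketch, with names of our own choosing:

```python
from itertools import combinations
from math import ceil

def has_cohesive_group(votes, m, k, ell):
    """True iff an ell-cohesive group exists: at least ceil(ell*n/k) voters
    commonly approving some set of ell candidates.  Enumerates candidate
    subsets, so it is exponential in ell and meant for small instances."""
    n = len(votes)
    threshold = ceil(ell * n / k)
    for T in combinations(range(m), ell):
        supporters = sum(1 for v in votes if set(T) <= v)
        if supporters >= threshold:
            return True
    return False

def cohesiveness_level(votes, m, k):
    # Largest ell with an ell-cohesive group; an ell-cohesive group is also
    # ell'-cohesive for ell' < ell, so we can stop at the first failure.
    level = 0
    for ell in range(1, k + 1):
        if has_cohesive_group(votes, m, k, ell):
            level = ell
        else:
            break
    return level
```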
An ℓ-cohesive group is large enough to deserve ℓ representatives in the committee and is cohesive in the sense that there are ℓ candidates that could jointly represent it. A number of proportionality notions have been proposed based on cohesive groups, such as (extended) justified representation justifiedRepresentation, proportional justified representation Sanchez-Fernandez2017Proportional, the proportionality degree sko:c:prop-degree, and others. For our purposes, it is sufficient to note that all these concepts guarantee cohesive groups different types of representation (see also the survey of lac:sko:t:approval-survey for a comprehensive overview).
3 Statistical Cultures for Approval Elections
In the following, we present several statistical cultures (probabilistic models) for generating approval elections. Our input consists of the desired number of voters, n, and a set of candidates C = {c₁, …, c_m}. For models that already exist in the literature, we provide examples of papers that use them.
Resampling, IC, and ID Models.
Let p and φ be two numbers in [0, 1]. In the (p, φ)-resampling model, we first draw a central ballot u by choosing ⌊pm⌋ approved candidates uniformly at random; then, we generate each new vote v by initially setting v := u and executing the following procedure for every candidate c: With probability 1 − φ, we leave c's approval intact and with probability φ we resample it (i.e., we let c be approved with probability p). The resampling model is due to this paper and is one of our basic tools for analyzing approval elections. By fixing φ = 1, we get the p-impartial culture model (p-IC), where each candidate in each vote is approved with probability p; it was used, e.g., by bre-fal-kac-nie2019:experimental_ejr and fal-sli-tal:c:vnw. By fixing φ = 0, we ensure that all votes in an election are identical (i.e., approve the same p fraction of the candidates). We refer to this model as p-identity (p-ID).
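A minimal sampler for the resampling model, assuming parameters p and φ as above, candidates 0, …, m−1, and a central ballot of size ⌊pm⌋ (the function name is ours):

```python
import random

def sample_resampling(m, n, p, phi, rng=random):
    """Sample n approval votes over candidates 0..m-1 from the
    (p, phi)-resampling model."""
    central = set(rng.sample(range(m), int(p * m)))  # central ballot
    votes = []
    for _ in range(n):
        vote = set()
        for c in range(m):
            if rng.random() < phi:      # resample this candidate's approval
                if rng.random() < p:
                    vote.add(c)
            elif c in central:          # keep the central ballot's decision
                vote.add(c)
        votes.append(vote)
    return votes
```

Setting phi=0 reproduces p-ID (all votes equal the central ballot), and phi=1 reproduces p-IC.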
The (p, φ, g)-disjoint model, where p and φ are numbers in [0, 1] and g is a non-negative integer, works as follows: We draw a random partition of C into g sets, C₁, …, C_g, and, to generate a vote, we choose i ∈ [g] uniformly at random and sample the vote from a (p, φ)-resampling model with the central vote that approves exactly the candidates from C_i (while the central votes are independent of p, we still need this parameter for the resampling step).
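A sketch of the disjoint model, under the simplifying assumption that the partition is into g equal-size groups and that g divides m (the function name is ours):

```python
import random

def sample_disjoint(m, n, p, phi, g, rng=random):
    """(p, phi, g)-disjoint model: partition candidates 0..m-1 into g equal
    groups (assumes g divides m); each vote is drawn from (p, phi)-resampling
    with the central ballot approving exactly one randomly chosen group."""
    cands = list(range(m))
    rng.shuffle(cands)
    size = m // g
    groups = [set(cands[i * size:(i + 1) * size]) for i in range(g)]
    votes = []
    for _ in range(n):
        central = groups[rng.randrange(g)]
        vote = set()
        for c in range(m):
            if rng.random() < phi:      # resample: approve with probability p
                if rng.random() < p:
                    vote.add(c)
            elif c in central:          # keep the central ballot's decision
                vote.add(c)
        votes.append(vote)
    return votes
```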
Let p and φ be two numbers from [0, 1] and let d be a distance between approval votes. We require that d is polynomial-time computable and that, for each two approval votes u and v, the value d(u, v) depends only on |A(u)|, |A(v)|, and |A(u) ∩ A(v)|; both distances from Section 2 have this property. In the (p, φ)-noise model we first generate a central vote u as in the resampling model and, then, each new vote v is generated with probability proportional to φ^{d(u, v)}. Such noise models are analogous to the Mallows model for ordinal elections and were studied, e.g., by caragiannis2022evaluating. In particular, they gave a sampling procedure for the case of the Hamming distance. We extend it to arbitrary distances.
There is a polynomial-time sampling procedure for the (p, φ)-noise models (as defined above).
Let u be the central vote, let A = A(u), and let B = C \ A. Consider non-negative integers a ≤ |A| and b ≤ |B|. The probability of generating a vote v that contains a candidates from A and b candidates from B is proportional to the following value (abusing notation, we write d(a, b) to mean the value d(u, v) for any such vote; indeed, d depends only on |A(u)|, |A(v)|, and |A(u) ∩ A(v)|):
f(a, b) = (|A| choose a) · (|B| choose b) · φ^{d(a, b)}.
Next, let S = Σ_{a, b} f(a, b). To sample a vote, we draw values a and b with probability f(a, b)/S and form the vote by approving a random members of A and b random members of B. ∎
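The sampling procedure above, specialized to the Hamming distance (where the distance of a vote with a approvals inside A and b outside is (|A| − a) + b), can be sketched as follows (the function name is ours):

```python
import random
from math import comb

def sample_noise_vote(m, central, phi, rng=random):
    """One vote from the noise model with the Hamming distance, following
    the two-stage procedure above.  `central` is the central ballot."""
    A = sorted(central)
    B = [c for c in range(m) if c not in central]
    # Weight of drawing a candidates from A and b candidates from B:
    # f(a, b) = C(|A|, a) * C(|B|, b) * phi^((|A| - a) + b).
    pairs, weights = [], []
    for a in range(len(A) + 1):
        for b in range(len(B) + 1):
            pairs.append((a, b))
            weights.append(comb(len(A), a) * comb(len(B), b)
                           * phi ** ((len(A) - a) + b))
    a, b = rng.choices(pairs, weights=weights)[0]
    return set(rng.sample(A, a)) | set(rng.sample(B, b))
```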
In the remainder, we only use the noise model with the Hamming distance, and we refer to it as the (p, φ)-noise model. Note that the roles of p and φ in this model are similar to, but not the same as, those in the (p, φ)-resampling model (for example, for φ → 0 we get the p-ID model, but for φ = 1 we get the 0.5-IC one).
In the d-dimensional Euclidean model, each candidate and each voter is a point from [0, 1]^d, and a voter approves a candidate if the distance between their points is at most r (this value is called the radius); such models were discussed, e.g., in the classical works of Enelow and Hinich [enelow1984spatial,enelow1990advances], and more recently by elk-lac:c:ci-vi--approval, elk-fal-las-sko-sli-tal:c:2d-multiwinner, bre-fal-kac-nie2019:experimental_ejr, and god-bat-sko-fal:c:2d. We consider d-dimensional models for d ∈ {1, 2}, where the agents' points are distributed uniformly at random on [0, 1]^d. We refer to them as the 1D-Uniform and 2D-Square models (note that to fully specify each of them, we also need to indicate the radius).
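A sketch of the 1D-Uniform (dim = 1) and 2D-Square (dim = 2) models (the function name is ours):

```python
import random

def sample_euclidean(m, n, dim, radius, rng=random):
    """Candidates and voters are uniform points on [0,1]^dim; a voter
    approves every candidate within the given radius."""
    point = lambda: tuple(rng.random() for _ in range(dim))
    cands = [point() for _ in range(m)]
    voters = [point() for _ in range(n)]
    dist = lambda x, y: sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
    return [{j for j, c in enumerate(cands) if dist(v, c) <= radius}
            for v in voters]
```

The radius controls the average ballot size, which is why it must be reported alongside the model name.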
Truncated Urn Models.
Let p be a number in [0, 1] and let α be a non-negative real number (the parameter of contagion). In the truncated Pólya-Eggenberger Urn Model berg1985paradox we start with an urn that contains all m! possible linear orders over the candidate set. To generate a vote, we (1) draw a random order r from the urn, (2) produce an approval vote that consists of the top ⌊pm⌋ candidates according to r (this is the generated vote), and (3) return α · m! copies of r to the urn. For α = 0, all votes with ⌊pm⌋ approved candidates are equally likely, whereas for large values of α all votes are likely to be identical (so the model becomes similar to p-ID).
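The urn process can be simulated without materializing the m! orders, using the standard observation that after i draws the urn holds the m! initial orders plus i·α·m! copies of previously drawn ones, so a fresh uniform order is drawn with probability 1/(1 + iα) (the function name is ours):

```python
import random

def sample_urn(m, n, p, alpha, rng=random):
    """Truncated Polya-Eggenberger urn: draw an order, approve its top
    floor(p*m) candidates, return alpha * m! copies of the order to the urn."""
    top = int(p * m)
    votes, history = [], []
    for i in range(n):
        # A previously drawn order is picked with probability i*alpha/(1+i*alpha);
        # each past draw contributed equally many copies, so choose uniformly.
        if history and rng.random() < (i * alpha) / (1 + i * alpha):
            order = rng.choice(history)
        else:
            order = rng.sample(range(m), m)  # fresh uniformly random order
        history.append(order)
        votes.append(set(order[:top]))
    return votes
```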
Next we describe two (pseudo)metrics used to measure distances between approval elections. Since we are interested in distances between randomly generated elections, our metrics are independent of renaming the candidates and voters.
Consider two equally-sized candidate sets C and D, and a voter v with a ballot over C. For a bijection σ: C → D, by σ(v) we mean a voter with the approval ballot σ(A(v)) = {σ(c) : c ∈ A(v)}. In other words, σ(v) is the same as v, but with the candidates renamed by σ. We write Π(C, D) to denote the set of all bijections from C to D. For a positive integer t, by S_t we mean the set of all permutations over [t]. Next, we define the isomorphic Hamming distance (inspired by the metrics of fal-sko-sli-szu-tal:c:isomorphism).
Let E = (C, V) and F = (D, U) be two elections, where |C| = |D|, V = (v₁, …, v_n), and U = (u₁, …, u_n). The isomorphic Hamming distance between E and F, denoted d_iso(E, F), is defined as:
min_{σ ∈ Π(C, D)} min_{ρ ∈ S_n} Σ_{i=1}^{n} ham(σ(v_i), u_{ρ(i)}).
Intuitively, under the isomorphic Hamming distance we unify the names of the candidates in both elections and match their voters to minimize the sum of the resulting Hamming distances. We call this distance isomorphic because its value is zero exactly if the two elections are identical, up to renaming the candidates and voters. Computing this distance is NP-hard (see also the related results for approximate graph isomorphism arv-koe-kuh-vas:c:approximate-graph-isomorphism; gro-rat-woe:c:approximate-graph-isomorphism).
Computing the isomorphic Hamming distance between two approval elections is NP-hard.
Thus we compute this distance using a brute-force algorithm (which is faster than using, e.g., ILP formulations). Since this limits the size of elections we can deal with, we also introduce a simple, polynomial-time computable metric.
Let E be an election with candidate set C = {c₁, …, c_m} and n voters. Its approvalwise vector, denoted a(E), is obtained by sorting the vector (score(c₁)/n, …, score(c_m)/n) in non-increasing order.
The approvalwise distance between elections E and F with approvalwise vectors a(E) = (x₁, …, x_m) and a(F) = (y₁, …, y_m) is defined as:
d_app(E, F) = Σ_{i=1}^{m} |x_i − y_i|.
In other words, the approvalwise vector of an election is a sorted vector of the normalized approval scores of its candidates, and an approvalwise distance between two elections is the distance between their approvalwise vectors. We sort the vectors to avoid the explicit use of a candidate matching, as is needed in the Hamming distance. Occasionally we will speak of approvalwise distances between approvalwise vectors, without referring to the elections that provide them.
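The approvalwise vector and distance can be sketched in a few lines (function names ours; votes are sets over candidates 0, …, m−1):

```python
def approvalwise_vector(votes, m):
    """Normalized approval scores, sorted in non-increasing order."""
    n = len(votes)
    scores = [sum(1 for v in votes if c in v) / n for c in range(m)]
    return sorted(scores, reverse=True)

def approvalwise_distance(votes1, votes2, m):
    """l1 distance between the approvalwise vectors of two elections."""
    x = approvalwise_vector(votes1, m)
    y = approvalwise_vector(votes2, m)
    return sum(abs(a - b) for a, b in zip(x, y))
```

The distance between the empty election (no approvals) and the full one (everyone approves everyone) is m, the largest possible value.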
It is easy to see that the approvalwise distance is computable in polynomial time. Indeed, its definition is so simplistic that it is natural to even question its usefulness. However, in Section 6.3 we will see that in our election datasets it is strongly correlated with the Hamming distance. Thus, in the following discussion, we focus on approvalwise distances.
5 A Grid of Approval Elections
To better understand the approvalwise metric space of elections, we next analyze expected distances between elections generated according to the (p, φ)-resampling model.
Fix some number of candidates, m, and parameters p, φ ∈ [0, 1], such that pm is an integer, and consider the process of generating votes from the (p, φ)-resampling model. In the limit, the approvalwise vector of the resulting election consists of pm entries equal to 1 − φ + φp, followed by (1 − p)m entries equal to φp.
Indeed, each of the pm candidates approved in the central ballot either stays approved (with probability 1 − φ) or is resampled (with probability φ, and then gets an approval with probability p). Analogous reasoning applies to the remaining (1 − p)m candidates. With a slight abuse of notation, we call the above vector (p, φ). Furthermore, we refer to (p, 0) as the p-ID vector, to (p, 1) as the p-IC vector, and to the 0-ID and 1-ID vectors as the empty and full ones, respectively (note that 0-ID = 0-IC and 1-ID = 1-IC).
Now, consider two additional numbers, p', φ' ∈ [0, 1], such that p'm is an integer. Simple calculations show that:
d_app(empty, (p', φ')) = p'm  and  d_app((p, φ), (p, φ')) = |φ − φ'| · d_app(p-ID, p-IC).
Thus the distance of a (p, φ) vector from the empty one is a p fraction of the distance between the empty and full vectors, and its distance from p-ID is a φ fraction of the distance between p-IC and p-ID (see also Figure 2). Furthermore, m is the largest possible approvalwise distance.
Intuitively, (p, φ)-resampling elections form a grid that spans the space between the extreme points of our election space; the larger the φ parameter, the more "chaotic" an election becomes (formally, the closer it is to the p-IC elections), and the larger the p parameter, the more approvals it contains (the closer it is to the full election). We use (p, φ)-resampling elections as a background dataset, which consists of elections with 100 candidates and 1000 voters each, with the following p and φ parameters:
- p is chosen from a fixed set of values and φ is chosen from an interval;¹
- φ is chosen from a fixed set of values and p is chosen from an interval.
¹ By generating elections with a parameter from an interval, we mean generating one election for each of a number of evenly spaced values from this interval.
For each of these elections we compute a point in the plane, so that the Euclidean distances between these points resemble the approvalwise distances between the respective elections as closely as possible. For this purpose, similarly to szu-fal-sko-sli-tal:c:map, we use the Fruchterman-Reingold force-directed algorithm fruchterman1991graph. For the resulting map, see the clear grid-like shape at the left side of Figure 3.² Whenever we present maps of elections later in the paper, we compute them in the same way as described above (but for datasets that include other elections in addition to the background ones).
² While our visualizations fit nicely into the two-dimensional embedding, our election space has a much higher dimension.
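The idea of a distance-preserving embedding can be illustrated with a toy stress-minimization loop; this is a sketch of the general principle, not the exact Fruchterman-Reingold setup used in the paper (names ours):

```python
import random

def embed(dist, iters=500, lr=0.05, rng=random):
    """Place points in the plane so that their pairwise Euclidean distances
    approach the target distances in `dist` (a symmetric matrix)."""
    n = len(dist)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(i + 1, n):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = (dx * dx + dy * dy) ** 0.5 or 1e-9
                # Push the pair apart if too close, pull together if too far.
                f = lr * (dist[i][j] - d) / d
                pos[i][0] += f * dx; pos[i][1] += f * dy
                pos[j][0] -= f * dx; pos[j][1] -= f * dy
    return pos
```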
In this section, we use the map-of-elections approach to analyze quantitative properties of approval elections generated according to our models. In particular, we will see how an election’s position in the grid influences each of the properties, and what parameters to use to generate elections with the quantitative property in a desired range.
6.1 Experimental Design
Concretely, we consider the following four statistics:
- Max. Approval Score.
The highest approval score among all candidates in a given election, normalized by the maximum possible score, i.e., the number of voters.
- Cohesiveness Level.
The largest integer ℓ such that there exists an ℓ-cohesive group (for a given committee size).
- Voters in Cohesive Groups.
The fraction of voters that belong to at least one 1-cohesive group (for a given committee size).
- PAV Runtime.
Runtime (in seconds) required to compute a winning committee under the PAV rule, by solving an integer linear program provided by the abcvoting library abcvoting, using the Gurobi ILP solver.
We use six datasets. Five of them are generated using our statistical cultures and consist of elections with 100 candidates and 1000 voters (except for the experiments related to the cohesiveness level, where we have 50 candidates and 100 voters, due to computation time). We have: 250 elections from the Disjoint Model (50 per parameter setting); 225 elections from the Noise Model with the Hamming distance (25 per parameter setting); 225 elections from the Truncated Urn Model (25 per parameter setting); and 200 elections from the Euclidean Models (100 for 1D-Uniform and 100 for 2D-Square, for several radius values); these parameters are as used by bre-fal-kac-nie2019:experimental_ejr. The sixth dataset uses real-life participatory budgeting data and contains 44 elections from Pabulib pabulib, where for each (large enough) election we randomly selected a subset of 50 candidates and 1000 voters.
6.2 Experimental Results
Our visualizations are shown in Figures 3 and 4. We use the grid structure of the background dataset for comparison with the other datasets. Notably, some of them do not fill this grid: the disjoint model (Figure 4b) is restricted to the lower half (i.e., the disjoint model does not yield elections with very many approvals), the Euclidean model (Figure 4a) is restricted to the left half (due to the uniform distribution of points, its elections are rather "chaotic"), and the real-world dataset Pabulib (Figure 4d) is placed very distinctly in the bottom-left part.
To get an intuitive understanding of the four statistics, let us consider the background dataset in Figure 3a. We see that the highest approval score is lowest in the lower-left part and increases toward the top and the right. This is sensible: If the average number of approved candidates increases, so does this statistic; also, if voters become more homogeneous, high-scoring candidates are likely to exist. Moreover, regarding voters in cohesive groups, it turns out that in most elections almost all voters belong to a 1-cohesive group, with the lower-left part as an exception (where there are not enough approvals to form 1-cohesive groups). The time needed to find a winning committee under PAV is correlated with the distance from 0.5-IC: it takes the longest to find winning committees if the election is unstructured. Similarly to the highest approval score, the cohesiveness level increases when moving up or right in the diagram. Cohesive groups with levels close to the committee size only exist in very homogeneous elections (rightmost part) and elections with many approvals (top part).
We move on to the results for the five other datasets. Note that each figure also contains the background dataset (gray dots) for reference. These results help to understand the differences between our statistical cultures.
The maximum approval score statistic provides an insight into whether there is a candidate that is universally supported. Instances with a value close to 1 possess such a candidate. In a single-winner election, this candidate is likely to be a clear winner. This is undesirable when simulating contested elections or shortlisting. Also note that in the real-world data set (Pabulib) we do not observe the existence of such a candidate.
When looking at the PAV runtime, we find that some statistical cultures generate computationally difficult elections, e.g., the (p, φ)-resampling model with parameter values close to those of 0.5-IC, the noise model with comparable parameters, and the disjoint model. In contrast, instances from the real-world dataset, as well as from the Euclidean and urn ones, can be handled very quickly.³
³ Less than 1 second on a single core (Intel Xeon Platinum 8280 CPU @ 2.70GHz) of a 224-core machine with 6TB RAM. In contrast, the worst-case instance (0.3-IC) required 25 minutes on 13 cores.
Concerning voters in cohesive groups, whenever this statistic is close to 1, it is easy to satisfy most voters with at least one approved candidate in the committee; such committees are easy to find justifiedRepresentation. Since many proportional rules take special care of voters that belong to cohesive groups, in such elections there are no voters at a systematic disadvantage. In many of our generated elections (almost) all voters belong to 1-cohesive groups, but this is not the case for the real-world Pabulib data. Indeed, to simulate Pabulib data well, we would likely need to provide some new statistical culture(s). For the cohesiveness level, we see that all models generate a full spectrum of cohesiveness levels. That said, we expect realistic elections to appear in the "lower-left" part of our grid (with few approvals), and such elections tend to have low cohesiveness levels. Indeed, this is also the case for the Pabulib elections; hence, it is important how proportional rules treat ℓ-cohesive groups with small ℓ.
Figures 3 and 4 are based on the approvalwise distance. We argue that they would not change much if we used the (computationally intractable) isomorphic Hamming distance. To this end, we generated 363 elections with 10 candidates and 50 voters from the statistical cultures used in the previous experiment (a detailed dataset description is in Appendix C). We compared the isomorphic Hamming and approvalwise distances; the results are presented in Figure 2. Each dot there represents a pair of elections, and its coordinates are the distances between these elections according to the two metrics. The Pearson correlation coefficient is 0.989, and for 67% of the election pairs the distances are identical.
7 Future Work
An important task for future work is to broadly study real-world datasets with the methods proposed in this paper. Most such datasets come from political elections with few candidates. As our analysis may be influenced by the number of candidates, a direct comparison with the figures in this paper is not possible; instead, one has to rerun the experiments for a similar number of candidates.
Martin Lackner was supported by the Austrian Science Fund (FWF), project P31890. Nimrod Talmon was supported by the Israel Science Foundation (ISF; Grant No. 630/19). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 101002854).
Appendix A Proof of Proposition 2
Proof of Proposition 2.
We describe a reduction from the NP-hard Clique problem. An instance of Clique is (G, k), where G is a graph with vertex set V(G) = {x₁, …, x_n} and edge set E(G), and k is the size of the sought clique.
Given such an instance of Clique, we create two elections, E₁ and E₂, with n voters and n candidates each. The intuitive idea of the reduction is for E₁ to correspond to G and for E₂ to correspond to a k-clique; then, a minimal Hamming distance corresponds to finding a k-clique in G. Details follow.
Denote the voters of E₁ by v₁, …, v_n and the candidates of E₁ by c₁, …, c_n. For each i ∈ [n], let v_i approve c_i. Furthermore, for each i ∈ [n], let v_i approve each c_j for which there is an edge {x_i, x_j} in G.
Denote the voters of E₂ by u₁, …, u_n and the candidates of E₂ by d₁, …, d_n. For each i ∈ [k], let u_i approve d₁, …, d_k, and disapprove all other candidates; the remaining voters approve no candidates.
Set the bound on the Hamming distance to be n + 2|E(G)| − k². This completes the description of the reduction. Next, we show the two directions of correctness.
First, assume that there is a k-clique in G. Without loss of generality, let x₁, …, x_k be the vertices of the k-clique. Then, match the voters of E₁ so that v_i, for i ∈ [n], is matched to u_i; and match the candidates so that c_i, for i ∈ [n], is matched to d_i. That way, all the approvals of E₂ correspond to approvals in the resulting E₁; in particular, for each i and j such that u_i approves d_j, the matched voter v_i approves the matched candidate c_j. The number of approvals of the matched E₁ that correspond to disapprovals in E₂ is exactly n + 2|E(G)| − k².
For the other direction, note that the Hamming distance corresponding to any matching of the voters and candidates equals n + 2|E(G)| − k² + 2s, where s is the number of disapprovals (in E₁) between the first k matched voters and the first k matched candidates; thus s = 0, and the bound is met, only when they correspond to a k-clique. ∎
Appendix B Cohesive Groups
In this section we describe the algorithm we used to obtain the results regarding cohesive groups. It is based on the one provided by Anonymous.
B.1 Calculating Maximum Cohesiveness Level
We show an algorithm, based on Integer Linear Programming (ILP), that we used to calculate the maximum cohesiveness level of a group of voters.
Let E = (C, V) be an approval election instance and k be the committee size. We ask what the maximum value of ℓ is such that there exists an ℓ-cohesive group of voters, that is, a subset of at least ℓ·n/k voters that have at least ℓ candidates in common, that is, approved by all of them.
First of all, let us point out that we can find ℓ by using binary search, because each ℓ-cohesive group is also an ℓ'-cohesive group for every ℓ' ≤ ℓ. We can also search for ℓ by looping through each possible value, that is, from 1 to k, and stopping as soon as there is no cohesive group of the given cohesiveness level. The second option may be better when we expect the cohesiveness level to be small; however, the first is better in terms of theoretical computational complexity. In our algorithm, we focus on checking whether there exists an ℓ-cohesive group for a given cohesiveness level ℓ.
Let us assume we are given an election E = (C, V), a committee size k, and a cohesiveness level ℓ. We show how to construct an ILP instance that indicates whether there exists an ℓ-cohesive group in E. For the sake of brevity, we set n = |V| and m = |C|. Furthermore, let M be the binary n × m matrix of approvals, that is, M_{i,j} = 1 if the i-th voter approves the j-th candidate, and M_{i,j} = 0 otherwise.
We note that, if there exists an ℓ-cohesive group of any size, then there also exists an ℓ-cohesive group of size exactly ⌈ℓn/k⌉, that is, of the lowest possible size (we can simply remove redundant voters, because the set of commonly approved candidates does not shrink). Thus it is enough to ask whether there exists an ℓ-cohesive group of size ⌈ℓn/k⌉.
To construct our ILP instance, we create the following variables:
- For each voter i ∈ [n], we create a binary variable x_i, with the intention that x_i = 1 if the i-th voter belongs to the cohesive group, and x_i = 0 otherwise.
- For each candidate j ∈ [m], we create a binary variable y_j, with the intention that y_j = 1 if all the selected voters (that is, those specified by the x_i variables) approve the j-th candidate, and y_j = 0 otherwise.
For convenience, we refer to the voters (to the candidates) whose x_i (y_j) variables are set to 1 as selected.
Now let us specify the constraints for these variables. First of all, we need to ensure that exactly ⌈ℓn/k⌉ voters and at least ℓ candidates are selected:
Σ_{i ∈ [n]} x_i = ⌈ℓn/k⌉  and  Σ_{j ∈ [m]} y_j ≥ ℓ.
Furthermore, we need to ensure that each selected voter approves all the selected candidates. Thus, for each j ∈ [m], we form a constraint:
Σ_{i ∈ [n]} M_{i,j} · x_i ≥ ⌈ℓn/k⌉ · y_j.
Let us see how the above inequality works. On the one hand, if the j-th candidate is not selected, then the inequality is satisfied trivially, because the right-hand side is equal to 0. On the other hand, if the j-th candidate is selected, then the sum on the left-hand side must be at least ⌈ℓn/k⌉, i.e., there must be at least ⌈ℓn/k⌉ selected voters who approve this candidate. Since there are exactly ⌈ℓn/k⌉ selected voters, all of them must approve the j-th candidate.
We conclude that, if there exists an assignment satisfying the above constraints, then the selected voters form an ℓ-cohesive group; otherwise, there is no ℓ-cohesive group.
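To make the construction concrete, the following sketch checks the same constraints by brute force instead of calling an ILP solver, so it only works for small instances (names ours):

```python
from itertools import combinations
from math import ceil

def lcohesive_via_constraints(M, k, ell):
    """Brute-force stand-in for the ILP above: look for an assignment of the
    x (voter) and y (candidate) variables satisfying the stated constraints.
    M is the n-by-m 0/1 approval matrix."""
    n, m = len(M), len(M[0])
    size = ceil(ell * n / k)
    for voters in combinations(range(n), size):  # sum_i x_i == ceil(ell*n/k)
        # y_j can be 1 only if every selected voter approves candidate j
        # (this is exactly the constraint sum_i M[i][j] * x_i >= size * y_j).
        y = [1 if all(M[i][j] for i in voters) else 0 for j in range(m)]
        if sum(y) >= ell:                        # sum_j y_j >= ell
            return True
    return False
```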
Appendix C Correlation
The dataset used for comparing metrics in Figure 2 consists of: 40 elections from the Disjoint Models, 45 elections from the Noise Models with Hamming distance, 50 elections from the Truncated Urn Models, 50 elections from Euclidean Models, 134 elections from Resampling Models, 20 elections from IC, 20 elections from ID, and four extreme elections (i.e., 0.5-IC, 0.5-ID, Empty, Full).