1 Introduction
It is well accepted that people often vote strategically in political and other situations, taking into account not just their preferences, but also beliefs about how their vote would affect the outcome [7, 2]. Researchers in economics, political science, and more recently in the computational social choice community, suggested various mathematical models to capture the strategic decision that a voter faces [6, 5, 8]. The gains from a certain action depend not only on the preferences of the voter, but also on the votes of others. Thus, part of the difficulty in predicting a voter’s decision arises from the fact that there is uncertainty about others’ voting decisions, i.e., what can be inferred from the poll about the actual votes. Theoretical models describe this uncertainty in the following different ways, which can lead to different predictions of a voter’s actions.

Expected utility maximization. A rational voter maximizes her expected utility with respect to a probability distribution
over the actions of the other voters. The distribution itself may be given exogenously (e.g., by a poll with known variance as in our model), or derived via equilibrium analysis from the uncertain preferences of the other voters. Such models were developed mainly in the economics literature and are sometimes known as the “calculus of voting”
[15, 11, 12]. 
Voting heuristics. In these models the voter uses some (typically simple) function that states which action to take in any given situation. The voter is not assumed to be rational, and may not even have a cardinal utility measure or an explicit probabilistic representation of the different outcomes. For example, according to the 2-pragmatist heuristic, the voter behaves as if only the two candidates leading the poll are participating [14].

Bounded rationality. These models present a midpoint between utility maximization and heuristics. The voter makes a rational strategic decision based on a heuristic belief, rather than an accurate probabilistic belief. One example of such a model is local dominance [10], which assumes that each voter derives a set of possible outcomes from a poll, and then selects a non-dominated action within these outcomes.
We evaluated the different models on data obtained from Tal et al. [16], who implemented several voting scenarios in controlled experiments with human subjects, varying the number of voters, the poll information, and voters' preferences. Our main finding is that the AU model outperforms all other models in all scenarios. The pragmatist heuristic model, which considers only a limited number of candidates when making decisions, comes in second. The bounded-rational models obtained the worst performance.
1.1 Contribution
Our first contribution is an extensive evaluation of various decision models on real-world data. We use the data of Tal et al. [16], where human subjects with dictated preferences are exposed to a poll over three candidates and make a single voting decision under the Plurality rule. This is the simplest possible setting that involves a non-trivial strategic decision.
This is the first time that these models are tested against voting decisions made with poll information. In fact, for some of them this is the first empirical test at all.
Our second contribution is a new heuristic decision model, inspired by a similar model of Bowman et al. [4], that takes into account both the utility of a candidate and its attainability. This Attainability-Utility (AU) decision model outperforms all other decision models we tested in predicting human votes.
This contributes to the understanding of the factors that determine people’s strategic voting and can lead to new theories of voting behavior that combine rational and boundedly rational behavior.
1.2 Related Work
We are not aware of another controlled experiment where voters face multiple strategic decisions with poll information. Yet, similar experiments were conducted in which groups of human players voted strategically with dictated preference profiles.
Closest to our work is a recent paper by Tyszler and Schram [17], who showed that the strategic behavior of voters in the lab is consistent with a quantal best-response equilibrium. The main difference is that their subjects played a strategic game against other human players, and the information they had was the preferences of the other voters rather than poll information. Similar game-theoretic experiments along that line were conducted in [7, 2, 18]. In particular, these studies have shown that strategic voting in the lab increases with the amount of information subjects receive about others' preferences and actions.
A different line of work in political science compared theoretical models against actual votes in political elections (using exit polls to obtain the truthful preferences). For example, Blais et al. [3] tested the calculus of voting model on empirical data from political elections, focusing on the voter's decision to vote or abstain. They concluded that the model has some explanatory power but is far from explaining the data completely, and did not compare it to other decision models. In contrast, Abramson et al. [1] concluded that voting behavior in US primary elections is consistent with the calculus of voting, but observed obvious strategic behavior in only 13% of the voters, a bit more than the fraction that seem to vote at random.
In contrast to controlled experiments, such datasets typically contain few decisions per voter (usually just one), and are thus insufficient to test decision models against individual behavior.
2 Preliminaries
In this section we provide the necessary background for our work. An (anonymous) score aggregation rule for a set $C$ of $m$ candidates is a function $f:\mathbb{N}^m \to 2^C$, mapping vectors of candidates' scores to a subset of winning candidates. In particular, the Plurality rule lets each voter vote for a single candidate, collects the total number of votes $s(c)$ of each candidate $c$, and selects $\operatorname{argmax}_{c \in C} s(c)$. We consider a single voter who faces a decision: to vote for one of the candidates in $C$. The voter has a cardinal utility function $u$, where $u(c)$ is the utility of the voter if candidate $c$ wins (a different utility for each candidate). The utility of a subset of winners $W$ is $u(W) = \frac{1}{|W|}\sum_{c\in W} u(c)$. Denote by $\mathcal{U}$ the set of all utility functions over $C$. We denote by $f(\hat{s}+c)$ the outcome of the score vector $\hat{s}$ with one additional vote to $c$.

Prior to her vote, the voter is given poll information, which is a point estimate of the candidates' scores under the Plurality voting rule. Formally, the poll is a vector $\hat{s} = (\hat{s}(c_1),\dots,\hat{s}(c_m))$, where $\hat{s}(c)$ is the number of voters expected to vote for candidate $c$. There is a joint probability distribution $\mathcal{D}$ over pairs of "real outcomes" and polls. (A priori, this distribution could be arbitrary, but in most realistic cases there is some correlation between the real score of a candidate and its score in the poll.) The voter is not explicitly informed of this distribution. A decision model (for Plurality with $m$ candidates and a poll) is a function $f^X:\mathcal{U}\times\mathbb{N}^m \to C$, where $f^X(u,\hat{s})$ is the vote of a voter with utility function $u$, using decision model $X$, given poll $\hat{s}$. We use a superscript for the name of the decision model (e.g., $f^{TRT}$), and subscripts to denote voter-specific parameters, if relevant. We restrict our attention to deterministic decision models in this work. We demonstrate with two simple examples. First, the decision model of a voter who is always truthful regardless of the poll is $f^{TRT}(u,\hat{s}) = \operatorname{argmax}_{c\in C} u(c)$.

Next, consider a rational voter who believes the poll to be a completely accurate representation of the other votes. Such a voter can predict that the outcome of voting for $c$ is $f(\hat{s}+c)$, and her decision will be $f^{BR}(u,\hat{s}) \in \operatorname{argmax}_{c\in C} u\big(f(\hat{s}+c)\big)$, i.e., a "best response" to the votes of the other voters (with some assumption on how to vote when there are multiple best responses).
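As a concrete illustration, the truthful and best-response models described above can be sketched as follows (function and variable names are ours; candidates are represented by indices, and a tied outcome is valued by the average utility of the winners, matching the definition of the utility of a winner set):

```python
def f_truth(utility, poll):
    """Truthful model: always vote for the most preferred candidate."""
    return max(range(len(utility)), key=lambda c: utility[c])

def f_best_response(utility, poll):
    """Best-response model: treat the poll as the exact votes of the
    others, add our one vote, and pick the vote whose resulting outcome
    has the highest utility (ties valued by the winners' average)."""
    best, best_val = None, float("-inf")
    for c in range(len(utility)):
        scores = list(poll)
        scores[c] += 1                      # our one additional vote
        top = max(scores)
        winners = [i for i, s in enumerate(scores) if s == top]
        val = sum(utility[w] for w in winners) / len(winners)
        if val > best_val:                  # ties broken by candidate order
            best, best_val = c, val
    return best
```

For example, with utilities `[0.9, 0.5, 0.1]` and poll `[8, 10, 10]`, a truthful voter picks candidate 0, while the best response is candidate 1, since one extra vote can only decide the tie between the two leaders.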
For exposition, we introduce a running example with 5 candidates, and specify which candidate the voter will choose under every decision model.
Example 1 (Running example).
The set of candidates is . A voter’s utility is described by the vector (preferences are lexicographic). Poll scores are given by .
Figure 1 shows the scores of all candidates graphically. Both $f^{TRT}$ and $f^{BR}$ always select the voter's most preferred candidate.

2.1 Decision models from the literature
In this section we briefly describe some decision-making models of voting behavior from the literature, one for each of the approaches specified above (heuristic, rational, and bounded-rational). In Section 3 we describe our decision model developed for this study, and in Section 4 we provide a detailed comparison of this model to the decision-making models below as well as several baseline models.
k-pragmatist
The first model we consider is the simple $k$-pragmatist heuristic [14]. Formally, let $B_k(\hat{s})$ contain the $k$ candidates with the highest score in $\hat{s}$; then the $k$-pragmatist decision model selects the most preferred candidate among them, i.e.,
$$f^{Prag}_k(u,\hat{s}) = \operatorname{argmax}_{c\in B_k(\hat{s})} u(c).$$
We allow $k$ to be an individual parameter that differs from voter to voter.
In Figure 1 we see that for $k=2$ the voter looks only at the two leading candidates, and therefore votes for the one she prefers among them. For $k=4$, the voter considers all candidates except the one ranked last as possible winners, and therefore votes for her most preferred candidate.
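The k-pragmatist rule is simple enough to state in a few lines (a sketch; names are ours, candidates are indices):

```python
def f_pragmatist(utility, poll, k):
    """k-pragmatist heuristic: vote for the most preferred candidate
    among the k poll leaders."""
    leaders = sorted(range(len(poll)), key=lambda c: poll[c], reverse=True)[:k]
    return max(leaders, key=lambda c: utility[c])
```

With utilities `[5, 4, 3, 2, 1]` and poll `[3, 10, 9, 6, 1]`, a 2-pragmatist votes for candidate 1 (the preferred of the two leaders), while a 4-pragmatist already sees her favorite, candidate 0, among the possible winners and votes for it.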
Calculus of voting
The calculus of voting suggests that a voter always votes in a way that maximizes her expected utility [15, 12]. The complications of the model usually arise from the fact that the voter is assumed to know only the other voters’ preferences, and uses an equilibrium model to predict their votes. However, we consider a simpler version where the distribution of votes is given exogenously [11].
Recall that we defined $\mathcal{D}$ as a joint distribution over actual scores and polls. We denote by $\mathcal{D}(\cdot \mid \hat{s})$ the distribution over the actual scores, conditional on poll scores $\hat{s}$. Denote by $piv_{c,c'}(\hat{s})$ the probability that the voter is pivotal for $c$ versus $c'$ when the poll is $\hat{s}$; that is, the probability that voting for $c$ makes $c$ a joint or unique winner instead of $c'$. Then, the voter votes so as to maximize her expected utility:
$$f^{CV}(u,\hat{s}) \in \operatorname{argmax}_{c\in C} \sum_{c'\neq c} piv_{c,c'}(\hat{s})\,\big(u(c)-u(c')\big).$$
To make the CV model concrete, we need to pin it down to a specific distribution $\mathcal{D}$. For this paper, we use the way that scores were generated in the experiment of Tal et al. [16]. Specifically, given a poll $\hat{s}$ over a population of $n$ voters, the actual score vector is obtained by sampling $n$ votes from a multinomial distribution whose parameters are $\big(\hat{s}(c_1)/n,\dots,\hat{s}(c_m)/n\big)$.
We use $f^{CV}_n$ as a shorthand for $f^{CV}$ when $\mathcal{D}$ is a multinomial distribution with $n$ voters as explained above. When $n$ equals the true number of voters, $f^{CV}_n$ selects the candidate that exactly maximizes the voter's expected utility, since the model then uses the true probability that the voter is pivotal.
However, the decision model allows for a more flexible, bounded-rational decision: a smaller $n$ means the voter overestimates her true pivot probability, and thus her influence on the outcome, whereas a larger $n$ means that the voter underestimates her influence.
In Figure 1 we see that for a small $n$ the voter believes she is pivotal with sufficiently high probability to substantially increase the chance of her preferred candidate winning. However, for a large $n$, the voter believes that any tie other than one between the poll leaders is highly improbable, and therefore votes for the leader she prefers.
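A Monte Carlo sketch of the CV decision with the multinomial belief described above (our own naming; the expected utility of each vote is estimated by sampling the other voters' scores rather than by summing exact pivot probabilities, which maximizes the same objective):

```python
import random

def f_cv(utility, poll, n, samples=2000, seed=0):
    """Calculus-of-voting sketch: sample the other n voters' ballots from
    a multinomial proportional to the poll, then pick the vote with the
    highest estimated expected utility (ties in the outcome are valued by
    the average utility of the winners)."""
    rng = random.Random(seed)
    m = len(poll)
    exp_util = [0.0] * m
    for _ in range(samples):
        votes = rng.choices(range(m), weights=poll, k=n)
        counts = [votes.count(c) for c in range(m)]
        for c in range(m):
            counts[c] += 1                      # our one additional vote
            top = max(counts)
            winners = [i for i, x in enumerate(counts) if x == top]
            exp_util[c] += sum(utility[w] for w in winners) / len(winners)
            counts[c] -= 1
    return max(range(m), key=lambda c: exp_util[c])
```

For instance, with utilities `[0.9, 0.5, 0.1]` and poll `[1, 5, 5]` over 11 voters, ties between the two leaders are likely while the favorite is essentially never pivotal, so the expected-utility maximizer compromises on candidate 1.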
Local dominance
Under the Local dominance model [10, 9], the voter has an 'uncertainty parameter' $r$. Given a poll $\hat{s}$ with $n$ participants, the voter considers as possible (without assigning any explicit probability) all score vectors $s$ such that $|s(c)-\hat{s}(c)| \le r\cdot n$ for all $c$. Then, the voter selects an undominated action (i.e., candidate) given this set of possible outcomes. Meir et al. [10] characterize the undominated candidates:

Let $W$ be all candidates whose score in $\hat{s}$ is at least $\max_{c}\hat{s}(c) - 2rn$ (the possible winners).

If $|W| \ge 2$, then the undominated candidates are all candidates in $W$ except the voter's least preferred among them.

If $|W| = 1$, then all candidates are undominated.

Denote by $U_r(u,\hat{s})$ the set of undominated candidates in poll $\hat{s}$ for a voter with utility $u$ and parameter $r$. The decision model of such a voter is
$$f^{LD}_r(u,\hat{s}) = \operatorname{argmax}_{c\in U_r(u,\hat{s})} u(c).$$
This assumes that the voter selects the most preferred undominated candidate, if more than one exists.
In Figure 1 we see that for a small $r$ the voter believes that the poll is very accurate (the score of each candidate may change by at most $rn$ votes), and there is only one possible winner. In this case, the voter remains truthful. For a larger $r$, the voter believes that the poll is not very accurate, and hence may compromise.
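The characterization above yields a short implementation (a sketch; it takes the possible winners to be the candidates within $2rn$ of the poll leader, per the characterization, and names are ours):

```python
def f_local_dominance(utility, poll, r, n):
    """Local dominance sketch: compute the possible-winner set W within
    2*r*n of the poll leader, drop the voter's least preferred candidate
    in W if |W| >= 2 (the dominated action), and vote for the most
    preferred undominated candidate."""
    radius = r * n
    threshold = max(poll) - 2 * radius
    W = [c for c in range(len(poll)) if poll[c] >= threshold]
    if len(W) >= 2:
        least = min(W, key=lambda c: utility[c])
        undominated = [c for c in W if c != least]
    else:
        undominated = list(range(len(poll)))   # only one possible winner
    return max(undominated, key=lambda c: utility[c])
```

With utilities `[0.9, 0.5, 0.1]`, poll `[4, 10, 9]` and `n = 23`: for `r = 0.01` only the leader can win, so the voter stays truthful; for `r = 0.05` both leaders are possible winners and the voter compromises on candidate 1.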
3 The Attainability-Utility Heuristic
We suggest a new heuristic that separately evaluates the attainability of each candidate (an approximation of its chances of success according to the poll scores) and its utility given the voter's preferences, and selects the candidate that maximizes their weighted geometric mean. The heuristic is partly inspired by a rule that was used in the simulations of Bowman et al. [4] for multi-issue voting. (In [4], attainability was computed for each issue separately, and there were additional factors such as learning from the past; all factors were multiplied to obtain the heuristic attractiveness of the candidate.) Given a poll $\hat{s}$, we compute the attainability $A_r(\hat{s},c)$ of each candidate similarly to [4]. Then, for some small constant $\epsilon > 0$, we define:
$$f^{AU}_{\alpha,r}(u,\hat{s}) = \operatorname{argmax}_{c\in C}\; A_r(\hat{s},c)^{2-\alpha}\cdot \big(u(c)+\epsilon\big)^{\alpha}.$$
Intuitively, the parameter $\alpha \in [0,2]$ trades off the relative importance of attainability and utility, where $\alpha=0$ means the voter always selects the candidate with maximal score (this is the only case where $\epsilon$ is needed, as $0^0$ may otherwise occur), and $\alpha=2$ means the voter is always truthful.
The parameter $r$ can be thought of as the accuracy of the poll in the eyes of the voter, similarly to the role of the parameter $r$ in the LD model and of $n$ in the CV model.
Figure 2 shows how $r$ affects the attainability score $A_r$. Candidates that are tied have the same attainability. A high $r$ means that a small advantage in score translates to a large gap in attainability.
Table 1 shows the behavior of the AU model on Example 1 with different parameters. For $\alpha$ close to 2, the voter tends to be truthful, but when there is a big gap in poll scores between her top preferences, $r$ determines which of them she chooses: when $r$ is large, the gap translates into a large difference in attainability and she votes for the better-placed candidate, whereas when $r$ is small the gap is largely ignored and she votes for the more preferred one. In contrast, when $\alpha$ is close to 0 the model gives the poll more weight: for large $r$, only the two leading candidates have non-negligible attainability, while for small $r$ the differences in the poll have less effect. Notice that regardless of the parameters, the model never votes for a candidate that is dominated by another candidate with both a higher score and a higher utility.
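A sketch of the AU decision. The logistic attainability formula below is our own assumption (the text only says attainability is computed "similarly to [4]"), but the weighted geometric mean with exponents $2-\alpha$ and $\alpha$ matches the described endpoints: $\alpha=0$ picks the poll leader, $\alpha=2$ is truthful.

```python
import math

def f_au(utility, poll, alpha, r, eps=1e-4):
    """Attainability-Utility sketch. attainability() uses an assumed
    logistic form in the candidate's gap from the poll leader; alpha in
    [0, 2] trades off attainability vs. utility; eps guards the 0**0
    case when alpha = 0 and a utility is 0."""
    n = sum(poll)
    s_max = max(poll)

    def attainability(c):
        # assumed form: 1 for the leader, decaying with the gap, faster for high r
        return 2.0 / (1.0 + math.exp(r * (s_max - poll[c]) / n))

    return max(range(len(utility)),
               key=lambda c: attainability(c) ** (2 - alpha)
                             * (utility[c] + eps) ** alpha)
```

With utilities `[0.9, 0.5, 0.1]` and poll `[4, 10, 9]`, the extreme settings behave as described: `alpha = 2` votes truthfully for candidate 0, while `alpha = 0` votes for the poll leader, candidate 1.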
Table 1: AU scores of each candidate in Example 1 under different parameter settings.
4 Methodology
Dataset
We evaluated the different models on data obtained from Tal et al. [16], who implemented several voting scenarios in controlled experiments with human subjects. Some of this data is publicly available at votelib.org. The data was obtained from 595 distinct subjects. Each subject played up to 20 rounds of voting with 3 candidates, each time with different preferences and poll information. The poll provided a noisy indication of the voting results. The voting instances can be divided into six different "scenarios" corresponding to the different orders of candidates' scores in the poll once preferences are held fixed (see the two leftmost columns in Table 2).
We denote by $q_1$ the voter's most preferred candidate, by $q_2$ the second, and by $q_3$ the least preferred. The reward in each round depended on the winner: highest if $q_1$ was elected, lower for $q_2$, and lowest for $q_3$. Note that only in scenarios E and F, where $q_1$ is ranked last in the poll, may the voter have a monetary incentive to vote for $q_2$; she never has an incentive to vote for $q_3$.
4.1 Evaluated decision models
In addition to our decision model $f^{AU}$, we evaluate the following single-parameter decision models described in Section 2.1: $f^{Prag}_k$, $f^{CV}_n$, and $f^{LD}_r$. To these models, we add several other baselines.
Voter type based model
Tal, Meir and Gal [16] identified 3 distinct types of voter behavior, albeit without suggesting an explicit decision model:

Voters who are always truthful (TRT voters, about 10%–15% of subjects);

Voters that often compromise when their most preferred candidate is ranked last (CMP voters, about 40% of subjects), and otherwise vote truthfully;

Voters that often compromise AND select the poll leader when their second preference is ranked first (LB voters, about 50% of subjects).

They also identified a subgroup of subjects who select unjustified actions (an action for which some other candidate is both more preferred and ranked higher in the poll) more than once. The behavior of these voters (about 5%–10% of the dataset) is naturally harder to predict for any decision model. We analyze the results for all subjects, but return to the issue of unjustified actions and voters in Section 5.3.
Based on their distinction of types, we consider the simple TMG decision model $f^{TMG}_t$, where the parameter $t \in \{\mathrm{TRT}, \mathrm{CMP}, \mathrm{LB}\}$ is the voter type. It is defined as follows:

A TRT voter always votes for her most preferred candidate;

A CMP voter votes for her second preference if her most preferred candidate is ranked last in the poll, and truthfully otherwise;

An LB voter votes for the poll leader if her second preference is ranked first in the poll, and behaves as a CMP voter otherwise.
Local dominance with leader bias

Note that the findings of [16] indicate a strong tendency of bias towards the leader of the poll, which is not taken into account in the local dominance model. We thus consider a "leader-biased" variation of the local dominance model.
In Figure 1 we see that this model acts similarly to the LD model; however, when there is only one possible winner, it allows the voter to be leader-biased and vote for the poll leader (her fourth preference in the running example) instead of being truthful.
Black-box neural network predictor

Another baseline we used is a general black-box classifier. We extracted about 30 relevant features, including the poll scores, the differences in poll scores, the voter's utility, and the voter type as identified in [16]. The "decision model" then feeds all features to the classifier, which predicts one of the three candidates as the action. The classifier is a single-hidden-layer feed-forward neural network; the input nodes represent features that summarize the voter's preferences, the poll information provided to her, and the voter type. The classifier was implemented using the nnet package of R (https://cran.r-project.org/web/packages/nnet/index.html).

4.2 Evaluation metrics
Prediction and parameter fitting
The prediction was performed using the leave-one-out method. For each voter we excluded one of her rounds, one at a time; using the remaining rounds we learned the relevant model parameters and predicted the voter's action in the excluded round.
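The leave-one-out procedure can be sketched generically (names are ours); here it is instantiated with a k-pragmatist-style model whose parameter is fitted by a simple grid search over $k$ on the training rounds:

```python
def leave_one_out_predictions(rounds, fit, model):
    """rounds: list of (utility, poll, actual_vote) tuples for one voter;
    fit: maps a list of training rounds to a model parameter;
    model: (utility, poll, parameter) -> predicted vote."""
    preds = []
    for i, (u, s, _) in enumerate(rounds):
        train = rounds[:i] + rounds[i + 1:]   # all other rounds
        preds.append(model(u, s, fit(train)))
    return preds

# Example instantiation: fit the k of a k-pragmatist voter by accuracy.
def prag(u, s, k):
    leaders = sorted(range(len(s)), key=lambda c: s[c], reverse=True)[:k]
    return max(leaders, key=lambda c: u[c])

def fit_k(train):
    return max([1, 2, 3],
               key=lambda k: sum(prag(u, s, k) == v for u, s, v in train))
```

For a voter who always votes for the poll leader, the grid search recovers `k = 1` from the held-in rounds and predicts each held-out round accordingly.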
Confusion matrices

The predictions of a specific decision model result in a confusion matrix: each entry specifies how many times the model predicted one action while the actual voter action was another (a matrix where all off-diagonal entries are 0 indicates perfect prediction). Rows and columns are both ordered by the voter's preference. For example, in one confusion matrix from our data, in 441 samples (4.7% of the data) the studied model predicted one action but the voter selected another.
Performance measures

From the confusion matrix we compute standard measures for multi-class prediction problems [13]: precision and recall, as well as the f-measure, which is their harmonic mean, for every candidate $c$:
$$F_1(c) = \frac{2\,\mathrm{prec}(c)\,\mathrm{recall}(c)}{\mathrm{prec}(c)+\mathrm{recall}(c)}.$$
Since there are three possible actions, we compute a single f-measure by weighting each per-action f-measure by the number of times that action was played:
$$F_1 = \sum_{c\in C} \frac{n_c}{n} F_1(c),$$
where $n_c$ is the number of samples in which action $c$ was played out of $n$ samples in total.
An f-measure of 1 means that the decision model perfectly explains the data.
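The weighted f-measure computation from a confusion matrix can be sketched as follows (names are ours; rows are taken as the subject's actual actions and columns as the model's predictions, following the convention used for Figure 4):

```python
def weighted_f_measure(conf):
    """conf[i][j] counts samples where the subject played action i and
    the model predicted action j. Per-action f-measures are weighted by
    how often each action was actually played."""
    m = len(conf)
    total = sum(sum(row) for row in conf)
    f_total = 0.0
    for a in range(m):
        tp = conf[a][a]
        predicted = sum(conf[i][a] for i in range(m))   # column sum
        actual = sum(conf[a])                           # row sum
        p = tp / predicted if predicted else 0.0
        r = tp / actual if actual else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        f_total += (actual / total) * f
    return f_total
```

A perfectly diagonal matrix yields 1.0; a symmetric matrix like `[[3, 1], [1, 3]]` yields 0.75 for each action and hence 0.75 overall.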
5 Results and Analysis
Table 2 shows the f-measure of each decision model. We emphasize that the individual parameters of each voter were learned using leave-one-out to avoid overfitting. The results are separated into the different poll scenarios, as each reflects a different strategic decision. The f-measure is also presented graphically in the solid bars of Figure 3.
scenario  AU  LD  LD+LB  CV  Prag  TMG  NN  

A  0.902  0.902  0.902  0.902  0.902  0.902  0.904  
B  0.903  0.903  0.903  0.903  0.903  0.903  0.909  
C  0.734  0.389  0.691  0.389  0.610  0.626  0.697  
D  0.722  0.657  0.678  0.657  0.695  0.657  0.709  
E  0.736  0.486  0.680  0.642  0.708  0.655  0.704  
F  0.571  0.414  0.461  0.470  0.559  0.407  0.551  
total  0.759  0.591  0.706  0.676  0.729  0.708  0.739 
5.1 Main findings
From Table 2 and Figure 3 we can derive the following insights:

In Scenarios A and B, all decision models (except NN) predict that voters are always truthful, and thus have the same high performance.

In all scenarios C–F, the AU model outperforms all other models.

The "sophisticated" bounded-rational models CV and LD have the worst performance. In particular, they demonstrate poor performance in Scenario C, where voters' decisions are influenced by leader bias [16].

The pragmatist heuristic performs surprisingly well, considering its utter simplicity and the fact that it only allows three types of voters ($k=1$, $2$, or $3$).

The LB variant of local dominance strictly improves its performance, placing it roughly on par with the pragmatist heuristic and the neural network predictor.

Scenario F is the most difficult one for almost all models, with the best models having an f-measure slightly above 0.55.

5.2 Upper bound on Performance
The data we use to fit the parameters of each voter is sparse. Each voter has at most 20 samples, and in some scenarios only 1 or 2 samples (or none at all). Therefore, even leaving out a single sample may significantly hurt performance. To find the maximum explanatory power of each model, we recalculated the f-measure for each model using the entire dataset as both training set and test set. This approach is clearly susceptible to overfitting, so it only provides an upper bound on the predictive ability of each model.
5.3 Where are the errors?
Next, we dig deeper into the results. We want to see what kinds of mistakes the AU decision model tends to make, e.g., whether they concentrate on a specific subset of subjects or scenarios. These insights could later be used to improve the model and to design further experimental evaluations.
Errors by poll size
First, the model seems to perform equally well for all poll sizes (see Table 4 in the appendix). We thus conclude that poll size is not a major factor in explaining the prediction errors.
Errors by type



Figure 4 shows the confusion matrices of the AU decision model in all the "interesting" scenarios C–F. Recall that the off-diagonal entries indicate prediction errors, where the column is the predicted action and the row is the action actually taken by the subject.
As can be seen, most of the prediction errors in Scenario C are due to under-prediction of voting for the leader. In contrast, most of the errors in Scenario E are due to over-prediction of a strategic compromise. We can also see why Scenario F is the hardest, as all three actions are played frequently.
Errors by subject
Every decision model can capture the behavior of some human subjects better than others. To check how well different subjects are predicted, we computed the confusion matrix and f-measure for each of the 595 subjects, with actions predicted by $f^{AU}$. An f-measure of 1 means that all actions of the subject were predicted correctly.
Figure 5 shows the distribution of subjects’ individual fmeasures. We can see that about of the subjects are predicted very well (fmeasure over 0.9), predicted reasonably well (fmeasure over 0.8), and the rest of the subjects (about ) are with fmeasure less than 0.8.
This means that most of the prediction errors are due to a relatively small subset of subjects. One possible explanation is that these are the subjects who played fewer games, and therefore it is harder for the model to learn their parameters, however we get a similar distribution after omitting subjects who played under 10 games.
The main question is thus whether voters whose behavior is not predicted well follow a different decision model than AU, or are somehow inherently unpredictable.
Inherently inconsistent behavior
To answer the above question, we considered types of behavior that would be 'inherently unpredictable.' For example, [16] categorized a sample (an action $a$ under poll $\hat{s}$) as "unjustified" if there is another candidate $b$ that 'dominates' the selected action ($b$ is both more preferred than $a$ and ranked higher in $\hat{s}$). They showed that voters with at least two unjustified actions have a random component in their behavior.
We suggest an additional criterion that is based on inconsistency among a voter's own actions. We say that a sample $(\hat{s}, a)$ is inconsistent if there is another sample $(\hat{s}', a')$ of the same voter, such that: (i) $a' \neq a$; (ii) $\hat{s}'(a) \ge \hat{s}(a)$; and (iii) $\hat{s}'(c) \le \hat{s}(c)$ for all $c \neq a$. In words, $a$ is in a weakly better position in $\hat{s}'$, but still the voter prefers to vote for another candidate $a'$.
Figure 6 (left) shows the fmeasure of all voters, classified by their consistency type. We can see that the left tail of the histogram (i.e., almost all voters with low fmeasure) are either “unjustified” or “inconsistent.”
This might suggest that perhaps prediction cannot be significantly improved. We thus tested how many of the prediction errors themselves were of dominated/inconsistent actions. This can be seen in Figure 6 (right). The plaid gray bars represent all prediction errors that cannot be explained away as dominated or inconsistent actions.
We conclude that while most of the prediction errors are indeed due to voters that sometimes behave inconsistently, most of the actual errors are "plaid," especially in Scenario F. Thus there is still room for improvement of our decision models.
6 Discussion and Conclusions
It seems that the Attainability-Utility heuristic explains the behavior of most subjects in the data quite well, except for those with inherent inconsistencies in their replies. To improve the model, we can perform more experiments with different utilities for the candidates (different utility gaps, negative utilities, etc.) and with more than 3 candidates. Such experiments can expose behaviors that do not arise in the current data. Interestingly, the NN black-box model sometimes successfully predicts "unjustified" actions, and we can try to understand when this is possible.
6.1 Parameters distribution in the population
Since the $\alpha$ and $r$ parameters correspond to natural cognitive inclinations, their distribution can in principle reveal important information on the types of strategic voters in the population. Unfortunately, it is hard to discern clear patterns in the parameter distribution. It seems that the distribution of the $\alpha$ parameter is bimodal, with peaks at the two extreme values (see appendix). However, this is probably an artifact of the way we select parameters for subjects with a large range of optimal parameters (such as truthful voters). More experimentation under different conditions is required to better understand the population structure.
One pattern that does stand out is that $\alpha$ values seem to be much higher in the 'small $n$' condition. This may suggest that not only relative attainability matters: when $n$ is low and the voter has a substantial chance of being pivotal, the importance of utility increases.
References
 [1] Paul R Abramson, John H Aldrich, Phil Paolino, and David W Rohde. “sophisticated” voting in the 1988 presidential primaries. American Political Science Review, 86(1):55–69, 1992.
 [2] Anna Bassi. Voting systems and strategic manipulation: an experimental study. Technical report, mimeo, 2008.
 [3] André Blais, Robert Young, and Miriam Lapp. The calculus of voting: An empirical test. European Journal of Political Research, 37(2):181–201, 2000.
 [4] Clark Bowman, Jonathan K Hodge, and Ada Yu. The potential of iterative voting to solve the separability problem in referendum elections. Theory and decision, 77(1):111–124, 2014.
 [5] Samir Chopra, Eric Pacuit, and Rohit Parikh. Knowledge-theoretic properties of strategic voting. Presented in JELIA'04, Lisbon, Portugal, 2004.
 [6] Vincent Conitzer, Toby Walsh, and Lirong Xia. Dominating manipulations in voting with partial information. In AAAI'11, pages 638–643, 2011.
 [7] Robert Forsythe, Thomas Rietz, Roger Myerson, and Robert Weber. An experimental study of voting rules and polls in three-candidate elections. International Journal of Game Theory, 25(3):355–383, 1996.
 [8] Umberto Grandi, Andrea Loreggia, Francesca Rossi, Kristen Brent Venable, and Toby Walsh. Restricted manipulation in iterative voting: Condorcet efficiency and Borda score. In ADT'13, pages 181–192. Springer, 2013.
 [9] Reshef Meir. Plurality voting under uncertainty. In AAAI’15, 2015.
 [10] Reshef Meir, Omer Lev, and Jeffrey S. Rosenschein. A local-dominance theory of voting equilibria. In ACM-EC'14, pages 313–330, 2014.
 [11] Samuel Merrill. Strategic decisions under one-stage multi-candidate voting systems. Public Choice, 36(1):115–134, 1981.
 [12] Roger B. Myerson and Robert J. Weber. A theory of voting equilibria. The American Political Science Review, 87(1):102–114, 1993.
 [13] David Martin Powers. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. International Journal of Machine Learning Technology, 2(1):37–63, 2011.
 [14] Annemieke Reijngoud and Ulle Endriss. Voter response to iterated poll information. In AAMAS'12, pages 635–644, 2012.
 [15] William H Riker and Peter C Ordeshook. A theory of the calculus of voting. American political science review, 62(1):25–42, 1968.
 [16] Maor Tal, Reshef Meir, and Ya'akov (Kobi) Gal. A study of human behavior in online voting. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2015, Istanbul, Turkey, May 4–8, 2015, pages 665–673, 2015. Full version available from https://tinyurl.com/yczxugoj.
 [17] Marcelo Tyszler and Arthur Schram. Information and strategic voting. Experimental economics, 19(2):360–381, 2016.
 [18] Karine Van der Straeten, JeanFrançois Laslier, Nicolas Sauger, and André Blais. Strategic, sincere, and heuristic voting under four election rules: an experimental study. Social Choice and Welfare, 35(3):435–472, 2010.
Appendix A Additional Results
scenario  AU  LD  LD + LB  CV  Prag  TMG  NN 

A  0.902  0.902  0.902  0.902  0.902  0.902  0.918 
B  0.903  0.903  0.903  0.903  0.903  0.903  0.922 
C  0.870  0.389  0.819  0.389  0.703  0.693  0.741 
D  0.784  0.657  0.719  0.657  0.730  0.657  0.733 
E  0.813  0.676  0.795  0.730  0.766  0.659  0.744 
F  0.728  0.525  0.609  0.533  0.640  0.411  0.638 
total  0.841  0.671  0.795  0.708  0.781  0.720  0.784 
f-measure  
total  scenario C  scenario D  scenario E  scenario F  
0.775  0.747  0.767  0.735  0.534  
0.739  0.710  0.653  0.746  0.588  
0.735  0.703  0.705  0.703  0.598  
0.786  0.784  0.707  0.782  0.553 
Appendix B Black-box classifier
The "decision model" is a single-hidden-layer feed-forward neural network classifier. The model consists of a layer of input nodes representing the features in the dataset, a "hidden" layer of nodes, and an output layer that uses the softmax activation function to output a classification to one of the three possible actions. Data flows from input to output in one direction, with no recurrences, as opposed to recurrent neural networks. The input nodes are connected to the output nodes via the "hidden" layer nodes (neurons) by weighted edges. The algorithm is a supervised learning algorithm that takes a set of class-labeled records and iteratively adjusts the weights on the edges by comparing the output for each input record to its class label. The model is iterative and can easily be updated with records it has not yet seen. We use a configuration of 3 units in the "hidden" layer. In our domain we use a set of vote records consisting of raw and generated features; some of the generated features are normalized by the number of votes in the poll. The class label of each record is the preference the voter selected. Using feature selection techniques we selected the following features:

(a) Poll and preference information: candidates' poll votes, normalized poll gaps between candidates, the preference order, the normalized gap between the leader and the most preferred candidate, and the scenario, which is the combination of the preference order and the poll information.

(b) Voter information: A-ratios, i.e., the number of rounds in which the voter selected action A divided by the number of rounds in which it was available. A is the action, which we determine from the selected preference and the order of the preferences in the poll (namely the scenario). We also use the voter type feature, which can be one of {TRT, LB, OTHER} and is determined by thresholds over the A-ratio values. For example, if the TRT-ratio exceeds its threshold, then the voter type is TRT (truthful).
Roy Fairstein
Ben Gurion University
Israel
Adam Lauz
Ben Gurion University
Israel
Kobi Gal
Ben Gurion University
Israel
Reshef Meir
Technion - Israel Institute of Technology
Israel