Decision Theory with Resource-Bounded Agents

08/17/2013 ∙ Joseph Y. Halpern et al. ∙ Cornell University

There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein, charges an agent for doing costly computation; the second, initiated by Neyman, does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the "complexity" of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases.


1 Introduction

The standard approach to decision making, going back to Savage [21], suggests that an agent should maximize expected utility. But computing the relevant probabilities might be difficult, as might computing the relevant utilities. And even in cases where the probabilities and utilities are not hard to compute, finding the action that maximizes expected utility can be difficult. In this paper, we consider approaches to decision making that explicitly take computation into account.

The idea of taking computation into account goes back at least to the work of Good [5] and Simon [22]. It has been a prominent theme in the AI literature (see, e.g., [11, 19]). Our work has been inspired by two major lines of research aimed at capturing resource-bounded (i.e., computationally bounded) agents in the game theory literature. The first, initiated by Rubinstein [18], charges an agent for doing costly computation; the second, initiated by Neyman [13], does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory.

We consider the first approach in the context of the framework of [7], which in turn is a specialization of the framework of [8] to the case of a single agent. The idea is the following: We assume that the agent can be viewed as choosing an algorithm (i.e., a Turing machine); with each Turing machine (TM) M and input, we associate its complexity. The complexity can represent, for example, the running time of M on that input, the space used, the complexity of M itself (e.g., how many states it has), or the difficulty of finding M (some algorithms are easier to think of than others). We deliberately keep the complexity function abstract, to allow for the possibility of representing a number of different intuitions. The agent’s utility can then depend, not just on the payoff, but on the complexity. Thus, we can “charge” for the complexity of computation.

Although this approach seems quite straightforward, we show that it can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions).

We then consider the second approach: modeling agents as finite automata. Finite automata are a well-known basic model of computation. An automaton receives a sequence of inputs from the environment, and changes state in response to the input. We capture the fact that an agent is resource bounded by limiting the number of states in the automaton.

In the context of decision theory, this approach was first used by Wilson [24]. She examined a decision problem where an agent needs to make a single decision, whose payoff depends on the state of nature (which does not change over time). Nature is in one of two possible states, G (good) and B (bad). The agent gets signals, which are correlated with the true state, until the game ends, which happens at each step with some small probability. At this point, the agent must make a decision. For each n, Wilson characterizes an optimal n-state finite automaton for making a decision in this setting, under the assumption that the probability of the game ending in any given round is small (so that the agent gets information for many rounds). (See Section 3 for further details.) She then uses this characterization to argue that an optimal n-state automaton also exhibits behavior such as belief polarization (again, see Section 3). Thus, some observed human behavior can be explained by viewing people as resource-bounded, but rationally making the best use of their resources (in this case, the limited number of states).

Wilson’s model assumes that nature is static. But in many important problems, ranging from investing in the stock market to deciding which route to take when driving to work, the world is dynamic. Moreover, people do not make decisions just once, but must make them often. For example, when investing in stock markets, people get signals about the market, and need to decide after each signal whether to invest more money, take out money that they have already invested, or stick with their current position. In recent work [9], we consider the problem of constructing an optimal n-state automaton for this setting. We construct a family of quite simple automata, indexed by n, the number of states, and a few other parameters. We show that as n grows large, this family approaches optimal behavior. More precisely, for all δ > 0, there is a sufficiently large n and a member of this family with n states whose expected payoff is within δ of optimal, provided the probability of a state transition is sufficiently small. More importantly, the members of this family reproduce observed human behavior in a series of tasks conducted by Erev, Ert, and Roth [4] (see Section 3 for further discussion). Again, these results emphasize the fact that some observed human behavior can be explained by viewing people as rational, resource-bounded agents.

2 Charging for the complexity of computation

In this section we review and discuss the approach that we used in [7] to take cost of computation into account in decision theory. Much of the material below is taken from [7], which the reader is encouraged to consult for further details and intuition.

The framework that we use is essentially a single-agent version of what we called Bayesian machine games in [6, 8]. In a standard Bayesian game, each player i has a type in some set T_i, and makes a single move. Player i’s type can be viewed as describing i’s initial information: some facts that i knows about the world. We assume that an agent’s move consists of choosing a Turing machine. As we said in the introduction, associated with each Turing machine and type is its complexity. Given as input a type, the Turing machine outputs an action. The utility of a player depends on the type profile (i.e., the types of all the players), the action profile, and the complexity profile (that is, each player’s complexity).

Example 2.1

Suppose that an agent is given an input N, and is asked whether it is prime. The agent gets a payoff of $1,000 if he gives the correct answer, and loses $1,000 if he gives the wrong answer. However, he also has the option of playing safe, and saying “pass”, in which case he gets a payoff of $1. Clearly, many agents would say “pass” on all but simple inputs, where the answer is obvious, although what counts as a “simple” input may depend on the agent. (While primality testing is now known to be in polynomial time [1], and there are computationally efficient randomized algorithms that give the correct answer with extremely high probability [16, 23], we can assume that the agent has no access to a computer.) The agent’s type can then be taken to be the input. The set of possible types could be, for example, all integers, or all integers that can be written in binary using at most 40 digits (i.e., numbers less than 2^40). The agent can choose among a set of TMs, all of which output either “prime”, “not prime”, or “pass” (which can be encoded as 0, 1, and 2). One natural choice for the complexity of a pair (M, N) consisting of a TM M and an input N is the running time of M on input N.

Since the agent’s utility takes the complexity into account, this would justify the agent using a “quick-and-dirty” algorithm that is typically right about whether the input is prime; if the algorithm does not return an answer in a relatively small amount of time, the agent can just “pass”. The agent might prefer using such a TM rather than one that gives the correct answer but takes a long time doing so. We can capture this tradeoff in the agent’s utility function.
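To make the tradeoff concrete, here is a minimal Python sketch of such a quick-and-dirty agent: it runs trial division under a hard step budget and passes when the budget runs out. The step budget is an illustrative stand-in for the complexity charge; the payoffs are the ones from the example.

```python
def bounded_primality(n, step_budget=10_000):
    """Trial division with a hard cap on the number of division steps.

    Returns "prime", "not prime", or "pass" (budget exhausted).  The step
    budget is an illustrative stand-in for the complexity charge discussed
    in the text, not a value fixed by the framework.
    """
    if n < 2:
        return "not prime"
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if steps > step_budget:
            return "pass"              # answering would cost more than it is worth
        if n % d == 0:
            return "not prime"
        d += 1
    return "prime"

def payoff(answer, truly_prime):
    """Payoffs from Example 2.1: $1,000 if right, -$1,000 if wrong, $1 for passing."""
    if answer == "pass":
        return 1
    correct = (answer == "prime") == truly_prime
    return 1000 if correct else -1000

# 2**31 - 1 is prime, but verifying that by trial division takes about 46,000
# steps, so this agent rationally passes; an input with a small factor is easy.
print(bounded_primality(2**31 - 1))   # "pass"
print(bounded_primality(91))          # "not prime" (91 = 7 * 13)
```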

There are some issues that we need to deal with in order to finish modeling Example 2.1 in our framework. Note that if the agent chooses the TM after being given the number N, then one of two TMs is clearly optimal: if N is in fact prime, the TM that just says “prime” and halts; if N is not prime, the TM that says “not prime” and halts. The problem, of course, is that the agent does not know N. One approach to dealing with this problem is to assume that the agent chooses the TM before knowing N; this was the approach implicitly taken in [8]. But there is a second problem: we have implicitly assumed that the agent knows what the complexity of each (TM, input) pair is; otherwise, the agent could not maximize expected utility. Of course, in practice, an agent will not know how long it will take a Turing machine to compute an answer on a given input.

We deal with both of these problems by adding one more parameter to the model: a state of nature. In Example 2.1, we allow the TM’s running time and whether N is prime to depend on the state; this means that the “goodness” (i.e., accuracy) of the TM can depend on the state. We model the agent’s beliefs about the running time and accuracy of the TM using a probability distribution on these states. (This means that the states are what philosophers have called “impossible” possible worlds [10, 17]; for example, we allow states where N is taken to be prime even when it is not, or where a TM halts within some number of steps even if it does not. We need such states, which are inconsistent with the laws of mathematics, to model a resource-bounded agent’s possibly mistaken beliefs.)

We capture these ideas formally by taking a computational decision problem with types to be a tuple (S, T, A, Pr, C, O, u). We explain each of these components in turn. The first four components are fairly standard: S is a state space, T is a set of types, A is a set of actions, and Pr is a probability distribution on S × T (there may be correlation between states and types). In a standard decision-theoretic setting, there is a probability on states (not on states and types), and the utility function associates a utility with each (state, action) pair; intuitively, u(s, a) is the utility of performing action a in state s. Here things are more complicated, because we want the utility to also depend on the TM chosen. This is where the remaining components of the tuple come in.

The fifth component of the tuple, C, is the complexity function. Formally, C(M, t, s) is the complexity of running TM M on input t in state s. The complexity can be, for example, the running time of M on input t, the space used by M on input t, the number of states in M (this is the measure of complexity used by Rubinstein [18]), and so on. The sixth component of the tuple, O, is the output function; it captures the agent’s uncertainty about the TM’s output. Formally, O(M, t, s) is the output of M on input t in state s. Finally, an agent’s utility depends on the state s, his type t, and the action a, as is standard, and on the complexity. Since we describe the complexity by a natural number, we take the utility function u to map S × T × A × IN to IR (the reals). Thus, the expected utility of choosing a TM M in the decision problem is the expectation, with respect to Pr, of u(s, t, O(M, t, s), C(M, t, s)). Note that now the utility function gets the complexity of M as an argument. The next example should clarify the role of all these components.

Example 2.1 (cont’d): We now have the machinery to formalize Example 2.1. We take the type space T to consist of all natural numbers less than 2^40; the agent must determine whether the type is prime. The agent can choose either 0 (the number is not prime), 1 (the number is prime), or 2 (pass); thus, A = {0, 1, 2}. Let 𝕄 be some set of TMs that can be used to test for primality. As suggested above, the state space S is used to capture the agent’s uncertainty about the output of a TM M and the complexity of M. Thus, for example, if the agent believes that the TM M will output “pass” with probability α, then the set of states s such that O(M, t, s) = 2 has probability α. We take the complexity C(M, t, s) to be 1 if M computes the answer within some fixed number of steps on input t in state s, and 10 otherwise. (Think of this bound as representing a hard deadline.) If, for example, the agent does not know the running time of a TM M, but ascribes probability β to M finishing within the deadline on input t, then the set of states s such that C(M, t, s) = 1 has probability β. We assume that there is a function P that captures the agent’s uncertainty regarding primality: P(s, t) = 1 if t is prime in state s, and 0 otherwise. (Thus, we are allowing “impossible” states, where t is viewed as prime even if it is not.) If the agent believes that TM M gives the right answer with probability γ, then the set of states where O(M, t, s) = P(s, t) has probability γ. Finally, the utility u(s, t, a, c) rewards the agent along the lines described above: a payoff of $1,000 if a is 0 or 1 and it is the correct answer in state s (i.e., a = P(s, t)), a loss of $1,000 if it is the wrong answer, and $1 for passing, with the complexity c charged against these payoffs. Thus, if the agent is sure that M always gives the correct output, then O(M, t, s) = P(s, t) for all states s and types t.
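To see how the components fit together, the following Python sketch computes the expected utility of a chosen TM in a small finite instance of a computational decision problem; the argument names simply mirror the components S, T, A, Pr, C, O, and u described above, and any concrete numbers plugged in are up to the modeler.

```python
from itertools import product

def expected_utility(M, states, types, prior, output, complexity, utility):
    """Expected utility of choosing TM M in a finite computational decision
    problem (S, T, A, Pr, C, O, u).

    prior[(s, t)]       -- probability Pr of state s together with type t
    output(M, t, s)     -- the action O(M, t, s) produced by M on input t in state s
    complexity(M, t, s) -- the complexity charge C(M, t, s)
    utility(s, t, a, c) -- the utility u(s, t, a, c)
    """
    total = 0.0
    for s, t in product(states, types):
        p = prior.get((s, t), 0.0)
        if p > 0.0:
            a = output(M, t, s)
            c = complexity(M, t, s)
            total += p * utility(s, t, a, c)
    return total

# A rational (but computation-charged) agent picks the TM with the highest
# expected utility from whatever menu of TMs is available:
# best = max(machines, key=lambda M: expected_utility(M, states, types,
#                                                     prior, output, complexity, utility))
```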

Example 2.2 (Biases in information processing)

Psychologists have observed many systematic biases in the way that individuals update their beliefs as new information is received (see [14] for a survey). In particular, a first-impressions bias has been observed: individuals put too much weight on initial signals and less weight on later signals. As they become more convinced that their beliefs are correct, many individuals even seem to simply ignore all information once they reach a confidence threshold. Several papers in behavioral economics have focused on identifying and modeling some of these biases (see, e.g., [14] and the references therein, [12], and [15]). In particular, Mullainathan [12] makes a potential connection between memory and biased information processing, using a model that makes several explicit (psychology-based) assumptions on the memory process (e.g., that the agent’s ability to recall a past event depends on how often he has recalled the event in the past). More recently, Wilson [24] demonstrated a similar connection when modeling agents as finite automata, but her analysis is complex (and holds only in the limit).

As we now show, the first-impression-matters bias can be easily explained if we assume that there is a small cost for “absorbing” new information. Consider the following simple game (which is very similar to the one studied by Mullainathan [12] and Wilson [24]). The state of nature is a bit b that is 1 with probability 1/2. For simplicity, we assume that the agent has no uncertainty about the “goodness” or output of a TM; the only uncertainty involves whether b is 0 or 1. An agent receives as his type a sequence of independent samples, each of which equals b with probability ρ > 1/2. The samples correspond to signals the agent receives about b. The agent is supposed to output a guess for the bit b. If the guess is correct, he receives 1 − mc as utility, and −mc otherwise, where m is the number of bits of the type he read, and c is the cost of reading a single bit (c should be thought of as the cost of absorbing/interpreting information). It seems reasonable to assume that c > 0: signals usually require some effort to decode (such as reading a newspaper article, or attentively watching a movie). If c > 0, it easily follows from the Chernoff bound (see [2]) that after reading a certain (fixed) number of signals, the agent will have a sufficiently good estimate of the probability that b = 1 that the marginal cost of reading one extra signal is higher than the expected gain from finding out the value of b. That is, after processing a certain number of signals, agents will disregard all future signals and base their guess only on the initial sequence. We omit the straightforward details. Essentially the same approach allows us to capture belief polarization.

Suppose for simplicity that two agents start out with slightly different beliefs regarding the value of some random variable X (think of X as representing something like “O.J. Simpson is guilty”), and get the same sequence of evidence regarding the value of X. (Thus, the type now consists of the initial belief, which can, for example, be modeled as a probability or as a sequence of evidence received earlier, together with the new sequence of evidence.) Both agents update their beliefs by conditioning. As before, there is a cost of processing a piece of evidence, so once an agent gets sufficient evidence for either X = 0 or X = 1, he will stop processing any further evidence. If the initial evidence supports X = 1, but the later evidence supports X = 0 even more strongly, the agent who was initially inclined towards X = 1 may raise his belief above the threshold, and thus stop processing, believing that X = 1, while the agent who was initially inclined towards X = 0 will continue processing and eventually believe that X = 0.
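The following Python sketch makes the mechanism concrete: two Bayesian agents with slightly different priors read the same signals, but each stops for good once his belief is extreme enough. Here a simple fixed threshold stands in for the cost–benefit calculation sketched above, and the priors, signal accuracy, and threshold are illustrative.

```python
def update(p, signal, accuracy):
    """One step of Bayesian conditioning: p is the current probability that
    the hidden bit b = 1, and each signal equals b with the given accuracy."""
    like1 = accuracy if signal == 1 else 1 - accuracy
    like0 = accuracy if signal == 0 else 1 - accuracy
    return p * like1 / (p * like1 + (1 - p) * like0)

def lazy_update(prior, signals, accuracy, threshold):
    """Process signals one at a time, but stop permanently once the belief is
    within `threshold` of 0 or 1 -- the point at which (in the model of the
    text) the expected gain from one more signal is below the cost c."""
    p, read = prior, 0
    for s in signals:
        if p <= threshold or p >= 1 - threshold:
            break                      # all further evidence is ignored
        p = update(p, s, accuracy)
        read += 1
    return p, read

# Same evidence for both agents: early signals favor b = 1, later (and more
# numerous) signals favor b = 0.
evidence = [1, 1, 0, 0, 0, 0, 0]
for prior in (0.65, 0.35):
    belief, read = lazy_update(prior, evidence, accuracy=0.7, threshold=0.1)
    print(f"prior {prior}: read {read} signals, final belief that b = 1 is {belief:.2f}")
# The agent with prior 0.65 stops after two signals, convinced that b = 1;
# the agent with prior 0.35 keeps reading and ends up convinced that b = 0.
```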

As shown in [7], we can also use this approach to explain the status quo bias (people are much more likely to stick with what they already have) [20].

Value of computational information and value of conversation:

Having a computational model of decision making also allows us to reconsider a standard notion from decision theory, value of information, and extend it in a natural way so as to take computation into account. Value of information is meant to be a measure of how much an agent should be willing to pay to receive new information. The idea is that, before receiving the information, the agent has a probability on a set of relevant events and chooses the action that maximizes his expected utility, given that probability. If he receives new information, he can update his probabilities (by conditioning on the information) and again choose the action that maximizes his expected utility. The difference between the expected utility before and after receiving the information is the value of the information.

In many cases, an agent seems to be receiving valuable information that is not about what seem to be the relevant events. This means that we cannot do a value of information calculation, at least not in the obvious way.

For example, suppose that the agent is interested in learning a secret, which we assume for simplicity is a number between 1 and 1000. A priori, suppose that the agent takes each number to be equally likely, so each has probability 1/1000. Learning the secret has utility, say, $1,000,000; not learning it has utility 0. The number is locked in a safe, whose combination is a 40-digit binary number. Intuitively, learning the first 20 digits of the safe’s combination gives the agent some valuable information. But this is not captured when we do a standard value-of-information calculation; learning this information has no impact at all on the agent’s beliefs regarding the secret.

Although this example is clearly contrived, there are many far more realistic situations where people are clearly willing to pay for information to improve computation. For example, companies pay to learn about a manufacturing process that will speed up production; people buy books on speedreading; and faster algorithms for search are clearly considered valuable.

Once we bring computation into decision making, the standard definition of value of information can be applied to show that there is indeed a value to learning the first 20 digits of the combination, and to buying a more powerful computer; expected utility can increase. (See [6] for details.)

But now we can define a new notion: value of conversation. The value of information considers the impact of learning the value of a random variable; by taking computation into account, we can extend this to consider the impact of learning a better algorithm. We can further extend to consider the impact of having a conversation. The point of a conversation is that it allows the agent to ask questions based on history. For example, if the agent is trying to guess a number chosen uniformly at random between 1 and 100, and receives utility of 100 if he guesses it right, having a conversation with a helpful TM that will correctly answer seven yes/no questions is quite valuable: as is well known, with seven questions, the agent can completely determine the number using binary search. The computational framework allows us to make this intuition precise. Again, we encourage the reader to consult [6] for further details.
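As a small worked illustration of the binary-search argument, the sketch below pins down a secret number between 1 and 100 using at most seven yes/no questions, each chosen on the basis of the answers so far, which is exactly what a conversation permits.

```python
def guess_with_conversation(secret, low=1, high=100):
    """Binary search: each question ("is the number <= mid?") depends on the
    answers to the previous questions, which is what a conversation allows."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret <= mid:      # the helpful TM answers the question truthfully
            high = mid
        else:
            low = mid + 1
    return low, questions

# Seven questions always suffice for a number in 1..100, since 2**7 = 128 >= 100.
results = [guess_with_conversation(s) for s in range(1, 101)]
assert all(found == s for (found, _), s in zip(results, range(1, 101)))
assert max(q for _, q in results) <= 7
```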

3 Modeling people as rational finite automata

We now consider the second approach discussed in the introduction, that of modeling the fact that an agent can do only bounded computation. Perhaps the first to do this in the context of decision theory was Wilson [24], whose work we briefly mentioned earlier. Recall that Wilson considers a decision problem where an agent needs to make a single decision. Nature is in one of two possible states, G (good) and B (bad), which does not change over time; the agent’s payoff depends on the action she chooses and the state of nature. The agent gets one of k signals, which are correlated with nature’s state: signal i has probability p_i of appearing when the state of nature is G, and probability q_i of appearing when the state is B. We assume that the agent gets exactly one signal at each time step, so that p_1 + … + p_k = q_1 + … + q_k = 1. This is a quite standard situation. For example, an agent may be on a jury, trying to decide guilt or innocence, or a scientist trying to determine the truth of a theory.

Clearly, the agent should try to learn nature’s state, so as to make an optimal decision. With no computational bounds, an agent could just keep track of all the evidence it has seen (i.e., the number of signals of each type), in order to make the best possible decision. However, a finite automaton cannot do this. Wilson characterizes the optimal n-state automaton for making a decision in this setting, under the assumption that the probability that the agent has to make the decision in any given round is small. Specifically, she shows that an optimal n-state automaton ignores all but two signals (the “best” signal for each of nature’s states). The automaton’s states can be laid out “linearly”, as states 0, …, n − 1, and the automaton moves left (with some probability) only if it gets a strong signal for state B (and it is not in state 0), and moves right (with some probability) only if it gets a strong signal for state G (and it is not in state n − 1). Roughly speaking, the lower the current state of the automaton, the more likely, from the automaton’s viewpoint, that nature’s state is B.

The probability of moving left or right, conditional on receiving the appropriate signal, may vary from state to state. In particular, if the automaton is in state 0, the probability that it moves right is very low. Intuitively, in state 0, the automaton is “convinced” that nature’s state is B, and it is very reluctant to give up on that belief. Similarly, if the automaton is in state n − 1, the probability that it will move left is very low.

Wilson argues that these results can be used to explain observed biases in information processing, such as belief polarization. For suppose that the true state of nature is G. Consider two 5-state automata, A1 and A2. Suppose that automaton A1 starts in state 1, while automaton A2 starts in state 2. (We can think of the starting state as reflecting some initial bias, for example.) They both receive the same information. Initially, the information is biased towards B, so both automata move left; A1 moves to state 0, and A2 moves to state 1. Now the evidence starts to shift towards the true state of the world, G. But since it is harder to “escape” from state 0, A1 stays in state 0, while A2 moves to state 4. Thus, when the automata are called upon to decide, A1 makes the decision appropriate for B, while A2 makes the decision appropriate for G. A similar argument shows how this approach can be used to explain the first-impression bias. The key point is that the order in which evidence is received can make a big difference to an optimal finite automaton, although it should make no difference to an unbounded agent.
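A small simulation illustrates the effect; the transition probabilities and the signal sequence below are illustrative choices, not Wilson’s exact values. Two copies of the linear automaton that differ only in their starting state see the same signals, yet often end up at opposite ends.

```python
import random

def run_linear_automaton(start, signals, n=5, p_move=0.9, p_escape=0.02):
    """Wilson-style automaton with states 0..n-1: a signal favoring B (-1)
    moves it left and a signal favoring G (+1) moves it right, each with
    probability p_move, except that leaving the end states 0 and n-1 is much
    harder (probability p_escape).  All probabilities here are illustrative."""
    state = start
    for sig in signals:
        if sig == +1 and state < n - 1:
            if random.random() < (p_escape if state == 0 else p_move):
                state += 1
        elif sig == -1 and state > 0:
            if random.random() < (p_escape if state == n - 1 else p_move):
                state -= 1
    return state

# One early signal favoring B, then a long run of signals favoring G (the true state).
evidence = [-1] + [+1] * 10
for start in (1, 2):
    finals = [run_linear_automaton(start, evidence) for _ in range(10_000)]
    stuck = sum(f == 0 for f in finals) / len(finals)
    print(f"start state {start}: fraction of runs ending in state 0 (deciding B): {stuck:.2f}")
# The automaton that starts in state 1 often gets trapped in state 0 and decides B;
# the one that starts in state 2 almost always climbs to state 4 and decides G.
```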

Wilson’s model assumes that the state of nature never changes. In recent work, we consider what happens if we allow nature’s state to change [9]. We consider a model that is intended to capture the most significant features of such a dynamic situation. As in Wilson’s model, we allow nature to be in one of a number of different states, and assume that the agent gets signals correlated with nature’s state. But now we allow nature’s state to change, although we assume that the probability of a change is low. (Without this assumption, the signals are not of great interest.)

For definiteness, assume that nature is in one of two states, which we again denote G and B. Let ε be the probability of transitioning from G to B or from B to G in any given round. Thus, we assume for simplicity that these probabilities are history-independent, and the same for each of the two transitions. (Allowing different probabilities in each direction does not impact the results.) The agent has two possible actions, S (safe) and R (risky). If he plays S, he gets a payoff of 0; if he plays R, he gets a payoff x_G > 0 when nature’s state is G, and a payoff x_B < 0 when nature’s state is B. The agent does not learn his payoff but, as in Wilson’s model, gets one of k signals, whose probability is correlated with nature’s state. However, unlike Wilson’s model, the agent gets a signal only if he plays the risky action R; he does not get a signal if he plays the safe action S. We call the collection of these parameters a setting. A setting is nontrivial if there exists some signal whose probability of appearing differs in states G and B. If a setting is trivial, then no signal enables the agent to distinguish whether nature is in state G or B; the agent does not learn anything from the signals. For a given nontrivial setting, we are interested in finding an automaton that has high average utility when we let the number of rounds go to infinity.

Unlike Wilson, we were not able to find a characterization of the optimal n-state automaton. However, we were able to find a family of quite simple automata that do very well in practice and, in the limit, approach the optimal payoff. A typical member of this family is parameterized by the number of states n, a partition of the signals, and a few transition probabilities. The automaton has n states, again denoted 0, …, n − 1. State 0 is dedicated to playing S; in all other states, R is played. As in Wilson’s optimal automaton, only “strong” signals are considered; the rest are ignored. More precisely, the signals are partitioned into three sets, P (for “positive”), N (for “negative”), and I (for “ignore” or “indifferent”), with P and N nonempty. The signals in P make it likely that nature’s state is G, and the signals in N make it likely that the state of nature is B. The agent chooses to ignore the signals in I; they are viewed as not being sufficiently informative as to the true state of nature. (Note that I is determined by P and N.)

In each round while in state 0, the agent moves to state 1 with some probability ρ_0. In a state j > 0, if the agent receives a signal in P, the agent moves to state j + 1 with probability ρ_up (unless he is already in state n − 1, in which case he stays in state n − 1 if he receives a signal in P); thus, ρ_up can be thought of as the probability that the agent moves up if he gets a positive signal. If the agent receives a signal in N, the agent moves to state j − 1 with probability ρ_down (so ρ_down is the probability of moving down if he gets a signal in N); if he receives a signal in I, the agent does not change state. Clearly, this automaton is easy for a human to implement (at least, if it does not have too many states). Because the state of nature can change, it is clearly not optimal to make the states 0 and n − 1 “sticky”. In particular, an optimal agent has to be able to react reasonably quickly to a change from G to B, so as to recognize that he should play S.
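The following simulation sketch implements the transition rule just described and estimates the automaton’s long-run average payoff. The two-signal distribution (so that I is empty), the payoffs x_G = 1 and x_B = −1, and all of the probabilities are illustrative choices rather than the values analyzed in [9].

```python
import random

def average_payoff(n, rho_0, rho_up, rho_down, eps, rounds,
                   x_g=1.0, x_b=-1.0, p_pos=(0.8, 0.2)):
    """Simulate the n-state automaton described in the text.

    States 0..n-1; state 0 plays the safe action S (payoff 0), every other
    state plays the risky action R (payoff x_g when nature is in G, x_b in B).
    A signal is observed only after playing R: it is "positive" with
    probability p_pos[0] in state G and p_pos[1] in state B, else "negative".
    """
    nature = random.choice("GB")
    state, total = 0, 0.0
    for _ in range(rounds):
        if random.random() < eps:                  # nature changes state
            nature = "B" if nature == "G" else "G"
        if state == 0:                             # play S; occasionally explore
            if random.random() < rho_0:
                state = 1
            continue
        total += x_g if nature == "G" else x_b     # play R and observe a signal
        positive = random.random() < (p_pos[0] if nature == "G" else p_pos[1])
        if positive:
            if state < n - 1 and random.random() < rho_up:
                state += 1
        elif random.random() < rho_down:
            state -= 1                             # may fall back to the safe state 0
    return total / rounds

# Illustrative run: with a small change probability eps, a 5-state automaton of
# this form already earns a payoff well above 0 (the optimum here is x_g / 2 = 0.5).
print(average_payoff(n=5, rho_0=0.05, rho_up=0.9, rho_down=0.5,
                     eps=0.001, rounds=200_000))
```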

Erev, Ert, and Roth [4] describe contests that attempt to test various models of human decision making under uncertain conditions. In their scenarios, people were given a choice between making a safe move (that had a guaranteed constant payoff) and a “risky” move (whose payoff changed according to an unobserved action of other players), in the spirit of our S and R moves. They challenged researchers to present models that would predict behavior in these settings. The model that did best was the BI-Saw (bounded memory, inertia, sampling and weighting) model, suggested by Chen et al. [3], itself a refinement of the I-Saw model suggested by Erev, Ert, and Roth [4]. This model has three response modes: exploration, exploitation, and inertia. An I-Saw agent proceeds as follows. The agent tosses a coin. If it lands heads, the agent plays the action other than the one he played in the previous step (exploration); if it lands tails, he continues to do what he did in the previous step (inertia), unless the signal received in the previous round crosses a probabilistic “surprise” trigger (the lower the probability of the signal observed in the current state, the more likely the trigger is to be crossed); if the surprise trigger is crossed, then the agent plays the action with the best estimated subjective value, based on some sampling of the observations seen so far (exploitation). The major refinement suggested by BI-Saw involves adding a bounded-memory assumption, whose main effect is a greater reliance on a small sample of past observations.
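For concreteness, here is a minimal sketch of one decision step of an I-Saw-style agent as just described; the exploration probability, the way the surprise trigger is made probabilistic, and the sample size are illustrative choices, not the calibrated values from [3] or [4].

```python
import random

def i_saw_step(prev_action, prev_signal_prob, history,
               explore_prob=0.1, sample_size=4):
    """One decision step of an I-Saw-style agent.

    prev_action      -- "S" or "R", what the agent did last round
    prev_signal_prob -- probability the agent assigned to the signal he then
                        observed (a low probability means a surprising signal)
    history          -- list of (action, payoff) pairs observed so far
    """
    if random.random() < explore_prob:                 # exploration
        return "S" if prev_action == "R" else "R"
    if random.random() < prev_signal_prob:             # inertia: no surprise
        return prev_action
    # Surprise trigger crossed: exploit a small sample of past observations
    # (the bounded-memory flavor of BI-Saw).
    sample = random.sample(history, min(sample_size, len(history)))
    totals, counts = {"S": 0.0, "R": 0.0}, {"S": 0, "R": 0}
    for action, pay in sample:
        totals[action] += pay
        counts[action] += 1
    estimate = {a: totals[a] / counts[a] if counts[a] else 0.0 for a in ("S", "R")}
    return max(estimate, key=estimate.get)
```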

The suggested family of automata incorporates all three behavior modes described by the I-Saw model. When the automaton is in state 0, the agent explores with constant probability by moving to state 1. In a state j > 0, the agent continues to do what he did before (in particular, he stays in state j) unless he gets a “meaningful” signal (one in P or N), and even then he reacts only with some probability, so we have inertia-like behavior. If he does react, then he exploits the information that he has, which is carried by his current state; that is, he performs the action most appropriate according to his state. The state can be viewed as representing a sample of the last few signals (each state represents remembering seeing one more “good” signal), as in the BI-Saw model. Thus, our family of automata can be viewed as an implementation of the BI-Saw model using small, simple finite automata.

These automata do quite well, both theoretically and in practice. Note that even if the agent had an oracle that told him exactly what nature’s state would be at every round, then by performing optimally he could get only x_G in the rounds when nature is in state G, and 0 in the rounds when it is in state B. In expectation, nature is in state G only half the time, so the optimal expected payoff is x_G/2.

The following result shows that if ε goes to 0 sufficiently quickly, then the agent can achieve a payoff arbitrarily close to this theoretical optimum using an automaton of the form described above, even without the benefit of an oracle, by choosing the parameters appropriately. (Here the expected average utility of such an automaton is computed under the assumption that the state of nature changes with probability ε in each round.)

Theorem 3.1

[9] (informally stated) For every nontrivial setting and every δ > 0, there is a partition of the signals into P, N, and I, a choice of the transition probabilities, and a number of states n such that, when the probability ε of a change in nature’s state is sufficiently small, the expected average utility of the resulting n-state automaton is within δ of the optimum x_G/2.

While Theorem 3.1 gives theoretical support to the claim that automata of this form are reasonable choices for a resource-bounded agent, it is also interesting to see how they do in practice, for relatively small values of n. The experimental evidence [9] suggests that they do well. For example, suppose for definiteness that x_G = 1 and x_B = −1, and that there are four signals, one of which is much more likely in state G, one much more likely in state B, and the remaining two roughly equally likely in both states. Further suppose that we take the signal that is much more likely in G to be the “good” signal (so P contains just that signal), the signal that is much more likely in B to be the “bad” signal (so N contains just that signal), and ignore the other two (so they make up I). Experiments showed that, using the optimal values of the transition probabilities (which depend on the number of states), an automaton with only a few states already gets an expected payoff not far from the optimum; even with 2 states, it gets a substantial fraction of it. Recall that even with access to an oracle that reveals when nature changes state, the best the agent can hope to get is x_G/2 = 1/2. On the other hand, an agent that just plays randomly, or always plays R, or always plays S, will get 0. These results are quite robust. For example, the payoff does not vary much if we just use the transition probabilities that are optimal for one fixed number of states, instead of re-optimizing them for each value of n. The bottom line here is that, by thinking in terms of the algorithms used by bounded agents, actions that may seem irrational can be viewed as quite rational responses, given resource limitations.

4 Discussion and Conclusion

We have discussed two approaches for capturing resource-bounded agents. The first allows them to choose a TM to play for them, but charges them for the “complexity” of the choice; the second models agents as finite automata, and captures resource-boundedness by restricting the number of states of the automaton. In both cases, agents are assumed to maximize utility. Both approaches can be used to show that some systematic deviations from rationality (e.g., belief polarization) can be viewed as the result of resource-bounded agents making quite rational choices. We believe that the general idea of not viewing behavior as “irrational”, but rather the outcome of resource-bounded agents making rational choices, will turn out to be useful for explaining other systematic biases in decision making and, more generally, behavior in decision problems. We would encourage work to find appropriate cost models, and simple, easy-to-implement strategies with low computational cost that perform well in real scenarios.

Note that the two approaches are closely related. For example, we can easily come up with a cost model and a class of TMs that result in agents choosing an automaton with a given number of states to play for them, simply by charging appropriately for the number of states. Which approach is used in a particular analysis depends on whether the interesting feature is the choice of the complexity function (which can presumably be tested experimentally) or the specific details of the algorithm. We are currently exploring both approaches in the context of the behavior of agents in financial markets. In particular, we are looking for simple, easy-to-implement strategies that explain human behavior in this setting.

References

  • [1] M. Agrawal, N. Kayal, and N. Saxena. PRIMES is in P. Annals of Mathematics, 160:781–793, 2004.
  • [2] N. Alon and J. H. Spencer. The Probabilistic Method. Wiley, New York, 2004.
  • [3] W. Chen, S.-Y. Liu, C.-H. Chen, and Y.-S. Lee. Bounded memory, inertia, sampling and weighting model for market entry games. Games, 2(1):187–199, 2011.
  • [4] I. Erev, E. Ert, and A.E. Roth. A choice prediction competition for market entry games: An introduction. Games, 1:117–136, 2010.
  • [5] I. J. Good. Rational decisions. Journal of the Royal Statistical Society, Series B, 14:107–114, 1952.
  • [6] J. Y. Halpern and R. Pass. Game theory with costly computation. In Proc. First Symposium on Innovations in Computer Science, 2010. Available at http://conference.itcs.tsinghua.edu.cn/ICS2010/content/paper/Paper_11.pdf.
  • [7] J. Y. Halpern and R. Pass. I don’t want to think about it now: Decision theory with costly computation. In Principles of Knowledge Representation and Reasoning: Proc. Twelfth International Conference (KR ’10), pages 182–190, 2010.
  • [8] J. Y. Halpern and R. Pass. Algorithmic rationality: Game theory with costly computation. 2011. Available at www.cs.cornell.edu/home/halpern/papers/algrationality.pdf; to appear, Journal of Economic Theory. A preliminary version with the title “Game theory with costly computation” appears in Proc. First Symposium on Innovations in Computer Science, 2010.
  • [9] J. Y. Halpern, R. Pass, and L. Seeman. I’m doing as well as I can: Modeling people as rational finite automata. In Proc. Twenty-Sixth National Conference on Artificial Intelligence (AAAI ’12), 2012.
  • [10] J. Hintikka. Impossible possible worlds vindicated. Journal of Philosophical Logic, 4:475–484, 1975.
  • [11] E. Horvitz. Reasoning about beliefs and actions under computational resource constraints. In Proc. Third Workshop on Uncertainty in Artificial Intelligence (UAI ’87), pages 429–444, 1987.
  • [12] S. Mullainathan. A memory-based model of bounded rationality. Quarterly Journal of Economics, 117(3):735–774, 2002.
  • [13] A. Neyman. Bounded complexity justifies cooperation in finitely repeated prisoner’s dilemma. Economics Letters, 19:227–229, 1985.
  • [14] M. Rabin. Psychology and economics. Journal of Economic Literature, XXXVI:11–46, 1998.
  • [15] M. Rabin and J. Schrag. First impressions matter: A model of confirmatory bias. Quarterly Journal of Economics, 114(1):37–82, 1999.
  • [16] M. O. Rabin. Probabilistic algorithm for testing primality. Journal of Number Theory, 12:128–138, 1980.
  • [17] V. Rantala. Impossible worlds semantics and logical omniscience. Acta Philosophica Fennica, 35:18–24, 1982.
  • [18] A. Rubinstein. Finite automata play the repeated prisoner’s dilemma. Journal of Economic Theory, 39:83–96, 1986.
  • [19] S.J. Russell and D. Subramanian. Provably bounded-optimal agents. Journal of A.I. Research, 2:575–609, 1995.
  • [20] W. Samuelson and R. Zeckhauser. Status quo bias in decision making. Journal of Risk and Uncertainty, 1:7–59, 1988.
  • [21] L. J. Savage. Foundations of Statistics. Wiley, New York, 1954.
  • [22] H. A. Simon. A behavioral model of rational choice. Quarterly Journal of Economics, 69:99–118, 1955.
  • [23] R. Solovay and V. Strassen. A fast Monte Carlo test for primality. SIAM Journal on Computing, 6(1):84–85, 1977.
  • [24] A. Wilson. Bounded memory and biases in information processing. Manuscript, 2002.