Soccer dribbling consists of the ability of a soccer agent to go from the beginning to the end of a region while keeping possession of the ball, as an adversary attempts to gain possession. In this work, we focus on the dribbler’s learning process, i.e., the learning of an effective policy that determines a good action for the dribbler to take at each decision point.
We study the soccer dribbling task using the RoboCup soccer simulator [1]. Specific details of this simulator increase the complexity of the learning process. For example, besides the adversarial and real-time environment, agents’ perceptions and actions are noisy and asynchronous.
We model the soccer dribbling task as a reinforcement learning problem. Our solution to this problem combines the Sarsa algorithm with CMAC for function approximation. Despite the fact that the resulting learning algorithm is not guaranteed to converge to the optimal policy in all cases, many lines of evidence suggest that it converges to near-optimal policies (for example, see [2, 3, 4, 5]).
Besides this introductory section, the rest of this paper is organized as follows. In the next section, we describe the soccer dribbling task. In Section 3, we show how to map this task onto an episodic reinforcement learning framework. In Sections 4 and 5, we present, respectively, the reinforcement learning algorithm and its results against a strong adversary. In Section 6, we review the literature related to our work. In Section 7, we conclude and present future research directions.
II. Soccer Dribbling
Soccer dribbling is a crucial skill for an agent to become a successful soccer player. It consists of the ability of a soccer agent, henceforth called the dribbler, to go from the beginning to the end of a region keeping possession of the ball, while an adversary attempts to gain possession. We can see soccer dribbling as a subproblem of the complete soccer domain. The main simplification is that the players involved are only focused on specific goals, without worrying about team strategies or unrelated individual skills (e.g., passing and shooting). Nevertheless, a successful policy learned by the dribbler can be used in the complete soccer domain whenever a soccer agent faces a dribbling situation.
Since our focus is on the dribbler’s learning process, an omniscient coach agent is used to manage the play. At the beginning of each trial (episode), the coach resets the location of the ball and of the players within a training field. The dribbler is placed in the center-left region together with the ball. The adversary is placed in a random position with the constraint that it does not start with possession of the ball. An example of a starting configuration is shown in Figure 1.
Whenever the adversary gains possession for a set period of time or when the ball goes out of the training field by crossing either the left line or the top line or the bottom line, the coach declares the adversary as the winner of the episode. If the ball goes out of the training field by crossing the right line, then the winner is the first player to intercept the ball. After declaring the winner of an episode, the coach resets the location of the players and of the ball within the training field and starts a new episode. Thus, the dribbler’s goal is to reach the right line that delimits the training field with the ball. We call this task the soccer dribbling task.
We argue that the soccer dribbling task is an excellent benchmark for comparing different machine learning techniques since it involves a complex problem, and it has a well-defined objective, which is to maximize the number of episodes won by the dribbler. We study the soccer dribbling task using the RoboCup soccer simulator.
The RoboCup soccer simulator operates in discrete time steps, each representing 100 milliseconds of simulated time. Specific details of this simulator increase the complexity of the learning process. For example, random noise is injected into all perceptions and actions. Further, agents must sense and act asynchronously. Each soccer agent receives visual information about other objects every 150 milliseconds, e.g., its distance from other players in its current field of view. Each agent also has a body sensor, which detects its current “physical status” every 100 milliseconds, e.g., that agent’s stamina and speed. Agents may execute a parameterized primitive action every 100 milliseconds, e.g., turn(angle), dash(power), and kick(power, angle). Full details of the RoboCup soccer simulator are presented by Chen et al. [6].
Since possession is not well defined in the RoboCup soccer simulator, we consider that an agent has possession of the ball whenever the ball is close enough to be kicked, i.e., within the simulator’s kickable distance of the agent.
III. The Soccer Dribbling Task as a Reinforcement Learning Problem
In the soccer dribbling task, an episode begins when the dribbler may take the first action. When an episode ends (e.g., when the adversary gains possession for a set period of time), the coach starts a new one, thereby giving rise to a series of episodes. Thus, the interaction between the dribbler and the environment naturally breaks down into a sequence of distinct episodes. This point, together with the fact that the RoboCup soccer simulator operates in discrete time steps, allows the soccer dribbling task to be mapped onto a discrete-time, episodic reinforcement-learning framework.
Roughly speaking, reinforcement learning is concerned with how an agent must take actions in an environment so as to maximize the expected long-term reward [7]. As in a trial-and-error search, the learner must discover which action is the most rewarding in a given state of the world. Thus, solving a reinforcement learning problem means finding a function (policy) that maps states to actions so as to maximize reward over the long run. As a way of incorporating domain knowledge, the actions available to the dribbler are the following high-level macro-actions, which are built on the simulator’s primitive actions (henceforth, we use the terms action and macro-action interchangeably, while always distinguishing primitive actions):
HoldBall(): The dribbler holds the ball close to its body, keeping it in a position from which it is difficult for the adversary to gain possession;
Dribble(): The dribbler turns its body towards a given global angle, kicks the ball a given distance ahead of it, and moves to intercept the ball.
The global angle is defined as follows: the center of the training field is the origin of the system, the zero angle points towards the middle of the right line that delimits the training field, and angles increase in the clockwise direction. These macro-actions are based on high-level skills used by the UvA Trilearn 2003 team [8]. The first one maps directly onto the primitive action kick, and consequently usually takes a single time step to perform. The second one, however, requires an extended sequence of the primitive actions turn, kick, and dash, and may therefore span several time steps. To handle this situation, we treat the soccer dribbling task as a semi-Markov decision process (SMDP) [9].
Formally, an SMDP is a 5-tuple $(S, A, P, R, F)$, where $S$ is a countable set of states, $A$ is a countable set of actions, $P(s' \mid s, a)$, for $s, s' \in S$ and $a \in A$, is a probability distribution providing the transition model between states, $R(s, a, s')$ is the reward associated with the transition $(s, a, s')$, and $F(t \mid s, a)$ is a probability distribution indicating the sojourn time in a given state $s$, i.e., the time before the transition, provided that action $a$ was taken in state $s$.
Let $a_t$ be the $t$-th macro-action selected by the dribbler. Thus, several of the simulator’s time steps may elapse between $a_t$ and $a_{t+1}$. Let $s_{t+1}$ and $r_{t+1}$ be, respectively, the state and the reward following the macro-action $a_t$. From the dribbler’s point of view, an episode consists of a sequence of SMDP steps, i.e., a sequence of states, macro-actions, and rewards: $s_0, a_0, r_1, s_1, a_1, \ldots, r_T, s_T$, where $a_t$ is chosen based exclusively on the state $s_t$, and $s_T$ is a terminal state in which either the adversary or the dribbler is declared the winner of the episode by the coach. In the former case, the dribbler receives a negative terminal reward $r_T$, while in the latter case $r_T$ is positive. The intermediate rewards are always equal to zero, i.e., $r_1 = r_2 = \cdots = r_{T-1} = 0$. Thus, our objective is to find a policy that maximizes the dribbler’s reward, i.e., the number of episodes in which it is the winner.
The dribbler must take a decision at each SMDP step by selecting an available macro-action. Besides the macro-action HoldBall, the set of actions available to the dribbler contains four instances of the macro-action Dribble, each with its own fixed angle and kick distance. Thus, besides hiding the ball from the adversary, the dribbler can kick the ball forward (strongly and weakly), diagonally upward, and diagonally downward. If at some time step the dribbler does not have possession of the ball and the current state is not a terminal state, it usually means that the dribbler previously chose an instance of the macro-action Dribble and is currently moving to intercept the ball.
We turn now to the state representation used by the dribbler. It consists of a set of state variables based on information related to the ball, the adversary, and the dribbler itself. Let $\theta(o)$ denote the global angle of an object $o$, and let $\phi(o_1, o_2)$ and $d(o_1, o_2)$ denote, respectively, the relative angle and the distance between objects $o_1$ and $o_2$. Further, let $W$ and $H$ be, respectively, the width and the height of the training field. Finally, let $b(o)$ be a function indicating whether the object $o$ is close to (less than 1 meter away from) the top line or the bottom line that delimits the training field, taking one distinct value in each of these two cases and a third value otherwise. Table 1 shows the state variables together with their ranges.
The first three variables help the dribbler to locate itself and the adversary inside the training field. Together, the last two variables can be seen as a point describing the position of the adversary in a polar coordinate system whose pole is the ball. Thus, these variables are used by the dribbler to locate the adversary with respect to the ball. A more informative state representation could be obtained by adding more state variables, e.g., the current speed of the ball and the dribbler’s stamina. However, large state spaces can be impractical due to the “curse of dimensionality”, i.e., the general tendency of the state space to grow exponentially in the number of state variables [10]. Consequently, we focus on a state representation that is as concise as possible.
The adversary uses a fixed, pre-specified policy. Thus, we can see it as part of the environment with which the dribbler interacts. When the adversary has possession of the ball, it tries to maintain possession for another time step by invoking the macro-action HoldBall. If it maintains possession for two consecutive time steps, it is declared the winner of the episode. When the adversary does not have the ball, it uses an iterative scheme to compute a near-optimal interception point based on the ball’s position and velocity, and then moves to that point as fast as possible. This procedure is the same one used by the dribbler when it is moving to intercept the ball after invoking the macro-action Dribble. More details about this iterative scheme can be found in the description of the UvA Trilearn 2003 team [8].
IV. The Reinforcement Learning Algorithm
Our solution to the soccer dribbling task combines the reinforcement learning algorithm Sarsa with CMAC for function approximation. In what follows, we briefly introduce both of them before presenting the final learning algorithm.
The Sarsa algorithm works by estimating the action-value function $Q^\pi(s, a)$ for the current policy $\pi$ and for all state-action pairs $(s, a)$. The $Q$-function assigns to each state-action pair the expected return from it. Given a quintuple of events, $(s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1})$, that makes up the transition from the state-action pair $(s_t, a_t)$ to the next one, $(s_{t+1}, a_{t+1})$, the $Q$-value of the first state-action pair is updated according to the following equation:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \delta_t, \quad (1)$$

where $\delta_t$ is the traditional temporal-difference error,

$$\delta_t = r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t), \quad (2)$$

$\alpha$ is the learning rate parameter, and $\gamma$ is a discount rate governing the weight placed on future, as opposed to immediate, rewards. Sarsa is an on-policy learning method, meaning that it continually estimates $Q^\pi$ for the current policy $\pi$ and at the same time changes $\pi$ towards greediness with respect to $Q^\pi$. A typical policy derived from the $Q$-function is an $\epsilon$-greedy policy. Given a state $s$, this policy selects a random action with probability $\epsilon$ and, otherwise, selects the action with the highest estimated value, i.e., $\arg\max_a Q(s, a)$.
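As a minimal illustration of the update in Equations 1 and 2 and of the $\epsilon$-greedy rule, here is a tabular Sarsa sketch; the dictionary-backed $Q$-table and the toy state/action types are our simplifications, not the paper's actual representation (which uses function approximation, described next):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """With probability epsilon pick a random action; otherwise pick the
    action with the highest estimated Q-value for this state."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """One Sarsa backup over the quintuple (s, a, r, s', a'):
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * Q(s', a') - Q(s, a))."""
    delta = r + gamma * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * delta
    return delta
```

Because the next action $a'$ is the one the current policy actually takes, the method is on-policy: improving $Q$ and following the policy derived from it are interleaved.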
In tasks with a small number of state-action pairs, we can represent the action-value function as a table with one entry for each state-action pair. However, this is not the case for the soccer dribbling task. For illustration’s sake, suppose that all variables in Table 1 are discrete. If we consider the actions available to the dribbler and a 20m x 20m training field, we end up with a huge number of state-action pairs. This would not only require an unreasonable amount of memory, but also an enormous amount of data to fill up the table accurately. Thus, we need to generalize from previously experienced states to ones that have never been seen. To do so, we use a technique commonly known as function approximation.
With function approximation, the action-value function is represented as a parameterized functional form. Whenever we change one parameter value, we also change the estimated value of many state-action pairs, thus obtaining generalization. In this work, we use the Cerebellar Model Arithmetic Computer (CMAC) for function approximation [11, 12].
CMAC works by partitioning the state space into multi-dimensional receptive fields, each of which is associated with a weight. In this work, receptive fields are hyper-rectangles in the state space. Nearby states share receptive fields, so generalization occurs between them. Multiple partitions of the state space (layers) are usually used, which implies that any input vector falls within the range of multiple excited receptive fields, one from each layer.
Layers are identical in organization, but each one is offset relative to the others so that each layer cuts the state space in a different way. By overlapping multiple layers, it is possible to achieve quick generalization while maintaining the ability to learn fine distinctions. Figure 2 shows an example of two grid-like layers overlaid over a two-dimensional space.
The receptive fields excited by a given state $s$ make up the feature set $F_s$, with each action indexing their weights in a different way. In other words, each macro-action is associated with a particular CMAC. Clearly, the number of receptive fields inside each feature set is equal to the number of layers. The CMAC’s response to a feature set is equal to the sum of the weights of the receptive fields in it. Formally, let $w_a(f)$ be the weight of the receptive field $f$ as indexed by the action $a$. Thus, the CMAC’s response to $F_s$ is equal to $\sum_{f \in F_s} w_a(f)$, which represents the $Q$-value $Q(s, a)$.
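To make the layered lookup concrete, the following sketch builds a feature set and sums the excited weights. The grid-shaped fields, the uniform offset scheme, and the dictionary of weights are our simplifications, not the UNH implementation:

```python
def feature_set(state, n_layers, width):
    """Return the excited receptive field of each layer for a state vector.
    Layers share one grid shape but are offset from one another, so the
    feature set always holds exactly n_layers fields."""
    fields = []
    for layer in range(n_layers):
        offset = layer * width / n_layers        # simple uniform offset scheme
        coords = tuple(int((x + offset) // width) for x in state)
        fields.append((layer, coords))
    return fields

def q_value(weights, state, action, n_layers=32, width=1.0):
    """CMAC response: the sum of the excited fields' weights for this action."""
    return sum(weights.get((action, f), 0.0)
               for f in feature_set(state, n_layers, width))
```

Keying the weight dictionary by `(action, field)` mirrors the idea that each macro-action indexes the excited fields' weights in its own way.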
CMAC is trained using the traditional delta rule (also known as the least mean square rule). In detail, after selecting an action $a$, the weight of each excited receptive field $f$ indexed by $a$ is updated according to the following equation:

$$w_a(f) \leftarrow w_a(f) + \alpha \delta, \quad (3)$$

where $\delta$ is the temporal-difference error (Equation 2). A major issue when using CMAC is that the total number of receptive fields required to span the entire state space can be very large; consequently, an unreasonable amount of memory may be needed. A technique commonly used to address this issue is pseudo-random hashing. It produces receptive fields consisting of noncontiguous, disjoint regions randomly spread throughout the state space, so that only information about receptive fields that have been excited during previous training is actually stored.
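A hedged sketch of the delta-rule update addressed through a bounded hash table follows. Python's built-in `hash` and the fixed table size are stand-ins for the UNH scheme, this sketch does not resolve hash collisions (unlike the experiments reported below), and dividing the step size by the number of excited fields is a common tile-coding convention rather than something the paper states:

```python
def update_weights(weights, fields, action, delta, alpha, table_size=2 ** 20):
    """Delta rule: spread the correction alpha * delta over the excited
    fields, addressing each one through a bounded hash table so that only
    fields actually seen during training occupy memory."""
    step = alpha * delta / len(fields)   # per-field share of the correction
    for f in fields:
        key = (action, hash(f) % table_size)   # pseudo-random hashing
        weights[key] = weights.get(key, 0.0) + step
```

Note that hashing field identifiers of built-in string type would make the addresses vary across runs under Python's hash randomization; integer or tuple-of-integer identifiers are stable within a run.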
IV-C. Linear, Gradient-Descent Sarsa
Our solution to the soccer dribbling task combines the Sarsa algorithm with CMAC for function approximation, using an $\epsilon$-greedy policy for action selection. Sutton and Barto [7] provide a complete description of this algorithm under the name of linear, gradient-descent Sarsa. Our implementation follows the solution proposed by Stone et al. [13]. It consists of three routines: RLstartEpisode, run by the dribbler at the beginning of each episode; RLstep, run on each SMDP step; and RLendEpisode, run when an episode ends. In what follows, we present each routine in detail.
Given an initial state, RLstartEpisode starts by iterating over all available actions. In line 2, it finds the receptive fields excited by the state, which compose the feature set. Next, in line 3, the estimated value of each macro-action is calculated as the sum of the weights of the excited receptive fields. In line 5, the routine selects a macro-action following an $\epsilon$-greedy policy and sends it to the RoboCup soccer simulator. Finally, the chosen action and the initial state are stored for use on the next SMDP step.
RLstep is run on each SMDP step, whenever the dribbler has to choose a macro-action. Given the current state, it starts by calculating part of the temporal-difference error (Equation 2), namely the difference between the intermediate reward and the expected return of the previous SMDP step. In lines 2 to 5, the routine finds the receptive fields excited by the current state and uses their weights to compute the estimated value of each action. In line 6, the next action to be taken by the dribbler is selected according to an $\epsilon$-greedy policy. In line 7, the routine finishes computing the temporal-difference error by adding the discount rate times the expected return of the current SMDP step. Next, in lines 8 to 10, it adjusts the weights of the receptive fields excited in the previous SMDP step by the learning rate times the temporal-difference error (see Equation 3). Since the weights have changed, the expected return of the current SMDP step must be recalculated (line 11). Finally, the chosen action and the current state are stored for the next SMDP step.
RLendEpisode is run when an episode ends. Initially, it calculates the appropriate reward based on who won the episode. Next, it calculates the temporal-difference error in the action-value estimates (line 6). There is no need to add the expected return of the current SMDP step, since this value is defined to be zero for terminal states. Lastly, the routine adjusts the weights of the receptive fields excited in the previous SMDP step.
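The three routines can be condensed into a single hedged episode loop. This is a sketch, not the paper's implementation: the `env` interface (`reset`/`step`) and the dictionary-backed value store standing in for the CMAC weights are our assumptions.

```python
import random

def run_episode(env, q, actions, alpha, gamma, epsilon):
    """Condensed sketch of RLstartEpisode / RLstep / RLendEpisode.
    `env` is assumed to expose reset() -> state and step(action) ->
    (next_state, reward, done); `q` is a plain dict standing in for the
    CMAC weights the paper actually uses."""
    def choose(s):
        # epsilon-greedy selection over the macro-action set
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q.get((s, a), 0.0))

    s = env.reset()                       # RLstartEpisode
    a = choose(s)
    done = False
    while not done:
        s2, r, done = env.step(a)         # one SMDP step
        if done:                          # RLendEpisode: no bootstrap term,
            target = r                    # terminal states have zero value
        else:                             # RLstep: bootstrap from next pair
            a2 = choose(s2)
            target = r + gamma * q.get((s2, a2), 0.0)
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (target - old)
        if not done:
            s, a = s2, a2
    return q
```

The terminal branch makes the absence of a bootstrap term at the end of an episode explicit, which is exactly the difference between RLstep and RLendEpisode.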
V. Empirical Results
In this section, we report our experimental results with the soccer dribbling task. In all experiments, we used the standard RoboCup soccer simulator (version 14.0.3, protocol 9.3) and a 20m x 20m training region. In that simulator, agents typically have limited and noisy visual sensors: each player can see objects within a view cone, and the precision of an object’s sensed location degrades with distance. To simplify the learning process, we removed those restrictions. Both the dribbler and the adversary were given full, noiseless vision to ensure that they would always have complete and accurate knowledge of the environment.
Regarding the parameters of the reinforcement learning algorithm (the implementation can be found at http://sites.google.com/site/soccerdribbling/), the values of the learning rate $\alpha$, the discount rate $\gamma$, and the exploration rate $\epsilon$ were set based on the results of brief, informal experiments. By no means do we argue that these values are optimal.
The weights of first-time excited receptive fields were set to zero. The bounds of the receptive fields were set according to the generalization that we desired: angles were given widths of about 20 degrees, and distances were given widths of approximately 3 meters. We used 32 layers, with each dimension of every layer offset from the others by a fraction of the desired width in that dimension. We used the CMAC implementation proposed by Miller and Glanz [14], which uses pseudo-random hashing. To retain previously trained information in the presence of subsequent novel data, we did not allow hash collisions.
To create episodes that are as realistic as possible, agents were not allowed to recover their stamina by themselves; this task was done by the coach after every five consecutive episodes, which enabled agents to start episodes with different stamina values. We ran this experiment 5 independent times, each run lasting 50,000 episodes and taking, on average, approximately 74 hours. Figure 3 shows the histogram of the average number of episodes won by the dribbler during the training process, using bins of 500 episodes.
Throughout the training process, the dribbler won, on average, episodes (). From Figure 3, we can see that it greatly improves its average performance as the number of episodes increases. At the end of the training process, it is winning slightly less than of the time.
Qualitatively, the dribbler seems to learn two major rules. In the first one, when the adversary is at a considerable distance, the dribbler keeps kicking the ball to the side opposite the adversary’s location until the angle between them indicates that the adversary is behind the dribbler. After that, the dribbler starts to kick the ball forward. An illustration of this rule can be seen in Figure 4.
The second rule seems to occur when the adversary is relatively close to, and in front of, the dribbler. Since there is no way for the dribbler to move forward or diagonally without putting possession at risk, it holds the ball until the angle between it and the adversary allows it to advance safely. Thereafter, it starts to advance by kicking the ball forward. An illustration of this rule can be seen in Figure 5.
After the training process, we randomly generated 10,000 initial configurations to test our solution. This time, the dribbler always selected the macro-action with the highest estimated value, i.e., we set $\epsilon = 0$. Further, the weights of the receptive fields were not updated, i.e., we set $\alpha = 0$. We used the receptive fields’ weights resulting from the simulation in which the dribbler obtained the highest success rate. The result of this experiment was even better: the dribbler won 5,795 episodes, a success rate of approximately 58%.
V-A. One-Dimensional CMACs
For comparison’s sake, we repeated the above experiment using the original solution proposed by Stone et al. [13]. It consists of the same learning algorithm presented in Section 4, but using one-dimensional CMACs. In detail, each layer is an interval along a single state variable. In this way, the feature set is composed of the excited receptive fields of all state variables, i.e., one excited receptive field per layer for each state variable.
One of the main advantages of using one-dimensional CMACs is that they circumvent the curse of dimensionality: the state space does not grow exponentially in the number of state variables because dependence between variables is not taken into account.
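As a sketch of the difference, a one-dimensional tiling excites one field per layer for each state variable, instead of one multi-dimensional field per layer in total; the uniform offset scheme is again our simplification:

```python
def feature_set_1d(state, n_layers, width):
    """One-dimensional CMACs: each state variable is tiled on its own, so
    the feature set has n_layers fields per variable and encodes no
    interactions between variables."""
    fields = []
    for var, x in enumerate(state):
        for layer in range(n_layers):
            offset = layer * width / n_layers
            fields.append((var, layer, int((x + offset) // width)))
    return fields
```

With $d$ state variables, memory grows linearly in $d$ rather than exponentially, which is precisely why interactions such as the angle/distance pair discussed below cannot be represented.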
Figure 6 shows the histogram of the average number of episodes won by the dribbler during the training process. Each simulation took, on average, approximately 43 hours. Throughout the training process, the dribbler won, on average, episodes (). From Figure 6, we can see that the learning algorithm converges much faster when using one-dimensional CMACs. However, its average performance is considerably worse. At the end of the training process, the dribbler is winning, on average, less than of the time.
After the training process, we tested this solution using the same 10,000 initial configurations previously generated. Again, we set $\epsilon = \alpha = 0$ and used the receptive fields’ weights resulting from the simulation in which the dribbler obtained the highest success rate. The result of this experiment was slightly better than the training average: the dribbler won 3,701 episodes, a success rate of approximately 37%.
Qualitatively, the dribbler seems to learn a rule similar to the one shown in Figure 4. The major difference is that it always kicks the ball to the side opposite the adversary’s location, regardless of its distance from the adversary. Consequently, it is highly unlikely that the dribbler succeeds when the adversary is close to it.
We conjecture that one of the main reasons for the poor performance of the reinforcement learning algorithm with one-dimensional CMACs is that it does not take into account dependence between variables, i.e., they are treated individually. Such an approach may throw away valuable information. For example, the relative angle and the distance between the ball and the adversary together describe the position of the adversary with respect to the ball, but they carry much less information when considered individually.
VI. Related Work
Reinforcement learning has long been applied to the robot soccer domain. For example, Andou [15] uses “observational reinforcement learning” to refine a function that is used by the soccer agents for deciding their positions on the field. Riedmiller et al. [16] use reinforcement learning to learn low-level soccer skills, such as kicking and ball interception. Nakashima et al. [17] propose a reinforcement learning method called “fuzzy Q-learning”, in which an agent determines its action based on the inference result of a fuzzy rule-based system; the authors apply the proposed method to a scenario where a soccer agent learns to intercept a passed ball.
Arguably, the most successful application is due to Stone et al. [13]. They propose the “keepaway task”, which involves two teams, the keepers and the takers, where the former tries to keep control of the ball for as long as possible, while the latter tries to gain possession. Our solution to the soccer dribbling task follows closely the solution proposed by those authors to learn the keepers’ behavior. Iscen and Erogul [18] use a similar solution to learn a policy for the takers.
Gabel et al. [19] propose a task that is the opposite of the soccer dribbling task, in which a defensive player must interfere with and disturb the opponent that has possession of the ball. Their solution uses a reinforcement learning algorithm with a multilayer neural network for function approximation.
Kalyanakrishnan et al. [20] present the “half-field offense task”, a scenario in which an offense team attempts to outplay a defense team in order to score goals. Those authors pose that task as a reinforcement learning problem and propose a new learning algorithm for dealing with it.
More closely related to our work are reinforcement learning-based solutions to the task of conducting the ball (e.g., [21]), which can be seen as a simplification of the dribbling task since it usually does not include adversaries.
VII. Conclusion

We proposed a reinforcement learning solution to the soccer dribbling task, a scenario in which an agent has to go from the beginning to the end of a region while keeping possession of the ball, as an adversary attempts to gain possession. Our solution combined the Sarsa algorithm with CMAC for function approximation. Empirical results showed that, after the training period, the dribbler was able to accomplish its task against a strong adversary around 58% of the time.
Although we restricted ourselves to the soccer domain, dribbling, as defined in this paper, is also common in other sports, e.g., hockey, basketball, and football. Thus, the proposed solution may also be of value for dribbling tasks in other sports. Furthermore, we believe that the soccer dribbling task is an excellent benchmark for comparing different machine learning techniques because it involves a complex problem and has a well-defined objective.
There are several exciting directions for extending this work. From a practical perspective, we intend to analyze the scalability of our solution, i.e., to study how it performs with training fields of distinct sizes and against different adversaries. Further, we are considering schemes to extend our solution to the original partially observable environment, where the available information is incomplete and noisy.
As stated before, a more informative state representation could be obtained by using more state variables. The major problem with adding extra variables to our solution is that CMAC’s complexity increases exponentially with its dimensionality. For this reason, we are considering other solutions that use function approximators whose complexity is unaffected by dimensionality per se, e.g., Kanerva coding (see, for example, Kostiadis and Hu [22]).
Finally, we note that when modeling the soccer dribbling task as a reinforcement learning problem, we do not directly use intermediate rewards (they are all set to zero). However, such rewards may make the learning process more efficient (for example, see [23]). Thus, we intend to investigate the influence of intermediate rewards on the final solution in future work.
Acknowledgments

We would like to thank W. Thomas Miller, Filson H. Glanz, and others from the Department of Electrical and Computer Engineering at the University of New Hampshire for making their CMAC code available.
References

[1] I. Noda, H. Matsubara, K. Hiraki, and I. Frank, “Soccer server: A tool for research on multiagent systems,” Applied Artificial Intelligence, vol. 12, no. 2, pp. 233–250, 1998.
[2] G. J. Gordon, “Reinforcement learning with function approximation converges to a region,” in Advances in Neural Information Processing Systems 13, 2001, pp. 1040–1046.
[3] R. S. Sutton, “Generalization in reinforcement learning: Successful examples using sparse coarse coding,” in Advances in Neural Information Processing Systems 8, 1996, pp. 1038–1044.
[4] J. N. Tsitsiklis and B. V. Roy, “An analysis of temporal-difference learning with function approximation,” IEEE Transactions on Automatic Control, vol. 42, no. 5, pp. 674–690, 1997.
[5] T. J. Perkins and D. Precup, “A convergent form of approximate policy iteration,” in Advances in Neural Information Processing Systems 15, 2003, pp. 1595–1602.
[6] M. Chen, E. Foroughi, F. Heintz, S. Kapetanakis, K. Kostiadis, J. Kummeneje, I. Noda, O. Obst, P. Riley, T. Steffens, Y. Wang, and X. Yin, Users Manual: RoboCup Soccer Server Manual for Soccer Server Version 7.07 and Later, 2003, available at http://sourceforge.net/projects/sserver/.
[7] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. The MIT Press, 1998.
[8] R. de Boer and J. R. Kok, “The incremental development of a synthetic multi-agent system: The UvA Trilearn 2001 robotic soccer simulation team,” Master’s thesis, University of Amsterdam, The Netherlands, 2002.
[9] M. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, 2005.
[10] R. Bellman, Dynamic Programming. Dover Publications, 2003.
[11] J. S. Albus, “A theory of cerebellar function,” Mathematical Biosciences, vol. 10, no. 1-2, pp. 25–61, 1971.
[12] J. S. Albus, Brain, Behavior, and Robotics. Byte Books, 1981.
[13] P. Stone, R. S. Sutton, and G. Kuhlmann, “Reinforcement learning for RoboCup-soccer keepaway,” Adaptive Behavior, vol. 13, no. 3, pp. 165–188, 2005.
[14] W. T. Miller and F. H. Glanz, UNH_CMAC Version 2.1: The University of New Hampshire Implementation of the Cerebellar Model Arithmetic Computer - CMAC, 1994, available at http://www.ece.unh.edu/robots/cmac.htm.
[15] T. Andou, “Refinement of soccer agents’ positions using reinforcement learning,” in RoboCup-97: Robot Soccer World Cup I, 1998, pp. 373–388.
[16] M. A. Riedmiller, A. Merke, D. Meier, A. Hoffman, A. Sinner, O. Thate, and R. Ehrmann, “Karlsruhe Brainstormers - A reinforcement learning approach to robotic soccer,” in RoboCup 2000: Robot Soccer World Cup IV, 2001, pp. 367–372.
[17] T. Nakashima, M. Udo, and H. Ishibuchi, “A fuzzy reinforcement learning for a ball interception problem,” in RoboCup 2003: Robot Soccer World Cup VII, 2004, pp. 559–567.
[18] A. Iscen and U. Erogul, “A new perspective to the keepaway soccer: The takers,” in Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, 2008, pp. 1341–1344.
[19] T. Gabel, M. Riedmiller, and F. Trost, “A case study on improving defense behavior in soccer simulation 2D: The NeuroHassle approach,” in RoboCup 2008: Robot Soccer World Cup XII, 2009, pp. 61–72.
[20] S. Kalyanakrishnan, Y. Liu, and P. Stone, “Half field offense in RoboCup soccer: A multiagent reinforcement learning case study,” in RoboCup 2006: Robot Soccer World Cup X, 2007, pp. 72–85.
[21] M. Riedmiller, R. Hafner, S. Lange, and M. Lauer, “Learning to dribble on a real robot by success and failure,” in IEEE International Conference on Robotics and Automation, 2008, pp. 2207–2208.
[22] K. Kostiadis and H. Hu, “KaBaGe-RL: Kanerva-based generalisation and reinforcement learning for possession football,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2001, pp. 292–297.
[23] A. Y. Ng, D. Harada, and S. Russell, “Policy invariance under reward transformations: Theory and application to reward shaping,” in Proceedings of the 16th International Conference on Machine Learning, 1999, pp. 278–287.