Inverse Reinforcement Learning for Strategy Identification

07/31/2021 ∙ by Mark Rucker, et al. ∙ University of Virginia

In adversarial environments, one side could gain an advantage by identifying the opponent's strategy. For example, in combat games, if an opponent's strategy is identified as overly aggressive, one could lay a trap that exploits the opponent's aggressive nature. However, an opponent's strategy is not always apparent and may need to be estimated from observations of their actions. This paper proposes to use inverse reinforcement learning (IRL) to identify strategies in adversarial environments. Specifically, the contributions of this work are 1) the demonstration of this concept on gaming combat data generated from three pre-defined strategies and 2) the framework for using IRL to achieve strategy identification. The numerical experiments demonstrate that the recovered rewards can be used to identify strategies with a variety of techniques. In this paper, the recovered rewards are visually displayed, clustered using unsupervised learning, and classified using a supervised learner.


I Introduction

In adversarial environments, such as sporting events, gaming, or even defending against a cyber attack, one side could gain an advantage by identifying the opponent’s strategy. The value of strategy identification lies in the ability it provides to foresee, and thereby counter, the opponent’s future actions. In combat games, for example, if an opponent’s strategy is identified as overly aggressive, one could lay a trap that exploits the opponent’s aggressive nature. However, an opponent’s strategy is not always apparent and may need to be estimated from observations of their actions. This paper proposes to use machine learning, specifically inverse reinforcement learning, to identify strategies in adversarial environments.

Strategic planning is often defined by four concepts. Goals are high-level concepts that define what needs to be accomplished. Objectives are quantitative measures that determine if a goal has been achieved. Strategies are plans for achieving the defined objectives. Tactics are the low-level actions for carrying out a strategy. In the combat gaming example, the goal is to win the engagement. There are numerous objectives that could be associated with achieving this goal, including securing a particular target or minimizing casualties. Similarly, there are many strategies for achieving objectives; for example, being aggressive or defensive. Tactics are the specific sequence of actions carried out by each side in the engagement. The high-level goal (to win) is known to both sides, and the low-level tactics are observable during the engagement. The objectives and the strategies of the opponent are unknown; however, given the hierarchy previously defined, the objectives and the strategies should be correlated with the tactics. Therefore, we hypothesize that the objectives and the strategies can be inferred from the observed tactics of the opponent.

Markov decision processes (MDPs) [13, 18] are often used in artificial systems to model sequential decision making. An MDP is primarily defined by states, actions, rewards, and a state transition function. A policy maps states to actions, and an optimal policy maximizes expected future reward. The components of an MDP map to the four strategic planning concepts previously defined. The goal is to maximize expected future reward. The tactics are the state-action pairs that result from an implemented policy. The objectives and the strategy are represented by the reward function, i.e., the reward function is measurable and the parameters of the reward function define an agent's behavior.

Reinforcement learning (RL) [20] is a common machine learning technique for learning optimal policies through interaction with an environment. The standard RL paradigm involves an agent selecting an action in a given state and then observing a state transition defined by the transition function and a reward defined by the reward function. By repeating this process while exploring with a policy that includes random actions, the agent can eventually learn an optimal policy. On the other hand, inverse reinforcement learning (IRL) [11] estimates a reward function from observations of an expert's actions in an environment.

Inverse reinforcement learning has developed largely within the field of robotics. A few of the notable advances in robotics include training autonomous acrobatic helicopters [1], robot navigation through crowded rooms [6], and autonomous vehicles [25]. More recently there has been growing interest in applying IRL to human behavior problems in areas such as inferring cultural values [12], modelling human routines [3], and even so far as neuroscience models of human reasoning [4]. Unfortunately, theoretical solutions to the IRL problem remain challenging, requiring either strict mathematical assumptions such as linearity (e.g., [2, 26]) or large resource requirements such as sample or computation complexity (e.g., [14, 21]). As such, the field of IRL is a very active one that has huge potential as the theory continues to advance.

Multi-agent IRL estimates reward functions in multi-agent settings and most techniques rely heavily on game-theoretic principles [9, 8]. However, this reliance makes their use impractical when notions of equilibria cannot be defined. At a high level, multi-agent IRL seems like a suitable construct for strategy identification in combat scenarios, e.g. each side could be considered an agent and each side may be composed of multiple agents, but the multi-agent IRL construct and algorithms need significant advancement before they can become feasible.

This paper proposes that IRL can be used to recover unique reward functions that are correlated with strategies in adversarial environments such as combat games. Specifically, the contributions of this work are 1) the demonstration of this concept on gaming combat data using three pre-defined strategies and 2) the framework for using IRL to achieve strategy identification. The IRL algorithm used in this study utilizes kernel methods to realize expressive functions of the state space [17, 5]. IRL has been used to create agent based models [7], identify decision agents [15], and to estimate strategy in other domains, including animal behavior [22], table tennis [10], and financial trading [23, 24]. However, this is the first study to apply IRL to estimating strategies for combat games.

Further, IRL has two traits that make the technique well suited for strategy identification. First, the reward function is independent of the environment dynamics, but the policy is dependent upon both the reward function and the environment dynamics. This is analogous to the concepts in strategic planning, where the strategy and objectives are most likely independent of low-level concepts like setting but the tactics are not. Second, the estimated reward function can be used for prediction if the dynamics change. Simply estimating the policy in this situation is not sufficient, as the policy may no longer be optimal and therefore may not be directing the agent's actions. The reward function could be used in conjunction with a model-free learner to estimate a new policy and thus predict future actions of the adversary.

This paper is organized as follows. Section II provides background information on MDPs, RL, and IRL. Section III outlines the IRL algorithms used for strategy identification. Section IV describes the numerical experiments performed to validate the IRL methods. Section V provides our conclusions from the numerical experiments and possible areas of future work.

II Background

General RL theory rests on Markov decision process (MDP) models. In this paper an MDP is defined as any tuple $(S, A, P, R)$ where $S$ is a set of states, $A$ is a set of actions, $P : S \times A \times S \to [0, 1]$ is a transition probability function satisfying $\sum_{s' \in S} P(s, a, s') = 1$ for every $(s, a)$, and $R : S \to \mathbb{R}$ is a reward function.

Given an MDP, any function $\pi : S \to A$ will be called a policy function. Every $\pi$ possesses a unique value function $V^{\pi}$ and action-value function $Q^{\pi}$ defined as

$V^{\pi}(s) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t) \mid s_0 = s \right], \qquad Q^{\pi}(s, a) = R(s) + \gamma\, \mathbb{E}\left[ V^{\pi}(s_1) \mid s_0 = s,\ a_0 = a \right],$

where $\gamma \in [0, 1)$ is called a discount factor and $s_t$ is a random variable defined on $S$ whose distribution is induced by $\pi$ and $P$,

$\Pr(s_{t+1} = s' \mid s_t = s) = P(s, \pi(s), s').$   (1)

The goal of RL algorithms is to learn an optimal policy, $\pi^{*}$, for an MDP, where optimality is defined as $V^{\pi^{*}}(s) \geq V^{\pi}(s)$ for all $\pi$ and $s$. The goal of IRL algorithms is to learn a reward $R$, assuming an MDP where $R$ is not known, for which a given policy $\pi$ is optimal. Because the traditional IRL problem statement has known degenerate solutions, additional requirements are often added to yield useful solutions.
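To make these definitions concrete, the following is a minimal, self-contained sketch (ours, not from the paper; the toy MDP, its size, and the discount factor are all illustrative) of evaluating $V^{\pi}$ for a fixed policy by iterating the Bellman expectation backup until convergence.

```python
# Illustrative toy MDP (not from the paper): evaluate V^pi by iterating the Bellman backup.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, :] sums to 1
R = rng.random(n_states)                   # state-based reward R(s), as defined above
pi = rng.integers(0, n_actions, n_states)  # deterministic policy: state index -> action index

P_pi = P[np.arange(n_states), pi]          # transition matrix under pi, shape (n_states, n_states)
V = np.zeros(n_states)
for _ in range(10_000):                    # V(s) = R(s) + gamma * sum_s' P(s, pi(s), s') V(s')
    V_new = R + gamma * P_pi @ V
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
print(V)
```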

            Count               Match Time            Red Deaths          Blue Deaths
Strategy    Match  Red  Blue    Min    Avg    Max     Min   Avg   Max    Min   Avg   Max
Fallback      11    12   11     54s    202s   468s     1    7.6    11     1    7.1    11
Assault       12    12   11     57s    192s   402s     4    9.3    11     3    8.6    11
Flank         13    12   11     54s    162s   279s     1    6.8    11     3    8.1    11
TABLE I: This table provides the summary statistics for the 36 simulated engagements. The left column of the table indicates the AI strategy for the blue force in the engagement. The t-SNE plot later in this paper visualizes the reward functions learned from the blue force in these matches.
States (S): A state is defined as the current match time, the x,y position and health of all red and blue forces (sans one blue force player), and the position and health of our agent (who replaces the missing blue force player).
Actions (A): There are 9 actions: stay in place or move one step in one of the 8 cardinal/ordinal directions.
Transitions (P): All transitions are deterministic, with our controlled agent moving according to the action selection and all other agents moving to their positions according to the pre-recorded data.
Reward (R): Unknown.
TABLE II: The full definition of the MDP.
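As a rough illustration of how such a replay-style MDP can be realized in code, the sketch below follows Table II: only our agent's position responds to the nine movement actions, while every other player simply advances along the recorded data. The class, its field layout, and the step size are our own assumptions rather than the authors' implementation.

```python
# Sketch only: frame layout, step size, and class interface are assumptions, not the paper's code.
import numpy as np

STEPS = [(0, 0), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]  # stay + 8 directions

class ReplayCombatMDP:
    """Replays a recorded match; only our agent's position responds to the chosen actions."""

    def __init__(self, recorded_frames, agent_start_xy, step_size=1.0):
        # recorded_frames[t] is assumed to hold the logged (x, y, health) of every other player at time t
        self.frames = recorded_frames
        self.start = np.asarray(agent_start_xy, dtype=float)
        self.step_size = step_size

    def reset(self):
        self.t = 0
        self.agent_xy = self.start.copy()
        return self._state()

    def step(self, action):
        dx, dy = STEPS[action]                              # deterministic move for our agent
        self.agent_xy = self.agent_xy + self.step_size * np.array([dx, dy], dtype=float)
        self.t += 1                                         # everyone else advances along the recording
        done = self.t >= len(self.frames) - 1
        return self._state(), done

    def _state(self):
        # state = match time, logged data for all other players, and our agent's position
        return (self.t, self.frames[self.t], tuple(self.agent_xy))
```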

III Algorithms

III-A Reward Learning

III-A1 IRL Algorithm

To learn rewards the projection variant of [2] was utilized and extended via kernel-based methods as described in [17]. For our purposes we define a kernel as any $k : S \times S \to \mathbb{R}$ that induces a positive semi-definite Gram matrix $K$, where $K_{ij} = k(s_i, s_j)$.

Intuitively, kernel methods can be thought of either as a non-linear extension of linear function approximation techniques or as functions in a reproducing kernel Hilbert space $\mathcal{H}$. A particularly common interpretation of kernels is as similarity measures. This interpretation is most often taken when $k$'s value is in $[0, 1]$.

Mathematically, kernels can be interpreted as an inner product between vectors in $\mathcal{H}$. That is, $k(s_1, s_2) = \langle \phi(s_1), \phi(s_2) \rangle_{\mathcal{H}}$ for a feature map $\phi : S \to \mathcal{H}$. This definition also induces the norm $\|\phi(s_1) - \phi(s_2)\|_{\mathcal{H}} = \sqrt{k(s_1, s_1) - 2k(s_1, s_2) + k(s_2, s_2)}$, which is useful for cluster analysis and visualizations.

In the original projection IRL algorithm, which is extended below, the goal is to match the feature expectation $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} \phi(s_t)\right]$ given some feature map $\phi : S \to \mathbb{R}^{n}$. In the kernel-based extension the feature expectation becomes the kernel expectation, defined as

$\mu(\pi) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t}\, k(s_t, \cdot) \right].$   (2)

The complete kernel-based algorithm is provided in Algorithm 1. It should be noted that the algorithm as stated creates a set of reward functions whose optimal policies have a convex combination that approximates the expert. For our analysis we required a single reward function and simply chose the reward from this set whose optimal policy's kernel expectation was closest to the expert's, i.e., the one with the smallest $\|\hat{\mu}_E - \mu(\pi_i)\|_{\mathcal{H}}$.

1: Initialize: set $\epsilon$ as desired
2: Initialize: set $i = 1$ and estimate $\hat{\mu}_E$ from the expert trajectories (Equation 3)
3: Initialize: set $R_0$ to a random reward
4: Initialize: set $\mu_0 = \hat{\mu}(\pi_0)$ for $\pi_0$ optimal under $R_0$ and set $\bar{\mu}_0 = \mu_0$
5:
6: while $\|\hat{\mu}_E - \bar{\mu}_{i-1}\|_{\mathcal{H}} > \epsilon$ do
7:      $w_i \leftarrow \hat{\mu}_E - \bar{\mu}_{i-1}$
8:      $R_i(s) \leftarrow \langle w_i, k(s, \cdot) \rangle_{\mathcal{H}}$
9:      $\pi_i \leftarrow$ an optimal policy for $R_i$; see Algorithm 2
10:     $\mu_i \leftarrow \hat{\mu}(\pi_i)$ for $\pi_i$; see Algorithm 2
11:     $\bar{\mu}_i \leftarrow \bar{\mu}_{i-1} + \frac{\langle \mu_i - \bar{\mu}_{i-1},\ \hat{\mu}_E - \bar{\mu}_{i-1} \rangle}{\langle \mu_i - \bar{\mu}_{i-1},\ \mu_i - \bar{\mu}_{i-1} \rangle}\,(\mu_i - \bar{\mu}_{i-1})$
12:     add $R_i$ to the candidate reward set
13:     $i \leftarrow i + 1$
14: end while
Algorithm 1 Kernel-based Projection IRL
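For readers who prefer code, the following is a schematic Python sketch of the projection loop that Algorithm 1 kernelizes (the projection method of [2]). It is written with an explicit feature map `phi` for readability rather than the implicit RKHS embedding, and `solve_rl` and `estimate_expectation` are stand-ins for Algorithm 2 and Equation 3; all names are ours.

```python
# Schematic sketch of the projection IRL loop; names and interfaces are ours, not the paper's.
import numpy as np

def projection_irl(mu_expert, phi, solve_rl, estimate_expectation, eps=1e-3, max_iter=50):
    """Returns the list of candidate reward weight vectors produced by the projection loop."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(mu_expert.shape)          # random initial reward
    pi = solve_rl(lambda s, w=w: phi(s) @ w)          # optimal policy for the random reward
    mu_bar = estimate_expectation(pi)                 # running projection of the expectations
    rewards = [w]
    for _ in range(max_iter):
        if np.linalg.norm(mu_expert - mu_bar) <= eps:
            break
        w = mu_expert - mu_bar                        # new reward direction
        pi = solve_rl(lambda s, w=w: phi(s) @ w)      # RL step (Algorithm 2's role in the paper)
        mu = estimate_expectation(pi)                 # feature/kernel expectation of the new policy
        d = mu - mu_bar
        mu_bar = mu_bar + ((d @ (mu_expert - mu_bar)) / (d @ d)) * d   # projection update
        rewards.append(w)
    return rewards
```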

III-A2 IRL Implementation

Our implementation of the above algorithm uses an empirical estimate of $\hat{\mu}$ calculated from a sample $\mathcal{T}$ of observed expert trajectories drawn with probability as defined in Equation 1. Several estimation techniques were tested and the one that seemed to produce the best rewards (determined via human inspection) was

$\hat{\mu}_E = \frac{1}{\sum_{\tau \in \mathcal{T}} |\tau|} \sum_{\tau \in \mathcal{T}} \sum_{t=1}^{|\tau|} \sum_{j=0}^{\min(m-1,\ |\tau|-t)} \gamma^{j}\, k(s^{\tau}_{t+j}, \cdot),$   (3)

where $m$ was chosen arbitrarily to be 20 and $|\tau|$ is a function which returns the total number of states in $\tau$. Such a formulation means that longer episodes have more weight, since they contribute more states, and $m$ determines how far into the future to consider when estimating the expert's reward. Equation 3 was also used when estimating $\hat{\mu}(\pi_0)$ and $\hat{\mu}(\pi_i)$ on lines 4 and 10 of Algorithm 1.
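A small sketch of this empirical estimate under our reading of Equation 3: every observed state is a start point, an $m$-step discounted look-ahead is summed, and longer trajectories therefore receive more weight. Representing $\hat{\mu}$ by the weight it assigns each observed state is an implementation choice of ours, not the paper's.

```python
# Our reading of Equation 3; the discount value and data layout are illustrative.
import numpy as np

def empirical_kernel_expectation(trajectories, gamma=0.9, m=20):
    """Returns (states, weights) so that mu_hat(.) = sum_i weights[i] * k(states[i], .)."""
    states, weights = [], []
    n_starts = sum(len(traj) for traj in trajectories)   # longer episodes contribute more start points
    for traj in trajectories:
        for t in range(len(traj)):
            for j in range(min(m, len(traj) - t)):        # look at most m states into the future
                states.append(traj[t + j])
                weights.append(gamma ** j / n_starts)
    return states, np.array(weights)
```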

The final component that needs to be implemented for a specific algorithm is the kernel $k$. For our kernel we calculated 6 features to describe the present location of our agent: (1) min distance to living red, (2) max distance to living red, (3) min distance to living blue, (4) max distance to living blue, (5) min cosine similarity between living red and blue, and (6) max cosine similarity between living red and blue. These features were scaled appropriately so that each feature was in $[0, 1]$, and then a Gaussian kernel was applied.
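A sketch of one plausible implementation of these six features and the Gaussian kernel. The interpretation of "cosine similarity between living red and blue" as the cosine between the agent-to-red and agent-to-blue directions, as well as the array layout and bandwidth, are our assumptions.

```python
# Sketch only: the cosine interpretation, array layout, and bandwidth are assumptions.
import numpy as np

def state_features(agent_xy, red_units, blue_units):
    """red_units / blue_units: arrays of shape (n, 3) holding x, y, health for each player."""
    red = red_units[red_units[:, 2] > 0, :2]              # positions of living red forces
    blue = blue_units[blue_units[:, 2] > 0, :2]           # positions of living blue forces
    d_red = np.linalg.norm(red - agent_xy, axis=1)
    d_blue = np.linalg.norm(blue - agent_xy, axis=1)
    u_red = (red - agent_xy) / d_red[:, None]              # unit vectors toward living red forces
    u_blue = (blue - agent_xy) / d_blue[:, None]           # unit vectors toward living blue forces
    cos = u_red @ u_blue.T                                  # one reading of the red/blue "angles of fire"
    return np.array([d_red.min(), d_red.max(),
                     d_blue.min(), d_blue.max(),
                     cos.min(), cos.max()])

def gaussian_kernel(f1, f2, bandwidth=0.5):
    """f1, f2 are feature vectors already scaled to [0, 1]; bandwidth is illustrative."""
    return float(np.exp(-np.sum((f1 - f2) ** 2) / (2 * bandwidth ** 2)))
```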

III-B Policy Learning

III-B1 RL Algorithm

Fig. 1: This figure shows the distribution of expected value given our MDP and 30 random kernel-based reward functions using the same kernel we use in KPIRL. Our method, direct iteration, out-performs other well-known RL algorithms though still falls short of the optimal value iteration. We believe our improved performance is largely due to the simplicity of our MDP and small sample sizes when training rather than an algorithmic advancement, though more analysis is necessary to confirm.

The utilized IRL algorithm requires solving for an optimal policy given a reward function on each iteration. To satisfy this requirement, a policy iteration method based on empirical estimates was used. Our approach is an n-step, model-free, Monte Carlo, on-policy Q-function approximation (see Algorithm 2) and was originally described in [17].

Experiments show that our direct iteration method out-performs other well-known RL algorithms (see Figure 1) given relatively small training samples from our MDP. All comparison RL algorithms come from the Stable Baselines3 project [16] and have been moderately tuned with respect to hyper-parameters. Every algorithm was given a budget of 10,000 interactions with the environment before comparing the results.

We believe our improved performance is largely due to the simplicity of our MDP and small sample sizes when training rather than an algorithmic advancement, though more analysis is necessary to confirm. For reference purposes Figure 1 also includes value iteration, which represents the optimal solution. During the final analysis value iteration was not used because its memory requirements were intractable on the full problem.

1: Initialize: set $I$, $E$, $N$, and $M$ for iterations, episodes, steps, and obs/episode
2: Initialize: set $\pi_0$ to a random policy
3: Initialize: set $D \leftarrow \emptyset$
4: for $i = 1$ to $I$ do
5:      for $e = 1$ to $E$ do
6:          Generate an initial state $s_0$ according to some distribution
7:          Generate an initial action $a_0$ according to some policy
8:          Given $(s_0, a_0)$, generate $s_1, \ldots, s_T$ with $\pi_{i-1}$
9:          for $m = 1$ to $M$ do
10:             Sample a start index $t$ from $\{0, \ldots, T\}$
11:             if $t + N \leq T$ then
12:                 $D \leftarrow D \cup \{((s_t, a_t),\ \sum_{j=0}^{N-1} \gamma^{j} R(s_{t+j}))\}$
13:             else
14:                 $D \leftarrow D \cup \{((s_t, a_t),\ \sum_{j=0}^{T-t} \gamma^{j} R(s_{t+j}))\}$
15:             end if
16:         end for
17:     end for
18:     Fit a regressor $Q_i$ using $D$
19:     $\pi_i(s) \leftarrow \arg\max_{a} Q_i(s, a)$
20: end for
21: Return: $\pi_I$
Algorithm 2 Direct Estimate Iteration
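The sketch below captures the spirit of Algorithm 2 in compressed form rather than line for line: roll out the current policy, build truncated n-step Monte Carlo returns as regression targets, and fit a regressor as the new Q estimate. The decision-tree regressor mirrors the choice reported in Section III-B2, but the use of scikit-learn, the environment interface (which follows the earlier replay-MDP sketch), and the omission of exploring starts are our simplifications.

```python
# Compressed sketch in the spirit of Algorithm 2, not a line-for-line transcription.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def direct_estimate_iteration_step(env, policy, reward_fn, featurize,
                                   n_episodes=30, n_step=10, gamma=0.9):
    """One pass: roll out `policy`, build truncated n-step Monte Carlo returns,
    and fit a Q-function regressor on (state features, action) pairs."""
    X, y = [], []
    for _ in range(n_episodes):
        state, done, trajectory = env.reset(), False, []
        while not done:
            action = policy(state)
            next_state, done = env.step(action)
            trajectory.append((state, action))
            state = next_state
        rewards = [reward_fn(s) for s, _ in trajectory]
        for t, (s, a) in enumerate(trajectory):
            horizon = min(n_step, len(trajectory) - t)      # truncate at the episode boundary
            target = sum(gamma ** j * rewards[t + j] for j in range(horizon))
            X.append(np.append(featurize(s), a))
            y.append(target)
    return DecisionTreeRegressor().fit(np.array(X), np.array(y))
```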

III-B2 RL Implementation

When implementing Algorithm 2, hyper-parameter tuning was used to select the values of $I$, $E$, $N$, and $M$ that gave the best expected value for random rewards. Of the four parameters, only one resulted in counter-intuitive behavior, with performance degrading when it was set either too small or too large. For the remaining parameters, performance generally increased monotonically with diminishing returns.

Additionally, separate experiments were conducted to evaluate various regression learners for the Q-function fit in Algorithm 2 (line 18). We tested an SVM kernel regressor, a linear regressor, an AdaBoost regressor, and a decision tree regressor. Of these regressors the decision tree performed best when using the same features as those used to calculate the IRL reward kernel.

The initial state and action generation in Algorithm 2 (lines 6-7) allowed for exploring starts. Four exploration heuristics were evaluated: (1) random selection, (2) greedy selection, (3) epsilon-greedy selection, and (4) softmax selection. Of these four methods for exploring starts, softmax selection performed best.
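A minimal sketch of softmax start selection, assuming candidate starts are scored (for example, by the current Q estimate) and then sampled in proportion to a softmax over those scores; the temperature and scoring are illustrative.

```python
# Sketch only: scoring and temperature are illustrative assumptions.
import numpy as np

def softmax_start(candidates, scores, temperature=1.0, rng=np.random.default_rng()):
    """Pick an exploring start by sampling candidates with softmax probabilities over their scores."""
    z = np.asarray(scores, dtype=float) / temperature
    p = np.exp(z - z.max())
    p = p / p.sum()
    return candidates[rng.choice(len(candidates), p=p)]
```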

Finally, three other variations on the algorithm were tested. First, bootstrapping the update target resulted in decreased performance. Second, using an every-visit MC target (cf. [19]) for updates instead of an n-step target gave a small increase in performance for small state spaces but a large decrease in performance for large state spaces. And third, modifying the algorithm to only use value observations from the most recent policy iteration (i.e., clearing $D$ at the start of each outer iteration in Algorithm 2) resulted in decreased performance across the board.

IV Numerical Experiments

Fig. 2: The initial starting positions of the red and blue forces in one of the 36 generated matches. These engagements occurred on flat, open terrain inside an area of 340 by 340 meters.
Fig. 3: Two t-SNE plots were generated. The left represents expert kernel expectations (cf. Equation 2), which we call behavior because they are calculated directly from observed behavior. The right represents reward functions learned from the expert kernel expectations on the left.

Using the above algorithms, three experiments were performed to explore the strengths and weaknesses of using IRL to analyze strategic behavior. To drive these experiments, 36 matches involving combat engagements between two opposing forces (referred to as red and blue) were simulated using a gaming combat simulator. In each match both forces consisted of three fireteams containing three to four AI-controlled players each. Matches always occurred in an open field, and the amount of initial space separating the forces was varied slightly (see Figure 2 for an example starting position in a match). At a low level, players were controlled by the native AI within the combat game. At a high level, forces were nudged to follow one of three strategies: assault, flank, or fallback. The red force always assaulted, while the blue force assaulted 12 times, flanked 13 times, and fell back 11 times. Summary statistics for the data can be found in Table I.

During the 36 simulated matches the location of every AI player, which side they were on, and their health were recorded every 3 seconds. These observations were stored and used during IRL analysis to construct expert state trajectories. Because the IRL algorithm used kernel methods, we needed to define the kernel $k$. In this analysis, $k$ was a similarity kernel that mapped pairs of states to real numbers between 0 and 1, with pairs of similar states mapping closer to 1 and pairs of dissimilar states mapping closer to 0. When determining the similarity of two states, $k$ considered the minimum distances to blue and red forces, the maximum distances to blue and red forces, and the angles of fire between blue and red forces.

In addition, to learn rewards from the above data using the algorithms in this paper, the forward RL problem needed to be solved. To facilitate this, a simplified MDP simulation was developed in which we could take over individual agents and move them at will through any of the 36 pre-recorded matches. Our MDP was far simpler than the original simulation in that it only generated data at the same fidelity as the recorded observations, and the environment did not respond differently to deviations in our controlled agent's behavior.

Despite these huge simplifications, IRL was still able to learn meaningful rewards. The full description of the states, actions, and transition function in the simulated MDP environment is provided in Table II.

Using the 36 recorded data sets from the combat game, the kernel $k$, and the simplified MDP, it was then possible both to empirically estimate the kernel expectation of the expert policy (see Equation 2) in each data set and to learn a reward function which generated a policy similar to the expert's. Using these components, a labeled data set was constructed for the strategy identification experiments. The label was the blue force's high-level strategy directive, behavior was represented as the empirical kernel expectation for the blue force, and reward was represented as the kernel-based reward function learned via IRL for the blue force.

Three experiments were conducted using the strategy-labeled data set: (1) determine whether a t-SNE plot would show visible separation of strategies using either the expert kernel expectations or the reward functions, (2) determine whether unsupervised learning techniques could cluster strategies using either the expert kernel expectations or the reward functions, and (3) determine whether a classifier could be trained to identify strategy given either an expert kernel expectation or a reward function.

To conduct the t-SNE experiment it was necessary to calculate a distance matrix for both the kernel expectations and the reward functions. This was done via the norm induced by $k$. For example, if $\mu_1 = \sum_{i} \alpha_i\, k(s_i, \cdot)$ and $\mu_2 = \sum_{j} \beta_j\, k(s_j, \cdot)$, then the distance between $\mu_1$ and $\mu_2$ was calculated as

$\|\mu_1 - \mu_2\|_{\mathcal{H}} = \sqrt{\langle \mu_1, \mu_1 \rangle - 2\langle \mu_1, \mu_2 \rangle + \langle \mu_2, \mu_2 \rangle},$

where each inner product expands in terms of $k$ (e.g., $\langle \mu_1, \mu_2 \rangle = \sum_{i,j} \alpha_i \beta_j\, k(s_i, s_j)$).
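A sketch of this distance computation, assuming each kernel expectation (or kernel-based reward) is stored as a weighted sum of $k(s_i, \cdot)$ terms as in the earlier estimation sketch; the squared RKHS distance then expands into Gram-matrix terms.

```python
# Sketch: the (states, weights) representation follows our earlier sketch, not the paper's code.
import numpy as np

def rkhs_distance(states_a, w_a, states_b, w_b, kernel):
    def gram(S1, S2):
        return np.array([[kernel(s1, s2) for s2 in S2] for s1 in S1])
    aa = w_a @ gram(states_a, states_a) @ w_a
    bb = w_b @ gram(states_b, states_b) @ w_b
    ab = w_a @ gram(states_a, states_b) @ w_b
    return np.sqrt(max(aa + bb - 2 * ab, 0.0))   # clamp tiny negative values from rounding

def distance_matrix(representations, kernel):
    """representations: list of (states, weights) pairs; returns a symmetric distance matrix."""
    n = len(representations)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            (Si, wi), (Sj, wj) = representations[i], representations[j]
            D[i, j] = D[j, i] = rkhs_distance(Si, wi, Sj, wj, kernel)
    return D
```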

The t-SNE plots generated from the distance matrices seemed to show slight visual improvements in strategy separation for the reward functions over their corresponding expert kernel expectations (see Figure 3).

While care should be taken when interpreting t-SNE plots, it was interesting to see that when visualizing reward functions the flanking strategies were placed between the assault and fallback strategies. This aligned with the data recordings, where assaults tended to be direct charges, fallbacks tended to be direct retreats, and flanking strategies tended to be a mix of strategic approaches and fallbacks.

Fig. 4: Two dendrograms were generated from the expert kernel expectations and their learned reward functions using hierarchical agglomerative clustering. The colors in each plot represent the best separation for 3 clusters (the known right answer). The clusters in the behavior dendrogram show relatively little pattern. In the reward dendrogram, on the other hand, two clear clusters emerge: a fallback cluster and an assault/flanking cluster. The third cluster shows little pattern. These results align with those seen in the t-SNE plots.

To conduct the cluster experiments, square distance matrices were needed once again. As before, these were calculated using the norm induced by $k$. Using the distance matrices, a hierarchical agglomerative clustering algorithm, and a complete linkage function, two dendrograms were generated (see Figure 4). Clear patterns can be seen in the reward function clusters, where 82% of fallback strategies belong to the second cluster and 80% of flank/assault strategies belong to the third cluster.
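A sketch of the clustering step using SciPy's hierarchical clustering with complete linkage on the precomputed distance matrix, cut at three clusters (the number of known strategies); the specific library calls are our choice, not necessarily the authors'.

```python
# Sketch: library choice is ours; D is a precomputed square RKHS distance matrix.
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_strategies(D, n_clusters=3):
    condensed = squareform(D, checks=False)          # condensed form required by scipy's linkage
    Z = linkage(condensed, method="complete")        # complete-linkage agglomerative clustering
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    return Z, labels                                  # scipy.cluster.hierarchy.dendrogram(Z) can plot Z
```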

An interesting pattern that emerged in the cluster analysis was the first cluster of the kernel-based reward functions. At first glance there is no apparent pattern to this cluster, which contains 3 flank, 2 fallback, and 2 assault data sets. However, upon further analysis it was found that these 7 data sets represented all the matches where the red and blue forces were placed in very close proximity before starting the match. Matches that began with forces in close proximity were considerably more chaotic than mid-range and far matches. In close matches units often died quickly, and the underlying combat simulator AI had little time to plan for the strategy nudges that were provided.

For the final experiment a kernel-based SVM classifier was trained using a one-vs-one approach to handle the three classes. The classifier was evaluated on overall prediction accuracy using leave-one-out cross-validation over all training points. Using expert kernel expectations gave an overall accuracy of 61%, while reward functions gave an overall accuracy of 72%. The class-specific breakdowns can be seen in the confusion matrices in Figure 5.
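A sketch of how this evaluation can be set up with scikit-learn, assuming a precomputed Gram matrix K of RKHS inner products (built analogously to the distance sketch above) and the strategy labels; multiclass SVC is one-vs-one by default, and leave-one-out accuracy is computed by cross-validation.

```python
# Sketch: K is assumed to be a precomputed (n_samples, n_samples) Gram matrix of RKHS inner products.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_strategy_accuracy(K, labels):
    clf = SVC(kernel="precomputed")                  # multiclass SVC is one-vs-one by default
    scores = cross_val_score(clf, K, np.asarray(labels), cv=LeaveOneOut())
    return scores.mean()                              # mean of 0/1 leave-one-out predictions = accuracy
```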

The results here followed a similar pattern to the two previous experiments. The classifier trained on rewards outperformed the classifier trained on kernel expectations, and assault and fallback strategies were the easiest to distinguish, while flanking and assaulting were commonly mislabeled. The four reward-classifier mistakes that confused flanking and fallback also belonged to the first cluster in the reward dendrogram. Visual inspection found that these four data sets had many early deaths, making it difficult for the low-level AI to form coherent strategies.

Fig. 5: Two confusion matrices were created: one from a classifier which predicts strategy from expert kernel expectations and one from a classifier which predicts strategy from the learned reward functions. The reward classifier dominated the kernel expectation classifier in all classes except flank.

Finally, one additional experiment was conducted, one that kernel expectations alone cannot support, so there is no comparison to make. Using our learned reward function we replaced one of the AI players in a match to see how closely a reward-generated policy would match the replaced player's trajectory. The result can be seen in Figure 6.

Fig. 6: Shown above is a direct overlay of an expert’s behavior with a policy learned from an IRL reward function. The gradient in the lines represents time with the darker portions of the line being earlier in time and the lighter portions of the line later in time. At the end of the trajectories a divergence can be seen which we believe is largely an artifact of small training datasets.

V Conclusion

In this paper we developed a data-driven technique to define an MDP and learn a reward function that explains observed behavior. To validate this technique we generated 36 simulated engagements within a high-fidelity combat simulator and examined the reward functions learned from observations of the engagements. We showed that by examining the reward functions learned through our technique, more accurate predictions could be made about the strategy that generated the observed behavior.

In the future we plan to extend this method to a more general multi-agent formulation of IRL. Such an extension seems important for problems where many agents interact with varying goals and skill levels. Finally, we hope to provide a more sophisticated explanation for why our direct estimate method seems to outperform other RL algorithms in the limited-resource use case.

Acknowledgment

This work was supported by Systems Engineering, Inc and the Office of Naval Research under grant number N00014-20-C-2011.

References

  • [1] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng (2007) An application of reinforcement learning to aerobatic helicopter flight. Advances in neural information processing systems 19, pp. 1. Cited by: §I.
  • [2] P. Abbeel and A. Y. Ng (2004) Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on machine learning, pp. 1 (en). Cited by: §I, §III-A1.
  • [3] N. Banovic, T. Buzali, F. Chevalier, J. Mankoff, and A. K. Dey (2016) Modeling and understanding human routine behavior. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 248–260. Cited by: §I.
  • [4] J. Jara-Ettinger (2019-10) Theory of mind as inverse reinforcement learning. Current Opinion in Behavioral Sciences 29, pp. 105–110. Cited by: §I.
  • [5] K. Kim and H. S. Park (2018) Imitation learning via kernel mean embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. Cited by: §I.
  • [6] H. Kretzschmar, M. Spies, C. Sprunk, and W. Burgard (2016) Socially compliant mobile robot navigation via inverse reinforcement learning. The International Journal of Robotics Research 35 (11), pp. 1289–1307. Cited by: §I.
  • [7] K. Lee, M. Rucker, W. T. Scherer, P. A. Beling, M. S. Gerber, and H. Kang (2017) Agent-based model construction using inverse reinforcement learning. In 2017 Winter Simulation Conference (WSC), pp. 1264–1275. Cited by: §I.
  • [8] X. Lin, S. C. Adams, and P. A. Beling (2019) Multi-agent inverse reinforcement learning for certain general-sum stochastic games. Journal of Artificial Intelligence Research 66, pp. 473–502. Cited by: §I.
  • [9] X. Lin, P. A. Beling, and R. Cogill (2017) Multiagent inverse reinforcement learning for two-person zero-sum games. IEEE Transactions on Games 10 (1), pp. 56–68. Cited by: §I.
  • [10] K. Muelling, A. Boularias, B. Mohler, B. Schölkopf, and J. Peters (2013) Inverse reinforcement learning for strategy extraction. In MLSA13–Proceedings of ‘Machine Learning and Data Mining for Sports Analytics’, workshop@ ECML/PKDD 2013, pp. 16–26. Cited by: §I.
  • [11] A. Y. Ng, S. J. Russell, et al. (2000) Algorithms for inverse reinforcement learning. In ICML, Vol. 1, pp. 2. Cited by: §I.
  • [12] E. Nouri, K. Georgila, and D. Traum (2012) A cultural decision-making model for negotiation based on inverse reinforcement learning. In Proceedings of the 34th annual conference of the cognitive science society, pp. 2097–2102. Cited by: §I.
  • [13] M. L. Puterman (2014) Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons. Cited by: §I.
  • [14] Q. Qiao and P. A. Beling (2011) Inverse reinforcement learning with Gaussian process. In Proceedings of the 2011 American control conference, pp. 113–118. Cited by: §I.
  • [15] Q. Qiao and P. A. Beling (2013) Recognition of agents based on observation of their sequential behavior. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 33–48. Cited by: §I.
  • [16] A. Raffin, A. Hill, M. Ernestus, A. Gleave, A. Kanervisto, and N. Dormann (2019) Stable baselines3. GitHub. Note: https://github.com/DLR-RM/stable-baselines3 Cited by: §III-B1.
  • [17] M. A. Rucker, L. T. Watson, L. E. Barnes, and M. S. Gerber (2020) Human apprenticeship learning via kernel-based inverse reinforcement learning. arXiv:2002.10904. Cited by: §I, §III-A1, §III-B1.
  • [18] W. T. Scherer, S. Adams, and P. A. Beling (2018) On the practical art of state definitions for Markov decision process construction. IEEE Access 6, pp. 21115–21128. Cited by: §I.
  • [19] S. P. Singh and R. S. Sutton (1996) Reinforcement learning with replacing eligibility traces. Machine Learning 22 (1), pp. 123–158. Cited by: §III-B2.
  • [20] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §I.
  • [21] M. Wulfmeier, D. Rao, D. Z. Wang, P. Ondruska, and I. Posner (2017) Large-scale cost function learning for path planning using deep inverse reinforcement learning. The International Journal of Robotics Research 36 (10), pp. 1073–1087. Cited by: §I.
  • [22] S. Yamaguchi, H. Naoki, M. Ikeda, Y. Tsukada, S. Nakano, I. Mori, and S. Ishii (2018) Identification of animal behavioral strategies by inverse reinforcement learning. PLoS computational biology 14 (5), pp. e1006122. Cited by: §I.
  • [23] S. Y. Yang, Q. Qiao, P. A. Beling, W. T. Scherer, and A. A. Kirilenko (2015) Gaussian process-based algorithmic trading strategy identification. Quantitative Finance 15 (10), pp. 1683–1703. Cited by: §I.
  • [24] S. Y. Yang, Q. Qiao, P. A. Beling, and W. T. Scherer (2014) Algorithmic trading behavior identification using reward learning method. In 2014 International Joint Conference on Neural Networks (IJCNN), pp. 3807–3414. Cited by: §I.
  • [25] C. You, J. Lu, D. Filev, and P. Tsiotras (2019) Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning. Robotics and Autonomous Systems 114, pp. 1–18. Cited by: §I.
  • [26] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey (2008) Maximum entropy inverse reinforcement learning.. In Aaai, Vol. 8, pp. 1433–1438. Cited by: §I.