Refined Continuous Control of DDPG Actors via Parametrised Activation

06/04/2020 · by Mohammed Hossny, et al.

In this paper, we propose enhancing actor-critic reinforcement learning agents by parameterising the final actor layer, which produces the actions, in order to accommodate the behavioural discrepancy of different actuators under different load conditions during interaction with the environment. We propose branching the action-producing layer in the actor to learn the tuning parameters controlling the activation layer (e.g. Tanh and Sigmoid). The learned parameters are then used to create tailored activation functions for each actuator. We ran experiments on three OpenAI Gym environments, i.e. Pendulum-v0, LunarLanderContinuous-v2 and BipedalWalker-v2. Results have shown an average improvement of 23.15% and 33.80% in episode reward for the LunarLanderContinuous-v2 and BipedalWalker-v2 environments, respectively. There was no significant improvement in the Pendulum-v0 environment, but the proposed method produces a more stable actuation signal compared to the state-of-the-art method. The proposed method allows the reinforcement learning actor to produce more robust actions that accommodate the discrepancy in the actuators' response functions. This is particularly useful for real-life scenarios where actuators exhibit different response functions depending on the load and the interaction with the environment. This also simplifies the transfer learning problem by fine-tuning the parameterised activation layers instead of retraining the entire policy every time an actuator is replaced. Finally, the proposed method would allow better accommodation of biological actuators (e.g. muscles) in biomechanical systems.


1 Introduction

Deep reinforcement learning (DRL) has been used in different domains and has achieved good results on tasks such as robotic control, natural language processing and biomechanical control of human models (Kidziński et al., 2020, 2018; Mnih et al., 2015; Kober et al., 2013).

While DRL has proven to handle discrete problems effectively and efficiently, continuous control remains a challenging task. This is because it relies on physical systems which are prone to noise due to wear and tear, overheating, and altered actuator response functions depending on the load each actuator bears; this is most apparent in robotic and biomechanical control problems. In the robotic control domain, for instance, bipedal robots robustly perform articulated motor movements in complex environments and with limited resources. These robust movements are achieved using highly sophisticated model-based controllers. However, the motor characteristics are highly dependent on the load and the interaction with the environment. Thus, adaptive continuous control is required to adapt to new situations.

Biomechanical modelling and simulation present a clearer example. In a biomechanical system, human movement is performed using muscle models (Thelen, 2003; Millard et al., 2013). These models simulate muscle functions, which are complex and depend on multiple parameters, such as muscle maximum velocity, muscle optimal length and muscle maximum isometric force, to name a few (Zajac, 1989).

The common challenge facing the training of DRL agents on continuous action spaces is the flow of the gradient update throughout the network. The current state of the art relies on a single configuration of the activation function producing the actuation signals. However, different actuators exhibit different transfer functions, and noisy feedback from the environment propagates through the entire actor neural network, imposing drastic changes on the learned policy. The solution we propose in this work is to use multiple actuation transfer functions that allow the actor neural network to adaptively modify the actuation response functions to the needs of each actuator.

In this paper, we present a modular perspective of the actor in actor-critic DRL agents and propose modifying the actuation layer to learn the parameters defining the actuation-producing activation functions (e.g. Tanh and Sigmoid). It is important to emphasise the difference between parameterised action spaces and parameterised activation functions. In reinforcement learning, a parameterised action space commonly refers to a discrete action space accompanied by one or more continuous parameters (Masson et al., 2016). It has been used to solve problems such as RoboCup (Kitano et al., 1997), a robot world-cup soccer game (Hausknecht and Stone, 2015). On the other hand, parameterised activation functions, such as PReLU (He et al., 2015) and SeLU (Klambauer et al., 2017), were introduced to combat overfitting and saturation problems. In this paper, we adopt parameterised activation functions to improve the performance of the deep deterministic policy gradient (DDPG) agent and accommodate the complex nature of real-life scenarios. The rest of this paper is organised as follows. Related work is discussed in Section 2. The proposed method is presented in Section 3. Experiments and results are presented in Section 4. Finally, Section 5 concludes and introduces future work.

2 Background

Deep deterministic policy gradient (DDPG) is a widely adopted deep reinforcement learning method for continuous control problems (Lillicrap et al., 2015). A DDPG agent relies on three main components: the actor, the critic and the experience replay buffer (Lillicrap et al., 2015).

In the actor-critic approach (Sutton et al., 1998), the actor neural network reads observations from the environment and produces actuation signals. After training, the actor neural network serves as the controller which allows the agent to navigate the environment safely and to perform the desired tasks. The critic network assesses the anticipated reward based on the current observation and the actor's action. In control terms, the critic network serves as a black-box system identification module which provides guidance for tuning the parameters of a PID controller. The observation, action, estimated reward and next-state observation are stored as an experience in a circular buffer. This buffer serves as a pool of experiences from which samples are drawn to train the actor and the critic neural networks to produce the correct action and estimate the correct reward, respectively.
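The circular experience buffer described above can be sketched as follows; the class and parameter names are illustrative, not taken from the paper:

```python
import random
from collections import deque

# Minimal sketch of a circular experience replay buffer. Each experience
# stores (observation, action, reward, next_observation) as described above;
# once capacity is reached, the oldest experiences are overwritten.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, obs, action, reward, next_obs):
        self.buffer.append((obs, action, reward, next_obs))

    def sample(self, batch_size):
        # Draw a random mini-batch to train the actor and the critic.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly from the pool, rather than replaying episodes in order, is what decorrelates consecutive training examples.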

There are different DDPG variations in the literature. In (Fujimoto et al., 2018), a twin-delayed DDPG (TD3) agent was proposed to limit overestimation by using the minimum value between a pair of critics instead of a single critic. In (Barth-Maron et al., 2018), it was proposed to run DDPG as a distributed process to allow better accumulation of experiences in the experience replay buffer. Other off-policy deep reinforcement learning agents such as the soft actor-critic (SAC) are inspired by DDPG, although they rely on stochastic parameterisation (Haarnoja et al., 2018a, b). In brief, SAC adopts the reparameterisation trick to learn a statistical distribution of actions from which samples are drawn based on the current state of the environment.
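TD3's clipped double-Q idea can be illustrated with a minimal sketch of the bootstrapped target value; the function name and the placeholder reward/value numbers are illustrative, not from the paper:

```python
# Sketch of TD3's clipped double-Q target: the bootstrap uses the minimum
# of two critic estimates to limit overestimation. The critic values are
# plain numbers here, standing in for trained network outputs.
def td3_target(reward, next_q1, next_q2, gamma=0.99, done=False):
    next_q = min(next_q1, next_q2)              # pessimistic value estimate
    return reward + (0.0 if done else gamma * next_q)
```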

2.1 DDPG Challenges

Perhaps the most critical challenge of DDPG, and off-policy agents in general, is sample inefficiency. The main reason behind this challenge is that the actor is updated using gradients calculated by the continuously-training critic neural network. This gradient is noisy because it relies on the outcome of the simulated episodes. Therefore, the presence of outlier scenarios impacts the training of the actor and thus constantly changes the learned policy instead of refining it. This is the main reason off-policy DRL training algorithms require maintaining a copy of the actor and critic neural networks to avoid divergence during training.

While radical changes in the learned policy may provide good exploratory behaviour, they come at the cost of requiring many more episodes to converge. Additionally, it is often recommended to keep controllable exploratory noise parameters separate from the policy, either by inducing actuation noise such as an Ornstein–Uhlenbeck process (Uhlenbeck and Ornstein, 1930) or by maximising the entropy of the learned actuation distribution (Haarnoja et al., 2018a, b). Practically, however, for very specific tasks, as most continuous control tasks are, faster convergence is often a critical aspect to consider. Another challenge, which stems from practical applications, is the fact that actuators are physical systems and are susceptible to having different characterised transfer functions in response to the supplied actuation signals. These characterisation discrepancies are present in almost every control system due to wear and tear, fatigue, overheating, and manufacturing factors. While minimal discrepancies are easily accommodated by a PID controller, they impose a serious problem for deep neural networks. This problem, in return, imposes a serious challenge during deployment and scaling operations.
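As an illustration, the discretised Ornstein–Uhlenbeck process commonly used for DDPG exploration noise can be sketched as follows; the parameter values (theta, sigma, dt) are conventional defaults, not taken from the paper:

```python
import numpy as np

# Discretised Ornstein-Uhlenbeck process: temporally correlated,
# mean-reverting noise added to the actuation signal for exploration.
class OUNoise:
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.x = np.full(size, mu, dtype=float)

    def sample(self):
        # dx = theta*(mu - x)*dt + sigma*sqrt(dt)*N(0, 1)
        dx = self.theta * (self.mu - self.x) * self.dt \
             + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape)
        self.x = self.x + dx
        return self.x
```

Because the noise lives outside the policy, its scale can be annealed or switched off at deployment without touching the learned weights.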

3 Proposed Method

In order to address the aforementioned challenges, we propose parameterising the final activation function to include scaling and translation parameters, α and β. In our case, we used tanh(αx + β) instead of tanh(x) to allow the actor neural network to accommodate the discrepancies of the actuator characteristics by learning α and β. The added learnable parameters empower the actor with two additional degrees of freedom.

Figure 1: Proposed modification to the actor. The final fully connected layer branches into two fully connected layers to learn the α and β parameters of tanh(αx + β).

3.1 Modular Formulation

In a typical DRL agent, an actor consists of several neural network layers. While the weights of all layers collectively serve as a policy, they serve different purposes based on their interaction with the environment. The first layer encodes observations from the environment, and we therefore propose calling it the observer layer. The encoded observations are then fed into several subsequent layers, which we call the policy layers. Finally, the output of the policy layers is usually fed to a single activation function. Throughout this paper, we denote the observer, policy and action parts of the policy neural network as φ, π and f, respectively. We also denote the observation space, the pre-mapped action space and the final action space as S, U and A, respectively. To that end, the data flow from the observation s through the policy to produce an action a can be summarised as:

u = π(φ(s)),    (1)
a = f(u),       (2)

where φ: S → ℝⁿ, π: ℝⁿ → U and f: U → A.
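The observer/policy/action composition of Eqs. 1 and 2 can be sketched numerically; the layer sizes, random weights and ReLU hidden activations below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8 observations, 32 hidden units, 2 actuators.
W_obs = rng.normal(size=(32, 8)) * 0.1    # observer layer  (phi)
W_pol = rng.normal(size=(32, 32)) * 0.1   # policy layers   (pi)
W_act = rng.normal(size=(2, 32)) * 0.1    # action layer pre-activation

def actor(s):
    z = np.maximum(0.0, W_obs @ s)        # phi: encode the observation
    u = np.maximum(0.0, W_pol @ z)        # pi:  pre-mapped action
    return np.tanh(W_act @ u)             # f:   map into the action space A

a = actor(rng.normal(size=8))             # action bounded in (-1, 1) by tanh
```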

In a typical actor, there is no distinction between the observer and policy layers. Also, the actuation layer is simply regarded as the final activation function f, and thus the actor is typically modelled as one multi-layer perceptron (MLP). The problem with having f = tanh (Sigmoid, where σ(x) = 1/(1 + e⁻ˣ), is also a popular activation function) is that it assumes that all actuators in the system exhibit the same actuation-torque characterisation curves under all conditions.

Figure 2: Desired parameterisation of the activation function. Allowing extra degrees of freedom empowers the actor neural network to accommodate outlier scenarios with minimal updates to the actual policy. The results here are from the proposed actor trained and tested on the bipedal walker environment. Colour brightness indicates different stages throughout the episode, from start (bright) to finish (dark).

3.2 Parameterising the Activation Function

Because actuation characterisation curves differ based on each actuator's role and interaction with the environment, using a single activation function forces the feedback provided by the environment to propagate through the gradients of the entire policy. Therefore, we chose a parameterised tanh(αx + β) to model the scaling and the translation of the activation function, and thus the data flow in Eq. 2 can be expanded as:

u = π(φ(s)),              (3)
α = h_α(u),  β = h_β(u),  (4)
a = tanh(α ⊙ u + β),      (5)

where h_α and h_β are simple fully connected layers, ⊙ denotes element-wise multiplication, and f remains an activation function (i.e. tanh), as shown in Fig. 1. Adjusting the activation curves based on the interaction with the environment allows the policy to remain intact and thus leads to more stable training, as discussed in the following section. Figure 2 shows the learned parameterised activation functions for the bipedal walker problem.
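A minimal numerical sketch of the branched head in Eqs. 3–5 follows; the layer shapes, the bounding of α and β with tanh, and the random weight initialisation are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n_act = 2                                  # number of actuators (illustrative)

# Two parallel fully connected branches estimate a scale and a translation
# for each actuator from the pre-mapped action u.
W_alpha = rng.normal(size=(n_act, n_act)) * 0.1   # h_alpha branch
W_beta  = rng.normal(size=(n_act, n_act)) * 0.1   # h_beta branch

def action_head(u):
    alpha = 1.0 + np.tanh(W_alpha @ u)     # bounded scale, one per actuator
    beta  = np.tanh(W_beta @ u)            # bounded translation per actuator
    return np.tanh(alpha * u + beta)       # element-wise tailored tanh

a = action_head(rng.normal(size=n_act))
```

Each actuator thus receives its own effective activation curve, while the observer and policy layers upstream are untouched.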

While automatic differentiation engines are capable of adjusting the flow of gradient updates, there are two implementation considerations to factor into the design of the actor. First, the scale degree of freedom, parameterised by α, affects the slope of the activation function in the case of tanh and sigmoid. A very small α renders tanh(αx + β) almost constant, while a very large α produces a square signal. Both extreme cases impose problems on the gradient calculations. On one hand, a constant signal produces zero gradients and prevents the policy neural network from learning. On the other hand, a square signal produces unstable, exploding gradients. Another problem occurs when α changes sign, which usually changes the behaviour of the produced signals. Therefore, we recommend using a bounded activation function when estimating α.
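The two extremes can be checked numerically: the gradient of tanh(αx) with respect to x (taking β = 0 for simplicity) is α(1 − tanh²(αx)), which vanishes for a tiny α and spikes sharply around x = 0 for a large α:

```python
import numpy as np

# Gradient of tanh(alpha * x) with respect to x.
def grad_tanh_scaled(x, alpha):
    return alpha * (1.0 - np.tanh(alpha * x) ** 2)

x = np.linspace(-2, 2, 401)
g_small = grad_tanh_scaled(x, 1e-3)   # near-constant output -> ~zero gradient
g_large = grad_tanh_scaled(x, 1e3)    # square-like output -> exploding peak at 0
```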

Second, the translation degree of freedom, parameterised by β, allows translating the activation function to an acceptable range, which prevents saturation. However, this may, at least theoretically, allow the policy and observer layers to have monotonically increasing gradients for as long as β can accommodate them. This, in return, may cause an exploding gradient problem. To prevent this, we recommend employing weight normalisation after calculating the gradients (Salimans and Kingma, 2016).
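Weight normalisation reparameterises each weight vector as w = g·v/‖v‖, decoupling its length g from its direction v so the magnitude can be controlled explicitly; a one-vector sketch with illustrative values:

```python
import numpy as np

# Weight normalisation (Salimans & Kingma, 2016) on a single weight vector:
# the resulting w always has Euclidean norm exactly g, whatever v is.
def weight_norm(v, g):
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])       # direction parameter (||v|| = 5)
w = weight_norm(v, g=2.0)      # rescaled weight vector with ||w|| = 2
```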

(a) Inv. Pendulum

(b) Lunar Lander

(c) Bipedal Walk
Figure 3: OpenAI gym environments used for testing.

4 Experiments and Results

In order to test the efficacy and stability of the proposed method, we trained a DDPG agent with and without the proposed learnable activation parameterisation. Both models were trained and tested on three OpenAI Gym environments, shown in Fig. 3: Pendulum-v0, LunarLanderContinuous-v2 and BipedalWalker-v2. For each environment, six checkpoint models were saved (the best model for each seed). The saved models were then tested for 20 trials with new random seeds (10 episodes with 500 steps each). The total number of test episodes is 1200 for each environment. The results for the three environments are tabulated in Tab. 1 and Tab. 2.

4.1 Models and Hyperparameters

The action mapping network is where the proposed and classical models differ. The proposed model branches the final layer into two parallel fully connected layers to infer the α and β parameters of the tanh activation function. The classical model instead adds two more fully connected layers separated by an activation function. The added layers ensure that the number of learnable parameters is the same in both models to guarantee a fair comparison.

Both models were trained on the three environments for the same number of episodes (200 steps each); however, the number of steps may vary depending on early termination cases. The models were trained with different pseudo-random number generator (PRNG) seeds, using a fixed-capacity experience replay buffer. We chose the ADAM optimiser for the back-propagation optimisation and set the learning rate of both the actor and the critic to 1E-3. We set the reward discount factor and the soft update rate of the target neural networks, and also added simple Gaussian noise to allow exploration. During training we saved the best model (i.e. checkpoint). DDPG hyper-parameter tuning is thoroughly articulated in (Lillicrap et al., 2015).
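The soft update of the target networks mentioned above is the standard Polyak averaging rule, θ_target ← τθ + (1 − τ)θ_target; a minimal sketch with an illustrative τ:

```python
import numpy as np

# Polyak soft update used by DDPG to slowly track the online networks.
def soft_update(theta, theta_target, tau=0.005):
    return tau * theta + (1.0 - tau) * theta_target

theta        = np.array([1.0, -2.0])   # online network parameters (toy)
theta_target = np.array([0.0,  0.0])   # target network parameters (toy)
theta_target = soft_update(theta, theta_target, tau=0.5)
```

A small τ keeps the target networks nearly fixed between updates, which is what stabilises the bootstrapped critic targets.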

                           Pendulum-v0         LunarLanderContinuous-v2   BipedalWalker-v2
Classic Tanh               -269.20 ± 167.43    114.25 ± 41.20             125.27 ± 15.78
Learnable Tanh (proposed)  -268.39 ± 166.32    140.69 ± 45.21             167.60 ± 8.45
Improvement                0.30%               23.15%                     33.80%
Table 1: Episode Reward (mean ± std). Higher mean is better.
                           Pendulum-v0         LunarLanderContinuous-v2   BipedalWalker-v2
Classic Tanh               -0.54 ± 0.34        0.40 ± 0.28                0.23 ± 0.09
Learnable Tanh (proposed)  -0.54 ± 0.34        0.67 ± 0.35                0.29 ± 0.16
Improvement                0.26%               65.59%                     26.76%
Table 2: Step Reward (mean ± std). Higher mean is better.

4.2 Inverted Pendulum Results

In the inverted pendulum problem (Fig. 4), the improvement is insignificant because the environment features only one actuator. However, the policy adopted by the proposed agent features a fine balance of actuation signals. In contrast, the classical MLP/Tanh model exerts additional oscillating actuation signals to maintain the achieved optimal state, as shown in Fig. 4(e). This oscillation imposes a wear-and-tear challenge on mechanical systems and fatigue risks in biomechanical systems. While this difference is reflected only minimally in the environment reward, it is often a critical consideration in practical applications.

(a) Training Performance
(b) Step Reward
(c) Epsd. Reward
(d) Observation
(e) Action
Figure 4: Inverted pendulum results. No significant improvement in training performance (a). The best models from both methods reported similar reward progression patterns (b, c, d). The proposed method achieves more stable control, whereas the classic method oscillates actions to maintain control (e).

4.3 Lunar Lander Results

Figure 5 shows the training and reward curves of the lunar landing problem. The instant reward curve demonstrates an interesting behaviour in the first steps. The classic method adopts an energy-conservative policy by shutting down the main throttle and engaging in free falling for a number of steps to a safe margin, and then keeps hovering above ground to figure out a safe landing. The conserved energy contributes to the overall reward at each time step. While this allows for faster reward accumulation, this policy becomes less effective under different initial conditions. Depending on the speed and the angle of attack, the free-falling policy requires additional effort for manoeuvring the vehicle to the landing pad. The proposed agent, on the other hand, accommodates the initial conditions and slows down the vehicle in the opposite direction to the entry angle to maintain a stable orientation, which allows for smoother lateral steering towards a safe landing, as shown in Fig. 6 (a and c). It is worth noting that neither agent performed any type of reasoning or planning. The main difference is the additional degrees of freedom the proposed parameterised activation function offers. These degrees of freedom allow the actor neural network to adopt different response functions to accommodate the initial conditions.

(a) Training Performance
(b) Step Reward
(c) Epsd. Reward
Figure 5: Lunar lander results. There is a significant improvement in training performance at the early stages of training (a). Agents with the proposed method outperform the classical method in terms of reward progression (b and c).

4.4 Bipedal Walker Results

Training and reward curves of the bipedal walking problem are illustrated in Fig. 7. In general, the agent with the proposed action mapping outperforms the classical agent in the training, step and episode reward curves, as shown in Fig. 7(a, b, c). The spikes in the step reward curves show instances where agents lost stability and failed to complete the task. The episode reward curve shows that the proposed method allows the agent to run and complete the task faster. This is due to better coordination between the left and right legs while taking advantage of gravity to minimise effort. This is demonstrated in Fig. 7-d, where the proposed agent maintains pelvis angular velocity and vertical velocity close to zero. This, in return, dedicates the majority of the spent effort towards moving forward. It is also reflected in Fig. 7-e, where the actuation of the proposed agent stabilises faster around zero, thus exploiting the gravitational force. In contrast, the classical agent spends more effort balancing the pelvis and thus takes longer to stabilise its actuation. Finally, the locomotion actuation patterns in Fig. 7-e demonstrate the difference between the adopted policies. The classical agent relies more on locomoting using Knee2, while the proposed agent provides more synergy between joint actuators. This exploitation of gravity during locomotion is an essential key to successful bipedal locomotion as "controlled falling" (Novacheck, 1998).

(a) Landing Trajectory
(b) Position
(c) Actions
(d) Velocity
Figure 6: Lunar lander results. Proposed and classical actors adopt different landing trajectories (a). Actors without the proposed method preserve effort by engaging in free falling to a safe altitude (b-bottom) and then exert more effort to perform a safe landing (c-bottom). Actors with the proposed method decelerate and engage in manoeuvring to a safe landing (b, c, d).
(a) Training Performance
(d) Observations
(e) Actions
Figure 7: Bipedal walker results. There is a significant improvement in training performance (a). Actors with the proposed method outperform the classical method in terms of reward progression (b and c), are more stable (d), and perform the task faster (d) with minimal effort (e).

5 Conclusions

In this paper, we discussed the advantages of adding parameterisation degrees of freedom to the actor in the DDPG actor-critic agent. The proposed method is simple and straightforward, yet it outperforms the classical actor which utilises the standard tanh activation function. The main advantage of the proposed method lies in producing stable actuation signals, as demonstrated in the inverted pendulum and bipedal walker problems. Another advantage, apparent in the lunar landing problem, is the ability to accommodate changes in initial conditions. This research highlights the importance of parameterised activation functions. While the discussed advantages may be minimal for the majority of supervised learning problems, they are essential for dynamic problems addressed by reinforcement learning. This is because reinforcement learning methods, especially off-policy ones, rely on previous experiences during training.

The advantage of the proposed method in the bipedal walking problem, together with the wide variety of activation functions demonstrated in Fig. 2, suggests a promising potential for solving several biomechanics problems where different muscles have different characteristics and response functions. Potential applications include fall detection and prevention (Abobakr et al., 2018), ocular motility and the associated cognitive load and motion sickness (Iskander et al., 2018b, a, 2019; Attia et al., 2018a), as well as intent prediction of pedestrians and cyclists (Saleh et al., 2018, 2020). The stability of training using the parameterised tanh in an actor-critic architecture also shows potential for advancing Generative Adversarial Network (GAN) research for image synthesis (Attia et al., 2018b).

This research can be expanded in several directions. First, the parameterisation of tanh can be extended from being deterministic (as presented in this paper) to a stochastic parameterisation by inferring the distributions of α and β. Second, the separation between the policy and action parts of the actor neural network allows preserving the policy part while fine-tuning only the action part to accommodate actuator characterisation discrepancies due to wear and tear during operation. Finally, the modular characterisation of the actor neural network into observer, policy and action parts invites investigating scheduled training, locking and unlocking these parts alternately to maximise the dedicated function each part of the actor carries out.

References

  • Abobakr et al. (2018) Abobakr, A., Hossny, M., Nahavandi, S., 2018. A skeleton-free fall detection system from depth images using random decision forest. IEEE Systems Journal PP, 1–12. doi:10.1109/JSYST.2017.2780260.
  • Attia et al. (2018a) Attia, M., Hettiarachchi, I., Hossny, M., Nahavandi, S., 2018a. A time domain classification of steady-state visual evoked potentials using deep recurrent-convolutional neural networks, pp. 766–769. doi:10.1109/ISBI.2018.8363685.
  • Attia et al. (2018b) Attia, M., Hossny, M., Zhou, H., Yazdabadi, A., Asadi, H., Nahavandi, S., 2018b. Realistic hair simulator for skin lesion images using conditional generative adversarial network. Preprints 2018100756 doi:10.20944/preprints201810.0756.v1.
  • Barth-Maron et al. (2018) Barth-Maron, G., Hoffman, M., Budden, D., Dabney, W., Horgan, D., Dhruva, T., Muldal, A., Heess, N., Lillicrap, T., 2018. Distributed distributional deterministic policy gradients.
  • Fujimoto et al. (2018) Fujimoto, S., Van Hoof, H., Meger, D., 2018. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477 .
  • Haarnoja et al. (2018a) Haarnoja, T., Zhou, A., Abbeel, P., Levine, S., 2018a.

    Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, in: International Conference on Machine Learning, pp. 1861–1870.

  • Haarnoja et al. (2018b) Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., et al., 2018b. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905 .
  • Hausknecht and Stone (2015) Hausknecht, M., Stone, P., 2015. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143 .
  • He et al. (2015) He, K., Zhang, X., Ren, S., Sun, J., 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, pp. 1026–1034. doi:10.1109/ICCV.2015.123.
  • Iskander et al. (2019) Iskander, J., Attia, M., Saleh, K., Nahavandi, D., Abobakr, A., Mohamed, S., Asadi, H., Khosravi, A., Lim, C.P., Hossny, M., 2019. From car sickness to autonomous car sickness: A review. Transportation research part F: traffic psychology and behaviour 62, 716–726. doi:10.1016/j.trf.2019.02.020.
  • Iskander et al. (2018a) Iskander, J., Hossny, M., Nahavandi, S., 2018a. A review on ocular biomechanic models for assessing visual fatigue in virtual reality. IEEE Access 6, 19345–19361. doi:10.1109/ACCESS.2018.2815663.
  • Iskander et al. (2018b) Iskander, J., Hossny, M., Nahavandi, S., Del Porto, L., 2018b. An ocular biomechanic model for dynamic simulation of different eye movements. Journal of biomechanics 71, 208–216.
  • Kidziński et al. (2018) Kidziński, Ł., Mohanty, S.P., Ong, C.F., Hicks, J.L., Carroll, S.F., Levine, S., Salathé, M., Delp, S.L., 2018. Learning to run challenge: Synthesizing physiologically accurate motion using deep reinforcement learning, in: The NIPS’17 Competition: Building Intelligent Systems. Springer, pp. 101–120.
  • Kidziński et al. (2020) Kidziński, Ł., Ong, C., Mohanty, S.P., Hicks, J., Carroll, S., Zhou, B., Zeng, H., Wang, F., Lian, R., Tian, H., et al., 2020. Artificial intelligence for prosthetics: Challenge solutions, in: The NeurIPS’18 Competition. Springer, pp. 69–128.
  • Kitano et al. (1997) Kitano, H., Asada, M., Kuniyoshi, Y., Noda, I., Osawa, E., Matsubara, H., 1997. RoboCup: A challenge problem for AI. AI magazine 18, 73–73.
  • Klambauer et al. (2017) Klambauer, G., Unterthiner, T., Mayr, A., Hochreiter, S., 2017. Self-normalizing neural networks. arXiv:1706.02515.
  • Kober et al. (2013) Kober, J., Bagnell, J.A., Peters, J., 2013. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research 32, 1238–1274.
  • Lillicrap et al. (2015) Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D., 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 .
  • Masson et al. (2016) Masson, W., Ranchod, P., Konidaris, G., 2016. Reinforcement learning with parameterized actions, in: Thirtieth AAAI Conference on Artificial Intelligence.
  • Millard et al. (2013) Millard, M., Uchida, T., Seth, A., Delp, S.L., 2013. Flexing computational muscle: modeling and simulation of musculotendon dynamics. Journal of biomechanical engineering 135, 021005.
  • Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al., 2015. Human-level control through deep reinforcement learning. Nature 518, 529.
  • Novacheck (1998) Novacheck, T.F., 1998. The biomechanics of running. Gait & posture 7, 77–95.
  • Saleh et al. (2018) Saleh, K., Hossny, M., Nahavandi, S., 2018. Intent prediction of pedestrians via motion trajectories using stacked recurrent neural networks. IEEE Transactions on Intelligent Vehicles 3, 414–424. doi:10.1109/TIV.2018.2873901.
  • Saleh et al. (2020) Saleh, K., Hossny, M., Nahavandi, S., 2020. Spatio-temporal densenet for real-time intent prediction of pedestrians in urban traffic environments. Neurocomputing 386, 317–324. doi:10.1016/j.neucom.2019.12.091.
  • Salimans and Kingma (2016) Salimans, T., Kingma, D., 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks, pp. 901–909.
  • Sutton et al. (1998) Sutton, R.S., Barto, A.G., et al., 1998. Introduction to reinforcement learning. volume 2. MIT press Cambridge.
  • Thelen (2003) Thelen, D.G., 2003. Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults. Journal of biomechanical engineering 125, 70–77.
  • Uhlenbeck and Ornstein (1930) Uhlenbeck, G.E., Ornstein, L.S., 1930. On the theory of the brownian motion. Phys. Rev. 36, 823–841. URL: https://link.aps.org/doi/10.1103/PhysRev.36.823, doi:10.1103/PhysRev.36.823.
  • Zajac (1989) Zajac, F.E., 1989. Muscle and tendon: Properties, models, scaling and application to biomechanics and motor control. Critical reviews in biomedical engineering 17, 359–411.