Deep Reinforcement Learning with Smooth Policy

03/21/2020 ∙ by Qianli Shen, et al. ∙ Georgia Institute of Technology

Deep neural networks have been widely adopted in modern reinforcement learning (RL) algorithms with great empirical successes in various domains. However, the large search space induced by training a neural network requires a significant amount of data, which makes current RL algorithms sample inefficient. Motivated by the fact that many environments with continuous state space have smooth transitions, we propose to learn a policy that behaves smoothly with respect to states. In contrast to policies parameterized by linear/reproducing kernel functions, where simple regularization techniques suffice to control smoothness, for neural network based reinforcement learning algorithms there is no readily available solution to learn a smooth policy. In this paper, we develop a new training framework, Smooth Regularized Reinforcement Learning (SR^2L), where the policy is trained with smoothness-inducing regularization. Such regularization effectively constrains the search space of the learning algorithm and enforces smoothness in the learned policy. We apply the proposed framework to both on-policy (TRPO) and off-policy (DDPG) algorithms. Through extensive experiments, we demonstrate that our method achieves improved sample efficiency.

1 Introduction

Deep reinforcement learning has enjoyed great empirical successes in various domains, including robotics, personalized recommendations, bidding, advertising and games (Levine et al., 2018; Zheng et al., 2018; Zhao et al., 2018; Silver et al., 2017; Jin et al., 2018). At the backbone of its success is the superior approximation power of deep neural networks, which parameterize complex policy, value or state-action value functions. However, the high complexity of deep neural networks makes the search space of the learning algorithm prohibitively large; as a result, training often requires a significant amount of data and suffers from numerous difficulties such as overfitting and training instability (Thrun & Schwartz, 1993; Boyan & Moore, 1995; Zhang et al., 2016).

Reducing the size of the search space while maintaining the network's performance requires special treatment. While one can simply switch to a network of smaller size, numerous empirical studies have shown that small networks often lead to performance degradation and training difficulties. It is commonly believed that training a sufficiently large network (also known as over-parameterization) with suitable regularization (e.g., dropout (Srivastava et al., 2014), orthogonality parameter constraints (Huang et al., 2018)) is the most effective way to adaptively constrain the search space while maintaining the performance benefits of a large network.

For reinforcement learning problems, entropy regularization is one commonly adopted regularization, which is believed to facilitate exploration during learning. Yet in the presence of high uncertainty in the environment and large noise, such regularization might yield poor performance. More recently, Pinto et al. (2017) propose robust adversarial reinforcement learning (RARL), which aims to perform well under uncertainty by training the agent to be robust against an adversarially perturbed environment. However, besides the marginal performance gain, the requirement of learning an additional adversarial policy makes the RARL update computationally expensive and less sample efficient than traditional learning algorithms. Cheng et al. (2019), on the other hand, propose a control regularization that enforces the behavior of the deep policy to be similar to a policy prior, yet designing a good prior often requires a significant amount of domain knowledge.

Different from previous works, we propose a new training framework, Smooth Regularized Reinforcement Learning (SR^2L), for training reinforcement learning algorithms. By promoting smoothness, we effectively reduce the size of the search space when learning the policy network and achieve state-of-the-art sample efficiency. Our goal of promoting smoothness in the policy is motivated by the fact that natural environments with continuous state space often have smooth transitions from state to state, which favors a smooth policy: similar states lead to similar actions. As a concrete example, the MuJoCo environments (Todorov et al., 2012) are systems governed by physical laws, where the optimal policy can be described by a set of differential equations with certain smoothness properties.

Promoting smoothness is particularly important for deep RL, since deep neural networks can be extremely non-smooth due to their high complexity: small changes in a neural network's input can result in significant changes in its output. Such non-smoothness has drawn significant attention in other domains (e.g., image recognition, information security) that use neural networks as part of the decision process (Goodfellow et al., 2014; Kurakin et al., 2016). To train a smooth neural network, we need to employ many tricks in the training process. In the supervised learning setting with i.i.d. data, these include, but are not limited to, batch normalization (Ioffe & Szegedy, 2015), layer normalization (Ba et al., 2016), and orthogonal regularization (Huang et al., 2018). However, most of these existing techniques do not work well in the RL setting, where the training data has complex dependencies. As one significant consequence, current reinforcement learning algorithms often lead to undesirably non-smooth policies.

Our proposed training framework uses a smoothness-inducing regularizer to encourage the output of the policy (the decision) to not change much when a small perturbation is injected into the input of the policy (the observed state). The framework is motivated by local shift sensitivity in the robust statistics literature (Hampel, 1974), and the regularizer can also be considered a measure of the local Lipschitz constant of the policy. We highlight that SR^2L is highly flexible and can be readily adopted into various reinforcement learning algorithms. As concrete examples, we apply SR^2L to the TRPO algorithm (Schulman et al., 2015), which is an on-policy method, where the regularizer directly penalizes non-smoothness of the policy. In addition, we also apply SR^2L to the DDPG algorithm (Lillicrap et al., 2015), which is an off-policy method, where the regularizer penalizes non-smoothness of either the policy or the state-action value function (also known as the Q-function), from which we can further induce a smooth policy.

Our proposed smoothness-inducing regularizer is related to several existing works (Miyato et al., 2018; Zhang et al., 2019; Hendrycks et al., 2019; Xie et al., 2019; Jiang et al., 2019). These works consider similar regularization techniques, but target other applications with different motivations, e.g., semi-supervised learning, unsupervised domain adaptation, and harnessing adversarial examples in image classification.

The rest of the paper is organized as follows: Section 2 introduces the related background; Section 3 introduces our proposed Smooth Regularized Reinforcement Learning (SR^2L) in detail; Section 4 presents numerical experiments on various MuJoCo environments to demonstrate the superior performance of SR^2L.

Notations: We let $\mathbb{B}_d(s, \epsilon) = \{s' : d(s', s) \le \epsilon\}$ denote the $\epsilon$-radius ball measured in metric $d$ centered at point $s$. We use $I_d$ to denote the identity matrix in $d$-dimensional Euclidean space.

2 Background

We consider a Markov Decision Process $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma)$, in which an agent interacts with an environment in discrete time steps. We let $\mathcal{S}$ denote the continuous state space, $\mathcal{A}$ the action space, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ the transition kernel, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ the reward function, $\rho_0$ the initial state distribution, and $\gamma \in (0, 1)$ the discount factor. An agent's behavior is defined by a policy, either stochastic or deterministic. A stochastic policy $\pi: \mathcal{S} \to \mathcal{P}(\mathcal{A})$ maps a state to a probability distribution over the action space $\mathcal{A}$. A deterministic policy $\mu: \mathcal{S} \to \mathcal{A}$ maps a state directly to an action. At each time step $t$, the agent observes its state $s_t$, takes action $a_t$, and receives reward $r(s_t, a_t)$. The agent then transits into the next state $s_{t+1}$ with probability given by the transition kernel $P(\cdot|s_t, a_t)$. The goal of the agent is to find a policy that maximizes the expected discounted reward:
$$\max_{\pi}\;\; \mathbb{E}_{s_0 \sim \rho_0,\, a_t \sim \pi(\cdot|s_t),\, s_{t+1} \sim P(\cdot|s_t, a_t)}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big].$$

One way to solve the above problem is via classical policy gradient algorithms, which estimate the gradient of the expected reward through trajectory samples and update the parameters of the policy by following the estimated gradient. Policy gradient algorithms are known to suffer from high variance of the estimated gradient, which often leads to aggressive updates and unstable training. To address this issue, numerous variants have been proposed. Below we briefly review two popular ones used in practice.
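To make the generic recipe concrete, the sketch below shows a vanilla (REINFORCE-style) policy gradient update estimated from sampled trajectories. It assumes a PyTorch policy that maps a single state to a `torch.distributions` object; all names (`reinforce_update`, the episode format) are illustrative and not part of the algorithms discussed later.

```python
# Minimal REINFORCE-style policy gradient sketch (illustrative, not TRPO/DDPG):
# estimate the gradient of the expected return from sampled trajectories and
# take one gradient ascent step on the policy parameters.
import torch

def reinforce_update(policy, optimizer, episodes, gamma=0.99):
    """policy(s) must return a torch.distributions.Distribution over actions."""
    loss = 0.0
    for states, actions, rewards in episodes:   # states/actions: lists of tensors, rewards: list of floats
        # discounted return-to-go G_t for every time step
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns = torch.tensor(list(reversed(returns)))
        log_probs = torch.stack(
            [policy(s).log_prob(a).sum() for s, a in zip(states, actions)]
        )
        # REINFORCE estimator: E[ sum_t grad log pi(a_t|s_t) * G_t ]
        loss = loss - (log_probs * returns).mean()
    optimizer.zero_grad()
    (loss / len(episodes)).backward()
    optimizer.step()
```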

2.1 Trust Region Policy Optimization (TRPO)

TRPO iteratively improves a parameterized policy $\pi_\theta$ by solving a trust region type optimization problem. Before we describe the algorithm in detail, we need several definitions in place. The value function $V^{\pi}$ and the state-action value function $Q^{\pi}$ are defined by:
$$V^{\pi}(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \,\Big|\, s_0 = s\Big], \qquad Q^{\pi}(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \,\Big|\, s_0 = s,\; a_0 = a\Big],$$
with $a_t \sim \pi(\cdot|s_t)$ and $s_{t+1} \sim P(\cdot|s_t, a_t)$. The advantage function $A^{\pi}$ and the discounted state visitation distribution $d^{\pi}$ (unnormalized) are defined by:
$$A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s), \qquad d^{\pi}(s) = \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{P}(s_t = s).$$

At the $k$-th iteration of TRPO, the policy is updated by solving:
$$\theta_{k+1} = \arg\max_{\theta}\; \mathbb{E}_{s \sim d^{\pi_{\theta_k}},\, a \sim \pi_{\theta_k}(\cdot|s)}\Big[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\, A^{\pi_{\theta_k}}(s, a)\Big] \quad \text{s.t.}\;\; \mathbb{E}_{s \sim d^{\pi_{\theta_k}}}\big[D_{\mathrm{KL}}\big(\pi_{\theta_k}(\cdot|s)\,\|\,\pi_{\theta}(\cdot|s)\big)\big] \le \delta, \tag{1}$$
where $\delta$ is a tuning parameter for controlling the size of the trust region, and $D_{\mathrm{KL}}(P\,\|\,Q) = \int_{\mathcal{X}} P(x) \log\frac{P(x)}{Q(x)}\, dx$ denotes the Kullback-Leibler divergence between two distributions $P, Q$ over support $\mathcal{X}$. For each update, the algorithm: (i) samples trajectories using the current policy $\pi_{\theta_k}$; (ii) approximates $A^{\pi_{\theta_k}}(s, a)$ for each state-action pair by taking the discounted sum of future rewards along the trajectory; (iii) replaces the expectations in (1) by sample approximations, and then solves (1) with the conjugate gradient algorithm.
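The sketch below illustrates step (iii), i.e., the sample approximations of the objective and the KL constraint in (1), assuming a PyTorch policy that returns a batched `torch.distributions.Normal`; the conjugate gradient and line search machinery that actually solves the subproblem is omitted, and the helper name is illustrative.

```python
# Monte-Carlo estimates of the surrogate objective and KL constraint in (1),
# given a mini-batch of states, actions and estimated advantages.
import torch

def surrogate_and_kl(policy, old_policy, states, actions, advantages):
    dist, old_dist = policy(states), old_policy(states)
    # importance ratio pi_theta(a|s) / pi_theta_k(a|s)
    ratio = torch.exp(dist.log_prob(actions).sum(-1)
                      - old_dist.log_prob(actions).sum(-1).detach())
    surrogate = (ratio * advantages).mean()                                # objective of (1)
    kl = torch.distributions.kl_divergence(old_dist, dist).sum(-1).mean()  # constraint of (1)
    return surrogate, kl
```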

2.2 Deep Deterministic Policy Gradient (DDPG)

DDPG uses the actor-critic architecture, where the agent learns a parameterized state-action value function $Q_w$ (also known as the critic) to update the parameterized deterministic policy $\mu_\theta$ (also known as the actor).

DDPG uses a replay buffer, as in Deep Q-Networks (Mnih et al., 2013). The replay buffer is a finite-sized cache. Transitions are sampled from the environment according to the exploration policy, and the tuple $(s_t, a_t, r_t, s_{t+1})$ is stored in the replay buffer. When the replay buffer is full, the oldest samples are discarded. At each time step, $Q_w$ and $\mu_\theta$ are updated by sampling a mini-batch uniformly from the buffer.

Update of the state-action value function. The update of the state-action value function network $Q_w$ depends on the deterministic Bellman equation:
$$Q^{\mu}(s_t, a_t) = \mathbb{E}_{s_{t+1} \sim P(\cdot|s_t, a_t)}\big[r(s_t, a_t) + \gamma\, Q^{\mu}(s_{t+1}, \mu(s_{t+1}))\big]. \tag{2}$$
The expectation depends only on the environment. This means that, unlike TRPO, DDPG is an off-policy method, which can use transitions generated from a different stochastic behavior policy $\beta$ (see Lillicrap et al. (2015) for details). At the $k$-th iteration, we update $Q_w$ by minimizing the associated mean squared Bellman error over transitions sampled from the replay buffer. Specifically, let $Q_{w'}$ and $\mu_{\theta'}$ be a pair of target networks; we set $y_i = r_i + \gamma\, Q_{w'}(s_{i+1}, \mu_{\theta'}(s_{i+1}))$, and then update the critic network:
$$w_{k+1} = \arg\min_{w}\; \frac{1}{N}\sum_{i=1}^{N}\big(y_i - Q_w(s_i, a_i)\big)^2.$$
After both critic and actor networks are updated, we update the target networks by slowly tracking the critic and actor networks:
$$w' \leftarrow \tau w + (1-\tau)\, w', \qquad \theta' \leftarrow \tau \theta + (1-\tau)\, \theta',$$
with $\tau \ll 1$.

Update of policy. The policy network is updated by maximizing the value function using the deterministic policy gradient:
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\beta}}\big[\nabla_{a} Q_w(s, a)\big|_{a = \mu_{\theta}(s)}\, \nabla_{\theta}\mu_{\theta}(s)\big], \tag{3}$$
where $\rho^{\beta}$ denotes the state distribution induced by the behavior policy $\beta$. Similar to updating the critic, we use the mini-batch sampled from the replay buffer to compute an approximation of the gradient in (3) and perform the update:
$$\theta \leftarrow \theta + \eta_a\, \frac{1}{N}\sum_{i=1}^{N} \nabla_{a} Q_w(s_i, a)\big|_{a = \mu_{\theta}(s_i)}\, \nabla_{\theta}\mu_{\theta}(s_i).$$
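A schematic PyTorch version of one DDPG update (critic regression onto the target in (2), deterministic policy gradient (3), and the soft target update) is sketched below; the network and optimizer objects are assumed to be given, and the constants are illustrative.

```python
# One DDPG update step for a sampled mini-batch (sketch).
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, actor_targ, critic_targ,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    s, a, r, s_next = batch                     # tensors sampled from the replay buffer

    # critic update: regress Q_w(s, a) onto the target y in (2)
    with torch.no_grad():
        y = r + gamma * critic_targ(s_next, actor_targ(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # actor update: ascend the deterministic policy gradient (3)
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # target networks slowly track the learned networks
    for targ, net in ((critic_targ, critic), (actor_targ, actor)):
        for p_t, p in zip(targ.parameters(), net.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```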

3 Method

In this section, we present the smoothness-inducing regularizer in its general form and describe its intuition in detail. We also apply the proposed regularizer to popular reinforcement learning algorithms to demonstrate its great adaptability.

3.1 Learning Policy with SR^2L

We first focus on directly learning a smooth policy with the proposed regularizer. We assume that the state space is continuous, i.e., $\mathcal{S} \subseteq \mathbb{R}^{n}$.

For a fixed state $s$ and a policy $\pi_\theta$, SR^2L encourages the outputs $\pi_\theta(\cdot|s)$ and $\pi_\theta(\cdot|\tilde{s})$ to be similar, where the state $\tilde{s}$ is obtained by injecting a small perturbation into the state $s$. We assume the perturbation set is an $\epsilon$-radius ball measured in a metric $d$, which is often chosen to be the $\ell_2$ or $\ell_\infty$ distance: $\mathbb{B}_d(s, \epsilon) = \{\tilde{s} : d(\tilde{s}, s) \le \epsilon\}$. To measure the discrepancy between the outputs of a policy, we adopt a suitable metric function denoted by $\mathcal{D}$. The non-smoothness of policy $\pi_\theta$ at state $s$ is defined in an adversarial manner:
$$\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \mathcal{D}\big(\pi_\theta(\cdot|s), \pi_\theta(\cdot|\tilde{s})\big).$$
To obtain a smooth policy $\pi_\theta$, we encourage smoothness at each state of the entire trajectory. We achieve this by taking expectation with respect to the state visitation distribution induced by the policy, and our smoothness-inducing regularizer is defined by:
$$\mathcal{R}_s(\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \mathcal{D}\big(\pi_\theta(\cdot|s), \pi_\theta(\cdot|\tilde{s})\big)\Big]. \tag{4}$$

For a stochastic policy $\pi_\theta$, we set the metric $\mathcal{D}$ to be the Jeffrey's divergence, and the regularizer takes the form
$$\mathcal{R}_s(\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} D_J\big(\pi_\theta(\cdot|s)\,\|\,\pi_\theta(\cdot|\tilde{s})\big)\Big], \tag{5}$$
where the Jeffrey's divergence for two distributions $P$ and $Q$ is defined by:
$$D_J(P \,\|\, Q) = D_{\mathrm{KL}}(P \,\|\, Q) + D_{\mathrm{KL}}(Q \,\|\, P). \tag{6}$$

For a deterministic policy $\mu_\theta$, we set the metric $\mathcal{D}$ to be the squared $\ell_2$ norm of the difference:
$$\mathcal{R}_s(\theta) = \mathbb{E}_{s \sim d^{\mu_\theta}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \|\mu_\theta(s) - \mu_\theta(\tilde{s})\|_2^2\Big]. \tag{7}$$
Figure 1: Smoothness of the policy at state $s$. If the policy is smooth at state $s$, then a perturbed state $\tilde{s}$ leads to an action $\tilde{a}$ similar to the original action $a$. If the policy is non-smooth at state $s$, then the perturbed state leads to a drastically different action $\tilde{a}$.

The smoothness-inducing adversarial regularizer essentially measures the local Lipschitz continuity of the policy $\pi_\theta$ under the metric $\mathcal{D}$. More precisely, we encourage the output (decision) of $\pi_\theta$ to not change much if we inject a small perturbation, bounded in metric $d$, into the state (see Figure 1). Therefore, by adding the regularizer (4) to the policy update, we encourage $\pi_\theta$ to be smooth within the neighborhoods of all states on all possible trajectories of the sampling policy. Such a smoothness-inducing property is particularly helpful for preventing overfitting and improving sample efficiency and overall training stability.
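As an illustration, the sketch below approximates the inner maximization in (4) with a few steps of projected gradient ascent over an $\ell_\infty$ ball, using the squared $\ell_2$ discrepancy of a deterministic policy as in (7); swapping in the Jeffrey's divergence of (5) only changes the discrepancy function. This is a minimal sketch assuming a PyTorch policy network `mu`; the step count and step size are illustrative, not the values used in the paper.

```python
# Approximate the smoothness-inducing regularizer (4) for a batch of states.
import torch

def smoothness_regularizer(mu, states, eps=0.05, steps=5, step_size=0.01):
    def discrepancy(s, s_pert):
        return ((mu(s) - mu(s_pert)) ** 2).sum(dim=-1)   # metric (7), per state

    # random start inside the l_inf ball around each state
    delta = (2 * torch.rand_like(states) - 1) * eps
    for _ in range(steps):
        delta.requires_grad_(True)
        obj = discrepancy(states, states + delta).sum()
        grad, = torch.autograd.grad(obj, delta)
        with torch.no_grad():
            # gradient ascent step followed by projection back onto the ball
            delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
    # the regularizer is minimized through mu's parameters; delta is held fixed
    return discrepancy(states, states + delta.detach()).mean()
```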

TRPO with SR^2L (TRPO-SR). We now apply the proposed smoothness-inducing regularizer to the TRPO algorithm, which is an on-policy method. Since TRPO uses a stochastic policy, we use the Jeffrey's divergence to penalize the discrepancy between the decisions for the original state and the adversarially perturbed state, as suggested in (5).

Specifically, TRPO with smoothness-inducing regularization updates the policy by solving the following subproblem at the $k$-th iteration:
$$\theta_{k+1} = \arg\max_{\theta}\; \mathbb{E}_{s \sim d^{\pi_{\theta_k}},\, a \sim \pi_{\theta_k}(\cdot|s)}\Big[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\, A^{\pi_{\theta_k}}(s, a)\Big] - \lambda_s\, \mathbb{E}_{s \sim d^{\pi_{\theta_k}}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} D_J\big(\pi_{\theta}(\cdot|s)\,\|\,\pi_{\theta}(\cdot|\tilde{s})\big)\Big]$$
$$\text{s.t.}\;\; \mathbb{E}_{s \sim d^{\pi_{\theta_k}}}\big[D_{\mathrm{KL}}\big(\pi_{\theta_k}(\cdot|s)\,\|\,\pi_{\theta}(\cdot|s)\big)\big] \le \delta, \tag{8}$$
where $\lambda_s > 0$ is the regularization coefficient.

3.2 Learning Q-function with Smoothness-inducing Regularization

The proposed smoothness-inducing regularizer can also be used to learn a smooth Q-function, which can in turn be used to generate a smooth policy.

We measure the non-smoothness of a Q-function $Q_w$ at a state-action pair $(s, a)$ by the squared difference of the state-action value between the normal state and the adversarially perturbed state:
$$\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \big(Q_w(s, a) - Q_w(\tilde{s}, a)\big)^2.$$
To enforce smoothness at every state-action pair, we take expectation with respect to the entire trajectory, and the smoothness-inducing regularizer takes the form
$$\mathcal{R}_s^{Q}(w) = \mathbb{E}_{s \sim \rho^{\beta},\, a \sim \beta(\cdot|s)}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \big(Q_w(s, a) - Q_w(\tilde{s}, a)\big)^2\Big],$$
where $\beta$ denotes the behavior policy used for sampling in the off-policy training setting.

DDPG with SR^2L. We now apply the proposed smoothness-inducing regularizer to the DDPG algorithm, which is an off-policy method. Since DDPG uses two networks, the actor network and the critic network, we propose two variants of DDPG, where the regularizer is applied to either the actor or the critic network.

Regularizing the Actor Network (DDPG-SR-A). We can directly penalize the non-smoothness of the actor network to promote a smooth policy in DDPG. Since DDPG uses a deterministic policy $\mu_\theta$, when updating the actor network we penalize the squared difference as suggested in (7) and minimize the following objective:
$$L_{\mathrm{SR}}(\theta) = -\mathbb{E}_{s \sim \rho^{\beta}}\big[Q_w(s, \mu_\theta(s))\big] + \lambda_s\, \mathbb{E}_{s \sim \rho^{\beta}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \|\mu_\theta(s) - \mu_\theta(\tilde{s})\|_2^2\Big].$$
The policy gradient can be written (over a mini-batch) as:
$$\nabla_\theta L_{\mathrm{SR}}(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\Big[-\nabla_a Q_w(s_i, a)\big|_{a = \mu_\theta(s_i)}\, \nabla_\theta\mu_\theta(s_i) + \lambda_s\, \nabla_\theta\|\mu_\theta(s_i) - \mu_\theta(\tilde{s}_i)\|_2^2\Big],$$
with $\tilde{s}_i = \arg\max_{\tilde{s} \in \mathbb{B}_d(s_i, \epsilon)} \|\mu_\theta(s_i) - \mu_\theta(\tilde{s})\|_2^2$ for $i = 1, \ldots, N$.

Regularizing the Critic Network (DDPG-SR-C). Since DDPG simultaneously learns a Q-function (critic network) to update the policy (actor network), inducing smoothness in the critic network can also help us generate a smooth policy. By incorporating the proposed regularizer penalizing the Q-function, we obtain the following update for inducing a smooth Q-function in DDPG:
$$w_{k+1} = \arg\min_{w}\; \frac{1}{N}\sum_{i=1}^{N}\Big[\big(y_i - Q_w(s_i, a_i)\big)^2 + \lambda_s \max_{\tilde{s} \in \mathbb{B}_d(s_i, \epsilon)} \big(Q_w(s_i, a_i) - Q_w(\tilde{s}, a_i)\big)^2\Big],$$
where $\{(s_i, a_i, r_i, s_{i+1})\}_{i=1}^{N}$ is the mini-batch sampled from the replay buffer and $y_i$ is the target value defined in Section 2.2.
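A sketch of the resulting regularized critic loss is given below, assuming PyTorch networks and an `adversarial_state` helper standing in for the projected gradient ascent routine of Section 3.3; the helper name and constants are illustrative.

```python
# Regularized critic loss for DDPG-SR-C (sketch): mean squared Bellman error
# plus lambda_s times the approximately maximized squared change of Q under a
# state perturbation.
import torch
import torch.nn.functional as F

def sr_c_critic_loss(critic, critic_targ, actor_targ, batch,
                     adversarial_state, gamma=0.99, lam_s=0.1):
    s, a, r, s_next = batch
    with torch.no_grad():
        y = r + gamma * critic_targ(s_next, actor_targ(s_next))
    bellman = F.mse_loss(critic(s, a), y)

    s_adv = adversarial_state(critic, s, a)          # approximate inner maximizer
    smooth = ((critic(s, a) - critic(s_adv, a)) ** 2).mean()
    return bellman + lam_s * smooth
```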

3.3 Solving the Min-max Problem

Adding the smoothness-inducing regularizer to the policy/Q-function update involves solving a min-max problem. Though the inner maximization problem is not concave, a simple stochastic gradient algorithm has been shown to solve it efficiently in practice. Below we describe how to perform the TRPO-SR update, including solving the corresponding min-max problem; the details are summarized in Algorithm 1. We leave the detailed descriptions of DDPG-SR-A and DDPG-SR-C to the appendix.

  Input: step sizes $\eta_s$ and $\eta_\theta$ for the inner maximization and the policy update, number of iterations $T_s$ for inner optimization, number of iterations $K$ for policy updates, perturbation strength $\epsilon$, regularization coefficient $\lambda_s$.
  Initialize: randomly initialize the policy network $\pi_{\theta_0}$.
  for $k = 0, 1, \ldots, K-1$ do
     Sample trajectory $\{(s_i, a_i, r_i)\}_{i=1}^{N}$ from current policy $\pi_{\theta_k}$.
     Estimate advantage function $\hat{A}^{\pi_{\theta_k}}$ using sample approximation.
     for $i = 1, \ldots, N$ do
        Randomly initialize $\tilde{s}_i \in \mathbb{B}_d(s_i, \epsilon)$.
        for $t = 1, \ldots, T_s$ do
           $\tilde{s}_i \leftarrow \tilde{s}_i + \eta_s \nabla_{\tilde{s}_i} D_J\big(\pi_{\theta_k}(\cdot|s_i)\,\|\,\pi_{\theta_k}(\cdot|\tilde{s}_i)\big)$.
           $\tilde{s}_i \leftarrow \Pi_{\mathbb{B}_d(s_i, \epsilon)}(\tilde{s}_i)$.
        end for
        Set $\tilde{s}_i$ as the approximate inner maximizer for state $s_i$.
     end for
     Update $\theta_{k+1}$ by solving the regularized trust region subproblem (8), with the inner maximization evaluated at $\{\tilde{s}_i\}_{i=1}^{N}$.
  end for
Algorithm 1 Trust Region Policy Optimization with Smoothness-inducing Regularization.

4 Experiment

We apply the proposed training framework to two popular reinforcement learning algorithms: TRPO and DDPG. Both algorithms have become standard routines for solving large-scale control tasks and are building blocks of many state-of-the-art reinforcement learning algorithms. For TRPO, we directly learn a smooth policy; for DDPG, we promote smoothness in either the actor (policy) or the critic (Q-function).

4.1 Implementation

Figure 2: OpenAI Gym MuJoCo benchmarks.

Our implementation of the SR^2L training framework is based on the open source toolkit garage (garage contributors, 2019). We test our algorithms on OpenAI Gym (Brockman et al., 2016) control environments with the MuJoCo (Todorov et al., 2012) physics simulator. For all tasks, we use a multi-layer feedforward network to parameterize the policy and the Q-function. For a fair comparison, except for the hyper-parameters related to the smoothness regularizer, we keep all hyper-parameters the same as in the original garage implementation. We use grid search to select the hyper-parameters of the smoothness-inducing regularizer (perturbation strength $\epsilon$, regularization coefficient $\lambda_s$). To solve the inner maximization problem in the update, we run a fixed number of projected gradient ascent steps. For each algorithm and each environment, we train 10 policies with different initializations for 500 iterations (1K environment steps per iteration).
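For concreteness, a minimal sketch of such a grid search over $(\epsilon, \lambda_s)$ is shown below; `train_and_evaluate` is a hypothetical helper standing in for one full training run that returns the mean cumulative reward, and the grids and seed count are assumptions.

```python
# Illustrative hyper-parameter sweep over (epsilon, lambda_s).
from itertools import product

def grid_search(train_and_evaluate, eps_grid, lam_grid, seeds=range(10)):
    results = {}
    for eps, lam in product(eps_grid, lam_grid):
        rewards = [train_and_evaluate(eps=eps, lam_s=lam, seed=s) for s in seeds]
        results[(eps, lam)] = sum(rewards) / len(rewards)
    # return the configuration with the best average reward, plus all results
    return max(results, key=results.get), results
```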

Below we briefly describe the environments we use to evaluate our algorithms (See also Figure 2).

Swimmer. The swimmer is a planar robot consisting of a single torso with three links and two actuated joints, immersed in a viscous container. The 8-dimensional state space includes the positions and velocities of the sliders and the angles and angular velocities of the hinges. The 2-dimensional action space consists of the torques applied to the actuated joints.

HalfCheetah. The half-cheetah is a planar biped robot with rigid links, including two legs and a torso, along with 6 actuated joints. The 17-dimensional state space includes the positions and velocities of the sliders and the angles and angular velocities of the hinges. The 6-dimensional action space consists of the torques applied to the actuated joints.

Walker2D. The walker is a planar biped robot consisting of rigid links, corresponding to two legs and a torso, along with 6 actuated joints. The 17-dimensional state space includes the positions and velocities of the sliders and the angles and angular velocities of the hinges. The 6-dimensional action space consists of the torques applied to the actuated joints.

Hopper. The hopper is a planar monopod robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. The 11-dimensional state space includes the positions and velocities of the sliders and the angles and angular velocities of the hinges. The 3-dimensional action space consists of the torques applied to the actuated joints.

Ant. The ant is a quadruped robot consisting of rigid links, corresponding to four legs and a torso, along with 8 actuated joints. The state space includes the positions and velocities of the sliders and the angles and angular velocities of the hinges. The 8-dimensional action space consists of the torques applied to the actuated joints.

4.2 Evaluating the Learned Policies

TRPO with SR^2L (TRPO-SR). We use a Gaussian policy in our implementation. Specifically, for a given state $s$, the action follows a Gaussian distribution $a \sim \mathcal{N}(\mu_\theta(s), \Sigma)$, where the covariance $\Sigma$ is also a learnable parameter. Then the smoothness-inducing regularizer (5) takes the form:
$$\mathcal{R}_s(\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}}\Big[\max_{\tilde{s} \in \mathbb{B}_d(s, \epsilon)} \big(\mu_\theta(s) - \mu_\theta(\tilde{s})\big)^{\top}\Sigma^{-1}\big(\mu_\theta(s) - \mu_\theta(\tilde{s})\big)\Big].$$
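Since the two Gaussians share the covariance, the Jeffrey's divergence reduces to a Mahalanobis distance between the two mean vectors, which is cheap to compute inside the inner maximization. Below is a small sketch assuming a diagonal covariance parameterized by a learnable log standard deviation (an assumption about the parameterization, for illustration only).

```python
# Closed-form Jeffrey's divergence between N(mu1, Sigma) and N(mu2, Sigma)
# with a shared diagonal covariance, as used by the Gaussian-policy regularizer.
import torch

def jeffreys_gaussian(mu1, mu2, log_std):
    inv_var = torch.exp(-2.0 * log_std)              # Sigma^{-1} for a diagonal Sigma
    return ((mu1 - mu2) ** 2 * inv_var).sum(dim=-1)  # (mu1-mu2)^T Sigma^{-1} (mu1-mu2)
```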

Figure 3: Learning curves (mean ± standard deviation) for TRPO-SR (orange) trained policies versus the TRPO (blue) baseline when tested in the clean environment. For all the tasks, TRPO-SR achieves a better mean reward than TRPO, with a reduction of variance in the initial stage.
Figure 4: Learning curves (mean ± standard deviation) for DDPG-SR-A (orange) and DDPG-SR-C (green) trained policies versus the DDPG (blue) baseline when tested in the clean environment. For all the tasks, DDPG-SR-A(C) trained policies achieve better mean reward compared to DDPG.
Figure 5: Percentile plots for TRPO-SR (orange) trained policies versus the TRPO (blue) baseline when tested in the clean environment. Algorithms are run on ten different initializations and sorted to show the percentiles of cumulative reward. For all the tasks, TRPO-SR achieves better or competitive best performance, and better worst case performance, compared with the TRPO baseline.
Figure 6: Percentile plots for DDPG-SR-A (orange) and DDPG-SR-C (green) trained policies versus the DDPG (blue) baseline when tested in the clean environment. Algorithms are run with different initializations and sorted to show the percentiles of cumulative reward. For all the tasks, DDPG-SR-A(C) achieves better or competitive best performance, and significantly better worst case performance, compared with the DDPG baseline.

Figure 3 shows the mean and variance of the cumulative reward (over 10 policies) for policies trained by TRPO-SR and TRPO on Swimmer, HalfCheetah, Hopper and Ant. For all four tasks, TRPO-SR learns a better policy in terms of the mean cumulative reward. In addition, TRPO-SR enjoys a smaller variance of the cumulative reward with respect to different initializations. These two observations confirm that our smoothness-inducing regularization improves sample efficiency as well as training stability.

We further show that the advantage of our proposed training framework goes beyond improving the mean cumulative reward. To show this, we run the algorithm with different initializations, sort the cumulative rewards of the learned policies, and plot the percentiles in Figure 5.

  • For all four tasks, TRPO-SR uniformly outperforms the baseline TRPO.

  • For the Swimmer and HalfCheetah tasks, TRPO-SR significantly improves the worst-case performance compared to TRPO, and has similar best-case performance.

  • For the Walker2D and Ant tasks, TRPO-SR significantly improves the best-case performance compared to TRPO.

Our empirical results provide strong evidence that the proposed SR^2L not only improves the average reward, but also makes the training process significantly more robust to failure cases compared to the baseline method.

DDPG with SR^2L. We repeat the same evaluations for applying the proposed framework to DDPG (DDPG-SR-A and DDPG-SR-C). Figure 4 shows the mean and variance of the cumulative reward for policies trained by DDPG-SR-A and DDPG-SR-C in the HalfCheetah, Hopper, Walker2D and Ant environments. For all four tasks, DDPG-SR learns a better policy in terms of mean reward. For the Ant task, DDPG-SR-A shows superior training stability; it is the only algorithm without a drastic decay in the initial training stage. In addition, DDPG-SR-C shows competitive performance compared to DDPG-SR-A, and significantly outperforms DDPG-SR-A and DDPG on the HalfCheetah task. This shows that instead of directly learning a smooth policy, we can instead learn a smooth Q-function and obtain similar performance benefits.

Figure 6 plots the percentiles of cumulative reward of learned policies using DDPG and DDPG-SR. Similar to TRPO-SR, both DDPG-SR-A and DDPG-SR-C uniformly outperform the baseline DDPG for all the reward percentiles. DDPG-SR is able to significantly improve the worst-case performance, while maintaining competitive best-case performance compared to DDPG.

4.3 Robustness with Disturbance

We demonstrate that even though the training framework does not explicitly target robustness, the trained policy is still able to achieve robustness against both stochastic and adversarial measurement errors, which is a classical setting considered in partially observable Markov decision processes (POMDPs) (Monahan, 1982). To show this, we evaluate the robustness of the proposed training framework in the Swimmer and HalfCheetah environments. We evaluate the trained policy under two types of disturbances in the test environment: for a given state $s$, we perturb it with either (i) a random disturbance sampled uniformly from the disturbance set, or (ii) an adversarial disturbance, generated by approximately maximizing the discrepancy between the policy outputs at the original and perturbed states over the disturbance set, using several steps of projected gradient ascent. For each policy and disturbed environment, we perform multiple stochastic rollouts to evaluate the policy and report the cumulative reward.
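A sketch of this evaluation protocol is given below, assuming the classic gym reset/step interface; the `adversarial_fn` argument stands in for the projected gradient ascent disturbance, and all names are illustrative.

```python
# Roll out a trained policy while the observed state is corrupted by either a
# uniform random perturbation or an adversarially generated one.
import numpy as np

def rollout_with_disturbance(env, act, eps, adversarial_fn=None, max_steps=1000):
    """act: state -> action; adversarial_fn: (act, state, eps) -> perturbed state."""
    state, total_reward = env.reset(), 0.0
    for _ in range(max_steps):
        if adversarial_fn is not None:
            observed = adversarial_fn(act, state, eps)
        else:
            observed = state + np.random.uniform(-eps, eps, size=state.shape)
        state, reward, done, _ = env.step(act(observed))
        total_reward += reward
        if done:
            break
    return total_reward
```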

To evaluate the robustness of TRPO with SR^2L, we run both the baseline TRPO and TRPO-SR in the Swimmer environment. Figure 7 plots the cumulative reward against the disturbance strength. We see that for both random and adversarial disturbances, increasing the strength of the disturbance decreases the cumulative reward of the learned policies. On the other hand, TRPO-SR clearly achieves improved robustness against perturbations, as its reward declines much more slowly than that of the baseline TRPO.

Figure 7: Cumulative reward (mean ± standard deviation over multiple rollouts) of TRPO-SR (orange) trained policies versus the TRPO (blue) baseline when tested in the disturbed environment. The random disturbance is sampled uniformly from the disturbance set; the adversarial disturbance belongs to the same set. TRPO-SR trained policies achieve a slower decline in performance than TRPO as the disturbance strength increases, and a significant reduction of variance under large disturbance strength.
Figure 8: Cumulative reward (mean ± standard deviation over multiple rollouts) of DDPG-SR-A (orange) and DDPG-SR-C (green) trained policies versus the DDPG (blue) baseline when tested in the disturbed environment. The random disturbance is sampled uniformly from the disturbance set; the adversarial disturbance belongs to the same set. As the disturbance strength increases, DDPG-SR-A and DDPG-SR-C trained policies achieve a slower decline in performance than DDPG.

To evaluate the robustness of DDPG with SR^2L, we run the baseline DDPG, DDPG-SR-A and DDPG-SR-C in the HalfCheetah environment. Figure 8 plots the cumulative reward against the disturbance strength. We see that incorporating the proposed smoothness-inducing regularizer into either the actor or the critic network improves the robustness of the learned policy against state disturbances.

4.4 Sensitivity to Hyper-parameters

The proposed smoothness-inducing regularizer involves two hyper-parameters: the coefficient of the regularizer $\lambda_s$ and the perturbation strength $\epsilon$. We vary the choices of $\lambda_s$ and $\epsilon$ and plot a heatmap of the cumulative reward for each configuration in Figure 9. In principle, $\lambda_s$ and $\epsilon$ both control the strength of the regularization in a similar way: increasing either one strengthens the regularization and advocates more smoothness, while decreasing either one weakens the regularization and favors cumulative reward over smoothness. We observe consistent behavior in Figure 9: a relatively small $\lambda_s$ with a large $\epsilon$ yields performance similar to a relatively large $\lambda_s$ with a small $\epsilon$.

Figure 9: Heatmap of cumulative reward of TRPO-SR trained policies with different $\lambda_s$ and $\epsilon$. Each policy is trained for 2M environment steps.

5 Conclusion

We develop a novel regularization-based training framework, SR^2L, to learn a smooth policy in reinforcement learning. The proposed regularizer encourages the learned policy to produce similar decisions for similar states. It can be applied to either induce smoothness in the policy directly or induce smoothness in the Q-function, and thus enjoys wide applicability. We demonstrate the effectiveness of SR^2L by applying it to two popular reinforcement learning algorithms, TRPO and DDPG. Our empirical results show that SR^2L improves the sample efficiency and training stability of current algorithms. In addition, the induced smoothness in the learned policy also improves robustness against both random and adversarial perturbations to the state.

References

Appendix A Appendix

We present two variants of DDPG with the proposed smoothness-inducing regularizer. The first variant, DDPG-SR-A, directly learns a smooth policy with a regularizer that measures the non-smoothness of the actor network (policy). The second variant, DDPG-SR-C, learns a smooth Q-function with a regularizer that measures the non-smoothness of the critic network (Q-function). We present the details of DDPG-SR-A and DDPG-SR-C in Algorithm 2 and Algorithm 3, respectively.

  Input: step size $\tau$ for target networks, coefficient of regularizer $\lambda_s$, perturbation strength $\epsilon$, number of iterations $T_s$ to solve the inner optimization problem, number of training steps $T$, number of training episodes $M$, step size $\eta_s$ for inner maximization, step sizes $\eta_c, \eta_a$ for updating the critic/actor networks.
  Initialize: randomly initialize the critic network $Q_w$ and the actor network $\mu_\theta$, initialize target networks $Q_{w'}$ and $\mu_{\theta'}$ with $w' \leftarrow w$ and $\theta' \leftarrow \theta$, initialize replay buffer $\mathcal{R}$.
  for episode $= 1, \ldots, M$ do
     Initialize a random process $\mathcal{N}$ for action exploration.
     Observe initial state $s_1$.
     for $t = 1, \ldots, T$ do
        Select action $a_t = \mu_\theta(s_t) + \mathcal{N}_t$, where $\mathcal{N}_t$ is the exploration noise.
        Take action $a_t$, receive reward $r_t$ and observe the new state $s_{t+1}$.
        Store transition $(s_t, a_t, r_t, s_{t+1})$ into the replay buffer $\mathcal{R}$.
        Sample a mini-batch of $N$ transitions $\{(s_i, a_i, r_i, s_{i+1})\}_{i=1}^{N}$ from the replay buffer $\mathcal{R}$.
        Set $y_i = r_i + \gamma\, Q_{w'}(s_{i+1}, \mu_{\theta'}(s_{i+1}))$ for $i = 1, \ldots, N$.
        Update the critic network: $w \leftarrow w - \eta_c \nabla_w \frac{1}{N}\sum_{i=1}^{N}\big(y_i - Q_w(s_i, a_i)\big)^2$.
        for $i = 1, \ldots, N$ do
           Randomly initialize $\tilde{s}_i \in \mathbb{B}_d(s_i, \epsilon)$.
           for $j = 1, \ldots, T_s$ do
              $\tilde{s}_i \leftarrow \Pi_{\mathbb{B}_d(s_i, \epsilon)}\big(\tilde{s}_i + \eta_s \nabla_{\tilde{s}_i}\|\mu_\theta(s_i) - \mu_\theta(\tilde{s}_i)\|_2^2\big)$.
           end for
           Set $\tilde{s}_i$ as the approximate inner maximizer.
        end for
        Update the actor network:
           $\theta \leftarrow \theta + \eta_a \frac{1}{N}\sum_{i=1}^{N}\Big[\nabla_a Q_w(s_i, a)\big|_{a = \mu_\theta(s_i)}\nabla_\theta \mu_\theta(s_i) - \lambda_s \nabla_\theta \|\mu_\theta(s_i) - \mu_\theta(\tilde{s}_i)\|_2^2\Big]$.
        Update the target networks:
           $w' \leftarrow \tau w + (1 - \tau)\, w'$, $\quad \theta' \leftarrow \tau \theta + (1 - \tau)\, \theta'$.
     end for
  end for
Algorithm 2 DDPG with smoothness-inducing regularization on the actor network (DDPG-SR-A).
  Input: step size $\tau$ for target networks, coefficient of regularizer $\lambda_s$, perturbation strength $\epsilon$, number of iterations $T_s$ to solve the inner optimization problem, number of training steps $T$, number of training episodes $M$, step size $\eta_s$ for inner maximization, step sizes $\eta_c, \eta_a$ for updating the critic/actor networks.
  Initialize: randomly initialize the critic network $Q_w$ and the actor network $\mu_\theta$, initialize target networks $Q_{w'}$ and $\mu_{\theta'}$ with $w' \leftarrow w$ and $\theta' \leftarrow \theta$, initialize replay buffer $\mathcal{R}$.
  for episode $= 1, \ldots, M$ do
     Initialize a random process $\mathcal{N}$ for action exploration.
     Observe initial state $s_1$.
     for $t = 1, \ldots, T$ do
        Select action $a_t = \mu_\theta(s_t) + \mathcal{N}_t$, where $\mathcal{N}_t$ is the exploration noise.
        Take action $a_t$, receive reward $r_t$ and observe the new state $s_{t+1}$.
        Store transition $(s_t, a_t, r_t, s_{t+1})$ into the replay buffer $\mathcal{R}$.
        Sample a mini-batch of $N$ transitions $\{(s_i, a_i, r_i, s_{i+1})\}_{i=1}^{N}$ from the replay buffer $\mathcal{R}$.
        Set $y_i = r_i + \gamma\, Q_{w'}(s_{i+1}, \mu_{\theta'}(s_{i+1}))$ for $i = 1, \ldots, N$.
        for $i = 1, \ldots, N$ do
           Randomly initialize $\tilde{s}_i \in \mathbb{B}_d(s_i, \epsilon)$.
           for $j = 1, \ldots, T_s$ do
              $\tilde{s}_i \leftarrow \Pi_{\mathbb{B}_d(s_i, \epsilon)}\big(\tilde{s}_i + \eta_s \nabla_{\tilde{s}_i}\big(Q_w(s_i, a_i) - Q_w(\tilde{s}_i, a_i)\big)^2\big)$.
           end for
           Set $\tilde{s}_i$ as the approximate inner maximizer.
        end for
        Update the critic network:
           $w \leftarrow w - \eta_c \nabla_w \frac{1}{N}\sum_{i=1}^{N}\Big[\big(y_i - Q_w(s_i, a_i)\big)^2 + \lambda_s \big(Q_w(s_i, a_i) - Q_w(\tilde{s}_i, a_i)\big)^2\Big]$.
        Update the actor network:
           $\theta \leftarrow \theta + \eta_a \frac{1}{N}\sum_{i=1}^{N} \nabla_a Q_w(s_i, a)\big|_{a = \mu_\theta(s_i)}\nabla_\theta \mu_\theta(s_i)$.
        Update the target networks:
           $w' \leftarrow \tau w + (1 - \tau)\, w'$, $\quad \theta' \leftarrow \tau \theta + (1 - \tau)\, \theta'$.
     end for
  end for
Algorithm 3 DDPG with smoothness-inducing regularization on the critic network (DDPG-SR-C).