Sim and Real: Better Together

10/01/2021 · by Shirli Di-Castro Shashua, et al.

Simulation is used extensively in autonomous systems, particularly in robotic manipulation. By far, the most common approach is to train a controller in simulation, and then use it as an initial starting point for the real system. We demonstrate how to learn simultaneously from both simulation and interaction with the real environment. We propose an algorithm for balancing the large number of samples from the high throughput but less accurate simulation and the low-throughput, high-fidelity and costly samples from the real environment. We achieve that by maintaining a replay buffer for each environment the agent interacts with. We analyze such multi-environment interaction theoretically, and provide convergence properties, through a novel theoretical replay buffer analysis. We demonstrate the efficacy of our method on a sim-to-real environment.


1 Introduction

Reinforcement learning (RL) is a framework in which an agent interacts with an unknown environment, receives feedback from it, and optimizes its performance accordingly Sutton and Barto (2018); Bertsekas (2005). There have been attempts to learn a control policy directly from real-world samples Levine et al. (2018); Yahya et al. (2017); Pinto and Gupta (2016); Kalashnikov et al. (2018). However, in many cases, learning from the actual environment may be slow, costly, or dangerous, while learning from a simulated system can be fast, cheap, and safe. The advantages of learning from simulation are counterbalanced by the reality gap Jakobi et al. (1995): the loss of fidelity due to modeling limitations, parameter errors, and lack of variety in physical properties. The quality of the simulation may vary: when the simulation mimics reality well, we can train the agent in simulation and then transfer the policy to the real environment in a one-shot manner (e.g., Andrychowicz et al. (2020)). In many cases, however, the simulation has low fidelity, which leads to the following question: can we mitigate the differences between real environments ("real") and simulations ("sim") thereof, so as to train an agent that learns from both and performs well in the real one?

In this work, we propose to learn simultaneously on real and sim, while controlling both the rate at which we collect samples from each environment and the rate at which we use these samples in the policy optimization. This synergy offers a speed-fidelity trade-off and harnesses the advantages of each domain. Moreover, the simulation speed encourages exploration, which helps to accelerate the learning process. The real system, in turn, can improve exploitation in the sense that it mitigates the challenges of sim-to-real policy transfer and encourages the learner to converge to relevant solutions. A general scheme describing our proposed setup is depicted in Figure 1. In a nutshell, a single agent interacts with several environments (on the left). Each sample provided by an environment is pushed into a corresponding replay buffer (RB). On the right, the agent pulls samples from the RBs and is trained on them. In the sim-to-real scheme, there are exactly two environments: the simulation and the real system.

Figure 1: Mixing environments scheme. The agent selects an environment with some prescribed probability and interacts with it. Simultaneously, the agent chooses a replay buffer RB(j) with some prescribed probability and samples from it a stored transition, which is used for estimating the TD error and updating the policy parameters.

In the specific scheme for mixing real and sim samples in the learning process, we use separate probability measures for collecting samples and for optimizing the policy parameters. The off-policy nature of our scheme enables a separation between real and sim samples, which in turn helps control the rate of real samples used in the optimization process. In this work we discuss two RL algorithms that can be used with this scheme: (1) an off-policy linear actor-critic that mixes sim and real samples, and (2) a mixing-scheme variant of Deep Deterministic Policy Gradient (DDPG; Lillicrap et al. (2015)) based on neural networks. We analyze the asymptotic convergence of the linear algorithm and demonstrate the mixing-samples variant of DDPG in a sim-to-real environment.

The naive approach, in which one pushes the state-action-reward-next-state tuples into a single shared replay buffer, is prone to failures due to the imbalance between simulation and real roll-outs. To overcome this, we maintain a separate replay buffer for each of the environments (e.g., in the case of a single robot and a simulator we would have two replay buffers). This allows us to extract the most valuable information from reality by distinguishing its tuples from those generated by other environments, while continuously improving the agent using data from all input streams. Importantly, although the rate of samples is skewed in favor of the simulation, the learning may be carried out using a different rate. In a sense, the mechanism we suggest is a version of the importance sampling technique Bucklew (2013).
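To make the data path concrete, below is a minimal Python sketch of the scheme described above: one replay buffer per environment, a collection distribution that decides which environment produces the next transition, and a separate training distribution that decides which buffer each optimization batch is drawn from. The class and variable names (EnvReplayBuffers, collect_probs, train_probs) and the probability values are illustrative, not taken from the authors' implementation.

```python
import random
from collections import deque

class EnvReplayBuffers:
    """One bounded FIFO replay buffer per environment (e.g., 'sim' and 'real')."""

    def __init__(self, env_names, capacity, collect_probs, train_probs):
        self.buffers = {name: deque(maxlen=capacity) for name in env_names}
        self.collect_probs = collect_probs  # controls which environment produces the next sample
        self.train_probs = train_probs      # controls which buffer the optimizer reads from

    def choose_env_to_collect(self):
        # Environment selection is skewed toward the high-throughput simulator.
        names = list(self.collect_probs)
        return random.choices(names, weights=[self.collect_probs[n] for n in names])[0]

    def push(self, env_name, transition):
        # transition = (state, action, reward, next_state); the oldest sample is dropped when full.
        self.buffers[env_name].append(transition)

    def sample_batch(self, batch_size):
        # Training-time selection can favor the scarce real data, independently of collection.
        names = list(self.train_probs)
        name = random.choices(names, weights=[self.train_probs[n] for n in names])[0]
        buf = self.buffers[name]
        return name, random.sample(list(buf), min(batch_size, len(buf)))

# Example: collect mostly from sim, but draw half of the training batches from real.
rbs = EnvReplayBuffers(["sim", "real"], capacity=100_000,
                       collect_probs={"sim": 0.95, "real": 0.05},
                       train_probs={"sim": 0.5, "real": 0.5})
```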

Our main contributions in this work are as follows:

  1. We present a method for incorporating real system samples and simulation samples in a policy optimization process while distinguishing between the rate of collecting samples and the rate of using them.

  2. We analyze the asymptotic convergence of our proposed mixing real and sim scheme.

  3. To the best of our knowledge, we provide for the first time a theoretical analysis of the dynamics and properties of the replay buffer, such as its Markovity and the explicit probability measure it induces.

  4. We demonstrate our findings in a sim-to-real setup emulated with two simulations, where one is a distorted version of the other, and analyze it empirically.

2 Related Work

Sim-to-Real: Sim-to-real is a long-investigated topic in robotics, where one aims to reduce the reality gap between the real system and its digital-twin implementation. A general framework in which results are transferred from one domain to another is domain adaptation. In vision, this approach has helped to achieve state-of-the-art results Ganin et al. (2017); Shu et al. (2018); Long et al. (2015); Bousmalis et al. (2016); Kim et al. (2017); Shrivastava et al. (2017). In our work, we focus on the physical aspects of the sim-to-real gap. Related to domain adaptation is the approach of domain randomization, where the randomization is performed in simulation in order to robustify and enhance detection and object-recognition capabilities Tobin et al. (2017); Sadeghi and Levine (2016); James et al. (2017); Vuong et al. (2019). Recently, James et al. (2019) proposed a method where both simulation and reality are adapted to a common domain. Andrychowicz et al. (2020) extensively randomize the task of reaching a cube pose, where one-shot transfer is achieved but with large sample complexity. Randomization may also be applied to the dynamics, e.g., Peng et al. (2018), where robustness to inaccuracy in real-world parameters is achieved.

Another line of work in sim-to-real changes the simulation in light of real samples. In Chebotar et al. (2019), the agent learns mainly from simulation, but the simulation parameters are updated to match the behavior in reality by reducing the difference between simulation and reality roll-outs. Our method is a direct approach that incorporates phenomena that are difficult to simulate accurately. In a Bayesian context, Ramos et al. (2019) provide a principled framework to reason about the uncertainty in simulation parameters. Kang et al. (2019) investigated how real-system and simulation data can be combined in training deep RL algorithms. They separate between the data types by using real data to learn about the dynamics of the system, and simulated data to learn a generalizing perception system. Our method mixes real and simulation data by controlling the rate at which each data type is streamed into the learning agent.

Replay Buffer analysis: A large portion of RL algorithms use replay buffers Lin (1993); Mnih et al. (2013), but here we review only works that provide some analysis. Several works study the effect of the replay buffer size on the agent's performance Zhang and Sutton (2017); Liu and Zou (2018). Our focus is the effect of controlling the rate of collecting samples and the rate of using them in the optimization process. Fedus et al. (2020) investigated the effect of the ratio between these rates on the learning process through simulated experiments, while our focus is on the theoretical aspects. Other works studied criteria for prioritizing transitions to enhance learning Schaul et al. (2015); Pan et al. (2018); Zha et al. (2019). In the case of multiple agents that share their policy, Horgan et al. (2018) argue in favor of a shared replay buffer for all agents and a prioritizing mechanism. We, on the other hand, emphasize the advantage of separating replay buffers when collecting samples from different environments, which enables managing the mix in the learning process.

Stochastic Approximation: Our proposed algorithm is based on the Stochastic Approximation (SA) method Kushner and Clark (2012). Konda and Tsitsiklis (2000) proposed the actor-critic algorithm and established the asymptotic convergence of the two time-scale actor-critic with a TD(λ)-learning-based critic. Bhatnagar et al. (2008) proved the convergence of the original actor-critic and natural actor-critic methods. Di Castro and Meir (2010) proposed a single time-scale actor-critic algorithm and proved its convergence. Recently, several finite-sample analyses were provided by Wu et al. (2020); Zou et al. (2019); Dalal et al. (2018), among others, but these works do not analyze the asymptotic behavior of the replay buffer, as we do.

3 Setup

We model the problem using a Markov Decision Process (MDP; Puterman (1994)), where $\mathcal{S}$ and $\mathcal{A}$ are the state space and action space, respectively. We let $P(s'|s,a)$ denote the probability of transitioning from state $s$ to state $s'$ when applying action $a$. The MDP measure $P$ and the policy measure $\pi$ together induce a Markov Chain (MC) measure $P^{\pi}$ (written in matrix form). We consider a probabilistic policy $\pi_\theta(a|s)$, parameterized by $\theta$, which expresses the probability of the agent choosing action $a$ given that it is in state $s$. We let $\mu_\theta$ denote the stationary distribution induced by the policy $\pi_\theta$. The reward function is denoted by $r(s,a)$. Throughout the paper we assume the following.

Assumption 1.

1. The set of policy parameters is compact. 2. The reward $r(s,a)$ is bounded for all $s \in \mathcal{S}$, $a \in \mathcal{A}$.

Assumption 2.

For any policy $\pi_\theta$, the induced Markov chain of the MDP is irreducible and aperiodic.

The goal of the agent is to find a policy that maximizes the average reward that the agent receives during its interaction with the environment Puterman (1994). Under an ergodicity assumption, the average reward over time eventually converges to the expected reward under the stationary distribution Bertsekas (2005):

$$\eta(\theta) = \lim_{T \to \infty} \frac{1}{T}\,\mathbb{E}\Big[\sum_{t=0}^{T-1} r(s_t, a_t)\Big] = \sum_{s \in \mathcal{S}} \mu_\theta(s) \sum_{a \in \mathcal{A}} \pi_\theta(a \mid s)\, r(s, a). \tag{1}$$

The state-value function evaluates the overall expected accumulated (differential) rewards given a starting state $s$ and a policy $\pi_\theta$,

$$V^{\theta}(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \big(r(s_t, a_t) - \eta(\theta)\big) \,\Big|\, s_0 = s\Big], \tag{2}$$

where the actions follow the policy $\pi_\theta$ and the next state follows the transition probability $P$. Denote by $V^{\theta}$ the vector value function defined in (2). Therefore, the vectorial Bellman Equation (BE) for a fixed policy is $V^{\theta} = r^{\theta} - \eta(\theta)\mathbf{1} + P^{\theta} V^{\theta}$, where $r^{\theta}$ is the vector of expected rewards for each state Puterman (1994). We recall that the solution to the BE is unique up to an additive constant. In order to have a unique solution, we choose one state $s^{\ast}$ to be of value $0$, i.e., $V^{\theta}(s^{\ast}) = 0$ (due to Assumption 2, $s^{\ast}$ can be any state in $\mathcal{S}$).
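As a small worked example of the average-reward quantities in (1)-(2) and the Bellman equation above (our own illustration, not taken from the paper), consider a two-state chain under a fixed policy:

$$
P^{\pi} = \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix}, \qquad
r = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
\;\Longrightarrow\;
\mu = \Big(\tfrac{5}{6}, \tfrac{1}{6}\Big), \quad
\eta = \tfrac{5}{6}, \quad
V = \begin{pmatrix} \tfrac{5}{3} \\ 0 \end{pmatrix},
$$

where we use the normalization $V(s_2) = 0$; one can verify directly that $V = r - \eta\mathbf{1} + P^{\pi}V$ holds.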

In our specific setup, we consider a model with $K$ MDPs, denoted by $\mathcal{M}_1, \ldots, \mathcal{M}_K$, all sharing the same state space $\mathcal{S}$, action space $\mathcal{A}$, and reward function $r$. The environment dynamics, though, are different, and are denoted by transition functions $P_k$, $k = 1, \ldots, K$. Together with a shared policy $\pi_\theta$, each $\mathcal{M}_k$ induces a state transition measure $P_k^{\theta}$ and a stationary distribution $\mu_k^{\theta}$. Let $\eta_k(\theta)$ denote the average reward (1) of MDP $\mathcal{M}_k$, and define the average reward over environments,

(3)

The following assumption resembles Assumption 2 for environments.

Assumption 3.

For any policy $\pi_\theta$, the induced Markov chain of MDP $\mathcal{M}_k$ is irreducible and aperiodic, for all $k = 1, \ldots, K$.

We define the throughput of MDP $\mathcal{M}_k$ as the number of samples it provides per unit of time. In the sim-to-real context, this setup can practically handle several robots and several simulation instances. For the sim-to-real scenario, we assume that the throughput of the real environment is much smaller than that of the simulation.

Since the samples from real arrive at a lower throughput than those from sim, pushing the samples into separate Replay Buffers (RB; Lin (1993); Mnih et al. (2013)) according to their sources lets us leverage the relatively scarce but valuable samples that originate in the real system. This observation is the main motivation for our "Mixing Sim and Real" scheme, presented in the next section.

4 Mixing Sim and Real Algorithm

In order to reconcile the dynamics disparity, we propose our Mixing Sim and Real Algorithm with Linear Actor-Critic, presented in Algorithm 1 and described in Figure 1. We consider $K$ environments, modeled as MDPs $\mathcal{M}_1, \ldots, \mathcal{M}_K$, where the agent maintains a replay buffer RB(k) for each MDP, respectively. For the sake of analysis simplicity, we replace the deterministic throughputs with the following random variable: at each step, the agent chooses an environment to communicate with according to a fixed probability vector over the $K$ environments. The agent collects transitions from the chosen environment and stores them in the corresponding RB(k). In order to approximate the throughputs correctly, we choose the environment-selection probabilities according to the rates.

1:  Initialize Replay Buffers RB(k), k = 1, ..., K, each with a fixed size, and initialize their sample counters.
2:  Initialize actor parameters, critic parameters and the average reward estimator.
3:  for t = 0, 1, 2, ... do
4:     Sample an environment index k, interact with $\mathcal{M}_k$ according to the current policy and add the transition to RB(k). Increment the counter of RB(k).
5:     Sample a replay buffer index j and choose a mini-batch of transitions from RB(j).
6:     Compute the TD error (4) for the sampled transitions.
7:     Update the average reward estimate.
8:     Update the critic parameters.
9:     Update the actor parameters.
10:  end for
Algorithm 1 Mixing Sim and Real with Linear Actor Critic
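As an illustration of the environment-selection step (line 4 of Algorithm 1), the following snippet chooses an environment with probabilities proportional to hypothetical throughputs; the numbers and the names (throughputs, collection_probs) are ours and only indicative.

```python
import numpy as np

# Hypothetical throughputs: samples per unit time produced by each environment.
throughputs = {"sim": 200.0, "real": 2.0}

# Collection probabilities proportional to the throughputs.
total = sum(throughputs.values())
collection_probs = {name: rate / total for name, rate in throughputs.items()}

rng = np.random.default_rng(0)

def choose_env():
    """Sample which environment the agent interacts with at this step."""
    names = list(collection_probs)
    return rng.choice(names, p=[collection_probs[n] for n in names])

print(collection_probs)   # {'sim': ~0.990, 'real': ~0.010}
print(choose_env())       # almost always 'sim'
```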

We train the agent in an off-policy manner. The agent selects the RB from which to sample the next training batch according to a fixed probability distribution over the buffers. This distribution remains static, and hence the selections over time are i.i.d. (we note that one could remove this restriction and consider schemes in which the replay buffer selection distribution changes over time based on some prescribed optimization goal, cost, etc.). In addition, the distribution that selects which samples to train on can differ from the distribution that controls the rate at which each environment pushes samples into its RB. In that way, scarce samples from the real environment can have a higher influence on the training.

Once an RB is selected, the sampled batch is used for optimizing the actor and critic parameters. In this work, we propose a two time-scale linear actor-critic optimization scheme Konda and Tsitsiklis (2000), which is an RB-based version of the algorithm of Bhatnagar et al. (2008). We analyze its convergence properties in Section 5. We note, however, that other optimization schemes can be used, such as DDPG Lillicrap et al. (2015), which we use in our experiments.

We define a tuple of indices, where the first index corresponds to the replay buffer RB(k) and the second corresponds to the position of the sample within this RB. In addition, each such pair corresponds to a time index, namely the time at which the agent interacted with the $k$-th MDP and the sample was added to RB(k). With these indices we refer to a transition sampled at a given time from a given RB; whenever it is clear from the context, we use a simplified notation.

The temporal difference (TD) error is a random quantity based on a single transition $(s, a, r, s')$ sampled from the selected RB,

$$\delta = r - \hat{\eta} + v^{\top}\phi(s') - v^{\top}\phi(s), \tag{4}$$

where $\hat{\eta}$ is the current estimate of the average reward, $v^{\top}\phi(s)$ is a linear approximation for $V^{\theta}(s)$, $\phi(s)$ is a feature vector for state $s$, and $v$ is a parameter vector. In Algorithm 1, the average reward, critic and actor parameters are updated based on the TD error (see lines 7-9). Note that for the actor updates, we use a projection that maps $\theta$ back onto a compact set whenever it leaves that set.
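The following Python sketch shows the kind of updates that lines 6-9 of Algorithm 1 perform for a single sampled transition, using a linear critic as above. The step sizes, the score-function interface grad_log_pi, and the toy features are our own illustrative choices; it is not the authors' exact update rule (in particular, Algorithm 1 uses decaying two time-scale step sizes and a projection for the actor).

```python
import numpy as np

def td_error(transition, v, eta_hat, phi):
    """Average-reward TD error for one transition (s, a, r, s') with a linear critic."""
    s, a, r, s_next = transition
    return r - eta_hat + v @ phi(s_next) - v @ phi(s)

def actor_critic_step(transition, theta, v, eta_hat, phi, grad_log_pi,
                      alpha_eta=0.01, alpha_v=0.05, alpha_theta=0.005):
    """One update of the average-reward estimate, the linear critic, and the actor."""
    s, a, r, s_next = transition
    delta = td_error(transition, v, eta_hat, phi)
    eta_hat = eta_hat + alpha_eta * (r - eta_hat)                    # average-reward estimate
    v = v + alpha_v * delta * phi(s)                                 # critic: TD(0) direction
    theta = theta + alpha_theta * delta * grad_log_pi(theta, s, a)   # actor: policy-gradient step
    # In Algorithm 1, theta is additionally projected back onto a compact set.
    return theta, v, eta_hat, delta

# Toy usage: one-hot features over 3 states and a placeholder score function.
phi = lambda s: np.eye(3)[s]
grad_log_pi = lambda theta, s, a: np.zeros_like(theta)
print(actor_critic_step((0, 1, 1.0, 2), theta=np.zeros(4), v=np.zeros(3),
                        eta_hat=0.0, phi=phi, grad_log_pi=grad_log_pi))
```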

In order to gain understanding of our proposed setup, in the next section we characterize the behaviour of the iterations in Algorithm 1.

5 Convergence Analysis for Mixing Sim and Real with Linear Approximation

The standard tool in the literature for analyzing iterations of processes such as the two time-scale actor-critic in the context of RL is Stochastic Approximation (SA) Kushner and Yin (2003); Borkar (2009); Bertsekas and Tsitsiklis (1996). This analysis technique includes two parts: proving the existence of a fixed point, and bounding the rate of convergence to this fixed point. By far, the most popular method for proving convergence is the Ordinary Differential Equation (ODE) method. Usually, the iteration should exhibit either some monotonicity property or a contraction property in order to converge.

Although in practice such algorithms (after some tuning) usually converge to an objective value, convergence is not always guaranteed. To establish it in a stochastic approximation setup, the main known results require that the iteration can be decomposed into a deterministic function, which depends only on the problem parameters, and a martingale-difference noise, which is bounded in an appropriate sense.

In this section we show that the iterations of Algorithm 1 converge to a stable point of a corresponding ODE. We begin by showing that the process of sampling transitions from the RBs is a Markov process. Afterward, we show that if the original Markov chain is irreducible and aperiodic, then so is the Markov process of the RBs. This property is required for proving the convergence of the iterations in Algorithm 1 using SA tools. We conclude this section by showing that if, in some sense, sim is close to real, then the properties of the mixed process are close to the properties of both sim and real.

5.1 Asymptotic Convergence of Algorithm 1

Let RB(k) be a replay buffer storing the last transitions collected from MDP $\mathcal{M}_k$. Let the state of RB(k) at time $t$ be the ordered tuple of transitions it currently stores, each pushed at some earlier time. We denote the collection of all RB states as the joint RB state. We further define two i.i.d. random processes based on the environment-selection distribution and the buffer-selection distribution, respectively. We define the process induced by Algorithm 1 as the joint process of the RB states together with these selection variables, i.e.,

(5)

The next lemma states that the process induced by Algorithm 1 is Markovian. The proof is deferred to Supplementary Material A.1.

Lemma 1 (The process induced by Algorithm 1 is Markovian).

1. The random process induced by Algorithm 1 is Markovian. 2. Under Assumption 3, there exists a finite time, namely the time at which all replay buffers become full, after which this process is irreducible and aperiodic.

Next, we present several assumptions that are necessary for proving the convergence of Algorithm 1. The first assumption is a standard requirement for policy gradient methods.

Assumption 4.

For any state-action pair $(s, a)$, the policy $\pi_\theta(a|s)$ is continuously differentiable in the parameter $\theta$.

Proving convergence for a general function approximation is hard. In our case we establish convergence for a linear function approximation (LFA; Bertsekas and Tsitsiklis (1996)). In matrix form, the approximation can be expressed as $\hat{V} = \Phi v$, where $\Phi$ is the feature matrix whose rows are the state feature vectors $\phi(s)^{\top}$ and $v$ is the critic parameter vector. The following assumption is needed for the uniqueness of the convergence point of the critic.

Assumption 5.

1. The feature matrix $\Phi$ has full rank. 2. The relevant functions are Lipschitz in their parameters and bounded. 3. For every $v$, $\Phi v \neq \mathbf{1}$, where $\mathbf{1}$ is a vector of ones.

In order to obtain convergence with probability 1 using the SA machinery, the following standard assumption on the step sizes is needed. Note that in the actor-critic setup we need two time-scale convergence; thus, in this assumption the critic is a 'faster' recursion than the actor.

Assumption 6.

The step-size sequences of the average-reward, critic, and actor updates satisfy the standard two time-scale conditions: all are positive and sum to infinity, their squares are summable, and the actor step-sizes are asymptotically negligible relative to the critic step-sizes (so the critic is the faster recursion).
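For instance (an illustrative choice of ours, not necessarily the schedules used in the paper), denoting the critic and actor step-sizes by $\alpha_t$ and $\beta_t$, the polynomial schedules

$$\alpha_t = \frac{1}{(t+1)^{0.6}}, \qquad \beta_t = \frac{1}{(t+1)^{0.9}}, \qquad \frac{\beta_t}{\alpha_t} = \frac{1}{(t+1)^{0.3}} \longrightarrow 0,$$

are not summable, have summable squares, and keep the critic on the faster time scale.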

We consider the MC induced at a given time by the corresponding policy parameter. For this parameter, we denote the transition matrix at that time and the corresponding state distribution vector (both induced by the policy). Finally, we define the diagonal matrix whose diagonal is this state distribution and the reward vector whose elements are the expected per-state rewards. Based on these definitions we define

(6)

where $I$ is the identity matrix and $\mathbf{1}$ is a vector of ones. The intuition behind these definitions is the following. For online TD(0)-learning under a stationary policy, there is a fixed point at the solution of the corresponding linear equation (Bertsekas and Tsitsiklis (1996), Lemma 6.5). In our case, since we have several RBs, each holding samples that entered at different times, we have a superposition of all these samples. As the parameter converges, these per-time quantities coincide for all sample indices. We therefore define

(7)
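For reference, in the single-environment, average-reward case, the TD(0) fixed point referred to above is typically written as follows; this is a sketch using notation we introduce here ($\Phi$ for the feature matrix, $D$ for the diagonal stationary-distribution matrix, $R$ for the per-state expected reward vector), which may differ from the paper's symbols:

$$\Phi^{\top} D \big( R - \eta(\theta)\mathbf{1} + P^{\theta}\Phi v - \Phi v \big) = 0,$$

and (6) and (7) describe a superposition of such terms over the replay buffers and the times at which their samples were collected.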

For proving the convergence of the critic, we assume the policy is fixed. Thus, for each RB the induced MC is the same for all the samples in that RB, so the sums over sample indices in (6) and (7) collapse. Now we are ready to state the following theorems regarding Algorithm 1; Theorems 2 and 3 establish the critic and actor convergence, respectively.

Theorem 2.

(Convergence of the Critic to a fixed point)
Under Assumptions 1-6, for any given policy parameter and the updates in Algorithm 1, the average-reward estimate and the critic parameters converge with probability 1, where the critic limit is obtained as the unique solution of the linear fixed-point equation defined through (6) and (7).

The proof of Theorem 2 follows the proof of Lemma 5 in Bhatnagar et al. (2009); see more details in Supplementary Material A.2. For establishing the convergence of the actor updates, we define additional terms: the set of asymptotically stable equilibria of the corresponding ODE and its small neighborhood, which appear in the statement of Theorem 3.

Theorem 3.

(Convergence of the actor)
Under Assumptions 1-6, for any given neighborhood size there exists a perturbation threshold such that, if the perturbations of the updates obtained using Algorithm 1 remain below this threshold, then the actor iterates converge to the corresponding neighborhood of the set of asymptotically stable equilibria as time goes to infinity, with probability one.

The proof for Theorem 3 follows the proof for Theorem 2 in Bhatnagar et al. (2009) and is given in the supplementary material A.3.

5.2 Sim2Real Asymptotic Convergence Properties

In this section we analyze the convergence properties of the Mixing Sim and Real algorithm. The main idea is that if sim and real are close in their dynamics, as expressed through the MDP transition matrices, then many properties of their MDPs under the same policy are close as well. Moreover, we show that under the assumption that sim is close to real, any process derived from both processes is close to both sim and real.

Assumption 7.

(Closeness of sim and real). For all $s, s' \in \mathcal{S}$ and $a \in \mathcal{A}$, we have $\big|P_{\mathrm{sim}}(s'|s,a) - P_{\mathrm{real}}(s'|s,a)\big| \leq \epsilon$ for some $\epsilon > 0$.

The following theorem states that if Assumption 7 holds, then sim, real, and the mixed process (as defined in Algorithm 1) converge to close points.

Theorem 4.

Consider a policy $\pi_\theta$ and Assumptions 1, 2, and 7. Then, for every state, action, and next state we have:
1. The induced MCs of sim and real, $P^{\theta}_{\mathrm{sim}}$ and $P^{\theta}_{\mathrm{real}}$, satisfy $\big|P^{\theta}_{\mathrm{sim}}(s'|s) - P^{\theta}_{\mathrm{real}}(s'|s)\big| \leq \epsilon$.
2. The corresponding stationary distributions are close, with a bound proportional to $\epsilon$ whose constant involves the largest eigenvalue of a matrix determined by the induced MCs.
3. The convergence points of the average reward and of the value function under the policy $\pi_\theta$ for sim and real are correspondingly close, with bounds proportional to $\epsilon$.

The proof of Theorem 4 is in Supplementary Material B. Based on this theorem, it follows immediately that any convex combination of sufficiently "close" sim and real shares the same properties as both sim and real. We defer the precise statement to the supplementary material.

6 Experimental Evaluation

In this section we evaluate the performance of our proposed algorithm on two Fetch Push environments Plappert et al. (2018), where one acts as the real environment and the other as the simulation environment (code for the experiments is available at: https://github.com/sdicastro/SimAndRealBetterTogether). Although our theoretical results concern the proposed mixing scheme with linear function approximation, in this section we focus on non-linear methodologies, i.e., neural networks. We set $K = 2$, meaning there is only one real and one simulation environment. We denote the probability of collecting samples from the real environment as the collection probability, and the probability of choosing samples from the real environment for the optimization process as the optimization probability. We are interested in demonstrating the effect of different values of these two probabilities on the learning process. We investigate the following mixing strategies for combining real and sim samples (a Python sketch of the strategy logic is given after the list).

  1. "Mixed": real and sim episodes are collected according to Algorithm 1.

  2. "Real only": The agent collects and optimize only real samples (i.e., and ).

  3. "Sim only": The agent collects and optimize only sim samples (i.e., and ).

  4. "Sim first": At the beginning the agent collects and optimize only sim samples. When the success rate in the sim reaches 0.7, we switch to sampling and optimizing only using real.

  5. "Sim-dependent": At the beginning the agent collects and optimize only sim samples. When the success rate in the sim environment reaches 0.7, we switch to the "Mixed" strategy.

In the Fetch Push task, a robot arm needs to push an object on a table to a certain goal point. The state is represented by the gripper, object, and target positions and poses, as well as their velocities and angular velocities (the final state dimension is obtained after removing non-informative dimensions). The action specifies the desired gripper position at the next time-step. The agent receives a reward of -1 if the desired goal has not yet been achieved, and 0 if it has been achieved within some tolerance. To solve the task, we used our mixing sim and real algorithm and replaced the linear actor-critic optimization scheme (lines 6-9 in Algorithm 1) with DDPG Lillicrap et al. (2015) together with the Hindsight Experience Replay (HER; Andrychowicz et al. (2017)) optimization scheme. We created the real and sim environments using the MuJoCo simulator Todorov et al. (2012). The difference between the environments is the friction between the object and the table. We preceded the following experiments with an experiment that identifies a region of friction parameters for which training the task using only sim samples and deploying the trained policy in the real environment does not solve the task (see supplementary material, Section C.3).

We emphasize that we evaluate the performance in each experiment according to the success rate in the real environment, as this is the environment of final interest. In addition, we seek mixing strategies that use the lowest number of real samples, since these are usually costly and harder to obtain than sim samples.

Figure 2: "Real only", "Sim only", and "Mixed" strategies with fixed and different values. (a)

Success rate in the real environment vs. number of epochs. Each epoch corresponds to 100 episodes, mixed with real and sim episodes. The success rate is computed every epoch over 10 test episodes.

(b) Success rate in the real environment vs. number of real episodes (c) Number of real episodes vs. number of sim episodes. The size of the markers corresponds to the increasing success rate. For all graphs, we repeated each experiment with

different random seeds and present the mean and standard deviation values.

Different collection probabilities: We fix the optimization probability and test different values of the collection probability. Results are presented in Figure 2. We notice that when the agent is trained using the "Sim only" strategy, it fails to solve the task in real (Figure 2a). Next, when the agent is trained using the "Real only" strategy, the task is solved. However, to achieve a 0.9 success rate, "Real only" requires approximately 20K real-environment episodes, and to increase it to a success rate of 1, it requires approximately 40K real episodes (Figures 2b and 2c). Observing the values in between, we see that a relatively small collection probability achieves the best performance: it uses fewer real episodes to achieve high success rates compared to the "Real only" strategy. Notice that as the collection probability increases, the performance deteriorates. This phenomenon can be explained by the mixed sample distribution. When the collection probability is low, most of the data distribution is based on sim, and real samples do not change it much, but only "fine-tune" the learning. When the collection probability increases, the data distribution is composed of two different environments, which may confuse the agent.

Different optimization probabilities: In this experiment, we fix the collection probability and test several values of the optimization probability. Results are presented in Figure 3. When the optimization probability is too low, the agent fails to solve the task (Figure 3a). When it is higher, the performance improves, with no significant differences observed over a wide range of values. For an intermediate value, the algorithm achieves the best performance: a high success rate of 0.9 while using fewer real episodes and fewer sim episodes compared to the other values (Figures 3b and 3c). Interestingly, when the optimization probability is too high with respect to the collection probability, the performance deteriorates, suggesting that a moderate ratio between the two is preferable.

Figure 3: The "Mixed" strategy with fixed and different values. (a), (b) and (c) descriptions are the same as in Figure 2. In (c), the size of the markers corresponds to the increasing success rate: .

Different Mixing Strategies: We tested the "Mixed", "Sim first" and "Sim-dependent" strategies described above. Results are presented in Figure 4. Using the "Sim-dependent" strategy reduced the number of real and sim episodes required to achieve a 0.9 success rate, compared to the "Mixed" strategy with the same collection and optimization probabilities (Figure 4c). When using the "Sim first" strategy, we observe that although at the beginning of learning it uses only sim samples, once it switches to using only real samples, the agent requires many more real episodes to achieve a high success rate (compared to the "Mixed" and "Sim-dependent" strategies; Figures 4b and 4c). Although the most common approach is to train a policy in simulation and then use it as an initial starting point for the real system, we see that applying the mixing strategy after transferring the policy to real can further reduce the required number of real episodes while maintaining a high success rate.

Figure 4: Comparing the "Mixed", "Sim-dependent" and "Sim first" strategies. (a), (b) and (c) descriptions are the same as in Figure 2. It can clearly be seen in (c) that "Sim first" requires the largest number of real episodes to achieve a high success rate. In addition, (b) and (c) demonstrate that for the same collection and optimization probabilities, the "Sim-dependent" strategy achieves higher success rates with fewer real episodes, compared to the "Mixed" strategy.

7 Conclusions and Future Work

In this work we analyzed a mixing strategy between simulation and real-system samples. By separating the rate of collecting samples from each environment from the rate of choosing samples for the optimization process, we were able to achieve a significant reduction in the number of real-environment samples, compared to the common strategy of using the same rate for both the collection and optimization phases. This reduction is of special interest since real samples are usually costly and harder to obtain. We believe this work can lead to a new line of research. First, a finite-sample analysis of our proposed algorithm can reveal its exact sample complexity; comparing it to the sample complexity of learning only in the real environment can emphasize the advantage of using the mixing strategy. Second, other replay buffer prioritization schemes can now be theoretically analyzed using the replay buffer dynamics and properties we have developed. Third, our approach is limited to the online case, where new samples are collected during training; adapting it to the offline case can open new avenues in offline RL research. Fourth, learning the real-sample collection rate and adapting it during training can further improve our approach.

References

  • M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba (2017) Hindsight experience replay. In Advances in neural information processing systems, pp. 5048–5058. Cited by: §C.1, §C.2, Appendix C, §6.
  • O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. (2020) Learning dexterous in-hand manipulation. The International Journal of Robotics Research 39 (1), pp. 3–20. Cited by: §1, §2.
  • D. Bertsekas (2005) Dynamic programming and optimal control. Athena scientific Belmont, MA. Cited by: §1, §3.
  • D. P. Bertsekas and J. N. Tsitsiklis (1996) Neuro-dynamic programming. Athena Scientific. Cited by: §A.2.1, §5.1, §5.1, §5.
  • S. Bhatnagar, M. Ghavamzadeh, M. Lee, and R. S. Sutton (2008) Incremental natural actor-critic algorithms. In Advances in neural information processing systems, pp. 105–112. Cited by: §A.2, §A.2, §A.2, §A.3, §A.3, §2, §4.
  • S. Bhatnagar and S. Kumar (2004) A simultaneous perturbation stochastic approximation-based actor-critic algorithm for markov decision processes. IEEE Transactions on Automatic Control 49 (4), pp. 592–598. Cited by: §A.3.
  • S. Bhatnagar, R. S. Sutton, M. Ghavamzadeh, and M. Lee (2009) Natural actor–critic algorithms. Automatica 45 (11), pp. 2471–2482. Cited by: §5.1, §5.1.
  • V. S. Borkar and S. P. Meyn (2000) The ode method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization 38 (2), pp. 447–469. Cited by: §A.2, §A.2, §A.2.
  • V. S. Borkar (2009) Stochastic approximation: a dynamical systems viewpoint. Vol. 48, Springer. Cited by: §5.
  • K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan (2016) Domain separation networks. In Advances in neural information processing systems, pp. 343–351. Cited by: §2.
  • J. Bucklew (2013) Introduction to rare event simulation. Springer Science & Business Media. Cited by: §1.
  • Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox (2019) Closing the sim-to-real loop: adapting simulation randomization with real world experience. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8973–8979. Cited by: §2.
  • G. Dalal, B. Szörényi, G. Thoppe, and S. Mannor (2018) Finite sample analyses for TD(0) with function approximation. In Proceedings of the AAAI Conference on Artificial Intelligence. Cited by: §2.
  • D. Di Castro and R. Meir (2010) A convergent online single time scale actor critic algorithm. The Journal of Machine Learning Research 11, pp. 367–410. Cited by: §2.
  • W. Fedus, P. Ramachandran, R. Agarwal, Y. Bengio, H. Larochelle, M. Rowland, and W. Dabney (2020) Revisiting fundamentals of experience replay. In International Conference on Machine Learning, pp. 3061–3071. Cited by: §2.
  • Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky (2017) Domain-adversarial training of neural networks. In Domain Adaptation in Computer Vision Applications, pp. 189–209. Cited by: §2.
  • D. Horgan, J. Quan, D. Budden, G. Barth-Maron, M. Hessel, H. Van Hasselt, and D. Silver (2018) Distributed prioritized experience replay. arXiv preprint arXiv:1803.00933. Cited by: §2.
  • R. A. Horn and C. R. Johnson (2012) Matrix analysis. Cambridge university press. Cited by: §B.1.
  • N. Jakobi, P. Husbands, and I. Harvey (1995) Noise and the reality gap: the use of simulation in evolutionary robotics. In European Conference on Artificial Life, pp. 704–720. Cited by: §1.
  • S. James, A. J. Davison, and E. Johns (2017) Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. arXiv preprint arXiv:1707.02267. Cited by: §2.
  • S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis (2019) Sim-to-real via sim-to-sim: data-efficient robotic grasping via randomized-to-canonical adaptation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12627–12637. Cited by: §2.
  • D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, et al. (2018) Qt-opt: scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293. Cited by: §1.
  • K. Kang, S. Belkhale, G. Kahn, P. Abbeel, and S. Levine (2019) Generalization through simulation: integrating simulated and real data into deep reinforcement learning for vision-based autonomous flight. In 2019 International Conference on Robotics and Automation (ICRA), pp. 6008–6014. Cited by: §2.
  • T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim (2017) Learning to discover cross-domain relations with generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1857–1865. Cited by: §2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §C.1.
  • V. R. Konda and J. N. Tsitsiklis (2000) Actor-critic algorithms. In Advances in neural information processing systems, pp. 1008–1014. Cited by: §2, §4.
  • H. J. Kushner and D. S. Clark (2012) Stochastic approximation methods for constrained and unconstrained systems. Vol. 26, Springer Science & Business Media. Cited by: §A.3, §2.
  • H. Kushner and G. G. Yin (2003) Stochastic approximation and recursive algorithms and applications. Vol. 35, Springer Science & Business Media. Cited by: §5.
  • S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research 37 (4-5), pp. 421–436. Cited by: §1.
  • T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: Appendix C, §1, §4, §6.
  • L. Lin (1993) Reinforcement learning for robots using neural networks. Technical report Carnegie-Mellon Univ Pittsburgh PA School of Computer Science. Cited by: §2, §3.
  • R. Liu and J. Zou (2018) The effects of memory replay in reinforcement learning. In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 478–485. Cited by: §2.
  • M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume 37, pp. 97–105. Cited by: §2.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Cited by: §2, §3.
  • Y. Pan, M. Zaheer, A. White, A. Patterson, and M. White (2018) Organizing experience: a deeper look at replay mechanisms for sample-based planning in continuous state domains. arXiv preprint arXiv:1806.04624. Cited by: §2.
  • X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel (2018) Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1–8. Cited by: §2.
  • L. Pinto and A. Gupta (2016) Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA), pp. 3406–3413. Cited by: §1.
  • M. Plappert, M. Andrychowicz, A. Ray, B. McGrew, B. Baker, G. Powell, J. Schneider, J. Tobin, M. Chociej, P. Welinder, et al. (2018) Multi-goal reinforcement learning: challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464. Cited by: §6.
  • M. L. Puterman (1994) Markov decision processes. Wiley and Sons. Cited by: §3, §3.
  • F. Ramos, R. C. Possas, and D. Fox (2019) Bayessim: adaptive domain randomization via probabilistic inference for robotics simulators. arXiv preprint arXiv:1906.01728. Cited by: §2.
  • F. Sadeghi and S. Levine (2016) Cad2rl: real single-image flight without a single real image. arXiv preprint arXiv:1611.04201. Cited by: §2.
  • T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2015) Prioritized experience replay. arXiv preprint arXiv:1511.05952. Cited by: §2.
  • A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb (2017) Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2107–2116. Cited by: §2.
  • R. Shu, H. H. Bui, H. Narui, and S. Ermon (2018) A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735. Cited by: §2.
  • R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §1.
  • J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23–30. Cited by: §2.
  • E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §6.
  • Q. Vuong, S. Vikram, H. Su, S. Gao, and H. I. Christensen (2019) How to pick the domain randomization parameters for sim-to-real transfer of reinforcement learning policies?. arXiv preprint arXiv:1903.11774. Cited by: §2.
  • Y. Wu, W. Zhang, P. Xu, and Q. Gu (2020) A finite time analysis of two time-scale actor critic methods. arXiv preprint arXiv:2005.01350. Cited by: §2.
  • A. Yahya, A. Li, M. Kalakrishnan, Y. Chebotar, and S. Levine (2017) Collective robot reinforcement learning with distributed asynchronous guided policy search. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 79–86. Cited by: §1.
  • D. Zha, K. Lai, K. Zhou, and X. Hu (2019) Experience replay optimization. arXiv preprint arXiv:1906.08387. Cited by: §2.
  • S. Zhang and R. S. Sutton (2017) A deeper look at experience replay. arXiv preprint arXiv:1712.01275. Cited by: §2.
  • S. Zou, T. Xu, and Y. Liang (2019) Finite-sample analysis for sarsa with linear function approximation. arXiv preprint arXiv:1902.02234. Cited by: §2.

Appendix A Proof of Main Lemmas and Theorems of Section 5.1

A.1 Proof of Lemma 1

Proof.

1. Proving Markovity requires showing that the conditional distribution of the joint RB state at the next time step, given the entire history, equals its distribution given only the current joint RB state, i.e.,

(8)

Let us first set up the required notation. Recall that the time index at which a transition enters RB(k) is determined by the environment-selection process, and that a second index denotes the position in RB(k) in which the transition is placed at that time. Let RB(k) denote the replay buffer of MDP $\mathcal{M}_k$ at the given time, and denote the collection of all replay buffers as the joint RB state.

Remark 1.

Note that each time step at which a transition enters some RB is unique; at each time step, exactly one transition is pushed into exactly one RB. In addition, note that when a new transition is pushed into a full RB, the oldest transition in that RB is thrown away, and all the remaining transitions in the RB move one index forward.

Computing the l.h.s. of (8) yields

where in equality (1) we use the definition, in equality (2) we write the RB samples explicitly, in equality (3) the terms are rearranged, in equality (4) we express the probability as a conditional product, and in equality (5) we use the fact that the environment-selection and buffer-selection variables are independent, together with the rule for pushing a transition into RB(k):

Similarly, computing the r.h.s. of (8) yields

Both sides of (8) are equal, and therefore the process is Markovian.

2. According to Assumption 3, for every environment and for every policy, the Markov process induced by the MDP together with the policy is irreducible and aperiodic. In addition, we consider times after the point at which all RBs are full, each with its maximal number of transitions. This means that when a new transition arrives at RB(k), the oldest transition in the buffer must be thrown away. We saw in part 1 that

(9)

Let us introduce an index set. We now write the following term explicitly:

(10)

where we express the probability as a conditional product, separating RB(k) at the given time from all other RBs. Note that the remaining RBs do not change in this time step.

We continue with expression (a).