Deep reinforcement learning (DRL) has achieved considerable success in high-dimensional robotic control problems [1, 2, 3]. Most of these methods have been demonstrated on a single agent learning a single task. However, to obtain a truly intelligent agent, it is desirable to train the agent to perform a variety of different tasks, which is referred to as multi-task learning.
There are three general approaches to training an agent to perform multiple tasks. One can train a separate agent for each task and later consolidate the agents using supervised learning. However, training independently for each task can be sample inefficient. Another approach is to train multiple tasks sequentially. This approach is attractive in that it resembles how humans learn; however, it is difficult to design efficient and scalable algorithms that retain the knowledge from earlier tasks. The third category is to learn multiple tasks concurrently. Existing work in this direction has focused on learning common representations between multiple tasks to achieve better data efficiency.
In this work, we study how weight sharing across multiple neural network policies can be used to improve concurrent learning of multiple tasks. We propose an algorithm that, given a set of learning problems and a fixed data budget, selects the best weights to be shared across the policies. We compute a metric for each weight in the neural network control policy that assesses the disagreement among the tasks using the variance of policy gradients.
We evaluate our methods on learning similar continuous motor control problems using reinforcement learning. We present three examples, and in each example a set of policies are trained on related tasks. We compare our result to four baseline methods, where the first one shares all the weights between policies, the second one shares no weight between policies, the third one randomly selects the weight to split, and the last one uses a standard architecture for learning multi-task problems. We show that by sharing part of the weights selected by our approach, the learning performance can be effectively improved.
II. Related Work
In recent years, researchers have applied deep reinforcement learning to continuous control problems with high-dimensional state and action spaces [2, 3, 4]. Powerful learning algorithms have been proposed to develop control policies for highly dynamic motor skills in simulation [2, 4] or for robot manipulation tasks on real hardware. These algorithms typically require a large number of samples to learn a policy for a single task. Directly applying them to train a policy that is capable of multiple tasks might be possible in theory but would be data- and computationally inefficient in practice.
One way to train an agent to perform multiple tasks is to first learn each individual task and later consolidate them into one policy. Rusu et al. introduced policy distillation to compress a trained policy into a smaller model or to consolidate multiple trained expert policies into a unified one [5]. They demonstrated that the distilled policy can, in some cases, perform better than the original expert policy. A similar algorithm was proposed by Parisotto et al. to learn a single agent capable of playing multiple Atari games [6]. Researchers have also applied this approach to learn parameterized robotic control tasks such as throwing darts [7] or hitting a table tennis ball [8]. These algorithms work well when the expert policies are easy to learn individually and do not present conflicting actions, but these assumptions are not always true.
Alternatively, an agent can learn a single policy for multiple tasks sequentially [9, 10, 11]. Rusu et al. proposed a progressive neural network, in which each column corresponds to a task [11]. When learning a new task, the algorithm utilizes weights from the previously trained models. Fernando et al. introduced PathNet, which selects pathways from a collection of connected neural network modules for learning new tasks [10]. These methods can effectively retain the knowledge of previously trained policies, but the size of the network can grow considerably. Kirkpatrick et al. proposed a quadratic penalty on the neural network weights to prevent the old tasks from being forgotten [9]. They demonstrated sequential learning results on supervised learning problems and reinforcement learning on Atari games. However, it is unclear how well this method performs on robotic control problems with a continuous action space.
Directly learning multiple tasks simultaneously has also been well explored [12, 13, 14, 15]. Pinto and Gupta demonstrated that simultaneously training two deep neural networks with a partially shared representation achieves better performance than training one task alone using the same amount of training data [12]. They manually selected the network parameters to be shared by the two policies, whereas, in our work, we attempt to identify them automatically. For an agent performing multiple tasks in the same environment, Borsa et al. introduced an algorithm to learn the value function with a shared representation and used a multi-task policy iteration algorithm to search for the policy [14]. However, a new value function must be trained when a new environment is introduced. Teh et al. applied the idea of policy distillation to multi-task learning by learning a distilled policy that contains common information for all the individual tasks and using it to regularize the learning of task-specific policies [15]. They evaluated their method on a maze navigation problem and a 3D game playing problem. However, for controlling robots with potentially different morphologies and joint types, it is unclear whether the common knowledge can be captured in one distilled policy.
Another line of research for multi-task learning incorporates task-related information into the state input [16, 17, 18, 19]. Peng et al. duplicated the first layer of the neural network corresponding to different phases of humanoid locomotion [18]. A similar architecture has also been used to achieve different behaviors within a single neural network. Our approach generalizes these architectures by selectively sharing the weights among the policies.
III-A. Markov Decision Process (MDP)
We model robotic control problems as Markov Decision Processes (MDPs), defined by a tuple $(\mathcal{S}, \mathcal{A}, r, \rho_0, P, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $r$ is the reward function, $\rho_0$ is the initial state distribution, $P$ is the transition function and $\gamma$ is the discount factor. The goal of reinforcement learning is to search for the optimal policy $\pi_\theta$, parameterized by $\theta$, that maximizes the expected long-term reward:

$$\max_\theta J(\theta) = \mathbb{E}_{s_0 \sim \rho_0} \left[ V^{\pi_\theta}(s_0) \right], \quad (1)$$

where the value function of a policy, $V^{\pi_\theta}$, is defined as the expected long-term reward of following the policy from some input state $s$:

$$V^{\pi_\theta}(s) = \mathbb{E} \left[ \sum_{t=0}^{\infty} \gamma^t \, r(s_t, a_t) \,\middle|\, s_0 = s \right]. \quad (2)$$
In the context of multi-task learning, we can model each task as an MDP $M_i$, where $i$ is the index of the task. The objective function (1) is then modified to be:

$$\max_\theta J(\theta) = \sum_{i=1}^{N} \mathbb{E}_{s_0 \sim \rho_0^{(i)}} \left[ V_i^{\pi_\theta}(s_0) \right], \quad (3)$$

where $N$ is the total number of tasks.
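As a concrete illustration of this objective (a minimal numpy sketch, not the paper's code; the reward sequences and discount factor below are hypothetical), each task contributes its expected discounted return to the sum:

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Long-term reward of one rollout: sum_t gamma^t * r_t."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards))

def multitask_objective(task_rollouts, gamma):
    """Multi-task objective: sum of per-task discounted returns.
    task_rollouts: one reward sequence per task (a single rollout
    standing in for the expectation over initial states)."""
    return sum(discounted_return(r, gamma) for r in task_rollouts)

# Two hypothetical tasks with short reward sequences.
J = multitask_objective([[1.0, 1.0, 1.0], [0.0, 2.0]], gamma=0.9)
# task 1: 1 + 0.9 + 0.81 = 2.71; task 2: 0 + 1.8 = 1.8; J = 4.51
```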
III-B. Policy Gradient Algorithm
Policy gradient methods directly estimate the gradient of the objective function (1) with respect to the policy parameters and use gradient ascent to optimize the policy. In this work, we use Proximal Policy Optimization (PPO) [4] to learn the optimal control policy because it provides better data efficiency and learning performance than alternative learning algorithms.
Similar to many policy gradient methods, PPO defines an advantage function as $A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$, where $Q^{\pi}(s, a)$ is the state-action value function that evaluates the return of taking action $a$ at state $s$ and following the policy $\pi$ thereafter. However, PPO optimizes a modified objective function to the original MDP problem:

$$L_{PPO}(\theta) = \mathbb{E}_{(s_t, a_t) \sim \pi_{\theta_{old}}} \left[ \min\left( w_t(\theta) A_t,\; \mathrm{clip}\left( w_t(\theta), 1 - \epsilon, 1 + \epsilon \right) A_t \right) \right], \quad (4)$$

where $(s_t, a_t)$ are rollouts collected using an old policy $\pi_{\theta_{old}}$, $A_t$ is the advantage estimated at timestep $t$, and $w_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{old}}(a_t \mid s_t)$ is the importance re-sampling term that enables us to use data sampled under the old policy to estimate the expectation for the current policy $\pi_\theta$. The $\mathrm{clip}$ and the $\min$ operators together ensure that $\pi_\theta$ does not change too much from $\pi_{\theta_{old}}$. More details on deriving the objective function can be found in the original PPO paper [4].
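The clipped surrogate can be illustrated with a small numpy sketch (a toy illustration, not the OpenAI Baselines implementation; the ratios and advantages below are hypothetical):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Per-sample clipped surrogate, averaged over the batch:
    mean( min(w*A, clip(w, 1-eps, 1+eps)*A) )."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    return np.minimum(unclipped, clipped).mean()

# With a positive advantage, a ratio above 1 + epsilon is clipped,
# so the policy gains nothing from moving further from the old policy.
ratios = np.array([0.9, 1.5])
advantages = np.array([1.0, 1.0])
obj = ppo_clip_objective(ratios, advantages)  # mean of 0.9 and 1.2 = 1.05
```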
Our method aims to integrate two important techniques in learning multiple tasks, joint learning and specialization, into one coherent algorithm. Learning multiple tasks jointly is a well known strategy to improve learning efficiency and generalization. However, to further improve each individual task, a specialized curriculum and training are often needed. We define a “task” as a particular reward function performed by a particular dynamic system. Therefore, two different tasks can mean two different robots achieving the same goal, the same robot achieving two different goals, or both.
Our algorithm carries out two learning phases: joint training and specialization training. We first train a policy, $\pi_\theta$, represented by a fully connected neural network, to jointly learn the common representations across the different tasks (Section IV-A). The policy is defined as a Gaussian probability distribution of action $a$ conditioned on a state $s$. The mean of the distribution is represented by the neural network and the covariance is defined as part of the policy parameters, $\theta$, which also include the weights and the biases of the network. Based on the gradient information gathered during the joint training phase, we then select a subset of weights to be specialized to individual tasks (Section IV-B).
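As an illustration of this parameterization (a toy sketch with a linear stand-in for the fully connected network and hypothetical 2-D dimensions, not the actual policy architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_mean(state, weights, bias):
    """Mean action; a toy linear map stands in for the paper's fully
    connected network."""
    return weights @ state + bias

def sample_action(state, weights, bias, log_std):
    """Gaussian policy: a ~ N(mean(s), diag(exp(log_std))^2). The
    (log) standard deviation is part of the policy parameters, learned
    alongside the network weights and biases."""
    mean = policy_mean(state, weights, bias)
    return mean + np.exp(log_std) * rng.standard_normal(mean.shape)

# Hypothetical 2-D state and action.
state = np.array([0.5, -1.0])
W, b = np.eye(2), np.zeros(2)
action = sample_action(state, W, b, log_std=np.array([-1.0, -1.0]))
```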
IV-A. Joint Training
The goal of jointly learning multiple tasks is twofold: to learn a common representation of the tasks, and to provide critical information for determining which weights should be shared across tasks in the specialization training phase. The training process is identical to training a single task, except that the rollout pool consists of trajectories generated for performing different tasks. We use PPO to search for a policy that maximizes the surrogate objective defined in Equation (4). During joint training, the policy does not distinguish between different tasks, which forces it to learn a common representation of the multiple tasks.
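A sketch of how such a mixed rollout pool might be assembled (the samplers below are hypothetical stand-ins for simulating each robot/task; this is an illustration, not the paper's implementation):

```python
import random

def joint_rollout_pool(task_samplers, rollouts_per_task, seed=0):
    """Collect rollouts from every task into a single pool and shuffle
    it; the jointly trained policy is updated on this mixture without
    any task labels."""
    pool = []
    for sampler in task_samplers:
        pool.extend(sampler() for _ in range(rollouts_per_task))
    random.Random(seed).shuffle(pool)  # no per-task ordering survives
    return pool

# Hypothetical samplers standing in for simulating each robot/task.
pool = joint_rollout_pool([lambda: "task-A rollout", lambda: "task-B rollout"],
                          rollouts_per_task=3)
```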
IV-B. Specialization Training
In the specialization training phase, we first analyze the policy obtained from the joint training phase and select a subset of the weights to be shared across the policies. We compute a per-weight specialization metric to estimate whether a particular weight in the neural network should be shared or specialized to each task. The key idea of our approach is to estimate, for each weight in the network, the disagreement among the different tasks.
Algorithm 1 begins by collecting rollouts for each task using the current policy. For each task $i$, we then approximate the gradient of the PPO objective (Equation 4) with respect to the policy parameters $\theta$, using the rollouts collected for that task. After we approximate the policy gradients for all tasks, we obtain $N$ gradient vectors, one per task. For each element of the gradient vector, we compute its variance across all $N$ tasks. If a particular variance is low, this implies that the tasks are in agreement on the update of the corresponding weight in the network. From these variances, we identify the smallest ones and share the corresponding weights across the policies of the different tasks. The rest of the weights are split to specialize each task.
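The selection step can be sketched in a few lines of numpy (a minimal illustration with hypothetical gradient vectors; `share_fraction` stands in for the specialization percentage):

```python
import numpy as np

def shared_weight_mask(task_gradients, share_fraction):
    """Variance-of-gradients specialization metric: weights whose
    per-task gradient variance is among the lowest `share_fraction`
    are shared; the rest are specialized to individual tasks."""
    grads = np.stack(task_gradients)     # (num_tasks, num_weights)
    variance = grads.var(axis=0)         # disagreement among tasks
    k = int(round(share_fraction * variance.size))
    shared = np.zeros(variance.size, dtype=bool)
    shared[np.argsort(variance)[:k]] = True   # k lowest-variance weights
    return shared

# Two tasks agree on the first two weights and disagree on the last two.
g1 = np.array([0.5, -0.2, 1.0, -1.0])
g2 = np.array([0.5, -0.2, -1.0, 1.0])
mask = shared_weight_mask([g1, g2], share_fraction=0.5)
# mask -> [True, True, False, False]
```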
The architecture of the new network can be viewed as $N$ copies of the old network joined at the output layer to produce the final action. If a weight is shared, its value must be the same across the subnetworks and is updated in unison. Otherwise, a weight can assume different values in the different subnetworks (Figure 1). We initialize the new network using the values of the old network after the joint training phase.
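Under this scheme, a gradient update might look like the following sketch (toy dimensions; `shared_mask` is assumed to come from the variance-based selection, and the update rule is plain gradient descent for illustration, not PPO):

```python
import numpy as np

def update_subnetworks(subnet_weights, task_grads, shared_mask, lr=0.1):
    """One gradient step on the per-task subnetwork copies: shared
    weights move by the task-averaged gradient (so the copies stay
    identical there), specialized weights move by their own task's
    gradient."""
    grads = np.stack(task_grads)          # (num_tasks, num_weights)
    step = np.where(shared_mask, grads.mean(axis=0), grads)
    return subnet_weights - lr * step

# Two subnetwork copies of a 3-weight "network", initialized identically.
w = np.zeros((2, 3))
g = [np.array([1.0, 2.0, -2.0]), np.array([1.0, -2.0, 2.0])]
w_new = update_subnetworks(w, g, shared_mask=np.array([True, False, False]))
# shared weight stays identical across copies; the others diverge
```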
Since the policy gradient itself is approximated with samples, the specialization metric computed from these gradients can also contain a significant amount of noise. In our experiments, we therefore average the specialization metric over multiple iterations of PPO updates before selecting the weights to be shared or specialized.
We evaluate our approach on three multi-task continuous control problems. Our algorithm introduces two hyper-parameters that need to be determined: the amount of joint training (jt) and the percentage of specialized network weights (sp). For each task, we run our method with three joint-training amounts and three specialization percentages, creating nine sets of hyper-parameters in total. We also test our method with joint training only (sp = 0) and with specialization from the start, without joint training (jt = 0). We compare our method to two other baseline methods where 1) specialization is done randomly and 2) a one-hot vector is appended to the policy input to minimally disambiguate the tasks (append). Among the hyper-parameters being tested, we find one combination of jt and sp that works consistently well for all of our examples. Thus we use these parameters for the random specialization baseline.
We use the implementation of PPO in OpenAI Baselines [20]. To represent the control policy, we use a neural network with three hidden layers. All results demonstrated in this work are simulated using DartEnv [21], a fork of the OpenAI Gym [22] library that uses DART [23] as the underlying rigid body simulator, with a fixed simulation timestep. We run each example three times and report the average learning curves. We choose the total number of training iterations empirically so that the policies can be sufficiently trained to learn the motor skills.
V-A. Robot Hopping with Different Shapes
We begin with an example of hopping locomotion for a single-legged robot. We design three single-legged robots that are constructed from capsules, boxes and ellipsoids, respectively, as shown in Figure 2. In addition, we scale them to have different total heights. These variations lead to considerable differences in the inertia and contacts of the robots, while the similarity in their configurations should lead to similar locomotion gaits, which we expect the joint training to capture. We use a fixed batch size for the training.
The results of this example can be found in Figure 3 and Table I. In most cases, using joint training with specialization achieves better performance than all four baselines (sp = 0; jt = 0; random specialization; and append), showing the effectiveness of our method for this problem. Meanwhile, we observe that the three learning curves sharing one particular amount of joint training obtain the best overall performance, while for another jt setting, notable variance can be observed across the different specialization amounts.
V-B. 2D Bipedal Walking in Two Directions
In this example, we train one robot to perform different tasks. Specifically, we train a bipedal robot to move forward and backward. This is different from the previous example, where we trained different robots to perform the same task. The bipedal robot is constructed similarly to the 2D Walker example in OpenAI Gym [22] and is constrained to move in its sagittal plane. We reward positive and negative linear velocity at the COM of the robot to achieve walking forward and backward, respectively. We use a fixed batch size for training the policies. An illustration of the resulting motion can be seen in Figure 4.
The learning results of this example are shown in Figure 5 and Table I. We can see that specialization works particularly well for this problem; even random specialization achieves decent learning performance. Note that with the same amount of joint training and specialization, our approach still outperforms random specialization. In addition, although joint training only (sp = 0) and training separately (jt = 0) achieve similar performance, they learn different behaviors: the former learns to stand still while the latter learns to take a few steps before losing balance.
V-C. Hopper with Different Torso Mass
In this example, we train two single-legged robots with different torso masses to hop forward. By controlling the difference between the torso masses of the two robots, we can specify the similarity between the tasks in a multi-task problem. We fix the torso mass of one robot and assign the torso mass of the other robot from three options of increasing similarity to the first. We use a fixed batch size for training the policies.
The results are shown in Figure 6. We see that when the two tasks are very different from each other, training two policies separately achieves better performance than training a single policy (Figure 6(a)). As the two tasks become more similar, training a single policy becomes more effective and eventually outperforms training separate policies (Figure 6(b) and (c)). In all cases, the policies trained with our method achieve top performance across the three problems, showing the effectiveness of joint training with selective specialization.
We have shown that, by combining joint training and policy specialization, we can improve the performance and efficiency of concurrently learning multiple robotic motor skills of different types. We evaluated our method on multiple sets of hyper-parameters for joint training and specialization. We demonstrated that in most cases our approach helps relative to the baselines, while different joint training and specialization amounts can result in notably different performance. We identified one pair of hyper-parameters that works well for all the presented examples and which, in a practical setting, can be used as an initial guess followed by additional fine-tuning. However, we recognize a few limitations that require further investigation.
In this work, we investigated training with specialization occurring at one particular point during learning, and we did not allow parameters to be shared by only a subset of the tasks during specialization. We found this scheme to work well for our test cases; however, a potentially more powerful strategy would be to allow multiple specializations throughout learning, performed at successively finer levels.
In PPO, both the policy and the value function are represented by neural networks and optimized throughout the training. In this work, we apply specialization only to the policy network. We found that specializing the value function network did not achieve notable improvement in a preliminary test and would require an additional hyper-parameter search. However, for certain problems, it could be beneficial to specialize the value function network as well. Our framework can easily be extended to do so by replacing the PPO surrogate objective in Equation (4) with the loss of the value function network.
One important direction for future investigation is to automatically determine the optimal hyper-parameters. One possibility would be to learn a predictive model that estimates the performance improvement for different amounts of specialization. Another future direction is generalization to new tasks. By training multiple tasks with shared parameters, the trained policies may learn a common representation of the space of tasks. It would be interesting to see whether initializing the policy for a novel but related task using our approach would achieve better learning performance.
We have introduced a method for learning multiple related robotic motor skills concurrently with improved data efficiency. The key stages of our approach consist of a joint training phase and a specialization phase. We proposed a metric using the variance of the task-based policy gradient to selectively split the neural network policy for specialization. We demonstrated our approach on three multi-task examples where different robots are trained to perform different tasks. For these examples, our approach improves the learning performance compared to joint training alone, independent training, random policy specialization and a standard architecture for multi-task learning.
This work is supported by NSF award IIS-1514258.
- [1] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” CoRR, vol. abs/1509.02971, 2015. [Online]. Available: http://arxiv.org/abs/1509.02971
- [2] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015, pp. 1889–1897.
- [3] S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” Journal of Machine Learning Research, vol. 17, no. 39, pp. 1–40, 2016.
- [4] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.
- [5] A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell, “Policy distillation,” arXiv preprint arXiv:1511.06295, 2015.
- [6] E. Parisotto, J. L. Ba, and R. Salakhutdinov, “Actor-mimic: Deep multitask and transfer reinforcement learning,” arXiv preprint arXiv:1511.06342, 2015.
- [7] B. Da Silva, G. Konidaris, and A. Barto, “Learning parameterized skills,” arXiv preprint arXiv:1206.6398, 2012.
- [8] J. Kober, E. Öztop, and J. Peters, “Reinforcement learning to adjust robot movements to new situations,” in IJCAI Proceedings-International Joint Conference on Artificial Intelligence, vol. 22, no. 3, 2011, p. 2650.
- [9] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of the National Academy of Sciences, p. 201611835, 2017.
- [10] C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra, “PathNet: Evolution channels gradient descent in super neural networks,” CoRR, vol. abs/1701.08734, 2017. [Online]. Available: http://arxiv.org/abs/1701.08734
- [11] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell, “Progressive neural networks,” CoRR, vol. abs/1606.04671, 2016. [Online]. Available: http://arxiv.org/abs/1606.04671
- [12] L. Pinto and A. Gupta, “Learning to push by grasping: Using multiple tasks for effective learning,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017, pp. 2161–2168.
- [13] Z. Yang, K. Merrick, H. Abbass, and L. Jin, “Multi-task deep reinforcement learning for continuous action control,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 2017, pp. 3301–3307. [Online]. Available: https://doi.org/10.24963/ijcai.2017/461
- [14] D. Borsa, T. Graepel, and J. Shawe-Taylor, “Learning shared representations in multi-task reinforcement learning,” arXiv preprint arXiv:1603.02041, 2016.
- [15] Y. W. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu, “Distral: Robust multitask reinforcement learning,” arXiv preprint arXiv:1707.04175, 2017.
- [16] N. Heess, G. Wayne, Y. Tassa, T. Lillicrap, M. Riedmiller, and D. Silver, “Learning and transfer of modulated locomotor controllers,” arXiv preprint arXiv:1610.05182, 2016.
- [17] W. Yu, C. K. Liu, and G. Turk, “Preparing for the unknown: Learning a universal policy with online system identification,” arXiv preprint arXiv:1702.02453, 2017.
- [18] X. B. Peng, G. Berseth, K. Yin, and M. Van De Panne, “DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, p. 41, 2017.
- [19] C. Florensa, Y. Duan, and P. Abbeel, “Stochastic neural networks for hierarchical reinforcement learning,” arXiv preprint arXiv:1704.03012, 2017.
- [20] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu, “OpenAI Baselines,” https://github.com/openai/baselines, 2017.
- [21] “DartEnv: OpenAI Gym environments transferred to the DART simulator.”
- [22] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “OpenAI Gym,” CoRR, vol. abs/1606.01540, 2016. [Online]. Available: http://arxiv.org/abs/1606.01540
- [23] “DART: Dynamic Animation and Robotics Toolkit.” [Online]. Available: http://dartsim.github.io/