Multi-task Learning with Gradient Guided Policy Specialization

09/23/2017, by Wenhao Yu et al., Georgia Institute of Technology

We present a method for efficient learning of control policies for multiple related robotic motor skills. Our approach consists of two stages, joint training and specialization training. During the joint training stage, a neural network policy is trained with minimal information to disambiguate the motor skills. This forces the policy to learn a common representation of the different tasks. Then, during the specialization training stage, we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks. By splitting part of the control policy, it can be further trained to specialize to each task. To update the control policy during learning, we use Trust Region Policy Optimization with Generalized Advantage Estimation (TRPO-GAE). We propose a modification to the gradient update stage of TRPO to better accommodate multi-task learning scenarios. We evaluate our approach on three continuous motor skill learning problems in simulation: 1) a locomotion task where three single-legged robots with considerable differences in shape and size are trained to hop forward, 2) a manipulation task where three robot manipulators with different sizes and joint types are trained to reach different locations in 3D space, and 3) locomotion of a two-legged robot, whose range of motion of one leg is constrained in different ways. We compare our training method to three baselines. The first baseline uses only joint training for the policy, the second trains independent policies for each task, and the last randomly selects weights to split. We show that our approach learns more efficiently than each of the baseline methods.


I Introduction

Deep reinforcement learning (DRL) has achieved considerable success in high-dimensional robotic control problems [1, 2, 3]. Most of these methods have been demonstrated on a single agent learning a single task. However, to obtain a truly intelligent agent, it is desirable to train the agent to perform a variety of different tasks, which is referred to as multi-task learning.

There are three general approaches to training an agent to perform multiple tasks. One can train a separate agent for each task and later consolidate them using supervised learning. However, training independently for each task can be sample-inefficient. Another approach is to train on multiple tasks sequentially. This approach is attractive in that it resembles how humans learn; however, it is difficult to design efficient and scalable algorithms that retain the knowledge from earlier tasks. The third category is to learn multiple tasks concurrently. Existing work in this direction has focused on learning common representations across the tasks to achieve better data efficiency.

In this work, we study how weight sharing across multiple neural network policies can be used to improve concurrent learning of multiple tasks. We propose an algorithm that, given a set of learning problems and a fixed data budget, selects the best weights to be shared across the policies. We compute a metric for each weight in the neural network control policy that assesses the disagreement among the tasks using the variance of policy gradients.

We evaluate our method on learning similar continuous motor control problems using reinforcement learning. We present three examples, and in each example a set of policies is trained on related tasks. We compare our results to four baseline methods: the first shares all the weights between policies, the second shares no weights between policies, the third randomly selects the weights to split, and the last uses a standard architecture for learning multi-task problems. We show that by sharing the weights selected by our approach, the learning performance can be effectively improved.

II Related Work

In recent years, researchers have applied deep reinforcement learning to continuous control problems with high-dimensional state and action spaces [2, 3, 4]. Powerful learning algorithms have been proposed to develop control policies for highly dynamic motor skills in simulation [2, 4] or for robot manipulation tasks on real hardware [3]. These algorithms typically require a large number of samples to learn a policy for a single task. Directly applying them to train a policy that is capable of multiple tasks might be possible in theory, but would be data- and computation-inefficient in practice.

One way to train an agent to perform multiple tasks is to first learn each individual task and later consolidate them into one policy. Rusu et al. introduced policy distillation to compress a trained policy into a smaller model or to consolidate multiple trained expert policies into a unified one [5]. They demonstrated that the distilled policy can, in some cases, perform better than the original expert policy. A similar algorithm was proposed by Parisotto et al. to learn a single agent capable of playing multiple Atari games [6]. Researchers have also applied this approach to learn parameterized robotic control tasks such as throwing darts [7] or hitting a table tennis ball [8]. These algorithms work well when the expert policies are easy to learn individually and do not present conflicting actions, but these assumptions are not always true.

Alternatively, an agent can learn a single policy for multiple tasks sequentially [9, 10, 11]. Rusu et al. proposed to use a progressive neural network, in which each column corresponds to a task [11]. When learning a new task, the algorithm utilizes weights from the previously trained models. Fernando et al. introduced PathNet, which selects pathways from a collection of connected neural network modules for learning new tasks [10]. These methods can effectively retain the knowledge of previously trained policies, but the size of the network can grow substantially. Kirkpatrick et al. proposed to use a quadratic penalty on the neural network weights to prevent old tasks from being forgotten [9]. They demonstrated sequential learning results on supervised learning problems and on reinforcement learning for Atari games. However, it is unclear how well this method performs on robotic control problems with a continuous action space.

Directly learning multiple tasks simultaneously has also been well explored [12, 13, 14, 15]. Pinto and Gupta [12] demonstrated that simultaneously training two deep neural networks with a partially shared representation achieves better performance than training each task alone using the same amount of training data. They manually selected the network parameters to be shared by the two policies, whereas in our work we identify them automatically. For an agent performing multiple tasks in the same environment, Borsa et al. introduced an algorithm that learns the value function with a shared representation and uses a multi-task policy iteration algorithm to search for the policy [14]. However, a new value function would need to be trained whenever a new environment is introduced. Teh et al. applied the idea of policy distillation to multi-task learning by learning a distilled policy that contains common information for all the individual tasks and using it to regularize the learning of task-specific policies [15]. They evaluated their method on a maze navigation problem and a 3D game playing problem. However, for controlling robots with potentially different morphologies and joint types, it is unclear whether the common knowledge can be captured in one distilled policy.

Another line of research for multi-task learning incorporates task-related information into the state input [16, 17, 18, 19]. Peng et al. duplicated the first layer of the neural network corresponding to different phases of humanoid locomotion [18]. A similar architecture was used in [19] to achieve different behaviors in a neural network. Our approach generalizes these architectures by selectively sharing the weights among the policies.

III Background

III-A Markov Decision Process (MDP)

We model robotic control problems as Markov Decision Processes (MDPs), defined by a tuple $(\mathcal{S}, \mathcal{A}, r, \rho_0, P, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $r$ is the reward function, $\rho_0$ is the initial state distribution, $P$ is the transition function and $\gamma$ is the discount factor. The goal of reinforcement learning is to search for the optimal policy $\pi_\theta$, parameterized by $\theta$, that maximizes the expected long-term reward:

$$\max_\theta \; J(\theta) = \mathbb{E}_{s_0 \sim \rho_0}\left[V^{\pi_\theta}(s_0)\right], \qquad (1)$$

where the value function of a policy, $V^{\pi_\theta}$, is defined as the expected long-term reward of following the policy from some input state $s$:

$$V^{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 = s\right]. \qquad (2)$$

In the context of multi-task learning, we can model each task as an MDP $\mathcal{M}_i$, where $i$ is the index of the task. The objective function (1) is then modified to be:

$$\max_\theta \; J(\theta) = \sum_{i=1}^{N} \mathbb{E}_{s_0 \sim \rho_0^i}\left[V_i^{\pi_\theta}(s_0)\right], \qquad (3)$$

where $N$ is the total number of tasks.

III-B Policy Gradient Algorithm

Policy gradient methods directly estimate the gradient of the objective function (1) with respect to the policy parameters $\theta$ and use gradient ascent to optimize the policy. In this work, we use Proximal Policy Optimization (PPO) [4] to learn the optimal control policy because it provides better data efficiency and learning performance than alternative learning algorithms.

Similar to many policy gradient methods, PPO defines an advantage function as $A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$, where $Q^{\pi}(s, a)$ is the state-action value function that evaluates the return of taking action $a$ at state $s$ and following the policy $\pi$ thereafter. However, PPO optimizes a surrogate objective in place of the original MDP objective:

$$L(\theta) = \mathbb{E}_{(s_t, a_t) \sim \pi_{\theta_{\text{old}}}}\left[\min\Big(r_t(\theta)\, A^{\pi_{\theta_{\text{old}}}}(s_t, a_t),\; \operatorname{clip}\big(r_t(\theta), 1-\epsilon, 1+\epsilon\big)\, A^{\pi_{\theta_{\text{old}}}}(s_t, a_t)\Big)\right], \qquad (4)$$

where $(s_t, a_t)$ are rollouts collected using an old policy $\pi_{\theta_{\text{old}}}$, and $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$ is the importance re-sampling term that enables us to use data sampled under the old policy $\pi_{\theta_{\text{old}}}$ to estimate expectations for the current policy $\pi_\theta$. The $\operatorname{clip}$ and $\min$ operators together ensure that $\pi_\theta$ does not change too much from $\pi_{\theta_{\text{old}}}$. More details on deriving the objective function can be found in the original PPO paper [4].
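To make the clipped surrogate objective in Equation (4) concrete, below is a minimal NumPy sketch of the loss computation for a batch of transitions. The array names and the clip range value are illustrative assumptions, not taken from the authors' or OpenAI Baselines' implementation:

import numpy as np

def ppo_clipped_objective(logp_new, logp_old, advantages, epsilon=0.2):
    # logp_new:   log pi_theta(a_t | s_t) under the current policy
    # logp_old:   log pi_theta_old(a_t | s_t) under the data-collecting policy
    # advantages: advantage estimates A(s_t, a_t) for the same transitions
    ratio = np.exp(logp_new - logp_old)                # importance re-sampling term r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.mean(np.minimum(unclipped, clipped))     # quantity to be maximized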

IV Method

Our method aims to integrate two important techniques in learning multiple tasks, joint learning and specialization, into one coherent algorithm. Learning multiple tasks jointly is a well-known strategy to improve learning efficiency and generalization. However, to further improve each individual task, a specialized curriculum and training are often needed. We define a “task” as a particular reward function performed by a particular dynamic system. Therefore, two different tasks can mean two different robots achieving the same goal, the same robot achieving two different goals, or both.

Our algorithm carries out two learning phases: joint training and specialization training. We first train a policy, $\pi_\theta$, represented by a fully connected neural network to jointly learn the common representations across the different tasks (Section IV-A). A policy is defined as a Gaussian probability distribution of an action $a$ conditioned on a state $s$. The mean of the distribution is represented by the neural network, and the covariance is defined as part of the policy parameters, $\theta$, which also include the weights and biases of the network. Based on the gradient information gathered during the joint training phase, we then select a subset of weights to be specialized to individual tasks (Section IV-B).
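As an illustration of this policy parameterization, the sketch below implements a Gaussian policy whose mean is produced by a small fully connected network and whose diagonal covariance is a separate learnable parameter. The layer sizes and the use of PyTorch are assumptions made for the sake of the example, not the authors' exact architecture:

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    # The network outputs the action mean; the log standard deviation is a
    # learnable parameter vector included in the policy parameters theta.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

# Usage: dist = policy(obs); action = dist.sample(); logp = dist.log_prob(action).sum(-1)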

IV-A Joint Training

The goal of jointly learning multiple tasks is to learn a common representation of the multiple tasks, as well as to provide critical information to determine which weights should be shared across tasks in the specialization training phase. The training process is identical to training a single task except that the rollout pool consists of trajectories generated for performing different tasks. We use PPO to search for a policy that maximizes the surrogate loss defined in Equation (4). During joint training, the policy does not distinguish between different tasks. This forces the policy to learn a common representation of the multiple tasks.

IV-B Specialization Training

In the specialization training phase, we first analyze the policy after the joint training phase and select a subset of the weights to be shared across the policies. We compute a per-weight specialization metric to estimate whether a particular weight in the neural network should be shared or specialized to each task. The key idea of our approach is, for each weight in the network, to estimate the disagreement among different tasks.

Algorithm 1 begins by collecting rollouts $R_i$ for each task $i$ using the current policy. We then approximate the gradient of the PPO loss (Equation 4) with respect to the policy parameters $\theta$, using the rollouts $R_i$:

$$g_i = \nabla_\theta L(\theta)\,\big|_{R_i}. \qquad (5)$$

After we approximate the policy gradients for all tasks, we obtain $N$ gradient vectors: $g_1, \dots, g_N$. For each element of the gradient vector, we compute its variance across all tasks. If a particular variance is low, this implies that the tasks are in agreement on the update of the corresponding weight in the network. From these variances, we identify the smallest ones and share the corresponding weights across the policies of different tasks. The rest of the weights will be split to specialize to each task.
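Written out with the notation reconstructed above (where $g_i$ denotes the PPO gradient estimated from task $i$'s rollouts and $N$ the number of tasks), the per-weight specialization metric for parameter index $j$ is the variance of the task gradients at that index:

$$m_j = \operatorname{Var}_{i \in \{1, \dots, N\}}\big([g_i]_j\big) = \frac{1}{N} \sum_{i=1}^{N} \left([g_i]_j - \frac{1}{N}\sum_{k=1}^{N} [g_k]_j\right)^{2}.$$

Weights with small $m_j$ are candidates for sharing, since the tasks agree on how they should change.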

The architecture of the new network can be viewed as copies of the old network, one per task, joined at the output layer to produce the final action. If a weight is shared, its value must be the same across the subnetworks and is updated in unison. Otherwise, a weight can assume different values for the different subnetworks (Figure 1). We initialize the new network using the values of the old network after the joint training phase.

Fig. 1: Illustration of network architecture. Blue edges denote the weights shared by the tasks. Green edges are weights specialized to task 1 and yellow edges are specialized to task 2.
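To illustrate how such a split network could be updated, the sketch below keeps one flat parameter copy per task and applies a boolean sharing mask: shared entries move together using the gradient averaged over tasks, while specialized entries follow their own task's gradient. This is an illustrative update scheme consistent with the description above, not the authors' exact implementation:

import numpy as np

def apply_multitask_update(task_params, task_grads, shared_mask, lr=1e-3):
    # task_params: list of flat parameter vectors, one copy per task (shared
    #              entries are initialized identically from the joint policy)
    # task_grads:  list of flat gradient vectors, one per task
    # shared_mask: boolean vector; True entries are shared across tasks
    avg_grad = np.mean(task_grads, axis=0)
    for params, grad in zip(task_params, task_grads):
        params[shared_mask] += lr * avg_grad[shared_mask]    # updated in unison
        params[~shared_mask] += lr * grad[~shared_mask]      # specialized per task
    return task_params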

Since the policy gradient itself is approximated from samples, the specialization metric, which depends on these gradients, may also contain a significant amount of noise. In our experiments, we average the specialization metric over multiple iterations of PPO updates before selecting the weights to be shared or specialized.

1: for each task $i = 1, \dots, N$ do
2:     Collect rollouts $R_i$ from task $i$
3:     Compute $g_i = \nabla_\theta L(\theta)$ using $R_i$
4: for each parameter index $j$ do
5:     $m_j \leftarrow$ variance$([g_1]_j, \dots, [g_N]_j)$
6: Select the smallest $m_j$'s
7: Share the corresponding weights in the network
Algorithm 1 Gradient-guided weight selection
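A minimal NumPy sketch of the weight selection in Algorithm 1 is given below, assuming the per-task gradients are available as flat vectors and that the fraction of weights to specialize corresponds to the sp hyper-parameter; the function and variable names are illustrative, not from the authors' code:

import numpy as np

def select_shared_weights(task_gradients, specialize_fraction):
    # task_gradients: array of shape (num_tasks, num_params), one PPO gradient
    #                 estimate per task, flattened over all policy parameters.
    # Returns a boolean mask: True = weight shared across tasks,
    #                         False = weight specialized (one copy per task).
    grads = np.asarray(task_gradients)
    metric = grads.var(axis=0)                  # per-weight disagreement across tasks
    n_params = metric.shape[0]
    n_shared = n_params - int(specialize_fraction * n_params)
    shared_idx = np.argsort(metric)[:n_shared]  # share weights with the smallest variance
    mask = np.zeros(n_params, dtype=bool)
    mask[shared_idx] = True
    return mask

In practice, the metric would be averaged over multiple PPO iterations before this selection is made, as described above.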

V Results

We evaluate our approach on three multi-task continuous control problems. Our algorithm introduces two hyper-parameters that need to be determined: the number of iterations of joint training (jt) and the percentage of specialized network weights (sp). For each task, we run our method with joint training (jt) of , and samples and specialization percentages (sp) of , and , creating sets of hyper-parameters in total. We also test our method with joint training only (sp) and specialization without joint training (jt, sp). We compare our method to two other baseline methods where 1) specialization is done randomly and 2) a one-hot vector is appended to the policy input to minimally disambiguate the tasks (append). Among the hyper-parameters tested, we find that jt and sp works consistently well for all of our examples. Thus, we use these parameters for the random specialization baseline.

We use the implementation of PPO in OpenAI Baselines [20]. To represent the control policy, we use a neural network with three hidden layers of hidden units each. All results demonstrated in this work are simulated using DartEnv [21], a fork of the OpenAI Gym [22] library that uses Dart [23] as the underlying rigid body simulator. The simulation timestep is set to s. We run each example three times and report the average learning curves. We choose the total number of iterations empirically so that the policies can be sufficiently trained to learn the motor skills.

V-a Robot hopping with different shapes

We begin with an example of hopping locomotion for a single-legged robot. We design three single-legged robots constructed from capsules, boxes and ellipsoids respectively, as shown in Figure 2. In addition, we scale them to have different total heights. These variations lead to considerable differences in the inertia and contacts of the robots, while the similarity in their configurations should lead to similar locomotion gaits, which we expect the joint training to capture. We use a batch size of for the training.

The results for this example can be found in Figure 3 and Table I. In most cases, using joint training with specialization achieves better performance than all four baselines (sp, jt sp, jt sp= random and append), showing the effectiveness of our method for this problem. Meanwhile, we observe that the three learning curves associated with jt obtain the best overall performance, while with jt= notable variance can be observed across different specialization amounts.

Fig. 2: Three single-legged robots with different shapes and sizes. All of them are trained to hop forward.
Fig. 3: Learning curves for the three hoppers example.

V-B 2D bipedal walking in two directions

In this example, we train one robot to perform different tasks. Specifically, we train a bipedal robot to move forward and backward. This is different from the previous example, where we trained different robots to perform the same task. The bipedal robot is constructed similarly to the 2D Walker example in OpenAI Gym [22] and is constrained to move in its sagittal plane. We reward positive and negative linear velocity at the COM of the robot to achieve walking forward and backward, respectively. We use a batch size of for training the policies. An illustration of the resulting motion can be seen in Figure 4.

The learning results for this example are shown in Figure 5 and Table I. We can see that specialization works particularly well for this task, and even random specialization achieves decent learning performance. Note that with the same amount of joint training and specialization, our approach still outperforms random specialization. In addition, though joint training only (sp) and training separately (jt, sp) achieve similar performance, they learn different behaviors: the former learns to stand still while the latter learns to take a few steps before losing balance.

Method    3 Hoppers    Biped
sp 1601.94 877.80
jt=,sp= 2240.07 966.65
jt=k,sp= random 1994.28 2154.09
append 1603.91 800.59
jt=k,sp= 2516.05 1804.13
jt=k,sp= 2447.45 1407.69
jt=k,sp= 2541.31 2681.31
jt=k,sp= 2407.68 2773.29
jt=k,sp= 2635.56 2671.23
jt=k,sp= 2302.74 2381.18
jt=k,sp= 2076.98 1931.47
jt=k,sp= 2227.95 2438.77
jt=k,sp= 2242.20 2562.53

TABLE I: Average final performances of different methods for training the three hopping robots with different shapes (3 Hoppers) and the biped robot walking forward and backward (Biped).
Fig. 4: Bipedal robot moving forward (top) and backward (bottom).
Fig. 5: Learning curves for the bipedal robot example.
Fig. 6: Learning curves for the hopper example. (a) the two hoppers have torso mass of kg and kg. (b) the two hoppers have torso mass of kg and kg. (c) the two hoppers have torso mass of kg and kg.

V-C Hopper with different torso mass

In this example, we train two single-legged robots with different torso masses to hop forward. By controlling the difference between the torso masses of the two robots, we can specify the similarity between the tasks in a multi-task problem. We fix one of the two robots to have a torso mass of kg and assign the torso mass of the other robot from three options: kg, kg and kg. We use a batch size of for training the policies.

The results are shown in Figure 6. We see that when the two tasks are very different from each other, training two policies separately can achieve better performance than training a single policy (Figure 6(a)). As the two tasks become more similar, training a single policy achieves better performance and eventually outperforms training separate policies (Figure 6(b) and (c)). In all three problems, the policies trained with our method achieve the top performance, showing the effectiveness of joint training and selective specialization.

VI Discussion

We have shown that by combining joint training and policy specialization, we can improve the performance and efficiency of concurrently learning multiple robotic motor skills of different types. We evaluated our method on sets of hyper-parameters for joint training and specialization. We demonstrated that in most cases our approach is helpful compared to the baselines, while different joint training and specialization amounts can result in notably different performances. We identified one pair of hyper-parameters that works well for all the presented examples, which, in a practical setting, can be used as an initial guess followed by additional fine-tuning. However, we recognize a few limitations that require further investigation.

In this work, we investigated training with specialization occurring at one particular point during learning, and we do not allow parameters to be shared by only a subset of the tasks during specialization. We found this scheme works well for our test cases; however, a potentially more powerful strategy would be to allow multiple specializations throughout learning and to perform specialization at progressively finer levels.

In PPO, both the policy and the value function are represented by neural networks and optimized throughout training. In this work, we apply specialization only to the policy network. We found that specialization of the value function network did not achieve notable improvement in a preliminary test, and it would lead to an additional hyper-parameter search. However, for certain problems, it could be beneficial to specialize the value function network as well. Our framework can easily be applied to achieve this by replacing the PPO surrogate loss in Equation (5) with the loss of the value function network.

One important direction of investigation is to automatically determine the optimal hyper-parameters. One possible direction would be to learn a predictive model that estimates the performance improvement for different amounts of specialization. Another future direction is generalization to new tasks. Through training multiple tasks with shared parameters, it is possible that the trained policies learn a common representation of the space of the multiple tasks. It would be interesting to see if training on a novel but related task by initializing the policy using our approach would achieve better learning performance.

VII Conclusion

We have introduced a method for learning multiple related robotic motor skills concurrently with improved data efficiency. The key stages of our approach consist of a joint training phase and a specialization phase. We proposed a metric using the variance of the task-based policy gradient to selectively split the neural network policy for specialization. We demonstrated our approach on three multi-task examples where different robots are trained to perform different tasks. For these examples, our approach improves the learning performance compared to joint training alone, independent training, random policy specialization and a standard architecture for multi-task learning.

Acknowledgment

This work is supported by NSF award IIS-1514258.

References