Reinforcement learning (RL) has been successful in solving many control problems rooted in fixed Markov Decision Process (MDP) environments. However, the tight coupling between an RL algorithm and its MDP makes it difficult to reuse the knowledge learned in one task for new tasks. This difficulty further prevents RL policies from scaling to high dimensional, complicated tasks. For example, it is easy to train an autonomous vehicle to travel from an origin position to a target position. However, once surrounding vehicle and pedestrian obstacles are taken into consideration, the problem can become overwhelming for shallow policy models. To avoid all the obstacles, one would have to train a deep policy network with very sparse reward input, so the training process usually requires an unbearably large amount of computation. What makes it worse is that such policies are hardly reusable in other scenarios, even if the new task is very similar to the previous one. Suppose a speed limit requirement is added to the autonomous driving task: although the input of the policy network is already tediously high dimensional, there is no entry point for the speed limit information, so the pretrained policy cannot accomplish the new task no matter how it is tuned. Therefore, RL frameworks with fixed policy models can hardly address such high dimensional, complicated tasks in environments of great variance.
We propose to address this problem from a new perspective: modularizing complicated, high dimensional problems using a series of attributes. The attributes refer specifically to global characteristics or requirements that take effect throughout the task. An example of attribute learning is shown in Fig. 1. Concretely, to solve the complicated driving problem, one first decomposes the requirements of the task into a target reaching attribute, an obstacle avoidance attribute, and a speed limit attribute, then trains a modular network for each attribute, and finally assembles the attribute networks together to produce the overall policy. Modularizing a task using a series of attributes has three main advantages:
Decomposing a high dimensional complicated task into low dimensional attributes makes the training process much easier and faster.
Trained attribute modules can be reused in new tasks, making it possible to build up versatile policies that can adjust to changes in tasks by assembling attribute modules.
In attribute learning, specific state information is provided only to its corresponding attribute modules. This decoupling formulation makes it possible to dynamically manage state space in high dimensional environments.
In order to modularize the attributes, we propose a simple but efficient RL framework called the cascade attribute learning network (CALNet). The basic idea of the CALNet is shown in Fig. 1. In the CALNet, the attribute modules are connected in cascade series. Each attribute module receives both the output of its preceding module and its corresponding states, and returns an action that satisfies all the attributes ahead of it. The details of the CALNet architecture and the training methods are described in Section III. Using the CALNet, one can solve an unseen task zero-shot by separately learning all the attributes in the task and assembling the attribute modules in series. The remainder of this paper is organized as follows: the related works and the background of RL are introduced in Section II. In Section III, the architecture of the CALNet and the implementation details are described. In Section IV, we show simulation results that validate the proposed model using a variety of robots and attributes, and discuss the experiments. The conclusions are given in Section V.
II Related Work and Background
II-A Related Work
There have been many attempts to create versatile intelligence that can not only solve complicated tasks but also adjust to changes in those tasks. Transfer learning is a key tool that makes use of previously learned knowledge for better or faster learning of new knowledge. Rusu et al. designed a multi-column network framework, referred to as the progressive network, in which newly added columns are laterally connected to previously learned columns for knowledge transfer. Daftry et al. and Braylan et al. also designed interesting network architectures for knowledge transfer in MAV control and video game playing. Combining transfer learning and imitation learning, Ammar et al. use unsupervised learning to map states for transfer, assuming the existence of a distance function between different state spaces. Gupta et al. learn a feature space invariant across states of different dimensions and use demonstrations to increase the density of the rewards. Our work differs from those works mainly in that we put emphasis on the modularization of attributes, which are concrete and meaningful modules that can be conveniently assembled into various combinations.
There are other methods seeking to learn a globally general policy. Meta learning attempts to build self-adaptive learners that improve their bias by accumulating experience. One-shot imitation learning, for example, is a meta learning framework trained on a number of different tasks so that new skills can be learned from a single expert demonstration. Curriculum learning (CL) trains a model on a sequence of cognate tasks that gradually become more challenging, so as to solve hard tasks that cannot be learned from scratch. Florensa et al. applied reverse curriculum generation (RCL) in RL. In the early stage of the training process, RCL initializes the agent state to be very close to the target state, making the policy very easy to train, and then gradually increases the random level of the initial state as the RL model performs better and better. Our policy training strategy is inspired by the idea of CL and achieves satisfying robustness for the policies. There is also research on training modular neural networks: Devin et al. investigate combinations of multiple robots and tasks, while Andreas et al. investigate combinations of multiple sequential subtasks. Our work looks into modularization in a different dimension: we investigate the modularization of attributes, the characteristics or requirements that take effect throughout the whole task.
II-B Deep Reinforcement Learning Background
The objective of RL is to maximize the expected sum of discounted rewards $\mathbb{E}\left[\sum_t \gamma^t r_t\right]$ in an MDP in which the agent interacts with the environment. The agent observes state $s_t$ at time $t$ and selects an action $a_t$ according to its policy $\pi_\theta$ parameterized by $\theta$. The environment receives $s_t$ and $a_t$, and returns the next state $s_{t+1}$ and the reward of this step, $r_t$. The $\gamma$ in the objective function is a discounting coefficient. The main approaches for reinforcement learning include deep Q-learning (DQN), asynchronous advantage actor critic (A3C), trust region policy optimization (TRPO), and proximal policy optimization (PPO). Approaches used in continuous control are mostly policy gradient methods, i.e. A3C, TRPO, and PPO. The vanilla policy gradient method updates the parameters $\theta$ by ascending the log probability of actions with higher advantage $\hat{A}_t$. The surrogate objective function is
$$L^{PG}(\theta)=\hat{\mathbb{E}}_t\left[\log\pi_\theta(a_t\,|\,s_t)\,\hat{A}_t\right].$$
Although A3C uses an unbiased estimator of the policy gradient, large updates can prevent the policy from converging. TRPO introduces a constraint that restricts the updated policy from being too far, in Kullback-Leibler (KL) distance, from the old policy. Usually, TRPO solves an unconstrained optimization with a penalty punishing the KL distance between $\pi_{\theta_{\text{old}}}$ and $\pi_\theta$, specifically,
$$\max_\theta\ \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t\,|\,s_t)}{\pi_{\theta_{\text{old}}}(a_t\,|\,s_t)}\hat{A}_t-\beta\,\mathrm{KL}\!\left[\pi_{\theta_{\text{old}}}(\cdot\,|\,s_t),\,\pi_\theta(\cdot\,|\,s_t)\right]\right].$$
However, the choice of the penalty coefficient $\beta$ has been a problem. Therefore, PPO modifies TRPO by using a simple clip function parameterized by $\epsilon$ to limit the policy update. Specifically,
$$L^{CLIP}(\theta)=\hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],\qquad r_t(\theta)=\frac{\pi_\theta(a_t\,|\,s_t)}{\pi_{\theta_{\text{old}}}(a_t\,|\,s_t)}.$$
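As a concrete illustration, the following minimal NumPy sketch evaluates the clipped surrogate from log-probabilities and advantage estimates; the function name, batch layout, and the toy data at the bottom are our own illustration rather than a reference implementation.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, epsilon=0.2):
    """Clipped PPO surrogate objective (to be maximized).

    logp_new:   log pi_theta(a_t | s_t) under the current policy
    logp_old:   log pi_theta_old(a_t | s_t) under the data-collecting policy
    advantages: advantage estimates A_hat_t (e.g. from GAE)
    epsilon:    clip range for the probability ratio
    """
    ratio = np.exp(logp_new - logp_old)                       # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.mean(np.minimum(unclipped, clipped))            # empirical expectation

# Toy usage with random numbers, only to show the shapes involved:
rng = np.random.default_rng(0)
logp_old = rng.normal(size=128)
logp_new = logp_old + 0.01 * rng.normal(size=128)
adv = rng.normal(size=128)
print(ppo_clip_objective(logp_new, logp_old, adv))
```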
This simple objective turns out to perform well while enjoying better sample complexity, so we use PPO as the default RL algorithm in our policy training. We are also inspired by the distributed PPO of Heess et al. to build a distributed framework with multiple threads to speed up the training process.
The advantage function describes how much better an action is compared to a baseline. Traditionally, the difference between the estimated Q value and the value function, $\hat{A}_t = Q(s_t, a_t) - V(s_t)$, is used as the advantage. Recently, Schulman et al. proposed generalized advantage estimation (GAE) to trade off the bias and variance of the advantage estimator.
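For reference, a short sketch of the standard GAE recursion $\hat{A}_t=\delta_t+\gamma\lambda\hat{A}_{t+1}$ with $\delta_t=r_t+\gamma V(s_{t+1})-V(s_t)$ is given below; the variable names and default coefficients are illustrative assumptions, not values taken from our experiments.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over a single episode.

    rewards: r_0 ... r_{T-1}
    values:  V(s_0) ... V(s_T)  (one extra bootstrap value at the end)
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual delta_t
        gae = delta + gamma * lam * gae                           # discounted sum of residuals
        advantages[t] = gae
    return advantages
```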
III The CALNet
III-A Problem Formulation
We consider an agent performing a complicated task with multiple attributes. Since the agent is fixed, its action space is a fixed space, which we call $\mathcal{A}$. We decompose the task into a series of attributes, denoted $\{\omega_0, \omega_1, \ldots, \omega_n\}$. We refer to the attribute $\omega_0$ as the base attribute, which usually corresponds to the most fundamental goal of the task, such as the target reaching attribute in the autonomous driving task. We define the state space of each attribute $\omega_i$ to be the minimum state space that fully characterizes the attribute, denoted $\mathcal{S}_i$. For example, let the base attribute $\omega_0$ be the target reaching attribute, and the attribute $\omega_1$ be the obstacle avoidance attribute. Then $\mathcal{S}_0$ consists of the states of the agent and the target, while $\mathcal{S}_1$ consists of the states of the agent and the obstacle, yet does not include the states of the target.
Each attribute $\omega_i$ has a unique reward function as well, denoted $r_i$. Each $r_i$ is a function mapping a state action pair to a real number reward, i.e. $r_i: \mathcal{S}_i \times \mathcal{A} \to \mathbb{R}$. Similarly, there is a specific transition probability distribution for each attribute, denoted $p_i(s_i^{t+1} \mid s_i^t, a^t)$. For each attribute, the transition function takes in the state action pair and outputs the state of the next timestep, that is, $s_i^{t+1} = f_i(s_i^t, a^t)$.
A key characteristic of our problem formulation is that the state spaces for different attributes can be different. This formulation enables the attribute learning network to dynamically manage the state space of the task. Specifically, the state of attribute $\omega_i$, $s_i \in \mathcal{S}_i$, is fed to the module of attribute $\omega_i$ in the network.
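One minimal way to express this formulation in code is to bundle each attribute's state extractor and reward function into a single object; the class, field names, and the toy obstacle reward below are hypothetical, used only to illustrate how the state spaces of different attributes can differ.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Attribute:
    """One attribute of a task: its own (minimal) state space and reward."""
    name: str
    state_fn: Callable[[dict], np.ndarray]                  # extracts s_i in S_i from the full simulator state
    reward_fn: Callable[[np.ndarray, np.ndarray], float]    # r_i(s_i, a)

# Example: the obstacle-avoidance attribute only sees agent and obstacle positions,
# not the target, matching the "minimum state space" definition above.
obstacle = Attribute(
    name="obstacle",
    state_fn=lambda env: np.concatenate([env["agent_pos"], env["obstacle_pos"]]),
    reward_fn=lambda s, a: -1.0 if np.linalg.norm(s[:2] - s[2:4]) < 0.1 else 0.0,
)
```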
III-B Network Architecture
The architecture of the CALNet is shown in Fig. 2 and Fig. 3. Both the training phase (Fig. 2) and the testing phase (Fig. 3) of the CALNet are implemented in cascade order. In the training phase, an RL policy $\pi_0$ is first trained to accomplish the goal of the base attribute. The base attribute network takes in $s_0$ and outputs $a_0$; the reward and transition functions of the MDP are given by $r_0$ and $f_0$. This process is a default RL training process.
Then the attribute module for $\omega_1$ is trained in series with the base attribute module. The attribute module consists of a compensate network and a weighted sum operator. The compensate network is fed with the state $s_1$ and the action $a_0$ chosen by the base policy $\pi_0$. The output of the compensate network is the compensate action $\delta a_1$, which is used to compensate $a_0$ to produce the overall action $a_1$. The reward for the MDP is given by combining $r_0$ and $r_1$ so that the requirements of both attributes are satisfied. The new transition function may not be directly calculated from $f_0$ and $f_1$, but it can easily be obtained from the environment. Since the parameters of the base attribute network are pretrained, the cascading attribute network extracts the features of attribute $\omega_1$ by exploring the new MDP under the guidance of the base policy.
Note that in the weighted sum operator, the weight of the compensate action is initialized to be small and increased over the course of training. That is, in the early stage of the training process, mainly the base action $a_0$ takes effect, while the compensate action $\delta a_1$ gradually gets to influence the overall action $a_1$ as the training goes on. For the other attributes, the training method is the same as that of attribute $\omega_1$.
In the testing phase, the designated attribute modules are connected in series following the base attribute module, as shown in Fig. 3. In the CALNet, the $i$-th attribute module takes in $s_i$ and $a_{i-1}$, and outputs an action $a_i$ that satisfies all the attributes up to and including module $i$. The final output is the overall action that satisfies all the attributes in the attribute array.
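The cascade composition described above can be sketched as follows. The `compensate_net` and `base_policy` callables stand in for arbitrary trained networks, and `alpha` is the weight of the compensate action that is grown during training; all names and the toy usage are illustrative assumptions rather than the exact CALNet implementation.

```python
import numpy as np

class AttributeModule:
    """Cascading attribute module: compensates the action of the preceding module."""

    def __init__(self, compensate_net, alpha=0.0):
        self.compensate_net = compensate_net  # maps concat(s_i, a_prev) -> compensate action
        self.alpha = alpha                    # weight of the compensate action, grown during training

    def act(self, state_i, prev_action):
        delta = self.compensate_net(np.concatenate([state_i, prev_action]))
        return prev_action + self.alpha * delta   # weighted-sum operator

def cascade_action(base_policy, modules, states):
    """states[0] feeds the base policy, states[i] feeds the i-th attribute module."""
    action = base_policy(states[0])
    for module, s_i in zip(modules, states[1:]):
        action = module.act(s_i, action)
    return action

# Toy usage with linear "networks" standing in for the trained policies:
base_policy = lambda s0: 0.5 * s0                          # base attribute policy pi_0
mod = AttributeModule(lambda x: -0.1 * x[:2], alpha=0.3)   # one attribute module
print(cascade_action(base_policy, [mod], [np.ones(2), np.ones(3)]))
```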
III-C Training Method
To guarantee the capacity of the CALNet, the policies need to meet two requirements:
The attribute policies should be robust over the state space, rather than being effective only at states that are close to the optimal trajectory. This requirement ensures that the attribute policies remain instructive when more compensate actions are added on top of them.
The compensate action for a certain attribute should be close to zero if the agent is in a state where this attribute is not active. This property increases the capability of multi-attribute structures.
For the sake of the robustness of the attribute policies, we apply CL to learn a general policy that can accomplish the task starting from any initial state. The CL algorithm first trains a policy with fixed initial state. As the training goes on, the random level of the initial state is smoothly increased, until the initial state is randomly sampled from the whole state space. The random level is increased only if the policy is capable enough for the current random level.
For example, consider the task of moving a ball to reach a target point in a 2 dimensional space. In each episode, the initial position of the ball is randomly sampled in a circular area. The random level in this case is the radius of the circle. In the early training stage, the radius is set to be very small, and the initial position is almost fixed. As the policy gains more and more generality, the reward in each episode increases. Once the reward reaches a threshold, the random level is increased, and the initial position of the ball is sampled from a larger area. The terminal random level corresponds to the circumstance where the circular sampling area fully covers the working zone. If the policy performs well under the terminal random level, the policy is considered successfully trained. The pseudocode for this process is shown in Algorithm 1.
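Since Algorithm 1 is referenced but not reproduced here, the sketch below gives one plausible reading of the curriculum loop: the initial-state sampling radius grows whenever the evaluated reward passes a threshold. The helper callables, thresholds, and step sizes are placeholders, not the values used in our experiments.

```python
def curriculum_training(train_one_iteration, evaluate_reward,
                        max_radius=1.0, reward_threshold=200.0,
                        radius_step=0.05, max_iterations=10000):
    """Curriculum learning loop: widen the initial-state sampling area as the policy improves."""
    radius = 0.01                                # near-fixed initial state at the start
    for it in range(max_iterations):
        train_one_iteration(radius)              # rollouts with initial states sampled inside circle(radius)
        if evaluate_reward(radius) > reward_threshold:
            radius = min(radius + radius_step, max_radius)   # raise the random level
            if radius >= max_radius and evaluate_reward(radius) > reward_threshold:
                break                            # policy succeeds at the terminal random level
    return radius
```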
To satisfy the second requirement, an extra loss term that punishes the magnitude of the compensate action $\delta a_i$ is added to the reward function, so as to reduce $\delta a_i$ when attribute $\omega_i$ is not active.
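A simple way to realize this penalty, under our own naming, is to subtract a scaled norm of the compensate action from the combined attribute reward; the coefficient below is an assumed placeholder.

```python
import numpy as np

def shaped_reward(r_base, r_attr, compensate_action, penalty_coef=0.1):
    """Combined reward with an L2 penalty on the compensate action magnitude."""
    return r_base + r_attr - penalty_coef * float(np.linalg.norm(compensate_action))
```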
IV Experiments

Our experiments aim to validate the capability and advantages of the CALNet. In this section, we first introduce the experiment setup, and then show the capability of the CALNet to modularize and assemble attributes in multi-attribute tasks. In the last part of this section, we compare the CALNet with the baseline RL algorithm and show that the CALNet can adjust to complicated tasks more easily.
IV-A Experiment Setup

The experiments are powered by the MuJoCo physics simulator. The policies are trained using the PPO method with GAE as the advantage estimator.
We design three robots as agents in our experiments. They are a robot arm in 2 dimensional space, a moving ball in 2 dimensional space, and a robot arm in 3 dimensional space. For all three robots we have enabled both position control and force control modes.
For each agent we have designed 5 attributes:
IV-A1 Reaching (base attribute)
The reaching task is a natural selection for the base attribute. For the ball agent, the goal is to collide with the target object. For the robot arm agents, the goal is to touch the target object.
IV-A2 Obstacle (position phase)
The obstacle attribute adds a rigid obstacle ball to the space. Negative rewards are given if the robot collides with the obstacle. Therefore, in baseline RL training, the agent can be dissuaded from exploring in the right direction.
IV-A3 Automated door (time phase)
The automated door attribute is purely time controlled. The door blocking the target opens only at certain times. This attribute is harder than the obstacle, since it punishes the agent even if it moves in the right direction at the wrong time.
IV-A4 Speed limit (velocity phase)
The speed limit attribute adds a time-variant speed limit on the agent. The agent is punished if it surpasses the speed limit, but if the robot moves too slowly, it may not be able to finish the task within one episode.
IV-A5 Force disturbance (acceleration phase)
The force disturbance attribute adds a time-variant force disturbance to the agent (or each joint for the arm).
IV-B CALNet Performance
The first set of experiments tests the capability of the CALNet to learn attributes and assemble learned attributes. We first train the base attribute module using the baseline RL algorithm with CL, and then use the cascading modules to modularize the different attributes based on the pretrained base module. The results show that all the attributes can be successfully added to the base attribute using the CALNet. Fig. 5 shows examples of the agents performing different attribute combinations.
We also test the transferability of the cascading modules and the capability of the CALNet to model tasks with multiple attributes. Concretely, we first train two attribute modules in parallel based on the pretrained base module. Then we connect the two attribute modules in series following the base attribute module. The CALNet structure is the same as the one shown in Fig. 3. The policy derived from the assembled network can solve, zero-shot, most tasks that require satisfying both attributes.
Fig. 6 shows two examples of the CALNet solving, zero-shot, a task where the moving ball reaches the target while avoiding two obstacles simultaneously. We emphasize that this task was never trained on before. Zero-shot performance is achieved simply by connecting two pretrained obstacle attribute modules in series following the base module. Undeniably, as the attributes grow more complicated and the number of attributes gets larger, a certain amount of finetuning would be required. However, the advantage of modularizing and assembling attributes is remarkable, since the finetuning process is much easier and faster than training a new policy from scratch (as discussed in Section IV-C).
IV-C Comparison with Baseline RL Methods
We compare the capabilities of the CALNet and the baseline RL by comparing their training processes on the same task. We consider the MDP in which the ball agent reaches the target while avoiding an obstacle. The CALNet is trained with CL. For the baseline RL trained with CL, in many cases it is too hard for the agent to reach the target. Therefore, we also implement RCL, which sets the initial state very close to the target in the early stage of the training phase. Using RCL, the baseline RL can gain positive reward very quickly. The challenge is whether the RL algorithm can maintain a high reward level as the random level increases.
For the CALNet, the base attribute module is pretrained, and we train the obstacle avoidance attribute module based on the base module. For the baseline RL, the task is trained from scratch. The comparison focuses on the reward and random level in CL versus the number of training iterations.
The reward and random level curves are shown in Fig. 7, with the horizontal axis representing the training iterations. The baseline RL using CL barely learns anything: the reward is too sparse, and the agent consistently receives punishment from the obstacle and falls into a local minimum. For the baseline RL using RCL, the average discounted reward per episode is high in the early stage, as expected. But as the random level rises, the performance of the baseline RL with RCL drops. Therefore, the random level increases only slowly as the training goes on.
The CALNet, on the other hand, is able to overcome the misleading punishments from the obstacle, thanks to the guidance of the instructive base attribute policy. As a result, the random level of the CALNet rises rapidly, and the CALNet reaches the terminal random level more than 10 times faster than the baseline. These results indicate that the attribute module learns substantial knowledge of the attribute as the CL based training goes on.
V Conclusion

In this paper, we propose attribute learning and present the advantages of using this novel method to modularize complicated tasks. The RL framework we propose, the CALNet, uses cascading attribute modules to model the characteristics of the attributes. The attribute modules are trained under the guidance of the pretrained base attribute module. We validated the effectiveness of the CALNet in modularizing and assembling attributes, and showed its advantages over the baseline RL in solving complicated tasks. Our future work includes transferring attributes between different base attributes and even different agents. Another potential direction is to investigate attribute learning models that can assemble a large number of attributes. We believe that attribute learning can help humans build versatile controllers more easily.
- Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. Vol. 1. No. 1. Cambridge: MIT Press, 1998.
- Levine, Sergey, and Pieter Abbeel. "Learning neural network policies with guided policy search under unknown dynamics." Advances in Neural Information Processing Systems. 2014.
- Schulman, John, et al. "High-dimensional continuous control using generalized advantage estimation." arXiv preprint arXiv:1506.02438 (2015).
- Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40.
- Taylor, Matthew E., and Peter Stone. "Transfer learning for reinforcement learning domains: A survey." Journal of Machine Learning Research 10.Jul (2009): 1633-1685.
- Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on Knowledge and Data Engineering 22.10 (2010): 1345-1359.
- Rusu, Andrei A., et al. "Progressive neural networks." arXiv preprint arXiv:1606.04671 (2016).
- Rusu, Andrei A., et al. "Sim-to-real robot learning from pixels with progressive nets." arXiv preprint arXiv:1610.04286 (2016).
- Daftry, Shreyansh, J. Andrew Bagnell, and Martial Hebert. "Learning transferable policies for monocular reactive MAV control." International Symposium on Experimental Robotics. Springer, Cham, 2016.
- Braylan, Alexander, Mark Hollenbeck, Elliot Meyerson, and Risto Miikkulainen. "Reuse of neural modules for general video game playing." (2016).
- Ammar, Haitham Bou, et al. "Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment." Proc. of AAAI. 2015.
- Gupta, Abhishek, et al. "Learning invariant feature spaces to transfer skills with reinforcement learning." arXiv preprint arXiv:1703.02949 (2017).
- Vilalta, Ricardo, and Youssef Drissi. "A perspective view and survey of meta-learning." Artificial Intelligence Review 18.2 (2002): 77-95.
- Duan, Yan, et al. "One-shot imitation learning." arXiv preprint arXiv:1703.07326 (2017).
- Bengio, Yoshua, et al. "Curriculum learning." Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
- Florensa, Carlos, et al. "Reverse curriculum generation for reinforcement learning." arXiv preprint arXiv:1707.05300 (2017).
- Devin, Coline, et al. "Learning modular neural network policies for multi-task and multi-robot transfer." Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.
- Andreas, Jacob, Dan Klein, and Sergey Levine. "Modular multitask reinforcement learning with policy sketches." arXiv preprint arXiv:1611.01796 (2016).
- Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
- Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." International Conference on Machine Learning. 2016.
- Schulman, John, et al. "Trust region policy optimization." Proceedings of the 32nd International Conference on Machine Learning (ICML-15). 2015.
- Schulman, John, et al. "Proximal policy optimization algorithms." arXiv preprint arXiv:1707.06347 (2017).
- Kullback, Solomon, and Richard A. Leibler. "On information and sufficiency." The Annals of Mathematical Statistics 22.1 (1951): 79-86.
- Heess, Nicolas, et al. "Emergence of locomotion behaviours in rich environments." arXiv preprint arXiv:1707.02286 (2017).
- Todorov, Emanuel, Tom Erez, and Yuval Tassa. "MuJoCo: A physics engine for model-based control." Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012.