1 Introduction
The field of domain-independent planning designs planners that can be run on all symbolic planning problems described in a given input representation. These planners use representation-specific (rather than domain-specific) algorithms, allowing them to be run on any domain expressible in the representation.
Two popular representation languages for expressing probabilistic planning problems are PPDDL, the Probabilistic Planning Domain Description Language (Younes et al., 2005), and RDDL, the Relational Dynamic Influence Diagram Language (Sanner, 2010). These languages express Markov Decision Processes (MDPs) with potentially large state spaces using a factored representation. Most traditional algorithms for PPDDL and RDDL planning solve each problem instance independently and are not able to share policies between two or more problems in the same domain
(Mausam and Kolobov, 2012). Very recently, probabilistic planning researchers have begun exploring ideas from deep reinforcement learning (RL), which approximates state-value or state-action mappings via deep neural networks. Deep RL algorithms expect only a domain simulator and no domain model. Since an RDDL or PPDDL model can always be converted to a simulator, every model-free RL algorithm is applicable to the MDP planning setting. Recent results have shown competitive performance of deep reactive policies generated by RL on several planning benchmarks
(Fern et al., 2018; Toyer et al., 2018). Because neural models can learn latent representations, they can be effective at efficient transfer from one problem to another. In this paper, we present the first domain-independent transfer algorithm for MDP planning domains expressed in RDDL. As a first step towards this general research area, we investigate transfer between equi-sized problems from the same domain, i.e., those with the same state variables but different connectivity graphs.
We name our novel neural architecture ToRPIDo: Transfer of Reactive Policies Independent of Domains. ToRPIDo brings together several innovations that are well-suited to symbolic planning problems. First, it exploits the given symbolic (factored) state and its connectivity structure to generate a state embedding using a graph convolutional network (Goyal and Ferrara, 2017). An RL agent learned over the state embeddings transfers well across problems. One of the most important challenges for our model is the actions, which are also expressed in a symbolic language. The same ground action name in two problem instances may actually denote different actions, since the state variables over which the action is applied may have different interpretations. As a second innovation, we train an action decoder that learns the mapping from an instance-independent state-action embedding to an instance-specific ground action. Third, we make use of the given transition function by training an instance-independent model of domain transitions in the embedding space. This transfers well and enables fast learning of the action decoder for every new instance. Finally, as a fourth innovation, we also implement an adversarially optimized instance classifier, whose job is to predict which instance a given embedding comes from. It helps
ToRPIDo learn instance-independent embeddings more effectively. We perform experiments across three standard RDDL domains from IPPC, the International Probabilistic Planning Competition (Grzes et al., 2014). We compare the learning curves of ToRPIDo with A3C, a state-of-the-art deep RL engine (Mnih et al., 2016), and Attend-Adapt-Transfer (A2T), a state-of-the-art deep RL transfer algorithm (Rajendran et al., 2017). We find that ToRPIDo has much superior learning performance, i.e., it obtains a much higher reward for the same number of learning steps. Its strength is in near-zero-shot learning: it can quickly train an action decoder for a new instance and obtain a much higher reward than the baselines without running any RL or making any simulator calls. To summarize,

- We present the first domain-independent transfer algorithm for symbolic MDP domains expressed in the RDDL language.

- Our novel architecture, ToRPIDo, uses the graph structure and transition function present in RDDL to induce an instance-independent RL module, state encoder and other components. These can easily be transferred to a test instance, so that only an action decoder is learned from scratch.

- ToRPIDo has significantly superior learning curves compared to existing transfer algorithms and to training from scratch. Its particular strength is near-zero-shot transfer: transfer of policies without running any RL.
- We release the code of ToRPIDo for future research, available at https://github.com/dairiitd/torpido.
2 Background and Related Work
2.1 Reinforcement Learning
In its standard setting, an RL agent acts for long periods of time in an uncertain environment and wishes to maximize its long-term return. The dynamics is modeled via a Markov Decision Process (MDP), which takes as input a state space $S$, an action space $A$, unknown transition dynamics $T(s'|s,a)$, and an unknown reward function $R(s,a)$ (Puterman, 1994). The agent in state $s_t$ at time $t$ takes an action $a_t$ to get a reward $r_t$ and make a transition to $s_{t+1}$ via the MDP dynamics. The return is defined as the discounted sum of rewards, $G_t = \sum_{k \geq 0} \gamma^k r_{t+k}$, with discount factor $\gamma \in [0,1)$. The value function $V^\pi(s)$ is the expected (infinite-step) discounted return from state $s$ if all actions are selected according to policy $\pi$. The action-value function $Q^\pi(s,a)$ is the expected discounted return after taking action $a$ in state $s$ and then selecting actions according to $\pi$ thereafter.
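As a concrete illustration, the discounted return defined above can be computed by folding over a reward sequence from the end; a minimal sketch in plain Python:

```python
# Discounted return: the discounted sum of rewards described above,
# accumulated backwards so each step multiplies by gamma exactly once.
def discounted_return(rewards, gamma=0.99):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], 0.5))  # 1 + 0.5 + 0.25 = 1.75
```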
Deep RL algorithms approximate the policy (Williams, 1992), the value function (Mnih et al., 2015), or both (Mnih et al., 2016) via a neural network. Our work builds upon the Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016), which constructs approximations for both the policy $\pi(a|s;\theta)$ (using the 'actor' network) and the value function $V(s;\theta_v)$ (using the 'critic' network). The parameters of the actor network are adjusted to maximize the expected reward by using the gradient of the 'advantage' function $A(s_t,a_t) = R_t - V(s_t;\theta_v)$, which measures the improvement of the action over the expected state value. Hence the update to the actor network is the expectation of $\nabla_\theta \log \pi(a_t|s_t;\theta)\,A(s_t,a_t)$. The critic network approximates the $n$-step lookahead reward by minimizing the expectation of the mean squared loss $(R_t - V(s_t;\theta_v))^2$. Here, the optimization is with respect to $\theta$ and $\theta_v$, the cumulative parameters of the actor and critic networks. Furthermore, many instances of the agent interact in parallel with many instances of the environment, which both accelerates and stabilizes learning in A3C.
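To make the actor-critic updates concrete, the toy sketch below performs one advantage-based update for a softmax policy over two actions and a scalar state value. The tabular setting and learning rates are illustrative assumptions, not the neural networks used by A3C:

```python
import math

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

# One advantage actor-critic update: the actor follows the policy
# gradient scaled by the advantage; the critic moves V(s) toward the
# bootstrapped target r + gamma * V(s').
def ac_update(prefs, value, action, reward, next_value,
              gamma=0.99, lr_pi=0.1, lr_v=0.1):
    advantage = reward + gamma * next_value - value
    probs = softmax(prefs)
    new_prefs = [p + lr_pi * advantage * ((1.0 if i == action else 0.0) - probs[i])
                 for i, p in enumerate(prefs)]
    new_value = value + lr_v * advantage
    return new_prefs, new_value
```

A positive advantage raises the taken action's preference and lowers the others'; a negative advantage does the reverse.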
Transfer Learning in Deep RL: Neural models are highly amenable to transfer learning, because they can learn generalized representations. Initial approaches to transfer learning in deep RL involved transferring the value function or policies from the source to the target task. More recent methods have developed these ideas further, for example, by using expert policies from multiple tasks and combining them with source task features to learn an actor-mimic policy (Parisotto et al., 2015), or by using a teacher network to propose a curriculum over tasks for effective multi-task learning (Matiisen et al., 2017). A preliminary approach has also used a symbolic front-end for deep RL (Garnelo et al., 2016). We compare our transfer algorithm against a recent algorithm that uses an attention mechanism to allow selective transfer and avoid negative transfer (Rajendran et al., 2017).
Recently, there have also been attempts at zero-shot transfer, i.e., transfer without seeing the new domain. An example is DARLA (Higgins et al., 2017), which leverages recent work on domain adaptation to learn a domain-independent representation of the state and learns a policy over this representation, making the learned policy robust to domain shifts. Our work attempts near-zero-shot transfer by learning a good policy with limited learning and without any RL.
2.2 Probabilistic Planning
Planning problems are special cases of RL problems in which the transition function and reward function are known (Mausam and Kolobov, 2012). In this work, we consider planning problems that model finite-horizon discounted-reward MDPs with a known initial state (Kolobov et al., 2012). Thus, our problems take as input $\langle S, A, T, R, \gamma, s_0, H \rangle$, where $H$ is the horizon of the problem. Probabilistic planners can use the model to perform a full Bellman backup, i.e., an expectation over all next outcomes of an action (e.g., in Value Iteration (Bellman, 1957)), whereas RL agents can only back up from a single sampled next state (e.g., in Q-Learning (Sutton and Barto, 1998)).
Factored MDPs provide a more compact way to represent MDP problems. They decompose a state into a set of binary state variables; the transition function specifies the change in each state variable, and the reward function also uses a factored representation. Solving a finite-horizon factored MDP is EXPTIME-complete, because it can represent an MDP with exponentially many states in polynomial size (Littman, 1997).
RDDL Representation: RDDL describes a factored MDP via objects, predicates and functions. It is a first-order representation, i.e., it can be instantiated with different sets of objects to construct different MDPs from the same domain. A domain has parameterized non-fluents to represent the part of the state description that does not change. A planning state comprises those state variables that can change via actions or natural dynamics; they are represented as parameterized fluents. The transition function of the system is specified via (stochastic) functions over next-state variables conditioned on current state variables and actions. The reward function is also defined in factored form using the state variables.
We illustrate the RDDL language via the SysAdmin domain (Guestrin et al., 2001), which consists of a set of computers connected in a network. Each computer in the network can shut down with a probability dependent on the ratio of its 'on' neighbours to its total number of neighbours. Any 'off' computer can randomly switch on with a reboot probability. In each time step the agent can take the action of rebooting a single computer, or no action at all. Note that noop is a valid action, since this domain evolves even if the agent does not act. The reward at each time step is the number of 'on' computers at that time step. The natural structure of the problem, and the factored nature of the transition and reward functions, make this problem perfectly suited to representation in RDDL, as follows. Objects: the computers $c_1, \ldots, c_n$; Non-fluents: $\mathit{CONNECTED}(c_i, c_j)$, whose value is 1 if $c_i$ is a neighbor of $c_j$; State fluents: $\mathit{running}(c_i)$, which is 1 if computer $c_i$ is on; Action fluents: $\mathit{reboot}(c_i)$, which denotes that the agent rebooted computer $c_i$; Reward function: $\sum_i \mathit{running}(c_i)$; Transition function: if $\mathit{reboot}(c_i)$, then $\mathit{running}'(c_i)$ holds with probability $p_1$; else if $\mathit{running}(c_i)$, then $\mathit{running}'(c_i)$ holds with probability $p_2 + p_3 \cdot \frac{\sum_j \mathit{CONNECTED}(c_j, c_i)\,\mathit{running}(c_j)}{\sum_j \mathit{CONNECTED}(c_j, c_i)}$; else $\mathit{running}'(c_i)$ holds with probability $p_4$. Here all primed fluents denote values at the next time step, and $p_1, p_2, p_3, p_4$ are constants modeling the dynamics of the domain.

Deep Learning for Planning: Value Iteration Networks formalize the idea of running the Value Iteration algorithm within the neural model (Tamar et al., 2017); however, they operate on the flat state space instead of the factored space. There have been three recent works on the use of neural architectures for domain-independent factored MDP planning. One work learns deep reactive policies by using a network that mimics the local dependency structure in the RDDL representation of the problem (Fern et al., 2018). We are also interested in problems specified in RDDL, but are more focused on transfer across problem instances. There has also been an early attempt at transfer in planning problems, for two specific classical (deterministic) domains, TSP and Sokoban (Groshev et al., 2018). In contrast, we propose a transfer mechanism that can be used for domain-independent RDDL planning. Finally, Action-Schema Nets (ASNets) use layers of propositions and actions for solving and transferring between goal-oriented PPDDL planning problems (Toyer et al., 2018). RDDL allows concurrent conditional effects, which when converted to PPDDL can lead to an exponential blowup in the action space. Therefore, ASNets do not scale to the RDDL domains considered in this paper.
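Returning to the SysAdmin dynamics described above, one stochastic step can be sketched as follows; the fluent names and probability constants are illustrative stand-ins for the actual RDDL definition:

```python
import random

# One step of SysAdmin-style dynamics: a rebooted machine comes up with
# high probability, a running machine stays up with probability growing
# in the fraction of running neighbours, and an 'off' machine reboots
# spontaneously with small probability. All constants are assumptions.
def sysadmin_step(running, neighbors, reboot_action, rng,
                  p_reboot=0.95, p_base=0.45, p_nbr=0.5, p_spont=0.05):
    nxt = []
    for c, on in enumerate(running):
        if reboot_action == c:
            p_on = p_reboot
        elif on:
            nbr = neighbors[c]
            frac = sum(running[j] for j in nbr) / len(nbr) if nbr else 1.0
            p_on = p_base + p_nbr * frac
        else:
            p_on = p_spont
        nxt.append(1 if rng.random() < p_on else 0)
    reward = sum(running)  # number of 'on' computers this step
    return nxt, reward
```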
2.3 Graph Convolutional Networks
Graph Convolutional Networks (GCNs) generalize convolutional networks to arbitrarily structured graphs (Goyal and Ferrara, 2017). A GCN layer takes as input a feature representation for every node in the graph (an $N \times D$ feature matrix, where $N$ is the number of nodes in the graph and $D$ is the input feature dimension) and a representation of the graph structure (an adjacency matrix $A$), and produces an output feature representation for every node (in the form of an $N \times F$ matrix, where $F$ is the output feature dimension). A layer of a GCN can be written as $H^{(l+1)} = f(H^{(l)}, A)$, where $H^{(l)}$ and $H^{(l+1)}$ are the feature representations at the $l$-th and $(l+1)$-th layers.
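A single GCN layer of this form can be sketched in plain Python; we use the symmetric normalization rule of Kipf and Welling (2017) with a ReLU nonlinearity (dimensions and weights are illustrative):

```python
# One GCN layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W), where H is the
# N x D node-feature matrix, A the adjacency matrix, and W a D x F
# weight matrix. Pure-Python lists of lists for illustration.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def gcn_layer(H, A, W):
    n = len(A)
    # add self-loops: A_hat = A + I
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    d_inv_sqrt = [sum(row) ** -0.5 for row in A_hat]
    # symmetric normalization: D^-1/2 A_hat D^-1/2
    norm = [[d_inv_sqrt[i] * A_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
            for i in range(n)]
    Z = matmul(matmul(norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]  # ReLU
```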
We use the following propagation rule (from Kipf and Welling (2017)) in our work: $H^{(l+1)} = \sigma\big(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\big)$, where $\hat{A} = A + I_N$, $I_N$ is the identity matrix, and $\hat{D}$ is the diagonal node degree matrix of $\hat{A}$. Intuitively, this propagation rule implies that the feature at a particular node in the $(l+1)$-th layer is a weighted sum of the features of the node and all its neighbours at the $l$-th layer. Furthermore, these weights are shared across all nodes of the graph, similar to how the weights of a CNN kernel are shared across all locations of an image. Hence, at each layer, the GCN expands its receptive field at each node by 1. A deep GCN, i.e., a network consisting of stacked GCN layers, can therefore have a large enough receptive field to construct good feature representations for each node of the graph.

3 Problem formulation
The transfer learning problem is formulated as follows. We wish to learn the policy of a target problem instance $I_0$, where in addition to $I_0$, we are given $n$ source problem instances $I_1, \ldots, I_n$. For this paper, we make the assumption that the state space, action space, and rewards of all problems are the same, even though their initial states and non-fluents may differ. For example, computers in SysAdmin may be arranged in different topologies (based on different values of the connectivity non-fluents). Any transfer learning solution operates in two phases: (1) Learning phase: learn policies over each source problem instance, and possibly also learn general representations that will help in transfer. (2) Transfer phase: learn the policy for $I_0$ using all output of the learning phase.
An ideal zero-shot transfer approach is one in which the target instance environment is not even used during the transfer phase. Two indicators of good transfer are a high pre-train (zero-shot transfer) score, and more sample-efficient learning compared to a policy learned from scratch on the target instance.
4 Transfer Learning framework
Our approach hinges on the claim that there exists a 'good' embedding space for all states, as well as a 'good' embedding space for all state-action pairs, shared by all equi-sized instances of a given domain. A 'good' state embedding space is one in which similar states are close together and dissimilar states are far apart (and similarly for state-action pair embeddings).
Our neural architecture is shown in Figure 1. Broadly, it has five components: a state encoder (SE), an action decoder (SAD), an RL module (RL), a transition module (Tr), and an instance classifier (IC). In the training phase, ToRPIDo learns instance-independent modules for SE, RL, Tr and IC, but an instance-specific SAD for each source instance. Its transfer phase operates in two steps. First, using the general SE and Tr models, it learns weights for the target instance's action decoder, SAD. We call this near-zero-shot learning, because it computes a policy for the target instance without running any RL. Once this SAD is effectively trained, we transfer all the other components and retrain them via RL to further improve the policy for the target instance.
State Encoder: We leverage the structure of RDDL domains to represent the instance in the form of a graph (state variables as nodes, with edges between nodes if the respective objects are connected via the non-fluents of the domain). The state encoder takes the adjacency matrix of the instance graph and the current state as input, and transforms the state into its embedding. We use a Graph Convolutional Network (GCN) to perform this encoding. The GCN constructs multiple features for each node at each layer. Hence, the output of the deep GCN consists of multi-dimensional features at each node, which represent embeddings of the corresponding state variables. These embeddings are concatenated to produce the final state embedding, which is the output of the state encoder module.
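The instance graph fed to the GCN can be built directly from the connectivity non-fluents; a minimal sketch, with the index-pair representation of non-fluents as an assumption:

```python
# Build a symmetric adjacency matrix over state variables from
# RDDL-style connectivity non-fluents given as index pairs.
def build_adjacency(num_vars, connected_pairs):
    A = [[0] * num_vars for _ in range(num_vars)]
    for i, j in connected_pairs:
        A[i][j] = A[j][i] = 1  # undirected edge
    return A
```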
RL Module: This deep RL agent takes a state embedding as input and outputs a policy in the form of a state-action embedding. This embedding is an abstract representation of a soft action, i.e., a distribution over actions in the embedding space (which will be further decoded by the action decoder). We think of this embedding as representing the pair of state and soft action, instead of just a soft action, because the same action may have different effects depending on the state, and hence a standalone action embedding would not make sense. This can be seen as a neural representation of the notion of state-action pair symmetries in RL domains (Anand et al., 2015, 2016). We use the A3C algorithm to learn our RL agent, because it has been shown to be robust and stable (Mnih et al., 2016), though other RL variants could easily replace this module. A3C uses a simulator, which can easily be created by sampling from the known transition function in the RDDL representation.
We note that only the policy network of the A3C agent is shared between instances and operates in the embedding space. The value network is different for each instance and operates in the original state space. We did not try to learn a transferable value function as we were ultimately only concerned with the policy, and not the value in the target domain. Hence, in our case, the sole purpose of the value function is to assist the policy function in learning a good policy.
Action Decoder: The action decoder aims to learn a transformation from the state-action embedding to a soft action (a probability distribution over actions). However, such a transformation would not be well-defined, as a state-action embedding could correspond to more than one (symmetric) state-action pair, and hence to more than one corresponding action. E.g., consider a navigation problem over a square grid (Ravindran, 2004), with the goal at the top-right corner. The goal's immediate neighbors (the state to its left, say $s_1$, and the state below it, say $s_2$) are symmetric, as both can reach the goal in one step. We expect them to have the same state-action pair embedding with their respective optimal actions. However, the optimal actions are "right" for $s_1$ and "up" for $s_2$. To resolve this ambiguity, we also input the state into the action decoder. The decoder then outputs a probability distribution over actions. ToRPIDo implements the action decoder using a fully connected network with a softmax to output a distribution over actions. It is important to realize that we need a separate action decoder for each instance, as the required transformation differs between instances. For example, if the navigation problem is changed so that the goal is now in the lower-left corner, all other embeddings may transfer, but the final action output will be different ("down" and "left" for the states symmetric to $s_1$ and $s_2$). The action decoder is the only component which cannot be directly transferred from source problems to the target problem.
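A minimal sketch of such a decoder: a single linear layer over the concatenated state and state-action embedding, followed by a softmax (the actual decoder uses a deeper fully connected network; weights here are illustrative):

```python
import math

# Action decoder sketch: map (state, state-action embedding) to a
# probability distribution over ground actions via a linear layer
# and a numerically stable softmax.
def decode_action(state, sa_embedding, W, b):
    x = list(state) + list(sa_embedding)        # concatenate inputs
    logits = [sum(w * xi for w, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]
```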
Transition Transfer Module: To speed up transfer to the target domain, we additionally learn a transition module in the learning phase. This module takes as input the current and next state embeddings, and outputs a soft-action embedding (interpreted as a distribution over actions), with the semantics that the output embedding maintains which action is most likely to be responsible for the transition from the current state to the next. The gold data for training this module can easily be computed via the RDDL representation. Note that the transition and RL modules share both the state and state-action embedding spaces. This novel module allows us to quickly learn an action decoder, thus enabling near-zero-shot transfer.
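Gathering the gold data for this module amounts to sampling (s, s', a) triples from the known transition function; a sketch, with the simulator passed in as an arbitrary callable (all names are illustrative):

```python
import random

# Sample supervised triples for the transition module: the input is a
# pair of consecutive states (s, s'), the label is the action a that
# caused the transition.
def collect_transition_data(simulator, states, actions, rng, n):
    data = []
    for _ in range(n):
        s = rng.choice(states)
        a = rng.choice(actions)
        s2 = simulator(s, a, rng)   # one sampled next state
        data.append((s, s2, a))
    return data
```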
Instance Classifier: ToRPIDo's aim is to learn state-action embeddings independent of the instance (since they are shared between all instances). To explicitly enforce this condition, we use an idea from domain adaptation (Ganin et al., 2017). Essentially, as an auxiliary task, we learn a classifier to predict the problem instance, given a state-action embedding. This is done in an adversarial manner, so that the model learns to produce state-action embeddings such that even the best possible classifier is unable to predict the instance from which they were generated. In the ideal case, at equilibrium, the model produces instance-invariant state-action embeddings, as even the best instance classifier would predict an equal probability over all source instances.
4.1 ToRPIDo’s Training & Transfer Phases
Learning Phase: During the learning phase, we learn a policy over each of the source problem instances in a multi-task manner, by sharing the state encoder and RL module and by using a separate decoder for each instance. We also learn the transition module to predict the distribution over actions given consecutive states. Instance invariance is implemented using a gradient reversal layer, as in Ganin et al. (2017). I.e., the gradients of the instance classification loss are backpropagated in the standard manner through the instance classifier layers, but with their sign reversed in all the layers preceding the state-action embedding (hence enforcing the adversarial objective of the game described above). The loss function for training is a weighted sum of the policy gradient loss of A3C, a cross-entropy loss for the prediction of actions given consecutive states (from the transition module), and the instance misclassification loss, i.e., the cross-entropy loss of the instance module with its sign reversed. The instance classification module itself is trained to minimize the cross-entropy loss between the predicted instance distribution and the ground-truth instance. Mathematically,
$L(\theta_{er}, \theta_d, \theta_{tr}, \theta_c) = \sum_{i=1}^{n} \big( L_{RL}^{i} + \lambda_1 L_{tr}^{i} - \lambda_2 L_{c}^{i} \big) \quad (1)$

where $n$ is the number of training instances, $L_{RL}^{i}$ is the loss function of the policy network of the A3C agent on instance $i$, $L_{c}^{i}$ is the cross-entropy loss of the instance classifier, $L_{tr}^{i}$ is the cross-entropy loss of the transition module, and $\lambda_1, \lambda_2$ are weights. Here $\theta_{er}$ represents the combined parameters of the encoder and RL module, $\theta_d$ the parameters of the decoder modules of the agent, $\theta_c$ the parameters of the instance classifier module, and $\theta_{tr}$ the parameters of the transition module. We seek parameters $\hat{\theta}_{er}, \hat{\theta}_d, \hat{\theta}_{tr}, \hat{\theta}_c$ that deliver a saddle point of this functional: $L$ is minimized with respect to $\theta_{er}, \theta_d, \theta_{tr}$ and maximized with respect to $\theta_c$.
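The gradient reversal layer behind this adversarial objective is simple: the identity on the forward pass, a sign-flipped (and optionally scaled) gradient on the backward pass. A framework-agnostic sketch:

```python
# Gradient reversal layer (Ganin et al., 2017): forward is the
# identity; backward multiplies incoming gradients by -lambda, so the
# layers below learn to *increase* the classifier's loss.
class GradReversal:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad):
        return [-self.lam * g for g in grad]
```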
Transfer Phase: During this phase, the state encoder simply requires the adjacency matrix of the target instance as input, and it directly outputs state embeddings for this problem using SE. Since our RL agent operates in the embedding space, it is exactly the same for the target instance and hence is directly transferred. However, an action decoder needs to be relearned. For this, we use the fact that the transition module also operates only in the embedding space, so it too can be directly transferred. For the target instance, we predict the distribution over actions given consecutive states in the new instance. The weights of the state encoder and transition module remain fixed, while the weights of the new instance's decoder are learned. This decoder can then be directly used to transform the state-action embeddings predicted by the RL module into a distribution over actions, i.e., a policy for the new instance. Hence, we achieve near-zero-shot transfer, i.e., without doing any RL in the new environment, by simply retraining the action decoder via transition transfer. After the weights of each module have been initialized as above, ToRPIDo follows the same training procedure as A3C. This generates the learning curve after the zero-shot transfer.
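The transfer-phase decoder training can be sketched as the loop below, with the frozen encoder and transition module treated as black-box callables and decoder_step standing in for one supervised update of the new decoder (all names are illustrative):

```python
# Near-zero-shot transfer: freeze SE and Tr, and fit only the target
# instance's action decoder to predict a from consecutive states.
def train_decoder(encoder, transition, decoder_step, samples):
    for s, s2, a in samples:
        e, e2 = encoder(s), encoder(s2)   # frozen state encoder
        sa_emb = transition(e, e2)        # frozen transition module
        decoder_step(s, sa_emb, a)        # update decoder weights only
```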
In summary, this architecture leverages the extra information in RDDL domains in two ways: (i) it uses the input structure to represent the state as a graph, and uses a GCN to learn a state embedding (ii) it uses the transition function (available in the RDDL file) to learn the decoder for the new domain.
Table 1: α values at different numbers of training iterations.

Train iter |        0         |       0.1M       |       0.5M       |
Algo       | A3C | A2T |  TP  | A3C | A2T |  TP  | A3C | A2T |  TP  | A3C | A2T |  TP
Sys#1      | 0.00| 0.08| 0.23 | 0.01| 0.09| 0.26 | 0.11| 0.19| 0.39 | 0.31| 0.38| 1.0
Sys#5      | 0.00| 0.02| 0.26 | 0.03| 0.11| 0.30 | 0.08| 0.17| 0.64 | 0.30| 0.40| 1.0
Sys#10     | 0.02| 0.06| 0.26 | 0.04| 0.09| 0.33 | 0.08| 0.13| 0.49 | 0.32| 0.33| 1.0
Game#1     | 0.00| 0.19| 0.34 | 0.04| 0.22| 0.49 | 0.43| 0.60| 0.98 | 0.54| 0.61| 1.0
Game#5     | 0.00| 0.03| 0.41 | 0.11| 0.11| 0.55 | 0.24| 0.17| 0.77 | 0.40| 0.44| 1.0
Game#10    | 0.07| 0.05| 0.34 | 0.03| 0.08| 0.49 | 0.08| 0.14| 0.88 | 0.20| 0.20| 1.0
Navi#1     | 0.00| 0.04| 0.72 | 0.01| 0.04| 0.72 | 0.10| 0.19| 0.9  | 0.22| 0.25| 1.0
Navi#2     | 0.00| 0.01| 0.68 | 0.05| 0.06| 0.73 | 0.26| 0.45| 1.0  | 0.55| 0.56| 1.0
Navi#3     | 0.00| 0.01| 0.50 | 0.03| 0.03| 0.60 | 0.21| 0.42| 0.71 | 0.40| 0.40| 1.0
5 Experiments
We wish to answer three experimental questions. (1) Does ToRPIDo help in transfer to new problem instances? (2) How does ToRPIDo compare with other state-of-the-art transfer learning frameworks? (3) What is the importance of each component of ToRPIDo?
Domains: We make all comparisons on three different RDDL domains used in IPPC, the International Probabilistic Planning Competition 2014 (Grzes et al., 2014) – SysAdmin, Game of Life and Navigation. We have already described the SysAdmin domain (Guestrin et al., 2001) in the background section. Game of Life represents a grid-world cellular automaton, where each cell is dead or alive. Each live cell continues to live in the next step as long as there is neither over- nor under-population (measured by the number of adjacent live cells). Additionally, the agent can make any one cell alive in each step. Finally, the Navigation domain requires a robot to move in a grid from one location to another using four actions – up, down, left and right. There is a river in the middle, and the robot can drown with a non-zero probability at each location. If it drowns, the current episode ends and the robot restarts from the start state in the next episode.
All three of these domains have some spatial structure, but it is implicit in the symbolic description (exposed via non-fluents in RDDL). This makes them ideal choices for a first study of this kind. For each domain we perform experiments on three different instances. A higher-numbered problem roughly corresponds to a problem with a much larger state space. For example, SysAdmin1 has 10 computers and SysAdmin10 has 50 computers (effective state-space sizes $2^{10}$ and $2^{50}$, respectively).
Experimental Settings and Comparison Algorithms:
In the spirit of domain-independent planning, all hyperparameters are kept constant across all problems in all domains. Our parameters are as follows. A3C's value network as well as its policy network use two GCN layers (3 and 7 feature maps) and two fully connected layers. The action decoder implements two fully connected layers. All layers use exponential linear unit (ELU) activations (Clevert et al., 2015). All networks are trained using RMSProp with a fixed learning rate. For ToRPIDo, we set $n = 4$, i.e., the training phase uses four source problems. Random problems are generated for training using the generators available for each domain. All weights of the policy network's GCN and RL module are shared.

We implement two baseline algorithms for comparison. First, we implement a base non-transfer algorithm, A3C. This is chosen since ToRPIDo uses A3C as its RL agent, so this comparison directly shows the value of transfer. We also implement a state-of-the-art RL transfer solution called A2T (Attend, Adapt and Transfer) (Rajendran et al., 2017), which retrains while using attention over the learned source policies for selective transfer. It uses the same four source problems as ToRPIDo. This comparison exposes the specific value of our transfer mechanism, which uses the RDDL representation, compared to a representation-agnostic transfer mechanism.
Table 2: α values before RL training on the target instance.

           | A3C+GCN | A3C+GCN+SAD | A3C+GCN+SAD+IC
Sys#1      |  0.01   |    0.25     |      0.23
Sys#5      |  0.01   |    0.23     |      0.26
Sys#10     |  0.01   |    0.22     |      0.26
Game#1     |  0.03   |    0.31     |      0.34
Game#5     |  0.02   |    0.26     |      0.41
Game#10    |  0.01   |    0.24     |      0.34
Navi#1     |  0.05   |    0.70     |      0.72
Navi#2     |  0.03   |    0.59     |      0.68
Navi#3     |  0.01   |    0.27     |      0.50
Evaluation Metrics: First, we plot learning curves. For these, we stop training after a set number of training iterations and estimate the return $r$ of the current policy by simulating it until the horizon specified in the RDDL file. The reported values are an average over 100 such simulations. Moreover, we also report the metric $\alpha = \frac{r - r_{\min}}{r_{\max} - r_{\min}}$, where $r_{\min}$ and $r_{\max}$ respectively represent the lowest and highest values obtained on the instance (by any algorithm at any time in training). Since $\alpha$ is a ratio, it acts as an indicator of the training 'stage' of the model, and hence helps in understanding the transfer process as it progresses in time, irrespective of the model's starting (random) reward and final reward. Moreover, $\alpha$ at iteration 0 acts as a measure of (near-)zero-shot transfer.

5.1 ToRPIDo's Transfer Ability
We first measure the ability of the model to transfer knowledge across problem instances, comparing against all baselines. Figure 2 compares the learning curves of ToRPIDo with the two baselines on one problem from each of the three domains (error bars are 95% confidence intervals over ten runs). The results on the other problems are quite similar. The x-axis is RL training time on the target instance, which for ToRPIDo also includes the time for training the action decoder. First, we find that A3C is not very competitive with even A2T in its learning. This suggests that transfer is quite valuable for these problems. We also find that A2T itself performs substantially worse than ToRPIDo. We attribute this to the various components of ToRPIDo that exploit the domain knowledge expressed in the RDDL representation.

In Table 1 we show comparisons between these algorithms at four different training points. We report the $\alpha$ values described above. We find that ToRPIDo is vastly ahead of all algorithms at all times, underscoring the value our architecture offers.
5.2 Ablation Study
In order to understand the incremental contribution of each of our components, we compare three different versions of ToRPIDo. The first version is A3C+GCN, which performs state encoding but no action decoding. The next version is A3C+GCN+SAD, which incorporates action decoding (along with the transition transfer module that aids it). Finally, our full system, A3C+GCN+SAD+IC, adds the instance classification component.
Figure 3 shows the learning curves for the three problems. We observe that the use of the GCN helps the algorithm converge to a high final score. Comparing this to vanilla A3C and A2T in Figure 2, we learn that the GCN is critical for exposing the structure of the domain to the RL agent, helping it learn a good final policy. However, the zero-shot transfer of this version is very weak, because the action names may differ between the source and target. The use of the action decoder and transition transfer speeds up near-zero-shot transfer immensely. This can be observed from Table 2, which compares these algorithms before RL training starts. We see a huge jump in $\alpha$ for the model with the action decoder compared to the one without it. Finally, Table 2 suggests that the improvement from instance classification at the beginning is significant. However, the incremental benefit diminishes quickly; the final ToRPIDo performs only marginally better than the A3C+GCN+SAD version.
6 Conclusions
We present the first domain-independent transfer algorithm for transferring deep RL policies from source probabilistic planning problems (expressed in RDDL) to a target problem from the same domain.^2 Our algorithm, ToRPIDo, combines a base RL agent (A3C) with several novel components that use the RDDL model: a state encoder, an action decoder, a transition transfer module, and an instance classifier. Only the action decoder needs to be relearned for a new problem; all other components can be transferred directly. This allows ToRPIDo to perform an effective transfer even before RL starts, by quickly retraining the action decoder using the given RDDL model (near zero-shot learning). Experiments show that ToRPIDo's learning curves are vastly superior to both retraining from scratch and a state-of-the-art RL transfer method. In the future, we wish to extend this work to transfer across problem sizes, and later to transfer across domains.

^2 Available at https://github.com/dairiitd/torpido
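The transfer procedure summarized above can be sketched as follows. This is an illustrative outline only: the component names and the dictionary-based agent are hypothetical stand-ins, not our actual API, and `retrain_decoder` abstracts the decoder retraining against the target RDDL model.

```python
# Hypothetical sketch of near zero-shot transfer: reuse all transferred
# components as-is, and relearn only the action decoder for the target.

def transfer(source_agent, target_rddl_model, retrain_decoder):
    """Build a target-problem agent from a trained source agent."""
    return {
        # these components transfer directly, unchanged
        "state_encoder": source_agent["state_encoder"],
        "transition_transfer": source_agent["transition_transfer"],
        "instance_classifier": source_agent["instance_classifier"],
        # only the action decoder is retrained, using the target RDDL model
        "action_decoder": retrain_decoder(target_rddl_model),
    }

source = {"state_encoder": "enc", "transition_transfer": "tt",
          "instance_classifier": "ic", "action_decoder": "old_dec"}
new_agent = transfer(source, "target.rddl", lambda m: f"decoder({m})")
print(new_agent["action_decoder"])  # prints decoder(target.rddl)
```

Since decoder retraining needs only the RDDL model and not further environment interaction, the target agent is usable before any RL fine-tuning begins.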
Acknowledgements
We thank Ankit Anand and the anonymous reviewers for their insightful comments on an earlier draft of the paper. We also thank Alan Fern, Scott Sanner, Akshay Gupta and Arindam Bhattacharya for initial discussions on the research. This work is supported by research grants from Google, a Bloomberg award, an IBM SUR award, a 1MG award, and a Visvesvaraya faculty award by the Govt. of India. We thank Microsoft for Azure sponsorships, and the IIT Delhi HPC facility for computational resources.
References

Anand et al. (2015) Ankit Anand, Aditya Grover, Mausam, and Parag Singla. ASAP-UCT: Abstraction of state-action pairs in UCT. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1509–1515, 2015.
 Anand et al. (2016) Ankit Anand, Ritesh Noothigattu, Mausam, and Parag Singla. OGA-UCT: On-the-go abstractions in UCT. In Proceedings of the Twenty-Sixth International Conference on Automated Planning and Scheduling, ICAPS 2016, London, UK, June 12-17, 2016, pages 29–37, 2016.
 Bellman (1957) Richard Bellman. A Markovian Decision Process. Indiana University Mathematics Journal, 1957.
 Clevert et al. (2015) Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). CoRR, abs/1511.07289, 2015. URL http://arxiv.org/abs/1511.07289.
 Fern et al. (2018) Alan Fern, Murugeswari Issakkimuthu, and Prasad Tadepalli. Training deep reactive policies for probabilistic planning problems. In ICAPS, 2018.

Ganin et al. (2017) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-adversarial training of neural networks. In Domain Adaptation in Computer Vision Applications, pages 189–209, 2017.
 Garnelo et al. (2016) Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards deep symbolic reinforcement learning. CoRR, abs/1609.05518, 2016. URL http://arxiv.org/abs/1609.05518.
 Goyal and Ferrara (2017) Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. CoRR, abs/1705.02801, 2017.
 Groshev et al. (2018) Edward Groshev, Aviv Tamar, Maxwell Goldstein, Siddharth Srivastava, and Pieter Abbeel. Learning generalized reactive policies using deep neural networks. In ICAPS, 2018.
 Grzes et al. (2014) Marek Grzes, Jesse Hoey, and Scott Sanner. International Probabilistic Planning Competition (IPPC) 2014. In ICAPS, 2014. URL https://cs.uwaterloo.ca/~mgrzes/IPPC_2014/.
 Guestrin et al. (2001) Carlos Guestrin, Daphne Koller, and Ronald Parr. Max-norm projections for factored MDPs. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, Washington, USA, August 4-10, 2001, pages 673–682, 2001.

Higgins et al. (2017) Irina Higgins, Arka Pal, Andrei A. Rusu, Loïc Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1480–1490, 2017.
 Kipf and Welling (2017) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
 Kolobov et al. (2012) Andrey Kolobov, Mausam, and Daniel S. Weld. A theory of goal-oriented MDPs with dead ends. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, August 14-18, 2012, pages 438–447, 2012.
 Littman (1997) Michael L. Littman. Probabilistic propositional planning: Representations and complexity. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference, AAAI 97, IAAI 97, July 27-31, 1997, Providence, Rhode Island, pages 748–754, 1997.
 Matiisen et al. (2017) Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. CoRR, abs/1707.00183, 2017. URL http://arxiv.org/abs/1707.00183.
 Mausam and Kolobov (2012) Mausam and Andrey Kolobov. Planning with Markov Decision Processes: An AI Perspective. Morgan & Claypool Publishers, 2012.
 Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
 Mnih et al. (2016) Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1928–1937, 2016.
 Parisotto et al. (2015) Emilio Parisotto, Lei Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. CoRR, abs/1511.06342, 2015. URL http://arxiv.org/abs/1511.06342.
 Puterman (1994) M.L. Puterman. Markov Decision Processes. John Wiley & Sons, Inc., 1994.
 Rajendran et al. (2017) J. Rajendran, A. S. Lakshminarayanan, M. M. Khapra, P. Parthasarathy, and B. Ravindran. Attend, adapt, and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain. In ICLR, 2017.
 Ravindran (2004) Balaraman Ravindran. An Algebraic Approach to Abstraction in Reinforcement Learning. PhD thesis, University of Massachusetts Amherst, 2004.
 Sanner (2010) Scott Sanner. Relational Dynamic Influence Diagram Language (RDDL): Language Description. 2010.
 Sutton and Barto (1998) Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. MIT Press, 1998.
 Tamar et al. (2017) Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4949–4953, 2017.
 Toyer et al. (2018) Sam Toyer, Felipe W. Trevizan, Sylvie Thiébaux, and Lexing Xie. Action schema networks: Generalised policies with deep learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, 2018.
 Williams (1992) Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
 Younes et al. (2005) Håkan L. S. Younes, Michael L. Littman, David Weissman, and John Asmuth. The first probabilistic track of the international planning competition. J. Artif. Intell. Res., 24:851–887, 2005.