Whereas many traditional machine learning models operate on sequential or Euclidean (grid-like) data representations, GNNs allow for graph-structured inputs. GNNs have yielded breakthroughs in a variety of complex domains, including drug discovery (Lim et al., 2019; Stokes et al., 2020), fraud detection (Wang et al., 2020), computer vision (Shen et al., 2018; Sarlin et al., 2020) and particle physics (Heintz et al., 2020).
GNNs have also been successfully applied to RL, with promising results on locomotion control tasks with small state and action spaces. In this domain, GNNs have shown comparable performance to standard MLPs and superior performance in multi-task and transfer settings (Wang et al., 2018; Huang et al., 2020; Kurin et al., 2020b). A key factor in the success of GNNs here is the capacity of a single GNN to operate over arbitrary graph topologies (patterns of connectivity between nodes) without modification.
Because GNNs only model local relationships between neighbouring nodes (which in many graphs is independent of the size of the overall graph), they are expected to scale well to high-dimensional data comprising local structures, such as sparse graphs. However, so far GNNs in RL have only shown competitive performance on lower-dimensional locomotion control tasks while underperforming MLPs in higher dimensions.
This paper investigates and addresses this phenomenon, identifying the factors underlying poor GNN scaling and introducing a method to combat them. We begin with an analysis of the NerveNet architecture (Wang et al., 2018), which we choose for its GNN-based policy with promising performance and zero-shot transfer to larger agents. We show that a key limitation for training NerveNet on larger tasks is the instability of policy updates as the size of the agent increases.
It is well known that for many continuous control tasks, great care must be taken to prevent destructive policy updates. Consequently, inspired by natural gradients (Amari, 1997; Kakade, 2001), all current state-of-the-art methods in deep RL for continuous control (Schulman et al., 2015, 2017; Abdolmaleki et al., 2018) employ trust region-like constraints that limit the change in policy at each update. We find that GNNs like NerveNet exhibit a much higher tendency to violate those constraints. Furthermore, tightening the constraint is not sufficient for good performance, and also leads to slower learning.
Instead, in this paper, we investigate which structures in the GNN are responsible for this policy divergence. To do so, we apply different learning rates to different parts of the network. Surprisingly, the best performance is attained when training with a learning rate of zero in the parts of the GNN architecture that encode, decode, and propagate messages in the graph.
Based on this analysis, we derive Snowflake, a simple training technique for GNNs that enables them to scale to high-dimensional environments. Snowflake freezes the parameters of particular operations within the GNN to their initialised values, keeping them fixed throughout training while updating the non-frozen parameters as before.
Experimentally, we show that applying Snowflake to NerveNet dramatically improves asymptotic performance on larger tasks, and gives reduced sample complexity across tasks of all sizes. This enables GNN performance to match that of MLPs on the largest locomotion task available, indicating that GNN-based policies are an excellent choice for even the most challenging locomotion problems.
2.1 Reinforcement Learning
We formalise an RL problem as an MDP. An MDP is a tuple $(\mathcal{S}, \mathcal{A}, P, R, \rho_0)$. The first two elements define the state space $\mathcal{S}$ and the action space $\mathcal{A}$. At every time step $t$, the agent employs a policy $\pi(a_t \mid s_t)$ to output a distribution over actions, selects action $a_t$, and transitions from state $s_t$ to $s_{t+1}$, as specified by the transition function $P(s_{t+1} \mid s_t, a_t)$, which defines a probability distribution over states. For the transition, the agent receives a reward $r_t = R(s_t, a_t)$. The last element of an MDP, $\rho_0$, specifies the initial distribution over states, i.e., the states an agent can be in at time step zero.
Solving an MDP means finding a policy that maximises an objective, in our case the expected discounted sum of rewards $J(\pi) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $\gamma \in [0, 1)$ is a discount factor. Policy gradient methods find an optimal policy by performing gradient ascent on the objective, $\theta \leftarrow \theta + \alpha \nabla_\theta J(\pi_\theta)$, with $\theta$ parameterising the policy.
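To make the objective concrete, here is a minimal sketch (the function name is ours) of computing the discounted return for a single episode, accumulated backwards so that $G_t = r_t + \gamma G_{t+1}$:

```python
def discounted_return(rewards, gamma):
    """Compute G = sum_t gamma^t * r_t by backward accumulation."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total  # G_t = r_t + gamma * G_{t+1}
    return total
```

For example, three rewards of 1 with $\gamma = 0.5$ give $1 + 0.5 + 0.25 = 1.75$.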
2.2 Proximal Policy Optimisation
PPO (Schulman et al., 2017) is an actor-critic method that has proved effective for a variety of domains including locomotion control (Heess et al., 2017). PPO approximates the natural gradient using a first-order method, which has the effect of keeping policy updates within a "trust region". This is done through the introduction of a surrogate objective to be optimised:

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\left[\min\left(\rho_t(\theta)\hat{A}_t,\ \mathrm{clip}\left(\rho_t(\theta), 1-\epsilon, 1+\epsilon\right)\hat{A}_t\right)\right], \qquad \rho_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},$$

where $\hat{A}_t$ is an estimate of the advantage function. $\epsilon$ is a clipping hyperparameter that effectively limits how much a state-action pair can cause the overall policy to change at each update. This objective is computed over a number of optimisation epochs, each of which gives an update to the new policy. If during this process a state-action pair with a positive advantage reaches the upper clipping boundary, the objective no longer provides an incentive for the policy to be improved with respect to that data point. This similarly applies to state-action pairs with a negative advantage if the lower clipping limit is reached.
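The clipped surrogate can be sketched for a batch of samples as follows, assuming log-probabilities under the new and old policies and advantage estimates are given (the function and argument names are ours, not from the paper's code):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, epsilon=0.2):
    """Mean over samples of min(rho * A, clip(rho, 1-eps, 1+eps) * A),
    where rho = pi_new(a|s) / pi_old(a|s)."""
    rho = np.exp(logp_new - logp_old)                     # probability ratios
    unclipped = rho * advantages
    clipped = np.clip(rho, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

With a ratio of 2 and positive advantage 1, the objective is capped at $1 + \epsilon = 1.2$, illustrating how clipping removes the incentive for further change on that sample.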
2.3 Graph Neural Networks
GNNs are a class of neural architectures designed to operate over graph-structured data. We define a graph as a tuple $G = (V, E)$ comprising a set of nodes $V$ and edges $E \subseteq V \times V$. A labelled graph has corresponding feature vectors for each node and edge, which form a pair of matrices $X_V \in \mathbb{R}^{|V| \times d_V}$ and $X_E \in \mathbb{R}^{|E| \times d_E}$, where $d_V$ and $d_E$ are the node and edge feature dimensions. For GNNs we often consider directed graphs, where the order of an edge $(u, v)$ defines $u$ as the sender and $v$ as the receiver.

A GNN takes a labelled graph $G$ and outputs a second graph $G'$ with new labels. Most GNN architectures retain the same topology for $G'$ as used in $G$, in which case a GNN can be viewed as a mapping from input labels to output labels.
A common GNN framework is the MPNN (Gilmer et al., 2017), which generates this mapping using $T$ steps or 'layers' of computation. At each layer $t$ in the network, a hidden state $h^t_v$ and message $m^t_v$ is computed for every node $v$ in the graph.
An MPNN implementation calculates these through its choice of message functions and update functions, denoted $M_t$ and $U_t$ respectively. A message function computes representations from hidden states and edge features, which are then aggregated and passed into an update function to compute new hidden states:

$$m^{t+1}_v = \sum_{u \in N(v)} M_t(h^t_u, h^t_v, \mathbf{x}_{(u,v)}), \qquad h^{t+1}_v = U_t(h^t_v, m^{t+1}_v),$$

for all nodes $v \in V$, where $N(v)$ is the neighbourhood of all sender nodes connected to receiver $v$ by a directed edge.
The node input labels are used as the initial hidden states $h^0_v$. The MPNN framework assumes only node output labels are required, using each final hidden state $h^T_v$ as the output label. (Gilmer et al. (2017) also specify a readout function, which aggregates the final node representations into a global representation for the entire graph. This is not necessary for the tasks considered in this paper, which only require node-level outputs.)
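A generic message-passing layer along these lines can be sketched as follows. This is illustrative only: `message_fn` and `update_fn` are stand-ins for a particular MPNN's choices of $M_t$ and $U_t$, and sum aggregation is assumed:

```python
import numpy as np

def mpnn_layer(h, edges, message_fn, update_fn):
    """One step of message passing over node features.
    h:       (num_nodes, d) array of hidden states h_v^t
    edges:   list of directed (sender, receiver) pairs
    returns: (num_nodes, d) array of new hidden states h_v^{t+1}
    """
    aggregated = np.zeros_like(h)
    for u, v in edges:
        aggregated[v] += message_fn(h[u])   # sum-aggregate messages from senders
    return np.stack([update_fn(h[v], aggregated[v]) for v in range(h.shape[0])])
```

For instance, with an identity message function and additive update, a single edge $(0, 1)$ simply adds node 0's state to node 1's.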
NerveNet is a type of MPNN designed for locomotion control, based on the gated graph neural network architecture (Li et al., 2016). NerveNet uses the morphology (physical structure) of the agent as the basis for the GNN's input graph, with edges representing body parts and nodes representing the joints that connect them. Input labels encode positional information about body parts, which the GNN uses to compute output labels determining the force exerted at each joint. NerveNet does not use input or output edge labels, and assumes a given task can be represented purely at the node level.
NerveNet can be viewed in terms of the MPNN framework, using a message function consisting of a single MLP $M$ shared across all layers that takes as input only the hidden state of the sender node $h^t_u$. The update function is a single GRU (Cho et al., 2014) that maintains an internal state equivalent to $h^t_v$ for node $v$, and takes as input the aggregated message $m^{t+1}_v$.
In addition, NerveNet uses an encoder $F_{\mathrm{in}}$ to generate the initial hidden states from the input labels, and a decoder $F_{\mathrm{out}}$ to turn final hidden states into scalar output labels. These output labels are used as the means of normal distributions at each node, from which actions are then sampled. The standard deviation of these distributions is a separate vector of parameters learned during training.

This gives the following set of equations to describe the operation of NerveNet:

$$h^0_v = F_{\mathrm{in}}(\mathbf{x}_v), \qquad m^{t+1}_v = \sum_{u \in N(v)} M(h^t_u), \qquad h^{t+1}_v = U(h^t_v, m^{t+1}_v),$$

for layers $t = 0, \dots, T-1$, with

$$y_v = F_{\mathrm{out}}(h^T_v)$$

for all nodes $v \in V$.
NerveNet uses PPO to train the policy, with parameter updates computed via the Adam optimisation algorithm (Kingma and Ba, 2015). With respect to the MDP of a given task, at timestep $t$ NerveNet assumes that the state $s_t$ can be factored into the set of feature vectors $\{\mathbf{x}_v\}_{v \in V}$. The GNN is then propagated through its layers, and the set of output features $\{y_v\}_{v \in V}$ is returned. These are required to be scalar values representing the per-joint actions, which are concatenated to form the action vector $a_t$. The transition dynamics are applied as specified by the MDP, returning a new state to be fed into the GNN, after which this process repeats.
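The NerveNet-style forward pass described above can be sketched end-to-end as follows. This is an illustrative reimplementation, not the authors' code: the class and parameter names are ours, the encoder, decoder and message MLPs are reduced to single linear or tanh maps for brevity, and a manually written GRU cell serves as the update function:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NerveNetSketch:
    """Illustrative NerveNet-style policy: encoder F_in, shared message
    function M over sender states only, GRU update U, decoder F_out."""

    def __init__(self, d_in, d_h, rng):
        self.F_in = rng.standard_normal((d_in, d_h)) / np.sqrt(d_in)   # encoder
        self.M = rng.standard_normal((d_h, d_h)) / np.sqrt(d_h)        # message fn
        # GRU update-gate, reset-gate and candidate weights (input, recurrent)
        self.Wz, self.Uz, self.Wr, self.Ur, self.Wh, self.Uh = (
            rng.standard_normal((d_h, d_h)) / np.sqrt(d_h) for _ in range(6))
        self.F_out = rng.standard_normal((d_h, 1)) / np.sqrt(d_h)      # decoder

    def gru(self, m, h):
        z = sigmoid(m @ self.Wz + h @ self.Uz)          # update gate
        r = sigmoid(m @ self.Wr + h @ self.Ur)          # reset gate
        h_cand = np.tanh(m @ self.Wh + (r * h) @ self.Uh)
        return (1 - z) * h + z * h_cand

    def forward(self, x, edges, num_layers=3):
        h = x @ self.F_in                               # encode input labels
        for _ in range(num_layers):
            msgs = np.zeros_like(h)
            for u, v in edges:                          # sender u -> receiver v
                msgs[v] += np.tanh(h[u] @ self.M)       # aggregate messages
            h = self.gru(msgs, h)                       # GRU update per node
        return (h @ self.F_out).ravel()                 # per-joint action means
```

The returned vector would then be used as the means of the per-joint action distributions.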
3 Analysing GNN Scaling Challenges
In this section, we use NerveNet to analyse the challenges that limit the ability of GNNs to scale. We focus on NerveNet as its architecture is more closely aligned with the GNN framework than alternative approaches to structured locomotion control (see Section 4). We use mostly the same experimental setup as Wang et al. (2018), with details of any differences and our choice of hyperparameters outlined in the appendix.
We focus on environments derived from the Gym (Brockman et al., 2016) suite, using the MuJoCo (Todorov et al., 2012) physics engine. The main set of tasks we use to assess scaling is the selection of Centipede-n agents (Wang et al., 2018), chosen because of their relatively complex structure and ability to be scaled up to high-dimensional input-action spaces.
The morphology of a Centipede-n agent consists of a line of n/2 body segments, each with a left and right leg attached (see Figure 1). The graph used as the basis for the GNN corresponds to the physical structure of the agent's body. At each timestep in the environment, the MuJoCo engine sends a feature vector containing positional information regarding body parts and joints, expecting a scalar value to be returned specifying the force to be applied at each joint. The agent is rewarded for forward movement along the $x$-axis as well as a small 'survival' bonus for keeping its body within certain bounds, and given negative rewards proportional to the size of its actions and the magnitude of force it exerts on the ground.
Existing work applying GNNs to locomotion control tasks avoids training directly on larger agents, i.e., those with many nodes in the underlying graph representation. For example, Wang et al. (2018) state that for NerveNet, "training a CentipedeEight from scratch is already very difficult". Huang et al. (2020) also limit training of their SMP architecture to small agent types.
3.2 Scaling Performance
To demonstrate the poor scaling of the NerveNet architecture to larger agents, we compare its performance on a selection of Centipede-n tasks to that of an MLP-based policy. As in previous literature (e.g., Wang et al., 2018; Huang et al., 2020), our goal is not to outperform MLPs, but ideally to achieve similar performance so that the additional benefits of GNNs can be realised. These benefits include the applicability of a single GNN-based policy to agents and graphs of any size, yielding improved multi-task and transfer performance, even in the zero-shot setting, which is not feasible for MLP-based policies. In this paper we aim to improve the performance of training GNNs on larger agents so that these benefits can be gained in a wider range of settings. Consequently, the MLP-based policy is not a baseline to outperform, but an estimate of what should be achievable on a single task when training with GNNs.
Figure 2 shows that for the smaller Centipede-n agents both policies are similarly effective, but as the size of the agent increases the performance of NerveNet drops relative to the MLP. A visual inspection of the behaviour of these agents shows that for Centipede-20, NerveNet barely makes forward progress at all, whereas the MLP moves effectively. Thus when training large agents one must choose between the strong performance of an MLP or the strong generalisation of a GNN; neither method attains both.
These NerveNet results are also highly sensitive to the value of the clipping parameter $\epsilon$ in the PPO surrogate objective (see Section 2.2). Figure 3 demonstrates the effect of changing $\epsilon$ for the Centipede-20 agent, showing that small changes significantly impair performance. Either the step in policy space is too great (large $\epsilon$) or over-clipping reduces the extent to which the policy can learn from the experience sampled (small $\epsilon$). This suggests that the difficulty of limiting the policy divergence resulting from updates is a factor in the poor performance of NerveNet.
We can mitigate this sensitivity to some degree by using much lower learning rates or larger batch sizes. However, this significantly degrades sample efficiency, making training large agents with NerveNet infeasible (see the appendix for supporting results). In the next section, we investigate a feature of the GNN design that contributes to this damaging instability across policy updates.
3.3 Overfitting in NerveNet
A known deficiency of MPNN architectures is that message functions implemented as MLPs are prone to overfitting (Hamilton, 2020, p. 55). For on-policy RL algorithms like PPO, the policy being trained is used to generate a batch of training data for each update. Overfitting can occur if parameter updates for a given batch are not representative of those that would be learned given the whole policy distribution, or if updates do not generalise well to the new policy derived as a result of the update. In such cases, subsequent batch updates are liable to cause excessively large steps in policy space, harming overall performance.
We investigate the extent to which such overfitting is a factor in the policy instability exhibited by NerveNet. A standard approach to reducing overfitting is parameter regularisation, which discourages the learning of large parameter values within the model. Figure 4 demonstrates the effect of applying L2 regularisation to the parameters $\theta_M$ of NerveNet's message-function MLP. This adds an extra term $\lambda \lVert \theta_M \rVert_2^2$ to the optimisation objective to be minimised. The value of $\lambda$ determines the strength of the regularisation, and hence the trade-off between minimising the original objective and the regularisation term.
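The regularised objective can be sketched as follows, where `message_params` stands for the weight matrices of the message-function MLP (the names are ours, for illustration):

```python
import numpy as np

def regularised_loss(base_loss, message_params, lam):
    """Add lam * ||theta_M||^2 for the message-function parameters only."""
    l2 = sum(np.sum(W ** 2) for W in message_params)
    return base_loss + lam * l2
```

Only the message-function parameters enter the penalty; the rest of the network is optimised against the original objective alone.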
At the optimal value of $\lambda$ we see a clear improvement in performance, indicating the presence of overfitting in the unregularised message MLPs. However, the L2-regularised NerveNet model is still substantially inferior to using an MLP to represent the entire policy, and does not sufficiently address the problem of scaling GNNs to high-dimensional tasks.
We also investigate lowering the learning rate in parts of the GNN architecture that overfit the training batches. If parts of the network are particularly prone to damaging overfitting, training them more slowly may reduce their contribution to policy instability across updates. Results for this experiment can be seen in Figure 5.
Surprisingly, not only does lowering the learning rate in parts of the model improve performance, but the best performance is obtained when the encoder, message function and decoder each have their learning rate set to zero. Whereas the update function is implemented as a GRU, the other three functions are implemented as MLPs. This further supports our hypothesis that the MLP-based functions in the GNN are prone to overfitting.
Training with a learning rate of zero is equivalent to parameter freezing (e.g. Brock et al., 2017), where parameters are fixed to their initialised values throughout training. NerveNet can learn a policy with some of its functions frozen, as learning still takes place in other parts of the network. For instance, freezing the encoder results in an arbitrary mapping of input features to the initial hidden states. As the update function that processes this representation is still trained by the optimiser, so long as key information from the input features needed by the policy is not lost in the encoding, the update function can still learn useful representations. Similarly, a frozen decoder may still result in an effective policy if the update function can learn to produce a final hidden state in a form that the decoder maps to the desired action.
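The per-part learning-rate experiment can be sketched as a plain SGD step with one rate per named function, where a rate of zero reproduces parameter freezing (illustrative, not the paper's implementation; the part names are ours):

```python
import numpy as np

def sgd_step(params, grads, lrs):
    """One gradient step with a separate learning rate per network part.
    A rate of zero leaves that part frozen at its current values."""
    return {name: params[name] - lrs.get(name, 0.0) * grads[name]
            for name in params}
```

For example, with `lrs = {"encoder": 0.0, "update": 0.1}`, the encoder parameters are left untouched while the update function continues to train.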
Based on the effectiveness of parameter freezing within parts of the network in our analysis of NerveNet, we propose a simple technique for improving the training of GNNs via gradient-based optimisation, which we name Snowflake (a naturally-occurring frozen graph structure).
Snowflake assumes a GNN architecture made up internally of functions $f_{\theta_1}, \dots, f_{\theta_k}$, where $\theta_i$ denotes the parameters of a given function. The Snowflake algorithm selects a subset of these functions and places their parameters in the frozen set $\Phi$.
In practice, we found optimal performance when freezing the encoder, decoder and message function of the GNN, i.e. $\Phi = \{\theta_{F_{\mathrm{in}}}, \theta_{F_{\mathrm{out}}}, \theta_M\}$. If not stated otherwise, this is the architecture we refer to as Snowflake in subsequent sections.
During training, Snowflake excludes parameters in $\Phi$ from being updated by the optimiser, instead fixing them to whatever values the GNN architecture uses as an initialisation. Gradients still flow through these operations during backpropagation, but their parameters are not updated.
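The point that gradients still flow through frozen operations can be illustrated with a toy two-layer linear model in which the first ("encoder") matrix is frozen and only the second is trained: the chain rule still passes through the frozen layer, so the downstream parameters can adapt around it (a sketch under our own naming, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 4)) * 0.5   # frozen "encoder" weights
W2 = rng.standard_normal((1, 4)) * 0.1   # trainable "decoder" weights
x = rng.standard_normal((4, 16))         # batch of 16 inputs
target = np.ones((1, 16))

def loss(W2):
    return float(np.mean((W2 @ (W1 @ x) - target) ** 2))

before = loss(W2)
W1_initial = W1.copy()
for _ in range(200):
    h = W1 @ x                           # gradient flows *through* frozen W1...
    err = W2 @ h - target
    grad_W2 = 2 * err @ h.T / x.shape[1] # ...into the trainable parameters
    W2 -= 0.01 * grad_W2                 # only W2 is updated; W1 stays frozen
after = loss(W2)
```

Despite the arbitrary frozen encoding, the trainable layer reduces the loss, mirroring the argument above for NerveNet's frozen functions.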
For our experiments, we initialise the values in the GNN using orthogonal initialisation (Saxe et al., 2014). This procedure first samples a matrix of values from a Gaussian distribution, then calculates the QR decomposition of that matrix, and finally takes the orthogonal matrix $Q$ as the initialised values for a given parameter matrix. We found this to be slightly more effective for both frozen and unfrozen training than Gaussian-based initialisations such as Xavier initialisation (Glorot and Bengio, 2010).
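This procedure can be sketched as follows. The sign correction on $Q$ is a common refinement we add so that the sampled matrices are uniformly distributed over orthogonal matrices; the paper does not specify this detail:

```python
import numpy as np

def orthogonal_init(shape, rng):
    """Orthogonal initialisation: sample Gaussian, QR-decompose, keep Q."""
    a = rng.standard_normal(shape)
    q, r = np.linalg.qr(a)
    # flip column signs by sign(diag(R)) so the result is uniformly
    # distributed over orthogonal matrices rather than biased by QR
    q *= np.sign(np.diag(r))
    return q
```

The resulting matrix satisfies $Q^\top Q = I$, so the frozen layer preserves norms rather than arbitrarily scaling its inputs.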
For our message function, which has input and output dimensions of the same size, we find that performance with the frozen orthogonal initialisation is similar to that of simply using the identity function instead of an MLP. However, in the general case where the input and output dimensions of functions in the network differ (such as in the encoder and decoder, or in GNN architectures whose layers use representations of different dimensionality), this simplification is not possible and freezing is required.
4 Related Work
4.1 Structured Locomotion Control
Wang et al. (2018) introduce NerveNet, which trains a GNN based on the agent's morphology, along with a selection of scalable locomotion benchmarks. NerveNet achieves multi-task and transfer learning across morphologies, even in the zero-shot setting (i.e., without further training), which standard MLP-based policies fail to achieve. Sanchez-Gonzalez et al. (2018) use a similar GNN-based architecture for learning a model of the agent's environment, which can then be used for model-predictive control.
Huang et al. (2020) propose SMP, which focuses on multi-task training and shows strong generalisation to out-of-distribution agent morphologies using a single policy. The architecture of SMP has similarities with a GNN, but requires a tree-based description of the agent's morphology rather than a graph, and replaces size- and permutation-invariant aggregation at each node with a fixed-cardinality MLP. SMP builds on prior work by Pathak et al. (2019), who propose DGN, in which a GNN is used to learn a policy enabling multiple small agents to cooperate by combining their physical structures.
A related approach (Kurin et al., 2020b) uses an architecture based on transformers (Vaswani et al., 2017) to represent locomotion policies. Transformers can be seen as GNNs that use attention for edge-to-vertex aggregation and operate on a fully connected graph, meaning their computational complexity scales quadratically with the graph size.
For all of these existing approaches to GNN-based locomotion control, training is restricted to small agents. In the case of NerveNet and DGN, emphasis is placed on the ability to perform zero-shot transfer to larger agents, but this still incurs a significant drop in performance.
4.2 Graph-Based Reinforcement Learning
GNNs have recently gained traction in RL due to their ability to support variable-sized inputs and outputs, opening new avenues for RL applications as well as enhancing the capabilities of agents on existing benchmarks.
Lederman et al. (2020) use policy gradient methods to learn heuristics for a quantified Boolean formula solver. Kurin et al. (2020a) use DQN (Mnih et al., 2015) with graph networks (Battaglia et al., 2018) to learn the branching heuristic of a Boolean SAT solver. For the latter, variance increases with the size of the graph, making training on large graphs challenging.
Other approaches involve the construction of graphs based on a factorisation of the environmental state into objects with associated attributes (Bapst et al., 2019; Loynd et al., 2020). In multi-agent RL, researchers have used a similar approach to model the relationships between agents, as well as environmental objects (Zambaldi et al., 2019; Iqbal et al., 2020; Li et al., 2020). In this setting, increasing the number of agents can result in additional problems, such as combinatorial explosion of the action space.
Our approach can potentially be useful to the above work in improving scaling properties across a variety of domains.
4.3 Parameter Freezing
Research on parameter freezing has mostly focused on the transfer learning or "fine-tuning" use case, in which a neural network is pre-trained on a source task and only the final layers are then trained on the target task, with earlier layers left frozen (e.g. Yosinski et al., 2014; Houlsby et al., 2019). Progressively freezing layers during training has also been shown in some cases to improve wall-clock training time, with minimal reduction in performance (Brock et al., 2017).
We are not aware of work that uses the same strategy as Snowflake of freezing parameters to their initialisation values without pre-training. However, techniques such as dropout (Srivastava et al., 2014) have been used widely to improve generalisation, suggesting that good representations can still be learned without training every parameter at each step.
The design of rewards for the standard Gym tasks is similar to that outlined for Centipede-n: the agent is punished for taking large actions and rewarded for forward movement and for ‘survival’, defined as the agent’s body staying within certain spatial bounds. An episode terminates when these bounds are exceeded, or after a fixed number of timesteps.
We now present experiments that assess the performance of Snowflake when applied to NerveNet.
5.1 Experimental Setup
All training statistics are calculated as the mean across six independent runs (unless specified otherwise), with the standard error across runs indicated by the shaded areas on each graph. The average reward per episode typically has high variance, so to smooth our results we plot the mean taken over a sliding window of 30 data points. Further experimental details are outlined in the appendix.
5.2 Scaling to High-Dimensional Tasks
Figure 6 compares the scaling properties of the regular NerveNet model with Snowflake. As the size of the agent increases, Snowflake significantly outperforms NerveNet, with asymptotic performance comparable to the MLP. This makes GNNs a strong choice of policy representation, able to match MLPs even on large graphs, while retaining their ability to transfer to tasks with different state and action dimensions without retraining (Wang et al., 2018).
5.3 Policy Stability and Sample Efficiency
By reducing overfitting in parts of the GNN, Snowflake mitigates the effect of destructive policy updates seen with regular NerveNet. One consequence is that the policy can train effectively on smaller batch sizes. This is demonstrated in Figure 7, which shows the performance of NerveNet trained regularly versus with Snowflake as the batch size decreases.
A potential benefit of training with smaller batch sizes is improved sample efficiency, as fewer timesteps are taken in the environment per update. However, smaller batch sizes also lead to increased policy divergence, as with less data to train on the policy may overfit, and gradient estimates have higher variance. When the policy divergence is too great, performance begins to decrease, limiting how small the batch can be. Our results show that because of the reduction in policy divergence, Snowflake can use a smaller batch size than standard training before performance begins to degrade, and until this point the use of smaller batch sizes improves sample efficiency. This provides a wider motivation for the use of Snowflake than just scaling to larger agents: it also improves sample efficiency across agents regardless of size.
The success of Snowflake in scaling to larger agents can also be understood in this context. Without Snowflake, for NerveNet to attain strong performance on large agents an infeasibly large batch size is required, leading to poor sample efficiency. The more stable policy updates enabled by Snowflake make solving these large tasks tractable.
5.4 PPO Clipping
Snowflake's improved policy stability also reduces the amount of clipping performed by the PPO surrogate objective (see Section 2.2) across each training batch. Figure 8 shows the percentage of state-action pairs affected by clipping for regular NerveNet versus Snowflake, on the Centipede-20 agent.
When NerveNet is trained without Snowflake, a larger percentage of state-action pairs are clipped during PPO updates—a consequence of the greater policy divergence caused by overfitting. Methods like PPO clipping are necessary to keep the policy within the trust region, particularly for standard NerveNet due to its tendency towards large policy changes; however, such approaches involve a trade-off. For PPO, if too many data points reach the clipping limit during optimisation, the algorithm can only learn from a small fraction of the experience collected, reducing the effectiveness of training.
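The clipping statistic described here can be computed per batch as the fraction of state-action pairs whose probability ratio lies beyond the clipping boundary on the side where clipping binds (a sketch under our own naming, not the paper's measurement code):

```python
import numpy as np

def clipped_fraction(logp_new, logp_old, advantages, epsilon=0.2):
    """Fraction of pairs clipped by the PPO objective: ratio above 1+eps
    with positive advantage, or below 1-eps with negative advantage."""
    rho = np.exp(logp_new - logp_old)
    upper = (advantages > 0) & (rho > 1.0 + epsilon)
    lower = (advantages < 0) & (rho < 1.0 - epsilon)
    return np.mean(upper | lower)
```

A high value means most of the batch no longer contributes gradient signal, matching the failure mode discussed above.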
One of Snowflake's strengths is that, because it reduces policy divergence, it requires less severe restrictions to keep the policy within the trust region. The combination of this effect and the ability to train well on smaller batch sizes enables Snowflake's strong performance on the largest agents.
We proposed Snowflake, a method that enables GNN-based policies to be trained effectively on much larger locomotion agents than was previously possible. We no longer observe a substantial difference in performance between using GNNs to represent a locomotion policy and the standard approach of using MLPs, even on the most challenging morphologies. Based on these results, combined with the strong multi-task and transfer properties shown in previous work, we believe that in many cases GNN-based policies are the most useful representation for locomotion control problems. We have also provided insight into why poor scaling occurs for certain GNN architectures, and why parameter freezing is effective in addressing the overfitting problem we identify. We hope that our analysis and method can facilitate future work in training GNNs on even larger graphs, for a wider range of learning problems.
- Maximum a posteriori policy optimisation. CoRR abs/1806.06920.
- Neural learning in structured parameter spaces – natural Riemannian gradient. NIPS, pp. 127–133.
- Structured agents for physical construction. In ICML, pp. 464–474.
- Relational inductive biases, deep learning, and graph networks. CoRR abs/1806.01261.
- FreezeOut: accelerate training by progressively freezing layers. CoRR abs/1706.04983.
- OpenAI Gym. CoRR abs/1606.01540.
- Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pp. 1724–1734.
- Discriminative embeddings of latent variable models for structured data. In ICML, JMLR Workshop and Conference Proceedings, Vol. 48, pp. 2702–2711.
- Neural message passing for quantum chemistry. In ICML, Vol. 70, pp. 1263–1272.
- Understanding the difficulty of training deep feedforward neural networks. In AISTATS, JMLR Proceedings, Vol. 9, pp. 249–256.
- Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 14 (3).
- Emergence of locomotion behaviours in rich environments. CoRR abs/1707.02286.
- Accelerated charged particle tracking with graph neural networks on FPGAs. CoRR abs/2012.01563.
- Parameter-efficient transfer learning for NLP. In ICML, Proceedings of Machine Learning Research, Vol. 97, pp. 2790–2799.
- One policy to control them all: shared modular policies for agent-agnostic control. In ICML, pp. 4455–4464.
- AI-QMIX: attention and imagination for dynamic multi-agent reinforcement learning. CoRR abs/2006.04222.
- A natural policy gradient. In NIPS, pp. 1531–1538.
- Learning combinatorial optimization algorithms over graphs. In NIPS, pp. 6348–6358.
- Adam: a method for stochastic optimization. In ICLR.
- Can Q-learning with graph networks learn a generalizable branching heuristic for a SAT solver? In NeurIPS.
- My body is a cage: the role of morphology in graph-based incompatible control. CoRR abs/2010.01856.
- Learning heuristics for quantified Boolean formulas through reinforcement learning. In ICLR.
- Deep implicit coordination graphs for multi-agent reinforcement learning. CoRR abs/2006.11438.
- Gated graph sequence neural networks. In ICLR.
- Predicting drug-target interaction using a novel graph neural network with 3D structure-embedded graph representation. J. Chem. Inf. Model. 59 (9), pp. 3981–3988.
- Working memory graphs. In ICML, Proceedings of Machine Learning Research, Vol. 119, pp. 6404–6414.
- Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533.
- Learning to control self-assembling morphologies: a study of generalization via modularity. In NeurIPS, pp. 2292–2302.
- Graph networks as learnable physics engines for inference and control. In ICML, Proceedings of Machine Learning Research, Vol. 80, pp. 4467–4476.
- SuperGlue: learning feature matching with graph neural networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4937–4946.
- Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR.
- The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80.
- Trust region policy optimization. In ICML, JMLR Workshop and Conference Proceedings, Vol. 37, pp. 1889–1897.
- High-dimensional continuous control using generalized advantage estimation. In ICLR.
- Proximal policy optimization algorithms. CoRR abs/1707.06347.
- Person re-identification with deep similarity-guided graph neural network. In ECCV, Lecture Notes in Computer Science, Vol. 11219, pp. 508–526.
- Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (1), pp. 1929–1958.
- A deep learning approach to antibiotic discovery. Cell 180 (4), pp. 688–702.
- MuJoCo: a physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033.
- Attention is all you need. In NIPS, Vol. 30, pp. 5998–6008.
- NerveNet: learning structured policy with graph neural networks. In ICLR.
- APAN: asynchronous propagate attention network for real-time temporal graph embedding. CoRR abs/2011.11545.
- How transferable are features in deep neural networks? In NIPS, pp. 3320–3328.
- Deep reinforcement learning with relational inductive biases. In ICLR.
Appendix A
A.1 Experimental Details
Here we outline further details of our experimental approach to supplement those given in Section 5.1.
We train a policy by interleaving two processes: first, we perform repeated rollouts of the current policy in the environment to generate on-policy training data; second, we optimise the policy with respect to this data to produce a new policy, and then repeat.
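The interleaved loop above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `collect_rollouts`, `env_step` and the update function are hypothetical stand-ins for the real rollout and PPO machinery.

```python
def collect_rollouts(policy, env_step, batch_size):
    # Gather batch_size timesteps of on-policy (state, action, reward)
    # tuples by repeatedly stepping the environment with the current policy.
    batch = []
    state = 0.0
    while len(batch) < batch_size:
        action = policy(state)
        state, reward = env_step(state, action)
        batch.append((state, action, reward))
    return batch

def train(policy, env_step, policy_update, batch_size, num_iterations):
    for _ in range(num_iterations):
        # 1) roll out the current policy to generate on-policy data
        batch = collect_rollouts(policy, env_step, batch_size)
        # 2) optimise on that data to obtain the policy for the next round
        policy = policy_update(policy, batch)
    return policy
```

Here `policy_update` would be the PPO optimisation phase described below; the toy environment and policy signatures exist only to make the loop structure concrete.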
To improve wall-clock training time, for larger agents we perform rollouts in parallel over multiple CPU threads, scaling from a single thread for Centipede-6 to five threads for Centipede-20. Rollouts terminate once the sum of timesteps experienced across all threads reaches the training batch size.
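The termination rule for parallel rollouts — all threads stop once their combined timestep count reaches the batch size — can be sketched with a shared counter. This is an illustrative sketch under assumed names (`env_step`, `parallel_rollouts`), not the codebase's actual threading scheme.

```python
import threading

def parallel_rollouts(env_step, num_threads, batch_size):
    lock = threading.Lock()
    total = [0]  # timesteps experienced, summed across all threads
    trajectories = [[] for _ in range(num_threads)]

    def worker(tid):
        state = 0.0
        while True:
            with lock:
                # Stop as soon as the global budget is exhausted;
                # otherwise reserve one timestep for this thread.
                if total[0] >= batch_size:
                    return
                total[0] += 1
            state, reward = env_step(state)
            trajectories[tid].append((state, reward))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Flatten per-thread trajectories into one training batch.
    return [step for traj in trajectories for step in traj]
```

Reserving the timestep under the lock before stepping guarantees exactly `batch_size` transitions are collected in total, regardless of how the work is split across threads.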
For optimisation we shuffle the training data randomly and split the batch into eight minibatches. We perform ten optimisation epochs over these minibatches, in the manner defined by the ppo algorithm (Schulman et al., 2017) (see Section 2.2).
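The minibatch scheme above amounts to the following sketch, where `gradient_step` is a hypothetical placeholder for the actual PPO clipped-surrogate update:

```python
import random

def optimise(params, batch, gradient_step, num_minibatches=8, num_epochs=10):
    # Shuffle the batch once and split it into minibatches, as described.
    data = list(batch)
    random.shuffle(data)
    size = len(data) // num_minibatches
    minibatches = [data[i * size:(i + 1) * size]
                   for i in range(num_minibatches)]
    # Ten optimisation epochs over the minibatches.
    for _ in range(num_epochs):
        for mb in minibatches:
            params = gradient_step(params, mb)
    return params
```

With eight minibatches and ten epochs, each policy update thus performs eighty gradient steps over the collected batch.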
Each experiment is performed six times and results are averaged across runs. The exception to this is Figure 5, where results are an average of three runs.
Our starting point for selecting hyperparameters is the hyperparameter search performed by Wang et al. (2018), whose codebase ours is derived from.
To ensure that we have the best set of hyperparameters for training on large agents, we ran our own hyperparameter search on Centipede-20 for sf, as seen in Table 1.
Table 1: Hyperparameters and values searched.

| Hyperparameter | Values searched |
|---|---|
| Batch size | 512, 1024, 2048, 4096 |
| Learning rate | 1e-4, 3e-4, 1e-5 |
| Learning rate scheduler | adaptive, constant |
| Clipping | 0.02, 0.05, 0.1, 0.2 |
| gnn layers | 2, 4, 10 |
| gru hidden state size | 64, 128 |
| Learned action std | shared, separate |
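For scale, the search space implied by Table 1 can be enumerated directly. The sketch below only counts the grid's configurations; whether the search was exhaustive over this grid is not stated in the text.

```python
from itertools import product

# Search space copied from Table 1.
grid = {
    "batch_size": [512, 1024, 2048, 4096],
    "learning_rate": [1e-4, 3e-4, 1e-5],
    "lr_scheduler": ["adaptive", "constant"],
    "clipping": [0.02, 0.05, 0.1, 0.2],
    "gnn_layers": [2, 4, 10],
    "gru_hidden": [64, 128],
    "action_std": ["shared", "separate"],
}

# One dict per configuration in the full cross-product.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```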
Across the range of agents tested, we conducted a secondary search over just the batch size, learning rate and clipping value for each model. For the latter two hyperparameters, we found that the values in Table 1 did not require adjusting.
For the batch size, we used the lowest value at which training did not deteriorate. Using nn, a batch size of 2048 was required throughout; using sf, a batch size of 1024 was best for Walker, 512 was best for Centipede-6 and Centipede-8, and 2048 for all other agents.
Wang et al. (2018) provide experimental results for the nn model, which we use as a baseline for our experiments. Of the Centipede-n models, they provide direct training results for Centipede-8 (see the non-pre-trained agents in their Figure 5). Our performance results are comparable, but taken over many more timesteps. Their final mlp results differ slightly from ours at the same point (around 500 more reward), likely due to hyperparameter tuning for performance over a different time-frame.
They also provide performance metrics for trained Centipede-4 and Centipede-6 agents across the models compared (their Table 1). The results they report are significantly lower than the best performance we attain for both mlp and nn on Centipede-6. We suspect this discrepancy is due to their running for fewer timesteps, but a precise stopping criterion is not provided.
Our experiments were run on four different machines during the project, depending on availability. These machines use variants of the Intel Xeon E5 processor (models 2630, 2699 and 2680), containing between 44 and 88 CPU cores. As running the agent in the MuJoCo environment is CPU-intensive, we observed little decrease in training time when using a GPU; hence the experiments reported here are only run on CPUs.
Runtimes for our results vary significantly depending on the number of threads allocated and batch size used. Our standard runtime for Centipede-6 (single thread) for ten million timesteps is around 24 hours, scaling up to 48 hours for our standard Centipede-20 configuration (five threads). Our experiments on the default MuJoCo agents also take approximately 24 hours for a single thread.
Our anonymised source code can be found in the provided zip archive, alongside documentation for building the software and its dependencies. We plan to open-source our code upon publication.
Our code is an extension of the nn codebase: https://github.com/WilsonWangTHU/NerveNet. This repository contains the original code/schema defining the Centipede-n agents.
The other standard agents are taken from the Gym (Brockman et al., 2016): https://github.com/openai/gym. The specific hopper, walker and humanoid versions used are Hopper-v2, Walker2d-v2 and Humanoid-v2.
For our mlp results on the Gym agents, since state-of-the-art performance baselines are well established in this setting, we generate results with the OpenAI Baselines codebase (https://github.com/openai/baselines) to ensure the most rigorous and fair comparison possible.