Augmented Neural ODEs

04/02/2019 · Emilien Dupont, et al. · University of Oxford

We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent. To address these limitations, we introduce Augmented Neural ODEs which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs.


1 Introduction

The relationship between neural networks and differential equations has been studied in several recent works (weinan2017proposal; lu2017beyond; haber2017stable; ruthotto2018deep; chen2018neural). In particular, it has been shown that Residual Networks (he2016deep) can be interpreted as discretized ODEs. Taking the discretization step to zero gives rise to a family of models called Neural ODEs (chen2018neural). These models can be efficiently trained with backpropagation and have shown great promise on a number of tasks, including modeling continuous time data and building normalizing flows with low computational cost (chen2018neural; grathwohl2018ffjord).

In this work, we explore some of the consequences of taking this continuous limit and the restrictions this might create compared with regular neural nets. In particular, we show that there are simple classes of functions Neural ODEs (NODEs) cannot represent. While it is often possible for NODEs to approximate these functions in practice, the resulting flows are complex and lead to ODE problems that are computationally expensive to solve. To overcome these limitations, we introduce Augmented Neural ODEs (ANODEs) which are a simple extension of NODEs. ANODEs augment the space on which the ODE is solved, allowing the model to use the additional dimensions to learn more complex functions using simpler flows (see Fig. 1). In addition to being more expressive models, ANODEs significantly reduce the computational cost of both forward and backward passes of the model compared with NODEs. Our experiments also show that ANODEs generalize better, achieve lower losses with fewer parameters and are more stable to train.

Figure 1: Learned flows for a Neural ODE and an Augmented Neural ODE. The flows (shown as lines with arrows) map input points to linearly separable features for binary classification. Augmented Neural ODEs learn simpler flows that are easier for the ODE solver to compute.

2 Neural ODEs

NODEs are a family of deep neural network models that can be interpreted as a continuous equivalent of Residual Networks (ResNets). To see this, consider the transformation of a hidden state from a layer $t$ to $t+1$ in ResNets,

$$h_{t+1} = h_t + f_t(h_t),$$

where $h_t \in \mathbb{R}^d$ is the hidden state at layer $t$ and $f_t : \mathbb{R}^d \to \mathbb{R}^d$ is some differentiable function which preserves the dimension of $h_t$ (typically a CNN). The difference $h_{t+1} - h_t$ can be interpreted as a discretization of the derivative $h'(t)$ with timestep $\Delta t = 1$. Letting $\Delta t \to 0$, we see that

$$\lim_{\Delta t \to 0} \frac{h_{t+\Delta t} - h_t}{\Delta t} = \frac{\mathrm{d}h(t)}{\mathrm{d}t} = f(h(t), t),$$

so the hidden state can be parameterized by an ODE. We can then map a data point $x$ into a set of features $\phi(x)$ by solving the Initial Value Problem (IVP)

$$\frac{\mathrm{d}h(t)}{\mathrm{d}t} = f(h(t), t), \qquad h(0) = x,$$

to some time $T$. The hidden state at time $T$, i.e. $h(T)$, corresponds to the features learned by the model. The analogy with ResNets can then be made more explicit. In ResNets, we map an input $x$ to some output $y$ by a forward pass of the neural network. We then adjust the weights of the network to match $y$ with some $y_{\text{true}}$. In NODEs, we map an input $x$ to an output $y$ by solving an ODE starting from $x$. We then adjust the dynamics of the system (encoded by $f$) such that the ODE transforms $x$ to a $y$ which is close to $y_{\text{true}}$.

Figure 2: Diagram of Neural ODE architecture.

ODE flows. We also define the flow $\phi_t : \mathbb{R}^d \to \mathbb{R}^d$ associated to the vector field $f(h(t), t)$ of the ODE. The flow is defined as $\phi_t(x) = h(t)$, i.e. the hidden state at time $t$ when solving the ODE from the initial condition $h(0) = x$. The flow measures how the states of the ODE at a given time $t$ depend on the initial conditions $x$. We define the features of the ODE as $\phi(x) := \phi_T(x)$, i.e. the flow at the final time $T$ to which we solve the ODE.

NODEs for regression and classification. We can use ODEs to map input data $x \in \mathbb{R}^d$ to a set of features or representations $\phi(x) \in \mathbb{R}^d$. However, we are often interested in learning functions from $\mathbb{R}^d$ to $\mathbb{R}$, e.g. for regression or classification. To define a model from $\mathbb{R}^d$ to $\mathbb{R}$, we follow the example given in lin2018resnet for ResNets. We define the NODE $g : \mathbb{R}^d \to \mathbb{R}$ as $g(x) = \mathcal{L}(\phi(x))$, where $\mathcal{L} : \mathbb{R}^d \to \mathbb{R}$ is a linear map and $\phi : \mathbb{R}^d \to \mathbb{R}^d$ is the mapping from data to features. As shown in Fig. 2, this is a simple model architecture: an ODE layer, followed by a linear layer.
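To make this architecture concrete, the following is a minimal sketch of a NODE regression model built with the torchdiffeq library used in our experiments; the network width, integration time and solver tolerances are illustrative assumptions rather than the exact configuration given in the appendix.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # ODE solvers (see Appendix F)


class ODEFunc(nn.Module):
    """Vector field f(h(t), t): a small MLP that takes the time t as an extra input."""

    def __init__(self, dim, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden_dim),  # +1 input because t is concatenated to h(t)
            nn.ReLU(),
            nn.Linear(hidden_dim, dim),      # output must have the same dimension as the state
        )

    def forward(self, t, h):
        t_col = torch.ones_like(h[:, :1]) * t          # broadcast the scalar time to a column
        return self.net(torch.cat([h, t_col], dim=1))


class NODE(nn.Module):
    """ODE layer followed by a linear layer, as in Fig. 2."""

    def __init__(self, dim, out_dim=1, T=1.0):
        super().__init__()
        self.func = ODEFunc(dim)
        self.linear = nn.Linear(dim, out_dim)
        self.register_buffer("times", torch.tensor([0.0, T]))

    def forward(self, x):
        # Solve the IVP dh/dt = f(h(t), t), h(0) = x, and keep the state at the final time T.
        features = odeint(self.func, x, self.times, rtol=1e-3, atol=1e-3)[-1]
        return self.linear(features)


model = NODE(dim=2)
y = model(torch.randn(8, 2))  # maps a batch of 2-d inputs to scalar outputs, shape (8, 1)
```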

3 A simple example in 1d

In this section, we introduce a simple function that ODE flows cannot represent, motivating many of the examples discussed later. Let $g_{1d} : \mathbb{R} \to \mathbb{R}$ be a function such that $g_{1d}(-1) = 1$ and $g_{1d}(1) = -1$.

Proposition 1.

The flow of an ODE cannot represent $g_{1d}(x)$.

A detailed proof is given in the appendix. The intuition behind the proof is simple: the trajectories mapping $-1$ to $1$ and $1$ to $-1$ must intersect each other (see Fig. 3). However, ODE trajectories cannot cross each other, so the flow of an ODE cannot represent $g_{1d}(x)$. This simple observation is at the core of all the examples provided in this paper and forms the basis for many of the limitations of NODEs.

Experiments. We verify this behavior experimentally by training an ODE flow on the identity mapping and on $g_{1d}(x)$. The resulting flows are shown in Fig. 3. As can be seen, the model easily learns the identity mapping but cannot represent $g_{1d}(x)$. Indeed, since the trajectories cannot cross, the model maps all input points to zero to minimize the mean squared error.
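The sketch below reproduces the spirit of this 1d experiment under assumed training settings (the optimizer, learning rate and number of steps are not taken from the appendix): an ODE flow trained with mean squared error on the two points $-1 \mapsto 1$ and $1 \mapsto -1$ collapses both outputs toward zero.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc1d(nn.Module):
    """Vector field for a 1-d state; the time t is appended as a second input feature."""

    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, t, h):
        return self.net(torch.cat([h, torch.ones_like(h) * t], dim=1))


func = ODEFunc1d()
optimizer = torch.optim.Adam(func.parameters(), lr=1e-2)
x = torch.tensor([[-1.0], [1.0]])   # inputs
y = torch.tensor([[1.0], [-1.0]])   # targets: g_1d(-1) = 1, g_1d(1) = -1
times = torch.tensor([0.0, 1.0])

for step in range(500):
    phi = odeint(func, x, times, rtol=1e-3, atol=1e-3)[-1]  # flow at the final time
    loss = ((phi - y) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Since ODE trajectories cannot cross, the flow cannot swap -1 and 1; instead both
# outputs are pushed toward 0, the minimizer of the mean squared error.
print(phi.detach())
```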

ResNets vs NODEs. NODEs can be interpreted as continuous equivalents of ResNets, so it is interesting to consider why ResNets can represent $g_{1d}(x)$ but NODEs cannot. The reason is precisely that ResNets are a discretization of the ODE, allowing the trajectories to make discrete jumps and cross each other (see Fig. 3). Indeed, the error arising from taking discrete steps allows the ResNet trajectories to cross. In this sense, ResNets can be interpreted as ODE solutions with large errors, with these errors allowing them to represent more functions.

Figure 3: (Top left) Continuous trajectories mapping $-1$ to $1$ and $1$ to $-1$ (red and blue) must intersect each other, which is not possible for an ODE. (Top right) Solutions of the ODE are shown in solid lines and solutions using the Euler method (which corresponds to a ResNet) are shown in dashed lines. As can be seen, the discretization error allows the trajectories to cross. (Bottom) Resulting vector fields and trajectories from training on the identity function (left) and $g_{1d}(x)$ (right).

4 Functions Neural ODEs cannot represent

Figure 4: Diagram of $g(x)$ for $d = 2$.

We now introduce classes of functions in arbitrary dimension $d$ which NODEs cannot represent. Let $0 < r_1 < r_2 < r_3$ and let $g : \mathbb{R}^d \to \mathbb{R}$ be a function such that

$$g(x) = \begin{cases} -1 & \text{if } \|x\| \le r_1, \\ \phantom{-}1 & \text{if } r_2 \le \|x\| \le r_3, \end{cases}$$

where $\|\cdot\|$ is the Euclidean norm. An illustration of this function for $d = 2$ is shown in Fig. 4. The function maps all points inside the blue sphere to $-1$ and all points in the red annulus to $1$.
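For concreteness, the snippet below samples a toy dataset of this form by rejection sampling; the radii are placeholder values, while the sample counts (1000 inner points, 2000 outer points) follow Appendix F.2.1.

```python
import torch


def sample_g_dataset(d=2, r1=1.0, r2=2.0, r3=3.0, n_inner=1000, n_outer=2000):
    """Sample (x, g(x)) pairs: g(x) = -1 inside the sphere ||x|| <= r1 and
    g(x) = 1 in the annulus r2 <= ||x|| <= r3. The radii here are placeholders."""

    def sample_shell(n, r_min, r_max):
        # Rejection sampling: draw uniformly from a box and keep points in the shell.
        kept = []
        while sum(p.shape[0] for p in kept) < n:
            candidates = (torch.rand(4 * n, d) * 2 - 1) * r_max
            norms = candidates.norm(dim=1)
            kept.append(candidates[(norms >= r_min) & (norms <= r_max)])
        return torch.cat(kept)[:n]

    inner = sample_shell(n_inner, 0.0, r1)   # blue sphere, label -1
    outer = sample_shell(n_outer, r2, r3)    # red annulus, label +1
    x = torch.cat([inner, outer])
    y = torch.cat([-torch.ones(n_inner, 1), torch.ones(n_outer, 1)])
    return x, y


x, y = sample_g_dataset()
```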

Proposition 2.

Neural ODEs cannot represent $g(x)$.

A proof is given in the appendix. While the proof requires tools from ODE theory and topology, the intuition behind it is simple. In order for the linear layer to map the blue and red points to $-1$ and $1$ respectively, the features for the blue and red points must be linearly separable. Since the blue region is enclosed by the red region, points in the blue region must cross over the red region to become linearly separable, requiring the trajectories to intersect, which is not possible. In fact, we can make more general statements about which features Neural ODEs can learn.

Proposition 3.

The feature mapping $\phi(x)$ is a homeomorphism, so the features of Neural ODEs preserve the topology of the input space.

A proof is given in the appendix. This statement is a consequence of the flow of an ODE being a homeomorphism, i.e. a continuous bijection whose inverse is also continuous; see, e.g., (younes2010shapes). This implies that NODEs can only continuously deform the input space and cannot for example tear a connected region apart.

Discrete points and continuous regions. It is worthwhile to consider what these results mean in practice. Indeed, when optimizing NODEs we train on inputs which are sampled from the continuous regions of the annulus and the sphere (see Fig. 4). The flow could then squeeze through the gaps between sampled points making it possible for the NODE to learn a good approximation of the function. However, flows that need to stretch and squeeze the input space in such a way are likely to lead to ill-posed ODE problems that are numerically expensive to solve. In order to explore this, we run a number of experiments (the code to reproduce all experiments in this paper is available at https://github.com/EmilienDupont/augmented-neural-odes).

4.1 Experiments

We first compare the performance of ResNets and NODEs on simple regression tasks. To provide a baseline, we not only train on $g(x)$ but also on data which can be made linearly separable without altering the topology of the space (implying that Neural ODEs should be able to easily learn this function). To ensure a fair comparison, we run large hyperparameter searches for each model and repeat each experiment 20 times to ensure results are meaningful across initializations (see appendix for details). We show results for experiments with $d = 1$ and $d = 2$ in Fig. 5. For $d = 1$, the ResNet easily fits the function, while the NODE cannot approximate $g(x)$. For $d = 2$, the NODE eventually learns to approximate $g(x)$, but struggles compared to ResNets. This problem is less severe for the separable function, presumably because the flow does not need to break apart any regions to linearly separate them.

4.2 Computational Cost and Number of Function Evaluations

One of the known limitations of NODEs is that, as training progresses and the flow becomes increasingly complex, the number of steps required to solve the ODE increases (chen2018neural; grathwohl2018ffjord). As the ODE solver evaluates the function $f$ at each step, this problem is often referred to as the increasing number of function evaluations (NFE). In Fig. 6, we visualize the evolution of the feature space during training and the corresponding NFEs. The NODE initially tries to move the inner sphere out of the annulus by pushing against and stretching the barrier. Eventually, since we are mapping discrete points and not a continuous region, the flow is able to break apart the annulus and let the inner points through. However, this results in a large increase in NFEs, implying that the ODE, which must stretch the space to separate the two regions, becomes more difficult to solve, making the computation slower.
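A simple way to track this quantity, and the one assumed in the sketch below, is to count calls to the vector field inside its forward pass; the counter can then be logged and reset once per training iteration.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class CountingODEFunc(nn.Module):
    """Vector field that counts how many times the ODE solver evaluates it."""

    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.nfe = 0  # number of function evaluations since the last reset

    def forward(self, t, h):
        self.nfe += 1
        t_col = torch.ones_like(h[:, :1]) * t
        return self.net(torch.cat([h, t_col], dim=1))


func = CountingODEFunc(dim=2)
h0 = torch.randn(16, 2)

func.nfe = 0
hT = odeint(func, h0, torch.tensor([0.0, 1.0]), rtol=1e-3, atol=1e-3)[-1]
print("NFEs for this forward pass:", func.nfe)
# Logging func.nfe after every training iteration (and resetting it) produces NFE curves
# of the kind shown in Fig. 6 and Fig. 8.
```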

Figure 5: Comparison of training losses of NODEs and ResNets on (a) $g(x)$ in $d = 1$, (b) $g(x)$ in $d = 2$, and (c) the separable function. Compared to ResNets, NODEs struggle to fit $g(x)$ in both $d = 1$ and $d = 2$. The difference between ResNets and NODEs is less pronounced for the separable function.
Figure 6: Evolution of the feature space as training progresses and the corresponding number of function evaluations required to solve the ODE. As the ODE needs to break apart the annulus, the number of function evaluations increases.

5 Augmented Neural ODEs

Motivated by our theory and experiments, we introduce Augmented Neural ODEs (ANODEs), which provide a simple solution to the problems we have discussed. We augment the space on which we learn and solve the ODE from $\mathbb{R}^d$ to $\mathbb{R}^{d+p}$, allowing the ODE flow to lift points into the additional dimensions to avoid trajectories intersecting each other. Letting $a(t) \in \mathbb{R}^p$ denote a point in the augmented part of the space, we can formulate the augmented ODE problem as

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix} h(t) \\ a(t) \end{bmatrix} = f\!\left(\begin{bmatrix} h(t) \\ a(t) \end{bmatrix}, t\right), \qquad \begin{bmatrix} h(0) \\ a(0) \end{bmatrix} = \begin{bmatrix} x \\ 0 \end{bmatrix},$$

i.e. we concatenate every data point $x$ with a vector of zeros and solve the ODE on this augmented space. We hypothesize that this will also make the learned (augmented) $f$ smoother, giving rise to simpler flows that the ODE solver can compute in fewer steps. In the following sections, we verify this behavior experimentally and show both on toy and image datasets that ANODEs achieve lower losses, better generalization and lower computational cost than regular NODEs.
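A minimal sketch of this model is given below, again using torchdiffeq; for brevity the vector field ignores the explicit time dependence, and the hidden width and integration time are assumptions. The essential step is concatenating $p$ zeros to each input before solving the ODE.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class AugmentedNODE(nn.Module):
    """Solve the ODE in R^(d+p): each input x is concatenated with p zeros."""

    def __init__(self, dim, aug_dim, out_dim=1, hidden=32, T=1.0):
        super().__init__()
        state_dim = dim + aug_dim
        self.aug_dim = aug_dim
        # Vector field on the augmented space (time dependence omitted in this sketch).
        self.func = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, state_dim)
        )
        self.linear = nn.Linear(state_dim, out_dim)
        self.register_buffer("times", torch.tensor([0.0, T]))

    def forward(self, x):
        zeros = torch.zeros(x.shape[0], self.aug_dim, device=x.device, dtype=x.dtype)
        h0 = torch.cat([x, zeros], dim=1)  # [x, 0] in R^(d+p)
        hT = odeint(lambda t, h: self.func(h), h0, self.times, rtol=1e-3, atol=1e-3)[-1]
        return self.linear(hT)


model = AugmentedNODE(dim=2, aug_dim=5)
y = model(torch.randn(8, 2))  # shape (8, 1)
```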

5.1 Experiments

We first compare the performance of NODEs and ANODEs on toy datasets. As in previous experiments, we run large hyperparameter searches to ensure a fair comparison. As can be seen in Fig. 7, when trained on $g(x)$ in different dimensions, ANODEs are able to fit the functions NODEs cannot and learn much faster than NODEs, despite the increased dimension of the input. The corresponding flows learned by the models are also shown in Fig. 7. As can be seen, in $d = 1$, the ANODE moves into a higher dimension to linearly separate the points, resulting in a simple, nearly linear flow. Similarly, in $d = 2$, the NODE learns a complicated flow whereas the ANODE simply lifts out the inner circle to separate the data. This effect can also be visualized as the features evolve during training (see Fig. 8).

Figure 7: (Left) Loss plots for NODEs and ANODEs trained on $g(x)$ in $d = 1$ (top) and $d = 2$ (bottom). ANODEs easily approximate the functions and are consistently faster than NODEs. (Right) Flows learned by NODEs and ANODEs. ANODEs learn simple, nearly linear flows while NODEs learn complex flows that are difficult for the ODE solver to compute.

Computational cost and number of function evaluations. As ANODEs learn simpler flows, they should require fewer iterations to compute. To test this, we measure the NFEs for NODEs and ANODEs when training on $g(x)$. As can be seen in Fig. 8, the NFEs required by ANODEs hardly increase during training, while they nearly double for NODEs. We obtain similar results when training NODEs and ANODEs on image datasets (see Section 5.2).

Figure 8: (Left) Evolution of the features during training for ANODEs. The top left tile shows the feature space for a randomly initialized ANODE and the bottom right tile shows the features after training. (Right) Evolution of the NFEs during training for NODEs and ANODEs trained on $g(x)$ in $d = 2$.

Generalization. As ANODEs learn simpler flows, we also hypothesize that they generalize better to unseen data than NODEs. To test this, we first visualize to which value each point in the input space gets mapped by a NODE and an ANODE that have been optimized to approximately zero training loss. As can be seen in Fig. 9, since NODEs can only continuously deform the input space, the learned flow must squeeze the points in the inner circle through the annulus, leading to poor generalization. ANODEs, in contrast, map all points in the input space to reasonable values. As a further test, we can also create a validation set by removing random slices of the input space (e.g. removing all points whose angle lies in a fixed interval) from the training set. We train both NODEs and ANODEs on the training set and plot the evolution of the validation loss during training in Fig. 9. While there is a large generalization gap for NODEs, presumably because the flow moves through the gaps in the training set, ANODEs generalize much better and achieve near-zero validation loss.
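One way to build such a held-out slice, assumed in the sketch below, is to filter 2-d points by their polar angle; the particular interval is an arbitrary illustration, not the one used for Fig. 9.

```python
import math
import torch


def split_by_angle(x, y, angle_min=0.0, angle_max=0.6):
    """Hold out all 2-d points whose polar angle lies in [angle_min, angle_max)."""
    angles = torch.atan2(x[:, 1], x[:, 0]) % (2 * math.pi)
    held_out = (angles >= angle_min) & (angles < angle_max)
    return (x[~held_out], y[~held_out]), (x[held_out], y[held_out])


x = torch.randn(3000, 2)
labels = torch.where(x.norm(dim=1, keepdim=True) < 1.0, torch.tensor(-1.0), torch.tensor(1.0))
(train_x, train_y), (val_x, val_y) = split_by_angle(x, labels)
```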

As we have shown, experimentally we obtain lower losses, simpler flows, better generalization and ODEs requiring fewer NFEs to solve when using ANODEs. We now test this behavior on image data by training models on MNIST and CIFAR10.

Figure 9: (Left) Plots of how NODEs and ANODEs map points in the input space to different outputs (both models achieve approximately zero training loss). As can be seen, the ANODE generalizes better. (Middle) Training and validation losses for the NODE. (Right) Training and validation losses for the ANODE.

5.2 Image Experiments

We perform experiments on MNIST and CIFAR10 using convolutional architectures for $f(h(t), t)$. As the input is an image, the hidden state $h(t)$ is now in $\mathbb{R}^{c \times h \times w}$, where $c$ is the number of channels and $h$ and $w$ are the height and width respectively. In the case where $h(t) \in \mathbb{R}^d$, we augmented the space to $\mathbb{R}^{d+p}$. For images, we augment the space to $\mathbb{R}^{(c+p) \times h \times w}$, i.e. we add $p$ channels of zeros to the input image. While there are other ways to augment the space, we found that increasing the number of channels works well in practice and use this method for all experiments. Full training and architecture details can be found in the appendix.
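The augmentation step for images therefore amounts to concatenating $p$ all-zero channels to each input; a minimal sketch (with an arbitrary choice of $p$) is shown below.

```python
import torch


def augment_channels(x, p):
    """Append p channels of zeros to a batch of images of shape (N, c, h, w)."""
    n, _, height, width = x.shape
    zeros = torch.zeros(n, p, height, width, device=x.device, dtype=x.dtype)
    return torch.cat([x, zeros], dim=1)


mnist_batch = torch.randn(32, 1, 28, 28)        # c = 1 for MNIST
augmented = augment_channels(mnist_batch, p=5)  # shape (32, 6, 28, 28)
```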

Results for models trained with and without augmentation are shown in Fig. 10. As can be seen, ANODEs train faster and obtain lower losses at a smaller computational cost than NODEs. On MNIST, for example, ANODEs with 10 augmented dimensions achieve the same loss in roughly 10 times fewer iterations (for CIFAR10, ANODEs are roughly 5 times faster). Perhaps most interestingly, we can plot the NFEs against the loss to understand roughly how complex a flow (i.e. how many NFEs) is required to model a function that achieves a certain loss. For example, to compute a function which obtains a loss of 0.8 on CIFAR10, a NODE requires approximately 100 function evaluations whereas an ANODE requires only 50. Similar observations hold for MNIST, implying that ANODEs can model equally rich functions at half the computational cost of NODEs.

Parameter efficiency. As we augment the dimension of the ODEs, we also increase the number of parameters of the models, so it may be that the improved performance of ANODEs is due to the higher number of parameters. To test this, we train a NODE and an ANODE with the same number of parameters on both MNIST (84k weights) and CIFAR10 (172k weights). We find that the augmented model achieves significantly lower losses with fewer NFEs than the NODE, suggesting that ANODEs use the parameters more efficiently than NODEs (see appendix for details and results).

NFEs and weight decay. The increased computational cost during training is a known issue with NODEs and has previously been tackled by adding weight decay (grathwohl2018ffjord). As ANODEs also achieve lower computational cost, we test models with various combinations of weight decay and augmentation (see appendix for detailed results). We find that ANODEs without weight decay significantly outperform NODEs with weight decay. However, using both weight decay and augmentation achieves the lowest NFEs at the cost of a slightly higher loss. Combining augmentation with weight decay may therefore be a fruitful avenue for further scaling NODE models.

Figure 10: Training losses, NFEs and NFEs vs loss for various augmented models on MNIST (top row) and CIFAR10 (bottom row). Note that $p$ indicates the size of the augmented dimension, so $p = 0$ corresponds to a regular NODE model.

Generalization for images. As noted in Section 5.1, ANODEs generalize better than NODEs on simple datasets, presumably because they learn simpler and smoother flows. We also test this behavior on CIFAR10 by training models with and without augmentation on the training set and calculating the loss on the test set. As can be seen in Fig. 11, both the NODE and ANODE overfit the training data, but ANODEs achieve a lower validation loss than NODEs (1.18 vs 1.34). This suggests that ANODEs also achieve better generalization on image datasets.

Figure 11: Training and validation losses on CIFAR10 for NODEs and ANODEs. Both models overfit the training data, but ANODEs achieve a lower minimum for the validation loss.

Stability. While experimenting with NODEs, we found that the NFEs could often become prohibitively large (in excess of 1000, which roughly corresponds to a 1000-layer ResNet). For example, when overfitting a NODE on MNIST, the learned flow can become so ill-posed that the ODE solver requires timesteps smaller than machine precision, resulting in underflow. Further, this complex flow often leads to unstable training and exploding losses. As shown in Fig. 12, augmentation consistently leads to stable training and fewer NFEs, even when overfitting.

Scaling. To measure how well the models scale to larger datasets, we train NODEs and ANODEs on 200 classes of ImageNet. As can be seen in Fig. 12, ANODEs scale better, achieve lower losses and train almost 10 times faster than NODEs.

Figure 12: Instabilities in the loss (left) and NFEs (middle) when fitting NODEs to MNIST. In the latter stages of training, NODEs can become unstable and the loss and NFEs become erratic. (Right) Losses on ImageNet for NODEs and ANODEs.

Augmentation for ResNets. Since ResNets can be interpreted as discretized equivalents of NODEs, it is interesting to consider how augmenting the space could affect the training of ResNets. Indeed, most ResNet architectures (he2016deep; xie2017aggregated) already employ a form of augmentation by performing convolutions with a large number of filters before applying residual blocks. This effectively corresponds to augmenting the space by the number of filters minus the number of channels in the original image. Further, behrmann2018invertible and ardizzone2018analyzing also augment the input with zeros to build invertible ResNets and transformations. Through the analogy between NODEs and ResNets, we hope some of the ideas presented in this paper could help guide future research into ResNet architectures.

6 Scope and Future Work

In this section, we describe some limitations of ANODEs, outline potential ways they may be overcome and list ideas for future work. First, while ANODEs are faster than NODEs, they are still slower than ResNets. Second, there may be different architectural choices that could have similar properties to those exhibited by ANODEs. For example, chen2018neural downsample MNIST twice with regular convolutions (and hence also increase the number of channels in a similar way to ANODEs) before applying a sequence of NODEs to train on MNIST. Finally, the augmented dimension can be seen as an extra hyperparameter to tune. While the model is robust for a range of augmented dimensions, we observed that for excessively large augmented dimensions (e.g. adding 100 channels to MNIST), the model tends to perform worse with higher losses and NFEs. We believe the ideas presented in this paper could create interesting avenues for future research, including:

Overcoming the limitations of NODEs. In order to allow trajectories to travel across each other, we augmented the space on which the ODE is solved. However, there may be other ways to achieve this, such as learning an augmentation (as in ResNets) or adding noise (similarly to wang2018enresnet).

Augmentation for Normalizing Flows. The NFEs typically become prohibitively large when training continuous normalizing flow (CNF) models (grathwohl2018ffjord). Adding augmentation to CNFs could likely mitigate this effect and we plan to explore this in future work.

Improved understanding of augmentation. It would be useful to provide more theoretical analysis for how and why augmentation improves the training of NODEs and to explore how this could guide our choice of architectures and optimizers for NODEs.

7 Conclusion

In this paper, we highlighted and analysed some of the limitations of Neural ODEs. We proved that there are classes of functions NODEs cannot represent and, in particular, that NODEs only learn features that are homeomorphic to the input space. We showed through experiments that this leads to slower learning and to complex flows which are expensive to compute. To mitigate these issues, we proposed Augmented Neural ODEs, which learn the flow from input to features in an augmented space. Our experiments show that ANODEs can model more complex functions using simpler flows while achieving lower losses, reducing computational cost, and improving stability and generalization.

Acknowledgements

We would like to thank Anthony Caterini, Daniel Paulin, Abraham Ng, Joost Van Amersfoort and Hyunjik Kim for helpful discussions and feedback. Emilien gratefully acknowledges his PhD funding from Google DeepMind. Arnaud Doucet acknowledges support of the UK Defence Science and Technology Laboratory (Dstl) and the Engineering and Physical Sciences Research Council (EPSRC) under grant EP/R013616/1. This is part of the collaboration between US DOD, UK MOD and UK EPSRC under the Multidisciplinary University Research Initiative. Yee Whye Teh’s research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 617071.

References

Appendix A Proofs

Throughout this section, we refer to the following Initial Value Problem (IVP):

$$\frac{\mathrm{d}h(t)}{\mathrm{d}t} = f(h(t), t), \qquad h(0) = x, \qquad\qquad (1)$$

where $h(t) \in \mathbb{R}^d$, and $f : \mathbb{R}^d \times [0, T] \to \mathbb{R}^d$ is continuous in $t$ and globally Lipschitz continuous in $h$, i.e. there is a constant $C$ such that

$$\|f(h_2, t) - f(h_1, t)\| \le C \|h_2 - h_1\|$$

for all $h_1, h_2 \in \mathbb{R}^d$ and $t \in [0, T]$. These conditions imply that the solutions of the IVP exist and are unique for all $t \in [0, T]$ (see e.g. Theorem 2.4.5 in ahmad2015textbook).

We define the flow $\phi_t(x)$ associated to the vector field $f(h(t), t)$ as the solution at time $t$ of the ODE starting from the initial condition $h(0) = x$. The flow measures how the solutions of the ODE depend on the initial conditions. Following the analogy between ResNets and NODEs, we define the features output by the ODE as the flow at the final time $T$ to which we solve the ODE, i.e. $\phi(x) := \phi_T(x)$. Finally, we define the NODE model $g(x) = \mathcal{L}(\phi(x))$ as the composition of the feature function $\phi$ and a linear map $\mathcal{L} : \mathbb{R}^d \to \mathbb{R}$.

For clarity and completeness, we include proofs of all statements. Whenever propositions or theorems are already known we include references to proofs.

A.1 ODE trajectories do not intersect

This result is well known and proofs can be found in standard ODE textbooks (e.g. Proposition C.6 in younes2010shapes).

Proposition.

Let $h_1(t)$ and $h_2(t)$ be two solutions of the ODE (1) with different initial conditions, i.e. $h_1(0) \ne h_2(0)$. Then, for all $t \in (0, T]$, $h_1(t) \ne h_2(t)$. Informally, this proposition states that ODE trajectories cannot intersect.

Proof. Suppose there exists some $s \in (0, T]$ where $h_1(s) = h_2(s)$. Define a new IVP with initial condition $h(s) = h_1(s) = h_2(s)$ at time $s$ and solve it backwards to time $0$. As the backwards IVP also satisfies the existence and uniqueness conditions, its solution is unique, implying that its value at $t = 0$ is unique. This contradicts the assumption that $h_1(0) \ne h_2(0)$, and so there is no $s$ such that $h_1(s) = h_2(s)$.

A.2 Gronwall’s Lemma

We will make use of Gronwall’s Lemma and state it here for completeness. We follow the statement as given in howard1998gronwall:

Theorem.

Let $U \subset \mathbb{R}^d$ be an open set. Let $f : U \times [0, T] \to \mathbb{R}^d$ be a continuous function and let $h_1, h_2 : [0, T] \to U$ satisfy the IVPs

$$\frac{\mathrm{d}h_1(t)}{\mathrm{d}t} = f(h_1(t), t), \qquad h_1(0) = x_1,$$

$$\frac{\mathrm{d}h_2(t)}{\mathrm{d}t} = f(h_2(t), t), \qquad h_2(0) = x_2.$$

Assume there is a constant $C$ such that

$$\|f(y_2, t) - f(y_1, t)\| \le C \|y_2 - y_1\|.$$

Then, for $t \in [0, T]$,

$$\|h_2(t) - h_1(t)\| \le e^{Ct} \|x_2 - x_1\|.$$

Proof. See e.g. howard1998gronwall or Theorem 3.8 in younes2010shapes.

Appendix B Proof for 1d example

Let $g_{1d} : \mathbb{R} \to \mathbb{R}$ be a function such that $g_{1d}(-1) = 1$ and $g_{1d}(1) = -1$.

Proposition 1.

The flow of an ODE cannot represent $g_{1d}(x)$.

Proof. The proof follows two steps:

  (a) Continuous trajectories mapping $-1$ to $1$ and $1$ to $-1$ must cross each other.

  (b) Trajectories of ODEs cannot cross each other.

This is a contradiction and implies the proposition. Part (b) was proved in Section A.1. All there is left to do is to prove part (a).

Suppose there exist trajectories $h_1(t)$ and $h_2(t)$ of an ODE such that

$$h_1(0) = -1, \quad h_1(T) = 1, \qquad h_2(0) = 1, \quad h_2(T) = -1.$$

As $h_1$ and $h_2$ are solutions of the IVP, they are continuous; see, e.g., coddington1955theory. Define the function $h(t) = h_1(t) - h_2(t)$. Since both $h_1$ and $h_2$ are continuous, so is $h$. Now $h(0) = -2 < 0$ and $h(T) = 2 > 0$, so by the Intermediate Value Theorem there is some $s \in (0, T)$ where $h(s) = 0$, i.e. where $h_1(s) = h_2(s)$. So $h_1$ and $h_2$ intersect.

Appendix C Proof that $\phi_t$ is a homeomorphism

Since the following theorem plays a central part in the paper, we include a proof of it here for completeness. For a more general proof, we refer the reader to Theorem C.7 in younes2010shapes.

Theorem.

For all $t \in [0, T]$, $\phi_t$ is a homeomorphism.

Proof. In order to prove that $\phi_t$ is a homeomorphism, we need to show that

  (a) $\phi_t$ is continuous,

  (b) $\phi_t$ is a bijection,

  (c) $\phi_t^{-1}$ is continuous.

Part (a). Consider two initial conditions of the ODE system, $x$ and $x + \delta$, where $\delta$ is some perturbation. By Gronwall’s Lemma, we have

$$\|h_2(t) - h_1(t)\| \le e^{Ct} \|\delta\|,$$

where $h_1$ and $h_2$ are the solutions starting from $x$ and $x + \delta$. Rewriting in terms of $\phi_t$, we have

$$\|\phi_t(x + \delta) - \phi_t(x)\| \le e^{Ct} \|\delta\|.$$

Letting $\delta \to 0$, this implies that $\phi_t(x)$ is continuous in $x$ for all $t$.

Part (b). Suppose there exist initial conditions $x_1 \ne x_2$ such that $\phi_t(x_1) = \phi_t(x_2)$ for some $t$. We define the IVP starting from $\phi_t(x_1)$ at time $t$ and solve it backwards to time $0$. The solution of the IVP is unique, so it cannot map back to both $x_1$ and $x_2$. So for each $t$, we must have $\phi_t(x_1) \ne \phi_t(x_2)$, that is, the map between $x$ and $\phi_t(x)$ is one-to-one.

Part (c). To check that the inverse $\phi_t^{-1}$ is continuous, we note that we can set the initial condition to $\phi_t(x)$ and solve the IVP backwards in time (as it satisfies the existence and uniqueness conditions). The same reasoning as in part (a) then applies.

Therefore $\phi_t$ is a continuous bijection and its inverse is continuous, i.e. it is a homeomorphism.

Corollary.

Features of Neural ODEs preserve the topology of the input space.

Proof. Since $\phi_t$ is a homeomorphism, so is $\phi = \phi_T$. Homeomorphisms preserve topological properties, so Neural ODEs can only learn features which have the same topology as the input space.

This corollary implies for example that NODEs cannot break apart or create holes in a connected region of the input space.

Appendix D Proof that there are classes of functions NODEs cannot represent

This section presents a proof of the main claim of the paper.

Let $0 < r_1 < r_2 < r_3$ and let $g : \mathbb{R}^d \to \mathbb{R}$ be a function such that

$$g(x) = \begin{cases} -1 & \text{if } \|x\| \le r_1, \\ \phantom{-}1 & \text{if } r_2 \le \|x\| \le r_3. \end{cases}$$

We denote the sphere where $\|x\| \le r_1$ by $S$ and the annulus where $r_2 \le \|x\| \le r_3$ by $A$ (see Fig. 13). For a set $X \subset \mathbb{R}^d$, we write $\phi(X) = \{\phi(x) : x \in X\}$ to denote the feature transformation of the set.

Proposition 2.

Neural ODEs cannot represent $g(x)$.

Figure 13: (a) Diagram of $g(x)$ in 2d. (b) An example of the map from input data to features necessary to represent $g(x)$ (which NODEs cannot learn).

Proof. For a NODE to map points in $S$ to $-1$ and points in $A$ to $1$, the linear map $\mathcal{L}$ must map the features $\phi(S)$ to $-1$ and the features $\phi(A)$ to $1$, which implies that $\phi(S)$ and $\phi(A)$ must be linearly separable. We now show that this is not possible if $\phi$ is a homeomorphism.

Define the disk $D = \{x : \|x\| \le r_2\}$ with boundary $\partial D = \{x : \|x\| = r_2\}$ and interior $\mathring{D} = \{x : \|x\| < r_2\}$. Now $\partial D \subset A$ and $S \subset \mathring{D}$, that is, all points in $\partial D$ should be mapped to $1$ (i.e. they are in $A$) and a subset of the points in $\mathring{D}$ should be mapped to $-1$ (i.e. they are in $S$). So if $\phi(\partial D)$ and $\phi(\mathring{D})$ are not linearly separable, then neither are $\phi(A)$ and $\phi(S)$.

The feature transformation $\phi$ is a homeomorphism, so $\phi(\partial D) = \partial \phi(D)$ and $\phi(\mathring{D}) = \mathring{\phi(D)}$, i.e. points on the boundary get mapped to points on the boundary and points in the interior to points in the interior (armstrong2013basic). So it remains to show that $\partial \phi(D)$ and $\mathring{\phi(D)}$ cannot be linearly separated. For notational convenience, we will write $D' = \phi(D)$.

Suppose all points in $\partial D'$ lie above some hyperplane, i.e. suppose there exists a linear function $\mathcal{L}$ and a constant $c$ such that $\mathcal{L}(p) > c$ for all $p \in \partial D'$. If $\phi(S)$ were linearly separable from $\phi(A)$, we would need $\mathcal{L}(p) < c$ for all $p \in \phi(S) \subset \mathring{D'}$. We now show that this is not the case. Since $D'$ is a connected and compact subset of $\mathbb{R}^d$ (since $D$ is connected and compact and $\phi$ is a homeomorphism), every point $p \in \mathring{D'}$ can be written as a convex combination of points on the boundary $\partial D'$ (to see this, consider a line passing through a point in the interior and its intersection with the boundary). So if $p \in \mathring{D'}$, then

$$p = \alpha p_1 + (1 - \alpha) p_2$$

for some $\alpha \in (0, 1)$ and $p_1, p_2 \in \partial D'$. Now,

$$\mathcal{L}(p) = \alpha \mathcal{L}(p_1) + (1 - \alpha) \mathcal{L}(p_2) > \alpha c + (1 - \alpha) c = c,$$

so all points in the interior are on the same side of the hyperplane as the points on the boundary, that is, the interior and the boundary are not linearly separable. This implies that the sets of features $\phi(S)$ and $\phi(A)$ cannot be linearly separated, and so NODEs cannot represent $g(x)$.

Figure 14: (a) Diagram of the disk $D$ and its boundary $\partial D$. The boundary $\partial D$ is equal to the inner boundary of the annulus $A$. (b) An example of how $\phi$ transforms the disk. (c) The boundary of the transformed set is above the hyperplane, which implies that all points in the interior must also be above the hyperplane.

Appendix E Modeling NODEs and $f(h(t), t)$

In this section, we describe how to choose and model $f(h(t), t)$. We first note that $f$ can be parameterized by any standard neural net architecture, including ones with activation functions that are not everywhere differentiable, such as ReLU. Existence and uniqueness of solutions to the ODE are still guaranteed and all results in this paper hold under these conditions.

The function $f(h(t), t)$ depends on both the time $t$ and the hidden state $h(t)$. Following the architecture used by chen2018neural, we model $f$ as a CNN or an MLP with weights that are not a function of time, and instead encode the time dependency by passing the concatenated tensor $[h(t), t]$ as input to the neural network. The architectures of the CNNs and MLPs we used are described in the following section.

Appendix F Experimental Details

We used the ODE solvers in the torchdiffeq library (https://github.com/rtqichen/torchdiffeq) for all experiments (chen2018neural). We used the adaptive Runge-Kutta 4(5) solver with an absolute and relative error tolerance of 1e-3. The code to reproduce all results in this paper can be found at https://github.com/EmilienDupont/augmented-neural-odes.
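Concretely, the solver call takes the following form; the vector field below is only a placeholder.

```python
import torch
from torchdiffeq import odeint

# Adaptive Runge-Kutta 4(5) ("dopri5") solver with the tolerances stated above.
func = lambda t, h: -h                # placeholder vector field
h0 = torch.randn(16, 2)
times = torch.tensor([0.0, 1.0])
hT = odeint(func, h0, times, method="dopri5", rtol=1e-3, atol=1e-3)[-1]
```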

F.1 Architecture

Throughout all our experiments we used the ReLU activation function. We also experimented with softplus but found that this generally slowed down learning.

F.1.1 Toy datasets

We parameterized $f$ by an MLP, where the input layer has one additional dimension because we append the time $t$ as an input. Choices for the hidden dimension and the number of augmented dimensions are given for each model in the following section.

F.1.2 Image datasets

We parameterized $f$ by a convolutional block with the following structure:

  • a convolution with $k$ filters,

  • a convolution with $k$ filters,

  • a convolution mapping back to the number of channels of the hidden state,

where the kernel sizes and padding are chosen so that the output of the block has the same shape as the hidden state, $k$ is specified for each architecture in the following sections, and $c$ is the number of channels ($c = 1$ for MNIST and $c = 3$ for CIFAR10 and ImageNet). We append the time $t$ as an extra channel on the feature map before each convolution.
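A sketch of such a block is given below. The kernel sizes are assumptions (they are not specified here), but the structure, three convolutions with the time appended as an extra channel before each one, follows the description above and Appendix E.

```python
import torch
import torch.nn as nn


class ConvODEFunc(nn.Module):
    """Convolutional vector field for image-shaped hidden states with c + p channels.
    The kernel sizes below are illustrative assumptions."""

    def __init__(self, channels, num_filters):
        super().__init__()
        # Each convolution takes one extra input channel: the time channel.
        self.conv1 = nn.Conv2d(channels + 1, num_filters, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(num_filters + 1, num_filters, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(num_filters + 1, channels, kernel_size=1)
        self.relu = nn.ReLU()

    def _with_time(self, t, h):
        # Append t as a constant channel with the same spatial size as h.
        t_channel = torch.ones_like(h[:, :1]) * t
        return torch.cat([h, t_channel], dim=1)

    def forward(self, t, h):
        out = self.relu(self.conv1(self._with_time(t, h)))
        out = self.relu(self.conv2(self._with_time(t, out)))
        return self.conv3(self._with_time(t, out))  # output matches the state's channel count


func = ConvODEFunc(channels=1 + 5, num_filters=64)  # e.g. MNIST with 5 augmented channels
out = func(torch.tensor(0.5), torch.randn(8, 6, 28, 28))  # shape (8, 6, 28, 28)
```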

F.2 Hyperparameters

For the toy datasets, each experiment was repeated 20 times. The resulting plots show the mean and standard deviation for these runs.

F.2.1 Hyperparameter search

To ensure a fair comparison between models, we ran a large hyperparameter search for each model and chose the hyperparameters with the lowest loss to generate the plots in the paper. We used skorch and scikit-learn (pedregosa2011scikit) to run the hyperparameter searches and ran 3 cross validations for each setting.

For $d = 1$ and $d = 2$, we trained on $g(x)$ (i.e. on the dataset of concentric spheres), with 1000 points in the inner sphere and 2000 points in the outer annulus. We used fixed values of the radii $r_1$, $r_2$ and $r_3$ and trained for 50 epochs. The space of hyperparameters we searched was:

  • Batch size: 64, 128

  • Learning rate: 1e-3, 5e-4, 1e-4

  • Hidden dimension: 16, 32

  • Number of layers (for ResNet): 2, 5, 10

  • Number of augmented dimensions (for ANODE): 1, 2, 5

The best parameters for ResNets:

  • $d = 1$: Batch size 64, learning rate 1e-3, hidden dimension 32, 5 layers

  • $d = 2$: Batch size 64, learning rate 1e-3, hidden dimension 32, 5 layers

The best parameters for Neural ODEs:

  • $d = 1$: Batch size 64, learning rate 1e-3, hidden dimension 32

  • $d = 2$: Batch size 64, learning rate 1e-3, hidden dimension 32

The best parameters for Augmented Neural ODEs:

  • $d = 1$: Batch size 64, learning rate 1e-3, hidden dimension 32, augmented dimension 5

  • $d = 2$: Batch size 64, learning rate 1e-3, hidden dimension 32, augmented dimension 5

F.2.2 Image experiments

For both MNIST and CIFAR10, we used a fixed number of filters and repeated each experiment 5 times. For models with approximately the same number of parameters, we used, for MNIST:

  • NODE: 92 filters (84,395 parameters)

  • ANODE: 64 filters, augmented dimension 5 (84,816 parameters)

and for CIFAR10:

  • NODE: 125 filters (172,358 parameters)

  • ANODE: 64 filters, augmented dimension 10 (171,799 parameters)

For the ImageNet experiments, we used the Tiny ImageNet dataset, consisting of 200 classes of images. We also repeated each experiment 5 times. We used models with approximately the same number of parameters, specifically:

  • NODE: 164 filters (366,269 parameters)

  • ANODE: 64 filters, augmented dimension 5 (365,714 parameters)

Appendix G Additional Results

In this section, we show additional results which were not included in the main paper.

G.1 Feature space evolution

We visualize the evolution of the feature space when training a NODE on $g(x)$ and on a separable function in Fig. 15. As can be seen, the NODE struggles to push the inner sphere out of the annulus for $g(x)$. On the other hand, when training on the separable dataset, the NODE easily transforms the input space.

G.2 Parameter efficiency

As noted in the main paper, when we augment the dimension of the ODEs, we also increase the number of parameters of the model. We test whether the improved performance of ANODEs is due to the higher number of parameters by training NODEs and ANODEs with the same number of parameters on MNIST and CIFAR10. As can be seen in Fig. 16, the augmented model achieves lower losses with fewer NFEs than a NODE with the same number of parameters, suggesting that ANODEs use the parameters more efficiently than NODEs.

G.3 Augmentation and weight decay

grathwohl2018ffjord train NODE models with weight decay to reduce the NFEs. As ANODEs also achieve low NFEs, we test models with various combinations of weight decay and augmentation and show results in Fig. 17. We find that ANODEs significantly outperform NODEs even when using weight decay. However, using both weight decay and augmentation achieves the lowest NFEs at the cost of a slightly higher loss.

G.4 Comparing ResNets, NODEs and ANODEs

In the main paper, we compare the training time of ResNets with NODEs and the training time of NODEs with ANODEs. In Fig. 18, we compare all three methods in a single plot.

G.5 Examples of flows

We include further plots of flows learned by NODEs and ANODEs in Fig. 19. As can be seen, ANODEs consistently learn simple, nearly linear flows, while NODEs require more complicated flows to separate the data.

Figure 15: Evolution of the feature space during training. The leftmost tile shows the feature space for a randomly initialized NODE and the rightmost tile shows the feature space after training. The top row shows a model trained on $g(x)$ and the bottom row a model trained on a separable function.
Figure 16: Losses, NFEs and NFEs vs loss for various augmented models on MNIST and CIFAR10. Note that $p$ indicates the size of the augmented dimension, so $p = 0$ corresponds to a regular NODE model.
Figure 17: Losses and NFEs for models with and without weight decay. ANODEs perform better than NODEs with weight decay but adding weight decay to ANODEs also reduces their NFEs at the cost of a slightly higher loss.
Figure 18: Losses for various models trained on $g(x)$. As can be seen, ANODEs are slightly slower than ResNets, but faster than NODEs.
Figure 19: Flows learned by NODEs and ANODEs trained on various datasets. The top row shows results for NODEs, the bottom row shows results for ANODEs. The models in the left column were trained on separable data, whereas the models in the right column were trained on $g(x)$. NODEs learn more complex flows, particularly on data which is not separable.