1 Introduction
1.1 Convolution and Graph Convolution
Recent successes of deep learning have demonstrated that the inductive bias of Convolutional Neural Networks (CNNs) makes them extremely efficient for analyzing data with an inherent grid structure, such as images or video. In particular, many applications use these models to make per-node (per-pixel) predictions over grid graphs: examples include image segmentation, optical flow prediction, anticipating the motion of objects in a scene, and facial detection/identification. Further work applies these methods to emulate physical models by discretizing the input domain. Computational Fluid Dynamics and other scientific tasks featuring PDEs or ODEs on a domain discretized by a rectangular lattice have seen recent breakthroughs from applying machine learning models, such as CNNs, to data structured in this way. These models learn a set of local filters whose size is much smaller than the size of the domain; these filters may then be applied simultaneously across the entire domain, leveraging the fact that, at a given scale, the local behavior of the neighborhood around a pixel (voxel) is likely to be similar at all grid points.
Graph Convolutional Networks (GCNs) are a natural extension of the above idea of image 'filters' to arbitrary graphs rather than n-dimensional grids, which may be more suitable in some scientific contexts. Intuitively, GCNs replace the image filtering operation of CNNs with repeated passes of: 1) aggregation of information between nodes according to some structure matrix, and 2) nonlinear processing of data at each node according to some rule (most commonly a flat neural network which takes as input the current vector at each node). We refer the reader to a recent survey by Bacciu et al. (2019) for a more complete exploration of the taxonomy of graph neural networks.

1.2 Microtubules
As an example of a dataset whose underlying graph is not a grid, we consider a coarse-grained simulation of a microtubule. Microtubules (MTs) are self-assembling nanostructures which are ubiquitous in living cells. MTs play important structural roles during cell division, cell growth, and separation of chromosomes (in eukaryotic cells) (Chakrabortty et al., 2018). Microtubules are composed of a lattice structure of two conformations (α and β) of tubulin. Free-floating tubulin monomers associate energetically into dimer subunits, which then associate head-to-tail to form long chain-like complexes called protofilaments. Protofilaments associate side-by-side in a sheet; at some critical number of protofilaments (which varies between species and cell type) the sheet wraps closed to form a repeating helical lattice with a seam. See (Pampaloni and Florin, 2008), Page 303, Figure 1. Key properties of microtubules are:
Dynamic instability: microtubules grow from one end by attracting free-floating tubulin monomers (VanBuren et al., 2005). Microtubules can spontaneously enter a “catastrophe” phase, in which they rapidly unravel, but can also “rescue” themselves from the catastrophe state and resume growth (Gardner et al., 2013; Shaw et al., 2003).
Interactions: Microtubules interact with one another: they can dynamically avoid one another during the growth phase, or collide and bundle up, or collide and enter catastrophe (Tindemans et al., 2014). The exact mechanism governing these interactions is an area of current research.
Structural strength: microtubules are very stiff, with a Young’s Modulus estimated at 1GPa for some cases (Pampaloni and Florin, 2008). This stiffness is thought to play a role in reinforcing cell walls (Kis et al., 2002).
In this work we introduce a model which learns to reproduce the dynamics of a graph signal (defined as an association of each node in the network with a vector of discrete or real-valued labels) at multiple scales of graph resolution. We apply this model framework to predict the potential energy of each tubulin monomer in a mechanochemical simulation of a microtubule.
1.3 Simulation of MTs and Prior Work
Non-continuum, non-event-based simulation of large molecules is typically done by representing some molecular subunit as a particle/rigid body, and then defining rules for how these subunits interact energetically. Molecular Dynamics (MD) simulation is an expansive area of study and a detailed overview is beyond the scope of this paper. We instead describe in general terms some basic ideas relevant to the numerical simulation detailed in Section 3.1. MD simulations proceed from initial conditions by computing the forces acting on each particle (according to the potential energy interactions and any external forces, as required), determining their instantaneous velocities and accelerations accordingly, and then moving each particle by the distance it would travel (given its velocity) over some small timestep. Many variations of this basic idea exist. The software we use for our MD simulations, LAMMPS (Plimpton, 1993), allows for many different types of update step: we use Verlet integration (updating particle positions according to the central difference approximation of acceleration (Verlet, 1967)) and Langevin dynamics (implicitly modeling the behavior of a viscous surrounding solvent (Schneider and Stoll, 1978)). We also elect to use the microcanonical ensemble (NVE), meaning that the update steps of the system maintain the total number of particles, the volume of the system, and the total energy (kinetic + potential). For more details of our simulation, see Section 3.1 and the source code, available in the Supplementary Material accompanying this paper. Independent of implementation details, a common component of many experiments in computational molecular dynamics is the prediction of the potential energy associated with a particular conformation of some molecular structure.
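As a concrete illustration, the central-difference position update described above can be sketched in a few lines. This is a minimal position-Verlet step, not the actual LAMMPS integrator; the function name and array layout are our own:

```python
import numpy as np

def verlet_step(x, x_prev, force, mass, dt):
    """One position-Verlet update: the new position follows from the
    central-difference approximation of acceleration,
    x(t + dt) = 2 x(t) - x(t - dt) + a(t) dt^2."""
    a = force / mass
    return 2.0 * x - x_prev + a * dt ** 2
```

With zero force, the step simply continues the particle's current (finite-difference) velocity, as expected of an inertial update.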
Understanding the energetic behavior of a complex molecule yields insights into its macroscale behavior: for instance, the problem of protein folding can be understood as seeking a lower-energy configuration. In this work, we apply graph convolutional networks, trained via a method we introduce, to predict these energy values for a section of microtubule.
1.4 Mathematical Background and Notation
Definitions:
For all basic terms (graph, edge, vertex, degree) we use standard definitions. We use the notation {x_i : i = 1, …, n} to represent the sequence of objects x_i indexed by the integers 1 through n.
Graph Laplacian: The graph Laplacian of a graph G with adjacency matrix A is the matrix given by L(G) = diag(A 1) - A, where 1 is an appropriately sized vector of 1s (so that diag(A 1) is the diagonal matrix of node degrees). The graph Laplacian is given by some authors with the opposite sign.
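The construction above can be sketched directly (a hypothetical helper, not code from our experiments):

```python
import numpy as np

def graph_laplacian(A):
    """Graph Laplacian L(G) = diag(A 1) - A: the diagonal degree
    matrix minus the adjacency matrix. Rows of L sum to zero."""
    degrees = A.sum(axis=1)          # A @ 1: degree of each node
    return np.diag(degrees) - A
```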
Graph Diffusion Distance (GDD): Given two graphs G1 and G2, with Laplacians L1 = L(G1) and L2 = L(G2), the Graph Diffusion Distance is given by:

D(G1, G2) = sup_{t > 0} min_{P ∈ C} || P exp(t L1) - exp(t L2) P ||_F    (1)

where C represents some set of constraints on P, and || · ||_F represents the Frobenius norm. We take the constraint to be orthogonality: P^T P = I. Note that since P is in general a rectangular matrix, it may not be the case that P P^T = I. Our recent work (Author and Author, 2019) has examined variants of this distance measure, and techniques for efficiently calculating it. Detailing these is outside the scope of this paper; all P matrices used in this work were calculated using the constrained optimization package Pymanopt (Townsend et al., 2016).
Prolongation matrix: we use the term “prolongation matrix” to refer to a matrix P which is the optimum of the minimization given in the definition of the GDD.
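For illustration, the inner objective of Equation 1 can be evaluated directly for a fixed diffusion time t and candidate P. This is a sketch only; the actual optimization over orthogonal P is performed with Pymanopt, and `gdd_objective` is a name we introduce here:

```python
import numpy as np
from scipy.linalg import expm

def gdd_objective(P, L1, L2, t):
    """Frobenius-norm mismatch || P exp(t L1) - exp(t L2) P ||_F between
    the heat kernels of two graphs, coupled by a prolongation matrix P.
    If L1 is n1 x n1 and L2 is n2 x n2, then P has shape (n2, n1)."""
    return np.linalg.norm(P @ expm(t * L1) - expm(t * L2) @ P, ord="fro")
```

When the two graphs are identical and P is the identity, the heat kernels coincide and the objective is zero.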
2 Model Architecture
The model we propose is an ensemble of GCNs at multiple scales, with optimized projection matrices performing the mapping between scales (i.e. between ensemble members). More formally, let G_1, G_2, …, G_k represent a sequence of graphs of decreasing size, and let A_1, …, A_k be their structure matrices (for some chosen method of calculating the structure matrix given the graph). In all experiments in this paper, we take A_i = L(G_i), the graph Laplacian, as previously defined.^1 In an ensemble of Graph Convolutional Networks, let W_i^(l) and b_i^(l) represent the parameters (filter matrix and bias vector) in layer l of the ith network. We follow the GCN formulation given by Kipf and Welling (2016). Assuming an input tensor H^(0) of dimensions n × d (where n is the number of nodes in the graph and d is the dimension of the label at each node), we inductively define the layerwise update rule for a graph convolutional network as:

H^(l+1) = σ_l ( A H^(l) W^(l) + b^(l) )

where σ_l is the activation function of the lth layer. When i < k, let P_i be an optimal (in either the sense of Graph Diffusion Distance, or in the sense we detail in Section 3.3) prolongation matrix from G_{i+1} to G_i. Then, for i < j, let P_{i,j} be shorthand for the matrix product P_i P_{i+1} ⋯ P_{j-1}. For example, P_{1,3} = P_1 P_2.

^1 Other GCN research uses powers of the Laplacian, the normalized Laplacian, the symmetric normalized Laplacian, etc. Comparison of these structure matrices is out of scope for this paper.
Our multiscale ensemble model is then constructed as the sum of the outputs of the member GCNs, with each coarse-scale member receiving a restricted copy of the input and having its output prolonged back to the finest scale:

GPCN(X) = GCN_1(X; A_1) + Σ_{i=2}^{k} P_{1,i} GCN_i( P_{1,i}^T X; A_i )

This model architecture is illustrated in Figure 1. When the P matrices are constant/fixed, we will refer to this model as a GPCN, for Graph Prolongation-Convolutional Network. However, we find in our experiments in Section 3.3 that validation error is further reduced when the P operators are tuned during the same gradient update step which updates the filter weights; we refer to this variant as an “adaptive” GPCN, or A-GPCN. We explain our method for choosing and optimizing the P matrices in Section 3.3.
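A rough sketch of this forward pass follows (hypothetical NumPy code, not our TensorFlow implementation; the single-layer members, the exact placement of restriction and prolongation, and all function names are our own simplifications):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(A, H, W, b):
    """One Kipf-Welling-style graph convolution: aggregate along the
    structure matrix A, mix channels with W, shift by b, then ReLU."""
    return relu(A @ H @ W + b)

def gpcn_forward(As, Ps, weights, X):
    """Hypothetical multiscale forward pass: run one (single-layer) GCN
    per scale on the restricted input, prolong each coarse output back
    to the fine scale, and sum. As: structure matrices, finest first;
    Ps[i]: prolongation from scale i+1 up to scale 0; weights[i]: (W, b)."""
    out = gcn_layer(As[0], X, *weights[0])       # finest-scale member
    for i, (A_c, P) in enumerate(zip(As[1:], Ps)):
        X_c = P.T @ X                            # restrict signal to coarse graph
        out = out + P @ gcn_layer(A_c, X_c, *weights[i + 1])
    return out
```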
3 Numerical Experiments
3.1 Dataset
In this section we detail the process for generating the simulated microtubule data used to compare our model with other GCN ensemble models. Our microtubule structure has 13 protofilaments (each 48 tubulin monomers long). As in a biological microtubule, each tubulin monomer is offset (along the axis parallel to the protofilaments) from its neighbors in adjacent protofilaments, resulting in a helical structure with a pitch of 3 tubulin units. We refer to this pitch as the “offset” in Section 3.2. Each monomer subunit (624 total) is represented as a point mass of 50 Dalton (ng). The diameter of the whole structure is 26 nm, and the length is nm. The model itself was constructed using Moltemplate (Jewett et al., 2013), a tool for constructing large regular molecules to be used in LAMMPS simulations.
For this model, we define energetic interactions for angles and associations only. No steric or dihedral interactions were used: for dihedrals, this was because the lattice structure of the tube meant any set of four molecules contributed to multiple, contradictory dihedral interactions.^2 The interaction energy of an association was calculated using the “harmonic” bond style in LAMMPS, i.e. E = K (r - r_0)^2, where r_0 is the resting length and K is the strength of that interaction. The energy of an angle was similarly calculated using the “harmonic” angle style, i.e. E = K (θ - θ_0)^2, where θ_0 is the resting angle and K is again the interaction strength. The resting lengths and angles for all energetic interactions were calculated using the resting geometry of our microtubule graph: a LAMMPS script was used to print the value of every angle interaction in the model, and these were collected and grouped based on value. Each strength parameter was varied over a fixed set of values, producing all parameter combinations. Langevin dynamics were used, but with small temperature, to ensure stability and emphasize mechanical interactions. See Table 1 and Figure 3 for details on each strength parameter.

^2 Association and angle constraints were sufficient to replicate the bending-resistance behavior of microtubules. We hope to run a similar experiment using higher-order particle interactions (which may be more biologically plausible) in future work.
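The two harmonic forms above can be written down directly. These match the documented LAMMPS “harmonic” bond and angle styles, in which the conventional 1/2 factor is folded into K; the function names are ours:

```python
def harmonic_bond_energy(r, r0, K):
    """LAMMPS 'harmonic' bond style: E = K (r - r0)^2.
    Note that LAMMPS folds the usual 1/2 factor into K."""
    return K * (r - r0) ** 2

def harmonic_angle_energy(theta, theta0, K):
    """LAMMPS 'harmonic' angle style: E = K (theta - theta0)^2,
    with theta in radians."""
    return K * (theta - theta0) ** 2
```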
GNU Parallel (Tange, 2011) was used to run a simulation for each combination of interaction parameters, using the particle dynamics simulation engine LAMMPS. In each simulation, we clamp the first two rings of tubulin monomers (nodes 1-26) in place, and apply force (in the negative direction) to the final two rings of monomers (nodes 599-624). This force starts at 0 and ramps up during the first 128000 timesteps (one step ns) to its maximum value of N. Once maximum force is reached, the simulation runs for 256000 additional timesteps, which in our experience was long enough for all particles to come to rest. See Figure 2 for an illustration (visualized with Ovito (Stukowski, 2010)) of the per-particle potential energy at the final frame of a typical simulation run. At regular intervals we save, for every particle: the position; the components of velocity; the components of force; and the potential energy of the particle. The dataset is then a concatenation of the 12 saved frames from every simulation run, comprising all combinations of input parameter values, where for each frame we have:
- the input graph signal: a matrix holding the position, velocity, and force on each particle, as well as the values of the five interaction coefficients; and
- the output graph signal: a matrix holding the potential energy calculated for each particle.
During training, after a training/validation split, we normalize the data by taking the mean and standard deviation of the input and output tensors along their first axis. Each data tensor is then reduced by the mean and divided by the standard deviation, so that all inputs to the network have zero mean and unit standard deviation. We normalize using the training data only.

Table 1: Strength parameters, grouped into association interactions and angle interactions.
3.2 Graph Coarsening
In this section we outline a procedure for determining the coarsened structure matrices to use in the hierarchy of GCN models comprising a GPCN, using our microtubule graph as an example. In this case, we have two a priori guidelines for producing the reduced-order graphs: 1) the reduced models should still be a tube, and 2) it makes sense from a biological point of view to coarsen by combining the α-β pairs into single subunits. Given these restrictions, we can explore the space of coarsened graphs and find the coarse graph which is nearest to our original graph (under the GDD).
Our microtubule model is a tube of length 48 units, with 13 units per complete “turn”, and with the seam offset by three units. We generalize this notion as follows: let a tube graph be parameterized by its offset, the number of monomers in one turn, and its length. The graph used in our simulation is thus the tube graph with offset 3, 13 monomers per turn, and length 48. We pick the medium-scale model to be the result of combining each pair of tubulin monomer units in the fine scale into one tubulin dimer unit in the medium scale. We pick the coarsest graph by searching over possible offset tube graphs: we vary the offset and the number of units per turn, and compute the optimal P and its associated distance. Figure 4 shows the distance between the medium-scale graph and various other tube graphs as these parameters are varied; the nearest tube graph is selected as the coarsest model. Note that Figure 4 has two columns for each offset value: these represent the coarse edges along the seam having weight 1 relative to the other edges (marked in the figure) or having weight 2 (unmarked). This is motivated by the fact that our initial condensing of each dimer pair condensed pairs of seam edges into single edges.
3.3 Comparison to Other GCN Ensemble Models



Table 2: Structure of each compared model: Single GCN; 2-GCN and 3-GCN ensembles; 2-level and 3-level GPCN; N-GCN with radii (1,2,4); and N-GCN with radii (1,2,4,8,16).
Model Name | NMSE (mean ± std) | NMSE (best)
Single GCN | 1.50 ± 0.09 | 1.37796
Ensemble - 2 GCNs | 1.38 ± 0.09 | 1.16949
Ensemble - 3 GCNs | 1.44 ± 0.16 | 1.24315
2-level GPCN | 1.40 ± 0.14 | 1.18357
2-level A-GPCN | 0.23 ± 0.05 | 0.14109
3-level GPCN | 1.95 ± 0.20 | 1.69807
3-level A-GPCN | 0.181 ± 0.029 | 0.13726
N-GCN, radii (1,2,4) | 1.41 ± 0.11 | 1.31325
N-GCN, radii (1,2,4,8,16) | 1.41 ± 0.08 | 1.30904
DiffPool | 4.6 ± 1.2 | 3.14470

Table 3: Mean error and uncertainty of several GCN ensemble models across ten random trials. For each trial, the random seed was set to the same value for each model. Reported values are the minimum error on the validation set during training (not the error at the final epoch). Normalized Mean Squared Error (NMSE) values are unitless.
Model Name | Mean time per batch (s)
Single GCN | 0.0312
Ensemble - 2 GCNs | 0.0471
Ensemble - 3 GCNs | 0.0588
2-level GPCN | 0.0524
2-level A-GPCN | 0.0339
3-level GPCN | 0.0324
3-level A-GPCN | 0.0371
N-GCN, radii (1,2,4) | 0.0862
N-GCN, radii (1,2,4,8,16) | 0.138
DiffPool | 0.0580

Table 4: Mean wall-clock time to perform feedforward and backpropagation for one batch of data, for various GCN ensemble models. Times were collected on a single Intel(R) Xeon(R) CPU core and an NVIDIA TITAN X GPU.
In this experiment we demonstrate the efficiency advantages of our model by comparing our approach to other ensemble Graph Convolutional Networks. Within each ensemble, each GCN model consists of several graph convolution layers, followed by several dense layers which are applied to each node separately (node-wise dense layers can alternatively be understood as a GCN layer whose structure matrix is the identity, although we implement them differently for efficiency reasons). The input to the dense layers is the node-wise concatenation of the outputs of the GCN layers. Each ensemble output is the sum of the outputs of its member GCNs. We compare our models to 1-, 2-, and 3-member GCN ensembles with the same number of filters (but all using the original fine-scale structure matrix). For GPCN models, the P matrices were calculated using Pymanopt (Townsend et al., 2016) to optimize Equation 1 subject to orthogonality constraints. The same matrices were used to initialize the (variable) P matrices of A-GPCN models.
We also compare our model to the work of Abu-El-Haija et al. (2018), who introduce the N-GCN model: an ensemble GCN in which each ensemble member uses a different power of the structure matrix (to aggregate information from neighborhoods of radius r). We include an N-GCN with radii (1,2,4) and an N-GCN with radii (1,2,4,8,16).
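One way to read the N-GCN idea in code (a hypothetical, heavily simplified ensemble member; we omit the nonlinearity and bias for brevity):

```python
import numpy as np

def ngcn_member(A, X, W, r):
    """One simplified N-GCN ensemble member: aggregate over a radius-r
    neighborhood by applying the structure matrix A r times before
    mixing channels with W. Distinct members use distinct radii r."""
    H = X
    for _ in range(r):
        H = A @ H
    return H @ W
```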
All models were trained with the same train/validation split, using ADAM with default hyperparameters, in TensorFlow (Abadi and others, 2016). Random seeds for Python, TensorFlow, Numpy, and Scipy were all initialized to the same value for each training run, to ensure that the train/validation split and the batches of drawn data are the same across all experiments. See the supplementary material for version numbers of all software packages used. Training batch size was set to 8; all GCN layers have ReLU activation; and all dense layers have sigmoidal activation, with the exception of the output layer of each network (which is linear). All models were trained for 1000 epochs of 20 batches each. The time per batch of each model is listed in Table 4. Since hardware implementations may differ, we estimate the computational cost in FLOPs of each operation in our models. The cost of a graph convolutional layer with structure matrix A (of size n × n), input data H (of size n × d), and filter matrix W (of size d × f) is estimated as n d f + nnz(A) f, where nnz(A) is the number of nonzero entries of A. This is calculated as the sum of the costs of the two matrix multiplications H W and A (H W), with the latter assumed to be implemented as sparse matrix multiplication and therefore requiring nnz(A) f operations. For implementation reasons, our GCN layers (across all models) do not use sparse multiplication; if support for arbitrary-dimensional sparse tensor products is included in TensorFlow in the future, we would expect the wall-clock times in Table 4 to decrease. The cost of a dense layer (with input data of size n × d and filter matrix of size d × f) applied to every node separately is estimated as n d f. The cost of taking the dot product between an n1 × n2 matrix and an n2 × d matrix (for example, the restriction/prolongation by P) is estimated as n1 n2 d.

We summarize the structure of each of our models in Table 2. In Figure 6 we show a comparison between each of these models, for one particular random seed (42). Error on the validation set is tracked as a function of the computational cost expended to train the model (under the cost assumptions given above). We see that all four GPCN models outperform the other types of ensemble model during early training, in the sense that they reach lower levels of error for the same amount of computational work performed. Additionally, the adaptive GPCN models outperform all other models in terms of absolute error: after the same number of training epochs (using the same random seed), they reach an order of magnitude lower error. Table 3 shows summary statistics for several runs of this experiment with varying random seeds; we see that the A-GPCN models consistently outperform all other models considered. Note that Figures 6, 7, and 8 plot the Normalized Mean Squared Error (NMSE): a unitless value which compares the output signal to the target after both are normalized by the procedure described in Section 3.1.
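Under the cost model above, the per-operation estimates can be written as simple helpers (hypothetical functions mirroring the estimates in the text):

```python
def gcn_layer_cost(n, d, f, nnz):
    """Estimated FLOPs for one graph-convolution layer: n*d*f for the
    dense product H W, plus nnz(A)*f for the (assumed sparse)
    product A (H W)."""
    return n * d * f + nnz * f

def dense_layer_cost(n, d, f):
    """Node-wise dense layer applied to all n nodes."""
    return n * d * f

def projection_cost(n1, n2, d):
    """Dot product of an (n1 x n2) restriction/prolongation matrix
    with an (n2 x d) signal."""
    return n1 * n2 * d
```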
3.4 Comparison: All-at-Once or Coarse-to-Fine Training
In this section we compare the computational cost of training the entire GPCN at once, versus training the different ‘resolutions’ (meaning the different GCNs in the hierarchy) of the network according to a more complicated training schedule. This approach is motivated by recent work in coarse-to-fine training of both flat and convolutional neural networks (Scott and Mjolsness, 2019; Zhao et al., 2019; Haber et al., 2018; Dou and Wu, 2015; Ke et al., 2017), as well as the extensive literature on Algebraic MultiGrid (AMG) methods (Vaněk et al., 1996).
AMG solvers for differential equations on a mesh (which arises as the discretization of some volume to be simulated) proceed by performing numerical “smoothing steps” at multiple resolutions of discretization. The intuition behind this approach is that modes of error are most easily smoothed at a spatial scale comparable to their wavelength: the solver shouldn’t spend many cycles resolving long-wavelength errors at the finest scale, since they can be resolved more efficiently at the coarse scale. Given a solver and a hierarchy of discretizations, the AMG literature defines several types of training procedures or “cycle” types (F-cycle, V-cycle, W-cycle). These cycles can be understood as being specified by a recursion parameter γ, which controls how many times the smoothing or training algorithm visits all of the coarser levels of the hierarchy in between smoothing steps at a given scale. For example, when γ = 1 the algorithm proceeds from fine to coarse and back again, performing one smoothing step at each resolution: a ‘V’ cycle.
We investigate the efficiency of training the 3-level GPCN and A-GPCN (as described in Section 3.3) using multigrid-like training schedules with varying values of the recursion parameter γ, as well as “coarse-to-fine” training: training the coarse model to convergence, then training the coarse and intermediate models together (until convergence), then finally training all three models at once. For coarse-to-fine training, convergence was defined to have occurred once 10 epochs had passed without improvement of the validation error.
Our experiments (see Figure 7) show that these training schedules do result in a slight increase in efficiency of the GPCN model, especially during the early phase of training. However, we also find that these schedules are seemingly not compatible with the adaptive GPCN, as demonstrated by the long periods of training with no improvement in validation loss. Notably, the coarse-to-fine schedule is an exception: the coarse-to-fine training of the A-GPCN outperformed even the GPCN with no multigrid training.
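One way to read the recursion parameter γ is as a schedule generator over the hierarchy of levels. This is a sketch under our own reading: whether a training step happens on the downward leg, the upward leg, or both is a schedule detail, and this version trains on both legs:

```python
def multigrid_cycle(level, coarsest, gamma):
    """Sequence of levels visited by a multigrid-style cycle starting at
    `level` (0 = finest): train at `level`, recurse gamma times through
    the coarser levels, then train at `level` again.
    gamma = 1 yields a V-cycle; gamma = 2 a W-like cycle."""
    if level == coarsest:
        return [level]
    visits = [level]
    for _ in range(gamma):
        visits += multigrid_cycle(level + 1, coarsest, gamma)
    visits.append(level)
    return visits
```

For a three-level hierarchy, `multigrid_cycle(0, 2, 1)` produces the familiar V-shaped visiting order fine, medium, coarse, medium, fine.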
3.5 Comparison with DiffPool
Graph coarsening procedures are in general not differentiable. DiffPool (Ying et al., 2018) aims to address this by constructing an auxiliary GCN whose output is a pooling matrix. Formally: suppose that at layer l of a GCN we have a structure matrix A^(l) and a data matrix X^(l). In addition to GCN layers as described in Section 2, Ying et al. define a pooling operation at layer l as:

S^(l) = softmax( GCN_pool( A^(l), X^(l) ) )

where GCN_pool is an auxiliary GCN with its own set of parameters, and the softmax is applied to each row of its output. The output of GCN_pool is an n_l × n_{l+1} matrix; softmaxing each row produces an affinity matrix whose rows each sum to 1, representing each fine-scale node being connected to one unit’s worth of coarse-scale nodes. The coarsened structure and data matrices for the next layer are then calculated as:

A^(l+1) = (S^(l))^T A^(l) S^(l),    X^(l+1) = (S^(l))^T X^(l).    (2)
Clearly, the additional GCN layers required to produce the pooling matrices incur additional computational cost. We compare our 3-level GPCN and A-GPCN models from the experiment in Section 3.3 to a model which has the same structure, but in which each P matrix is replaced by the appropriately-sized output of a DiffPool module, and in which the coarsened structure matrices are produced as in Equation 2.
We see that our GPCN model achieves comparable validation loss with less computational work, and that our A-GPCN model additionally achieves lower absolute validation loss.
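A minimal sketch of the pooling step in Equation 2 (hypothetical NumPy code; in DiffPool proper, `S_logits` would be produced by the auxiliary pooling GCN rather than passed in directly):

```python
import numpy as np

def row_softmax(Z):
    """Softmax applied to each row, so every row sums to 1."""
    e = np.exp(Z - Z.max(axis=1, keepdims=True))   # stabilized exponent
    return e / e.sum(axis=1, keepdims=True)

def diffpool(A, X, S_logits):
    """DiffPool-style coarsening: the row-softmax of S_logits is an
    n_fine x n_coarse affinity matrix S; return the coarsened structure
    and data matrices S^T A S and S^T X."""
    S = row_softmax(S_logits)
    return S.T @ A @ S, S.T @ X
```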
4 Future Work
4.1 Differentiable Models of Molecular Dynamics
This work demonstrates the use of feedforward neural networks to approximate the energetic potentials of a mechanochemical model of an organic molecule. Per-timestep, GCN models may not be as fast as highly-parallelized, optimized MD codes. However, neural networks are highly flexible function approximators: the GCN training approach outlined in this paper could also be used to train a GCN which predicts the energy levels per particle at the end of a simulation (once equilibrium is reached), given the boundary and initial conditions of each particle. In the case of our MT experiments, several hundred thousand steps were required to reach equilibrium. The computational work to generate a suitably large and diverse training set would then be amortized by the GCN’s ability to generalize to initial conditions, boundary conditions, and hyperparameters outside of this data set. Furthermore, this GCN reduced model would be fully differentiable, making it possible to perform gradient descent with respect to any of these inputs.

4.2 Tensor Factorization
Recent work has re-examined GCNs in the context of the extensive literature on tensor decompositions. LanczosNet (Liao et al., 2019) uses a low-rank decomposition of the structure matrix to aggregate information from large neighborhoods of the graph. The “Tensor Graph Convolutional Network” of Zhang et al. (2018) applies a different decomposition method, based on graph factorization: a product of GCNs operating on each factor graph can be as accurate as a single GCN acting on the product graph. Since recent work (Scott and Mjolsness, 2019) has shown that the GDD of a graph product is bounded by the distances between the factor graphs, it seems reasonable to combine both ideas into a model which uses a separate GPCN for each factor. One major benefit of this approach would be that a transfer-learning style approach can be used. For example, we could train a product of two GCN models on a short section of microtubule, and then reuse the weights in a model that predicts energetic potentials for a longer microtubule. This would allow us to extend our approach to MT models whose lengths are biologically relevant.

5 Conclusion
We introduce a new type of graph ensemble model which explicitly learns to approximate behavior at multiple levels of coarsening. Our model outperforms several other types of GCN, including both other ensemble models and a model which coarsens the original graph using DiffPool. We also explore the effect of various training schedules, discovering that A-GPCNs can be effectively trained using a coarse-to-fine training schedule. We present the first use of GCNs to approximate energetic potentials in a model of a microtubule.
Acknowledgements
This work was funded by the Human Frontiers Science Program [grant HFSP RGP0023/2018] and the National Institutes of Health [grant R01 HD073179], and benefited from the hospitality of the Center for Nonlinear Studies (CNLS) at Los Alamos National Laboratory.
References

- TensorFlow: a System for Large-Scale Machine Learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283.
- N-GCN: Multi-Scale Graph Convolution for Semi-supervised Node Classification. arXiv:1802.08888.
- Novel Diffusion-Derived Distance Measures for Graphs. Suppressed for Anonymity.
- A Gentle Introduction to Deep Learning for Graphs. arXiv preprint arXiv:1912.12693.
- A Computational Framework for Cortical Microtubule Dynamics in Realistically Shaped Plant Cells. PLoS Computational Biology 14 (2), pp. e1005959.
- Coarse-to-Fine Trained Multi-Scale Convolutional Neural Networks for Image Classification. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–7.
- Microtubule Catastrophe and Rescue. Current Opinion in Cell Biology 25 (1), pp. 14–22.
- Learning Across Scales: Multiscale Methods for Convolution Neural Networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
- Moltemplate: a Coarse-Grained Model Assembly Tool. Biophysical Journal 104 (2), pp. 169a.
- Multigrid Neural Architectures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6665–6673.
- Semi-Supervised Classification with Graph Convolutional Networks. arXiv preprint arXiv:1609.02907.
- Nanomechanics of Microtubules. Physical Review Letters 89 (24), pp. 248101.
- LanczosNet: Multi-Scale Deep Graph Convolutional Networks. arXiv preprint arXiv:1901.01484.
- Microtubule Architecture: Inspiration for Novel Carbon Nanotube-based Biomimetic Materials. Trends in Biotechnology 26 (6), pp. 302–310.
- Fast Parallel Algorithms for Short-Range Molecular Dynamics. Technical report, Sandia National Labs., Albuquerque, NM (United States).
- Molecular-Dynamics Study of a Three-Dimensional One-Component Model for Distortive Phase Transitions. Physical Review B 17 (3), pp. 1302.
- Multilevel Artificial Neural Network Training for Spatially Correlated Learning. SIAM Journal on Scientific Computing 41 (5), pp. S297–S320.
- Sustained Microtubule Treadmilling in Arabidopsis Cortical Arrays. Science 300 (5626), pp. 1715–1718.
- Visualization and Analysis of Atomistic Simulation Data with OVITO, the Open Visualization Tool. Modelling and Simulation in Materials Science and Engineering 18 (1).
- GNU Parallel: The Command-Line Power Tool. ;login: The USENIX Magazine 36 (1), pp. 42–47.
- Efficient Event-Driven Simulations Shed New Light on Microtubule Organization in the Plant Cortical Array. Frontiers in Physics 2, pp. 19.
- Pymanopt: a Python Toolbox for Optimization on Manifolds using Automatic Differentiation. The Journal of Machine Learning Research 17 (1), pp. 4755–4759.
- Mechanochemical Model of Microtubule Structure and Self-Assembly Kinetics. Biophysical Journal 89 (5), pp. 2911–2926.
- Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems. Computing 56 (3), pp. 179–196.
- Computer “Experiments” on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules. Physical Review 159 (1), pp. 98.
- Hierarchical Graph Representation Learning with Differentiable Pooling. In Advances in Neural Information Processing Systems, pp. 4800–4810.
- Tensor Graph Convolutional Neural Network. arXiv preprint arXiv:1803.10071.
- PGU-net+: Progressive Growing of U-net+ for Automated Cervical Nuclei Segmentation. Lecture Notes in Computer Science, pp. 51–58.