In recent years, machine learning has found widespread and successful application in quantum chemistry, condensed-matter physics, and materials science, e.g., for potential energy surfaces and accelerated molecular dynamics simulations [1–7] as well as for predictions of properties across chemical compound space [8–16]. Recent deep learning architectures [6, 17–21] yield accurate predictions of chemical properties while learning atomistic representations directly from atom types and positions. Beyond the accuracy of those networks, there has been increasing research on interpreting their predictions [22–26] as well as on extracting quantum-chemical insights from learned atomistic representations [6, 17, 27]. SchNet [6] and DTNN [17] use a sequence of interaction blocks to model quantum interactions. While a single interaction block considers only pairwise interactions, repeated application infuses increasingly complex environmental information into the atom-wise representations. Intuitively, the first layers can only model pairwise, local interactions, whereas deeper layers are able to capture more complex, longer-ranging interactions.
In this paper, we aim to quantify how much each interaction block contributes to the final representation depending on the underlying reference data, as well as how the importance of interaction blocks develops during training. We achieve this by introducing a modified SchNet architecture – sc-SchNet – which uses weighted skip-connections to assemble the final representation (see Fig. 1). Skip-connections have been shown to smooth the loss landscape and enable the training of extremely deep networks [28–30]. While SchNet already contains ResNet-style skip-connections by modeling the interactions as additive corrections, here we add skip-connections by assembling the final atom-wise representations as a linear combination of the intermediate representations obtained by each interaction block. This allows us to explore how SchNet obtains molecular representations from these interactions and how the underlying data influence the training process as well as the final representation.
The remainder of this work is structured as follows: first, we give an overview of the original SchNet architecture [6, 18] and outline the modifications that lead to sc-SchNet. Then, we present results for energy predictions across chemical compound space using the QM9 dataset, as well as for the prediction of potential energy surfaces and force fields using the MD17 dataset. Finally, we analyze the weighted skip-connections to obtain insights about how the structure of a molecule as well as the composition of a dataset influence the learned representations.
SchNet is a deep neural network for atomistic systems following the DTNN framework [17], i.e., atom-wise representations $x_i^{(l)} \in \mathbb{R}^F$ are constructed by starting from element-specific embeddings $x_i^{(0)} = a_{Z_i}$ for atom $i$ with nuclear charge $Z_i$, followed by repeated, pair-wise interaction corrections

$$x_i^{(l)} = x_i^{(l-1)} + v_i^{(l)}. \qquad (1)$$
Fig. 1 shows an overview of SchNet with $L$ interaction blocks on the left, as well as the architecture of an interaction block on the right. Because the corrections $v_i^{(l)}$ depend on all previous atom-wise representations $x_j^{(l-1)}$ of neighbors within a given cutoff radius, more and more complex spatial information is incorporated into the atom-wise representations $x_i^{(l)}$. SchNet models quantum interactions by continuous-filter convolutions [18] in order to enable smooth predictions of potential energy surfaces. After a given number $L$ of interaction corrections, the energy of a molecule is predicted by

$$E = \sum_{i=1}^{N} E_i, \qquad (2)$$
where a fully-connected neural network predicts latent atom-wise energy contributions $E_i$ from the final representations $x_i^{(L)}$. For further details, we refer to the original papers [6, 18] as well as the reference implementation SchNetPack [31].
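To make the update rule and energy readout above concrete, the following is a minimal numpy sketch of a SchNet-style forward pass. It is an illustrative toy, not the SchNetPack implementation: the filter-generating network and the atom-wise readout are replaced by random linear maps, and the nonlinearity is a plain `tanh` rather than the shifted softplus used in the actual model. All function and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
F = 8  # feature dimension (toy; the paper uses 256)

# Element-specific embeddings a_Z: one vector per nuclear charge (H, C, O here).
embeddings = {1: rng.normal(size=F), 6: rng.normal(size=F), 8: rng.normal(size=F)}

def rbf(d, centers=np.linspace(0.0, 5.0, 16), gamma=10.0):
    """Expand an interatomic distance in Gaussian radial basis functions."""
    return np.exp(-gamma * (d - centers) ** 2)

def cfconv(x, positions, W_filter):
    """Continuous-filter convolution: each neighbor j contributes its features
    scaled element-wise by a filter generated from the distance d_ij."""
    out = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(positions[i] - positions[j])
            out[i] += x[j] * (rbf(d) @ W_filter)  # toy linear filter generator
    return out

def forward_energy(charges, positions, n_interactions=3):
    # x_i^(0) = a_{Z_i}
    x = np.stack([embeddings[Z] for Z in charges])
    for _ in range(n_interactions):
        W_filter = rng.normal(size=(16, F)) * 0.1
        v = np.tanh(cfconv(x, positions, W_filter))  # interaction correction v_i^(l)
        x = x + v                                    # additive ResNet-style update, Eq. (1)
    # E = sum_i E_i: atom-wise energies from a readout network (toy: linear).
    w_out = rng.normal(size=F)
    return float(np.sum(x @ w_out))

# Water-like toy geometry: O at origin, two H atoms.
E = forward_energy([8, 1, 1],
                   np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]))
```

Note how the repeated additive updates realize the intuition from the introduction: after one pass each atom has seen only its direct neighbors, while further passes propagate increasingly non-local information.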
In this work, we extend this architecture by weighted skip-connections. More concretely, we not only pass the atom features through the interaction blocks but also construct the final representation as a weighted sum of the intermediate representations. This allows the model to access the evolution of the features throughout the entire forward pass. In order to distinguish our extended architecture from standard SchNet, we will refer to it as sc-SchNet in the following. Fig. 1 (left) visualizes the modifications applied to SchNet to arrive at sc-SchNet in orange.
First, we unroll the interaction corrections of the standard SchNet, i.e., Eq. 1, as

$$x_i^{(L)} = x_i^{(0)} + \sum_{l=1}^{L} v_i^{(l)} = \sum_{l=0}^{L} v_i^{(l)}, \qquad (3)$$

defining $v_i^{(0)} := x_i^{(0)}$ for compactness. In sc-SchNet, we instead use a weighted sum over interaction corrections

$$x_i = \sum_{l=0}^{L} w_l \, v_i^{(l)}, \qquad (4)$$

where the $w_l$ are contribution weights of the different interaction stages. Contribution weights are trainable parameters of the network. They are initialized uniformly and normalized to sum to one, before being updated during the learning process.
Note that the interaction corrections $v_i^{(l)}$ still depend only on the previous corrections through Eq. 1. Therefore, sc-SchNet effectively decouples the evolution of the features during the interaction corrections from the composition of the final representation. While this may help during training [28], it also allows us to obtain interpretable contributions of the different interaction blocks.
However, this approach has the disadvantage that the contribution weights depend on the magnitude of the interaction corrections $v_i^{(l)}$. In order to avoid this, we rescale them with respect to this magnitude by replacing the aggregation rule (4) with

$$x_i = \sum_{l=0}^{L} w_l \, \hat{v}_i^{(l)}, \qquad (5)$$

where we defined the rescaled correction $\hat{v}_i^{(l)} = v_i^{(l)} / \langle \| v^{(l)} \| \rangle$, with $\langle \| v^{(l)} \| \rangle$ denoting the average norm of $v_i^{(l)}$ in the current minibatch. In the following sections, we will only consider the normalized contribution weights $\bar{w}_l = |w_l| / \sum_{k=0}^{L} |w_k|$.
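The aggregation rule above can be sketched in a few lines of numpy. This is a stand-alone sketch of Eqs. 3–5 under our reading of the text: `v_list[0]` holds the embedding itself (the $v^{(0)} := x^{(0)}$ convention), the norm is averaged over all atoms in the minibatch, and the reported weights are normalized to sum to one. All names are illustrative, not part of the actual model code.

```python
import numpy as np

def aggregate(v_list, w):
    """sc-SchNet readout, Eq. (5): x_i = sum_l w_l * v_i^(l) / <||v^(l)||>.
    v_list[l] has shape (n_atoms, F); the norm is averaged over the minibatch."""
    x = np.zeros_like(v_list[0])
    for l, v in enumerate(v_list):
        scale = np.mean(np.linalg.norm(v, axis=-1))  # <||v^(l)||> over the batch
        x += w[l] * v / scale
    return x

L = 6                               # number of interaction blocks
w = np.full(L + 1, 1.0 / (L + 1))   # uniform initialization, summing to one
rng = np.random.default_rng(1)
v_list = [rng.normal(size=(5, 8)) for _ in range(L + 1)]  # toy corrections, 5 atoms
x = aggregate(v_list, w)

# Normalized contribution weights as reported in the analysis section.
w_bar = np.abs(w) / np.abs(w).sum()
```

In training, `w` would be a trainable parameter updated alongside the network weights; the rescaling by the minibatch norm is what makes the learned `w_bar` comparable across layers.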
Table 1: Mean absolute errors (MAE) and root mean squared errors (RMSE) of energy and force predictions, with standard errors over three repetitions.

| System | Property | Unit | Model | MAE | RMSE |
|---|---|---|---|---|---|
| Aspirin (N=1k) | energy | kcal/mol | SchNet | 0.40 ± 0.01 | 0.57 ± 0.01 |
| | | | sc-SchNet | 0.39 ± 0.01 | 0.54 ± 0.01 |
| | forces | kcal/(mol·Å) | SchNet | 0.94 ± 0.01 | 1.38 ± 0.02 |
| | | | sc-SchNet | 0.95 ± 0.01 | 1.47 ± 0.02 |
| Salicylic acid (N=1k) | energy | kcal/mol | SchNet | 0.24 ± 0.01 | 0.31 ± 0.01 |
| | | | sc-SchNet | 0.23 ± 0.01 | 0.29 ± 0.01 |
| | forces | kcal/(mol·Å) | SchNet | 0.63 ± 0.01 | 0.94 ± 0.01 |
| | | | sc-SchNet | 0.63 ± 0.01 | 0.97 ± 0.02 |
| Benzene (N=1k) | energy | kcal/mol | SchNet | 0.10 ± 0.00 | 0.12 ± 0.01 |
| | | | sc-SchNet | 0.08 ± 0.00 | 0.11 ± 0.01 |
| | forces | kcal/(mol·Å) | SchNet | 0.32 ± 0.01 | 0.46 ± 0.01 |
| | | | sc-SchNet | 0.31 ± 0.01 | 0.45 ± 0.01 |
| Ethanol (N=1k) | energy | kcal/mol | SchNet | 0.08 ± 0.00 | 0.12 ± 0.01 |
| | | | sc-SchNet | 0.07 ± 0.00 | 0.11 ± 0.01 |
| | forces | kcal/(mol·Å) | SchNet | 0.29 ± 0.01 | 0.52 ± 0.01 |
| | | | sc-SchNet | 0.30 ± 0.01 | 0.54 ± 0.01 |
| QM9 (N=110k) | energy | kcal/mol | SchNet | 0.24 ± 0.01 | 0.43 ± 0.01 |
| | | | sc-SchNet | 0.24 ± 0.01 | 0.46 ± 0.01 |
In the following, we demonstrate the performance of sc-SchNet for energy prediction and analyze how different datasets influence the representations. We evaluate our model for energy prediction across chemical compound space using the QM9 benchmark [32, 33], as well as for the prediction of potential energy surfaces and force fields on the MD17 dataset [6, 7, 18]. Our models have been implemented using SchNetPack [31], which is based on the deep learning framework PyTorch [34].
3.1 Property Prediction
We predict the internal energy for the QM9 benchmark, which contains organic molecules composed of the chemical elements H, C, N, O, and F. Each experiment is repeated on three different data splits for both SchNet and sc-SchNet. We use SGDR [35] (stochastic gradient descent with warm restarts) as the learning rate scheduler with an episode length of 50 epochs. We take 110k molecules from the entire dataset, which we divide into 105k examples for training and 5k for validation; the remaining data is left for testing. We use a mini-batch size of 100 examples and 256 features for the atom-wise representations, as well as six interaction blocks. The initial learning rate is.
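The SGDR schedule with a fixed 50-epoch episode can be written down directly; the following is a minimal pure-Python sketch of the cosine-annealing-with-warm-restarts formula (no episode-length multiplier, which matches the fixed episode length stated above; `lr_min = 0` and the example `lr_max` are our assumptions, since the paper's initial learning rate is not given here).

```python
import math

def sgdr_lr(epoch, lr_max, lr_min=0.0, t0=50):
    """SGDR learning rate: within each t0-epoch episode the rate decays
    from lr_max to lr_min along a cosine, then restarts at lr_max."""
    t = epoch % t0  # position within the current episode
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t / t0))

# Learning rate over three episodes (lr_max chosen arbitrarily for illustration).
lrs = [sgdr_lr(e, lr_max=1e-3) for e in range(150)]
```

In PyTorch, the equivalent built-in is `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts` with `T_0=50`.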
For the MD17 dataset, we study molecular dynamics trajectories of benzene, ethanol, salicylic acid, and aspirin. Here, the mini-batch size is 50 examples and the initial learning rate is . For every MD17 trajectory, we trained both networks on a combined loss of energies and forces, where the force model is obtained as the negative partial derivative of the energy model with respect to the atomic positions (see [18]).
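The force model being the negative gradient of the energy model guarantees energy-conserving forces. In the actual network this gradient is computed by automatic differentiation (e.g. `torch.autograd.grad`); the following numpy sketch illustrates the same relation $F_i = -\partial E / \partial r_i$ on a toy analytic energy (a harmonic pair potential standing in for the learned model) and verifies it against finite differences.

```python
import numpy as np

def energy(pos, k=1.0, r0=1.0):
    """Toy stand-in for the learned energy model: harmonic pair terms."""
    E = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            E += 0.5 * k * (d - r0) ** 2
    return E

def forces(pos, k=1.0, r0=1.0):
    """Analytic F_i = -dE/dr_i, the force model derived from the energy."""
    F = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            d = np.linalg.norm(rij)
            F[i] -= k * (d - r0) * rij / d
    return F

pos = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0], [0.0, 1.1, 0.0]])
F = forces(pos)

# Verify against central finite differences of the energy.
eps = 1e-6
F_num = np.zeros_like(pos)
for i in range(pos.shape[0]):
    for a in range(3):
        dp = np.zeros_like(pos)
        dp[i, a] = eps
        F_num[i, a] = -(energy(pos + dp) - energy(pos - dp)) / (2 * eps)
```

Because the forces derive from a single scalar energy, they automatically sum to zero (no net force on the molecule), a property inherited by the neural force model as well.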
Table 1 shows the mean absolute and root mean squared errors for predictions of energies and forces with the corresponding standard errors. We observe that the performance of both models is comparable. sc-SchNet achieves slightly better MAE and RMSE for energy prediction on all studied molecules from MD17. For molecular forces and for predictions on QM9, the results agree with those of conventional SchNet within the standard error.
3.2 Interaction Analysis
Having demonstrated that sc-SchNet achieves prediction performance comparable to SchNet, we go on to study the contribution weights $\bar{w}_l$ of Eq. 5, which can be interpreted as the importance of the information retrieved from the corresponding layer. Fig. 2 shows their evolution during training on the QM9 dataset. For each contribution weight, we report the average over three runs; the standard deviation is indicated by the shaded area surrounding each curve.
The most apparent aspects of Fig. 2 are the curves representing the initial embedding and the last interaction block, respectively, which contribute the most to the final representation. While the initial embedding contains information about the chemical composition, which accounts for the majority of the energy differences between molecules, the last interaction block contains the most complex representations of atomic environments. Another important aspect is how the representation is obtained during training: at first, most of the information is retrieved from the initial embedding and, to a lesser extent, the first and second interaction blocks. However, the contribution of the early intermediates decreases steeply during the first epochs. This suggests a greater focus on the lower layers in the initial stages of training, where local, pairwise features are useful for a rough fit of the energy. For the network to propagate useful information to the higher layers, approximately 100 training epochs are required before the final interaction exhibits rising contribution weights; it surpasses the atom type embeddings after about 500 epochs. While the intermediate interactions 1–5 are crucial for constructing the final correction, they contribute comparatively little to the final representation directly.
Fig. 3 presents the interaction contribution analysis for molecular dynamics trajectories from the MD17 dataset. The contribution weights show a radically different behavior during training compared to QM9. In contrast to the QM9 results, the contribution of the atom type embeddings remains approximately constant during training. While QM9 contains a large variety of molecules, the chemical composition is the same for each MD17 trajectory, reducing the influence of the embedding on the energy. However, the atom type is still important for the characterization of the interactions of an atom. We observe this in particular for benzene, which is quite stable due to its aromatic ring. Since SchNet uses rotationally invariant, atom-wise features, it inherently accounts for the symmetries of the molecule. Thus, the atom embedding for benzene corresponds directly to the position in the molecule, i.e., ring member or saturating hydrogen.
This becomes less apparent when moving to larger and more flexible molecules, for which the interactions gain importance relative to the atom types. The interaction features start off with relatively high contribution weights that decrease during training. We conjecture that this is because the relation between the geometry of the molecule and its energy is first modeled by the interactions, before sc-SchNet is able to resolve the interaction behavior of different atom types. In contrast, the large energy changes in QM9 are due to changes in composition, so that the model can directly map these to atom type embeddings.
Comparing the curves of the contribution weights, we find that in larger and more flexible molecules the higher interaction blocks are of greater importance, since more complex chemical environments need to be represented. For salicylic acid and aspirin, higher interactions contribute more than the atom type embeddings. In other words, the complex interactions within the molecule cannot be resolved into less complex interactions, as was the case for benzene and, partially, ethanol. This demonstrates that the relative contribution of the atom embeddings and interactions during the learning process strongly depends on the molecular structure.
Atomistic end-to-end learning is not only able to yield fast and accurate predictions but can moreover be employed to obtain valuable insights from the learned representations [6, 27]. In this work, we have introduced an extension to SchNet using skip-layer connections in order to obtain a more transparent and interpretable model for the latent representation of atomistic systems. sc-SchNet not only achieves predictions comparable to SchNet but also allows us to study how the obtained atom-wise representations are assembled from atom type features as well as interaction corrections of different degrees of complexity. We observe significant differences in the relative importance of the contribution weights depending on whether the model is trained across chemical compound space or on single trajectories. Moreover, the interaction contributions reflect the size and flexibility of the molecule to be predicted. In future studies, we will investigate whether the interaction coefficients encode information on the locality of different molecular properties. In addition, their relation to fundamental modes of molecular motion will be explored in the context of the MD17 molecules.
This work was supported by the Federal Ministry of Education and Research (BMBF) for the Berlin Big Data Center BBDC (01IS14013A) and the Berliner Zentrum für Maschinelles Lernen BZML (01IS18037A). MG acknowledges support provided by the European Union's Horizon 2020 research and innovation program under Marie Skłodowska-Curie grant agreement No. 792572. Correspondence to KAN and KTS.
- (1) Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett., 98(14):146401, 2007.
- (2) Michael Gastegger, Jörg Behler, and Philipp Marquetand. Machine learning molecular dynamics for the simulation of infrared spectra. Chem. Sci., 8(10):6924–6935, 2017.
- (3) Jörg Behler. First Principles Neural Network Potentials for Reactive Simulations of Large Molecular and Condensed Systems. Angew. Chem. Int. Ed., 56(42):12828–12840, October 2017.
- (4) Aldo Glielmo, Peter Sollich, and Alessandro De Vita. Accurate interatomic force fields via machine learning with covariant kernels. Phys. Rev. B, 95(21):214302, 2017.
- (5) Stefan Chmiela, Alexandre Tkatchenko, Huziel E Sauceda, Igor Poltavsky, Kristof T Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Sci. Adv., 3(5):e1603015, 2017.
- (6) Kristof T Schütt, Huziel E Sauceda, P-J Kindermans, Alexandre Tkatchenko, and K-R Müller. Schnet–a deep learning architecture for molecules and materials. J. Chem. Phys., 148(24):241722, 2018.
- (7) Stefan Chmiela, Huziel E Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nat. Commun., 9:3887, 2018.
- (8) Matthias Rupp, Alexandre Tkatchenko, Klaus-Robert Müller, and O Anatole Von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Rev. Lett., 108(5):058301, 2012.
- (9) Sandip De, Albert P Bartók, Gábor Csányi, and Michele Ceriotti. Comparing molecules and solids across structural and alchemical space. Phys. Chem. Chem. Phys., 18(20):13754–13769, 2016.
- (10) Felix A Faber, Luke Hutchison, Bing Huang, Justin Gilmer, Samuel S Schoenholz, George E Dahl, Oriol Vinyals, Steven Kearnes, Patrick F Riley, and O Anatole von Lilienfeld. Prediction errors of molecular machine learning models lower than hybrid dft error. J. Chem. Theory Comput., 13(11):5255–5264, 2017.
- (11) Evgeny V Podryabinkin and Alexander V Shapeev. Active learning of linearly parametrized interatomic potentials. Computational Materials Science, 140:171–180, 2017.
- (12) Albert P Bartók, Sandip De, Carl Poelking, Noam Bernstein, James R Kermode, Gábor Csányi, and Michele Ceriotti. Machine learning unifies the modeling of materials and molecules. Sci. Adv., 3(12):e1701816, 2017.
- (13) Daniele Dragoni, Thomas D Daff, Gábor Csányi, and Nicola Marzari. Achieving dft accuracy with a machine-learning interatomic potential: Thermomechanics and defects in bcc ferromagnetic iron. Physical Review Materials, 2(1):013808, 2018.
- (14) Felix A Faber, Anders S Christensen, Bing Huang, and O Anatole von Lilienfeld. Alchemical and structural distribution based representation for universal quantum machine learning. J. Chem. Phys., 148(24):241717, 2018.
- (15) Wiktor Pronobis, Kristof T Schütt, Alexandre Tkatchenko, and Klaus-Robert Müller. Capturing intensive and extensive dft/tddft molecular properties with machine learning. The European Physical Journal B, 91(8):178, 2018.
- (16) Wiktor Pronobis, Alexandre Tkatchenko, and Klaus-Robert Müller. Many-body descriptors for predicting molecular properties with machine learning: Analysis of pairwise and three-body interactions in molecules. J. Chem. Theory Comput., 14(6):2991–3003, 2018.
- (17) Kristof T Schütt, Farhad Arbabzadah, Stefan Chmiela, Klaus R Müller, and Alexandre Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nat. Commun., 8:13890, 2017.
- (18) Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems, pages 991–1001, 2017.
- (19) Nicholas Lubbers, Justin S Smith, and Kipton Barros. Hierarchical modeling of molecular energies using a deep neural network. J. Chem. Phys., 148(24):241715, 2018.
- (20) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. Proceedings of the 34th International Conference on Machine Learning, pages 1263–1272, 2017.
- (21) Peter Bjørn Jørgensen, Karsten Wedel Jacobsen, and Mikkel N Schmidt. Neural message passing with edge updates for predicting properties of molecules and materials. arXiv preprint arXiv:1806.03146, 2018.
- (22) David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. J Mach Learn Res, 11(Jun):1803–1831, 2010.
- (23) Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
- (24) Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
- (25) Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. Digit. Signal Process., 73:1–15, 2018.
- (26) Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: Patternnet and patternattribution. In International Conference on Learning Representations, 2018.
- (27) Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, and Klaus-Robert Müller. Quantum-chemical insights from interpretable atomistic neural networks. arXiv preprint arXiv:1806.10349, 2018.
- (28) Hao Li, Zheng Xu, Gavin Taylor, and Tom Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017.
- (29) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- (30) Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269. IEEE, 2017.
- (31) Kristof T Schütt, Pan Kessel, Michael Gastegger, Kim Nicoli, Alexandre Tkatchenko, and Klaus-Robert Müller. SchNetPack: A Deep Learning Toolbox for Atomistic Systems. arXiv preprint arXiv:1809.01072, 2018.
- (32) Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Sci. Data, 1, 2014.
- (33) Lars Ruddigkeit, Ruud van Deursen, Lorenz C. Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database gdb-17. J. Chem. Inf. Model., 52(11):2864–2875, 2012.
- (34) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
- (35) Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. International Conference on Learning Representations, 2017.