Differentiable Molecular Simulations for Control and Learning

by Wujie Wang et al.

Molecular dynamics simulations use statistical mechanics at the atomistic scale to enable both the elucidation of fundamental mechanisms and the engineering of matter for desired tasks. The behavior of molecular systems at the microscale is typically simulated with differential equations parameterized by a Hamiltonian, or energy function. The Hamiltonian describes the state of the system and its interactions with the environment. In order to derive predictive microscopic models, one wishes to infer a molecular Hamiltonian that agrees with observed macroscopic quantities. From the perspective of engineering, one wishes to control the Hamiltonian to achieve desired simulation outcomes and structures, as in self-assembly and optical control, to then realize systems with the desired Hamiltonian in the lab. In both cases, the goal is to modify the Hamiltonian such that emergent properties of the simulated system match a given target. We demonstrate how this can be achieved using differentiable simulations where bulk target observables and simulation outcomes can be analytically differentiated with respect to Hamiltonians, opening up new routes for parameterizing Hamiltonians to infer macroscopic models and develop control protocols.



1 Introduction

At the atomic level, physical processes are governed by differential equations containing many degrees of freedom. Macroscopic phenomena in matter emerge from microscopic interactions that can be simulated through numerical integration of the equations of motion. In classical simulations, these equations of motion are derived from a Hamiltonian quantity $\mathcal{H}(\mathbf{p}, \mathbf{q})$. In quantum simulations, they are derived from a Hamiltonian operator $\hat{H}$. Examples of microscopic quantities from simulations are time series of positions, velocities, and forces on atoms and molecules. From these, a rich family of macroscopic observables can be calculated to describe the configurational and temporal correlation functions of atoms. These observables determine different properties of the simulated materials.

Classically, simulating the positions of particles with conserved energy requires integrating the Hamiltonian equations of motion:

$$\dot{q}_i = \frac{\partial \mathcal{H}}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial \mathcal{H}}{\partial q_i}, \qquad (1)$$

where $p_i$ and $q_i$ are the respective momentum and position of the $i$-th particle, and $\mathcal{H}$ is the Hamiltonian of the system. For conservative systems, it is given by the sum of the kinetic and the potential energy,

$$\mathcal{H}(\mathbf{p}, \mathbf{q}) = \sum_i \frac{p_i^2}{2 m_i} + U(\mathbf{q}),$$

where boldface denotes the set of quantities for all particles, $U(\mathbf{q})$ is the potential energy and $p_i^2 / 2 m_i$ is the kinetic energy of the $i$-th particle.
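As a concrete illustration (not code from the paper), the equations of motion above can be integrated for a one-dimensional harmonic oscillator with the symplectic velocity-Verlet scheme; the mass, spring constant and timestep below are arbitrary choices:

```python
import numpy as np

# Hypothetical 1D harmonic oscillator: H = p^2/(2m) + 0.5*k*q^2
m, k, dt = 1.0, 1.0, 0.01

def force(q):
    return -k * q  # -dU/dq

def velocity_verlet(q, p, n_steps):
    """Integrate Hamilton's equations with the symplectic velocity-Verlet scheme."""
    for _ in range(n_steps):
        p = p + 0.5 * dt * force(q)   # half kick
        q = q + dt * p / m            # drift
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p

def energy(q, p):
    return p**2 / (2 * m) + 0.5 * k * q**2

q0, p0 = 1.0, 0.0
qT, pT = velocity_verlet(q0, p0, 5000)
# symplectic integration keeps the energy error bounded and tiny
print(abs(energy(qT, pT) - energy(q0, p0)))
```

Because the scheme is symplectic, the energy error stays bounded over long trajectories instead of drifting, which is why integrators of this family are standard in molecular dynamics.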

Simulating an entire system while explicitly tracking all relevant degrees of freedom is computationally intractable in most cases. Typically, one is interested in a small subset of a system, such as a molecule or protein, and concerned only with the influence of the environment, also known as the bath, on the system, but not the details of the environment itself. For this reason, one usually incorporates an environment Hamiltonian $\mathcal{H}_{\mathrm{env}}$ with coarse-grained macroscopic variables of interest into the original Hamiltonian: $\mathcal{H} = \mathcal{H}_{\mathrm{sys}} + \mathcal{H}_{\mathrm{env}}$. The inclusion of $\mathcal{H}_{\mathrm{env}}$ is important in simulating systems under certain thermodynamic conditions. For example, the crystallization and melting of water occur under specific temperature and pressure. These conditions are imposed by the environment, which must therefore be incorporated into $\mathcal{H}_{\mathrm{env}}$. The environment, and therefore $\mathcal{H}_{\mathrm{env}}$, can also be explicitly controlled and optimized in many complex physical processes. Rotskoff and Vanden-Eijnden (2018); Tafoya et al. (2019); Sivak and Crooks (2016) For example, $\mathcal{H}_{\mathrm{env}}$ can represent an external laser that is varied to control chemical reaction dynamics. In nucleation processes, time-dependent temperature protocols can be designed to drive molecular self-assembly Sajfutdinow et al. (2018); Jacobs et al. (2015) to escape kinetic traps.

The control setting in molecular simulations is similar to that of robotics control of fine-grained degrees of freedom, as described by Zhong et al. (2019) However, molecular simulation is typically concerned with coarse-grained variables that control macroscopic thermodynamic states. These variables represent ensembles of many micro-states. Such control is usually realized by incorporating a dissipative Hamiltonian term.

Recent advances in differential equation solvers have shown that differentiable simulations may be performed, in which the result of a simulation can be analytically differentiated with respect to its inputs. Chen et al. (2018); Lu et al. (2019b); Li et al. (2019); Liang et al. (2019); Holl et al. (2020); Lu et al. (2019a); Long et al. (2017) This work applies differentiable simulation to a variety of molecular learning and control problems. We show that a Hamiltonian can be learned such that the macroscopic observables computed from a simulation trajectory match a given target. This is done through automatic differentiation of the macroscopic observables with respect to the system Hamiltonian. Moreover, we show that the same principles can be used to control the system Hamiltonian to force the system towards a target state.

Figure 1: Differentiable molecular simulation workflow for learning and controlling physical models. The ODE is propagated with Eq. 1; $\mathbf{p}$ and $\mathbf{q}$ denote the set of momenta and positions for all particles.

2 Approach

2.1 Molecular simulations

Molecular simulations with a control Hamiltonian In this work, we demonstrate the use of automatic differentiation in molecular dynamics simulations. Most applications require simulating systems under fixed thermodynamic conditions, such as constant temperature, pressure, or strain, imposed through macroscopic thermodynamic controls. A typical way to impose such thermodynamic constraints is to introduce modified equations of motion that contain virtual external variables. For example, to control the temperature of a system, a virtual heat bath variable is coupled to the system variables. Nosé (1984); Martyna et al. (1992); Parrinello and Rahman (1982)

These modified equations of motion can be integrated using ODE solvers. In the Appendix we provide details of the Nosé-Hoover chain, a common method for imposing constant-temperature conditions. In our experiments we have implemented a differentiable version of this integrator to represent realistic dynamics of particle systems at constant temperature while allowing backpropagation.
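For intuition, a minimal single-thermostat Nosé-Hoover sketch is shown below (the paper uses the full Nosé-Hoover chain described in its Appendix, in a differentiable implementation); the particle number, bath coupling Q, and timestep are illustrative, and the particles are taken to be force-free for brevity:

```python
import numpy as np

# Minimal single-thermostat Nose-Hoover sketch for free particles.
# The bath variable xi acts as a friction that speeds particles up or
# slows them down depending on how the kinetic energy compares to N*kT/2.
rng = np.random.default_rng(0)
N, m, kT, Q, dt = 64, 1.0, 1.0, 10.0, 0.005

p = rng.normal(0.0, np.sqrt(m * kT), N)   # momenta sampled at the target T
xi = 0.0                                   # virtual heat-bath variable
for _ in range(2000):
    K = (p**2).sum() / (2.0 * m)
    xi += dt * (2.0 * K - N * kT) / Q      # bath accelerates on K mismatch
    p *= 1.0 - dt * xi                     # friction term dp/dt = -xi * p

K = (p**2).sum() / (2.0 * m)
print(K / (0.5 * N * kT))   # instantaneous T / target T, oscillates around 1
```

The instantaneous temperature oscillates around the target rather than converging to it; the chain variant adds further bath variables to improve ergodicity.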

The use of a graph neural network (GNN) to represent a control Hamiltonian is also demonstrated in an example application. GNNs are a state-of-the-art architecture to learn the molecular potential energy function Gilmer et al. (2017); Schütt et al. (2017); Duvenaud et al. (2015) while preserving physical symmetries (translational, rotational and permutational). GNNs have shown flexible fitting power to represent molecular energies, and can be used to model control Hamiltonians (see Appendix).

Quantum dynamics with control To extend the notion of Hamiltonian dynamics to quantum mechanics, we consider the evolution of the wave function $\psi(\mathbf{x}, t)$, where $\mathbf{x}$ are the spatial degrees of freedom:

$$\psi(\mathbf{x}, t + \delta t) = e^{-i \hat{H} \delta t / \hbar}\, \psi(\mathbf{x}, t),$$

where $\hat{H}$ is the Hamiltonian operator and the exponential term is a unitary propagator. In abstract vector notation, this can be written as $|\psi(t + \delta t)\rangle = e^{-i \hat{H} \delta t / \hbar} |\psi(t)\rangle$. Time-dependent control terms in the Hamiltonian can be used to steer the evolution toward a desired outcome. For example, one can manipulate the intensity spectrum and/or the phase of light to control quantum behaviour. The latter is known as coherent control, Shapiro and Brumer (2003); Judson and Rabitz (1992) and has been used to control the out-of-equilibrium behavior of various physical processes. Zhu et al. (1995); Stievater et al. (2001); Brinks et al. (2010); Haché et al. (1997); Warren et al. (1993) A prototypical light-controlled process is the isomerization of the molecule retinal, which is responsible for human vision, Palczewski (2012) light-sensing, Briggs and Spudich (2005) and bacterial proton pumping. Schulten and Tavan (1978) Changes in the light impingent on the molecule can alter the efficiency, or quantum yield, of isomerization. Coherent control of retinal within an entire protein, bacteriorhodopsin, was demonstrated in Ref. Prokhorenko et al. (2006). Further, coherent control of the isomerization of retinal was demonstrated in vivo in the light-sensitive ion channel channelrhodopsin-2. Paul et al. (2017) This control led to the manipulation of current emanating from living brain cells.
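The unitary propagation $|\psi(t+\delta t)\rangle = e^{-i\hat{H}\delta t/\hbar}|\psi(t)\rangle$ can be sketched for a generic two-level system (not the retinal model discussed later in the paper); the Hamiltonian matrix elements here are arbitrary:

```python
import numpy as np

hbar = 1.0
# Hypothetical two-level Hamiltonian (arbitrary coupling and level splitting)
H = np.array([[0.0, 0.1],
              [0.1, 1.0]])

def propagator(H, dt):
    """Unitary U = exp(-i H dt / hbar), built from the eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * dt / hbar)) @ evecs.conj().T

psi = np.array([1.0, 0.0], dtype=complex)  # start in the lower state
U = propagator(H, 0.05)
for _ in range(1000):
    psi = U @ psi

# unitarity of U preserves the norm of the wave function
print(np.linalg.norm(psi))
```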

Computational and theoretical analysis of control can explain and motivate experimental control protocols. Lavigne and Brumer (2019b, a, 2017); Pachón and Brumer (2013) In Ref. Lavigne and Brumer (2017), for example, an experimental control protocol was reproduced computationally and explained theoretically. A minimal model for retinal Hahn and Stock (2000) was used to simulate the quantum yield (isomerization efficiency), and a genetic algorithm was used to shape the incident pulse. With the minimal retinal model used commonly in the literature, Axelrod and Brumer (2019); Tscherbul and Brumer (2014, 2015); Lavigne and Brumer (2017) we here analyze the control of the incident pulse through differentiable simulations. We allow for control of both the phase and intensity spectrum of the light; restriction to phase-only (i.e. coherent) control is also straightforward. In particular, we show that back-propagation through a quantum simulation can be used to shape the incident pulse and control the isomerization quantum yield. This provides an example of out-of-equilibrium molecular control with differentiable molecular simulations.

2.2 Back-propagation with the adjoint method

To be able to differentiate molecular simulations to reach a control target, we adopt the reverse-mode automatic differentiation method from the work by Chen et al., which uses adjoint sensitivity methods. Pontryagin et al. (1962); Chen et al. (2018) Taking derivatives requires computation of the adjoint state $\mathbf{a}(t) = \partial L / \partial \mathbf{z}(t)$, where $L$ is the loss and $\mathbf{z} = (\mathbf{p}, \mathbf{q})$ is the simulation state. Evaluating the gradient of the loss requires the reverse-time integration of the vector-Jacobian products:

$$\frac{d \mathbf{a}(t)}{dt} = -\mathbf{a}(t)^{\top} \frac{\partial f(\mathbf{z}(t), t, \theta)}{\partial \mathbf{z}}, \qquad \frac{dL}{d\theta} = -\int_{t_1}^{t_0} \mathbf{a}(t)^{\top} \frac{\partial f(\mathbf{z}(t), t, \theta)}{\partial \theta}\, dt,$$

where $f$ represents the Hamiltonian ODE defined in Eq. 1, i.e. $\dot{\mathbf{z}} = f(\mathbf{z}, t, \theta)$, with $\theta$ being the learnable parameters in the Hamiltonian. The reverse-mode automatic differentiation computes the gradient through the adjoint states without backpropagating through the forward computations in the ODE solver. This has the advantage of constant memory cost. The ability to output positions and momenta at individual timesteps allows one to directly compute observables and correlation functions from a trajectory. Separate reverse adjoint integrations are performed for observations at different individual observation times.
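To see the adjoint method at work on a minimal example, the sketch below differentiates the loss $L = z(T)$ of the scalar ODE $\dot{z} = \theta z$ with a hand-written forward Euler pass and a reverse-time adjoint integration, and compares against the analytic gradient $dL/d\theta = z_0 T e^{\theta T}$; all parameter values are arbitrary:

```python
import numpy as np

# Toy ODE dz/dt = f(z, theta) = theta * z with scalar state; loss L = z(T).
# Adjoint: a(t) = dL/dz(t) obeys da/dt = -a * df/dz, and
# dL/dtheta = integral over [0, T] of a(t) * df/dtheta dt (here df/dtheta = z).
theta, z0, T, n = 0.7, 1.3, 2.0, 20000
dt = T / n

# forward pass (Euler), storing the trajectory for the adjoint integral
z = np.empty(n + 1)
z[0] = z0
for i in range(n):
    z[i + 1] = z[i] + dt * theta * z[i]

# reverse pass: a(T) = dL/dz(T) = 1, integrated backwards in time
a = 1.0
grad = 0.0
for i in range(n, 0, -1):
    grad += dt * a * z[i]    # accumulate a(t) * df/dtheta
    a += dt * a * theta      # da/dt = -a * theta, stepped in reverse

analytic = z0 * T * np.exp(theta * T)   # exact dL/dtheta for this ODE
print(grad, analytic)
```

Note that only the adjoint scalar is carried backwards; for a black-box solver the trajectory itself can also be recomputed in reverse, which is what gives the method its constant memory cost.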

3 Targeted molecular dynamics

In this section, we show that with differentiable simulations, it is possible to bias the molecular dynamics in simulations towards a target state specified by a set of observables. From the perspective of engineering, a control Hamiltonian and thermodynamic protocol can be used to actively drive self-assembly of molecular structures. Nguyen and Vaikuntanathan (2016); Jacobs et al. (2015); Hormoz and Brenner (2011) This type of protocol connects a starting state to a final state, with the motion governed by two Hamiltonians. Being able to devise such simulation protocols is also useful in the context of non-equilibrium work relations (such as the celebrated Jarzynski equality Jarzynski (1997) and Crooks' fluctuation theorem, Crooks (1999)), because this allows one to compute free energy differences with both equilibrium Vaikuntanathan and Jarzynski (2008) and non-equilibrium simulations. Rotskoff and Vanden-Eijnden (2018) A similar equality is also known in statistics as annealed importance sampling. Neal (1998); Grosse et al. (2015); Habeck (2017) We show that one can adaptively learn a control Hamiltonian by perturbing a prior Hamiltonian with backpropagation to produce a state with a set of target observables and simulation outcomes.

In a first example, it is shown how differentiable molecular simulations allow control of dynamics by perturbatively seeking transition pathways from a starting state to a target state. This learning scheme does not require a user-defined reaction coordinate. This is advantageous, because reaction coordinates are challenging to define in many complex dynamic processes, such as protein folding and chemical reactions. We demonstrate the application in toy 2D and 3D systems (Fig. 2 and 3, respectively). In the 2D case, a bias potential is trained to steer a particle on a 2D double-well potential energy surface toward a target state. It starts as a small perturbative potential and then slowly increases in magnitude to help the particle pass over the barrier separating the states. This is similar to biasing methods that have been used in identifying pathways in protein folding Mendels et al. (2018) and chemical reactions. Maeda et al. (2016)

Figure 2: Controllable simulations in a toy one-particle system in a 2D space. The control Hamiltonian is learned so that the particle goes from a starting position to a target. We use an MLP to parametrize a 2D bias potential. The target loss to minimize is the distance between the final and target positions. The particle is initialized with zero velocity. During training, we run Hamiltonian dynamics with 100 steps and compute the loss to update the bias potential and steer the particle trajectory toward the target state.
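A self-contained, simplified analogue of this experiment can be sketched as follows: 1D instead of 2D, overdamped dynamics instead of Hamiltonian dynamics, a single learnable tilt parameter instead of an MLP, and finite-difference gradients standing in for backpropagation; all parameters are illustrative:

```python
import numpy as np

# A particle in the double well U0(x) = (x^2 - 1)^2 starts in the left well
# and must reach the right well at x = +1. The learnable bias is a simple
# tilt U_b(x) = -c*x; the loss is the squared distance of the final
# position from the target, as in the 2D experiment.

def final_position(c, x0=-1.0, dt=0.01, n_steps=3000):
    """Relax x under the biased potential U0(x) - c*x and return x(T)."""
    x = x0
    for _ in range(n_steps):
        grad_u = 4.0 * x * (x**2 - 1.0) - c   # d/dx [ (x^2 - 1)^2 - c*x ]
        x -= dt * grad_u
    return x

def loss(c, target=1.0):
    return (final_position(c) - target)**2

c, lr, eps = 0.0, 0.3, 1e-3
history = [loss(c)]
for _ in range(30):
    g = (loss(c + eps) - loss(c - eps)) / (2.0 * eps)  # finite-diff gradient
    c -= lr * np.clip(g, -5.0, 5.0)                    # clipped gradient step
    history.append(loss(c))

print(history[0], min(history))  # the tilt grows until the barrier is crossed
```

The bias starts flat and steepens under the gradient updates until the particle slides over the barrier, mirroring the perturbative growth of the MLP bias in the figure.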

To train a controllable 3D polymer folding process, we use a GNN to represent the bias Hamiltonian $\mathcal{H}_{\mathrm{bias}}$, which biases a linear chain of identical particles connected with a harmonic Hamiltonian into a helical fold (see Fig. 3). The prior potential is a sum over identical harmonic bond potentials, with an equilibrium bond length $r_0$ of 1 in arbitrary units: $U_{\mathrm{prior}} = \sum_{i} k (r_i - r_0)^2$, where $i$ is the index for the bonds along the chain. This Hamiltonian is perturbed in the simulations to produce a folded helical structure at a constant temperature using the Nosé-Hoover chain integrator described above and in the Appendix. We back-propagate through the simulations to continuously update the GNN parameters, so that the loss function $L = \sum_{k} \left( f_k(\mathbf{q}) - f_k(\mathbf{q}_{\mathrm{target}}) \right)^2$ is minimized. Here, the set of functions $\{ f_k \}$ includes structural variables of the polymer chain: bond distances, angles and dihedral angles along the chain.
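The structural features $f_k$ entering such a loss can be computed from Cartesian coordinates as below; the helix target here is a hypothetical stand-in (radius and pitch arbitrary), not the paper's target structure:

```python
import numpy as np

def bond_lengths(x):
    """Distances between consecutive beads of an (n, 3) chain."""
    return np.linalg.norm(x[1:] - x[:-1], axis=1)

def bond_angles(x):
    """Angles between consecutive bond vectors."""
    b = x[1:] - x[:-1]
    cos = np.einsum('ij,ij->i', b[:-1], b[1:]) / (
        np.linalg.norm(b[:-1], axis=1) * np.linalg.norm(b[1:], axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def dihedrals(x):
    """Torsion angles defined by four consecutive beads."""
    b = x[1:] - x[:-1]
    n1, n2 = np.cross(b[:-2], b[1:-1]), np.cross(b[1:-1], b[2:])
    m1 = np.cross(n1, b[1:-1] / np.linalg.norm(b[1:-1], axis=1, keepdims=True))
    return np.arctan2(np.einsum('ij,ij->i', m1, n2),
                      np.einsum('ij,ij->i', n1, n2))

def structural_loss(x, x_target):
    """Sum of mean-squared mismatches of the structural features f_k."""
    return sum(np.mean((f(x) - f(x_target))**2)
               for f in (bond_lengths, bond_angles, dihedrals))

# An ideal helix as a stand-in target fold
t = np.linspace(0.0, 4.0 * np.pi, 20)
helix = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)
print(structural_loss(helix, helix))   # -> 0.0 for a perfect match
```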

Figure 3: A We perform continuous model training during the simulations to bias a harmonic polymer chain toward a targeted helix shape. This is accomplished by training a bias Hamiltonian parameterized by a GNN. Here, $\mathbf{p}$ and $\mathbf{q}$ denote the set of momenta and positions for all particles, and $\boldsymbol{\xi}$ denotes the virtual bath variables that maintain a certain temperature. The simulations are run for 40000 steps, and the loss is computed and differentiated to update the GNN weights every 40 simulation steps.

4 Learning from observables

We demonstrate an example of fitting pair correlation distributions in Lennard-Jones (LJ) systems (Fig. 4). Pair correlation distributions characterize structural and thermodynamic properties of condensed-phase systems. Jones (1924) We demonstrate that by differentiating through the simulation trajectories, one can actively modify parameters to match a target distribution function. To make the distribution function our differentiable target, we implement a differentiable histogram to approximate the typical non-differentiable histogram operation. This is done by summing over pair distances expanded in a basis of Gaussians ("Gaussian smearing"), followed by normalization to ensure that the histogram integration yields the total number of pair distances. Given an observation of pair distance $r$ that we wish to approximately map onto a histogram with $n$ bins centered at $\{ r_k \}$, the Gaussian-smeared weight for bin $k$ is given by

$$h_k(r) = \frac{e^{-(r - r_k)^2 / 2\sigma^2}}{\sum_{k'=1}^{n} e^{-(r - r_{k'})^2 / 2\sigma^2}},$$

where $\sigma$ approximates the bin width. A Gaussian basis is used to replace the non-differentiable Dirac delta function to compute an approximate histogram. The total normalized histogram is obtained by accumulating $h_k$ over all the observations of pair distances $r_{ij}$ (between atoms $i$ and $j$) in the trajectory:

$$H_k = \sum_{i < j} h_k(r_{ij}).$$

The pair correlation function $g(r_k)$ is obtained by normalizing $H_k$ by the differential shell volumes, $g(r_k) \propto H_k / (4 \pi r_k^2 \Delta r)$, where $\Delta r$ is the bin width.
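A minimal sketch of the Gaussian-smeared histogram defined above; the bin range, bin count and $\sigma$ are illustrative:

```python
import numpy as np

def soft_histogram(distances, centers, sigma):
    """Differentiable histogram: each observation is smeared over the bin
    centers with a Gaussian, then normalized per observation so that the
    histogram integrates to the total number of observations."""
    d = np.asarray(distances)[:, None]          # (n_obs, 1)
    w = np.exp(-0.5 * ((d - centers[None, :]) / sigma)**2)
    w /= w.sum(axis=1, keepdims=True)           # per-observation normalization
    return w.sum(axis=0)                        # accumulate over observations

centers = np.linspace(0.0, 5.0, 50)
sigma = centers[1] - centers[0]                 # sigma ~ bin width
rng = np.random.default_rng(0)
r = rng.uniform(0.5, 4.5, size=1000)            # toy pair distances
h = soft_histogram(r, centers, sigma)
print(h.sum())   # equals the number of pair distances, here 1000
```

Because each row of weights sums to one, the smearing is exactly count-preserving, while every bin value remains a smooth function of the input distances, which is what makes the subsequent backpropagation possible.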

Figure 4: Computational workflow to fit pair distribution functions for an LJ system. The target distribution functions are obtained with $\sigma = 1$ and $\epsilon = 1$. For each training epoch, we compute the pair distribution functions from simulated trajectories. We back-propagate the mean square loss between simulated and target pair distribution functions to update the LJ parameters.

We set up the experiment to learn a Lennard-Jones potential Jones (1924): $u(r) = 4 \epsilon \left[ (\sigma / r)^{12} - (\sigma / r)^{6} \right]$, with the total potential energy of the system given by $U = \sum_{i < j} u(r_{ij})$. The parameters to be optimized are $\sigma$ and $\epsilon$. We simulate the system at three different temperatures and obtain three different pair correlation functions $g(r)$. For each temperature, the model learns to reproduce the target radial distribution function by minimizing the mean square loss between the simulated pair distribution function and the target. At each epoch, the parameters are updated with gradient descent.
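The pair potential and total energy can be written compactly as below (a sketch, not the paper's implementation); a familiar sanity check is that the pair energy reaches its minimum of $-\epsilon$ at $r = 2^{1/6}\sigma$:

```python
import numpy as np

def lj_pair(r, sigma=1.0, eps=1.0):
    """Lennard-Jones pair energy u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def total_energy(x, sigma=1.0, eps=1.0):
    """Sum of u(r) over all unique pairs of an (n, 3) configuration."""
    diff = x[:, None, :] - x[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(x), k=1)   # unique i < j pairs
    return lj_pair(r[iu], sigma, eps).sum()

# minimum of the pair potential: depth -eps at r = 2**(1/6)*sigma
print(lj_pair(2**(1 / 6)))   # -> -1.0
```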

5 Control protocol for light-driven quantum dynamics

Figure 5: A Potential energy surfaces of the model retinal Hamiltonian. The model consists of two electronic states, denoted with blue and red, a vibrational mode $q$, and a torsional isomerization mode $\phi$. B Control of the time-averaged quantum yield as a function of training epoch.

We use the model introduced in Ref. Hahn and Stock (2000) for the retinal chromophore. The model Hamiltonian consists of two diabatic electronic states, a single torsional mode for the isomerizing double bond, and a single stretching mode (see Fig. 5A). Details of the model, the construction of the Hamiltonian and the operators of interest can be found in Refs. Hahn and Stock (2000); Tscherbul and Brumer (2015); Axelrod and Brumer (2019).

The total Hamiltonian of the system and the control field is given by

$$\hat{H}(t) = \hat{H}_{0} + \hat{H}_{c}(t), \qquad \hat{H}_{c}(t) = -\hat{\mu}\, E(t),$$

where $\hat{H}_{0}$ is the system Hamiltonian, $\hat{H}_{c}$ is the control Hamiltonian, $\hat{\mu}$ is the dipole operator, and $E(t)$ is the electric field. The system wave function evolves under the Schrödinger equation,

$$i \hbar \frac{\partial \psi}{\partial t} = \hat{H}(t)\, \psi, \qquad (8)$$

where $\psi$ is the wave function and $\hbar$ is the reduced Planck constant. Computationally, the wave function is represented as a vector and the Hamiltonian as a matrix, and the system dynamics are obtained by solving Eq. (8). Physical quantities are obtained as expectation values of operators, $\langle A \rangle = \langle \psi | \hat{A} | \psi \rangle$, where $A$ is an observable and $\hat{A}$ is the corresponding operator. The minimal model contains only two nuclear degrees of freedom, but coupling to other nuclear modes and to the solvent can have significant effects on the quantum yield. Balzer et al. (2003) These effects can be approximately treated with perturbative master equations. Tscherbul and Brumer (2015); Axelrod and Brumer (2018, 2019) In this case a similar control scheme could be used, with the Schrödinger equation replaced by a master equation, and the wave function replaced by a density operator.

In the first training epoch, a Gaussian form is used for the controllable electric field:

$$E(t) = E_{0} \cos\!\left( \omega_{0} (t - t_{0}) \right) \exp\!\left( -\frac{(t - t_{0})^2}{2 \tau^2} \right),$$

where $E_{0}$ is the amplitude of the field, $\omega_{0}$ is the center frequency, $t_{0}$ is the pulse arrival time, and $\tau$ is the pulse duration. The short pulse duration is chosen to approximate a delta function, the arrival time to ensure that the pulse is completely contained within the simulation, and the center frequency to approximately match the electronic excitation energy. The initial field amplitude $E_{0}$ is arbitrary since, as explained below, the quantum yield is normalized with respect to the excited state population, and hence to the field intensity.

The quantity to be optimized is the quantum yield, i.e. the efficiency of isomerization. The cis projection operator is denoted as $\hat{P}_{\mathrm{cis}}$ and the trans projection operator as $\hat{P}_{\mathrm{trans}}$. In position space, the operators are given by $\hat{P}_{\mathrm{cis}} = h(\cos \phi)$ and $\hat{P}_{\mathrm{trans}} = h(-\cos \phi)$, where $h$ is the Heaviside step function and $\phi$ is the molecular rotation coordinate. The quantum yield is given by

$$Y(t) = \frac{\langle \hat{P}_{\mathrm{trans}, e}(t) \rangle}{1 - p_{g}(t)},$$

where $\langle \cdot \rangle$ denotes a quantum expectation value, $\hat{P}_{\mathrm{trans}, e}$ is the projection of $\hat{P}_{\mathrm{trans}}$ onto the diabatic excited electronic state, $\hat{P}_{g}$ projects onto the ground diabatic state, and $p_{g} = \langle \hat{P}_{g} \rangle$ is the ground state population. Subtraction of $p_{g}$ in the denominator ensures that any population remaining in the ground state does not contribute to the quantum yield. Since the quantum yield depends on time, we optimize its average over a time period after the pulse is over:

$$\bar{Y} = \frac{1}{\Delta t} \int_{t_{s}}^{t_{s} + \Delta t} Y(t)\, dt,$$

where the bar denotes a time average. Here, $t_{s}$ is the time at which the yield is first recorded, and $\Delta t$ is the averaging time; both are on the picosecond scale.
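With hypothetical population traces, the yield and its time average can be computed as below; the function names and toy numbers are illustrative, not values from the paper:

```python
import numpy as np

def quantum_yield(p_trans_e, p_g):
    """Y(t) = <P_trans,e> / (1 - p_g): excited-state trans population,
    normalized by the population that has left the ground state."""
    return p_trans_e / (1.0 - p_g)

def time_averaged_yield(times, p_trans_e, p_g, t_start, t_avg):
    """Average Y(t) over the window [t_start, t_start + t_avg]."""
    mask = (times >= t_start) & (times <= t_start + t_avg)
    return np.mean(quantum_yield(p_trans_e[mask], p_g[mask]))

# toy populations: 40% of the excited population ends up trans
times = np.linspace(0.0, 10.0, 1001)
p_g = np.full_like(times, 0.5)        # half the population stays in the ground state
p_trans_e = np.full_like(times, 0.2)  # 0.2 / (1 - 0.5) = 0.4
print(time_averaged_yield(times, p_trans_e, p_g, 2.0, 5.0))
```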

The dynamics are discretized with a femtosecond-scale timestep. The electric field is discretized with a coarser time step to limit the timescale over which the electric field can vary. To avoid the use of complex numbers in backpropagation, the real and imaginary parts of the wave function are stored as separate vectors $\psi_{R}$ and $\psi_{I}$, respectively. They then follow the coupled equations of motion

$$\hbar \frac{d \psi_{R}}{dt} = \hat{H} \psi_{I}, \qquad \hbar \frac{d \psi_{I}}{dt} = -\hat{H} \psi_{R}.$$
The expectation value of an operator $\hat{A}$ is then given by

$$\langle \hat{A} \rangle = \psi_{R}^{\top} \hat{A}\, \psi_{R} + \psi_{I}^{\top} \hat{A}\, \psi_{I} + i \left( \psi_{R}^{\top} \hat{A}\, \psi_{I} - \psi_{I}^{\top} \hat{A}\, \psi_{R} \right).$$
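The real/imaginary split propagation above can be sketched for a generic two-level system (again, not the retinal model); an RK4 integrator stands in here, and the Hamiltonian matrix elements are arbitrary:

```python
import numpy as np

hbar = 1.0
# Hypothetical real symmetric two-level Hamiltonian
H = np.array([[0.0, 0.2],
              [0.2, 1.0]])

def deriv(pr, pi):
    # hbar * d(psi_R)/dt = H psi_I ;  hbar * d(psi_I)/dt = -H psi_R
    return (H @ pi) / hbar, -(H @ pr) / hbar

def rk4_step(pr, pi, dt):
    """One RK4 step of the coupled real/imaginary equations of motion."""
    k1r, k1i = deriv(pr, pi)
    k2r, k2i = deriv(pr + 0.5 * dt * k1r, pi + 0.5 * dt * k1i)
    k3r, k3i = deriv(pr + 0.5 * dt * k2r, pi + 0.5 * dt * k2i)
    k4r, k4i = deriv(pr + dt * k3r, pi + dt * k3i)
    pr = pr + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    pi = pi + dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
    return pr, pi

psi_r = np.array([1.0, 0.0])   # start in the lower state, purely real
psi_i = np.array([0.0, 0.0])
for _ in range(2000):
    psi_r, psi_i = rk4_step(psi_r, psi_i, 0.01)

norm = np.sqrt((psi_r**2 + psi_i**2).sum())
print(norm)   # the split propagation preserves the norm to high accuracy
```

All arithmetic here is real-valued, so this representation is friendly to autodiff frameworks with limited complex-number support.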
During simulations, we backpropagate through the simulation trajectory to optimize both the magnitude and phase of the temporal electric field. The numerical results are shown in Fig. 5B. The quantum yield begins at approximately 0.6, and after 50 epochs reaches 0.8 as the electric field is improved. These results show that differentiable simulations can be used to learn control protocols for electric field-driven isomerization.

6 Conclusions

Simulating complex landscapes of molecular states is computationally challenging. Being able to control molecular simulations allows one to accelerate the exploration of configurational space and to intelligently design control protocols for various applications. In this work, we proposed a framework for controlling molecular simulations based on macroscopic quantities to develop learning and control protocols. The proposed approach is based on model learning through simulation-time feedback from bulk observables. This also opens up new possibilities for designing control protocols for equilibrium and non-equilibrium simulations by incorporating bias Hamiltonians. This work can be extended to the simulation of other types of molecular systems with different thermodynamic boundary conditions and different control scenarios.

7 Related work

On differentiable simulations Several works have incorporated physics-based simulations to control and infer movements of mechanical objects by building in inductive biases that obey Hamiltonian dynamics. Greydanus et al. (2019); Sanchez-Gonzalez et al. (2019) Many works also focus on performing model control over dynamical systems. Li et al. (2019); Battaglia et al. (2016); Zhong et al. (2019); Morton et al. (2018) Differentiable simulation and sampling with automatic differentiation have also been utilized in constructing models from data in many differential-equation settings, such as computational fluid dynamics, Schenck and Fox (2018); Morton et al. (2018) physics simulations, Hu et al. (2019); Bar-Sinai et al. (2019); Liang et al. (2019) quantum chemistry, Tamayo-Mendoza et al. (2018) tensor networks, Liao et al. (2019) generating protein structures, Ingraham et al. (2019b, a); Townshend et al. (2018); Anand et al. (2019); Senior et al. (2020) and estimating densities of probability distributions with normalizing flows Zhang et al. (2018); Grathwohl et al. (2019) and point clouds. Yang et al. (2019) Much progress has been made in developing differentiable frameworks for molecular dynamics, Schoenholz and Cubuk (2019) PDEs, Han et al. (2018); Long et al. (2017, 2019); Lu et al. (2019b) and ODEs. Chen et al. (2018)

On statistical physics and molecular dynamics In machine learning for molecular dynamics, automatic differentiation has been applied in analyzing the latent structure of molecular kinetics, Mardt et al. (2018); Xie et al. (2019); Wu et al. (2018); Li et al. (2019); Wehmeyer and Noé (2018); Ceriotti (2019); Post et al. (2019); Wang et al. (2019); Hernández et al. (2018); Chen et al. (2018) fitting models from quantum chemistry calculations Schütt et al. (2017); Bartók et al. (2010); Yao et al. (2018); Behler and Parrinello (2007); Mailoa et al. (2019); Zhang et al. (2018); Smith et al. (2017) or experimental measurements, Xie et al. (2019); Xie and Zhang (2018) learning model reduction of atomistic simulations Lu et al. (2019b); Ma et al. (2018); John and Csányi (2017); Wang and Gómez-Bombarelli (2019); Wang et al. (2018); Zhang et al. (2018); Bejagam et al. (2018); Durumeric and Voth (2019); Hoffmann et al. (2019) and sampling microstates. Noé and Wu (2018); Bonati et al. (2019); Guo et al. (2018); Rogal et al. (2019); Schneider et al. (2017) For the computation of free energies in simulations, adaptive and variational methods have been proposed with bias Hamiltonians on a specified reaction coordinate, Darve et al. (2008) or invertible transformations Jarzynski (2001); Vaikuntanathan and Jarzynski (2008) between initial and target configurations. For non-equilibrium simulations, variational methods have been applied in the computation of large deviation functions to better sample rare events. Nguyen and Vaikuntanathan (2016); Das and Limmer (2019); Dolezal and Jack (2019) Studying optimal control protocols has also been a recent focus for several physical systems. Bukov et al. (2017); Rotskoff et al. (2017); Cavina et al. (2018a, b)


WW thanks Toyota Research Institute, SA thanks the MIT Buchsbaum Fund, and RGB thanks DARPA AMD, MIT DMSE and Toyota Faculty Chair for financial support.


  • M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2016) TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. External Links: 1603.04467 Cited by: §8.3.
  • N. Anand, R. Eguchi, and P. S. Huang (2019) Fully differentiable full-atom protein backbone generation. In Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop, Cited by: §7.
  • S. Axelrod and P. Brumer (2018) An efficient approach to the quantum dynamics and rates of processes induced by natural incoherent light. The Journal of chemical physics 149 (11), pp. 114104. Cited by: §5.
  • S. Axelrod and P. Brumer (2019) Multiple time scale open systems: reaction rates and quantum coherence in model retinal photoisomerization under incoherent excitation. The Journal of chemical physics 151 (1), pp. 014104. Cited by: §2.1, §5, §5.
  • B. Balzer, S. Hahn, and G. Stock (2003) Mechanism of a photochemical funnel: a dissipative wave-packet dynamics study. Chemical physics letters 379 (3), pp. 351. Cited by: §5.
  • Y. Bar-Sinai, S. Hoyer, J. Hickey, and M. P. Brenner (2019) Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences of the United States of America 116 (31), pp. 15344–15349. External Links: Document, 1808.04930, ISSN 10916490 Cited by: §7.
  • A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi (2010) Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons. Physical Review Letters 104 (13), pp. 136403. External Links: Document, 0910.1019, Link Cited by: §7.
  • P. W. Battaglia, R. Pascanu, M. Lai, D. Rezende, and K. Kavukcuoglu (2016) Interaction Networks for Learning about Objects, Relations and Physics. Advances in Neural Information Processing Systems, pp. 4509–4517. External Links: 1612.00222 Cited by: §7.
  • J. Behler and M. Parrinello (2007) Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical Review Letters 98 (14), pp. 146401. External Links: Document, ISSN 0031-9007 Cited by: §7.
  • K. K. Bejagam, S. Singh, Y. An, and S. A. Deshmukh (2018) Machine-Learned Coarse-Grained Models. Journal of Physical Chemistry Letters 9 (16), pp. 4667–4672. External Links: Document, ISSN 19487185 Cited by: §7.
  • L. Bonati, Y. Y. Zhang, and M. Parrinello (2019) Neural networks-based variationally enhanced sampling. Proceedings of the National Academy of Sciences of the United States of America 116 (36), pp. 17641–17647. External Links: Document, 1904.01305, ISSN 10916490 Cited by: §7.
  • W. R. Briggs and J. L. Spudich (2005) Handbook of photosensory receptors. John Wiley & Sons. Cited by: §2.1.
  • D. Brinks, F. D. Stefani, F. Kulzer, R. Hildner, T. H. Taminiau, Y. Avlasevich, K. Müllen, and N. F. Van Hulst (2010) Visualizing and controlling vibrational wave packets of single molecules. Nature 465 (7300), pp. 905–908. Cited by: §2.1.
  • M. Bukov, A. G. R. Day, D. Sels, P. Weinberg, A. Polkovnikov, and P. Mehta (2017) Reinforcement Learning in Different Phases of Quantum Control. Physical Review X 8 (3). External Links: Document, 1705.00565, Link Cited by: §7.
  • V. Cavina, A. Mari, A. Carlini, and V. Giovannetti (2018) Optimal thermodynamic control in open quantum systems. Physical Review A 98 (1), pp. 012139. External Links: Document, 1709.07400, ISSN 24699934 Cited by: §7.
  • V. Cavina, A. Mari, A. Carlini, and V. Giovannetti (2018) Variational approach to the optimal control of coherently driven, open quantum system dynamics. Physical Review A 98 (5), pp. 052125. External Links: Document, 1807.07450, ISSN 24699934 Cited by: §7.
  • M. Ceriotti (2019) Unsupervised machine learning in atomistic simulations, between predictions and understanding. Vol. 150, American Institute of Physics Inc.. External Links: Document, 1902.05158, ISSN 00219606 Cited by: §7.
  • R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud (2018) Neural Ordinary Differential Equations. Advances in neural information processing systems. External Links: Document, 1806.07366, ISBN 9781139108188 Cited by: §1, §2.2, §7.
  • W. Chen, A. R. Tan, and A. L. Ferguson (2018) Collective variable discovery and enhanced sampling using autoencoders: Innovations in network architecture and error function design. Journal of Chemical Physics 149 (7), pp. 072312. External Links: Document, ISSN 00219606 Cited by: §7.
  • G. E. Crooks (1999) Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 60 (3), pp. 2721–2726. External Links: Document, 9901352, ISSN 1063651X Cited by: §3.
  • E. Darve, D. Rodríguez-Gómez, and A. Pohorille (2008) Adaptive biasing force method for scalar and vector free energy calculations. Journal of Chemical Physics 128 (14), pp. 144120. External Links: Document, ISSN 00219606 Cited by: §7.
  • A. Das and D. T. Limmer (2019) Variational control forces for enhanced sampling of nonequilibrium molecular dynamics simulations. Journal of Chemical Physics 151 (24), pp. 244123. External Links: Document, 1909.03589, ISSN 00219606 Cited by: §7.
  • J. Dolezal and R. L. Jack (2019) Large deviations and optimal control forces for hard particles in one dimension. Journal of Statistical Mechanics: Theory and Experiment 2019 (12), pp. 123208. External Links: Document, 1906.07043 Cited by: §7.
  • A. E. P. Durumeric and G. A. Voth (2019) Adversarial-residual-coarse-graining: Applying machine learning theory to systematic molecular coarse-graining. Journal of Chemical Physics 151 (12), pp. 124110. External Links: Document, 1904.00871, ISSN 00219606, Link Cited by: §7.
  • D. K. Duvenaud, D. Maclaurin, J. Aguilera-Iparraguirre, R. Gómez-Bombarelli, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams (2015) Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Advances in Neural Information Processing Systems, pp. 2215–2223. External Links: 1509.09292 Cited by: §2.1, §8.3.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural Message Passing for Quantum Chemistry. External Links: 1704.01212 Cited by: §2.1.
  • W. Grathwohl, R. T.Q. Chen, J. Bettencourt, I. Sutskever, and D. Duvenaud (2019) Ffjord: Free-form continuous dynamics for scalable reversible generative models. In 7th International Conference on Learning Representations, ICLR 2019, External Links: 1810.01367 Cited by: §7.
  • S. Greydanus, M. Dzamba, and J. Yosinski (2019) Hamiltonian Neural Networks. External Links: 1906.01563 Cited by: §7.
  • R. B. Grosse, Z. Ghahramani, and R. P. Adams (2015) Sandwiching the marginal likelihood using bidirectional Monte Carlo. External Links: 1511.02543, Link Cited by: §3.
  • A. Z. Guo, E. Sevgen, H. Sidky, J. K. Whitmer, J. A. Hubbell, and J. J. De Pablo (2018) Adaptive enhanced sampling by force-biasing using neural networks. Journal of Chemical Physics 148 (13), pp. 134108. External Links: Document, ISSN 00219606, Link Cited by: §7.
  • M. Habeck (2017) Model evidence from nonequilibrium simulations. Advances in Neural Information Processing Systems, pp. 1753–1762. Cited by: §3.
  • A. Haché, Y. Kostoulas, R. Atanasov, J. Hughes, J. Sipe, and H. Van Driel (1997) Observation of coherently controlled photocurrent in unbiased, bulk gaas. Physical Review Letters 78 (2), pp. 306. Cited by: §2.1.
  • S. Hahn and G. Stock (2000) Quantum-mechanical modeling of the femtosecond isomerization in rhodopsin. The Journal of Physical Chemistry B 104 (6), pp. 1146–1149. Cited by: §2.1, §5.
  • J. Han, A. Jentzen, and E. Weinan (2018) Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences of the United States of America 115 (34), pp. 8505–8510. External Links: Document, 1707.02568, ISSN 10916490 Cited by: §7.
  • C. X. Hernández, H. K. Wayment-Steele, M. M. Sultan, B. E. Husic, and V. S. Pande (2018) Variational encoding of complex dynamics. Physical Review E 97 (6), pp. 062412. External Links: Document, 1711.08576, ISSN 24700053 Cited by: §7.
  • C. Hoffmann, R. Menichetti, K. H. Kanekal, and T. Bereau (2019) Controlled exploration of chemical space by machine learning of coarse-grained representations. Physical Review E 100 (3), pp. 033302. External Links: Document, 1905.01897, ISSN 24700053 Cited by: §7.
  • P. Holl, V. Koltun, and N. Thuerey (2020) Learning to Control PDEs with Differentiable Physics. External Links: 2001.07457 Cited by: §1.
  • S. Hormoz and M. P. Brenner (2011) Design principles for self-assembly with short-range interactions. Proceedings of the National Academy of Sciences of the United States of America 108 (13), pp. 5193–5198. External Links: Document, ISSN 10916490 Cited by: §3.
  • Y. Hu, L. Anderson, T. Li, Q. Sun, N. Carr, J. Ragan-Kelley, and F. Durand (2019) DiffTaichi: Differentiable Programming for Physical Simulation. External Links: 1910.00935 Cited by: §7.
  • Y. Hu, J. Liu, A. Spielberg, J. B. Tenenbaum, W. T. Freeman, J. Wu, D. Rus, and W. Matusik (2019) ChainQueen: A real-time differentiable physical simulator for soft robotics. In Proceedings - IEEE International Conference on Robotics and Automation, Vol. 2019-May, pp. 6265–6271. External Links: Document, 1810.01054, ISBN 9781538660263, ISSN 10504729 Cited by: §7.
  • J. Ingraham, V. K. Garg, R. Barzilay, and T. Jaakkola (2019a) Generative models for graph-based protein design. Technical report Cited by: §7.
  • J. Ingraham, A. Riesselman, C. Sander, and D. Marks (2019b) Learning Protein Structure with a Differentiable Simulator. In International Conference on Learning Representations, Cited by: §7.
  • W. M. Jacobs, A. Reinhardt, and D. Frenkel (2015) Communication: Theoretical prediction of free-energy landscapes for complex self-assembly. Journal of Chemical Physics 142 (2), pp. 021101. External Links: Document, 1501.02249, ISSN 00219606 Cited by: §3.
  • W. M. Jacobs, A. Reinhardt, and D. Frenkel (2015) Rational design of self-assembly pathways for complex multicomponent structures. Proceedings of the National Academy of Sciences of the United States of America 112 (20), pp. 6313–6318. External Links: Document, 1502.01351, ISSN 10916490 Cited by: §1, §3.
  • C. Jarzynski (1997) Nonequilibrium equality for free energy differences. Physical Review Letters 78 (14), pp. 2690–2693. External Links: Document, 9610209, ISSN 10797114 Cited by: §3.
  • C. Jarzynski (1997) Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 56 (5), pp. 5018–5035. External Links: Document, 9707325, ISSN 1063651X Cited by: §3.
  • C. Jarzynski (2001) Targeted free energy perturbation. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 65 (4), pp. 5. External Links: Document, 0109461 Cited by: §7.
  • S. T. John and G. Csányi (2017) Many-Body Coarse-Grained Interactions Using Gaussian Approximation Potentials. The Journal of Physical Chemistry B 121 (48), pp. 10934–10949. External Links: Document, ISSN 1520-6106, Link Cited by: §7.
  • J. E. Jones (1924) On the Determination of Molecular Fields. II. From the Equation of State of a Gas. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 106 (738), pp. 463–477. External Links: Document, ISSN 1364-5021 Cited by: §4, §4.
  • R. S. Judson and H. Rabitz (1992) Teaching lasers to control molecules. Physical review letters 68 (10), pp. 1500. Cited by: §2.1.
  • C. Lavigne and P. Brumer (2017) Interfering resonance as an underlying mechanism in the adaptive feedback control of radiationless transitions: retinal isomerization. The Journal of chemical physics 147 (11), pp. 114107. Cited by: §2.1.
  • C. Lavigne and P. Brumer (2019a) Considerations regarding one-photon phase control. arXiv preprint arXiv:1910.13878. Cited by: §2.1.
  • C. Lavigne and P. Brumer (2019b) Ultrashort pulse two-photon coherent control of a macroscopic phenomena: light-induced current from channelrhodopsin-2 in live brain cells. arXiv preprint arXiv:1907.07741. Cited by: §2.1.
  • S. Li, C. Dong, L. Zhang, and L. Wang (2019) Neural Canonical Transformation with Symplectic Flows. External Links: 1910.00024 Cited by: §7.
  • Y. Li, H. He, J. Wu, D. Katabi, and A. Torralba (2019) Learning Compositional Koopman Operators for Model-Based Control. External Links: 1910.08264 Cited by: §1, §7.
  • J. Liang, M. Lin, and V. Koltun (2019) Differentiable Cloth Simulation for Inverse Problems. Advances in Neural Information Processing Systems 32 (NeurIPS), pp. 771–780. Cited by: §1, §7.
  • H. J. Liao, J. G. Liu, L. Wang, and T. Xiang (2019) Differentiable Programming Tensor Networks. Physical Review X 9 (3), pp. 031041. External Links: Document, 1903.09650, ISSN 21603308 Cited by: §7.
  • Z. Long, Y. Lu, and B. Dong (2019) PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network. Journal of Computational Physics 399. External Links: Document, 1812.04426, ISSN 10902716 Cited by: §7.
  • Z. Long, Y. Lu, X. Ma, and B. Dong (2017) PDE-Net: Learning PDEs from Data. 35th International Conference on Machine Learning, ICML 2018 7, pp. 5067–5078. External Links: 1710.09668 Cited by: §1, §7.
  • L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis (2019a) DeepXDE: A deep learning library for solving differential equations. External Links: 1907.04502 Cited by: §1.
  • P. Y. Lu, S. Kim, and M. Soljačić (2019b) Extracting Interpretable Physical Parameters from Spatiotemporal Systems using Unsupervised Learning. External Links: 1907.06011 Cited by: §1, §7.
  • C. Ma, J. Wang, and W. E (2018) Model Reduction with Memory and the Machine Learning of Dynamical Systems. arxiv:1808.04258. Cited by: §7.
  • S. Maeda, Y. Harabuchi, M. Takagi, T. Taketsugu, and K. Morokuma (2016) Artificial Force Induced Reaction (AFIR) Method for Exploring Quantum Chemical Potential Energy Surfaces. Chemical Record 16 (5), pp. 2232–2248. External Links: Document, ISSN 15280691, Link Cited by: §3.
  • J. P. Mailoa, M. Kornbluth, S. Batzner, G. Samsonidze, S. T. Lam, J. Vandermause, C. Ablitt, N. Molinari, and B. Kozinsky (2019) A fast neural network approach for direct covariant forces prediction in complex multi-element extended systems. Nature Machine Intelligence 1 (10), pp. 471–479. External Links: Document, 1905.02791 Cited by: §7, §8.3.
  • A. Mardt, L. Pasquali, H. Wu, and F. Noé (2018) VAMPnets for deep learning of molecular kinetics. Nature Communications 9 (1), pp. 5. External Links: Document, ISSN 2041-1723 Cited by: §7.
  • G. J. Martyna, M. L. Klein, and M. Tuckerman (1992) Nosé-Hoover chains: The canonical ensemble via continuous dynamics. The Journal of Chemical Physics 97 (4), pp. 2635–2643. External Links: Document, ISSN 00219606 Cited by: §2.1, §8.1.
  • D. Mendels, G. Piccini, Z. F. Brotzakis, Y. I. Yang, and M. Parrinello (2018) Folding a small protein using harmonic linear discriminant analysis. Journal of Chemical Physics 149 (19), pp. 194113. External Links: Document, 1808.07895, ISSN 00219606, Link Cited by: §3.
  • J. Morton, F. D. Witherden, M. J. Kochenderfer, and A. Jameson (2018) Deep dynamical modeling and control of unsteady fluid flows. In Advances in Neural Information Processing Systems, Vol. 2018-Decem, pp. 9258–9268. External Links: 1805.07472, ISSN 10495258 Cited by: §7.
  • R. M. Neal (1998) Annealed Importance Sampling. Statistics and Computing 11 (2), pp. 125–139. External Links: 9803008, Link Cited by: §3.
  • M. Nguyen and S. Vaikuntanathan (2016) Design principles for nonequilibrium self-assembly. Proceedings of the National Academy of Sciences of the United States of America 113 (50), pp. 14231–14236. External Links: Document, 1507.08971, ISSN 10916490 Cited by: §3, §7.
  • F. Noé and H. Wu (2018) Boltzmann Generators-Sampling Equilibrium States of Many-Body Systems with Deep Learning. arXiv:1812.01729. Cited by: §7.
  • S. Nosé (1984) A unified formulation of the constant temperature molecular dynamics methods. The Journal of Chemical Physics 81 (1), pp. 511–519. External Links: Document, ISSN 00219606 Cited by: §2.1, §8.1.
  • L. A. Pachón and P. Brumer (2013) Mechanisms in environmentally assisted one-photon phase control. The Journal of chemical physics 139 (16), pp. 164123. Cited by: §2.1.
  • K. Palczewski (2012) Chemistry and biology of vision. Journal of Biological Chemistry 287 (3), pp. 1612–1619. Cited by: §2.1.
  • M. Parrinello and A. Rahman (1982) Strain fluctuations and elastic constants. The Journal of Chemical Physics 76 (5), pp. 2662–2666. External Links: Document, ISSN 00219606 Cited by: §2.1, §8.1.
  • A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2019) Automatic differentiation in PyTorch. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. Cited by: §8.3.
  • K. Paul, P. Sengupta, E. D. Ark, H. Tu, Y. Zhao, and S. A. Boppart (2017) Coherent control of an opsin in living brain tissue. Nature physics 13 (11), pp. 1111–1116. Cited by: §2.1.
  • L. S. Pontryagin, E. F. Mishchenko, V. G. Boltyanskii, and R. V. Gamkrelidze (1962) The mathematical theory of optimal processes. Wiley. Cited by: §2.2.
  • M. Post, S. Wolf, and G. Stock (2019) Principal component analysis of nonequilibrium molecular dynamics simulations. Journal of Chemical Physics 150 (20). External Links: Document, 1903.08105, ISSN 00219606 Cited by: §7.
  • V. I. Prokhorenko, A. M. Nagy, S. A. Waschuk, L. S. Brown, R. R. Birge, and R. D. Miller (2006) Coherent control of retinal isomerization in bacteriorhodopsin. science 313 (5791), pp. 1257–1261. Cited by: §2.1.
  • J. Rogal, E. Schneider, and M. E. Tuckerman (2019) Neural-Network-Based Path Collective Variables for Enhanced Sampling of Phase Transformations. Physical Review Letters 123 (24), pp. 245701. External Links: Document, 1905.01536, ISSN 10797114 Cited by: §7.
  • G. M. Rotskoff, G. E. Crooks, and E. Vanden-Eijnden (2017) Geometric approach to optimal nonequilibrium control: Minimizing dissipation in nanomagnetic spin systems. Physical Review E 95 (1), pp. 012148. External Links: Document, 1607.07425, ISSN 24700053 Cited by: §7.
  • G. M. Rotskoff and E. Vanden-Eijnden (2018) Dynamical computation of the density of states using nonequilibrium importance sampling. Physical Review Letters 122 (15). External Links: Document, 1809.11132 Cited by: §1, §3.
  • M. Sajfutdinow, W. M. Jacobs, A. Reinhardt, C. Schneider, and D. M. Smith (2018) Direct observation and rational design of nucleation behavior in addressable self-assembly. Proceedings of the National Academy of Sciences of the United States of America 115 (26), pp. E5877–E5886. External Links: Document, 1712.01315, ISSN 10916490 Cited by: §1.
  • A. Sanchez-Gonzalez, V. Bapst, K. Cranmer, and P. Battaglia (2019) Hamiltonian Graph Networks with ODE Integrators. External Links: 1909.12790 Cited by: §7.
  • C. Schenck and D. Fox (2018) SPNets: Differentiable Fluid Dynamics for Deep Neural Networks. External Links: 1806.06094 Cited by: §7.
  • E. Schneider, L. Dai, R. Q. Topper, C. Drechsel-Grau, and M. E. Tuckerman (2017) Stochastic Neural Network Approach for Learning High-Dimensional Free Energy Surfaces. Physical Review Letters 119 (15), pp. 150601. External Links: Document, ISSN 0031-9007, Link Cited by: §7.
  • S. S. Schoenholz and E. D. Cubuk (2019) JAX, M.D.: End-to-End Differentiable, Hardware Accelerated, Molecular Dynamics in Pure Python. External Links: 1912.04232 Cited by: §7.
  • K. Schulten and P. Tavan (1978) A mechanism for the light-driven proton pump of halobacterium halobium. Nature 272 (5648), pp. 85–86. Cited by: §2.1.
  • K. T. Schütt, F. Arbabzadah, S. Chmiela, K. R. Müller, and A. Tkatchenko (2017) Quantum-chemical insights from deep tensor neural networks. Nature Communications 8, pp. 13890. External Links: Document, ISBN 2041-1723, ISSN 2041-1723 Cited by: §2.1, §8.3.
  • K. T. Schütt, P. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K. Müller (2017) SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. External Links: 1706.08566 Cited by: §7, §8.3.
  • A. W. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. Qin, A. Žídek, A. W.R. Nelson, A. Bridgland, H. Penedones, S. Petersen, K. Simonyan, S. Crossan, P. Kohli, D. T. Jones, D. Silver, K. Kavukcuoglu, and D. Hassabis (2020) Improved protein structure prediction using potentials from deep learning. Nature 577 (7792), pp. 706–710. External Links: Document, ISSN 14764687 Cited by: §7.
  • M. Shapiro and P. Brumer (2003) Principles of the Quantum Control of Molecular Processes. Wiley-VCH, pp. 250. ISBN 0-471-24184-9. Cited by: §2.1.
  • D. A. Sivak and G. E. Crooks (2016) Thermodynamic geometry of minimum-dissipation driven barrier crossing. Physical Review E 94 (5), pp. 052106. External Links: Document, 1608.04444, ISSN 24700053 Cited by: §1.
  • J. S. Smith, O. Isayev, and A. E. Roitberg (2017) ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost. Chemical Science 8 (4), pp. 3192–3203. External Links: Document, 1610.08935, ISSN 20416539 Cited by: §7.
  • T. Stievater, X. Li, D. G. Steel, D. Gammon, D. Katzer, D. Park, C. Piermarocchi, and L. Sham (2001) Rabi oscillations of excitons in single quantum dots. Physical Review Letters 87 (13), pp. 133603. Cited by: §2.1.
  • S. Tafoya, S. J. Large, S. Liu, C. Bustamante, and D. A. Sivak (2019) Using a system’s equilibrium behavior to reduce its energy dissipation in nonequilibrium processes. Proceedings of the National Academy of Sciences of the United States of America 116 (13), pp. 5920–5924. External Links: Document, ISSN 10916490 Cited by: §1.
  • T. Tamayo-Mendoza, C. Kreisbeck, R. Lindh, and A. Aspuru-Guzik (2018) Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock. ACS Central Science 4 (5), pp. 559–566. External Links: Document, 1711.08127, ISSN 23747951 Cited by: §7.
  • R. J. L. Townshend, R. Bedi, P. A. Suriana, and R. O. Dror (2018) End-to-End Learning on 3D Protein Structure for Interface Prediction. pp. 15642–15651. External Links: 1807.01297, Link Cited by: §7.
  • T. V. Tscherbul and P. Brumer (2014) Excitation of biomolecules with incoherent light: quantum yield for the photoisomerization of model retinal. The Journal of Physical Chemistry A 118 (17), pp. 3100–3111. Cited by: §2.1.
  • T. V. Tscherbul and P. Brumer (2015) Quantum coherence effects in natural light-induced processes: cis–trans photoisomerization of model retinal under incoherent excitation. Physical Chemistry Chemical Physics 17 (46), pp. 30904–30913. Cited by: §2.1, §5, §5.
  • S. Vaikuntanathan and C. Jarzynski (2008) Escorted free energy simulations: Improving convergence by reducing dissipation. Physical Review Letters 100 (19), pp. 190601. External Links: Document, 0804.3055, ISSN 00319007 Cited by: §3, §7.
  • J. Wang, S. Olsson, C. Wehmeyer, A. Pérez, N. E. Charron, G. de Fabritiis, F. Noé, and C. Clementi (2018) Machine Learning of Coarse-Grained Molecular Dynamics Force Fields. arXiv:1812.01736. External Links: Document, ISSN 2374-7943 Cited by: §7.
  • W. Wang and R. Gómez-Bombarelli (2019) Coarse-graining auto-encoders for molecular dynamics. npj Computational Materials 5 (1). External Links: Document, ISBN 4152401902615, ISSN 20573960 Cited by: §7.
  • Y. Wang, J. M. L. Ribeiro, and P. Tiwary (2019) Past–future information bottleneck for sampling molecular reaction coordinate simultaneously with thermodynamics and kinetics. Nature Communications 10 (1), pp. 1–8. External Links: Document, ISSN 20411723 Cited by: §7.
  • W. S. Warren, H. Rabitz, and M. Dahleh (1993) Coherent control of quantum dynamics: the dream is alive. Science 259 (5101), pp. 1581–1589. Cited by: §2.1.
  • C. Wehmeyer and F. Noé (2018) Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The Journal of Chemical Physics 148 (24), pp. 241703. External Links: Document, 1710.11239, ISSN 0021-9606 Cited by: §7.
  • H. Wu, A. Mardt, L. Pasquali, and F. Noe (2018) Deep Generative Markov State Models. arXiv:1805.07601. Cited by: §7.
  • T. Xie, A. France-Lanord, Y. Wang, Y. Shao-Horn, and J. C. Grossman (2019) Graph dynamical networks for unsupervised learning of atomic scale dynamics in materials. Nature Communications 10 (1). External Links: Document, 1902.06836, ISSN 20411723 Cited by: §7.
  • W. J. Xie, Y. Qi, and B. Zhang (2019) Characterizing chromatin folding coordinate and landscape with deep learning. bioRxiv, pp. 824417. External Links: Document, Link Cited by: §7.
  • W. J. Xie and B. Zhang (2018) Learning mechanism of chromatin domain formation with big data. bioRxiv (17), pp. 1–15. External Links: Document Cited by: §7.
  • G. Yang, X. Huang, Z. Hao, M. Liu, S. Belongie, and B. Hariharan (2019) PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows. External Links: 1906.12320, Link Cited by: §7.
  • K. Yao, J. E. Herr, D. W. Toth, R. Mckintyre, and J. Parkhill (2018) The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics. Chemical Science 9 (8), pp. 2261–2269. External Links: Document, ISSN 2041-6520 Cited by: §7, §8.3.
  • L. Zhang, W. E, and L. Wang (2018) Monge-Amp‘ere Flow for Generative Modeling. External Links: 1809.10188 Cited by: §7.
  • L. Zhang, J. Han, H. Wang, R. Car, and W. E (2018) DeePCG: Constructing coarse-grained models via deep neural networks. The Journal of Chemical Physics 149 (3), pp. 34101. External Links: Document, ISSN 0021-9606 Cited by: §7.
  • L. Zhang, J. Han, H. Wang, R. Car, and E. Weinan (2018) Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics. Physical Review Letters 120 (14), pp. 143001. External Links: Document, 1707.09571, ISSN 10797114 Cited by: §7, §8.3.
  • Y. D. Zhong, B. Dey, and A. Chakraborty (2019) Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control. External Links: 1909.12077 Cited by: §1, §7.
  • L. Zhu, V. Kleiman, X. Li, S. P. Lu, K. Trentelman, and R. J. Gordon (1995) Coherent laser control of the product distribution obtained in the photoexcitation of HI. Science 270 (5233), pp. 77–80. Cited by: §2.1.

8 Appendix

8.1 Nosé-Hoover chain integrator

Here we describe the Nosé-Hoover chain Nosé (1984); Martyna et al. (1992), the constant-temperature integrator algorithm mentioned in the paper. We applied this integrator to the Lennard-Jones system and polymer examples to simulate systems under constant temperature control. The variables used in the integrator are:

  • $N$: number of particles

  • $M$: number of virtual variables used in the chain

  • $i$: index for individual degrees of freedom, $i = 1, \dots, 3N$

  • $j$: index for virtual variables in the chain, $j = 1, \dots, M$

  • $p_i$: momentum for each degree of freedom

  • $q_i$: position for each degree of freedom

  • $m_i$: mass for each particle in the simulation

  • $Q_j$: coupling strengths to the heat-bath variables in the chain

  • $p_{\xi_j}$: virtual momenta

The coupled equations of motion are:

$$\dot{q}_i = \frac{p_i}{m_i}, \qquad \dot{p}_i = F_i - \frac{p_{\xi_1}}{Q_1} p_i, \qquad \dot{\xi}_j = \frac{p_{\xi_j}}{Q_j},$$
$$\dot{p}_{\xi_1} = \left( \sum_i \frac{p_i^2}{m_i} - 3N k_B T \right) - \frac{p_{\xi_2}}{Q_2} p_{\xi_1},$$
$$\dot{p}_{\xi_j} = \left( \frac{p_{\xi_{j-1}}^2}{Q_{j-1}} - k_B T \right) - \frac{p_{\xi_{j+1}}}{Q_{j+1}} p_{\xi_j} \quad (1 < j < M), \qquad \dot{p}_{\xi_M} = \frac{p_{\xi_{M-1}}^2}{Q_{M-1}} - k_B T.$$

The Nosé-Hoover chain integrator performs effective temperature control, and the integrated dynamics sample the Boltzmann distribution. The integrator is deterministic and time-reversible. Control of other thermodynamic variables can be realized with other integrator protocols, such as the Parrinello-Rahman method for maintaining constant pressure Parrinello and Rahman (1982).
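To make the coupled equations concrete, the following is a minimal sketch of one Nosé-Hoover-chain update in Python. It is illustrative only: the function name `nhc_step` is ours, and the first-order explicit Euler step is chosen for brevity, whereas production integrators use time-reversible operator-splitting schemes.

```python
import numpy as np

def nhc_step(q, p, m, F, p_xi, Q, kT, dt):
    """One explicit-Euler step of the Nose-Hoover-chain equations of motion.

    q, p, m : arrays of positions, momenta, and masses (one entry per DOF)
    F       : callable returning the force array F(q)
    p_xi, Q : virtual chain momenta and coupling strengths (length M)
    kT, dt  : target temperature (in energy units) and time step
    """
    M = len(Q)
    n_dof = len(q)
    # Physical degrees of freedom, damped by the first chain variable.
    dq = p / m
    dp = F(q) - (p_xi[0] / Q[0]) * p
    # Chain variables: the first thermostats the physical system,
    # each subsequent one thermostats the variable before it.
    dp_xi = np.empty(M)
    dp_xi[0] = np.sum(p**2 / m) - n_dof * kT
    for j in range(1, M):
        dp_xi[j] = p_xi[j - 1]**2 / Q[j - 1] - kT
    # Couple each chain variable (except the last) to its successor.
    dp_xi[:-1] -= (p_xi[1:] / Q[1:]) * p_xi[:-1]
    return q + dt * dq, p + dt * dp, p_xi + dt * dp_xi
```

Repeated calls couple the physical momenta to the chain variables, which push the average kinetic energy toward $n_{\mathrm{dof}} k_B T$; for long trajectories, a time-reversible splitting update should replace the Euler step shown here.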

8.2 Learned Control Hamiltonian for Two-state Isomerization

We provide the learned electric field spectrum for the electric-field-driven two-state isomerization example in Figure 6.

Figure 6: Comparison of the initial electric field spectrum (epoch 1) with the learned electric field spectrum (epoch 50).

8.3 Graph neural networks

The model is based on graph convolutions, which have achieved state-of-the-art predictive performance for chemical properties and molecular energies and forces Duvenaud et al. (2015); Schütt et al. (2017); Zhang et al. (2018); Yao et al. (2018); Mailoa et al. (2019). In our work, we utilized the SchNet architecture Schütt et al. (2017) to learn the control Hamiltonian. The model consists of a message step and an update step that systematically gather information from neighboring atoms. A 3D molecular graph is used, with the existence of a connection between atoms decided by a fixed distance cutoff. Defining $i$ as the index for each atom and $\mathcal{N}(i)$ as its set of neighbors, the graph convolution process iteratively updates the atomic embeddings $h_i$ by aggregating ``messages" from the connected atoms $h_j$ and the edge features $e_{ij}$. This update process is summarized by

$$m_i^{t+1} = \sum_{j \in \mathcal{N}(i)} M_t\left(h_i^t, h_j^t, e_{ij}\right), \qquad h_i^{t+1} = U_t\left(h_i^t, m_i^{t+1}\right).$$

By performing this operation several times, a many-body correlation function can be constructed to represent the potential energy surface of a molecular system. In the case of SchNet, the update function is simply a summation over the atomic embeddings. The message function is parameterized by

$$M_t\left(h_i^t, h_j^t, e_{ij}\right) = \phi_{\mathrm{atom}}^t\left(h_j^t\right) \odot \phi_{\mathrm{filter}}^t\left(e_{ij}\right),$$

where the $\phi^t$ are independent multi-layer perceptrons (MLPs). For each convolution $t$, a separate message function is applied to characterize atom correlations at different scales. After taking the element-wise product of the atomic fingerprints $\phi_{\mathrm{atom}}^t(h_j^t)$ and the pair-interaction fingerprints $\phi_{\mathrm{filter}}^t(e_{ij})$, the joint fingerprint is further parameterized by another MLP to incorporate more non-linearity into the model. The final updated fingerprints are used as inputs to two fully-connected layers that yield atom-wise energies. The sum of these energies gives the total energy of the system. The atomic forces are the negative gradients of the energy with respect to atomic positions; they are easily computed through automatic differentiation as implemented in PyTorch Paszke et al. (2019) and TensorFlow Abadi et al. (2016). We used PyTorch in our demonstrations.
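To make the message and update steps above concrete, here is a small NumPy sketch of one SchNet-style continuous-filter convolution. This is a hypothetical stand-in with randomly initialized weights, not the SchNet reference implementation: the names `gaussian_rbf`, `mlp`, and `schnet_like_convolution` are ours, and a `tanh` MLP replaces SchNet's shifted-softplus networks.

```python
import numpy as np

def gaussian_rbf(r, centers, gamma=10.0):
    """Expand an interatomic distance in a Gaussian radial basis (edge features e_ij)."""
    return np.exp(-gamma * (r - centers) ** 2)

def mlp(x, W1, b1, W2, b2):
    """A two-layer perceptron; stand-in for SchNet's shifted-softplus networks."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def schnet_like_convolution(h, pos, cutoff, params, centers):
    """One continuous-filter convolution in the spirit of SchNet.

    Messages are element-wise products of an atom MLP applied to the neighbor
    embeddings h_j and a filter MLP applied to the edge features e_ij, summed
    over neighbors within the distance cutoff and added to h_i.
    """
    n, d = h.shape
    new_h = h.copy()
    for i in range(n):
        msg = np.zeros(d)
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            if r < cutoff:  # fixed-cutoff 3D molecular graph
                e_ij = gaussian_rbf(r, centers)
                msg += mlp(h[j], *params["atom"]) * mlp(e_ij, *params["filter"])
        new_h[i] = h[i] + msg  # update step: summation over the messages
    return new_h
```

With `new_h` in hand, atom-wise energies would follow from two further fully-connected layers, and forces from differentiating the summed energy with respect to `pos` (trivial in PyTorch's autograd, tedious by hand in NumPy), which is why the actual implementation uses an automatic-differentiation framework.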