Learning to grow: control of materials self-assembly using evolutionary reinforcement learning
We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols. Presented with molecular simulation trajectories, networks learn to change temperature and chemical potential in order to promote the assembly of desired structures or to choose between isoenergetic polymorphs. In the first case, networks qualitatively reproduce the results of previously known protocols, but faster and with slightly higher fidelity; in the second case they identify previously unknown strategies, from which we can extract physical insight. Networks that take as input the elapsed time of the simulation or microscopic information from the system are both effective, the latter more so. The network architectures we have used can be straightforwardly adapted to handle large numbers of input data and output control parameters, and so can be applied to a broad range of systems. Our results have been achieved with no human input beyond the specification of which order parameter to promote, pointing the way to the design of synthesis protocols by artificial intelligence.
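The core idea — evolving the weights of a network that maps simulation state (here, just elapsed time) to a control parameter such as temperature, with fitness given by the quality of the assembled structure — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the network size, the (1+1) evolution strategy, and the surrogate "yield" function (which simply rewards annealing-like protocols that start hot and end cold without wild oscillations) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def protocol(params, t):
    """Toy network: elapsed time t in [0, 1] -> positive temperature."""
    w1, b1, w2, b2 = params
    h = np.tanh(np.outer(t, w1) + b1)   # (len(t), hidden)
    return 1.0 + np.exp(h @ w2 + b2)    # strictly positive output

def init_params(hidden=8):
    return [rng.normal(0, 0.5, hidden), np.zeros(hidden),
            rng.normal(0, 0.5, hidden), 0.0]

def toy_yield(params):
    """Surrogate fitness standing in for an assembly simulation.

    Rewards protocols that anneal (start hot to escape kinetic traps,
    end cold to stabilize the target) and penalizes rough profiles.
    This is an invented stand-in, not the paper's order parameter.
    """
    t = np.linspace(0.0, 1.0, 50)
    T = protocol(params, t)
    return (T[0] - T[-1]) - 0.1 * np.abs(np.diff(T)).sum()

def mutate(params, sigma=0.1):
    return [p + rng.normal(0, sigma, np.shape(p)) for p in params]

# (1+1) evolution strategy: keep a mutated network only if it improves fitness.
best = init_params()
best_f = f0 = toy_yield(best)
for _ in range(500):
    cand = mutate(best)
    f = toy_yield(cand)
    if f > best_f:
        best, best_f = cand, f
```

Because the loop only ever accepts improvements, the final fitness is monotonically no worse than the starting protocol's; in the paper's setting the surrogate would be replaced by an actual molecular simulation evaluated on the chosen order parameter.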