Adversarial Attacks on Uncertainty Enable Active Learning for Neural Network Potentials
Neural network (NN)-based interatomic potentials provide fast prediction of potential energy surfaces with the accuracy of electronic structure methods. However, NN predictions are only reliable within well-learned training domains, and their behavior when extrapolating is unknown. Uncertainty quantification through NN committees identifies domains with low prediction confidence, but thoroughly exploring the configuration space for training NN potentials often requires slow atomistic simulations. Here, we employ adversarial attacks with a differentiable uncertainty metric to sample new molecular geometries and bootstrap NN potentials. In combination with an active learning loop, the extrapolation power of NN potentials is improved beyond the original training data with few additional samples. The framework is demonstrated on multiple examples, leading to better sampling of kinetic barriers and collective variables without extensive prior data on the relevant geometries. Adversarial attacks offer a new way to simultaneously sample the phase space and bootstrap NN potentials, increasing their robustness and enabling faster, accurate prediction of potential energy landscapes.
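The core idea can be illustrated with a minimal sketch: a committee of models predicts energies, the committee variance serves as a differentiable uncertainty, and gradient ascent on that uncertainty (weighted by a Boltzmann factor so the attack stays in thermodynamically plausible regions) proposes new geometries for retraining. The toy one-dimensional "potentials", their parameters, and the finite-difference gradient below are all illustrative assumptions, not the paper's actual models or implementation.

```python
import numpy as np

# Hypothetical committee: each "model" is a 1-D quadratic potential
# E_i(x) = a_i * (x - b_i)**2, standing in for trained NN potentials.
rng = np.random.default_rng(0)
committee = list(zip(rng.uniform(0.9, 1.1, 4), rng.uniform(-0.1, 0.1, 4)))

def energies(x):
    """Committee predictions at geometry x."""
    return np.array([a * (x - b) ** 2 for a, b in committee])

def adversarial_loss(x, kT=1.0):
    """Committee variance (uncertainty) weighted by a Boltzmann factor,
    so high-uncertainty but physically implausible geometries are penalized."""
    E = energies(x)
    return np.exp(-E.mean() / kT) * E.var()

def attack(x0, steps=300, lr=10.0, eps=1e-5):
    """Gradient ascent on the uncertainty; a central finite difference
    stands in for the automatic differentiation a real NN would use."""
    x = x0
    for _ in range(steps):
        g = (adversarial_loss(x + eps) - adversarial_loss(x - eps)) / (2 * eps)
        x += lr * g
    return x

# The adversarial geometry would be labeled with the reference method
# (e.g. DFT) and added to the training set in the active learning loop.
x_adv = attack(0.2)
```

In this sketch the attack drives the geometry away from the training data (where all committee members agree) toward a region where their predictions diverge, which is exactly the kind of sample most informative to add in the next active learning iteration.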