Combining high performance hardware, cloud computing, and deep learning frameworks to accelerate physical simulations: probing the Hopfield Network
The synthesis of high-performance computing hardware (particularly Graphics Processing Units), cloud computing services (like Google Colab), and high-level deep learning frameworks (such as PyTorch) has powered the burgeoning field of artificial intelligence. While these technologies are popular in the computer science discipline, the physics community is less aware of how such innovations, freely available online, can improve research and education. In this tutorial, we take the Hopfield Network as an example to show how the confluence of these fields can dramatically accelerate physics-based computer simulations and remove technical barriers in implementing such programs, thereby making physics experimentation and education faster and more accessible. To do so, we introduce the cloud, the GPU, and AI frameworks that can be easily repurposed for physics simulation. We then introduce the Hopfield Network and explain how to produce large-scale simulations and visualizations for free in the cloud with very little code (fully self-contained in the text). Finally, we suggest programming exercises throughout the paper, geared towards advanced undergraduate students studying physics, biophysics, or computer science.
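To give a flavor of the approach, the sketch below shows a minimal GPU-accelerated Hopfield Network in PyTorch, runnable for free on Google Colab. It is not the paper's own code: it assumes standard Hebbian pattern storage with zero self-coupling and synchronous sign updates, and the sizes and variable names are illustrative.

```python
import torch

# Use a GPU if one is available (e.g., on Google Colab), otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

N = 1000  # number of neurons/spins (illustrative size)
P = 10    # number of stored patterns

# Random +/-1 patterns stored via the Hebbian rule, with the diagonal zeroed out.
patterns = (torch.randint(0, 2, (P, N), device=device).float() * 2 - 1)
W = patterns.t() @ patterns / N
W.fill_diagonal_(0)

# Start from a noisy copy of the first pattern (flip ~10% of the spins).
flip = 1 - 2 * (torch.rand(N, device=device) < 0.1).float()
state = patterns[0] * flip

# Relax the network with synchronous updates: s <- sign(W s).
for _ in range(20):
    state = torch.sign(W @ state)
    state[state == 0] = 1  # break ties in favor of +1

# Overlap of 1.0 means the stored pattern was recovered exactly.
overlap = (state @ patterns[0]) / N
print(f"Overlap with stored pattern: {overlap.item():.3f}")
```

Because the update is a single matrix-vector product, the same few lines scale to much larger networks when the tensors live on the GPU.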