Machine Learning based Autotuning of a GPU-accelerated Computational Fluid Dynamics Code
A machine learning-based autotuning technique is employed to optimize 14 key parameters associated with GPU kernel scheduling, including the number of thread blocks and the number of threads per block. The technique involves independent training for a single GPU type as well as combined training across multiple GPU types. To evaluate the effectiveness of the autotuning approach, a computational fluid dynamics problem accelerated by a single GPU is used for training and testing on the C2075, P100, and V100 GPUs. The results presented in this study demonstrate the potential of artificial neural networks for autotuning a wide range of parameters to achieve high performance in computational fluid dynamics applications, while requiring only a small fraction of samples from a large search space.
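To illustrate the general idea of neural-network-based autotuning described above, the following minimal sketch trains a surrogate performance model on a small sample of a kernel-configuration search space and uses it to rank the remaining configurations. The parameter names, ranges, the synthetic cost function, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the authors' actual network, parameters, or CFD kernel; only two of the 14 tuned parameters are shown.

```python
# Hypothetical sketch of ANN-based autotuning: sample a fraction of the
# search space, train a neural-network performance model, then pick the
# configuration with the lowest predicted runtime.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative search space over two scheduling parameters
# (threads per block, number of blocks); the paper tunes 14 parameters.
threads_per_block = [32, 64, 128, 256, 512, 1024]
num_blocks = [2**k for k in range(4, 12)]
search_space = np.array(
    list(itertools.product(threads_per_block, num_blocks)), dtype=float
)

def measure_runtime(config):
    """Placeholder for timing the GPU kernel with a given configuration.
    In practice this would launch the CFD kernel and return wall-clock time."""
    tpb, blocks = config
    # Synthetic cost model used only so the sketch runs end to end.
    return 1.0 / (tpb * blocks) + 1e-4 * abs(tpb - 256) + rng.normal(0.0, 1e-5)

# Train on only a small fraction of the full search space.
train_idx = rng.choice(len(search_space), size=len(search_space) // 4, replace=False)
X_train = search_space[train_idx]
y_train = np.array([measure_runtime(c) for c in X_train])

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Predict runtimes over the whole space and report the most promising config.
pred = model.predict(scaler.transform(search_space))
best = search_space[np.argmin(pred)]
print(f"Predicted best config: threads/block={int(best[0])}, blocks={int(best[1])}")
```

In an actual autotuning workflow, the predicted best configurations would then be benchmarked on the target GPU to confirm the model's ranking.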