Energy-Conserving Neural Network for Turbulence Closure Modeling

01/31/2023
by Toby van Gastelen, et al.

In turbulence modeling, and in particular in the Large-Eddy Simulation (LES) framework, we are concerned with finding closure models that represent the effect of the unresolved subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their adherence to physical structure (e.g. symmetries, conservation laws) remain open problems. To tackle both issues, we take the "discretize first, filter next" approach, in which we apply a spatial averaging filter to existing energy-conserving (fine-grid) discretizations. The main novelty is that we extend the system of equations describing the filtered solution with a set of equations that describe the evolution of (a compressed version of) the energy of the subgrid scales. Having an estimate of this energy, we can invoke energy conservation to derive stability. The compressed variables are determined via a data-driven technique such that the energy of the subgrid scales is matched. For the extended system, the closure model should be energy-conserving, and a new skew-symmetric convolutional neural network architecture is proposed that has this property. Stability is thus guaranteed, independent of the actual weights and biases of the network. Importantly, our framework allows energy exchange between the resolved scales and the compressed subgrid scales, and thus enables backscatter. To model dissipative systems (e.g. viscous flows), the framework is extended with a diffusive component. The introduced neural network architecture is also constructed to satisfy momentum conservation. We apply the new methodology to both the viscous Burgers' equation and the Korteweg-de Vries equation in 1D and show superior stability properties compared to a vanilla convolutional neural network.
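
To illustrate the key mechanism, the sketch below shows how a learned closure can be made energy-conserving by construction. It is a deliberately simplified, hypothetical PyTorch example (periodic 1D grid, a single channel, an arbitrary odd kernel size; none of these choices are taken from the paper, whose actual operator is state-dependent and acts on the extended state that includes the compressed subgrid energy). The principle it demonstrates is the one stated in the abstract: if the closure is built as a skew-symmetric operator K - K^T, its contribution to the discrete energy u^T (K - K^T) u is exactly zero, regardless of the learned weights.

```python
# Minimal sketch of an energy-conserving (skew-symmetric) closure operator.
# Assumptions (hypothetical, not taken from the paper): PyTorch, a periodic 1D
# grid, one channel, and a fixed odd kernel size. The closure is linear here;
# the paper's architecture is state-dependent and acts on the extended state.
import torch
import torch.nn.functional as F


def circular_conv(u, kernel):
    """Circular (periodic) 1D convolution of the state u with a given kernel."""
    pad = kernel.shape[-1] // 2
    u_padded = F.pad(u, (pad, pad), mode="circular")
    return F.conv1d(u_padded, kernel)


class SkewSymmetricClosure(torch.nn.Module):
    """Closure c(u) = (K - K^T) u with K a learned circular convolution.

    For periodic boundaries, the transpose of a circular convolution is the
    circular convolution with the flipped kernel, so K - K^T is skew-symmetric
    and u . c(u) = 0 for any weights: the closure cannot create or destroy energy.
    """

    def __init__(self, kernel_size=5):
        super().__init__()
        self.kernel = torch.nn.Parameter(0.1 * torch.randn(1, 1, kernel_size))

    def forward(self, u):
        k_flipped = torch.flip(self.kernel, dims=[-1])  # kernel of K^T
        return circular_conv(u, self.kernel) - circular_conv(u, k_flipped)


# Quick check: the closure's contribution to the discrete energy vanishes.
u = torch.randn(1, 1, 64)          # one sample, one channel, 64 grid points
c = SkewSymmetricClosure()(u)
print(torch.sum(u * c).item())     # ~0 up to floating-point round-off
```

The diffusive component mentioned in the abstract can be handled analogously, for example with a negative semi-definite term such as -Q^T Q u, which can only remove energy; the sketch above omits this.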


