Optimal Transport-inspired Deep Learning Framework for Slow-Decaying Problems: Exploiting Sinkhorn Loss and Wasserstein Kernel

08/26/2023
by   Moaad Khamlich, et al.

Reduced order models (ROMs) are widely used in scientific computing to tackle high-dimensional systems. However, traditional ROM methods may only partially capture the intrinsic geometric characteristics of the data. These characteristics encompass the underlying structure, relationships, and essential features that are crucial for accurate modeling. To overcome this limitation, we propose a novel ROM framework that integrates optimal transport (OT) theory and neural network-based methods. Specifically, we investigate the Kernel Proper Orthogonal Decomposition (kPOD) method with the Wasserstein distance as a custom kernel, and we efficiently train the resulting neural network (NN) using the Sinkhorn algorithm. By leveraging an OT-based nonlinear reduction, the presented framework captures the geometric structure of the data, which is crucial for accurately learning the reduced solution manifold. Compared with traditional metrics such as mean squared error or cross-entropy, using the Sinkhorn divergence as the loss function improves training stability and robustness against overfitting and noise, and accelerates convergence. To showcase the approach's effectiveness, we conduct experiments on a set of challenging test cases exhibiting a slow decay of the Kolmogorov n-width. The results show that our framework outperforms traditional ROM methods in terms of accuracy and computational efficiency.
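The two OT ingredients named in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the regularization strength `eps`, the kernel bandwidth `sigma`, and the 1D-histogram setting are all illustrative assumptions. The sketch shows (i) the entropy-regularized Sinkhorn cost and its debiased divergence, usable as a training loss, and (ii) a Wasserstein-kernel kPOD, i.e. kernel PCA on a kernel matrix built from pairwise Sinkhorn divergences between snapshots.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.05, n_iters=200):
    """Entropy-regularized OT cost between histograms a and b
    with cost matrix C, via Sinkhorn matrix scaling."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):              # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # entropic transport plan
    return np.sum(P * C)

def sinkhorn_divergence(a, b, C_ab, C_aa, C_bb, eps=0.05):
    """Debiased divergence S(a,b) = OT(a,b) - (OT(a,a) + OT(b,b)) / 2;
    unlike the raw entropic cost, it vanishes when a == b."""
    return (sinkhorn_cost(a, b, C_ab, eps)
            - 0.5 * sinkhorn_cost(a, a, C_aa, eps)
            - 0.5 * sinkhorn_cost(b, b, C_bb, eps))

def wasserstein_kernel_kpod(snapshots, grid, eps=0.05, sigma=1.0, r=2):
    """Sketch of kPOD with a Wasserstein-based kernel:
    build K_ij = exp(-S(mu_i, mu_j) / sigma^2) from pairwise Sinkhorn
    divergences, then run kernel PCA to get r reduced coordinates."""
    C = (grid[:, None] - grid[None, :]) ** 2   # squared-distance ground cost
    n = len(snapshots)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = sinkhorn_divergence(
                snapshots[i], snapshots[j], C, C, C, eps)
    K = np.exp(-D / sigma**2)                  # Gaussian-type OT kernel
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    w, V = np.linalg.eigh(H @ K @ H)           # kernel PCA eigendecomposition
    idx = np.argsort(w)[::-1][:r]              # leading r eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

In practice one would use an optimized OT library (e.g. POT or GeomLoss) and automatic differentiation to backpropagate the Sinkhorn loss through the NN; the nested loops here are for clarity only, and quadratic in the number of snapshots.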


