Cooperative data-driven modeling

11/23/2022
by Aleksandr Dekhovich, et al.

Data-driven modeling in mechanics is evolving rapidly, driven by recent advances in machine learning, especially artificial neural networks. As the field matures, new data and models created by different groups become available, opening possibilities for cooperative modeling. However, artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one. This hinders cooperation because adapting an existing model for a new task degrades its performance on a previous task trained by someone else. We developed a continual learning method that addresses this issue, applying it here for the first time to solid mechanics. In particular, we apply the method to recurrent neural networks to predict history-dependent plasticity behavior, although it can be used with any other architecture (feedforward, convolutional, etc.) and to predict other phenomena. This work intends to spawn future developments in continual learning that will foster cooperative strategies within the mechanics community to solve increasingly challenging problems. We show that the chosen continual learning strategy can sequentially learn several constitutive laws without forgetting them, using less data to achieve the same error as standard training of one law per model.
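Catastrophic forgetting, and the regularization-style remedies used by many continual learning methods, can be illustrated with a toy one-parameter regression model. This is a hypothetical sketch (plain gradient descent with a quadratic anchor penalty, in the spirit of methods like Synaptic Intelligence), not the authors' actual method; all function names and parameters here are illustrative.

```python
# Toy illustration of catastrophic forgetting and a regularization-based
# continual learning remedy. Model: y = w * x, one trainable weight.
def make_task(slope, n=100):
    """Generate data for a linear 'constitutive law' y = slope * x."""
    xs = [i / (n - 1) * 2 - 1 for i in range(n)]  # evenly spaced in [-1, 1]
    return xs, [slope * x for x in xs]

def mse(w, xs, ys):
    """Mean squared error of the model y = w * x on a dataset."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(w, xs, ys, anchor=None, lam=0.0, lr=0.1, steps=500):
    """Gradient descent on MSE; optionally penalize drift from 'anchor'."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        if anchor is not None:
            grad += 2 * lam * (w - anchor)  # pull toward old-task weights
        w -= lr * grad
    return w

xa, ya = make_task(2.0)  # task A: law with slope 2
xb, yb = make_task(5.0)  # task B: law with slope 5

w_a = train(0.0, xa, ya)                        # learn task A (w ~ 2)
w_naive = train(w_a, xb, yb)                    # naive fine-tune on B (w ~ 5)
w_cl = train(w_a, xb, yb, anchor=w_a, lam=1.0)  # anchored continual update

print(mse(w_naive, xa, ya))  # large: task A is forgotten
print(mse(w_cl, xa, ya))     # smaller: forgetting is mitigated
```

After naive fine-tuning on task B the weight converges to the task-B optimum and the task-A error grows large, while the anchored update trades a little task-B accuracy for a much smaller task-A error. Practical methods replace the single scalar anchor with per-parameter importance weights over the whole network.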


Related research

04/08/2023 · A multifidelity approach to continual learning for physical systems
We introduce a novel continual learning method based on multifidelity de...

09/07/2022 · The Role Of Biology In Deep Learning
Artificial neural networks took a lot of inspiration from their biologic...

10/11/2022 · Continual Learning by Modeling Intra-Class Variation
It has been observed that neural networks perform poorly when the data o...

03/13/2017 · Continual Learning Through Synaptic Intelligence
While deep learning has led to remarkable advances across diverse applic...

11/26/2021 · Latent Space based Memory Replay for Continual Learning in Artificial Neural Networks
Memory replay may be key to learning in biological brains, which manage ...

03/12/2021 · Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation
Learning continuously during all model lifetime is fundamental to deploy...

03/22/2022 · Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals
Humans can learn several tasks in succession with minimal mutual interfe...
