Neural Network Training Using ℓ_1-Regularization and Bi-fidelity Data

05/27/2021
by Subhayan De, et al.

Because they can accurately represent the functional relationship between the inputs of a physical system's model and output quantities of interest, neural networks have become popular for surrogate modeling in scientific applications. However, as these networks are over-parameterized, their training often requires a large amount of data. To prevent overfitting and reduce generalization error, regularization based on, e.g., the ℓ_1- and ℓ_2-norms of the parameters is applied. Similarly, connections of the network may be pruned to increase sparsity in the network parameters. In this paper, we explore the effects of sparsity-promoting ℓ_1-regularization on training neural networks when only a small training dataset from a high-fidelity model is available. In contrast to standard ℓ_1-regularization, which is known to be inadequate in this setting, we consider two variants of ℓ_1-regularization informed by the parameters of an identical network trained using data from lower-fidelity models of the problem at hand. These bi-fidelity strategies generalize transfer learning of neural networks, which uses the parameters learned from a large low-fidelity dataset to efficiently train networks on a small high-fidelity dataset. We also compare the bi-fidelity strategies with two ℓ_1-regularization methods that use only the high-fidelity dataset. Three numerical examples for propagating uncertainty through physical systems are used to show that the proposed bi-fidelity ℓ_1-regularization strategies produce errors that are one order of magnitude smaller than those of networks trained using only the high-fidelity datasets.
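The abstract does not spell out the two bi-fidelity variants, but the core idea of informing the ℓ_1 penalty with parameters learned from low-fidelity data can be illustrated. Below is a minimal PyTorch sketch, assuming (as one plausible variant) that the penalty is the ℓ_1-norm of the deviation of the high-fidelity network's parameters from those of an identical network pre-trained on low-fidelity data. The architecture, the penalty weight lam, the function names, and the placeholder datasets are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


def make_net(n_in=2, n_hidden=64, n_out=1):
    # Hypothetical fully connected surrogate; layer sizes are illustrative only.
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.Tanh(),
        nn.Linear(n_hidden, n_out),
    )


def l1_penalty(net, lf_params=None):
    # Standard l1-regularization when lf_params is None; otherwise an assumed
    # bi-fidelity-informed variant that penalizes the l1-norm of the deviation
    # from parameters learned on the low-fidelity data.
    if lf_params is None:
        return sum(p.abs().sum() for p in net.parameters())
    return sum((p - q).abs().sum() for p, q in zip(net.parameters(), lf_params))


def train(net, x, y, lam=1e-3, lf_params=None, lr=1e-3, epochs=2000):
    # Mean-squared surrogate loss plus the (possibly bi-fidelity) l1 penalty.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(net(x), y) + lam * l1_penalty(net, lf_params)
        loss.backward()
        opt.step()
    return net


# Usage sketch: pre-train on abundant low-fidelity data, then train an
# identical network on scarce high-fidelity data with the informed penalty.
x_lf, y_lf = torch.rand(2000, 2), torch.rand(2000, 1)  # placeholder LF data
x_hf, y_hf = torch.rand(50, 2), torch.rand(50, 1)      # placeholder HF data
lf_net = train(make_net(), x_lf, y_lf)
lf_params = [p.detach().clone() for p in lf_net.parameters()]
hf_net = train(make_net(), x_hf, y_hf, lf_params=lf_params)
```

Setting lf_params to None recovers ordinary ℓ_1-regularization on the high-fidelity network alone, which is one of the baselines the paper compares against.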
