
A novel residual whitening based training to avoid overfitting

by Anand Ramakrishnan, et al.

In this paper we demonstrate that adding a penalty that minimizes the autocorrelation of the residuals during training prevents overfitting in machine learning models. Using extrapolative test sets drawn from different problems, together with decorrelation objective functions, we create models that can predict more complex systems. The resulting models are interpretable, extrapolative, and data-efficient, and they capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and to an actual physical system in the context of system identification. We examine several ways of composing domain models with neural models for time series, boosting, bagging, and auto-encoding, on systems of varying complexity and non-linearity. Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.
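The core idea can be sketched in a few lines: alongside the usual mean-squared error, add a term that penalizes the normalized autocorrelation of the residuals at small lags, so that the optimizer is pushed toward residuals that look like white noise. This is a minimal NumPy illustration of the idea, not the authors' implementation; the function names, the lag range, and the weighting factor `lam` are assumptions chosen for clarity.

```python
import numpy as np

def autocorr_penalty(residuals, max_lag=5):
    """Sum of squared normalized autocorrelations of the residuals
    at lags 1..max_lag. White-noise residuals yield a value near 0;
    structured (correlated) residuals yield a larger value."""
    r = residuals - residuals.mean()
    denom = np.dot(r, r) + 1e-12  # guard against all-zero residuals
    return sum((np.dot(r[:-k], r[k:]) / denom) ** 2
               for k in range(1, max_lag + 1))

def whitened_loss(y_true, y_pred, lam=0.1, max_lag=5):
    """MSE plus a residual-whitening penalty weighted by lam."""
    residuals = y_true - y_pred
    mse = np.mean(residuals ** 2)
    return mse + lam * autocorr_penalty(residuals, max_lag)
```

Minimizing `whitened_loss` instead of plain MSE discourages the model from leaving systematic structure in its errors: a model whose residuals oscillate or drift pays the extra penalty, while one whose residuals are uncorrelated does not.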

