A Look at the Effect of Sample Design on Generalization through the Lens of Spectral Analysis

06/06/2019
by   Bhavya Kailkhura, et al.

This paper provides a general framework for studying the effect of the sampling properties of training data on the generalization error of learned machine learning (ML) models. Specifically, we propose a new spectral analysis of the generalization error, expressed in terms of the power spectra of the sampling pattern and of the function involved. The framework is built in Euclidean space using Fourier analysis and establishes a connection between certain high-dimensional geometric objects and the optimal spectral form of different state-of-the-art sampling patterns. Subsequently, we estimate the expected error bounds and convergence rates of different state-of-the-art sampling patterns as the number of samples and the dimension increase. We make several observations about the generalization error that hold irrespective of the approximation scheme (or learning architecture) and the training (or optimization) algorithm. Our results also shed light on how to formulate design principles for constructing optimal sampling methods for particular problems.
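The central quantity in such an analysis, the power spectrum of a sampling pattern, can be estimated numerically. As an illustrative sketch (not the paper's code), the empirical power spectrum (periodogram) of a 2D point set over a grid of integer frequencies can be computed as follows; the function name and frequency range are assumptions made for this example:

```python
import numpy as np

def periodogram(points, max_freq=16):
    """Empirical power spectrum of a point set in [0,1)^2.

    For frequencies k, computes P(k) = |sum_j exp(-2*pi*i k.x_j)|^2 / N.
    For random (Poisson) sampling, E[P(k)] is ~1 for k != 0 and P(0) = N.
    """
    n = len(points)
    freqs = np.arange(-max_freq, max_freq + 1)
    kx, ky = np.meshgrid(freqs, freqs)
    k = np.stack([kx.ravel(), ky.ravel()], axis=1)   # (F, 2) frequency vectors
    phases = np.exp(-2j * np.pi * (k @ points.T))    # (F, N) complex exponentials
    spectrum = np.abs(phases.sum(axis=1)) ** 2 / n   # periodogram per frequency
    return spectrum.reshape(kx.shape)

# Random sampling: flat spectrum away from the DC peak at k = (0, 0)
rng = np.random.default_rng(0)
random_pts = rng.random((256, 2))
P = periodogram(random_pts)
```

Different sampling patterns (e.g. jittered or blue-noise sets) produce characteristic spectral shapes under this estimator, which is what a spectral view of generalization error compares.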

