Batched Predictors Generalize within Distribution

07/18/2023
by Andreas Loukas, et al.

We study the generalization properties of batched predictors, i.e., models tasked with predicting the mean label of a small set (or batch) of examples. The batched prediction paradigm is particularly relevant for models deployed to assess the quality of a group of compounds in preparation for offline testing. Using a suitable generalization of the Rademacher complexity, we prove that batched predictors come with exponentially stronger generalization guarantees than the standard per-sample approach. Surprisingly, the proposed bound holds independently of overparametrization. Our theoretical insights are validated experimentally for various tasks, architectures, and applications.
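To make the setup concrete, below is a minimal sketch of the batched prediction paradigm, assuming a linear per-sample model and squared loss. The data generation, batch size, and training procedure are illustrative assumptions, not the paper's experimental setup. Examples are grouped into batches of size k, and the model is trained to match the mean label of each batch rather than individual labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy linear ground truth.
n, d, k = 1200, 5, 8              # samples, feature dim, batch size
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Group examples into batches of size k; the target is the batch-mean label.
Xb = X.reshape(-1, k, d)          # shape (n // k, k, d)
y_bar = y.reshape(-1, k).mean(axis=1)

# For a linear model f(x) = x @ w, the batched prediction
# (1/k) * sum_i f(x_i) equals f evaluated at the batch-mean features,
# so least squares on the mean features minimizes the batched squared loss.
X_bar = Xb.mean(axis=1)
w_hat, *_ = np.linalg.lstsq(X_bar, y_bar, rcond=None)

print("batched-train MSE:", np.mean((X_bar @ w_hat - y_bar) ** 2))
print("per-sample MSE:   ", np.mean((X @ w_hat - y) ** 2))
```

For nonlinear models the batched prediction is still the average of per-sample outputs, (1/k) * sum_i f(x_i), but it no longer collapses to f of the mean input; the paper's Rademacher-style argument bounds the gap between this batched empirical risk and its population counterpart.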


Related research

Venn-Abers predictors (10/31/2012)
This paper continues study, both theoretical and empirical, of the metho...

Set-based Neural Network Encoding (05/26/2023)
We propose an approach to neural network weight encoding for generalizat...

On the Generalization and Adaption Performance of Causal Models (06/09/2022)
Learning models that offer robust out-of-distribution generalization and...

Performance Prediction Under Dataset Shift (06/21/2022)
ML models deployed in production often have to face unknown domain chang...

Prediction of properties of steel alloys (03/29/2020)
We present a study of possible predictors based on four supervised machi...

Trajectory Test-Train Overlap in Next-Location Prediction Datasets (03/07/2022)
Next-location prediction, consisting of forecasting a user's location gi...

Limited memory predictors with compact explicit representations (02/11/2020)
The paper presents limited memory time-invariant linear integral predict...
