Higher-Order Generalization Bounds: Learning Deep Probabilistic Programs via PAC-Bayes Objectives

03/30/2022
by Jonathan Warrell et al.

Deep Probabilistic Programming (DPP) allows powerful models based on recursive computation to be learned using efficient deep-learning optimization techniques. Additionally, DPP offers a unified perspective, where inference and learning algorithms are treated on a par with models as stochastic programs. Here, we present a framework for representing and learning flexible PAC-Bayes bounds as stochastic programs using DPP-based methods. In particular, we show that DPP techniques may be leveraged to derive generalization bounds that draw on the compositionality of DPP representations. In turn, the bounds we introduce offer principled training objectives for higher-order probabilistic programs. We give a definition of a higher-order generalization bound, which naturally encompasses single- and multi-task generalization perspectives (including transfer- and meta-learning) and a novel class of bounds based on a learned measure of model complexity. Further, we show how modified forms of all higher-order bounds can be efficiently optimized as objectives for DPP training, using variational techniques. We test our framework in single- and multi-task generalization settings on synthetic and biological data, showing improved performance and generalization prediction using flexible DPP model representations and learned complexity measures.
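For context, the starting point for objectives of this kind is the classical PAC-Bayes bound (in its McAllester/Maurer form), which the paper generalizes to higher-order settings: with probability at least 1 − δ over an i.i.d. sample S of size n, simultaneously for all posteriors Q over hypotheses and any fixed, data-independent prior P,

```latex
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\;
\mathbb{E}_{h \sim Q}\big[\hat{L}_S(h)\big]
\;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
```

The sketch below illustrates how a bound of this shape can be turned into a differentiable training objective via variational (reparameterized) optimization, in the spirit of the DPP training described in the abstract. It is not the authors' implementation: the model (a Bayesian linear classifier), the Gaussian posterior and prior, and all names (`train_x`, `train_y`, `kl_to_standard_normal`) are illustrative assumptions, and the cross-entropy loss stands in as a surrogate for the bounded loss the theory assumes.

```python
import math
import torch

# Hypothetical toy setup: binary classification with synthetic data.
torch.manual_seed(0)
n, d = 200, 5
train_x = torch.randn(n, d)
train_y = (train_x.sum(dim=1) > 0).float()

# Variational posterior Q = N(mu, diag(sigma^2)); prior P = N(0, I).
mu = torch.zeros(d, requires_grad=True)
log_sigma = torch.zeros(d, requires_grad=True)

delta = 0.05
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def kl_to_standard_normal(mu, log_sigma):
    # KL(N(mu, sigma^2 I) || N(0, I)), computed dimension-wise.
    sigma2 = torch.exp(2 * log_sigma)
    return 0.5 * torch.sum(sigma2 + mu ** 2 - 1.0 - 2 * log_sigma)

for step in range(2000):
    opt.zero_grad()
    # One reparameterized sample from Q (stochastic estimate of E_Q).
    w = mu + torch.exp(log_sigma) * torch.randn(d)
    logits = train_x @ w
    emp_risk = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, train_y)
    # McAllester-style complexity term from the bound above.
    kl = kl_to_standard_normal(mu, log_sigma)
    complexity = torch.sqrt(
        (kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    loss = emp_risk + complexity
    loss.backward()
    opt.step()
```

Minimizing `emp_risk + complexity` trades off data fit against the KL divergence from the prior, so the optimized objective is itself a (surrogate) estimate of the generalization guarantee; the higher-order bounds of the paper replace this fixed functional form with learned, compositional stochastic programs.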

Related research

PAC-BUS: Meta-Learning Bounds via PAC-Bayes and Uniform Stability (02/12/2021)
We are motivated by the problem of providing strong generalization guara...

A General framework for PAC-Bayes Bounds for Meta-Learning (06/11/2022)
Meta learning automatically infers an inductive bias, that includes the ...

Generalization bounds for deep learning (12/07/2020)
Generalization in deep learning has been the topic of much recent theore...

Generalization Bounds: Perspectives from Information Theory and PAC-Bayes (09/08/2023)
A fundamental question in theoretical machine learning is generalization...

Learning Higher-Order Programs without Meta-Interpretive Learning (12/29/2021)
Learning complex programs through inductive logic programming (ILP) rema...

Generalization Error Bounds via mth Central Moments of the Information Density (04/20/2020)
We present a general approach to deriving bounds on the generalization e...

Lifelong Learning by Adjusting Priors (11/03/2017)
In representational lifelong learning an agent aims to continually learn...
