Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness

10/12/2022
by Fredrik Hellström, et al.

Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity measures for conventional learning (Harutyunyan et al., 2021; Haghifam et al., 2021). Hence, it provides a unified method for establishing generalization bounds. In meta learning, there has so far been a divide between information-theoretic results and results from classical learning theory. In this work, we take a first step toward bridging this divide. Specifically, we present novel generalization bounds for meta learning in terms of the evaluated CMI (e-CMI). To demonstrate the expressiveness of the e-CMI framework, we apply our bounds to a representation learning setting, with n samples from each of n̂ tasks parameterized by functions of the form f_i ∘ h. Here, each f_i ∈ ℱ is a task-specific function, and h ∈ ℋ is the shared representation. For this setup, we show that the e-CMI framework yields a bound that scales as √(𝒞(ℋ)/(nn̂) + 𝒞(ℱ)/n), where 𝒞(·) denotes a complexity measure of the corresponding hypothesis class. This scaling coincides with the one reported by Tripuraneni et al. (2020) using Gaussian complexity.

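For concreteness, the claimed scaling can be written out as a display. This is a schematic restatement of the formula in the abstract, with constants and lower-order terms suppressed, not the paper's exact theorem statement:

\[
  \text{generalization gap} \;\lesssim\; \sqrt{\frac{\mathcal{C}(\mathcal{H})}{n\hat{n}} + \frac{\mathcal{C}(\mathcal{F})}{n}}.
\]

Read this way, the cost of learning the shared representation class ℋ is amortized over all nn̂ samples across tasks, while each task-specific function from ℱ must be learned from only the n samples of the corresponding task.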