
Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

11/02/2021
by   Yuheng Bu, et al.

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms, focusing on two popular transfer learning approaches: α-weighted-ERM and two-stage-ERM. Our key result is an exact characterization of the generalization behavior using the conditional symmetrized KL information between the output hypothesis and the target training samples given the source samples. Our results can also be applied to provide novel distribution-free upper bounds on the generalization error of these two Gibbs algorithms. Our approach is versatile, as it also characterizes the generalization errors and excess risks of these two Gibbs algorithms in the asymptotic regime, where they converge to the α-weighted-ERM and two-stage-ERM solutions, respectively. Based on our theoretical results, we show that the benefits of transfer learning can be viewed as a bias-variance trade-off, with the bias induced by the source distribution and the variance induced by the lack of target samples. We believe this viewpoint can guide the choice of transfer learning algorithms in practice.
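To make the two objects in the abstract concrete, here is a minimal sketch of an α-weighted empirical risk and of a Gibbs (exponentially tilted) distribution over a finite hypothesis set. The exact weighting convention and the inverse temperature `gamma` are illustrative assumptions; the paper defines the precise forms.

```python
import numpy as np

def alpha_weighted_risk(w, source, target, alpha, loss):
    """Illustrative alpha-weighted empirical risk: weight alpha on the
    target samples and (1 - alpha) on the source samples. The exact
    convention used in the paper may differ."""
    src_risk = np.mean([loss(w, z) for z in source])
    tgt_risk = np.mean([loss(w, z) for z in target])
    return alpha * tgt_risk + (1 - alpha) * src_risk

def gibbs_posterior(risks, gamma):
    """Gibbs distribution over a finite hypothesis set:
    P(w) proportional to exp(-gamma * risk(w)).
    Lower empirical risk -> higher sampling probability."""
    weights = np.exp(-gamma * np.asarray(risks, dtype=float))
    return weights / weights.sum()

# Example: squared loss, two source samples and one target sample.
sq_loss = lambda w, z: (w - z) ** 2
r = alpha_weighted_risk(1.0, source=[0.0, 2.0], target=[1.0],
                        alpha=0.5, loss=sq_loss)
p = gibbs_posterior(risks=[1.0, 2.0], gamma=2.0)
```

As `gamma` grows, the Gibbs distribution concentrates on the empirical risk minimizer, which is the asymptotic regime in which the Gibbs algorithms converge to the corresponding ERM solutions.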


Related research:

10/18/2022 · Information-theoretic Characterizations of Generalization Error for the Gibbs Algorithm
Various approaches have been developed to upper bound the generalization...

07/28/2021 · Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information
Bounding the generalization error of a supervised learning algorithm is ...

05/18/2020 · Information-theoretic analysis for transfer learning
Transfer learning, or domain adaptation, is concerned with machine learn...

06/25/2020 · Between-Domain Instance Transition Via the Process of Gibbs Sampling in RBM
In this paper, we present a new idea for Transfer Learning (TL) based on...

07/12/2022 · An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications
Transfer learning, or domain adaptation, is concerned with machine learn...

06/30/2021 · Learning Bounds for Open-Set Learning
Traditional supervised learning aims to train a classifier in the closed...

12/18/2019 · Research Frontiers in Transfer Learning – a systematic and bibliometric review
Humans can learn from very few samples, demonstrating an outstanding gen...