Towards a theory of out-of-distribution learning

09/29/2021
by Ali Geisa, et al.

What is learning? 20th century formalizations of learning theory – which precipitated revolutions in artificial intelligence – focus primarily on in-distribution learning, that is, learning under the assumption that the training data are drawn from the same distribution as the evaluation data. This assumption renders these theories inadequate for characterizing 21st century real-world data problems, which typically feature evaluation distributions that differ from the training distributions (referred to as out-of-distribution (OOD) learning). We therefore make a small change to existing formal definitions of learnability by relaxing that assumption. We then introduce learning efficiency (LE) to quantify how much a learner is able to leverage data for a given problem, whether it is an in- or out-of-distribution problem. We next define and prove relationships between generalized notions of learnability, and show that this framework is sufficiently general to characterize transfer, multitask, meta, continual, and lifelong learning. We hope this unification helps bridge the gap between empirical practice and theoretical guidance on real-world problems. Finally, because biological learning continues to outperform machine learning algorithms on certain OOD challenges, we discuss the limitations of this framework vis-à-vis its ability to formalize biological learning, suggesting multiple avenues for future research.
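The paper's formal definitions are not reproduced in this abstract, but the core setup it relaxes can be illustrated concretely. The sketch below (an assumption of this summary, not the paper's construction) trains a simple least-squares learner on one distribution and evaluates it both in-distribution and on a shifted distribution; the gap between the two risks is the kind of quantity an out-of-distribution theory must account for, and which an in-distribution bound says nothing about.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(x, y):
    # Ordinary least-squares fit of y ≈ a*x + b.
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def risk(coef, x, y):
    # Mean squared error of the linear predictor on (x, y).
    a, b = coef
    return float(np.mean((a * x + b - y) ** 2))

f = np.sin  # true regression function (arbitrary nonlinear choice)

# Training data drawn from the training distribution U(0, 1).
x_train = rng.uniform(0.0, 1.0, 200)
coef = fit_linear(x_train, f(x_train))

# In-distribution evaluation: same distribution as training.
x_in = rng.uniform(0.0, 1.0, 10_000)
risk_in = risk(coef, x_in, f(x_in))

# Out-of-distribution evaluation: shifted support U(2, 3).
x_ood = rng.uniform(2.0, 3.0, 10_000)
risk_ood = risk(coef, x_ood, f(x_ood))

print(f"in-distribution risk:     {risk_in:.4f}")
print(f"out-of-distribution risk: {risk_ood:.4f}")
```

Because sin is nearly linear on [0, 1] but not on [2, 3], the in-distribution risk is small while the out-of-distribution risk is large: the same learner and the same data yield very different evaluation outcomes once the training-equals-evaluation assumption is dropped.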

