From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic Approach

07/24/2022
by   Jesse Read, et al.

In multi-label learning, a particular case of multi-task learning where a single data point is associated with multiple target labels, it was widely assumed in the literature that, to obtain the best accuracy, the dependence among the labels should be explicitly modeled. This premise led to a proliferation of methods for learning and predicting labels together, for example by letting the prediction for one label influence the predictions for other labels. It is now acknowledged that in many contexts a model of dependence is not required for optimal performance; yet such models continue to outperform independent models in some of those very contexts, suggesting explanations for their performance beyond label dependence, which the literature is only recently beginning to unravel. Leveraging and extending these recent discoveries, we turn the original premise of multi-label learning on its head and study joint modeling specifically in the absence of any measurable dependence among task labels, for example when task labels come from separate problem domains. We carry the insights from this study into an approach for transfer learning that challenges the long-held assumption that transferability between tasks stems from measures of similarity between the source and target domains or models. This allows us to design and test a method for transfer learning that is model driven rather than purely data driven, and that is furthermore black box and model-agnostic (any base model class can be considered). We show that, essentially, we can create task dependence based on source-model capacity. Our results have important implications and indicate clear directions for future work in both multi-label and transfer learning.
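The contrast the abstract draws, between independent per-label models and methods where one label's prediction influences the others, can be illustrated with scikit-learn. The sketch below (not from the paper; a generic illustration on synthetic data) fits both an independent baseline and a classifier chain, in which each label's classifier receives the predictions for earlier labels as extra features:

```python
# Sketch: independent per-label models vs. a classifier chain,
# where each label's prediction feeds into the next label's input.
# Synthetic data; any base model class could replace LogisticRegression.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain

X, Y = make_multilabel_classification(
    n_samples=200, n_features=10, n_classes=4, random_state=0
)

# Independent models: one classifier per label, no label dependence modeled.
indep = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Classifier chain: the input for label j is augmented with the
# predictions for labels 0..j-1, so dependence can be exploited.
chain = ClassifierChain(LogisticRegression(max_iter=1000)).fit(X, Y)

print(indep.predict(X[:1]).shape)  # (1, 4)
print(chain.predict(X[:1]).shape)  # (1, 4)
```

Both estimators are model-agnostic in the sense used in the abstract: the base learner is treated as a black box, and only its predictions are passed along the chain.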


