Embedding Adaptation is Still Needed for Few-Shot Learning

04/15/2021
by Sébastien M. R. Arnold, et al.

Constructing new and more challenging tasksets is a fruitful methodology for analysing and understanding few-shot classification methods. Unfortunately, existing approaches to building those tasksets are somewhat unsatisfactory: they either assume train and test task distributions to be identical – which leads to overly optimistic evaluations – or take a "worst-case" philosophy – which typically requires additional human labor, such as obtaining semantic class relationships. We propose ATG, a principled clustering method for defining train and test tasksets without additional human knowledge. ATG models train and test task distributions while requiring them to share a predefined amount of information. We empirically demonstrate the effectiveness of ATG in generating tasksets that are easier than, comparable to, or harder than existing benchmarks, including those that rely on semantic information. Finally, we leverage our generated tasksets to shed new light on few-shot classification: gradient-based methods – previously believed to underperform – can outperform metric-based ones when transfer is most challenging.
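The abstract does not spell out ATG's procedure, but the core idea – partitioning classes into train and test tasksets by clustering, so that the similarity between the two splits can be controlled rather than left to chance – can be illustrated with a minimal sketch. The function name `split_classes_by_clustering` and the 2-means formulation below are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def split_classes_by_clustering(class_embeddings):
    """Illustrative sketch (not ATG itself): partition classes into
    train/test splits by 2-means clustering of their mean embeddings,
    so test classes are dissimilar to train classes -- a harder
    transfer than a uniformly random class split."""
    X = np.asarray(class_embeddings, dtype=float)
    # Deterministic init: first class, plus the class farthest from it.
    far = int(np.argmax(np.linalg.norm(X - X[0], axis=1)))
    centroids = np.stack([X[0], X[far]])
    for _ in range(50):
        # Assign each class to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        new = np.stack([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(2)
        ])
        if np.allclose(new, centroids):
            break
        centroids = new
    train = np.where(labels == 0)[0].tolist()
    test = np.where(labels == 1)[0].tolist()
    return train, test
```

A random class split would instead mix the two clusters across train and test, making the distributions nearly identical; assigning whole clusters to opposite sides is one simple way to dial transfer difficulty up, in the spirit of the controlled tasksets the paper argues for.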
