Few-shot learning using pre-training and shots, enriched by pre-trained samples

09/19/2020
by Detlef Schmicker, et al.

We use the EMNIST dataset of handwritten digits to test a simple approach to few-shot learning. A fully connected neural network is pre-trained on a subset of the 10 digits and then used for few-shot learning on the untrained digits. Two basic ideas are introduced: during few-shot learning, learning in the first layer is disabled, and for every shot a previously unknown digit is presented together with four previously trained digits for gradient descent, until a predefined threshold condition is fulfilled. In this way we reach about 90% accuracy after 10 shots.
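The recipe in the abstract maps onto a short training loop. Below is a minimal sketch, assuming PyTorch; the layer sizes, optimizer, loss threshold, and the synthetic stand-in for EMNIST batches (the batch helper) are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of the abstract's few-shot recipe (assumed PyTorch port; all
# hyperparameters and the batch() helper are illustrative, not the paper's).
import torch
import torch.nn as nn

torch.manual_seed(0)

class FCNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.first = nn.Linear(28 * 28, 256)  # frozen during few-shot learning
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, x):
        return self.head(self.first(x.flatten(1)))

def batch(digits, n):
    # Synthetic stand-in for EMNIST samples of the given digit classes;
    # a real run would draw images from torchvision.datasets.EMNIST.
    y = torch.tensor(digits).repeat_interleave(n // len(digits))
    x = torch.randn(len(y), 1, 28, 28) + y.float().view(-1, 1, 1, 1) * 0.1
    return x, y

net = FCNet()
loss_fn = nn.CrossEntropyLoss()

# 1) Pre-train on a subset of the ten digits (here 0..7).
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    x, y = batch(list(range(8)), n=32)
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

# 2) Few-shot phase: disable learning in the first layer, then for each
#    shot present one previously unknown digit (8) together with four
#    previously trained digits, and run gradient descent on that batch
#    until the loss falls below a predefined threshold.
for p in net.first.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-3)

THRESHOLD = 0.05  # illustrative stopping criterion
for shot in range(10):
    x, y = batch([8, 0, 1, 2, 3], n=5)  # 1 new digit + 4 trained digits
    loss = loss_fn(net(x), y)
    while loss.item() > THRESHOLD:
        opt.zero_grad()
        loss.backward()
        opt.step()
        loss = loss_fn(net(x), y)

# Quick sanity check on fresh synthetic samples of the new digit.
x, y = batch([8], n=5)
print((net(x).argmax(1) == y).float().mean().item())
```

Random tensors keep the sketch self-contained and runnable; in the paper's setting the batches come from EMNIST and the threshold decides when a shot counts as learned.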

Related research

11/16/2022 · On Measuring the Intrinsic Few-Shot Hardness of Datasets
While advances in pre-training have led to dramatic improvements in few-...

09/28/2020 · Interventional Few-Shot Learning
We uncover an ever-overlooked deficiency in the prevailing Few-Shot Lear...

08/22/2021 · FEDI: Few-shot learning based on Earth Mover's Distance algorithm combined with deep residual network to identify diabetic retinopathy
Diabetic retinopathy (DR) is the main cause of blindness in diabetic pati...

01/14/2022 · Learning from One and Only One Shot
Humans can generalize from only a few examples and from little pre-train...

08/19/2019 · A Kings Ransom for Encryption: Ransomware Classification using Augmented One-Shot Learning and Bayesian Approximation
Newly emerging variants of ransomware pose an ever-growing threat to com...

01/26/2023 · Explore the Power of Dropout on Few-shot Learning
The generalization power of the pre-trained model is the key for few-sho...

04/14/2017 · Distributional Modeling on a Diet: One-shot Word Learning from Text Only
We test whether distributional models can do one-shot learning of defini...
