
Few-shot Learning for Decoding Brain Signals

by   Myriam Bontonou, et al.

Few-shot learning addresses problems that are data-thrifty (inductive few-shot) or label-thrifty (transductive few-shot). So far, the field has been driven mostly by applications in computer vision. In this work, we stress-test the ability of recently introduced few-shot methods to solve problems involving neuroimaging data, a promising application field. To this end, we propose a benchmark dataset and compare multiple learning paradigms, including meta-learning, as well as various backbone networks. Our experiments show that few-shot methods can efficiently decode brain signals from only a few examples, and that graph-based backbones do not outperform simple structure-agnostic solutions such as multi-layer perceptrons.
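To make the inductive few-shot setting concrete, the sketch below runs a single N-way K-shot episode with a nearest-prototype classifier over feature vectors. This is not the paper's benchmark or its models: the synthetic Gaussian "brain signal" features, the 5-way/5-shot sizes, and the function name are illustrative assumptions, chosen only to show how a classifier can be fit from a handful of labeled support examples and evaluated on held-out queries.

```python
import numpy as np

def nearest_prototype_accuracy(support_x, support_y, query_x, query_y):
    """Classify each query by Euclidean distance to per-class support means.

    support_x: (N*K, dim) labeled feature vectors (K shots per class).
    query_x:   (Q, dim) unlabeled vectors to classify.
    Returns the fraction of queries assigned to their true class.
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of its K support vectors.
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Pairwise distances, shape (Q, N); predict the nearest prototype.
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == query_y).mean())

# Synthetic stand-in for extracted neuroimaging features (assumption):
# one Gaussian cluster per class in a 64-dimensional feature space.
rng = np.random.default_rng(0)
n_way, k_shot, n_query, dim = 5, 5, 15, 64
class_means = rng.normal(0.0, 1.0, size=(n_way, dim))
support_x = np.concatenate(
    [rng.normal(class_means[c], 0.5, size=(k_shot, dim)) for c in range(n_way)])
support_y = np.repeat(np.arange(n_way), k_shot)
query_x = np.concatenate(
    [rng.normal(class_means[c], 0.5, size=(n_query, dim)) for c in range(n_way)])
query_y = np.repeat(np.arange(n_way), n_query)

acc = nearest_prototype_accuracy(support_x, support_y, query_x, query_y)
print(f"5-way 5-shot episode accuracy: {acc:.2f}")
```

In a real evaluation this episode would be resampled many times and the features would come from a trained backbone (e.g., an MLP over brain signals, as in the structure-agnostic baselines the abstract mentions) rather than from synthetic Gaussians.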



