AutoTransfer: Subject Transfer Learning with Censored Representations on Biosignals Data

We present a regularization framework for subject transfer learning in which we train an encoder and classifier to minimize classification loss, subject to a penalty measuring independence between the latent representation and the subject label. We introduce three notions of independence and corresponding penalty terms, using mutual information or divergence as a proxy for independence. For each penalty term, we provide several concrete estimation algorithms, based on analytic methods as well as neural critic functions. We also provide a hands-off strategy, which we call "AutoTransfer", for applying this diverse family of regularization algorithms to a new dataset. We evaluate the performance of the individual regularization strategies and of AutoTransfer on EEG, EMG, and ECoG datasets, showing that these approaches can improve subject transfer learning for challenging real-world datasets.
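To make the penalized objective concrete, below is a minimal numpy sketch of one analytic instance of the framework: a training loss equal to the classification loss plus a kernel-based independence penalty (HSIC, the Hilbert-Schmidt Independence Criterion) between the latent codes and one-hot subject labels. The function names, the choice of HSIC as the independence proxy, and the RBF bandwidth are illustrative assumptions, not the paper's exact estimators.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix of the RBF kernel over rows of X
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(Z, S, sigma=1.0):
    """Biased HSIC estimate of dependence between latent codes Z
    (n x d) and one-hot subject labels S (n x num_subjects)."""
    n = Z.shape[0]
    K = rbf_kernel(Z, sigma)          # kernel on latent representations
    L = S @ S.T                       # linear kernel on subject labels
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def penalized_loss(classification_loss, Z, S, lam=0.1):
    # Total objective: task loss plus independence penalty on the latent space
    return classification_loss + lam * hsic(Z, S)
```

In a full pipeline, `penalized_loss` would be minimized over the encoder and classifier parameters (with an autodiff framework), driving the latent representation toward being uninformative about subject identity while remaining predictive of the task label.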


