Transfer learning describes approaches that discover and exploit shared structure in the data that is invariant across data sets. In the context of brain-computer interfaces (BCIs), where the aim is to provide a direct neural communication and control channel for individuals, e.g., with severe neuromuscular disorders, transfer learning has gained significant interest for its potential to reduce BCI calibration times by exploiting neural data recorded from other subjects. Given the limited data collection time available with patients under adequate concentration and consciousness, this becomes essential for a potential patient end-user of a BCI system. Several lines of work in this domain aim to find neural features (representations) that are invariant across subjects or sessions to calibrate BCIs [1, 2, 3], or to learn a structure for the set of decision rules and how they differ across subjects and sessions [4, 5].
Going beyond neural interfaces, significant progress has recently been achieved in domain transfer learning through adversarially censored invariant representations, within the growing field of deep learning for computer vision and image processing [6, 7, 8, 9, 10, 11, 12, 13]. These methods rely on learning generative models of the data that allow synthesis of data samples from latent representations, which can be achieved with variational autoencoders (VAEs) [14] for unsupervised feature learning, or generative adversarial networks (GANs) [15], where supervision is alleviated by penalizing inaccurate samples through an adversarial game. Commonly, these models are trained with adversarial censoring to learn representations that aim to be independent of certain nuisance variables (e.g., a variable representing factors of variation across data sets). In light of these recent works, we introduce this progress in adversarial representation learning as a novel approach for transfer learning in BCIs.
Various aspects of deep convolutional neural networks (CNNs) from computer vision have already been introduced to extract features for task-specific decoding in electroencephalogram (EEG) based BCIs [16, 17], as well as in recent attempts to learn deep generative models for EEG [18, 19, 20]. In the present study, we extend these lines of work and propose a transfer learning approach for BCIs based on adversarial training for subject-invariant representation learning. In particular, the proposed approach [9, 13] aims to learn subject-invariant representations by simultaneously training a conditional VAE and an adversarial network that enforces invariance of the learned data representations with respect to subject identity. This adversarial training procedure, with VAEs based on CNN architectures, yields data representations that serve as features disentangled from subject-specific nuisance variations, which enables decoding for unseen BCI subjects. Our results demonstrate the advantage of this approach with a proof-of-concept based on analyses of EEG data recorded from 103 subjects during a motor imagery BCI experiment.
Let $\mathcal{D}_s = \{(X_i, y_i)\}_{i=1}^{n_s}$ denote the data set for subject $s \in \{1, \dots, S\}$ consisting of $n_s$ trials, where $X_i \in \mathbb{R}^{C \times T}$ is the raw EEG data at trial $i$ recorded from $C$ channels for $T$ discretized time samples, and $y_i$ is the corresponding class label from a set of $L$ class labels. In a subject-to-subject transfer learning problem, the aim is to learn a parametric encoder that generalizes across subjects and extracts latent representations $z$ from the data $X$ that are useful in discriminating different tasks or brain states indicated by their corresponding class labels $y$. Accordingly, let $\boldsymbol{s} \in \{0,1\}^S$ denote the one-hot encoding of the subject identity (i.e., an $S$-dimensional vector with a value of 1 at the $s$'th index and zero at other indices), which represents the nuisance variable in our adversarial representation learning frameworks and will be enforced to be independent of $z$.
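As a minimal illustration of the one-hot nuisance encoding above (the function name is ours, not from the paper):

```python
import numpy as np

def one_hot_subject(subject_index: int, num_subjects: int) -> np.ndarray:
    """Build the one-hot nuisance vector: 1 at the subject's index, 0 elsewhere."""
    s = np.zeros(num_subjects, dtype=np.float32)
    s[subject_index] = 1.0
    return s
```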
II-B Conditional Variational Autoencoder (cVAE)
VAEs [14] learn a generative model as a pair of encoder and decoder networks. The encoder learns a latent representation $z$ from the data $X$, while the decoder aims to reconstruct $X$ from the learned representation $z$. In this variational framework the encoder is stochastic: it outputs the parameters of a learned posterior $q_\phi(z|X)$, and the decoder is provided with samples $z \sim q_\phi(z|X)$ from this posterior distribution as input.
In the conditional VAE (cVAE) framework [21], the decoder is conditioned on a nuisance variable $\boldsymbol{s}$ as an additional input besides $z$, and the encoder is expected to learn representations that are invariant to $\boldsymbol{s}$, since $\boldsymbol{s}$ is already given as input to the decoder. The loss function to be minimized in this cVAE framework, the negative of the evidence lower bound (ELBO), is given by:

$$\mathcal{L}_{\text{cVAE}}(\theta, \phi) = -\mathbb{E}_{q_\phi(z|X)}\!\left[\log p_\theta(X \,|\, z, \boldsymbol{s})\right] + D_{\mathrm{KL}}\!\left(q_\phi(z|X) \,\|\, p(z)\right), \tag{1}$$
where the first term is the reconstruction loss of the decoder, and the second term is the variational posterior loss of the encoder. This framework implicitly enforces invariance of $z$ with respect to $\boldsymbol{s}$. In practice, however, this invariance is known not to be perfectly achieved, which paves the way for adversarial training methods in representation learning.
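As a concrete reading of Eq. (1) with a diagonal-Gaussian posterior and a standard-normal prior, the negative ELBO can be sketched as follows (illustrative NumPy; the function names are ours, and mean-squared reconstruction error stands in for the likelihood term, matching the deterministic decoder used later):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def cvae_loss(x, x_recon, mu, log_var):
    # Negative ELBO of Eq. (1): decoder reconstruction loss plus the
    # variational posterior's KL divergence from the prior.
    reconstruction = np.sum((x - x_recon) ** 2)
    return reconstruction + kl_to_standard_normal(mu, log_var)
```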
II-C Adversarial Conditional VAE (A-cVAE)
In the proposed adversarial cVAE (A-cVAE) framework [9, 13], a conditional VAE and an adversary enforcing invariance with respect to $\boldsymbol{s}$ (i.e., subject identity) are trained simultaneously. Specifically, alongside a cVAE that takes EEG time-series data $X$ as input to the encoder and estimates $\hat{X}$ at the decoder, an adversary network is trained that takes the learned representations $z$ as input and estimates $\boldsymbol{s}$.
We extend Eq. (1) to obtain the A-cVAE loss function. With a deterministic decoder, the reconstruction loss is the mean squared error of the estimated time-series EEG data. Furthermore, the softmax cross-entropy loss $\mathcal{L}_{\text{adv}}$ of the adversary network is subtracted from the loss function for the A-cVAE, which is then denoted as:

$$\mathcal{L}_{\text{A-cVAE}} = \mathcal{L}_{\text{cVAE}} - \lambda \, \mathcal{L}_{\text{adv}}, \tag{2}$$
where $\lambda$ is a weight parameter that adjusts the impact of adversarial censoring on the learned representations. Alternating once per batch with the A-cVAE parameter updates, the adversary is also trained individually to minimize its softmax cross-entropy loss $\mathcal{L}_{\text{adv}}$.
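Eq. (2) can be sketched as follows (illustrative NumPy; `lam` corresponds to $\lambda$, whose fixed value from the experiments is not reproduced here, and the function names are ours):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[label]

def a_cvae_loss(cvae_loss_value, adversary_logits, subject_label, lam):
    # Eq. (2): the adversary's subject-decoding loss is *subtracted*, so
    # minimizing the A-cVAE objective favors representations from which the
    # adversary cannot decode the subject; the adversary itself is trained
    # separately to minimize this same cross-entropy.
    return cvae_loss_value - lam * softmax_cross_entropy(adversary_logits, subject_label)
```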
II-D Model Architecture and Classifier Training
In our implementations, the encoder and decoder have convolutional architectures embedding temporal and spatial filtering, motivated by the results achieved with EEGNet [16], Deep ConvNet, and Shallow ConvNet [17]. The parameterization of the convolutional cVAE architecture is illustrated broadly in Fig. 1 and provided in detail in Table I. The two fully-connected layers at the output of the encoder generate two parameter vectors, $\mu$ and $\sigma$, which are then used to sample $z$. The nuisance variable vector $\boldsymbol{s}$ is concatenated to the sampled $z$ as the input to the decoder. The temporal and spatial convolution kernel sizes, as well as the latent vector dimensionality, were fixed across experiments. Adjacent to the cVAE, the adversary is realized as a multilayer perceptron (MLP) with a single hidden layer and a ReLU nonlinearity after the first layer, and we fixed the adversarial censoring weight parameter $\lambda$ in all experiments.
Following adversarial representation learning with a set of training data samples, the encoder is kept static, and a classifier connected to the output of the encoder is trained using the same training data. Specifically, all training data samples were fed again to the previously optimized static encoder, a latent vector $z$ was sampled from the parameters at the encoder output, and this $z$ was used as input to the classifier. The classifier was also realized as an MLP with a single hidden layer and a ReLU nonlinearity after the first layer, trained to minimize its softmax cross-entropy loss. The adversary network had an output dimensionality equal to the number of training subjects (90), and the classifier had an output dimensionality equal to the number of classes (2). The adversary and classifier hidden layers had the same number of nodes.
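The reparameterized sampling and decoder input construction described above can be sketched as follows (illustrative NumPy; shapes and names are ours):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # using the parameter vectors produced by the encoder's two FC layers.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decoder_input(z, s_one_hot):
    # The one-hot subject vector is concatenated to the sampled z
    # before being passed to the decoder.
    return np.concatenate([z, s_one_hot])
```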
Table I: Convolutional cVAE architecture details.

| Block | Layers |
|---|---|
| Encoder 1 | 40 Temporal Conv1D |
| | BatchNorm + ReLU + Dropout (0.25) |
| Encoder 2 | 40 Spatial Conv1D |
| | BatchNorm + ReLU + Dropout (0.25) |
| Encoder 3 | Reshape (Flatten) |
| | 2 Fully-Connected Layers |
| Latent | Sample $z$ with estimated parameters |
| Decoder 1 | Fully-Connected Layer |
| | ReLU + Reshape |
| Decoder 2 | 40 Spatial Deconv1D |
| | BatchNorm + ReLU + Dropout (0.25) |
| Decoder 3 | 40 Temporal Deconv1D |
| | BatchNorm + ReLU + Dropout (0.25) |
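Since the kernel sizes in Table I were not preserved in this version, the shape bookkeeping through the Conv1D/Deconv1D stack can still be checked with the standard output-length formulas (the example kernel sizes in the test are ours, not the paper's):

```python
def conv1d_out_len(n_in, kernel, stride=1, padding=0):
    # Output length of a 1-D convolution, applicable to the Conv1D layers above.
    return (n_in + 2 * padding - kernel) // stride + 1

def deconv1d_out_len(n_in, kernel, stride=1, padding=0):
    # A transposed (de)convolution with matching parameters inverts the
    # relation, recovering the original input length in the decoder.
    return (n_in - 1) * stride - 2 * padding + kernel
```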
II-E Dataset and Implementation
We used the publicly available PhysioNet EEG Motor Movement/Imagery Dataset [22], which was collected using the BCI2000 instrumentation system [23]. The dataset consists of over 1500 one- and two-minute EEG recordings obtained from 109 subjects. Throughout the experiments, subjects were seated in front of a computer screen and instructed to perform cue-based motor execution/imagery tasks while 64-channel EEG was recorded at a sampling rate of 160 Hz. These tasks included executing movements of the right or left hand, or opening and closing both fists or both feet, as well as imagining these movements. Each trial lasted four seconds, with inter-trial resting periods of the same length. At the beginning of the experiments, eyes-open and eyes-closed resting-state EEG were also recorded. Each subject participated in a single session.
From this data set, six subjects' data were discarded due to irregular timestamp alignments, resulting in a total of 103 subjects. To evaluate our proposed approach on a conventional BCI paradigm [24], we used the trials corresponding to right- and left-hand motor imagery. This resulted in a total of 45 four-second trials per subject, with binary class labels corresponding to right- or left-hand imagery. We randomly selected 13 subjects to hold out for the across-subjects transfer learning experiments. Using the remaining 90 subjects' data, the networks were trained on a training set of 3240 trials, while validation was performed with the remaining 810 trials, which include data from all 90 subjects. We implemented all analyses with the Chainer deep learning framework [25]. Networks were trained with 100 trials per batch for 750 epochs (25,000 iterations), and parameter updates were performed once per batch with Adam [26].
III Experimental Results
III-A EEG Pre-Processing and Model Evaluation
All subjects' data were epoched into the time interval where the neural changes induced by motor imagery are emphasized [24]. Specifically, from the four-second duration, the 1-to-3 second interval after the imagery cue onset was extracted for the experiments, resulting in a time-series length of 320 samples. Raw EEG data were normalized to have zero mean. Note that this pre-processing statistic (i.e., the data mean) was computed only on the training data, and then applied to the validation and transfer subjects' data.
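A minimal sketch of this epoching and normalization step, assuming the 160 Hz sampling rate and 1-to-3 s window stated above (function and variable names are ours):

```python
import numpy as np

FS = 160                       # sampling rate (Hz), per the dataset description
WIN_START, WIN_END = 1.0, 3.0  # seconds after the imagery cue onset

def epoch_and_normalize(trials, train_mean=None):
    """trials: (n_trials, n_channels, n_samples) raw EEG of 4 s duration."""
    lo, hi = int(WIN_START * FS), int(WIN_END * FS)
    epochs = trials[:, :, lo:hi]            # 2 s window -> 320 samples
    if train_mean is None:                  # statistic computed on training data
        train_mean = epochs.mean()          # only, then reused as-is for
    return epochs - train_mean, train_mean  # validation / transfer subjects
```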
We evaluate adversarial representation learning with the following frameworks: (1) A-cVAE, (2) cVAE, (3) adversarially censored VAE without conditioning (A-VAE), and (4) a basic convolutional encoder (CNN). Implementation of (1) corresponds to Sections II-C and II-D. The approach in (2) is expected to reveal the practical deficiencies of using only decoder conditioning for representation invariance. In that case, we still train an adversary in parallel, but do not feed the adversarial loss into the overall training objective, i.e., we use Eq. (1). Method (3) is expected to reveal the tradeoff between enforcing invariance with an adversary while still preserving enough information in $z$ to allow sufficient decoder learning (cf. a similar approach in [13]). This corresponds to using the same objective as the A-cVAE, but not providing $\boldsymbol{s}$ at the decoder input. Finally, (4) is a baseline that uses the same CNN encoder architecture in combination with an MLP classifier, but trained end-to-end from scratch (via softmax cross-entropy loss for classification) rather than first training the encoder within a VAE.
III-B Across-Subjects Transfer Learning
To assess representation invariance, accuracies of the adversary network over the 90 training subjects after training are presented in Table II. In this context, a higher accuracy indicates that more subject-specific information remains in the learned representations $z$, resulting in better decoding of $\boldsymbol{s}$ by the adversary. A lower adversary accuracy is therefore representative of better invariant representation learning, with the least leakage observed for the A-cVAE.
Distributions of transfer learning classification accuracies for the 13 held-out subjects are shown in Fig. 2. Using zero subject-specific training or fine-tuning data, the highest accuracies were observed with the A-cVAE. Consistent with the results in Table II, accuracies decrease for the cVAE and A-VAE relative to the A-cVAE. The baseline CNN tends to memorize the training data without any mechanism encouraging subject invariance, resulting, as intuitively expected, in high variation of accuracies across the 13 subjects.
| Training Data | Validation Data |
IV Conclusion

In this work we introduced adversarial invariant representation learning as a novel approach to transfer learning in BCIs. We showed that learning subject-invariant representations through adversarial censoring can be a significantly useful tool for subject-to-subject transfer learning, and demonstrated an empirical proof-of-concept with EEG data recorded from 103 subjects during a motor imagery BCI experiment.
Here, we focused mainly on the invariance of the learned representations and the across-subjects transfer learning capability of the models. The proposed approach can, however, be further extended in the context of semi-supervised transfer learning in BCIs: for example, using a short calibration period for fine-tuning and semi-supervised transfer, learning session-invariant representations to reduce user-oriented BCI system calibration times, or learning disentangled representations that exploit adversarial censoring to obtain partly subject-invariant and partly subject-variant representations. We believe these frameworks should be of significant interest in the field of neural interfaces.
-  M. Krauledat, M. Tangermann, B. Blankertz, and K.-R. Müller, “Towards zero training for brain-computer interfacing,” PloS one, vol. 3, no. 8, p. e2967, 2008.
-  H. Kang, Y. Nam, and S. Choi, “Composite common spatial pattern for subject-to-subject transfer,” IEEE Signal Processing Letters, vol. 16, no. 8, pp. 683–686, 2009.
-  W. Samek, F. C. Meinecke, and K.-R. Müller, “Transferring subspaces between subjects in brain–computer interfacing,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp. 2289–2298, 2013.
-  M. Alamgir, M. Grosse-Wentrup, and Y. Altun, “Multitask learning for brain-computer interfaces,” in Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010, pp. 17–24.
-  V. Jayaram, M. Alamgir, Y. Altun, B. Schölkopf, and M. Grosse-Wentrup, “Transfer learning in brain-computer interfaces,” IEEE Computational Intelligence Magazine, vol. 11, no. 1, pp. 20–31, 2016.
-  H. Edwards and A. Storkey, “Censoring representations with an adversary,” arXiv preprint arXiv:1511.05897, 2015.
-  A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” arXiv preprint arXiv:1511.05644, 2015.
-  M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun, “Disentangling factors of variation in deep representation using adversarial training,” in Advances in Neural Information Processing Systems, 2016, pp. 5040–5048.
-  G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer et al., “Fader networks: Manipulating images by sliding attributes,” in Advances in Neural Information Processing Systems, 2017.
-  A. Creswell, A. A. Bharath, and B. Sengupta, “Conditional autoencoders with adversarial information factorization,” arXiv preprint arXiv:1711.05175, 2017.
-  E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in Computer Vision and Pattern Recognition, vol. 1, no. 2, 2017, p. 4.
-  J. Shen, Y. Qu, W. Zhang, and Y. Yu, “Adversarial representation learning for domain adaptation,” arXiv preprint arXiv:1707.01217, 2017.
-  Y. Wang, T. Koike-Akino, and D. Erdogmus, “Invariant representations from adversarially censored autoencoders,” arXiv preprint arXiv:1805.08097, 2018.
-  D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv preprint arXiv:1312.6114, 2013.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014.
-  V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, “EEGNet: A compact convolutional network for EEG-based brain-computer interfaces,” arXiv preprint arXiv:1611.08024, 2016.
-  R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, “Deep learning with convolutional neural networks for EEG decoding and visualization,” Human Brain Mapping, vol. 38, no. 11, pp. 5391–5420, 2017.
-  P. Bashivan, I. Rish, M. Yeasin, and N. Codella, “Learning representations from EEG with deep recurrent-convolutional neural networks,” arXiv preprint arXiv:1511.06448, 2015.
-  Y. Luo and B.-L. Lu, “EEG data augmentation for emotion recognition using a conditional Wasserstein GAN,” in International Conference of the IEEE Engineering in Medicine and Biology Society, 2018.
-  K. G. Hartmann, R. T. Schirrmeister, and T. Ball, “EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals,” arXiv preprint arXiv:1806.01875, 2018.
-  K. Sohn, H. Lee, and X. Yan, “Learning structured output representation using deep conditional generative models,” in Advances in Neural Information Processing Systems, 2015, pp. 3483–3491.
-  A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, “PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals,” Circulation, vol. 101, no. 23, pp. e215–e220, 2000.
-  G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, “BCI2000: a general-purpose brain-computer interface (BCI) system,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034–1043, 2004.
-  G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” Proceedings of the IEEE, vol. 89, no. 7, pp. 1123–1134, 2001.
-  S. Tokui, K. Oono, S. Hido, and J. Clayton, “Chainer: a next-generation open source framework for deep learning,” in Proceedings of Workshop on Machine Learning Systems in the 29th Annual Conference on Neural Information Processing Systems, 2015.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations, 2015.