Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface

05/18/2020 ∙ by Young-Eun Lee, et al. ∙ Korea University

Practical brain-machine interfaces have been widely studied to accurately detect human intention from brain signals in the real world. However, electroencephalography (EEG) signals are distorted by artifacts from walking and head movement, which can be larger in amplitude than the desired EEG signals. Owing to these artifacts, accurately detecting human intention in mobile environments is challenging. In this paper, we propose a reconstruction framework based on generative adversarial networks for event-related potentials (ERPs) recorded during walking. We used a pre-trained convolutional encoder to produce latent variables and reconstructed the ERP through a generative model whose structure mirrors that of the encoder. Finally, the ERP was classified with a discriminative model to demonstrate the validity of the proposed framework. The reconstructed signals contained important components such as N200 and P300, similar to the ERP during standing. The classification accuracy of the reconstructed EEG was comparable to that of raw noisy EEG signals during walking, while the signal-to-noise ratio increased significantly to 1.3. The loss of the generative model was 0.6301, which is comparatively low, indicating that the generative model was trained well. The reconstructed ERP consequently showed improved classification performance during walking through the effect of noise reduction. The proposed framework could help recognize human intention via brain-machine interfaces even in mobile environments.



I Introduction

Brain-machine interfaces (BMIs) are technical systems that enable impaired people to communicate with and control machines or robots by decoding human intention from brain signals [31, 19, 34, 4]. Many state-of-the-art BMI systems increase the performance of identifying user intention under laboratory conditions [16, 2]. In particular, BMIs under ambulatory conditions are an important issue for practical BMIs that recognize human intention in the real world [7, 22, 23]. However, movement artifacts make it difficult to detect user intention because they affect electroencephalography (EEG) signals with large magnitudes. These artifacts can arise from head movement, electromyographic muscle activity, skin artifacts, and cable movement [30]. Several studies on BMIs in ambulatory environments have applied artifact removal methods in the pre-processing phase [24, 1] or advanced methodology in the feature extraction or classification phase to better understand user intention [13, 21]. These processes to reduce the effects of artifacts are essential for practical BMIs.

Generative models produce a data distribution through a decoding process and are commonly applied to audio, images, or video. A novel family of generative models using deep neural networks for representation and reconstruction, generative adversarial networks (GANs) [6], has recently been introduced, along with many advanced versions. GANs are machine learning frameworks consisting of two neural networks that contest each other in a zero-sum game: while the two trainable models compete, they learn in a direction such that it becomes unclear whether the data are generated or real. Deep convolutional GANs (DCGANs) [27] are an advanced model of GANs that trains models with convolutional layers more stably than plain GANs. Auxiliary classifier GANs (ACGANs) [25] are another improved version of GANs, which learn the class information of the data at the same time to improve the generated data. Various advanced versions of GANs are used for different purposes, such as data augmentation and image style transfer.
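As a rough illustration of ACGAN's dual output, the sketch below (numpy only; all names, weights, and shapes are hypothetical, not from this paper) shows a discriminator head producing both a real-vs-fake validity score and a class distribution from one shared feature vector:

```python
import numpy as np

def sigmoid(z):
    # Validity head: squashes a score into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Class head: turns scores into a probability distribution.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def discriminator_heads(features, w_validity, w_class):
    """Return (validity in (0, 1), class probabilities summing to 1)."""
    validity = sigmoid(features @ w_validity)
    class_probs = softmax(features @ w_class)
    return validity, class_probs

rng = np.random.default_rng(0)
features = rng.normal(size=8)                      # shared feature vector
validity, class_probs = discriminator_heads(
    features, rng.normal(size=8), rng.normal(size=(8, 2)))
```

Because both heads read the same features, the class head (as in ACGANs) pushes the shared representation toward class-relevant structure while the validity head drives the adversarial game.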

Recently, many studies have reported improved classification performance using deep neural networks on EEG data [14, 20]. However, researchers have struggled to use traditional deep neural networks because EEG signals have characteristics different from the typical inputs of such networks: EEG is dynamic time-series data, and the amplitude of artifacts is higher than that of the sources containing human intention. Thus, there have been several attempts to fit EEG signals into deep neural networks. Schirrmeister et al. [29] introduced deep ConvNets, an EEG-tailored convolutional neural network (CNN), and compared it with traditional classifiers for motor imagery, obtaining much higher performance. EEGNet [15] was developed for EEG signals during BMI paradigms, including the P300 visual-evoked potential (VEP) paradigm. Moreover, a few recent papers used GANs on EEG data to generate additional EEG data. Hartmann et al. [8] generated EEG signals of hand movement using GANs with different architectures, showing that the signals were generated well in both the time series and the frequency spectra. In addition, GANs were trained to classify and generate EEG data for driving fatigue [26]. To date, most studies applying GANs to EEG data have used them for data augmentation to improve classification performance.

GANs have also been used for noise reduction in a few studies. Wolterink et al. [32] reduced noise in computed tomography (CT) data using GANs with convolutional neural networks that minimize a voxelwise loss, producing more accurate CT images. Another study [3] reduced coherent noise in optical diffraction tomography using models inspired by cycle-GANs [33] and PatchGAN [9]. These studies demonstrated that noise reduction using GANs applies not only to ordinary image and audio data but also to brain-related data such as brain imaging.

In this paper, we propose a reconstruction framework for event-related potentials (ERPs) from noisy EEG signals during walking. To reconstruct the EEG signals, we utilized a generative model framework inspired by EEGNet [15], DCGANs [27], and ACGANs [25], and then classified the ERP signals using a convolutional discriminative model. To create the latent variables for the generative model, we used a pre-trained model consisting of convolutional neural networks that encodes noisy EEG signals recorded in the ambulatory environment. We hypothesized that the reconstructed EEG would contain ERP components but not artifacts. We performed subject-dependent and subject-independent training sessions and evaluated the reconstructed ERP by visual inspection, ERP classification performance, and the loss of the generative model. This work could serve as both a noise reduction method and a method for extracting user intention.

Fig. 2: Proposed GAN architecture for ERP reconstruction with EEG signals as input. C indicates the number of channels and T the number of time samples in a segment. EEG signals recorded during walking pass through the pre-trained model to produce latent variables. The latent variables are the input of the generative model, which reconstructs ERP signals. The discriminative model distinguishes whether the input signals are fake (reconstructed signals) or real (EEG signals during standing), and determines their classes.

II Materials and Methods

II-A Experimental Setup

II-A1 Subjects

Eighteen healthy young subjects (four females; age 24.5 ± 3.1 years) participated in this experiment. None of the subjects had a history of neurological, psychiatric, or any other pertinent disease that might otherwise have affected the experimental results. All subjects gave written informed consent before the experiments. All experiments were carried out in accordance with the Declaration of Helsinki. This study was reviewed and approved by the Korea University Institutional Review Board (KUIRB-2019-0194-01).

The subjects stood on a treadmill 80 (± 5) cm in front of a 24-inch LCD monitor (refresh rate: 60 Hz; resolution: 1920 × 1080) and either stood still or walked at 1.6 m/s during the BMI paradigms (Fig. 1-(a)).

II-A2 Data acquisition

We used a wireless interface (MOVE system, Brain Products GmbH) with Ag/AgCl electrodes to acquire EEG signals from the scalp, and the Smarting system (mBrainTrain LLC) to record them. The cap electrodes were placed according to the international 10-20 system at 32 locations: Fp1, Fp2, AFz, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, PO3, POz, PO4, PO8, O1, Oz, and O2. The impedance was maintained below 10 kΩ, and the sampling rate was set to 500 Hz.

II-A3 Paradigm

We acquired the ERP data using the OpenBMI (http://openbmi.org) [17], BBCI (http://github.com/bbci/bbci_public) [12], and Psychophysics (http://psychtoolbox.org) [11] toolboxes in Matlab (The MathWorks, Natick, MA).

The ERP is an electrical potential induced in the central and parietal cortex in response to particular cognitive tasks [18]. Attention to a target induces ERP components such as N200 and P300, task-relevant negative and positive peaks appearing approximately 200 ms and 300 ms after a target stimulus, respectively. In this experiment, the paradigm was executed with target ('OOO') and non-target ('XXX') characters. The target ratio was 0.2, and the total number of trials was 300. In each trial, a stimulus was presented for 0.5 s, followed by a fixation cross for a random rest interval of 0.5–1.5 s (Fig. 1-(b)).

II-B Generative Adversarial Networks

GANs are a generative modeling framework consisting of two adversarial models: a generative model and a discriminative model. The generative model produces data, which become the input of the discriminative model; the discriminative model distinguishes whether the data are reconstructed or recorded during standing, and determines their class. The generative model is trained to map latent variables to the target data distribution. While the latent variables are normally random noise, here we used feature vectors obtained by passing every noisy EEG segment recorded during walking through the pre-trained CNN model. The discriminative model learned both to distinguish real EEG signals from reconstructed fake signals (validity) and to classify their classes. Training is a contest between well-made fake signals and a precise discriminator: the goal of the generative model is to increase the validity loss of the discriminative model, which in turn tries to reduce its own loss.

Fig. 2 shows the framework of the GANs used, with EEG signals as input for detecting ERP signals. We used a pre-trained CNN model as the first encoder of the input. The generator reconstructed ERP signals from the noisy EEG input, and the discriminator distinguished the reconstructed data from the data recorded during standing (real vs. fake), as well as their classes.

Generator                                  Discriminator
Layers               Output                Layers               Output
Dense                T/2                   Conv2D               C × T × 8
Reshape              2 × T/16 × 4          ReLU                 C × T × 8
Batch normalization  2 × T/16 × 4          Dropout              C × T × 8
Up sampling          4 × T/4 × 4           Permute              8 × T × C
Zero padding         5 × T/4 × 4           Conv2D               8 × T × 8
Conv2D               5 × T/4 × 8           ReLU                 8 × T × 8
ReLU                 5 × T/4 × 8           Max pooling          4 × T/4 × 8
Batch normalization  5 × T/4 × 8           Dropout              4 × T/4 × 8
Up sampling          10 × T × 8            Batch normalization  4 × T/4 × 8
Conv2D               10 × T × C            Conv2D               4 × T/4 × 4
ReLU                 10 × T × C            ReLU                 4 × T/4 × 4
Batch normalization  10 × T × C            Max pooling          2 × T/16 × 4
Up sampling          20 × T × C            Dropout              2 × T/16 × 4
Permute              C × T × 20            Batch normalization  2 × T/16 × 4
Conv2D               C × T × 1             Flatten              T/2
ReLU                 C × T × 1
TABLE I: Generator and Discriminator Architecture Layers and Output Shapes
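The output shapes in Table I can be sanity-checked with simple bookkeeping. The sketch below tracks the discriminator's shape flow under assumptions of ours ("same"-padded convolutions and (2, 4) pooling factors, which are not stated in the paper) and confirms that flattening yields T/2, the size of the generator's dense input:

```python
def discriminator_shapes(C, T):
    """Shape flow of Table I's discriminator (hypothetical bookkeeping)."""
    shapes = [
        (C, T, 8),        # Conv2D ("same" padding assumed)
        (8, T, C),        # permute
        (8, T, 8),        # conv2D
        (4, T // 4, 8),   # max pooling by (2, 4)
        (4, T // 4, 4),   # conv2D
        (2, T // 16, 4),  # max pooling by (2, 4)
    ]
    d0, d1, d2 = shapes[-1]
    flat = d0 * d1 * d2   # flatten: 2 * (T/16) * 4 = T/2
    return shapes, flat

# Example with C = 32 channels and a hypothetical T = 512 samples.
shapes, flat = discriminator_shapes(C=32, T=512)
```

With T = 512, the flattened size is 512/2 = 256, matching the latent dimension T/2 that the generator's dense layer consumes, so the two columns of Table I are mutually consistent.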
Fig. 3: Grand averages of EEG signals during standing and walking, and of the reconstructed EEG signals. The reconstructed signals are derived from the raw EEG signals during walking through the proposed framework.

II-B1 Discriminative model

The discriminative model was inspired by EEGNet [15], fitted to EEG data, and consists of convolutional layers, leaky rectified linear units (ReLU), dropout, max pooling, and batch normalization. Its inputs are clean EEG signals and reconstructed EEG signals; its outputs are the validity (real or fake) and the class.

II-B2 Generative model

The generative model mirrors the structure of the discriminative model in reverse, consisting of dense layers, convolutional layers, batch normalization, up-sampling, and zero padding. The latent variable was produced by the convolutional pre-trained model from each noisy EEG sample. The input of the generative model is the latent variables, and the output is the reconstructed signal, which has the same shape as the noisy EEG sample.

II-B3 Training

The generative model generates the data distribution from latent values, and the discriminative model distinguishes whether the data are fake or not (validity). We used an advanced GAN model that provides not only the validity of the data but also classification. Following ACGANs [25], training maximizes the log-likelihood of the correct source (validity), L_S, and of the correct class, L_C:

L_S = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)],   (1)

L_C = E[log P(C = c | X_real)] + E[log P(C = c | X_fake)],   (2)

where the discriminative model D is trained to maximize L_S + L_C, and the generative model G is trained to maximize L_C − L_S.
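A minimal numerical reading of the ACGAN objectives in Eqs. (1)-(2), using placeholder probabilities rather than real model outputs, can be sketched as:

```python
import numpy as np

def source_loglik(p_real_given_real, p_fake_given_fake):
    # L_S: log-likelihood of the correct source (real vs. fake).
    return (np.mean(np.log(p_real_given_real))
            + np.mean(np.log(p_fake_given_fake)))

def class_loglik(p_true_class_real, p_true_class_fake):
    # L_C: log-likelihood of the correct class (target vs. non-target).
    return (np.mean(np.log(p_true_class_real))
            + np.mean(np.log(p_true_class_fake)))

# A perfect discriminator assigns probability 1 to every correct source,
# so L_S reaches its maximum of 0 (log-likelihoods are <= 0).
ls_perfect = source_loglik(np.ones(4), np.ones(4))

# A chance-level discriminator (p = 0.5) scores strictly lower.
ls_chance = source_loglik(np.full(4, 0.5), np.full(4, 0.5))

# The discriminator maximizes L_S + L_C; the generator maximizes
# L_C - L_S, i.e., it wants fakes misjudged but still class-consistent.
```

The sign structure explains the contest described above: both players push L_C up, but they pull L_S in opposite directions.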

All data were high-pass filtered at 0.5 Hz and epoched into the time interval between −200 ms and 800 ms relative to the trigger. For the reconstructed EEG signals, we selected one channel at Pz, where the amplitude of the ERP is most apparent. To divide the dataset, we performed leave-one-subject-out cross-validation: one subject was left out as the test set, and the model was trained on the remaining subjects' datasets, i.e., cross-validation across subjects rather than across trials [5]. For the pre-trained model, we trained the neural networks on the walking dataset and tested on the standing dataset. To manage the class imbalance during training, we subsampled the non-target trials to match the number of target trials. We trained the GANs with a batch size of 32, using cross-entropy losses for the validity and the ERP classification, and a mean squared error between the original input signals and the reconstructed signals as the generative model loss. The models were trained with the Adam optimizer [10] with a learning rate of 0.0002 and an exponential decay rate for the first-moment estimates of the gradient of 0.5 [27].
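The epoching and class-balancing steps above can be sketched as follows. This is a hypothetical helper, not the authors' code; the 500 Hz sampling rate and −200 to 800 ms window are from the text, while the trigger indices and labels are illustrative:

```python
import numpy as np

FS = 500                   # sampling rate in Hz
PRE_S, POST_S = 0.2, 0.8   # seconds before/after each trigger

def epoch(signal, triggers, fs=FS):
    """Cut one-channel data into (-200 ms, 800 ms) epochs around triggers."""
    n_pre, n_post = int(PRE_S * fs), int(POST_S * fs)
    return np.stack([signal[t - n_pre:t + n_post] for t in triggers])

def balance(epochs, labels, seed=0):
    """Subsample non-target (0) trials to match the number of targets (1)."""
    rng = np.random.default_rng(seed)
    targets = np.flatnonzero(labels == 1)
    non_targets = rng.choice(np.flatnonzero(labels == 0),
                             size=len(targets), replace=False)
    keep = np.sort(np.concatenate([targets, non_targets]))
    return epochs[keep], labels[keep]

# Illustrative data: 10 s of one-channel noise, four trials, one target.
signal = np.random.default_rng(1).normal(size=5000)
epochs = epoch(signal, triggers=[600, 1600, 2600, 3600])
labels = np.array([1, 0, 0, 0])
balanced_epochs, balanced_labels = balance(epochs, labels)
```

Each epoch spans 0.2 s + 0.8 s at 500 Hz, i.e., 500 samples, and balancing keeps one target and one non-target trial here.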

Fig. 4: Performance in the subject-independent session. Standing and walking refer to raw EEG signals during standing and walking, respectively. Reconstructed refers to the signals regenerated through the proposed framework from EEG signals during walking. One and two asterisks indicate significance at the 95% and 99% confidence levels, respectively.
Fig. 5: Samples of reconstructed EEG signals. These signals were generated through the proposed framework from EEG signals during walking. Each sample shows the time series between −200 ms and 800 ms.

III Results and Discussion

We performed the training in subject-independent sessions and present quantitative metrics indicating the training quality and diversity. The generative and discriminative models were evaluated by visual inspection, classification performance, and signal-to-noise ratio (SNR). Statistical analysis with two-tailed paired t-tests was performed at the 95% and 99% confidence levels.

III-A Discriminative Model

III-A1 Visual inspection

Fig. 3 shows the grand averages of EEG signals during standing, noisy EEG signals during walking, and EEG signals reconstructed from the noisy signals. Although the EEG signals during standing had apparent N200 and P300 components, the signals during walking had lower N200 and P300 amplitudes because of the large amplitude of the artifacts. The reconstructed EEG data again showed strong N200 and P300 components, similar to the standing signals, indicating that the generator produced data resembling the high-SNR standing data.

III-A2 Classification performance

Fig. 4 shows the classification performance for the standing data, the noisy data during walking at 1.6 m/s, and the reconstructed data, measured by the area under the receiver operating characteristic curve (AUC). The AUC of EEG signals during walking was lower than in the standing condition because of the large-amplitude distortion of the EEG signals in the mobile environment. The averaged AUC of the reconstructed data, however, was similar to that of the walking condition.

III-A3 Signal quality

The SNR is an essential indicator of noise reduction. It is calculated as the root mean square (RMS) of the amplitude of the P300 peak divided by the RMS of the average amplitude of the pre-stimulus baseline (−200 ms to 0 ms) at channel Pz [28]. Fig. 4 shows the average SNR for the standing, walking, and reconstructed data. The SNR during walking was lower than for the standing data because of the distortion due to motion artifacts. However, applying GANs significantly increased the SNR, because the noise amplitude in the pre-stimulus interval was reduced much more than the P300 in the reconstructed EEG signals. The SNR of the reconstructed signals was even higher than that of the EEG signals during standing.
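Under the definition above, the SNR computation can be sketched as follows. The 250–450 ms P300 window here is our assumption for illustration; the paper specifies only the P300 peak and the −200 to 0 ms baseline:

```python
import numpy as np

FS = 500  # Hz; epochs span -200..800 ms, i.e. 500 samples

def rms(x):
    # Root mean square of a 1-D signal segment.
    return np.sqrt(np.mean(np.square(x)))

def erp_snr(epoch_pz):
    """RMS around the P300 peak over RMS of the pre-stimulus baseline."""
    baseline = epoch_pz[:int(0.2 * FS)]              # -200..0 ms
    p300 = epoch_pz[int(0.45 * FS):int(0.65 * FS)]   # 250..450 ms post-onset
    return rms(p300) / rms(baseline)

# Toy epoch: baseline amplitude 1, P300-window amplitude 2 -> SNR = 2.
toy = np.ones(500)
toy[225:325] = 2.0
snr = erp_snr(toy)
```

Reducing the baseline noise while preserving the P300 amplitude, as the reconstruction does, raises this ratio directly.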

III-B Generative Model

Fig. 5 shows samples of the reconstructed EEG signals. Each sample had N200 and P300 components, which means that the generative model reconstructed signals containing ERP-related components. Moreover, the reconstructed signals were also similar to the raw walking signals from which they originated, so they retain information from their origins. A few samples still seem to contain artifacts in the pre-stimulus interval compared to the EEG signals during standing. Fig. 5 also demonstrates the diversity of the reconstructed EEG signals: each sample had a different shape, which means the generative model was trained with diversity and each noisy input affected the output. The MSE loss of the generative model was 0.6301, which is quite low compared to the other training losses.

IV Conclusion

In this paper, we proposed an ERP reconstruction framework using GANs for noisy signals recorded during walking. In mobile environments such as walking or running, EEG signals contain artifacts of great amplitude, much larger than the signals reflecting user intention. Therefore, many attempts have been made to enhance BMI performance in mobile environments by developing artifact removal methods, extracting critical features from brain signals, and developing classifiers based on deep neural networks. Because of the properties of EEG, neural networks for EEG differ from networks for images; one such network is deep ConvNets, whose kernel sizes account for the channel and time dimensions. Generating EEG signals with a generative model is a novel approach, and combining deep ConvNets with a generative model enabled EEG signal generation. We constructed GANs able not only to generate data and judge validity but also to perform classification. For the dataset, we collected EEG signals during walking at 1.6 m/s on a treadmill while subjects performed an oddball ERP paradigm.

The data reconstructed by the GANs had significant components such as N200 and P300, similar to the EEG signals during standing. Moreover, the SNR of the reconstructed data was much higher than that of the noisy signals in the walking environment. The reconstruction process was diverse, producing various samples of ERPs containing N200 and P300. The loss of the generative model was comparatively low, which means the generator was trained well. Therefore, the framework appears to extract or reconstruct ERP components from noisy EEG signals, which can enhance ERP performance. The proposed framework could directly help BMIs in mobile environments reduce noise in terms of SNR. In the future, we will advance the model to improve classification performance, and we will train on all 32 channels of EEG signals to reconstruct ERP signals.

References

  • [1] S. Blum, N. Jacobsen, M. G. Bleichner, and S. Debener (2019-04) A Riemannian modification of artifact subspace reconstruction for EEG artifact handling. Front. Hum. Neurosci. 13, pp. 141. Cited by: §I.
  • [2] Y. Chen, A. D. Atnafu, I. Schlattner, W. T. Weldtsadik, M. Roh, H. J. Kim, S. Lee, B. Blankertz, and S. Fazli (2016-06) A high-security EEG-based login system with RSVP stimuli and dry electrodes. IEEE Trans. Inf. Forensic Secur. 11 (12), pp. 2635–2647. Cited by: §I.
  • [3] G. Choi, D. Ryu, Y. Jo, Y. S. Kim, W. Park, H. Min, and Y. Park (2019-02) Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography. Opt. Express 27 (4), pp. 4927–4943. Cited by: §I.
  • [4] X. Ding and S. Lee (2013-03) Changes of functional and effective connectivity in smoking replenishment on deprived heavy smokers: a resting-state fMRI study. PLoS One 8 (3), pp. e59331. Cited by: §I.
  • [5] F. Fahimi, Z. Zhang, W. B. Goh, T. Lee, K. K. Ang, and C. Guan (2019-01) Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI. J. Neural Eng. 16 (2), pp. 026007. Cited by: §II-B3.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Adv. Neural. Inf. Process. Syst. (NIPS), pp. 2672–2680. Cited by: §I.
  • [7] K. Gramann, J. T. Gwin, N. Bigdely-Shamlo, D. P. Ferris, and S. Makeig (2010-10) Visual evoked responses during standing and walking. Front. Hum. Neurosci. 4 (202). Cited by: §I.
  • [8] K. G. Hartmann, R. T. Schirrmeister, and T. Ball (2018) EEG-GAN: generative adversarial networks for electroencephalograhic (EEG) brain signals. arXiv preprint arXiv:1806.01875. Cited by: §I.
  • [9] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1125–1134. Cited by: §I.
  • [10] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §II-B3.
  • [11] M. Kleiner, D. Brainard, and D. Pelli (2007-08) What’s new in Psychtoolbox-3?. Percept. 36 (1 suppl.), pp. 14. Cited by: §II-A3.
  • [12] R. Krepki, B. Blankertz, G. Curio, and K. Müller (2007-02) The Berlin Brain-Computer Interface (BBCI): towards a new communication channel for online control in gaming applications. Multimed. Tools. Appl. 33 (1), pp. 73–90. Cited by: §II-A3.
  • [13] N. Kwak, K. Müller, and S. Lee (2015-08) A lower limb exoskeleton control system based on steady state visual evoked potentials. J. Neural Eng. 12 (5), pp. 056009. Cited by: §I.
  • [14] N. Kwak, K. Müller, and S. Lee (2017-02) A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PloS One 12 (2), pp. e0172578. Cited by: §I.
  • [15] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance (2018-07) EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 15 (5), pp. 056013. Cited by: §I, §I, §II-B1.
  • [16] M. Lee, S. Fazli, J. Mehnert, and S. Lee (2015-08) Subject-dependent classification for robust idle state detection using multi-modal neuroimaging and data-fusion techniques in BCI. Pattern Recognit. 48 (8), pp. 2725–2737. Cited by: §I.
  • [17] M. Lee, O. Kwon, Y. Kim, H. Kim, Y. Lee, J. Williamson, S. Fazli, and S. Lee (2019-01) EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience 8 (5), pp. giz002. Cited by: §II-A3.
  • [18] M. Lee, J. Williamson, D. Won, S. Fazli, and S. Lee (2018-05) A high performance spelling system based on EEG-EOG signals with visual feedback. IEEE Trans. Neural Syst. Rehabil. Eng. 26 (7), pp. 1443–1459. Cited by: §II-A3.
  • [19] M. Lee, C. Park, C. Im, J. Kim, G. Kwon, L. Kim, W. H. Chang, and Y. Kim (2016-08) Motor imagery learning across a sequence of trials in stroke patients. Restor. Neurol. Neurosci. 34 (4), pp. 635–645. Cited by: §I.
  • [20] M. Lee, S. Yeom, B. Baird, O. Gosseries, J. O. Nieminen, G. Tononi, and S. Lee (2018) Spatio-temporal analysis of EEG signal during consciousness using convolutional neural network. In Proc. 6th Int. Conf. IEEE Brain-Computer Interface (BCI), pp. 1–3. Cited by: §I.
  • [21] Y. Lee and M. Lee (2020) Decoding visual responses based on deep neural networks with ear-EEG signals. In Proc. 8th Int. Conf. IEEE Brain-Computer Interface (BCI), pp. 1–6. Cited by: §I.
  • [22] T. P. Luu, S. Nakagome, Y. He, and J. L. Contreras-Vidal (2017-08) Real-time EEG-based brain-computer interface to a virtual avatar enhances cortical involvement in human treadmill walking. Sci. Rep. 7 (1), pp. 8895. Cited by: §I.
  • [23] B. R. Malcolm, J. J. Foxe, J. S. Butler, W. B. Mowrey, S. Molholm, and P. De Sanctis (2019-08) Long-term test-retest reliability of event-related potential (ERP) recordings during treadmill walking using the mobile brain/body imaging (MoBI) approach. Brain Res. 1716, pp. 62–69. Cited by: §I.
  • [24] A. D. Nordin, W. D. Hairston, and D. P. Ferris (2018-08) Dual-electrode motion artifact cancellation for mobile electroencephalography. J. Neural Eng. 15 (5), pp. 056024. Cited by: §I.
  • [25] A. Odena, C. Olah, and J. Shlens (2017) Conditional image synthesis with auxiliary classifier gans. In Proc. 34th Int. Conf. Machine Learning (ICML), pp. 2642–2651. Cited by: §I, §I.
  • [26] S. Panwar, P. Rad, J. Quarles, E. Golob, and Y. Huang (2019) A semi-supervised wasserstein generative adversarial network for classifying driving fatigue from EEG signals. In IEEE Int. Conf. Systems, Man and Cybernetics (SMC), pp. 3943–3948. Cited by: §I.
  • [27] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §I, §I, §II-B3.
  • [28] H. Schimmel (1967-07) The (±) reference: Accuracy of estimated mean components in average response studies. Science 157 (3784), pp. 92–94. Cited by: §III-A3.
  • [29] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball (2017-08) Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 38 (11), pp. 5391–5420. Cited by: §I.
  • [30] E. Symeonidou, A. D. Nordin, W. D. Hairston, and D. P. Ferris (2018-04) Effects of cable sway, electrode surface area, and electrode mass on electroencephalography signal quality during motion. Sens. 18 (4), pp. 1073. Cited by: §I.
  • [31] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan (2002-06) Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113 (6), pp. 767–791. Cited by: §I.
  • [32] J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum (2017-12) Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans. Med. Imaging 36 (12), pp. 2536–2545. Cited by: §I.
  • [33] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2223–2232. Cited by: §I.
  • [34] X. Zhu, H. Suk, S. Lee, and D. Shen (2016-08) Canonical feature selection for joint regression and multi-class identification in Alzheimer's disease diagnosis. Brain Imaging Behav. 10 (3), pp. 818–828. Cited by: §I.