Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks

05/17/2019 · by Shota Harada et al.

The effectiveness of biosignal generation and data augmentation with biosignal generative models based on generative adversarial networks (GANs), which are a type of deep learning technique, was demonstrated in our previous paper. GAN-based generative models only learn the projection between a random distribution as input data and the distribution of training data. Therefore, the relationship between input and generated data is unclear, and the characteristics of the data generated by this model cannot be controlled. This study proposes a method for generating time-series data based on GANs and explores its ability to generate biosignals with certain classes and characteristics. Moreover, in the proposed method, latent variables are analyzed using canonical correlation analysis (CCA) to represent the relationship between input and generated data as canonical loadings. Using these loadings, we can control the characteristics of the data generated by the proposed method. The influence of class labels on generated data is analyzed by feeding data interpolated between two class labels into the generator of the proposed GANs. The CCA of the latent variables is shown to be an effective method of controlling the characteristics of the generated data. Using the proposed method, we are able to model the distribution of time-series data without requiring domain-dependent knowledge. Furthermore, it is possible to control the characteristics of these data by analyzing the trained model. To the best of our knowledge, this work is the first to generate biosignals using GANs while controlling the characteristics of the generated data.


I Introduction

Biosignals, such as electrocardiogram (ECG) and electroencephalogram (EEG) signals, strongly reflect human internal states. In particular, abnormality in the human body, including diseases, can cause visible changes in the patterns of biosignals. For example, myocardial infarction induces an increase in the Q-wave and ST segment of ECGs. Therefore, abnormality in the human body can be detected by classifying the patterns of biosignals. In fact, physicians refer to the patterns of biosignals to diagnose diseases and determine treatment.

Biosignal analysis benefits various fields such as medicine and healthcare. In the medical field, biosignal analysis is utilized to detect diseases such as myocardial infarction [1], epileptic seizures [2], and psychiatric disorders [3]. In healthcare applications, biosignal analysis is utilized for brain–computer interfaces (BCIs) and the control of prosthetic limbs based on electromyograms [4]. For BCIs, Rahul et al. reported that electric wheelchairs can be controlled using EEG [5]. In other BCI applications, attempts to control drones using EEG have also been reported [6].

Numerous studies have reported that biosignals can be classified using discriminative deep learning models [7, 8]. Owing to the development of deep learning, several studies have achieved considerable increases in classification accuracy.

The study of generative models based on deep learning was motivated by the contribution of generative adversarial networks (GANs) [9]. GANs are a framework for learning a generative model. In a GAN, two neural networks, one for generating synthetic data and the other for discriminating the synthetic data from actual data, are simultaneously trained while competing with each other. A GAN-based method allows for the generation of data similar to given observations without domain-dependent knowledge of the target. A large number of studies have used GANs for various purposes. In particular, numerous studies on GANs have been reported in the image domain for tasks such as image super-resolution [10], training stabilization [11], and domain transformation [12]. However, these studies mainly focus on the generation of images, and only a few have reported the generation of time-series data [13, 14, 15, 16].

Recently, we reported that biosignals can be generated using a GAN framework and that the generated signals are effective for data augmentation in biosignal classification [17]. In [17], the internal structure of each neural network in the GAN was developed based on a recurrent neural network (RNN) using long short-term memory (LSTM) [18] for its hidden layers, thereby allowing the GAN framework to be adapted to time-series data generation. Several generative models of biosignals require domain-dependent knowledge of the target biosignals; in contrast, this method does not. The validity of the biosignal generation method proposed in [17] was qualitatively evaluated using the overall similarity between training and generated data, and its effectiveness for data augmentation was shown via biosignal classification experiments.

However, the approach in [17] had the following limitations:

  • The GANs should be prepared and independently trained for each class, resulting in an increase in the number of model parameters in proportion to the number of classes.

  • The generated data have not been evaluated quantitatively.

  • The behavior of the generator is unclear.

In this study, we propose a conditional generation method capable of generating multiple classes of time-series data from one model. The technical highlight of our study is to control the characteristics of the data generated from the proposed method by clarifying the relationship between the input and generated data. In the proposed method, class labels are simultaneously input to a generator and a discriminator and adapted to conditional generation. The aim of the proposed method is to reduce training cost and clarify the difference between the classes of training data by training the time-series data of multiple classes with a single model, in contrast to our previous method. In the experiment, the quality of the generated data is quantitatively evaluated using the similarity between the data generated by the proposed method and the training data. It is difficult to control the characteristics of the generated data because the input–output relationship in ordinary GANs is unclear. Therefore, we analyze the input–output relationship of GANs and control the generated data by referring to the analysis result.

The primary contributions of this work are as follows:

  • A conditional method for generating multiple classes of biosignals from a single model is developed.

  • The performance of the proposed method is quantitatively verified.

  • The behavior of the GAN-based generative model is analyzed to control the characteristics of the data generated by the proposed method.

II Related Work

II-A Biosignal Generation Models

Various biosignal generation models have been investigated for a long time [19, 20, 21, 22, 23]. The purpose of such studies is two-fold. One is the understanding of the mechanism of biological systems [20, 24, 25]. For example, Silva et al. [25] presented their view of the basic mechanisms of the routes to epileptic seizures. The other is the generation of data for evaluating biosignal processing algorithms [22, 23, 21]. For example, McSharry et al. proposed an ECG generation model based on three ordinary differential equations [23].

Biosignal generation models fall into two approaches: the mathematical model-based approach and the machine learning-based approach. For the mathematical model-based approach, McSharry et al. proposed an ECG generation model based on differential equations [23]. This model consists of three ordinary differential equations and can control various characteristics of the generated signals, such as the interval between waves and the values of the P-waves and Q-waves. Wendling et al. proposed a multiple coupled populations model, where each single population model consists of ordinary differential equations [22]. As an example of the machine learning-based approach, Koski et al. [19] proposed an ECG generation model based on hidden Markov models (HMMs). In [19], artificial ECG signals were generated using an HMM, and two-class classification between normal and pathological ECGs was performed using an HMM. Although both approaches can potentially generate high-quality data with characteristics similar to the original data, each has its advantages and disadvantages. The mathematical model-based approach can change the characteristics of generated data by adjusting parameters; however, domain-dependent knowledge is required. The machine learning-based approach does not require domain-dependent knowledge and can therefore be applied to general applications, whereas the model-based approach performs well only if the assumed model structure sufficiently approximates the true data distribution.

The proposed method is a machine learning-based approach. Our method does not require distributional assumptions because it is based on a GAN, a neural-network-based generative model; however, the resulting models have low interpretability. Therefore, in this study, analyses were performed from various viewpoints to clarify the behavior of our method.

II-B GANs

(a) GANs for image generation
(b) GANs for time-series generation
Fig. 1: Overview of the GAN framework.

GANs are a method for estimating generative models proposed by Goodfellow et al. in 2014 [9]. Fig. 1 shows an overview of the GAN framework. Using this framework, it is possible to generate data similar to given observations without domain-dependent knowledge of the target. GANs have received considerable attention in recent years, particularly in the computer vision community, and various derivatives have been proposed by changing their learning methods and structures.

A GAN consists of two different networks. One is a generative model referred to as a generator. A vector of random numbers is fed into the generator, which produces data with the same dimensions as the training data. The other is a discriminative model referred to as a discriminator. Training data and the data produced by the generator are input to the discriminator, which then discriminates whether the input came from the training data or the generated data.

The generator and discriminator are repeatedly trained in the GAN framework. Their relationship is frequently compared to that of banknote counterfeiters and police. The generator learns to generate data that the discriminator classifies as training data. In contrast, the discriminator learns to discriminate training data and generated data correctly. As a result, the generator gradually gains the ability to generate data that are similar to but not completely the same as training data. In other words, the generator learns the mapping from a distribution of random numbers onto the distribution of training data.
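As a minimal numeric sketch of this minimax relationship (illustrative only, not the authors' implementation), the GAN value function can be evaluated for given discriminator outputs:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

# A maximally confused discriminator outputs 0.5 for every input,
# which corresponds to the equilibrium where the generator has
# matched the training distribution.
v = gan_value(np.full(4, 0.5), np.full(4, 0.5))
```

At that equilibrium, `v` equals -2 log 2 ≈ -1.386, the global optimum of the original GAN game [9].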

In recent years, numerous studies have been performed using GANs. Researchers have proposed new GANs for various purposes such as improving learning stability, generating high-resolution images, and translating between different classes of images. For example, using Wasserstein GANs, Arjovsky et al. improved learning stability, prevented mode collapse, and provided meaningful learning curves useful for hyperparameter searches. Moreover, methods of conditional generation using GANs have been reported [16, 26, 27]. These methods achieve conditional generation by considering auxiliary information.

These studies mainly focus on the generation of images, and only a few studies have reported the generation of time-series data [13, 14, 15, 16]. For example, Yu et al. proposed SeqGAN for natural language generation [14]. In SeqGAN, the generator and discriminator are constructed based on LSTMs, and reinforcement learning is applied to the training of the generator. Dong et al. proposed a music generation method based on convolutional GANs [15]. This method consists of multiple generators and discriminators that generate and identify the sound of each track.

Our previous study [17] demonstrated the effectiveness of data augmentation using biosignals generated from GANs. In [17], biosignal generation and data augmentation were performed by a time-series data generation method based on GANs constructed with LSTMs. However, certain limitations exist in our previous study. First, the GAN-based method must be trained independently for each class. Second, the quality of the generated data was not quantitatively evaluated. Finally, the latent variable space was not analyzed.

This study proposes a GAN-based generation method that can select the class of the generated data via a class label. In this method, models do not have to be trained independently for each class. Furthermore, the behavior of the proposed GAN is characterized through input–output analysis, and characteristics of the generated data that were not considered during learning are controlled using the analysis results. Control via a class label is the same as in existing conditional GANs; the contribution of this work is to control characteristics of the generated data that were not considered during training by analyzing the trained model.

(a) Structure of the proposed method.
(b) Internal structure of the generator. (c) Internal structure of the discriminator. The triangular node indicates multiplication.
Fig. 2: Overview of the proposed method. On one hand, the generator learns to generate data similar to the original biosignal. On the other hand, the discriminator learns to discriminate the data generated by the generator and the original biosignal. By learning these neural networks alternately, the generator can generate data close to the original biosignal.
Input: training dataset; uniform distribution (noise prior); discrete uniform distribution over class labels; number of classes
Initialize: weights of the discriminator and of the generator
for number of training iterations do
     for number of unrolling steps do
          • Sample a minibatch of noise sequences from the noise prior
          • Randomly generate class-label sequences from the discrete uniform distribution
          • Sample a minibatch of examples from the dataset
          • Sample the class-label sequences corresponding to the sampled examples
          • Update the discriminator by ascending its stochastic gradient
          if first update at this iteration then
               • Save the weights of the discriminator
          end if
     end for
     Sample a minibatch of random sequences from the noise prior
     Randomly generate class-label sequences from the discrete uniform distribution
     Update the generator by descending its stochastic gradient
     Load the saved weights of the discriminator
end for
Algorithm 1: Training procedure of the proposed method

III Time-Series Data Generation Method

Fig. 2 shows the structure of the proposed method. Based on the GAN framework, the proposed method consists of a generator and a discriminator. The proposed method is composed of RNNs based on LSTMs to adapt to time-series data, whereas most existing GANs are constructed based on convolutional neural networks.

The generator consists of a deep LSTM layer and a fully connected layer. The deep LSTM layer has multiple hidden layers, each containing LSTM units, and the fully connected layer has a sigmoid activation function. The generator receives a latent variable sequence and an additional information (class-label) sequence, both of the same length as the training data, as input. More specifically, at each time point the generator receives the latent variable and the class label simultaneously, combined into a single input vector. At each time point, the latent variable is independently sampled from a uniform distribution. The class-label sequence is a constant sequence whose value is sampled once per sequence from a discrete uniform distribution over the number of classes. The sequence of outputs from the fully connected layer is treated as the conditionally generated time-series data.

The discriminator consists of a deep LSTM layer, a fully connected layer, and an average pooling layer. The deep LSTM layer in the discriminator has the same number of hidden layers and LSTM units per hidden layer as the generator, and the fully connected layer has a sigmoid activation function. The average pooling layer outputs a scalar by averaging the input over its dimensions. As with the generator, the discriminator also receives a data value and a class label simultaneously at each time point. The input sequence is sampled either from the training data or from the generator. If the input is sampled from the training data, the corresponding class label is the label of that sample; if the input is generated by the generator, the corresponding class label is the one sampled from the discrete uniform distribution when generating it. Given an input sequence and its class-label sequence, the output of the discriminator is a scalar value representing the probability that the input came from the training data.

In the training, the generator G and the discriminator D play the minimax game with the evaluation function defined as

min_G max_D V(D, G) = E_{(x, c) ~ p_data}[log D(x, c)] + E_{z ~ p_z, c ~ p_c}[log(1 − D(G(z, c), c))],   (1)

where x and c denote a training sample and its class label, z denotes the latent sequence, and p_data and p_z are the distributions of the training data and the latent variables, respectively. The training procedure is shown in Algorithm 1. The gradients with respect to the weights of the networks are calculated using the backpropagation-through-time method. Each minibatch for the training of the discriminator contains the same number of real and generated samples, and the class-label ratio of the minibatches used to train the generator and discriminator is not specified.

The weights are updated based on an unrolled GAN, which is an updating rule proposed by Metz et al. [28]. Using this updating rule, we can avoid mode collapse and prevent the generator from generating biased data.

IV Biosignal Generation Experiment

Biosignal generation experiments were conducted using three real-world biosignal datasets to quantitatively evaluate the biosignal generation of multiple classes using the proposed method. In this experiment, the data generated by the proposed method were evaluated qualitatively and quantitatively. First, the generated biosignals were qualitatively evaluated by comparing them with the actual biosignals. Then, the similarity between the generated data and training data was computed to evaluate the quality of the data.

IV-A Datasets

Three real-world biosignal datasets from the UEA & UCR Time Series Classification Repository [29] were used in this study. All datasets were normalized to the range [0, 1]. The details of these datasets are as follows. The first dataset is an ECG dataset referred to as "ECG200," which was created by Olszewski [30]. It consists of samples of an ECG series. Each series traces the electrical activity during one heartbeat, and the length of each series is . Out of the samples, are labeled as normal and the remaining are myocardial infarctions (or abnormal). In this paper, the normal and abnormal classes of the ECG200 dataset are referred to as class 1 and class 2, respectively. We randomly extracted samples for training data from the ECG200 dataset. The second dataset is an ECG dataset referred to as "TwoLeadECG," which was collected and added to the repository by Keogh [31]. This dataset consists of samples of an ECG series, and the length of each series is (each time series reflects one heartbeat). In the TwoLeadECG dataset, two different ECG leads are considered, and each signal originates from one of these two leads; the aim is to distinguish between the signals originating from each lead. In this study, the classes of the TwoLeadECG dataset are referred to as class 1 and class 2. We randomly extracted samples of each class as training data. The third dataset is an EEG dataset referred to as the "Epileptic Seizure Recognition Data Set" [32]. This dataset contains normal series and abnormal series recorded during epileptic seizures. We randomly sampled training data from the entire EEG dataset. The length of each series was .
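The normalization to [0, 1] mentioned above can be sketched as per-series min–max scaling (an illustrative sketch; the exact preprocessing used for the repository data may differ):

```python
import numpy as np

def minmax_normalize(x):
    """Scale a 1-D series into the range [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

scaled = minmax_normalize([2.0, 4.0, 6.0])   # -> [0.0, 0.5, 1.0]
```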

IV-B Experimental Setup

In the experiments on each dataset, the number of LSTM units of the generator and the discriminator was , and the number of LSTM layers of the generator and the discriminator was . The Adam optimizer [33] with an initial learning rate of was used for weight updating. The number of training epochs was set to be .

(a) ECG200 dataset
(b) TwoLeadECG dataset
(c) EEG dataset
Fig. 3: Example of the original and generated signals. Three medoids obtained by k-medoids clustering () from the original dataset are shown as the original signal examples. The signal most similar to each original signal example was selected from the data generated using the proposed method.

The similarity for the quantitative evaluation was computed using dynamic time warping (DTW) [34, 35]. The DTW distance can be calculated as follows. Given two time-series X = (x_1, …, x_n) and Y = (y_1, …, y_m), the DTW distance is computed by finding the best alignment between them. First, to align the two time-series, an n × m matrix is constructed whose (i, j) element equals d(x_i, y_j), the distance between points x_i and y_j. An alignment between the two time-series is represented by a warping path W = (w_1, …, w_K), where w_k indexes an element of the matrix. The warping path starts at the bottom-left corner and ends at the top-right corner of the matrix. The best alignment is given by the warping path through the matrix that minimizes the total cost of aligning its points, and the corresponding minimum total cost is the DTW distance:

DTW(X, Y) = min_W Σ_{k=1}^{K} w_k.   (2)
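The dynamic-programming computation described above can be sketched as follows, using the absolute difference as the local distance d(x_i, y_j) (the paper's exact local distance is not specified in this excerpt):

```python
import numpy as np

def dtw_distance(x, y):
    """DTW distance between two 1-D series via dynamic programming."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]

# Identical series have zero distance; a repeated point is absorbed
# by the warping path at no extra cost.
d0 = dtw_distance([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0])
```

The computation is O(nm) in time and memory, which is why the evaluation below restricts the number of compared pairs.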

In addition, the quantitative evaluation compares the results obtained using the proposed method with those of four existing data generation methods. The first method adds a noise sequence to the training data. We generated the noise n at each time point from a Gaussian distribution with zero mean and the standard deviation calculated across all training data. New samples can be generated as

x̃_i = x_i + γ·n,   (3)

where x_i is the i-th sample of the training data and γ is a constant value. In our experiments, γ was set to be . The second method generates new data by interpolating between training data with the same class label. New samples can be synthesized as

x̃_i = x_i + λ·(x_j − x_i),   (4)

where x_j denotes the training sample most similar to x_i and λ is a coefficient related to interpolation in a range of . The similarity between training data is calculated by the Euclidean distance. In our experiments, we used λ = . The third method extrapolates between training data with the same class label to generate new data. New samples are synthesized as

x̃_i = x_i + λ·(x_i − x_j),   (5)

where x_j denotes the training sample most similar to x_i and λ is a coefficient related to extrapolation in a range of . The similarity between training data is calculated by the Euclidean distance. In our experiments, we used λ = . The final method uses an HMM to generate data. Each state of the HMM was constructed with a Gaussian mixture distribution. The parameters of the HMM were estimated using the Baum–Welch algorithm, and the number of states was determined based on Akaike's information criterion.
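The three non-HMM baselines can be sketched directly from Eqs. (3)–(5); the coefficient values below are placeholders, since the values used in the experiments are elided in this excerpt:

```python
import numpy as np

def add_noise(x, sigma, gamma=0.1, rng=None):
    """Eq. (3): perturb each time point with zero-mean Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    return x + gamma * rng.normal(0.0, sigma, size=x.shape)

def interpolate(x_i, x_j, lam=0.5):
    """Eq. (4): move x_i toward its most similar neighbour x_j."""
    return x_i + lam * (x_j - x_i)

def extrapolate(x_i, x_j, lam=0.5):
    """Eq. (5): move x_i away from its most similar neighbour x_j."""
    return x_i + lam * (x_i - x_j)
```

In the paper, x_j is the Euclidean nearest neighbour of x_i within the same class.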

IV-C Generation Results

Fig. 3 shows examples of the original data and the data generated using the proposed method. Figs. 3(a), (b), and (c) are examples for the ECG200, TwoLeadECG, and EEG datasets, respectively. The left side of each figure shows an example of class 1, and the right side shows an example of class 2. In each figure, three medoids obtained by k-medoids clustering () are shown as the original signal examples. For the generated signal examples, the sequence most similar to each original signal example was selected based on the DTW distance.

IV-D Quality Evaluation

The average similarity between the original and generated data was computed to evaluate the quality of the data generated using the proposed method. Here, the original data were from the dataset used for training the proposed method, and the average similarity among the original data was used as a baseline for evaluation. In the evaluation procedure, the same amount of data was first selected from each data group, and then the similarities of all combinations of the selected data were calculated by brute force. The average DTW distance and its standard deviation were used as the evaluation result. A small average DTW distance is desirable because this value indicates the dissimilarity between the target data and the original data. However, an extremely small value implies that the target data merely replicate the original data, which is undesirable.
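The brute-force evaluation described above can be sketched generically; any distance function can be plugged in (here a plain Euclidean distance stands in for DTW to keep the sketch self-contained):

```python
import numpy as np
from itertools import combinations

def average_pairwise_distance(samples, dist):
    """Mean and standard deviation of dist over all sample pairs."""
    d = [dist(a, b) for a, b in combinations(samples, 2)]
    return float(np.mean(d)), float(np.std(d))

euclid = lambda a, b: float(np.linalg.norm(a - b))
samples = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
mean_d, std_d = average_pairwise_distance(samples, euclid)   # pair distances: 1, 2, 1
```

Evaluating within the original data gives the baseline; evaluating between original and generated data gives the reported scores.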

Fig. 4 shows the quality of the data generated by each data augmentation method. Figs. 4(a) and (d) show the evaluated result of class 1 and class 2 of the ECG200 dataset, respectively. Figs. 4(b) and (e) show the evaluation result of class 1 and class 2 of the TwoLeadECG dataset, respectively. Figs. 4(c) and (f) show the evaluation result of class 1 and class 2 of the EEG dataset, respectively. Each bar indicates the average DTW distance among the original data and between the original data and target data generated using each data augmentation method. The horizontal axis labels indicate the evaluated data, the vertical axis represents the average similarity obtained using the DTW distance, and the error bar indicates the standard deviation of these similarities.

(a) Class 1 of ECG200 dataset
(b) Class 1 of TwoLeadECG dataset
(c) Class 1 of EEG dataset
(d) Class 2 of ECG200 dataset
(e) Class 2 of TwoLeadECG dataset
(f) Class 2 of EEG dataset
Fig. 4: Similarity of the data generated by each data augmentation method. Each bar in the graph shows the average value and standard deviation. The red dashed line indicates the accuracy of the classifier when data augmentation is not applied.

V Analysis of the Input–Output Relationship

Two experiments were performed to analyze the input–output relationship of the generator of the proposed method and confirm controllability. One was to analyze class labels and the other to analyze the latent variable space. Class-label analysis was performed to evaluate the discrimination between the classes of the generated data. Latent variable space analysis was performed to clarify the relationship between the input data as a latent variable and the characteristics of the generated data. Furthermore, the characteristics of the generated data were controlled using the results of this analysis.

V-A Analysis of the Effect of Class Labels

Class labels were interpolated to verify whether data that reflect the features of each class can be generated according to a given class label. An input class label was obtained by linear interpolation between the original class labels. If the interpolated label is close to a certain class label, the generated data should strongly reflect the characteristics of the training data of that class. In addition, the difference between the data of each class was confirmed using the transition of the data generated by the proposed method.

The generator of the proposed method was given a fixed random sequence and an interpolated class label. The data were divided into 100 class 1 and class 2 samples. By comparing the data generated with each class label and the average value at each time point of the training data of each class, it is demonstrated that the generated data reflect the features of each class of training data.
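The interpolation of class labels can be sketched as a grid of soft labels between the two one-hot classes; the step size here is a hypothetical choice:

```python
import numpy as np

lams = np.linspace(0.0, 1.0, 11)                 # interpolation coefficients
# soft labels between class 1 ([1, 0]) and class 2 ([0, 1])
labels = np.stack([1.0 - lams, lams], axis=1)
```

Each row of `labels`, held constant over time and paired with a fixed random sequence, yields one row of the heat maps in Fig. 5.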

Fig. 5 shows the results of the interpolation of class labels by the proposed method. In these figures, the amplitude of each generated sequence is shown as a heat map, and the vertical axis shows the interpolated class label. Figs. 5(a), (b), and (c) are the results obtained using the proposed method trained on the ECG200, TwoLeadECG, and EEG datasets, respectively. In the heat map in the middle of each figure, the results are shown in order of the interpolated class label from the top.

(a) ECG200 dataset
(b) TwoLeadECG dataset
(c) EEG dataset
Fig. 5: Results of the interpolation of class labels on the ECG200, TwoLeadECG, and EEG datasets. The top and bottom columns are data generated from fixed random sequence and class labels and , respectively. The vertical axis follows the change in class label , and the horizontal axis shows the data point of each time-series.

V-B Analysis of Latent Variable Space and Control of Generated Data

The latent variable space was analyzed to control the characteristics of the generated data. In a GAN-based method, to generate data with certain characteristics, it is necessary to manually find input data yielding the desired characteristics from a large number of pairs of generated and input data, because there is no direct parameter for controlling the characteristics of the generated data. However, this task is highly time consuming and undesirable. Therefore, it is better to control the behavior of the GAN-based method automatically. The behavior of the proposed method can be understood by analyzing its input–output relationship.

Canonical correlation analysis (CCA) was conducted to analyze the latent variable space. CCA is a method for analyzing the interrelationship between two variable groups. It linearly transforms each variable group into a pair of variates with the maximum correlation. CCA determines the transformation vectors a and b as

(a, b) = argmax_{a, b} corr(aᵀx, bᵀy),   (6)

where x and y denote the two variable groups. Even though linear CCA has limitations, it was performed to obtain a broad estimate of the behavior of the proposed method.

In this experiment, CCA was performed on pairs of the input data and variables extracted from the generated data. For the ECG200 and TwoLeadECG datasets, the variables extracted from the generated data were the maximum value, the point of the maximum value, the minimum value, the point of the minimum value, the maximum-to-minimum interval length, the mean amplitude, and the mean frequency. In the case of the EEG dataset, the mean amplitude, standard deviation, median, and mean frequency were extracted from the generated data. Then, the canonical loadings were obtained, which indicate the contribution of the original variable groups to the transformed variates.
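A minimal linear CCA can be sketched as an SVD of the whitened cross-covariance (an illustrative sketch, not the analysis code used in the paper; a small ridge term is added for numerical stability):

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    """Canonical correlations and projection weights for X (n x p), Y (n x q)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)          # symmetric eigendecomposition
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)           # singular values = canonical correlations
    A = inv_sqrt(Sxx) @ U                 # weights for the X variables
    B = inv_sqrt(Syy) @ Vt.T              # weights for the Y variables
    return s, A, B
```

The canonical loadings used in the paper are then the correlations between each original variable and the canonical variates `Xc @ A` and `Yc @ B`.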

The generated data were controlled by changing the input data based on the canonical loadings obtained from CCA. The canonical loadings corresponding to the highest canonical correlation coefficient, multiplied by a constant ranging from to , were given to the proposed model as input data, and the characteristics of the generated data were observed.

Fig. 6 shows the results. The panels of Fig. 6 show the canonical loadings of the ECG200, TwoLeadECG, and EEG datasets computed by CCA. Each graph shows the canonical loadings corresponding to the first canonical correlation coefficient. In each figure, the left side shows the canonical loadings of the input data, and the right side shows the canonical loadings of the variables extracted from the generated data.

(a) Class 1 of the ECG200 dataset
(b) Class 2 of the ECG200 dataset
(c) Class 1 of the TwoLeadECG dataset
(d) Class 2 of the TwoLeadECG dataset
(e) Class 1 of the EEG dataset
(f) Class 2 of the EEG dataset
Fig. 6: Results of the input–output analysis of the proposed method obtained from the CCA between the generated data and corresponding input data.

Fig. 7 shows the results of the attempt to control the generated data based on the first canonical loadings of the input data. The panels of Fig. 7 show the control results for the ECG200, TwoLeadECG, and EEG datasets. In each figure, the left, middle, and right parts show the input data constructed from the canonical loadings, the generated data, and the variables extracted from the generated data, respectively.

(a) Class 1 of the ECG200 dataset
(b) Class 2 of the ECG200 dataset
(c) Class 1 of the TwoLeadECG dataset
(d) Class 2 of the TwoLeadECG dataset
(e) Class 1 of the EEG dataset
(f) Class 2 of the EEG dataset
Fig. 7: Example of controlling the generated data based on the CCA results. The input data used for control were obtained from the first canonical loadings of the input data.

Vi Discussion

Fig. 3 confirms that the proposed method generates time-series data with characteristics similar to those of the original data. For the ECG200 dataset, the peak close to the initial time point and the subsequent rapid decrease and increase are retained in the generated data. For the other datasets, the characteristics of the training data are largely reproduced. In addition, comparing classes 1 and 2 of the generated data confirms that the distinctive features of each class are captured.

Fig. 4 quantitatively shows that the quality of the generated data is high, because the average similarity between the data generated by the proposed method and the original data is close to the average similarity among the original data themselves. The results of the proposed method are not inferior to those of the baseline methods other than our previous method and the HMM. Because those baselines generate data by transforming the training data in a simple manner, the fact that the proposed method matches their results further indicates the high quality of its generated data. Furthermore, the results of the proposed method are not inferior to those of our previous method, which trains a separate model for each class. This demonstrates that one model can replace the multiple models of our previous method, which should reduce computational costs: when the number of layers and the number of LSTM units are equal in both methods, the proposed method requires roughly 1/C of the parameters of the conventional approach, where C is the number of classes.
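The similarity measure underlying Fig. 4 is based on the DTW distance [34], [35]. A minimal sketch of the classic dynamic-programming formulation, with an absolute-difference local cost as our assumption (the paper's exact cost and any warping-window constraint are not specified here):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences
    (classic DP formulation, no warping-window constraint)."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j]: minimal cost of aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike the Euclidean distance, DTW tolerates local time shifts: a sequence and a time-stretched copy of it can have zero distance.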

Fig. 5 confirms that the features of the generated data can be controlled through the auxiliary information given at training time. For the ECG200 dataset, as the class label changes from class 1 to class 2, the change in amplitude around the initial time point gradually becomes more moderate and the fluctuation of the amplitude around the intermediate time points gradually decreases. For the EEG dataset, the frequency of the generated data increases with the change in the class label, as shown in Fig. 5(c). These transitions of the generated data are reasonable given the details of each dataset. From these results, it is confirmed that the characteristics of generated biosignals can be controlled by training a model using prior information such as class labels.
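The labels fed to the generator between two classes can be obtained by simple linear interpolation of the two one-hot vectors; a sketch, assuming one-hot label encoding:

```python
def interpolate_labels(c1, c2, alpha):
    """Linearly interpolate between two one-hot class labels.
    alpha = 0 reproduces c1, alpha = 1 reproduces c2."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(c1, c2)]
```

Sweeping `alpha` from 0 to 1 and feeding each interpolated label to the generator produces the gradual class-to-class transitions observed in Fig. 5.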

In Fig. 6, the relationship between the input and generated data is expressed as canonical loadings, from which the behavior of the model trained by the proposed method can be grasped. Furthermore, Fig. 7 reveals the effectiveness of controlling the generated data based on the CCA results. In Figs. 6(a) and (b), there is a strong canonical correlation between the first to fourth time points of the input data and the maximum value and mean frequency of the generated data. As a result of control using these canonical loadings, Figs. 7(a) and (b) confirm that the maximum value and mean frequency of the generated data increase with the change in input data. For the other datasets, the characteristics of the generated data are likewise controlled according to the input–output relationships shown in Fig. 6. These results confirm that characteristics not given as auxiliary information at training time, such as the mean frequency and maximum value, can be controlled using the input–output analysis based on CCA.

Vii Conclusion

In this study, a conditional generation method for time-series data based on GANs was proposed. In the proposed method, each neural network in the GAN uses LSTM units in its hidden layers, thereby allowing for the conditional generation of time-series data according to class labels. With this method, data similar to the training data can be generated without requiring domain-dependent knowledge.
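One common way to realize such conditioning is to concatenate the class-label vector with the latent noise at every time step before feeding the sequence to the LSTM generator. The sketch below illustrates this scheme; it is an assumption about the conditioning mechanism rather than the paper's verbatim architecture:

```python
import random

def conditioned_noise_sequence(length, noise_dim, class_label, seed=0):
    """Build one generator input sequence: at every time step a latent
    noise vector drawn from U(-1, 1) is concatenated with the (fixed)
    one-hot class label. Concatenation is one common conditioning
    scheme; the paper's exact mechanism may differ."""
    rng = random.Random(seed)
    seq = []
    for _ in range(length):
        z = [rng.uniform(-1.0, 1.0) for _ in range(noise_dim)]
        seq.append(z + list(class_label))
    return seq
```

Each time step then has `noise_dim + num_classes` input features, and changing only `class_label` while reusing the same noise isolates the label's influence on the generated signal.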

In the experiments, the ability of the proposed method to conditionally generate biosignals was confirmed using three real-world datasets, and the controllability of the generated data was verified. First, the quality of the data generated by each method was quantitatively evaluated using a similarity measure based on the DTW distance. The results showed that the generated data are similar to the original data. Next, to verify the controllability of the generated data, the input–output relationship of the model trained by the proposed method was analyzed through CCA, and input data modified based on the CCA results were fed to the model. The modified input data were shown to produce generated data with the intended changes in characteristics. To the best of our knowledge, this study is the first attempt to control the characteristics of data generated by a GAN through an analysis of its generation results.

A few limitations exist in this study. First, the termination condition for training the proposed method cannot be uniquely determined because the loss value output by a GAN does not indicate its learning progress. Second, hyperparameters must be tuned because the quality of the generated data may vary depending on them. Third, characteristics that do not vary considerably in the training dataset cannot be controlled in the generated data. Fourth, training the proposed model requires a substantially long time. Finally, the proposed method may not be able to reproduce high-frequency components because the LSTM behaves like a low-pass filter [36].

References

  • [1] P. Kora, “ECG based myocardial infarction detection using hybrid firefly algorithm,” Comp. Methods Programs Biomedicine, vol. 152, pp. 141 – 148, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0169260717303516
  • [2] L. Wang et al., “Automatic epileptic seizure detection in EEG signals using multi-domain feature extraction and nonlinear analysis,” Entropy, vol. 19, no. 6, 2017. [Online]. Available: http://www.mdpi.com/1099-4300/19/6/222
  • [3] A. Khodayari-Rostamabad et al., “Diagnosis of psychiatric disorders using EEG data and employing a statistical decision model,” in 2010 Annu. Int. Conf. IEEE Eng. Medicine Soc., Aug. 2010, pp. 4006–4009.
  • [4] “Design and development of EMG controlled prosthetics limb,” Procedia Engineering, vol. 38, pp. 3547–3551, 2012.
  • [5] Y. Rahul et al., “A review on EEG control smart wheel chair,” vol. 8, pp. 501–507, Dec. 2017.
  • [6] K. LaFleur et al., “Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface,” J. Neural Eng., vol. 10, no. 4, p. 046003, 2013. [Online]. Available: http://stacks.iop.org/1741-2552/10/i=4/a=046003
  • [7] B. Pourbabaee et al., “Deep convolutional neural networks and learning ECG features for screening paroxysmal atrial fibrillation patients,” IEEE Trans. Syst., Man, Cybern.: Syst., pp. 1–10, June 2017.
  • [8] S. Chambon et al., “A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 4, pp. 758–769, Apr. 2018.
  • [9] I. Goodfellow et al., “Generative adversarial nets,” in Advances Neural Inform. Process. Syst., 2014, pp. 2672–2680.
  • [10] C. Ledig et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conf. Comp. Vision Pattern Recognition, July 2017, pp. 105–114.
  • [11] T. Miyato et al., “Spectral normalization for generative adversarial networks,” in Int. Conf. Learning Representations, 2018.
  • [12] J.-Y. Zhu et al., “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE Int. Conf. Comp. Vision, 2017.
  • [13] O. Mogren, “C-RNN-GAN: A continuous recurrent neural network with adversarial training,” in Constructive Mach. Learning Workshop NIPS, Dec. 2016.
  • [14] L. Yu et al., “SeqGAN: Sequence generative adversarial nets with policy gradient,” in Assoc. Advencement Artificial Intell., Aug. 2017.
  • [15] H.-W. Dong and Y.-H. Yang, “Convolutional generative adversarial networks with binary neurons for polyphonic music generation,” in Int. Soc. Music Inf. Retrieval Conf., Paris, France, 2018, pp. 190–196.
  • [16] C. Esteban et al., “Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs,” arXiv e-prints, p. arXiv:1706.02633, Jun 2017.
  • [17] S. Harada et al., “Biosignal data augmentation based on generative adversarial networks,” in 40th Annu. Int. Conf. IEEE Eng. Medicine Soc., July 2018.
  • [18] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [19] A. Koski, “Modelling ECG signals with hidden Markov models,” Artificial Intell. Medicine, vol. 8, no. 5, pp. 453–471, 1996. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0933365796003521
  • [20] T. Yamanobe et al., “Analysis of the response of a pacemaker neuron model to transient inputs,” Biosystems, vol. 48, no. 1, pp. 287 – 295, 1998. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0303264798000768
  • [21] D. Farina et al., “A model for the generation of synthetic intramuscular EMG signals to test decomposition algorithms,” IEEE Trans. Biomed. Eng., vol. 48, no. 1, pp. 66–77, Jan. 2001.
  • [22] F. Wendling et al., “Relevance of nonlinear lumped-parameter models in the analysis of depth-EEG epileptic signals,” Biological Cybernetics, vol. 83, no. 4, pp. 367–378, Sept. 2000. [Online]. Available: https://doi.org/10.1007/s004220000160
  • [23] P. E. McSharry et al., “A dynamical model for generating synthetic electrocardiogram signals,” IEEE Trans. Biomed. Eng., vol. 50, no. 3, pp. 289–294, March 2003.
  • [24] M. J. Rempe et al., “Mathematical modeling of sleep state dynamics in a rodent model of shift work,” Neurobiology of Sleep and Circadian Rhythms, vol. 5, pp. 37 – 51, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2451994417300111
  • [25] F. L. Da Silva et al., “Epilepsies as dynamical diseases of brain systems: Basic models of the transition between normal and epileptic activity,” Epilepsia, vol. 44, no. s12, pp. 72–83. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0013-9580.2003.12005.x
  • [26] A. Odena et al., “Conditional image synthesis with auxiliary classifier GANs,” in Proceedings of the 34th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, D. Precup and Y. W. Teh, Eds., vol. 70.   International Convention Centre, Sydney, Australia: PMLR, Aug. 2017, pp. 2642–2651. [Online]. Available: http://proceedings.mlr.press/v70/odena17a.html
  • [27] X. Chen et al., “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets,” in Advances in Neural Information Processing Systems 29, D. D. Lee et al., Eds.   Curran Associates, Inc., 2016, pp. 2172–2180.
  • [28] L. Metz et al., “Unrolled generative adversarial networks,” in Int. Conf. Learning Representations, 2017.
  • [29] A. Bagnall et al., “The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances,” Data Mining and Knowledge Discovery, vol. Online First, 2016.
  • [30] R. T. Olszewski, “Generalized feature extraction for structural pattern recognition in time-series data,” Ph.D. dissertation, Pittsburgh, PA, USA, 2001, AAI3040489.
  • [31] Y. Chen et al., “The UCR time series classification archive,” July 2015, www.cs.ucr.edu/~eamonn/time_series_data/.
  • [32] R. G. Andrzejak et al., “Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state,” Physical Rev. E, vol. 64, pp. 061 907–1–061 907–8, Nov. 2001. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevE.64.061907
  • [33] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Int. Conf. Learning Representations, 2015.
  • [34] R. Bellman and R. Kalaba, “On adaptive control processes,” IRE Transactions on Automatic Control, vol. 4, no. 2, pp. 1–9, November 1959.
  • [35] H. Sakoe and S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Trans. Acoustics, Speech, Signal Processing, vol. 26, no. 1, pp. 43–49, Feb. 1978.
  • [36] Y. Bengio et al., “Advances in optimizing recurrent networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 8624–8628.