Translation Between Waves, wave2wave

07/20/2020
by   Tsuyoshi Okita, et al.

The understanding of sensor data has been greatly improved by advanced deep learning methods with big data. However, available sensor data in the real world are still limited, which is called the opportunistic sensor problem. This paper proposes a new variant of the neural machine translation model seq2seq to deal with continuous signal waves, by introducing a window-based (inverse-) representation to adaptively represent partial shapes of waves, and an iterative back-translation model for high-dimensional data. Experimental results are shown for two real-life datasets: earthquake ground motion and activity translation. The performance improvement over the original seq2seq was about 46% for one-dimensional data and about 1625% for high-dimensional data.


1 Introduction


The problem in which training data are scarce but can be supplied by other sensor data is called the opportunistic sensor problem [7]. For example, in human activity logs, video data may be unavailable in bathrooms for ethical reasons but can be supplied by environmental sensors, which raise fewer ethical concerns. For this purpose, we propose to extend the sequence-to-sequence (seq2seq) model [5] to translate a signal wave $X$ (a continuous time-series signal) into another signal wave $Y$. The straightforward extension does not apply for two reasons: (1) the lengths of $X$ and $Y$ can be radically different, and (2) both $X$ and $Y$ can be high-dimensional.

First, while most conventional seq2seq models handle input and output signals whose lengths are of the same order, we need to handle output signals whose lengths are sometimes considerably different from those of the input signals. For example, given the high sampling rate of ground-motion sensors and the duration of an earthquake, the output signal wave can be many times longer than the input. Therefore, segmentation along the temporal axis and discarding of uninformative signal waves are required. Second, signal waves can be high-dimensional: motion-capture data is 129-dimensional and accelerometer data is 18-dimensional. While conventional seq2seq models rarely face such high-dimensional settings (it is unusual to translate multiple languages simultaneously), we need to translate signal waves in high dimensions into other signal waves in high dimensions simultaneously.

To overcome these two problems, we propose in this paper 1) the window-based representation function and 2) the wave2wave iterative back-translation model. Our contributions are the following:

  • We propose a sliding window-based seq2seq model, wave2wave (Section 3.1),

  • We propose the wave2wave iterative back-translation model (Section 3.2), which is the key to outperforming on high-dimensional data.

2 seq2seq

Architecture with context vector

Let $X = (x_1, \ldots, x_S)$ denote a source sentence consisting of $S$ time-series words, and let $Y = (y_1, \ldots, y_T)$ denote the target sentence corresponding to $X$. Under a Markov assumption, the conditional probability $p(Y \mid X)$, i.e., the translation from a source sentence to a target sentence, is decomposed into time-step translations as $p(Y \mid X) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, c_t)$, where $y_{<t} = (y_1, \ldots, y_{t-1})$ and $c_t$ is a context vector representing the information of the source sentence $X$ used to generate the output word $y_t$.

To realize such time-step translation, the seq2seq architecture consists of (a) an RNN (Recurrent Neural Network) encoder and (b) an RNN decoder. The RNN encoder computes the current hidden state $h^{enc}_s$ given the previous hidden state $h^{enc}_{s-1}$ and the current input $x_s$, as $h^{enc}_s = f_{\theta}(h^{enc}_{s-1}, x_s)$, where $f_{\theta}$ denotes a multi-layered RNN unit. The RNN decoder computes the current hidden state $h^{dec}_t$ given the previous hidden state $h^{dec}_{t-1}$, as $h^{dec}_t = g_{\theta}(h^{dec}_{t-1}, y_{t-1})$, and then computes an output as $y_t = o_{\theta}(h^{dec}_t, c_t)$, where $g_{\theta}$ denotes a conditional RNN unit, $o_{\theta}$ is the output function converting $h^{dec}_t$ and $c_t$ to the logit of $y_t$, and $\theta$ denotes the parameters of the RNN units.

With training data $\{(X^{(n)}, Y^{(n)})\}_{n=1}^{N}$, the parameters $\theta$ are optimized so as to minimize the loss function, either the negative log-likelihood $\mathcal{L}(\theta) = -\sum_{n=1}^{N} \log p_{\theta}(Y^{(n)} \mid X^{(n)})$ or the squared error $\mathcal{L}(\theta) = \sum_{n=1}^{N} \|Y^{(n)} - \hat{Y}^{(n)}\|^2$.
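As a concrete reference, the following is a minimal PyTorch sketch of this encoder-decoder with the squared-error loss; the module names, sizes, and toy tensors are illustrative assumptions, not the authors' implementation.

```python
# Minimal seq2seq encoder-decoder sketch (illustrative): an LSTM encoder,
# an LSTM decoder, and a squared-error loss on continuous signals.
import torch
import torch.nn as nn

class WaveEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):                # x: (batch, S, input_dim)
        outputs, state = self.rnn(x)     # outputs: all encoder hidden states
        return outputs, state

class WaveDecoder(nn.Module):
    def __init__(self, output_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.LSTM(output_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, y_prev, state):    # y_prev: (batch, T, output_dim)
        h, state = self.rnn(y_prev, state)
        return self.out(h), state        # per-step predictions

# One squared-error training step on toy continuous signals.
enc, dec = WaveEncoder(1, 64), WaveDecoder(1, 64)
x, y = torch.randn(8, 100, 1), torch.randn(8, 100, 1)
_, state = enc(x)
y_in = torch.cat([torch.zeros(8, 1, 1), y[:, :-1]], dim=1)  # teacher forcing
pred, _ = dec(y_in, state)
loss = nn.functional.mse_loss(pred, y)
loss.backward()
```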

Global Attention

To obtain the context vector $c_t$, we use the global attention mechanism [5]. Global attention considers an attention mapping in a global manner between the encoder hidden states and a decoder hidden state, $a_t(s) = \mathrm{softmax}\big(\mathrm{score}(h^{dec}_t, h^{enc}_s)\big)$, where the score is computed as a weighted inner product, $\mathrm{score}(h^{dec}_t, h^{enc}_s) = (h^{dec}_t)^{\top} W_a\, h^{enc}_s$, and the weight parameter $W_a$ is obtained so as to minimize the loss function $\mathcal{L}(\theta)$. Then, the context vector is obtained as the weighted average of the encoder hidden states, $c_t = \sum_{s=1}^{S} a_t(s)\, h^{enc}_s$.
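A minimal sketch of this attention computation, with the weighted inner-product score, might look as follows (shapes and names are our assumptions):

```python
# Luong-style global attention ("general" score), simplified from [5].
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_a = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, dec_h, enc_outputs):
        # dec_h: (batch, 1, H); enc_outputs: (batch, S, H)
        scores = torch.bmm(self.W_a(dec_h), enc_outputs.transpose(1, 2))
        weights = torch.softmax(scores, dim=-1)    # attention weights a_t(s)
        context = torch.bmm(weights, enc_outputs)  # weighted average c_t
        return context, weights
```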

3 Proposed method: wave2wave

Figure 1: Overall architecture of our method, wave2wave, consisting of an RNN encoder and decoder with a context vector and a sliding window representation. The input and output time-series data are toy examples, where the input is generated by combining sine waves with random magnitudes and periods. The output is a horizontally flipped version of the input.

The problems with the global attention model are that (1) the lengths of input and output can be radically different, and (2) both input and output sequences can be high-dimensional. For example, in activity translation there are many motion-capture and accelerometer sensors, and their sampling rates are high. Therefore, the numbers of steps in both encoder and decoder become prohibitively large, so that capturing the information of the source sentence in the context vector $c_t$ is precluded.

3.1 Window-based representation

Let us consider the case where source and target sentences are multi-dimensional continuous time-series, i.e., signal waves, as shown in Fig. 1 (note that the signal waves in Fig. 1 are depicted as one-dimensional waves for clear visualization). That is, each signal at time-step $s$ is expressed as a $D$-dimensional vector $x_s$; there are $D$ sensors on the source side. Then a source signal wave $X$ consists of $S$ steps of $D$-dimensional signal vectors, i.e., $X = (x_1, \ldots, x_S) \in \mathbb{R}^{S \times D}$.

To capture important shape information from complex signal waves (see Fig. 1), we introduce a trainable window-based representation function $r_{\phi}$ as

$e_j = r_{\phi}(X_j)$  (1)

where $X_j$ is the $j$-th window with fixed window-width $w$, expressed as a $w \times D$ matrix

$X_j = (x_{(j-1)w+1}, \ldots, x_{jw})$  (2)

and $e_j$ is the extracted representation vector inputted to the seq2seq encoder, as shown in Fig. 1; the dimension of $e_j$ is the same as that of the hidden vector $h^{enc}_j$.

Similarly, to approximate the complex target waves well, we introduce an inverse-representation function $r^{-1}_{\psi}$, trained separately from $r_{\phi}$, as

$\hat{Y}_k = r^{-1}_{\psi}(d_k)$  (3)

where $d_k$ is the $k$-th output vector from the seq2seq decoder, as shown in Fig. 1, and $\hat{Y}_k$ is a window matrix corresponding to a partial wave of the target waves $Y$.

The advantages of the window-based architecture are three-fold. Firstly, the number of steps in both encoder and decoder is largely reduced, which makes the seq2seq model with context vector work stably. Secondly, the complexity and variation of the shapes inside windows are also largely reduced in comparison with the entire waves; thus, important information can be extracted from source waves, and the output sequence can be accurately approximated, by relatively simple representation and inverse-representation functions, respectively. Thirdly, both representation and inverse-representation functions are trained in an end-to-end manner by minimizing the loss $\mathcal{L}(\theta)$, where both functions are modeled by fully-connected (FC) networks.

Fig. 1 depicts the overall architecture of our wave2wave with toy data. wave2wave consists of an encoder and a decoder with long short-term memory (LSTM) units inside, the representation function $r_{\phi}$, and the inverse-representation function $r^{-1}_{\psi}$. In this figure, one-dimensional continuous time-series are used as the input and the output, and the window width $w$ is fixed, so the encoder and decoder each operate over a reduced number of window steps. Each encoder window matrix $X_j$ is converted to an encoder representation vector $e_j$ by the representation function $r_{\phi}$. Meanwhile, each decoder output, the representation vector $d_k$, is converted to a decoder window matrix $\hat{Y}_k$ by the inverse-representation function $r^{-1}_{\psi}$.
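Since both functions are fully-connected networks over fixed-width windows, a shape-level sketch is straightforward; the window width, dimensions, and class names below are illustrative assumptions:

```python
# Window-based representation r and inverse-representation r^{-1} as
# single fully-connected layers (a minimal sketch of Section 3.1).
import torch
import torch.nn as nn

class WindowRepresentation(nn.Module):
    """r: maps a (w x D) window matrix to a hidden-size representation vector."""
    def __init__(self, window_width, n_dims, hidden_dim):
        super().__init__()
        self.fc = nn.Linear(window_width * n_dims, hidden_dim)

    def forward(self, windows):            # windows: (batch, n_windows, w, D)
        b, n, w, d = windows.shape
        return self.fc(windows.reshape(b, n, w * d))

class WindowInverseRepresentation(nn.Module):
    """r^{-1}: maps a decoder representation vector back to a (w x D) window."""
    def __init__(self, window_width, n_dims, hidden_dim):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, window_width * n_dims)
        self.w, self.d = window_width, n_dims

    def forward(self, reps):               # reps: (batch, n_windows, hidden_dim)
        b, n, _ = reps.shape
        return self.fc(reps).reshape(b, n, self.w, self.d)

def to_windows(wave, w):
    """Slice a wave (batch, S, D) into non-overlapping windows of width w."""
    b, s, d = wave.shape
    return wave[:, : (s // w) * w].reshape(b, s // w, w, d)
```

Slicing a wave into non-overlapping windows of width $w$ reduces the sequence length from $S$ to $S/w$, which is the first advantage listed above.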

3.2 Wave2wave iterative model

We consider two different ways to handle high-dimensional sensor data. Since NMT for machine translation handles embeddings of words, a straightforward extension to the high-dimensional setting uses the $D$-dimensional source signal at each time step as the source embedding and the $D'$-dimensional target signal at each time step as the target embedding. We call this the wave2wave model, i.e., the standard model. Alternatively, we can build independent embeddings separately for each corresponding one-dimensional target signal at each time step, while using the same $D$-dimensional source signal embeddings. We call this the wave2waveIterative model. We suppose that the former model is effective when the sensor dimensions are correlated, while the latter is effective when they are independent. Algorithm 1 shows the latter.

Data: src, tgt, D, D'
def trainWave2WaveIterative(src, tgt):
       for j = 1, ..., D' do
             f(j) = trainWave2Wave(src, tgt(j));
       end for

Algorithm 1: wave2waveIterative model

Back-translation is a technique that improves performance through bi-directional translation, removing noise under a neutrally biased translation [3]. We deploy this technique on top of the iterative model, which we call the wave2wave iterative back-translation model.
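A sketch of the combined loop, under our reading of Algorithm 1 and of iterative back-translation [3], is shown below; train_wave2wave and translate are hypothetical helpers standing in for single-pair training and inference, and the retraining schedule is our assumption:

```python
# Iterative training (one model per target dimension, as in Algorithm 1)
# combined with back-translation: a backward model synthesizes pseudo-sources
# that augment the forward model's training data.
import numpy as np

def train_iterative_backtranslation(src, tgt, train_wave2wave, translate,
                                    n_rounds=2):
    """src: (N, S, D) source waves; tgt: (N, T, D') target waves.
    train_wave2wave(X, Y) -> model and translate(model, X) -> Y_hat are
    injected, hypothetical helpers."""
    d_out = tgt.shape[-1]
    forward = [None] * d_out
    for _ in range(n_rounds):
        for j in range(d_out):                      # one model per target dim
            tgt_j = tgt[..., j:j + 1]
            forward[j] = train_wave2wave(src, tgt_j)
            backward = train_wave2wave(tgt_j, src)  # reverse direction
            pseudo_src = translate(backward, tgt_j) # back-translate targets
            # Retrain forward on real pairs plus synthetic (pseudo_src, tgt_j).
            forward[j] = train_wave2wave(
                np.concatenate([src, pseudo_src], axis=0),
                np.concatenate([tgt_j, tgt_j], axis=0))
    return forward
```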

4 Evaluation on real-life data: ground motion translation

In this section, we apply our proposed method, wave2wave, to predict broadband ground motion from only the long-period motion caused by the same earthquake; here, wave2wave translates a one-dimensional signal wave into another one-dimensional signal wave.

Ground motions of earthquakes cause fatal damage to buildings and infrastructure. Physics-based numerical simulators are used to generate ground motions at a specific place, given the properties of an earthquake, e.g., location and scale, to estimate the damage to buildings and infrastructure [4]. However, the motions generated by simulators are limited to long periods, longer than one second, due to heavy computational costs and the lack of detailed knowledge of the subsurface structure.

method                    train loss    test loss
simple encoder-decoder    1.13          0.53
simple encoder-decoder    0.90          0.47
simple encoder-decoder    0.41          0.63
simple seq2seq
simple seq2seq
simple seq2seq
wave2wave
wave2wave
wave2wave

Table 1: Mean squared loss of the simple encoder-decoder methods, simple seq2seq methods, and our wave2wave on the earthquake ground motion data.

A large amount of ground motion data has been collected by K-NET (Kyoshin network) over the past 20 years in Japan. Machine learning approaches could be effective at predicting broadband ground motions, including periods of less than one second, from simulated long-period motions. From this perspective, we apply our method wave2wave to this problem by setting the long-period ground motion as input and the broadband ground motion as output, with the squared loss function $\mathcal{L}(\theta)$.

As training data, we use ground motion data collected at the observation station IBR011, located in Ibaraki prefecture, Japan, from Jan. 1, 2000 to Dec. 31, 2017; records related to the Tohoku earthquakes and events with deep sources are removed. For testing, we use ground-motion data of earthquakes that occurred at the beginning of 2018.

In addition, both long-period and broadband ground motion data are cropped to a fixed length, and their amplitudes are smoothed using an RMS (Root Mean Square) envelope with sliding windows to capture the essential properties of the earthquake motion. Moreover, for data augmentation, the in-phase and quadrature components, and their absolute values, are extracted from each ground motion, which multiplies the number of training examples. Fig. 3(a) shows an example of the components of the ground motion of an earthquake that occurred on May 17, 2018, in Chiba, Japan, and the corresponding RMS envelopes.
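A sketch of this preprocessing is shown below, assuming the in-phase/quadrature decomposition is taken from the analytic signal (Hilbert transform) and the RMS envelope uses a sliding window; the window length is an illustrative choice:

```python
# In-phase/quadrature decomposition via the analytic signal, plus a
# sliding-window RMS envelope (our reading of the preprocessing step).
import numpy as np
from scipy.signal import hilbert

def rms_envelope(x, window=200):
    """Sliding-window root-mean-square amplitude of a 1-d signal."""
    power = np.convolve(x ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(power)

def augment_ground_motion(x, window=200):
    analytic = hilbert(x)            # analytic signal of the motion
    in_phase = np.real(analytic)     # in-phase component
    quadrature = np.imag(analytic)   # quadrature component
    comps = [in_phase, quadrature, np.abs(in_phase), np.abs(quadrature)]
    return [rms_envelope(c, window) for c in comps]
```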

Table 1 shows the mean squared loss of three methods: the simple encoder-decoder, the simple seq2seq, and our proposed method, with the same settings as for the toy data. The table shows that our wave2wave methods basically outperform the other methods, although wave2wave with a small window-width is beaten in training loss by the simple encoder-decoder with a large hidden layer. This indicates that the window-based representation and inverse-representation functions are helpful, as they were on the toy data.

Fig. 3(b) and (c) depict examples of predicted broadband ground motions for the earthquakes that occurred on Jan. 24 and May 17, 2018. These show that our method wave2wave predicts the enveloped broadband ground motion well given the long-period ground motion, although there is a little overfitting due to the small training data.

It is expected that the predicted broadband motion, combined with the simulated long-period motion, could be used for more accurate estimation of the damage to buildings and infrastructure.

5 Evaluation on real-life data: activity translation

This section deploys wave2wave for activity translation (see Fig. 2). Up to the previous section the signals were one-dimensional; the signals in this section are high-dimensional in their inputs as well as their outputs. The dimensions of motion capture, video, and accelerometer data are 129, 48, and 18, respectively, in the case of the MHAD dataset (http://tele-immersion.citris-uc.org/berkeleymhad).

Figure 2: The activity translation task and the activity recognition task on which we conduct experiments.

We make the mild assumption that the recordings of the targeted person in the three modalities (motion capture, video, and accelerometer) are synchronized, and that noise such as the effect of other surrounding persons is eliminated. Hence, we assume that each signal shows one of the multi-view projections of a single person; intuitively, the signals are equivalent. Under this condition, we perform translation from motion capture to video (and similarly accelerometer to motion capture, video to accelerometer, and the inverse directions).

5.0.1 Overall Architecture

Wave Signal

Figure 2 shows that motion capture and video can be considered as wave signals. When video is the input, each frame takes the form of a pose vector extracted by the OpenPose library [1]; this representation is then converted into the window representation by $r_{\phi}$. When motion capture is the input, each frame takes the form of a motion-capture vector. In this way, we use these signals as both input and output for wave2wave. The raw output is reconstructed by $r^{-1}_{\psi}$ from the decoder representation output.
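At the shape level, the video-to-wave path might look like the following sketch, where extract_pose is a hypothetical stand-in for running OpenPose [1] on a single frame:

```python
# Stacking per-frame pose vectors into a signal wave and slicing it into
# windows for wave2wave input (illustrative shapes only).
import numpy as np

def video_to_wave(frames, extract_pose):
    """frames: iterable of images -> (S, 48) pose signal wave."""
    return np.stack([extract_pose(f) for f in frames])  # one 48-d vector/frame

def wave_to_windows(wave, w=10):
    """(S, D) wave -> (n_windows, w, D) non-overlapping windows."""
    s, d = wave.shape
    return wave[: (s // w) * w].reshape(s // w, w, d)
```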

Wave Signal Dimensionality Reduction

As an alternative to using an FC layer before the input, we use a clustering algorithm, specifically affinity propagation [2], in order to reduce the size of the representation as a whole. While most clustering algorithms need the number of clusters to be supplied beforehand, affinity propagation determines an appropriate number of clusters as part of its solution.
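One possible realization with scikit-learn is sketched below; treating each sensor channel as a sample to be clustered, and keeping one exemplar channel per cluster, is our interpretation of this reduction step:

```python
# Dimensionality reduction via affinity propagation [2]: cluster the D sensor
# channels and keep only the exemplar channel of each cluster.
import numpy as np
from sklearn.cluster import AffinityPropagation

def reduce_channels(wave):
    """wave: (S, D) signal; returns (S, K) with K exemplar channels, K <= D."""
    ap = AffinityPropagation(random_state=0).fit(wave.T)  # channels as samples
    exemplars = ap.cluster_centers_indices_  # cluster count chosen automatically
    return wave[:, exemplars]
```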

Multi-Resolution Spatial Pyramid

An additional structure in wave2wave is multi-scalability, since the frame rates of the multimodal data are considerably different. We adopted the multi-resolution spatial pyramid approach with a dynamic pose [6]. We assume that the sequences of frames across modalities are synchronized, sampled at a given temporal step, and concatenated to form a spatio-temporal 3-d volume.
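The following sketch illustrates one way such multi-resolution sampling along the temporal axis could be realized; the strides and window half-length are illustrative assumptions:

```python
# Multi-resolution temporal pyramid in the spirit of [6]: sample the same
# wave around a center step at several strides and stack the levels.
import numpy as np

def temporal_pyramid(wave, center, half_len=4, strides=(1, 2, 4)):
    """wave: (S, D). Returns (len(strides), 2*half_len+1, D) multi-scale slice."""
    levels = []
    for st in strides:
        idx = center + st * np.arange(-half_len, half_len + 1)
        idx = np.clip(idx, 0, len(wave) - 1)   # pad at the boundaries
        levels.append(wave[idx])
    return np.stack(levels)
```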

window size   ppl (one target)   ppl (whole target)
(1,16)        2.13               4.73
(5,80)
(10,160)      0.42               3.48
(20,320)      0.72               3.75
(30,480)      1.21               4.11
(60,960)      4.30               6.82

Table 2: Major experimental results for acc2moc, comparing the seq2seq baseline, seq2seq with clustering, wave2wave, wave2waveIterative, and the wave2wave iterative back-translation model.

5.0.2 Experimental Evaluation

Experimental Setups

We used the MHAD dataset from Berkeley, with the video, accelerometer, and mocap modalities. We used the video Cluster-01/Cam01-02 subsets and the whole mocap (optical) and accelerometer data, with 12 persons and 5 trials. Video input was preprocessed by OpenPose, which yields 48-dimensional vectors. Optical mocap holds the positions of the keypoints, with dimension 129. Accelerometers were placed at 6 places on the body, with dimension 18. For wave2wave, we used the squared loss function with LSTM modules of size 500, embedding size 500, dropout 0.3, maximum sentence length 400, and batch size 100, with the Adam optimizer and the multi-resolution spatial pyramid. We used the same parameter set for the wave2wave iterative model. We used a Titan Xp GPU.

Human Understandability

One characteristic of activity translation can be observed in the direction of translation from accelerometer to video, e.g., acc2cam. Accelerometer data by its nature takes a form that is not understandable by human beings, but translation to video makes it visible. Out of 50 selected test cases, a human could understand 48 cases; 96% is fairly good. The second characteristic of activity translation is the opportunistic sensor problem: when we cannot use a video camera in bathrooms, we use another sensor modality, e.g., an accelerometer, and then translate it to video, which can be used in this situation. This also corresponds to the accelerometer-to-video case, acc2cam. We conducted this experiment: upon watching the video signals on a screen, we could observe the basic human movements, and out of 50 selected test cases, a human could understand 43 cases.

Experimental Results

Major experimental results are shown in Table 2. For each window size, we measured the perplexity (ppl) of one target dimension and of the whole target. We compared several wave2wave results with (1) the seq2seq model without dimensionality reduction (via clustering) and (2) the seq2seq model with dimensionality reduction. All experiments are done in the direction from accelerometer to motion capture (acc2moc).

Firstly, the original seq2seq model did not work well without dimensionality reduction of the input space: the perplexity remained high, suggesting that the optimization did not progress due to the complexity of the training data or bad initialization. However, the results improved considerably with dimensionality reduction using clustering, coming close to the results of wave2wave (iterative).

Secondly, the (10,160) window setting performed better than the other window sizes in perplexity (Table 2). When the data became high-dimensional, the wave2wave iterative model performed better than the wave2wave model in perplexity. Since motion capture has 129 dimensions, the joint representation space becomes $|\Theta|^{129}$ when we let $\Theta$ denote the parameter space of one point in motion capture; compared with this, the wave2wave iterative model is equipped with a representation space that is linear in the number of dimensions. The wave2wave iterative model has an advantage on this point. Moreover, the wave2wave iterative back-translation model achieved the best perplexity scores.

6 Conclusion

We proposed wave2wave, a method for translation between waves, with a sliding window-based mechanism and an iterative back-translation model for high-dimensional data. Experimental results on two real-life datasets are positive: the performance improvements were about 46% in test loss for the one-dimensional case and about 1625% in perplexity for the high-dimensional case using the iterative back-translation model.

(a) Example of enveloped ground motion
(b) Predicted broadband ground motion on Jan. 24, 2018
(c) Predicted broadband ground motion on May 17, 2018
Figure 3: Top: example of original and enveloped ground motion data with in-phase and quadrature components and their absolute values. Middle and bottom: broadband ground motions predicted by our method wave2wave for the earthquakes that occurred on Jan. 24, 2018 and May 17, 2018.

References

  • [1] Z. Cao, T. Simon, S. Wei, and Y. Sheikh (2017) Realtime multi-person 2D pose estimation using part affinity fields. CVPR. Cited by: §5.0.1.
  • [2] B. J. Frey and D. Dueck (2007) Clustering by passing messages between data points. Science 315(5814), pp. 972–976. Cited by: §5.0.1.
  • [3] V. C. D. Hoang, P. Koehn, G. Haffari, and T. Cohn (2018) Iterative back-translation for neural machine translation. Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 18–24. Cited by: §3.2.
  • [4] A. Iwaki and H. Fujiwara (2013) Synthesis of high-frequency ground motion using information extracted from low-frequency ground motion: a case study in the Kanto area. Journal of Japan Association for Earthquake Engineering 13, pp. 1–18. Cited by: §4.
  • [5] M. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412–1421. Cited by: §1, §2.
  • [6] N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout (2014) Multi-scale deep learning for gesture detection and localization. Workshop on Looking at People (ECCV). Cited by: §5.0.1.
  • [7] D. Roggen, G. Tröster, P. Lukowicz, A. Ferscha, and R. Chavarriaga (2013) Opportunistic human activity and context recognition. Computer 46(2), pp. 36–45. Cited by: §1.