Text2Action: Generative Adversarial Synthesis from Language to Action

10/15/2017 · by Hyemin Ahn, et al. · Seoul National University

In this paper, we propose a generative model which learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network (GAN), which is based on the sequence to sequence (SEQ2SEQ) model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network (RNN) and an action decoder RNN. The proposed generative network is trained from 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text (MSR-VTT), a large-scale video dataset. We demonstrate that the network can generate human-like actions which can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.


I Introduction

“Any human activity is impregnated with language because it takes place in an environment that is built up through language and as language” [1]. As such, human behavior is deeply related to the natural language in our lives. A human can perform an action corresponding to a given sentence and, conversely, can verbally describe the behavior of an observed person. If a robot can likewise perform actions corresponding to a given language description, interacting with robots will become easier.

Finding the link between language and action has been of great interest in machine learning. There are datasets which provide human whole-body motions together with word or sentence annotations [2, 3]. In addition, there have been attempts to learn the mapping between language and human action [4, 5]. In [4], hidden Markov models (HMMs) [6] are used to encode motion primitives and to associate them with words. In [5], a sequence-to-sequence (Seq2Seq) model [7] is used to learn the relationship between natural language and human actions.

In this paper, we choose to use a generative adversarial network (GAN) [8], a generative model consisting of a generator G and a discriminator D. G and D play a two-player minimax game: G tries to create data realistic enough to fool D, while D tries to differentiate the data generated by G from the real data. Based on this adversarial training method, it has been shown that GANs can synthesize realistic high-dimensional data which are difficult to generate through manually designed features [9, 10, 11]. In addition, it has been proven that a GAN has a unique solution, in which G captures the distribution of the real data and D cannot distinguish the real data from the data generated by G [8]. Thanks to these properties of GANs, our experiments also show that the proposed network can generate more realistic actions than the previous work [5].

Fig. 1: An overview of the proposed generative model. It is a generative adversarial network [8] based on the sequence-to-sequence model [7], and consists of a text encoder and an action decoder based on recurrent neural networks [12]. Once the RNN-based text encoder has processed the input sentence into feature vectors, the RNN-based action decoder converts the processed language features into the corresponding human action.

The proposed generative model is a GAN based on the Seq2Seq model. The objective of a Seq2Seq model is to learn the relationship between a source sequence and a target sequence, so that it can generate a sequence in the target domain corresponding to a sequence in the input domain [7]. As shown in Figure 1, the proposed model consists of a text encoder and an action decoder based on recurrent neural networks (RNNs) [12]. Since both sentences and actions are sequences, an RNN is a suitable model for both the text encoder and the action decoder. The text encoder converts an input sentence, a sequence of words, into feature vectors. The set of processed feature vectors is transferred to the action decoder, where actions corresponding to the input sentence are generated. When decoding the processed feature vectors, we use an attention-based decoder [13].

Fig. 2: The text encoder, the generator G, and the discriminator D constituting the proposed generative model. The pair of the text encoder and G, and the pair of the text encoder and D, are each Seq2Seq models composed of an RNN-based encoder and decoder. Each rectangle denotes an LSTM cell of the RNN. (a) The text encoder processes the set of word embedding vectors w_{1:N} into its hidden states h_{1:N}. The generator G takes h_{1:N}, decodes it into the set of feature vectors c_{1:T}, and samples the set of random noise vectors z_{1:T}. Receiving c_{1:T} and z_{1:T} as inputs, G generates the human action sequence X = [x_1, …, x_T]. (b) After the text encoder encodes the input sentence information into its hidden states h_{1:N}, the discriminator D also decodes h_{1:N} into a set of feature vectors c_{1:T}. Taking c_{1:T} and an action sequence X as inputs, D identifies whether X is real or fake.

In order to train the proposed generative network, we have chosen to use the MSR-Video to Text (MSR-VTT) dataset, which contains web video clips with text annotations [14]. Existing datasets [2, 3] are not suitable for our purpose since their videos are recorded in laboratory environments. One remaining problem is that the MSR-VTT dataset does not provide human pose information. Hence, for each video clip, we have extracted a human upper-body pose sequence using the convolutional pose machine (CPM) [15]. The extracted 2D poses are converted to 3D poses [16] and used as our dataset. In total, we have gathered 29,770 pairs of sentence descriptions and action sequences, and each sentence description is paired with about 10 to 12 actions.

The remainder of this paper is organized as follows. The proposed Text2Action network is described in Section II. Section III describes the structure of the proposed generative model and implementation details. Section IV shows various 3D human-like action sequences obtained from the proposed generative network and discusses the results. In addition, we demonstrate that a Baxter robot can perform an action based on a provided sentence.

II Text2Action Network

Let s_{1:N} = [s_1, …, s_N] denote an input sentence composed of N words. Here, s_i ∈ R^{d_V} is the one-hot vector representation of the i-th word, where d_V is the size of the vocabulary. In this paper, we encode s_{1:N} into w_{1:N} = [w_1, …, w_N], the word embedding vectors for the sentence, based on the word2vec model [17]. Here, w_i ∈ R^{d_w} is the word embedding representation of s_i, such that w_i = W_e s_i, where W_e ∈ R^{d_w × d_V} is the word embedding matrix and d_w is the dimension of a word embedding vector. With our dataset, we have pretrained W_e based on the method presented in [17].
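
As a concrete illustration of this embedding step, the following minimal NumPy sketch maps one-hot word vectors to embeddings with a matrix W_e; the vocabulary size, embedding dimension, and random initialization are illustrative placeholders, not the values used in the paper (in practice W_e is the pretrained word2vec matrix).

import numpy as np

vocab_size, embed_dim = 1500, 300                  # illustrative sizes only
W_e = np.random.randn(embed_dim, vocab_size) * 0.01  # stands in for the pretrained word2vec matrix

def embed_sentence(word_indices):
    """Map a sentence given as vocabulary indices to word embedding vectors."""
    embeddings = []
    for idx in word_indices:
        s = np.zeros(vocab_size)
        s[idx] = 1.0                               # one-hot representation s_i of the i-th word
        embeddings.append(W_e @ s)                 # w_i = W_e s_i
    return np.stack(embeddings)                    # shape: (N, embed_dim)

sentence = [12, 7, 391, 45]                        # hypothetical word indices
W_sent = embed_sentence(sentence)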

Since the proposed generative network is a GAN, it consists of a generator G and a discriminator D as shown in Figure 2. The objective of the generator is to generate a proper human action sequence corresponding to the embedded sentence representation w_{1:N}, and the objective of the discriminator is to differentiate real actions from fake actions given the sentence. A text encoder encodes the embedded sentence w_{1:N} into its hidden states h_{1:N} = [h_1, …, h_N], such that h_{1:N} contains the processed information related to w_{1:N}. Here, h_i ∈ R^{d_h}, where d_h is the dimension of the hidden state.

Let X = [x_1, …, x_T] denote an action sequence with T pose vectors. Here, x_t ∈ R^{d_x} denotes the t-th human pose vector and d_x is the dimension of a human pose vector. The pair of the text encoder and the generator G is a Seq2Seq model. The generator converts w_{1:N} into the target human pose sequence X. In order to generate X, the generator decodes the hidden states h_{1:N} into a set of language feature vectors c_{1:T} = [c_1, …, c_T] based on the attention mechanism [13]. Here, c_t denotes the feature vector used for generating the t-th human pose, and its dimension is the same as that of h_i.

In addition, a set of random noise vectors z_{1:T} = [z_1, …, z_T] is provided to G, where z_t ∈ R^{d_z} is a random noise vector drawn from the zero-mean Gaussian distribution with unit variance and d_z is the dimension of a random noise vector. With the set of feature vectors c_{1:T} and the set of random noise vectors z_{1:T}, the generator synthesizes a corresponding human action sequence, such that X = G(z_{1:T}, c_{1:T}) (see Figure 2). Here, the first human pose input x_0 is set to the mean of all first human poses in the training dataset.

The objective of the discriminator D is to differentiate the action sequences generated by G from the real human action data. As shown in Figure 2, D also decodes the hidden states h_{1:N} into a set of language feature vectors c_{1:T} based on the attention mechanism [13]. With the set of feature vectors c_{1:T} and a human action sequence X as inputs, the discriminator determines whether X is fake or real given the sentence. The output of the last RNN cell is the result of the discriminator, such that D(X, c_{1:T}) ∈ [0, 1] (see Figure 2). The discriminator returns 1 if X is identified as real.

In order to train G and D, we use the value function V(D, G) defined as follows [8]:

V(D, G) = E_{X ∼ p_data}[ log D(X, c_{1:T}) ] + E_{z_{1:T} ∼ p_z}[ log(1 − D(G(z_{1:T}, c_{1:T}), c_{1:T})) ]    (1)

G and D play a two-player minimax game on the value function V(D, G), such that G tries to create data realistic enough to fool D, while D tries to differentiate the data generated by G from the real data.
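
To make the two terms of (1) explicit, the following minimal NumPy sketch evaluates the value function empirically on a batch of discriminator outputs; it assumes D returns probabilities in (0, 1) and is only an illustration of the objective, not the training code.

import numpy as np

def gan_value(d_real, d_fake, eps=1e-8):
    """Empirical estimate of the value function V(D, G) in (1).

    d_real: discriminator outputs D(X, c) on real action sequences.
    d_fake: discriminator outputs D(G(z, c), c) on generated action sequences.
    """
    d_real, d_fake = np.asarray(d_real), np.asarray(d_fake)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# D tries to maximize this value, while G tries to minimize it.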

III Network Structure

III-A RNN-based Text Encoder

The RNN-based text encoder shown in Figure 2 encodes the input information into the hidden states of its LSTM cell [12]. Let us denote the hidden states of the text encoder as h_{1:N} = [h_1, …, h_N], where

h_i = f_LSTM(w_i, h_{i−1})    (2)

Here, h_i ∈ R^{d_h}, d_h is the dimension of the hidden state, and f_LSTM is the nonlinear function of an LSTM cell, which operates as follows (written for a generic time step t with input w_t):

i_t = σ(W_i w_t + U_i h_{t−1} + b_i)    (3)
f_t = σ(W_f w_t + U_f h_{t−1} + b_f)    (4)
o_t = σ(W_o w_t + U_o h_{t−1} + b_o)    (5)
g_t = tanh(W_g w_t + U_g h_{t−1} + b_g)    (6)
m_t = f_t ⊙ m_{t−1} + i_t ⊙ g_t    (7)
h_t = o_t ⊙ tanh(m_t)    (8)

where ⊙ denotes the element-wise product and σ denotes the sigmoid function; i_t, f_t, and o_t are the input, forget, and output gates, g_t is the candidate memory, and m_t is the memory cell state. The dimensions of the matrices and vectors are as follows: W_i, W_f, W_o, W_g ∈ R^{d_h × d_w}; U_i, U_f, U_o, U_g ∈ R^{d_h × d_h}; and b_i, b_f, b_o, b_g ∈ R^{d_h}.
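
The sketch below implements one step of the LSTM update (3)-(8) in NumPy. The parameter shapes follow the dimensions listed above, while the sizes and random initialization are illustrative placeholders; a trained encoder would use learned values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(w_t, h_prev, m_prev, params):
    """One step of the LSTM cell in (3)-(8), given input w_t and previous hidden/cell states."""
    W, U, b = params["W"], params["U"], params["b"]          # each is a dict over gates i, f, o, g
    i = sigmoid(W["i"] @ w_t + U["i"] @ h_prev + b["i"])     # input gate, (3)
    f = sigmoid(W["f"] @ w_t + U["f"] @ h_prev + b["f"])     # forget gate, (4)
    o = sigmoid(W["o"] @ w_t + U["o"] @ h_prev + b["o"])     # output gate, (5)
    g = np.tanh(W["g"] @ w_t + U["g"] @ h_prev + b["g"])     # candidate memory, (6)
    m = f * m_prev + i * g                                   # memory cell update, (7)
    h = o * np.tanh(m)                                       # hidden state, (8)
    return h, m

d_w, d_h = 300, 128                                          # illustrative dimensions
params = {k: {g: np.random.randn(*shape) * 0.01 for g in "ifog"}
          for k, shape in [("W", (d_h, d_w)), ("U", (d_h, d_h)), ("b", (d_h,))]}

h, m = np.zeros(d_h), np.zeros(d_h)
for w_t in np.random.randn(6, d_w):                          # a dummy 6-word sentence of embeddings
    h, m = lstm_step(w_t, h, m, params)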

III-B Generator

After the text encoder encodes w_{1:N} into its hidden states h_{1:N}, the generator G decodes h_{1:N} into the set of feature vectors c_{1:T} based on the attention mechanism [13], where c_t is calculated as follows:

c_t = Σ_{i=1}^{N} α_{t,i} h_i    (9)

The weight α_{t,i} of each feature is computed as

α_{t,i} = exp(e_{t,i}) / Σ_{j=1}^{N} exp(e_{t,j})    (10)

where

e_{t,i} = v_a^T tanh(W_a u_{t−1} + U_a h_i)    (11)

and u_{t−1} is the previous hidden state of the decoder. Here, the dimensions of the matrices and vectors are as follows: W_a, U_a ∈ R^{d_a × d_h} and v_a ∈ R^{d_a}, where d_a is the dimension of the alignment model.
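
To make the attention computation (9)-(11) concrete, here is a small NumPy sketch that scores each encoder hidden state against the previous decoder state and forms the context vector; it uses the additive alignment form of [13], and the dimensions and random parameters are illustrative assumptions.

import numpy as np

def attention_context(H, u_prev, W_a, U_a, v_a):
    """Compute the context vector c_t from encoder states H and the previous decoder state.

    H:      (N, d_h) encoder hidden states h_1..h_N
    u_prev: (d_h,)   previous decoder hidden state
    """
    scores = np.array([v_a @ np.tanh(W_a @ u_prev + U_a @ h_i) for h_i in H])  # e_{t,i}, (11)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                                                # alpha_{t,i}, (10)
    return alpha @ H, alpha                                                    # c_t, (9)

N, d_h, d_a = 6, 128, 64
H = np.random.randn(N, d_h)
u_prev = np.zeros(d_h)
W_a, U_a, v_a = np.random.randn(d_a, d_h), np.random.randn(d_a, d_h), np.random.randn(d_a)
c_t, alpha = attention_context(H, u_prev, W_a, U_a, v_a)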

After encoding the language features c_{1:T}, a set of random noise vectors z_{1:T} is provided to G. With c_{1:T} and z_{1:T}, the generator synthesizes a corresponding human action sequence such that X = G(z_{1:T}, c_{1:T}). Let u_{1:T} = [u_1, …, u_T] denote the hidden states of the LSTM cells composing G. Each hidden state u_t ∈ R^{d_h} of the LSTM cell, where d_h is the dimension of the hidden state, is computed from the decoder input a_t = [x_{t−1}; z_t; c_t] as follows:

i_t = σ(W_i^G a_t + U_i^G u_{t−1} + b_i^G)    (12)
f_t = σ(W_f^G a_t + U_f^G u_{t−1} + b_f^G)    (13)
o_t = σ(W_o^G a_t + U_o^G u_{t−1} + b_o^G)    (14)
g_t = tanh(W_g^G a_t + U_g^G u_{t−1} + b_g^G)    (15)
m_t = f_t ⊙ m_{t−1} + i_t ⊙ g_t    (16)
u_t = o_t ⊙ tanh(m_t)    (17)

and the output pose at time t is computed as

x_t = W_out u_t + b_out    (18)

The dimensions of the matrices and vectors are as follows: W_i^G, W_f^G, W_o^G, W_g^G ∈ R^{d_h × (d_x + d_z + d_h)}; U_i^G, U_f^G, U_o^G, U_g^G ∈ R^{d_h × d_h}; b_i^G, b_f^G, b_o^G, b_g^G ∈ R^{d_h}; W_out ∈ R^{d_x × d_h}; and b_out ∈ R^{d_x}. The nonlinear decoder function is constructed based on the attention mechanism presented in [13].
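
Putting the pieces together, the sketch below shows the shape of the generator's decoding loop: at each step it attends over the encoder states, concatenates the previous pose, a noise vector, and the context, runs one LSTM step, and projects the hidden state to a pose via (18). It reuses lstm_step and attention_context from the earlier sketches; it is a structural illustration under those assumptions, not our trained network.

def generate_actions(H, T, d_z, dec_params, W_out, b_out, attn, x0):
    """Decode T poses from encoder states H. attn = (W_a, U_a, v_a)."""
    d_h = H.shape[1]
    u, m = np.zeros(d_h), np.zeros(d_h)
    x_prev, poses = x0, []
    for t in range(T):
        z_t = np.random.randn(d_z)                    # random noise vector z_t ~ N(0, I)
        c_t, _ = attention_context(H, u, *attn)       # language feature via (9)-(11)
        inp = np.concatenate([x_prev, z_t, c_t])      # decoder input a_t = [x_{t-1}; z_t; c_t]
        u, m = lstm_step(inp, u, m, dec_params)       # LSTM update as in (12)-(17)
        x_prev = W_out @ u + b_out                    # output pose, (18)
        poses.append(x_prev)
    return np.stack(poses)                            # shape: (T, d_x)

d_x, d_z, T = 24, 16, 32                              # illustrative sizes
d_in = d_x + d_z + d_h                                # decoder LSTM input dimension
dec_params = {k: {g: np.random.randn(*shape) * 0.01 for g in "ifog"}
              for k, shape in [("W", (d_h, d_in)), ("U", (d_h, d_h)), ("b", (d_h,))]}
W_out, b_out = np.random.randn(d_x, d_h) * 0.01, np.zeros(d_x)
x0 = np.zeros(d_x)                                    # the mean first pose in practice
X = generate_actions(H, T, d_z, dec_params, W_out, b_out, (W_a, U_a, v_a), x0)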

III-C Discriminator

The discriminator D also decodes h_{1:N} into a set of feature vectors c_{1:T} based on the attention mechanism (see equations (9)-(11)) [13]. The discriminator takes c_{1:T} and an action sequence X as inputs and produces a scalar result D(X, c_{1:T}) ∈ [0, 1] (see Figure 2). It returns 1 if the input has been determined to be real data. Let r_{1:T} = [r_1, …, r_T] denote the hidden states of the LSTM cells composing D, where r_t ∈ R^{d_h} and the dimension of the hidden state is the same as that of G. The output of D is calculated from its last hidden state as

D(X, c_{1:T}) = σ(w_D^T r_T + b_D),

where w_D ∈ R^{d_h} and b_D ∈ R. The hidden states of D are computed similarly to (12)-(17), except that the random noise vector is replaced by the zero vector, so that the input at time t is [x_t; 0; c_t].
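
A matching sketch of the discriminator's readout, under the same assumptions and reusing the helpers from the earlier sketches: it runs the attention-conditioned LSTM over a given pose sequence with the noise input zeroed and squashes the last hidden state into a real/fake probability. For brevity it reuses the generator's parameter layout; in practice D has its own LSTM and attention parameters.

def discriminate(X_seq, H, dec_params, w_D, b_D, attn):
    """Return D(X, C) in (0, 1) for a pose sequence X_seq given encoder states H."""
    d_h = H.shape[1]
    u, m = np.zeros(d_h), np.zeros(d_h)
    for x_t in X_seq:
        c_t, _ = attention_context(H, u, *attn)
        inp = np.concatenate([x_t, np.zeros(d_z), c_t])   # noise replaced by the zero vector
        u, m = lstm_step(inp, u, m, dec_params)
    return sigmoid(w_D @ u + b_D)                         # close to 1 means "judged real"

w_D, b_D = np.random.randn(d_h), 0.0
p_real = discriminate(X, H, dec_params, w_D, b_D, (W_a, U_a, v_a))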

Fig. 3: The overall structure of the proposed network. First, we train an autoencoder which maps between the natural language and the human motion. Its text-to-action encoder maps the natural language to the human action, and its action-to-text decoder maps the human action back to the natural language. After training this autoencoder, only the text encoder part is extracted in order to generate the conditional information related to the input sentence, which G and D can then use.

III-D Implementation Details

The desired performance was not obtained when we tried to train the entire network end-to-end. Therefore, we pretrain the RNN text encoder first. The text encoder is pretrained as part of an autoencoder which learns the relationship between the natural language and the human action, as shown in Figure 3. This autoencoder consists of a text-to-action encoder that maps the natural language to the human action, and an action-to-text decoder which reconstructs the natural language description from the human action. Both the text-to-action encoder and the action-to-text decoder are Seq2Seq models based on the attention mechanism [13].

The encoding part of the text-to-action encoder corresponds to the text encoder in our network, such that it encodes w_{1:N} into its hidden states h_{1:N} using (2)-(8). Based on the feature vectors c_{1:T} (see (9)-(11)), the hidden states of its decoder are calculated as in (12)-(17), and the estimated action sequence is generated by (18). Here, the random noise vector is replaced by the zero vector, so the decoder input at time t is [x_{t−1}; 0; c_t].

The action-to-text decoder works on a similar principle. After its encoder encodes the human action sequence X into its hidden states (see Figure 3), the decoding part of the action-to-text decoder decodes these hidden states into a set of feature vectors (see (9)-(11)). Based on these feature vectors, the hidden states of its decoder are calculated as in (12)-(17). From these hidden states, the word embedding representation of the sentence is reconstructed as in (18).

In order to train this autoencoder network, we have used a loss function L defined as follows:

L = λ_X Σ_{t=1}^{T} ||x_t − x̂_t||² + λ_W Σ_{i=1}^{N} ||w_i − ŵ_i||²    (19)

where x̂_t denotes the pose estimated by the text-to-action encoder and ŵ_i denotes the word embedding vector reconstructed by the action-to-text decoder. The constants λ_X and λ_W control how much the estimation loss of the action sequence and the reconstruction loss of the word embedding vector sequence are reduced, respectively.
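
A minimal sketch of the loss (19), assuming the network outputs an estimated action sequence X_hat and reconstructed word embeddings W_hat; the weighting constants and the squared-error form follow the reconstruction above and are placeholders rather than the exact training configuration.

import numpy as np

def autoencoder_loss(X, X_hat, W_sent, W_hat, lam_x=1.0, lam_w=1.0):
    """Weighted sum of the action estimation loss and the embedding reconstruction loss, as in (19)."""
    action_loss = np.sum((np.asarray(X) - np.asarray(X_hat)) ** 2)
    text_loss = np.sum((np.asarray(W_sent) - np.asarray(W_hat)) ** 2)
    return lam_x * action_loss + lam_w * text_loss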

1:Input: a set of input sentences {w_{1:N}} and output action sequences {X}, a number of training batch steps S, batch size M.
2:Train the autoencoder between the language and action
3:Initialize the text encoder with the trained values
4:Initialize the weight matrices and bias vectors of G that are shared with the autoencoder to the trained values
5:for s = 1, …, S do
6:   Randomly sample a minibatch of M sentence and action pairs (w_{1:N}, X)
7:   Sample the set of random noise vectors z_{1:T}
8:   Encode the sets of feature vectors c_{1:T} from the sampled sentences
9:   Generate fake data X̂ = G(z_{1:T}, c_{1:T})
10:   for j = 1, …, M do
11:      d_real^(j) ← D(X^(j), c_{1:T}^(j))
12:      d_fake^(j) ← D(X̂^(j), c_{1:T}^(j))
13:   end for
14:   L_D ← −(1/M) Σ_j [ log d_real^(j) + log(1 − d_fake^(j)) ]
15:   L_G ← −(1/M) Σ_j log d_fake^(j)
16:   Update the discriminator D by descending its stochastic gradient of L_D
17:   Update the generator G by descending its stochastic gradient of L_G
18:end for
Algorithm 1 Training the Text-to-Action GAN

Overall steps for training the proposed network are presented in Algorithm 1. After training the autoencoder network, the text encoder part is extracted and passed to the generator G and the discriminator D. In addition, in order to make the training of G more stable, the weight matrices and bias vectors of G that are shared with the autoencoder, i.e., the attention and LSTM decoder parameters in (9)-(18), are initialized to the trained values. When training G and D with the GAN value function shown in (1), we do not train the text encoder. This is to prevent the pretrained encoded language information from being corrupted while training the network with the GAN value function.

For training the autoencoder network, the number of training epochs, the batch size, the dimension of the LSTM hidden state, and the learning rate of the Adam optimizer [18] used to minimize the loss function L were fixed in advance, as were the weights λ_X and λ_W in the loss function. Likewise fixed were the dimension of the hidden state of the LSTM cells composing G and D, the dimension d_z of the random vector, which is sampled from the zero-mean Gaussian distribution with unit variance, the number of epochs and the batch size used to train G and D, and the learning rates of the Adam optimizer [18] used to optimize the value function for G and for D. All values of these parameters were chosen empirically.

Regarding training the generator G, we choose to maximize log D(G(z_{1:T}, c_{1:T}), c_{1:T}) rather than minimizing log(1 − D(G(z_{1:T}, c_{1:T}), c_{1:T})), since this has been shown to be more effective in practice in many cases [8, 9].
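
The following sketch contrasts the two generator objectives on a batch of discriminator outputs; the non-saturating form provides stronger gradients early in training, when D easily rejects the generated samples. It illustrates the heuristic only and is not our exact training code.

import numpy as np

def generator_losses(d_fake, eps=1e-8):
    """d_fake: discriminator outputs on generated samples, D(G(z, c), c)."""
    d_fake = np.asarray(d_fake)
    saturating = np.mean(np.log(1.0 - d_fake + eps))      # minimized in the original minimax game
    non_saturating = -np.mean(np.log(d_fake + eps))       # minimized instead: maximize log D(G(z, c), c)
    return saturating, non_saturating

# Early in training, D(G(z, c), c) is near 0: the saturating loss is almost flat there,
# while the non-saturating loss still yields a large gradient signal for G.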

IV Experiment

Fig. 4: An example from the dataset for the description ‘A woman is lifting weights’. From a video in the MSR-VTT dataset, we extract the 2D human pose based on the CPM [15]. The extracted 2D human poses are converted to 3D poses based on the code from [16]. The resulting 3D poses are used to train the proposed network.

IV-A Dataset

In order to train the proposed generative network, we use the MSR-VTT dataset, which provides YouTube video clips and sentence annotations [14]. As shown in Figure 4, we have selected videos in which human behavior is observed and extracted the upper-body 2D pose of the observed person with the CPM [15]. The extracted 2D poses are converted to 3D poses [16] and used as our dataset. (The dataset will be made publicly available.) We choose to use only the upper-body pose rather than the full-body pose, since the lower body is frequently occluded in the videos. Another option was to use the data presented in [3], but the number of action and sentence description pairs it provides was judged to be insufficient to train our network.

Fig. 5: An illustration of how the extracted 3D human pose is assembled into the pose vector x_t.
Fig. 6: Various generated 3D actions for ‘A girl is dancing to the hip hop beat’. Each row corresponds to a different sampled random noise vector sequence z_{1:T} used to generate a different action.
Fig. 7: Generated actions when different input sentences are given. When generating these actions, the random noise vector sequence is fixed and only the input feature vectors given to the generator G change.

Each extracted upper-body pose at time t is a 24-dimensional vector x_t ∈ R^24. The 3D position of the human neck and the 3D vectors of seven other joints compose the pose vector (see Figure 5). Since the sizes of detected human poses differ, we have normalized each joint vector to unit length (see Figure 5). Incorrectly extracted poses were corrected by hand. The corrected poses are then smoothed through Gaussian filtering. Each action sequence has a fixed duration and frame rate, so every action sequence contains the same number of frames.
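
A hedged NumPy sketch of how a 24-dimensional pose vector can be assembled from the neck position and seven joint vectors, with each joint vector scale-normalized as described above; the joint layout and the normalization to unit length are our reading of Figure 5, not an exact specification.

import numpy as np

def build_pose_vector(neck, joint_vectors):
    """neck: (3,) 3D neck position; joint_vectors: (7, 3) vectors for the other joints."""
    normalized = []
    for v in joint_vectors:
        n = np.linalg.norm(v)
        normalized.append(v / n if n > 0 else v)       # remove scale differences between people
    return np.concatenate([neck] + normalized)         # 3 + 7 * 3 = 24 dimensions

neck = np.array([0.0, 1.5, 0.0])
joints = np.random.randn(7, 3)                         # e.g. shoulders, elbows, wrists, head
x_t = build_pose_vector(neck, joints)                  # one pose vector of the sequence X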

Regarding the language annotations, some annotations contained information that is not relevant to the human action. For example, for the sentence ‘a man in a brown jacket is addressing the camera while moving his hands wildly’, we cannot tell from the pose information alone whether the man wears a brown jacket. For these cases, we manually correct the annotation so that it includes only the information related to the human action, e.g., ‘a man is addressing the camera while moving his hands wildly’.

In total, we have gathered 29,770 pairs of sentence descriptions and action sequences, and each sentence description is paired with about 10 to 12 actions.

IV-B 3D Action Generation

We first examine how action sequences are generated when a fixed sentence input and different random noise vector inputs are given to the trained network. Figure 6 shows three actions generated from one sentence input and three differently sampled random noise vector sequences. The generated pose vectors (see Figure 5) are fitted to a human skeleton of a predetermined size. The input sentence description is ‘A girl is dancing to the hip hop beat’, which is not included in the training dataset. In this figure, the human poses in one rectangle represent one action sequence, listed in time order from left to right. The time interval between consecutive poses is 0.5 seconds. It is interesting to note that, even though the same sentence input is given, varied human actions are generated when the random vectors differ. In addition, all of the generated motions resemble dancing.

We also examine how the action sequence changes when the random noise vector sequence is fixed and the sentence input varies. Figure 7 shows three actions generated from one fixed random noise vector sequence and three different sentence inputs. The input sentences are ‘A woman drinks a coffee’, ‘A muscular man exercises in the gym’, and ‘A chef is cooking a meal in the kitchen’. A limitation of this result is that it is difficult to understand the concrete context from the action alone, since no tools or background information related to the action are shown. However, the first result in Figure 7 shows an action sequence in which the person lifts the right hand toward the mouth, as when drinking something. The second result shows an action sequence like a person exercising with a dumbbell in both hands. The last result shows an action sequence as if a chef is cooking food in the kitchen and tasting a sample.

Fig. 8: Comparison results when the input sentence is ‘Woman dancing ballet with man’, which is included in the training dataset. Each result is compared with the human action data that corresponds to the input sentence in the training dataset.

IV-C Comparison with [5]

In order to examine the difference between our proposed network and the network of [5], we have implemented the network presented in [5] in TensorFlow and trained it with our dataset. First, we compare the generated actions when the sentence ‘Woman dancing ballet with man’, which is included in the training dataset, is given as an input to each network. The result of the comparison is shown in Figure 8. The time interval between consecutive poses is 0.4 seconds. In this figure, the results from both networks are compared to the human action data that matches the input sentence in the training dataset. The result shows that our generative model synthesizes a human action sequence that is more similar to the data. Although the network presented in [5] also generates an action resembling a ballet dancer with both arms open, the action sequence synthesized by our network is more natural and closer to the data.

Fig. 9: Comparison results when a sentence not included in the training dataset is given as an input. The input sentence given to each network is ‘A drunk woman is stumbling while lifting heavy weights’. This sentence is a combination of ‘A drunk woman stumbling’ and ‘Woman is lifting heavy weights’, which are included in the training dataset. The result shows that our proposed network generates a human action sequence corresponding to a proper combination of the two training examples: the generated action sequence looks like a drunk woman staggering while lifting weights.

In addition, we give a sentence which is not included in the training dataset as an input to each network. The result of the comparison is shown in Figure 9. The time interval between consecutive poses is 0.4 seconds. The given sentence is ‘A drunk woman is stumbling while lifting heavy weights’. It is a combination of two sentences included in the training dataset, ‘A drunk woman stumbling’ and ‘Woman is lifting heavy weights’. Although such a situation is rarely observed in practice, this experiment tests whether the proposed network has learned the relationship between natural language and human action well enough to respond flexibly to new input sentences. The action sequence generated by our network looks like a drunk woman staggering while lifting weights, while the action sequence generated by the network in [5] looks simply like a person lifting weights.

The method suggested in [5] also produces a human action sequence that roughly corresponds to the input sentence; however, the generated behaviors are all symmetric and not as dynamic as the data. This is because their loss function is designed to maximize the likelihood of the data, whereas the data contain poses that are asymmetric to the left or right. For example, for the ballet movement shown in Figure 8, the training data may contain both a left-arm-lifting action and a right-arm-lifting action for the same sentence ‘Woman dancing ballet with man’. For a network trained to maximize the likelihood of the entire data, a symmetric pose lifting both arms has a higher likelihood and eventually becomes the network's solution. On the other hand, our network, which is trained with the GAN value function (1), manages to generate various human action sequences that look close to the training data.

Fig. 10: Results of applying a generated action sequence to the Baxter robot. The generated action sequence is applied to the Baxter robot based on the Baxter-teleoperation code from [19]. The time difference between frames capturing Baxter’s pose is about second.

IV-D Generated Action for a Baxter Robot

We enable a Baxter robot to execute a given action trajectory defined in a 3D Cartesian coordinate system by referring to the code from [19]. Since the maximum speed at which a Baxter robot can move its joints is limited, we slow down the given action trajectory before applying it to the robot. Figure 10 shows how the Baxter robot executes the 3D action trajectory corresponding to the input sentence ‘A man is throwing something out’. Here, the time difference between frames capturing the Baxter’s pose is about second. We can see that the generated 3D action looks like throwing something forward.
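
One simple way to slow a generated trajectory down to a speed the robot can track is to stretch its time axis and resample by linear interpolation, as in the sketch below; the slow-down factor and the interface to the robot controller are placeholders, not the procedure used with the Baxter-teleoperation code.

import numpy as np

def slow_down_trajectory(X, slow_factor=3.0, out_steps=None):
    """Temporally stretch a pose trajectory X of shape (T, d) by slow_factor via interpolation."""
    T, d = X.shape
    t_orig = np.arange(T)
    out_steps = out_steps or int(T * slow_factor)
    t_new = np.linspace(0, T - 1, out_steps)           # denser time grid -> slower playback
    return np.stack([np.interp(t_new, t_orig, X[:, j]) for j in range(d)], axis=1)

X = np.random.randn(32, 24)                            # stands in for a generated action sequence
X_slow = slow_down_trajectory(X)                       # sent to the robot at the original frame rate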

V Conclusion

In this paper, we have proposed a generative model based on the Seq2Seq model [7] and the generative adversarial network (GAN) [8] for enabling a robot to execute various actions corresponding to an input language description. In order to train the proposed network, we have used the MSR-Video to Text dataset [14], which contains videos recorded in real-world situations and uses a wider range of words in its language descriptions than other datasets. Since the data do not contain 3D human pose information, we have extracted the 2D upper-body pose of the observed person with the convolutional pose machine [15]. The extracted 2D poses are converted to 3D poses [16] and used as our dataset. The generated 3D action sequences are transferred to a robot.

It is interesting to note that our generative model, which differs from existing related works in that it exploits the advantages of the GAN, is able to generate diverse behaviors when the input random vector sequence changes. In addition, the results show that our network can generate action sequences that are more dynamic and closer to the actual data than the network presented in [5]. The proposed generative model, which understands the relationship between human language and action, generates an action corresponding to the input language. We believe that the proposed method can make the actions of robots more understandable to their users.

References

  • [1] E. Ribes-Iñesta, “Human behavior as language: some thoughts on wittgenstein,” Behavior and Philosophy, pp. 109–121, 2006.
  • [2] W. Takano and Y. Nakamura, “Symbolically structured database for human whole body motions based on association between motion symbols and motion words,” Robotics and Autonomous Systems, vol. 66, pp. 75–85, 2015.
  • [3] M. Plappert, C. Mandery, and T. Asfour, “The kit motion-language dataset,” Big data, vol. 4, no. 4, pp. 236–252, 2016.
  • [4] W. Takano and Y. Nakamura, “Statistical mutual conversion between whole body motion primitives and linguistic sentences for human motions,” The International Journal of Robotics Research, vol. 34, no. 10, pp. 1314–1328, 2015.
  • [5] M. Plappert, C. Mandery, and T. Asfour, “Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks,” arXiv preprint arXiv:1705.06400, 2017.
  • [6] S. R. Eddy, “Hidden markov models,” Current opinion in structural biology, vol. 6, no. 3, pp. 361–365, 1996.
  • [7] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in neural information processing systems, 2014, pp. 3104–3112.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [9] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” in Proc. of the 33rd International Conference on International Conference on Machine Learning-Volume 48.   JMLR. org, 2016, pp. 1060–1069.
  • [10] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv preprint arXiv:1609.04802, 2016.
  • [11] A. Dosovitskiy and T. Brox, “Generating images with perceptual similarity metrics based on deep networks,” in Advances in Neural Information Processing Systems, 2016, pp. 658–666.
  • [12] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [13] O. Vinyals, Ł. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton, “Grammar as a foreign language,” in Advances in Neural Information Processing Systems, 2015, pp. 2773–2781.
  • [14] J. Xu, T. Mei, T. Yao, and Y. Rui, “Msr-vtt: A large video description dataset for bridging video and language,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5288–5296.
  • [15] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, “Convolutional pose machines,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4724–4732.
  • [16] X. Zhou, M. Zhu, S. Leonardos, K. G. Derpanis, and K. Daniilidis, “Sparseness meets deepness: 3d human pose estimation from monocular video,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4966–4975.
  • [17] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information processing systems, 2013, pp. 3111–3119.
  • [18] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [19] P. Steadman. (2015) baxter-teleoperation. [Online]. Available: https://github.com/ptsteadman/baxter-teleoperation