Spoken language understanding (SLU) is a core component of voice interaction applications. Traditionally, SLU is performed on sentences generated by voice activity detection on user queries. In this work, we focus on turn-level spoken language understanding, that is, understanding the ultimate intent when a user speaks one or more utterances compositionally in one turn. Figure 1 illustrates a voice ordering example for turn-level SLU. The user speaks four utterances in sequence to the agent: “I want two cups of americanos and one cup of latte with vanilla all big cup americanos less sugar”. The agent interprets the utterances as two order creation actions and two order modification actions, then executes the actions and automatically generates the order “two big cups of americanos with less sugar and one big cup of latte with vanilla”.
Turn-level SLU consists of multiple sub-tasks. Firstly, spoken language contains disfluencies (e.g. repetitions or repairs) and no explicit structures (e.g. sentence boundaries). Hence, disfluency removal and sentence segmentation are necessary for the downstream language understanding component. Secondly, coreference resolution, intent segmentation and classification, and slot extraction are needed for inferring the ultimate intent of multiple utterances. As shown in Figure 1, the first mention of “americanos” refers to the product “americanos” in an order creation action. The second mention of “americanos”, through coreference resolution, is decided as referring to the previously mentioned “americanos”, and triggers an order modification action.
A traditional pipelined approach solves the aforementioned sub-tasks through a sequence of components. However, developing these components usually requires non-trivial annotation effort for supervised training, and the pipeline is not easy to port to new domains or scale up. To address these problems, we propose an end-to-end statistical model with weak supervision. Weakly supervised learning has been extensively studied in the fields of semantic parsing and program synthesis (Cheng et al., 2017; Guu et al., 2017; Krishnamurthy et al., 2017; Liang et al., 2017; Rabinovich et al., 2017; Suhr et al., 2018a,b; Goldman et al., 2018; Liang et al., 2018), where indirect supervision (e.g. question-denotation pairs) is adopted. Direct supervision (e.g., question-program pairs) requires annotating programs, which is known to be expensive and difficult to scale up. In contrast, training data for indirect supervision is easy to collect at low cost, so the end-to-end learning approach with weak supervision scales up more easily than supervised learning approaches.
It is challenging to develop a semantic parser for turn-level SLU based on question-denotation pairs. Firstly, there is a large search space for program exploration. As shown in Figure 1, a user speaks multiple utterances in a session, where an utterance can have different intents (e.g. creation or modification) and slots. Incorrect interpretations of utterances, and hence incorrect programs, can accidentally produce the correct target denotations; such programs are denoted spurious programs. Sophisticated algorithms are required for finding long promising trajectories on complicated problems while guarding against spurious programs. Secondly, the complexity of the problems diverges widely. Some turns contain only one utterance (easy task), while others contain many more (hard task). Policy gradient estimates for long trajectories tend to have high variance under weak supervision (Liang et al., 2018). The hard task needs more diverse and more plentiful training data than the easy task. As a result, uniformly sampling data for exploration and training suffers from sample inefficiency and over-fitting.
In this work, we propose randomized beam search with memory augmentation (RBSMA) for improving the exploration of long, promising programs for complicated problems. Randomized beam search can improve exploration efficiency (Guu et al., 2017). With the memory enhancement, RBSMA can learn from failed trials and guide the exploration towards unexplored promising directions. With a cache of the highest reward programs per turn, RBSMA can re-sample those programs despite the adopted randomized exploration strategy.
Curriculum learning (CL) can deal with the diversity of sample complexity (Liang et al., 2017): the learner focuses on easy samples first, then gradually puts more weight on more difficult ones. Despite strong empirical results, most CL methods rely on hand-crafted curricula, where an expert defines the complexity level of a sample and designs the curriculum, which makes the approach difficult to scale up to complicated problems. In this work, we extend automated CL approaches for supervised learning (Graves et al., 2017) to weakly supervised learning. We use self reward gain as the reward signal, which measures the learning progress when the learner is fed a batch of question-denotation pairs sampled from some task. The reward is then used in a nonstationary multi-armed bandit setting, which determines a stochastic syllabus and provides training data to the learner in a principled order.
We evaluate our proposed model for turn-level SLU using real-world user logs of a commercial voice ordering system. Experimental results demonstrate that when trained on a small number of end-to-end annotated sessions collected at low cost, the proposed model performs comparably to the deployed pipelined system, saving the development labor by over an order of magnitude. In particular, RBSMA improves the test set accuracy by 7.8% relative compared to standard beam search. Automated CL leads to better generalization and further improves the test set accuracy by 5.0% relative, reaching an overall 10.4% relative gain over standard beam search without CL.
It should be noted that the proposed model is not limited to voice ordering applications. It can be readily applied in other voice interaction systems for turn-level SLU (e.g., constraining the search space by understanding a user’s multiple queries compositionally), semantic parsing, and program synthesis, among others. The technical contributions of this work are as follows:
We develop an end-to-end statistical model with weak supervision (denotations) for turn-level SLU and find that it can perform well, easily scale up and port to new domains.
We propose randomized beam search with memory augmentation (RBSMA). We show that RBSMA can explore long promising trajectories for complicated problems more efficiently than the standard beam search.
We develop an automated curriculum learning approach for weakly supervised learning to address the diversity of problem complexities. We observe that automated CL can lead to faster training and better generalization.
2 Related Work
Recently there has been much progress in learning neural semantic parsers with weak supervision (Cheng et al., 2017; Guu et al., 2017; Krishnamurthy et al., 2017; Liang et al., 2017; Rabinovich et al., 2017; Suhr et al., 2018a,b; Goldman et al., 2018; Liang et al., 2018). Systematic search was explored to improve the exploration of reinforcement learning (RL) and the stability of weak supervision (Guu et al., 2017; Liang et al., 2017). Memory Augmented Policy Optimization (MAPO; Liang et al., 2018) uses a memory buffer of promising trajectories to reduce the variance of policy optimization. In this work, we extend these ideas by proposing randomized beam search with memory augmentation to improve exploration efficiency. CL can deal with the diversity of sample complexity (Liang et al., 2017). We develop an effective approach extending automated CL for supervised learning (Graves et al., 2017) to weakly supervised learning.
3 Task Description
3.1 Two General Tasks
Inspired by Goldman et al. (2018), turn-level SLU is roughly divided into a lexical task (i.e., mapping words and phrases to tags that are parts of a program) and a structural task (i.e., combining tags into a program). We collect a typed dictionary of terms and their aliases, and conduct the lexical task by word matching. Figure 1 shows the matched tag sequence as “Number:Two Product:Americano Number:One Product:Latte Flavor:Vanilla Product:All Size:Big Product:Americano Comment:Less-Sugar”. The tagging process filters out noise and task-irrelevant words in the automatic speech recognition (ASR) output, easing the downstream structural task.
The structural task aims at generating the target action sequence (program) from the tag sequence. Each action in the sequence is a token of the program. The action sequence is then executed to generate the final denotation. The target action sequence for the above tag sequence is “(create Americano Two) (create Latte One Vanilla) (modify All Big) (modify Americano Less-Sugar)”. In our turn-level SLU setup, the structural task is quite challenging due to multiple types of tag manipulations: (1) Tag Deletion: repetitive tags created due to disfluency should be removed; for example, “americano americano big cup” should be transformed to “(create Americano Big)”. (2) Tag Segmentation: intent segmentation groups sets of tags and generates a corresponding action for each group. For example, “two big americano cold one latte” should be segmented into two actions, “(create Americano Two Big Cold) (create Latte One)” (we use a segmentation heuristic based on the Number tag, hence “cold” is grouped into the first action). (3) Tag Copy and Assignment: for nested structures, tags on the root node should be copied and assigned to leaf nodes. For example, “two hot lattes one big cup one small cup” should be transformed to “(create Latte One Big Hot) (create Latte One Small Hot)”. (4) Tag Global Assignment: some tags should be assigned to a distant node for modification purposes, that is, coreference resolution is implicitly modeled. For example, “one americano two lattes americano big cup” should be interpreted as “(create Americano One) (create Latte Two) (modify Americano Big)”.
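The disfluency removal and Number-based segmentation heuristics described above can be sketched in a few lines; the `(type, value)` tuple representation of tags is an illustrative choice, not the paper's actual data structure.

```python
def dedup(tags):
    """Remove contiguous repeated tags (simple disfluency removal)."""
    out = []
    for tag in tags:
        if not out or out[-1] != tag:
            out.append(tag)
    return out

def segment_on_number(tags):
    """Start a new tag group whenever a Number tag follows an open group,
    mirroring the Number-based segmentation heuristic in the text."""
    groups, current = [], []
    for typ, val in tags:
        if typ == "Number" and current:
            groups.append(current)
            current = []
        current.append((typ, val))
    if current:
        groups.append(current)
    return groups
```

On the example “two big americano cold one latte”, the heuristic yields two groups, with “cold” attached to the first one, as in the paper's footnote.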
3.2 Problem Statement
Given a training set of examples {(x_i, t_i, y_i)}, where x_i is a sequence of utterances within a turn, t_i is the tag sequence of x_i produced by the lexical task, and y_i is the set of objects that the agent should generate according to x_i, our goal is to learn a semantic parser that maps a turn of utterances x to a program z, such that when z is executed by the agent, it yields the correct denotation y.
In our turn-level SLU setup based on voice ordering, the objects in the denotation have internal structures. For example, an ordered object refers to one product, which contains several properties such as product name and number of cups (Section 6.1). Based on studying real-world user logs of a commercial voice ordering system, we use two kinds of functions for the tokens in a program: (1) Create Function: (create p_1 p_2 … p_k); (2) Modify Function: (modify p_1 p_2 … p_k). Here p_1 is the key property (e.g. product name) and is mandatory, while the other properties are optional and are set to default values when missing (e.g. the number-of-cups property defaults to one); k denotes the number of properties, and p_1, …, p_k are properties and also parameters of a function.
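A minimal executor for the two function types might look as follows. The tuple-and-dict action encoding, the `DEFAULTS` table, and the matching rule for `modify` (key property or “All”) are assumptions for illustration; the paper's actual executor may differ.

```python
DEFAULTS = {"number": "One"}  # e.g. number of cups defaults to one

def execute(program):
    """Execute a sequence of create/modify actions into an order (denotation).

    Each action is ("create", key_property, {other properties}) or
    ("modify", target_key, {properties to overwrite}).
    """
    order = []
    for op, key, props in program:
        if op == "create":
            item = dict(DEFAULTS)       # fill missing properties with defaults
            item["product"] = key
            item.update(props)
            order.append(item)
        elif op == "modify":
            for item in order:          # "All" modifies every item
                if key == "All" or item["product"] == key:
                    item.update(props)
    return order
```

Running the Figure 1 program produces the order “two big cups of americanos with less sugar and one big cup of latte with vanilla”.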
4 Model Description
We decompose the program generation task into two subtasks. One is function type (function name) generation, depending on the context; the other is selection of tags produced by the lexical task as parameters for each function type. We utilize a seq2seq model based on the semantic parser in Guu et al. (2017) and Goldman et al. (2018) and extend it with a pointer-generator (See et al., 2017). In this way, the decoder vocabulary size is significantly reduced.
The probability of a program z = (z_1, …, z_T) is the product of the probabilities of its tokens given the history: p(z | t) = ∏_τ p(z_τ | z_{<τ}, t). We approximate the conditional probability of a program given the input turn x by conditioning on the tag sequence t produced from x. p(z_τ | z_{<τ}, t) is computed as p_gen(τ) · p_fn(z_τ) + (1 − p_gen(τ)) · Σ_{i: t_i = z_τ} α_{τ,i}, where p_gen(τ) is the probability of the function name generation subtask (rather than tag selection) at timestep τ, p_fn(z_τ) is the probability of generating function name z_τ, and α_{τ,i} is the attention weight on input tag t_i.
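The generate-vs-copy mixture described above (in the style of a pointer-generator) can be sketched numerically; the toy probabilities below are illustrative, not learned values.

```python
def token_prob(token, p_gen, fn_probs, attention, tags):
    """Pointer-generator style mixture: with probability p_gen generate a
    function name from a small vocabulary; with probability 1 - p_gen copy
    an input tag, summing attention over all positions holding that tag."""
    gen = p_gen * fn_probs.get(token, 0.0)
    copy = (1.0 - p_gen) * sum(a for a, t in zip(attention, tags) if t == token)
    return gen + copy
```

Because tokens are either one of a handful of function names or copies of input tags, the decoder vocabulary stays very small.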
We now describe our exploration-based learning algorithm with weak supervision. To use weak supervision, we treat the program z as a latent variable that is approximately marginalized. To describe the learning objective, we define a reward R(z, y) over ŷ = exec(z), the execution result of z: the first part of R is a binary signal 1[ŷ = y] indicating whether the task has been completed by producing the target denotation y; the second part computes the edit distance between ŷ and the target denotation, providing a meaningful signal when the task is not completed.
The objective is to maximize the following function: J(θ) = Σ_i Σ_{z ∈ B_i} R(z, y_i) · p_θ(z | t_i), where Z is the program space and B_i ⊆ Z are the programs found by beam search.
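The two-part reward described above (binary completion signal plus an edit-distance term) can be sketched as follows. The Levenshtein-style sequence distance, the normalization, and the weighting `alpha` are illustrative assumptions; Section 6 notes only that the distance counts differing properties between items.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences of hashable items."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def reward(pred, target, alpha=0.5):
    """Binary completion signal plus a normalized edit-distance shaping term.

    alpha is a hypothetical weighting, not taken from the paper.
    """
    done = 1.0 if pred == target else 0.0
    shaped = 1.0 - edit_distance(pred, target) / max(len(pred) + len(target), 1)
    return done + alpha * shaped
```

The shaping term gives partial credit to near-miss programs, which is the “meaningful signal for uncompleted task situations” mentioned above.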
5.1 RBSMA: Randomized Beam Search with Memory Augmentation
Beam search is a powerful approach for systematically searching the large space of programs when training with weak supervision. Typically, at each decoding step, we maintain a beam B of K program prefixes of length n, expand each prefix by one token to form a program pool P of prefixes of length n + 1, and keep the K program prefixes with the highest model probabilities out of P.
We explore randomized beam search (Guu et al., 2017), which combines standard beam search with the randomized off-policy exploration of RL. Extensive studies in RL show that noise injection in the action space (i.e., when decoding program tokens) can significantly improve exploration efficiency. Under weak supervision, randomized beam search can increase the chance of finding correct programs. Instead of keeping the top scored program prefixes at each decoding step, we either uniformly sample a program prefix out of P with probability ε or pick the highest scoring program prefix in P with probability 1 − ε (the ε-greedy method). However, for turn-level SLU, the program space is very large. Although this randomized strategy aids exploration, it can repeatedly sample the same incorrect programs over time, since most programs in the program space are incorrect. Hence exploration is still guided by the current model policy, and long-tail promising trajectories are difficult to explore. For example, most turns contain only creation actions, so the model assigns high probabilities to creation actions, making it difficult to explore target programs composed of both creation and modification actions.
To address these problems, we extend randomized beam search with memory augmentation, i.e., RBSMA. We maintain a set S_x of fully explored program prefixes for turn x. We first filter out the program prefixes in the pool P that already exist in S_x, then select programs from the remaining pool using the ε-greedy method. In a sense, S_x enables us to learn from failed trials, considering that most prefixes in S_x belong to incorrect programs; this helps guide the exploration towards unexplored promising directions. However, even if we sample the correct program once, we can hardly re-sample it due to the randomized exploration strategy, which makes the learning process difficult to converge. Therefore, we also maintain a cache C_x of the highest reward programs explored so far for each turn x. After each beam search procedure, we augment the beam search result with the programs in C_x and update the highest reward programs in C_x accordingly. The pseudo code for RBSMA is shown in Algorithm 1.
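One decoding step of the scheme above might look like the following sketch. The `expand` and `score` callables, the set-based memory of explored prefixes, and the list-of-tokens program representation are simplified stand-ins for the paper's actual components (Algorithm 1).

```python
import random

def rbsma_step(beam, expand, score, explored, beam_size, epsilon, rng):
    """One step of randomized beam search with memory augmentation.

    beam: current program prefixes (lists of tokens)
    expand: prefix -> list of one-token extensions
    score: prefix -> model score
    explored: set of fully explored prefixes (the failure memory)
    epsilon: probability of uniform sampling instead of greedy selection
    """
    pool = [p for prefix in beam for p in expand(prefix)]
    pool = [p for p in pool if tuple(p) not in explored]   # learn from failed trials
    new_beam = []
    while pool and len(new_beam) < beam_size:
        if rng.random() < epsilon:
            choice = rng.choice(pool)        # uniform exploration
        else:
            choice = max(pool, key=score)    # greedy w.r.t. model policy
        pool.remove(choice)
        new_beam.append(choice)
    return new_beam
```

With `epsilon = 0` and an empty memory this reduces to standard beam search; adding a prefix to `explored` forces the search into a different direction on the next pass.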
Recently, MAPO (Liang et al., 2018) was proposed, using a memory buffer of promising trajectories to reduce the variance of policy optimization for program synthesis. There are two major differences between our proposed RBSMA and MAPO. Firstly, RBSMA is based on beam search while MAPO employs Monte Carlo (MC) style sampling. MC style sampling tends to revisit the programs with the highest probabilities; in contrast, after decoding the highest probability program of a peaky distribution under the model policy, beam search can still use its remaining beam capacity to explore at least K − 1 other programs. Secondly, RBSMA utilizes a randomized exploration strategy, which has been shown to improve exploration efficiency and is critical for solving the complicated problems in turn-level SLU.
5.2 Automated Curriculum Learning
Considering the diversity of problem complexity, and to facilitate faster and better learning, we explore automated CL, which organizes data into a curriculum and presents it in a principled order to the learning algorithm. We organize the training set into N tasks D_1, …, D_N. The ensemble of all the tasks is a curriculum. A sample is a batch of data drawn randomly from one of the tasks.
Inspired by Graves et al. (2017), we view a curriculum containing N tasks as an N-armed bandit and design a syllabus as an adaptive policy which seeks to maximize payoffs from this bandit and continuously adapts to optimize the learning progress. An agent selects a sequence of arms (tasks) a_1, …, a_T over T rounds. After each round, the selected task produces a payoff (a real-valued reward), while the payoffs for the other tasks are not observed.
The bandit is nonstationary because the model parameters θ are updated during training, so the payoff for each arm (task) can change between successive choices. Following Graves et al. (2017), we use the adversarial bandit algorithm Exp3.S (Auer et al., 2002; Graves et al., 2017). The policy is π_t(i) = (1 − ε) · exp(w_{t,i}) / Σ_j exp(w_{t,j}) + ε / N, where π_t is the policy defined by a set of weights w_t and ε refers to the extent of noise injection. After observing the reward r_t for the arm a_t selected at round t from π_t, the weights are updated with the importance-sampled re-scaled reward r̂_{t,i} = r_t · 1[a_t = i] / π_t(i) and learning step η.
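The bandit-based task selection above can be sketched as follows. This is a simplified Exp3-style learner: the full Exp3.S adds a weight-sharing term for nonstationarity, which is omitted here, and the constants `eta` and `eps` are illustrative.

```python
import math
import random

class Exp3Sketch:
    """Simplified Exp3-style bandit for task (arm) selection."""

    def __init__(self, n_tasks, eta=0.1, eps=0.05, rng=None):
        self.n, self.eta, self.eps = n_tasks, eta, eps
        self.w = [0.0] * n_tasks
        self.rng = rng or random.Random(0)

    def probs(self):
        # epsilon-smoothed softmax over the weights
        z = sum(math.exp(w) for w in self.w)
        return [(1 - self.eps) * math.exp(w) / z + self.eps / self.n
                for w in self.w]

    def select(self):
        p, r, acc = self.probs(), self.rng.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r <= acc:
                return i
        return self.n - 1

    def update(self, arm, reward):
        p = self.probs()
        for i in range(self.n):
            r_hat = reward / p[arm] if i == arm else 0.0   # importance weighting
            self.w[i] += self.eta * r_hat
```

Arms that repeatedly yield high payoffs (here, high learning progress) quickly accumulate weight and are sampled more often, while the ε term keeps every task in play.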
Different from the loss-driven progress signals explored in Graves et al. (2017) for supervised learning, under weak supervision we use self reward gain (SRG) as the learning progress signal, comparing the predictions made by the model before and after training on some sample x. We denote the model parameters before and after training on x by θ and θ′, respectively. To avoid bias, we sample another batch x′ from the same task as x and compute SRG = R_{θ′}(x′) − R_θ(x′), where R_θ(x′) equals the reward R (defined earlier in Section 5) of the program predicted by model θ on x′ out of the beam, and R_{θ′}(x′) is computed similarly using model θ′.
Finally, we re-scale SRG by min-max normalization for better convergence and assign the re-scaled SRG to the payoff r_t. The pseudo code for automated CL with SRG is shown in Algorithm 2.
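The progress signal and its re-scaling can be sketched as below; mapping onto the interval [−1, 1] follows Graves et al. (2017) and is an assumption here, since the target interval is not stated explicitly above.

```python
def self_reward_gain(reward_before, reward_after):
    """Learning progress on a held-out batch x' from the same task:
    reward under the updated model minus reward under the old model."""
    return reward_after - reward_before

def minmax_rescale(v, lo, hi):
    """Map v from the observed range [lo, hi] to [-1, 1] for the bandit payoff."""
    if hi == lo:
        return 0.0
    return 2.0 * (v - lo) / (hi - lo) - 1.0
```

A positive SRG means the batch taught the model something; feeding the re-scaled value to the bandit steers sampling toward tasks where progress is currently fastest.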
6 Experiments and Analysis
The example application is voice ordering for coffee. One item (object) in a coffee order contains seven properties, summarized in Table 1. For evaluation, we only need to collect question-denotation pairs, that is, a session (turn) of user utterances and its final order. The final order is easy to annotate, so the weak supervision data can be collected at low cost.
| Property Name | Examples | Total # |
| --- | --- | --- |
| number | one, two, three… | 20 |
| cup size | small, middle, big | 3 |
| location | pack, dine in | 2 |
| comment | less sugar, little ice… | 18 |
We create two datasets, namely recorded100 and log1144. recorded100 consists of 100 sessions recorded from customers placing orders with a human clerk in a coffee shop, with the orders generated manually by the clerk. log1144 consists of 1144 sessions extracted from real-world user logs of a commercial voice ordering system in a coffee shop, where the orders are labelled manually. After manually transcribing user utterances based on the ASR output, recorded100 is used as the training set and log1144 as the test set. The data statistics are summarized in Table 2. One major goal of the proposed model is to use a small amount of weak supervision data collected at low cost to train a high-performing turn-level SLU system. Hence we intentionally train on a small amount of training data (recorded100) and test on a much larger test set to evaluate the generalization of the proposed model.
In both datasets, we observe that users order up to three different items in a session. Columns r1, r2, and r3 in Table 2 show the percentages of sessions ordering one, two, or three items, respectively (an example of ordering three items (r3) in the test set is “one middle cup of mocha and one big cup of latte with vanilla two cups of regular lattes all take away”). Sessions with one ordered item (easy task) are much more frequent than sessions with three ordered items (the most complex problems in this setup). Note that one item has seven different properties, and both creation and modification actions might appear in a session; hence, the program space for r3 is extremely large. We evaluate our proposed model on the training and test sets. The evaluation metric is accuracy, i.e., the percentage of examples where the execution result ŷ of the generated program equals the target result y.
6.2 The Pipelined Baseline
We take the deployed pipelined system as the baseline. In this system, we first transform the utterances in a turn into a sequence of tags based on the lexical task (Section 3.1). Second, we remove contiguous repeated tags for disfluency removal. Third, inspired by the shift-reduce parser, we maintain a stack of tags and a set of ordered items, both initially empty. We then look ahead at each un-scanned tag and decide on actions based on hand-crafted rules: some decisions shift the current tag onto the stack, while others reduce the current stack. Whether a reduce action is a creation action or a modification action is also decided by hand-crafted rules. This baseline approach scales up poorly (e.g., to more combinations of order items) and is difficult to port to new domains.
6.3 Details of Model and Training
The seq2seq model for program generation consists of a BiLSTM encoder and a feedforward decoder, with hidden state dimensions of 30 and 50, respectively. The decoder takes as input the encoder hidden states as well as the embeddings of the last 5 decoded tokens and a bag-of-words vector of all the decoded tokens. The decoding beam size is 40. The token embedding dimension is 12. Encoder input tag embeddings are initialized as follows. Given K types of tags in total, the tag embedding dimension is K + 2. The first dimension is the index of the tag in the set of all tags; the (k + 1)-th dimension has value 1 or 0, indicating whether the tag is of type k or not; the last dimension is the index of the tag within its own type. Tag embeddings are then optimized end-to-end.
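One plausible reading of this initialization scheme is sketched below; the exact dimension layout and indexing are assumptions (the original description is partially garbled), and the toy tag inventory is hypothetical.

```python
def init_tag_embedding(tag, all_tags, types):
    """Build a (K + 2)-dimensional initial embedding for a tag:
    [global index, one-hot over the K types..., index within own type].

    types: mapping from type name to the ordered list of its member tags.
    Layout is a hypothetical reconstruction, not the paper's exact scheme.
    """
    K = len(types)
    vec = [0.0] * (K + 2)
    vec[0] = float(all_tags.index(tag))            # index among all tags
    for k, members in enumerate(types.values()):
        if tag in members:
            vec[1 + k] = 1.0                       # one-hot type indicator
            vec[K + 1] = float(members.index(tag)) # index within its type
    return vec
```

The embeddings only start from this structured initialization; they are trained end-to-end afterwards.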
We define the edit distance d(ŷ, y) in R as the total number of differing properties between the items in ŷ and in y. To train the parameters θ, similar to Guo et al. (2018), we re-scale R based on its ranking for better convergence and optimization towards better programs: programs whose R is top-ranked receive reward 1.0, and the rest receive reward 0. Note that the original R is discrete, so there can be multiple top-ranked programs with re-scaled reward 1.0. We also employ code assistance to help prune the search space by checking the syntax of partially generated programs, following previous weak supervision work (Liang et al., 2017). To encourage exploration, we use ε-greedy sampling in RBSMA. For automated CL, we define tasks based on the difficulty of the denotation y. A straightforward measure of difficulty is the number of ordered items in y: we assign the training data with just one ordered item in the denotation to the easy task, and data with multiple ordered items to the hard task. The parameters for the Exp3.S algorithm (Section 5.2) are tuned together with the other hyperparameters. We uniformly sample from all the tasks for the first 80 steps as warm-up. Adam is used for optimization, with learning rate 0.001 and mini-batch size 8 (hyperparameters are optimized based on the training set accuracy).
We compare the proposed model (denoted WeakSup) to the deployed pipelined system (baseline). For ablation analysis, we evaluate the following variants of our approach. WeakSup_SBS_Uni is a variant of WeakSup based on standard beam search, i.e., removing the memory in RBSMA and setting ε = 0, and with uniform sampling from both easy and hard tasks, i.e., no CL. WeakSup_SBS is a variant of WeakSup based on standard beam search but with CL. WeakSup_Uni is a variant of WeakSup with RBSMA but no CL.
As shown in Table 3, WeakSup performs slightly better than the deployed Pipelined system on the test set, despite being trained only on a small number of end-to-end annotated sessions collected at low cost. We observe that 92.3% of the errors made by WeakSup_SBS on the training set belong to the hard task (i.e., ordering more than one item), since such cases are more difficult to explore. WeakSup achieves 100% accuracy on the training set, demonstrating that long promising programs for complicated problems can be explored by RBSMA. Pipelined achieves 95% accuracy on the training set, indicating that it is difficult to cover all the hard problems through hand-crafted rules. Without CL, RBSMA improves the test set accuracy by 5.1% relative (from 78.1% to 82.1%) over standard beam search; with CL, by 7.8% relative (from 80.0% to 86.2%). CL improves generalization on the test data: with RBSMA, CL achieves a 5% relative gain (comparing WeakSup with WeakSup_Uni) while maintaining similar training set accuracy. The combination of RBSMA and CL improves the test set accuracy by 10.4% relative (78.1% to 86.2%). The total training time for the proposed model WeakSup is 1.5 hours on one Tesla M40 GPU. Summing up data preparation and computation time, the proposed model saves development effort by over an order of magnitude compared to the deployed pipelined system.
6.5 Analysis of Exploration Strategy
We study exploration strategies by comparing standard beam search vs. RBSMA, and automated CL (cl) vs. uniform sampling (uniform), evaluating beam_search_uniform, beam_search_cl, rbsma_uniform, and rbsma_cl. We measure the exploration progress by the training set accuracy at a given epoch, shown in Figure 2.
We have two observations. First, exploration with RBSMA progresses slowly at the very beginning compared to beam search (probably due to the randomized exploration strategy), but catches up and solves all the training set problems in the end. In contrast, exploration with the standard beam search stops progressing after epoch 52. Second, automated CL progresses faster than uniform sampling for most of the time. Particularly after epoch 35, rbsma_cl significantly improves over rbsma_uniform, probably due to the adaptive policy (sampling more samples from hard task).
6.6 Analysis of Adaptive Policy of Automated Curriculum Learning
The efficacy of the adaptive policy of our proposed automated CL algorithm is illustrated in Figure 3. π(easy) denotes the probability of sampling the easy task under the policy, and π(hard) the probability of sampling the hard task. Acc(easy) denotes the accuracy on the easy task in the training set, and acc(hard) the accuracy on the hard task. Figure 3 reveals a consistent strategy: first focusing on the easy task, then alternating between both tasks, and finally focusing more on the hard task while still sampling from the easy task. This automatically learned strategy is difficult to achieve even with carefully hand-crafted curricula, due to the challenge of defining acceptable performance on each task. Moreover, our approach continuously samples from the easy task while learning the hard one, effectively addressing the forgetting problem: acc(easy) in Figure 3 does not degrade as acc(hard) improves. In contrast, hand-crafting such mixing strategies is challenging (Zaremba and Sutskever, 2014).
We present an end-to-end statistical model with weak supervision for turn-level SLU. We propose two techniques for better exploration and generalization: (1) RBSMA for complicated problems with long programs, (2) automated CL for weakly supervised learning for dealing with the diversity of problem complexity. Experimental results on real-world user logs show that our model performs comparably to the deployed pipelined system, greatly saving the development labor. Both RBSMA and automated CL significantly improve exploration efficiency and generalization.
- Auer et al. (2002) P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. 2002. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77.
- Cheng et al. (2017) J. Cheng, S. Reddy, V. Saraswat, and M. Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of ACL.
- Goldman et al. (2018) Omer Goldman, Veronica Latcinnik, Udi Naveh, Amir Globerson, and Jonathan Berant. 2018. Weakly supervised semantic parsing with abstract examples. In Proceedings of ACL.